I'm making an appointments app.
I have this gradient structure (created in Pixelmator), with which I want to mark the times of day:
In the intended scheme, 8am would be solid green, 12 noon would be solid yellow, and 8pm would be solid blue.
I need an algorithm to take the times of day and turn them into those colors, but I can't figure it out, particularly from noon to evening.
These colors are composed using the HSB value system: all colors have S and B at 100%, and from left to right the hue values are 121 (green), 60 (yellow), and 229 (blue).
The progression from green to yellow (morning to noon) is straightforward, because it's just a linear scaling from 121 to 60. But the progression from yellow to blue (noon to evening) is not; this is clear if you consider that going from 60 to 229 linearly would first duplicate the green-to-yellow gradient in reverse, and then go from green to blue. In other words, a strictly linear progression would make the gradient look more like this:
Can anyone point me in the right direction to understanding how to make the algorithm I need here? Do I have to use a different color value system, like RGB?
Thanks in advance for any and all help!
Pablo-No gives a reasonable answer if it's OK for the yellow->blue transition to go through red. But the OP's original picture doesn't go through red, it goes through some kind of grey. Perhaps the saturation S should be used to try to achieve this:
// Assume time is a real value between 8 (8am) and 20 (8pm)
// H is between 0 and 360
// S and B are between 0 and 100
B = 100;
if (time < 12)
{
// Before noon, linearly go from H=121 (green) to H=60 (yellow)
H = (time - 8) * (60-121)/4.0 + 121;
S = 100;
}
else
{
// After noon, linearly go from H=60 (yellow) to H=229 (blue).
// But dip the saturation to zero where the hue passes back through
// green (H=121), so that point reads as white/grey instead of green.
H = (time - 12) * (229-60)/8.0 + 60;
auto secondGreenTime = (121-60)*8.0/(229-60) + 12;
if (time < secondGreenTime)
S = (time - 12) * (-100.0)/(secondGreenTime-12) + 100;
else
S = (time - secondGreenTime) * 100.0/(20-secondGreenTime);
}
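(With these numbers the saturation dip bottoms out at secondGreenTime ≈ 14.9, i.e. around 2:53pm. Note that S = 0 with B = 100 renders as white rather than grey; if you want the grey of the original picture, you could also lower B around that point.)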
Pixelmator looks like it's using RGB gradients. Demo:
const canvas = document.getElementById("gradient");
const ctx = canvas.getContext("2d");
for (let i = 0; i < canvas.width; i++) {
const alpha = (i + 0.5) / canvas.width;
const r = 2 * Math.min(alpha, 1 - alpha);
const g = Math.min(1, 2 * (1 - alpha));
const b = Math.max(0, 2 * alpha - 1);
ctx.fillStyle = `rgb(${Math.round(255*r)}, ${Math.round(255*g)}, ${Math.round(255*b)})`; // rgb(), not rgba(): an invalid color string would be silently ignored
ctx.fillRect(i, 0, 1, canvas.height);
}
<canvas id="gradient" width="240" height="40"></canvas>
Here is an algorithm for that:
Convert the time to 24-hour form and turn minutes and seconds into a decimal fraction of the hour (e.g. 8:30 -> 8.5, 8:20 -> 25/3)
Subtract 8 from the hour (now we have a number, h, from 0 to 12)
If h is between 0 and 4, compute ((-h+4)*(61/4))+60
Otherwise, compute ((-h+12)*(191/8))-131
If the value is negative, add 360
The value we obtain is the hue of the color (a sketch of these steps follows below)
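A quick Python sketch of those steps (the function name is my own; times are decimal hours between 8 and 20):

def hue_for_time(hours):
    h = hours - 8                      # shift to the 0..12 range
    if h <= 4:                         # morning: green (121) -> yellow (60)
        hue = (-h + 4) * (61 / 4) + 60
    else:                              # afternoon: yellow (60) -> blue (229), via red
        hue = (-h + 12) * (191 / 8) - 131
    if hue < 0:                        # wrap negative hues around the color wheel
        hue += 360
    return hue

print(hue_for_time(8))   # 121.0 (green)
print(hue_for_time(12))  # 60.0 (yellow)
print(hue_for_time(20))  # 229.0 (blue)

Note that this scheme sends the afternoon half through red (hue 60 -> 0/360 -> 229) rather than through grey.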
I'm making the balls fall at a random speed, but the speed only changes when I reload the page/script. I would like each ball to get its own random speed dynamically: one ball falls at 5, the next at 1.4, the next at 2.6, and so on...
https://codepen.io/Le-future/pen/gKNoEE
I tried to use the following :
// set how fast the objects will fall
var spawnRateOfDescent = Math.random() * (5 - 0.5) + 0.5;
Each ball should have its own unique speed property. You can add it as follows:
First adjustment (lines 72-73):
image: images[Math.floor(Math.random()*images.length)], // add a comma here
speed: Math.random() * 10 + 3 // add this line and tweak the numbers to taste
Second adjustment in your animate function (line 107 [or 108 if you added a line]):
object.y += object.speed; // instead of: object.y += spawnRateOfDescent;
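The underlying fix is language-agnostic: draw a fresh random value for every spawned object instead of once at script load. A minimal sketch of the pattern in Python (hypothetical names, just to show the idea):

import random

balls = []
for _ in range(5):
    balls.append({
        "y": 0,
        "speed": random.uniform(0.5, 5),  # a fresh random speed per ball
    })

# On every animation tick, each ball advances by its own speed:
for ball in balls:
    ball["y"] += ball["speed"]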
I am sampling some pixels from a reference image Ir and then writing them into a secondary image In. The first function I have written is as follows:
[r,c,d] = size(Ir);
rSample = fix(r * 0.4); % sample 40 percent of pixels
cSample = fix(c * 0.4); % sample 40 percent of pixels
rIdx = randi(r,rSample,1); % uniformly sample indices for rows
cIdx = randi(c,cSample,1); % uniformly sample indices for columns
kk = 1;
for ii = 1:length(rIdx)
for jj=1:length(cIdx)
In(rIdx(ii),cIdx(jj),:) = Ir(rIdx(ii),cIdx(jj),:) * fcn(rIdx(ii),cIdx(jj));
kk = kk + 1;
end
end
Another method I came across to increase the performance (speed) of the code is as follows:
nSample = fix(r*c*0.4);
Idx = randi(r*c,nSample,1);
for ii = 1:nSample
[I,J] = ind2sub([r,c],Idx(ii,1));
In(I,J,:) = Ir(I,J,:) * fcn(I,J);
end
In both codes, fcn(I,J) is a function that performs some computation on the pixel at [I,J] and the process can be different depending on the indices of the pixel.
Although I have removed one for-loop, I guess there is a better technique to increase the performance of the code even more.
Update:
As suggested by @Daniel, the following line of code does the job.
In(rIdx,cIdx,:)=Ir(rIdx,cIdx,:);
But the point is, I prefer to have only the sampled pixels so I can process them faster. For instance, having the samples in a vector format with 3 layers for RGB:
Io = Ir(rIdx,cIdx,:);
Io1 = Io(:,:,1);
Io1v = Io1(:);
Ir=ones(30,30,3);
In=Ir*.5;
[r,c,d] = size(Ir);
rSamples = fix(r * 0.4); % sample 40 percent of pixels
cSamples = fix(c * 0.4); % sample 40 percent of pixels
rIdx = randi(r,rSamples,1); % uniformly sample indices for rows
cIdx = randi(c,cSamples,1); % uniformly sample indices for columns
In(rIdx,cIdx,:)=Ir(rIdx,cIdx,:);
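For the follow-up wish in the update (the samples as an N-by-3 list of RGB values), here is a NumPy sketch of the same linear-index idea (stand-in data, my own names):

import numpy as np

Ir = np.random.rand(30, 30, 3)               # reference image (stand-in data)
r, c, _ = Ir.shape
nSample = int(r * c * 0.4)                   # 40 percent of all pixels
idx = np.random.randint(0, r * c, nSample)   # uniformly sampled linear indices
I, J = np.unravel_index(idx, (r, c))         # the equivalent of ind2sub
samples = Ir[I, J, :]                        # shape (nSample, 3): one RGB row per sampled pixel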
I am a fairly intelligent person, but when I see a certain kind of math I might as well be a gigantic moron. I could really use some help here.
I have been researching a ton of things as I learn iOS game development and I came across a formula while doing some searches. Here is the formula:
x(t) = x(0) + v(0)*t + .5 (F/m) * t^2
Also stated was solving for x and y:
Fx = (x(t) - x(0) - vx(0)*t) * 2m/t^2
Fy = (y(t) - y(0) - vy(0)*t) * 2m/t^2
Source: Box2D.org forums
Now for my actual question, what does that mean? Keep in mind that in this situation I am an idiot. It would be great if someone could explain the variables in simple terms and how they relate to box2d. How would I apply this formula? Here is an example of my code (firing projectiles):
- (void)spawnProjectile:(CGPoint)from direction:(CGPoint)direction inParent:(CCNode*)parentNode
{
double curTime = CACurrentMediaTime();
double timeTillShotDies = curTime + SHOT_TYPE1_EXIST_TIME;
b2Body *shotBody = projectileBodyTracker[nextShot];
b2Vec2 moveToPosition = b2Vec2(from.x/PTM_RATIO, from.y/PTM_RATIO);
shotBody->SetTransform(moveToPosition, 0.0);
shotBody->SetActive(true);
CCSprite *shot = [projectiles objectAtIndex:nextShot];
shot.position = ccp(from.x/PTM_RATIO, from.y/PTM_RATIO);
shot.visible = YES;
[projectilesTracker replaceObjectAtIndex:nextShot withObject:[NSNumber numberWithDouble:timeTillShotDies]];
CCParticleSystemQuad *particle = [projectileParticleTracker objectAtIndex:nextShot];
[particle resetSystem];
nextShot++;
if(nextShot >= projectiles.count) nextShot = 0;
// dx2 and dy2 are analog stick values (see below code)
b2Vec2 force = b2Vec2(dx2, dy2);
shotBody->SetLinearVelocity(force);
[AudioController playLaserShot];
}
In this particular chunk of code I am firing from the player at the angle the analog is at. I also would need to use the formula to fire from the enemy to the player. This is a top down space shooter.
So to summarize, how do I solve constant force over time for x and y, in terms of box2d code?
Extra info:
dx2 = (float)joypadBG2.position.x - (float)convertedPoint.x;
dy2 = (float)joypadBG2.position.y - (float)convertedPoint.y;
All objects are preloaded and kept that way. Bodies are set inactive and sprites set invisible. Particle systems are stopped. The opposite is true for using a projectile again.
Thank you very much for any help you may be able to provide. I hope I haven't forgotten anything.
The first equation describes the movement of an object that is subject to a constant force.
The object starts at position x(0) and has speed v(0). Both x and v are vectors, so in a 2d shooter, x(0) would be (x0,y0), or the xy-position, and v(0) would be (vx0, vy0).
If there is no gravity, then F = 0 for unpropelled projectiles (projectiles without thrusters), so the velocity will be constant.
x(t1) = x(t0) + vx * (t1-t0)
y(t1) = y(t0) + vy * (t1-t0)
t1-t0 or dt (delta-t) is the time elapsed since the last time you updated the position of the projectile.
If thrusters or gravity are exerting force on an object, then the velocity will change over time.
vx(t1) = vx(t0) + ax * (t1-t0)
vy(t1) = vy(t0) + ay * (t1-t0)
a is the acceleration. In a game you usually don't care about mass and force, just acceleration. In physics, a = F/m.
Edit 1:
In computer games, you update the position of an object very frequently (typically around 60 times per second). You have the position and velocity of the object at the previous update and you want to calculate the new position.
You update the position by assuming that the velocity was constant:
positionVectorAt(newTime) = positionVector(lastTime) + velocityVector*(newTime - lastTime);
If the velocity of the object is changed you also update the velocity:
velocityVectorAt(newTime) = velocityVector(lastTime) + accelerationVector*(newTime - lastTime);
Let's say we have a sprite at
positionVector.x=100;
positionVector.y=10;
The initial speed is
velocityVector.x = 3;
velocityVector.y = -10;
The sprite is using thrusters, which give a horizontal acceleration of
thrusterVector.x = 5;
thrusterVector.y = 0;
and it is also subject to gravity, which gives a vertical acceleration of
gravityVector.x = 0;
gravityVector.y = -10;
The code to update the sprite's position will be:
deltaTime = now - lastTime; // Time elapsed since last position update
// Update the position
positionVector.x = positionVector.x + velocityVector.x * deltaTime;
positionVector.y = positionVector.y + velocityVector.y * deltaTime;
// Update the velocity
velocityVector.x = velocityVector.x + (thrusterVector.x + gravityVector.x) * deltaTime;
velocityVector.y = velocityVector.y + (thrusterVector.y + gravityVector.y) * deltaTime;
// Done! The sprite now has a new position and a new velocity!
Here is a quick explanation:
x(t) = x(0) + v(0)*t + .5 (F/m) * t^2
Fx = (x(t) - x(0) - vx(0)*t) * 2m/t^2
Fy = (y(t) - y(0) - vy(0)*t) * 2m/t^2
These three equations are the standard equations of motion:
t: time
x(t): position at time t
v(t): speed at time t
vx(t): horizontal component of speed at time t
vy(t): vertical component of speed at time t
m: mass
F: force
Fx: horizontal component of the force
Fy: vertical component of the force
So whenever you see x(0) or vy(0), these values are taken at time t = 0, i.e. they are the initial values. These are the basic kinematic equations relating the basic kinematic variables (position, speed, force, mass).
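To tie this back to the question: the second and third formulas are just the first one rearranged to solve for the force. A minimal sketch of that rearrangement in plain Python (not a Box2D API; the function name is mine):

# Solve x(t) = x(0) + v(0)*t + 0.5*(F/m)*t^2 for the constant force that
# moves a body from (x0, y0) to (xt, yt) in t seconds, given the initial
# velocity (vx0, vy0) and mass m.
def constant_force_to_reach(x0, y0, xt, yt, vx0, vy0, m, t):
    fx = (xt - x0 - vx0 * t) * 2.0 * m / t ** 2
    fy = (yt - y0 - vy0 * t) * 2.0 * m / t ** 2
    return fx, fy

# Example: a 1 kg body at rest at the origin should reach (10, 0) in 2 s.
print(constant_force_to_reach(0, 0, 10, 0, 0, 0, 1.0, 2.0))  # (5.0, 0.0)

The resulting (fx, fy) is what you would apply to the body (e.g. with Box2D's ApplyForce) on every step for the duration t.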
I've been implementing an adaptation of Viola-Jones' face detection algorithm. The technique relies upon placing a subframe of 24x24 pixels within an image, and subsequently placing rectangular features inside it in every position with every size possible.
These features can consist of two, three or four rectangles. The following example is presented.
They claim the exhaustive set is more than 180k (section 2):
Given that the base resolution of the detector is 24x24, the exhaustive set of rectangle features is quite large, over 180,000. Note that unlike the Haar basis, the set of rectangle features is overcomplete.
The following statements are not explicitly stated in the paper, so they are assumptions on my part:
There are only 2 two-rectangle features, 2 three-rectangle features and 1 four-rectangle feature. The logic behind this is that we are observing the difference between the highlighted rectangles, not explicitly the color or luminance or anything of that sort.
We cannot define feature type A as a 1x1 pixel block; it must be at least 1x2 pixels. Also, type D must be at least 2x2 pixels, and this rule holds accordingly for the other features.
We cannot define feature type A as a 1x3 pixel block as the middle pixel cannot be partitioned, and subtracting it from itself is identical to a 1x2 pixel block; this feature type is only defined for even widths. Also, the width of feature type C must be divisible by 3, and this rule holds accordingly to the other features.
We cannot define a feature with a width and/or height of 0. Therefore, we iterate x and y to 24 minus the size of the feature.
Based upon these assumptions, I've counted the exhaustive set:
const int frameSize = 24;
const int features = 5;
// All five feature types:
const int feature[features][2] = {{2,1}, {1,2}, {3,1}, {1,3}, {2,2}};
int count = 0;
// Each feature:
for (int i = 0; i < features; i++) {
int sizeX = feature[i][0];
int sizeY = feature[i][1];
// Each position:
for (int x = 0; x <= frameSize-sizeX; x++) {
for (int y = 0; y <= frameSize-sizeY; y++) {
// Each size fitting within the frameSize:
for (int width = sizeX; width <= frameSize-x; width+=sizeX) {
for (int height = sizeY; height <= frameSize-y; height+=sizeY) {
count++;
}
}
}
}
}
The result is 162,336.
The only way I found to approximate the "over 180,000" Viola & Jones speak of is by dropping assumption #4 and introducing bugs in the code. This involves changing the two inner loop lines respectively to:
for (int width = 0; width < frameSize-x; width+=sizeX)
for (int height = 0; height < frameSize-y; height+=sizeY)
The result is then 180,625. (Note that this will effectively prevent the features from ever touching the right and/or bottom of the subframe.)
Now of course the question: have they made a mistake in their implementation? Does it make any sense to consider features with a surface of zero? Or am I seeing it the wrong way?
Upon closer look, your code looks correct to me, which makes one wonder whether the original authors had an off-by-one bug. I guess someone ought to look at how OpenCV implements it!
Nonetheless, one suggestion to make it easier to understand is to flip the order of the for loops by going over all sizes first, then looping over the possible locations given the size:
#include <stdio.h>
int main()
{
int i, x, y, sizeX, sizeY, width, height, count, c;
/* All five shape types */
const int features = 5;
const int feature[][2] = {{2,1}, {1,2}, {3,1}, {1,3}, {2,2}};
const int frameSize = 24;
count = 0;
/* Each shape */
for (i = 0; i < features; i++) {
sizeX = feature[i][0];
sizeY = feature[i][1];
printf("%dx%d shapes:\n", sizeX, sizeY);
/* each size (multiples of basic shapes) */
for (width = sizeX; width <= frameSize; width+=sizeX) {
for (height = sizeY; height <= frameSize; height+=sizeY) {
printf("\tsize: %dx%d => ", width, height);
c=count;
/* each possible position given size */
for (x = 0; x <= frameSize-width; x++) {
for (y = 0; y <= frameSize-height; y++) {
count++;
}
}
printf("count: %d\n", count-c);
}
}
}
printf("%d\n", count);
return 0;
}
with the same result as before, 162,336.
To verify it, I tested the case of a 4x4 window and manually checked all cases (easy to count, since 1x2/2x1 and 1x3/3x1 shapes are the same, only rotated 90 degrees):
2x1 shapes:
size: 2x1 => count: 12
size: 2x2 => count: 9
size: 2x3 => count: 6
size: 2x4 => count: 3
size: 4x1 => count: 4
size: 4x2 => count: 3
size: 4x3 => count: 2
size: 4x4 => count: 1
1x2 shapes:
size: 1x2 => count: 12 +-----------------------+
size: 1x4 => count: 4 | | | | |
size: 2x2 => count: 9 | | | | |
size: 2x4 => count: 3 +-----+-----+-----+-----+
size: 3x2 => count: 6 | | | | |
size: 3x4 => count: 2 | | | | |
size: 4x2 => count: 3 +-----+-----+-----+-----+
size: 4x4 => count: 1 | | | | |
3x1 shapes: | | | | |
size: 3x1 => count: 8 +-----+-----+-----+-----+
size: 3x2 => count: 6 | | | | |
size: 3x3 => count: 4 | | | | |
size: 3x4 => count: 2 +-----------------------+
1x3 shapes:
size: 1x3 => count: 8 Total Count = 136
size: 2x3 => count: 6
size: 3x3 => count: 4
size: 4x3 => count: 2
2x2 shapes:
size: 2x2 => count: 9
size: 2x4 => count: 3
size: 4x2 => count: 3
size: 4x4 => count: 1
There is still some confusion in Viola and Jones' papers.
In their CVPR'01 paper it is clearly stated that
"More specifically, we use three
kinds of features. The value of a
two-rectangle feature is the difference between the sum of the
pixels within two rectangular regions.
The regions have the same size and
shape and are horizontally or
vertically adjacent (see Figure 1).
A three-rectangle feature computes the sum within two outside
rectangles subtracted from the sum in
a center rectangle. Finally a
four-rectangle feature".
In the IJCV'04 paper, exactly the same thing is said. So altogether, 4 features. But strangely enough, this time they state that the exhaustive feature set is 45,396! That does not seem to be the final version. I guess that some additional constraints were introduced there, such as min_width, min_height, width/height ratio, and even position.
Note that both papers are downloadable on his webpage.
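The exact constraints aren't given in the papers, but just as an illustration, here is a Python sketch of how hypothetical minimum-size limits would shrink the count (the min_width/min_height values below are made up, not from Viola and Jones):

frameSize = 24
feature = [(2, 1), (1, 2), (3, 1), (1, 3), (2, 2)]
min_width, min_height = 4, 4  # hypothetical constraints, purely illustrative

count = 0
for sizeX, sizeY in feature:
    for width in range(sizeX, frameSize + 1, sizeX):
        for height in range(sizeY, frameSize + 1, sizeY):
            if width < min_width or height < min_height:
                continue  # drop features below the minimum size
            count += (frameSize - width + 1) * (frameSize - height + 1)
print(count)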
I haven't read the whole paper, but the wording of your quote sticks out at me:
Given that the base resolution of the detector is 24x24, the exhaustive set of rectangle features is quite large, over 180,000. Note that unlike the Haar basis, the set of rectangle features is overcomplete.
"The set of rectangle features is overcomplete"
"Exhaustive set"
It sounds to me like a setup, where I expect the paper's authors to follow up with an explanation of how they cull the search space down to a more effective set, for example by getting rid of trivial cases such as rectangles with zero surface area.
edit: or using some kind of machine learning algorithm, as the abstract hints at. Exhaustive set implies all possibilities, not just "reasonable" ones.
There is no guarantee that any author of any paper is correct in all their assumptions and findings. If you think that assumption #4 is valid, then keep that assumption, and try out your theory. You may be more successful than the original authors.
Quite a good observation, but they might implicitly zero-pad the 24x24 frame, or "overflow" and wrap around to the first pixels when a feature goes out of bounds (as in rotational shifts), or, as Breton said, they might consider some features "trivial features" and then discard them with AdaBoost.
In addition, I wrote Python and Matlab versions of your code so I could test it myself (easier for me to debug and follow), and I post them here in case anyone finds them useful sometime.
Python:
frameSize = 24
features = 5
# All five feature types:
feature = [[2,1], [1,2], [3,1], [1,3], [2,2]]
count = 0
# Each feature:
for i in range(features):
sizeX = feature[i][0]
sizeY = feature[i][1]
# Each position:
for x in range(frameSize-sizeX+1):
for y in range(frameSize-sizeY+1):
# Each size fitting within the frameSize:
for width in range(sizeX,frameSize-x+1,sizeX):
for height in range(sizeY,frameSize-y+1,sizeY):
count=count+1
print(count)
Matlab:
frameSize = 24;
features = 5;
% All five feature types:
feature = [[2,1]; [1,2]; [3,1]; [1,3]; [2,2]];
count = 0;
% Each feature:
for ii = 1:features
sizeX = feature(ii,1);
sizeY = feature(ii,2);
% Each position:
for x = 0:frameSize-sizeX
for y = 0:frameSize-sizeY
% Each size fitting within the frameSize:
for width = sizeX:sizeX:frameSize-x
for height = sizeY:sizeY:frameSize-y
count=count+1;
end
end
end
end
end
display(count)
In their original 2001 paper they only state that they used three kinds of features:
we use three kinds of features
with two, three and four rectangles respectively.
Since each kind has two orientations (that differ by 90 degrees), perhaps for the computation of the total number of features they used 2*3 types of features: 2 two-rectangle features, 2 three-rectangle features and 2 four-rectangle features. With this assumption there are indeed over 180,000 features:
feature_types = [(1,2), (2,1), (1,3), (3,1), (2,2), (2,2)]
window_size = (24,24)
total_features = 0
for f_type in feature_types:
for f_height in range(f_type[0], window_size[0] + 1, f_type[0]):
for f_width in range(f_type[1], window_size[1] + 1, f_type[1]):
total_features += (window_size[0] - f_height + 1) * (window_size[1] - f_width + 1)
print(total_features)
# 183072
The second four-rectangle feature differs from the first only by a sign, so there is no need to keep it and if we drop it then the total number of features reduces to 162,336.
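(As a check: each (2,2) kind contributes 144 × 144 = 20,736 placements in the loop above, since the sums over heights and widths are each 23 + 21 + ... + 1 = 144; and 183,072 − 20,736 = 162,336, matching the count from the question.)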