I like the natural feel that decay adds to my animations, but the problem is that, unlike the other animation types, I can't get it to land on a whole number.
Is there a vector math formula I can use to calculate the deceleration value and velocity so that it does?
For example:
current animated value = 0.75
velocity = 2
I would like to animate to 2.0, so deceleration rate = ???
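If the decay follows the common exponential model x(t) = x0 + (v0/k) * (1 - e^(-k*t)) (an assumption; animation libraries vary in how they parameterize decay), the value converges to x0 + v0/k, so the deceleration constant that lands exactly on a target is k = v0 / (target - x0). A sketch, where decayRateForTarget is a hypothetical helper name:
function decayRateForTarget(x0, v0, target) {
  // k chosen so that the resting value x0 + v0 / k equals the target
  return v0 / (target - x0);
}

decayRateForTarget(0.75, 2, 2.0); // 2 / 1.25 = 1.6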
I'm trying to do a simple fade in/out animation in Lua.
I feel like these variables should be enough to solve for the alpha/opacity I want to set the box to every frame, but I'm having a lot of trouble with the fade out: alpha = targetAlpha * animationPos always returns 0, since it multiplies by a target alpha of 0.
All of these variables are decimal values between 0 and 1, representing either an alpha or the % of time completed.
targetAlpha - The alpha value at the end of the animation.
initialAlpha - The alpha the box started at when the animation initialized.
animationPos - The current position (% of time completed) of the animation.
currentAlpha - The current alpha of the box.
Maybe I'm just super fried today, but I've been trying what feels like a billion combinations of these vars to find the equation that works, with no luck.
Any help is appreciated!
What you want is a linear interpolation, which takes two values a and b, and an interpolation value f between 0 and 1.
function lerp(a, b, f)
    return a * (1 - f) + b * f
end
And now you can just interpolate between initial and target alpha using your current animation progress:
alpha = lerp(initialAlpha, targetAlpha, animationPos)
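For a fade out you would call it with initialAlpha = 1 and targetAlpha = 0, which reduces to 1 - animationPos; that is also why alpha = targetAlpha * animationPos always returned 0. A minimal sketch:
-- Fade out: start fully opaque, end fully transparent.
local initialAlpha = 1.0
local targetAlpha = 0.0
local animationPos = 0.25                                    -- 25% of the way through
local alpha = lerp(initialAlpha, targetAlpha, animationPos)  -- 0.75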
I need to implement a high pass Butterworth filter in MATLAB for the purposes of image filtering. I have implemented one but it looks like it doesn't work. Here is the code I have written. Can anyone tell me what is wrong?
n=1;
d=50;
A=1.5;
im=imread('imagex.jpg');
h=size(im,1);
w=size(im,2);
[x y]=meshgrid(-floor(w/2):floor(w-1/2),-floor(h/2):floor(h-1/2));
hhp=(1./(d./(x.^2+y.^2).^0.5).^(2*n));
image_2Dfilter=fftshift(fft2(im));
Image_butterworth=image_2Dfilter;
imshow(Image_butterworth);
ifftshow(Image_butterworth);
For one thing, there is no such command called ifftshow. Secondly, you aren't filtering anything. All you're doing is visualizing the spectrum of the image.
In terms of visualizing the spectrum, the way you're doing it right now is problematic. You are displaying the raw coefficients at each spatial frequency component, which are complex-valued in nature. If you want to visualize the spectrum in a way that makes sense to most of us, it's better to look at either the magnitude or the phase; since a Butterworth filter acts on the magnitude of the spatial frequencies, the magnitude is the one to inspect here.
You can find the magnitude of the spectrum with the abs function. Even then, if you call imshow directly on the magnitude, you will get a visualization that is black everywhere except in the middle, because the DC component is so large and the rest of the spectrum is small in comparison.
Let me show you an example. This is the cameraman image that is part of the image processing toolbox:
im = imread('cameraman.tif');
figure;
imshow(im);
Now, let's visualize the spectrum, ensuring that the DC component is in the centre of the image (you already did this with fftshift). It's also a good idea to cast the image to double for the best precision, and make sure you apply abs to get the magnitude:
fftim = fftshift(fft2(double(im)));
mag = abs(fftim);
figure;
imshow(mag, []);
As you can see, it's not very useful, for the reason I mentioned. A better way to visualize the spectrum of an image is to apply a log transformation to it, which compresses the dynamic range so it fits better for display: add 1 to the magnitude, then take the logarithm, so that the large values taper off. It doesn't matter which base you use, so I'll just use the natural logarithm, i.e. the log command:
figure;
imshow(log(1 + mag), []);
Now that's much better. Let's get to your filtering mechanism. Your Butterworth filter is slightly incorrect: the meshgrid of coordinates is wrong. The -1 at the end of the interval needs to go outside the floor call:
[x y]=meshgrid(-floor(w/2):floor(w/2)-1,-floor(h/2):floor(h/2)-1);
Remember, you are defining a symmetric interval about the centre of the image, and what you had originally wasn't correct. I'd also like to mention that this looks like a high-pass filter, so the output should look like an edge detection. In addition, the definition of the Butterworth high-pass filter itself is incorrect. The correct definition of the filter in the frequency domain is:
H(u,v) = 1 / (1 + B * (Do / D(u,v))^(2n))
D(u,v) is the distance from the centre of the image in the frequency domain, Do is the cutoff distance, and B is a scale factor controlling the gain at the cutoff distance; n is the order of the filter. Do in your case is d = 50. In practice, B = sqrt(2) - 1, so that at the cutoff distance Do the gain is H(u,v) = 1 / sqrt(2) = 0.707, the 3 dB cutoff mostly seen in electronic circuit filters. Sometimes you'll see B set to 1 for simplicity, but it's common to use B = sqrt(2) - 1.
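As a quick sanity check on that choice of B (my addition, not part of the original code), evaluate the gain right at the cutoff, where D(u,v) = Do and the ratio term equals 1:
B = sqrt(2) - 1;
H_cutoff = 1 / (1 + B)   % = 1/sqrt(2) = 0.7071, i.e. the 3 dB point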
However, your current code isn't doing any filtering. To filter in the frequency domain, you simply multiply the spectrum of the image by the frequency response of the filter, which is equivalent to convolution in the spatial domain. Once you do that, you undo the fftshift that was applied to the image, take the inverse FFT, and then eliminate the imaginary components that arise from numerical imprecision. Finally, cast to uint8 to respect the original image type.
That can be done like so:
%// Your code with meshgrid fix
n=1;
d=50;
h=size(im,1);
w=size(im,2);
fftim = fftshift(fft2(double(im)));
[x y]=meshgrid(-floor(w/2):floor(w/2)-1,-floor(h/2):floor(h/2)-1);
%hhp=(1./(d./(x.^2+y.^2).^0.5).^(2*n));
%%%%%%// New code
B = sqrt(2) - 1; %// Define B
D = sqrt(x.^2 + y.^2); %// Define distance to centre
hhp = 1 ./ (1 + B * ((d ./ D).^(2 * n)));
out_spec_centre = fftim .* hhp;
%// Uncentre spectrum
out_spec = ifftshift(out_spec_centre);
%// Inverse FFT, get real components, and cast
out = uint8(real(ifft2(out_spec)));
%// Show image
imshow(out);
If you want to see what the filtered spectrum looks like, just do this:
figure;
imshow(log(1 + abs(out_spec_centre)), []);
The filtered spectrum makes sense: the middle is slightly darker in comparison to the outer edges, because the high-pass Butterworth filter suppresses the low-frequency terms near the centre while passing the higher-frequency ones through, so they show up with relatively higher intensity.
Now, out contains your filtered image, which looks like an edge map, a fine result. However, naively casting the image to uint8 truncates negative values to 0 and values greater than 255 to 255. Because this is an edge detection, you want to capture both the negative and the positive transitions, so a good idea is to normalize the output to the range [0,1], multiply by 255, and only then cast to uint8. This way, areas with no change are visualized as gray, negative changes as dark, and positive changes as white. You'd do something like this:
%// Your code with meshgrid fix
n=1;
d=50;
h=size(im,1);
w=size(im,2);
fftim = fftshift(fft2(double(im)));
[x y]=meshgrid(-floor(w/2):floor(w/2)-1,-floor(h/2):floor(h/2)-1);
%hhp=(1./(d./(x.^2+y.^2).^0.5).^(2*n));
%%%%%%// New code
B = sqrt(2) - 1; %// Define B
D = sqrt(x.^2 + y.^2); %// Define distance to centre
hhp = 1 ./ (1 + B * ((d ./ D).^(2 * n)));
out_spec_centre = fftim .* hhp;
%// Uncentre spectrum
out_spec = ifftshift(out_spec_centre);
%// Inverse FFT, get real components
out = real(ifft2(out_spec));
%// Normalize and cast
out = (out - min(out(:))) / (max(out(:)) - min(out(:)));
out = uint8(255*out);
%// Show image
imshow(out);
The flat regions now map to mid-gray, with negative edges dark and positive edges white.
I think you should approach it a little differently:
n = 1;
D0 = 50; % use D0 for the cutoff distance; d usually denotes the distance (u^2 + v^2)^(1/2)
A = 1.5; % normally the amplitude is 1
im = imread('cameraman.tif');
[M, N] = size(im); % an easy way to get the height and width
% Compute the 2D Fourier transform so we can multiply by the filter
F = fft2(double(im));
% Build the filter on an M-by-N frequency grid. The grid is laid out with the
% DC component at the top-left corner, matching fft2's output, so no fftshift is needed.
u = 0:(M-1);
v = 0:(N-1);
idx = find(u > M/2);
u(idx) = u(idx) - M;
idy = find(v > N/2);
v(idy) = v(idy) - N;
[V, U] = meshgrid(v, u);
D = sqrt(U.^2 + V.^2);
H = A * (1 ./ (1 + (D0 ./ D).^(2*n)));
% Multiply element by element
G = H .* F;
% Take only the real part of the inverse transform
g = real(ifft2(double(G)));
subplot(1,2,1); imshow(im); title('Input image');
subplot(1,2,2); imshow(g, []); title('Filtered image');
I am trying to write my own (or at least gain a better understanding of) Gaussian Blur filter using Python 2.7. I would really appreciate some direction. Everywhere else I have looked just uses built-ins...
You need to loop through each pixel in the image. At every pixel, take weighted samples from its surroundings and sum them together to form the new value of the pixel. The code would look something like this:
for x in range(input.size[0]):
    for y in range(input.size[1]):
        result[x, y] = 0
        result[x, y] += 0.01 * input[x-1, y+1] + 0.08 * input[x, y+1] + 0.01 * input[x+1, y+1]
        result[x, y] += 0.08 * input[x-1, y  ] + 0.64 * input[x, y  ] + 0.08 * input[x+1, y  ]
        result[x, y] += 0.01 * input[x-1, y-1] + 0.08 * input[x, y-1] + 0.01 * input[x+1, y-1]
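As an aside, the weights above are the outer product of the 1D kernel [0.1, 0.8, 0.1] (0.1 * 0.1 = 0.01, 0.1 * 0.8 = 0.08, 0.8 * 0.8 = 0.64), so they are an approximation rather than a true Gaussian. If you want to build a proper kernel from a chosen sigma, a small sketch (names are illustrative):
import math

def gaussian_kernel_1d(sigma, radius):
    # Sample exp(-i^2 / (2 sigma^2)) at integer offsets, then normalize to sum to 1.
    weights = [math.exp(-(i * i) / (2.0 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

kernel = gaussian_kernel_1d(1.0, 1)  # roughly [0.274, 0.452, 0.274]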
BUT in my code I'm not taking care of the edges of the image, which will result in under- and over-indexing the image. There are at least three easy ways to take care of the edges:
You can decrease the range of the for loops so the pixels on the edge are never blurred, and crop the un-blurred border out of the image after the blur.
You can add if statements that check whether you are on the edge of the image; on the edge you skip the samples that would fall out of range and renormalize the weights of the remaining pixels so that they sum to 1.0.
You can mirror the image at every side. This can be done by actually mirroring the image, or by accessing pixels inside the image as far from the edge as the over-indexing would have gone.
With options 2 and 3, the edges are not blurred quite as much as the center of the image. This is a minor issue with a 3x3 sample window, but it can become visible with much bigger sample windows.
If you want to achieve good performance, you can try, for example, replacing the for loops with an OpenCL or OpenGL launch and writing the inner loop as an OpenCL kernel or a GLSL shader, so that as many pixels as possible are computed in parallel. This can be optimized even further by blurring first along the horizontal axis and then along the vertical axis, which reduces the sample count and should be faster with bigger sample windows; see the sketch below.
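Here is a rough sketch of that separable, two-pass approach in plain Python (img is assumed to be a 2D list of grayscale values; edges are handled by clamping indices, which is in the same spirit as the mirroring in option 3):
def blur_separable(img, kernel):
    # kernel: odd-length list of weights summing to 1 (e.g. [0.1, 0.8, 0.1])
    h, w = len(img), len(img[0])
    r = len(kernel) // 2
    clamp = lambda v, lo, hi: max(lo, min(v, hi))
    # Horizontal pass
    tmp = [[sum(kernel[k + r] * img[y][clamp(x + k, 0, w - 1)]
                for k in range(-r, r + 1))
            for x in range(w)] for y in range(h)]
    # Vertical pass over the horizontal result
    return [[sum(kernel[k + r] * tmp[clamp(y + k, 0, h - 1)][x]
                 for k in range(-r, r + 1))
             for x in range(w)] for y in range(h)]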
Much the same thing is explained in other words in this post.
I'm trying to build a to-scale model of the solar system, and I wanted to see if someone could explain to me how the rotation speed works. Here's the important piece:
objects[index].rotation.y += calculateRotationSpeed(value.radius,value.revolution) * delta;
How does the rotation speed relate to actual time? If you have a speed of 1, is that a movement of 1 px per millisecond? Or if you have a speed of 0.1, is that less than a px per second?
Basically I'm trying to calculate the correct rotation speed for the planets given their radius and the number of hours in their day. So the Earth, for example, would complete one rotation every 24 hours. Here's the function that's doing the calculation now:
/* In a day */
function calculateRotationSpeed(radius, hrs, delta) {
    var cir = findCircumference(radius);
    if (delta) {
        var d = delta;
    } else {
        var d = 1;
    }
    var ms = hrs2ms(hrs) * d;
    var pxPerMS = km2px(cir) / ms;
    return pxPerMS;
}
I gave it a try and it still seems to be moving too fast. I also need something similar to calculate orbital speeds.
Rotation and Units
Rotation in Three.JS is measured in radians. For those that are completely unfamiliar with radians (a small excerpt from an old paper of mine):
Like the mathematical constant Pi, a radian (roughly 57.3 degrees) is derived from the relationship between a circle's radius (or diameter) and its circumference. One radian is the angle which will always span an arc on the circumference of a circle which is equal in length to the radius of that same circle (true for any circle, regardless of size). Similarly, Pi is the ratio of circumference over diameter, such that the circumference of the unit circle is precisely Pi. Radians and degrees are not actually true units, in fact angles are in general dimensionless (like percentages and fractions, we do not use actual units to describe them).
However, unlike the degree, the radian was not defined arbitrarily, making it the more natural choice in most cases; it is often easier, clearer, and more concise than degrees in mathematical formulae. The Babylonians probably gave us the degree, dividing their circle into 6 equal sections (using the angle of an equilateral triangle). Each of these 6 sections was probably further subdivided into 60 equal parts, given their sexagesimal (base-60) number system. This would also have suited their astronomy, since the number of days in a year was estimated far less accurately in their time and was often taken to be 360.
Basic Rotation in Three.JS
So now, knowing you're working in radians, if you increment the rotation of an object by 1 you will be incrementing the rotation of the object by one radian. For example, consider making the following calls in the callback to requestAnimationFrame:
mesh.rotation.x += 1;                  // Rotates by 1 radian per frame
mesh.rotation.x += Math.PI / 180;      // Rotates by 1 degree per frame
mesh.rotation.x += 45 * Math.PI / 180; // Rotates by 45 degrees per frame
As the above examples show, we can easily convert a value in degrees into a value in radians by multiplying by a constant factor of Math.PI / 180.
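If you convert often, a tiny helper keeps this readable (Three.JS also ships THREE.Math.degToRad for the same purpose):
function degToRad(degrees) {
  return degrees * Math.PI / 180;
}

mesh.rotation.x += degToRad(45); // same as the last example above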
Taking Framerate Into Account
In your case, you will also need to take into consideration how much time passes with each frame; this is your delta. Think about it like this: what framerate are we running at? We'll declare a global clock variable storing a THREE.Clock object, which provides an interface to the information we require:
clock = new THREE.Clock();
Then, in the callback to requestAnimationFrame, we can use clock to get two values that will be useful for our animation logic:
time = clock.getElapsedTime(); // seconds since clock was instantiated
delta = clock.getDelta(); // seconds since getDelta was last called
The delta value is meant to represent the time between frames. However, note that this is only true when clock.getDelta is called consistently, exactly once per frame in the same place within the callback to requestAnimationFrame. If clock.getDelta gets called more than once, or is called inconsistently, it's going to throw the timing off.
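A minimal loop shape that satisfies this, assuming mesh, scene, camera and renderer already exist:
var clock = new THREE.Clock();

function animate() {
  requestAnimationFrame(animate);
  var delta = clock.getDelta(); // the one and only getDelta call this frame
  mesh.rotation.x += delta;     // 1 radian per second (see below)
  renderer.render(scene, camera);
}
animate();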
Rotating With A Delta Factor
Now, if your scene doesn't bog down the processor or the GPU, then Three.JS and its use of requestAnimationFrame will try to keep things running at a smooth 60 frames per second. This means that ideally we will have approximately 1/60 = 0.016666 seconds between frames. This is your delta value, obtained by calling clock.getDelta each frame.
We can use the delta value to decouple the animation rate from the framerate by multiplying as shown below. Multiplying by delta lets us update the rotation at a rate defined in terms of seconds (as opposed to per frame, as we did before). It also lets us animate objects at a constant velocity regardless of small variations in the framerate, and it even maintains that velocity if the framerate drops below the target 60 FPS (to 45 FPS or 30 FPS, for example).
So, the examples we considered previously now become:
mesh.rotation.x += delta * 1; // Rotates 1 radian per second
mesh.rotation.x += delta * Math.PI / 180; // Rotates 1 degree per second
mesh.rotation.x += delta * 45 * Math.PI / 180; // Rotates 45 degrees per second
Rotational Speed and Units
Because radians and degrees are not units defined in terms of distance or size, when we calculate the rotational speed (angular velocity) you'll see that it is a function of time only and does not depend on the radius, as it does in your code.
Calculating Rotational Speeds Based On Time
For example, you don't need the radius of a planet to calculate its angular velocity; you can calculate it using only the number of hours in its day, i.e. the time it takes the planet to rotate 2 * PI radians on its axis.
Assume the Earth has exactly 24 hours = 24 * 60 * 60 = 86,400 seconds in a day (it doesn't, quite). Given that there are 2 * PI radians in a complete revolution (360 degrees), we can calculate the Earth's constant angular velocity in radians per second as:
radsPerRevolution = 2 * Math.PI;
secsPerRevolution = 24 * 60 * 60;
angularVelocity = radsPerRevolution / secsPerRevolution; // 0.0000727 rad/sec
The above only needs to be calculated once, outside the callback to requestAnimationFrame, as the value never changes. You could probably find textbook values that are more accurate than this (based on a more precise measurement than our flat 24-hour figure for the time it takes Earth to complete a revolution).
At this point, rotating our mesh with the same angular velocity as Earth would be as simple as updating its rotation every frame by incrementing its value by delta multiplied by the constant angularVelocity. If angularVelocity is defined as above, this can be done by calling the following in the callback to requestAnimationFrame:
mesh.rotation.x += delta * angularVelocity;
In Conclusion
I wouldn't worry about getting every planet's angular velocity exactly right. A better idea might be to work out the ratios between the planets' angular velocities and use those; that lets you speed the whole animation up or slow it down as desired. As with any model (particularly astronomical ones), the most important thing is that you keep it to scale; the scale doesn't have to be 1:1.
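For example (a sketch; the hour figures are approximate sidereal day lengths, and timeScale is a made-up factor you can tune):
var hoursPerDay = { earth: 23.93, mars: 24.62, jupiter: 9.93 };

var timeScale = 200000; // speeds the whole model up uniformly

function angularVelocityFor(hours) {
  return 2 * Math.PI / (hours * 60 * 60); // rad per real-time second
}

// In the requestAnimationFrame callback:
// earthMesh.rotation.y += delta * angularVelocityFor(hoursPerDay.earth) * timeScale;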
I'm looking at this example in particular:
http://www.airtightinteractive.com/demos/processing_js/noisefield08.html
And here's the code for it:
http://www.airtightinteractive.com/demos/processing_js/noisefield08.pjs
I guess I need an explanation of what these lines in the particle class do:
d=(noise(id,x/mouseY,y/mouseY)-0.5)*mouseX;
x+=cos(radians(d))*s;
y+=sin(radians(d))*s;
I understand that noise calculates a value based on the coordinates given, but I don't get the logic in dividing the particle's x position by mouseY, or its y position by mouseY. I also don't understand what 'id' (which seems to be a counter) stands for, or what the next two lines accomplish.
Thanks
Note the demo's own hint: "Move mouse to change particle motion."
d seems to be the direction of motion. Putting mouseY and mouseX into the calculation of d allows the underlying field to depend on the mouse position. Without a better understanding of the noise function itself, I can't tell you exactly what effect mouseY and mouseX have on the field.
By running cos(radians(d)) and sin(radians(d)) the code turns an angle (d) into a unit vector. For example, if d was 1 radian then cos(radians(d)) would be -1 and sin(radians(d)) would be 0 so it turns the angle 1 radians into the unit vector (-1,0).
So it appears that there is some underlying motion field which determines the direction the particles move. The motion field is represented by the noise function and takes in the current position of the particle, the particle id (perhaps to give each particle independent motion or perhaps to remember a history of the particle's motion and base the future motion on that history) and the current position of the mouse.
The actual distance the particle moves each step is s, which is determined randomly to be between 2 and 7 pixels.
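Putting that together, here is the question's update step again with comments reflecting this reading (the comments are interpretation, not documented behaviour):
// noise() returns a value in [0, 1]; subtracting 0.5 centres it around zero and
// multiplying by mouseX scales it into an angle d in degrees. Dividing the
// coordinates by mouseY stretches or shrinks the noise field's spatial scale.
d = (noise(id, x/mouseY, y/mouseY) - 0.5) * mouseX;
// Convert the angle into a unit direction vector and step s pixels along it.
x += cos(radians(d)) * s;
y += sin(radians(d)) * s;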
"By running cos(radians(d)) and sin(radians(d)) the code turns an angle (d) into a unit vector. For example, if d was 1 radian then cos(radians(d)) would be -1 and sin(radians(d)) would be 0 so it turns the angle 1 radians into the unit vector (-1,0)."
Slight correction: that is a rotation of pi radians (180 degrees), not 1 radian (roughly 57.3 degrees).