How can I solve for current opacity in this animation?

I'm trying to do a simple fade in/out animation in Lua.
I feel like these variables should be enough to solve for the alpha/opacity to set the box to on every frame, but I'm having a lot of trouble with the fade out: alpha = targetAlpha * animationPos always returns 0, because it multiplies by the target alpha of 0.
All of these variables are decimal values between 0 and 1, representing either an alpha or the % of time completed.
targetAlpha - The alpha value at the end of the animation.
initialAlpha - The alpha the box started at when the animation initialized.
animationPos - The current position (%time completed) of the animation.
currentAlpha - The current alpha of the box.
Maybe I'm just super fried today, but I've been trying what feels like a billion combinations of these variables to find the equation that works, with no luck.
Any help is appreciated!

What you want is a linear interpolation, which takes two values a and b, and an interpolation value f between 0 and 1.
function lerp(a, b, f)
    return a * (1 - f) + b * f
end
And now you can just interpolate between initial and target alpha using your current animation progress:
alpha = lerp(initialAlpha, targetAlpha, animationPos)
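To see why this fixes the fade out: with initialAlpha = 1 and targetAlpha = 0, halfway through (animationPos = 0.5) this gives 1 * (1 - 0.5) + 0 * 0.5 = 0.5, and at animationPos = 1 it lands exactly on the target of 0. Multiplying only targetAlpha by the progress, as in the question, discards the initial value entirely, which is why a fade to 0 always returned 0.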

Related

How can I solve a reversed-light problem in ray tracing?

I'm rendering an image in Python, but the Lambertian shading does not work.
At first the image came out like this (first image, not shown here). But when I reversed the normal vector of the sphere, it came out like this (second image, not shown here).
This is my shading code.
v = -m * ray
if s == 'Sphere':
    n = view.viewPoint - list[idx].c - v
    n = -n / np.sqrt(np.sum(n * n))

for i in light:
    l_i = v + i.position - view.viewPoint
    l_i = l_i / np.sqrt(np.sum(l_i * l_i))
    x = list[idx].s.d[0] * i.intensity[0] * max(np.dot(l_i, n), 0)
    y = list[idx].s.d[1] * i.intensity[1] * max(np.dot(l_i, n), 0)
    z = list[idx].s.d[2] * i.intensity[2] * max(np.dot(l_i, n), 0)
list is the list of spheres, and idx is the index of the closest sphere.
I'd be grateful if anyone could help me; I have been at this for a week.
You have not stated what you think is wrong.
Where is the light in relation to the spheres in the first image? Is it above and slightly behind them? If so - the image looks correct.
Assuming the statements above are correct, the second image also looks correct. The light appears on the bottom of the spheres because the normal now points "in", so the sign of the dot product is opposite to that in the first image.
Note that your example code does not appear to have any shadow-ray treatment. In other words, every object is lit as if all other objects were transparent, and no object casts a shadow on another. This also explains why you can see the bottom of the spheres when the light is coming from the top. If you had proper shadow rays, it wouldn't actually matter which way the normal points (and at that point I would remove the max() calls).
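For illustration, here is a minimal Python sketch of what a shadow-ray test could look like; the spheres list and its intersect method are hypothetical stand-ins, not part of the question's code:

import numpy as np

def lit_by(light, hit_point, normal, spheres, eps=1e-4):
    # Direction and distance from the surface point to the light.
    to_light = light.position - hit_point
    dist = np.sqrt(np.sum(to_light * to_light))
    direction = to_light / dist
    # Nudge the origin along the normal to avoid hitting the surface itself.
    origin = hit_point + eps * normal
    for s in spheres:
        t = s.intersect(origin, direction)  # hypothetical: nearest hit distance or None
        if t is not None and t < dist:
            return False  # another object blocks this light
    return True

A light would then contribute its Lambertian term only when lit_by(...) returns True.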

Animated.Decay - formula to reach value

I like the natural feel that decay adds to my animations, but the problem is that, unlike the other animations, I can't get it to land on a whole number.
Is there a vector-math formula I can use to calculate the deceleration value and velocity so that it does? For example:
current animated value = 0.75
velocity = 2
I would like to animate to 2.0, so deceleration rate = ???
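A hedged sketch of one way to get there, assuming this refers to React Native's Animated.decay: its implementation approaches start + velocity / (1 - deceleration) as time goes to infinity, so you can solve that expression for the deceleration (the function below is illustrative, not from any library):

def deceleration_to_land_on(start, target, velocity):
    # React Native's decay animates
    #   x(t) = start + (velocity / (1 - d)) * (1 - exp(-(1 - d) * t)),
    # so the resting value is start + velocity / (1 - d).
    # Rearranging for the d that settles exactly on `target`:
    return 1.0 - velocity / (target - start)

With the question's numbers (start = 0.75, velocity = 2, target = 2.0) this returns a negative value, which is out of range for a deceleration; React Native expresses velocity in units per millisecond, so the velocity would likely need converting to those units first.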

Building A Gaussian Blur?

I am trying to write my own Gaussian blur filter (or at least gain a better understanding of one) using Python 2.7. I would really appreciate some direction. Everywhere else I have looked just uses built-ins...
You need to loop through each pixel in the image. At every pixel, take weighted samples from its surroundings and sum them together to form the new value of that pixel. The code would look something like this:
for x in range(input.size[0]):
    for y in range(input.size[1]):
        result[x, y] = 0
        result[x, y] += 0.01 * input[x-1, y+1] + 0.08 * input[x, y+1] + 0.01 * input[x+1, y+1]
        result[x, y] += 0.08 * input[x-1, y  ] + 0.64 * input[x, y  ] + 0.08 * input[x+1, y  ]
        result[x, y] += 0.01 * input[x-1, y-1] + 0.08 * input[x, y-1] + 0.01 * input[x+1, y-1]
BUT this code does not take care of the edges of the image, so it will index out of the image's range there. There are at least three easy ways to handle the edges:
1. Shrink the range of the for loops so the edge pixels are never blurred, then crop the unblurred border off the image afterwards.
2. Add if statements that detect when you are on an edge of the image; there, skip the samples that would fall out of range and rescale the remaining weights so they still sum to 1.0.
3. Mirror the image at every side. You can do this by actually padding the image with mirrored pixels, or by reflecting any out-of-range index back inside the image by the amount it would have overshot.
With options 2 and 3, the edges come out less blurred than the center of the image. This is a minor issue with a 3x3 sample window, but it can become visible with much bigger windows.
If you want good performance, you can for example replace the for loops with an OpenCL or OpenGL launch and move the inner loop into an OpenCL kernel or a GLSL shader, so that as many pixels as possible are computed in parallel. You can optimize further by blurring first along the horizontal axis and then along the vertical axis; this separable approach reduces the sample count and should be faster with bigger sample windows.
The same ideas are explained in other words in this post.
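As a concrete sketch of the separable approach with mirrored edges (option 3 above), assuming a 2-D float NumPy image; the 1-D kernel [0.1, 0.8, 0.1] is the one whose outer product gives the 3x3 weights used earlier:

import numpy as np

def gaussian_blur_3x3(img):
    k = np.array([0.1, 0.8, 0.1])        # 1-D kernel; k outer k gives the 3x3 weights
    p = np.pad(img, 1, mode='reflect')   # mirror one pixel at every edge
    # Horizontal pass.
    h = k[0] * p[1:-1, :-2] + k[1] * p[1:-1, 1:-1] + k[2] * p[1:-1, 2:]
    # Vertical pass over the horizontally blurred image.
    hp = np.pad(h, ((1, 1), (0, 0)), mode='reflect')
    return k[0] * hp[:-2, :] + k[1] * hp[1:-1, :] + k[2] * hp[2:, :]

Two passes of 3 samples replace one pass of 9; for an NxN window the per-pixel cost drops from N*N samples to 2*N.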

Algorithm to make overly bright (HDR) colours become white?

You know how every colour eventually turns white in an image if it's bright enough or sufficiently over-exposed? I'm trying to figure out a function to do this to apply to generated HDR images, in a realistic and pleasing looking way (using idealised camera performance as a reference I guess).
The problem the algorithm/function I want should solve: say you have an orange pixel with the (linear RGB) values {1.0, 0.2, 0.0}. Everything is fine if you multiply each value by a factor of 1.0 or less, but multiply that pixel by 6 and you get {6.0, 1.2, 0.0}; what do you do with your out-of-range red and green values of 6.0 and 1.2? You could clip them, which would give you {1.0, 1.0, 0.0}, which sadly is what Photoshop and 3DS Max seem to do. But that looks very wrong, as your formerly orange pixel is now yellow; starting from any saturated hue (meaning at least one channel is 0.0), you always end up with magenta, yellow, or cyan, and the pixel will never become white.
I considered taking half of the excess of one channel and splitting it equally between the other channels, so for example {1.6, 0.5, 0.1} would become {1.0, 0.8, 0.4}, but that is too simplistic and not very realistic. I strongly doubt that an acceptable solution is anywhere near this trivial.
I'm sure there must have been research done on the topic, but I cannot find any relevant literature, and sensitometry doesn't seem to be quite what I'm looking for.
Modifying the Python code I left in an answer on another question to work in the range [0.0-1.0]:
def redistribute_rgb(r, g, b):
    threshold = 1.0
    m = max(r, g, b)
    if m <= threshold:
        return r, g, b
    total = r + g + b
    if total >= 3 * threshold:
        return threshold, threshold, threshold
    x = (3 * threshold - total) / (3 * m - total)
    gray = threshold - x * m
    return gray + x * r, gray + x * g, gray + x * b
This should return acceptable results in either a linear or gamma-corrected color space, although linear will be better.
Multiplying each r,g,b value by the same amount retains their original proportions and thus the hue, up to the point where x=0 and you've achieved white. You've expressed interest in a non-linear response once clipping starts, but I'm not entirely sure how to work that in. The math was carefully chosen so that at least one of the returned values will be at the threshold, and none will be above.
Running this on your example of (1.6, 0.5, 0.1) returns (1.0, 0.6615, 0.5385).
I've found a way to do it based on Mark Ransom's suggestion, with a twist. When the colour is out of gamut, we compute the grey of equivalent perceptual luminosity, then linearly interpolate between the out-of-gamut input colour and that grey to find the first in-gamut colour on the line between them. The tricky part is weighting each RGB channel to get the perceptual luminosity: the commonly used Rec. 709 luminance weights, L = 0.2126*red + 0.7152*green + 0.0722*blue, look quite blatantly wrong here, as they make blue far too bright. Instead I did some tests and chose the weights that looked most correct to me. These are not definitive and you might want to tweak them, although for this particular problem that is perhaps not too crucial.
Or, in fewer words: the solution is to desaturate the out-of-gamut colour just enough that it lands in gamut.
Here is my solution in C code. All variables are in floating point format.
Wr = 0.125; Wg = 0.68; Wb = 0.195;  // weights for each colour channel
max = MAXN(MAXN(red, grn), blu);    // max is the maximum value of the 3 colours

if (max > 1.)                       // the colour is out of gamut
{
    L = Wr*red + Wg*grn + Wb*blu;   // luminosity of the colour's grey point

    if (L < 1.)                     // the grey point is no brighter than white
    {
        // t is the position on the line between the grey point and the
        // input colour, between 0 and 1: a lower t means closer to the
        // grey point, a higher t means closer to the input colour.
        t = (1. - L) / (max - L);

        // simple linear interpolation between the input colour and its grey point
        red = red*t + L*(1. - t);
        grn = grn*t + L*(1. - t);
        blu = blu*t + L*(1. - t);
    }
    else                            // too bright regardless of saturation
    {
        red = grn = blu = 1.;
    }
}
Here's what it looks like with a linear orange gradient (image not shown here).
It doesn't use anything like an arbitrary gamma, which is good; the only mostly arbitrary part is the luminosity weights, but I guess those are quite necessary.
You have to map it to some non-linear scale. For example: http://en.wikipedia.org/wiki/Gamma_correction .
Ex: let y = f(x) = log(1+x) - log(1-x) define the "actual" luminance.
The inverse function is x = g(y) = (e^y - 1) / (e^y + 1).
Now take your values x = 1 and x = 0.2. For the first, the corresponding y is infinity; six times infinity is still infinity, and applying g gives x_new = 1.
For x = 0.2, y = 0.4054651; after multiplying by 6, y_new = 2.432791, and the corresponding x_new = 0.8385876.
For x = 0, x_new will still be 0 (I will leave the calculation to you).
So starting from (1.0, 0.2, 0.0), your new set of values is (1.0, 0.8385876, 0.0).
This is just one example of a mapping function; there are infinitely many of them. Choose one that looks best to you.
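As a concrete illustration of this particular mapping in Python (the name expose is just for this sketch; note that g(y) is the same as tanh(y/2)):

import math

def expose(x, factor):
    # Map the channel onto the unbounded scale, multiply there, map back.
    if x >= 1.0:
        return 1.0  # y would be infinite, and any positive factor keeps it there
    y = math.log(1 + x) - math.log(1 - x)
    return math.tanh(y * factor / 2)  # same as (e^(y*factor) - 1) / (e^(y*factor) + 1)

Applying expose(x, 6) channel by channel to (1.0, 0.2, 0.0) reproduces the (1.0, 0.8385876, 0.0) worked out above.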

Image colorization algorithm

I have an image whose pixel colors I want to change to match a particular color (though not completely).
As an example, I want to tint the image of a red car so that it appears blue. I can do this with the GIMP and with ImageMagick, but I would like to know which algorithm they are using to do this so I can implement it in my own program.
I have tried simple addition of the difference between the colors, but it doesn't work very well.
As just a shot in the dark, untested suggestion from someone who's getting into image processing fairly recently... maybe you could just scale the channels?
For example:
RGB_Pixel.r = RGB_Pixel.r * 0.75;
RGB_Pixel.g = RGB_Pixel.g * 0.75;
RGB_Pixel.b = RGB_Pixel.b * 1.25;
If you loop through your image pixel-by-pixel with those three changes, I'd expect you to see the image shift towards blue, and the numbers of course can be trial-and-error'd.
EDIT:
Now if you want to change ONLY the pixels that are a certain color to begin with (say, you want to turn a blue car red without touching the rest of the picture), you'll need to check each pixel to see what color it looks like. One way to do this is with a Euclidean distance:
int R = RGB_Pixel.r;
int G = RGB_Pixel.g;
int B = RGB_Pixel.b;

// You are looking for blue, which is [0 0 255];
// D is the distance of the current pixel from the desired color.
float D = sqrt( (R-0)*(R-0) + (G-0)*(G-0) + (B-255)*(B-255) );

if (D < threshold)
{
    // Shift the matched pixel toward red.
    R = R * 1.25;
    G = G * 0.75;
    B = B * 0.75;
}
The threshold variable is a number between 1 and 255 representing the maximum distance a color can be from your target color and still be considered "close enough". This matters because you don't want to look only for [0 0 255]; you will very rarely find perfect blue (or perfect anything) in an image.
You want the lowest threshold you can get away with, so you don't end up coloring things that aren't part of the object you're looking for, but high enough that it covers the whole object. One way to manage this is to set up multiple D variables, each with a different target color, so you can capture a few separate shades of "blue" without one very high threshold. For instance, [102 102 200] looks blue to the human eye, but it might require a pretty high threshold to catch if [0 0 255] is your only target color.
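To put the distance test and the channel scaling together over a whole image, here is a minimal NumPy sketch; the scale factors match the snippet above, while the target and the threshold of 120 are only illustrative placeholders to tune per image:

import numpy as np

def tint_matching_pixels(img, target=(0, 0, 255), threshold=120,
                         scale=(1.25, 0.75, 0.75)):
    # img: H x W x 3 uint8 array. Scales the channels of every pixel
    # whose Euclidean distance to `target` is below `threshold`.
    pixels = img.astype(np.float64)
    dist = np.sqrt(np.sum((pixels - np.asarray(target)) ** 2, axis=-1))
    mask = dist < threshold
    pixels[mask] *= np.asarray(scale)
    return np.clip(pixels, 0, 255).astype(np.uint8)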
I suggest playing with this calculator to get a feel for which colors you want to search for specifically.
