How can I solve a reversed-lighting problem in ray tracing?

I'm rendering an image in Python, but the Lambertian shading does not work.
At first the image came out like this: [first rendered image]
But when I reversed the sphere's normal vector, it came out like this: [second rendered image]
This is my shading code.
v = -m*ray
if s == 'Sphere':
    n = view.viewPoint - list[idx].c - v
    n = -n / np.sqrt(np.sum(n*n))
    for i in light:
        l_i = v + i.position - view.viewPoint
        l_i = l_i / np.sqrt(np.sum(l_i * l_i))
        x = list[idx].s.d[0] * i.intensity[0] * max(np.dot(l_i, n), 0)
        y = list[idx].s.d[1] * i.intensity[1] * max(np.dot(l_i, n), 0)
        z = list[idx].s.d[2] * i.intensity[2] * max(np.dot(l_i, n), 0)
Here, list is the list of spheres and idx is the index of the closest sphere.
I'd be grateful if anyone could help me; I have been stuck on this for a week.

You have not stated what you think is wrong.
Where is the light in relation to the spheres in the first image? Is it above and slightly behind them? If so - the image looks correct.
Assuming the statements above are correct, then the second image looks correct. The reason the light is on the bottom of the spheres is because the normal is now pointing "in" so the dot() product sign will be opposite to that in the first image.
Note that in your example code, it doesn't look like you have any shadow ray treatment. In other words - all objects will be lit as if all other objects are transparent. No objects will cast shadows on to other objects. This also explains why you can see the bottom of the spheres when the light is coming from the top. If you had proper shadow rays, then it wouldn't actually matter which way the normal is pointing (I would remove the max() functions at that point).
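For reference, here is a minimal sketch of what a shadow-ray test could look like in the same NumPy style. The sphere_intersect helper and the spheres list are assumptions for illustration, not part of your code:

    import numpy as np

    def sphere_intersect(origin, direction, center, radius):
        # Smallest positive t with origin + t*direction on the sphere, or None if missed.
        oc = origin - center
        b = 2.0 * np.dot(direction, oc)
        c = np.dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0:
            return None
        t = (-b - np.sqrt(disc)) / 2.0
        return t if t > 1e-6 else None

    def lit_by(point, normal, light_pos, spheres):
        # Cast a ray from the hit point toward the light; if any sphere blocks it
        # before the light, the point is in shadow and gets no diffuse contribution.
        to_light = light_pos - point
        dist = np.sqrt(np.sum(to_light * to_light))
        direction = to_light / dist
        origin = point + 1e-4 * normal  # nudge off the surface to avoid self-intersection
        for center, radius in spheres:
            t = sphere_intersect(origin, direction, center, radius)
            if t is not None and t < dist:
                return False
        return True

You would call lit_by() before adding the Lambertian term for a light and skip that light if it returns False.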

Related

Slant/Skew a Texture - Monogame

I am trying to Slant/Skew a texture to create some shadows for my game.
I have read over this helpful answer that shows this can be done by passing a matrix to spriteBatch.Begin().
Because my linear algebra skills are not very developed, I am having some trouble achieving my desired result. I am hoping to skew my shadow so it looks similar to the following, where the shadow is slanted by an angle but the bottom of the shadow lines up with the bottom of the sprite (the feet, in this case).
I originally tried the skew matrix provided in the solution above:
Matrix skew = Matrix.Identity;
skew.M12 = (float)Math.Tan(MathHelper.ToRadians(36.87f));
But this ends up rotating the shadow about the world's origin. I see the solution also notes this and provides the following to apply the transform about the sprite instead.
Matrix myMatrix = Matrix.CreateTranslation(-100, -100, 0)
* Matrix.CreateScale(2f, 0.5f, 1f)
* Matrix.CreateTranslation(100, 100, 0);
Though I'm not sure where to apply this myMatrix Matrix. I have tried applying it to the shadow sprite, to the castingShadow sprite, and to both multiplied together and applied to the shadow, with no luck.
I have also tried using other methods like Matrix.CreateRotationX(MathHelper.ToRadians(0.87f)) with no luck.
There is actually a Matrix.CreateShadow() method too, but it requires a Plane, which I have no semblance of in my game.
Can anyone help me figure out the required Matrix for this slanting, or point me in the direction of some resources?
Thanks!
Okay, so I found a transform to use to get the desired slant.
Thanks to @David Gouveia and @AndreRussell from this post.
Matrix matrix = Matrix.CreateRotationX(MathHelper.ToRadians(60)) *
Matrix.CreateRotationY(MathHelper.ToRadians(30)) *
Matrix.CreateScale(1,1,0);
EDIT:
So the above solution solved how I wanted to slant my texture, but had some weird positioning side effects. To address this, I ended up with a transform like the following:
Matrix slant = Matrix.CreateTranslation(-loc.X + angleX, -loc.Y, 0f) *
Matrix.CreateRotationX(MathHelper.ToRadians(angleX)) *
Matrix.CreateRotationY(MathHelper.ToRadians(30)) *
Matrix.CreateScale(1.4f, 1f, 0) *
Matrix.CreateTranslation(loc.X + angleX, loc.Y, 0f);
Here angleX is set based on the "sun" X position, and the loc vector is where I want the object and the object's shadow to appear.

Calculate 3D distance based on change in intensity

I have three sections (top, mid, bot) of grayscale images (3D). In each section, I have a point with coordinates (x,y) and intensity values [0-255]. The distance between each section is 20 pixels.
I created an illustration to show how those images were generated using a microscope:
Illustration
Illustration (side view): the red line is the object of interest. Blue stars represent the dots that are visible in the top, mid, and bot sections. The (x,y) coordinates of these dots are known. The length of the object stays the same, but it can rotate in space and go 'out of focus' (the illustration shows the line rotating at time point 5). At time point 1, the red line is at rest (in the 2D image: 2 dots separated by a distance equal to the length of the object).
I want to estimate the x,y,z-coordinates of the end points (represented as stars) by using the changes in intensity, the known length of the object, and the information in the sections I have. Any help would be appreciated.
Here is an example of images:
Bot section
Mid section
Top section
My 3D PSF data:
https://drive.google.com/file/d/1qoyhWtLDD2fUy2zThYUgkYM3vMXxNh64/view?usp=sharing
Attempt so far: [image]
I guess the correct approach would be to record three images with slightly different z-coordinates for your bot and your top frame, then do a 3D-deconvolution (using Richardson-Lucy or whatever algorithm).
However, a simpler approach would be the one I outlined in my comment. If you use the data for a publication, I strongly recommend emphasizing that this is just an estimate and including the steps you used to obtain it.
I'd suggest the following procedure:
Since I do not have your PSF data, I fake some by modeling the PSF as a 3D Gaussian. Of course, this is a strong simplification, but you should be able to get the idea behind it.
First, fit a Gaussian to the PSF along z:
[xg, yg, zg] = meshgrid(-32:32, -32:32, -32:32);
rg = sqrt(xg.^2+yg.^2);
psf = exp(-(rg/8).^2) .* exp(-(zg/16).^2);
% add some noise to make it a bit more realistic
psf = psf + randn(size(psf)) * 0.05;
% view psf:
%
subplot(1,3,1);
s = slice(xg,yg,zg, psf, 0,0,[]);
title('faked PSF');
for i = 1:2
    s(i).EdgeColor = 'none';
end
% data along z through PSF's center
z = reshape(psf(33,33,:),[65,1]);
subplot(1,3,2);
plot(-32:32, z);
title('PSF along z');
% Fit the data
% Generate a function for a Gaussian distribution plus some background
gauss_d = @(x0, sigma, bg, x)exp(-1*((x-x0)/(sigma)).^2)+bg;
ft = fit ((-32:32)', z, gauss_d, ...
'Start', [0 16 0] ... % You may find proper start points by looking at your data
);
subplot(1,3,3);
plot(-32:32, z, '.');
hold on;
plot(-32:.1:32, feval(ft, -32:.1:32), 'r-');
title('fit to z-profile');
The function that relates the intensity I to the z-coordinate is
gauss_d = @(x0, sigma, bg, x)exp(-1*((x-x0)/(sigma)).^2)+bg;
You can rearrange this formula for x (the z-coordinate). Because of the square root, there are two possibilities:
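Written out, with the same symbols as the fitted model (a short sketch of the algebra):

    I = e^{-\left(\frac{x - x_0}{\sigma}\right)^2} + bg
    \quad\Longrightarrow\quad
    x = x_0 \pm \sigma \sqrt{-\ln(I - bg)}

which is exactly what zfromI and zfromI2 below implement.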
% now make a function that returns the z-coordinate from the intensity
% value:
zfromI = @(I)ft.sigma * sqrt(-1*log(I-ft.bg))+ft.x0;
zfromI2= @(I)ft.sigma * -sqrt(-1*log(I-ft.bg))+ft.x0;
Note that the PSF I have faked is normalized to have one as its maximum value. If your PSF data is not normalized, you can divide the data by its maximum.
Now you can use zfromI or zfromI2 to get the z-coordinate for your intensity. Again, I should be normalized, i.e. given as the fraction of the measured intensity to the intensity of your reference spot:
zfromI(.7)
ans =
9.5469
>> zfromI2(.7)
ans =
-9.4644
Note that due to the random noise I have added, your results might look slightly different.

Calculate the angles of a pixel to a camera plane in a depth-image

I have a z-image from a ToF Camera (Kinect V2). I do not have the pixel size, but I know that the depth image has a resolution of 512x424. I also know that I have a fov of 70.6x60 degrees.
I previously asked how to get the pixel size here. In MATLAB the code looks like the following.
The brighter the pixel, the closer the object.
close all
clear all
%Load image
depth = imread('depth_0_30_0_0.5.png');
frame_width = 512;
frame_height = 424;
horizontal_scaling = tan((70.6 / 2) * (pi/180));
vertical_scaling = tan((60 / 2) * (pi/180));
%pixel size
with_size = horizontal_scaling * 2 .* (double(depth)/frame_width);
height_size = vertical_scaling * 2 .* (double(depth)/frame_height);
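In other words, the code above computes the footprint of one pixel at depth z from the field of view (assuming depth is already in the metric units you want the pixel size in):

    w_{\text{pixel}}(z) = \frac{2\, z \tan(70.6^\circ / 2)}{512},
    \qquad
    h_{\text{pixel}}(z) = \frac{2\, z \tan(60^\circ / 2)}{424}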
The image itself is a cube rotated by 30 degrees and can be seen here: [image]
What I want to do now is calculate the horizontal angle of a pixel to the camera-plane and the vertical angle to the camera plane.
I tried to do this with triangulation: I calculate the z-difference from one pixel to the next, first in the horizontal direction and then in the vertical direction, using a convolution:
%get the horizontal errors
dx = abs(conv2(depth,[1 -1],'same'));
%get the vertical errors
dy = abs(conv2(depth,[1 -1]','same'));
After this I calculate the angles via atan, like this:
horizontal_angle = rad2deg(atan(with_size ./ dx));
vertical_angle = rad2deg(atan(height_size ./ dy));
% NaN == NaN is always false, so use isnan() to clear undefined angles
horizontal_angle(isnan(horizontal_angle)) = 0;
vertical_angle(isnan(vertical_angle)) = 0;
Which gives back promising results, like these:
However, using a slightly more complex image like this one, which is turned by 60° and 30°,
gives back horizontal and vertical angle images that look the same, like this:
After subtracting one image from the other, I get the following image, which shows that there is a difference between the two.
So, I have the following questions: How can I verify this concept? Is the math correct and the test case just poorly chosen? Is the difference between the horizontal and vertical angles in the two images simply too small? Are there any errors in the calculation?
While my previous code may look good, it had a flaw. I tested it with smaller images (5x5, 3x3, and so on) and saw that there is an offset created by the difference pictures (dx, dy) produced by the convolution. It is simply not possible to map a difference picture (which holds the difference between two pixels) back onto the pixels themselves, since the difference picture is smaller than the original.
As a quick fix, I do a form of downsampling. I changed the filter mask to:
%get the horizontal differences
dx = abs(conv2(depth,[1 0 -1],'valid'));
%get the vertical differences
dy = abs(conv2(depth,[1 0 -1]','valid'));
And changed the angle function to:
%get the angles by the tangent
horizontal_angle = rad2deg(atan(with_size(2:end-1,2:end-1)...
./ dx(2:end-1,:)))
vertical_angle = rad2deg(atan(height_size(2:end-1,2:end-1)...
./ dy(:,2:end-1)))
Also, I used a padding function to bring the angle maps back to the same size as the original image.
horizontal_angle = padarray(horizontal_angle,[1 1],0);
vertical_angle = padarray(vertical_angle,[1 1],0);

LookAt Rotation Using Euler Axis Angles

I'm using the Blender Game Engine and Python. I made a script that makes an empty follow my cursor in 3D space (I use the keyboard for height for now).
Now I want to implement a LookAt function for a general object rather than a camera, using Python. I want the object to look exactly at the point I'm hovering over on the screen (the empty's position). For now I'm using a cube, so basically one face of the cube should always face the empty.
So I thought of using matrices or quaternions, but the problem is that all I have is a direction vector, and I chose the x axis as the local look direction. So either way I need to calculate the Euler angles and convert them to axis-angle rotations (theta*[axis^]).
The resources I have in the Blender Game Engine are: mathutils (which provides quaternions, Euler-based rotations via axis-angles, and matrices), though it doesn't have any updated documentation, which is annoyingly horrible! I have to print help() to get some sort of info!
Now, I've been able to make the object look at the empty when I rotate only the Z axis. I used a little trick that handles the angle sign for me using simple trigonometry, so the sign is handled and I don't need any matrix trickery or quaternions. The problem begins when I try to add the second rotation: I want to rotate about the Y axis for the up-down look (as is known, in 3D we need two rotations to face something; the third one only rolls the view upside-down, "rolling the camera"), since the remaining axis is the look direction vector.
Here's my script:
import bge
from mathutils import Vector, Matrix
import math
# Basic stuff
cont = bge.logic.getCurrentController()
own = cont.owner
scene = bge.logic.getCurrentScene()
c = scene.objects["Cube"]
e = scene.objects["Empty"]
# axes (we're using localOrientation)
x = Vector((1.0,0.0,0.0))
y = Vector((0.0,1.0,0.0))
z = Vector((0.0,0.0,1.0))
vec = Vector(e.worldPosition - c.worldPosition) # direction vector
# Convert the direction vector into Euler angles.
# Using trigonometry we get: tan(psi) = cos(phi2)/cos(phi1),
# where phi1 is the angle to the x axis (Euler angle)
# and phi2 is the angle to the y axis.
# psi is the z rotation angle.
# get cos(euler_angle)
phi1 = vec.dot(x)/vec.length # = cos p1
phi2 = vec.dot(y)/vec.length # = cos p2
phi3 = vec.dot(z)/vec.length # = cos p3
# get the rotation/steer angles
zAngle = math.atan(phi2/phi1)
yAngle = math.atan2(phi3,phi1)
xAngle = math.atan(phi2/phi3)
# use only 2 as the third must adapt (also: view concept - x is the looking direction, rotating it would make rolling)
r = c.localOrientation.to_euler()
r.z = zAngle
r.y = -yAngle
#r.x = xAngle
c.localOrientation = r
Separately, each axis works perfectly, but when combined there are little jump glitches when I pass through the global Y axis.
Also, it seems that the "local" orientation in Blender is just the same as the worldOrientation, which is also annoying because I'm no longer sure which frame of reference I'm working in. If anyone knows, please help!
Edit 1:
Apparently there's a built-in logic brick that handles this for me, and when I press "3D" it tracks AND succeeds in rotating BOTH axes. Still, I want to know what the problem with my script is! What did the 3D button do that I didn't?
Edit 2:
I stopped using the trigonometry tricks and found out that when I use local orientation I ALWAYS get gimbal lock in one axis. That's probably what happened behind the scenes. Thanks to anyone interested; if you have any good trick I'd still be glad to hear it =]!
I have a youtube tutorial on how to make the camera look at specific objects. It may help.
https://www.youtube.com/watch?v=hwbObDkiJrE
But the concept, when using the GUI, is to open the Object → Relations panel and, for the object you want to do the LookAt, make it the child of the object you want it to follow (the parent). You then select 'Vertex' as the relationship. This will then affect the rotation angles of the child object only.
Try this,
bpy.data.objects['child'].parent = bpy.data.objects['parent']
bpy.data.objects['child'].parent_type = 'VERTEX'
and actually there is more info here
https://blender.stackexchange.com/questions/26108/how-do-i-parent-objects
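If you want to stay in a script, one trick that avoids Euler angles entirely (and with them the gimbal lock mentioned in the question) is to build the rotation directly from the direction vector with mathutils' to_track_quat. A minimal sketch, assuming as in the question that local +X is the look direction and +Z is up:

    import bge
    from mathutils import Vector

    cont = bge.logic.getCurrentController()
    scene = bge.logic.getCurrentScene()
    cube = scene.objects["Cube"]
    empty = scene.objects["Empty"]

    # Direction from the cube to the empty.
    direction = Vector(empty.worldPosition) - Vector(cube.worldPosition)

    # to_track_quat returns a quaternion that points the chosen axis ('X' here)
    # along the vector while keeping 'Z' as the up axis, with no Euler angles involved.
    quat = direction.to_track_quat('X', 'Z')
    cube.worldOrientation = quat.to_matrix()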

What is the best way to check all pixels within certain radius?

I'm currently developing an application that will alert users of incoming rain. To do this I want to check a certain area around the user's location for rainfall (the rainfall radar image encodes intensity with different pixel colours). I would like the checked area to be a circle, but I don't know how to do this efficiently.
Let's say I want to check a radius of 50 km. My current idea is to take a 100 km x 100 km subset of the image (user + 50 km west, east, north, and south) and then check, for each pixel in this subset, whether it is closer to the user than 50 km.
My question is: is there a better solution that is commonly used for this type of problem?
If the occurrence of the event you are searching for (rain or anything else) is relatively rare, then there's nothing wrong with scanning a square of pixels and then, only after detecting rain in that square, checking whether that rain is within the desired 50 km circle. Note that the key point here is that you don't need to check each pixel of the square for being inside the circle (that would be very inefficient); you search for your event (rain) first, and only when you find it do you check whether it falls into the 50 km circle. To implement this efficiently you also have to develop some smart strategy for handling multi-pixel "stains" of rain on your image. A sketch of this idea is shown below.
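A minimal sketch of that square-then-circle check, assuming the radar image is a 2D NumPy array and that rain_threshold and km_per_pixel are placeholders for your own values:

    import numpy as np

    def rain_within_radius(radar, cx, cy, radius_km, km_per_pixel, rain_threshold):
        # Scan the bounding square first; only pixels that actually look like rain
        # get the circle test.
        r_px = int(radius_km / km_per_pixel)
        x0, x1 = max(cx - r_px, 0), min(cx + r_px + 1, radar.shape[1])
        y0, y1 = max(cy - r_px, 0), min(cy + r_px + 1, radar.shape[0])
        for y in range(y0, y1):
            for x in range(x0, x1):
                if radar[y, x] >= rain_threshold:                    # candidate rain pixel
                    if (x - cx) ** 2 + (y - cy) ** 2 <= r_px ** 2:   # inside the circle?
                        return True
        return False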
However, since you are scanning a raster image, you can easily implement the well-known Bresenham circle algorithm to find the starting and the ending point of the circle for each scan line. That way you can easily limit your scan to the desired 50km radius.
On second thought, you don't even need the Bresenham algorithm for that. For each row of pixels in your square, calculate the points where that row intersects the 50 km circle (using the usual schoolbook formula with a square root), and then check all pixels that fall between those intersection points. Process all rows in the same fashion and you are done.
P.S. Unfortunately, the Wikipedia page I linked does not present the Bresenham algorithm at all; it has code for the Michener circle algorithm instead. The Michener algorithm will also work for circle rasterization, but it is less precise than Bresenham's. If you care about precision, find a true Bresenham implementation somewhere. It is actually surprisingly difficult to find on the net: most search hits erroneously present Michener as Bresenham.
There is: you can modify the midpoint circle algorithm to give you an array containing, for each y, the x coordinate where the circle starts (and ends; they are the same thing because of symmetry). This array is easy to compute; pseudocode below.
Then you can just iterate over exactly the right part, without checking anything.
Pseudo code:
data = new int[radius];
int f = 1 - radius, ddF_x = 1;
int ddF_y = -2 * radius;
int x = 0, y = radius;
while (x < y)
{
    if (f >= 0)
    {
        y--;
        ddF_y += 2; f += ddF_y;
    }
    x++;
    ddF_x += 2; f += ddF_x;
    data[radius - y] = x; data[radius - x] = y;
}
Maybe you can try something that will speed up your algorithm.
In a brute-force algorithm you will probably use the equation:
(x-p)^2 + (y-q)^2 < r^2
(p,q) - center of the circle, user position
r - radius (50km)
If you want to find and check all pixels (x,y) that satisfy the above condition, your algorithm is O(n^2).
Instead of scanning all pixels in the circle, I would check only the pixels that lie on the border of the circle.
In that case, you can use a more clever way to define the circle.
x = p+r*cos(a)
y = q + r*sin(a)
a - angle measured in radians [0-2pi]
Now you can sample some angles, for example twenty of them, iterate, and find all pairs (x,y) that lie on the border of the 50 km circle. Then check whether they are in the rain zone and alert the user.
For more safety I recommend also using several smaller radii (less than 50 km), because the whole rain cloud could lie inside the circle and your app would not recognize it. For example, use 3 inner circles (r = 5 km, 15 km, 30 km) and do the same thing. The efficiency of this algorithm depends only on the number of angles and the number of circles.
Pseudocode will be:
checkRainDanger()
    p, q <- position
    radius[] <- array of radii
    for c = 1 to length(radius)
        a = 0
        while (a < 2*pi)
            x = p + radius[c]*cos(a)
            y = q + radius[c]*sin(a)
            if rainZone(x, y)
                return true
            else
                a += pi/10
        end_while
    end_for
    return false  // no danger
from math import sqrt

r2 = r * r
for x in range(-r, r + 1):
    max_y = int(sqrt(r2 - x * x))
    for y in range(-max_y, max_y + 1):
        # (x, y) is within the radius - check for rain here
        pass
