Is it possible to make a raycast that triggers when it crosses z = 0, rather than waiting until it hits an object?
I have an orthographic camera at (0, 0, -40), facing directly towards the origin. I want to send a raycast from the center of each edge of the camera's viewport and find the (x, y) points where those rays cross the z = 0 plane.
If it's an orthographic camera, the edges of the frustum are parallel like a box. Looking straight down the Z-axis, you can get the x,y values easily with:
x: camera.left or camera.right
y: camera.top or camera.bottom
The edges don't diverge at all, since ortho cams have no perspective, so they cross the z = 0 plane at the same x and y as the camera's edges.
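If you want the actual intersection points rather than just reading the frustum extents, Three.js can also intersect a ray with a mathematical plane directly, with no scene object to hit. A minimal sketch in TypeScript -- the frustum extents here are made-up values, the rest matches the question's setup:

import * as THREE from 'three';

// Ortho camera at (0, 0, -40) looking at the origin; extents are placeholders.
const cam = new THREE.OrthographicCamera(-10, 10, 5, -5, 0.1, 100);
cam.position.set(0, 0, -40);
cam.lookAt(0, 0, 0);
cam.updateMatrixWorld();

// The z = 0 plane: normal (0, 0, 1), constant 0.
const zPlane = new THREE.Plane(new THREE.Vector3(0, 0, 1), 0);

// Ray origin: midpoint of the left viewport edge, camera space -> world space.
const origin = new THREE.Vector3(cam.left, 0, 0).applyMatrix4(cam.matrixWorld);

// Ray direction: the view direction (in an ortho camera all rays are parallel).
const dir = new THREE.Vector3();
cam.getWorldDirection(dir);

const hit = new THREE.Vector3();
new THREE.Ray(origin, dir).intersectPlane(zPlane, hit); // hit is the (x, y, 0) point

Repeat with (cam.right, 0, 0), (0, cam.top, 0) and (0, cam.bottom, 0) for the other three edge midpoints.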
I know I can go from the 3D space to the 2D space of the mesh by getting the corresponding UV coordinates of each vertex.
When I transform to UV space, each vertex has its color, and I can write that color at the pixel position its UV coordinate maps to. The issue is how to derive the pixels that lie in between them; I want a smooth gradient.
For example, the color at UV coordinate (0.5, 0.5) is RGB [30, 40, 50], at (0.75, 0.75) it's [70, 80, 90], and say there's a third vertex at (0.25, 0.6) with [10, 20, 30]. How do I derive the colors for the area these three UV/vertex coordinates span, i.e. the in-between values for the pixels?
Just draw the mesh on the GPU. You already know you can replace the vertex positions with the UVs, so the mesh can be represented in texture space.
Keep your vertex colors unchanged and draw the mesh into your target texture; the GPU will do the color interpolation for every triangle you draw.
And keep in mind that if two or more triangles share the same texture space, the result will depend on the triangle order.
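A minimal sketch of that idea in TypeScript/Three.js (the material and render-target names are my own, and it assumes the geometry carries per-vertex color and uv attributes): place each vertex at its UV coordinate remapped to clip space, pass the color through, and let the rasterizer fill the in-between pixels:

import * as THREE from 'three';

// Draws a mesh in its own UV space: gl_Position comes from the UVs, not from
// the 3D positions, so the layout on screen equals the layout on the texture.
const uvBakeMaterial = new THREE.ShaderMaterial({
  vertexColors: true,
  vertexShader: `
    varying vec3 vColor;
    void main() {
      vColor = color;                      // per-vertex color attribute
      vec2 clip = uv * 2.0 - 1.0;          // UV [0, 1] -> clip space [-1, 1]
      gl_Position = vec4(clip, 0.0, 1.0);  // ignore the 3D position entirely
    }
  `,
  fragmentShader: `
    varying vec3 vColor;
    void main() {
      gl_FragColor = vec4(vColor, 1.0);    // GPU-interpolated gradient per pixel
    }
  `,
});

// Render the mesh once with this material into a render target; the target's
// texture then holds the baked gradient.
const bakeTarget = new THREE.WebGLRenderTarget(1024, 1024);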
I would like to create a vertex shader in Three.js to render the faces of a textured geometry so that all the triangles are face-on to the camera.
This is to emulate the functionality and performance of Three.js Points, but without the size limitation of gl_PointSize.
I'm not really sure what calculation to perform in the vertex shader. Any help appreciated.
You will have to add a custom attribute to your geometry; the easiest one to use would be a vector to the center of the triangle.
In the vertex shader you will have to calculate how to move each vertex. You then have:
vertex position
vector to center
vertex normal == face normal
camera orientation (from matrices)
From that you can calculate the triangle center (which stays static), and then the rotation each vertex has to make around that center, about an axis perpendicular to the vector to the center, so that the normal comes out as the inverse of the camera orientation.
The math is not very complicated, but writing shader code is tedious because of the non-existent debugging. I'd advise you to first write CPU-side code that rotates the geometry's positions (using only the same parameters) and then port it to the shader.
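For reference, a common shortcut achieves the face-on result without computing an explicit rotation at all: transform only the triangle center into view space and re-apply each vertex's model-space offset there. A minimal TypeScript/Three.js sketch, assuming you have already filled a per-vertex center attribute with each triangle's centroid:

import * as THREE from 'three';

// Requires geometry.setAttribute('center', ...) holding the owning triangle's
// centroid for every vertex.
const billboardMaterial = new THREE.ShaderMaterial({
  vertexShader: `
    attribute vec3 center;
    void main() {
      // Offset of this vertex from its triangle's center, in model space.
      vec3 offset = position - center;
      // Transform only the center, then add the raw offset in view space:
      // the triangle keeps its shape but always faces the camera.
      vec4 viewCenter = modelViewMatrix * vec4(center, 1.0);
      viewCenter.xyz += offset;   // note: this ignores the model's rotation/scale
      gl_Position = projectionMatrix * viewCenter;
    }
  `,
  fragmentShader: `
    void main() { gl_FragColor = vec4(1.0); }   // flat white, for brevity
  `,
});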
I have a convex closed shape in 2D space (on the x-y plane). I do not know what it looks like. I rotate this shape about approximately the center of its bounding box 64 times, by 5.625 degrees (360/64) each time. For each rotation I have the x-coordinates of the extreme points of the shape; in other words, I know the left and right x extents of the shape for each rotation (assuming an orthographic projection). How do I obtain 64 points on the shape that do not contradict the x projections?
Note that the 2D shape is rotating, but the coordinate axes are not rotating along with it. So if the object is a line, the x projection of each end, if plotted against the rotation angle, will essentially trace a sine/cosine wave, depending on its original orientation.
The higher the number of rotations, the closer the solution should get to my actual shape.
In reality I do not know the exact point I am rotating the shape about, but any solution assuming I do know it will still be helpful, as I don't mind the reconstruction being imperfect.
We used the straightforward method to reconstruct it. A projection is a shadow of the object.
You start with a 2D bounding box. For each projection you cut away from the 2D shape the left and right parts that fall outside of the projection. So the main function calculates the intersection of two convex 2D shapes; you calculate these intersections for each projection.
Say we have several purple projections P1, P2, P3, P4 of the original green object. Knowing the position of a purple projection, build two red rays coming from the end points of the projection and intersect them with the reconstructed object.
The red object was reconstructed using 4 projections. Compared to the original green one, you can see they are not the same. The more projections you have, the less error you'll get in the final result.
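A minimal TypeScript sketch of this method under the stated assumptions (rotation about the origin; the extents given as angle/left/right triples; all names are my own). Each projection contributes two half-plane cuts, so the core is one Sutherland-Hodgman-style clipping step:

type Pt = { x: number; y: number };

// Clip a convex polygon to the half-plane n . p <= d (one Sutherland-Hodgman pass).
function clipHalfPlane(poly: Pt[], nx: number, ny: number, d: number): Pt[] {
  const out: Pt[] = [];
  for (let i = 0; i < poly.length; i++) {
    const a = poly[i], b = poly[(i + 1) % poly.length];
    const da = nx * a.x + ny * a.y - d;   // signed offset of each endpoint
    const db = nx * b.x + ny * b.y - d;
    if (da <= 0) out.push(a);             // a is inside: keep it
    if (da * db < 0) {                    // the edge crosses the boundary
      const t = da / (da - db);
      out.push({ x: a.x + t * (b.x - a.x), y: a.y + t * (b.y - a.y) });
    }
  }
  return out;
}

// Start from a big bounding square and, for every rotation angle, cut away the
// strips outside the measured [left, right] extent. Rotating the shape by
// +theta and projecting onto x is the same as projecting onto the fixed
// direction (cos(-theta), sin(-theta)).
function reconstruct(views: { theta: number; left: number; right: number }[]): Pt[] {
  const h = 1e6;
  let poly: Pt[] = [{ x: -h, y: -h }, { x: h, y: -h }, { x: h, y: h }, { x: -h, y: h }];
  for (const { theta, left, right } of views) {
    const nx = Math.cos(-theta), ny = Math.sin(-theta);
    poly = clipHalfPlane(poly, nx, ny, right);    // drop everything past the right extent
    poly = clipHalfPlane(poly, -nx, -ny, -left);  // drop everything past the left extent
  }
  return poly; // a convex polygon consistent with all projections
}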
I'm trying to write a simple voxel raycaster as a learning exercise. It is purely CPU-based for now, until I figure out how things work exactly; OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible.
Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny.
So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem.
Screenshots (1) through (3): raycasting voxels while the camera remains strictly axis-aligned.
Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations is, in theory, very clear to me. Yet I have only ever achieved a "2.5D" rendering when the camera rotates: fish-eyed, a bit like in Google Street View. Even though I have a volumetric world representation, it seems, no matter what I try, as if I were first creating a rendering from the "front view" and then rotating that flat rendering according to the camera rotation. Needless to say, I'm by now aware that rotating rays like this is not particularly necessary, and error-prone.
Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey flat-render-rotated style looks:
camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screen #2 is not visible in this rotation, yet by now "it really should"!
Now of course I'm aware of this: in a simple axis-aligned, no-rotation setup like I had in the beginning, the ray simply traverses the positive z-direction in small steps, diverging to the left/right and top/bottom only depending on pixel position and the projection matrix. As I rotate the camera to the right or left -- i.e. rotate it around the Y-axis -- those very steps should simply be transformed by the proper rotation matrix, right? So for forward traversal the Z-step gets a bit smaller the more the camera rotates, offset by an increase in the X-step. And for the pixel-position-based horizontal/vertical divergence, increasing fractions of the x-step need to be added to the z-step. Somehow, none of the many matrices I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations, really get this part right.
Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode:
fx and fy: pixel positions x and y
rayPos: vec3 for the ray starting position in world-space (calculated as below)
rayDir: vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
rayStep: a temporary vec3
camPos: vec3 for the camera position in world space
camRad: vec3 for camera rotation in radians
pmat: typical perspective projection matrix
The algorithm / pseudocode:
// 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0
// 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0
rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))
// 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye and also the non-normalized, "in axis-aligned world" traversal step size "forward into the screen"
rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist
// 4: rotate this too -- (0, 0, zstep) should become a meaningful (xstep, 0, zstep)
rayStep.MultMat(num.NewDmat4RotationY(camRad.Y))
// set up direction vector from still-origin-based-ray-position-off-rotated-view-plane plus rotated-zstep-vector
rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z
// perspective projection
rayDir.Normalize()
rayDir.MultMat(pmat)
// before traversal, the ray starting position has to be transformed from origin-relative to campos-relative
rayPos.Add(camPos)
I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.
It's a lot easier if you picture the system as a pinhole camera rather than anything else. Instead of shooting rays from the surface of a rectangle representing your image, shoot the rays from a point, through the rectangle that will be your image plane, into the scene. All the primary rays should have the same point of origin, only with slightly different directions. The directions are determined using basic trig by which pixel in the image plane you want them to go through.
To make the simplest example, let's imagine your point is at the camera, and your image plane is one unit along the z axis, and two units tall and wide. That way, the pixel at the upper-left corner wants to go from (0,0,0) through (-1, -1, 1). Normalize (-1, -1, 1) to get the direction. (You don't actually need to normalize the direction just to do ray intersection, but if you decide not to, remember that your directions are non-normalized before you try to compute the distance the ray has travelled or anything like that.)
For every other pixel, compute the point on the plane it wants to go through the way you've already been doing, by dividing the size of the plane by the number of pixels, in each direction.
Then, and this is the most important thing, don't try to do a perspective projection. That's necessary for scan-conversion techniques, to map every vertex to a point on the screen, but in ray-tracing, your rays accomplish that just by spreading out from one point into space. The direction from your start point (camera position, the origin in this example), through your image plane, is exactly the direction you need to trace with. If you were to want an orthographic projection instead (and you almost never want this), you'd accomplish this by having the direction be the same for all the rays, and the starting positions vary across the image plane.
If you do that, you'll have a good starting point. Then you can try again to add camera rotation, either by rotating the image plane about the origin before you iterate over it to compute ray directions, or by rotating the ray directions directly. There's nothing wrong with rotating directions directly! When you bear in mind that a direction is just the position your ray goes through if it starts from the origin, it's easy to see that rotating the direction, and rotating the point it goes through, do exactly the same thing.
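A minimal TypeScript sketch of that recipe, with fov, camYaw and the pixel/image sizes as assumed inputs: one fixed ray origin, a per-pixel direction through the image plane, a plain Y rotation for the camera yaw, and no projection matrix anywhere:

type Vec3 = { x: number; y: number; z: number };

function normalize(v: Vec3): Vec3 {
  const len = Math.hypot(v.x, v.y, v.z);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// Rotate a direction around the Y axis (camera yaw).
function rotateY(v: Vec3, a: number): Vec3 {
  const c = Math.cos(a), s = Math.sin(a);
  return { x: c * v.x + s * v.z, y: v.y, z: -s * v.x + c * v.z };
}

// Primary ray for pixel (fx, fy): the origin is always the camera position;
// the direction passes through the pixel's point on an image plane one unit
// ahead, then gets the camera's yaw rotation applied directly.
function primaryRay(
  fx: number, fy: number, width: number, height: number,
  fov: number, camPos: Vec3, camYaw: number,
): { origin: Vec3; dir: Vec3 } {
  const aspect = width / height;
  const halfH = Math.tan(fov / 2);                          // plane half-height at distance 1
  const px = ((fx + 0.5) / width * 2 - 1) * halfH * aspect;
  const py = (1 - (fy + 0.5) / height * 2) * halfH;
  const dir = rotateY(normalize({ x: px, y: py, z: 1 }), camYaw);
  return { origin: camPos, dir };                           // no projection matrix involved
}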