I am working on a Perspective camera. The constructor must be:
PerspectiveCamera::PerspectiveCamera(Vec3f &center, Vec3f &direction, Vec3f &up, float angle)
This constructor is different from most others, as it lacks near and far clipping planes. I know what to do with center, direction, and up -- the standard look-at algorithm.
We can construct the view matrix and translate matrix accordingly:
Thus, the viewing transformation is:
For an orthographic camera (which is working correctly for me), the inverse transformation is used to go from screen space to world space. The camera coordinates go from (-1,-1,0) --> (1,1,0) in screen space.
For perspective transformation, only the field of view is given. The Wikipedia 3D projection article gives a perspective projection matrix using the field of view angle and assuming camera coordinates go from (-1,-1) --> (1,1):
In my code, (ex,ey,ez) are the camera coordinates that go from (-1,-1, ez) --> (1,1, ez). Note that the 1 in (3,3) spot of K isn't in the Wikipedia article -- I put it in to make the matrix invertible. So that may be a problem.
But anyway, for perspective projection, I used this transformation:
K inverse is multiplied with p to map the canonical view volume to a view frustum, and the result of that is multiplied with M inverse to move into world coordinates.
I get the wrong results. The correct output is:
My output looks like this:
Am I using the right algorithm for perspective projection, given my constraint (no near and far plane inputs)?
Just in case somebody else runs into this issue: the method presented in the question is not the proper way to create a viewing frustum. The perspective matrix (K) is for projecting the far plane onto the near plane, and we don't have those planes in this case.
To create a frustum, do the inverse transformation on (x,y,ez) (as opposed to (x,y,0) for orthographic projection). Find the ray direction by subtracting the center of projection from the transformed point. Shoot the ray.
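A minimal sketch of that recipe (not the asker's actual code), assuming a GLM-style vector type in place of Vec3f; the function name generateRay and the screen coordinates sx, sy in [-1,1] are illustrative, and the view-plane distance is derived from the field-of-view angle:

#include <cmath>
#include <glm/glm.hpp>

struct Ray { glm::vec3 origin, dir; };

// sx, sy: screen-space coordinates in [-1, 1]
Ray generateRay(const glm::vec3 &center, const glm::vec3 &direction,
                const glm::vec3 &up, float angle, float sx, float sy)
{
    // Orthonormal camera basis (the same one the look-at construction uses).
    glm::vec3 w = glm::normalize(direction);
    glm::vec3 u = glm::normalize(glm::cross(w, up));   // camera "right"
    glm::vec3 v = glm::cross(u, w);                    // camera "up"

    // Distance from the eye to the view plane so the plane spans [-1,1] for
    // the given field of view -- this is the |ez| from the question.
    float d = 1.0f / std::tan(angle * 0.5f);

    // World-space point on the view plane; its offset from the center of
    // projection is the ray direction.  No near/far planes are needed.
    glm::vec3 onPlane = center + d * w + sx * u + sy * v;
    return Ray{ center, glm::normalize(onPlane - center) };
}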
I have a rather vague understanding of how rasterization is supposed to work.
So I totally understand how vertices make up a 3D image. I have also ventured into model-to-world transformations, and even though I don't understand the math behind them, I use helper libraries to multiply the matrices and have a chart denoting how to apply the different transformations: rotate, scale, translate, etc.
So it's very easy for me to build some 3D model using Blender and apply that logic to build a world matrix for each object.
But I've hit a brick wall trying to envision how the camera matrix is supposed to "look at" a specific cluster of vertices. What exactly happens to an object's world coordinates after the camera matrix is applied to its world matrix? What does a camera matrix look like, and how does the camera's view axis affect its matrix (the camera could be looking along the X, Y, or Z axis)?
I've managed to render a couple of 3D objects with various rendering engines (OpenGL, XNA, etc.), but most of it was due to following some guide on the internet or trying to interpret what someone on YouTube is teaching. I'm still struggling to get an "intuitive" sense of how the matrices are supposed to work for given camera parameters and how the camera is supposed to alter an object's world matrix.
There are 5 steps in going from "world space" (Wx,Wy,Wz) to "screen space" (Sx,Sy): View, Clipping, Projection, Perspective Divide, Viewport. This is described pretty well here but some details are glossed over. I will try to explain the steps conceptually.
Imagine you have some vertices (what we want to render), a camera (with a position and orientation - which direction it is pointing), and a screen (a rectangular grid of WIDTHxHEIGHT pixels).
The Model Matrix I think you already understand: it scales, rotates, and translates each vertex into world coordinates (Wx,Wy,Wz,1.0). The last "1.0" (sometimes called the w component) allows us to represent translation and projection (as well as scaling and rotation) as a single 4x4 matrix.
The View Matrix (aka camera matrix) moves all the vertices to the point of view of the camera. I think of it as working in 2 steps: First it translates the entire scene (all vertices including the camera) such that in the new coordinate system the camera is at the origin. Second it rotates the entire scene such that the camera is looking from the origin in the direction of the -Z axis. There is a good description of this here. (Mathematically the rotation happens first, but I find it easier to visualize if I do the translation first.) At this point each vertex is in View coordinates (Vx,Vy,Vz,1.0). A good way to visualize this is to imagine the entire scene is embedded in ice; grab the block of ice and move it so the camera is at the origin pointing along the -z axis (and all the other objects in the world move along with the ice they are embedded in).
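A minimal GLM sketch of that view matrix (glm::lookAt performs the translate-then-rotate in one call; camPos, target and up are whatever your scene provides):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 makeViewMatrix(const glm::vec3 &camPos,
                         const glm::vec3 &target,
                         const glm::vec3 &up)
{
    // Conceptually: translate the whole scene so the camera sits at the origin,
    // then rotate it so the camera looks down the -Z axis.  glm::lookAt builds
    // exactly that combined matrix.
    return glm::lookAt(camPos, target, up);
}

// A world-space vertex (Wx,Wy,Wz,1.0) becomes view coordinates (Vx,Vy,Vz,1.0):
//   glm::vec4 viewPos = makeViewMatrix(camPos, target, up) * glm::vec4(worldPos, 1.0f);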
Next, the projection matrix encodes what kind of lens (wide angle vs telephoto) the camera has; in other words how much of the world will be visible on the screen. This is described well here but here is how it is calculated:
[ near/width ][ 0 ][ 0 ][ 0 ]
[ 0 ][ near/height ][ 0 ][ 0 ]
[ 0 ][ 0 ][(far+near)/(far-near) ][ 1 ]
[ 0 ][ 0 ][-(2*near*far)/(far-near)][ 0 ]
near = near plane distance (everything closer to the camera than this is clipped).
far = far plane distance (everything farther from the camera than this is clipped).
width = the widest object we can see if it is at the near plane.
height = the tallest object we can see if it is at the near plane.
It results in "clip coordinates" (Cx,Cy,Cz,Cw=Vz). Note that the view-space z coordinate (Vz) ends up in the w coordinate of the clip coordinates (Cw) (more on this below). This matrix stretches the world so that the camera's field of view is now 45 degrees up, down, left, and right. In other words, in this coordinate system, if you look from the origin (camera position) straight along the -z axis (the direction the camera is pointing) you will see what is in the center of the screen, and if you rotate your head up {down, left, right} you will see what will be at the top {bottom, left, right} of the screen. You can visualize this as a pyramid shape where the camera sits at the apex of the pyramid and looks straight down inside it. (This shape is called a "frustum" once you clip the top and bottom of the pyramid off with the near and far planes - see the next paragraph.) The Cz calculation makes vertices at the near plane have Cz=-Cw and vertices at the far plane have Cz=Cw.
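A plain C++ sketch that fills in exactly the matrix written above (the function name makeProjection is illustrative; the layout follows the row-vector convention used in this answer, which is how Vz ends up copied into Cw):

#include <array>

using Mat4 = std::array<std::array<float, 4>, 4>;

Mat4 makeProjection(float nearDist, float farDist, float width, float height)
{
    Mat4 p = {};                                         // all zeros
    p[0][0] = nearDist / width;
    p[1][1] = nearDist / height;
    p[2][2] = (farDist + nearDist) / (farDist - nearDist);
    p[2][3] = 1.0f;                                      // copies Vz into Cw
    p[3][2] = -(2.0f * nearDist * farDist) / (farDist - nearDist);
    return p;
}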
Clipping takes place in clip coordinates (which is why they are called that). Clipping means you take some scissors and clip away anything that is outside that pyramid shape. You also clip everything that is too close to the camera (the "near plane") and everything that is too far away from the camera (the "far plane"). See here for details.
Next comes the perspective divide. Remember that Cw == Vz? This is the distance from the camera to the vertex along the z axis (the direction the camera is pointing). We divide each component by this Cw value to get Normalized Projection Coordinates (NPC) (Nx=Cx/Cw, Ny=Cy/Cw, Nz=Cz/Cw, Nw=Cw/Cw=1.0). All these values (Nx, Ny and Nz) will be between -1 and 1 because we clipped away anything where Cx > Cw or Cx < -Cw or Cy > Cw or Cy < -Cw or Cz > Cw or Cz < -Cw. Again, see here for lots of details on this. The perspective divide is what makes things that are farther away appear smaller. The farther away from the camera something is, the larger its Cw (Vz) is, and the more its X and Y coordinates are reduced when we divide.
The final step is the viewport transform. Nx, Ny and Nz (each ranging from -1 to 1) are converted to pixel coordinates. For example, Nx=-1 is at the left of the screen and Nx=1 is at the right of the screen, so we get Sx = (Nx * WIDTH/2) + (WIDTH/2), or equivalently Sx = (Nx+1) * WIDTH/2. Similarly for Sy. You can think of Sz as the value that will be used in a depth buffer, so it needs to range from 0 for vertices at the near plane (Vz=near) to the maximum value the depth buffer can hold (e.g. 2^24 - 1 = 16777215 for a 24-bit z-buffer) for vertices at the far plane (Vz=far).
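A small sketch of those last two steps together (struct and function names are illustrative; Sz is left in [0,1] so it can be scaled to whatever range your depth buffer uses):

struct Clip   { float x, y, z, w; };   // clip coordinates, w == Vz
struct Screen { float x, y, z; };      // pixel coordinates plus depth

Screen toScreen(const Clip &c, float WIDTH, float HEIGHT)
{
    // Perspective divide: the farther the vertex (larger Cw), the smaller it gets.
    float nx = c.x / c.w;
    float ny = c.y / c.w;
    float nz = c.z / c.w;                 // all in [-1, 1] after clipping

    Screen s;
    s.x = (nx + 1.0f) * 0.5f * WIDTH;     // -1 -> left edge, +1 -> right edge
    s.y = (ny + 1.0f) * 0.5f * HEIGHT;
    s.z = (nz + 1.0f) * 0.5f;             // 0 at the near plane, 1 at the far plane
    return s;
}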
The "camera matrix" as you called it sounds like a combination of two matrices: the view matrix and the projection matrix. It's possible you're only talking about one of these, but it's not clear.
View matrix: The view matrix is the inverse of what the camera's model matrix would be if you drew it in the world. In order to draw different camera angles, we actually move the entire world in the opposite direction - so there is only one camera angle.
Usually in OpenGL, the camera "really" stays at (0,0,0) and looks along the Z axis in the positive direction (towards 0,0,+∞). You can apply a rotation to the projection matrix to get a different direction, but why would you? Do all the rotation in the view matrix and your life is simpler.
So if you want your camera to be at (0,3,0) for example, instead of moving the camera up 3 units, we leave it at (0,0,0) and move the entire world down 3 units. If you want it to rotate 90 degrees, we actually rotate the world 90 degrees in the opposite direction. The world doesn't mind - it's just numbers in a computer - it doesn't get dizzy.
We only do this when rendering. All of the game physics calculations, for example, aren't done in the rotated world. The coordinates of the stuff in the world don't get changed when we rotate the camera - except inside the rendering system. Usually, we tell the GPU the normal world coordinates of the objects, and we get the GPU to move and rotate them for us, using a matrix.
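A tiny GLM sketch of that idea (illustrative, not code from this answer): the camera's own model matrix is inverted to get the view matrix, which for the (0,3,0) example is just a translation of the whole world by (0,-3,0).

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// The camera "drawn in the world": sitting at (0,3,0), no rotation.
glm::mat4 cameraModel = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 3.0f, 0.0f));

// The view matrix is its inverse: the whole world moves down 3 units instead.
glm::mat4 view = glm::inverse(cameraModel);
// 'view' equals glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, -3.0f, 0.0f)).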
Projection matrix: You know the view frustum? It's the cut-off pyramid shape you've probably seen in diagrams before.
Everything inside the cut-off pyramid shape (frustum) is displayed on the screen. You know this.
Except the computer doesn't actually render in a frustum. It renders a cube. The projection matrix transforms the frustum into a cube.
If you're familiar with linear algebra, you may notice that a 3x3 matrix can't turn a frustum into a cube. That's what the 4th coordinate (w) is for. After this calculation, the x, y and z coordinates are all divided by w. By using a projection matrix that makes w depend on z, the coordinates of far-away points get divided by a larger number, so they get pushed towards the middle of the screen - that's how the frustum is able to turn into a cube.
You don't have to have a frustum - that's just what you get with a perspective projection. You can also use an orthographic projection, which turns a box (a rectangular prism) into the cube, by not changing w.
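A hedged GLM sketch of the difference (the numbers are arbitrary; GLM's convention has the camera looking down -Z): the perspective matrix copies the view-space depth into w, the orthographic one leaves w at 1, and the later divide-by-w is what shrinks distant points.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 persp = glm::perspective(glm::radians(45.0f), 16.0f / 9.0f, 0.1f, 100.0f);
glm::mat4 ortho = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, 0.1f, 100.0f);

glm::vec4 v(0.5f, 0.5f, -10.0f, 1.0f);   // a view-space point 10 units in front of the camera
glm::vec4 cp = persp * v;                // cp.w == 10, so x and y shrink after the divide
glm::vec4 co = ortho * v;                // co.w == 1, nothing shrinks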
Unless you want to do a bunch of math yourself, I'd recommend you just use the library functions to generate projection matrices.
If you're multiplying vertices by several matrices in a row it's more efficient to combine them into one matrix, and then multiply the vertices by the combined matrix - hence you will often see MVP, MV and VP matrices used. (M = model matrix - I think it's the same thing you called a world matrix)
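For example (GLM, column-vector convention, so the model matrix is applied first; the identity initializers are placeholders for matrices built as described above):

#include <glm/glm.hpp>

glm::mat4 model(1.0f), view(1.0f), projection(1.0f);       // assumed to be built elsewhere

glm::mat4 mvp  = projection * view * model;                // combine once per object...
glm::vec4 clip = mvp * glm::vec4(1.0f, 2.0f, 3.0f, 1.0f);  // ...then reuse for every vertex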
So, for anyone familiar with Google Maps, when you zoom, it does it around the cursor.
That is to say, the matrix transformation for such a zoom is as simple as:
TST^{-1}*x
where T is the translation matrix representing the point of focus, S is the scale matrix, and x is any point on the plane.
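A minimal GLM sketch of that 2D zoom-about-the-cursor transform, treating the map as the z = 0 plane (the function and parameter names are illustrative):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 zoomAbout(const glm::vec2 &cursor, float zoomFactor)
{
    glm::mat4 T    = glm::translate(glm::mat4(1.0f), glm::vec3(cursor, 0.0f));
    glm::mat4 S    = glm::scale(glm::mat4(1.0f), glm::vec3(zoomFactor));
    glm::mat4 Tinv = glm::translate(glm::mat4(1.0f), glm::vec3(-cursor, 0.0f));
    return T * S * Tinv;   // the cursor point stays fixed; everything else scales around it
}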
Now, I want to produce a similar effect with a spherical (orbiting) camera -- think Sketchfab.
When you zoom in and out, the camera needs to be translated so as to give an effect similar to the 2D zooming in Maps. To be more precise: given a fully composed MVP matrix, there is a family of planes parallel to the camera plane, and among them there exists a unique plane P that also contains the center of the current spherical camera.
Given that plane, there exists a point x that is the unprojection of the current cursor position onto the camera plane.
If the center of the spherical camera is c then the direction from c to x is d = x - c.
And here's where my challenge lies. Zooming is implemented by simply offsetting the camera radially from the center. Given a change in zoom Delta, I need to find the translation vector u, collinear with d, that moves the center of the camera towards x, such that I get a visual effect similar to zooming in Google Maps.
Since I know this is a bit hard to parse I tried to make a diagram:
TL;DR
I want to offset a spherical camera towards the cursor when I zoom; how do I pick my translation vector?
I am learning camera matrix stuff. I already know that I can get the homography of the camera (a 3*3 matrix) by using four points that lie in a plane in object space. I want to know whether we can get the homography with four points not in a plane. If yes, how can I get the matrix? What formulas should I look at?
I have also confused homography with another concept: I only need to know three points if I want to convert points from one coordinate system to another. So why do we need four points to compute a homography?
A homography maps:
1. Points on one plane to points on another plane.
2. Projections of 3D points (not necessarily lying on the same plane) under a pure camera rotation or zoom.
The latter can be easily verified if you look at the rays that connect the points while the sensor plane rotates: green shows the two sensor positions and black the 3D object.
Since a homography is a relation between projections and not between objects in 3D, you don't care what these projections represent. But this can be confusing, I agree. For example, you can point your camera at a 3D scene (that is not flat!), then rotate your camera, and the two resulting pictures of the scene will be related by a homography. This is, by the way, the foundation for image panoramas.
The three point correspondences you mention may be related to a transformation called affine (which happens at large zooms, when perspective effects disappear) or to finding a rigid rotation and translation in 3D space. Both require 3 point correspondences, but the former needs only 2D points while the latter needs 3D points. The latter case has 6 DOF (3 for rotation and 3 for translation) while each correspondence provides 2 DOF, hence 6/2 = 3 correspondences. A homography has 8 DOF, so there should be 8/2 = 4 correspondences.
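To spell the counting out (each 2D point correspondence supplies 2 equations):

    homography:  9 matrix entries - 1 overall scale = 8 DOF  ->  8/2 = 4 correspondences
    2D affine:   6 entries                          = 6 DOF  ->  6/2 = 3 correspondences
    rigid 3D:    3 (rotation) + 3 (translation)     = 6 DOF  ->  6/2 = 3 correspondences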
Below is a little diagram that explains the difference between an affine and a homography transformation when the original square tilts forward. In the affine case the perspective effect is negligible, that is, the far side has the same length as the near one. In the case of a homography, the far side is shorter.
If you only have 4 points - and they're not on the same plane - then computing a homography will not work.
If you have loads of points, and 4 of them do lie on a plane but some don't, there are filters you can use to try to remove the ones not lying on the plane. The filters implemented by OpenCV are called RANSAC and LMedS.
Also, as Hammer says in a comment under your question, the 4th point is there to figure out perspective.
A homography is a 3x3 matrix with 8 independent unknowns, which means it requires 8 equations to solve for them. Each point correspondence provides 2 equations, so in order to calculate a homography we need at least 4 points.
For a homography we assume that Z=0 in the world scene, so the projected scene is treated as 2D. In the well-known ORB-SLAM paper, the authors formulate a model-selection approach that depends on the motion parallax in the scene.
A homography is a relation between two planes, and a homography transform has 8 degrees of freedom; hence you need a minimum of 4 corresponding points.
4 points give you 4 pairs of (x,y), i.e. 8 equations, which is enough to solve for the 8 unknowns. A homography is a homogeneous transform, so the (3,3) value in the homography matrix is conventionally normalized to 1.
So, for your first question (can you calculate a homography with 3 points on the plane and a 4th not on the plane): it's not possible. You need the projection of that point onto the plane, and then you can calculate the homography.
For your 2nd question, about how to calculate the homography matrix, you can look at the implementation of findHomography() in OpenCV.
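For example, a minimal OpenCV call (the coordinate values here are made-up, purely for illustration):

#include <opencv2/calib3d.hpp>
#include <vector>

std::vector<cv::Point2f> src = { {0, 0}, {1, 0}, {1, 1}, {0, 1} };           // points on the plane
std::vector<cv::Point2f> dst = { {10, 12}, {95, 8}, {90, 110}, {5, 105} };   // their images
cv::Mat H = cv::findHomography(src, dst);   // 3x3 matrix; pass cv::RANSAC or cv::LMEDS when you have many points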
I have two points that describe a line. The problem is that I know the coordinates of one for an orthographic matrix (i.e. 150x250x0) and the coordinates of the second for a perspective matrix (0.5x0.5x20.0f). I would like to translate the orthographic coordinates to perspective ones so I can draw the line using a GLSL shader :). How do I accomplish this?
You need to move one of your vertices into the other matrix space. For example, let's move 150x250x0 from orthographic to perspective space. To do this, transform the vertex by the inverted orthographic matrix. I don't know what math library you use; maybe it already has a function for matrix inversion. Otherwise, use the code from this link: http://www.gamedev.net/topic/180189-matrix-inverse/ . After this step your vertex is in world space.
PS: Matrix inversion takes significant time to compute. If you can track the transformation steps (translation, rotation, and scale), the easier way is to invert those steps separately and compose a matrix from them.
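A hedged GLM sketch of the suggested approach (the orthographic matrix and its values below are placeholders; substitute the one you actually render with):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 orthoMatrix = glm::ortho(0.0f, 800.0f, 0.0f, 600.0f, -1.0f, 1.0f);   // placeholder values
glm::vec4 orthoVertex(150.0f, 250.0f, 0.0f, 1.0f);

// Undo the orthographic transform to get the vertex back into world space...
glm::vec4 worldVertex = glm::inverse(orthoMatrix) * orthoVertex;
// ...then send it through the same perspective matrix as the other endpoint in the shader.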
I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU-based for now until I figure out how things work exactly -- for now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible.
Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny.
So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem.
Screenshots: (1) raycasting voxels while... ...(2) the camera remains... ...(3) strictly axis-aligned.
Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations is, in theory, very clear to me. Yet I have only ever achieved a "2.5D rendering" when the camera rotates... fish-eyey, a bit like in Google Street View: even though I have a volumetric world representation, it seems -- no matter what I try -- as if I first create a rendering from the "front view" and then rotate that flat rendering according to the camera rotation. Needless to say, I'm by now aware that rotating rays like that is unnecessary and error-prone.
Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey flat-render-rotated style looks:
camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screen #2 is not visible in this rotation, yet by now "it really should"!
Now of course I'm aware of this: in a simple axis-aligned, no-rotation setup like I had in the beginning, the ray simply traverses the positive z-direction in small steps, diverging to the left or right and top or bottom only depending on pixel position and the projection matrix. As I "rotate the camera to the right or left" -- i.e. I rotate it around the Y-axis -- those very steps should simply be transformed by the proper rotation matrix, right? So for forward traversal the Z-step gets a bit smaller the more the cam rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical divergence, increasing fractions of the x-step need to be "added" to the z-step. Somehow, none of the many matrices I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations, really get this part right.
Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode:
fx and fy: pixel positions x and y
rayPos: vec3 for the ray starting position in world-space (calculated as below)
rayDir: vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
rayStep: a temporary vec3
camPos: vec3 for the camera position in world space
camRad: vec3 for camera rotation in radians
pmat: typical perspective projection matrix
The algorithm / pseudocode:
// 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0
// 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0
rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))
// 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye and also the non-normalized, "in axis-aligned world" traversal step size "forward into the screen"
rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist
// 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep
rayStep.MultMat(num.NewDmat4RotationY(camRad.Y))
// set up direction vector from still-origin-based-ray-position-off-rotated-view-plane plus rotated-zstep-vector
rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z
// perspective projection
rayDir.Normalize()
rayDir.MultMat(pmat)
// before traversal, the ray starting position has to be transformed from origin-relative to campos-relative
rayPos.Add(camPos)
I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.
It's a lot easier if you picture the system as a pinhole camera rather than anything else. Instead of shooting rays from the surface of a rectangle representing your image, shoot the rays from a point, through the rectangle that will be your image plane, into the scene. All the primary rays should have the same point of origin, only with slightly different directions. The directions are determined using basic trig by which pixel in the image plane you want them to go through. To make the simplest example, let's imagine your point is at the camera, and your image plane is one unit along the z axis, and two units tall and wide. That way, the pixel at the upper-left corner wants to go from (0,0,0) through (-1, -1, 1). Normalize (-1, -1, 1) to get the direction. (You don't actually need to normalize the direction just to do ray intersection, but if you decide not to, remember that your directions are non-normalized before you try to compute the distance the ray has travelled or anything like that.) For every other pixel, compute the point on the plane it wants to go through the way you've already been doing, by dividing the size of the plane by the number of pixels, in each direction.
Then, and this is the most important thing, don't try to do a perspective projection. That's necessary for scan-conversion techniques, to map every vertex to a point on the screen, but in ray-tracing, your rays accomplish that just by spreading out from one point into space. The direction from your start point (camera position, the origin in this example), through your image plane, is exactly the direction you need to trace with. If you were to want an orthographic projection instead (and you almost never want this), you'd accomplish this by having the direction be the same for all the rays, and the starting positions vary across the image plane.
If you do that, you'll have a good starting point. Then you can try again to add camera rotation, either by rotating the image plane about the origin before you iterate over it to compute ray directions, or by rotating the ray directions directly. There's nothing wrong with rotating directions directly! When you bear in mind that a direction is just the position your ray goes through if it starts from the origin, it's easy to see that rotating the direction, and rotating the point it goes through, do exactly the same thing.
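A minimal sketch of that ray setup (GLM-based, names illustrative; the image plane sits one unit in front of the camera and spans two units, as in the example above, and the yaw rotation is applied to the direction):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct Ray { glm::vec3 origin, dir; };

Ray primaryRay(int px, int py, int width, int height,
               const glm::vec3 &camPos, float yawRadians)
{
    // Point on an image plane one unit along +Z, two units wide and tall, so the
    // upper-left pixel goes through roughly (-1, -1, 1) as in the example above.
    float x = (px + 0.5f) / width  * 2.0f - 1.0f;
    float y = (py + 0.5f) / height * 2.0f - 1.0f;
    glm::vec3 through(x, y, 1.0f);

    // Rotate the direction about the Y axis for the camera yaw -- rotating the
    // direction is the same as rotating the point it passes through.
    glm::mat4 rot = glm::rotate(glm::mat4(1.0f), yawRadians, glm::vec3(0.0f, 1.0f, 0.0f));
    glm::vec3 dir = glm::normalize(glm::vec3(rot * glm::vec4(through, 0.0f)));

    return Ray{ camPos, dir };   // note: no perspective projection matrix anywhere
}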