Rotate camera along "eye" direction in rgl

In rgl, you can set the camera direction with rgl.viewpoint. It accepts theta and phi, polar coordinates that specify the position of the camera looking at the origin. However, there is one more degree of freedom: the angle of rotation of the camera about the "eye" vector. That is, one can imagine two vectors associated with the camera, an "eye" vector and an "up" vector; theta and phi let me adjust the "eye" vector, but I then want to adjust the "up" vector as well. Is it possible to do this?
I guess it might be possible with the userMatrix parameter ("4x4 matrix specifying user point of view"), but I found no information on how to use it.

The ?par3d help topic documents the rendering process in the "Rendering" section. It's often tricky to accomplish what you're asking for, but in this case it's not too hard:
par3d(userMatrix = rotationMatrix(20*pi/180, 0, 0, 1) %*% par3d("userMatrix"))
will rotate by 20 degrees around the user's z-axis, i.e. line of sight.
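For readers coming from outside R, here is a minimal language-neutral sketch (TypeScript) of the matrix operation the snippet above performs. The row-major 4x4 layout is purely for illustration; rgl's own storage convention may differ.

// Rotation about the viewer's z-axis (line of sight), premultiplied onto
// the existing user matrix, as in the R snippet above.
type Mat4 = number[][]; // 4x4, row-major, illustrative only

function rotationZ(angleRad: number): Mat4 {
  const c = Math.cos(angleRad);
  const s = Math.sin(angleRad);
  return [
    [c, -s, 0, 0],
    [s,  c, 0, 0],
    [0,  0, 1, 0],
    [0,  0, 0, 1],
  ];
}

function matMul(a: Mat4, b: Mat4): Mat4 {
  return a.map((_, i) =>
    b[0].map((_, j) => a[i].reduce((sum, _, k) => sum + a[i][k] * b[k][j], 0)),
  );
}

// Equivalent of: par3d(userMatrix = rotationMatrix(20*pi/180, 0,0,1) %*% par3d("userMatrix"))
// const rolled = matMul(rotationZ(20 * Math.PI / 180), currentUserMatrix);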

Related

Webots camera default parameters like pixel size and focus

I am using two cameras without lenses or any other settings in Webots to measure the position of an object. To apply the localization, I need to know the focal length, which is the distance from the camera center to the center of the imaging plane, namely f. I see the focus parameter in the Camera node, but when I leave it NULL (the default), the imaging is still normal, so I take it that this parameter is unrelated to f. In addition, I need to know the width and height of a pixel in the image, namely dx and dy respectively, but I have no idea how to get this information.
This is the calibration model I used, where c denotes the camera and w the world coordinate system. I need to calculate x_w, y_w, z_w from u, v. For an ideal camera, gamma is 0 and u_0, v_0 are just half of the resolution, so my problems are with f_x and f_y.
The first important thing to know is that in Webots pixels are square, therefore dx and dy are equal.
Then, in the Camera node you will find a 'fieldOfView' field, which gives you the horizontal field of view; using the resolution of the camera you can then compute the vertical field of view too:
2 * atan(tan(fieldOfView * 0.5) / (resolutionX / resolutionY))
Finally, you can also get the near projection plane from the 'near' field of the Camera node.
Note also that Webots cameras are regular OpenGL cameras; you can therefore find more information about the OpenGL projection matrix here, for example: http://www.songho.ca/opengl/gl_projectionmatrix.html
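Putting the pieces together, here is a minimal sketch (TypeScript, plain pinhole-camera math, no Webots API) that computes the vertical field of view with the formula above and derives the focal lengths in pixels, which are the f_x and f_y the question asks about:

// Derive vertical FOV and pinhole focal lengths (in pixels) from a
// horizontal field of view and image resolution, assuming square pixels.
function cameraIntrinsics(fieldOfView: number, resolutionX: number, resolutionY: number) {
  // Vertical FOV, exactly the formula quoted above.
  const verticalFov =
    2 * Math.atan(Math.tan(fieldOfView * 0.5) / (resolutionX / resolutionY));
  // Pinhole model: half the image width subtends tan(hFOV/2) at unit depth.
  const fx = (resolutionX / 2) / Math.tan(fieldOfView * 0.5);
  const fy = (resolutionY / 2) / Math.tan(verticalFov * 0.5);
  return { verticalFov, fx, fy }; // fx === fy when pixels are square
}

// Example: a 640x480 camera with a 45-degree horizontal field of view.
console.log(cameraIntrinsics(Math.PI / 4, 640, 480));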

How can I handle a camera direction parallel to the y-axis in my raytracer

I'm working on my raytracer and it seems I can't manage to handle the case where the direction vector of my camera is parallel to the vector (0,1,0).
I think it is linked to the way I compute the camera's up and right vectors, but I can't manage to find a workaround.
Here is how I do it:
cam_up = vector_cross(cam_dir, {0, 1, 0});
cam_right = vector_cross(cam_up, cam_dir);
Can somebody enlighten me?
You have the correct formula for calculating an orthogonal axis from a single cameraOut vector. However, as has been stated, this formula does not account for camera roll, which could be any direction in the plane perpendicular to the camera direction. This becomes apparent when moving the camera across the pole (y-axis): there will be undesirable behavior (yes, it will be correctly aimed, but the roll will no doubt not be what you want).
For more information, look into gimbal lock.
The roll itself is not really incorrect. However, for the camera transition to be smooth and appear correct (rather than suddenly flipping or spinning as its direction approaches (0,1,0)), you need to correct any roll incurred. This is a rotation about the cameraOut axis, and ideally it should be relative to the previous cameraAlong. In other words, to maintain the correct (or perceived correct) roll, you need to consider the camera pose (position and orientation) from the previous frame and ensure the roll is mitigated. Of course, if the camera doesn't move (i.e. you're rendering a frame with a static camera position), there is no previous camera state, so the roll cannot be calculated and must instead be explicitly defined as part of the scene definition.
Personally I store an entire orthogonal axis system for a camera, so the orientation and roll are always clearly defined. That is only for completeness; to be honest you don't need to store the entire axis system, since two vectors, cameraOut and cameraAlong (the third being cameraUp), are enough. cameraAlong depends on the handedness of your coordinate system: for an initial camera at, say, position (0,0,0), in a left-handed coordinate system cameraAlong points to the viewer's right, while in a right-handed system it points the other way. cameraUp and cameraOut are the same in both coordinate systems. A sketch of both ideas follows below.
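To make the two points above concrete, here is a minimal sketch (TypeScript; all names are illustrative, a right-handed coordinate system is assumed, and camDir is assumed normalized) that builds an orthonormal camera basis with a fallback reference axis near the pole, and carries the previous frame's up vector forward to keep the roll continuous:

type Vec3 = [number, number, number];

const cross = (a: Vec3, b: Vec3): Vec3 => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];
const dot = (a: Vec3, b: Vec3): number => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const norm = (a: Vec3): Vec3 => {
  const len = Math.hypot(a[0], a[1], a[2]);
  return [a[0] / len, a[1] / len, a[2] / len];
};

// Basis from a direction alone: the roll is arbitrary, and near the pole
// the reference axis must be swapped so the cross product doesn't vanish.
function cameraBasis(camDir: Vec3): { right: Vec3; up: Vec3 } {
  const worldUp: Vec3 = Math.abs(camDir[1]) > 0.999 ? [0, 0, 1] : [0, 1, 0];
  const right = norm(cross(camDir, worldUp));
  const up = norm(cross(right, camDir));
  return { right, up };
}

// Roll continuity: project the previous frame's up vector onto the plane
// perpendicular to the new direction, giving the closest valid up vector.
function continuousUp(camDir: Vec3, prevUp: Vec3): Vec3 {
  const d = dot(prevUp, camDir);
  return norm([prevUp[0] - d * camDir[0], prevUp[1] - d * camDir[1], prevUp[2] - d * camDir[2]]);
}

Note that the fallback in cameraBasis only avoids the degenerate cross product; it is continuousUp, fed with the previous frame's pose, that actually mitigates the roll.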
Hope this helps.
P.S. This isn't ray-tracing specific; the same principles apply to OpenGL/DirectX or any other 3D representation.

How to copy the rotation value of a sprite to a plane?

I was wondering if there is a way to copy the rotation information of a sprite onto a plane. It seems that for a sprite the rotation is set within the material, and it is just one value. I tried copying the matrix of the sprite into that of the plane, but had no luck.
Sprite rotation is handled in WebGL, as you can read in SpritePlugin.js. So if you wish to copy the rotation of an existing sprite in your scene, the easiest way may be to use a quaternion. Quaternions are easier to work with here than Euler (x/y/z-angle) rotations, since they are defined by a rotation axis and an angle value. In your context, you would update a quaternion at each camera movement: set it with quaternion.setFromAxisAngle(axis, angle), then apply it to the plane with plane.setRotationFromQuaternion(quaternion).
Math side: the axis is the cross product of the vectors lastCameraPosition - sprite.position and actualCameraPosition - sprite.position; the angle is the angle between those two vectors, obtained as the acos of their dot product divided by the product of their lengths.
For more about this math, the functions getMouseProjectionOnBall and rotateCamera in TrackBallControls are of great interest.
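Assembled into code, the recipe might look like the following sketch (TypeScript with three.js; matchSpriteRotation and lastCameraPosition are illustrative names, and the per-frame increment is accumulated with premultiply rather than overwriting the plane's rotation):

import * as THREE from 'three';

// Apply the per-frame rotation described above to a plane so it tracks the
// camera the way a sprite does. The caller is assumed to keep
// lastCameraPosition from the previous frame.
function matchSpriteRotation(
  plane: THREE.Object3D,
  sprite: THREE.Sprite,
  lastCameraPosition: THREE.Vector3,
  camera: THREE.Camera,
): void {
  const from = new THREE.Vector3().subVectors(lastCameraPosition, sprite.position);
  const to = new THREE.Vector3().subVectors(camera.position, sprite.position);
  // Axis: cross product of the two camera-to-sprite vectors.
  const axis = new THREE.Vector3().crossVectors(from, to);
  if (axis.lengthSq() === 0) return; // camera did not move around the sprite
  axis.normalize();
  // Angle between the vectors: acos of the dot product over the product of
  // their lengths, which is exactly what Vector3.angleTo computes.
  const angle = from.angleTo(to);
  const q = new THREE.Quaternion().setFromAxisAngle(axis, angle);
  plane.quaternion.premultiply(q); // accumulate the increment in world space
}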
Hope it helped!

Can't center a 3D object on screen

Currently, I'm taking each corner of my object's bounding box and converting it to Normalized Device Coordinates (NDC) and I keep track of the maximum and minimum NDC. I then calculate the middle of the NDC, find it in the world and have my camera look at it.
<Determine max and minimum NDCs>
centerX = (maxX + minX) / 2;
centerY = (maxY + minY) / 2;
point.set(centerX, centerY, 0);
projector.unprojectVector(point, camera);
direction = point.sub(camera.position).normalize();
point = camera.position.clone().add(direction.multiplyScalar(distance));
camera.lookAt(point);
camera.updateMatrixWorld();
This is an approximate method, correct? I have seen it suggested in a few places. I ask because every time I center my object, the min and max NDCs should be equal (ignoring the negative sign) when they are calculated again (before any other change is made), but they are not. I get close but not equal numbers, and as I step closer and closer the 'error' between the numbers grows bigger and bigger. E.g., the errors for the first few centers are: 0.0022566539084770687, 0.00541687811360958, 0.011035676399427596, 0.025670088917273515, 0.06396864345885889, and so on.
Is there a step I'm missing that would cause this?
I'm using this code as part of a while loop to maximize and center the object on screen. (I'm programming it so that the user can enter a heading and an elevation, and the camera will be positioned so that it's viewing the object at that heading and elevation. After a few weeks I've determined that, for now, it's easier to do it this way.)
However, this seems to start falling apart the closer I move the camera to my object. For example, after a few iterations my max X NDC is 0.9989318709122867 and my min X NDC is -0.9552042384799428. When I look at the calculated point, though, I look too far right, and on my next iteration my max X NDC is 0.9420058636660581 and my min X NDC is -1.0128126740876888.
Your approach to this problem is incorrect. Rather than thinking about this in terms of screen coordinates, think about it in terms of the scene.
You need to work out how much the camera needs to move so that a ray from it hits the centre of the object. Imagine you are standing in a field, and opposite you are two people, Alex and Burt; Burt is standing 2 meters to the right of Alex. You are currently looking directly at Alex but want to look at Burt without turning. Since you know the distance and direction between them (2 meters, to the right), you merely need to move that same distance in that same direction.
In a mathematical context you need to do the following:
Get the centre of the object you are focusing on in 3D space, then construct a plane through that point that is parallel to your camera's image plane, i.e. perpendicular to the direction the camera is facing.
Next, raycast from your camera to the plane, in the direction the camera is facing. The difference between the centre point of the object and the point where the ray hits the plane is the amount you need to move the camera. This works irrespective of the direction or position of the camera and the object.
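Because the plane's normal is the view direction, the raycast reduces to a dot product. Here is a sketch of the whole procedure in three.js (centerCameraOnPoint is an illustrative name, not from the question):

import * as THREE from 'three';

// Shift the camera sideways so its existing view ray passes through
// `center`, without changing the camera's orientation.
function centerCameraOnPoint(camera: THREE.Camera, center: THREE.Vector3): void {
  // World-space direction the camera is facing.
  const forward = new THREE.Vector3();
  camera.getWorldDirection(forward);
  // Distance along the view direction to the plane through `center` whose
  // normal is the view direction (the "raycast" is just this dot product).
  const t = new THREE.Vector3().subVectors(center, camera.position).dot(forward);
  // Point where the current view ray pierces that plane.
  const hit = camera.position.clone().addScaledVector(forward, t);
  // Move by the in-plane difference; the view ray now passes through `center`.
  camera.position.add(new THREE.Vector3().subVectors(center, hit));
  camera.updateMatrixWorld();
}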
You are running into a chicken-and-egg problem. Every time you change the camera attributes you effectively change where your object is projected in NDC space, so even though you think you are getting close, you will never get there.
Look at the problem from a different angle. Place your camera somewhere and try to make it as canonical as possible (i.e. give it an aspect ratio of 1), and place your object on the camera's z-axis. Is this not possible?

3D space: following the direction that an object is pointing towards, using the mouse pointer

Given the 3D vector of the direction the camera is facing and the orientation/direction vector of an object in 3D space, how can I calculate the 2D slope that the mouse pointer must follow on the screen in order to visually move along the direction of that object?
Basically I'd like to be able to click on an arrow and make it move back and forth by dragging it, but only if the mouse pointer drags (roughly) along the length of the arrow, i.e. in the direction that it's pointing to.
Thank you.
I'm not sure I 100% understand your question. Would you mind posting a diagram?
You might find these of interest: I have answered previous questions on calculating a local X/Y/Z axis given a camera direction (look-at) vector, and on translating an object in a plane parallel to the camera.
Both of these examples use the vector dot product and vector cross product to compute the required vectors. In your case, the dot product can also be used to obtain the angle between two vectors once you have found them.
It depends to an extent on the transformation you are using to convert your 3D real-world coordinates to 2D screen coordinates, e.g. perspective, isometric, etc. You will typically have a forward (3D -> 2D) and a backward (2D -> 3D) transformation in play, where the backward transformation loses information (going forward, each 3D point maps to a unique 2D point, but going back from that 2D point may not yield the same 3D point). You can often project the mouse point onto the object to recover the missing dimension.
For mouse dragging, you typically have the user specify an operation (translation in the plane of projection, zooming in or out, or rotating about an anchor point). Your input is the mouse coordinate at the start and end of the drag, which you transform into your 3D coordinate system to get two 3D coordinates; these give you dx, dy, dz for dragging/translation, etc.
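As one concrete way to do this (a sketch in three.js; dragAlongAxis and its parameter names are illustrative), project the arrow's direction into NDC and measure the drag against it with a 2D dot product:

import * as THREE from 'three';

// Project an object's 3D axis into screen space, then measure how far a
// mouse drag moves along that on-screen direction.
function dragAlongAxis(
  object: THREE.Object3D,
  axisLocal: THREE.Vector3, // e.g. the arrow's local +Y
  camera: THREE.Camera,
  dragStart: THREE.Vector2, // mouse positions in NDC, i.e. -1..1
  dragEnd: THREE.Vector2,
): number {
  // Project two points along the arrow into NDC to get its 2D direction.
  const p0 = object.position.clone().project(camera);
  const p1 = object.position
    .clone()
    .add(axisLocal.clone().applyQuaternion(object.quaternion))
    .project(camera);
  const screenDir = new THREE.Vector2(p1.x - p0.x, p1.y - p0.y);
  if (screenDir.lengthSq() === 0) return 0; // axis points straight at the camera
  screenDir.normalize();
  // Signed drag distance along the arrow's on-screen direction; near zero
  // means the drag is mostly perpendicular to the arrow.
  const drag = new THREE.Vector2().subVectors(dragEnd, dragStart);
  return drag.dot(screenDir);
}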
