I have a web application in which I am trying to show a plane of map image tiles in 3D space.
I want the plane to stay horizontal however the device rotates; the final effect would be similar to this marine compass demo.
I can already capture device orientation through the W3C Device Orientation API on mobile devices, and I have successfully rendered the map image tiles.
My problem is that I lack the essential math knowledge of how to rotate the camera correctly according to the device orientation.
I am using the Three.js library. I tried setting the rotation of the camera object directly from alpha/beta/gamma (converted to radians), but it does not work, since the camera seems to always rotate around the world axes used by OpenGL/WebGL rather than around its local axes.
I also came across the idea of placing a point 100 units in front of the camera and rotating that point around the camera position by the angles supplied by the Device Orientation API, but I don't know how to implement this either.
Can anyone help me with what I want to achieve?
EDIT:
MISTAKE CORRECTION:
For anyone interested in implementing something similar: I found out that Three.js objects use local-space axes by default, not world-space axes; I was wrong. The official documentation states that by setting "object.matrixAutoUpdate = false", modifying "object.matrixWorld", and then calling "object.updateWorldMatrix()" you can manually move/rotate/scale the object along the world axes. However, this does not work when the object has a parent; the local-axis matrix is always used in that case.
According to the W3C Device Orientation Event Specification, the angles alpha, beta and gamma form a set of intrinsic Tait-Bryan angles of type Z-X'-Y''.
The Three.js camera also rotates according to intrinsic angles. However the default order in which the rotations are applied is:
camera.rotation.order = 'XYZ'.
What you need to do, then, is to set:
camera.rotation.order = 'ZXY'; // or whatever order is appropriate for your device
You then set the camera rotation like so:
camera.rotation.x = beta * Math.PI / 180;
camera.rotation.y = gamma * Math.PI / 180;
camera.rotation.z = alpha * Math.PI / 180;
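For reference, here is a minimal sketch of wiring this up to the deviceorientation event (assuming camera is your THREE.PerspectiveCamera; the event property names come from the Device Orientation spec):
camera.rotation.order = 'ZXY'; // or whatever order is appropriate for your device

window.addEventListener( 'deviceorientation', function ( event ) {
    var alpha = event.alpha || 0; // rotation about the z axis, in degrees
    var beta  = event.beta  || 0; // rotation about the x' axis, in degrees
    var gamma = event.gamma || 0; // rotation about the y'' axis, in degrees

    camera.rotation.x = beta  * Math.PI / 180;
    camera.rotation.y = gamma * Math.PI / 180;
    camera.rotation.z = alpha * Math.PI / 180;
} );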
Disclaimer: I do not have your device type. This is an educated guess based on my knowledge of three.js.
EDIT: Updated for three.js r.65
Related
I'm trying to do something similar to this example, except that instead of having the snowflakes flutter about in all directions, I'm trying to animate these sprites in only one direction, like having the snowflakes fall to the ground.
The example above is able to load multiple sprites into one geometry because it varies the rotations of the points objects:
particles.rotation.x = Math.random() * 6;
particles.rotation.y = Math.random() * 6;
particles.rotation.z = Math.random() * 6;
However, this won't work if you're animating all the points in one direction. In this case, would I have to create a new geometry for each sprite, or is there a more efficient way to do this using just one geometry?
There are several options. Instead of rotating randomly, you could:
Decrease the y position on each frame with particles.position.y -= 0.01. When it crosses a certain threshold (for example, y <= -100), move them back up to the top (y = 100). You'll have to stagger a few Sprite objects so you don't notice the jump; see the sketch after this list.
Rotate along the x-axis, so the spinning motion makes them go down when in front of the camera.
Since the snowflakes will spin up on the opposite side, you could use some fog to hide the far side, and give it a more wintry feel.
Animate via custom shaders, although this is much more complex if you don't know GLSL shader code.
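Here is a minimal sketch of the first option, assuming an array of THREE.Points objects (named particleGroups here purely for illustration) that have already been staggered at different heights and added to the scene:
// sketch of option 1: move each points object down a little each frame and wrap it around
function animateSnow() {
    for ( var i = 0; i < particleGroups.length; i ++ ) {
        var particles = particleGroups[ i ];
        particles.position.y -= 0.01;          // fall slightly on each frame
        if ( particles.position.y <= -100 ) {  // crossed the threshold...
            particles.position.y = 100;        // ...so jump back up to the top
        }
    }
}
// call animateSnow() from your render loop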
My Three.js app has a static perspective camera looking on (0,0,0). How can I find the x/y coordinates in the y=0 plane of the corners of the camera's viewfield? The app covers the entire web browser, so this would correspond to the corners of the web browser. I want to render 3D models between those corners.
Just having the mentioned corner points is not sufficient to determine whether the user can see an object or not. The camera also has near/far planes and a perspective projection, which you should take into account.
I suggest you use a different workflow and create an instance of THREE.Frustum based on the camera's projection screen matrix. The code looks like so:
const frustum = new THREE.Frustum();
const projScreenMatrix = new THREE.Matrix4();
projScreenMatrix.multiplyMatrices( camera.projectionMatrix, camera.matrixWorldInverse );
frustum.setFromProjectionMatrix( projScreenMatrix );
You can then use methods like Frustum.intersectsObject() or Frustum.intersectsSprite() to determine whether 3D objects are in the view frustum or not.
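For example (mesh here is just an assumed object in your scene):
// assuming `mesh` is some THREE.Mesh that has been added to the scene
if ( frustum.intersectsObject( mesh ) ) {
    // the object's bounding sphere is at least partly inside the view frustum
}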
This is actually the way WebGLRenderer performs view frustum culling.
I want to fit my object to my output window. There are some answers that describe how to do this by moving the camera or changing the fov but for my use case, I want to do it by moving the object and leaving the camera at the origin.
I think the distance from the camera to the object is:
var dist = size / 2 / Math.tan(Math.PI * camera.fov / 360);
I think this is the distance from the camera to the closest point on the object, so I mix in the radius of the sphere (in this example) to account for that.
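In other words, I am attempting something like the following sketch (sphereRadius stands for whatever the bounding-sphere radius of my object is):
// sketch of the approach: keep the camera at the origin and push the object away from it
var dist = size / 2 / Math.tan( Math.PI * camera.fov / 360 );
object.position.z = -( dist + sphereRadius ); // camera stays at the origin, looking down -z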
I have tried to adapt the code in other answers but it is not working. Can anyone spot my mistake?
https://jsfiddle.net/m1rLzm12/
I'm working on a simple Three.js demo that uses OrbitControls.js.
I'd like to change the behavior of panning in OrbitControls. Currently, when you pan the camera, it moves the camera in a plane that is perpendicular to the viewing direction. I'd like to change it so that the camera stays a constant distance from the ground plane and moves parallel to it. Google Earth uses a similar control setup.
Edit: I should have mentioned this detail in the first place, but I'd also like the point where you click and start dragging to remain directly under the cursor throughout the entire drag. There needs to be that solid connection between the mouse movement and what the user expects to happen on the screen. Otherwise, it feels as though I'm 'slipping' when I try to move around the scene.
Can someone give me a high-level explanation of how this might be done (with or without OrbitControls.js)?
EDIT: OrbitControls now supports panning parallel to the "ground plane", and it is the default.
To pan parallel to screen-space (the legacy behavior), set:
controls.screenSpacePanning = true;
Also available is MapControls, which has an API similar to that of Google Earth.
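A minimal sketch of using it (assuming the MapControls example file from the three.js repository is included alongside OrbitControls):
// MapControls pans parallel to the ground plane and uses Google-Earth-like mouse buttons
var controls = new THREE.MapControls( camera, renderer.domElement );
controls.screenSpacePanning = false; // pan parallel to the ground plane (the MapControls default)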
three.js r.94
Some time ago I was working on exactly this issue, i.e. adaptation of OrbitControls.js to map navigation.
Here's the code of MapControls.js.
Here's the demo of the controls.
I figured it out. Here's the overview:
Store the mousedown event somewhere.
When the mouse moves, get the new mouse position from the mousemove event.
For each of those points, find the points on the plane where those clicks are located (You'll need to put the points into camera space, transform them into world space, then fire a ray from the camera through each point to find their intersections with the plane. This page explains the ray-plane intersection test).
Subtract the world-space start intersection point from the world-space end intersection point to get the offset.
Subtract that offset from the camera's target point and you're done!
In the case of OrbitControl.js, the camera always looks at the target point, and its position is relative to that point. So when you change the target, the camera moves with it. Since the target always lies on the plane, the camera moves parallel to that plane (as long as you're panning).
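Here is a minimal sketch of the intersection and offset steps above using THREE.Raycaster (controls.target, a ground plane at y = 0, and mouse positions already converted to normalized device coordinates are assumptions about your setup):
var raycaster = new THREE.Raycaster();
var groundPlane = new THREE.Plane( new THREE.Vector3( 0, 1, 0 ), 0 ); // the y = 0 plane

function intersectGround( ndc ) {
    // fire a ray from the camera through the given screen point and intersect the plane
    raycaster.setFromCamera( ndc, camera );
    var hit = new THREE.Vector3();
    return raycaster.ray.intersectPlane( groundPlane, hit ) ? hit : null;
}

function panBetween( startNdc, endNdc ) {
    var start = intersectGround( startNdc );
    var end = intersectGround( endNdc );
    if ( !start || !end ) return;
    var offset = new THREE.Vector3().subVectors( end, start );
    controls.target.sub( offset );  // move the target by the offset...
    camera.position.sub( offset );  // ...and the camera with it, parallel to the plane
}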
You should set your camera 'up' to the z axis:
camera.up.set(0,0,1)
Then, the main problem with OrbitControls is its panUp() function; it needs to be fixed.
My pull request : https://github.com/mrdoob/three.js/pull/12727
The y axis there is relative to the camera's axes, but it should be relative to a fixed plane in the world. To define the expected y axis, rotate the camera's x axis by 90° around the world z axis.
v.setFromMatrixColumn( objectMatrix, 0 ); // get X column of objectMatrix
v.applyAxisAngle( new THREE.Vector3( 0, 0, 1 ), Math.PI / 2 );
v.multiplyScalar( distance );
panOffset.add( v );
Enjoy!
I am writing a particle engine for iOS using MonoTouch and OpenTK. My approach is to project the coordinate of each particle, and then write a correctly scaled textured rectangle at this screen location.
It works fine, but I have trouble calculating the correct depth value so that the sprite will correctly overdraw and be overdrawn by 3D objects in the scene.
This is the code I am using today:
//d=distance to projection plane
float d=(float)(1.0/(Math.Tan(MathHelper.DegreesToRadians(fovy/2f))));
Vector3 screenPos; Vector3.Transform(ref objPos, ref viewMatrix, out screenPos);
float depth=1-d/-screenPos.Z;
Then I draw a triangle strip at the screen coordinate, using the depth value calculated above as the z coordinate.
The results are almost correct, but not quite. I guess I need to take the near and far clipping planes into account somehow (near is 1 and far is 10000 in my case), but I am not sure how. I tried various ways and algorithms without getting accurate results.
I'd appreciate some help on this one.
What you really want to do is take your source position and pass it through the modelview and projection transforms (or whatever you've set up instead, if you're not using the fixed pipeline). Supposing you've used one of the standard calls to set up the stack, such as glFrustum, and otherwise left things at identity, you can get the relevant formula directly from the man page. Reading directly from that, you'd transform as:
z_clip = -( (far + near) / (far - near) ) * z_eye - ( (2 * far * near) / (far - near) )
w_clip = -z_eye
Then, finally:
z_device = z_clip / w_clip;
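As a worked sketch of those steps (in JavaScript for brevity; zEye is the view-space z value, which is negative for points in front of the camera):
// sketch: view-space depth to normalized device depth, matching the glFrustum man-page formula
function depthForZEye( zEye, near, far ) {
    var zClip = -( ( far + near ) / ( far - near ) ) * zEye - ( 2 * far * near ) / ( far - near );
    var wClip = -zEye;
    return zClip / wClip; // normalized device depth in [-1, 1]
}
If you need a value for the depth buffer rather than NDC, map [-1, 1] to [0, 1] with ( d + 1 ) / 2 (assuming the default depth range).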
EDIT: as you're working in ES 2.0, you can actually avoid the issue entirely. Supply your geometry for rendering as GL_POINTS and perform a normal transform in your vertex shader but set gl_PointSize to be the size in pixels that you want that point to be.
In your fragment shader you can then read gl_PointCoord to get a texture coordinate for each fragment that's part of your point, allowing you to draw a point sprite if you don't want just a single colour.