How to prevent culling or simply redraw on iTowns globe rotation outside of mouse, touch or key events?

I'm attempting to rotate the "globe" in iTowns in real time by adding a frame requester (the iTowns equivalent of a three.js requestAnimationFrame loop).
I'm using iTowns to build a real-time 3D earth map simulation. As part of this I have an interval firing every second and playhead controls to manipulate time. I'm rotating the earth representation around the z-axis, because in iTowns that is the polar axis. I need to check the time every frame: the earth's rotation in radians/sec is easy to calculate, so as I manipulate the playhead to speed up time I get a fluidly rotating earth. I'm also using the default Ortho layer provided by iTowns to add a geographic layer on top of the default blue marble.
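For reference, a minimal sketch of that rotation calculation (getSceneRotation here is a hypothetical stand-in for the this.globals.getSceneRotation() accessor used below, taking the playhead's simulated time in seconds):
// Earth completes one full rotation per sidereal day (~86164 s).
const DEGREES_PER_SECOND = 360 / 86164;
function getSceneRotation(simulatedTimeSeconds) {
  // The angle grows with simulated time, so speeding up the playhead
  // speeds up the rotation accordingly. Returned in degrees to match
  // the ConversionUtils.toRadians() call below.
  return (simulatedTimeSeconds * DEGREES_PER_SECOND) % 360;
}
The rotation is then applied in a frame requester: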
// this.view is an iTowns/src/Core/Prefab/GlobeView
const globeLayer = this.view.getLayerById("globe");
const globe = globeLayer.object3d;
this.view.addFrameRequester(MAIN_LOOP_EVENTS.UPDATE_START, () => {
  globe.rotation.z = ConversionUtils.toRadians(this.globals.getSceneRotation());
});
The globe rotates; however, I believe iTowns is performing some sort of culling of the map tiles/globe. At some point during the rotation, a portion of the globe simply stops rendering. If this were related to the Ortho layer, I believe the blue marble would still show. We have another version of the application without iTowns in which we can rotate a three.js SphereGeometry using the same mechanics without any culling, so I am 99% sure it is not an issue with three.js, unless iTowns is doing something I don't see.
How do I prevent the culling, which I see is documented as an optimization? Better yet, how can I tell it to redraw properly?
I've tried the following but to no apparent effect:
this.view.notifyChange(globeLayer, true);
this.view.notifyChange(undefined, true);
this.view.notifyChange(this.view.controls.camera, true);
globe.updateMatrixWorld(true);
There is "culling" logic in:
- iTowns/src/Core/Prefab/GlobeLayer.js
- iTowns/src/Layer/TiledGeometryLayer.js
This is invoked from iTowns/src/Core/MainLoop.js, bubbling up to the scheduleViewUpdate method, which is invoked by notifyChange on iTowns/src/Core/View.js.
Further digging leads me to believe that iTowns/src/Controls/GlobeControls.js, specifically its cameraTarget attribute (which I can't manipulate), is either the answer or closely related to it.
I am hoping I am missing something and there is a way to code this up. Alternatively, I realize the answer may be to fork iTowns or submit a PR.
It is worth noting that I do not want to change the camera position, just rotate the globe, because there are other objects moving in the scene as well, e.g. satellites in orbit.

Related

Cannon.js - How to prevent objects clipping 'floor' on update

I'm using Cannon.js with Three.js.
I've created a scene which consists of 1 heightfield and 5 balls. I want the balls to roll around the heightfield, using the cannon.js physics.
On mouse move, I rotate the heightfield along the y-axis to make the spheres roll back and forth.
I have an update loop which copies each sphere's position and quaternion from cannon.js and applies them to the corresponding visual sphere in three.js.
The heightfield is also updated at the same time as the three.js visual floor. Both of these run in a for loop, in requestAnimationFrame.
updateMeshPositions() {
  for (var i = 0; i !== this.meshes.length; i++) {
    this.meshes[i].position.copy(this.bodies[i].position);
    this.meshes[i].quaternion.copy(this.bodies[i].quaternion);
    this.hfBody.position.copy(this.mesh.position);
    this.hfBody.quaternion.copy(this.mesh.quaternion);
  }
}
However, the problem is that when the 'floor' rotates back and forth, the spheres get stuck and sometimes even fall through the floor. Here is an example on CodePen - https://codepen.io/danlong/pen/qJwMBo
Move the mouse up and down on the screen to see this in action.
Is there a better or different way I should be rotating the 'floor' whilst keeping the spheres moving?
Directly (i.e. "instantly") setting position/rotation is likely to break collision handling in all physics engines, including cannon.js. Effectively you are teleporting things through space, causing objects to get stuck in or pass through each other.
What you should do is:
- Set the velocity (both .velocity and .angularVelocity) of, or apply forces to, the cannon.js bodies.
- Copy the transforms of those bodies to your visual meshes (notice this is exactly the opposite of what you are currently doing in the code).
Determining the right amount of force to apply to get the desired visual behavior is usually the tricky part.
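A minimal sketch of that approach, reusing the question's names and assuming hfBody is a kinematic body (static mass-0 bodies ignore velocities); dt and targetAngleY (the mouse-derived tilt) are hypothetical inputs:
updateMeshPositions(dt, targetAngleY) {
  // Drive the floor toward the target angle with an angular velocity so
  // the solver sees continuous motion instead of a teleport.
  const euler = new CANNON.Vec3();
  this.hfBody.quaternion.toEuler(euler);
  this.hfBody.angularVelocity.set(0, (targetAngleY - euler.y) / dt, 0);
  // Step the physics, then copy physics -> visuals (the reverse of the
  // original loop; the bodies are now the source of truth).
  this.world.step(1 / 60, dt, 3);
  for (var i = 0; i !== this.meshes.length; i++) {
    this.meshes[i].position.copy(this.bodies[i].position);
    this.meshes[i].quaternion.copy(this.bodies[i].quaternion);
  }
}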

A-Frame camera rotation using lookAt()

Word up SO,
I'm trying to pull together something akin to an 'anchor look' component in A-Frame – the idea was to be a combination of aframe-href-component and aframe-look-at-component, where clicking a link to an anchor would make the camera "look at" the entity whose id="" matches the anchor.
I thought I had a working concept just by modifying the look-at component a bit, i.e. polling for hash updates and calling Object3D.lookAt() on the anchor target, but there seems to be a problem I wasn't accounting for, which probably comes from my poor understanding of Euler angles and quaternions:
When the camera's rotation gets updated by lookAt(), it seems to lose its previous rotational reference: dragging the camera produces strange rotation results, and the results get stranger the more you've rotated the camera before calling lookAt().
I've set up a basic CodePen at http://codepen.io/wosevision/pen/JWRMyK containing my version of the component to demonstrate; what is causing this, and what is the proper way to do it?
The perspective camera is nested in a group: when you drag the mouse to change the rotation, the group's rotation changes, but the perspective camera's does not. The rotation behaves strangely if the perspective camera's own rotation is not (0, 0, 0).
Setting the camera rotation directly is very hard; if you really want to do that, you need to look deep into the implementation of the camera controls and modify it.
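One way to sidestep this, sketched in plain three.js terms (cameraEl and anchorPos are assumptions), is to keep the camera's local rotation at zero and orient its parent rig with a camera-style look matrix, so the drag controls keep a consistent reference frame:
const camera = cameraEl.getObject3D('camera'); // the nested perspective camera
const rig = camera.parent;                     // the group the controls rotate
camera.rotation.set(0, 0, 0);                  // neutral local rotation
// Matrix4.lookAt builds a camera-style orientation (looking down -Z), so a
// child camera with zero local rotation ends up facing anchorPos.
const m = new THREE.Matrix4();
m.lookAt(rig.getWorldPosition(new THREE.Vector3()), anchorPos, rig.up);
rig.quaternion.setFromRotationMatrix(m);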

Can points or meshes be drawn at infinite distance?

I'm interested in drawing a stardome in THREE.js using either mesh points or a particle system.
I don't want the camera to be able to move any closer to any part of the stardome, since the stars are effectively at infinite distance.
I can think of a couple of ways to do this:
- A very large mesh (or very large point/particle distances)
- Camera and stardome have their movement exactly linked
Is there any way to specify that a mesh, point, or particle system is automatically rendered at infinite distance, so it is always drawn behind any foreground objects?
I haven't used three.js, but my guess is no. OpenGL cameras need a "near clipping plane" and a "far clipping plane", which effectively denote the minimum and maximum distances at which things are rendered. If you've played video games where you move too close to a wall and start to see through it, or see things in the distance suddenly vanish as you move away, those were probably the clipping planes at work.
The workaround is usually one of 2 ways:
1) Set the far clipping plane distance as high as it'll let you go. I don't know what data type three.js would use for this, but my guess is a 32-bit float.
2) Render it in "layers". Render all the stars first before anything else in the scene.
Option 2 is the one I usually use.
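A sketch of option 2 in three.js (starScene and starCamera are assumptions): draw the stars with a camera that copies only the main camera's rotation, then clear the depth buffer before the foreground pass so the stars can never occlude anything:
renderer.autoClear = false; // we manage clearing across the two passes
function render() {
  // Stars rotate with the viewer, but starCamera's position never changes,
  // so the camera can never get closer to them.
  starCamera.quaternion.copy(camera.quaternion);
  renderer.clear();                       // clear color + depth
  renderer.render(starScene, starCamera); // background pass: stars first
  renderer.clearDepth();                  // foreground always draws on top
  renderer.render(scene, camera);         // main pass
  requestAnimationFrame(render);
}
render();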
Even if you used option 1, you would still need to synchronize the positions of the camera and the skybox.
If you do not depth cull, draw the skybox first and match its position, but not its rotation, to the camera's.
Also disable lighting on the skybox. Instead, bake the ambience directly into its texture.
You don't want things infinitely far away; you just want them not to move with respect to the viewer and not to appear in front of things. The best way to do that is to prevent the viewer from getting closer to them, which produces the illusion of the objects being far away. The second part is to adjust your depth handling so that the skybox is always considered farther away than whatever you are currently drawing.
If you create a very large mesh object, you'll have to set your camera's far plane large enough to include the mesh, which means you'll end up drawing things that you really do want to cull.
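That position-matching variant can also be done in a single pass (a sketch; skybox and skyTexture are assumptions):
// One-time setup: unlit material, drawn first, writing no depth.
skybox.material = new THREE.MeshBasicMaterial({
  map: skyTexture,        // ambience baked into the texture (no lighting)
  side: THREE.BackSide,   // faces point inward, toward the viewer
  depthWrite: false,      // foreground always passes the depth test
});
skybox.renderOrder = -1;  // rendered before everything else
// Each frame: follow the viewer's position, but not its rotation.
skybox.position.copy(camera.position);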

Refraction after leaving a transparent (glass) object

I'm currently writing a fragment shader, which (besides other things) imitates the refraction effect on a glass sphere.
So, when a ray enters the sphere, it changes direction. So far so good. Now, when the refracted ray leaves the glass object, does it change direction again? I'm pretty sure it does, but I've been poking around the Internet and found differing opinions (e.g. at the bottom of this site it's clearly stated that there is no change in direction).
Thanks in advance.
Yes, it changes direction: refraction from glass back to air follows the same law as from air to glass, just with the ratio of refractive indices inverted.
You can implement it fairly easily. First, render your scene into a cubemap centered inside the sphere.
The second render step uses the surface normal and the camera-to-surface vector; with those you can use the refract() function to calculate the direction of the refracted ray.
You then have to calculate where the ray exits the sphere, and there you can use the refract() function again; you only have to calculate the normal vector at that exit point.
The third step is to sample the cubemap with the texture() function, passing the exit vector as the coordinate.
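A sketch of the two refractions in three.js vector math (in a real fragment shader you would use the built-in refract(); viewDir, entryNormal, and exitNormal are assumed normalized THREE.Vector3 values):
// GLSL-style refract: I and N normalized, eta = n1 / n2.
function refract(I, N, eta) {
  const cosi = -I.dot(N);
  const k = 1 - eta * eta * (1 - cosi * cosi);
  if (k < 0) return null; // total internal reflection: no refracted ray
  return I.clone().multiplyScalar(eta)
          .add(N.clone().multiplyScalar(eta * cosi - Math.sqrt(k)));
}
const N_GLASS = 1.5;
// Entering: outward normal at the entry point, eta = n_air / n_glass.
const insideDir = refract(viewDir, entryNormal, 1.0 / N_GLASS);
// Leaving: the ray bends again; the ratio inverts, and the normal must face
// the incident ray, so the outward exit normal is negated.
const exitDir = refract(insideDir, exitNormal.clone().negate(), N_GLASS);
// exitDir is then used as the lookup direction into the cubemap.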

Working with Three.js

Context: trying to take THREE.js and use it to display conic sections.
Method: creating a mesh of vertices and then connecting Face4s to all of them. I used two faces to produce a front and a back side, so that when the conic section rotates it won't matter from which angle the camera views it.
Problems encountered:
1. Trying to find a good way to create an intuitive mouse rotation scheme. If you think in spherical coordinates, it feels like making up/down change phi and left/right change theta would work. But that requires being able to move the camera. As far as I can tell, there is no way to actively change the rotation of anything besides the objects. Does anyone know how to change the rotation of the camera or scene?
2. Is there a way to graph functions that is better than creating a mesh? If the mesh has many points it is too slow, and if the mesh has few points you cannot easily make out the shape of the conic sections.
Any sort of help would be most excellent.
I'm still starting to learn Three.js, so I'm not sure about the second part of your question.
For the first part, there is a very good way to change the camera, which also supports zooming and moving the scene: the trackball camera.
For the exact code and how to use it, you can view:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_trackballcamera_earth.html
At the bottom of this page (http://mrdoob.com/122/Threejs) you can see the example in action (the globe in the third row from the bottom).
There is an orbit control script for the three.js camera.
I'm not sure I understand the rotation bit. You do want to rotate an object, but you are correct that the rotation is relative.
When you rotate or move your camera, a matrix is calculated for that position/rotation, and it does indeed rotate the scene while keeping the camera static.
This is irrelevant though, because you work in model/world space and position your camera in it; the engine takes care of the rotations under the hood.
What you probably want is to set up an object, hook up your rotation with spherical coordinates, and link your camera as a child of this object. Translation along the camera's Z axis relative to the object should mimic your dolly (zoom is a FOV change).
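A sketch of that setup (the names are assumptions): a pivot object carries the spherical rotation, and the camera rides along as a child offset on Z:
const pivot = new THREE.Object3D();
scene.add(pivot);
pivot.add(camera);
camera.position.set(0, 0, 10);     // dolly distance along the pivot's Z axis
function onDrag(deltaTheta, deltaPhi) {
  pivot.rotation.y += deltaTheta;  // left/right drag orbits around Y
  pivot.rotation.x += deltaPhi;    // up/down drag changes elevation
}
function dolly(amount) {
  camera.position.z = Math.max(1, camera.position.z + amount); // move, don't zoom
}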
You can rotate the camera by changing its position. See the code I pasted here: https://gamedev.stackexchange.com/questions/79219/three-js-camera-turning-leftside-right
As others are saying, OrbitControls.js is an intuitive way for users to manage the camera.
I tackled many of the same issues when building formulatoy.net. I used morphing geometries, since I found that mapping 3D math functions to a UV surface required very little code and allowed an easy way to implement different coordinate systems (Cartesian, spherical, cylindrical).
You could use particles instead of a mesh, I suppose, but a mesh seems best. The lattice material is not too useful if you're trying to understand a surface mathematically. At this point I'm thinking of drawing my own X,Y lines (or phi, theta lines, etc.) on the surface to better demonstrate cross-sections.
Hope that helps.
You can use trackball controls to zoom in and out of an object, rotate it, and pan. With trackball controls you are moving the camera around the object; the object still rotates with respect to the screen or renderer centre (0,0,0).
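Typical wiring looks like this (a sketch assuming the classic examples/js build of three.js):
const controls = new THREE.TrackballControls(camera, renderer.domElement);
function animate() {
  requestAnimationFrame(animate);
  controls.update();               // applies the accumulated rotate/zoom/pan
  renderer.render(scene, camera);
}
animate();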
