Is clipping done automatically in Three.js?

So, I was reading about clipping in this Wikipedia article. It seems pretty essential to any and all games, so, do I have to do it, or is it done automatically by Three.js, or even WebGL? Thanks!

You can pass values for the near and far clipping planes to your camera object:
var camera = new THREE.PerspectiveCamera( fov, aspect, near, far );
near and far hold distances, for example near = 0.1 and far = 10000; only objects that lie between these two distances will be rendered.
EDIT:
near and far represent the clipping planes for your world. In a scene with thousands of objects and textures being drawn at once, it would be taxing on the CPU and GPU to try to show everything, and even worse, it would be wasteful to draw the things you can't even see. The near clipping plane is usually relatively close to the viewer, whereas the far clipping plane is somewhere off in the distance. As objects cross the far plane they pop in or out of existence, so some games use fog to make that appearance and disappearance look more natural.
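A minimal sketch putting both ideas together (the concrete values are illustrative, not from the original answer):
var camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 10000 );
// Anything closer than 0.1 or farther than 10000 units from the camera is clipped.
var scene = new THREE.Scene();
scene.fog = new THREE.Fog( 0xcccccc, 5000, 10000 );
// Linear fog that thickens toward the far plane hides objects before they
// pop out of existence at the clipping boundary.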

Related

Using three.js, how would you project a globe world to a map on the screen?

I am curious about the limits of three.js. The following question is asked mainly as a challenge, not because I actually need the specific knowledge/code right away.
Say you have a game/simulation world model around a sphere geometry representing a planet, like the worlds of the game Populous. The resolution of polygons and textures is sufficient to look smooth when the globe fills the view of an ordinary camera. There are animated macroscopic objects on the surface.
The challenge is to project everything from the model to a global map projection on the screen in real time. The choice of projection is yours, but it must be seamless/continuous, and it must be possible for the user to rotate it, placing any point on the planet surface in the center of the screen. (It is not an option to maintain an alternative model of the world only for visualization.)
There are no limits on the number of cameras etc. allowed, but the performance must be "realtime", say double-digit FPS or more.
I don't expect any proof in the form of a running application (although that would be cool), but some explanation as to how it could be done.
My own initial idea is to place a lot of cameras, in fact one for every pixel in the map projection, around the globe, within a Group object that is attached to some kind of orbit controls (with rotation only), but I expect the number of object culling operations to become a huge performance issue. I am sure there must exist more elegant (and faster) solutions. :-)
Why not just use a spherical camera model (think of a 360° camera) and virtually put it in the center of the sphere? So this camera would (if it were physically possible) be wrapped all around the sphere, looking toward the center from all directions.
This camera could be implemented in shaders (instead of the regular projection-matrix) and would produce an equirectangular image of the planet-surface (or in fact any other projection you want, like spherical mercator-projection).
As far as I can tell, the vertex shader can implement any projection you want, and it doesn't need to represent a camera that is physically possible; it just needs to produce consistent clip-space coordinates for all vertices. Fragment shaders for lighting would still need to operate on the original coordinates, normals, etc., but that should be achievable. So the vertex shader would just need to compute (x,y,z) => (phi,theta,r) and go on with that.
Occlusion-culling would need to be disabled, but iirc three.js doesn't do that anyway.
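A rough sketch of what such a shader could look like, assuming an equirectangular target, a planet centered at the origin, and a scene radius under 1000 units; the material name, the depth scale, and the placeholder fragment shading are illustrative, not from this answer:
var equirectMaterial = new THREE.ShaderMaterial( {
  vertexShader: [
    'varying vec3 vWorldPos;',
    'void main() {',
    '  vec4 worldPos = modelMatrix * vec4( position, 1.0 );',
    '  vWorldPos = worldPos.xyz;',
    '  float r     = length( worldPos.xyz );',
    '  float theta = atan( worldPos.z, worldPos.x );             // longitude, -PI..PI',
    '  float phi   = asin( clamp( worldPos.y / r, -1.0, 1.0 ) ); // latitude, -PI/2..PI/2',
    '  // Map the angles straight to clip space; depth encodes the radius',
    '  // so nearer surface points win the depth test.',
    '  gl_Position = vec4( theta / 3.14159265, phi * 2.0 / 3.14159265, r / 1000.0, 1.0 );',
    '}'
  ].join( '\n' ),
  fragmentShader: [
    'varying vec3 vWorldPos;',
    'void main() {',
    '  gl_FragColor = vec4( normalize( vWorldPos ) * 0.5 + 0.5, 1.0 ); // placeholder shading',
    '}'
  ].join( '\n' )
} );
One caveat: triangles that straddle the theta = ±PI seam would be rasterized across the whole map, so geometry would need to be split (or duplicated with a shifted theta) along that seam.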

Can points or meshes be drawn at infinite distance?

I'm interested in drawing a stardome in THREE.js using either mesh points or a particle system.
I don't want the camera to be able to move any closer to any part of the stardome, since the stars are effectively at infinite distance.
I can think of a couple of ways to do this:
A very large mesh (or very large point/particle distances)
Camera and stardome have their movement exactly linked.
Is there any way to specify that a mesh, point, or particle system is automatically rendered at infinite distance, so it is always drawn behind any foreground objects?
I haven't used three.js, but my guess is no. OpenGL cameras need a "near clipping plane" and a "far clipping plane", which effectively denote the minimum and maximum distances at which things get rendered. If you've ever played a video game where you move too close to a wall and start to see through it, or where things in the distance suddenly vanish as you move away, those were probably the clipping planes at work.
The workaround is usually one of 2 ways:
1) Set the far clipping plane distance as high as it'll let you go. I don't know what data type three.js would use for this, but my guess is a 32-bit float.
2) Render it in "layers". Render all the stars first before anything else in the scene.
Option 2 is the one I usually use.
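A minimal sketch of option 2, rendering the stars in their own pass first (the scene names are mine):
renderer.autoClear = false; // we manage clearing ourselves

function render() {
  requestAnimationFrame( render );
  renderer.clear();                     // clear color and depth
  renderer.render( starScene, camera ); // draw the stardome first
  renderer.clearDepth();                // discard its depth values
  renderer.render( scene, camera );     // the main scene always draws on top
}
render();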
Even if you used option 1, you would still want to synchronize the position of the camera and the skybox.
If you do not depth cull, draw the skybox first and match its position, but not rotation, to the camera.
Also disable lighting on the skybox. Instead, bake an ambience directly into its texture.
You don't want things infinitely far away; you just want them not to move with respect to the viewer and not to appear in front of other things. The best way to do that is to prevent the viewer from getting closer to them, which produces the illusion of the object being far away. The second thing is to modify your depth-culling function so that the skybox is always considered farther away than whatever you are currently drawing.
If you create a very large mesh object, you'll have to set your camera's far plane large enough to include the mesh which means you'll end up drawing things that you really do want to cull.
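Putting the previous answer into code, a minimal sketch of a synchronized, unlit skybox (the geometry size and texture are illustrative):
var skyMaterial = new THREE.MeshBasicMaterial( {
  map: skyTexture,      // ambience baked into the texture; MeshBasicMaterial ignores lights
  side: THREE.BackSide, // the box is viewed from the inside
  depthWrite: false     // it never occludes foreground objects
} );
var skybox = new THREE.Mesh( new THREE.BoxGeometry( 100, 100, 100 ), skyMaterial );
scene.add( skybox );

function animate() {
  requestAnimationFrame( animate );
  skybox.position.copy( camera.position ); // match position, but not rotation
  renderer.render( scene, camera );
}
animate();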

Changing scale of mesh changes lighting brightness of scene?

I needed to refactor my custom mesh creation a bit
from:
create meshes of unified size (SIZE, SIZE, SIZE), then scale them as needed (setting a scale for each axis)
to:
create mesh with correct size, do not scale later
meshes are custom generated (vertices, faces, normals, uvs), nothing of this process was altered, worked like a charm before
=> resulting meshes are the same size, position, etc.
The whole scene setup stays the same: lights, shadowing, materials. Yet when using the second approach the whole scene is lit very, very brightly and is super reflective. Is that a known issue?
The material used is MeshPhongMaterial with map, bumpMap, specularMap, and envMap.
using three.js r68, no error/warning in console
before:
https://cloud.githubusercontent.com/assets/3647854/3876053/76b8f260-2158-11e4-9e96-c8de55eaec9a.png
after:
https://cloud.githubusercontent.com/assets/3647854/3876052/76b7fa86-2158-11e4-9393-8f3eece04c0b.png
Did you rescale the normals in the mesh?
The mesh format probably needs normalized normals; in that case the new normals are now incorrect, though they would have been correct if you hadn't rescaled.
Alternatively, you say the lights haven't been changed; maybe they need to be appropriately redirected in the scene. (Assuming you're applying different scaling factors on each axis.)
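If unnormalized normals are indeed the cause, a minimal fix (assuming custom-built THREE.Geometry, as in r68) is to recompute them once the vertices are at their final size:
// Recompute unit-length normals after the geometry is built at its final size.
geometry.computeFaceNormals();
geometry.computeVertexNormals();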

WebGL: missing triangles when moving camera

I am rendering a complex scene in WebGL (180 meshes) corresponding to a car model (Nissan GTX). However, when I move the camera around, it looks as if triangles are missing.
These 'missing triangles' seem to jump randomly over the surface. Could this be a depth-buffer problem or a normal-calculation problem? I have no idea.
Nevertheless, if I zoom in near the surface of the model, all the triangles are there.
Has anyone had this problem? Any tips?
I guess you have a wireframe mesh directly on top of (or very near) a solid mesh, and this is just a depth-buffer problem. It works when zooming in because depth-buffer precision is higher in the near area of the viewing frustum.
Try adjusting the screen space depth by working with the polygon offset (or a similar fragment shader modifying gl_FragDepth) when rendering a wireframe overlay directly on top of the solid mesh. Or just move them farther away in object space.
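In three.js terms, the polygon-offset part could look roughly like this (the material type is just an example):
// Push the solid mesh slightly back in depth so the wireframe overlay wins ties.
var solidMaterial = new THREE.MeshPhongMaterial( {
  polygonOffset: true,
  polygonOffsetFactor: 1, // positive values move fragments away from the camera
  polygonOffsetUnits: 1
} );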
You might be right to suspect a depth-buffer problem: the fact that the distance to the camera affects things suggests it could be a buffer-precision problem. Check out the answer to this question for a possible solution.
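Two standard precision fixes, sketched (the numbers are illustrative):
// 1) Raise the near plane: depth precision is concentrated near the camera,
//    so increasing near helps far more than decreasing far.
var camera = new THREE.PerspectiveCamera( 45, aspect, 1, 5000 );

// 2) Or request a logarithmic depth buffer, which spreads precision
//    more evenly across the view distance.
var renderer = new THREE.WebGLRenderer( { logarithmicDepthBuffer: true } );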

Working with Three.js

Context: trying to take THREE.js and use it to display conic sections.
Method: create a mesh of vertices and then connect Face4s across all of them. I used two faces to produce a front and a back side, so that when the conic section rotates it won't matter from which angle the camera views it.
Problems encountered:
1. Trying to find a good, intuitive mouse-rotation scheme. If you think in spherical coordinates, it feels like having up/down change phi and left/right change theta would work, but that requires being able to move the camera. As far as I can tell, there is no way to actively change the rotation of anything besides the objects. Does anyone know how to change the rotation of the camera or the scene?
2. Is there a way to graph functions that is better than creating a mesh? If the mesh has many points it is too slow, and if it has few points you cannot easily make out the shape of the conic sections.
Any sort of help would be most excellent.
I'm still starting to learn Three.js, so I'm not sure about the second part of your question.
For the first part, to change the camera, there is a very good way, which could also include zooming and moving the scene: the trackball camera.
For the exact code and how to use it, you can view:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_trackballcamera_earth.html
At the bottom of this page (http://mrdoob.com/122/Threejs) you can see the example in action (the globe in the third row from the bottom).
There is an orbit control script for the three.js camera.
I'm not sure if I understand the rotation bit. You do want to rotate an object, but you are correct, the rotation is relative.
When you rotate or move your camera, a matrix is calculated for that position/rotation, and it does indeed rotate the scene while keeping the camera static.
This is irrelevant, though, because you work in model/world space and position your camera in it; the engine takes care of the rotations under the hood.
What you probably want is to set up a pivot object, hook your rotation up with spherical coordinates, and link your camera as a child of this object. The translation along the camera's Z axis relative to the object should mimic your dolly (zoom is a FOV change).
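A minimal sketch of that pivot approach (the names are illustrative):
var pivot = new THREE.Object3D();
scene.add( pivot );

camera.position.set( 0, 0, 10 ); // dolly: translation along the pivot's Z axis
pivot.add( camera );             // the camera is now a child of the pivot

// Hook your mouse deltas up to the spherical angles, e.g.:
function onDrag( dTheta, dPhi ) {
  pivot.rotation.y += dTheta; // left/right
  pivot.rotation.x += dPhi;   // up/down
}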
You can rotate the camera by changing its position. See the code I pasted here: https://gamedev.stackexchange.com/questions/79219/three-js-camera-turning-leftside-right
As others are saying OrbitControls.js is an intuitive way for users to manage the camera.
I tackled many of the same issues when building formulatoy.net. I used morphing geometries, since I found that mapping 3D math functions to a UV surface requires very little code, and it allowed an easy way to implement different coordinate systems (Cartesian, spherical, cylindrical).
You could use particles instead of a mesh, I suppose, but a mesh seems best. A lattice material is not too useful if you're trying to understand a surface mathematically. At this point I'm thinking of drawing my own X,Y lines (or phi, theta lines, etc.) on the surface to better demonstrate cross-sections.
Hope that helps.
You can use trackball controls, with which you can zoom in and out of an object, rotate it, and pan. With trackball controls you are moving the camera around the object; the object still rotates with respect to the screen or renderer centre (0,0,0).
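For completeness, a minimal OrbitControls setup looks roughly like this (it assumes the separate examples/js/controls/OrbitControls.js script has been included):
var controls = new THREE.OrbitControls( camera, renderer.domElement );

function animate() {
  requestAnimationFrame( animate );
  controls.update(); // needed when damping or auto-rotation is enabled
  renderer.render( scene, camera );
}
animate();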
