Context: trying to take THREE.js and use it to display conic sections.
Method: creating a mesh of vertices and then connecting Face4s across them. I used two faces per quad to produce a front and a back side, so that when the conic section rotates it won't matter from which angle the camera views it.
Problems encountered:
1. Trying to find a good way to create an intuitive mouse rotation scheme. If you think in spherical coordinates, it feels like making up/down change phi and left/right change theta would work. But that requires being able to move the camera, and as far as I can tell there is no way to actively change the rotation of anything besides the objects. Does anyone know how to change the rotation of the camera or scene?
2. Is there a way to graph functions that is better than creating a mesh? If the mesh has many points it is too slow, and if it has few points you cannot easily make out the shape of the conic sections.
Any sort of help would be most excellent.
I'm still just starting to learn Three.js, so I'm not sure about the second part of your question.
For the first part, to change the camera, there is a very good way, which could also include zooming and moving the scene: the trackball camera.
For the exact code and how to use it, you can view:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_trackballcamera_earth.html
At the bottom of this page (http://mrdoob.com/122/Threejs) you can see the example in action (the globe in the third row from the bottom).
There is an orbit control script for the three.js camera.
I'm not sure if I understand the rotation bit. You do want to rotate an object, but you are correct, the rotation is relative.
When you rotate or move your camera, a matrix is calculated for that position/rotation, and it does indeed rotate the scene while keeping the camera static.
This is irrelevant though, because you work in model/world space and you position your camera in it; the engine takes care of the rotations under the hood.
What you probably want is to set up an object, hook up your rotation with spherical coordinates, and link your camera as a child to this object. The translation along the camera's Z axis relative to the object should mimic your dolly (zoom is a FOV change).
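A minimal sketch of that rig, assuming `scene` and `camera` already exist; the parent object is a plain THREE.Object3D, and the mouse-delta hookup is mine, not a standard API:

// The "null" pivot sits at the orbit center; the camera is its child.
const pivot = new THREE.Object3D();
scene.add(pivot);
pivot.add(camera);
camera.position.set(0, 0, 10); // dolly distance along the pivot's local Z

// Hook mouse deltas to spherical angles: left/right -> azimuth, up/down -> elevation.
function onMouseDrag(deltaX, deltaY) {
  pivot.rotation.y -= deltaX * 0.005;
  pivot.rotation.x -= deltaY * 0.005;
}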
You can rotate the camera by changing its position. See the code I pasted here: https://gamedev.stackexchange.com/questions/79219/three-js-camera-turning-leftside-right
As others are saying, OrbitControls.js is an intuitive way for users to manage the camera.
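For reference, a minimal OrbitControls setup looks something like this (the import path varies between Three.js versions; `camera` and `renderer` are assumed to exist):

import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';

const controls = new OrbitControls(camera, renderer.domElement);
controls.target.set(0, 0, 0); // orbit around the scene origin
controls.update();            // call again per frame if damping is enabled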
I tackled many of the same issues when building formulatoy.net. I used Morphing Geometries, since I found that mapping 3D math functions to a UV surface requires very little code, and it allowed an easy way to implement different coordinate systems (Cartesian, spherical, cylindrical).
You could use particles instead of a mesh, I suppose, but a mesh seems best. The lattice (wireframe) material is not too useful if you're trying to understand a surface mathematically. At this point I'm thinking of drawing my own X,Y lines on the surface (or phi, theta lines, etc.) to better demonstrate cross-sections.
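As a sketch of the UV-surface idea (not necessarily what formulatoy.net does), Three.js ships a ParametricGeometry helper that maps the unit UV square through a function; a cone might look roughly like this:

// Map (u, v) in [0,1]^2 to the cone z = sqrt(x^2 + y^2).
// In recent Three.js versions ParametricGeometry lives in examples/jsm/geometries.
function cone(u, v, target) {
  const r = u * 2;               // radius, 0 to 2
  const theta = v * Math.PI * 2; // full revolution
  target.set(r * Math.cos(theta), r * Math.sin(theta), r);
}

const geometry = new THREE.ParametricGeometry(cone, 32, 32);
const material = new THREE.MeshNormalMaterial({ side: THREE.DoubleSide });
scene.add(new THREE.Mesh(geometry, material));

Note that side: THREE.DoubleSide also removes the need for the duplicated front/back faces mentioned in the original question.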
Hope that helps.
You can use trackball controls, with which you can zoom in and out of an object, rotate it, and pan. With trackball controls you are moving the camera around the object; the object still rotates with respect to the screen or renderer centre (0,0,0).
I'm using Three.js to create a spiral galaxy. I've gone down the InstancedBufferGeometry route so I can render lots of stars with great performance.
For now, I'm using a plane as my object; the trouble is that when I orbit around the galaxy these planes don't look at the camera.
I have tried using the lookAt function, however that doesn't seem to work.
Does anyone know how to get InstancedBufferGeometry to look at the camera?
Many thanks in advance.
The lookAt method belongs to THREE.Object3D, and it makes the entire object rotate towards a point, not each of its geometry's instances. If you're using InstancedBufferGeometry, you could perform these calculations in the vertex shader, but that can be computationally expensive given the quantity of planes you're rendering.
If you're using InstancedBufferGeometry for planes only, I recommend you use THREE.Points instead, which is made to automatically generate planes that always look towards the camera, as demonstrated in these examples:
https://threejs.org/examples/?q=point#webgl_points_sprites
https://threejs.org/examples/?q=point#webgl_custom_attributes_points
All you'd need to worry about is their positions; the planes will always "billboard" towards the camera without the need to manually calculate rotations.
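A rough sketch of that Points-based setup, assuming `starPositions` is a flat array of xyz triplets and `starTexture` is a loaded sprite texture:

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute(starPositions, 3));

const material = new THREE.PointsMaterial({
  size: 2,
  map: starTexture,
  transparent: true,
  depthWrite: false,                // avoids sorting artifacts with blending
  blending: THREE.AdditiveBlending  // typical choice for star fields
});

scene.add(new THREE.Points(geometry, material)); // each point billboards automatically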
So, I'm building an architectural visualization with Three.js, and one of the things the user should be able to do is click on things and orbit around them. The problem is that the camera is able to clip through walls. I fixed that by assigning each clickable object its own limiting azimuth and polar angles. Now the problem is that azimuth angles go from -PI to +PI, and it's impossible to limit between, for example, 1.5 and -2.4, because it limits the "wrong" way. I hope this graphic explains that a little better:
Here's a link to the live version:
(You control by clicking on the ground)
https://jim-fx.com/modern/
As you can see, on objects on the right side of the room the limiting works flawlessly, but on the cabinet and the vases the camera clips through the wall.
If anyone could help me that would be amazing. And any other tips are welcome as well.
Greetings, Max
There are several solutions to your problem. One is to implement a kind of collision detection with real or virtual walls that stops the camera's rotation. However, I guess you are looking for something simpler to implement.
As I don't know Three.js very well, I will give you a generic solution, which should be easily adaptable to Three.js.
The first thing is to not use the built-in Three.js orbit control, but to implement your own, where you control all your transformations. This is in fact very easy.
To create an orbitable camera, you simply have to create:
A "null" transformable object, meaning a simple transformable entity that does not embed any shape (it is not rendered and is invisible, but exists). I hope Three.js provides such an elementary thing.
A camera, which should itself be another transformable.
Once you have this, you simply parent the camera to the "null" object. Now, if you rotate the "null" object, the camera rotates with it. Then, to orbit, you just have to move the camera back from the parent object:
Null Camera
+ - - - - - - - - - |>
Like this, the "null" object becomes the camera's "look-at point", and if you rotate the "null" object around Y (I believe Three.js uses Y-up), you control the camera azimuth. If you rotate the "null" object in X or Z (depending on the coordinate system), you control the camera altitude. You can even move the camera forward and backward along its local Z axis to close in on the "look-at point".
Well, you now have an orbit camera that is easy to control. But your problem is not yet solved: how do you make this Pi/-Pi control possible for every initial camera orientation?
Simple: you create a second "null" transform object, name it "the socle", and you parent the first one to it. Like this, the rotation of the camera's "look-at point" is always local, and you can now rotate "the socle" to give your "orbital camera" group an initial orientation in world space.
In fact, it is pretty much like creating virtual gimbals. I hope I was clear; with pictures this would be easier to visualize...
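In Three.js the "null" objects map naturally onto plain THREE.Object3D instances. A minimal sketch of the two-level rig (the names and the `azimuth`/`altitude`/`initialAzimuth` variables are mine, not a standard API):

const socle = new THREE.Object3D();       // outer "socle": initial world orientation
const lookAtPoint = new THREE.Object3D(); // inner "null": local azimuth/altitude
socle.add(lookAtPoint);
lookAtPoint.add(camera);
scene.add(socle);

camera.position.set(0, 0, 5); // back the camera off along its local Z (the dolly)

// Local orbit, always a clean -PI..PI range relative to the socle:
lookAtPoint.rotation.y = azimuth;  // around Y (Three.js is Y-up)
lookAtPoint.rotation.x = altitude;

// Per clickable object, orient the whole rig once in world space:
socle.rotation.y = initialAzimuth;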
I'm interested in drawing a stardome in THREE.js using either mesh points or a particle system.
I don't want the camera to be able to move any closer to any part of the stardome, since the stars are effectively at infinite distance.
I can think of a couple of ways to do this:
A very large mesh (or very large point/particle distances)
Camera and stardome have their movement exactly linked.
Is there any way to specify that a mesh, point, or particle system is automatically rendered at infinite distance, so it is always drawn behind any foreground objects?
I haven't used three.js, but my guess is no. OpenGL cameras need a "near clipping plane" and a "far clipping plane", which effectively denote the minimum and maximum distances at which things are rendered. If you've ever moved too close to a wall in a video game and started to see through it, or watched distant objects suddenly vanish as you moved away, that was probably the clipping planes at work.
The workaround is usually one of two things:
1) Set the far clipping plane distance as high as it'll let you go. I don't know what data type three.js would use for this, but my guess is a 32-bit float.
2) Render it in "layers". Render all the stars first before anything else in the scene.
Option 2 is the one I usually use.
Even if you used option 1, you would still need to synchronize the positions of the camera and the skybox.
If you do not depth cull, draw the skybox first and match its position, but not rotation, to the camera.
Also disable lighting on the skybox. Instead, bake the ambience directly into its texture.
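A sketch of that layered approach in Three.js, with the stardome in its own scene, an unlit material, and only the depth buffer cleared between passes (`stardome`, `scene`, `camera`, and `renderer` are assumed to exist):

const starScene = new THREE.Scene(); // contains only the stardome
starScene.add(stardome);             // e.g. a sphere with THREE.MeshBasicMaterial (unlit)
renderer.autoClear = false;

function render() {
  stardome.position.copy(camera.position); // match position, never rotation

  renderer.clear();                   // clear color + depth
  renderer.render(starScene, camera); // stars first
  renderer.clearDepth();              // foreground now always draws in front
  renderer.render(scene, camera);
}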
You don't want things infinitely far away; you just want them not to move with respect to the viewer and not to appear in front of things. The best way to do the first is to prevent the viewer from getting closer to them, which produces the illusion of the object being far away. The second is to modify your depth culling so that the skybox is always considered further away than whatever you are currently drawing.
If you create a very large mesh object, you'll have to set your camera's far plane large enough to include the mesh which means you'll end up drawing things that you really do want to cull.
I'm currently writing a fragment shader, which (besides other things) imitates the refraction effect on a glass sphere.
So, when a ray enters the sphere, the ray changes direction. So far so good. Now, when the refracted ray leaves the glass object, does it change direction again? I'm pretty sure it does, but I've been poking around the Internet and I've found different opinions (e.g. at the bottom of this site it's clearly stated that there is no change in direction).
Thanks in advance.
Yes, it changes... the ray refracts at the glass-to-air boundary just as it does at the air-to-glass boundary; Snell's law applies at both interfaces, with the index ratio inverted on the way out.
You can implement it fairly easily. First you render your scene into a cubemap centered inside the sphere.
The second render step uses the surface normal and the camera-to-surface vector; with them you can use the refract() function to calculate the direction of the refracted ray.
Then you calculate where the ray exits the sphere and apply refract() again; you just need to calculate the normal vector at that exit point as well.
The third step is to sample the cubemap with texture(), passing the outgoing vector as the coordinate.
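For what it's worth, Three.js gives you the first (entry) refraction almost for free via CubeRefractionMapping; the exit refraction described above still needs a custom shader. A single-interface sketch, assuming a reasonably recent Three.js version and existing `scene`, `camera`, and `renderer`:

const cubeTarget = new THREE.WebGLCubeRenderTarget(256);
cubeTarget.texture.mapping = THREE.CubeRefractionMapping;

const cubeCamera = new THREE.CubeCamera(0.1, 1000, cubeTarget);
scene.add(cubeCamera);

const glass = new THREE.Mesh(
  new THREE.SphereGeometry(1, 64, 32),
  new THREE.MeshBasicMaterial({
    envMap: cubeTarget.texture,
    refractionRatio: 0.66 // roughly 1 / 1.5 for air into glass
  })
);
scene.add(glass);

function render() {
  glass.visible = false; // keep the sphere out of its own cube map
  cubeCamera.position.copy(glass.position);
  cubeCamera.update(renderer, scene);
  glass.visible = true;
  renderer.render(scene, camera);
}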
I have a project for children (http://kinosura.kiev.ua/sova/) and I need to check the faceIndex of every cube on screen.
Right now I use the intersections array from the mouse, but it only works when the user points at a cube.
How can I cast a ray (or rays) from the camera to every object to check faceIndex?
I tried making four rays to the cubes, but if I set cube.position as the origin, like this:
raycaster.setFromCamera( cube1.position, camera )
I get an empty intersections array.
I also tried setting a static 2D vector as the origin (taking the coordinates from the mouse), but the renderer size is relative and those coordinates change all the time... it doesn't work.
Thanks for any answer.
I suggest that you try another approach. It appears that your cubes do not cover one another relative to the camera view, so use the surface normals and compare them to the view direction to determine whether they face the camera or face away from it, with a simple one-per-polygon dot product.
When you are creating your geometry, before adding it to a THREE.Mesh, call .computeFaceNormals() on it.
Instead of ray casting, iterate through all faces, grab the surface normal of each face, transform it relative to the view (the inverse transpose of the object's matrix), then dot(). It might sound complicated at first, but it's actually just a couple of steps and much faster than doing a lot of raycasts (which would probably include this work anyway!).
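A sketch of that loop against the legacy THREE.Geometry API (pre-r125; `mesh` and `camera` are assumed to exist):

mesh.geometry.computeFaceNormals();
mesh.updateMatrixWorld();

const normalMatrix = new THREE.Matrix3().getNormalMatrix(mesh.matrixWorld);
const worldNormal = new THREE.Vector3();
const faceCenter = new THREE.Vector3();
const viewDir = new THREE.Vector3();

mesh.geometry.faces.forEach((face, faceIndex) => {
  // Face centroid in world space, to build the view direction from.
  faceCenter
    .copy(mesh.geometry.vertices[face.a])
    .add(mesh.geometry.vertices[face.b])
    .add(mesh.geometry.vertices[face.c])
    .divideScalar(3)
    .applyMatrix4(mesh.matrixWorld);

  viewDir.subVectors(camera.position, faceCenter).normalize();
  worldNormal.copy(face.normal).applyMatrix3(normalMatrix).normalize();

  const facesCamera = worldNormal.dot(viewDir) > 0; // true -> this faceIndex is visible
});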