I want to create a SpotLight such that the shadow map covers a specific orthographic area, or a PlaneGeometry. The default settings, however, appear to create a shadow area based on the frustum of the light. Given that the frustum is rotated and scaled compared to the plane, it is not possible to simply alter that frustum. I'd have to make it very large to cover the intended area, and then the resolution is low.
Is there some way to create a light for which the shadow is calculated on a specific recipient area (either a plane or an orthographic region)?
You should create a second spotlight that is a child of the first spotlight, with its position set to (0, 0, 0). Then set its target to the plane, and set onlyShadow to true.
That way the second spotlight pretends to be the first spotlight, but with a detailed shadow map.
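A minimal sketch of that setup, assuming the older three.js API referenced in the links below (where lights expose an onlyShadow flag) and a plane mesh already in the scene:
var mainLight = new THREE.SpotLight(0xffffff, 1.0);
mainLight.position.set(50, 100, 50);
mainLight.castShadow = true;
scene.add(mainLight);
// The second spotlight rides along with the first and only contributes a shadow.
var shadowLight = new THREE.SpotLight(0xffffff, 1.0);
shadowLight.position.set(0, 0, 0);   // relative to the parent light
shadowLight.onlyShadow = true;       // no extra illumination, just the shadow map
shadowLight.castShadow = true;
shadowLight.target = plane;          // aim the shadow frustum at the plane
mainLight.add(shadowLight);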
EDIT: Old answer (misinformed):
It is possible to use a directional light to create an orthographic shadow.
https://github.com/mrdoob/three.js/blob/master/src/extras/renderers/plugins/ShadowMapPlugin.js#L169
This directional light can be set to onlyShadow.
https://github.com/mrdoob/three.js/blob/master/src/lights/DirectionalLight.js#L16
The expected frame can be set with the shadowCamera.
https://github.com/mrdoob/three.js/blob/master/src/lights/DirectionalLight.js#L20-L26
And the angle can be set by setting the position to the calculated normal of the plane.
I am building an application in AFrame and I want to constrain the viewer's movement, that is, I want to limit where the camera can go in the scene. For example, I have an a-plane that is the floor, and I want the camera to stop moving when it reaches 0 on the Z axis, to stop it from going through the floor, or stop again if it reaches 20 on the Z axis. I also wish to limit the movement in the x and y directions. There are no obstacles in the scene besides the a-plane. Is creating a navigation mesh my only option, or is there an easier way to constrain movement? Thanks!
I don't know of built-in tools to do this, but you could do it with programming (this sounds pretty easy). You could create a custom component, attached to the camera, with a tick handler that records the position of the camera in world space and stores it in a variable (camPosPrevFrame). Then create a function to test whether the current position is outside the bounds. If so, set the camera coordinate on the axis that has exceeded its limit back to the previously recorded boundary (camPosPrevFrame). If you are simply testing whether the camera is on one side of an orthogonal plane (say the world-space xy plane), that is pretty simple math (camera.getWorldPosition().x > someAmount). If you have a more complex situation, there are ways to test if a point is on either side of any arbitrary plane (it involves the dot product).
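For illustration, here is a minimal sketch of that idea as a custom component. The component name, property names, and axis limits are made up, and it clamps the position directly each tick rather than restoring a stored previous-frame position:
AFRAME.registerComponent('clamp-position', {
  schema: {
    min: { type: 'vec3', default: { x: -20, y: 0, z: 0 } },
    max: { type: 'vec3', default: { x: 20, y: 10, z: 20 } }
  },
  tick: function () {
    var pos = this.el.object3D.position;
    // Push any axis that has exceeded its limit back to the boundary.
    pos.x = Math.min(Math.max(pos.x, this.data.min.x), this.data.max.x);
    pos.y = Math.min(Math.max(pos.y, this.data.min.y), this.data.max.y);
    pos.z = Math.min(Math.max(pos.z, this.data.min.z), this.data.max.z);
  }
});
// Attach it to the camera entity:
// <a-entity camera look-controls wasd-controls clamp-position></a-entity>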
I'm using THREE.OrbitControls to dolly a THREE.OrthographicCamera. But, even though the ortho camera renders correctly as repositioned, all that is updating on the orthographic camera is the 'zoom' property, even after calling camera.updateProjectionMatrix(). Do I need to manually update the 'position' property of the camera based on the updated 'zoom' property? I want to display its position in my UI after dollying it.
(Note, this is a rewrite of my other question, THREE.js Orthographic camera position not updating after zoom with OrbitControl, in which I thought I was zooming with the OrbitControl but was actually dollying. Sorry about this.)
Dollying in/out with an ortho cam would have an unnoticeable effect. With ortho cams there is no perception of proximity because it has no perspective. All objects appear the same in size regardless of distance from the lens because the projection rays are all parallel. The only difference you'd notice is when the objects get clipped because they're past the near or far plane.
So, the decision was made that scrolling with OrbitControls would change the zoom of the camera, narrowing in/out of the center.
If you want to force the camera to move further/closer of its focus point, you could just translate it back/forth in the z-axis with:
camera.translateZ(distance);
A negative distance would move it closer, and a positive distance would move it further from its focus point.
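As a rough sketch (assuming an existing camera and an OrbitControls instance named controls), you could refresh the UI readout whenever the controls change and dolly manually with translateZ:
// Update the UI whenever OrbitControls modifies the camera.
controls.addEventListener('change', function () {
  console.log('zoom:', camera.zoom, 'position:', camera.position);
});

// Manual dolly: negative moves toward the focus point, positive away from it.
camera.translateZ(-5);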
I have a grid of points (object3D's using THREE.Points) in my Three.js scene, with a model sitting on top of the grid, as seen below. In code the model is called defaultMesh and uses a merged geometry for performance reasons:
I'm trying to work out which of the points in the grid my perspective camera can see at any given moment, i.e. every time the camera position is updated using my orbit controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays are being intersected with the model and remove the points corresponding to those rays from a list of all the points, thus leaving me with a list of points the camera can see.
So far so good, the ray creation and intersection code is placed in the render loop (as it has to be updated whenever the camera is moved), and therefore it's horrendously slow (obviously).
gridPointsVisible = gridPoints.geometry.vertices.slice(0);
startPoint = camera.position.clone();
vector = new THREE.Vector3();
// Create a ray from the camera position towards each point in the grid.
for (var i = 0; i < gridPoints.geometry.vertices.length; i++) {
    direction = gridPoints.geometry.vertices[i].clone();
    vector.subVectors(direction, startPoint);
    ray = new THREE.Raycaster(startPoint, vector.clone().normalize());
    if (ray.intersectObject(defaultMesh).length > 0) {
        // The ray hits the mesh, so this point is occluded; remove it from the visible list.
        gridPointsVisible.splice(gridPointsVisible.indexOf(gridPoints.geometry.vertices[i]), 1);
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have 2 questions:
Is there a better way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar, especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points--and you should know what color each sphere should be by the same color calculation as the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
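A rough sketch of the read-back step (sizes and names are illustrative; it assumes a separate pickingScene in which every point is drawn with its own flat, unique color):
var size = 256;  // a low-resolution target is usually enough for picking
var pickingTarget = new THREE.WebGLRenderTarget(size, size);

// Render the picking scene off-screen
// (older three.js versions instead take the target as a third argument to render()).
renderer.setRenderTarget(pickingTarget);
renderer.render(pickingScene, camera);
renderer.setRenderTarget(null);

// Read the pixels back and collect every unique color that was actually drawn.
var buffer = new Uint8Array(size * size * 4);
renderer.readRenderTargetPixels(pickingTarget, 0, 0, size, size, buffer);

var visibleIds = {};
for (var i = 0; i < buffer.length; i += 4) {
  var id = (buffer[i] << 16) | (buffer[i + 1] << 8) | buffer[i + 2];  // RGB -> id
  visibleIds[id] = true;
}
// A point is visible if the id/color computed for it appears in visibleIds.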
I'm trying to create a shadow in my orthographic scene in three.js. I'd like to have a directional light so the shadow is offset from all objects equally in the scene. I am however having problems using DirectionalLight.
My first problem is that I can't get the shadow to cover the entire scene; only part of it ever has a shadow. I played with the light's frustum settings, but can't figure out how to get it to cover the scene. Ideally I'd want the frustum to match that of the camera.
The second problem is that the shadows aren't "clean". If I use a SpotLight the shadows have nice crisp borders (but obviously not the universal directionality I want). When I use a DirectionalLight the borders are misshapen and blurry.
In the samples the tile is simply a box created with CubeGeometry.
How can I create an orthographic directional light source for my scene?
Context: trying to take THREE.js and use it to display conic sections.
Method: creating a mesh of vertices and then connecting Face4's to all of them. I used two faces to produce a front and a back side so that when the conic section rotates it won't matter from which angle the camera views it.
Problems encountered:
1. Trying to find a good way to create an intuitive mouse rotation scheme. If you think in spherical coordinates, then it feels like making up/down change phi and left/right change theta would work. But that requires that you can move the camera. As far as I can tell, there is no way to actively change the rotation of anything besides the objects. Does anyone know how to change the rotation of the camera or scene?
2. Is there a way to graph functions that is better than creating a mesh? If the mesh has many points then it is too slow, and if the mesh has few points then you cannot easily make out the shape of the conic sections.
Any sort of help would be most excellent.
I'm still starting to learn Three.js, so I'm not sure about the second part of your question.
For the first part, to change the camera, there is a very good way, which could also include zooming and moving the scene: the trackball camera.
For the exact code and how to use it, you can view:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_trackballcamera_earth.html
At the bottom of this page (http://mrdoob.com/122/Threejs) you can see the example in action (the globe in the third row from the bottom).
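A minimal sketch of hooking it up (assuming TrackballControls from the three.js examples has been included, alongside an existing scene, camera, and renderer):
var controls = new THREE.TrackballControls(camera, renderer.domElement);

function animate() {
  requestAnimationFrame(animate);
  controls.update();                // applies mouse rotate / zoom / pan to the camera
  renderer.render(scene, camera);
}
animate();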
There is an orbit control script for the three.js camera.
I'm not sure if I understand the rotation bit. You do want to rotate an object, but you are correct, the rotation is relative.
When you rotate or move your camera, a matrix is calculated for that position/rotation, and it does indeed rotate the scene while keeping the camera static.
This is irrelevant though, because you work in model/world space and position your camera in it; the engine takes care of the rotations under the hood.
What you probably want is to set up an object, hook up your rotation with spherical coordinates, and link your camera as a child to this object. Translation along the camera's Z axis relative to the object should mimic your dolly (zoom is a FOV change).
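A rough sketch of that pivot setup (the names and the way mouse input maps to angles are only placeholders):
var pivot = new THREE.Object3D();
scene.add(pivot);

camera.position.set(0, 0, 10);   // offset the camera from the pivot along +Z
pivot.add(camera);

// Map your spherical coordinates onto the pivot's rotation.
function rotateView(theta, phi) {
  pivot.rotation.y = theta;
  pivot.rotation.x = phi;
}

// Dolly by translating the camera along its local Z axis.
function dolly(amount) {
  camera.translateZ(amount);     // negative moves toward the pivot
}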
You can rotate the camera by changing its position. See the code I pasted here: https://gamedev.stackexchange.com/questions/79219/three-js-camera-turning-leftside-right
As others are saying OrbitControls.js is an intuitive way for users to manage the camera.
I tackled many of the same issues when building formulatoy.net. I used Morphing Geometries, since I found that mapping 3D math functions to a UV surface requires very little code, and it allowed an easy way to implement different coordinate systems (Cartesian, spherical, cylindrical).
You could use particles instead of a mesh I suppose but a mesh seems best. The lattice material is not too useful if you're trying to understand a surface mathematically. At this point I'm thinking of drawing my own X,Y lines on the surface (or phi, theta lines etc) to better demonstrate cross-sections.
Hope that helps.
You can use trackball controls, with which you can zoom in and out of an object, rotate the object, and pan it. In trackball controls you are moving the camera around the object. The object still rotates with respect to the screen or renderer centre (0, 0, 0).