This is an effect I was able to get working relatively easily in Unity 5, and I'm wondering how I could go about doing the same thing in three.js.
Basically, I am projecting a particular shape (an "asteroids ship", or triangle) onto a curved surface. The main technique is to insert what's called a "cookie" (the technical term is cucoloris) between the projector light and the screen, thus projecting that shape onto the (typically curved) surface.
A picture is worth a thousand words, so here's a screen print of the scene from Unity 5:
I'm just looking for some general guidelines on where to start, not a detailed description. For instance, I see Projector and Raycaster. Would either of these work? Unfortunately, the term "cookie" is overloaded with other meanings, so I can't find any relevant references in searches for "three js cookie".
The cookie itself is regarded as a texture in Unity, as the following screen print shows:
Any advice would be greatly appreciated.
Many Thanks.
While I was never able to figure out how to use cookies with three.js, I ultimately came up with a much better solution using off-screen rendering. This method does not replace the usage of cookies in general, but if you're attempting to use a cookie to project a shape onto an irregularly shaped target surface, then you can instead render your projection scene to an off-screen buffer, and then apply that to the surface of the target object as a dynamic texture.
Here's a screen shot demonstrating what I did:
At the bottom, I am showing the contents of the off-screen buffer where I render the source scene (you do not have to display this buffer; I'm just showing it for illustration purposes). It's simply a flat plane. I move the objects about on that scene in a simple linear fashion, without having to account for the curvature of the target geometry. You can basically think of this as "wrapping paper".
Then you assign this buffer as a texture to the target object, which in this case is the green cylinder in the upper portion of the screen:
this.cylMaterial = new THREE.MeshBasicMaterial()

// Off-screen buffer that receives the flat "wrapping paper" scene
this.bufferGamePlaneTexture = new THREE.WebGLRenderTarget(
    window.innerWidth,
    window.innerHeight,
    { minFilter: THREE.LinearFilter, magFilter: THREE.NearestFilter }
)
...
// Use the buffer's texture as the target object's surface map
this.cylMaterial.map = this.bufferGamePlaneTexture.texture

var projCylGeom = new THREE.CylinderGeometry(3, 3, 12, 50)
this.projCyl = new THREE.Mesh(projCylGeom, this.cylMaterial)
...
// Render to the off-screen buffer in your animation loop.
// (Newer three.js versions drop the third argument; there you call
// renderer.setRenderTarget(target) and then renderer.render(scene, camera).)
bufferRenderer.render(
    this.bufferGamePlaneScene,
    this.bufferSceneCamera,
    this.bufferGamePlaneTexture
)
As you can see, the source shapes end up molded to fit the target geometry, curving as necessary, just as if the shapes were drawn directly on the target surface.
In effect, you're wrapping your target object with the custom-generated "gift wrapping paper" you rendered earlier. The engine takes care of all the necessary transformations, so you can render your source scene with simple "linear" semantics, independent of the target geometry. This also lets you easily apply it to a different surface (e.g. a sphere) without making any changes to the source scene logic.
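For instance, reusing the same render-target-backed material on a sphere only requires swapping the geometry. A minimal sketch (the sphere dimensions here are arbitrary):

// Same "wrapping paper" material, different target surface
var sphereGeom = new THREE.SphereGeometry(5, 50, 50)
var projSphere = new THREE.Mesh(sphereGeom, this.cylMaterial) // reuses the buffer-backed material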
I need to get the camera's up direction, and I've tried many ways with no luck. I'm not an expert with quaternions, so I doubt I did it right.
I've tried:
camera.up
camera.up.applyMatrix4(camera.matrixWorld);
new THREE.Vector3(0,1,0).applyMatrix4(camera.matrixWorld);
camera.up.normalize().applyMatrix4(camera.matrixWorld);
After this I create two planes passing through two points of interest and add plane helpers to the scene, and I can see they are very far from where I was expecting them (I'm expecting two planes that look like the top and bottom of the camera frustum).
P.S. The camera is the shadow camera of a directional light, so an orthographic camera, and I manipulate the directional light's position and target before doing this operation. I've called updateMatrixWorld on the light, on its target, and on the camera; on the camera I've also called updateProjectionMatrix... still no results.
I've made a sandbox to show what I've tried so far and to better visualize what I want to achieve:
https://codesandbox.io/embed/throbbing-cache-j5yse
Once I manage to get the green arrow to point to the top of the blue triangle of the camera helper, I'm good to go.
In the normal render flow, shadow camera matrices are updated as part of rendering the shadow map (WebGLShadowMap.render).
However, if you want the updated matrix values before the render, then you'll need to update them manually (you already understand this part).
The shadow camera is a property of (not a child of) the DirectionalLight. As such, it doesn't follow the same rules as other scene objects when it comes to updating its matrices (because it's not really a child of the scene). Instead, you need to call the shadow property's updateMatrices method (inherited from LightShadow.updateMatrices).
const dl = new THREE.DirectionalLight(0xffffff, 1)
dl.shadow.updateMatrices(dl) // <<------------------------ Updates the shadow camera
This updates the shadow camera with information from the DirectionalLight's own matrix, and its target's matrix, to properly orient the shadow camera.
Finally, it looks like you're trying to get the "world up" of the camera. Personally, I'd use the convenience function localToWorld:
let up = new THREE.Vector3(0, 1, 0)
dl.shadow.camera.localToWorld(up) // destructively converts "up" from local-to-camera into world coordinates
Via trial and error, I've figured out that what gave me the correct result was:
calling
directionalLight.shadow.updateMatrices(...)
and then
new THREE.Vector3(0,1,0).applyQuaternion(directionalLight.shadow.camera.quaternion)
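Putting those two steps together, a minimal sketch (assuming a DirectionalLight named directionalLight whose position and target were just changed):

directionalLight.shadow.updateMatrices(directionalLight) // refresh the shadow camera's matrices
// rotate the local +Y axis by the shadow camera's orientation to get its world-space "up"
const up = new THREE.Vector3(0, 1, 0).applyQuaternion(directionalLight.shadow.camera.quaternion)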
I have a grid of points (Object3Ds using THREE.Points) in my three.js scene, with a model sitting on top of the grid, as seen below. In code the model is called defaultMesh and uses a merged geometry for performance reasons:
I'm trying to work out which of the points in the grid my perspective camera can see at any given moment, i.e. every time the camera position is updated using my orbital controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays intersect the model and remove the points corresponding to those rays from the list of all points, leaving me with a list of points the camera can see.
So far so good; the ray creation and intersection code is placed in the render loop (as it has to be updated whenever the camera is moved), and therefore it's horrendously slow (obviously).
gridPointsVisible = gridPoints.geometry.vertices.slice(0);
startPoint = camera.position.clone();
// create a ray from the camera position to each point in the grid
for (var i = 0; i < gridPoints.geometry.vertices.length; i++) {
    var direction = gridPoints.geometry.vertices[i].clone();
    var vector = new THREE.Vector3();
    vector.subVectors(direction, startPoint);
    var ray = new THREE.Raycaster(startPoint, vector.clone().normalize());
    if (ray.intersectObject(defaultMesh).length > 0) {
        // Array.pop() takes no argument; remove the occluded point explicitly
        gridPointsVisible.splice(gridPointsVisible.indexOf(gridPoints.geometry.vertices[i]), 1);
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have 2 questions:
Is there a better way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar, which is especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors and map them back to your spheres. Any colors that are visible correspond to visible spheres; any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points; you know what color each sphere should be from the same color calculation the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
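As a rough illustration of the readback step (the names pickingScene and pickingTarget, the 256x256 resolution, and the index-to-color encoding are all assumptions for this sketch, not anything prescribed by three.js):

// Sketch: render a flat-color "picking" version of the scene off-screen,
// then read the pixels back and collect the IDs that survived the depth test.
const pickingTarget = new THREE.WebGLRenderTarget(256, 256); // low-res is fine here

renderer.setRenderTarget(pickingTarget);
renderer.render(pickingScene, camera); // scene where each point is drawn in its ID color
renderer.setRenderTarget(null);

const pixels = new Uint8Array(256 * 256 * 4); // RGBA bytes
renderer.readRenderTargetPixels(pickingTarget, 0, 0, 256, 256, pixels);

const visibleIds = new Set();
for (let i = 0; i < pixels.length; i += 4) {
    // decode the same encoding the picking shader wrote: id = R + G*256 + B*65536
    const id = pixels[i] + pixels[i + 1] * 256 + pixels[i + 2] * 65536;
    if (id !== 0) visibleIds.add(id - 1); // 0 is reserved for the background
}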
I'm trying to render both sides of a transparent object with three.js. Other objects located within the transparent object should show through, too. Sadly, I get artifacts I don't know how to handle. Here is a test page: https://dl.dropbox.com/u/3778149/webgl_translucency/test.html
Here is an image of the artifacts. They seem to stem from the underlying sphere geometry.
Interestingly the artifacts are not visible for blending mode THREE.SubtractiveBlending = 2.
Any help appreciated!
Alex
Self-transparency is particularly difficult in WebGL and three.js. You just have to really understand the issues, and then adapt your code to achieve the effect you want.
You can achieve the look of a double-sided, transparent sphere in three.js, with a trick: You need to render two transparent spheres -- one with material.side = THREE.BackSide, and one with material.side = THREE.FrontSide.
Using such methods is generally required if you want self-transparency without artifacts -- especially if you allow the camera or object to move.
three.js r.143
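A minimal sketch of that two-mesh trick (the sphere parameters and material settings here are illustrative):

// Two meshes sharing one geometry: back faces drawn first, then front faces
const sphereGeometry = new THREE.SphereGeometry(1, 32, 16);

const backMaterial = new THREE.MeshPhongMaterial({
    color: 0x2194ce, transparent: true, opacity: 0.5, side: THREE.BackSide
});
const frontMaterial = new THREE.MeshPhongMaterial({
    color: 0x2194ce, transparent: true, opacity: 0.5, side: THREE.FrontSide
});

const backMesh = new THREE.Mesh(sphereGeometry, backMaterial);
const frontMesh = new THREE.Mesh(sphereGeometry, frontMaterial);
frontMesh.renderOrder = 1; // ensure the inside renders before the outside
scene.add(backMesh, frontMesh);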
Generally, to draw transparent objects you need to sort them back to front (I'm guessing three.js already does this). If your objects are convex (like both of those are), then you can sometimes get by with rendering each object twice, once culling front faces and once culling back faces. So, for example, if the cube is inside the sphere you'd effectively do
gl.enable(gl.CULL_FACE);
gl.cullFace(gl.FRONT);  // cull front faces ...
drawSphere();           // ... so this draws the back of the sphere
drawCube();             // draws the back of the cube
gl.cullFace(gl.BACK);   // cull back faces ...
drawCube();             // ... draws the front of the cube
drawSphere();           // draws the front of the sphere
I have no idea how to do that in three.js
This only handles objects that are convex and not intersecting (one object is contained entirely inside the other).
To render that scene correctly with alpha blending, the triangles would have to be rendered from back to front each frame. Your scene is particularly challenging since you have one object inside another, and rendering both sides, which would require rendering part of the sphere, then the cube, then the rest of the sphere. I doubt three.js (or any other scene graph library) can handle this case.
Additive or subtractive blending will work without sorting, but doesn't look as nice.
I struggled with the same problem; this is what I did: make a clone of the original mesh and flip its normals, then make two identical "one-sided" materials, one for each mesh, with different names. Not the classiest approach, but it worked just fine. :P
The .json file looks like this:
{
  "materials":[
    { "name":"ext", "texture":"f_03.jpg", "ambient":[255.0,255.0,255.0], "diffuse":[255.0,255.0,255.0], "specular":[255.0,255.0,255.0], "opacity":0.7 },
    { "name":"int", "texture":"f_03.jpg", "ambient":[255.0,255.0,255.0], "diffuse":[255.0,255.0,255.0], "specular":[255.0,255.0,255.0], "opacity":0.7 }
  ],
  "meshes":[
    {
      "name":"Cylinder001",
      "material":"ext", ...
    {
      "name":"Cylinder002",
      "material":"int", ...
Context: trying to take THREE.js and use it to display conic sections.
Method: creating a mesh of vertices and then connecting Face4s to all of them. I used two faces to produce a front and a back side, so that when the conic section rotates it won't matter from which angle the camera views it.
Problems encountered:
1. Finding a good, intuitive mouse rotation scheme. If you think in spherical coordinates, it feels like making up/down change phi and left/right change theta would work. But that requires being able to move the camera, and as far as I can tell there is no way to actively change the rotation of anything besides the objects. Does anyone know how to change the rotation of the camera or scene?
2. Is there a way to graph functions that is better than creating a mesh? If the mesh has many points it is too slow, and if the mesh has few points you cannot easily make out the shape of the conic sections.
Any sort of help would be most excellent.
I'm still starting to learn Three.js, so I'm not sure about the second part of your question.
For the first part, to change the camera, there is a very good way, which could also include zooming and moving the scene: the trackball camera.
For the exact code and how to use it, you can view:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_trackballcamera_earth.html
At the bottom of this page (http://mrdoob.com/122/Threejs) you can see the example in action (the globe in the third row from the bottom).
There is an orbit control script for the three.js camera.
I'm not sure if I understand the rotation bit. You do want to rotate an object, but you are correct, the rotation is relative.
When you rotate or move your camera, a matrix is calculated for that position/rotation, and it does indeed rotate the scene while keeping the camera static.
This is irrelevant though, because you work in model/world space, and you position your camera in it, the engine takes care of the rotations under the hood.
What you probably want is to set up an object, hook your rotation up to spherical coordinates, and link your camera as a child of this object. Translation along the camera's Z axis relative to the object should mimic your dolly (zoom is an FOV change).
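A minimal sketch of that pivot rig (the name pivot and the way the mouse deltas are hooked up are illustrative):

// Pivot object at the point of interest; the camera orbits it as a child
const pivot = new THREE.Object3D();
scene.add(pivot);

camera.position.set(0, 0, 10); // dolly distance along the pivot's local Z axis
pivot.add(camera);

// hook mouse deltas up to the spherical angles
function onDrag(dTheta, dPhi) {
    pivot.rotation.y += dTheta; // left/right
    pivot.rotation.x += dPhi;   // up/down
}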
You can rotate the camera by changing its position. See the code I pasted here: https://gamedev.stackexchange.com/questions/79219/three-js-camera-turning-leftside-right
As others are saying OrbitControls.js is an intuitive way for users to manage the camera.
I tackled many of the same issues when building formulatoy.net. I used morphing geometries, since I found that mapping 3D math functions to a UV surface requires very little code and allows an easy way to implement different coordinate systems (Cartesian, spherical, cylindrical).
You could use particles instead of a mesh, I suppose, but a mesh seems best. The lattice material is not too useful if you're trying to understand a surface mathematically. At this point I'm thinking of drawing my own X,Y lines (or phi, theta lines, etc.) on the surface to better demonstrate cross-sections.
Hope that helps.
You can use trackball controls to zoom in and out of an object, rotate it, and pan. With trackball controls you are moving the camera around the object; the object still rotates with respect to the screen or renderer centre (0,0,0).
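For reference, a typical setup might look like this (a sketch assuming the TrackballControls example script from three.js is loaded alongside the core library):

const controls = new THREE.TrackballControls(camera, renderer.domElement);

function animate() {
    requestAnimationFrame(animate);
    controls.update(); // must be called each frame for the controls to take effect
    renderer.render(scene, camera);
}
animate();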