When I add an object to the scene and resize it with morph targets, as in this example:
https://threejsdoc.appspot.com/doc/three.js/examples/webgl_morphtargets.html
When I move the camera away from the object's original bounds, but am still looking at the resized parts, the object disappears.
That means that when I stretch a box to 100 times its size, I still need to keep its original center in view; when I look only at the resized part, it becomes invisible.
Do you have any experience with this behavior?
How can I make the object visible along its full length?
Meshes are frustum-culled based on their "un-morphed" geometry.
You can prevent a mesh from being frustum-culled by setting:
mesh.frustumCulled = false;
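For context, a minimal sketch of how this might look when creating a morph-target mesh (the geometry, material, and scene names here are illustrative, not from the question):
// assuming 'geometry' carries morph targets and 'scene' is your THREE.Scene
var material = new THREE.MeshLambertMaterial({ morphTargets: true });
var mesh = new THREE.Mesh(geometry, material);
mesh.frustumCulled = false; // don't cull against the un-morphed bounds, so stretched parts stay visible
scene.add(mesh);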
three.js r.68
Related
I have an existing WebGL canvas that is being rendered without using ThreeJS, and is for all intents and purposes a black box to me, apart from two facts: (1) I have access to the underlying webgl canvas DOM element and can position and resize it on the screen, and (2) I know the properties of the camera for the scene, and get updates on every render cycle for that camera.
The problem I need to solve can be simplified to the following: I need to have my own separate ThreeJS canvas that displays both the black box canvas data, and then elements that I draw, like a cube for a simple example. I can already easily overlay the two canvases, set the transparency on my canvas for everything but the cube, and then align the two with the camera events from the black box library. This works quite well.
The issue with this is that when I draw my objects, like a cube, they don't respect the depth buffer of the black box canvas. So I might have a cube that is properly aligned with the backing scene and movements of the scene, but then it isn't properly masked when something in the black box canvas is closer to the camera than the cube. My thought is that I need to solve this in one of two ways: (1) I can have my renderer write to the other canvas with autoClear = false and preserveDrawingBuffer = true, or (2) I can somehow copy the depth buffer from the black box canvas into my canvas, and then set up my renderer so that it respects the new depth buffer.
I haven't been successful with either approach yet, so I'm wondering if this is possible, and if so which of the above approaches, or what other approach, can solve this problem?
--Edit--
See https://jsfiddle.net/zdxyoajb/ for an Angular/TypeScript implementation of the attempts above. In the following animate loop, if I comment out the overlayRenderer lines, the sphere below renders red and offset from the center (as it should be), but if I leave them in, I get the image below. I also get the following error:
WebGL: INVALID_OPERATION: uniformMatrix4fv: location is not from current program
animate() {
  requestAnimationFrame(() => this.animate());

  // keep the black-box camera in sync with the overlay camera
  this.blackBoxCamera.copy(this.overlayCamera);
  this.blackBoxRenderer.render(this.blackBoxScene, this.blackBoxCamera);

  // reset three.js's cached GL state before the second renderer touches the context
  this.overlayRenderer.state.reset();
  this.overlayRenderer.render(this.overlayScene, this.overlayCamera);
}
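For approach (1), one possible direction, sketched only and under the assumption that the black-box library exposes its canvas and finishes drawing before you do, is to build the overlay renderer on that same canvas and context with autoClear disabled, so your objects composite against the existing color and depth buffers:
// 'blackBoxCanvas', 'overlayScene', and 'overlayCamera' are assumptions, not from the question
const gl = blackBoxCanvas.getContext('webgl');
const overlayRenderer = new THREE.WebGLRenderer({
  canvas: blackBoxCanvas,
  context: gl          // reuse the existing context so both renderers share one depth buffer
});
overlayRenderer.autoClear = false;   // don't wipe the black box's color/depth output

function animate() {
  requestAnimationFrame(animate);
  // the black box draws first on its own cycle, then the overlay draws on top
  overlayRenderer.state.reset();     // the other library invalidates three.js's cached GL state
  overlayRenderer.render(overlayScene, overlayCamera);
}
animate();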
I'm working with an orthographic view in three.js/WebGL renderer, and I want a magnifying glass that tracks with the user mouse. I'm looking for the best way of doing this that's efficient.
When working with raw HTML5 canvas commands, this was easy: I simply defined a circular clip region, zoomed my coordinates, and re-drew the whole scene. With 3D objects, it's less obvious how to do it.
The method I've found so far is the following (a rough sketch appears after the list):
Define a second camera that looks into the zoomed region. Set the orthographic clip coordinates to be small so that it doesn't need to do much work
Create a THREE.WebGLRenderTarget
Tell the renderer and all my line textures that the resolution is about to change
Render the scene into the RenderTarget
Add a CircleGeometry as a mesh at the mouse position (in world coordinates, but above the rest of the scene, close to the camera). Call this the lens.
Give the lens the WebGLRenderTarget as a texture.
Go back to my default camera, reset all my resolution parameters, and redraw the scene with the 'lens' object added.
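A rough sketch of that two-pass loop, assuming zoomCamera, mainCamera, scene, renderer, lensRadius, and mouseWorldPosition are already set up (the names are illustrative, not from the question):
// off-screen target that the lens samples as a texture
const zoomTarget = new THREE.WebGLRenderTarget(512, 512);

// the lens: a circle near the camera, textured with the zoomed render
const lensMaterial = new THREE.MeshBasicMaterial({ map: zoomTarget.texture });
const lens = new THREE.Mesh(new THREE.CircleGeometry(lensRadius, 64), lensMaterial);
scene.add(lens);

function renderFrame() {
  // pass 1: render the zoomed region into the off-screen target
  lens.visible = false;
  renderer.setRenderTarget(zoomTarget);
  renderer.render(scene, zoomCamera);

  // pass 2: render the full scene, with the lens following the mouse
  lens.visible = true;
  lens.position.copy(mouseWorldPosition);
  renderer.setRenderTarget(null);
  renderer.render(scene, mainCamera);
}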
This works (see image below) but I'm worried about parts of it:
I have to render twice per frame
Lines don't draw well because of the resolution problems. I have to keep track of all the materials that need to know the screen resolution and update all of them twice per screen render.
Related problems:
I want to overlay some plot axes on top of this, and possibly gridlines. These would change as the view pans. I'm not sure if I should make these 3d objects, or do it in a 2d canvas context I lay overtop.
I want to overlay some plot lines, and have them show up sensibly in the zoomed view. "Sensible" here is hard to figure out: I don't want them too fat in the zoomed view, but I also don't want to scale them up as much as the image detail (which is being rendered as a texture onto Plane objects behind).
This is a long post, but I'm still new to three.js and looking for good ideas.
I have a grid of points (Object3Ds using THREE.Points) in my Three.js scene, with a model sitting on top of the grid, as seen below. In code the model is called defaultMesh and uses a merged geometry for performance reasons:
I'm trying to work out which of the points in the grid my perspective camera can see at any given moment, i.e. every time the camera position is updated by my orbital controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays are being intersected with the model and remove the points corresponding to those rays from a list of all the points, thus leaving me with a list of points the camera can see.
So far so good; the ray creation and intersection code is placed in the render loop (as it has to be updated whenever the camera moves), and therefore it's horrendously slow (obviously).
gridPointsVisible = gridPoints.geometry.vertices.slice(0);
startPoint = camera.position.clone();
// cast a ray from the camera towards each point in the grid
for (var i = gridPointsVisible.length - 1; i >= 0; i--) {
    direction = gridPointsVisible[i].clone();
    vector.subVectors(direction, startPoint);
    ray = new THREE.Raycaster(startPoint, vector.clone().normalize());
    // if the ray hits the model, treat the point as hidden and remove it
    if (ray.intersectObject(defaultMesh).length > 0) {
        gridPointsVisible.splice(i, 1);   // pop() ignores its argument, so splice out the hit point instead
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have 2 questions:
Is there a better way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar, especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points--and you should know what color each sphere should be by the same color calculation as the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
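A minimal sketch of the read-back step, assuming each point has already been drawn with a unique flat ID color into an off-screen picking scene (the names and the color-to-ID scheme here are illustrative):
// render the ID-colored scene into an off-screen target
const pickingTarget = new THREE.WebGLRenderTarget(256, 256);
renderer.setRenderTarget(pickingTarget);
renderer.render(pickingScene, camera);
renderer.setRenderTarget(null);

// read the pixels back (RGBA, one byte per channel)
const buffer = new Uint8Array(256 * 256 * 4);
renderer.readRenderTargetPixels(pickingTarget, 0, 0, 256, 256, buffer);

// with fewer than 256 points, the red channel alone can encode the point ID
const visibleIds = new Set();
for (let i = 0; i < buffer.length; i += 4) {
  if (buffer[i] > 0) visibleIds.add(buffer[i]);   // 0 is reserved for "not a point"
}
// any grid point whose assigned ID is in visibleIds is visible to the camera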
I'm trying to render both sides of a transparent object with three.js. Other objects located within the transparent object should show through too. Sadly, I get artifacts I don't know how to handle. Here is a test page: https://dl.dropbox.com/u/3778149/webgl_translucency/test.html
Here is an image of the said artifact. They seem to stem from the underlying sphere geometry.
Interestingly the artifacts are not visible for blending mode THREE.SubtractiveBlending = 2.
Any help appreciated!
Alex
Self-transparency is particularly difficult in WebGL and three.js. You just have to really understand the issues, and then adapt your code to achieve the effect you want.
You can achieve the look of a double-sided, transparent sphere in three.js, with a trick: You need to render two transparent spheres -- one with material.side = THREE.BackSide, and one with material.side = THREE.FrontSide.
Using such methods is generally required if you want self-transparency without artifacts -- especially if you allow the camera or object to move.
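A minimal sketch of that trick (the geometry and material values are placeholders, not the exact ones from the question):
// one transparent material per side of the sphere
const backMaterial  = new THREE.MeshPhongMaterial({ color: 0x2194ce, transparent: true, opacity: 0.5, side: THREE.BackSide });
const frontMaterial = new THREE.MeshPhongMaterial({ color: 0x2194ce, transparent: true, opacity: 0.5, side: THREE.FrontSide });

const geometry = new THREE.SphereGeometry(10, 64, 32);

// draw the back faces first, then the front faces, so the inside shows through the outside
const backMesh  = new THREE.Mesh(geometry, backMaterial);
const frontMesh = new THREE.Mesh(geometry, frontMaterial);
backMesh.renderOrder = 1;
frontMesh.renderOrder = 2;

scene.add(backMesh, frontMesh);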
three.js r.143
Generally, to do transparent objects you need to sort them back to front (I'm guessing three.js already does this). If your objects are convex (like both of those are) then you can sometimes get by by rendering each object twice, once culling front faces with gl.cullFace(gl.FRONT) and again culling back faces with gl.cullFace(gl.BACK). So, for example, if the cube is inside the sphere you'd effectively do
gl.enable(gl.CULL_FACE);
gl.cullFace(gl.FRONT);  // keep only back faces
drawSphere(); // draws the back of the sphere
drawCube();   // draws the back of the cube
gl.cullFace(gl.BACK);   // keep only front faces
drawCube();   // draws the front of the cube
drawSphere(); // draws the front of the sphere
I have no idea how to do that in three.js
This only handles objects that are convex and not intersecting (one object is contained entirely inside the other).
To render that scene correctly with alpha blending, the triangles would have to be rendered from back to front each frame. Your scene is particularly challenging since you have one object inside another, and rendering both sides, which would require rendering part of the sphere, then the cube, then the rest of the sphere. I doubt three.js (or any other scene graph library) can handle this case.
Additive or subtractive blending will work without sorting, but doesn't look as nice.
Make a clone of the original mesh and flip its normals; then make two identical "one-sided" materials, each with a different name. Not the classiest approach, but it worked just fine. I struggled with the same problem, and this is what I did :P
The .json file looks like this:
{
    "materials":[
        { "name":"ext", "texture":"f_03.jpg", "ambient":[255.0,255.0,255.0], "diffuse":[255.0,255.0,255.0], "specular":[255.0,255.0,255.0], "opacity":0.7 },
        { "name":"int", "texture":"f_03.jpg", "ambient":[255.0,255.0,255.0], "diffuse":[255.0,255.0,255.0], "specular":[255.0,255.0,255.0], "opacity":0.7 }
    ],
    "meshes":[
        {
            "name":"Cylinder001",
            "material":"ext", ...
        {
            "name":"Cylinder002",
            "material":"int", ...