I'd like to have a collection of objects that appear to be floating free in space but are actually connected to each other so that they move and rotate as one. Can I put them inside a larger mesh that is itself completely invisible, that I can apply transformations to? I tried setting transparency: true on the MeshNormalMaterial constructor, but that didn't seem to have any effect.
As a simple representative example: say I want to render just one pair of opposite corner cubies in a Rubik's Cube, but leave the rest of the Cube invisible. I can rotate the entire cube and watch the effect on the smaller cubes as they move together, or I can rotate them in place and break the illusion that they're part of a larger object.
In this case, I imagine I would create three meshes using BoxGeometry or CubeGeometry, one with a side length triple that of the other two. I would add the two smaller meshes to the larger one, and add the larger one to the scene. But when I try that, I get one big cube and can't see the smaller ones inside it. If I set visible to false on the larger mesh, the smaller meshes disappear along with it, even if I explicitly set visible to true on them.
Group them inside an Object3D.
var parent = new THREE.Object3D();
parent.add( child1 ); // etc
parent.rotation.x = 2; // rotates group
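As a rough sketch of the corner-cubies example from the question (a minimal version; cubieGeometry, cubieMaterial and scene are placeholder names, assuming a standard scene setup):
var rubiks = new THREE.Object3D(); // the "invisible" container; it renders nothing itself

var cubieGeometry = new THREE.BoxGeometry(1, 1, 1);
var cubieMaterial = new THREE.MeshNormalMaterial();

var cornerA = new THREE.Mesh(cubieGeometry, cubieMaterial);
cornerA.position.set(1, 1, 1);    // one corner of the 3x3x3 cube

var cornerB = new THREE.Mesh(cubieGeometry, cubieMaterial);
cornerB.position.set(-1, -1, -1); // the opposite corner

rubiks.add(cornerA);
rubiks.add(cornerB);
scene.add(rubiks);

// transforming the parent transforms both cubies as one rigid object
rubiks.rotation.y = Math.PI / 4;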
I know that with QObjectPicker I can mouse-pick a single entity. But how can I select multiple objects by drawing a rectangle on the screen?
I think this is actually pretty complicated. But here are my two cents:
If you only need to be able to select unoccluded objects
(i.e. you don't need to select occluded ones), you could add a second frame graph branch to your existing one and draw each object with a unique flat color into an offscreen texture. Then retrieve this texture, check which colors lie within the drawn rectangle, look up the corresponding objects and select them (compare to this question/answer).
I'm not sure how well this works in Qt3D because I've always had some issues with QRenderCapture. It didn't seem to matter where I added it in the frame graph; it always captured the last state, so even if you have multiple render targets it might capture the wrong one. Qt3D is still in a pretty rough state, I'd say.
If you need an example of how to render to an offscreen texture check out my example on GitHub.
If you need to be able to select occluded objects too
then it gets pretty complicated. I'm just providing some ideas here. I don't know if they will work.
If you don't have that many objects, maybe you could implement the idea from above for each single object, i.e. for each object you have an offscreen frame graph branch that filters out all other objects. Then you could check each rendered texture for the rectangle drawn with the mouse. But again, I'm not sure how well this works with Qt3D, and if you have many objects (like in a game) it will probably crash because of the many offscreen textures.
You could also implement something like "inverse" frustum culling. In frustum culling, you omit rendering objects that lie outside the view frustum of the camera. You could compute a frustum from the rectangle coordinates drawn with the mouse. Check out the QFrustumCulling code. You would need to compute the planes differently of course, using a modified view matrix. When the user draws the rectangle, compute the frustum and check all objects against it. Unfortunately, this also selects objects whose bounding sphere intersects the frustum, even though the rectangle might not visibly touch any part of the object. If that bothers you, you could directly select all objects whose sphere lies completely within the frustum, and for the objects which only partly intersect, do the intersection test on a per-triangle basis, exiting the computation for the current object as soon as one triangle intersects the frustum. Depending on the number of triangles this could be computationally very costly.
I'd definitely stick to being able to select only unoccluded objects, especially because picking in OpenGL seems to be realized by drawing the objects with colors these days.
I am working on a CAD type system using threejs. I have thin objects next to other objects (think thin 2mm metal sheeting fixed to posts on a building measured in metres). When I am zoomed in it all looks fine. The objects do not intersect at all. As I zoom out the objects get smaller and I end up with cases where the post object 'glimmers' (sort of shows through) the metal sheet object as I rotate it around.
I understand it's the small numbers I am working with that are causing this effect. However, is there a way to set a priority such that one object (the metal sheeting) is more important than another object (the post), so it doesn't get that sort of effect?
To answer the question from the title: it is possible to prioritize draw order with renderOrder:
myMesh.renderOrder = 5
myOtherMesh.renderOrder = 7
It is then possible to apply different depth effects, turn off the depth test, etc.
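For example (a sketch; depthTest is a standard Material property, and the mesh names are the ones from the snippet above):
// draw myOtherMesh after myMesh and skip the depth test for it,
// so it always appears on top of whatever was drawn before it
myOtherMesh.renderOrder = 7;
myOtherMesh.material.depthTest = false;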
Another way is to group objects with layers. Set the appropriate layer mask on the camera and then render (multiple times).
myMesh.layers.set(5);        // this mesh now lives only on layer 5
camera.layers.set(0);        // first pass: the default layer every object starts on
renderer.render(scene, camera);
renderer.autoClear = false;  // keep the first pass (remember to clear manually each frame)
renderer.clearDepth();       // reset depth so the second pass draws on top
camera.layers.set(5);        // second pass: just the prioritized mesh
renderer.render(scene, camera);
This is called z-fighting: two fragments are so close together in the available depth range that their z-values fall within the margin of error, so their true depth order can get inverted from one frame to the next.
The easiest way to resolve this is to reduce the range of your depth buffer. This is controlled by the near and far properties on your camera. You'll need to play with the values to determine what works best for your scenario. The smaller the distance between the two planes, the better your luck at avoiding z-fighting.
For example, if (as a loose estimate) the bounding sphere of your entire model has a diameter of 100, then the distance between near and far need only be 100. However, their values are set as the distance into camera space. So as you zoom out, and your camera moves further away, you should adjust the values to maintain the minimum distance between them. If your camera is at z = 100, then set near = 50 and far = 150. When you pull your camera back to z = 250, then update near = 200 and far = 300.
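As a rough sketch of that bookkeeping (modelCenter and modelRadius are assumed values describing your model's bounding sphere, not something provided by the question):
function updateDepthRange(camera, modelCenter, modelRadius) {
    // keep near/far tight around the model as the camera moves
    var distance = camera.position.distanceTo(modelCenter);
    camera.near = Math.max(0.1, distance - modelRadius); // e.g. 50 when the camera is at z = 100
    camera.far = distance + modelRadius;                 // e.g. 150 when the camera is at z = 100
    camera.updateProjectionMatrix();                     // required after changing near/far
}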
Another option is to use the WebGLRenderer.logarithmicDepthBuffer option. (example)
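Enabling it is just a renderer constructor option (a sketch, assuming you create the renderer yourself):
// trades a little performance for much better depth precision over a large range
var renderer = new THREE.WebGLRenderer({ antialias: true, logarithmicDepthBuffer: true });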
Edit: There is one other cause: the faces of the shapes are actually co-planar. If two triangles are occupying the same space, then you're all but guaranteeing z-fighting.
The simple solution is to move one of the components such that the faces are no longer co-planar. You could also potentially apply a polygonOffset to the sheet metal material, but your use-case doesn't sound like that is appropriate.
I have a grid of points (Object3Ds built with THREE.Points) in my Three.js scene, with a model sitting on top of the grid, as seen below. In the code the model is called defaultMesh and uses a merged geometry for performance reasons:
I'm trying to work out which of the points in the grid my perspective camera can see at any given time, i.e. every time the camera position is updated by my orbital controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays are being intersected with the model and remove the points corresponding to those rays from a list of all the points, thus leaving me with a list of points the camera can see.
So far so good in principle, but the ray creation and intersection code sits in the render loop (as it has to be re-run whenever the camera moves), and therefore it's horrendously slow (obviously).
var startPoint = camera.position.clone();
var gridPointsVisible = [];
// cast a ray from the camera towards each point in the grid
for (var i = 0; i < gridPoints.geometry.vertices.length; i++) {
    var vertex = gridPoints.geometry.vertices[i];
    var direction = new THREE.Vector3().subVectors(vertex, startPoint).normalize();
    var ray = new THREE.Raycaster(startPoint, direction);
    // keep only the points whose ray does not hit the model
    if (ray.intersectObject(defaultMesh).length === 0) {
        gridPointsVisible.push(vertex);
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have 2 questions:
Is there a better way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar, especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points, and you should know what color each sphere should be from the same color calculation the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
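A rough sketch of that approach using three.js built-ins instead of a custom shader (names like pickingScene, pickingTarget and idPoints are placeholders; it assumes the legacy Geometry API used in the question, and a real implementation would reuse the buffers rather than allocate them on every call):
// give every grid point a unique flat color by encoding its index as RGB
var pickingScene = new THREE.Scene();
var pickingTarget = new THREE.WebGLRenderTarget(256, 256); // low resolution is usually enough

var idGeometry = gridPoints.geometry.clone();
for (var i = 0; i < idGeometry.vertices.length; i++) {
    idGeometry.colors[i] = new THREE.Color(i + 1); // index + 1, so 0 stays "background"
}
var idPoints = new THREE.Points(
    idGeometry,
    new THREE.PointsMaterial({ vertexColors: THREE.VertexColors, size: 5 })
);
pickingScene.add(idPoints);

// the model is drawn flat black so it only occludes points and never matches an ID
var occluder = new THREE.Mesh(defaultMesh.geometry, new THREE.MeshBasicMaterial({ color: 0x000000 }));
pickingScene.add(occluder);

function findVisiblePointIndices() {
    renderer.render(pickingScene, camera, pickingTarget); // newer three.js: renderer.setRenderTarget(pickingTarget) first
    var buffer = new Uint8Array(256 * 256 * 4);
    renderer.readRenderTargetPixels(pickingTarget, 0, 0, 256, 256, buffer);
    var visible = {};
    for (var p = 0; p < buffer.length; p += 4) {
        var id = (buffer[p] << 16) | (buffer[p + 1] << 8) | buffer[p + 2];
        if (id > 0) visible[id - 1] = true; // map the color back to the point index
    }
    return visible; // keys are indices into gridPoints.geometry.vertices
}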
I'd like to create some lines that can be modified via the points that connect them.
An example of the initial state.
Here the first one has been moved down, the second one up, and the third one right and down.
On the implementation side I currently have two meshes. The first one is stretched out so that it covers the distance from its starting point to the next point, and the second one marks the starting point.
var meshLine = new THREE.Mesh(boxGeometry, material);
meshLine.position.set(x, y, z);
meshLine.scale.set(1, 1, distancetonextpoint);

var meshPoint = new THREE.Mesh(sphereGeometry, material);
meshPoint.position.set(x, y, z);
meshPoint.scale.set(2, 2, 2);
What I want is that when the user drags a circular point, the connected lines stretch or change their position according to the point being dragged.
Is there a more reasonable solution for this? I feel mine is not particularly good or clean, and I'd have to do quite a bit of heavy lifting to get the movement working.
I've also looked at this example which looks visually very nice but could not integrate it to my system.
You mean you need to edit an object's geometry (here a line) by dragging its vertices.
Vertices can't be dragged by themselves, so you need to loop through the geometry and create little handle spheres at each vertex position;
You set a raycaster to pick those spheres, as in the examples;
Your screen is 2D, so to drag objects in 3D you need a drag surface that passes through the sphere's position. For this you set an invisible plane at the vertex position and make it face the camera;
Once you can correctly drag the spheres, you tell the corresponding vertices on the object (your lines) that they must keep the same position as their spheres;
End with geometry.verticesNeedUpdate = true.
And you have your new geometry.
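A rough sketch of those steps, assuming a legacy-Geometry THREE.Line called lineMesh, an existing scene, camera and renderer, and plain DOM mouse events (all names here are placeholders, not from the question):
// one draggable handle sphere per vertex of the line's geometry
var handles = [];
lineMesh.geometry.vertices.forEach(function (vertex, index) {
    var handle = new THREE.Mesh(
        new THREE.SphereGeometry(2),
        new THREE.MeshBasicMaterial({ color: 0xff0000 })
    );
    handle.position.copy(vertex);
    handle.userData.vertexIndex = index; // remember which vertex this handle controls
    scene.add(handle);
    handles.push(handle);
});

// invisible plane used as the drag surface; re-oriented to face the camera on pick
var dragPlane = new THREE.Mesh(
    new THREE.PlaneBufferGeometry(1000, 1000),
    new THREE.MeshBasicMaterial({ visible: false })
);
scene.add(dragPlane);

var raycaster = new THREE.Raycaster();
var mouse = new THREE.Vector2();
var dragged = null;

function setMouse(event) {
    mouse.set((event.clientX / window.innerWidth) * 2 - 1,
              -(event.clientY / window.innerHeight) * 2 + 1);
}

window.addEventListener('mousedown', function (event) {
    setMouse(event);
    raycaster.setFromCamera(mouse, camera);
    var hits = raycaster.intersectObjects(handles);
    if (hits.length > 0) {
        dragged = hits[0].object;
        dragPlane.position.copy(dragged.position); // plane through the picked vertex
        dragPlane.lookAt(camera.position);         // facing the camera
    }
});

window.addEventListener('mousemove', function (event) {
    if (!dragged) return;
    setMouse(event);
    raycaster.setFromCamera(mouse, camera);
    var hit = raycaster.intersectObject(dragPlane)[0];
    if (!hit) return;
    dragged.position.copy(hit.point);                                         // move the handle
    lineMesh.geometry.vertices[dragged.userData.vertexIndex].copy(hit.point); // and its vertex
    lineMesh.geometry.verticesNeedUpdate = true;
});

window.addEventListener('mouseup', function () { dragged = null; });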
For code details on picking objects, look at the official draggable cubes picking example.
This example shows how to use it for editing objects.
Comment if you need more explanation.
When several objects overlap on the same plane, they start to flicker. How do I tell the renderer to put one of the objects in front?
I tried to use .renderDepth, but it only works partly -
see example here: http://liveweave.com/ahTdFQ
Both boxes have the same size and it works as intended: I can change which of the boxes is visible by setting .renderDepth. But if one of the boxes is a bit smaller (say 40, 50, 50), the touching faces flicker and renderDepth doesn't work anymore.
How to fix that issue?
When .renderDepth doesn't work, you have to set the depth offsets yourself.
Moving whole meshes around is indeed not really efficient.
What you are looking for are offsets bound to materials:
material.polygonOffset = true;
material.polygonOffsetFactor = -0.1;
should solve your issue. See update here: http://liveweave.com/syC0L4
Use negative factors to bring a surface forward (so it shows) and positive factors to push it back (so it is hidden).
For starters, try reducing the far range on your camera; try 1000. Generally speaking, you shouldn't have overlapping faces in your 3D scene unless they are treated in a VERY specific way (look up the term 'decal textures'/'decals'). So basically, you have to create depth offsets, and perhaps even pre-sort the objects when doing this, which all requires pretty low-level tinkering.
If reducing the far range helps, then you're experiencing a lack of depth precision (which depends on the device). Also look up 'z-fighting'.
UPDATE
Don't overlap planes.
How do I tell the renderer to put one of the objects in front?
You put one object in front of the other :)
For example, if you have a camera at (0,0,0) looking at an object at (0,0,10), and you want another object to be behind the first one, put it at (0,0,11) and it should work.
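In three.js terms (a trivial sketch; nearObject and farObject are placeholder meshes):
camera.position.set(0, 0, 0);
camera.lookAt(new THREE.Vector3(0, 0, 10));
nearObject.position.set(0, 0, 10);
farObject.position.set(0, 0, 11); // one unit farther along the view direction, so it sits behind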
UPDATE2
What is z-buffering:
http://en.wikipedia.org/wiki/Z-buffering
http://msdn.microsoft.com/en-us/library/bb976071.aspx
Take note of "floating point in range of 0.0 - 1.0".
What is z-fighting:
http://en.wikipedia.org/wiki/Z-fighting
"...have similar values in the z-buffer. It is particularly prevalent with coplanar polygons, where two faces occupy essentially the same space, with neither in front. Affected pixels are rendered with fragments from one polygon or the other arbitrarily, in a manner determined by the precision of the z-buffer."
"The renderer cannot reposition anything."
I think this is completely untrue. The renderer can reposition everything, and probably does unless it's Shadertoy or some video filter or something. Every time you move your camera the renderer repositions everything (the camera is actually the only thing that does NOT move).
It seems that you are missing some crucial concepts here; I'd start with this:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
About the depth offset mentioned:
Here's how this would work: say you want to draw a decal on a surface. You can 'draw' another mesh onto this surface, for instance by projecting a quad onto it. You want to draw a bullet hole over a concrete wall, and you end up with two coplanar surfaces: the wall and the bullet hole. You can figure out the depth-buffer precision, find the smallest representable value, and then move the bullet-hole mesh by that value towards the camera. The object does not get scaled (you're doing this in NDC, which you can visualize as a cube, moving planes back and forth in the smallest possible increment), but it does translate in the depth direction, ending up in front of the other surface.
I don't see any flicker. The cube movement in 3D seems to be super smooth. Can you try on a different computer (maybe a faster one)? I used Chrome on a MacBook Pro.