What’s a good way to have click targets that are larger than the actual scene object?
So far we have been using a larger invisible (yet raycastable) object to do this but it comes at the cost of requiring two draw calls instead of one.
Are there any better solutions?
There is no additional draw call if you set Object3D.visible to false. However, you can still perform raycasting against invisible 3D objects. Use Raycaster.layers to selectively ignore 3D objects when performing intersection tests.
So what you are doing is already fine. You might want to consider raycasting only against bounding volumes if raycasting performance becomes a bottleneck in your app. The idea is to create an instance of Box3 (AABB) or Sphere (bounding sphere) for your actual scene object and use only that for raycasting.
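Here is a minimal sketch of both options in three.js (pointer, camera and object are placeholders for your own variables):

// Option 1: an enlarged, invisible hit proxy. Setting visible to false
// removes the draw call, but the mesh can still be raycast against.
const proxy = new THREE.Mesh(
    new THREE.SphereGeometry( 2 ),               // larger than the visible object
    new THREE.MeshBasicMaterial()
);
proxy.visible = false;
object.add( proxy );                             // proxy follows the object

const raycaster = new THREE.Raycaster();
raycaster.setFromCamera( pointer, camera );      // pointer in normalized device coords
const hits = raycaster.intersectObject( proxy );

// Option 2: skip triangle tests entirely and intersect a bounding sphere.
object.geometry.computeBoundingSphere();
const sphere = object.geometry.boundingSphere.clone()
    .applyMatrix4( object.matrixWorld );
sphere.radius *= 1.5;                            // enlarge the click target
const hit = raycaster.ray.intersectsSphere( sphere );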
I know that with QObjectPicker I can mouse pick a single entity. But how can I select multiple objects by drawing a rectangle on the screen?
I think this is actually pretty complicated. But here are my two cents:
If you only need to be able to select unoccluded objects
(i.e. don't need to select occluded ones) you could add a second frame graph branch to your existing one and draw each object with a unique color into an offscreen texture. Then retrieve this texture, check which colors lie within the drawn rectangle, and retrieve and select the corresponding objects (compare to this question/answer).
I'm not sure how well this works in Qt3D because I've always had some issues with QRenderCapture. It didn't seem to matter where I added it in the frame graph, i.e. it always captured the last state, so even if you have multiple render targets it might capture the wrong one. Qt3D is still in a pretty rough state, I'd say.
If you need an example of how to render to an offscreen texture check out my example on GitHub.
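Independent of Qt3D, the decoding step itself is straightforward: read the offscreen texture back into a pixel buffer and collect the unique colors inside the rectangle. A minimal sketch in JavaScript, assuming pixels is an RGBA byte buffer of the captured image:

function idsInRectangle( pixels, imageWidth, rect ) {
    const ids = new Set();
    for ( let y = rect.y; y < rect.y + rect.height; y ++ ) {
        for ( let x = rect.x; x < rect.x + rect.width; x ++ ) {
            const i = ( y * imageWidth + x ) * 4;     // 4 bytes per pixel (RGBA)
            const id = ( pixels[ i ] << 16 ) | ( pixels[ i + 1 ] << 8 ) | pixels[ i + 2 ];
            if ( id !== 0 ) ids.add( id );            // 0 = background, no object
        }
    }
    return ids;                                       // look up objects in your own id table
}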
If you need to be able to select occluded objects too
then it gets pretty complicated. I'm just providing some ideas here. I don't know if they will work.
If you don't have that many objects, maybe you could implement the idea from above for each single object, i.e. for each object you have an offscreen frame graph branch that filters out all other objects. Then you could check each rendered texture for the rectangle drawn with the mouse. But again, I'm not sure how well this works with Qt3D, and if you have many objects (like in a game) it will probably crash because of the many offscreen textures.
You could also implement something like "inverse" frustum culling. In frustum culling, you omit rendering objects that lie outside the view frustum of the camera. Here, you would compute a frustum from the rectangle coordinates drawn with the mouse. Check out the QFrustumCulling code; you would need to compute the planes differently of course, using a modified view matrix. When the user draws the rectangle, compute the frustum and check all objects. Unfortunately, this also selects objects whose bounding sphere merely intersects the frustum, even though you might not visibly touch any part of the object. If that bothers you, you could directly select all objects whose sphere lies completely within the frustum, and for all objects which only partly intersect, do the intersection computation on a per-triangle basis, exiting the computation for the current object as soon as a triangle intersects the frustum. Depending on the number of triangles this could be computationally very costly.
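A minimal sketch of that classification step, assuming the selection frustum is given as six inward-facing planes (each with a unit normal and a constant, so that normal · p + constant >= 0 means "inside"):

function classifySphere( planes, center, radius ) {
    let fullyInside = true;
    for ( const p of planes ) {
        const dist = p.normal.x * center.x + p.normal.y * center.y
                   + p.normal.z * center.z + p.constant;
        if ( dist < - radius ) return 'outside';      // completely behind one plane
        if ( dist < radius ) fullyInside = false;     // straddles this plane
    }
    return fullyInside ? 'inside' : 'intersecting';   // 'intersecting' -> per-triangle test
}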
I'd definitely stick to being able to select only unoccluded objects, especially because picking in OpenGL seems to be realized by drawing the objects with colors these days.
The following illustrates the rendering order I would like to obtain for two plane geometries:
http://jsfiddle.net/Axy2F/8/
This works fine under r58 but under r61 the red square is obscured regardless of how I structure the scene graph. I'm unclear whether this is a bug in r61, or whether I was doing things incorrectly in r58, in a way that just happened to work.
Am I right in assuming that behind.add(child) should suffice to have the red square "beneath" the indigo one in the scene graph, and therefore rendered on top of it?
If not, what is the correct way to establish the rendering order by controlling the construction of the scene graph (in a way that works with r61)? I would like to avoid setting renderDepth explicitly. Note that setting renderer.sortObjects to false does not help.
The object that is in front is the object that is closest to the camera. Being a child has nothing to do with it.
Both your objects have position ( 0, 0, 0 ), so they are the same distance from the camera.
This will lead to z-fighting, which is worse with CanvasRenderer than it is with WebGLRenderer.
Change the position of the child to render it in front. For example,
child.position.z = 1;
FYI, r.61 has a different tie-breaker rule than r.58 did. This is why the rendering is different in r.61.
I can't wrap my head around this and was hoping someone might be able to help me. I have an Object3D which is being placed on a terrain (plane mesh). I create a Ray positioned at the object but really high up, and find the intersection point with the terrain (essentially the Y intersection). Once I have that, I position the object on the terrain at that point.
From the intersectObject function I also get the Face, which contains the normal at that point. What I'd like to do is align the mesh so that it has the same orientation as the surface it's standing on.
Once the object is aligned with the world I also need it to face a target its heading towards. Currently I'm using the lookAt function to achieve this.
So I guess my question is in two parts. The first is how to align the object with the terrain surface. And the second is how to get that object to face a target without messing up the first calculation? (I guess this could be achieved with a child node? The parent node aligned to the surface and the child mesh looking at the target?)
Thanks guys
Mat
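For what it's worth, here is a rough sketch of the parent/child idea from the question in three.js (terrain, target, holder and mesh are placeholder names; it assumes the terrain is not rotated, so the face normal is already in world space):

// Parent node (holder): aligned with the ground. Child node (mesh): the visible model.
const raycaster = new THREE.Raycaster(
    new THREE.Vector3( holder.position.x, 1000, holder.position.z ),  // high above
    new THREE.Vector3( 0, - 1, 0 )                                    // straight down
);
const hit = raycaster.intersectObject( terrain )[ 0 ];
if ( hit ) {
    holder.position.y = hit.point.y;                // snap the parent onto the terrain
    holder.quaternion.setFromUnitVectors(           // tilt its up axis onto the face normal
        new THREE.Vector3( 0, 1, 0 ), hit.face.normal );
    mesh.lookAt( target.position );                 // child handles the heading; lookAt takes
                                                    // world coordinates in recent three.js
}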
I am trying to create a scene graph for my 3D game in which every object's transformation is relative to its parent. Each node of the graph has a Rotation, Scaling and Translation vector.
What is the correct way of combining the relative transformation matrices to get the absolute transformation of an object? I would be glad if you could also explain your solution.
Here is an example of how to do it WRONG. (Edit: this actually turned out to be the solution!)
Matrix GetAbsoluteTransformation()
{
    if (!this.IsRoot())
    {
        // Apply this node's local transform first, then the parent's
        // accumulated transform (row-vector convention, as in XNA:
        // v * child * parent * ... * root).
        return this.Transformation * this.Parent.GetAbsoluteTransformation();
    }
    else
    {
        // The root's local transform is already absolute.
        return this.Transformation;
    }
}
In this case, when the parent node is rotated, scaled or moved, the child is transformed by the same amount, which is the correct behaviour!
But the child only rotates around its own origin and does not move around the parent's origin.
Applications:
There is a car model with four wheels. The wheels are positioned relative to the car's origin and can rotate just like real wheels do. If I now rotate the car, the wheels should stay attached to it at all times. In this case the car is the root and the wheels are the child nodes.
Another example would be a model of the solar system. Planets rotate around their own axis, move around the sun, and moons move around planets.
With regards to your "how to do it wrong", I hate to break it to you, but it's not wrong; rather, it's incomplete. You need to do those kinds of updates independently of the parent-child relationship.
Here's an example of that: The wheel is attached to the car just like you mentioned. If you translate or rotate the car, you don't need to touch the wheels - they're in the same location relative to the car. However, when you try to get the wheel's new location in the 'real world', you have to traverse down the tree, applying the matrix transformations as you go. That all works, right?
When you rotate an object, it rotates around its OWN origin. So a wheel should probably be rotating around its y axis, and a planet around its z axis. But if you need to move a planet "around the sun", you're doing something completely different, and it has to be calculated separately. That's not to say it won't be eased by some of the same math you already have (although I can't say for sure without doing the math myself), but it's an entirely different operation.
You're looking at actually changing the state of the object. That's the beauty of the scene graph! If you didn't have a scene graph, you would have to figure out all the various values all the way back to the main scene and then do all kinds of math. Here, you just have to do a little bit of trig and algebra to move the planet around its star (you can google the celestial mechanics) and update the planet's position relative to it. The next time the main scene asks where the planet is, the answer just comes down the scene graph! :-D
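To make that concrete, a small sketch in plain JavaScript, with hypothetical node fields matching the Rotation/Translation vectors from the question:

// Advance a planet that is a child of its star's node. Only the planet's
// local translation and rotation change; the scene graph propagates the rest.
function updatePlanet( planet, dt ) {
    planet.orbitAngle += planet.orbitSpeed * dt;                     // the "celestial mechanics"
    planet.translation.x = Math.cos( planet.orbitAngle ) * planet.orbitRadius;
    planet.translation.z = Math.sin( planet.orbitAngle ) * planet.orbitRadius;
    planet.rotation.y += planet.spinSpeed * dt;                      // spin around its own axis
}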
I think this is the correct behavior.
I don't think rotating around the parent's origin is something that can be accomplished with a simple matrix stack. I think you can only propagate from parents to children.
You could try instead setting the relative rotation and transformation based on offsets from the parent's absolute origin, though that's a lot more calculations than simply pushing onto a matrix stack.
Here's a similar question: Getting absolute position and rotation from inside a scene graph?
It depends on whether you are using OpenGL or Direct3D. In OpenGL, transforms are applied right-to-left, whereas in Direct3D, they apply left-to-right. They are both perfectly valid ways of designing the transform system, but it means you have to think about them differently.
I find it easiest to think in OpenGL's system, but in reverse: instead of thinking about how the vertices of an object move around as transforms are applied right-to-left, I imagine the coordinate system of the object being transformed in left-to-right order. Each transform is applied relative to the new local coordinate system, not relative to the world.
In the case of the wheels on the car, there are three transforms involved: the car's position in space, a wheel's origin relative to the car, and the wheel's orientation relative to its neutral position. Simply apply these in left-to-right order (or the reverse for Direct3D). To draw four wheels, first apply the car's transform and remember it, then for each wheel apply its location and orientation transforms in turn, returning to the remembered car transform after each wheel.
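As an illustration of the "remember the car transform" idea, here is a sketch using three.js matrices, where multiply() post-multiplies and therefore matches the left-to-right, local-coordinate reading described above (carPosition, carHeading and wheels are placeholders):

const carMatrix = new THREE.Matrix4()
    .makeTranslation( carPosition.x, carPosition.y, carPosition.z )
    .multiply( new THREE.Matrix4().makeRotationY( carHeading ) );

for ( const wheel of wheels ) {
    const wheelMatrix = carMatrix.clone()                             // the remembered transform
        .multiply( new THREE.Matrix4().makeTranslation(
            wheel.offset.x, wheel.offset.y, wheel.offset.z ) )        // wheel origin on the car
        .multiply( new THREE.Matrix4().makeRotationX( wheel.spin ) ); // wheel's own rotation
    wheel.mesh.matrixAutoUpdate = false;                              // use our matrix as-is
    wheel.mesh.matrix.copy( wheelMatrix );
}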