What would be a better parctice, writing the drawing method inside the GameObject class or in the Game class?
GameObject obj = new GameObject();
obj.Draw();
Or
GameObject obj = new GameObject();
DrawGameObject(obj);
It depends on what Game and GameObject are responsible for. Typically, a GameObject would contain all the information about what to draw -- it'd have a model, for instance. But objects aren't independent of each other -- knowing what to draw isn't sufficient to know how to draw it.
So a Game probably knows a lot more than a GameObject. In particular, it might know:
whether or not a particular object should be drawn (e.g. it's occluded based on the current camera)
how to draw a particular object (shadows from other objects; reflections from nearby surfaces; ambient light factors; etc.)
etc.
Of the two options presented, it probably makes more sense to put the Draw method outside the GameObject.
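For illustration, here is a minimal TypeScript-style sketch of the second option; all names (Game, drawGameObject, isOccluded) are illustrative, not from any particular engine:

    // Sketch: the GameObject carries only the data needed to describe itself;
    // the Game, which knows about the camera, lighting, and other objects,
    // decides whether and how to draw it.
    interface GameObject {
        model: string;                        // stand-in for mesh/material data
        position: [number, number, number];
    }

    class Game {
        drawGameObject(obj: GameObject): void {
            if (this.isOccluded(obj)) return; // Game-level knowledge: culling
            // ...gather lights, shadows, reflections, then submit obj.model...
        }

        private isOccluded(obj: GameObject): boolean {
            return false; // placeholder: a real test would use the camera and scene
        }
    }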
What’s a good way to have click targets that are larger than the actual scene object?
So far we have been using a larger invisible (yet raycastable) object to do this but it comes at the cost of requiring two draw calls instead of one.
Are there any better solutions?
There is no additional draw call if you set Object3D.visible to false. However, you can still perform raycasting against invisible 3D objects. Use Raycaster.layers to selectively ignore 3D objects when performing intersection tests.
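In code, that combination might look like the following small three.js sketch, where the oversized proxy object is an assumption standing in for your invisible click target:

    import * as THREE from 'three';

    // Sketch: an invisible proxy costs no draw call but is still pickable.
    const proxy = new THREE.Mesh(
        new THREE.SphereGeometry(2),          // larger than the visible object
        new THREE.MeshBasicMaterial()
    );
    proxy.visible = false;     // skipped by the renderer, so no extra draw call
    proxy.layers.enable(1);    // put the proxy on layer 1

    const raycaster = new THREE.Raycaster();
    raycaster.layers.set(1);   // this raycaster now only tests layer-1 objects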
So what you are doing is already fine. You might want to consider raycasting only against bounding volumes if raycasting performance becomes a bottleneck in your app. The idea is to create an instance of Box3 (an AABB) or Sphere (a bounding sphere) for your actual scene object and use only that for raycasting.
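A hedged sketch of that bounding-volume idea, using the real Ray.intersectsSphere but a hypothetical hits helper:

    import * as THREE from 'three';

    // Test the ray against a world-space bounding sphere before (or instead
    // of) running the full per-triangle Raycaster test.
    const worldSphere = new THREE.Sphere();

    function hits(mesh: THREE.Mesh, raycaster: THREE.Raycaster): boolean {
        if (mesh.geometry.boundingSphere === null) {
            mesh.geometry.computeBoundingSphere();
        }
        worldSphere.copy(mesh.geometry.boundingSphere!).applyMatrix4(mesh.matrixWorld);
        // Inflating the radius here also gives you a larger click target
        // without a second invisible object.
        return raycaster.ray.intersectsSphere(worldSphere);
    }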
I know that with QObjectPicker I can mouse pick a single entity. But how can I select multiple objects by drawing a rectangle on the screen?
I think this is actually pretty complicated. But here are my two cents:
If you only need to be able to select unoccluded objects
(i.e. you don't need to select occluded ones), you could add a second frame graph branch to your existing one and draw each object with a unique color, but to an offscreen texture. Then retrieve this texture, check which colors lie within the drawn rectangle, and select the corresponding objects (compare to this question/answer).
I'm not sure how well this works in Qt3D because I've always had some issues with QRenderCapture. It didn't seem to matter where I added it in the frame graph, i.e. it always captured the last state, so even with multiple render targets it might capture the wrong one. Qt3D is still in a pretty rough state, I'd say.
If you need an example of how to render to an offscreen texture check out my example on GitHub.
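Whatever framework does the offscreen rendering, the readback step boils down to decoding the object IDs found inside the rectangle. A hedged TypeScript sketch, where idsInRectangle and the RGB packing scheme are illustrative assumptions:

    // Map colour-ID pixels read back from the offscreen texture to the
    // object IDs they encode. Assumes each object was drawn with its
    // 24-bit ID packed into the RGB channels, 0 meaning "background".
    function idsInRectangle(
        pixels: Uint8Array,   // RGBA bytes of the full offscreen texture
        texWidth: number,
        rect: { x: number; y: number; w: number; h: number }
    ): Set<number> {
        const ids = new Set<number>();
        for (let y = rect.y; y < rect.y + rect.h; y++) {
            for (let x = rect.x; x < rect.x + rect.w; x++) {
                const i = (y * texWidth + x) * 4;
                const id = (pixels[i] << 16) | (pixels[i + 1] << 8) | pixels[i + 2];
                if (id !== 0) ids.add(id);
            }
        }
        return ids;
    }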
If you need to be able to select occluded objects too
then it gets pretty complicated. I'm just providing some ideas here. I don't know if they will work.
If you don't have that many objects, maybe you could implement the idea above for each single object, i.e. each object gets an offscreen frame graph branch that filters out all other objects, and you then check each rendered texture against the rectangle drawn with the mouse. But again, I'm not sure how well this works with Qt3D, and with many objects (like in a game) it will probably crash because of the many offscreen textures.
You could also implement something like "inverse" frustum culling. In frustum culling, you omit rendering objects that lie outside the camera's view frustum; here, you would instead compute a frustum from the rectangle coordinates drawn with the mouse. Check out the QFrustumCulling code. You would need to compute the planes differently, of course, using a modified view matrix. When the user draws the rectangle, compute the frustum and check all objects against it. Unfortunately, this also selects objects whose bounding sphere merely intersects the frustum, even though you might visibly not touch any part of the object. If that bothers you, you could directly select all objects whose sphere lies completely within the frustum, and for the objects that only partly intersect, do the intersection test on a per-triangle basis, exiting the computation for the current object as soon as one triangle intersects the frustum. Depending on the number of triangles, this could be computationally very costly.
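The plane computation can be expressed as a "pick matrix" composed with the camera's projection and view matrices. Here is a hedged sketch of that math using three.js types purely for illustration -- the approach itself is framework-agnostic, and selectionFrustum is a hypothetical helper:

    import * as THREE from 'three';

    // Build a selection frustum from a rectangle given in normalized device
    // coordinates. The pick matrix remaps the rectangle onto the full clip
    // volume, so the standard frustum extraction yields the sub-frustum.
    function selectionFrustum(
        camera: THREE.Camera,
        min: THREE.Vector2,   // rectangle corners in NDC, min < max
        max: THREE.Vector2
    ): THREE.Frustum {
        const sx = 2 / (max.x - min.x);
        const sy = 2 / (max.y - min.y);
        const tx = -(max.x + min.x) / (max.x - min.x);
        const ty = -(max.y + min.y) / (max.y - min.y);
        const pick = new THREE.Matrix4().set(
            sx, 0,  0, tx,
            0,  sy, 0, ty,
            0,  0,  1, 0,
            0,  0,  0, 1
        );
        const m = new THREE.Matrix4()
            .multiplyMatrices(pick, camera.projectionMatrix)
            .multiply(camera.matrixWorldInverse);
        return new THREE.Frustum().setFromProjectionMatrix(m);
    }

    // frustum.intersectsObject(mesh) then performs exactly the
    // bounding-sphere test described above.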
I'd definitely stick to being able to select only unoccluded objects, especially because picking in OpenGL seems to be realized by drawing the objects with colors these days.
Basically, I want to take a model, create a point cloud of the model and have the models move around. I could populate the models and move them around on the CPU, but I want to find a way to handle the animation on the GPU instead.
So I was thinking of using a THREE.Points object and associating a mesh with each point. I know you can associate a sprite with each point in a THREE.Points object, as seen in this example:
https://threejs.org/examples/?q=point#webgl_points_sprites
Is there a way to associate a mesh (namely, an imported model) with each point, so that I can animate the vertices (and thus, the models) on the GPU with a vertex shader?
Yes, there is a way: look into THREE.InstancedBufferGeometry.
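A hedged sketch of that approach: the instanceOffset attribute and time uniform are illustrative names, and a SphereGeometry stands in for the BufferGeometry of your imported model.

    import * as THREE from 'three';

    // Reuse one base geometry for many instances and move each instance
    // in the vertex shader, so the per-frame animation stays on the GPU.
    const base = new THREE.SphereGeometry(0.1, 8, 8); // stand-in for a loaded model
    const geometry = new THREE.InstancedBufferGeometry();
    geometry.index = base.index;
    geometry.setAttribute('position', base.getAttribute('position'));
    geometry.setAttribute('normal', base.getAttribute('normal'));

    const COUNT = 1000;
    geometry.instanceCount = COUNT;
    const offsets = new Float32Array(COUNT * 3);
    for (let i = 0; i < COUNT * 3; i++) {
        offsets[i] = (Math.random() - 0.5) * 10; // scatter instances
    }
    geometry.setAttribute('instanceOffset',
        new THREE.InstancedBufferAttribute(offsets, 3));

    const material = new THREE.ShaderMaterial({
        uniforms: { time: { value: 0 } },
        vertexShader: `
            attribute vec3 instanceOffset;
            uniform float time;
            void main() {
                vec3 p = position + instanceOffset;
                p.y += sin(time + instanceOffset.x);  // per-instance motion on the GPU
                gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
            }`,
        fragmentShader: `
            void main() { gl_FragColor = vec4(1.0); }`
    });

    const mesh = new THREE.Mesh(geometry, material);
    // In the render loop: material.uniforms.time.value = clock.getElapsedTime();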
I would like to create an object at the center of a scene (there is nothing else) in three.js, and I need controls. I do not want to use Orbit or Trackball controls.
So the question is: which is better for performance, rotating the camera around the object or rotating the object?
In general, and I don't really need to say this, fewer operations yield better performance.
For your simple example case...
If you have a camera, a light, and a single object, then moving the object will be faster. Even if you parent the light to the camera, moving the camera/light pair takes more matrix operations than rotating the single object.
But that's where it ends.
If you have more objects than you do camera+lights, then it's better to rotate the camera and lights around the object(s). This is the approach used by most controls (including TrackballControls and OrbitControls) because you almost always have more scene objects than cameras+lights.
Recommendation: Unless you're never going to use your control for scenes with more than one object, rotate the camera around the objects. Just watch out for gimbal lock.
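For concreteness, a minimal three.js sketch of both approaches; the box object, camera parameters, and the fixed look-at target are illustrative assumptions:

    import * as THREE from 'three';

    const camera = new THREE.PerspectiveCamera(50, 1, 0.1, 100);
    const object = new THREE.Mesh(
        new THREE.BoxGeometry(),
        new THREE.MeshNormalMaterial()
    );
    let angle = 0;

    // Single object: rotating the object itself is the cheaper option.
    function spinObject(dt: number): void {
        object.rotation.y += dt;
    }

    // Many objects: orbit the camera around them instead.
    function orbitCamera(dt: number): void {
        angle += dt;
        camera.position.set(Math.sin(angle) * 5, 2, Math.cos(angle) * 5);
        camera.lookAt(0, 0, 0); // re-aim each frame; avoids gimbal-lock surprises
    }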
It's my first time creating an iPhone game. I would like to move a ball. I think there are two ways to move an object in OpenGL:
Let each object have its own glViewport, and change its origin to move the object.
Let each object have its own translation matrix.
I moved one object using glViewport, but I'm not sure whether this is the best practice.
Whenever you render an object (if you are new, I assume you draw within some glBegin/glEnd block), load the GL_MODELVIEW_MATRIX and apply your transformations to it (translation, rotation, scale). The modelview matrix should be set before starting your glBegin/glEnd block.
BTW, it is not a wise idea to move the camera too much unless it is really required.