I am building a very simple game with Away3D and I currently have a character imported from Maya and objects for him to hold.
The question is, how can I correctly position an object at the character's hand if he is constantly being animated (breathing, walking, etc.)?
Thanks!
It depends on the animation you are using.
1) Skeletal animation has a joint for each area of the avatar that moves. You can extract joint transforms from the SkeletonAnimator's globalMatrices property - this returns a concatenated array of the 4x4 transform matrices for every joint, from which you can grab the transform of the joint you want to use as the attachment position (see the sketch after this list).
2) Vertex-based animation uses a geometry object for each frame and interpolates between them. Because this calculation is done on the GPU, you need to recalculate the interpolation for a vertex (or set of vertices) yourself before you can derive a position. This can be done by accessing the activeState property, casting it as a VertexClipState, and reading its currentGeometry and nextGeometry properties. It's less straightforward than skeletal animation, and you also get less information about the avatar's pose (no rotation info), which makes things like an avatar holding a sword a little trickier, but it can be done.
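To make the skeletal case (1) concrete: the core step is just indexing into that flat array of 16 floats per joint and composing the joint's matrix with the mesh's scene transform. The sketch below shows the idea in C++ rather than ActionScript, with a made-up Matrix4 helper and an assumed column-major layout; in Away3D you would read the floats from SkeletonAnimator.globalMatrices and apply the result to the held object's transform each frame.

```cpp
#include <array>
#include <vector>

// Minimal 4x4 matrix type used only for this sketch (column-major storage assumed).
struct Matrix4 {
    std::array<float, 16> m{};
    static Matrix4 fromFlat(const float* src) {   // copy one matrix out of a big flat array
        Matrix4 r;
        for (int i = 0; i < 16; ++i) r.m[i] = src[i];
        return r;
    }
};

// Column-major matrix product a * b.
Matrix4 multiply(const Matrix4& a, const Matrix4& b) {
    Matrix4 r;
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a.m[k * 4 + row] * b.m[col * 4 + k];
            r.m[col * 4 + row] = sum;
        }
    return r;
}

// Given the concatenated joint matrices (assumed 16 floats per joint), grab the hand
// joint's pose and compose it with the character mesh's world transform. The result is
// what you would assign to the held object every frame.
Matrix4 attachmentTransform(const std::vector<float>& globalMatrices,
                            int handJointIndex,
                            const Matrix4& meshWorldTransform) {
    const float* jointData = globalMatrices.data() + handJointIndex * 16;
    Matrix4 jointPose = Matrix4::fromFlat(jointData);
    return multiply(meshWorldTransform, jointPose);
}
```

Whether the matrices are row- or column-major, and which side you multiply on, depends on the engine's conventions, so check that against Away3D's documentation.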
I am modeling a pouch in Blender and displaying it in an html environment using Three.js. I'm using a small cube object as a zipper bulge toward the top, and am making its offset from the top of the pouch customizable (up and down only). I want the main pouch object to be flattened above the zipper, the same way it is in the image I'm adding.
My pouch model
I tried making the model with a separate object for the pouch body which could shrink and reveal an underlying flattened cube as the top part, but the texture I was using got distorted when the bottom part was scaled down (example below). Is anyone able to point me toward a solution for this? I believe anchoring the UV map at the bottom of the object and allowing the texture to run off (as opposed to scale) as the object shortens could solve this, but I don't know how to do that.
non-scaled reference
distortion with scale
I know that with QObjectPicker I can mouse pick a single entity. But how can I select multiple objects by drawing a rectangle on the screen?
I think this is actually pretty complicated. But here are my two cents:
If you only need to be able to select unoccluded objects
(i.e. you don't need to select occluded ones), you could add a second frame graph branch to your existing one and draw each object with a unique color, but to an offscreen texture. Then retrieve this texture, check which colors lie within the drawn rectangle, and select the corresponding objects (compare to this question/answer).
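As a rough illustration of the second half of that idea, here is a sketch of the CPU side: assuming you have already read the offscreen texture back into an RGBA8 buffer somehow, you scan the pixels inside the rubber-band rectangle and map each unique color back to an entity id. The color packing and the function names are made up for illustration.

```cpp
#include <algorithm>
#include <cstdint>
#include <set>
#include <vector>

// One possible packing: entity id i was rendered with color
// (r, g, b) = (i & 0xFF, (i >> 8) & 0xFF, (i >> 16) & 0xFF).
uint32_t colorToId(uint8_t r, uint8_t g, uint8_t b) {
    return uint32_t(r) | (uint32_t(g) << 8) | (uint32_t(b) << 16);
}

// pixels: tightly packed RGBA8 data of the offscreen texture (width * height * 4 bytes).
// Returns the ids of all entities whose color appears inside the selection rectangle.
std::set<uint32_t> idsInRectangle(const std::vector<uint8_t>& pixels,
                                  int width, int height,
                                  int rectX, int rectY, int rectW, int rectH) {
    std::set<uint32_t> ids;
    const int x0 = std::max(rectX, 0), y0 = std::max(rectY, 0);
    const int x1 = std::min(rectX + rectW, width), y1 = std::min(rectY + rectH, height);
    for (int y = y0; y < y1; ++y) {
        for (int x = x0; x < x1; ++x) {
            const uint8_t* p = &pixels[(static_cast<size_t>(y) * width + x) * 4];
            if (p[3] == 0) continue;  // alpha 0: background, nothing was drawn here
            ids.insert(colorToId(p[0], p[1], p[2]));
        }
    }
    return ids;
}
```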
I'm not sure how well this works in Qt3D because I've always had some issues with QRenderCapture. It didn't seem to matter where I added it in the frame graph, i.e. it always captured the last state, so even if you have multiple render targets it might capture the wrong one. Qt3D is still in a pretty rough state, I'd say.
If you need an example of how to render to an offscreen texture check out my example on GitHub.
If you need to be able to select occluded objects too
then it gets pretty complicated. I'm just providing some ideas here. I don't know if they will work.
If you don't have that many objects maybe you could implement the idea from above for each single object. I.e. for each object you have an offscreen frame graph branch that filters out all other objects. Then you could check each rendered texture for the rectangle drawn with the mouse. But again I'm not sure how well this works with Qt3D and if you have many objects (like in a game) it will probably crash because of the many offscreen textures.
You could also implement something like "inverse" frustum culling. In frustum culling, you omit rendering objects that lie outside the view frustum of the camera; here, you would compute a frustum from the rectangle coordinates drawn with the mouse. Check out the QFrustumCulling code. You would need to compute the planes differently of course, using a modified view matrix. When the user draws the rectangle, compute the frustum and check all objects. Unfortunately, this also selects objects whose bounding sphere merely intersects the frustum, even though you might not visibly touch any part of the object. If that bothers you, you could directly select all objects whose sphere lies completely within the frustum, and for objects that only partly intersect, do the intersection computation on a per-triangle basis, exiting the computation for the current object as soon as one of its triangles intersects the frustum. Depending on the number of triangles this could be computationally very costly. A coarse sphere-versus-frustum test is sketched below.
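Once you have derived six planes from the mouse rectangle (however you end up computing them from the modified view matrix), classifying a bounding sphere against them is a plain plane-distance check. The Vec3/Plane/Sphere types below are assumptions for illustration, not Qt3D classes.

```cpp
#include <array>

struct Vec3 { float x, y, z; };
struct Sphere { Vec3 center; float radius; };

// Plane stored as unit normal n and offset d, with points p on the plane satisfying
// dot(n, p) + d = 0. Normals are assumed to point into the selection frustum.
struct Plane { Vec3 n; float d; };

float signedDistance(const Plane& pl, const Vec3& p) {
    return pl.n.x * p.x + pl.n.y * p.y + pl.n.z * p.z + pl.d;
}

enum class Classification { Outside, Intersecting, Inside };

// Coarse classification of a bounding sphere against the six selection-frustum planes.
// "Inside" spheres can be selected immediately, "Intersecting" ones would go on to the
// per-triangle test described above, "Outside" ones are rejected.
Classification classify(const Sphere& s, const std::array<Plane, 6>& planes) {
    bool fullyInside = true;
    for (const Plane& pl : planes) {
        float dist = signedDistance(pl, s.center);
        if (dist < -s.radius) return Classification::Outside;  // completely behind one plane
        if (dist < s.radius) fullyInside = false;              // straddles this plane
    }
    return fullyInside ? Classification::Inside : Classification::Intersecting;
}
```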
I'd definitely stick to being able to select only unoccluded objects, especially because picking in OpenGL seems to be realized by drawing the objects with unique colors these days.
I am curious about the limits of three.js. The following question is asked mainly as a challenge, not because I actually need the specific knowledge/code right away.
Say you have a game/simulation world model around a sphere geometry representing a planet, like the worlds of the game Populous. The resolution of polygons and textures is sufficient to look smooth when the globe fills the view of an ordinary camera. There are animated macroscopic objects on the surface.
The challenge is to project everything from the model to a global map projection on the screen in real time. The choice of projection is yours, but it must be seamless/continuous, and it must be possible for the user to rotate it, placing any point on the planet surface in the center of the screen. (It is not an option to maintain an alternative model of the world only for visualization.)
There are no limits on the number of cameras etc. allowed, but the performance must be expected to be "realtime", say double-digit FPS or more.
I don't expect any proof in the form of a running application (although that would be cool), but some explanation as to how it could be done.
My own initial idea is to place a lot of cameras, in fact one for every pixel in the map projection, around the globe, within a Group object that is attached to some kind of orbit controls (with rotation only), but I expect the number of object culling operations to become a huge performance issue. I am sure there must exist more elegant (and faster) solutions. :-)
Why not just use a spherical camera model (think of a 360° camera) and virtually put it in the center of the sphere? This camera would (if it were physically possible) be wrapped all around the sphere, looking toward the center from all directions.
This camera could be implemented in shaders (instead of the regular projection matrix) and would produce an equirectangular image of the planet surface (or in fact any other projection you want, like a spherical Mercator projection).
As far as I can tell, the vertex shader can implement any projection you want, and it doesn't need to represent a camera that is physically possible. It just needs to produce consistent clip-space coordinates for all vertices. Fragment shaders for lighting would still need to operate on the original coordinates, normals, etc., but that should be achievable. So the vertex shader would just need to compute (x, y, z) => (phi, theta, r) and go on with that.
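To make the (x, y, z) => (phi, theta, r) step concrete, here is the math such a vertex shader would perform, written as a plain C++ function instead of GLSL. The axis convention (y up) and the depth mapping are assumptions you would adapt to your scene.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Roughly corresponds to clip-space output:
// x in [-1, 1] from longitude, y in [-1, 1] from latitude, z a normalized radius for depth.
struct Projected { float x, y, z; };

// worldPos is assumed to be expressed relative to the planet center (and not exactly zero).
// maxRadius is whatever you choose as the far end of the depth range.
Projected equirectangular(const Vec3& worldPos, float maxRadius) {
    const float pi = 3.14159265358979f;
    float r = std::sqrt(worldPos.x * worldPos.x +
                        worldPos.y * worldPos.y +
                        worldPos.z * worldPos.z);
    float c = std::fmax(-1.0f, std::fmin(1.0f, worldPos.y / r));
    float theta = std::acos(c);                         // polar angle, 0..pi (y is "up")
    float phi   = std::atan2(worldPos.z, worldPos.x);   // azimuth, -pi..pi

    Projected out;
    out.x = phi / pi;                  // longitude -> [-1, 1]
    out.y = 1.0f - 2.0f * theta / pi;  // latitude  -> [-1, 1]
    out.z = r / maxRadius;             // radius    -> depth
    return out;
}
```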
Occlusion culling would need to be disabled, but IIRC three.js doesn't do that anyway.
I'm writing a DirectX 11 overlay for a game. Creating textures is quite simple and I have good knowledge of C/C++.
The problem I am having is that in my test window I can draw the texture, but as soon as I change the camera angle the texture moves with it, which is what most people want.
What I want to know is: how do I draw something in 2D so that it always appears on screen, whether the camera moves or not?
Basically, since you are using DX11, you use shaders to render your elements.
So standard 3D objects generally follow this guideline:
- Use three transforms: world (positions the object), view (transforms into camera space), projection (transforms into screen space).
In your vertex shader you multiply all of that to convert from 3D to 2D.
Since what you want now is to display your elements in 2D (not relative to the camera), you can simply create a new shader that doesn't take view/projection into account, i.e. you just don't use those matrices in your vertex shader (you can still use world for 2D transformations).
That's pretty much the easiest way. If you need pixel-precise 2D elements, you need to create a billboard transform/shader: you have your render-target resolution, and standard render space is -1 -> 1, so you modify scale/translation to convert between those two spaces (see the sketch below).
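For the pixel-precise case, converting between render-target pixels and the -1 -> 1 space is just a scale and an offset. A minimal sketch (the names are made up; you would feed the result into the world matrix of your 2D shader or compute it directly in the vertex shader):

```cpp
struct NdcRect { float x, y, width, height; };

// Convert a rectangle given in pixels (origin at the top-left of the render target)
// into normalized device coordinates, where x and y run from -1 to 1 and y points up.
NdcRect pixelsToNdc(float px, float py, float pw, float ph,
                    float targetWidth, float targetHeight) {
    NdcRect r;
    r.width  = 2.0f * pw / targetWidth;
    r.height = 2.0f * ph / targetHeight;
    r.x = -1.0f + 2.0f * px / targetWidth;   // left edge
    r.y =  1.0f - 2.0f * py / targetHeight;  // top edge (flip: pixels go down, NDC goes up)
    return r;
}
```

You would then build the overlay quad's world matrix from that rectangle's scale and offset.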
When you render your overlay, also ensure that you disable depth testing completely.
If you need sample let me know I'll make one up quickly, but it should be quite simple.
I have to draw a large collection of spheres in a 3D physical simulation of a "spring-mass"-like system.
I would like to know an efficient method to draw spheres without having to compile a display list at every step of my simulation (each step may vary from milliseconds to seconds, depending on the number of bodies involved in the computation).
I've read that vertex-buffer objects are an efficient method to draw objects which need also to be sometimes updated.
Is there any method to draw OpenGL spheres in a way faster than glutSolidSphere?
Spheres are self-similar; every sphere is just a scaled version of any other sphere. I see no need to regenerate any geometry. Indeed, I see no need to have more than one sphere at all.
It's simply a matter of providing the proper scaling matrix. I would suggest a sphere of radius one centered at the origin for your display list or buffer object mesh. Then you can just transform it to different locations, using a scale to set the new radius.
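A minimal sketch of that idea using the fixed-function matrix stack, since the question mentions display lists and glutSolidSphere. Here sphereList is assumed to be a display list containing a unit-radius sphere at the origin, compiled once at startup:

```cpp
#include <GL/glut.h>
#include <vector>

struct Body { float x, y, z, radius; };

// sphereList is assumed to hold a unit-radius sphere centered at the origin, compiled once
// with glNewList(...); glutSolidSphere(1.0, slices, stacks); glEndList();
void drawBodies(GLuint sphereList, const std::vector<Body>& bodies) {
    for (const Body& b : bodies) {
        glPushMatrix();
        glTranslatef(b.x, b.y, b.z);             // move the unit sphere to the body's position
        glScalef(b.radius, b.radius, b.radius);  // scale it to the body's radius
        glCallList(sphereList);                  // reuse the same geometry for every body
        glPopMatrix();
    }
}
```

Since only the modelview matrix changes per body, the sphere geometry is compiled exactly once and reused for every draw.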
I would like to know an efficient method to draw spheres without having to compile a display list at every step of my simulation (each step may vary from milliseconds to seconds, depending on the number of bodies involved in the computation).
Why are you generating a display list at all, if the geometry you put into it is dynamic? Display lists are meant for static geometry that never or only seldom changes.
I've read that vertex-buffer objects are an efficient method to draw objects which need also to be sometimes updated.
Actually, VBOs are most efficient with static geometry as well. In general you want to keep the number of actual geometry updates as low as possible. In your case the only things updating are the positions (and maybe the sizes) of the spheres. This is a prime example for instanced drawing. However, it also works well to update only a uniform or the transformation matrix and then issue the draw call for a sphere.
The idea of Vertex Arrays and VBOs is that you draw a whole batch of geometry with a single call. A sphere would be such a batch.
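A hedged sketch of the instanced variant in modern OpenGL (3.3+): the unit-sphere mesh lives in a static VAO/VBO, the per-sphere positions and radii live in a small second VBO that is re-uploaded each simulation step, and one glDrawElementsInstanced call draws everything. The VAO, the instance attribute setup (glVertexAttribDivisor) and the shader are assumed to already exist; only the per-step update and draw call are shown.

```cpp
#include <GL/glew.h>
#include <vector>

// One instance = one sphere: xyz position plus radius, matching a vec4 instance attribute
// in the vertex shader (e.g. worldPos = aPos * instance.w + instance.xyz).
struct SphereInstance { float x, y, z, radius; };

// instanceVbo: a GL_ARRAY_BUFFER holding SphereInstance data, bound to an instanced
//              vertex attribute (glVertexAttribDivisor(attr, 1)) in the sphere VAO.
// sphereVao / indexCount: the static unit-sphere mesh, uploaded once at startup.
void drawSpheres(GLuint sphereVao, GLsizei indexCount,
                 GLuint instanceVbo, const std::vector<SphereInstance>& instances) {
    // Re-upload only the small per-instance buffer each simulation step;
    // the sphere geometry itself never changes.
    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
    glBufferData(GL_ARRAY_BUFFER,
                 instances.size() * sizeof(SphereInstance),
                 instances.data(), GL_STREAM_DRAW);

    glBindVertexArray(sphereVao);
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                            nullptr, static_cast<GLsizei>(instances.size()));
    glBindVertexArray(0);
}
```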