Why does my object "sink into the ground"? - three.js

I have a rather simple react-three-fiber setup that includes cannon.js-powered physics. The scene contains a cup -- modelled as a cylinder whose top radius is larger than the bottom one -- placed on a surface.
When I run the code, everything looks fine during the loading screen. But as soon as the physics kick in, the cup suddenly "sinks" into the ground. Why is that? I can't make sense of it...
One theory of mine was that the "physics shape" of the cylinder is not identical to the "optical shape" that gets rendered, but even then the movement I observe doesn't match any reasonable bounding box I can imagine...
Working example: https://codesandbox.io/s/amazing-proskuriakova-4slpq

Physics are finicky and really hard to debug, because you're often trying to intuit the behaviour of an invisible system from its effects on whatever hybrid view you have.
I notice that if I bring the mass down to a more reasonable value, like 5, the object appears to roll around like a sphere or some other shape... so I think your theory is sound. I don't know off the top of my head what the solution is, but I do know that the only physics engine I "trust" in the JS space, except for very simple simulations, is Ammo.js. It's hard to use, but it is an emscripten port of a truly amazing AAA-quality library. https://threejs.org/examples/?q=phys#physics_ammo_break
I would start by getting a cube and a sphere working. Once you have verified that those behave as expected -- ideally using real-ish world-scale units, like a 1x1x1 cube with a mass of 1 -- put a texture on the sphere so you know it's rolling like you expect. Once the simpler primitives work, move on to the more complex geometries.
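For example, a minimal sketch of that sanity check using the use-cannon hooks with react-three-fiber (the component names are placeholders, and the exact args conventions differ slightly between use-cannon versions, so treat this as a starting point rather than gospel):

import React from 'react'
import { Canvas } from 'react-three-fiber'
import { Physics, usePlane, useBox, useSphere } from 'use-cannon'

function Ground() {
  // static physics plane, rotated to lie flat
  const [ref] = usePlane(() => ({ rotation: [-Math.PI / 2, 0, 0] }))
  return (
    <mesh ref={ref}>
      <planeBufferGeometry attach="geometry" args={[10, 10]} />
      <meshStandardMaterial attach="material" color="lightgrey" />
    </mesh>
  )
}

function Cube() {
  // 1x1x1 box with a mass of 1, dropped from 2 units up
  const [ref] = useBox(() => ({ mass: 1, position: [0, 2, 0], args: [1, 1, 1] }))
  return (
    <mesh ref={ref}>
      <boxBufferGeometry attach="geometry" args={[1, 1, 1]} />
      <meshStandardMaterial attach="material" color="orange" />
    </mesh>
  )
}

function Ball() {
  // sphere with a mass of 1; give it a textured material so you can see it roll
  // (newer versions of the hook expect args: [0.5] instead of a bare number)
  const [ref] = useSphere(() => ({ mass: 1, position: [1.5, 3, 0], args: 0.5 }))
  return (
    <mesh ref={ref}>
      <sphereBufferGeometry attach="geometry" args={[0.5, 32, 32]} />
      <meshStandardMaterial attach="material" color="hotpink" />
    </mesh>
  )
}

export default function App() {
  return (
    <Canvas>
      <ambientLight />
      <pointLight position={[10, 10, 10]} />
      <Physics>
        <Ground />
        <Cube />
        <Ball />
      </Physics>
    </Canvas>
  )
}

If the box rests flat and the textured sphere rolls believably, you can rule out the solver itself and focus on the cylinder shape.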

The best way forward would be to open an issue on the use-cannon GitHub. That lib and cannon-es are under active maintenance now. Meanwhile, I believe ConvexPolyhedron can also do it flawlessly, see: https://codesandbox.io/s/r3f-convex-polyhedron-cnm0s
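If you want to inspect the physics body on its own, here is a minimal cannon-es sketch of a cup-like cylinder resting on a plane, independent of react (the radii, height and mass are made-up values). One thing worth knowing: as far as I remember, the original cannon.js aligns its Cylinder along the Z axis while cannon-es aligns it with Y to match three.js, and that axis mismatch is a classic cause of shapes that look rotated or half-sunk into the ground.

import * as CANNON from 'cannon-es'

const world = new CANNON.World()
world.gravity.set(0, -9.82, 0)

// static ground plane, rotated so its normal points up
const ground = new CANNON.Body({ mass: 0, shape: new CANNON.Plane() })
ground.quaternion.setFromEuler(-Math.PI / 2, 0, 0)
world.addBody(ground)

// "cup": top radius 1.2, bottom radius 0.8, height 2, 16 segments
const cup = new CANNON.Body({ mass: 1 })
cup.addShape(new CANNON.Cylinder(1.2, 0.8, 2, 16))
cup.position.set(0, 5, 0)
world.addBody(cup)

// step the simulation; the cup's y position should settle at roughly 1 (half its height)
setInterval(() => {
  world.step(1 / 60)
  console.log(cup.position.y.toFixed(2))
}, 1000 / 60)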

Related

Does object size affect performance in Unity?

I want to ask a question about the effect of object size on performance. I have made 10 cubes of 100 units size and 10 cubes of 1 unit size. Will my FPS be lower in the first case?
If you are going to be making a map, try to make the shapes as simple as possible. What I mean is: if you have a room, don't put together 6 cubes that connect to form a room; just use planes and connect them, or make the inside of your cube transparent. This becomes very useful if you are building large maps; if you are making something simple, then it won't really make a difference. But I recommend getting into the habit of making everything as simple as possible, so you already have that practice when you make a bigger game.
It actually all depends on the camera's field of vision. The more items visible at a time, the more demand on the system. Also, size won't be much of an issue in orthographic mode, but in perspective mode it will certainly increase the load on the system.

Multi-pass accumulative rendering in three.js

I have written code to render a scene containing lights that work like projectors. But there are a lot of them (more than can be represented with a fixed number of lights in a single shader). So right now I render this by creating a custom shader and rendering the scene once for each light. On each pass the fragment shader determines the contribution of that light and the blender adds that contribution to the backbuffer.
I found this really awkward to set up in three.js. I couldn't find a way of doing multiple passes like this where the different passes require different materials and different geometry, so I had to do it with multiple scenes. The problem there is that I can't have an Object3D that is in multiple scenes (please correct me if I'm wrong). So I need to create duplicates of the objects -- one for each scene it is in. This all starts looking really hacky quickly. It's all so special that it seems to be incompatible with various three.js framework features such as VR rendering.
Each light requires shadowing, but I don't have memory for a shadow buffer per light, so the code alternates: render the shadow buffer for one light, then the accumulation pass for that light, then the shadow buffer for the next light, then the accumulation pass for the next light, and so on.
I'd much rather set this up in a more "three.js" way. I seem to be writing hack upon hack to get this working, each time forgoing yet another three.js framework feature that doesn't work properly in conjunction with my multi-pass technique. But it doesn't seem like what I'm doing is so out of the ordinary.
My main surprise is that I can't figure out a way to set up a multi-pass scene that does this back-and-forth rendering and accumulating. And my second surprise is that the Object3Ds that I create don't like being added to multiple scenes at the same time, so I have to create duplicates of each object for each scene it wants to be in, in order to keep their states from interfering with each other.
So is there a way of rendering this kind of multi-pass accumulative scene in a better way? Again, I would describe it as a scene with more lights than a single shader pass allows, so their contributions need to be alternately rendered (shadow buffers) and then additively accumulated over multiple passes. The lights work like typical movie projectors that project an image (as opposed to being a uniform solid-colour light source).
How can I do multi-pass rendering like this and still take advantage of good framework stuff like stereo rendering for VR and automatic shadow buffer creation?
Here's a simplified snippet that demonstrates the scenes that are involved:
// environment pass: normal lighting, rendered straight to the backbuffer
renderer.render(this.environment.scene, camera, null);
// one shadow pass + one additive accumulation pass per projector
for (let i = 0, ii = this.projectors.length; i < ii; ++i) {
  let projector = this.projectors[i];
  // render this projector's depth/shadow map into the shared render target
  renderer.setClearColor(0x000000, 1);
  renderer.clearTarget(this.shadowRenderTarget, true, true, false);
  renderer.render(projector.object3D.depthScene, projector.object3D.depthCamera, this.shadowRenderTarget);
  // accumulate this projector's light contribution into the backbuffer
  renderer.render(projector.object3D.scene, camera);
}
// overlays / UI on top
renderer.render(this.foreground.scene, camera, null);
There is a scene that renders lighting from the environment (done with normal lighting), then a scene per projector that computes that projector's shadow map and adds in its light contribution, and then a "foreground" scene with overlays and UI stuff in it.
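For reference, the accumulation part of that setup is usually expressed by turning off autoClear, clearing once per frame, and giving the per-projector materials additive blending; a rough sketch (updateShadowMap, baseScene, foregroundScene and the shader sources are placeholders for the corresponding pieces above, not three.js API):

renderer.autoClear = false

// the meshes inside each projector scene use a material set up like this,
// so each pass adds its light contribution instead of overwriting the backbuffer
const accumulationMaterial = new THREE.ShaderMaterial({
  vertexShader: projectorVertexShader,     // your projector shaders
  fragmentShader: projectorFragmentShader,
  blending: THREE.AdditiveBlending,
  transparent: true,
  depthWrite: false,
})

function renderFrame(camera) {
  renderer.clear()                             // clear color + depth once per frame
  renderer.render(baseScene, camera)           // environment / ambient pass
  for (const projector of projectors) {
    updateShadowMap(projector)                 // render this projector's depth map
    renderer.render(projector.scene, camera)   // additively accumulate its contribution
  }
  renderer.render(foregroundScene, camera)     // overlays / UI on top
}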
Is there a more "three.js" way?
Unfortunately, I think the answer is no.
I'd much rather set this up in a more "three.js" way. I seem to be writing hack upon hack to get this working,
and welcome to the world of three.js development :)
scene graph
You cannot have a node belong to multiple parents. I believe three also does not allow you to do this:
const myPos = new THREE.Vector3()
myMesh_layer0.position = myPos
myMesh_layer1.position = myPos
It won't work with Eulers, quaternions or a matrix either.
Managing the matrix updates in multiple scenes would be tricky as well.
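What does work is keeping one object as the "master" and copying its transform into the per-scene duplicates every frame, instead of trying to share the objects or their position vectors; a small sketch using standard three.js calls:

// copy the master's transform into its duplicate once per frame, before rendering
function syncDuplicate(master, duplicate) {
  duplicate.position.copy(master.position)
  duplicate.quaternion.copy(master.quaternion)
  duplicate.scale.copy(master.scale)
}

// or, if you drive the objects by matrices directly:
// duplicate.matrixAutoUpdate = false
// duplicate.matrix.copy(master.matrix)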
the three.js way
There is no way around the "hack upon hack" unless you start hacking the core.
Notice that it's 2018, but the official way of including three.js in your web app is still through <script> tags.
This is a great example of where it would probably be a better idea not to do things the three.js way but the modern JavaScript way, i.e. use imports, npm installs, etc.
Three.js also does not have a robust core that allows you to be flexible with code around it. It's quite obfuscated and convoluted, with only a limited number of hooks exposed that would allow you to write effects such as the one you want.
Three is often conflated with its examples; if you pick a random one, it will be written in a three.js way, but far from today's best JavaScript/coding practices.
You'll often find large monolithic files that would benefit from being broken up.
I believe it's still impossible to import the examples as modules.
Look at the material extensions examples and consider if you would want to apply that pattern in your project.
You can probably encounter more pain points, but this is enough to illustrate that the three.js way may not always be desirable.
remedies
Are few and far between. I've spent more than a year trying to push the onBeforeRender and onAfterRender hooks. They seemed useful and allowed for some refactors, but another feature had to be nuked first.
The other feature was iterated on over the course of that year and only addressed a single example, until it became obvious that onBeforeRender would address both that example and allow for much more.
This, unfortunately, also seems to be the three.js way. Since the codebase is so big and includes so many examples, it's more likely that someone will try to optimize a single example than try to find a common pattern for refactoring a whole bunch of them.
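For completeness, onBeforeRender and onAfterRender are per-object callbacks the renderer invokes around drawing that object; a minimal sketch of what they give you (the uTime uniform is a hypothetical example, not something three.js defines):

// per-object render hooks on Object3D / Mesh
mesh.onBeforeRender = function (renderer, scene, camera, geometry, material, group) {
  // e.g. tweak a uniform just before this particular mesh is drawn
  if (material.uniforms && material.uniforms.uTime) {
    material.uniforms.uTime.value = performance.now() / 1000
  }
}
mesh.onAfterRender = function (renderer, scene, camera, geometry, material, group) {
  // e.g. restore any state you changed in onBeforeRender
}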
You could go and file an issue on github, but it would be very hard to argue for something as generic as this. You'd most likely have to write some code as a proposal.
This can become taxing quite quickly, because it can be rejected or ignored, or you could be asked to provide examples or refactor existing ones.
You mentioned your hacks failing to work with various three.js features, like VR. This, I think, is a problem with three: VR has been the focus of development for at least the past couple of years, but without the core issues ever being addressed.
The good news is that three is more modular than it has ever been, so you can fork the project and tweak the parts you need. The issues with three may then move to a higher level: if you find some coupling in the renderer, for example, that makes it hard to keep your fork in sync, it would be easier to explain than the whole goal of your particular project.

Is there a way to create simple animations "on the fly" in modern OpenGL?

I think this requires a bit of background information:
I have been modding Minecraft for a while now, but I always wanted to make my own game, so I started digging into the freshly released LWJGL3 to actually get things done. Yes, I know it's a bit low level and I should use an engine and so on... indeed, I already tried some engines and they never quite matched what I want to do, so I decided to tackle the problem at its root.
So far, I kind of understand how to render meshes, move the "camera", etc. and I'm willing to take the learning curve.
But the thing is, at some point all the tutorials start to explain how to load models and create skeletal animations and so on... and I think I do not really want to go that way. A lot of things about working with Minecraft code were awful, but I liked how I could create models and animations from Java code. Sure, it did not look super realistic, but since I'm not great with Blender either, I doubt that having "classic" models and animations would help. Anyway, in that code I could rotate a box around to make a creature look at a player, and I could use a sine function to move legs and arms (or wings, in my case), and that worked, since Minecraft used immediate mode and Java could directly tell the graphics card where to draw each vertex.
So, actual question(s): Is there any good way to make dynamic animations in modern (3.3+) OpenGL? My models would basically be a hierarchy of shapes (boxes or whatever) and I want to be able to rotate them on the fly. But I'm not sure how to organize that. Would I store all the translation/rotation matrices for each sub-shape? Would that put a hard limit on the number of sub-shapes a model could have? Did anyone try something like that?
Edit: For clarification, what I did looked something like this:
Create a model: https://github.com/TheOnlySilverClaw/Birdmod/blob/master/src/main/java/silverclaw/birds/client/model/ModelOstrich.java
The model is created as a bunch of boxes in the constructor, the render and setRotationAngles methods set scale and rotations.
You should follow an OpenGL tutorial in order to understand the basics.
Let me suggest "Learning Modern 3D Graphics Programming", and especially this chapter, where you move a robot arm with multiple joints.
I did a port in Java using JOGL here, but you can easily port it over to LWJGL.
What you are looking for is exactly skeletal animation, the only difference being the fact you do not want to load animations for your bones but want to compute / generate transforms on the fly.
You basically have a hierarchy of bones, and geometry attached to it. It looks like you want to manipulate this geometry "rigidly", so before sending your meshes / transforms to the GPU (the classic way), you start by computing the new transforms in model or world space, then send those freshly computed matrices to draw your geometries on the GPU the standard way.
As Sorin said, to compute each transform you simply iterate over your hierarchy and accumulate transforms, given the transform of the parent bone and your local transform w.r.t. the parent.
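To keep the code consistent with the rest of this page, here is that accumulation step sketched with three.js matrices (the node layout with localMatrix / worldMatrix / children is made up; the same walk ports directly to Java with a math library such as JOML):

// walk the bone hierarchy and accumulate world transforms
// node = { localMatrix: THREE.Matrix4, worldMatrix: THREE.Matrix4, children: [...] }
function updateWorldMatrices(node, parentWorldMatrix) {
  // world = parentWorld * local
  node.worldMatrix.multiplyMatrices(parentWorldMatrix, node.localMatrix)
  for (const child of node.children) {
    updateWorldMatrices(child, node.worldMatrix)
  }
}

// kick the walk off from the root with the identity matrix
updateWorldMatrices(rootBone, new THREE.Matrix4())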
Yes and no.
You can have your hierarchy of shapes and store a relative transform for each.
For example the "player" whould have a translation to 100,100, 10 (where the player is), and then the "head" subcomponent would have an additional translation of 0,0,5 (just a bit higher on the z axis).
You can store these as matrices (they can encode translation, roation and scaling) and use glPushMatrix and glPop matrix to add and remove a matrix to a stack maintained by openGL.
The draw() function(or whatever you call it) should look something like :
glPushMatrix();                    // save the parent's transform
glMultMatrix(my_transform);        // apply this node's local transform
                                   // (could also be glTranslate, glRotate or anything else)
// draw this node's mesh here
for (child : children) { child.draw(); }   // children inherit the current matrix
glPopMatrix();                     // restore the parent's transform
This gives you a hierarchical setup, so that objects move with their parent. Alternatively, you can keep a stack in main memory and do the multiplications yourself (use a library). I think the OpenGL stack may have a depth limit (implementation dependent), but if you handle it yourself the only limit is the amount of RAM you can use. Once all the matrices are multiplied, rendering takes the same amount of time; that is, it doesn't matter for performance how deep a mesh is in the hierarchy.
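One caveat: glPushMatrix / glPopMatrix belong to the legacy fixed-function pipeline and are not available in a 3.3+ core profile, so in modern OpenGL you maintain the stack yourself and upload the final matrix as a shader uniform. A sketch of such a hand-rolled stack, written in JavaScript/three.js terms for consistency with this page (drawMesh and node.localMatrix are placeholders; in Java/LWJGL you would do the same thing with a library like JOML):

// a hand-rolled matrix stack: push copies the top, pop discards it
const stack = [new THREE.Matrix4()]              // start with the identity

function pushMatrix() { stack.push(stack[stack.length - 1].clone()) }
function popMatrix() { stack.pop() }
function currentMatrix() { return stack[stack.length - 1] }

function drawNode(node) {
  pushMatrix()
  currentMatrix().multiply(node.localMatrix)     // same role as glMultMatrix
  drawMesh(node.mesh, currentMatrix())           // upload currentMatrix() as the model uniform
  for (const child of node.children) drawNode(child)
  popMatrix()                                    // restore the parent's transform
}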
For actual animations you need to compute the intermediate transformations. For example, for a crouch animation you probably want a few frames in between, so that the camera doesn't just jump to the low position. You can do this with a time-based linear interpolation between the start and end positions, but this only covers simple animations and you still have to implement it yourself.
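That interpolation is just a clamped lerp driven by elapsed time, e.g.:

// time-based linear interpolation between a start and an end value
function lerp(a, b, t) { return a + (b - a) * t }

function crouchHeight(startTime, duration, standingY, crouchedY, now) {
  const t = Math.min((now - startTime) / duration, 1)  // clamp to [0, 1]
  return lerp(standingY, crouchedY, t)
}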
Anything more complicated (i.e. modify the mesh based on the bone links) you would need to implement yourself.

Algorithm for creating rain effect / water drops?

What is the principle behind creating a rain effect or water drops, regardless of the particular language used? I've seen a few impressive rain and water effects done in Flash, but how do they actually work?
Rain Effect Example
Rain Drop Water Effect Example
You are asking the question as if the two examples were related, but you actually have two different problems:
1) simulating drops of rain as seen in the air (drop trails; simple, but the realism depends very much on lighting)
for this you need to simulate the following events:
for each time step:
create new drops
move existing drops vertically down
remove (and/or animate) the drops hitting the ground
as pointed out in other answers, new drops (size and position) can be created with various algorithms.
as for speed, the drops move at a constant speed.
finally, to show your trails you need to look at simple projections.
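That per-time-step loop might look like this (a sketch; the spawn rate, speeds, screenWidth and groundY are made-up values):

// minimal rain-drop update: spawn, fall, remove at the ground
const drops = []

function updateRain(dt) {
  // create a few new drops per step at random x positions above the screen
  for (let i = 0; i < 5; i++) {
    drops.push({ x: Math.random() * screenWidth, y: -10, speed: 400 + Math.random() * 200 })
  }
  // move existing drops straight down at constant speed
  for (const drop of drops) drop.y += drop.speed * dt
  // remove (or hand off to a splash animation) any drop that reached the ground
  for (let i = drops.length - 1; i >= 0; i--) {
    if (drops[i].y >= groundY) drops.splice(i, 1)
  }
}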
2) simulating splash waves (a water simulation; in the example a reflective surface is shown)
For this you only need to know where the drops fall and how big they are; the rest is wave propagation. However, that's only really visible if there is a reflection, and that can be a bit tricky.
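A cheap and widely used approximation of that wave propagation is a damped height field over two buffers, where every cell becomes the average of its neighbours minus its own previous value; a sketch (grid size and damping factor are arbitrary):

// classic two-buffer ripple simulation on a W x H height field
const W = 256, H = 256
let current = new Float32Array(W * H)
let previous = new Float32Array(W * H)
const damping = 0.99

function drop(x, y, strength) {
  previous[y * W + x] = strength                 // a falling drop disturbs the surface
}

function stepRipples() {
  for (let y = 1; y < H - 1; y++) {
    for (let x = 1; x < W - 1; x++) {
      const i = y * W + x
      const neighbours = previous[i - 1] + previous[i + 1] + previous[i - W] + previous[i + W]
      current[i] = (neighbours / 2 - current[i]) * damping
    }
  }
  // the freshly computed heights in 'current' are what you offset the reflection with;
  // swap so they become 'previous' for the next step
  const tmp = previous; previous = current; current = tmp
}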
NOTES:
There are many things that determine realism, but mostly it boils down to detail.
For example, rain is usually seen clearly only in strange lighting conditions - close to lamps or against a high-contrast background. Otherwise it is quite bleak.
There are also the details of interaction - splashing on the surfaces it hits, which can leave bubbles (if you're close enough to notice) or create waves.
Another example: if you look at this tutorial - which is not really realistic, but it does illustrate one point - you will see that even though the rain looks more like snow, it exposes the 'flatness' of your first example (which has absolutely no depth).
So, it is all about detail.
Try to model what you have in terms of events that you have to simulate, and then solve each one separately - for example, using fractals for seeding rain might be overkill, but if you model your work nicely you can start with random seeding and later substitute more accurate/complex methods.
Here's a paper by Mandelbrot and Lovejoy which is one of the most cited works on developing fractal models to represent rain.
The second one (Rain Drop Water Effect Example) is probably done with a wave equation simulator
They probably use particle effects mostly.
An old-school way that is dirt cheap is to use palette cycling. Basically, you set up a ramp of colors and move one color into the next at fixed intervals. The moving colors give the illusion of motion. I've worked on games where rain, wind, snow, waterfalls, fire, etc. have all been animated using palette cycling. It's a dying art, but it still works. :)
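Palette cycling itself is just rotating the entries of a colour ramp at a fixed interval while the indexed pixels never change; a tiny sketch:

// rotate a colour ramp in place; pixels store palette indices, not colours
const palette = ['#001030', '#00205a', '#3060a0', '#80b0ff', '#ffffff']

function cyclePalette() {
  palette.unshift(palette.pop())   // move the last colour to the front
}

// call at a fixed interval; anything drawn with these palette indices appears to flow
setInterval(cyclePalette, 100)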

Hierarchical animations in DirectX and handling separate animations on the same mesh?

I have a hierarchical animated model in DirectX which loads and animates based on the following DirectX sample: http://msdn.microsoft.com/en-us/library/ee418677%28VS.85%29.aspx
As good as the sample is, it does not really go into some of the details of animation that I'd like. For example, if I have a mesh which has a running animation and a throwing animation as separate animation sets, how can I get the throwing animation to apply to the bones above the hip and the walking animation to the bones below the hip?
Also, if I wanted to, for example, have the person lean left or right, would I simply have to find the bone for the hip and multiply a rotation matrix into its matrix? In this case I think the matrix is m_amxBoneOffsets?
Composing multiple animations to a single one is usually the job of an animation system, something that is way out of scope of the D3D sample.
Let's look at your 2 examples:
running and throwing
Well, in this case you could apply the animation for the lower part of the body from the running animation and the animation for the upper part of the body from the throwing animation. And you'd get a very crappy result.
The how is just a matter of knowing which bones are where in the bone palette (something that depends on how they are stored, and in which order, but nothing inherently hard). The definitive reference should be the documentation of the tool generating the animation data.
In practice, you're better off blending the 2 animations. This, in general, is hard, and software packages exist out there that do it for you - Gamebryo, for example.
Or, an animation of a running guy who throws is different enough from a standing guy who throws that you might be better off having 2 animations.
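In code, the per-bone split (or a proper blend) comes down to choosing or interpolating each bone's local transform from the two clips before you build the final matrices; a sketch in three.js terms (upperBodyBones, runPose and throwPose are placeholders for whatever your animation data gives you):

// pick or blend each bone's local rotation from two clips
// runPose / throwPose map bone names to THREE.Quaternion for the current frame
function composePose(bones, runPose, throwPose, upperBodyBones, blendWeight) {
  for (const bone of bones) {
    if (upperBodyBones.has(bone.name)) {
      // upper body: blend toward the throwing clip (blendWeight = 1 means pure throw)
      bone.quaternion.copy(runPose[bone.name]).slerp(throwPose[bone.name], blendWeight)
    } else {
      // lower body: keep the running clip as-is
      bone.quaternion.copy(runPose[bone.name])
    }
  }
}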
Leaning
If you apply a rotation matrix to the root bone, you'll simply rotate your whole character.
Now if you rotate the next bone in the hierarchy (from the spine), you'll get all the bones that depend on it to rotate likewise. It will probably do what you want, but there's a sure way to find out. Try it!
Well the thing is the running animation SHOULD affect the throwing animation slightly. What you need to look into is animation blending.
I'm sure Valve wrote a good paper on how they implemented it in Counter-Strike many years ago. It's not on the Valve site though, so I'm not sure where I got this memory from...
