Box2D: custom shapes or joint-connected bodies - performance

Let's say I want to create a simple physics object shaped like a matryoshka doll, or an ordinary snowman. As I see it, I have two options: 1. create two circle (or maybe custom) bodies and connect them with a weld joint, or 2. create one body with two circle (or maybe custom) shapes in it.
So the question is: which is more expensive for the CPU, bodies connected with joints or bodies with complex shapes? With a single object I might not feel any difference in performance, but what if I have many objects of that type?
I know that joints are expensive, but maybe custom-shaped bodies are even more expensive?
I'm working with Box2dFlash.

Since the question is about CPU use: joints use more CPU than shapes alone with no joint. Circle shapes can be more efficient than polygons in many cases, but not all.
For CPU optimization, use as few bodies and as simple polygons as possible. For every normal defined in a polygon, a calculation may need to be performed if an object overlaps with another; for circles, at most one calculation is needed.
As an aside, unless you are already experiencing performance problems, you should not worry about whether your shapes make ideal use of the CPU. Instead, ask whether the simulation they create is the one you want. Box2D contains many special-case optimizations to make it run smoothly. You can also decrease its accuracy per tick by setting the velocity and position iteration variables; this will have a far greater effect on efficiency than geometry will, unless your geometry is extremely complex.
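For illustration, this is what tuning those iteration counts looks like. The snippet uses Box2D's C++ API; Box2dFlash is a close port, and its b2World.Step takes the same three arguments. The values shown are just the commonly cited defaults, not recommendations:

    #include <Box2D/Box2D.h>

    // Advance the simulation one tick, trading accuracy for speed via
    // the solver iteration counts. Lower values = cheaper, less accurate.
    void stepWorld(b2World& world, float dt) {
        const int32 velocityIterations = 8; // commonly used default
        const int32 positionIterations = 3; // commonly used default
        world.Step(dt, velocityIterations, positionIterations);
    }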

If you don't want the two sections of the snowman's body to move relative to each other, then a single body is the way to go. You will ideally be defining the collision shape(s) manually anyway, so there is absolutely nothing to gain from using a weld.
Think of it like this: if you use welds, the collision shapes will be no less complicated than if you simply approximated the collision geometry for a single body; either way you still need multiple collision shapes or a single complex one. The weld simply adds an extra step.
If you need the snowman's body to be dynamic in any way (to break apart or flex), then a weld or another joint is the way to go.
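For completeness, here is a minimal sketch of the single-body approach (option 2 from the question), written against Box2D's C++ API; Box2dFlash's API closely mirrors it. All positions, radii, and densities are made up:

    // One dynamic body ("snowman") with two circle fixtures.
    b2BodyDef bodyDef;
    bodyDef.type = b2_dynamicBody;
    bodyDef.position.Set(0.0f, 10.0f);
    b2Body* snowman = world.CreateBody(&bodyDef); // assumes an existing b2World 'world'

    b2CircleShape lower;                  // the larger, bottom ball
    lower.m_radius = 1.0f;
    lower.m_p.Set(0.0f, 0.0f);            // offset within the body
    snowman->CreateFixture(&lower, 1.0f); // shape, density

    b2CircleShape upper;                  // the smaller, top ball
    upper.m_radius = 0.6f;
    upper.m_p.Set(0.0f, 1.4f);            // sits on top, slightly overlapping
    snowman->CreateFixture(&upper, 1.0f);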

Related

Very fast boolean difference between two meshes

Let's say I have a static object and a movable object which can be moved and rotated; what is the best way to very quickly calculate the boolean difference of those two meshes?
Precision here is not so important, but speed is, since I have to use it in the update phase of the main loop.
Maybe, given the strict time limit, modifying the static object's vertices and triangles directly is to be preferred. Or should voxels be preferred here instead?
EDIT: The use case is an interactive viewer of a wood panel (a parallelepiped) and a milling tool (a revolved contour, something like these).
The milling tool can be rotated and can work oriented at varying angles (5 axes).
EDIT 2: The milling tool may not pierce the wood.
EDIT 3: The panel can be as large as 6000x2000mm and the milling tool can be as little as 3x3mm.
If you need the best possible performance, then the generic CSG approach may be too slow for you (though it still depends on the meshes and the target hardware).
You may try to find a specialized algorithm, coded for your specific meshes. Let's say you have two cubes - one is a 'wall' and the second is a 'window' - then it's much easier/faster to compute the resulting mesh with custom code than with full CSG. Unfortunately, you don't say anything about your meshes.
You may also try to reduce it to a 2D problem, using simplified meshes to compute a result that 'looks as expected'.
If the movement of your meshes is somehow limited, you may be able to precompute full or partial results for different mesh combinations to use at runtime.
You may use space partitioning like BSP trees or octrees to divide your meshes during a precomputation stage. This way you could split one big problem into many smaller ones that may be faster to compute, or at least make the solution multi-threaded.
You've mentioned voxels - if you're fine with their look and limits, you may voxelize both meshes and then read and mix two voxel values instead of one, finally triangulating the result with an algorithm like Marching Cubes (see the sketch below).
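To make the voxel idea concrete, here is a toy sketch of the difference step. VoxelGrid is a hypothetical dense grid (a 6000x2000 mm panel against a 3 mm tool would realistically need something sparse or hierarchical), and both grids are assumed to share dimensions and alignment:

    #include <cstdint>
    #include <vector>

    // Toy voxel grid: one solid/empty flag per cell.
    struct VoxelGrid {
        int nx, ny, nz;
        std::vector<std::uint8_t> solid;   // nx*ny*nz cells, 0 or 1
        std::uint8_t& at(int x, int y, int z) {
            return solid[(z * ny + y) * nx + x];
        }
    };

    // Carve every cell of 'panel' that the (already voxelized and
    // transformed) 'tool' occupies; grids must have identical dimensions.
    void subtract(VoxelGrid& panel, const VoxelGrid& tool) {
        for (int z = 0; z < panel.nz; ++z)
            for (int y = 0; y < panel.ny; ++y)
                for (int x = 0; x < panel.nx; ++x)
                    if (tool.solid[(z * panel.ny + y) * panel.nx + x])
                        panel.at(x, y, z) = 0;
    }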
Those are all just some general ideas but we'll need better info to help you more.
EDIT:
Given your description it looks like you're modeling a bas-relief, so you may be able to use Relief Mapping to fake the effect. It's based on a height map stored as a texture, so you'd only need to update a few pixels of the texture and render a plane. It should be quite fast compared to the other approaches; the downside is that, being based on a height map, it can't produce the shapes that a T-slot or dovetail cutter would create.
If you want real geometry, I'd start from a simple plane as your panel (you don't need full 3D yet, just the front surface) and divide it with a 2D grid. Each grid element should be slightly bigger than the drill size, and every element is a separate mesh. In the frame update you'd cut one, or at most four, elements that the drill touches. Thanks to this grid, every cutting operation runs against a very simple mesh, so it may reach the speed you need. You can also cut the affected elements in separate threads. After the cutting is done, you upload only the modified elements to the GPU, so you can end up with quite a complex mesh overall while keeping the per-frame modifications small.
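As a sketch of the height-map variant: each milling step only needs to clamp the texels under the tool's footprint to the tool's bottom height before re-uploading that region. Everything here (a circular footprint, the names) is illustrative, and the 5-axis orientation is ignored:

    #include <algorithm>
    #include <vector>

    // Lower every texel inside the tool's circular footprint to the
    // tool's bottom height. Coordinates are in texel units.
    void carve(std::vector<float>& height, int w, int h,
               float cx, float cy, float radius, float toolBottom) {
        int x0 = std::max(0, int(cx - radius));
        int x1 = std::min(w - 1, int(cx + radius));
        int y0 = std::max(0, int(cy - radius));
        int y1 = std::min(h - 1, int(cy + radius));
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x) {
                float dx = x - cx, dy = y - cy;
                if (dx * dx + dy * dy <= radius * radius)
                    height[y * w + x] =
                        std::min(height[y * w + x], toolBottom);
            }
    }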

Simulating contraction of a muscle in a skeleton

Using spherical nodes, cylindrical bones, and cone-twist constraints, I've managed to create a simple skeleton in 3 dimensions. I'm using an offshoot of the Bullet physics library (physijs by @chandlerprall, along with three.js).
Now I'd like to add muscles. I've been trying for the last two days to get some sort of sliding constraint or generic 6-DOF constraint to let the muscle contract and pull its two nodes towards one another.
I'm getting all sorts of crazy results, and I'm beginning to think that I'm going about this the wrong way. I don't think I can simply use two cone-twist constraints and then scale the muscle along its lengthwise axis, because scaling collision meshes is apparently fairly expensive.
All I need is a 'muscle' which can attach to two nodes and 'contract' to pull in both its nodes.
Can anyone provide some advice on how I might best approach this using the bullet engine (or really, any physics engine)?
EDIT: What if I don't need collisions to occur for the muscle? Say I just need a visual muscle which is constrained to 2 nodes:
The two nodes are linearly constrained to the muscle collision mesh, which instead of being a large mesh, is just a small one that is only there to keep the visual muscle geometry in place, and provide an axis for the nodes to be constrained to.
I could then use the linear motor that comes with the sliding constraint to move the nodes along the axis. Can anyone see any problems with this? My initial problem is that the smaller collision mesh is a bit volatile and seems to move around all over the place...
I don't have any experience with Bullet. However, there is a large academic community that simulates human motion by modeling the human as a system of rigid bodies. In these simulations, the human is actuated by muscles.
The muscles used in such simulations are modeled to generate force in a physiological way. The amount of force a muscle can produce at any given instant depends on its length and the rate at which its length is changing. Here is a paper that describes a fairly complex muscle model that biomechanists might use: http://nmbl.stanford.edu/publications/pdf/Millard2013.pdf.
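As a toy illustration of that idea (force depending on length and on the rate of length change), here is a minimal sketch; it is nowhere near a physiological model like Millard's, and every name and constant in it is made up:

    // Toy "muscle": a spring-damper whose rest length shortens with
    // activation, loosely echoing the length/velocity dependence
    // described above. See the Millard 2013 paper for a real model.
    float muscleForce(float activation,  // 0..1 contraction signal
                      float length,      // current distance between nodes
                      float lengthRate,  // d(length)/dt
                      float restLength) {
        const float stiffness = 200.0f;  // illustrative
        const float damping   = 10.0f;   // resists rapid length changes
        float target  = restLength * (1.0f - 0.5f * activation); // contract up to 50%
        float stretch = length - target;
        float f = stiffness * stretch + damping * lengthRate;
        return f > 0.0f ? f : 0.0f;      // muscles only pull
    }

The returned scalar would be applied as equal and opposite forces along the line between the two attachment nodes (in Bullet, for example, via btRigidBody::applyForce on each body), sidestepping constraints and collision-mesh scaling entirely.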
Another complication with modeling muscles that comes up in biomechanical simulations is that the path of a muscle must be able to wrap around joints (such as the knee). This is what you are trying to get at when you mention collisions along a muscle. This is called muscle wrapping. See http://www.baylor.edu/content/services/document.php/41153.pdf.
I'm a graduate student in a lab that does simulations of humans involving many muscles. We use the multibody dynamics library (physics engine) Simbody (http://github.com/simbody/simbody), which allows one to define force elements that act along a path. Such paths can be defined in pretty complex ways: they could wrap around many different surfaces. To simulate muscle-driven human motion, we use OpenSim (http://opensim.stanford.edu), which in turn uses Simbody to simulate the physics.

Rotate only part of the vertices?

I'm adding an OpenGL renderer to my 2D game engine, and I want to know whether there is a way to apply an MVP matrix to only part of the vertices in a single draw call.
I'm planning to group draw calls by texture, so I'll pass a buffer with many vertices and texcoords, and I want to apply different rotation angles to different quads. Is there a way to accomplish this in the shader, or should I give up on the MVP matrix in the shader and do the same thing on the CPU?
EDIT: What about adding 3 float attributes (rotation and rot_center.xy) per vertex?
Which gives better performance:
(1) doing the rotation on the CPU,
(2) providing 3 more floats per vertex, or
(3) separating the draw calls?
Is there any other option?
Here is a possibility:
1. Do the rotation in the vertex shader. Pass in the information needed to build the rotation matrix (the angle?) as a vertex attribute.
2. Pass in a vertex attribute (ubyte) that is effectively a per-vertex boolean flag. The rotation in #1 is executed only if the flag is set.
Not sure if the above will work for you from a performance/storage perspective.
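To make that concrete, here is a rough sketch of such a vertex shader, embedded as a C++ string literal. The attribute names and locations are invented, and the flag from #2 arrives as a normalized float (e.g. a ubyte attribute uploaded with normalized = GL_TRUE):

    const char* vertexShaderSource = R"(
        #version 330 core
        layout(location = 0) in vec2 position;
        layout(location = 1) in vec2 texcoord;
        layout(location = 2) in float angle;       // per-vertex rotation
        layout(location = 3) in vec2 rotCenter;    // per-vertex pivot
        layout(location = 4) in float useRotation; // 0 or 1 flag
        uniform mat4 mvp;
        out vec2 uv;
        void main() {
            vec2 p = position;
            if (useRotation > 0.5) {
                float c = cos(angle), s = sin(angle);
                vec2 d = p - rotCenter;
                p = rotCenter + vec2(c * d.x - s * d.y,
                                     s * d.x + c * d.y);
            }
            gl_Position = mvp * vec4(p, 0.0, 1.0);
            uv = texcoord;
        }
    )";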
I think that, while it is a good thing to group draw calls for many different performance reasons, changing your code to satisfy a requirement as basic as rotation is not a good idea.
Draw-call batching is a good thing, but if you are forced to keep an additional attribute (you certainly cannot do it with uniforms, since you wouldn't have per-entity information), it is not worth it.
An additional attribute means much more memory bandwidth usage, which is usually the main performance killer on today's systems.
Draw-call batching, on the other hand, is important but not always critical; it depends on many factors, such as:
the GPU OpenGL driver optimization
the GPU tile configuration
the number of shapes/draw calls involved (if you have 20 quads on the screen, why bother with batching? :) )
In other words, it is often much more convenient to drop extreme batching in favor of simplicity/maintainability and to avoid fancy solutions for requirements as simple as rotation.
I hope this helps in some way.
Use two different objects, that is all!
There is no other workaround for rotating only part of an object.
Example:
A game with a tank, where you want to rotate the turret and the rest of the body separately. As in your case, these two are treated as separate objects.

OpenGL drawing the same polygon many times in multiple places

In my OpenGL app, I am drawing the same polygon approximately 50k times, but at different points on the screen. In my current approach, I do the following:
Draw the polygon once into a display list.
For each instance of the polygon, push the matrix, translate to that point, and scale and rotate appropriately (the scaling of each instance will be the same; the translation and rotation will not).
However, with 50k polygons, this means 50k pushes and pops, and 50k computations of the correct matrix translation.
A coworker of mine also suggested drawing the entire scene into a buffer and then just drawing the whole buffer with a single translation. The tradeoff here is that we need to keep all of the polygon vertices in memory rather than just the display list, but we wouldn't need to do a push/translate/scale/rotate/pop for each vertex.
The first approach is the one we currently have implemented, and I would prefer to see if we can improve that since it would require major changes to do it the second way (however, if the second way is much faster, we can always do the rewrite).
Are all of these push/pops necessary? Is there a faster way to do this? And should I be concerned that this many push/pops will degrade performance?
It depends on your ultimate goal. More recent OpenGL specs support "geometry instancing": you can load all the matrices into a buffer and then draw all 50k polygons with a single instanced draw call (OpenGL 3+). If you are looking for a temporary fix, at the very least load the polygon into a Vertex Buffer Object. Display lists are very old and deprecated.
Are these 50k polygons going to move independently? If so, you'll have to put up with some form of "pushing/popping" (even though modern scene graphs do not necessarily use an explicit matrix stack). If the 50k polygons are static, you could pre-compile the entire scene into one VBO; that would make it render very fast.
If you can assume a recent version of OpenGL (>= 3.1, IIRC), you might want to look at glDrawArraysInstanced and/or glDrawElementsInstanced. For older versions, you can probably use glDrawArraysInstancedEXT/glDrawElementsInstancedEXT, but they're extensions, so you'll have to access them as such.
Either way, the general idea is fairly simple: you have one mesh, and multiple transforms specifying where to draw the mesh, then you step through and draw the mesh with the different transforms. Note, however, that this doesn't necessarily give a major improvement -- it depends on the implementation (even more than most things do).
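For illustration, here is a rough sketch of the instanced path (assuming a GL 3.3 context for glVertexAttribDivisor, and a bound VAO already holding the mesh in attribute 0; 'transforms', 'instanceCount', and 'vertexCount' are hypothetical, and error handling is omitted):

    GLuint instanceVBO;
    glGenBuffers(1, &instanceVBO);
    glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
    // One vec4 (x, y, scale, rotation) per instance.
    glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 4 * instanceCount,
                 transforms, GL_STATIC_DRAW);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, nullptr);
    glVertexAttribDivisor(1, 1); // advance attribute 1 once per instance
    glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, instanceCount);

The vertex shader then reads the per-instance vec4 and applies the translation/scale/rotation itself, replacing the per-instance push/translate/rotate/pop on the CPU.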

Collisions in computer games

I have a general question regarding the techniques or approaches used in computer games to detect particle (or object) collisions. Even in a simple example, like the currently popular Angry Birds, how is it achieved programmatically that the game knows one object has hit another and works out its trajectory, while that object can in turn hit others, and so on? I presume it is not constantly checking the states of ALL the objects in the game map...
Thanks for any hints/answers, and sorry for the somewhat basic, general question.
Games like Angry Birds use a physics engine to move objects around and do collision detection. If you want to learn more about how physics engines work, a good place to start is by reading up on Box2D. I don't know what engine Angry Birds uses, but Box2D is widely used and open-source.
For simpler games, however, you probably don't need a physics engine, and the collision detection is fairly simple to do. It involves two parts:
First is determining which objects to test collision between. For many small games it is actually not a bad idea to test every single object against every other object. Although your question is leery of this brute-force approach, computers are very fast at routine math like that.
For larger scenes however you will want to cull out all the objects that are too far away from the player to matter. Basically you break up the scene into zones and then first test which zone the character is in, then test for collision with every object in that zone.
One technique for doing this is called a "quadtree". Another technique you could use is Binary Space Partitioning. In both cases the scene is broken up into many smaller pieces that you can use to organize it.
The second part is detecting the collision between two specific objects. The easiest way to do this is distance checks; just calculate how far apart the two objects are, and if they are close enough then they are colliding.
Almost as easy is to build a bounding box around the objects you want to test. It is pretty easy to check if two boxes are overlapping or not, just by comparing their coordinates.
Now bounding boxes are just rectangles; this approach could get more complex if the shapes you want to collide are represented by arbitrary polygons. Some basic knowledge of geometry is usually enough for this part, although full-featured physics engines can get into a lot more detail, telling you not just that a collision has happened, but exactly when and how it happened.
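For concreteness, here are the two simple tests just described, as minimal C++ sketches (names are illustrative):

    // Distance check, treating each object as a circle. Comparing
    // squared distances avoids a square root.
    bool circlesCollide(float x1, float y1, float r1,
                        float x2, float y2, float r2) {
        float dx = x2 - x1, dy = y2 - y1;
        float r = r1 + r2;
        return dx * dx + dy * dy <= r * r;
    }

    // Bounding-box check: two axis-aligned rectangles overlap exactly
    // when they overlap on both axes.
    bool boxesOverlap(float ax, float ay, float aw, float ah,
                      float bx, float by, float bw, float bh) {
        return ax < bx + bw && bx < ax + aw &&
               ay < by + bh && by < ay + ah;
    }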
This is one of those hard-to-answer questions, but here's a very basic 2D example, in pseudo-code:
    collision = false
    for each sprite in scene:
        for each othersprite in scene:
            if sprite is not othersprite and sprite.XY == othersprite.XY:
                collision = true
Hopefully that should be enough to get your thought muscles working!
Addendum: one other improvement would be to test an area around the XY position instead of the precise location, which gives you rudimentary support for sprites that cover larger areas.
For 3D theory, I'll give you the article I read a while ago:
http://www.euclideanspace.com/threed/animation/collisiondetect/index.htm
num1's answer covers most of what I would say, so I voted it up. But there are a few modifications/additional points I would make:
1) Bounding boxes are actually the second easiest way to test collisions. The easiest way is distance checks: just calculate how far apart the two objects are, and if they are close enough, they are colliding.
2) Another common way of organizing the scene to optimize collision tests (i.e. an alternative to BSP) is quadtrees. In either case, you basically break up the scene into zones, first test which zone the character is in, and then test for collision with every object in that zone. That way, if the level is huge, you don't test against every single object in it; you can cull out all the objects that are too far away from the player to matter.
3) I would put more emphasis on the fact that Angry Birds uses a physics engine to handle its collisions. While distance checks and bounding boxes are fine for most 2D games, Angry Birds clearly did not use either of those simple methods, relying instead on its physics engine's collision detection. I don't know which physics engine is used in that game, but Box2D is widely used and open-source.