Collisions in computer games

I have a general question about the techniques and approaches used in computer games to detect collisions between particles or objects. Even in a simple example, like the currently popular Angry Birds, how is it achieved programmatically that the game knows one object has hit another and works out its trajectory, while that object can hit others in turn, etc.? I presume it is not constantly checking the states of ALL the objects in the game map...
Thanks for any hints/answers, and sorry for such a basic/general question.

Games like Angry Birds use a physics engine to move objects around and do collision detection. If you want to learn more about how physics engines work, a good place to start is by reading up on Box2D. I don't know what engine Angry Birds uses, but Box2D is widely used and open-source.
For simpler games, however, you probably don't need a physics engine, so the collision detection is fairly simple to do and involves two parts:
First is determining which objects to test for collision. For many small games it is actually not a bad idea to test every single object against every other object. Although your question is wary of this brute-force approach, computers are very fast at routine math like that.
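A sketch of that brute-force pass (names and the circle-based per-pair test are my own, for illustration):

```python
import itertools
import math

def circles_collide(a, b):
    """a and b are (x, y, radius) tuples; they collide when the
    distance between centres is at most the sum of the radii."""
    return math.hypot(a[0] - b[0], a[1] - b[1]) <= a[2] + b[2]

def all_collisions(sprites):
    """Brute force: test every unordered pair exactly once."""
    return [(a, b) for a, b in itertools.combinations(sprites, 2)
            if circles_collide(a, b)]

# Hypothetical scene: three circles, where only the first two overlap.
scene = [(0.0, 0.0, 1.0), (1.5, 0.0, 1.0), (10.0, 10.0, 1.0)]
hits = all_collisions(scene)
```

Using `itertools.combinations` tests each pair once instead of twice, which is already half the work of the naive double loop.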
For larger scenes, however, you will want to cull out all the objects that are too far away from the player to matter. Basically, you break up the scene into zones, first test which zone the character is in, then test for collision with every object in that zone.
One technique for doing this is called "quadtree". Another technique you could use is Binary Space Partitioning. In both cases the scene is broken up into a lot of smaller pieces that you can use to organize the scene.
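The zone idea in its simplest form is a uniform grid (a quadtree refines this adaptively where objects cluster). A rough sketch, with invented names and a point-sized object model:

```python
from collections import defaultdict

CELL = 10.0  # zone size; tune to roughly the size of your objects

def build_grid(objects, cell=CELL):
    """Bucket each (x, y) object into the grid zone containing it."""
    grid = defaultdict(list)
    for x, y in objects:
        grid[(int(x // cell), int(y // cell))].append((x, y))
    return grid

def candidates(grid, x, y, cell=CELL):
    """Collect objects in the query point's zone and the 8
    neighbouring zones; everything else is too far away to collide."""
    cx, cy = int(x // cell), int(y // cell)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out.extend(grid.get((cx + dx, cy + dy), []))
    return out

grid = build_grid([(1, 1), (2, 3), (95, 95)])
near = candidates(grid, 0, 0)  # excludes the far-away object
```

You then run the precise pairwise test only against `candidates`, not against the whole scene.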
The second part is detecting the collision between two specific objects. The easiest way to do this is distance checks; just calculate how far apart the two objects are, and if they are close enough then they are colliding.
Almost as easy is to build a bounding box around the objects you want to test. It is pretty easy to check if two boxes are overlapping or not, just by comparing their coordinates.
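A minimal sketch of that box comparison, assuming boxes are stored as (xmin, ymin, xmax, ymax):

```python
def aabb_overlap(a, b):
    """a and b are (xmin, ymin, xmax, ymax) axis-aligned boxes.
    They overlap unless one lies entirely to one side of the other."""
    return (a[0] <= b[2] and b[0] <= a[2] and
            a[1] <= b[3] and b[1] <= a[3])
```

For example, `aabb_overlap((0, 0, 2, 2), (1, 1, 3, 3))` is true, while two boxes that share no area return false.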
Now bounding boxes are just rectangles; this approach could get more complex if the shapes you want to collide are represented by arbitrary polygons. Some basic knowledge of geometry is usually enough for this part, although full-featured physics engines can get into a lot more detail, telling you not just that a collision has happened, but exactly when and how it happened.

This is one of those hard-to-answer questions, but here's a very basic 2D example in pseudo-code.
collision = false
for-each sprite in scene:
    for-each othersprite in scene:
        if sprite is not othersprite and sprite.XY == othersprite.XY:
            collision = true
Hopefully that should be enough to get your thought muscles working!
Addendum: an improvement would be to test an area around the XY instead of a precise location, which gives you basic support for sprites that cover a larger area.
For 3D theory, I'll give you the article I read a while ago:
http://www.euclideanspace.com/threed/animation/collisiondetect/index.htm

num1's answer covers most of what I would say and so I voted it up. But there are a few modifications/additional points I would make:
1) Bounding boxes are actually the second easiest way to test collision. The easiest way is distance checks; just calculate how far apart the two objects are, and if they are close enough then they are colliding.
2) Another common way of organizing the scene in order to optimize collision tests (i.e. an alternative to BSP) is quadtrees. In either case, basically you break up the scene into zones and then first test which zone the character is in, then test for collision with every object in that zone. That way if the level is huge you don't test against every single object in the level; you can cull out all the objects that are too far away from the player to matter.
3) I would emphasize more that Angry Birds uses a physics engine to handle its collisions. While distance checks and bounding boxes are fine for most 2D games, Angry Birds clearly did not use either of those simple methods and instead relied on collision detection from their physics engine. I don't know what physics engine is used in that game, but Box2D is widely used and open-source.

Related

Simulating contraction of a muscle in a skeleton

Using spherical nodes, cylindrical bones, and cone-twist constraints, I've managed to create a simple skeleton in 3 dimensions. I'm using an offshoot of the Bullet physics library (physijs by @chandlerprall, along with three.js).
Now I'd like to add muscles. I've been trying for the last two days to get some sort of sliding constraint or generic 6-DOF constraint to get the muscle to be able to contract and pull its two nodes towards one another.
I'm getting all sorts of crazy results, and I'm beginning to think that I'm going about this in the wrong way. I don't think I can simply use two cone twist constraints and then scale the muscle along its length-wise axis, because scaling collision meshes is apparently fairly expensive.
All I need is a 'muscle' which can attach to two nodes and 'contract' to pull in both its nodes.
Can anyone provide some advice on how I might best approach this using the bullet engine (or really, any physics engine)?
EDIT: What if I don't need collisions to occur for the muscle? Say I just need a visual muscle which is constrained to 2 nodes:
The two nodes are linearly constrained to the muscle collision mesh, which instead of being a large mesh, is just a small one that is only there to keep the visual muscle geometry in place, and provide an axis for the nodes to be constrained to.
I could then use the linear motor that comes with the sliding constraint to move the nodes along the axis. Can anyone see any problems with this? My initial problem is that the smaller collision mesh is a bit volatile and seems to move around all over the place...
I don't have any experience with Bullet. However, there is a large academic community that simulates human motion by modeling the human as a system of rigid bodies. In these simulations, the human is actuated by muscles.
The muscles used in such simulations are modeled to generate force in a physiological way. The amount of force a muscle can produce at any given instant depends on its length and the rate at which its length is changing. Here is a paper that describes a fairly complex muscle model that biomechanists might use: http://nmbl.stanford.edu/publications/pdf/Millard2013.pdf.
Another complication with modeling muscles that comes up in biomechanical simulations is that the path of a muscle must be able to wrap around joints (such as the knee). This is what you are trying to get at when you mention collisions along a muscle. This is called muscle wrapping. See http://www.baylor.edu/content/services/document.php/41153.pdf.
I'm a graduate student in a lab that does simulations of humans involving many muscles. We use the multibody dynamics library (physics engine) Simbody (http://github.com/simbody/simbody), which allows one to define force elements that act along a path. Such paths can be defined in pretty complex ways: they could wrap around many different surfaces. To simulate muscle-driven human motion, we use OpenSim (http://opensim.stanford.edu), which in turn uses Simbody to simulate the physics.

Box2d custom shape or bodies joint

Let's say I want to create a simple physics object shaped like a matryoshka doll, or an ordinary snowman. As I see it, I have two options: 1. create two circle (or perhaps custom) bodies and connect them with a weld joint, or 2. create one body with two circle (or custom) shapes in it.
So the question is: which is more expensive for the CPU, bodies connected with joints or complex-shaped bodies? With one object I may not feel a difference in performance, but what if I have many objects of that type?
I know that joints are expensive, but maybe custom-shaped bodies are even more expensive?
I'm working with Box2dFlash.
To answer the question as asked: joints use more CPU than shapes alone with no joint. Circle shapes can be more efficient than polygons in many cases, but not all.
For CPU optimization, use as few bodies and as simple polygons as possible. For every normal defined in a polygon, a calculation may need to be performed if an object overlaps with another. For circles, a maximum of one calculation is needed.
As an aside, unless you are already experiencing performance problems, you should not worry about whether your shapes are ideal for CPU use. Instead, you should ask whether the simulation you create is the one you want to happen. Box2D contains many special-case optimizations to make it run smoothly. You can also decrease its accuracy per tick by setting the velocity and position iteration variables. This will have a far greater effect on efficiency than geometry, unless your geometry is extremely complex.
If you don't want the two sections of the snowman's body to move in relation to each other, then a single body is the way to go. You will ideally be defining the collision shape(s) manually anyway, so there is absolutely no gain to be had using a weld.
Think of it like this: If you use (a) weld(s), the collision shapes will be no less complicated than if you simply made an approximation of the collision geometry for a single body; it would still require either multiple collision shapes, or a single complex collision shape regardless of your choice. The weld will simply be adding an extra step.
If you need the snowman's body to be dynamic in any way (break apart or flex), then a weld or regular joint is the way to go.

Is there a random factor in AwayPhysics, or in physics engines generally?

Let's say I throw a cube and it falls on the ground with 45, 45, 0 rotations (on its corner). Now, in a 'perfect' world the cube wouldn't consist of atoms, it would be 'perfect', there would be no wind (or any lesser movement of air), etc. And in the end the cube would stay on its corner. But we don't live in such a boring 'perfect' world, and physics emulators should take this into account, which they do quite nicely. So the cube falls on its side.
Now my question is: how random is that? Does the cube always fall on its left side? Or maybe it depends on Math.random()? Or on the current time? Or maybe on some custom random function that takes not time, but the parameters of the objects on stage, as its seed?
The reason I'm asking is that if the randomness weren't based on time, I could probably cache the results of collisions (when objects stop) for each particular initial position to optimize my animation. If I cached the whole animation I wouldn't care, but if I only cache the end result, I could be surprised that two exactly identical situations evaluate to different results, and then the live run wouldn't fit my cached version.
I could just check the source for Math.random calls, but that would be a shallow method, as the code is surely optimized and such sophisticated randomization isn't needed there; personally I would use something like fallLeft = time % 2. Also, the code could change over time.
I couldn't find anything about AwayPhysics here, so it's probably new to everyone. That's why I added the parenthetical part: the world won't explode if I assume one thing and it happens to be the opposite in AwayPhysics; I just want to know what the standard is.
I, personally, don't use pre-made physics engines; when I want one, I write it myself, so I know how they work inside. The reason the cube tips over is that the physics engine is inaccurate. It can only approximate things like trig functions, square roots, integrals, et cetera, so it estimates them to a few digits of accuracy (about 15 significant digits for JavaScript's doubles).
If you have, say, two perfect circles stacked on top of each other, the angle between them (pi/2) would slowly drift to some seemingly random value based on the way the program approximates pi. Eventually this tiny error would grow as the circles rolled off each other, and the top one would simply fall.
So, in answer to your question: the cube should fall the same way each time if thrown in the same way, but the direction in which it always falls is effectively random.
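That determinism can be sketched with a toy simulation (this is a made-up 1-D bouncing ball in Python, not AwayPhysics itself): rerunning with bit-identical inputs gives bit-identical results, but a minutely different start state gives a different end state, so cached results are only valid for exactly matching initial conditions.

```python
def simulate(x0, v0, steps, dt=1.0 / 60.0, g=9.81):
    """Toy 1-D bouncing ball using plain float arithmetic."""
    x, v = x0, v0
    for _ in range(steps):
        v -= g * dt        # gravity
        x += v * dt        # integrate position
        if x < 0.0:        # hit the floor: reflect
            x, v = -x, -v
    return x, v

a = simulate(10.0, 0.0, 5000)
b = simulate(10.0, 0.0, 5000)         # bit-identical rerun of a
c = simulate(10.0 + 1e-9, 0.0, 5000)  # minutely different start
# a == b (fully deterministic), but c differs from a
```

Real engines behave the same way: there is no hidden randomness, only sensitivity to initial conditions.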

Collision Detection in a Vector Based 2D Platformer

Like many others before, I'm coding a 2D platformer at the moment, or more precisely an engine for one. My question is about collision detection and especially the reaction to collisions.
It's vital to me to have surfaces that are NOT tile-based, because the world of the platformer urgently needs surfaces that are not straight.
What do you think is the best approach for implementing such?
I've already come to a solution, but I'm only about 95% satisfied with it.
My Entity is component based so at first it calculates all the movement. Horizontal position change. Falling. Jumping speed; all that. Then it stores these values temporarily; accessible to the Collision component.
This one adds 4 hitpoints to the entity. Two for the floor collision and two for the wall collision.
Like this: [diagram of the entity's four hitpoints] (source: fux-media.com)
I first check against the floor. When there IS collision, I iterate collision detections upwards until there is no more collision.
Then I reset the Entity to this point. If both ground hitpoints collide, I reset the Entity to the lowest point to avoid the Entity floating in the air.
Then I check against the wall with the upper hitpoints. Basically, if it collides, I do the same thing as above, just horizontally.
It works quite well. Very steep inclines are treated like a wall and very shallow inclines are simply climbed. But in between, when the incline is around 45 degrees, it behaves strangely. It does not quite feel right.
Now, I could just avoid implementing 45 degree walls in my game, but that would just not be clean, as I'm programming an engine that is supposed to work right no matter what.
So... how would you implement this, algorithmically?
I think you're on the right track with the points. You could combine those points to form a polygon, and then use the resulting polygon to perform collision detection.
Phys2D is a library that happens to be in Java that does collision between several types of geometric primitives. The site includes downloadable/runnable demos on what the library is capable of.
If you look around, you can find 2D polygon collision detection implementations in most common languages. With some study, an understanding of the underlying geometry calculations, and some gusto, you could even write your own if you like.

Differentiate objects?

I want to identify a ball in a picture. I am thinking of using the Sobel edge detection algorithm; with this I can detect the round objects in the image.
But how do I differentiate between different objects? For example, one picture contains a football and another contains the moon. How do I tell which object has been detected?
When I use my algorithm I get a ball in both cases. Any ideas?
Well, if all the objects you would like to differentiate are round, you could even use a Hough transform for circles. This is a very good way of detecting round objects.
But your basic problem seems to be classification - sorting the objects on your image into different classes.
For this you don't really need a neural network; you could simply try a nearest-neighbor match. Its functionality is a bit like a neural network's, since you give it several reference pictures, tell the system what can be seen in each, and it settles on the typical values of each attribute you detected. This gives you a dictionary of clusters for the different types of objects.
But for this you'll of course first need something that distinguishes a ball from a moon.
Since they are all round objects (which appear as circles), it is useless to compare circularity, circumference, diameter or area (unless your camera is steady and you know the moon will always have the same size in your images, unlike a ball).
So basically you need to look inside the objects themselves. You can try to compare their mean color or grayscale value, or the contrast inside the object (the moon will mostly have mid-gray values, whereas a soccer ball consists of black and white parts).
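A sketch of those two features on hypothetical, made-up pixel patches (a real implementation would work on the segmented object from your image):

```python
from statistics import mean, pstdev

def intensity_features(pixels):
    """pixels: 2-D list of grayscale values, where 0 marks the
    background after segmentation. Returns (mean, contrast) of
    the object pixels, using standard deviation as contrast."""
    vals = [p for row in pixels for p in row if p > 0]
    return mean(vals), pstdev(vals)

# Hypothetical 2x2 patches: the moon is mostly mid-gray,
# a soccer ball mixes near-black and near-white panels.
moon_patch = [[120, 130], [125, 135]]
ball_patch = [[30, 240], [235, 25]]

m_mean, m_contrast = intensity_features(moon_patch)
b_mean, b_contrast = intensity_features(ball_patch)
```

The means come out similar, but the ball's contrast is far higher than the moon's, which is the kind of discriminating attribute you want.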
You could also run edge filters on the segmented objects just to determine which is more "edgy" in its texture. But for this there are better methods I guess...
So basically what you need to do first:
1) Find several attributes that help you distinguish the different round objects (assuming they are already separated).
2) Implement something to extract these values from a picture of a round object (which is already segmented, of course, so it has a background of 0).
3) Build a supervised learning system that you feed several images and their class, with several images of each type (there are many implementations of that online).
Now you have your system running and can give other objects to it to classify. For this you need to:
1) Segment the objects in the image, e.g. with edge filters or a Hough transform.
2) Run each segmented object through your classification system; it should tell you which class (type of object) it belongs to...
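The nearest-neighbor match mentioned above can be sketched like this (the feature vectors and training values are invented for illustration; in practice they would come from your attribute extraction step):

```python
def classify_1nn(sample, examples):
    """1-nearest-neighbor: examples is a list of
    (feature_vector, label) pairs; return the label of the
    example closest to sample in feature space."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda e: sqdist(sample, e[0]))[1]

# Hypothetical training set using (mean intensity, contrast) features.
training = [
    ((128, 6), "moon"), ((120, 9), "moon"),
    ((135, 100), "ball"), ((140, 95), "ball"),
]
label = classify_1nn((130, 90), training)
```

A high-contrast query lands nearest a "ball" example; in a real system you would normalize each feature so no single attribute dominates the distance.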
Hope that helps... if not, please keep asking...
When you apply an edge detection algorithm you lose information.
Thus the moon and the ball look the same.
The moon has a different color, a different texture, ... you can use this information to differentiate which object has been detected.
That's a question in AI.
If you think about it, the reason you know it's a ball and not a moon, is because you've seen a lot of balls and moons in your life.
So, you need to teach the program what a ball is, and what a moon is. Give it some kind of dictionary or something.
The problem with a dictionary of course would be that to match the object with all the objects in the dictionary would take time.
So the best solution would probably be using neural networks. I don't know what programming language you're using, but there are neural network implementations for most languages I've encountered.
You'll have to read a bit about it, decide what kind of neural network, and its architecture.
After you have it implemented it gets easy. You just give it a lot of pictures to learn (neural networks get a vector as input, so you can give it the whole picture).
For each picture you give it, you tell it what it is. So you give it like 20 different moon pictures, 20 different ball pictures. After that you tell it to learn (built in function usually).
The neural network will go over the data you gave it, and learn how to differentiate the 2 objects.
Later you can use the network you trained: give it a picture, and it will give you a score of what it thinks it is, like 30% ball, 85% moon.
This has been discussed before. Have a look at this question. More info here and here.
