Efficient collision detection between balls (ellipses) - html5-canvas

I'm running a simple sketch in an HTML5 canvas using Processing.js that creates "ball" objects, which are just ellipses that have position, speed, and acceleration vectors as well as a diameter. In the draw function, I call a function named applyPhysics() that loops over every ball in a hashmap and checks each one against all the others to see whether their positions make them crash. If they do, their speed vectors are reversed.
Long story short, the number of calculations as it is now is (number of balls)^2, which ends up being a lot when I get into the hundreds of balls. This sort of check slows the sketch down too much, so I'm looking for a smarter way to do collisions.
Any suggestions? Using PGraphics somehow maybe?

I assume you're already simplifying the physics by treating the ellipses as if they were circles.
Other than that, check out quadtree collision detection:
http://gamedev.tutsplus.com/tutorials/implementation/quick-tip-use-quadtrees-to-detect-likely-collisions-in-2d-space/
I don't know your project, but if the balls have non-random forces applied to them (e.g. gravity), you might also be able to predict likely collisions ahead of time.
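For what it's worth, a minimal sketch of that circle simplification, in TypeScript for clarity, with a hypothetical Ball shape standing in for your ball objects:

// Treat each ellipse as a circle of radius = diameter / 2.
// `Ball` is a hypothetical stand-in for the sketch's ball objects.
interface Ball { x: number; y: number; diameter: number; }

function collides(a: Ball, b: Ball): boolean {
  const dx = a.x - b.x;
  const dy = a.y - b.y;
  const rSum = (a.diameter + b.diameter) / 2;
  // Compare squared distances to avoid the sqrt call.
  return dx * dx + dy * dy < rSum * rSum;
}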

If you grid the space and create a data structure that reflects it (e.g. an array of row objects, each containing an array of column objects, each containing an ArrayList of ball objects), you only have to consider interactions within each cell (or, to be safe, with neighbouring cells as well). Balls that cross a cell boundary are reassigned to the new cell. You then have far fewer interactions to check.
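A minimal sketch of that bucketing, reusing the Ball type and collides() check from the sketch above; the cell size is an assumption to tune, and neighbouring-cell checks are omitted for brevity:

const CELL = 50;                       // cell size in pixels (tune to ball size)
const grid = new Map<string, Ball[]>();

function cellKey(x: number, y: number): string {
  return `${Math.floor(x / CELL)},${Math.floor(y / CELL)}`;
}

function rebuildGrid(balls: Ball[]): void {
  grid.clear();
  for (const b of balls) {
    const key = cellKey(b.x, b.y);
    const bucket = grid.get(key);
    if (bucket) bucket.push(b); else grid.set(key, [b]);
  }
}

function checkCollisions(): void {
  for (const bucket of grid.values()) {
    // Only pairs within the same cell are tested.
    for (let i = 0; i < bucket.length; i++) {
      for (let j = i + 1; j < bucket.length; j++) {
        if (collides(bucket[i], bucket[j])) {
          // reverse speed vectors, etc.
        }
      }
    }
  }
}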

Related

Very fast boolean difference between two meshes

Let's say I have a static object and a movable object that can be moved and rotated. What is the best way to very quickly calculate the boolean difference of those two meshes?
Precision here is not so important, speed is though, since I have to use it in the update phase of the main loop.
Maybe, given the strict time limit, modifying the static object's vertices and triangles directly is to be preferred. Or should voxels be preferred here instead?
EDIT: The use case is an interactive viewer of a wood panel (a parallelepiped) and a milling tool (a revolved contour, something like these).
The milling tool can be rotated and can work oriented at varying degrees (5 axes).
EDIT 2: The milling tool may not pierce the wood.
EDIT 3: The panel can be as large as 6000x2000mm and the milling tool can be as little as 3x3mm.
If you need the best possible performance then the generic CSG approach may be too slow for you (though it still depends on the meshes and the target hardware).
You may try to find some specialized algorithm coded for your specific meshes. Say you have two cubes, one a 'wall' and the other a 'window': it's much easier/faster to compute the resulting mesh with custom code than with full CSG. Unfortunately you don't say much about your meshes.
You may also try to reduce it to a 2D problem, using simplified meshes to compute a result that 'looks as expected'.
If the movement of your meshes is somehow limited you may be able to precompute full or partial results for different mesh combinations to use at runtime.
You may use some space partitioning like BSP trees or octrees to divide your meshes during a precomputation stage. This way you can split one big problem into many smaller ones that may be faster to compute, or at least make the solution multi-threaded.
You mentioned voxels: if you're fine with their look and limits, you may voxelize both meshes and then read and mix two voxel values instead of one, and triangulate the result with an algorithm like Marching Cubes.
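As a minimal sketch of that mixing step, assuming both meshes have already been voxelized into boolean occupancy grids of the same dimensions (the triangulation step, e.g. Marching Cubes, would follow):

// Boolean difference on two voxel grids (panel minus tool), both stored as
// flat W*H*D arrays where 1 = solid and 0 = empty.
function voxelDifference(
  panel: Uint8Array,  // occupancy of the wood panel
  tool: Uint8Array,   // occupancy of the milling tool, same layout
): Uint8Array {
  const out = new Uint8Array(panel.length);
  for (let i = 0; i < panel.length; i++) {
    // A voxel stays solid only where the panel is solid and the tool is not.
    out[i] = panel[i] & (tool[i] ^ 1);
  }
  return out;
}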
Those are all just some general ideas but we'll need better info to help you more.
EDIT:
With your description it looks like you're modeling a bas-relief, so you could use Relief Mapping to fake the effect. It's based on a height map stored in a texture, so you'd only need to update a few pixels of the texture and render a plane. It should be quite fast compared to the other approaches; the downside is that, being height-map based, you can't get the shapes a T-slot or dovetail cutter would create.
If you want real geometry then I'd start from a simple plane as your panel (no need for full 3D yet, just the front surface) and divide it with a 2D grid. Each grid element should be slightly bigger than the drill, and every element is a separate mesh. In a frame update you'd cut one, or at most 4, elements that the drill touches. Thanks to this grid, every cutting operation runs on a very simple mesh, so it may reach your intended speed. You can also cut the affected elements in separate threads. After the cutting is done, you upload only the modified elements to the GPU, so you can end up with a quite complex mesh overall but only small modifications per frame.

Efficiently retrieve Z ordered objects in viewport in 2D game

Imagine a 2D game with a large play area, say 10000x10000 pixels. Now imagine there are thousands of objects spread across this area. All the objects are in a Z order list, so every object has a well-defined position relative to every other object even if they are far away.
Suppose there is a viewport in to this play area showing say a 500x500 area of this play area. Obviously if the algorithm is "for each object in Z order list, if inside viewport, render it" then you waste a lot of time iterating all the thousands of objects far outside the viewport. A better way would be to maintain a Z-ordered list of objects near or inside the viewport.
If both the objects and the viewport are moving, what is an efficient way of maintaining a Z-ordered list of objects which are candidates to draw? This is for a general-purpose game engine so there are not many other assumptions or details you can add in to take advantage of: the problem is pretty much just that.
You do not need to keep your memory layout strongly ordered by Z. Instead, you need to store your objects in a space-partitioning structure that is oriented along the viewing surface.
A typical partitioning structure in 2D is the quadtree. You can also use a binary tree, a grid, or a spatial hashing scheme, and you can even mix these techniques, nesting one inside another.
There is no "best"; weigh the ease of writing and maintaining the code against the memory you have available.
Let us consider the grid: it is the simplest to implement, the fastest to access, and the easiest to traverse (traversing meaning moving to neighbouring cells).
Imagine you allow yourself 20 MB of RAM for the grid skeleton, and the content of each cell is just a small object (like a std::vector or a C# List), say 50 bytes. For a 10k-pixel square surface you then have:
sqrt(20*1024*1024 / 50) ≈ 647
That is 647 cells per dimension, therefore cells 10000/647 ≈ 15 pixels wide.
That is very small, probably smaller than necessary. You can adjust the numbers to get 512-pixel cells, for example; as a rule of thumb, the grid is a good fit when a few cells cover the viewport.
Then it is trivially easy to determine which cells are activated by the viewport: divide the top-left corner by the cell size and floor the result, and you get the cell index directly (provided both your viewport space and grid space start at (0,0); otherwise you need to apply an offset first).
Finally, take the bottom-right corner, determine its cell coordinate the same way, and run a double loop (x and y) between the min and max to iterate over the activated cells.
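A small sketch of that mapping, assuming the grid and viewport both start at the origin and 512-pixel cells:

const CELL = 512;   // cell size in pixels, per the estimate above

function activatedCells(vx: number, vy: number, vw: number, vh: number): Array<[number, number]> {
  // Floor-divide the viewport corners to get the min/max cell indices.
  const minX = Math.floor(vx / CELL);
  const minY = Math.floor(vy / CELL);
  const maxX = Math.floor((vx + vw) / CELL);
  const maxY = Math.floor((vy + vh) / CELL);
  const cells: Array<[number, number]> = [];
  for (let y = minY; y <= maxY; y++)
    for (let x = minX; x <= maxX; x++)
      cells.push([x, y]);
  return cells;
}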
When treating a cell, you draw the objects it contains by going through the list of objects you previously stowed there.
Beware of objects that span 2 cells or more. You have to make a choice: either store each object exactly once, but then your search algorithm always needs to know the size of the biggest element in the region and also search the lists of neighbouring cells (going as far as necessary to cover at least that biggest element's size).
Or you can store an object multiple times (my preferred way) and simply make sure, when you iterate cells, that you treat each object only once per frame. This is very easily achieved with a frame id stored in the object structure (as a mutable member).
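A minimal sketch of the frame-id trick; GameObject and the grid layout are hypothetical stand-ins (cells would be filled when objects are stowed):

interface GameObject { z: number; lastFrameDrawn: number; draw(): void; }

const grid: GameObject[][][] = [];   // grid[y][x] -> objects stowed in that cell
let frameId = 0;

function drawVisible(cells: Array<[number, number]>): void {
  frameId++;
  for (const [x, y] of cells) {
    const bucket = grid[y]?.[x];
    if (!bucket) continue;
    for (const obj of bucket) {
      if (obj.lastFrameDrawn === frameId) continue;  // already drawn this frame
      obj.lastFrameDrawn = frameId;
      obj.draw();
    }
  }
}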
The same logic applies to more flexible partitions such as binary trees.
I have implementations of both available in my engine; check the code out, it may help you work through the details: http://sourceforge.net/projects/carnage-engine/
A final word about your Z ordering: if you keep separate storage for each Z value, you have already done a space partitioning, just not along the most useful axis. This can be called layering.
As an optimization, instead of storing plain lists of objects in your cells, you can store (ordered) maps keyed by Z, so that iteration within a cell is already ordered along Z.
A typical solution to this sort of problem is to group your objects according to their approximate XY location. For example, you can bucket them into 500x500 regions, i.e. objects intersecting [0,500]x[0,500], objects intersecting [500,1000]x[0,500], etc. Very large objects might be listed in multiple buckets, but presumably there are not too many very large objects.
For each viewport, you would need to check at most 4 buckets for objects to render. You will generally only look at about as many objects as you need to render anyway, so it should be efficient. This does require a bit more work updating when you reposition objects. However, assuming that a typical object is only in one bucket, it should still be pretty fast.

Grab objects near camera

I was looking at http://threejs.org/examples/webgl_nearestneighbour.html and had a few questions come up. I see that they use a kd-tree to store all the particle positions and then have a function to determine the nearest particle and color it. Let's say you have a canvas with around 100 buffer geometries with around 72000 vertices per geometry. The only way I know to do this is to take the positions of the buffer geometries and put them into the kd-tree to determine the nearest vertex and go from there. That sounds very expensive.
What other ways are there to find the objects that are near the camera? Something like how THREE.LOD does it? http://threejs.org/examples/#webgl_lod It has the ability to see how far away an object is and render different levels of detail depending on the settings you provide.
Define "expensive". The point of the kdtree is to find nearest neighbour elements quickly, its primary focus is not on saving memory (Although it does everything inplace on a typed array, it's quit cheap in terms of memory already). If you need to save memory you maybe have to find another way.
Yet a typed array with length 21'600'000 is indeed a bit long. I highly doubt you have to have every single vertex in there. Why not have a position reference point for every geometry part? And, if you need to get the vertices associated to that point, a dictionary. Then you can call myGeometryVertices[ geometryReferencePoint ].
Three.LOD works with raycasts. If you have a (few) hundred objects that might work well. If you have hundred thousands or even millions of positions you'll get some troubles. Also if you're not using meshes; you can't raytrace e.g. a particle.
Really just build your own logic with them. None of those two provide a prebuilt perfect-for-all-cases solution.
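A rough sketch of the reference-point idea in three.js terms: index one bounding-sphere centre per geometry instead of every vertex. With only ~100 points, even a linear scan is cheap, so the kd-tree becomes optional. All names apart from the three.js API itself are illustrative:

import * as THREE from 'three';

// One entry per geometry: its bounding-sphere centre in world space,
// plus a way back to the full vertex data if it is ever needed.
const referencePoints: THREE.Vector3[] = [];
const verticesByPoint = new Map<THREE.Vector3, THREE.BufferAttribute>();

function register(mesh: THREE.Mesh): void {
  mesh.geometry.computeBoundingSphere();
  const centre = mesh.geometry.boundingSphere!.center
    .clone()
    .applyMatrix4(mesh.matrixWorld);
  referencePoints.push(centre);
  verticesByPoint.set(
    centre,
    mesh.geometry.getAttribute('position') as THREE.BufferAttribute,
  );
}

function nearestToCamera(camera: THREE.Camera): THREE.Vector3 | undefined {
  let best: THREE.Vector3 | undefined;
  let bestDist = Infinity;
  for (const p of referencePoints) {        // ~100 points, not 21.6M values
    const d = p.distanceToSquared(camera.position);
    if (d < bestDist) { bestDist = d; best = p; }
  }
  return best;
}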

Avoid O(n^2) complexity for collision detection

I am developing a simple tile-based 2D game. I have a level, populated with objects that can interact with the tiles and with each other. Checking collision with the tilemap is rather easy and it can be done for all objects with a linear complexity. But now I have to detect collision between the objects and now I have to check every object against every other object, which results in square complexity.
I would like to avoid the quadratic complexity. Are there any well-known methods to reduce the number of collision-detection calls between objects? Are there any data structures (like a BSP tree, maybe) that are easily maintained and allow rejecting many collisions at once?
For example, the total number of objects in the level is around 500, and about 50 of them are on screen at a time...
Thanks!
Why not let the tiles store information about which objects occupy them? Then a collision can be detected whenever an object moves to a new tile, by checking whether that tile already contains another object.
This would cost virtually nothing.
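A minimal sketch of that scheme, assuming one occupant per tile; the Entity type and grid size are illustrative:

interface Entity { id: number; }

const occupants: (Entity | null)[][] =
  Array.from({ length: 64 }, () => Array<Entity | null>(64).fill(null));

// Returns the entity collided with, if any, and updates occupancy otherwise.
function moveTo(e: Entity, fromX: number, fromY: number,
                toX: number, toY: number): Entity | null {
  const other = occupants[toY][toX];
  if (other && other !== e) return other;   // collision: tile already taken
  occupants[fromY][fromX] = null;
  occupants[toY][toX] = e;
  return null;
}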
You can use a quadtree to divide the space and reduce the number of objects you need to check for collision.
See this article - Quadtree demonstration.
And perhaps this - Collision Detection in Two Dimensions.
Or this - Quadtree (source included)
It may seem, at first glance, that it takes a lot of CPU power to maintain the tree, but it also reduces the number of checks significantly (see the demonstration in the first link).
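To make the linked idea concrete, here is a compact quadtree sketch (insert and query only); the capacity and depth limits are illustrative, and it is a rough outline rather than a replacement for the tutorials above:

interface Rect { x: number; y: number; w: number; h: number; }

function intersects(a: Rect, b: Rect): boolean {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

function contains(outer: Rect, inner: Rect): boolean {
  return inner.x >= outer.x && inner.y >= outer.y &&
         inner.x + inner.w <= outer.x + outer.w &&
         inner.y + inner.h <= outer.y + outer.h;
}

class Quadtree {
  private objects: Rect[] = [];
  private children: Quadtree[] = [];

  constructor(private bounds: Rect, private depth = 0) {}

  insert(r: Rect): void {
    const child = this.children.find(c => contains(c.bounds, r));
    if (child) { child.insert(r); return; }
    this.objects.push(r);            // fits no child fully: keep it here
    if (this.objects.length > 8 && this.depth < 6 && this.children.length === 0) {
      this.split();
      const keep: Rect[] = [];
      for (const o of this.objects) {
        const c = this.children.find(ch => contains(ch.bounds, o));
        if (c) c.insert(o); else keep.push(o);   // straddlers stay at this level
      }
      this.objects = keep;
    }
  }

  // Collect candidates that could touch `range`; exact tests come afterwards.
  query(range: Rect, out: Rect[] = []): Rect[] {
    for (const o of this.objects) if (intersects(o, range)) out.push(o);
    for (const c of this.children)
      if (intersects(c.bounds, range)) c.query(range, out);
    return out;
  }

  private split(): void {
    const { x, y, w, h } = this.bounds;
    const hw = w / 2, hh = h / 2;
    for (const [cx, cy] of [[x, y], [x + hw, y], [x, y + hh], [x + hw, y + hh]]) {
      this.children.push(new Quadtree({ x: cx, y: cy, w: hw, h: hh }, this.depth + 1));
    }
  }
}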
Your game already has the concept of a gameplay-related tilemap. You can either co-opt this tilemap for collision detection, or overlay a secondary grid over your playing field used specifically for sprite tracking and collision detection.
Within your grid, each sprite occupies one or more tiles. The sprite knows which tiles it occupies, and the tiles know which sprites occupy them. When a sprite moves, you only need to check for collisions within the tiles that the sprite occupies. This means no collision detection is necessary at all unless sprites are close enough to occupy the same tiles, and even then you only need to check for collisions between the sprites that are close together.
If your gameplay is such that sprites will frequently clump together, you may want to implement your grid as a quadtree to allow each tile to be subdivided and prevent too many sprites from occupying the same grid tile.
Also, depending on your implementation, you may need to use a slightly larger bounding box to determine tile occupancy than you use to determine collisions. That way you'll be guaranteed that the sprites will overlap and occupy the same tile before their collision boundaries touch.
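A small sketch of that padded-occupancy idea; the TILE and PAD values are hypothetical:

const TILE = 32;   // tile size in pixels
const PAD = 4;     // occupancy margin beyond the collision box

function occupiedTiles(x: number, y: number, w: number, h: number): Array<[number, number]> {
  // Inflate the sprite's box by PAD so sprites share a tile before
  // their actual collision boundaries touch.
  const minC = Math.floor((x - PAD) / TILE);
  const maxC = Math.floor((x + w + PAD) / TILE);
  const minR = Math.floor((y - PAD) / TILE);
  const maxR = Math.floor((y + h + PAD) / TILE);
  const tiles: Array<[number, number]> = [];
  for (let r = minR; r <= maxR; r++)
    for (let c = minC; c <= maxC; c++) tiles.push([c, r]);
  return tiles;
}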

How does 3D collision / object detection work?

I've always wondered this. In a game like GTA, where there are tens of thousands of objects, how does the game know as soon as you step on a health pack?
There can't possibly be an event listener for each object? Iterating isn't good either? I'm just wondering how it's actually done.
There's no one answer to this, but large worlds are often space-partitioned using something along the lines of a quadtree or kd-tree, which brings the search time for finding nearest neighbours below linear time (a fractional power, at worst O(N^(2/3)) for a 3D game). These hierarchical methods are closely related to BSP (binary space partitioning).
With regard to collision detection, each object also generally has a bounding volume (a set of polygons forming a convex hull) associated with it. These highly simplified meshes (sometimes just a cube) aren't drawn but are used to detect collisions. The most rudimentary method is to create a plane perpendicular to the line connecting the two objects' midpoints, intersecting that line at its midpoint. If an object's bounding volume has points on both sides of this plane, there is a potential collision (you only need to test one of the two bounding volumes against the plane). Another method is the enhanced GJK distance algorithm. If you want a tutorial to work through, check out NeHe Productions' OpenGL lesson #30.
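A minimal sketch of that rudimentary midpoint-plane test, assuming the convex-hull vertices are available as plain arrays (all names illustrative):

type Vec3 = [number, number, number];

const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const dot = (a: Vec3, b: Vec3): number => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

// Build a plane halfway between the two centres, perpendicular to the line
// joining them, and check whether one hull has vertices on both sides.
function crossesMidplane(centreA: Vec3, centreB: Vec3, hullA: Vec3[]): boolean {
  const normal = sub(centreB, centreA);                 // plane normal
  const mid: Vec3 = [(centreA[0] + centreB[0]) / 2,
                     (centreA[1] + centreB[1]) / 2,
                     (centreA[2] + centreB[2]) / 2];    // point on the plane
  let sawNeg = false, sawPos = false;
  for (const v of hullA) {
    const side = dot(sub(v, mid), normal);
    if (side < 0) sawNeg = true; else if (side > 0) sawPos = true;
    if (sawNeg && sawPos) return true;   // vertices straddle the plane
  }
  return false;
}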
Incidentally, bounding volumes can also be used for other optimizations, such as occlusion queries: the process of determining which objects are behind other objects (occluders) and therefore do not need to be processed/rendered. Bounding volumes can also be used for frustum culling, the process of determining which objects are outside the perspective viewing volume (too near, too far, or beyond the field-of-view angle) and therefore do not need to be rendered.
As Kylotan noted, using a bounding volume can generate false positives when detecting occlusion, and it simply does not work at all for some kinds of objects, such as toroids (e.g. looking through the hole in a donut). Making objects like these occlude correctly is a whole other topic: portal culling.
Quadtrees and octrees are popular space-partitioning structures for accomplishing this; one quadtree example showed a 97% reduction in processing over a pair-by-pair brute-force search for collisions.
A common technique in game physics engines is the sweep-and-prune method. It is explained in David Baraff's SIGGRAPH notes (see the Motion with Constraints chapter). Havok definitely uses it; I think it's an option in Bullet, but I'm not sure about PhysX.
The idea is that you can look at the overlaps of AABBs (axis-aligned bounding boxes) on each axis: if the projections of two objects' AABBs overlap on all three axes, then the AABBs themselves must overlap. You can check each axis relatively quickly by sorting the start and end points of the AABBs, and there's a lot of temporal coherence between frames since most objects usually aren't moving very fast, so the sorted order doesn't change much.
Once sweep-and-prune detects an overlap between AABBs, you can do the more detailed check for the objects, e.g. sphere vs. box. If the detailed check reveals a collision, you can then resolve the collision by applying forces, and/or trigger a game event or play a sound effect.
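A single-axis sketch of that broad phase, under the simplifying assumption that we re-sort from scratch each call (a real engine keeps the sorted list between frames and exploits temporal coherence with an insertion sort):

interface Interval { id: number; min: number; max: number; }  // one axis only

function sweepAndPrune(boxes: Interval[]): Array<[number, number]> {
  const sorted = [...boxes].sort((a, b) => a.min - b.min);
  const pairs: Array<[number, number]> = [];
  const active: Interval[] = [];
  for (const box of sorted) {
    // Drop intervals that ended before this one starts.
    for (let i = active.length - 1; i >= 0; i--) {
      if (active[i].max < box.min) active.splice(i, 1);
    }
    // Everything still active overlaps `box` on this axis.
    for (const other of active) pairs.push([other.id, box.id]);
    active.push(box);
  }
  return pairs;   // run per axis; only pairs surviving all axes go to the narrow phase
}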
Correct, normally there is not an event listener for each object. Often there is a (non-binary) tree structure in memory that mimics your game's map; picture a metro/underground map.
This memory structure holds the things in the game: you the player, monsters, and items you can pick up or that might blow up and harm you. As the player moves around the game, the pointer to the player object is moved within this game/map memory structure.
see How should I have my game entities knowledgeable of the things around them?
I would like to recommend Christer Ericson's solid book on real-time collision detection. It presents the basics of collision detection while providing references to contemporary research efforts.
Real-Time Collision Detection (The Morgan Kaufmann Series in Interactive 3-D Technology)
There are a lot of optimizations that can be used.
Firstly, bound every object (say with index i) by an axis-aligned cube with center coordinates (CXi, CYi) and size Si.
Secondly, do collision detection in stages, from rough estimate to exact test:
a) Find all pairs of cubes i, j satisfying: Abs(CXi - CXj) < (Si + Sj) AND Abs(CYi - CYj) < (Si + Sj).
b) Now work only with the pairs found in a). Calculate the distance between them more accurately, something like Sqrt(Sqr(CXi - CXj) + Sqr(CYi - CYj)); each object is now represented as a set of a few simple figures (cubes, spheres, cones), and geometric formulas are used to check whether those figures intersect.
c) The pairs from b) with detected intersections are processed as real collisions, with physics calculations etc.
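A sketch of stages a) and b) with illustrative types; note the pair enumeration itself is still quadratic, so in practice you would combine this with one of the partitioning schemes above:

interface Obj { cx: number; cy: number; size: number; }

function collidingPairs(objs: Obj[]): Array<[number, number]> {
  const pairs: Array<[number, number]> = [];
  for (let i = 0; i < objs.length; i++) {
    for (let j = i + 1; j < objs.length; j++) {
      const a = objs[i], b = objs[j];
      const s = a.size + b.size;
      // (a) cheap bounding-box rejection: no axis overlap, no collision.
      if (Math.abs(a.cx - b.cx) >= s || Math.abs(a.cy - b.cy) >= s) continue;
      // (b) more accurate (here: circle) test on the survivors.
      const dx = a.cx - b.cx, dy = a.cy - b.cy;
      if (dx * dx + dy * dy < s * s) pairs.push([i, j]);
    }
  }
  return pairs;   // (c) resolve physics for these pairs
}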
