Imagine a 2D game with a large play area, say 10000x10000 pixels. Now imagine there are thousands of objects spread across this area. All the objects are in a Z order list, so every object has a well-defined position relative to every other object even if they are far away.
Suppose there is a viewport into this play area showing, say, a 500x500 region. Obviously, if the algorithm is "for each object in the Z order list, if inside viewport, render it", then you waste a lot of time iterating over the thousands of objects far outside the viewport. A better way would be to maintain a Z-ordered list of only the objects near or inside the viewport.
If both the objects and the viewport are moving, what is an efficient way of maintaining a Z-ordered list of objects which are candidates to draw? This is for a general-purpose game engine, so there are not many other assumptions or details to take advantage of: the problem is pretty much just that.
You do not need to keep your memory layout strictly ordered by Z. Instead, you need to store your objects in a space-partitioning structure that is oriented along the viewing surface.
A typical partitioning structure in 2D is the quad-tree. You can also use a binary tree, a grid, or a spatial hashing scheme. You can even mix these techniques and nest one inside another.
There is no "best", but you can put in the balance the ease of writing and maintaining the code. And the memory you have available.
Let us consider the grid: it is the simplest to implement, the fastest to access, and the easiest to traverse (traversing meaning moving to neighboring cells).
Imagine you allow yourself 20MB of RAM usage for your grid skeleton, and each cell's content is just a small object (like a std::vector or a C# List), say 50 bytes. For a 10k-pixel square surface you then have:
sqrt(20 * 1024 * 1024 / 50) ≈ 647

You get 647 cells per dimension, and therefore 10000 / 647 ≈ 15-pixel-wide cells.
That is still very small, so I suppose perfectly acceptable. You can adjust the numbers to get 512-pixel cells, for example; it is a good fit when a few cells cover the viewport.
Then it is trivially easy to determine which cells are activated by the viewport: divide the top-left corner by the cell size and floor the result, which gives you the cell index directly. (This assumes both your viewport space and grid space start at (0,0); otherwise you need to apply an offset.)
Finally, take the bottom-right corner, determine its cell coordinate the same way, and run a double loop (x and y) between the min and max to iterate over the activated cells.
When treating a cell, you draw the objects it contains by walking the list of objects you previously stowed there.
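A minimal sketch of that corner-to-corner loop, assuming the grid and viewport both start at (0,0); `Viewport`, `cellSize` and the `visit` callback are illustrative names:

```cpp
#include <algorithm>
#include <cmath>

struct Viewport { float left, top, right, bottom; };

// Visit every grid cell the viewport overlaps. Cell (x, y) covers the
// pixel rectangle [x*cellSize, (x+1)*cellSize) x [y*cellSize, (y+1)*cellSize).
template <typename Visit>
void forEachActivatedCell(const Viewport& vp, float cellSize,
                          int gridW, int gridH, Visit visit) {
    int x0 = std::max(0,         (int)std::floor(vp.left   / cellSize));
    int y0 = std::max(0,         (int)std::floor(vp.top    / cellSize));
    int x1 = std::min(gridW - 1, (int)std::floor(vp.right  / cellSize));
    int y1 = std::min(gridH - 1, (int)std::floor(vp.bottom / cellSize));
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            visit(x, y);   // e.g. draw every object stowed in cell (x, y)
}
```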
Beware of objects that span two or more cells. You need to make a choice: either store each object exactly once, but then your search algorithms must always know the size of the biggest element in the region and also search the lists of neighboring cells (going as far as necessary to be sure to cover at least the size of that biggest element).
Or, you can store it multiple times (my preferred way), and simply make sure when you iterate cells that you treat each object only once per frame. This is very easily achieved by using a frame id in the object structure (as a mutable member).
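A sketch of that frame-id trick under the store-multiple-times approach; `GameObject` and its fields are illustrative:

```cpp
#include <cstdint>
#include <vector>

struct GameObject {
    // ... position, size, sprite, Z, etc. ...
    mutable std::uint64_t lastDrawnFrame = 0;  // the mutable frame-id marker
};

// Draw one cell's contents, skipping objects already drawn this frame
// because they were also stored in a previously visited cell.
void drawCell(const std::vector<GameObject*>& cell, std::uint64_t frameId) {
    for (GameObject* obj : cell) {
        if (obj->lastDrawnFrame == frameId)
            continue;                  // already treated via another cell
        obj->lastDrawnFrame = frameId;
        // render(*obj);               // actual drawing goes here
    }
}
```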
This same logic applies to more flexible partitions such as binary trees.
I have implementations of both available in my engine; check out the code, it may help you get through the details: http://sourceforge.net/projects/carnage-engine/
Final words about your Z ordering: if you had separate storage for each Z, then you already did a space partitioning, simply not along the right axis.
This can be called layering.
What you can do as an optimization, instead of storing lists of objects in your cells, is to store (ordered) maps of objects keyed by their Z, so that iteration within a cell is ordered along Z.
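For instance (a sketch reusing the hypothetical `GameObject` above), each cell could be a `std::multimap` keyed by Z:

```cpp
#include <map>

using Cell = std::multimap<int, GameObject*>;   // key: Z; duplicates allowed

void drawCellInZOrder(const Cell& cell, std::uint64_t frameId) {
    for (const auto& [z, obj] : cell) {         // iterates in ascending Z
        if (obj->lastDrawnFrame == frameId) continue;
        obj->lastDrawnFrame = frameId;
        // render(*obj);
    }
}
```

Note that this only sorts each cell locally; if you need a strict global Z order across all activated cells, you still have to merge the per-cell sequences, or collect the visible objects and sort them once.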
A typical solution to this sort of problem is to group your objects according to their approximate XY location. For example, you can bucket them into 500x500 regions, i.e. objects intersecting [0,500]x[0,500], objects intersecting [500,1000]x[0,500], etc. Very large objects might be listed in multiple buckets, but presumably there are not too many very large objects.
For each viewport, you would need to check at most 4 buckets for objects to render. You will generally only look at about as many objects as you need to render anyway, so it should be efficient. This does require a bit more bookkeeping when you reposition objects, but assuming a typical object is only in one bucket, it should still be pretty fast.
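A sketch of that bucket bookkeeping, assuming coordinates are non-negative (so the int cast floors) and 500-pixel buckets; the names are illustrative:

```cpp
const float BUCKET = 500.0f;

struct BucketRange { int x0, y0, x1, y1; };

// Buckets overlapped by an axis-aligned box: one entry per intersected bucket.
BucketRange bucketsFor(float left, float top, float right, float bottom) {
    return { (int)(left / BUCKET),  (int)(top / BUCKET),
             (int)(right / BUCKET), (int)(bottom / BUCKET) };
}

// On movement, re-bucket only when the range actually changed.
bool sameBuckets(const BucketRange& a, const BucketRange& b) {
    return a.x0 == b.x0 && a.y0 == b.y0 && a.x1 == b.x1 && a.y1 == b.y1;
}
```

On movement you compute the new range, and only when it differs from the old one do you remove the object from its old buckets and insert it into the new ones.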
I have this 2D raster upon which are layered from 1 to, say, 20 other 2D rasters (with random sizes and offsets). I'm searching for a fast way to access a sub-rectangle view (with random size and offset). The view should return all the layered pixels for each X and Y coordinate.
I guess this is kind of how, say, GIMP or other 2D paint apps draw layers on top of each other, except that I want all the pixels on top of each other, not just the projection where the top pixel hides the ones below it.
I have met this problem before and still face it now. I have already spent a lot of time searching the internet and this site for similar issues, but can't find any. I will describe two possible solutions, neither of which I'm satisfied with:
Have basically a 3D array of pre-allocated size. This is easy to manage, but the wasted storage and memory overhead are huge. For a 4k raster with, say, 16 slots of 4 bytes each, that is about 1 GiB of memory (4096 × 4096 × 16 × 4 bytes). And in my application, most of that space will be wasted, never used.
My earlier solution: have two 2D arrays, one with indices and the other with the actual values. Each "pixel" of the first says in which range of the second array you can find the actual pixels contributed by all layers. This is compact, but every request bounces between two memory regions, and it is a bit of a hassle to set up, not to mention update (a nice-to-have feature, but not mandatory).
So... any know-how on this kind of problem? Thank you in advance!
Forgot to add that I'm targeting a self-sufficient, preferably single-threaded, CPU solution. The layers will most likely be greyscale with alpha (that is, some pixel data will simply not exist). Lookup is the priority; updates like adding/removing a layer can be slower.
Added by Mark (see comment):
In that image, if taking top-left corner of the red rectangle, a lookup should report red, green, blue and black. If the bottom-right corner is taken, it should report red and black only.
I would store the offsets and sizes in a data structure separate from the pixel data. This way you do not jump around in memory while you calculate the relative coordinates for each layer (or even while deciding that some layers can be ignored).
If you want to access single pixels or small areas rather than iterate over big areas, a quad-tree might be a good way to store your data, since it gives more local memory access for pixels or areas that are near each other (in the x or y direction).
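A sketch of a lookup built along those lines, assuming each layer stores its offset/size next to a dense greyscale-plus-alpha buffer; all names are illustrative:

```cpp
#include <cstdint>
#include <vector>

struct Layer {
    int offX, offY, w, h;                  // placement metadata, stored apart
    std::vector<std::uint8_t> grey, alpha; // w*h pixels each

    // True if this layer contributes a pixel at absolute (x, y).
    bool sample(int x, int y, std::uint8_t& out) const {
        int lx = x - offX, ly = y - offY;
        if (lx < 0 || ly < 0 || lx >= w || ly >= h) return false;
        std::size_t i = (std::size_t)ly * w + lx;
        if (alpha[i] == 0) return false;   // "pixel data does not exist"
        out = grey[i];
        return true;
    }
};

// All contributions at (x, y), bottom layer first.
std::vector<std::uint8_t> lookup(const std::vector<Layer>& layers, int x, int y) {
    std::vector<std::uint8_t> hits;
    std::uint8_t v;
    for (const Layer& l : layers)
        if (l.sample(x, y, v)) hits.push_back(v);
    return hits;
}
```

The bounds test touches only the small metadata, so layers that cannot contribute are rejected without ever reading their pixel buffers.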
I am writing a GUI that is supposed to display entities of a system in a 2D coordinate system, which the user can select and drag around. The system is mirror-symmetric w.r.t. the x and y axes. Currently I am subclassing an entity using a QGraphicsRectItem so that I can drag it around in the first quadrant (x>0, y>0) of the coordinate system. I reimplemented the paint method to draw the three additional rectangles with painter.drawRect(). So when I move the entity in quadrant 1, the elements in the other three quadrants perform mirror motions. That works well.
In the next stage, each entity can be subdivided, i.e. consist of hundreds of rectangles. So I need to draw hundreds of rectangles, and that four times, with mirror operations. The naive approach takes four for-loops, but I'm wondering if there is a smarter way of doing this in Qt. The for-loops hurt a little because I'm using PyQt.
If your drawing operations are that slow, the simplest thing you could do is draw to an image once and then draw the cached result four times, which will be very fast since it is just copying pixel values.
It might be more efficient to cache the drawing result not per item but per quadrant of your grid. That way, if you zoom in and the items get huge or numerous, you won't waste lots of memory; you will only need one image cache the size of a quadrant on screen.
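A sketch in Qt/C++ (PyQt exposes the same calls): render the quadrant once into a pixmap, then blit it four times under mirror transforms; `drawMirrored` is a hypothetical helper:

```cpp
#include <QPainter>
#include <QPixmap>

// Blit one cached quadrant four times, mirrored about the origin,
// instead of re-issuing hundreds of drawRect() calls per quadrant.
void drawMirrored(QPainter* p, const QPixmap& quadrant) {
    p->save();
    p->drawPixmap(0, 0, quadrant);   // original quadrant
    p->scale(-1, 1);                 // mirror across the vertical axis
    p->drawPixmap(0, 0, quadrant);
    p->scale(1, -1);                 // now mirrored across both axes
    p->drawPixmap(0, 0, quadrant);
    p->scale(-1, 1);                 // mirrored across the horizontal axis only
    p->drawPixmap(0, 0, quadrant);
    p->restore();
}
```

Regenerate the cached pixmap only when the rectangles actually change (a dirty flag), so a normal frame costs four blits instead of hundreds of draw calls.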
It really depends on what you want to achieve, which at this point is not entirely clear from the description, and your image isn't showing either.
I'm running a simple sketch in an HTML5 canvas using Processing.js that creates "ball" objects, which are just ellipses that have position, speed, and acceleration vectors as well as a diameter. In the draw function, I call a function called applyPhysics() which loops over each ball in a hashmap and checks them against each other to see if their positions make them collide. If they do, their speed vectors are reversed.
Long story short, the number of calculations as it is now is (number of balls)^2, which ends up being a lot once I get into the hundreds of balls. This sort of check slows down the sketch too much, so I'm looking for ways to do smart collisions some other way.
Any suggestions? Using PGraphics somehow maybe?
I assume you're already simplifying the physics by treating the ellipses as if they were circles.
Other than that, check out quadtree collision detection:
http://gamedev.tutsplus.com/tutorials/implementation/quick-tip-use-quadtrees-to-detect-likely-collisions-in-2d-space/
I don't know your project, but if the balls have non-random forces applied to them (e.g. gravity) you might use predictive analytics also.
If you grid the space and create a data structure that reflects it (e.g. an array of row objects, each containing an array of column objects, each containing an ArrayList of ball objects), you can consider interactions only within each cell (or also with neighbouring cells). You reassign a ball's cell when it crosses a boundary. You then have far fewer interactions, as sketched below.
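A sketch of that grid broad phase, flattened to one vector of cells; the names are illustrative and coordinates are assumed to start at (0,0):

```cpp
#include <algorithm>
#include <vector>

struct Ball { float x, y, r, vx, vy; };

void checkCollisions(std::vector<Ball>& balls, float cellSize, int cols, int rows) {
    // Rebuild the cell lists each frame (cheap: one push_back per ball).
    std::vector<std::vector<int>> cells(cols * rows);
    for (int i = 0; i < (int)balls.size(); ++i) {
        int cx = std::clamp((int)(balls[i].x / cellSize), 0, cols - 1);
        int cy = std::clamp((int)(balls[i].y / cellSize), 0, rows - 1);
        cells[cy * cols + cx].push_back(i);
    }
    // Only balls sharing a cell are tested pairwise. If balls can overlap
    // a cell border, also test against the neighbouring cells' lists.
    for (const auto& cell : cells)
        for (std::size_t a = 0; a < cell.size(); ++a)
            for (std::size_t b = a + 1; b < cell.size(); ++b) {
                Ball &p = balls[cell[a]], &q = balls[cell[b]];
                float dx = p.x - q.x, dy = p.y - q.y, rr = p.r + q.r;
                if (dx * dx + dy * dy < rr * rr) {
                    // collision response, e.g. reverse the speed vectors
                    p.vx = -p.vx; p.vy = -p.vy;
                    q.vx = -q.vx; q.vy = -q.vy;
                }
            }
}
```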
I am developing a simple tile-based 2D game. I have a level populated with objects that can interact with the tiles and with each other. Checking collision against the tilemap is rather easy and can be done for all objects in linear time. But now I have to detect collisions between the objects, which means checking every object against every other object, and that results in quadratic complexity.
I would like to avoid the quadratic complexity. Are there any well-known methods to reduce the number of collision-detection calls between objects? Are there any data structures (like a BSP tree, maybe) that are easy to maintain and allow many collisions to be rejected at once?
For example, the total number of objects in the level is around 500, about 50 of them are seen on screen at a time...
Thanks!
Why don't you let the tiles store information about which objects occupy them? Then collisions can be detected whenever an object moves to a new tile, by checking whether that tile already contains another object.
This would cost virtually nothing.
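A sketch of that check, assuming one occupant per tile; the names are illustrative:

```cpp
#include <vector>

struct Tile { int occupant = -1; };     // -1 means empty; otherwise object id

// Attempt to move object `id` from tile (oldX, oldY) onto tile (nx, ny);
// report a collision if the destination tile is already occupied.
bool tryMove(int id, int nx, int ny, int oldX, int oldY,
             std::vector<Tile>& tiles, int cols) {
    Tile& dst = tiles[ny * cols + nx];
    if (dst.occupant != -1 && dst.occupant != id)
        return false;                   // collision detected: one lookup
    tiles[oldY * cols + oldX].occupant = -1;
    dst.occupant = id;
    return true;
}
```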
You can use a quadtree to divide the space and reduce the number of objects you need to check for collisions.
See this article - Quadtree demonstration.
And perhaps this - Collision Detection in Two Dimensions.
Or this - Quadtree (source included)
It may seem - at first glance - that it takes a lot of CPU power to maintain the tree, but it also reduces the number of checks significantly (see the demonstration in the first link).
Your game already has the concept of a gameplay-related tilemap. You can either co-opt this tilemap for collision detection, or overlay a secondary grid over your playing field used specifically for sprite tracking and collision detection.
Within your grid, each sprite occupies one or more tiles. The sprite knows which tiles it occupies, and the tiles know which sprites occupy them. When a sprite moves, you only need to check for collisions within the tiles that the sprite occupies. This means no collision detection is necessary at all unless sprites are close enough to occupy the same tiles, and even then, you only need to check for collisions between the sprites that are close together.
If your gameplay is such that sprites will frequently clump together, you may want to implement your grid as a quadtree to allow each tile to be subdivided and prevent too many sprites from occupying the same grid tile.
Also, depending on your implementation, you may need to use a slightly larger bounding box to determine tile occupancy than you use to determine collisions. That way you'll be guaranteed that the sprites will overlap and occupy the same tile before their collision boundaries touch.
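A sketch of such a padded occupancy box; `pad` is an illustrative tuning value:

```cpp
struct Box { float x, y, w, h; };

// The box used for tile occupancy is slightly larger than the collision
// box, so two sprites share a tile before their collision boxes touch.
Box occupancyBox(const Box& collisionBox, float pad) {
    return { collisionBox.x - pad, collisionBox.y - pad,
             collisionBox.w + 2 * pad, collisionBox.h + 2 * pad };
}
```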
I'm working on a game, and I've come up with a rather interesting problem: clever ways to draw starfields.
It's a 2D game, so the action can scroll in the X and Y directions. In addition, we can adjust the scale to show more or less of the play area. I'd also like the starfield to have fake parallax to give an impression of depth.
Right now I'm doing this in the traditional way, by having a big array of stars, each of which is tagged by a 'depth' factor. To draw, I translate each star according to the camera position multiplied by the 'depth', so some stars move a lot and some move a little. This all works fine, but of course since I have a finite number of stars in my array, I have issues when the camera moves too far or we zoom out too much. This will all work, but it involves lots of code and special cases.
This offends my sense of elegance. There has got to be a better way of achieving this.
I've considered procedurally generating my stars, which allows me to have an unlimited number: e.g. by using a fixed seed and PRNG to determine the coordinates. I would need to divide the sky up into tiles, generate the seed by hashing the tile coordinates, and then draw, say, 100 stars per tile. This allows me to extend my starfield indefinitely in all directions while still only needing to consider the tiles that are visible --- but this doesn't work with the 'depth' factor, as this allows stars to stray outside their tile. I could simply use multiple layered non-parallax starfields using this algorithm but this strikes me as cheating.
And, of course, I need to do all this every frame, so it's got to be fast.
What do you all reckon?
Have a few layers of stars.
For each layer, use a seeded random number generator (or just an array) to generate the amount of blank space between one star and the next (a Poisson distribution, if you want to be picky about it). You want the stars pretty sparse, so the blank space will often be more than a whole row. The back layers will be denser than the front ones, obviously.
Use this to give yourself several tiles, each (say) two screens wide. Scroll the starfield by keeping track of where that "first" star is for each layer.
The player won't notice the tiling, because you scroll the tiles at different rates for each layer, especially if you use a few layers that are each fairly sparse.
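A sketch of seeding one tile of one layer from its coordinates, so any tile can be regenerated on demand. It uses uniform positions rather than the Poisson gaps described above, but the seeding idea is the same; `TILE_SIZE` and `Star` are illustrative:

```cpp
#include <random>
#include <vector>

constexpr float TILE_SIZE = 1024.0f;
struct Star { float x, y; };

// Deterministic: the same (tx, ty, layer) always yields the same stars.
std::vector<Star> starsForTile(int tx, int ty, int layer, int count) {
    std::seed_seq seed{tx, ty, layer};
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> offset(0.0f, TILE_SIZE);
    std::vector<Star> stars;
    for (int i = 0; i < count; ++i)
        stars.push_back({ tx * TILE_SIZE + offset(rng),
                          ty * TILE_SIZE + offset(rng) });
    return stars;
}
```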
As stars in the background don't move as fast as those in the foreground, you could make multi-layer tiles for the background and replace them with single-layer ones when you have time to do that. Oh, and how about repeating patterns in the background layers? That would allow you to pregenerate all background tiles; you could still shift them in height and overlay multiple ones with random offsets or so to make it look random.
Is there anything wrong with wrapping the star field around in X and Y? Because of your depth, the wraparound distance should depend on the depth, but you can do that. Each recorded star at (x,y,depth) should appear at all points
[x + j * S * depth, y + k * S * depth]
for all integers j and k. S is a wraparound parameter. If S is 1 then wraparound happens immediately and all stars are always shown somewhere. If S is higher wraparound doesn't happen immediately and some stars are shown off screen. You'll probably want S big enough to ensure no repeats at maximum zoom out.
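A sketch of enumerating the wrapped copies of one star that land in the view, straight from the formula above; the view bounds and the `draw` callback are illustrative:

```cpp
#include <cmath>

// Visible wrapped copies of a star at (x, y) with the given depth.
// [viewL, viewR] x [viewT, viewB] is the view rectangle in star space.
template <typename Draw>
void drawWrapped(float x, float y, float depth, float S,
                 float viewL, float viewR, float viewT, float viewB,
                 Draw draw) {
    float period = S * depth;          // wraparound distance at this depth
    int j0 = (int)std::ceil ((viewL - x) / period);
    int j1 = (int)std::floor((viewR - x) / period);
    int k0 = (int)std::ceil ((viewT - y) / period);
    int k1 = (int)std::floor((viewB - y) / period);
    for (int j = j0; j <= j1; ++j)
        for (int k = k0; k <= k1; ++k)
            draw(x + j * period, y + k * period);
}
```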
Each frame, render the stars onto one single bitmap/layer. They are only dots, so this will be faster than any algorithm using multiple layers.
Now you need an infinite 2D grid of 3D boxes, each filled with a finite number of stars. For each box, you can derive an individual RANDOM_SEED value from its grid coordinates. The stars in each box can then be generated on the fly.
Remember to correct the perspective when you zoom: each 3D box has a near rectangle (front face) and a far rectangle. You will see more stars from neighbouring boxes whenever the far rectangle or near rectangle shrinks in your view.
Your far rectangles should never be smaller than half the width of the near rectangles, otherwise it can get troublesome: you might have to scan huge lists of stars, most of which are out of bounds. You can realize stars behind the far rectangles via additional 2D grids of 3D boxes with other sizes and depths.
Why not combine the coordinates of the starfield's 3D boxes to form the random number seed? Use a global "adjustment" if you want to produce different universes. That way you don't need to track the boxes you can't see, because their contents are fixed by their location.