Collision detection delegate scheme

Hey guys! My physics engine is coming along quite nicely (thanks for asking!) and I'm ready to start working on some even more advanced junk. Case in point, I'm trying to set up my collision engine so that an arbitrary delegate can be notified when a collision occurs. Let me set up a scenario for you:
Say we have object A, object B, and object C in the physics simulation. I want to be able to inform a delegate about a collision between A and B, AND inform a potentially DIFFERENT delegate about a collision between A and C.
A little background information: I have a known interface for the delegate, I have the option of storing state in my collision detector (though I don't at the moment), and I have the ability to store state in the objects themselves. Similarly, I use this delegate model to handle collision resolution, setting the physics engine as the delegate for all objects by default and allowing the user to change the delegate if desired.
Now, I already tried having each object store its own collision delegate that would be informed when a collision occurred. This didn't work because when both objects had the same collision delegate, the same collision was handled twice. When I switched to only using the delegate of the first object (however that was decided), the order of simulation became an issue. I want to use a dictionary, but that introduces a significant amount of overhead. Still, that seems like the direction I need to be heading.
So here's the question: fight to the death over a suitable solution. How would YOU solve this problem?

I must say it's a bit odd that two objects can have different delegates for a collision, yet it would be bad for two identical delegates to both fire at the same collision. It seems like either both should always fire or only one of them should; the inconsistency is what bothers me here. Explaining that requirement would help.
Second, if you use the naive version of holding a delegate for each object and then guarding its invocation ("if (!some boolean indicating this delegate already fired) { do something }"), this could be solved with very little overhead.
It works, but I don't like this kind of code.
My suggestion (a bit complex, so think about it before working on it) is to focus on a manager object which goes over all the delegates and invokes the ones that are relevant to the collision.
For instance, A and B collide, and the manager is invoked with them as parameters. You can now cycle through all the delegates known to the system (assuming they are few) and fire the ones that match "delegate == a.del or delegate == b.del".
This comes at a greater overhead, but if we are talking about a small number of delegates, it will make very little difference. On the other hand, this will allow you to extend your collision detection engine further in this area in the future (for example, allowing more than one delegate per object).
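A minimal C# sketch of that manager idea, assuming a delegate interface roughly like the one described (ICollisionDelegate, PhysicsObject, and the Del field are names I've invented, not from the actual engine):

using System.Collections.Generic;

// Invented stand-ins for the engine's actual types.
interface ICollisionDelegate
{
    void OnCollision(PhysicsObject a, PhysicsObject b);
}

class PhysicsObject
{
    public ICollisionDelegate Del;   // per-object delegate; may be shared
}

class CollisionManager
{
    private readonly List<ICollisionDelegate> known = new List<ICollisionDelegate>();

    public void Register(ICollisionDelegate d)
    {
        if (!known.Contains(d)) known.Add(d);
    }

    // Called once per detected collision. Because we iterate the distinct
    // registered delegates, a delegate shared by both objects fires only once.
    public void Dispatch(PhysicsObject a, PhysicsObject b)
    {
        foreach (var d in known)
        {
            if (d == a.Del || d == b.Del)
                d.OnCollision(a, b);
        }
    }
}

Since the manager iterates distinct delegates rather than objects, the double-notification problem from the question disappears without any per-collision bookkeeping.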

Related

Avoiding nested for loops in 2D game engine

I am creating a 2D game.
There are many objects, each has a width, a height, and X and Y coordinates.
Each object is stored in an array of its class:
the player
allies
enemies
obstacles
power ups
bullets shot by the enemies
bullets shot by the player & allies
To make the game work, I obviously need to detect when two elements occupy the same space (let's call it a "collision"), so that
the player can collect power ups
nobody can pass through obstacles
the allies can shoot the enemies
the enemies can shoot the allies
friendly fire can be either enabled or disabled
This needs to be checked every frame, 60 times per second.
What I have been doing so far is to loop through each class, and then loop over each class that it can interact with.
For example:
foreach (ally_bullets)
{
    foreach (enemies)
    {
        if (collision detected between enemy and bullet)
        {
            remove the bullet and the enemy
        }
    }
}
It makes sense and it works, but it's very resource intensive as the game gets more complex. The more elements there are, the longer these nested loops take, ultimately reducing the frame rate, even though I try to run as few loops as possible.
What's a better way of solving this than nested for loops?
A common solution is to use polymorphism: you have a base class (in this case Object) which gets inherited by other classes (such as Player, Bullet, Enemy, etc.). Instead of each of these having an individual array, you have a single array (or, more appropriately in most cases, a vector). Now you just loop through that one array, having each Object do its update and check itself against every other Object in the array.
This 'vector-wise' updating is usually set up as a messaging system: whenever an inherited Object receives a message (such as 'hit by bullet'), that object checks whether it cares about that message. If so, it accepts the message; otherwise it ignores it.
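A rough C# sketch of that single-collection-plus-messages idea; every name here (GameObject, Message, World) is a placeholder of mine, not an established API:

using System.Collections.Generic;

enum MessageType { HitByBullet, TouchedPowerUp }

class Message
{
    public MessageType Type;
    public GameObject Sender;
}

abstract class GameObject
{
    public float X, Y, Width, Height;

    public abstract void Update();

    // Each subclass decides which messages it cares about.
    public virtual void Receive(Message m) { }
}

class Enemy : GameObject
{
    public bool Dead;

    public override void Update() { /* move, aim, shoot... */ }

    public override void Receive(Message m)
    {
        if (m.Type == MessageType.HitByBullet)
            Dead = true;        // accept the message
        // any other message: ignore it
    }
}

class World
{
    // One collection instead of one array per class.
    public List<GameObject> Objects = new List<GameObject>();

    public void Update()
    {
        foreach (var o in Objects)
            o.Update();
    }
}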
This is (in my opinion) the better way to handle what you are trying to accomplish, which I believe is what you were asking.
If you are still using arrays for this, I am going to assume you are still fairly new to programming, and I am going to suggest sticking with what you have now. It will absolutely work (provided you see your project through), and when you finish this and start to learn something more advanced, you will see both the shortcomings and the benefits of the way you are doing it.
If you do see some lag arise, it will likely be from your drawing methods long before this kind of interactivity checking becomes a bottleneck.
Either way you go, collision detection and rendering are going to be the main places your CPU gets eaten up, provided your arrays stay within reasonable sizes.
Edit:
Another thing that will help you should you pursue the topics I mentioned is the Observer Pattern, also known as the Listener Pattern.
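For reference, a bare-bones listener setup in C# (just an illustrative sketch; the names are mine):

using System.Collections.Generic;

// The listener side: anything that wants to be told about collisions.
interface ICollisionListener
{
    void OnCollision(object a, object b);
}

// The subject side: whoever detects collisions notifies all subscribers.
class CollisionNotifier
{
    private readonly List<ICollisionListener> listeners = new List<ICollisionListener>();

    public void Subscribe(ICollisionListener l) { listeners.Add(l); }

    public void NotifyCollision(object a, object b)
    {
        foreach (var l in listeners)
            l.OnCollision(a, b);
    }
}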

Multiplayer shooter: should projectiles carry the information about damage, or should the damaged entity?

This is a conceptual question, so no code is included.
When making a multiplayer shooter, be it 2D or 3D, where there are projectile-based weapons, should each projectile contain information as to who fired it, and how much damage it should do? Or, should the entity, upon being struck by the projectile, interpret the projectile and deal damage to itself accordingly?
Basically, the question is "where should the information/code about the damage being dealt exist?"
Thoughts?
Personally, I think that giving the projectile the damage information provides better modularity and makes more sense logically.
Modularity
I used to do M.U.G.E.N, a fighting game engine where individual creators and teams of creators can release either full games or pieces of games. Full games are actually comparatively rare next to the individual characters, stages, and so on that you could copy into your own folder and add to your roster. So if any game engine absolutely had to be designed for modularity, this was it.
How did M.U.G.E.N handle the situation? Except in specific circumstances where a particular creator wanted to do something creative or unconventional, the damage amount (and a gazillion more pieces of information, like the hit sound, how long to stun, and what state to enter as a result) is carried by the character that delivers the attack (in a shooter, this would be equivalent to the bullet holding this information). Why?
Well, simply put, there's no way that one character could be defined so that it could tell who hit it and what attack they used when said character and attack hadn't even been made yet. Keep in mind, characters from 1999 and characters from 2016 are able to fight each other without issue. And almost as dauntingly (and more relevant for you), even if a character's creator could somehow precognitively know every character and every attack that would be added, it would take a long time and a lot of code to add a case for all of them.
So let's get back to the idea of a shooter. Consider that you have two kinds of bullets. One is big and deals a lot of damage (we'll call this BigBullet), while the other is small and deals a little damage (We'll call this SmallBullet). Consider also that you have two kinds of targets that these bullets can damage (We'll call them Soldier and Tank). If you store damage information in the target classes (Soldier and Tank), you have to define this information for each type of bullet for each type of target.
That is, the Soldier class will have to have logic to determine which type of bullet hit it and how much damage it should take. The Tank class will also have to have that logic. You can reduce the redundancy of this code by inheriting both classes from a common base class, true.
But compare that with the alternative. BigBullet and SmallBullet have a public damage field. Soldier and Tank don't have to care what type of bullet they got hit by; they only ask how much damage it does. So you can remove that code altogether.
And more importantly, this allows the design to be easily expandable. If you later decide you want a new HugeBullet class that deals even more damage, all you need to do is make a new HugeBullet class. You don't have to worry about updating your target entities at all.
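In C#, the contrast looks roughly like this (the class names come from the example above; the concrete damage values and fields are my own guesses):

// Damage lives on the projectile, so targets never inspect bullet types.
class Bullet
{
    public int Damage;        // the public damage field described above
    public object FiredBy;    // optional: who shot it, e.g. for scoring
}

class BigBullet : Bullet { public BigBullet() { Damage = 50; } }
class SmallBullet : Bullet { public SmallBullet() { Damage = 10; } }

// Added later without touching any target class:
class HugeBullet : Bullet { public HugeBullet() { Damage = 200; } }

class Soldier
{
    public int Health = 100;

    // No switch over bullet types; just ask the bullet.
    public void HitBy(Bullet b) { Health -= b.Damage; }
}

class Tank
{
    public int Health = 1000;

    public void HitBy(Bullet b) { Health -= b.Damage; }
}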
Logic
This is more a matter of opinion, but let me describe the way I look at it.
Consider the human skull. It is a work of art. It's got joints in it that allow jaw movement. It has holes in all the right places to allow sensory information to enter, and a hole for food. It's designed for durability and observation.
But it's not specifically designed for every single thing that can happen to it. It doesn't have a bullet-deflection property or an axe-survival property. It has properties that contribute to these defenses, but only in the vaguest and most general of terms. Maybe it has a general resistance to piercing, but it doesn't have a particular resistance to piercing by each individual type of bullet.
Ultimately the driving force behind how much damage it takes lies with the design of the weapon that strikes it. A bullet is designed to cut through flesh and bone. So organizing your code so that the bullet objects carry damage data is a more accurate simulation of reality.
IMHO
Depends entirely on what you consider damage:
Damage potential from the projectile... definitely store in each projectile.
Damage realized upon the target... a counter to add all the damage inflicted on it by the different projectiles.
Information stored on the projectile regarding the firing entity is only relevant if the firing entity is to be awarded points for successful hits on targets.

How do I handle the creation/destruction of many objects in memory effectively?

I'm in the process of making a game of my own. One of the goals is to have as many objects within the world as possible. In this game, many objects will need to be created at unpredictable times (for example, firing a weapon will create a projectile object), and once that projectile hits something, the object will need to be destroyed as well (and maybe the thing it hits, too).
So I was wondering what the best way to handle this in memory is. I've thought about creating a stack or table, adding pointers to those objects there, and creating and destroying the objects on demand. However, what if several hundred (or thousand) objects need to be created or destroyed at once between frames? I want to keep a steady and fluid frame rate, and such a surge in system calls would surely slow it down.
So I've thought I could try to keep a number of objects in memory so that I could just copy information into them and use them without having to request the memory on demand. But how much memory should I try to reserve? Or should I not worry about that, as long as the user's computer has enough (presumably they will be focusing on the game and not running a weather simulation in the background)?
What would be the best way of handling this?
Short answer: it depends on the expected lifetime of the objects.
Usually, the methods are combined. Objects that are fairly static and unlikely to be removed or created often (players, levels, certain objects in the levels, etc.) are created with the first method you described (a list of objects, an array, a singleton, etc.). The exact method depends on the game and the object being created.
For short-term objects, like bullets, particle effects, or in some games the enemies themselves, something like the object pool pattern is usually used. A chunk of memory is reserved at the beginning of the game and reused throughout its course for bullets and pretty particle effects. As for "how much memory should I reserve?", the ideal answer is "as little as possible". Unfortunately, it's hard to figure that out sometimes. The best way is to take a guess at how many bullets or whatnot you plan on having on screen at any given time, multiply by two (for when you decide that your bullet-hell shooter doesn't really work too well with only 50 bullets), and then add a little buffer. To make it easier, store that value in an easily understood #define BULLET_MAX 110 so you can change it when the game is closer to done and you can reasonably be sure the value isn't going to fluctuate as much. For extra fun, you can tie the value into a config variable and have the graphics settings affect it.
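A minimal object-pool sketch in C# (the pattern itself is standard; the Bullet fields and the pool size are just the example values from above):

class Bullet
{
    public bool Active;            // is this slot currently in use?
    public float X, Y, VelX, VelY;
}

class BulletPool
{
    public const int BulletMax = 110;    // tune when the game is closer to done

    // All memory reserved up front; no allocations during gameplay.
    private readonly Bullet[] pool = new Bullet[BulletMax];

    public BulletPool()
    {
        for (int i = 0; i < pool.Length; i++)
            pool[i] = new Bullet();
    }

    // "Creating" a bullet just reuses an inactive slot.
    public Bullet Spawn(float x, float y, float vx, float vy)
    {
        foreach (var b in pool)
        {
            if (b.Active) continue;
            b.Active = true;
            b.X = x; b.Y = y; b.VelX = vx; b.VelY = vy;
            return b;
        }
        return null;    // pool exhausted; the caller decides what to do
    }

    // "Destroying" just marks the slot free for reuse.
    public void Despawn(Bullet b) { b.Active = false; }
}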
In real-time games, where fluidity is critical, developers often allocate a large chunk of memory at the beginning of a level and avoid any allocation/deallocation in the middle of the game.
You can often design so the game mechanic prevents the game from running out of memory (such as increasing the chance of weapon jamming when the player shoots too much too often).
Ultimately, though, test your game on your targeted minimum-spec machine; if it's fast enough there, then it's fast enough. Don't overcomplicate your code for hypothetical situations.

Games: Who is responsible for display?

Should entities know how to draw themselves? I've used this approach: it's simple and it works, but after learning MVC-patterns I feel uneasy about this. It's hard to change the art style when all the display logic is buried in the model.
One could introduce a view class, which takes the level as an argument and draws it, but this would mean it has to identify entity types and introduce a "switch" statement, which I learned is also bad.
Where should one place code for drawing, in a way that is extensible, easy to change, clean and dry?
This question still comes up frequently in game development as various studios and groups start on new engines.
The short answer is that it depends on how complex your game entities will be. For simple entities, it doesn't really matter.
When you get into much more complicated entities, you have to rethink your approach. In general, you'll want to resist the urge to have the uber loop that iterates over every entity and calls some update/render/whatever function. That doesn't scale at all, unless every update, render, or whatever hierarchy is exactly the same. Which is fine for a game like Geometry Wars, but not for anything more complicated than that.
What you want to do is, given the most general entity collection, extract a usage-specific traversal. For example, if you want to render the scene, you should have a way of extracting all the renderable entities from the entity collection and then rendering them in some arbitrary, batchable order. The same goes for physics, collision, AI, etc.
Some helpful links:
http://cowboyprogramming.com/2007/01/05/evolve-your-heirachy/
http://research.scee.net/files/presentations/gcapaustralia09/Pitfalls_of_Object_Oriented_Programming_GCAP_09.pdf
I highly recommend both; the first goes into design rationale for building game entities out of components, i.e., render component, physics component, AI component. The second goes into performance characteristics of various game entity approaches.
There's nothing inherently wrong with having an abstract Draw() style method in your entities that lets them decide how they will be drawn, especially for smaller games that probably won't be extended significantly. I've used this method in a ton of small projects and it works great.
An improvement you can make to that strategy is to use your game resources as a proxy for the actual drawing operations. For example an enemy entity can defer all the rendering through a resource object it owns that represents the mesh; likewise for the texture/skin and effects.
I've recently switched to using my entities as 'dumb' containers for interfaces that define their behaviours. A player entity might contain IMoveable, IControllable, IRenderable, and many more interfaces that simply apply a specialized operation to that entity depending on the data it contains. The entity doesn't know much about this and all the execution happens when the scene graph is traversed for culling/rendering.
MVC isn't necessarily a great fit for games. MVC was designed for traditional GUIs which typically update infrequently based on discrete events, whereas games are more like simulations where there is constant updating going on and the presentation needs to reflect that instantly.
Still, there's no reason why you shouldn't strive to separate state from presentation. Entities don't need to know how to draw themselves as such - that would mean they know about rendering operations, which is unnecessary - but it should be possible to ask them how they look at this point in time, and then to use that information to render the scene. E.g. a 2D entity should be able to return its current frame of animation, and a 3D entity should be able to return its mesh, position, and orientation. No need for the switch statement here, as long as you have generic representations that you can return to the renderer. Entities with wildly differing rendering algorithms might have to return rather different objects at this point.
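One way that separation might look in C# (SpriteFrame, IRenderable, and the renderer are names I've invented for illustration):

using System.Collections.Generic;

// A generic "how I look right now" description the renderer understands.
struct SpriteFrame
{
    public int TextureId;
    public int FrameIndex;
    public float X, Y;
}

interface IRenderable
{
    SpriteFrame CurrentFrame();   // ask, don't draw
}

class Player : IRenderable
{
    float x, y;
    int animFrame;

    public SpriteFrame CurrentFrame()
    {
        return new SpriteFrame { TextureId = 1, FrameIndex = animFrame, X = x, Y = y };
    }
}

class Renderer
{
    // No switch over entity types: everything renderable hands back
    // the same generic representation.
    public void Draw(IEnumerable<IRenderable> scene)
    {
        foreach (var r in scene)
            Blit(r.CurrentFrame());
    }

    void Blit(SpriteFrame f) { /* actual drawing API call goes here */ }
}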
Typically I solve this issue with inheritance.
For example, on a project I'm working with at the moment I'm using Test-Driven Development to write the game logic, with manual testing for the rendering. This example below in C# shows the rough idea.
class GameObject {
    // Logic, nicely unit tested.
}

class DrawableGameObject : GameObject {
    // Drawing logic - manual testing
}
This is generally what I've found to be the best solution. It allows code to be tested, while not cluttering the game logic with presentational code such as drawing and model loading.

Pattern name for flippable data structure?

I'm trying to think of a naming convention that accurately conveys what's going on within a class I'm designing. On a secondary note, I'm trying to decide between two almost-equivalent user APIs.
Here's the situation:
I'm building a scientific application, where one of the central data structures has three phases: 1) accumulation, 2) analysis, and 3) query execution.
In my case, it's a spatial modeling structure, internally using a KDTree to partition a collection of points in 3-dimensional space. Each point describes one or more attributes of the surrounding environment, with a certain level of confidence about the measurement itself.
After adding (a potentially large number of) measurements to the collection, the owner of the object will query it to obtain an interpolated measurement at a new data point somewhere within the applicable field.
The API will look something like this (the code is in Java, but that's not really important; the code is divided into three sections, for clarity):
// SECTION 1:
// Create the aggregation object, and get the zillion objects to insert...
ContinuousScalarField field = new ContinuousScalarField();
Collection<Measurement> measurements = getMeasurementsFromSomewhere();
// SECTION 2:
// Add all of the zillion objects to the aggregation object...
// Each measurement contains its xyz location, the quantity being measured,
// and a numeric value for the measurement. For example, something like
// "68 degrees F, plus or minus 0.5, at point 1.23, 2.34, 3.45"
for (Measurement m : measurements) {
    field.add(m);
}
// SECTION 3:
// Now the user wants to ask the model questions about the interpolated
// state of the model. For example, "what's the interpolated temperature
// at point (3, 4, 5)
Point3d p = new Point3d(3, 4, 5);
Measurement result = field.interpolateAt(p);
For my particular problem domain, it will be possible to perform a small amount of incremental work (partitioning the points into a balanced KDTree) during SECTION 2.
And there will be a small amount of work (performing some linear interpolations) that can occur during SECTION 3.
But there's a huge amount of work (constructing a kernel density estimator and performing a Fast Gauss Transform, using Taylor series and Hermite functions, but that's totally beside the point) that must be performed between sections 2 and 3.
Sometimes in the past, I've just used lazy-evaluation to construct the data structures (in this case, it'd be on the first invocation of the "interpolateAt" method), but then if the user calls the "field.add()" method again, I have to completely discard those data structures and start over from scratch.
In other projects, I've required the user to explicitly call an "object.flip()" method to switch from "append mode" into "query mode". The nice thing about a design like this is that the user has better control over the exact moment when the hard-core computation starts. But it can be a nuisance for the API consumer to keep track of the object's current mode. And besides, in the standard use case, the caller never adds another value to the collection after starting to issue queries; data aggregation almost always fully precedes query preparation.
How have you guys handled designing a data structure like this?
Do you prefer to let an object lazily perform its heavy-duty analysis, throwing away the intermediate data structures when new data comes into the collection? Or do you require the programmer to explicitly flip the data structure from append-mode into query-mode?
And do you know of any naming convention for objects like this? Is there a pattern I'm not thinking of?
ON EDIT:
There seems to be some confusion and curiosity about the class I used in my example, named "ContinuousScalarField".
You can get a pretty good idea for what I'm talking about by reading these wikipedia pages:
http://en.wikipedia.org/wiki/Scalar_field
http://en.wikipedia.org/wiki/Vector_field
Let's say you wanted to create a topographical map (this is not my exact problem, but it's conceptually very similar). So you take a thousand altitude measurements over an area of one square mile, but your survey equipment has a margin of error of plus-or-minus 10 meters in elevation.
Once you've gathered all the data points, you feed them into a model which not only interpolates the values, but also takes into account the error of each measurement.
To draw your topo map, you query the model for the elevation of each point where you want to draw a pixel.
As for the question of whether a single class should be responsible for both appending and handling queries, I'm not 100% sure, but I think so.
Here's a similar example: HashMap and TreeMap classes allow objects to be both added and queried. There aren't separate interfaces for adding and querying.
Both classes are also similar to my example, because the internal data structures have to be maintained on an ongoing basis in order to support the query mechanism. The HashMap class has to periodically allocate new memory, re-hash all objects, and move objects from the old memory to the new memory. A TreeMap has to continually maintain tree balance, using the red-black-tree data structure.
The only difference is that my class will perform optimally if it can perform all of its calculations once it knows the data set is closed.
If an object has two modes like this, I would suggest exposing two interfaces to the client. If the object is in append mode, then you make sure that the client can only ever use the IAppendable implementation. To flip to query mode, you add a method to IAppendable such as AsQueryable. To flip back, call IQueryable.AsAppendable.
You can implement IAppendable and IQueryable on the same object, and keep track of the state in the same way internally, but having two interfaces makes it clear to the client what state the object is in, and forces the client to deliberately make the (expensive) switch.
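Sketched in C# (the question's code is Java, but the idea translates directly; Measurement and Point3d stand in for the question's types, and everything else follows the IAppendable/IQueryable naming suggested above):

class Measurement { }   // stand-ins for the question's types
class Point3d { }

interface IAppendable
{
    void Add(Measurement m);
    IQueryable AsQueryable();     // the deliberate, expensive flip
}

interface IQueryable
{
    Measurement InterpolateAt(Point3d p);
    IAppendable AsAppendable();   // flip back; query structures are discarded
}

// One object implements both, but clients only ever hold one interface,
// so the current mode is explicit in the type system.
class ContinuousScalarField : IAppendable, IQueryable
{
    public void Add(Measurement m) { /* accumulate the point */ }

    public IQueryable AsQueryable()
    {
        /* build the KD-tree, density estimator, etc. here */
        return this;
    }

    public Measurement InterpolateAt(Point3d p)
    {
        /* cheap interpolation against the prebuilt structures */
        return new Measurement();
    }

    public IAppendable AsAppendable()
    {
        /* discard the query structures */
        return this;
    }
}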
I generally prefer to have an explicit change, rather than lazily recomputing the result. This approach makes the performance of the utility more predictable, and it reduces the amount of work I have to do to provide a good user experience. For example, if this occurs in a UI, where do I have to worry about popping up an hourglass, etc.? Which operations are going to block for a variable amount of time, and need to be performed in a background thread?
That said, rather than explicitly changing the state of one instance, I would recommend the Builder Pattern to produce a new object. For example, you might have an aggregator object that does a small amount of work as you add each sample. Then, instead of your proposed void flip() method, I'd have an Interpolator interpolator() method that gets a copy of the current aggregation and performs all your heavy-duty math. Your interpolateAt method would be on this new Interpolator object.
If your usage patterns warrant, you could do simple caching by keeping a reference to the interpolator you create, and return it to multiple callers, only clearing it when the aggregator is modified.
This separation of responsibilities can help yield more maintainable and reusable object-oriented programs. An object that can return a Measurement at a requested Point is very abstract, and perhaps a lot of clients could use your Interpolator as one strategy implementing a more general interface.
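Here's what that split might look like, sketched in C# (interpolator()/interpolateAt and the caching detail follow the answer above; the rest is assumed, reusing the Measurement and Point3d stand-ins from the earlier sketch):

using System.Collections.Generic;

class Aggregator
{
    private readonly List<Measurement> samples = new List<Measurement>();
    private Interpolator cached;       // simple caching of the built object

    public void Add(Measurement m)
    {
        samples.Add(m);                // small incremental work per sample
        cached = null;                 // any modification clears the cache
    }

    public Interpolator Interpolator()
    {
        // The heavy-duty math runs here, on a snapshot of the samples.
        if (cached == null)
            cached = new Interpolator(new List<Measurement>(samples));
        return cached;
    }
}

class Interpolator
{
    public Interpolator(List<Measurement> snapshot)
    {
        /* build the KD-tree, kernel density estimator, etc. */
    }

    public Measurement InterpolateAt(Point3d p)
    {
        /* cheap linear interpolation against the prebuilt structures */
        return new Measurement();
    }
}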
I think that the analogy you added is misleading. Consider an alternative analogy:
Key[] data = new Key[...];
data[idx++] = new Key(...); /* Fast! */
...
Arrays.sort(data); /* Slow! */
...
boolean contains = Arrays.binarySearch(data, datum) >= 0; /* Fast! */
This can work like a set, and actually, it gives better performance than Set implementations (which are implemented with hash tables or balanced trees).
A balanced tree can be seen as an efficient implementation of insertion sort. After every insertion, the tree is in a sorted state. The predictable time requirements of a balanced tree are due to the fact the cost of sorting is spread over each insertion, rather than happening on some queries and not others.
The rehashing of hash tables does result in less consistent performance, and because of that, hash tables aren't appropriate for certain applications (a real-time microcontroller, perhaps). But even the rehashing operation depends only on the load factor of the table, not on the pattern of insertion and query operations.
For your analogy to hold strictly, you would have to "sort" (do the hairy math) your aggregator with each point you add. But it sounds like that would be cost prohibitive, and that leads to the builder or factory method patterns. This makes it clear to your clients when they need to be prepared for the lengthy "sort" operation.
Your objects should have one role and responsibility. In your case, should the ContinuousScalarField be responsible for interpolating?
Perhaps you might be better off doing something like:
IInterpolator interpolator = field.GetInterpolator();
Measurement measurement = interpolator.InterpolateAt(...);
I hope this makes sense, but without fully understanding your problem domain it's hard to give you a more coherent answer.
"I've just used lazy-evaluation to construct the data structures" -- Good
"if the user calls the "field.add()" method again, I have to completely discard those data structures and start over from scratch." -- Interesting
"in the standard use case, the caller never adds another value to the collection after starting to issue queries" -- Whoops, false alarm, actually not interesting.
Since lazy eval fits your use case, stick with it. That's a very heavily used model because it is so delightfully reliable and fits most use cases very well.
The only reasons for rethinking this are (a) a use-case change (mixed adding and interpolation), or (b) performance optimization.
Since use case changes are unlikely, you might consider the performance implications of breaking up interpolation. For example, during idle time, can you precompute some values? Or with each add is there a summary you can update?
Also, a highly stateful (and not very meaningful) flip method isn't so useful to clients of your class. However, breaking interpolation into two parts might still be helpful to them -- and help you with optimization and state management.
You could, for example, break interpolation into two methods.
public void interpolateAt(Point3d p);
public Measurement interpolatedMeasurement();
This borrows the relational database Open and Fetch paradigm. Opening a cursor can do a lot of preliminary work, or may start executing the query; you don't know. Fetching the first row may do all the work, or execute the prepared query, or simply fetch the first buffered row. You don't really know. You only know that it's a two-part operation. The RDBMS developers are free to optimize as they see fit.
Do you prefer to let an object lazily perform its heavy-duty analysis, throwing away the intermediate data structures when new data comes into the collection? Or do you require the programmer to explicitly flip the data structure from append-mode into query-mode?
I prefer using data structures that allow me to incrementally add data with "a little more work" per addition, and to incrementally pull out the data I need with "a little more work" per extraction.
Perhaps if you do some "interpolate_at()" call in the upper-right corner of your region, you only need to do calculations involving the points in that upper-right corner, and it doesn't hurt anything to leave the other three quadrants "open" to new additions (and so on down the recursive KDTree).
Alas, that's not always possible -- sometimes the only way to add more data is to throw away all the previous intermediate and final results, and re-calculate everything again from scratch.
The people who use the interfaces I design -- in particular, me -- are human and fallible. So I don't like using objects where those people must remember to do things in a certain way, or else things go wrong -- because I'm always forgetting those things. If an object must be in the "post-calculation state" before getting data out of it -- i.e., some "do_calculations()" function must be run before the interpolateAt() function returns valid data -- I much prefer letting the interpolateAt() function check whether it's already in that state, run "do_calculations()" and update the state of the object if necessary, and then return the results I expected.
Sometimes I hear people describe such a data structure as "freezing" the data, "crystallizing" it, "compiling" it, or "putting the data into an immutable data structure".
One example is converting a (mutable) StringBuilder or StringBuffer into an (immutable) String.
I can imagine that for some kinds of analysis, you expect to have all the data ahead of time, and pulling out some interpolated value before all the data has been put in would give wrong results. In that case, I'd prefer to set things up such that the "add_data()" function fails or throws an exception if it (incorrectly) gets called after any interpolateAt() call.
I would consider defining a lazily-evaluated "interpolated_point" object that doesn't really evaluate the data right away, but only tells the program that some time in the future the data at that point will be required.
The collection isn't actually frozen, so it's OK to continue adding more data to it, up until the point when something actually extracts the first real value from some "interpolated_point" object, which internally triggers the "do_calculations()" function and freezes the object.
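That lazy-proxy idea might look like this in C# (interpolated_point, do_calculations(), and the freeze-on-first-read behaviour come from the description above; the type names are invented, with Measurement and Point3d again standing in for the question's types):

using System;

class InterpolatedPoint
{
    private readonly Field field;
    private readonly Point3d where;

    public InterpolatedPoint(Field f, Point3d p) { field = f; where = p; }

    // No math happens until someone actually reads the value.
    public Measurement Value { get { return field.Compute(where); } }
}

class Field
{
    private bool frozen;

    public void AddData(Measurement m)
    {
        if (frozen)
            throw new InvalidOperationException("data was already extracted");
        /* accumulate the point */
    }

    // Cheap: merely records the request.
    public InterpolatedPoint InterpolateAt(Point3d p)
    {
        return new InterpolatedPoint(this, p);
    }

    internal Measurement Compute(Point3d p)
    {
        if (!frozen) { DoCalculations(); frozen = true; }   // first read freezes
        /* cheap lookup against the prebuilt structures */
        return new Measurement();
    }

    private void DoCalculations() { /* the heavy-duty math */ }
}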
It might speed things up if you know not only all the data, but also all the points that need to be interpolated, all ahead of time.
Then you can throw away data that is "far away" from the interpolated points, and only do the heavy-duty calculations in regions "near" the interpolated points.
For other kinds of analysis, you do the best you can with the data you have, but when more data comes in later, you want to use that new data in your later analysis.
If the only way to do that is to throw away all the intermediate results and recalculate everything from scratch, then that's what you have to do.
(And it's best if the object handles this automatically, rather than requiring people to remember to call some "clear_cache()" and "do_calculations()" function every time.)
You could have a state variable. Have a method for starting the high-level processing, which will only work if the state is SECTION-1. It will set the state to SECTION-2, and then to SECTION-3 when it is done computing. If there's a request to interpolate a given point, it will check whether the state is SECTION-3. If not, it will request the computations to begin, and then interpolate the given data.
This way, you accomplish both - the program will perform its computations at the first request to interpolate a point, but can also be requested to do so earlier. This would be convenient if you wanted to run the computations overnight, for example, without needing to request an interpolation.
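As a sketch (C# again; the three phases mirror the question's SECTION 1-3, and everything else is invented, with Measurement/Point3d as before):

using System;

enum Phase { Accumulating, Computing, Ready }   // SECTION-1, -2, -3

class Model
{
    private Phase phase = Phase.Accumulating;

    public void Add(Measurement m)
    {
        if (phase != Phase.Accumulating)
            throw new InvalidOperationException("model has already been computed");
        /* accumulate the point */
    }

    // Callable explicitly and early (e.g. overnight)...
    public void Compute()
    {
        if (phase != Phase.Accumulating) return;
        phase = Phase.Computing;
        /* the heavy-duty math */
        phase = Phase.Ready;
    }

    // ...or triggered implicitly by the first query.
    public Measurement InterpolateAt(Point3d p)
    {
        if (phase != Phase.Ready)
            Compute();
        /* cheap query against the prebuilt structures */
        return new Measurement();
    }
}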
