Multiplayer shooter: should projectiles carry the information about damage, or should the damaged entity?

This is a conceptual question, so no code is included.
When making a multiplayer shooter, be it 2D or 3D, where there are projectile-based weapons, should each projectile contain information as to who fired it, and how much damage it should do? Or, should the entity, upon being struck by the projectile, interpret the projectile and deal damage to itself accordingly?
Basically, the question is "where should the information/code about the damage being dealt exist."
Thoughts?

Personally, I think that having the projectile carry this damage information provides better modularity and makes more sense logically.
Modularity
I used to make content for M.U.G.E.N, a fighting game engine where individual creators and teams of creators can release either full games or pieces of games. Full games are actually comparatively rare next to the individual characters, stages, and so on that you could copy into your own folder and have added to your roster. So if any game engine absolutely had to be designed for modularity, this was it.
How did M.U.G.E.N handle the situation? Except in specific circumstances where a particular creator wanted to do something creative or unconventional, the damage amount (and a gazillion more pieces of information, like the hit sound, how long to stun, what state to enter as a result) is carried by the character that delivers the attack (in a shooter, this would be equivalent to the bullet holding this information). Why?
Well, simply put, there's no way that one character could be defined so that it could tell who hit it and what attack they used when said character and attack hadn't even been made yet. Keep in mind, characters from 1999 and characters from 2016 are able to fight each other without issue. And almost as dauntingly (and more relevant for you), even if a character's creator could somehow precognitively know every character and every attack that would be added, it would take a long time and a lot of code to add a case for all of them.
So let's get back to the idea of a shooter. Consider that you have two kinds of bullets. One is big and deals a lot of damage (we'll call this BigBullet), while the other is small and deals a little damage (We'll call this SmallBullet). Consider also that you have two kinds of targets that these bullets can damage (We'll call them Soldier and Tank). If you store damage information in the target classes (Soldier and Tank), you have to define this information for each type of bullet for each type of target.
That is, the Soldier class will have to have logic to determine which type of bullet hit it and how much damage to take as a result. The Tank class will also have to have that logic. You can reduce the redundancy of this code if you inherit both classes from a common base class, true.
But compare with the alternative: BigBullet and SmallBullet each have a public damage field. Soldier and Tank don't have to care what type of bullet they got hit by; they only ask how much damage it should do. So you can remove that per-bullet logic altogether.
And more importantly, this allows the design to be easily expandable. If you later decide you want a new HugeBullet class that deals even more damage, all you need to do is make a new HugeBullet class. You don't have to worry about updating your target entities at all.
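A minimal sketch of this design in C++ (the class names and damage values are illustrative, not from any particular engine):

```cpp
#include <iostream>

// Each projectile type carries its own damage value.
struct Bullet {
    float damage;
    explicit Bullet(float dmg) : damage(dmg) {}
};

struct BigBullet   : Bullet { BigBullet()   : Bullet(50.0f) {} };
struct SmallBullet : Bullet { SmallBullet() : Bullet(10.0f) {} };
// Added later: no changes to any target class are required.
struct HugeBullet  : Bullet { HugeBullet()  : Bullet(90.0f) {} };

// Targets never need to know which kind of bullet hit them.
struct Target {
    float health = 100.0f;
    void onHit(const Bullet& b) { health -= b.damage; }
};

int main() {
    Target soldier;
    soldier.onHit(BigBullet{});
    soldier.onHit(SmallBullet{});
    std::cout << "Soldier health: " << soldier.health << "\n"; // prints 40
}
```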
Logic
This is more a matter of opinion, but let me describe the way I look at it.
Consider the human skull. It is a work of art. It's got joints in it that allow jaw movement. It has holes in all the right places to allow sensory information to enter, and a hole for food. It's designed for durability and observation.
But it's not specifically designed for every single thing that can happen to it. It doesn't have a bullet-deflection property or an axe-survival property. It has properties that contribute to these defenses, but only in the vaguest and most general of terms. Maybe it has a general resistance to piercing, but it doesn't have a particular resistance to piercing by each individual type of bullet.
Ultimately the driving force behind how much damage it takes lies with the design of the weapon that strikes it. A bullet is designed to cut through flesh and bone. So organizing your code so that the bullet objects carry damage data is a more accurate simulation of reality.

IMHO
Depends entirely on what you consider damage:
Damage potential from the projectile... definitely store in each projectile.
Damage realized upon the target... a counter that accumulates all the damage inflicted on it by the different projectiles.
Information stored on the projectile regarding the firing entity is only relevant if the firing entity is to be awarded points for successful hits on targets.

Related

Artificial Intelligence for a card-battle based game

I want to make a card-battle based game. In this, cards have specific attributes which can increase a player's hp/attack/defense or attack an enemy to reduce his hp/attack/defense.
I am trying to make an AI for this game. The AI has to decide which card to select on the basis of the current situation, like the AI's hp/attack/defense and the enemy's hp/attack/defense. Since the AI cannot see the enemy's cards, it cannot predict future moves.
I looked at a few AI techniques like minimax, but I think minimax will not be suitable since the AI cannot predict any future moves.
I am searching for a technique which is very flexible, so that I can add a large variety of cards later.
Can you please suggest a technique for such a game?
Thanks
This isn't an ActionScript 3 topic per se but I do think it's rather interesting.
First I'd suggest picking up Yu-Gi-Oh's Stardust Accelerator World Championship 2009 for the Nintendo DS or a comparable game.
The game has a fairly advanced computer AI system that not only deals with expected advantage or disadvantage in terms of hit points but also card advantage and combos. If you're taking on a challenge like this, I definitely recommend you do the required research (plus, when playing video games is research, who can complain?)
My suggestion for building an AI is as follows:
As the computer decides its move, create an array of Move objects, one for each possible move it can see.
For each Move object, calculate how much less HP the opponent will have, how many cards they will still have, how many creatures, etc.
Have the computer decide what's most important (more damage, more card advantage) and have it play that move.
More sophisticated AIs will also think several turns in advance and perhaps "see" moves that others do not.
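A rough sketch of that scoring loop in C++; the Move fields and the weights are invented examples of things worth measuring, not a prescription:

```cpp
#include <algorithm>
#include <vector>

// One entry per legal move the AI can see this turn.
struct Move {
    int id;
    int enemyHpLoss;       // how much less HP the opponent will have
    int cardAdvantage;     // net change in cards in hand
    int creatureAdvantage; // net change in creatures on the board
};

// Weight the criteria by whatever the AI currently values most.
int score(const Move& m) {
    return 3 * m.enemyHpLoss + 2 * m.cardAdvantage + 1 * m.creatureAdvantage;
}

Move pickBest(const std::vector<Move>& moves) {
    return *std::max_element(moves.begin(), moves.end(),
        [](const Move& a, const Move& b) { return score(a) < score(b); });
}

int main() {
    std::vector<Move> moves = {
        {0, 5, 0, 1},  // chip damage plus a creature: score 16
        {1, 0, 2, 0},  // pure card advantage: score 4
        {2, 8, -1, 0}, // big damage at the cost of a card: score 22
    };
    Move best = pickBest(moves); // picks move 2
    (void)best;
    return 0;
}
```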
I suggest you look at this game of Reversi I built a few weeks back for fun in Flash. This has a very basic AI implemented, but the basics could be applied to your situation.
Basically, the way that game works is after each move (player or CPU, so I can determine if the player made the right move in comparison to what the CPU would have made), I create a Vector of each possible legal move. I then decide which move provides the highest score change, and set that as the best move. However, I also check to see if the move would result in the other player having access to a corner (if you've never played, the player who grabs the corners generally wins). If it does, I tell the CPU to avoid that move and check the second-best move, and so on. The end result is a CPU that can actually put up a fight.
Keep in mind that this is just a single afternoon of work (for the entire game, from the crappy GUI to the functionality to the AI), so it is very basic, and I could do things like run future possible moves through the check sequence as well. Fun fact, though: my moves (which I based the AI on, obviously) are the ones the CPU would make nearly 80% of the time. The only time that does not hold true is when I play the game like you would chess, where your move is made solely to position a move four turns down the line.
For your game, you have a ton of variables to consider, not the single point scale I had. I would suggest listing out each variable and assigning it a point value, so you can weight each one by importance. I did something similar for a caching system that automatically determines the most important file to keep based on age, usage, size, etc. You then look at each card in the CPU's hand, calculate each card's value, and play the highest-valued card (assuming it is legal to do so, of course).
Once you figure that out, you can look into things like what each move could do in the next turn (i.e. "damage" values for each move). And once that is done, you could add functionality that would let the CPU make strategic moves that would allow it to use a more powerful card or perform a "finishing" move, or however it works in the end.
Again, though, keep it to a simple point based system and keep going from there. You need something you can physically compare, so sticking to a point based system makes that simple.
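Combining the two ideas above, a hypothetical score-then-veto selection loop might look like this (Candidate and givesOpponentEdge are invented names for illustration; the "edge" flag plays the role of the Reversi corner check):

```cpp
#include <vector>

// Hypothetical per-move evaluation: a single comparable point value,
// plus a flag for moves that hand the opponent a strong reply
// (the equivalent of giving away a corner in Reversi).
struct Candidate {
    int points;            // weighted sum of whatever the game cares about
    bool givesOpponentEdge;
};

// Pick the highest-scoring move, skipping "poisoned" ones when possible.
const Candidate* pickMove(const std::vector<Candidate>& moves) {
    const Candidate* best = nullptr;
    for (const Candidate& c : moves) {
        if (c.givesOpponentEdge) continue;          // avoid corner-style giveaways
        if (!best || c.points > best->points) best = &c;
    }
    // If every move is poisoned, fall back to the raw best score.
    if (!best) {
        for (const Candidate& c : moves)
            if (!best || c.points > best->points) best = &c;
    }
    return best; // nullptr only if there were no legal moves at all
}
```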
I apologize for the length of this answer, but I hope it helps in some way.

How should a platformer game's solid objects be implemented efficiently?

I have tried to write a platformer engine a few times now. The thing is, I am not quite satisfied with my implementation details for solid objects (walls, floors, ceilings). I have several scenarios I would like to discuss.
For a simple platformer game like the first Mario, everything is pretty much blocks. A good implementation should only check for necessary collisions. For instance, if Mario is running and there is a cliff at the end of the path, how should we check for collision efficiently? Should we check on every step Mario takes to see whether his hitbox is still on the ground? Or is there some other programming approach that allows us to not handle this every frame?
But blocks are boring, so let's put in some slopes. Implementation-wise, how should slopes be handled? Some games, such as Sonic, have loop structures where the character can go "woohoo" in the loop and proceed.
Another scenario is the handling of "solid" objects (floors, ceilings, walls). In Megaman, the player can make himself go through the ceiling by using a tool to get into the solid "wall". Possibly, the programming here forces the player out of the wall so that the player is not stuck, by moving the player quickly to the right. This is an old workaround to avoid the player getting stuck in a wall. In newer games, the handling is more complex. Take, for instance, Super Smash Bros. Brawl, where players can enlarge the characters (along with their hitboxes). The program allows the player to move around "in" the ceiling, but once the character is out of the "solid" area, they cannot move back in. Moreover, sometimes a character is so gigantic that they pass through 3 solid floors of a scene and can still move inside fine. Does anybody know implementation details along these lines?
So, I know that there are many possible implementations, but I just want to ask: are there advanced technical details for platformer games that I should be aware of? I am currently asking about 3 things:
How should solid collision in a platformer game be handled efficiently? Can we spend less time checking whether a character has run off and completely fallen from a platform?
Slope programming. At first I was thinking of a physics engine, but I think it might be overkill. As I see it, slopes are pretty much another type of floor that "pushes" or "pulls" the character to a different elevation. Or should they be programmed differently?
Solid object handling for special cases. There might be times where the player can slip into solid objects, either via legal game rules or glitches, but all in all, it is always a bad idea to push the player in some random direction when he is in a wall.
For a small number of objects, doing an all-pairs collision detection check at each time step is fine. Once you get more than a couple hundred objects, you may want to start considering a more efficient method. One way is to use binary space partitioning (BSP) to only check against nearby objects. Collision detection is a very well researched topic, and there is a plethora of resources describing various optimizations.
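To make the "only check nearby objects" idea concrete, here is a sketch using a uniform grid rather than a full BSP tree; it is a simpler structure with the same broad-phase goal, and the names and cell size are assumptions:

```cpp
#include <cmath>
#include <unordered_map>
#include <vector>

// Simple uniform-grid broad phase: only objects that share a cell
// ever need to be tested against each other.
struct AABB { float x, y, w, h; int id; };

constexpr float CELL = 64.0f; // tune to typical object size

long long cellKey(int cx, int cy) {
    return (static_cast<long long>(cx) << 32) ^ static_cast<unsigned>(cy);
}

std::unordered_map<long long, std::vector<const AABB*>>
buildGrid(const std::vector<AABB>& objects) {
    std::unordered_map<long long, std::vector<const AABB*>> grid;
    for (const AABB& o : objects) {
        // Register the box in every cell its extents overlap.
        int x0 = static_cast<int>(std::floor(o.x / CELL));
        int x1 = static_cast<int>(std::floor((o.x + o.w) / CELL));
        int y0 = static_cast<int>(std::floor(o.y / CELL));
        int y1 = static_cast<int>(std::floor((o.y + o.h) / CELL));
        for (int cx = x0; cx <= x1; ++cx)
            for (int cy = y0; cy <= y1; ++cy)
                grid[cellKey(cx, cy)].push_back(&o);
    }
    return grid; // narrow-phase checks then run per cell, not all-pairs
}
```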
Indeed, a physics engine is likely overkill for this task. Generally speaking, you can associate with each moving character a "ground" on which he is standing. Then whenever he moves, you simply make him move along the axis of the ground.
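A minimal sketch of that "ground" idea, assuming walkable surfaces are stored as 2D segments (Segment, Character, and walk are illustrative names):

```cpp
#include <cmath>

// Each walkable surface is a segment; a grounded character moves
// along the segment's direction instead of doing free 2D physics.
struct Segment { float x0, y0, x1, y1; };

struct Character {
    float x, y;
    const Segment* ground = nullptr; // what we're standing on, if anything
};

// Move `dist` units along the ground's axis (slopes come for free).
void walk(Character& c, float dist) {
    if (!c.ground) return;               // airborne: use normal physics instead
    float dx = c.ground->x1 - c.ground->x0;
    float dy = c.ground->y1 - c.ground->y0;
    float len = std::sqrt(dx * dx + dy * dy);
    c.x += dist * dx / len;
    c.y += dist * dy / len;              // elevation changes follow the slope
}
```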
Slipping into objects is almost always a bad idea. Try to avoid it if possible.

How do I handle the creation/destruction of many objects in memory effectively?

I'm in the process of making a game of my own. One of the goals is to have as many objects within the world as possible. In this game, many objects will need to be created at unpredictable times (e.g., firing a weapon will create an object), and once that projectile hits something, the object will need to be destroyed as well (and maybe the thing it hits).
So I was wondering what the best way to handle this in memory is. I've thought about creating a stack or table, adding pointers to those objects there, and creating and destroying the objects on demand. However, what if several hundred (or thousand) objects need to be created or destroyed at once between frames? I want to keep a steady and fluid frame rate, and such a surge in system calls would surely slow it down.
So I've thought I could try to keep a number of objects in memory so that I could just copy information into them and use them without having to request the memory for them on demand. But how much memory should I try to reserve? Or should I not worry about that as long as the user's computer has enough (presumably they will be focusing on the game and not running a weather simulation in the background)?
What would be the best way of handling this?
Short answer: it depends on the expected lifetime of the objects.
Usually, the methods are combined. Objects that are fairly static and unlikely to be removed or created often (usually players, levels, certain objects in the levels, etc.) are created with the first method you described (a list of objects, an array, a singleton, etc.). The exact method depends on the game and the object being created.
For short-term objects, like bullets, particle effects, or in some games the enemies themselves, something like the object pool pattern is usually used. A chunk of memory is reserved at the beginning of the game and reused throughout the course of the game for bullets and pretty particle effects. As for "how much memory should I reserve?", the ideal answer is "as little as possible". Unfortunately, it's hard to figure that out sometimes. The best way is to take a guess at how many bullets or whatnot you plan on having on screen at any given time, multiply by two (for when you decide that your bullet-hell shooter doesn't really work too well with only 50 bullets), and then add a little buffer. To make it easier, store that value in an easily understood #define BULLET_MAX 110 so you can change it when the game is closer to done and you can reasonably be sure that the value isn't going to fluctuate as much. For extra fun, you can tie the value into a config variable and have the graphics settings affect it.
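A bare-bones sketch of such a pool in C++ (the Bullet fields and the exhaustion policy are illustrative choices, not the one true way):

```cpp
#include <array>

#define BULLET_MAX 110 // tune once gameplay settles; could come from a config file

struct Bullet {
    float x, y, vx, vy;
    bool alive;
};

// Fixed-size pool: all memory is reserved up front, so "creating" a
// bullet just recycles a dead slot and no allocations happen mid-frame.
class BulletPool {
    std::array<Bullet, BULLET_MAX> bullets{}; // zero-initialized: all dead
public:
    Bullet* spawn(float x, float y, float vx, float vy) {
        for (Bullet& b : bullets) {
            if (!b.alive) {
                b = Bullet{x, y, vx, vy, true};
                return &b;
            }
        }
        return nullptr; // pool exhausted: drop the shot (or recycle the oldest)
    }
    void kill(Bullet& b) { b.alive = false; } // "destruction" is a flag flip
};
```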
In real-time games, where fluidity is critical, developers often allocate a large chunk of memory at the beginning of the level and avoid any allocation/deallocation in the middle of the game.
You can often design the game mechanics to prevent the game from running out of memory (such as increasing the chance of a weapon jamming when the player shoots too much too often).
Ultimately, though, test your game on your targeted minimum supported machine; if it's fast enough there, then it's fast enough. Don't overcomplicate your code for hypothetical situations.

Fitts' Law, applying it to touch screens

I've been reading a lot about UI design lately, and Fitts' Law keeps popping up.
Now, from what I gather, it's basically: the larger an item is, and the closer it is to your cursor, the easier it is to click on.
So what about touch screen devices, where the input comes from multiple touches or just single touches?
What are the fundamentals to take into account considering this?
Should it be something like, the hands of the user are on the sides of the device so the buttons should be close to the left and right hand sides of the device?
Thanks
I started thinking about this recently too, and here are some considerations:
Fitts' law was developed in the 50's as a human factors model (read: controls for fighter plane cockpits) so seeing it re-applied to human motor skills is actually just coming full circle. It definitely applies to mobile devices. [Historical note: The finding that it applied to mouse interfaces was actually a big deal at the time.]
One thing to note is that the Fitts'-endowed advantages of the edges and especially corners of the screen no longer exist on a touch interface: the "infinite size" only applies to mousing interfaces since the cursor cannot move past the edges. Obviously, the same limitation does not exist for our fingers. Basically, the edges are no better than the middle of the screen except for the potentially shorter distance to the target.
Here is an '06 study (PDF) about optimal target sizes for one-handed thumb use, taking into account freedom of movement and such. I was hoping to find a paper that would be able to provide a modification or a new constant to Fitts' law for the accuracy of the touch interface, but a cursory search didn't turn one up. I guess that means I found a potential research topic ;)
I think one general conclusion to be made based on application to Fitts' law to smaller-screened mobiles is that it's hard to make usable widget-based interfaces without seriously sacrificing information density. One interesting alternative is gesture-based interfaces (beyond the popular pinch and zoom). Unfortunately, the lack of popularity and conventions makes the learning curve rather high. Mobiles are definitely one place that it might be worth the trade-off, though. I predict wider adoption of gesture interfaces on mobiles once conventions stabilize.
Yes, for a touch screen Fitts' law has to be applied in three dimensions, so it's different from the classical mouse movement considerations.
As you say, the origin of the movement is often the default position of the finger. This varies a lot depending on the device where the screen is mounted. On a hand held device you might use the index finger of one hand, or the thumbs of both hands, depending on the design.
Also, on a touch screen you have to move the fingers away from the screen to see it, which makes the distance between controls less important as you move back to the default position between clicks.
What to consider besides Fitts' law is the intuitiveness of the interface. If a button appears where it's not expected, it doesn't matter how close it is, it will still take time to find it.
One specific idea that attempts to leverage Fitts' law is to put the most often used controls at the bottom of the screen (i.e., the opposite of current GUI conventions with the menubar and toolbar). This allows users to touch multiple controls in sequence without withdrawing their hands to see the effects, shortening the mean distance moved between inputs. For a tablet, kiosk, or desktop device, the bottom of the screen is probably also the hands' "rest" position. However, there is the potential problem of the most important controls being the last thing the user sees when scanning the display.
Fitts' Law "predicts that the time required to rapidly move to a target area is a function of the distance to and the size of the target." What's important isn't that Fitts discovered this (it is obvious); what he noticed was that the increase due to distance and size fits a logarithmic formula, which the law models.
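For reference, the Shannon formulation commonly used in HCI work is:

```latex
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

where MT is the movement time, D is the distance to the target, W is the target's width along the axis of motion, and a and b are empirically fitted constants.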
On a Windows-Icons-Menus-Pointer (WIMP) system, what's important is that you have 1 location with zero distance (where the cursor currently is) and 4 locations of infinite size (the edges of the screen, which the pointer cannot extend beyond). That's really why Fitts' law pops up so much in UI design (aside from giving weight to things like "don't make tiny buttons", etc.).
But the law makes a lot of assumptions about the range of motion you have available with your hands. If you're holding a tablet with two hands, the law goes out the window. If you're holding it with your left hand, then things on the right side will be easier to reach, etc. So it's going to be a lot harder to make generalizations than with a pointer.
That said:
Think about where the user's hands are going to be, and whether they're both going to be free or not. Place buttons closest to where you think hands will be.
Cluster buttons such that you aren't requiring the user to make a variety of successive taps that are far apart (unless, of course, you're designing a game, in which case that's part of the skill)
Well, you should design for the most important fingers (the index finger, for example). Not that you shouldn't use the others, of course, but people are generally geared toward using some fingers to the detriment of others.
I don't think you can provide a general answer that will work across all sizes and types of touch screens. For example: The infrared vision technology on the Microsoft Surface can fail if a user has extremely dark finger tips (very very rare), but this would not be an issue on a capacitance based touchscreen.
The best practice is lots of testing, with a variety of users. You will quickly learn what works on your device and what doesn't.
I did a paper on this for my graduate human computer interaction class using evolutionary computation to design a more efficient keyboard based on the domain of text that was being typed. I really should release it as an iphone/droid app.

How would you implement a perfect line-of-sight algorithm?

Disclaimer: I'm not actually trying to make one; I'm just curious as to how it could be done.
When I say "Most Accurate" I include the basics
wall
distance
light levels
and the more complicated
Dust in Atmosphere
rain, sleet, snow
clouds
vegetation
smoke
fire
If I were to want to program this, what resources should I look into and what things should I watch out for?
Also, are there any relevant books on the theory behind line of sight including all these variables?
I personally don't know too much about this topic but a quick couple of Google searches turns up some formal papers that contain some very relevant information:
http://www.tecgraf.puc-rio.br/publications/artigo_1999_efficient_lineofsight_algorithms.pdf - Provides a detailed description of two different methods of efficiently performing an LOS calculation, along with issues involved
http://www.agc.army.mil/operations/programs/LOS/LOS%20Compendium.doc - This one aims to maintain "a current list of unique LOS algorithms"; it has a section listing quite a few and describing them in detail with a focus on military applications.
Hope this helps!
Typically, one represents the world as a set of volumes of space held in some kind of space partitioning data structure, then intersects the ray representing your "line of sight" with that structure to find the set of objects it hits; these are then walked in order from ray origin to determine the overall result. Reflective objects cause further rays to be fired, opaque objects stop the walk and semitransparent objects partially contribute to the result.
You might like to read up on ray tracing; there is a great body of literature on the subject, with well-understood ways of solving what are basically the same problems you list.
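A sketch of the walk described above, ignoring reflection rays for brevity (Hit and the visibility function are illustrative names, not from a specific engine):

```cpp
#include <algorithm>
#include <vector>

// One ray-walk step: gather hits from the space partition, sort them
// by distance, then accumulate until something opaque stops the walk.
struct Hit {
    float t;       // distance along the ray
    float opacity; // 1.0 = fully opaque, 0.0 = clear
};

// Returns how visible a target at distance maxT is (0 = blocked, 1 = clear).
float visibility(std::vector<Hit> hits, float maxT) {
    std::sort(hits.begin(), hits.end(),
              [](const Hit& a, const Hit& b) { return a.t < b.t; });
    float transmitted = 1.0f;
    for (const Hit& h : hits) {
        if (h.t >= maxT) break;               // past the target: done
        transmitted *= (1.0f - h.opacity);    // semitransparent: partial block
        if (transmitted <= 0.0f) return 0.0f; // opaque object stops the walk
    }
    return transmitted;
}
```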
The obvious question is do you really want the most accurate, and why?
I've worked on games that depended on line of sight and you really need to think clearly about what kind of line of sight you want.
First, can the AI see any part of your body? Or are you talking about "eye to eye" LOS?
Second, if the player's camera view is not his avatar's eye view, the player will not perceive your highly accurate LOS as highly accurate. At which point inaccuracies are fine.
I'm not trying to dissuade you, but remember that player experience is #1, and that might mean not having the best LOS.
A good friend of mine has done the AI for a long-running series of popular console games. He often tells a story about how the AIs are most interesting (and fun) in the first game, because they stumble into you rather than see you from afar. Now, he has great LOS and spends his time trying to dumb the AIs down to make them as fun as they were in the first game.
So why are you doing this? Does the game need it? Or do you just want the challenge?
There is no "one algorithm" for these since the inputs are not well defined.
If you treat Dust-In-Atmosphere as a constant value then there is an algorithm that can take it into account, but the fact is that dust levels will vary from point to point, and thus the algorithm you want needs to be aware of how your dust-data is structured.
The most used algorithm in today's ray tracers is just incremental ray marching, which is by definition not correct, but it does approximate the Ultimate Answer to a fair degree.
Even if you managed to incorporate all these properties into a single master algorithm, you'd still have to somehow deal with how different people perceive the same setting. Some people are near-sighted, some far-sighted. Then there's the colour-blind. Not to mention that dust-in-atmosphere levels also affect tear glands, which in turn affects visibility. And then there's the whole dichotomy between what people are actually seeing and what they think they are seeing...
There are far too many variables here to aim for a unified solution. Treat your environment as a voxelated space and shoot your rays through it. I suspect that's the only solution you'll be able to complete within a single lifetime...
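A minimal sketch of that voxel approach: incremental ray marching with per-sample density attenuation. The dustAt() lookup is a hypothetical stand-in for however your world stores density data (here it just returns a constant):

```cpp
#include <cmath>

// Hypothetical density lookup (dust, smoke, fog) at a world position;
// a real implementation would sample the voxel grid here.
inline float dustAt(float /*x*/, float /*y*/, float /*z*/) { return 0.02f; }

// March a ray from (ox, oy, oz) along a normalized direction (dx, dy, dz),
// attenuating transmitted light by local density at each step.
float marchVisibility(float ox, float oy, float oz,
                      float dx, float dy, float dz,
                      float maxDist, float step = 0.5f) {
    float transmitted = 1.0f;
    for (float t = 0.0f; t < maxDist && transmitted > 0.001f; t += step) {
        float density = dustAt(ox + dx * t, oy + dy * t, oz + dz * t);
        transmitted *= std::exp(-density * step); // Beer-Lambert attenuation
    }
    return transmitted; // 1 = clear line of sight, 0 = fully occluded
}
```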
