Collision Management in a Simulation with Discrete Motion and Time

I am building a simulation in which items (like chess pieces) move, according to a schedule, among a discrete set of positions that do not follow a sequence (like the squares of a chessboard).
Each position can hold only one item at any given time. The schedule could ask multiple items to move at the same time. If the destination position is occupied, the scheduled movement is cancelled.
Here is the question: if item A and item B, originally situated at position 1 and position 2 respectively, are scheduled to move simultaneously to their next positions (position 2 and position 3), how do I make sure that item A gets to position 2, ideally with an efficient design?
The reason for asking is that, naively, I would check whether position 2 is occupied before letting item A move into it. If that check happens before item B has moved out of the way, item A would not move when in fact it should. Because the positions do not follow a sequence, it is not obvious which item to check first. You can imagine things getting messy if many items want to move at the same time. In the extreme case, a full chessboard of items should be allowed to move/rearrange themselves, but the naive check may not be able to facilitate that.
Is there a common practice to handle such "nonexistent collision"? Ideas and references are all welcomed.

Two researchers, Ahmed Al Rowaei and Arnold Buss, published a paper in 2010 investigating the impact that using discrete time steps has on model accuracy/fidelity when the real-world system is event-based. There was also some follow-on work in 2011 with their colleague Stephen Lieberman. A major finding was that with time-stepped models, the order of execution matters and can cause the models to deviate from real-world behaviors in significant ways. Time-stepped models generally require you to introduce tie-breaking logic which doesn't exist in the real system. Logic that is needed for the model but doesn't exist in reality is called a "modeling artifact," and can lead to increased model complexity and inaccuracies. Systematic collision resolution schemes can lead to systematic biases.
Their recommendation was to build models based on continuous time. Events are scheduled using the actual (continuous) event times, which determine the order of event execution as in the real-world system. This occasionally (but rarely) requires priority tie breaking based on event type, so that (for example) departure events occur before arrival events if both were to occur at the exact same time.
If you insist on sticking with time-stepped models, a different strategy is to use two or more passes at each time step. The first pass lays out the desired state transitions and identifies potential conflicts, and the final pass applies the actual transitions after conflicts have been resolved. The resolution process might be doable in the initial setup pass, or it may require additional passes if it is sufficiently complex.
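A minimal sketch of the two-pass idea, in Python, under assumptions not stated in the original post: positions are hashable IDs, each item requests at most one move per step, and the names (occupancy, requests) are purely illustrative. The first pass collects intentions and drops contested destinations; the resolution loop approves moves whose destination is empty or is being vacated, which handles chains like A->2, B->3 regardless of scan order; the final pass applies the approved moves.

    def resolve_moves(occupancy, requests):
        # occupancy: dict {position: item} for the current time step
        # requests:  dict {item: destination} of moves scheduled for this step
        # Returns the set of items whose moves are approved.

        # Pass 1: collect intentions; cancel moves contending for one destination.
        by_dest = {}
        for item, dest in requests.items():
            by_dest.setdefault(dest, []).append(item)
        pending = {item: dest for item, dest in requests.items()
                   if len(by_dest[dest]) == 1}

        # Conflict resolution: approve a move when its destination is empty or
        # is being vacated by an already-approved mover. Repeating the scan
        # resolves chains independently of iteration order.
        approved = set()
        changed = True
        while changed:
            changed = False
            for item, dest in list(pending.items()):
                occupant = occupancy.get(dest)
                if occupant is None or occupant in approved:
                    approved.add(item)
                    del pending[item]
                    changed = True
        # Note: closed cycles (a ring of items all shifting one step) stay in
        # `pending` here; approving them would need explicit cycle detection.
        return approved

    def apply_moves(occupancy, requests, approved):
        # Final pass: apply the approved transitions in one go.
        new_occupancy = {pos: item for pos, item in occupancy.items()
                         if item not in approved}
        for item in approved:
            new_occupancy[requests[item]] = item
        return new_occupancy

With occupancy {1: "A", 2: "B"} and requests {"A": 2, "B": 3}, the loop approves B first and then A, so A does reach position 2 as desired.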

Related

When to discard events in discrete event simulation

In most examples of DES I've seen, an Event triggers a State change and possibly schedules some new Events in the future. However, if I simulate a Billiard game, this is not the whole story.
In this case the Events of interest are the shots and the collisions of the balls with each other and with the cushion. The State consists of the position and velocity of each ball.
After a collision or a shot I will first recalculate a new State and from there I will calculate all possible future (first) collisions. The strange thing is that I will have to discard all Events which were scheduled previously as these describe collisions which were possible only before the state change.
So there seem to be two ways of doing DES.
One, where the future Events are computed from the State and all Events scheduled in the past are discarded with each State change (as in the Billiard example), and
another one, where each Event causes a state change and possibly schedules new Events, but where old Events are never discarded (as in most examples I've seen).
This is hard to believe.
The Billiard example also has the irritating property that future events are calculated from the global state of the system. All Balls need to be considered, not just the ones which participated in a collision or a shot.
I wonder if my Billiard example is different from classic DES. In any case, I am looking for the correct way to reason about such issues, i.e.
How do I know which Events are to be discarded?
How do I know which States to consider when scheduling future events?
Is there a "safe" or "foolproof" way to compute future events (at the cost of performance)?
An obvious answer is "it all depends on your problem domain". A more precise answer or a pointer to the literature would be much appreciated.
Your example is not unique or different from other DES models.
There's a third option which you omitted, which is that when certain events occur, specific other events will be cancelled. For example, in an epidemic model you might schedule infection events. Each infection event subsequently schedules either 1) the critical time for the patient beyond which death becomes inevitable, with some probability and some delay corresponding to the patient's demographics, the mortality rate for that demographic, and the rate of progression of the disease; or 2) the patient's recovery. If medical interventions get queued up according to some triage strategy, treatment may or may not occur prior to the critical time. If not, a death gets scheduled; otherwise the critical-time event is cancelled and a recovery event is scheduled.
Event scheduling, event cancellation, and the parameterization that lets you identify which entities the scheduling/cancelling applies to can all be described by a notation called "event graphs," created by Lee Schruben. See Schruben, Lee (1983), "Simulation modeling with event graphs," Communications of the ACM 26: 957-963, for the original paper, or check out the tutorial from the 1996 Winter Simulation Conference, which is freely available online.
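A minimal sketch of the schedule/cancel mechanic, assuming a heapq-based future event list; the class and method names are illustrative, not part of any standard library for event graphs. Cancellation is done lazily by marking the entry and skipping it when popped.

    import heapq
    import itertools

    class EventList:
        def __init__(self):
            self._heap = []
            self._counter = itertools.count()   # tie-breaker for equal times

        def schedule(self, time, name, payload=None):
            entry = {"time": time, "name": name,
                     "payload": payload, "cancelled": False}
            heapq.heappush(self._heap, (time, next(self._counter), entry))
            return entry                         # keep the handle to cancel later

        def cancel(self, entry):
            entry["cancelled"] = True            # lazy deletion: skipped on pop

        def pop_next(self):
            while self._heap:
                _, _, entry = heapq.heappop(self._heap)
                if not entry["cancelled"]:
                    return entry
            return None

    # Usage mirroring the epidemic example: treatment at t = 7.5 cancels the
    # pending critical-time event and schedules a recovery instead.
    fel = EventList()
    critical = fel.schedule(10.0, "critical_time", payload="patient-42")
    fel.cancel(critical)
    fel.schedule(7.5 + 3.0, "recovery", payload="patient-42")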
You might also want to look at this paper titled "Simple Movement and Detection in Discrete Event Simulation", which appeared in the 2005 Winter Simulation Conference.
The State consists of the position and velocity of each ball.
Once you get that working, you'll need to add the spin and axis of rotation for each ball, since the proper use of spin is what differentiates the pros from the amateurs.
I will have to discard all Events which were scheduled previously
Yup, that's true, so don't bother scheduling them at all. See below.
So there seem to be two ways of doing DES (both involving the scheduling of events)
Actually, there's a third way. Simply search the problem space to determine the time of the first future event, and then jump to that time. There is no need to schedule Events. You only care about the one Event that will occur first.
All Balls need to be considered
Yes, this is true. Start by considering one of the balls and determining the time of its next collision. That time then puts an upper limit on how far the other balls can move. For example, imagine the first ball will collide after 0.1 seconds. Then the question for the second ball is, "Is it possible for the second ball to hit anything within 0.1 seconds?" If not, then move along to the third ball. If so, then reduce the time limit to the time it takes for the second ball to collide, and then move on to the third ball.
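A minimal sketch of that shrinking-limit search, assuming a hypothetical time_to_next_collision(ball, others, horizon) helper that returns the earliest collision time for one ball within the given horizon (or None if there is none); the helper is game-specific and not part of the answer above.

    def first_event_time(balls, time_to_next_collision, horizon=float("inf")):
        limit = horizon
        for i, ball in enumerate(balls):
            # Only collisions earlier than the current limit can matter,
            # so the limit shrinks as we scan and prunes later checks.
            t = time_to_next_collision(ball, balls[:i] + balls[i+1:], limit)
            if t is not None and t < limit:
                limit = t
        return limit   # time of the first future event (or the horizon if none)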
An obvious answer is "it all depends on your problem domain"
That's true. My comments apply only to your example of a billiards simulation. For other problem domains, different rules apply.

"Least frequently used" - algorithm

I am building an application that is supposed to extract a mission for the user from a finite mission pool. The thing is that I want:
that the user won't get the same mission twice,
that the user won't get the same missions as his friends (in the application) until some time has passed.
To summarize my problem, I need to extract the least common mission out of the pool.
Can someone please refer me to known algorithms for finding the least common something (LFU)?
I also need the theoretical aspect, so if someone knows of articles or research papers about this (from known magazines like Scientific American), that would be great.
For getting the least frequently used mission, simply give every mission a counter that counts how many times it was used. Then search for the mission with the lowest counter value.
For getting the mission that was least frequently used by a group of friends, you can store, for every user, the missions he/she has done (and the number of times). This information is probably useful anyway. Then, when a new mission needs to be chosen for a user, a (temporary) combined list of used missions and their frequencies across the user and all his friends can easily be created and sorted by frequency. This is not very expensive.
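A minimal sketch of that combined-frequency pick, assuming usage counts are kept per user as {mission_id: times_done}; the data layout is illustrative, not from the question.

    from collections import Counter

    def pick_mission(user_counts, friends_counts, all_missions):
        combined = Counter(user_counts)
        for counts in friends_counts:
            combined.update(counts)
        # Missions never used by this group have an implicit count of 0.
        return min(all_missions, key=lambda m: combined.get(m, 0))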
Based on your 2 requirements, I don't see what the "least used" mission has to do with this. You said you want non-repeating missions.
OPTION 1:
What container do you use to hold all missions? Assume it's a list: when you or your friend chooses a mission, move that mission to the end of the list (swap it with the mission there). Now you have split your initial list into 2 sublists. The first part holds unused missions, and the second part holds used missions. Keep track of the pivot/index which separates the 2 sublists.
Now every time you or your friends choose a new mission, choose it from the first sublist. Then move it into the second sublist and update the pivot.
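A minimal sketch of Option 1, assuming the missions live in a plain Python list and a pivot index marks the start of the "used" sublist; class and method names are illustrative.

    import random

    class MissionPool:
        def __init__(self, missions):
            self.missions = list(missions)
            self.pivot = len(self.missions)     # everything before pivot is unused

        def choose(self):
            if self.pivot == 0:
                raise RuntimeError("no unused missions left")
            i = random.randrange(self.pivot)    # pick any unused mission
            self.pivot -= 1
            # Swap the chosen mission into the used region and shrink the pivot.
            self.missions[i], self.missions[self.pivot] = (
                self.missions[self.pivot], self.missions[i])
            return self.missions[self.pivot]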
OPTION 2:
If you repeat missions eventually, but first choose the ones which have been chosen the fewest number of times, then you can make your container a min-heap. Add a usage counter to each mission and key the heap on that counter. Extract a mission, increment its usage counter, then put it back into the heap. This is a good solution, but depending on how simple your program is, you could even use a circular buffer.
It would be nice to know more about what you're building :)
I think the structure you need is a min-heap. It allows extraction of the minimum in O(log n), and it allows you to increase the value of an item in O(log n) too.
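A minimal sketch of the min-heap approach, using Python's heapq and a (count, mission) tuple layout (both illustrative): pop the least-used mission and push it back with an incremented counter, each in O(log n).

    import heapq

    heap = [(0, m) for m in ["rescue", "escort", "defend"]]
    heapq.heapify(heap)

    def next_mission(heap):
        count, mission = heapq.heappop(heap)    # least-used mission
        heapq.heappush(heap, (count + 1, mission))
        return mission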
A good start is Edmonds' Blossom V algorithm for a minimum-weight perfect matching in a general graph. If you have a bipartite graph, you can look at the Floyd-Warshall algorithm to find shortest paths. Maybe you can also use a topological search, but I don't know; these algorithms are really hard to learn.

Database for brute force solving board games

A few years back, researchers announced that they had completed a brute-force comprehensive solution to checkers.
I have been interested in another similar game that should have fewer states, but is still quite impractical to run a complete solver on in any reasonable time frame. I would still like to make an attempt, as even a partial solution could give valuable information.
Conceptually I would like to have a database of game states that holds every known position, as well as its succeeding positions. One or more clients can grab unexplored states from the database, calculate possible moves, and insert the new states into the database. Once an endgame state is found, all states leading up to it can be updated with the minimax information to build a decision tree. If intelligent decisions are made to pick probable branches to explore, I can build information for the most important branches, and then gradually build up to completion over time.
Ignoring the merits of this idea, or the feasibility of it, what is the best way to implement such a database? I made a quick prototype in SQL Server that stored a string representation of each state. It worked, but my solver client ran very, very slowly, as it pulled out one state at a time and calculated all moves. I feel like I need to work on larger chunks in memory, but the search space is definitely too large to store it all in memory at once.
Is there a database system better suited to this kind of job? I will be doing many many inserts, a lot of reads (to check if states (or equivalent states) already exist), and very few updates.
Also, how can I parallelize it so that many clients can work on solving different branches without duplicating too much work? I'm thinking of something along the lines of a program that checks out an assignment, generates a few million states, and submits them back to be integrated into the main database. I'm just not sure if something like that will work well, or if there is prior work on methods for doing that kind of thing.
In order to solve a game, what you really need to know for each state in your database is its game-theoretic value, i.e. whether it's a win for the player whose turn it is to move, a loss, or a forced draw. You need two bits to encode this information per state.
You then find as compact an encoding as possible for the set of game states for which you want to build your end-game database; let's say your encoding takes 20 bits. It's then enough to have an array of 2^21 bits on your hard disk, i.e. 2^18 bytes. When you analyze an end-game position, you first check if the corresponding value is already set in the database; if not, calculate all its successors, calculate their game-theoretic values recursively, and then calculate, using min/max, the game-theoretic value of the original node and store it in the database. (Note: if you store win/loss/draw data in two bits, you have one bit pattern left to denote 'not known'; e.g. 00 = not known, 11 = draw, 10 = player to move wins, 01 = player to move loses.)
For example, consider tic-tac-toe. There are nine squares; every one can be empty, "X" or "O". This naive analysis gives you 3^9 = 2^14.26 possible states, i.e. 15 bits per state, so you would have an array of 2^16 bits.
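A minimal sketch of the two-bit-per-state table, assuming states can be encoded as integers in [0, 2**BITS); the encoding itself is game-specific and not given here.

    UNKNOWN, LOSS, WIN, DRAW = 0b00, 0b01, 0b10, 0b11
    BITS = 20                                   # example: 20-bit state encoding

    table = bytearray((1 << BITS) // 4)         # 2 bits per state, 4 states/byte

    def get_value(state):
        byte, slot = divmod(state, 4)
        return (table[byte] >> (2 * slot)) & 0b11

    def set_value(state, value):
        byte, slot = divmod(state, 4)
        table[byte] &= ~(0b11 << (2 * slot))    # clear the two bits
        table[byte] |= value << (2 * slot)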
You undoubtedly want a task queue service of some sort, such as RabbitMQ, probably in conjunction with a database which can store the data once you've calculated it. Alternatively, you could use a hosted service like Amazon's SQS. The client would consume an item from the queue, generate its successors, and enqueue those, as well as record the outcome of the item it just consumed. If the state is an end-state, it can propagate scoring information up to parent elements by consulting the database.
Two caveats to bear in mind:
The number of items in the queue will likely grow exponentially as you explore the tree, with each work item causing several more to be enqueued. Be prepared for a very long queue.
Depending on your game, it may be possible for there to be multiple paths to the same game state. You'll need to check for and eliminate duplicates, and your database will need to be structured so that it's a graph (possibly with cycles!), not a tree.
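A minimal sketch of the worker loop described above, with everything hypothetical: queue.get()/queue.put() stand in for whatever task queue you use (RabbitMQ, SQS, ...), db.seen()/db.record()/propagate_to_parents() for the state store, and successors()/is_terminal()/score() for the game-specific logic.

    def worker(queue, db):
        while True:
            state = queue.get()
            if db.seen(state):                   # duplicate path to a known state
                continue
            if is_terminal(state):
                db.record(state, value=score(state))
                propagate_to_parents(db, state)  # minimax update up the graph
                continue
            children = successors(state)
            db.record(state, children=children)
            for child in children:
                queue.put(child)

The db.seen() check is also where the second caveat bites: because multiple paths can reach the same state, the store behaves as a graph, not a tree.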
The first thing that popped into my mind is a Linda-style shared 'whiteboard', where different processes can consume 'problems' off the whiteboard, add new problems to the whiteboard, and add 'solutions' to the whiteboard.
Perhaps the Cassandra project is the more modern version of Linda.
There have been many attempts to parallelize problems across distributed computer systems; Folding@home provides a framework that executes binary blob 'cores' to solve protein folding problems. Distributed.net might have started the modern incarnation of distributed problem solving, and might have clients that you can start from.

Fast algorithm for line of sight calculation in an RTS game

I'm making a simple RTS game. I want it to run very fast because it should work with thousands of units and 8 players.
Everything seems to work flawlessly, but it seems the line-of-sight calculation is a bottleneck. It's simple: if an enemy unit is within the LOS range of any of my units, it is visible.
Currently I use a quite naive algorithm: for every enemy unit I check whether any of my units can see it. It's O(n^2).
So if there are 8 players and they have 3000 units each, that would mean 3000*21000 = 63,000,000 tests per player in the worst case, which is quite slow.
More details: it's a stupidly simple 2D space RTS: no grid, units move along straight lines everywhere, and there is no collision, so they can move through each other. So even hundreds of units can be at the same spot.
I want to speed up this LOS algorithm somehow. Any ideas?
EDIT:
So additional details:
I meant that a single player can have as many as 3000 units.
my units have radars, so they look in all directions equally.
Use a spatial data structure to efficiently look up units by location.
Additionally, if you only care whether a unit is visible, but not which unit spotted it, you can do:
    for each unit:
        mark the regions this unit sees in your spatial data structure
and then have:
    isVisible(unit) = isVisible(region(unit))
A very simple spatial data structure is the grid: you overlay a coarse grid over the playing field. The regions are this grid's cells. You allocate an array of regions, and for each region you keep a list of the units presently in it.
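A minimal sketch of such a grid, assuming 2D positions and a fixed cell size; the class and method names are illustrative.

    from collections import defaultdict

    class Grid:
        def __init__(self, cell_size):
            self.cell_size = cell_size
            self.cells = defaultdict(set)       # (cx, cy) -> units in that cell

        def cell_of(self, x, y):
            return (int(x // self.cell_size), int(y // self.cell_size))

        def insert(self, unit, x, y):
            self.cells[self.cell_of(x, y)].add(unit)

        def remove(self, unit, x, y):
            self.cells[self.cell_of(x, y)].discard(unit)

        def units_near(self, x, y, radius):
            # Candidate units within `radius`, gathered from overlapping cells.
            r = int(radius // self.cell_size) + 1
            cx, cy = self.cell_of(x, y)
            for dx in range(-r, r + 1):
                for dy in range(-r, r + 1):
                    yield from self.cells.get((cx + dx, cy + dy), ())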
You may also find Muki Haklay's demonstration of spatial indexes useful.
One of the most fundamental rules in gamedev is to optimize the bejeebers out of your algorithms by exploiting all possible constraints your gameplay defines. This is the main reason you don't see wildly different games built on top of any given company's game engine: they've exploited their constraints so efficiently that they can't deal with anything that isn't within those constraints.
That said, you said that units move in straight lines, and you say that players can have 3000 units. Even if I assume that's 3000 units across eight players, that's 375 units per player, so I think I'm safe in assuming that on each step of gameplay (and I am assuming that each step involves the calculation you describe above) more units will keep their direction than change it.
So, if this is true, then you want to divide all your pieces into two groups - those that did change direction in the last step, and those that did not.
For those that did, you need to do a bit of calculating. For units of any two opposing forces, you want to ask, "When will unit A see unit B, given that neither unit A nor unit B changes direction or speed?" (You can deal with acceleration/deceleration, but then it gets more complicated.) To calculate this you first need to determine whether the vectors that unit A and unit B are travelling on will intersect (a simple 2D line-intersection calculation, combined with a calculation that tells you when each unit hits this intersection). If they don't, and they can't see each other now, then they never will see each other unless at least one of them changes direction. If they do intersect, then you need to calculate the time differential between when the first and second units pass through the point of intersection; if the separation implied by that differential is greater than the LOS range, then these units will never see each other unless one changes direction. If it is less than the LOS range, a few more (wave hands vigorously) calculations will tell you when this blessed event will take place.
Now, what you have is a collection of information bifurcated into pairs that will never see each other and pairs that will see each other at some time t in the future. Each step, you simply deal with the units that have changed direction and compute their interactions with the rest of the units. (Oh, and deal with those units that previous calculations told you would come into view of each other; remember to keep these in an insertable ordered structure.) What you've effectively done is exploit the linear behavior of the system to change your question from "Does unit A see unit B?" to "When will unit A see unit B?"
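One way to compute "when will unit A see unit B" for constant velocities is to solve for when their separation first equals the LOS range, using relative position and velocity; this is an equivalent formulation of the intersection reasoning above, not the exact procedure the answer describes. Inputs are 2D tuples; math.inf means "never, unless one of them turns".

    import math

    def time_until_visible(pa, va, pb, vb, los_range):
        rx, ry = pb[0] - pa[0], pb[1] - pa[1]        # relative position
        wx, wy = vb[0] - va[0], vb[1] - va[1]        # relative velocity
        # |r + w*t|^2 = los_range^2  ->  a*t^2 + b*t + c = 0
        a = wx * wx + wy * wy
        b = 2 * (rx * wx + ry * wy)
        c = rx * rx + ry * ry - los_range * los_range
        if c <= 0:
            return 0.0                               # already visible
        if a == 0:
            return math.inf                          # same velocity, gap never closes
        disc = b * b - 4 * a * c
        if disc < 0:
            return math.inf                          # closest approach stays outside LOS
        t = (-b - math.sqrt(disc)) / (2 * a)         # first crossing of the LOS circle
        return t if t >= 0 else math.inf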
Now, all of that said, this isn't to discount the spatial data structure answer; it's a good answer. However, that approach is also capable of dealing with units in random motion, so you want to consider how to optimize it further for your constraints. You also need to be careful about cross-region visibility, i.e. units at the borders of two different regions may be able to see each other. If you have pieces that tend to clump up, a spatial data structure with variable dimensions might be the answer, where pieces that are not in the same region are guaranteed not to be able to see each other.
I'd do this with a grid. I think that's how commercial RTS games solve the problem.
Discretize the game world for the visibility tracker. (Square grid is easiest. Experiment with the coarseness to see what value works best.)
Record the present units in each area. (Update whenever a unit moves.)
Record the areas each player sees. (This has to be updated as units move. A unit could just poll to determine its visible tiles, or you could analyze the map before the game starts.)
Make a list (or whatever structure is fitting) for the enemy units seen by each player.
Now whenever a unit goes from one area of visibility to another, perform a check:
Went from an unseen to a seen area - add the unit to the player's visibility tracker.
Went from a seen to an unseen area - remove the unit from the player's visibility tracker.
In the other two cases no visibility change occurred.
This is fast but takes some memory. However, with BitArrays and Lists of pointers, the memory usage shouldn't be that bad.
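A minimal sketch of the cell-crossing check above, assuming a per-player set of seen cells and a per-player set of visible enemy units; names are illustrative.

    def on_unit_moved(unit, old_cell, new_cell, seen_cells, visible_enemies):
        # Called when `unit` (an enemy of this player) crosses a cell boundary.
        if old_cell == new_cell:
            return
        was_seen = old_cell in seen_cells
        now_seen = new_cell in seen_cells
        if not was_seen and now_seen:
            visible_enemies.add(unit)       # entered a seen area
        elif was_seen and not now_seen:
            visible_enemies.discard(unit)   # left the seen area
        # seen -> seen and unseen -> unseen: no visibility change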
There was an article about this in one of the Game Programming Gems books (one of the first three, I think.)

Critical path analysis

I'm trying to write a VB6 program (for a laugh) that will compute event times plus the critical path JUST BASED ON A PRECEDENCE TABLE. I want my students to use it as a checking mechanism, i.e. to do everything without drawing the activity network. I'm happy that I can do all this once I've got start and finish events for each activity. How do I allocate events without drawing the network? Everything I come up with works for a specific example and then doesn't work for another one. I need a more general algorithm and it's driving me mental. Help!
I am not a professional programmer - I do this in my spare time to create teaching resources - simple English would really be appreciated.
Okay, so you have a precedence table, which I take to be a table of pairs like
A→B
B→C
and so forth, for activities {A,B,C}. Each of the activities also has a duration and (maybe) a distribution on the duration, so you know A takes 3 days, B takes 2, and so on. This would be interpreted as "A must be finished before B which must be finished before C".
Right?
Now, the obvious thing to do is construct the graph of activities and arrows; in fact, you basically have the graph there in edge-list form. The critical path is the greatest-weight (biggest sum of durations) path. This is a longest-path problem, and assuming your chart isn't cyclic (which would be bad anyway), it can be solved with a topological sort or transitive closure.
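A minimal sketch of that longest-path computation (in Python rather than VB6, purely for illustration): earliest times come from a forward pass over a topological order, latest times from a backward pass, and zero-float activities form the critical path. The durations and precedences are the example values above.

    from collections import defaultdict

    durations = {"A": 3, "B": 2, "C": 4}
    precedes = [("A", "B"), ("B", "C")]           # A must finish before B, etc.

    succ = defaultdict(list)
    indegree = {a: 0 for a in durations}
    for u, v in precedes:
        succ[u].append(v)
        indegree[v] += 1

    # Topological order (Kahn's algorithm).
    order, queue = [], [a for a in durations if indegree[a] == 0]
    while queue:
        u = queue.pop()
        order.append(u)
        for v in succ[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)

    # Forward pass: earliest start = max over predecessors' earliest finishes.
    earliest_start = {a: 0 for a in durations}
    for u in order:
        for v in succ[u]:
            earliest_start[v] = max(earliest_start[v],
                                    earliest_start[u] + durations[u])

    project_end = max(earliest_start[a] + durations[a] for a in durations)

    # Backward pass: latest finish; zero-float activities are critical.
    latest_finish = {a: project_end for a in durations}
    for u in reversed(order):
        for v in succ[u]:
            latest_finish[u] = min(latest_finish[u],
                                   latest_finish[v] - durations[v])

    critical = [a for a in order
                if latest_finish[a] - durations[a] == earliest_start[a]]
    print(critical)   # ['A', 'B', 'C'] for this example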