I'm working on an evolutionary simulation with predators, prey, and food (plants that grow on terrain depending on the conditions, and meat that creatures give off when they die).
Each of them occupies an (x, y) position.
At the moment, each creature has a few "eyes" which are sensitive to the red, green and blue color channels. When a creature or a piece of food is within their viewing distance, the eyes react by sending an input to the creature's neural network, depending on the color of the object they are seeing, its relative angle and its distance from the creature.
What I'm doing right now is iterating through ALL the plants, meat pieces, and creatures, and checking if they are within the creature's viewing distance. If that condition is true, then the inputs for the network are calculated.
The problem is that the world is massive (about 10,000*10,000 "units") compared to the creatures' viewing distance, which is normally between 150 and 300 "units". On top of that, the number of plants can get really high depending on terrain conditions (up to a few thousand, too), together with all the other creatures and meat pieces.
So I normally end up with a massive loop being performed for each creature, which really slows down the simulation, even though most of the creatures and food pieces checked are completely irrelevant (they are too far away).
What I'm asking for is some method or algorithm that can reduce the number of points being checked for distance in each loop, limiting the distance of the points being checked, or some other technique.
PS: I thought about dividing the simulation in various "zones" so if a creature was in a zone it would only check for other points (food and other creatures) in that particular zone. However, as they are continuously moving, if they were on the edge of a zone it would make their view very inaccurate.
I also slightly improved the speed by comparing squared distances (skipping the sqrt), and then calculating the real distance only if the squared distance was smaller than viewing_distance^2.
Divide the world into zones. If the zone width is at least twice the maximum viewing distance, you only ever need to check the 2x2 block of four zones nearest the creature; with zones only slightly wider than the viewing distance itself, you check at most the 3x3 block of nine.
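For illustration, a minimal sketch of such a zone grid in Python (the entity attributes and the ZONE_SIZE value are assumptions, not part of the question):

    from collections import defaultdict
    from math import floor

    ZONE_SIZE = 600  # twice the maximum viewing distance of 300 "units"

    zones = defaultdict(list)  # (zx, zy) -> entities positioned in that zone

    def zone_of(x, y):
        return (floor(x / ZONE_SIZE), floor(y / ZONE_SIZE))

    def insert(entity):  # entity is assumed to have .x and .y
        zones[zone_of(entity.x, entity.y)].append(entity)

    def candidates(x, y):
        """Entities in the (at most) 4 zones a view circle at (x, y) can touch."""
        zx, zy = zone_of(x, y)
        # which half of the zone the creature sits in decides the neighbours
        dx = 1 if x - zx * ZONE_SIZE > ZONE_SIZE / 2 else -1
        dy = 1 if y - zy * ZONE_SIZE > ZONE_SIZE / 2 else -1
        for ax in (zx, zx + dx):
            for ay in (zy, zy + dy):
                yield from zones.get((ax, ay), ())

The candidates still need the exact squared-distance check afterwards; the grid only prunes the far-away ones.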
Using a quad-tree or a kd-tree has the disadvantage that you need to constantly update the structure. But it might work better; do some profiling.
The quadtree structure might be used for geometrical representation:
http://en.wikipedia.org/wiki/Quadtree
It all depends on how much efficiency you need. The zones answer also has the problem that if your creature is near the boundary of a zone you might end up scanning 4 zones.
Please forgive me if the title does not explain my question clearly.
Here are two pictures as my example:
My question is as follows: I have 2 or more different objects (in the pictures, two objects: circle and cross), each placed repeatedly into a grid with a fixed row/column distance (in the pictures, the circle has a distance of 4 and the cross has a distance of 2).
In the first picture, each of the two objects is repeated correctly without any interruptions (here an interruption means one object occupying another one's position), but the arrangement in the first picture is non-uniformly distributed; on the contrary, in the second picture the two objects do have interruptions (the circle object occupies some of the cross objects' positions), but the picture is uniformly distributed.
My target is to get the placement as uniform as possible (the objects are still placed with fixed distances, but some occupied positions may be allowed). Is there a known algorithm for this problem? Or are there any similar problems?
I have some preliminary thoughts on this problem: 1. occupation may relate to the least common multiple; 2. how do you define "uniformly distributed" mathematically? Maybe there's no general solution, but is there a solution for some special cases (for example, 3 objects with distances that are multiples of 2, or multiples of 3)?
Uniformity can be measured as the sum of squared inverse distances (or of squared deviations from equilibrium distances). Because of the squared relation, any single piece that approaches the others incurs a big fitness penalty, so the system will not tolerate a piece that is too close and will prefer a better distribution.
If you use plain distance rather than squared (or higher-order) distance, the system starts tolerating even overlapping pieces.
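A minimal sketch of that penalty in Python (points is assumed to be a list of (x, y) tuples):

    def uniformity_penalty(points):
        """Sum of squared inverse distances over all pairs; lower is more uniform."""
        total = 0.0
        for i in range(len(points)):
            x1, y1 = points[i]
            for j in range(i + 1, len(points)):
                x2, y2 = points[j]
                d2 = (x1 - x2) ** 2 + (y1 - y2) ** 2
                total += 1.0 / d2  # near pairs dominate, overlaps blow up
        return total

For the equilibrium-distance variant, replace 1.0 / d2 with the squared deviation of the pair distance from the target spacing.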
If you want to compute uniformity manually, compute the standard deviation of the distances. You'd call it perfect with a distance of 1 and a deviation of 0, but a small enough deviation is also acceptable.
I tested this only on a problem of fitting 106 circles into a square that's 10x the size of a circle.
Overview:
I have a simple plastic sandbox represented by a 3D polygonal mesh. I need to be able to determine the water level after pouring a specific amount of water into the sandbox.
The water is distributed evenly from above when poured
No fluid simulation, the water is poured very slowly
It needs to be fast
Question:
What kind of techniques/algorithms could I use for this problem?
I'm not looking for a program or similar that can do this, just algorithms - I'll do the implementation.
Just an idea:
First you compute all saddle points. Tools like discrete morse theory or topological persistence might be useful here, but I know too little about them to be sure. Next you iterate over all saddle points, starting at the lowest, and compute the point in time after which water starts to cross that point. This is the time at which the shallower (in terms of volume versus surface area) of the two adjacent basins has reached a level that matches the height of that saddle point. From that point onward, water pouring onto that surface will flow over to the other basin and contribute to its water level instead, until the two basins have reached an equal level. After that, they will be treated as one single basin. Along the way you might have to correct the times when other saddle points will be reached, as the area associated with basins changes. You iterate in order of increasing time, not increasing height (e.g. using a heap with decrease-key operation). Once the final pair of basins have equal height, you are done; afterwards there is only a single basin left.
On the whole, this gives you a sequence of “interesting” times, where things change fundamentally. In between these, the problem will be much more local, as you only have to consider the shape of a single basin to compute its water level. In this local problem, you know the volume of water contained in that basin, so you can e.g. use bisection to find a suitable level for that. The adjacent “interesting” times might provide useful end points for your bisection.
To compute the volume of a triangulated polytope, you can use a 3D version of the shoelace formula: for every triangle, you take its three vertices and compute their determinant. Sum them up and divide by 6, and you have the volume of the enclosed space. Make sure that you orientate all your triangles consistently, i.e. either all as seen from the inside or all as seen from the outside. The choice decides the overall sign, try it out to see which one is which.
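As a sketch in Python (mesh is assumed to be a list of triangles, each a triple of (x, y, z) vertices, consistently oriented):

    def signed_volume(mesh):
        """Divergence-theorem volume: sum of det(v1, v2, v3) over triangles, / 6."""
        total = 0.0
        for (ax, ay, az), (bx, by, bz), (cx, cy, cz) in mesh:
            det = (ax * (by * cz - bz * cy)
                   - ay * (bx * cz - bz * cx)
                   + az * (bx * cy - by * cx))
            total += det
        return total / 6.0  # the sign depends on the chosen orientation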
Note that your question might need refinement: when the level in a basin reaches two saddle points at exactly the same height, where does the water flow? Without fluid simulation this is not well defined, I guess. You could argue that it should be distributed equally among all adjacent basins. You could argue that such situations are unlikely to occur in real data, and therefore choose one neighbour arbitrarily, implying that this saddle point has infinitesimally less height than the other ones. Or you could come up with a number of other solutions. If this case is of interest to you, then you might need to clarify what you expect there.
A simple solution comes to mind:
binary-search your way through different heights of water, computing the volume of water contained.
i.e.
Start with an upper-estimate for the water's height of the depth D of the sandbox.
Note that since sand is porous, the maximum volume will be with the box filled to the brim with water;
Any more water would just pour back out onto the grass in our hypothetical back yard.
Also note, that this means that you don't need to worry about saddle points, or multiple water levels in your solution;
Again, we're assuming regular porous sand here, not mountains made out of rock.
Compute the volume of water contained by height D.
If it is within your approximation threshold, quit.
Otherwise, adjust your estimate with a different height, and repeat.
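A sketch of that loop, assuming a hypothetical water_volume(mesh, h) helper that returns the volume of water contained at water level h (one possible shape for it appears further below). Bisection works here because the contained volume grows monotonically with the height:

    def water_height(mesh, poured_volume, depth_d, eps=1e-3):
        """Bisect the water level until the contained volume matches the pour."""
        lo, hi = 0.0, depth_d  # the brim-full box is the upper estimate
        while hi - lo > eps:
            mid = (lo + hi) / 2.0
            if water_volume(mesh, mid) < poured_volume:
                lo = mid  # not enough water at this level: raise it
            else:
                hi = mid  # too much: lower it
        return (lo + hi) / 2.0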
Note that computing the volume of water above the surface of the sand for any given triangular piece of the sand is easy.
It's the volume of a triangular prism plus the volume of the tetrahedron which is in contact with the sand.
Note that the volume of water below the sandline would be calculated similarly, but the volume would be less, since part of it would be occupied by the sand.
I suggest internet-searching for typical air-void contents of sand, or water-holding capacity.
Or whatever phrases would return a sane result.
Also note, that some triangles may have zero water above the sand, if the sand is above the water-line.
Once you have the volume of water both above and below the sand-line for a single triangle of your mesh, you just loop over all of the triangles to get the total volume for your entire sandbox, for the given height.
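A sketch of that per-triangle loop. For a planar triangle lying entirely below the water line, the prism plus tetrahedron works out to the triangle's area projected onto the xy plane times the mean depth of its vertices; clipping of partially submerged triangles and the porosity-scaled volume below the sand line are left out here:

    def water_volume(mesh, h):
        """Volume of water above the sand surface at water level h."""
        total = 0.0
        for (ax, ay, az), (bx, by, bz), (cx, cy, cz) in mesh:
            if max(az, bz, cz) >= h:
                continue  # triangle reaches above the water line: clipping omitted
            # area of the triangle projected onto the xy plane
            area = abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0
            mean_depth = h - (az + bz + cz) / 3.0
            total += area * mean_depth  # the prism-plus-tetrahedron volume
        return total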
Note that this is a pretty dumb algorithm, but I suspect it will have decent performance compared to a fancy-schmancier algorithm which would try to do anything more clever.
Remember that this is just a handful of multiplications and summations for each triangle, and that blind loops with few if statements or other flow-control tend to execute quickly, since the processor can pipeline it well.
This method may be parallelized easily instead of looping over each triangle, if you have a highly-detailed mesh of a sandbox, and want to shove the calculations into multiple cores.
Or keep the loops, and shove different heights into each core.
Or something else; I leave parallelization and speeding up as an exercise for the reader.
I've an indeterminate number of closed CGPath elements of various shapes and sizes, all containing a single concave Bézier curve, like the red and blue shapes in the diagram below.
What is the simplest and most efficient method of dividing these shapes into n regions of (roughly) equal size?
What you want is Delaunay triangulation. Here is an example which resembles what you want to do; it uses an AS3 library. Here is an iOS port that should help you:
https://github.com/czgarrett/delaunay-ios
I don't really understand the context of what you want to achieve and what the constraints are. For instance, is there a hard requirement that the subdivided regions are equal size?
Often the solution to a performance problem is not a faster algorithm but a different approach, usually one or more of the following:
Pre-compute the values, or compute as much as possible offline, say by using another server API which is able to do the subdivision offline and cache the results for multiple clients. You could serve the pre-computed result as a bitmap where each colour indexes into the table of values you want to display. Looking up the value would be a simple matter of indexing the pixel at the touch position.
Simplify or approximate the solution. Would a grid subdivision be accurate enough? At 500 x 6 = 3000 subdivisions, you only have about 51 square points for each region; that's a region of around 7x7 points. At that size the user isn't going to notice if the region is perfectly accurate. You may need to end up aggregating adjacent regions anyway due to touch resolution.
Progressive refinement. You often don't need to compute the entire algorithm up front. Very often algorithms run in discrete (often symmetrical) units, meaning you're often re-using the information from previous steps. You could compute just the first step up front, and then use a background thread to progressively fill in the rest of the detail. You could also defer the final calculation until the touch occurs. A delay of up to a second is still tolerable at that point, or in the worst case you can display an animation while the calculation is in progress.
You could use some hybrid approach, and possibly compute one or two levels using Delaunay triangulation, and then using a simple, fast triangular sub-division for two more levels.
Depending on the required accuracy, and if discrete samples are not required, the final levels could be approximated using a weighted average between the points of the triangle, i.e., if the touch is halfway between two points, pick the average value between them.
First I will define:
Region: big stuff manually created I want to divide.
Zone: small stuff I want to generate.
I have a map. The world map in fact. And I want to divide it into small zones. The size of the zones will be dependent on what region the zone is in. For instance very small for Europe (maybe Europe will have like 200 zones) but only a couple of huge ones for the Atlantic Ocean.
I can manually create points to enclose a region. I will create a region for each big space that I want to have a different zone size than other spaces. For instance, I will create an enclosed region for Europe. So I've got a bunch of (latitude, longitude) points defining the limits of the Europe region. The shape is of course not regular, and there are holes in the middle of it (I don't want to create small zones over the Mediterranean Sea, but one big one). So what we've got is a huge 2D shape to be filled up with zones.
Zones themselves are n-sided polygons; the number of sides can be randomly chosen or subject to other constraints. The area of each zone is also random within limits (like 50, plus/minus 40%), although this constraint again can be relaxed (as an exception, not as a rule). Zones cannot overlap, and the whole region must be divided.
The obvious question: is there any algorithm that looks like it could be used to solve this problem?
I even have trouble determining whether a given point is inside or outside an enclosed region.
Me, I'd do it the other way round: put a point at the (approximate) centre of each zone and compute the Voronoi diagram of the resulting point set.
EDIT: in response to @Unreason's comments. I don't claim that computing the Voronoi diagram is an answer to the question asked. I do claim that computing the Voronoi diagram is a suitable method for dividing a planar map into zones which are defined by their closeness to a point. This may, or may not, satisfy OP's underlying requirement, and OP is free to use or ignore my suggestion.
I implied the following, but will now make it explicit: OP, if taken with this suggestion, should define the points (lat,long) at the 'centres' of each zone required and run the algorithm. Voronoi diagrams are not computed iteratively, if OP doesn't like the solution then OP would have to shift the points around and re-compute. I guess it would be feasible to write a routine to do this; the hard part, as ever with computational cartography, is in defining a computable rule about how well a trial solution fits (quasi-)aesthetic requirements.
I wouldn't bother, I'd use country capital cities as the points for my zones (relatively densely packed in Europe, relatively sparse in the Atlantic) and let the algorithm run. Job done.
Perhaps OP might use the locations of all cities with populations over 5 x 10^5 (there are probably about 200 of those in Europe). Or some other points.
Oh, and computing the Voronoi diagram isn't random either, it's entirely deterministic. Again, this may or may not satisfy the underlying requirement.
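If it helps, a minimal sketch of this suggestion in Python using SciPy (the coordinates are made-up illustrations, and lat/long are treated as planar here; real map data would want a projection first):

    import numpy as np
    from scipy.spatial import Voronoi

    # (longitude, latitude) of a few hypothetical zone centres
    centres = np.array([
        [2.35, 48.85],    # Paris
        [13.40, 52.52],   # Berlin
        [-0.13, 51.51],   # London
        [-30.00, 40.00],  # a mid-Atlantic point
    ])

    vor = Voronoi(centres)
    print(vor.vertices)      # corners of the zone boundaries
    print(vor.ridge_points)  # which pairs of centres share a boundary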
To determine if a point is inside a polygon, follow the point-in-polygon article on Wikipedia or use some geometry framework.
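For reference, a sketch of the standard ray-casting (even-odd) test from that article (plain Python; polygon is a list of (x, y) vertices):

    def point_in_polygon(px, py, polygon):
        """Even-odd rule: cast a ray to the right and count edge crossings."""
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > py) != (y2 > py):  # edge straddles the ray's height
                # x coordinate where the edge crosses the horizontal line
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if px < x_cross:
                    inside = not inside
        return inside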
The restrictions (dividing a polygon into smaller polygons of loosely the same size) are not very limiting at all. For example, if you cut the big polygons with vertical and horizontal lines spaced such that on land you get exactly the targeted area size, then for Europe you will satisfy your criteria for most of the zones.
Inspect them all, and for the ones that do not satisfy the criteria you can start modifying the borders with the neighbouring zones in such a way as to reach the desired size (as you have +/- 40% leeway, this should not be hard).
You can do this by moving the shared nodes or by adding points to the borders and moving only those lines.
Also, before the above, join the zones from the initial cut that are smaller than a certain percentage (for example 20% of the target size); these could be islands and other small pieces.
The algorithm would work well for a large number of small zones, but not as well for regions that need to be cut into only a few zones (though it would still work).
What is the best way to check collision of huge number of circles?
It's very easy to detect a collision between two circles, but if we check every combination then it is O(n^2), which is definitely not an optimal solution.
We can assume that a circle object has the following properties:
Coordinates
Radius
Velocity
Direction
Velocity is constant, but direction can change.
I've come up with two solutions, but maybe there are some better solutions.
Solution 1
Divide the whole space into overlapping squares and check for collisions only between circles that are in the same square. The squares need to overlap so there won't be a problem when a circle moves from one square to another.
Solution 2
At the beginning, the distance between every pair of circles needs to be calculated.
If the distance is small, the pair is stored in a list, and we need to check it for collision on every update.
If the distance is big, we store the update after which the first collision could occur (it can be calculated, because we know the distance and the velocities). This needs to be stored in some kind of priority queue. After the previously calculated number of updates, the distance is checked again, and then we repeat the procedure: put the pair on the list or back into the priority queue.
Answers to Mark Byers' questions
Is it for a game?
It's for a simulation, but it can also be treated as a game.
Do you want to recalculate the new position every n milliseconds, and also check for collisions at this time?
Yes, the time between updates is constant.
Do you want to find the time at which the first/every collision occurs?
No, I want to find every collision and do 'something' when it occurs.
How important is accuracy?
It depends on what you mean by accuracy. I need to detect all collisions.
Is it a big problem if very small fast moving circles can pass through each other occasionally?
It can be assumed that the speed is small enough that this won't happen.
There are "spatial index" data structures for storing your circles for quick comparison later; quadtrees, r-trees and kd-trees are examples.
Solution 1 seems to be a spatial index, and solution 2 would benefit from a spatial index every time you recalculate your pairs.
To complicate matters, your objects are moving - they have velocity.
It is normal to use spatial indexes for objects in games and simulations, but mostly for stationary objects, and typically objects that don't react to a collision by moving.
It is normal in games and such that you compute everything at set time intervals (discrete), so it might be that two objects pass through each other but you fail to notice because they moved so fast. Many games actually don't even evaluate collisions in strict chronological order. They have a spatial index for stationary objects e.g. walls, and lists for all the moving objects that they check exhaustively (although with relaxed discrete checks as I outlined).
Accurate continuous collision detection, where the objects react to collisions, is usually much more demanding in simulations.
The pairs approach you outlined sounds promising. You might keep the pairs sorted by next collision, and reinsert them at the appropriate new positions when they have collided. You only have to sort the newly generated collision list (O(n lg n)) for the two objects, and then merge two lists (the new collisions for each object, and the existing list of collisions; inserting the new collisions, removing the stale collisions that listed the two objects that collided), which is O(n).
Another solution to this is to adapt your spatial index to store the objects not strictly in one sector but in each that it has passed through since the last calculation, and do things discretely. This means storing fast moving objects in your spatial structure, and you'd need to optimise it for this case.
Remember that linked lists or lists of pointers are very bad for caching on modern processors. I'd advocate that you store copies of your circles - their important properties for collision detection at any rate - in an array (sequential memory) in each sector of any spatial index, or in the pairs you outlined above.
As Mark says in the comments, it could be quite simple to parallelise the calculations.
I assume you are doing a simple hard-sphere molecular dynamics simulation, right? I came across the same problem many times in Monte Carlo and molecular dynamics simulations. Both of your solutions are very often mentioned in the literature about simulations. Personally I prefer solution 1, but slightly modified.
Solution 1
Divide your space into rectangular cells that don't overlap. So when you check one circle for collisions, you look at all circles inside the cell that your first circle is in, and look X cells in each direction around it. I've tried many values of X and found that X = 1 is the fastest solution. So you have to divide space into cells with a size in each direction equal to:
Divisor = SimulationBoxSize / MaximumCircleDiameter;
CellSize = SimulationBoxSize / Divisor;
Divisor should be bigger than 3, otherwise it will cause errors (if it is too small, you should enlarge your simulation box).
Then your algorithm will look like this (a code sketch follows the steps):
Put all circles inside the box
Create cell structure and store indexes or pointers to circles inside a cell (on array or on a list)
Make a step in time (move everything) and update the circles' positions inside the cells
Look around every circle for collision. You should check one cell around in every direction
If there is a collision - do something
Go to 3.
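A compact sketch of steps 2, 4 and 5 in Python (the cell structure is rebuilt each step for simplicity, and the circles are assumed to have .x, .y and .r attributes):

    from collections import defaultdict

    def build_cells(circles, cell_size):
        cells = defaultdict(list)  # (cx, cy) -> indexes of circles in that cell
        for i, c in enumerate(circles):
            cells[(int(c.x // cell_size), int(c.y // cell_size))].append(i)
        return cells

    def check_collisions(circles, cell_size, on_collision):
        cells = build_cells(circles, cell_size)        # step 2
        for (cx, cy), members in cells.items():
            for i in members:
                a = circles[i]
                # step 4: look one cell around in every direction (X = 1)
                for nx in (cx - 1, cx, cx + 1):
                    for ny in (cy - 1, cy, cy + 1):
                        for j in cells.get((nx, ny), ()):
                            if j <= i:
                                continue  # visit each pair only once
                            b = circles[j]
                            d2 = (a.x - b.x) ** 2 + (a.y - b.y) ** 2
                            if d2 < (a.r + b.r) ** 2:
                                on_collision(a, b)     # step 5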
If you write it correctly, you will get something like O(N) complexity, because the maximum number of circles inside 9 cells (in 2D) or 27 cells (in 3D) is constant for any total number of circles.
Solution 2
Usually this is done like this (again, a sketch follows the steps):
For each circle, create a list of circles that are within distance R < R_max, and calculate the time after which we should update the lists (something like T_update = R_max / V_max, where V_max is the maximum current velocity)
Make a step in time
Check distance of each circle with circles on its list
If there is a collision - do something
If the current time is bigger than T_update, go to 1.
Otherwise go to 2.
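A minimal sketch of step 1 in Python (circles again assumed to have .x and .y attributes):

    def build_neighbor_lists(circles, r_max):
        """For each circle, list every other circle within R_max (step 1)."""
        neighbors = [[] for _ in circles]
        for i in range(len(circles)):
            for j in range(i + 1, len(circles)):
                dx = circles[i].x - circles[j].x
                dy = circles[i].y - circles[j].y
                if dx * dx + dy * dy < r_max * r_max:
                    neighbors[i].append(j)
                    neighbors[j].append(i)
        return neighbors

    # Per the rule of thumb above, the lists are rebuilt once the elapsed
    # time exceeds T_update = r_max / v_max.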
This solution with lists is very often improved by adding another list with R_max_2 > R_max and its own T_2 expiration time; that second list is used to update the first list. Of course, after T_2 you have to update all lists, which is O(N^2). Also be careful with these T and T_2 times, because if a collision can change a velocity, then those times change too. The same applies if you introduce forces into your system, since they also cause velocity changes.
Solution 1+2
You can use lists for collision detection and cells for updating the lists. One book claims this is the best solution, but I think that if you create small cells (like in my example), then solution 1 is better. But that is my opinion.
Other stuff
You can also do other things to improve speed of simulation:
When you calculate the distance r = sqrt((x1-x2)*(x1-x2) + (y1-y2)*(y1-y2) + ...), you don't have to do the square root operation. You can compare r^2 to some value instead - that's fine. Also, you don't always have to do all the (x1-x2)*(x1-x2) operations (I mean, for all dimensions): if x*x alone is already bigger than some r_collision^2, then the full sum with all the other y*y terms and so on will be bigger too.
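Both tricks in one small sketch (plain Python; points are plain coordinate tuples of any dimension):

    def within(a, b, r_collision):
        """Compare squared distance against r_collision**2, bailing out per axis."""
        r2 = r_collision ** 2
        d2 = 0.0
        for pa, pb in zip(a, b):  # works for 2D or 3D points
            d2 += (pa - pb) ** 2
            if d2 > r2:
                return False      # partial sum already too big: no sqrt needed
        return d2 < r2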
The molecular dynamics method is very easy to parallelise. You can do it with threads or even on a GPU. You can calculate each distance in a different thread. On a GPU you can easily create thousands of threads almost for free.
For hard spheres there is also an effective algorithm that doesn't step through time, but instead looks for the nearest collision in time, jumps to that time, and updates all positions. It can be good for non-dense systems where collisions are not very probable.
One possible technique is to use the Delaunay triangulation on the centers of your circles.
Consider the center of each circle and apply the Delaunay triangulation. This will tessellate your surface into triangles, which allows you to build a graph where each node stores the center of a circle, and each edge connects it to the centers of neighbouring circles. The tessellation limits the number of neighbours to a reasonable value (6 neighbours on average).
Now, when a circle moves, you have a limited set of circles to consider for collisions. You then have to apply the tessellation again to the set of circles which are impacted by the move, but this operation involves only a very small subset of circles (the neighbours of the moving circle, and some neighbours of the neighbours).
The critical part is the first tessellation, which will take some time to perform; later tessellations are not a problem. And of course you need an implementation of a graph that is efficient in time and space...
Subdivide your space into regions and maintain a list of which circles are centred in each region.
Even if you use a very simple scheme, such as placing all the circles in a list sorted by centre.x, you can speed things up massively. To test a given circle, you only need to test it against the circles on either side of it in the list, going outward until you reach one whose x coordinate is more than a radius away.
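A sketch of that sorted-list test in Python (circles with assumed .x, .y and .r attributes; scanning only rightward from each circle still visits every pair once):

    def sorted_sweep(circles, on_collision):
        """Sort by centre x, then only compare while the x gap can be closed."""
        circles.sort(key=lambda c: c.x)
        r_max = max(c.r for c in circles)
        for i, a in enumerate(circles):
            for b in circles[i + 1:]:
                if b.x - a.x > a.r + r_max:
                    break  # every later circle is even farther away in x
                if (a.x - b.x) ** 2 + (a.y - b.y) ** 2 < (a.r + b.r) ** 2:
                    on_collision(a, b)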
You could make a 2D version of a "sphere tree" which is a special (and really easy to implement) case of the "spatial index" that Will suggested. The idea is to "combine" circles into a "containing" circle until you've got a single circle that "contains" the "huge number of circles".
Just to indicate the simplicity of computing a "containing circle" (top-of-my-head):
1) Add the center locations of the two circles (as vectors) and scale by 1/2; that's the center of the containing circle.
2) Subtract the center locations of the two circles (as vectors), take the length, add the two radii, and scale by 1/2; that's the radius of the containing circle.
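In code, with one correction: the midpoint in step 1 is only exact when both radii are equal, so this sketch shifts the centre along the line between the two circles (and short-circuits when one circle already contains the other):

    from math import hypot

    def containing_circle(c1, c2):
        """Smallest circle containing both circles, each given as (x, y, r)."""
        x1, y1, r1 = c1
        x2, y2, r2 = c2
        d = hypot(x2 - x1, y2 - y1)
        if d + min(r1, r2) <= max(r1, r2):  # one circle contains the other
            return c1 if r1 >= r2 else c2
        r = (d + r1 + r2) / 2.0             # step 2: distance plus radii, halved
        t = (r - r1) / d                    # how far to move from c1 towards c2
        return (x1 + (x2 - x1) * t, y1 + (y2 - y1) * t, r)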
Which answer is most efficient will depend somewhat on the density of circles. If the density is low, then placing a low-resolution grid over the map and marking those grid elements that contain a circle will likely be the most efficient. This will take approximately O(N*m*k) per update, where N is the total number of circles, m is the average number of circles per grid point, and k is the average number of grid points covered by one circle. If a circle can move more than one grid point per turn, then you have to modify m to include the number of grid points swept.
On the other hand, if the density is extremely high, you're best off trying a graph-walking approach. Let each circle contain all neighbors within a distance R (R > r_i for every circle radius r_i). Then, if you move, you query all the circles in the "forward" direction for neighbors they have and grab any that will be within D; then you forget all the ones in the backward direction that are now farther than D. Now a complete update will take O(N*n^2) where n is the average number of circles within a radius R. For something like a closely-spaced hexagonal lattice, this will give you much better results than the grid method above.
A suggestion - I am no game developer
Why not precalculate when the collisions are going to occur?
as you specify
We can assume that circle object has following properties:
-Coordinates
-Radius
-Velocity
-Direction
Velocity is constant, but direction can change.
Then as the direction of one object changes, recalculate those pairs that are affected. This method is effective if directions do not change too frequently.
As Will mentioned in his answer, spatial partition trees are the common solution to this problem. Those algorithms sometimes take some tweaking to handle moving objects efficiently, though. You'll want to use a loose bucket-fitting rule so that most steps of movement don't require an object to change buckets.
I've seen your "solution 1" used for this problem before and referred to as a "collision hash". It can work well if the space you're dealing with is small enough to be manageable and you expect your objects to be at least vaguely close to uniformly distributed. If your objects may be clustered, then it's obvious how that causes a problem. Using a hybrid approach of some type of a partition tree inside each hash-box can help with this and can convert a pure tree approach into something that's easier to scale concurrently.
Overlapping regions is one way to deal with objects that straddle the boundaries of tree buckets or hash boxes. A more common solution is to test any object that crosses the edge against all objects in the neighboring box, or to insert the object into both boxes (though that requires some extra handling to avoid breaking traversals).
If your code depends on a "tick" (and tests to determine if objects overlap at the tick), then:
when objects are moving "too fast" they skip over each other without colliding
when multiple objects collide in the same tick, the end result (e.g. how they bounce, how much damage they take, ...) depends on the order that you check for collisions and not the order that collisions would/should occur. In rare cases this can cause a game to lock up (e.g. 3 objects collide in the same tick; object1 and object2 are adjusted for their collision, then object2 and object3 are adjusted for their collision causing object2 to be colliding with object1 again, so the collision between object1 and object2 has to be redone but that causes object2 to be colliding with object3 again, so ...).
Note: In theory this second problem can be solved by "recursive tick sub-division" (if more than 2 objects collide, divide the length of the tick in half and retry until only 2 objects are colliding in that "sub-tick"). This can also cause games to lock up and/or crash (when 3 or more objects collide at the exact same instant you end up with a "recurse forever" scenario).
In addition, sometimes when game developers use "ticks" they also say "1 fixed-length tick = 1 / variable frame rate", which is absurd because something that is supposed to be a fixed length can't depend on something variable (e.g. when the GPU is failing to achieve 60 frames per second, the entire simulation goes into slow motion); and if they don't do this and have "variable-length ticks" instead, then both of the problems with "ticks" become significantly worse (especially at low frame rates) and the simulation becomes non-deterministic (which can be problematic for multi-player, and can result in different behavior when the player saves, loads or pauses the game).
The only correct way is to add a dimension (time), and give each object a line segment described as "starting coordinates and ending coordinates", plus a "trajectory after ending coordinates". When any object changes its trajectory (either because something unpredicted happened or because it reached its "ending coordinates") you'd find the "soonest" collision by doing a "distance between 2 lines < (object1.radius + object2.radius)" calculation for the object that changed and every other object; then modify the "ending coordinates" and "trajectory after ending coordinates" for both objects.
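The "soonest collision" calculation for two circles moving along straight lines reduces to a quadratic in t; a sketch (plain Python, 2D positions and velocities as tuples):

    from math import sqrt

    def first_collision_time(p1, v1, r1, p2, v2, r2):
        """Earliest t >= 0 with |(p1 - p2) + (v1 - v2) * t| = r1 + r2, or None."""
        px, py = p1[0] - p2[0], p1[1] - p2[1]  # relative position
        vx, vy = v1[0] - v2[0], v1[1] - v2[1]  # relative velocity
        rr = r1 + r2
        a = vx * vx + vy * vy
        b = 2.0 * (px * vx + py * vy)
        c = px * px + py * py - rr * rr
        if a == 0.0:
            return 0.0 if c <= 0.0 else None   # no relative motion
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None                        # paths never come close enough
        t = (-b - sqrt(disc)) / (2.0 * a)
        return t if t >= 0.0 else None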
The outer "game loop" would be something like:
while(running) {
    frame_time = estimate_when_frame_will_be_visible(); // Note: likely to be many milliseconds after you start drawing the frame
    while(soonest_object_end_time < frame_time) {
        update_path_of_object_with_soonest_end_time();
    }
    for each object {
        calculate_object_position_at_time(frame_time);
    }
    render();
}
Note that there are multiple ways to optimize this, including:
split the world into "zones" - e.g. so that if you know object1 would be passing through zones 1 and 2 then it can't collide with any other object that doesn't also pass through zone 1 or zone 2
keep objects in "end_time % bucket_size" buckets to minimize time taken to find "next soonest end time"
use multiple threads to do the "calculate_object_position_at_time(frame_time);" for each object in parallel
do all the "advance simulation state up to next frame time" work in parallel with "render()" (especially if most rendering is done by GPU, leaving CPU/s free).
For performance:
When collisions occur infrequently it can be significantly faster than "ticks" (you can do almost no work for relatively long periods of time); and when you have spare time (for whatever reason - e.g. including because the player paused the game) you can opportunistically calculate further into the future (effectively, "smoothing out" the overhead over time to avoid performance spikes).
When collisions occur frequently it will give you the correct results, but it can be slower than the broken "ticks" approach that gives incorrect results under the same conditions.
It also makes it trivial to have an arbitrary relationship between "simulation time" and "real time": things like fast forward and slow motion will not cause anything to break (even if the simulation is running as fast as the hardware can handle, or so slowly that it's hard to tell whether anything is moving at all). In the absence of unpredictability you can calculate ahead to an arbitrary time in the future; if you store old "object line segment" information instead of discarding it when it expires, you can skip to an arbitrary time in the past; and if you only store old information at specific points in time (to minimize storage costs), you can skip back to a time described by stored information and then calculate forward to an arbitrary time. These things combined also make it easy to do things like an instant slow-motion replay.
Finally; it's also more convenient for multiplayer scenarios, where you don't want to waste a huge amount of bandwidth sending a "new location" for every object to every client at every tick.
Of course the downside is complexity: as soon as you want to deal with things like acceleration/deceleration (gravity, friction, lurching movement), smooth curves (elliptical orbits, splines) or differently shaped objects (e.g. arbitrary meshes/polygons and not spheres/circles), the mathematics involved in calculating when the soonest collision will occur becomes significantly harder and more expensive; which is why game developers resort to the inferior "ticks" approach for simulations that are more complex than the case of N spheres or circles with linear motion.