AnyLogic: How to move a transporter using trip (travel) time?

I would like to use transporters in my model. The paths in the model are not to scale. How can I move the transporter between the nodes based on travel time?
For example, the same transporter has to move between nodes A and B in x minutes, but if it has to move from node C to node D, it should take y minutes. This calls for dynamically updating the speed of the transporter (in the "Move By Transporter" block). I am trying to replicate the "Movement is defined by: Trip time" option of the "MoveTo" block in the "MoveByTransporter" block. [Image: the "MoveTo" block option I am trying to reproduce.] How do I achieve this?
The reason I need to do this is that exact, to-scale locations are not known at the moment; however, I do know how much time every movement should take.
I was trying to use the lengths of the paths connecting the nodes as the distances between the nodes, then dividing each distance by the time the trip should take to arrive at a speed, and feeding that speed to the transporter. However, since the distance changes from trip to trip, a static calculation is not viable.

In my opinion, your approach is exactly right; just make it dynamic.
Use the distance function of the RouteData API and measure the distance of the path the transporter will take using the findShortestPath function of the TransporterFleet object, as in the sketch below.
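Something like this (a minimal sketch: the fleet name transporters, the node names, and the exact findShortestPath() signature are assumptions to check against your AnyLogic version):

    double tripTimeSeconds = 5 * 60;  // the x minutes you want this leg to take
    // the route the transporter will actually take between the two nodes
    RouteData route = transporters.findShortestPath(nodeA, nodeB);
    // distance / desired time = the speed to feed the Move By Transporter block
    double requiredSpeed = route.distance() / tripTimeSeconds;

Recompute this for every trip (e.g. when the agent enters the block), so each leg gets exactly the speed that makes it last the time you want.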
Good luck!

Related

FlowField Pathfinding on Large RTS Maps

When building a large map RTS game, my team are experiencing some performance issues regarding pathfinding.
A* is obviously inefficient here, due not only to janky pathfinding but also to the processing cost of large groups of units moving at the same time.
After research, the obvious solution would be to use FlowField pathfinding, the industry standard for RTS games as it stands.
The issue we are now having after creating the base algorithm is that the map is quite large requiring a grid of around 766 x 485. This creates a noticeable processing freeze or lag when computing the flowfield for the units to follow.
Has anybody experienced this before or have any solutions on how to make the flowfields more efficient? I have tried the following:
Adding each flowfield to a list when it is created and referencing it later (works once it has been created, but obviously lags on creation).
Processing flowfields before the game starts and referencing the list (due to the sheer number of cells, this simply doesn't work).
Creating a grid based on the distance between the furthest selected unit and the destination point (works for short distances, but not when moving from one end of the map to the other).
I was thinking about maybe splitting up the map into multiple flowfields, but I'm trying to work out how I would make them move from field to field.
Any advice on this?
Thanks in advance!
Maybe this is a bit of a late answer. Since you have mentioned that this is a large RTS game, the computation should not be limited to one CPU core. Here is some advice for using flowfields more efficiently:
1. Use multiple threads to compute the new flowfield for each unit move command.
2. Group units, so that all units in the same command group share the same flowfield.
3. Partition the flowfield grid, so you only have to update a partition whose contents changed (a new building, moving units).
4. Pre-bake the flowfield grid slot costs: pre-compute the basic cost of each grid cell from the environment or other static values that won't change during the game.
To illustrate points 3 and 4: if you have a 766 x 485 map, pad it to 800 x 500 and divide it into 100 partitions of 80 x 50 cells each, as stated in advice 3. You then have a 10 x 10 grid of 100 partitions; create a directed graph (https://en.wikipedia.org/wiki/Graph_theory) from the initial flowfield map (without considering any game units) and run the A* algorithm over it before the game begins, so that you know all the connections between partitions.
5. For each new flowfield, build it only within the partitions marked by a simple A* search over that graph, as in the sketch after this list. If one node of the route given by A* is totally blocked from reaching the next one, mark the node as blocked and run A* again over the graph.
6. Cache: save the flowfield result from step 5 for further use (the same unit spawning at home and going to the enemy base uses the same partition route). Invalidate the cache if anything in the path changes, but invalidate only the changed partition first and check whether that partition still connects to its neighbours; then only a minor change has to be made within that partition.
7. Late-update the units' commands at runtime. If the map is large enough, move the units immediately toward the next partition without a flowfield (use A* on the 10 x 10 graph first to get the next partition), and build the flowfield in the background using steps 1-6. If optimized properly the calculation takes only a few milliseconds, after which the units adjust their route; most of the time there is no difference and the player won't notice a thing. In the worst case, where all partitions must finally be searched to find the only possible route, only the first query has any delay, and the cache minimizes the time after that, since that single route will be reused repeatedly.
8. Re-do the build process above once every few seconds for each command group (in the background), just in case anything changes along the way.
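Here is a minimal Java sketch of steps 3-5 (all names are illustrative, not from any particular engine): a Dijkstra-style integration field is built only inside the partitions that the coarse A* search selected, instead of across the whole 800 x 500 grid, so per-command cost scales with route length rather than map size.

    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.PriorityQueue;

    class PartitionedFlowField {
        static final int BLOCKED = Integer.MAX_VALUE;

        // cost[x][y]      = pre-baked terrain cost per cell (step 4), BLOCKED if impassable
        // allowed[px][py] = true for partitions selected by the coarse A* (step 5)
        static int[][] integrationField(int[][] cost, boolean[][] allowed,
                                        int partSize, int goalX, int goalY) {
            int w = cost.length, h = cost[0].length;
            int[][] dist = new int[w][h];
            for (int[] row : dist) Arrays.fill(row, BLOCKED);
            PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt(c -> c[2]));
            dist[goalX][goalY] = 0;
            pq.add(new int[]{goalX, goalY, 0});
            int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
            while (!pq.isEmpty()) {
                int[] cur = pq.poll();
                int x = cur[0], y = cur[1], d = cur[2];
                if (d > dist[x][y]) continue;                         // stale queue entry
                for (int[] dir : dirs) {
                    int nx = x + dir[0], ny = y + dir[1];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (cost[nx][ny] == BLOCKED) continue;            // impassable cell
                    if (!allowed[nx / partSize][ny / partSize]) continue; // partition off the route
                    int nd = d + cost[nx][ny];
                    if (nd < dist[nx][ny]) {
                        dist[nx][ny] = nd;
                        pq.add(new int[]{nx, ny, nd});
                    }
                }
            }
            return dist; // each unit flows toward the neighbour with the smallest value
        }
    }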
I could get this working with a much larger random map (2000 x 2000) with no FPS drop at all.
Hope this helps anyone in the future.

What algorithm and data structure would fit the use case of overlapping traffic on a roadway

I have a problem where I have a road that has multiple entry and exit points. I am trying to model it so that traffic can flow in at an entry and out at an exit. The entry points also act as exits. All the entry points are labelled 1 to 10 (i.e. we have 10 entries and exits).
A car is allowed to enter and exit at any point; however, the entry is always a lower number than the exit. For example, a car may enter at 3 and go to 8, but it cannot go from 3 to 3 or from 8 to 3.
Every second, a car moves one unit along the road. So in the example above, the car moves from 3 to 4 after one second. I want to continuously accept cars at different entry points and update their positions every second. However, I cannot accept a car at an entry if there is already one present at that location.
All cars travel at the same speed of 1 unit per second, all are the same size, and each occupies just the point it is at. Once a car reaches its destination, it is removed from the road.
All new cars that arrive at an entry point and have to wait need to be assigned a waiting time. How would that work? For example, it needs to account for when the car can find a free slot to be put onto the road.
Is there an algorithm that this problem fits into?
What data structure would I model this with? For example, for each entry point I was thinking of something like a queue or an ordered map, and for the road, maybe a linked list?
Outside of a top-down master algorithm that decides what each car does and when, there is another approach that uses agents which interact with their environment and among themselves through a limited set of simple rules. This often gives rise to complex behaviors. You could maybe code simple rules into car objects, to define these interactions?
Maybe something like this:
emergent behavior algorithm:
a car moves forward if there is no car just in front of it.
a car merges into a lane if there is no car right beside it (and maybe none just behind that slot either).
a car progresses towards its destination, and removes itself when the destination is reached.
proposed data structure
The data structure could be an indexed collection of "slots" along which a car moves towards a destination.
Two data structures could intersect at a tuple of index values for each.
Roads with 2 or more lanes could be modeled with coupled data structures...
optimal numbers
Determining the max road use, and min time to destination would require running the simulation several times, with varying parameters of the number of cars, and maybe variations of the rules.
A more elaborate approach would use continuous space on the road, instead of discrete slots.
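To make the rules concrete, here is a rough Java sketch (all names illustrative): the road as an indexed array of slots, cars that know their exit slot, and a tick that applies the rules above. A queued car's waiting time then falls straight out of the simulation as the number of ticks it spends in its entry queue.

    import java.util.ArrayDeque;
    import java.util.Queue;

    class Road {
        static class Car {
            final int exitSlot;             // slot index where the car leaves the road
            int waited = 0;                 // ticks spent queueing at the entry
            Car(int exitSlot) { this.exitSlot = exitSlot; }
        }

        final Car[] slots;                  // slots[i] == null means point i is free
        final Queue<Car>[] entryQueues;     // waiting cars per entry point

        @SuppressWarnings("unchecked")
        Road(int points) {
            slots = new Car[points];
            entryQueues = new Queue[points];
            for (int i = 0; i < points; i++) entryQueues[i] = new ArrayDeque<>();
        }

        void enqueue(int entrySlot, int exitSlot) {
            entryQueues[entrySlot].add(new Car(exitSlot)); // entrySlot < exitSlot assumed
        }

        void tick() {
            // sweep from the far end backwards so a car can advance into a slot
            // freed earlier in the same tick
            for (int i = slots.length - 1; i >= 0; i--) {
                Car c = slots[i];
                if (c == null) continue;
                if (i + 1 == c.exitSlot) { slots[i] = null; continue; } // reached its exit
                if (i + 1 < slots.length && slots[i + 1] == null) {
                    slots[i + 1] = c;                                   // rule: advance if free
                    slots[i] = null;
                }
            }
            // admit one waiting car per entry if its slot is free; everyone
            // still queued accumulates waiting time
            for (int e = 0; e < entryQueues.length; e++) {
                if (slots[e] == null && !entryQueues[e].isEmpty()) slots[e] = entryQueues[e].poll();
                for (Car waiting : entryQueues[e]) waiting.waited++;
            }
        }
    }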
I can suggest a Directed Acyclic Graph (DAG), which stores each entry point as a node.
The problem of moving from one point to another can be thought of as a graph-flow problem, which has a number of algorithms for determining movement in a graph.
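A minimal Java sketch of that representation (illustrative only): because edges only run from lower-numbered to higher-numbered points, the graph is acyclic by construction, and on a single road a trip is just the node sequence from entry to exit.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class RoadDag {
        // entry/exit points 1..n as nodes; edges only run from lower to higher
        // numbers, so the graph is acyclic (here, a simple chain)
        final Map<Integer, List<Integer>> edges = new HashMap<>();

        RoadDag(int points) {
            for (int i = 1; i < points; i++) edges.put(i, List.of(i + 1));
        }

        // the route for a trip is simply the node sequence entry..exit
        List<Integer> route(int entry, int exit) {
            if (entry >= exit) throw new IllegalArgumentException("entry must be below exit");
            List<Integer> path = new ArrayList<>();
            for (int i = entry; i <= exit; i++) path.add(i);
            return path;
        }
    }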

Incremental price graph approximation

I need to display a cryptocurrency price graph similar to what is done on CoinMarketCap: https://coinmarketcap.com/currencies/bitcoin/
There could be gigabytes of data for one currency pair over a long period of time, so sending all the data to the client is not an option.
After doing some research I ended up using the Douglas-Peucker line approximation algorithm: https://www.codeproject.com/Articles/18936/A-C-Implementation-of-Douglas-Peucker-Line-Appro It lets me reduce the number of points sent to the client, but there's one problem: every time there is new data I have to go through all the data on the server again, and since I'd like to update the client in real time, this takes a lot of resources.
So I'm thinking about some kind of progressive algorithm where, let's say, if I need to display data for the last month, I can split the data into 5-minute intervals, preprocess only the last interval, and when it's completed, remove the first one. I'm debating either customising the Douglas-Peucker algorithm (though I'm not sure it fits this scenario) or finding an algorithm designed for this purpose (any hint would be highly appreciated).
Constantly re-computing the entire set of reduction points when new data arrives would change your graph continuously, and the graph would lack consistency: the graph seen by one user would differ from the graph seen by another, and the graph would change when a user refreshes the page (this shouldn't happen!). Even in the case of a server/application shutdown, your data needs to stay consistent with what it was before.
This is how I would approach it:
Your already-reduced points should stay as they are. Suppose you are getting data every second and you have computed reduced points for a 5-minute-interval graph; save those data points in a limiting queue. Now gather all the per-second data for the next 5 minutes, perform the reduction operation on those 300 data points, and add the resulting reduced point to your limiting queue.
I would make the queue synchronized: the main thread returns the data points in the queue whenever there is an API call, and a worker thread computes the reduction point for each 5-minute window once the data for the entire interval is available, as in the sketch below.
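A minimal Java sketch of that structure (class and method names are illustrative; the reduction step is a placeholder where a real line simplification such as Douglas-Peucker on the window would go):

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;

    class ReducedSeries {
        private final int maxPoints;                                // size of the limiting queue
        private final Deque<double[]> reduced = new ArrayDeque<>(); // published reduced points
        private final List<double[]> window = new ArrayList<>();    // current 5-minute buffer

        ReducedSeries(int maxPoints) { this.maxPoints = maxPoints; }

        // worker-thread side: called once per incoming {time, price} sample
        synchronized void add(double[] sample) {
            window.add(sample);
            if (window.size() == 300) {                 // 5 minutes of one-second data
                for (double[] p : reduce(window)) {
                    if (reduced.size() == maxPoints) reduced.pollFirst(); // drop the oldest interval
                    reduced.addLast(p);
                }
                window.clear();
            }
        }

        // main-thread side: what an API call returns
        synchronized List<double[]> snapshot() {
            return new ArrayList<>(reduced);
        }

        // placeholder reduction: keeps the window's last sample; swap in a real
        // simplification over the 300 points here
        private List<double[]> reduce(List<double[]> points) {
            return List.of(points.get(points.size() - 1));
        }
    }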
I'd use a tree.
A sub-node contains the "precision" and "average" values.
"precision" means the date-range. For example: 1 minute, 10 minutes, 1 day, 1 month, etc. This also means a level in the tree.
"average" is the value that best represents the price for a range. You can use a simple average, a linear regression, or whatever you decide as "best".
So if you need 600 points (say that's the window size), you can find the precision as prec = total_date_range / 600, with some rounding to your existing ranges.
Now that you have the prec, you just need to retrieve the nodes at that prec level.
With gigabytes of data, I'd slice them into std::vector objects. The tree would store ids of these vectors for the lowest nodes, and the rest of the nodes could also be implemented as indices into vectors.
Updating with new data then only requires updating one branch (or even creating a new one), starting from the root, but touching only a few sub-nodes.
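A minimal sketch of the level structure (in Java for consistency with the other sketches in this thread; names are illustrative, and the per-level lists stand in for the sliced vectors): each level holds one "average" per range, a completed parent range is rolled up from its children, and a query picks the finest level that still fits the point budget.

    import java.util.ArrayList;
    import java.util.List;

    class PrecisionTree {
        private final List<List<Double>> levels = new ArrayList<>(); // level 0 = finest range
        private final int fanout;                                    // child ranges per parent

        PrecisionTree(int numLevels, int fanout) {
            this.fanout = fanout;
            for (int i = 0; i < numLevels; i++) levels.add(new ArrayList<>());
        }

        // append one finest-granularity average; roll up a parent range whenever
        // its last `fanout` children complete it (only one branch is touched)
        void add(double value) {
            levels.get(0).add(value);
            for (int lvl = 0; lvl + 1 < levels.size(); lvl++) {
                List<Double> cur = levels.get(lvl);
                if (cur.size() % fanout != 0) break;          // parent range not complete yet
                double sum = 0;
                for (int i = cur.size() - fanout; i < cur.size(); i++) sum += cur.get(i);
                levels.get(lvl + 1).add(sum / fanout);        // the parent's "average" value
            }
        }

        // the prec choice: the finest level that still fits the point budget
        List<Double> query(int maxPoints) {
            for (List<Double> level : levels)
                if (level.size() <= maxPoints) return level;
            return levels.get(levels.size() - 1);             // coarsest as a fallback
        }
    }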

D3 force fundamentals?

I'm struggling to understand the impact of alphaDecay and velocityDecay, and I wouldn't be comfortable trying to explain the concept of alpha and strength in the context of D3. At my experience level, the source of the force module is too short-and-sweet, and I haven't yet found a "D3.force for Dummies" to get me up to speed.
From my tests, I've never associated any significant change in graph behavior with different alphaDecay values, and I've only seen the impact of velocityDecay at extreme settings (near 0 or 1). link.strength() is also still a mystery.
I'm also never really sure of when or why I should call simulation.restart().
This all leads to me not having a strategy for coming up with a satisfying graph. I feel like I'm always on the edge of a graph that's too explosive or just inert.
I've toyed with this interesting tool and read most of this, but it doesn't really cover alpha, link.strength, and the other things I've mentioned.
How do you understand these values and how do you configure them?
Specifically, what they do is documented in the tick documentation, which says this:
Increments the current alpha by (alphaTarget - alpha) × alphaDecay; then invokes each registered force, passing the new alpha; then decrements each node’s velocity by velocity × velocityDecay; lastly increments each node’s position by velocity.
I would describe them in this way:
Alpha
alpha I think of as the temperature of the system, which decays over time. Alpha heads toward alphaTarget, and once it drops below the alphaMin threshold everything automatically stops moving, because the system assumes there's no energy left. So the simulation stops.
How long the system has energy depends on three things: the current alpha; alphaTarget, the value alpha decays toward; and alphaDecay, the speed at which we lose heat from the system. The larger alphaDecay is, the quicker the force comes to a stop.
Velocity
Velocity is the speed of the individual items within the force. After each tick the velocity is decreased by the velocityDecay ratio, so I treat velocityDecay as friction: the higher the friction, the quicker an individual node comes to a stop.
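To make the decay rates concrete, here is the quoted update rule plugged into plain Java (just the arithmetic, not D3 code; the constants are d3-force's documented defaults: alpha = 1, alphaMin = 0.001, alphaTarget = 0, alphaDecay ≈ 0.0228, velocityDecay = 0.4):

    class TickMath {
        public static void main(String[] args) {
            double alpha = 1, alphaMin = 0.001, alphaTarget = 0, alphaDecay = 0.0228;
            double velocityDecay = 0.4;
            double velocity = 10;                            // some node's current speed
            int ticks = 0;
            while (alpha >= alphaMin) {
                alpha += (alphaTarget - alpha) * alphaDecay; // the temperature cools...
                velocity -= velocity * velocityDecay;        // ...and friction slows the node
                ticks++;
            }
            System.out.println(ticks);                       // ~300 ticks with these defaults
        }
    }

Raising alphaDecay mostly shortens that ~300-tick lifetime rather than changing where nodes end up, which is probably why its effect is easy to miss visually.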
Restart
Typically you call simulation.restart() off the back of some user action (a node being removed, a link being added, etc.).

Graph plotting: only keeping most relevant data

In order to save bandwidth, and so as not to have to generate pictures/graphs ourselves, I plan on using Google's charting API:
http://code.google.com/apis/chart/
which works by simply issuing a (potentially long) GET (or a POST), after which Google generates and serves the graph.
As of now I've got graphs made of about two thousand entries, and I'd like to trim that down to some arbitrary number of entries (e.g. by keeping only 50% of the original entries, or 10% of the original entries).
How can I decide which entries I should keep so as to have my new graph the closest to the original graph?
Is this some kind of curve-fitting problem?
Note that I know I can POST to Google's chart API with up to 16K of data, and this may be enough for my needs, but I'm still curious.
The flot-downsample plugin for the Flot JavaScript graphing library could do what you are looking for, up to a point.
The purpose is to try to retain the visual characteristics of the original line using considerably fewer data points.
The research behind this algorithm is documented in the author's thesis.
Note that it doesn't work for every kind of series, and in my experience it won't give meaningful results when you want a downsampling factor beyond 10.
The limitation is that it cuts the series into windows of equal size and then keeps one point per window. Since you may have denser data in some windows than in others, the result is not necessarily optimal. But it's efficient (it runs in linear time).
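For reference, here is a compact Java sketch of the bucketing idea flot-downsample is built on (the Largest-Triangle-Three-Buckets algorithm from the thesis linked above; an illustrative reimplementation, not the plugin's code): equal-size buckets, keeping the one point per bucket that forms the largest triangle with the previously kept point and the average of the next bucket.

    class Lttb {
        // pts[i] = {x, y}, sorted by x; returns `threshold` points
        static double[][] downsample(double[][] pts, int threshold) {
            int n = pts.length;
            if (threshold >= n || threshold < 3) return pts;
            double[][] out = new double[threshold][];
            out[0] = pts[0];                                  // always keep the first point
            double size = (double) (n - 2) / (threshold - 2); // bucket width
            int prev = 0;
            for (int i = 0; i < threshold - 2; i++) {
                int start = (int) Math.floor(i * size) + 1;
                int end = Math.min((int) Math.floor((i + 1) * size) + 1, n - 1);
                if (end <= start) end = start + 1;
                // average of the next bucket (the last point, for the final bucket)
                int nStart = end, nEnd = Math.min((int) Math.floor((i + 2) * size) + 1, n);
                double avgX = 0, avgY = 0;
                for (int j = nStart; j < nEnd; j++) { avgX += pts[j][0]; avgY += pts[j][1]; }
                avgX /= (nEnd - nStart); avgY /= (nEnd - nStart);
                // keep the point forming the largest triangle with the previously
                // kept point and the next bucket's average
                int best = start; double bestArea = -1;
                for (int j = start; j < end; j++) {
                    double area = Math.abs((pts[prev][0] - avgX) * (pts[j][1] - pts[prev][1])
                            - (pts[prev][0] - pts[j][0]) * (avgY - pts[prev][1]));
                    if (area > bestArea) { bestArea = area; best = j; }
                }
                out[i + 1] = pts[best];
                prev = best;
            }
            out[threshold - 1] = pts[n - 1];                  // always keep the last point
            return out;
        }
    }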
What you are looking to do is known as downsampling or decimation. Essentially you filter the data and then drop N - 1 out of every N samples (decimation or down-sampling by factor of N). A crude filter is just taking a local moving average. E.g. if you want to decimate by a factor of N = 10 then replace every 10 points by the average of those 10 points.
Note that with the above scheme you may lose some high frequency data from your plot (since you are effectively low pass filtering the data) - if it's important to see short term variability then an alternative approach is to plot every N points as a single vertical bar which represents the range (i.e. min..max) of those N points.
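A short Java sketch combining both suggestions (names are illustrative): each window of N samples is replaced by its average, and the window's min/max are kept alongside so the plot can draw a vertical bar wherever short-term variability matters.

    class Decimate {
        // replace every n samples by {average, min, max} for one plotted point
        static double[][] byFactor(double[] samples, int n) {
            int m = samples.length / n;                       // whole windows only
            double[][] out = new double[m][3];
            for (int i = 0; i < m; i++) {
                double sum = 0, min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
                for (int j = i * n; j < (i + 1) * n; j++) {
                    sum += samples[j];
                    min = Math.min(min, samples[j]);
                    max = Math.max(max, samples[j]);
                }
                out[i][0] = sum / n;                          // the low-pass-filtered value
                out[i][1] = min;                              // bottom of the range bar
                out[i][2] = max;                              // top of the range bar
            }
            return out;
        }
    }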
Graph (time series data) summarization is a very hard problem. It's like deciding which "relevant" parts of a text to keep in an automatic summary of it. I suggest you use one of the most respected libraries for finding "patterns of interest" in time series data, by Eamonn Keogh.
