I am currently working on a school project for multiagent systems and I'm looking for recommendations for pathfinding and exploration algorithms. The relevant problem description is as follows:
a) The agent is in a 2D rectangular grid of known, fixed but arbitrary dimensions.
b) The agent can only move in one of four directions, one square per simulation time step.
c) The agent has a fixed sensor range. It is able to see, for example, a 2-square radius around itself - a 5x5 grid centred on the agent.
d) Cells may contain obstacles which block the agent's movement but not vision.
Obstacles are created at initialisation, are non-perishable, and no new obstacle will spawn mid-simulation.
e) All cells can be accessed from any adjacent cell.
f) Other in-world objects relevant to the agent's goals exist. They do not block the agent, but they spawn and despawn with a lifetime drawn from a Normal/Gaussian distribution (parameters unknown to the agent, but it is allowed to estimate them from observations).
g) For a sense of the project's scale, the agent will be assessed in three scenarios: 50x50, 100x100, and a third whose dimensions are undisclosed (even to the developer).
There are three different types of scenarios for which I need to generate a path, and I've identified possible algorithms:
1) Shortest path to a known refuelling point. The map may not be fully explored yet. I'm considering using A*.
2) Map exploration. The agent may, at any arbitrary position, choose to initiate exploration mode. I need to get the shortest path to SENSE all cells in one pass. Known obstacle cells can be ignored. Revisiting cells is allowed but should be minimised, as implied by shortest path. I'm considering BFS for this but I honestly have no idea. I tried Googling but the results are all about real robots with hardware sensors. I'm not dealing with hardware here.
3) A variation of the travelling salesman problem. The agent maintains two sets of waypoints, say A and B. When it chooses to do so, it will alternate between waypoints from each set. The agent needs to visit every complete pair of AB-waypoints along the shortest path or, if that is not possible due to situational limits on path length, along the shortest path that earns the highest score possible in that particular pass. The scoring function is the percentage of all spawned pairs that are visited.
Time does not pass and the world does not change while the agent is thinking and planning. Also, no penalty or constraint is imposed on the agent's execution time, so I can afford to look for optimal algorithms as opposed to greedy ones, and pruning of the plan's length is not necessary.
I am capable of tweaking general algorithms to fit my specific contexts and internal logic, but I need help to set me in the correct direction, especially for the latter two scenarios which seem to be common problems in my encounters but aren't mentioned in textbook scenarios. Thanks!
I want to know some more applications of the widest path problem.
It seems like something that can be used in a multitude of places, but I couldn't get anything constructive from searching on the internet.
Can someone please share as to where else this might be used?
Thanks in advance.
(What I searched for included uses in P2P networks and CDNs, but I couldn't find exactly how it is used, and the papers were too long for me to scour.)
The widest path problem has a variety of applications in areas such as network routing problems, digital compositing and voting theory. Some specific applications include:
Finding the route with maximum transmission speed between two nodes.
This comes almost directly from the widest-path problem definition. We want to find the path between two nodes which maximizes the minimum-weight edge in the path.
Computing the strongest path strengths in Schulze’s method.
Schulze's method is a system in voting theory for finding a single winner among multiple candidates. Each voter provides an ordered preference list. We then construct a weighted graph where vertices represent candidates and the weight of an edge (u, v) represents the number of voters who prefer candidate u over candidate v. Next, we want to find the strength of the strongest path between each pair of candidates. This is the part of Schulze's method that can be solved as a widest-path problem: we simply run a widest-path algorithm for each pair of vertices.
Mosaicking of digital photographic maps. This is a technique for merging two maps into a single bigger map. The challenge is that the two original photos might have different light intensity, colors, etc. One way to do mosaicking is to produce seams where each pixel in the resulting picture is drawn entirely from one photo or the other. We want the seam to appear invisible in the final product. The problem of finding the optimal seam can be modeled as a widest-path problem; details of the modeling are found in the original paper.
Metabolic path analysis for living organisms. The objective of this type of analysis is to identify critical reactions in living organisms. A network is constructed based on the stoichiometry of the reactions. We wish to find the path which is energetically favored in the production of a particular metabolite, i.e., the path whose tightest bottleneck is as large as possible. This corresponds to the widest-path problem.
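The maximin computation shared by all of these applications (and used directly in Schulze's method for path strengths) can be sketched with a Floyd-Warshall-style dynamic program. The graph below is a made-up toy example:

```python
# Widest-path (maximin) strengths between all pairs, Floyd-Warshall style.
# width[u][v] holds the best achievable bottleneck on any path u -> v.
import math

def all_pairs_widest(n, edges):
    # edges: dict mapping (u, v) -> capacity; vertices are 0..n-1
    width = [[-math.inf] * n for _ in range(n)]
    for (u, v), cap in edges.items():
        width[u][v] = cap
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Going through k: the path's width is the narrower of the two halves.
                via_k = min(width[i][k], width[k][j])
                if via_k > width[i][j]:
                    width[i][j] = via_k
    return width

# Toy network: capacities on directed edges.
w = all_pairs_widest(4, {(0, 1): 5, (1, 2): 3, (0, 2): 2, (2, 3): 4})
print(w[0][2])  # widest 0 -> 2 path goes via 1: min(5, 3) = 3
```

For the network-routing application, the capacities would be link bandwidths and `w[u][v]` the maximum transmission speed between nodes u and v.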
I asked this question three days ago and I got burned by contributors because I didn't include enough information. I am sorry about that.
I have a 2D matrix where each array position holds the depth of water in a channel. I was hoping to apply Dijkstra's or a similar "least cost path" algorithm to find the least amount of concrete needed to build a bridge across the water.
It took some time to format the data into a clean version so I've learned some rudimentary Matlab skills doing that. I have removed most of the land so that now the shoreline is standardised to a certain value, my plan is to use a loop to move through each "pixel" on the "west" shore and run a least cost algorithm against it to the closest "east" shore and move through the entire mesh ultimately finding the least cost one.
This is my problem, fitting the data to any of the algorithms. Unfortunately I get overwhelmed by options and different formats because the other examples are for other use cases.
My other consideration is that the calculated least-cost path will likely be a jagged line, which would not be suitable for a bridge, so I need to constrain the bend radius of the path if at all possible, and I don't know how to go about doing that.
A picture of the channel:
Any advice in an approach method would be great, I just need to know if someone knows a method that should work, then I will spend the time learning how to fit the data.
You can apply Dijkstra to your problem in this way:
the two "dry" regions you want to connect correspond to matrix entries with value 0; the other cells have a positive value designating the depth (or the cost of filling this place with concrete)
your edges are the connections of neighbouring cells in your matrix. (It can be a 4- or 8-neighbourhood.) The weight of the edge is the arithmetic mean of the values of the connected cells.
then you apply the Dijkstra algorithm with a starting point in one "dry" region and an end point in the other "dry" region.
The cheapest path will connect two cells of value 0, and its weight will correspond to the sum of the costs of the cells visited. (One half of each cell's weight comes from the edge entering the cell, the other half from the edge leaving it.)
This way you will get a possibly rather crooked path leading over the water, which may be a helpful hint for where to build a cheap bridge.
You can speed up the calculation by using the A* algorithm. For that, one needs, for each cell, a lower bound on the remaining cost of reaching the other side. Such a bound can be calculated by examining the "concentric rings" around a point, for as long as the rings do not contain a 0-cell of the other side. The sum of the minimal cell values over these rings is then a lower bound on the remaining cost.
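A minimal sketch of the grid Dijkstra described above, assuming a small made-up depth matrix; edge weights are the arithmetic mean of the two cells' costs, so a full crossing sums the visited cells' costs as explained:

```python
import heapq

def cheapest_crossing(depth, sources, targets):
    # depth: 2D list of fill costs (0 = dry land); sources/targets: sets of (r, c).
    rows, cols = len(depth), len(depth[0])
    dist = {s: 0.0 for s in sources}
    pq = [(0.0, s) for s in sources]
    heapq.heapify(pq)
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) in targets:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-neighbourhood
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # Edge weight = arithmetic mean of the two cells' costs.
                nd = d + (depth[r][c] + depth[nr][nc]) / 2
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

# Toy channel: column 0 is the west shore, column 3 the east shore.
depth = [[0, 2, 9, 0],
         [0, 1, 1, 0],
         [0, 5, 2, 0]]
west = {(r, 0) for r in range(3)}
east = {(r, 3) for r in range(3)}
print(cheapest_crossing(depth, west, east))  # 2.0, crossing along the middle row
```

Seeding the queue with every west-shore cell at distance 0 finds the globally cheapest crossing in one run, instead of looping over shore pixels.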
An alternative approach, which emphasizes the constraint that you require a non-jagged shape for your bridge, would be to use Monte Carlo, simulated annealing or a genetic algorithm, where the initial "bridge" consists of a simple spline curve between two randomly chosen end points (one on each side of the chasm), plus a small number of randomly chosen intermediate points in the chasm. You would end up with a physically 'realistic' bridge and a reasonably optimized cost of concrete.
I'm dabbling in some path-finding systems (right now A*), but I'm nowhere near experienced enough to fully grasp the concepts behind everything. So please forgive me if this post is riddled with ignorance, or false assumptions.
My goal is to be able to have an object traverse multiple planes to reach a destination, with variable entrances to each level, and each level having different designs.
Imagine a cave system that has 8 entrances on top, and your goal is to reach 6 layers down. Some of the caves link up, and therefore paths can be shared, but others are isolated until a certain point, etc. Also, the interconnectivity of paths can be altered on each level.
I know there are some systems that can do this, but my goal is eventually best/fastest option, as well as the ability to do this calculation quickly (a possibility of at least 80+ different paths being calculated at any given time)
An example would be something like this:
Every 'layer' is another 'level', or plane. The green is a path that goes both up and down between layers. During the course of the instance, paths can be placed anywhere, and any divisions inside a layer can be removed (but for the case of this instance, they are organized like that).
It's been fairly easy to implement A* for some basic path finding on a single level (e.g., get from one position to a ladder going down). But trying to decide which ladder will lead to the ultimate goal is what is difficult.
My initial thoughts were to do something more like a data structure linking the paths to each other on the same level, and do some sort of tree traversal to decide which ladder the path finding should direct you to when you reach a certain level... but that got complicated quickly, seeing as levels can be altered at any given point.
Like I said, I'm not certain how A* actually works, or the basics behind it... but it's the basic algorithm that most people have said will work for multi-layered designs.
I know that it really depends on the engine, and the implementation of the algorithm, so I'm not asking for specifics... but rather some pointers on the best way to approach the situation;
Should I find a working A* implementation that has multi-level calculations built in, and change my level architecture to fit it?
Should I simply utilize a 2d A* implementation, and create a 'path' structure, to give the pathing instructions on each level?
Is there another approach to this style of design that would be more beneficial, and efficient (taking into account the number of calculations/paths to find)?
Thanks!
An approach that comes to mind:
For each level (assuming each level is a 2D grid), calculate the Manhattan distance (perhaps halved if you can move diagonally) from each ladder going up to each ladder going down, and let each floor's value be the shortest of those distances.
Now we can simply let the A* heuristic be the sum of distances from the current floor to the bottom.
To prevent repeated computation of paths on each level, we can precompute each possible path from ladder to ladder on every level - we can also then use the shortest of these paths rather than the Manhattan distances for the heuristic.
You say there is a possibility of at least 80+ different paths being calculated at any given time.
If these are all on the same map, you should consider calculating, for each ladder, which next ladder leads to the shortest path, along with that path's cost (possibly using some derivative of Dijkstra's algorithm starting from the bottom). Then, from any starting point, we can check the distances to all ladders on the same floor and the costs of their shortest paths, and simply pick the one giving the lowest sum of distance and path cost.
We could even extend this by storing, for any given point, what the next square is one should move to for the shortest path.
If the map changes a lot, storing the next point for any given point is probably not viable, but don't discount storing the next ladder for each ladder.
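The per-floor heuristic described above can be sketched as follows. The floor layout and positions are made-up example data; each floor below the current one contributes its cheapest up-ladder-to-down-ladder Manhattan distance:

```python
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def floor_value(ladders_in, ladders_down):
    # Cheapest way to cross this floor: best pairing of an entry and an exit ladder.
    return min(manhattan(u, d) for u in ladders_in for d in ladders_down)

def heuristic(agent_pos, current_floor, floors):
    # floors[i] = (entry ladders, down ladders) for floor i; the goal lies below the last floor.
    # Admissible lower bound: distance to the nearest down-ladder on this floor,
    # plus the minimum crossing cost of every floor below.
    _, downs = floors[current_floor]
    h = min(manhattan(agent_pos, d) for d in downs)
    for i in range(current_floor + 1, len(floors)):
        h += floor_value(*floors[i])
    return h

floors = [
    ([(0, 0)], [(4, 4)]),          # floor 0: enter top-left, ladder down at (4, 4)
    ([(4, 4)], [(0, 4), (4, 0)]),  # floor 1: two ladders down
    ([(0, 4), (4, 0)], [(2, 2)]),  # floor 2: goal ladder at (2, 2)
]
print(heuristic((1, 1), 0, floors))  # 6 + 4 + 4 = 14
```

Since obstacles only lengthen real paths, this sum never overestimates, so A* with it stays optimal.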
If you're not familiar with it, the game consists of a collection of cars of varying sizes, set either horizontally or vertically, on a NxM grid that has a single exit.
Each car can move forward/backward in the directions it's set in, as long as another car is not blocking it. You can never change the direction of a car.
There is one special car, usually it's the red one. It's set in the same row that the exit is in, and the objective of the game is to find a series of moves (a move - moving a car N steps back or forward) that will allow the red car to drive out of the maze.
I've been trying to think how to generate instances of this problem, grading levels of difficulty by the minimum number of moves needed to solve the board.
Any idea of an algorithm or a strategy to do that?
Thanks in advance!
The board given in the question has at most 4*4*4*5*5*3*5 = 24,000 possible configurations, given the placement of cars.
A graph with 24,000 nodes is not very large for today's computers. So a possible approach would be to
construct the graph of all positions (nodes are positions, edges are moves),
find the number of winning moves for all nodes (e.g. using Dijkstra) and
select a node with a large distance from the goal.
One possible approach would be creating it in reverse.
Generate a random board, that has the red car in the winning position.
Build the graph of all reachable positions.
Select a position that has the largest distance from every winning position.
The number of reachable positions is not that big (probably always below 100k), so (2) and (3) are feasible.
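Steps (2) and (3) boil down to a breadth-first search outward from the winning position(s) over the move graph. Here is a generic sketch; `neighbours` is a stub you would implement for your actual board encoding, and the line graph below is just a stand-in:

```python
from collections import deque

def farthest_states(winning_states, neighbours):
    # BFS outward from all winning positions. Moves are reversible, so a
    # state's BFS level equals its minimum solution length.
    dist = {s: 0 for s in winning_states}
    queue = deque(winning_states)
    while queue:
        state = queue.popleft()
        for nxt in neighbours(state):
            if nxt not in dist:
                dist[nxt] = dist[state] + 1
                queue.append(nxt)
    worst = max(dist.values())
    return [s for s, d in dist.items() if d == worst], worst

# Toy stand-in for a board graph: states 0..5 on a line, 5 is the winning state.
def line_neighbours(s):
    return [n for n in (s - 1, s + 1) if 0 <= n <= 5]

hardest, moves = farthest_states([5], line_neighbours)
print(hardest, moves)  # state 0 is 5 moves from the win
```

The returned states are the hardest reachable starting positions for that car placement; `moves` is the difficulty rating.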
How to create harder instances through local search
It's possible that above approach will not yield hard instances, as most random instances don't give rise to a complex interlocking behavior of the cars.
You can do some local search, which requires
a way to generate other boards from an existing one
an evaluation/fitness function
(2) is simple: use the length of the shortest solution, as above. Though this is quite costly.
(1) requires some thought. Possible modifications are:
add a car somewhere
remove a car (I assume this will always make the board easier)
Those two are enough to reach all possible boards. But one might want to add other ways, because removing a car makes the board easier. Here are some ideas:
move a car perpendicularly to its driving direction
swap cars within the same lane (aaa..bb.) -> (bb..aaa.)
Hill climbing/steepest ascent is probably bad because of the large branching factor. One can try to subsample the set of possible neighbouring boards, i.e., look not at all of them but only at a few random ones.
I know this is ancient but I recently had to deal with a similar problem so maybe this could help.
Constructing instances by applying random operators from a terminal state (i.e., reverse) will not work well. This is due to the symmetry in the state space. On average you end up in a state that is too close to the terminal state.
Instead, what worked better was to generate initial states (by placing random cars on the grid) and then to try to solve it with some bounded heuristic search algorithm such as IDA* or branch and bound. If an instance cannot be solved under the bound, discard it.
Try to avoid plain A*. If you have a definition of what you mean by a "hard" instance (I find 16 moves to be pretty difficult), you can use A* with a pruning rule that prevents expansion of nodes x with g(x)+h(x) > T (T being your threshold, e.g., 16).
Heuristic function - since you don't have to solve it optimally, you can use any simple inadmissible heuristic, such as the number of obstacle squares between the red car and the goal. Alternatively, if you need a stronger heuristic function, you can implement a Manhattan-distance heuristic by generating the entire set of winning states for the generated puzzle and then using the minimal distance from the current state to any of the terminal states.
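The bounded search described here can be sketched generically. `T` is the difficulty threshold, and `neighbours`/`h` are stubs you would supply for your own board encoding; the line-walk at the bottom is only a toy example:

```python
import heapq

def bounded_astar(start, is_goal, neighbours, h, T):
    # A* that refuses to expand nodes whose optimistic total cost exceeds T.
    # Returns the solution length, or None if no solution exists within the bound.
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, state = heapq.heappop(open_heap)
        if is_goal(state):
            return g
        if g > best_g.get(state, float("inf")):
            continue  # stale entry
        for nxt in neighbours(state):
            ng = g + 1
            nf = ng + h(nxt)
            if nf > T:  # pruning rule: g(x) + h(x) > T
                continue
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (nf, ng, nxt))
    return None  # instance not solvable within T moves: keep it as a hard puzzle

# Toy example: walk from 0 to 9 on an unbounded line, h = exact remaining distance.
result = bounded_astar(0, lambda s: s == 9, lambda s: [s - 1, s + 1],
                       lambda s: abs(9 - s), T=16)
print(result)  # 9
```

With an admissible `h`, a `None` result certifies that no solution of length at most T exists, which is exactly the acceptance test for a hard instance.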
I need some input on solving the following problem:
Given a set of unordered (X,Y) points, I need to reduce/simplify the points and end up with a connected graph representation.
The following image shows an example of an actual data set and the corresponding desired output (hand-drawn by me in MS Paint; sorry for the rough drawing, but the basic idea should be clear enough).
Some other things:
The input size will be between 1000-20000 points
The algorithm will be run by a user, who can see the input/output visually, tweak input parameters, etc. So automatically finding a solution is not a requirement, but the user should be able to achieve one within a fairly limited number of retries (and parameter tweaks). This also means that the distance between the nodes on the resulting graph can be a parameter and does not need to be derived from the data.
The time/space complexity of the algorithm is not important, but in practice it should be possible to finish a run within a few seconds on a standard desktop machine.
I think it boils down to two distinct problems:
1) Running a filtering pass, reducing the number of points (including some noise filtering for removing stray points)
2) Some kind of connect-the-dots graph problem afterwards. A very problematic area can be seen in the bottom/center part of the example data: it's very easy to end up connecting the wrong parts of the graph.
Could anyone point me in the right direction for solving this? Cheers.
K-nearest neighbors (or, perhaps more accurately, a sigma neighborhood) might be a good starting point. If you're working in strictly Euclidean space, you may be able to achieve 90% of what you're looking for by specifying some L2 distance threshold beyond which points are not connected.
The next step might be some sort of spectral graph analysis where you can define edges between points using some sort of spectral algorithm in addition to a distance metric. This would give the user a lot more knobs to turn with regards to the connectivity of the graph.
Both of these approaches should be able to handle outliers, e.g. "noisy" points that simply won't be connected to anything else. That said, you could probably combine them for the best possible performance (as spectral clustering performs a lot better when there are no 1-point clusters): run a basic KNN to identify and remove outliers, then a spectral analysis to more robustly establish edges.
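The distance-threshold graph from the first step can be sketched in a few lines. This is a brute-force O(n²) toy version over plain tuples; for 20,000 points you would want a k-d tree or grid hashing, but the logic is the same:

```python
from math import dist  # Euclidean (L2) distance, Python 3.8+

def threshold_graph(points, radius):
    # Connect every pair of points within `radius`; points that end up
    # with no edges are treated as noise/outliers.
    edges = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if dist(points[i], points[j]) <= radius:
                edges.append((i, j))
    connected = {i for e in edges for i in e}
    outliers = [i for i in range(len(points)) if i not in connected]
    return edges, outliers

pts = [(0, 0), (1, 0), (2, 0), (10, 10)]
edges, outliers = threshold_graph(pts, radius=1.5)
print(edges)     # [(0, 1), (1, 2)]
print(outliers)  # [3] -- the stray point is not connected to anything
```

The `radius` parameter is exactly the user-tweakable knob the question asks for; dropping the outliers first and then running the spectral step on the remaining graph matches the combined approach suggested above.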