Wavefront algorithm for area coverage

I am wondering whether the Wavefront algorithm (or any other navigation algorithm) can be modified from navigating to a specific goal location to navigating to all reachable locations.
Any other advice on different variants of the wavefront algorithm would also be helpful.

I have visited your site. You stated that the robot can receive commands like "Go to kitchen". Well, I advise not re-inventing the wheel. Actually, you don't have to visit every cell, or "the whole area". Rather, you should find the shortest path to the goal, then walk along it.
I believe Dijkstra's algorithm is much better for your robot's path-finding.
An enhanced version of Dijkstra is the A* algorithm, which takes less time in the average case.
Here you can find examples of how they work.
EDIT:
I have visited your site again. You stated that you want an algorithm for navigating the whole area. Well, as far as I know, repeating the A* algorithm will be much better. A* is a best-first search with better performance in the average case; it's very efficient compared with wavefront. The pseudocode is as follows:
A) Find the shortest path between the current location and the goal with the A* algorithm.
B) If there is no way to the goal, pick a temporary location and move to it (as you indicated, a way may open up later). After arriving at the temporary location, go to step A.
C) Otherwise, if you have found a way, navigate to the target.
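The A* step above can be sketched on an occupancy grid. This is a minimal illustration in Python, assuming a 4-connected grid where `grid[r][c] == 1` marks an obstacle; it is not tied to any particular robot framework. Returning `None` corresponds to the "no way to the goal" branch, where the robot would pick a temporary location and retry later.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 means obstacle.

    Returns the list of cells from start to goal, or None if the
    goal is currently unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # admissible heuristic: Manhattan distance to goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start)]   # (f, g, cell)
    came_from = {}
    g = {start: 0}
    while open_set:
        _, cost, cur = heapq.heappop(open_set)
        if cur == goal:                 # reconstruct path back to start
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g.get(cur, float("inf")):
            continue                    # stale queue entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = cost + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cur
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```

Repeating this after each map update is the "repeated A*" strategy; the incremental planners discussed further down (D* Lite) avoid redoing the whole search.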

This 1993 paper introduced a variant of the vanilla wave-front planner that achieves complete coverage, in addition to navigation from start to goal:
A. Zelinsky, R.A. Jarvis, J.C. Byrne, S. Yuta, "Planning paths of
complete coverage of an unstructured environment by a mobile robot,"
Proceedings of the International Conference on Advanced Robotics,
1993, pp. 533-538.
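As a rough, hypothetical sketch of the coverage idea (not the paper's exact distance-transform planner): instead of stopping the wavefront (BFS) expansion at a goal cell, keep expanding until every reachable free cell has been enumerated. Note that consecutive cells in this order are not always adjacent, so a real robot would still plan short hops between them; handling that cleanly is part of what the Zelinsky et al. planner addresses.

```python
from collections import deque

def wavefront_coverage(grid, start):
    """Enumerate every reachable free cell in wavefront (BFS) order.

    grid[r][c] == 1 marks an obstacle. Returns the list of cells in
    the order the expanding wave reaches them.
    """
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    order = []
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        order.append((r, c))
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return order
```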
Also see the following review paper for more ideas on coverage path planning:
Enric Galceran, Marc Carreras, "A survey on coverage path planning for
robotics," Robotics and Autonomous Systems, Volume 61, Issue 12,
December 2013, Pages 1258-1276.

What algorithm to use to compute the fastest order to build buildings?

For a game I'm playing I would like to know the fastest order to build a number of buildings. The game in question is OGame, if you're familiar with it, then that is a plus, but obviously I will explain the basics of the game:
The player has a number of different resources available.
The player can build buildings, maximum one building at a time.
Buildings have different levels, for example Building A level 1, Building A level 2, etc.
The resource cost to build buildings increases per level.
Buildings cost time to build too, this also increases per level.
Some of the buildings produce the different resources, this also increases per level.
Some of the buildings change the calculations such that buildings get built faster.
I have explicitly chosen to not show the equations as they are not straight-forward and should not be needed to suggest an algorithm.
I have chosen to model this with the following actions:
StartUpgradeBuildingAction: This action starts the upgrade process by subtracting the cost from the available resources.
FinishUpgradeBuildingAction: This action finishes the upgrade process by going forward in time. This also produces resources.
WaitAction: This action forwards the time by X seconds and meanwhile produces the resources according to the resource production.
It should be noted that the state space is infinite. It can be characterized by the fact that there are multiple paths to the final configuration (where all requested buildings have been built), each possibly taking a different amount of time and leaving you with a different amount of resources at the end. I am most interested in the fastest path (order); if there are multiple equally fast paths, the one that costs the least should be preferred.
I have already tried the following approaches:
Breadth-First Search
Depth-First Search
Iterative Deepening Depth-First Search
Iterative Deepening A* Search
A* Search
Unfortunately all of these algorithms either take too long, or use too much memory.
As googling has not given me any further leads, I ask the following questions here:
Is there an already existing model that matches my problem? I would think that Business Information Systems for example would have encountered this type of problem already.
Does an algorithm exist that gives the best solution? If so, which one?
Is there an algorithm that gives a solution that is close to the best solution? If so, which one?
Any help is appreciated.
There is no single algorithm that gives you the best solution for your problem. The approaches you tried are all reasonable. However, having tried A* search doesn't mean a lot, since A* search depends on a heuristic that evaluates a given configuration (i.e. it assigns a value to the combination of the amount of time passed, the number and selection of buildings built, available resources, etc.). With a good heuristic, A* search might lead you to a very good solution quickly. Finding this heuristic requires good knowledge of the parameters (costs of buildings, benefits of upgrades, etc.).
However, my feeling is that your problem is structured in such a way that one series of build decisions can clearly outperform another after a small number of steps. Let's say you build buildings A, B and C in this order, each as soon as the required resources are available. Then you try the order C, A, B. You will probably find that one alternative dominates the other, in that you have the same buildings but in one alternative more resources than in the other. Of course this is less likely if you have many different resources: you might have more of resource X but less of Y, which makes the situations harder to compare. If this comparison is possible, the good thing is you don't need a heuristic; you clearly see which path to follow and which to cut off.
Anyway, I would explore how many steps it takes until you find paths that you can dismiss based on such a consideration. If you find them quickly, it makes sense to follow a breadth-first strategy and prune branches as soon as possible. A depth-first search bears the risk that you spend a lot of time exploring inferior paths.
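The dominance check described above can be sketched with a hypothetical, simplified state of the form (time, resources, buildings); the actual state representation in the game will differ, and this is only meant to show the pruning mechanics.

```python
def dominates(a, b):
    """True if state a is at least as good as b: same buildings built,
    no later in time, and at least as much of every resource.

    States are (time, resources_tuple, buildings_frozenset), a
    hypothetical simplification of the real game state.
    """
    ta, ra, ba = a
    tb, rb, bb = b
    return ba == bb and ta <= tb and all(x >= y for x, y in zip(ra, rb))

def prune(frontier):
    """Drop every state dominated by another state in the frontier,
    as one would after expanding a level of a breadth-first search."""
    kept = []
    for s in frontier:
        if any(dominates(o, s) for o in kept):
            continue                                    # s is dominated
        kept = [o for o in kept if not dominates(s, o)] + [s]
    return kept
```

Applied after each breadth-first level, this keeps the frontier from exploding whenever one build order clearly beats another.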

concrete examples of heuristics

What are concrete examples of heuristics (e.g. alpha-beta pruning in tic-tac-toe, and how is it applicable there)? I already saw an answered question about what a heuristic is, but I still don't get the part where it uses estimation. Can you give me a concrete example of a heuristic and how it works?
Warnsdorff's rule is a heuristic, but the A* search algorithm isn't. A* is, as its name implies, a search algorithm, which is not problem-dependent; the heuristic is. An example: you can use A* (if correctly implemented) to solve the Fifteen puzzle and to find the shortest way out of a maze, but the heuristics used will be different. With the Fifteen puzzle your heuristic could be the number of tiles out of place: the number of moves needed to solve the puzzle will always be greater than or equal to the heuristic.
To get out of the maze you could use the Manhattan Distance to a point you know is outside of the maze as your heuristic. Manhattan Distance is widely used in game-like problems as it is the number of "steps" in horizontal and in vertical needed to get to the goal.
Manhattan distance = abs(x2-x1) + abs(y2-y1)
It's easy to see that in the best case (there are no walls) that will be the exact distance to the goal; otherwise you will need more steps. This is important: your heuristic must be optimistic (an admissible heuristic) so that your search algorithm is optimal. It must also be consistent. However, in some applications (such as games with very big maps) you use non-admissible heuristics because a suboptimal solution suffices.
A heuristic is just an approximation of the real cost (never higher than the real cost if admissible). The better the approximation, the fewer states the search algorithm will have to explore. But better approximations usually mean more computing time, so you have to find a compromise.
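The two Fifteen-puzzle heuristics mentioned above (tiles out of place, and the Manhattan distance of each tile from its home) can be sketched as follows, assuming the puzzle state is a flat tuple with 0 for the blank:

```python
def misplaced_tiles(state, goal):
    """Number of tiles out of place (ignoring the blank, 0).
    Admissible: each misplaced tile needs at least one move."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal, width=4):
    """Sum of Manhattan distances of each tile from its goal position.
    Also admissible, and never smaller than misplaced_tiles."""
    pos = {tile: i for i, tile in enumerate(goal)}  # tile -> goal index
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = pos[tile]
        total += abs(i // width - j // width) + abs(i % width - j % width)
    return total
```

With `width=3` the same functions work for the 8-puzzle, which makes them easy to experiment with.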
Most demonstrative is the usage of heuristics in informed search algorithms, such as A-Star. For realistic problems you usually have large search space, making it infeasible to check every single part of it. To avoid this, i.e. to try the most promising parts of the search space first, you use a heuristic. A heuristic gives you an estimate of how good the available subsequent search steps are. You will choose the most promising next step, i.e. best-first. For example if you'd like to search the path between two cities (i.e. vertices, connected by a set of roads, i.e. edges, that form a graph) you may want to choose the straight-line distance to the goal as a heuristic to determine which city to visit first (and see if it's the target city).
Heuristics should have similar properties as metrics for the search space and they usually should be optimistic, but that's another story. The problem of providing a heuristic that works out to be effective and that is side-effect free is yet another problem...
For an application of different heuristics being used to find the path through a given maze also have a look at this answer.
Your question interests me as I've heard about heuristics too during my studies but never saw an application for it, I googled a bit and found this : http://www.predictia.es/blog/aco-search
This code simulates an "ant colony optimization" algorithm to search through a website.
The "ants" are workers that search through the site; some search randomly, others follow the "best path" determined by the previous ones.
A concrete example: I've been writing a solver for the game JT's Block, which is roughly equivalent to the Same Game. The algorithm performs a breadth-first search on all possible hits, stores the values, and proceeds to the next ply. The problem is that the number of possible hits quickly grows out of control (an estimated 10^30 positions per game), so I need to prune the list of positions at each turn and keep only the "best" of them.
Now, the definition of the "best" positions is quite fuzzy: they are the positions that are expected to lead to the best final scores, but nothing is sure. And here comes the heuristics. I've tried a few of them:
sort positions by score obtained so far
increase score by best score obtained with a x-depth search
increase score based on a complex formula using the number of tiles, their color and their proximity
improve the last heuristic by tweaking its parameters and seeing how they perform
etc...
The last of these heuristics could have led to an ant-colony optimization: there are half a dozen parameters that can be tweaked from 0 to 1, and an optimizer could find the optimal combination. For the moment I've just manually improved some of them.
The second of these heuristics is interesting: a full-depth search would lead to the optimal score, but such a goal is of course impossible because it would take too much time. In general, increasing the depth x gives a better heuristic but greatly increases the computing time.
So here they are, some examples of heuristics. Anything can be a heuristic as long as it helps your algorithm perform better, and that's what makes them so hard to grasp: they're not deterministic. Another point about heuristics: they're supposed to give quick and dirty approximations of the real result, so there's a trade-off between their execution time and their accuracy.
A couple of concrete examples: for solving the Knight's Tour problem, one can use Warnsdorff's rule - a heuristic. For solving the Fifteen puzzle, a possible heuristic is the number of misplaced tiles, used with a search algorithm such as A*.
The original question asked for concrete examples for heuristics.
Some of these concrete examples were already given. Another one would be the number of misplaced tiles in the 15-puzzle or its improvement, the Manhattan distance, based on the misplaced tiles.
One of the previous answers also claimed that heuristics are always problem-dependent, whereas algorithms are problem-independent. While there are, of course, also problem-dependent algorithms (for instance, for every problem you can just give an algorithm that immediately solves that very problem; e.g. the optimal strategy for any Tower of Hanoi problem is known), there are also problem-independent heuristics!
Consequently, there are also different kinds of problem-independent heuristics. Thus, in a certain way, every such heuristic can be regarded as a concrete heuristic example while not being tailored to a specific problem like the 15-puzzle. (Examples of problem-independent heuristics taken from planning are the FF heuristic and the Add heuristic.)
These problem-independent heuristics are based on a general description language and then perform a problem relaxation. That is, the relaxation is based only on the syntax (and, of course, its underlying semantics) of the problem description, without "knowing" what it represents. If you are interested in this, you should get familiar with "planning" and, more specifically, with "planning as heuristic search". I also want to mention that these heuristics, while being problem-independent, are of course dependent on the problem description language. (E.g., the heuristics mentioned before are specific to "planning problems", and even within planning there are various sub-classes of problems with differing kinds of heuristics.)

Travelling salesman with repeat nodes & dynamic weights

Given a list of cities and the cost to fly between each city, I am trying to find the cheapest itinerary that visits all of these cities. I am currently using a MATLAB solution to find the cheapest route, but I'd now like to modify the algorithm to allow the following:
repeat nodes - repeat nodes should be allowed, since travelling via hub cities can often result in a cheaper route
dynamic edge weights - return/round-trip flights have a different (usually lower) cost to two equivalent one-way flights
For now, I am ignoring the issue of flight dates and assuming that it is possible to travel from any city to any other city.
Does anyone have any ideas how to solve this problem? My first idea was to use an evolutionary optimisation method like GA or ACO to solve point 2, and simply adjust the edge weights when evaluating the objective function based on whether the itinerary contains return/round-trip flights, but perhaps somebody else has a better idea.
(Note: I am using MATLAB, but I am not specifically looking for coded solutions, more just high-level ideas about what algorithms can be used.)
Edit - after thinking about this some more, allowing "repeat nodes" seems to be too loose a constraint. We could further constrain the problem so that, although nodes can be repeatedly visited, each directed edge can only be visited at most once. It seems reasonable to ignore any itineraries which include the same flight in the same direction more than once.
I haven't tested it myself; however, I have read that implementing Simulated Annealing to solve the TSP (or variants of it) can produce excellent results. The key point here is that Simulated Annealing is very easy to implement and requires minimal tweaking, while approximation algorithms can take much longer to implement and are probably more error prone. Skiena also has a page dedicated to specific TSP solvers.
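A minimal simulated-annealing sketch for the TSP, using a 2-opt style neighbourhood (segment reversal); the temperature schedule and iteration count here are illustrative, not tuned:

```python
import math
import random

def tour_cost(tour, dist):
    """Total cost of the closed tour under the square matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def anneal_tsp(dist, iters=20000, t0=1.0, cooling=0.9995, seed=0):
    """Simulated annealing for the TSP: propose segment reversals,
    always accept improvements, and accept worsenings with
    probability exp(-delta/T)."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    cost = tour_cost(tour, dist)
    best, best_cost = tour[:], cost
    t = t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        c = tour_cost(cand, dist)
        if c < cost or rng.random() < math.exp((cost - c) / max(t, 1e-12)):
            tour, cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        t *= cooling
    return best, best_cost
```

For the repeat-nodes and round-trip-discount variants, the natural place to encode them is the cost function evaluated here, exactly as the question suggests for a GA objective.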
If you want the cost of the solution produced by the algorithm to be within 3/2 of the optimum, then you want the Christofides algorithm (note that it requires the triangle inequality to hold). ACO and GA don't offer a guaranteed cost.
Solving the TSP is NP-hard because of its subtour-elimination constraints; if you remove some of them (for your hub cities), you just make the problem easier.
But watch out: the TSP has similarities with the assignment problem, in the sense that you could obtain invalid itineraries like:
Cities: New York, Boston, Dallas, Toronto
Path:
Boston - New York
New York - Boston
Dallas - Toronto
Toronto - Dallas
which is clearly wrong since we don't go across all cities.
The subtour-elimination constraints serve exactly this purpose. Including a 'hub city' sounds like you need to add weights to the nodes and build a hybrid between flow problems and TSP problems. It sounds pretty hard, but a first try may be: eliminate the subtour constraints relative to your hub cities (and leave all the others). You can then link the subtours obtained for the hub cities together.
Good luck
Firstly, what is approximate number of cities in your problem set? (Up to 100? More than 100?)
I have a fair bit of experience with GA (not ACO), and like epitaph says, it has a bit of a gambling aspect. For some inputs, it might stop at a brutally inefficient solution. So what I have done in the past is use GA as the first option, compare the answer to some lower bound, and if it seems to be "way off", run a second (usually less efficient) algorithm.
Of course, I used plenty of terms that were not standard, so let us make sure that we agree what they would be in this context:
lower bound - of course, in this case, MST would be a lower bound.
"Way Off" - If triangle inequality holds, then an upper bound is UB = 2 * MST. A good "way off" in this context would be 2 * UB.
Second algorithm - In this case, both a linear programming based approach and Christofides would be good choices.
If you limit the problem to round-trips (i.e. the salesman can only buy round-trip tickets), then it can be represented by an undirected graph, and the problem boils down to finding the minimum spanning tree, which can be done efficiently.
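Under that round-trip-only reading, the minimum spanning tree can be found with, for example, Prim's algorithm; a minimal sketch assuming a symmetric cost matrix over a complete graph:

```python
import heapq

def mst_cost(n, cost):
    """Prim's algorithm on a complete undirected graph given by an
    n x n symmetric cost matrix; returns the total weight of a
    minimum spanning tree, i.e. the cheapest set of round trips
    connecting all cities under the round-trip-only assumption."""
    seen = [False] * n
    total = 0
    heap = [(0, 0)]                     # (edge weight, vertex)
    while heap:
        w, u = heapq.heappop(heap)
        if seen[u]:
            continue                    # already connected
        seen[u] = True
        total += w
        for v in range(n):
            if not seen[v]:
                heapq.heappush(heap, (cost[u][v], v))
    return total
```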
In the general case I don't know of a clever way to use efficient algorithms; GA or similar might be a good way to go.
Do you want a near-optimal solution, or do you want the optimal solution?
For the optimal solution, there's still good ol' brute force. Due to requirement 1 allowing repeat nodes, you'll have to make sure you search breadth-first, not depth-first; otherwise you can end up in an infinite loop. You can gradually drop all routes that exceed your current minimum until all routes are exhausted and the minimal route is discovered.

Where can I find information on the D* or D* Lite pathfinding algorithm?

There are links to some papers on D* here, but they're a bit too mathematical for me. Is there any information on D*/D* Lite more geared towards beginners?
Wikipedia has an article on the topic: http://en.wikipedia.org/wiki/D*
Also a D* Lite implementation in C is available from Sven Koenig's page: http://idm-lab.org/code/dstarlite.tar However I find the impenetrable math much easier to read than the C source code ;-)
Another implementation of D* Lite (in C++) is available here: http://code.google.com/p/dstarlite/
Having said that, why not add a few more papers (yes, they have math as well :-)). I'll try to get some more recent stuff. People usually get better at explaining their own work as time goes by, so the focus is on Stentz, Likhachev and Koenig.
Stentz,2007 - Field D* - claims to be better than D* Lite :-)
Stentz,2010 - Imitation Learning - mostly talks about combining Field D* and LEARCH
Ratliff,2009 - LEARCH - also talks about combining with Field D* - yes a cyclic ref :-)
Likhachev,2005 - Anytime D* - with Stentz as well
Yanyan,2009 - BDD-Based Dynamic A*
Koenig,2008 - Comparing real-time and incremental heuristic search
Well, if pseudocode is hard for you (you don't have to read the theorems and proofs; pseudocode is pretty straightforward if you know standard algorithms) and you complain about published C and C++ code, then I guess you'll need to go do something else :-)
Seriously, don't expect that someone can teach you a top-grade algorithm in a few web paragraphs. Take a pen and paper and write, draw and follow on paper what's going on. You may have to read something twice and google one or two references to get to know a few concepts around it, and there's no need to dig into the theorems and proofs at all - unless you hope to prove the author wrong, that is :-))
Can't go forward without some more math - c'est la vie. Imagine that you asked someone to teach you matrix inversion but you didn't know what vectors are. No one could help you until you learned enough of the math context first.
D* Lite Explanation for the Layperson
D* Lite starts with a crow-flies, idealistic path between start and goal; it handles obstacles only as and when it comes across them (usually by moving into an adjacent node). That is, D* Lite has no knowledge of any obstacles until it begins moving along that ideal path.
The holy grail with any pathfinding implementation is to make it quick while getting the shortest path, or at least a decent path (as well as handling [all your various special conditions here - for D* Lite this is dealing with an unknown map as a Mars Rover might do]).
So one of D* Lite's big challenges is adapting to obstacles cheaply as they are reached. Finding them is easy - you just check the node status of neighbours as you move. But how do we adapt the existing map's cost estimates without running through every node... which could be very costly?
LPA* uses a clever trick to adapt costs, a trick D* Lite has put to good use. The current node asks its neighbours: You know me best, do you think I'm being realistic about myself? Specifically, it asks this about its g value, which is the known cost of getting from the initial node to itself i.e. the current node. Neighbours look at their own g, look at where the current node is in relation to them, and then offer an estimate of what they think its cost should be. The minimum of these offers is set as the current node's rhs value which is then used to update its g value; when estimating, neighbours take into account the newly discovered obstacle(s) (or free spaces), such that when current updates g using rhs, it does so with the new obstacles (or free spaces) accounted for.
And once we have realistic g values across the board, of course, a new shortest path appears.
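The "ask the neighbours" rule can be sketched as a single local update, following the LPA* orientation used above, where g measures cost from the start (D* Lite itself searches from the goal, so the roles are mirrored). The function signature here is illustrative, not taken from any particular implementation:

```python
def update_rhs(u, g, cost, neighbors, start):
    """One-step 'ask the neighbours' rule from LPA* (sketch).

    A node's rhs value is the best estimate its neighbours can offer:
    the minimum over neighbours s of g[s] + cost(s, u). Because cost()
    reflects newly discovered obstacles, recomputing rhs locally folds
    the new information in without re-planning the whole map.
    """
    if u == start:
        return 0  # the start node knows its own cost exactly
    return min(g[s] + cost(s, u) for s in neighbors(u))
```

A node whose g and rhs disagree is "inconsistent" and goes back on the priority queue to be fixed, which is how the repair propagates only as far as it needs to.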
D* vs D* Lite:
First of all, D*-Lite is considered much simpler than D*, and since it always runs at least as fast as D*, it has completely obsoleted D*. Thus, there is never any reason to use D*; use D*-Lite instead.
D* Lite vs A*:
The D* Lite algorithm works by basically running an A* search in reverse, starting from the goal and working back to the start. The solver then outputs the current solution and waits for some change in the weights or obstacles it is presented with. As opposed to repeated A* searches, the D* Lite algorithm avoids replanning from scratch and incrementally repairs the path, keeping its modifications local around the robot's pose.
If you would like to really understand the algorithm, I suggest you start by reading through the pseudocode for A* and implementing it. First try to get a grip on how you pick from and insert into the heap (priority queue), how the algorithm uses a heuristic as opposed to the plain Dijkstra algorithm, and why that allows it to explore fewer vertices than Dijkstra.
Once you feel you have a grip on how A* works (you should implement it as well), I would suggest you take a look at Koenig, 2002 again. Start with the regular D* Lite pseudocode first, and make sure you understand what every line of code does.
Concept of the priority queue
'U' is your priority queue, where you keep the unexplored (locally inconsistent) vertices.
'U' holds (vertex, key) pairs, where the key [k1, k2] = CalculateKey(vertex) determines the vertex's priority; keys are compared lexicographically.
'U.Top()' returns the vertex with the smallest priority of all vertices in priority queue U.
'U.TopKey()' returns the smallest priority of all vertices in priority queue U.
'U.Pop()' deletes the vertex with the smallest priority in priority queue U and returns it.
'U.Insert(s, k)' inserts vertex s into priority queue U with priority k.
'U.Update(s, k)' changes the priority of vertex s in priority queue U to k.
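These operations map naturally onto a binary heap with lazy deletion; here is a sketch (not Koenig's reference implementation), where Update() and Remove() just invalidate the old entry and stale entries are skipped when popped:

```python
import heapq

class PQueue:
    """Sketch of the D* Lite priority queue U. Keys are [k1, k2]
    pairs from CalculateKey, compared lexicographically."""

    def __init__(self):
        self._heap = []
        self._key = {}          # current key of each vertex

    def insert(self, vertex, key):
        self._key[vertex] = key
        heapq.heappush(self._heap, (key, vertex))

    update = insert             # same mechanics: push a fresh entry

    def remove(self, vertex):
        self._key.pop(vertex, None)

    def _prune(self):
        # Drop entries whose key no longer matches the current key.
        while self._heap and self._key.get(self._heap[0][1]) != self._heap[0][0]:
            heapq.heappop(self._heap)

    def top(self):
        self._prune()
        return self._heap[0][1] if self._heap else None

    def top_key(self):
        self._prune()
        return self._heap[0][0] if self._heap else [float('inf'), float('inf')]

    def pop(self):
        self._prune()
        key, vertex = heapq.heappop(self._heap)
        del self._key[vertex]
        return vertex
```

Lazy deletion avoids needing a decrease-key operation, at the cost of keeping stale entries in the heap until they surface.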
Implementation
I have already implemented the Optimized D* Lite algorithm in Python (take a look at this thread here). I suggest you put the code and pseudocode side by side and read through them. There are also instructions there for testing the simulation if you would like.
I came up with this
http://idm-lab.org/bib/abstracts/papers/aaai02b.pdf and this
http://www.cs.cmu.edu/~maxim/files/dlitemap_iros02.pdf
http://www.cs.cmu.edu/~maxim/files/dlite_icra02.pdf - has 2 versions of D*
http://www.cs.cmu.edu/~maxim/files/dlite_tro05.pdf - polished up version of icra02
https://www.cs.cmu.edu/~maxim/files/rstar_aaai08.pdf - R* - randomization to reduce computation cost
http://www.cs.cmu.edu/~maxim/files/rstar_proofs34.pdf - modified R*
http://www.cs.unh.edu/~ruml/papers/f-biased-socs-12.pdf - real time R* + PLRTA*
I hope those links will help you :)
Edit: After posting I noticed that the links I gave you were in the link you pointed out as well. Nevertheless I found those directly on Google. Anyway I've looked them up a bit and they don't seem to be that complicated. If you know A* well you should manage to understand D* as well.
From experience I can tell you that A* can be used for what you want as well.
Maxim Likhachev's CMU class notes are very informative. They contain an example of how to propagate dynamic changes that occur in your graph, and also explain the idea of under-consistency, which is very important for understanding the algorithms.
http://www.cs.cmu.edu/~maxim/classes/robotplanning_grad/lectures/execanytimeincsearch_16782_fall18.pdf

I need an algorithm to find the best path

I need an algorithm to find the best solution of a path finding problem. The problem can be stated as:
At the starting point I can proceed along multiple different paths.
At each step there are another multiple possible choices where to proceed.
There are two conditions evaluated at each step:
A boundary condition that determines whether a path is acceptable or not.
A condition that determines whether the path has reached the final destination and can be selected as the best one.
At each step a number of paths can be eliminated, letting only the "good" paths grow.
I hope this sufficiently describes my problem, and also a possible brute force solution.
My question is: is brute force the best/only solution to this problem? I would also appreciate some hints about the best way to structure the algorithm's code.
Take a look at A*, and use the length as boundary condition.
http://en.wikipedia.org/wiki/A%2a_search_algorithm
You are looking for some kind of state space search algorithm. Without knowing more about the particular problem, it is difficult to recommend one over another.
If your space is open-ended (infinite tree search), or nearly so (chess, for example), you want an algorithm that prunes unpromising paths, as well as selects promising ones. The alpha-beta algorithm (used by many OLD chess programs) comes immediately to mind.
The A* algorithm can give good results. The key to getting good results out of A* is choosing a good heuristic (weighting function) to evaluate the current node and the various successor nodes, to select the most promising path. Simple path length is probably not good enough.
Elaine Rich's AI textbook (oldie but goodie) spent a fair amount of time on various search algorithms. Full Disclosure: I was one of the guinea pigs for the text, during my undergraduate days at UT Austin.
Did you try breadth-first search (BFS)? That is, if length is a criterion for the best path.
You will also have to modify the algorithm to disregard "unacceptable paths".
If your problem is exactly as you describe it, you have two choices: depth-first search and breadth-first search.
Depth first search considers a possible path, pursues it all the way to the end (or as far as it is acceptable), and only then is it compared with other paths.
Breadth-first search is probably more appropriate: at each junction you consider all possible next steps and use some score to rank the order in which each possible step is taken. This allows you to prioritise your search and find good solutions faster (but to prove you have found the best solution it takes just as long as depth-first search, and it is less easy to parallelise).
However, your problem may also be suitable for Dijkstra's algorithm depending on the details of your problem. If it is, that is a much better approach!
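For reference, a minimal sketch of Dijkstra's algorithm over a weighted graph, assuming non-negative edge weights and an adjacency-dict representation (the graph shape here is illustrative):

```python
import heapq

def dijkstra(graph, source, target):
    """Dijkstra's algorithm over an adjacency dict
    {node: [(neighbour, weight), ...]} with non-negative weights.
    Returns the cost of the cheapest path from source to target,
    or None if the target is unreachable."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float('inf')):
            continue  # stale entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return None
```

Whether this applies depends on whether the boundary/acceptance conditions can be folded into the edge weights or used to drop nodes up front.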
This would also be a good starting point to develop your own algorithm that performs much better than iterative searching (if such an algorithm is actually possible, which it may not be!)
A* plus flood fill and dynamic programming. It is hard to implement, too hard to describe in a simple post, and too valuable to just give away, so sorry I can't provide more; but searching on flood fill and dynamic programming will put you on the path if you want to go that route.
