Implementing D* Lite for Path-Planning - How to Detect Edge Cost Changes?

I am currently trying to implement the D* Lite algorithm for path-planning (see also here) to get a grasp of it. I found two implementations on the web, both in C/C++, but somehow couldn't entirely follow them since they differ more than expected from the pseudocode in the papers. I mainly use the following two papers:
Koenig, S.; Likhachev, M. - D* Lite, 2002
Koenig, S.; Likhachev, M. - Fast Replanning for Navigation in Unknown Terrain, IEEE Transactions on Robotics, Vol. 21, No. 3, June 2005
I tried implementing the optimized version of D* Lite from the first paper (p. 5, Fig. 4), and for "debugging" I use the example shown and explained in the second paper (p. 6, Fig. 6 and Fig. 7). All work is done in MATLAB (easier for exchanging code with others).
So far I got the code running to find the initial shortest path by running computeShortestPath() once. But now I am stuck at lines {36''} and {37''} of the pseudo-code:
{36''} Scan graph for changed edge costs;
{37''} if any edge costs changed
How do I detect those changes? I don't seem to have a grasp on how this detection is supposed to work. In my implementation so far, I mainly used 3 matrices.
One matrix of the same size as the grid map containing all rhs-values, one matrix of the same size containing all g-values, and one matrix with a variable row count for the priority queue, with the first two columns holding the priority keys and the third and fourth columns holding the x- and y-coordinates.
Comparing my results, I get the same result for the first run of computeShortestPath() in Step 5 as shown in the second paper (p. 6, Fig. 6). Moving the robot one step isn't a problem either. But I really have no clue how the next step, scanning for changed edge costs, should be implemented.
Thanks for any hint, advice and/or help!!!

The following was pointed out to me by someone else:
In real-world code, you almost never have to "scan the graph for
changes." Your graph only changes when you change it in the code, so
you already know exactly when and where it can change!
One common way of implementing this is to have an OnGraphChanged callback in your Graph class, which can be set up to call the OnGraphChanged method in your PathFinder class. Then, anywhere the graph changes in your Graph class, make sure the OnGraphChanged callback is called.
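A minimal sketch of that callback idea in Python; the Graph/PathFinder class names and the on_graph_changed method are purely illustrative, not taken from any particular library:

class Graph:
    def __init__(self, costs):
        self.costs = costs          # e.g. dict mapping (u, v) -> edge cost
        self.listeners = []         # callbacks to notify whenever an edge changes

    def set_cost(self, u, v, new_cost):
        old_cost = self.costs.get((u, v))
        self.costs[(u, v)] = new_cost
        for callback in self.listeners:
            callback(u, v, old_cost, new_cost)

class PathFinder:
    def __init__(self, graph):
        self.graph = graph
        self.changed_edges = []
        graph.listeners.append(self.on_graph_changed)

    def on_graph_changed(self, u, v, old_cost, new_cost):
        # Remember the edge so the lines after {37''} can update c(u, v)
        # and call UpdateVertex on the affected vertex in the next cycle.
        self.changed_edges.append((u, v, old_cost, new_cost))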
I personally implemented it by using a "true map" and a "known map": after every move, the robot checks/scans all possible successors of its current cell and compares their costs on the true map and the known map.
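A minimal sketch of that scan step in Python (it maps directly onto the matrices described above), assuming the true and known maps are 2D NumPy-style arrays of cell traversal costs and the robot only senses the cells around its current position; the function and variable names are my own:

def scan_for_changes(true_map, known_map, robot, sensor_range=1):
    """Compare the robot's local view of the true map against the known map.

    true_map, known_map: 2D arrays supporting [r, c] indexing and .shape.
    Returns the cells whose cost differs; these are exactly the vertices whose
    incident edge costs changed (lines {36''}/{37''} of the pseudocode).
    """
    changed = []
    rows, cols = known_map.shape
    r0, c0 = robot
    for dr in range(-sensor_range, sensor_range + 1):
        for dc in range(-sensor_range, sensor_range + 1):
            r, c = r0 + dr, c0 + dc
            if 0 <= r < rows and 0 <= c < cols and true_map[r, c] != known_map[r, c]:
                known_map[r, c] = true_map[r, c]   # the robot now "knows" this cell
                changed.append((r, c))
    return changed

Each returned cell then gets its incident edge costs updated and UpdateVertex() called on it, as the lines following {37''} prescribe.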

Related

What's the difference between effect and control edges of V8's TurboFan?

I've read many blog posts, articles, presentations and videos, and even inspected V8's source code (the bytecode generator, the sea-of-nodes graph generator and the optimization phases), and still couldn't find an answer.
V8's optimizing compiler, TurboFan, uses an IR of type "sea of nodes". All of the academic articles I found about it say that it's basically a CFG combined with a data-flow graph, and as such it has two types of edges to connect nodes: data edges and control edges. Basically, if you take only the data edges you get a data-flow graph, while if you take only the control edges you get a control-flow graph.
However, TurboFan has one more edge type: "effect edges" (and effect phis). I suppose that this is what this slide means when it says that this is not a "sea" of nodes but a "soup" of nodes, because I couldn't find this term anywhere else. From what I understood, effect edges help the compiler keep the structure of statements/expressions that, if reordered, would have a visible side effect. The example everyone uses is o.f = o.f + 1: the load has to come before the store (or we'll read the new value), and the addition has to come before the store, too (or otherwise we'll store the old value and uselessly increment the result).
But I cannot understand: isn't this the goal of control edges? From searching through the code I can see that almost every node has an effect edge and a control edge. Their uses aren't obvious, though. From what I understand, in sea of nodes you use control edges to constrain evaluation order. So why do we need both effect and control edges? Probably I'm missing something fundamental, but I can't find it.
TL;DR: What's the use of effect edges and EffectPhi nodes, and how are they different from control edges?
Great thanks.
The idea of a sea-of-nodes compiler is that IR nodes have maximum freedom to move around. That's why in a snippet like o.f = o.f + 1, the sequence load-add-store is not considered "control flow". Only conditions and branches are. So if we slightly extend the example:
if (condition) {
o.f = o.f + 1;
} else {
// something else
}
then, as you describe in the question, effect dependencies ensure that load-add-store are scheduled in this order, and control dependencies ensure that all three operations are only performed if condition is true (technically, if the true-branch of the if-condition is taken). Note that this is important even for the load; for instance, o might be an invalid object if condition is false, and attempting to load its f property might cause a segfault in that case.

Fill all connected grid squares of the same type

Foreword: I am aware there is another question like this; however, mine has very specific restrictions. I have done my best to make this question applicable to many, as it is a generic grid issue, but if it still does not belong here, then I am sorry, and please be nice about it. I have found in the past that stackoverflow can be a very picky and hostile environment for question askers, but I'm hoping that was just a bad couple of people.
Goal (abstract): Check all connected grid squares in a 3D grid that are of the same type and touching on one face.
Goal (specific/implementation): Create a "fill bucket" tool in Minecraft with command blocks.
Knowledge of Minecraft is not really necessary to answer this; it is more of an algorithm question, and I will be staying away from Minecraft specifics.
Restrictions: I can do this in code with recursive functions, but in Minecraft there are some limitations that I am wondering if it is possible to get around:
1: No arrays (data structures) permitted. In Minecraft I can store an integer variable and do basic calculations with it (+, -, *, /, % (mod), =, ==), but that's it. I cannot dynamically create variables or have the program create anything with a name that I did not set out ahead of time.
2: I can do "IF" and "OR" statements, and everything that derives from them.
3: I CANNOT have multiple program pointers - that is, I can't have things like recursive functions, which require a program to stop executing, execute itself from beginning to end, and then resume executing where it was - I have minimal control over the program flow. I can use loops and conditional exits (so FOR loops).
4: I can have a marker on the grid in 3D space that can move regardless of the presence of blocks (I'm using an armour stand, for those who know), and I can test grid squares relative to that marker.
So say my grid is mostly empty, with separate clusters of filled squares in opposite corners, not touching each other. If I "use" my fillbucket tool on one block / filled grid square, I want it to use a single marker to check and identify all the connected grid squares - basically, I need to be sure that it traverses the entire shape, all the nooks and crannies, but not the squares that are not connected to that shape. So in the end, one of the two clusters, from me only selecting a single square of it, will be erased/replaced by another kind of block, without affecting the other blocks around it.
Again, apologies if this doesn't belong here. And only answer this if you WANT to tackle the challenge - it's not important or anything, I just want to do this. You don't have to answer it if you don't want to. Or if you can solve this problem for a 2D grid, that would be helpful as well, as I could possibly extend that to work for 3D.
Thank you, and if I get nobody degrading me for how I wrote this post or the fact that I did, then I will consider this a success :)
With help from this and other sources, I figured it out! It turns out that, since all recursive functions (or at least most of them) can be written as FOR loops, I can make the equivalent of a recursive function in Minecraft. So I did, and the general idea of it is as follows:
For explaining the program, you may assume the situation is a largely empty grid with a grouping of filled squares in one part of it, and the goal is to replace the kind of block the grouping is made of with a different block. We'll say the grouping currently consists of red blocks, and we want to change them to blue blocks.
Initialization:
IDs - An objective (data structure) for holding each marker's ID (score)
numIDs - An integer variable for holding number of IDs/markers active
Create one marker at the selected grid position with ID [1] (aka give it a score of 1 in the "IDs" objective). This grid position will be a filled square from which to start replacing blocks.
Increment numIDs
Main program:
FOR loop that goes from 1 to numIDs
{
at marker with ID [1], fill grid square with blue block
step 1. test block one to the +x for a red block
step 2. if found, create a marker there with ID [numIDs + 1]
step 3. increment numIDs
[//repeat steps 1 2 and 3 for the other five adjacent grid squares: +z, -x, -z, +y, and -y]
delete stand[1]
numIDs -= 1
subtract 1 from every marker's ID, so that the next marker to evaluate, which was [2], now has ID [1].
} (end loop)
So that's what I came up with, and it works like a charm. Sorry if my explanation is hard to understand, I'm trying to explain in a way that might make sense to both coders and Minecraft players, and maybe achieving neither :P
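For readers outside Minecraft, the same idea in Python is just an iterative flood fill in which the list of markers plays the role of a work queue. This is only a sketch; representing the world as a dict from (x, y, z) to block type is an assumption made for illustration:

from collections import deque

def flood_fill_3d(grid, start, target, replacement):
    """Replace every `target` cell reachable from `start` through
    face-adjacent neighbours (the same 6 directions as above)."""
    if grid.get(start) != target:
        return
    queue = deque([start])              # plays the role of the marker list
    grid[start] = replacement
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in steps:
            pos = (x + dx, y + dy, z + dz)
            if grid.get(pos) == target:
                grid[pos] = replacement  # fill immediately so it is never queued twice
                queue.append(pos)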

Cellular Automata update rules Mathematica

I am trying to build some rules for a cellular automaton. In every cell I don't have one element (predator/prey); I have a population count. To achieve movement between my populations, can I compare every cell with just one of its neighbours at a time, or do I have to compare the cell with all of its neighbours and add some conditions?
I am using a Moore neighbourhood with the following update function:
update[site_, _, _, _, _, _, _, _, _]
I tried to make them move according to all of their neighbours, but it's very complicated, and I am wondering whether simplifying it and checking each cell against its neighbours individually would be wrong.
Thanks
As general advice, I would recommend against using the pattern-matching technique in Mathematica for specifying the rule table of a CA; such rule tables tend to get out of hand very quickly.
Doing a predator-prey kind of simulation with a CA is a little tricky since, in each step (unlike in a traditional CA), the value of the center cell changes ALONG WITH THE VALUE OF THE NEIGHBOR CELL!
This will lead to issues since, when the transition function is applied to the neighbor cell, it will again compute a new value for itself, but it also needs to "remember" the changes made to it previously when it was a neighbor.
In fluid dynamics simulations using CA, they encounter problems like this, and they use a different neighborhood called the Margolus neighborhood. In a Margolus neighborhood, the CA lattice is broken into distinct blocks and the update rule is applied to each block. In the next step the block boundaries are shifted and the transition rules are applied to the new blocks, so information transfer occurs across block boundaries.
Before I give my best attempt at an answer, you have to realize that your question is oddly phrased. The update rules of your cellular automaton are specified by you, so I wouldn't know whether you have additional conditions you need to implement.
I think you're asking what is the best way to select a neighborhood, and you can do this with Part:
(* We abbreviate 'nbhd' for neighborhood *)
getNbhd[A_, i_Integer?Positive, j_Integer?Positive] :=
A[[i - 1 ;; i + 1, j - 1 ;; j + 1]];
This will select the appropriate Moore neighborhood, including the additional central cell, which you can filter out when you call your update function.
Specifically, to perform the update step of a cellular automaton, one must update all cells simultaneously. In the real world, this means creating a separate array, and placing the updated values there, scrapping the original array afterward.
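A minimal sketch of that double-buffered update in Python rather than Mathematica, assuming the state lives in a 2D NumPy array and `update` accepts the full 3x3 Moore neighbourhood (central cell included):

import numpy as np

def step(grid, update):
    """One synchronous CA step: read everything from `grid`,
    write all new values into a separate array."""
    rows, cols = grid.shape
    new_grid = grid.copy()                         # the separate "next" array
    for i in range(1, rows - 1):                   # border cells left unchanged for simplicity
        for j in range(1, cols - 1):
            nbhd = grid[i - 1:i + 2, j - 1:j + 2]  # 3x3 Moore neighbourhood
            new_grid[i, j] = update(nbhd)
    return new_grid

# e.g. grid = np.random.randint(0, 2, (10, 10)); grid = step(grid, my_update)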
For more details, see my blog post on Cellular Automata, which includes an implementation of Conway's Game of Life in Mathematica.

Need some help understanding this problem about maximizing graph connectivity

I was wondering if someone could help me understand this problem. I prepared a small diagram because it is much easier to explain it visually.
(diagram: http://img179.imageshack.us/img179/4315/pon.jpg)
Problem I am trying to solve:
1. Constructing the dependency graph
Given the connectivity of the graph and a metric that determines how valuable a dependency between two nodes is, order the dependencies. For instance, I could put in a few rules saying that
node 3 depends on node 4
node 2 depends on node 3
node 3 depends on node 5
But because the final rule is not "valuable" (again based on the same metric), I will not add the rule to my system.
2. Execute the request order
Once I have built the dependency graph, execute the list in an order that maximizes the final connectivity. I am not sure if this is really a problem, but I somehow have a feeling that there might exist more than one order, in which case it is required to choose the best one.
First and foremost, I am wondering if I constructed the problem correctly and if I should be aware of any corner cases. Secondly, is there a closely related algorithm that I can look at? Currently, I am thinking of something like Feedback Arc Set or the Secretary Problem but I am a little confused at the moment. Any suggestions?
PS: I am a little confused about the problem myself so please don't flame on me for that. If any clarifications are needed, I will try to update the question.
It looks like you are trying to determine an ordering on the requests you send to nodes, with dependencies (or a "partial ordering", for Google purposes) between the nodes.
If you google "partial order dependency graph", you get a link to here, which should give you enough information to figure out a good solution.
In general, you want to sort the nodes in such a way that nodes come after their dependencies; AKA topological sort.
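A minimal sketch of that topological sort (Kahn's algorithm) in Python, assuming the dependency graph is given as a dict mapping each node to the set of nodes it depends on:

from collections import deque

def topological_order(depends_on):
    """Return a list in which every node appears after all of its dependencies."""
    nodes = set(depends_on) | {d for deps in depends_on.values() for d in deps}
    remaining = {n: set(depends_on.get(n, ())) for n in nodes}  # unmet dependencies
    ready = deque(n for n, deps in remaining.items() if not deps)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m, deps in remaining.items():
            if n in deps:
                deps.remove(n)
                if not deps:
                    ready.append(m)
    if len(order) != len(nodes):
        raise ValueError('cycle detected')
    return order

# Using the rules from the question (3 depends on 4 and on 5, 2 depends on 3):
# topological_order({2: {3}, 3: {4, 5}})  ->  e.g. [4, 5, 3, 2]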
I'm a bit confused by your ordering constraints vs. the graphs that you picture: nothing matches up. That said, it sounds like you have soft ordering constraints (A should come before B, but doesn't have to) with costs for violating a constraint. Scheduling that optimally is NP-hard, but I bet you could get a pretty good schedule using a DFS biased towards large-weight edges and then deleting all the back edges.
If you know in advance the dependencies of each node, you can easily build layers.
It's amusing, but I faced the very same problem when organizing... the compilation of the different modules of my application :)
The idea is simple:
from itertools import chain

def buildLayers(nodes):
    layers = []
    n = nodes[:]  # copy the list so the caller's list is untouched
    while len(n) != 0:
        layer = _buildRec(layers, n)
        if len(layer) == 0:
            raise RuntimeError('Cyclic Dependency')
        for l in layer:
            n.remove(l)
        layers.append(layer)
    return layers

def _buildRec(layers, nodes):
    """Build the next layer by selecting nodes whose dependencies
    already appear in `layers`.
    """
    placed = set(chain.from_iterable(layers))  # every node already assigned to a layer
    result = []
    for node in nodes:
        if all(dep in placed for dep in node.dependencies):
            result.append(node)
    return result
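For instance, with a tiny hypothetical Node class (the original snippet does not say what a node looks like, so this is just one possible shape):

class Node:
    def __init__(self, name, dependencies=()):
        self.name = name
        self.dependencies = list(dependencies)

a = Node('a')
b = Node('b', [a])
c = Node('c', [a, b])
print([[node.name for node in layer] for layer in buildLayers([a, b, c])])
# -> [['a'], ['b'], ['c']]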
Then you can pop the layers one at a time, and each time you'll be able to send the request to each of the nodes of this layer in parallel.
If you keep a set of the already selected nodes, and the dependencies are also represented as sets, the check is more efficient. Other implementations would use event propagation to avoid all those nested loops...
Notice that in the worst case you have O(n^3), but I only had some thirty components and they are not THAT related :p

Mahjong - Arrange tiles to ensure at least one path to victory, regardless of layout

Regardless of the layout being used for the tiles, is there any good way to divvy out the tiles so that you can guarantee the user that, at the beginning of the game, there exists at least one path to completing the puzzle and winning the game?
Obviously, depending on the user's moves, they can cut themselves off from winning. I just want to be able to always tell the user that the puzzle is winnable if they play well.
If you randomly place tiles at the beginning of the game, it's possible that the user could make a few moves and not be able to do any more. The knowledge that a puzzle is at least solvable should make it more fun to play.
Place all the tiles in reverse (i.e. lay out the board starting in the middle, working outward).
To tease the player further, you could do it visibly but at very high speed.
Play the game in reverse.
Randomly lay out pieces pair by pair, in places where you could slide them into the heap. You'll need a way to know where you're allowed to place pieces in order to end up with a heap that matches some preset pattern, but you'd need that anyway.
I know this is an old question, but I came across this when solving the problem myself. None of the answers here are quite perfect, and several of them have complicated caveats or will break on pathological layouts. Here is my solution:
Solve the board (forward, not backward) with unmarked tiles. Remove two free tiles at a time. Push each pair you remove onto a "matched pair" stack. Often, this is all you need to do.
If you run into a dead end (numFreeTiles == 1), just reset your generator :) I have found I usually don't hit dead ends, and so far have a max retry count of 3 for the 10-or-so layouts I have tried. Once I hit 8 retries, I give up and just randomly assign the rest of the tiles. This allows me to use the same generator for both setting up the board and the shuffle feature, even if the player screwed up and made a 100% unsolvable state.
Another solution when you hit a dead end is to back out (pop off the stack, replacing tiles on the board) until you can take a different path. Take a different path by making sure you match pairs that will remove the original blocking tile.
Unfortunately, depending on the board, this may loop forever. If you end up removing a pair that resembles a "no outlet" road, where all subsequent "roads" are a dead end, and there are multiple dead ends, your algorithm will never complete. I don't know if it is possible to design a board where this would be the case, but if so, there is still a solution.
To solve that bigger problem, treat each possible board state as a node in a DAG, with each selected pair being an edge on that graph. Do a random traversal, until you find a leaf node at depth 72. Keep track of your traversal history so that you never repeat a descent.
Since dead ends are more rare than first-try solutions in the layouts I have used, what immediately comes to mind is a hybrid solution. First try to solve it with minimal memory (store selected pairs on your stack). Once you've hit the first dead end, degrade to doing full marking/edge generation when visiting each node (lazy evaluation where possible).
I've done very little study of graph theory, though, so maybe there's a better solution to the DAG random traversal/search problem :)
Edit: You actually could use any of my solutions w/ generating the board in reverse, a la the Oct 13th 2008 post. You still have the same caveats, because you can still end up with dead ends. Generating a board in reverse has more complicated rules, though. E.g., you are guaranteed to fail your setup if you don't start at least SOME of your rows w/ the first piece in the middle, such as in a layout w/ 1 long row. Picking a completely random (legal) first move in a forward-solving generator is more likely to lead to a solvable board.
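A compressed Python sketch of that forward-solving generator, using a deliberately simplified free-tile rule for grid-aligned layouts (real Mahjong tiles can half-overlap, which this ignores); the (x, y, layer) slot representation and the retry limit are assumptions:

import random

def is_free(slot, occupied):
    """Simplified rule: nothing directly on top, and at least one of the
    left/right neighbours on the same layer is empty."""
    x, y, z = slot
    covered = (x, y, z + 1) in occupied
    blocked = (x - 1, y, z) in occupied and (x + 1, y, z) in occupied
    return not covered and not blocked

def generate_pairs(layout, max_retries=8):
    """Deal the layout by solving it forward: repeatedly remove two free
    slots and remember them as a pair; restart from scratch on a dead end."""
    for _ in range(max_retries):
        occupied = set(layout)
        pairs = []
        while occupied:
            free = [s for s in occupied if is_free(s, occupied)]
            if len(free) < 2:
                break                       # dead end, retry the whole deal
            a, b = random.sample(free, 2)
            occupied -= {a, b}
            pairs.append((a, b))
        if not occupied:
            return pairs                    # assign matching tile faces to each pair
    return None                             # give up; caller can fall back to random fill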
The only thing I've been able to come up with is to place the tiles down in matching pairs as kind of a reverse Mahjong Solitaire game. So, at any point during the tile placement, the board should look like it's in the middle of a real game (i.e. no tiles floating 3 layers up above other tiles).
If the tiles are placed in matching pairs in a reverse game, it should always result in at least one forward path to solve the game.
I'd love to hear other ideas.
I believe the best answer has already been pushed up: creating a set by solving it "in reverse" - i.e. starting with a blank board, then adding a pair somewhere, adding another pair in a solvable position, and so on...
If you prefer a "Big Bang" approach (generating the whole set randomly at the beginning), are a very macho developer or just feel masochistic today, you could represent all the pairs you can take out from the given set and how they depend on each other via a directed graph.
From there, you'd only have to get the transitive closure of that set and determine if there's at least one path from at least one of the initial legal pairs that leads to the desired end (no tile pairs left).
Implementing this solution is left as an exercise to the reader :D
Here are the rules I used in my implementation.
When building the heap, for each fret (tile) of a pair separately, find cells (places) such that:
all cells at lower levels are already filled
the place for the second fret does not block the first one, taking into account that the first fret has already been put on the board
both places are "at the edges" of the already built heap:
EITHER they have at least one neighbour on the left or right side
OR they are the first fret in a row (all cells to the right and left are recursively free)
These rules do not guarantee that a build will always be successful - it sometimes leaves the last 2 free cells self-blocking, and the build should be retried (or at least the last few frets).
In practice, the "turtle" layout built in no more than 6 retries.
Most existing games seem to restrict putting the first ("first in row") frets somewhere in the middle. This results in more convenient configurations, where there are no frets at the edges of very long rows that stay on the board until the player's last moves. However, the "middle" is different for different configurations.
Good luck :)
P.S.
If you've found an algorithm that builds a solvable heap in one pass - please let me know.
You have 144 tiles in the game; each of the 144 tiles has a block list.
(the top tile of a stack has an empty block list)
All valid moves require that their "current__vertical_Block_list" be empty. This can be a 144x144 matrix, so 20k of memory, plus a LEFT and a RIGHT block list, also 20k each.
Generate a valid move table from (remaining_tiles) AND ((empty CURRENT VERTICAL BLOCK LIST) and ((empty CURRENT LEFT BLOCK LIST) OR (empty CURRENT RIGHT BLOCK LIST)))
Pick 2 random tiles from the valid move table, record them
Update the current tables (vertical, left and right), and record the tiles removed to a stack
Now we have a list of moves that constitute a valid game. Assign matching tile types to each of the 72 moves.
For challenging games, track when each tile becomes available. Find sets that are (early early early late) and (late late late early). Since it's blank, you find 1 EE, 1 LL and 2 LE blocks. Of the 2 LE blocks, find an EARLY that blocks ANY other EARLY (except right-blocking a left-side piece).
Once you've got a valid game, play around with the ordering.
Solitaire? Just a guess, but I would assume that your computer would need to beat the game (or close to it) to determine this.
Another option might be to have several preset layouts (that allow winning), mixed in with your current level.
To some degree you could try making sure that one of the 4 tiles is no more than X layers below another.
Most games I see have the shuffle command for when someone gets stuck.
I would try a mix of things and see what works best.
