How to solve this game problem - algorithm

I have a simple game problem that I want to solve with A*:
We have several nodes in a tree. Each node contains:
a monster, with a power level and its element,
links to other nodes,
and the bonus points we get after we kill this monster.
There are five elements: Metal, Wood, Water, Fire, Land.
Our character can only kill a monster if our element's encounter score is greater than or equal to the monster's.
After killing a monster we must add all of its bonus points to one element's score; we can't split them across several elements.
Goal: find the shortest way to a specific node.
My solution:
I will use A*, with Dijkstra-style distances as the heuristic.
find(mainCharacter, node, plusPoint) {
    // node here is the open node with the smallest f
    shortestWay ways[5];
    foreach (element in elements) {
        // try giving the bonus points to this element
        mainCharacter->element += plusPoint;
        if (mainCharacter can beat the monster in node) {
            bestNode = the node with the smallest f among node->neighbourNodes;
            ways[element]++;  // record the step; on this branch the bonus points went to this element. It can be -1 if we can't go.
            find(mainCharacter, bestNode, node->plusPoint);
        }
        // undo the assignment before trying the next element
        mainCharacter->element -= plusPoint;
    }
}
Our goal is the ways[element] with the fewest steps.
My question:
Is my solution correct and good enough?
Is there a better solution for this game?
Thanks in advance :)

I'm not sure A* is going to allow you to do this.
The main issue here is that your available nodes change as you explore new nodes. This means it might be worthwhile to backtrack sometimes.
Example: You are at node A, which opens to B and C. B opens to E. E opens to F, which opens to G, which opens to D. C opens to D which is your destination.
B is guarded by a power 2 elemental, and C guarded by a power 4 elemental. You are at power 3. E F and G all have power 2 elementals.
From A you can only go to B and C. C is too powerful so you go to B. You could keep going around to yield A B E F G D, or you could backtrack: A B A C D. (After you take out B, C is no longer too powerful.)
So, you are going to end up doing a lot of re-evaluation in whatever algorithm you come up with. This isn't even bounded by O(n!) because of the potential backtracking.
The approach I would take is to look at the shortest route without backtracking. This is your upper bound, and should be easy to do with A* (I think...) or something like it. Then you can find the geographical paths (ignoring power levels) that are shorter than this distance. From there, you can start eliminating blocks in power until 1) you get them all down, or 2) the geographic distance required to acquire the additional power to get through the blocks pushes the total over the upper bound.
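If the graph is fairly small, another way to sidestep the backtracking problem entirely is to search an expanded state space in which a state records the current node, the five element scores, and the set of monsters already killed; a uniform-cost search (Dijkstra) over that space finds a genuinely shortest walk, backtracking included. Below is a minimal Python sketch of that idea, not the bounding approach described above; the input format, the "can beat" check, and the state encoding are my assumptions.

import heapq
import itertools

def shortest_walk(graph, monsters, bonus, start, goal, start_scores):
    """Dijkstra over states (current node, element scores, monsters killed).

    graph:        dict node -> list of neighbouring nodes
    monsters:     dict node -> (power, element_index), or None for an empty node
    bonus:        dict node -> bonus points granted for killing its monster
    start_scores: tuple of 5 ints, one score per element (Metal..Land)
    Returns the minimum number of steps from start to goal, or None.
    """
    counter = itertools.count()                 # tie-breaker so states are never compared
    start_state = (start, tuple(start_scores), frozenset())
    heap = [(0, next(counter), start_state)]
    best = {start_state: 0}
    while heap:
        steps, _, (node, scores, killed) = heapq.heappop(heap)
        if node == goal:
            return steps
        for nxt in graph[node]:
            m = monsters.get(nxt)
            if m is None or nxt in killed:
                candidates = [(scores, killed)]  # free to walk in
            else:
                power, elem = m
                # assumption: "can beat" means our score for the monster's
                # element is at least the monster's power
                if scores[elem] < power:
                    continue
                # the bonus must all go to one element: branch on that choice
                candidates = []
                for e in range(5):
                    new_scores = list(scores)
                    new_scores[e] += bonus[nxt]
                    candidates.append((tuple(new_scores), killed | {nxt}))
            for new_scores, new_killed in candidates:
                state = (nxt, new_scores, new_killed)
                if steps + 1 < best.get(state, float("inf")):
                    best[state] = steps + 1
                    heapq.heappush(heap, (steps + 1, next(counter), state))
    return None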

Related

How to determine which rooms are effectively identical in a given maze

The problem in question can be found here
TL;DR:
There is a maze made up of circular rooms connected by indistinguishable corridors; the players' goal is to walk around and map out the whole maze.
Our goal is to look at a maze and try to reduce it as much as possible.
When looking at a maze you can compare two rooms A and B: if, when you are randomly dropped into the maze, you cannot tell whether you began in A or B, these rooms are considered effectively identical.
By running the maze through an algorithm we should be able to remove all effectively identical rooms thus making the maze smaller without affecting the overall feel of the maze to the players.
More details and rules are in the aforementioned document.
My intuition tells me to walk through the maze, building a tree from every single node, and then comparing the trees. I will include pictures for the given examples in the document.
Picture 1
Picture 2
I feel, as does Patrick87 in the comments, that it seems likely that a simple graph will eventually be reduced to a single node. So as a frame challenge, I suggest not doing this! But if you want to, you're essentially asking to do a pile of graph isomorphisms, and so there's no better tool than nauty.*
So what you need to do is essentially (pseudocode):
reduce(G):
    for n1 in G.nodes:
        for n2 in G.nodes:
            if n1 != n2:
                let G1 = a copy of G
                swap G1[n1] and G1[n2]
                if nauty.isomorphic(G, G1):
                    G.delete(n1)  # or n2
                    return reduce(G)  # keep reducing
    return G  # G cannot be reduced further
* Boost has an isomorphism library, but it is naive and very slow.
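If you just want to experiment, the swap check can also be sketched in Python with networkx instead of nauty (my substitution, and much slower): deleting n1 is justified exactly when exchanging n1 and n2, while fixing every other room, leaves the edge set unchanged, i.e. the swap is an automorphism of the maze graph.

import networkx as nx

def swap_is_automorphism(G, n1, n2):
    """True if exchanging n1 and n2 (and fixing every other room) maps
    the maze onto itself, i.e. the two rooms are interchangeable."""
    swapped = nx.relabel_nodes(G, {n1: n2, n2: n1}, copy=True)
    return set(map(frozenset, G.edges())) == set(map(frozenset, swapped.edges()))

def reduce_maze(G):
    """Repeatedly delete one room of any interchangeable pair."""
    G = G.copy()
    changed = True
    while changed:
        changed = False
        for n1 in list(G.nodes):
            for n2 in list(G.nodes):
                if n1 != n2 and swap_is_automorphism(G, n1, n2):
                    G.remove_node(n1)
                    changed = True
                    break
            if changed:
                break
    return G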

Greedy Algorithm: The Robot

Any ideas? I've tried drawing it out and I've narrowed down the minimum number of robots you'd need, but I just don't know how to express it as a greedy algorithm or how to prove it. It's a bonus question from one of our lectures, so we don't have to know how to do it, but I feel like it's a good exercise. Thanks in advance!
until the robot is at the bottom row:
    while there's a coin to the right on the same row:
        go right
    go down one step
go to the right corner
or perhaps more succinctly:
If there are coins to the right: go right
Otherwise: go down
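As a quick sanity check, here is a small Python sketch of that rule; the coin encoding as (row, col) pairs and the (0, 0) start cell are my assumptions.

def robots_needed(coins):
    """Count the robots used by the greedy rule: keep going right while a
    coin remains to the right in the current row, otherwise go down.
    coins: set of (row, col) positions; every robot starts at (0, 0)."""
    remaining = set(coins)
    last_row = max((r for r, _ in remaining), default=0)
    robots = 0
    while remaining:
        robots += 1
        r, c = 0, 0
        remaining.discard((r, c))
        while r < last_row or any(cr == r and cc > c for cr, cc in remaining):
            if any(cr == r and cc > c for cr, cc in remaining):
                c += 1          # a coin lies to the right in this row
            else:
                r += 1          # no coin to the right: go down
            remaining.discard((r, c))
    return robots

print(robots_needed({(0, 3), (1, 1), (2, 4), (3, 0)}))   # -> 3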
Edit:
To see that the algorithm is optimal in the sense that it requires the fewest total number of robots to clear the board, observe that no robot ever makes a move that is worse than an optimal robot: "greedy stays ahead". Here's an attempt to make this argument more formal:
Let G be the greedy algorithm and R be any optimal algorithm.
From the perspective of a single robot, there is some set S of coins within reach. From the starting position, for example, all coins are within reach (though some coins may be mutually exclusive, of course). When a robot r makes a move, a subset V of S becomes unreachable for r. It is clear that for any single move, only one additional robot is needed to take all coins in V. Thus in some sense, the worst possible individual move will be such that V is not empty, and there is no way that a single move would cause the algorithm to require two or more additional robots.
For a robot in G, unless S is empty, V is always a proper subset of S. In other words, G doesn't make any "obviously stupid" moves. Combined with the fact that G and R collect all coins, we see that the only interesting places where the robots differ are where they make different choices (down or right) after having taken the same coin.
Consider the robots r in R and g in G at a point where they differ. There are two possibilities:
g goes right and r goes down.
g goes down and r goes right.
In the first case, there is a coin to the right, and r goes down. Thus V is not empty for r at that step, and by the previous argument, g's decision can't be worse.
In the second case, there is no coin to the right, and g goes down. It is clear that V is empty for g, and r's decision can't be better than that.
We see that in any case where R and G differ, G is at least as good as R, which is optimal, so G must also be optimal.
I just don't know how to express it in a greedy algorithm
Use Manhattan/taxicab distance
Then a greedy heuristic may be:
for any robot:
    1. look right-down for the closest coin (using the taxicab distance deltaX + deltaY)
    2. collect it
    3. from the current point, repeat from step 1.
Just "retire" a robot which sees no coins at step 1.
The distance used is the Manhattan distance.
While the robot can still move:
    if the robot is at the right edge:
        nextStep = down
    else if the robot is at the bottom edge:
        nextStep = right
    else:
        nextStep = F(right, down)

F(right, down):
    if distanceAll(right) < distanceAll(down):
        return right
    else:
        return down

distanceAll(location):
    return the sum of the distances from this location to the remaining coins

distanceAll can be used as a heuristic function; you can also limit it to a window around the robot (say 5x5 or 3x3) when summing the distances.
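A minimal Python sketch of this decision rule, assuming coins are (row, col) pairs and the grid dimensions are known:

def distance_all(pos, coins):
    """Sum of Manhattan distances from pos to every remaining coin."""
    r, c = pos
    return sum(abs(r - cr) + abs(c - cc) for cr, cc in coins)

def next_step(pos, coins, n_rows, n_cols):
    """Forced move at an edge; otherwise go in the direction whose new
    position has the smaller total distance to the remaining coins."""
    r, c = pos
    right, down = (r, c + 1), (r + 1, c)
    if c == n_cols - 1:        # right edge: can only go down
        return down
    if r == n_rows - 1:        # bottom edge: can only go right
        return right
    return right if distance_all(right, coins) < distance_all(down, coins) else down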

What is meant by the set of all possible configurations in a given graph G

I'm trying to understand Solved Exercise 2 in Chapter 3 of Algorithm Design by Kleinberg and Tardos, but I'm not getting the idea of the answer.
In short the question is
We are given two robots located at node a and node b. The robots need to travel to nodes c and d respectively. The problem is that the robots must not get too close to each other: assume the interference radius is r <= 1, so that if they ever come within one node of each other they will have an interference problem and won't be able to transmit data to the base station.
The answer is quite long and it does not make any sense to me or I'm not getting its idea.
Anyway, I was thinking: can't we just perform DFS/BFS to find a path from node a to c, and from b to d, and then modify the DFS/BFS algorithm so that we keep checking at every move whether the robots are getting too close to each other?
Since it's required to solve this problem in polynomial time, I don't think this modification to either algorithm (BFS/DFS) would add much running time.
The solution is "From the book"
This problem can be tricky to think about if we view things at the level of the underlying graph G: for a given configuration of the robots—that is, the current location of each one—it’s not clear what rule we should be using to decide how to move one of the robots next. So instead we apply an idea that can be very useful for situations in which we’re trying to perform this type of search. We observe that our problem looks a lot like a path-finding problem, not in the original graph G but in the space of all possible configurations.
Let us define the following (larger) graph H. The node set of H is the set of all possible configurations of the robots; that is, H consists of all possible pairs of nodes in G. We join two nodes of H by an edge if they represent configurations that could be consecutive in a schedule; that is, (u,v) and (u′,v′) will be joined by an edge in H if one of the pairs u,u′ or v,v′ are equal, and the other pair corresponds to an edge in G.
Why the need for larger graph H?
What does he mean by: The node set of H is the set of all possible configurations of the robots; that is, H consists of all possible pairs of nodes in G.
And what does he mean by: We join two nodes of H by an edge if they represent configurations that could be consecutive in a schedule; that is, (u,v) and (u′,v′) will be joined by an edge in H if one of the pairs u,u′ or v,v′ are equal, and the other pair corresponds to an edge in G.?
I do not have the book, but it seems from their answer that at each step they move one robot or the other. Assuming that, H consists of all possible pairs of nodes that are more than distance r apart. The nodes in H are adjacent if they can be reached by moving one robot or the other.
There are not enough details in your proposed algorithm to say anything about it.
Anyway, I was thinking: can't we just perform DFS/BFS to find a path from node a to c, and from b to d, and then modify the DFS/BFS algorithm so that we keep checking at every move whether the robots are getting too close to each other?
I don't think this would be possible. What you're proposing is to calculate the full path and afterwards check whether that path works. If it doesn't, how would you handle the situation so that when you rerun the algorithm it won't find the same pathological path again? You could exclude it from the set of possible options, but I don't think that'd be a good approach.
Suppose the path has length n, and suppose the pathology resides in its first step. Suppose now that this happens every time you recalculate the path. You would have to recalculate the path many times, simply because the algorithm itself isn't aware of the restrictions needed to reach the right answer.
I think this is the point: the algorithm itself doesn't consider the problem's restrictions, and that is the main problem, because there's no easy way of correcting the given (wrong) solution.
What does he mean by: The node set of H is the set of all possible configurations of the robots; that is, H consists of all possible pairs of nodes in G.
What they mean by that is that each node in H represents each possible position of the two robots, which is the same as "all possible pairs of nodes in G".
E.g.: graph G has nodes A, B, C, D, E. H will have nodes AB, AC, AD, AE, BC, BD, BE, CD, CE, DE (consider AB = BA for further analysis).
Let the two robots be named r1 and r2, they start at nodes A and B (given info in the question), so the path will start in node AB in graph H. Next, the possibilities are:
r1 moves to a neighbor node from A
r2 moves to a neighbor node from B
(...repeat at each step until r1 and r2 each reach their destination).
All these possible positions of the two robots at the same time are the configurations the answer talks about.
And what does he mean by: We join two nodes of H by an edge if they represent configurations that could be consecutive in a schedule; that is, (u,v) and (u′,v′) will be joined by an edge in H if one of the pairs u,u′ or v,v′ are equal, and the other pair corresponds to an edge in G.?
Let's look at the possibilities from what they state here:
(u,v) and (u′,v′) will be joined by an edge in H if one of the pairs u,u′ or v,v′ are equal, and the other pair corresponds to an edge in G.
The possibilities are:
(u,v) and (u,w), where (v,w) is an edge in E. In this case r2 moves to one of the neighbors of its current node.
(u,v) and (w,v), where (u,w) is an edge in E. In this case r1 moves to one of the neighbors of its current node.
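To make this concrete, here is a minimal Python sketch that explores H on the fly with BFS rather than building it up front; the adjacency-dict input format and the "distinct and not adjacent" interference rule (modelling r <= 1) are my assumptions.

from collections import deque

def find_schedule(G, a, b, c, d):
    """BFS over configurations (u, v): u is robot 1's node, v is robot 2's.
    G maps each node to a set of neighbours. One robot moves per step,
    matching the edge definition of H. Returns the list of configurations
    from (a, b) to (c, d), or None if no interference-free schedule exists."""
    def ok(u, v):
        # robots may not share a node or sit on adjacent nodes (r <= 1)
        return u != v and v not in G[u]

    start, goal = (a, b), (c, d)
    if not ok(*start):
        return None
    parent = {start: None}
    queue = deque([start])
    while queue:
        u, v = queue.popleft()
        if (u, v) == goal:
            path, cur = [], (u, v)
            while cur is not None:            # walk back to the start
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        # neighbours in H: move robot 1 or move robot 2 along one edge of G
        for nxt in [(u2, v) for u2 in G[u]] + [(u, v2) for v2 in G[v]]:
            if nxt not in parent and ok(*nxt):
                parent[nxt] = (u, v)
                queue.append(nxt)
    return None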
This solution was a bit tricky to me too at first. But after reading it several times and drawing some examples, when I finally bumped into your question, the way you separated each part of the problem then helped me to fully understand each part of the solution. So, a big thanks to you for this question!
Hope it's clearer now for anyone stuck with this problem!

Understanding ordering within a graph when doing traversals

I'm trying to understand depth first and breadth first traversals within the context of a graph. Most visual examples I've seen use trees to illustrate the difference between the two. The ordering of nodes within a tree is much more intuitive than in a graph (at least to me) and it makes perfect sense that nodes would be ordered top down, left to right from the root node.
When dealing with graphs, I see no such natural ordering. I've seen an example with various nodes labeled A through F, where the author explains traversals with nodes visited in the lexical order of their labels. This would seem to imply that the type of value represented by a node must be inherently comparable. Is this the case? Any clarification would be much appreciated!
Node values in graphs need not be comparable.
An intuitive/oversimplified way to think about BFS vs DFS is this:
In DFS, you choose a direction to move, then you go as far as you can in that direction until you hit a dead end. Then you backtrack as little as possible, until you find a different direction you can go in. Follow that to its end, then backtrack again, and so on.
In BFS, you sequentially take one step in every possible direction. Then you take two steps in every possible direction, and so on.
Consider the following simple graph (I've deliberately chosen labels that are not A, B, C... to avoid the implication that the ordering of labels matters):
Q --> X --> T
|     |
|     |
v     v
K --> W
A DFS starting at Q might proceed like this: Q to X to W (dead end), backtrack to X, go to T (dead end), backtrack to X and then to Q, go to K (dead end since W has already been visited).
A BFS starting at Q might proceed like this: Q, then X (one step away from Q), then K (one step away from Q), then W (two steps away from Q), then T (two steps away from Q).
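Here is a small Python sketch of both traversals on that graph. The adjacency lists (in particular, listing W before T under X) are chosen purely to reproduce the walks described above, which is exactly the point: the visit order comes from how you happen to store the neighbours, not from any inherent ordering of the values.

from collections import deque

# The example graph above, as adjacency lists.
graph = {
    "Q": ["X", "K"],
    "X": ["W", "T"],
    "K": ["W"],
    "T": [],
    "W": [],
}

def dfs(g, start):
    """Visit order of an iterative depth-first traversal."""
    order, seen, stack = [], set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        stack.extend(reversed(g[node]))   # so the first-listed neighbour is explored first
    return order

def bfs(g, start):
    """Visit order of a breadth-first traversal."""
    order, seen, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nb in g[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return order

print(dfs(graph, "Q"))   # ['Q', 'X', 'W', 'T', 'K']
print(bfs(graph, "Q"))   # ['Q', 'X', 'K', 'W', 'T']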

Processing nodes in circuit based digraph with cycles for audio DSP

For an audio processing chain (like Propellerheads' Reason), I'm developing a circuit of nodes which communicate with each other in an environment where there may be loops (shown below).
Audio devices (nodes) are processed in batches of 64 audio frames, and in the ideal situation, communication propagates within a single batch. Unfortunately feedback loops can be created, which causes a delay in the chain.
What type of algorithm would consistently minimize the number of feedback loops?
In my system, a cycle leads to at least one audio device having to be a "feedback node" (shown below), which means its "feedback input" cannot be processed within the same batch.
An example of feedback can be seen in the following processing schedule:
D -> A
A -> B
B -> C
C -> 'D
In this case the output from C to D has to be processed on the next batch.
Below is an example of an inefficient processing schedule which results in two feedback loops:
A -> B
B -> C
C -> D
D -> E, 'A
E -> F
F -> G
G -> 'D
Here the output from G to D, and D to A must be processed on the next batch. This means that the output from G reaches A after 2 batches, compared to the output from A to D occurring within the same batch.
The most efficient processing schedule begins with D, which results in just one feedback node (D).
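Since the example is tiny, that claim can be checked by brute force: given a processing order, a device is a feedback node exactly when one of its inputs is produced later in the batch. This Python sketch (my own illustration) enumerates all schedules for the example circuit:

from itertools import permutations

# The example circuit above: u -> v means u's output feeds v's input.
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("D", "A"),
         ("E", "F"), ("F", "G"), ("G", "D")]
nodes = sorted({n for e in edges for n in e})

def feedback_nodes(order):
    """Devices receiving at least one input from later in the order;
    those inputs can only be consumed in the next batch."""
    pos = {n: i for i, n in enumerate(order)}
    return {v for u, v in edges if pos[u] >= pos[v]}

# Brute force over all schedules (fine for 8 devices; real circuits need
# the feedback-set machinery discussed in the answer below).
best = min(permutations(nodes), key=lambda order: len(feedback_nodes(order)))
print(best, feedback_nodes(best))   # a schedule starting with D; {'D'} is the only feedback node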
How large can this graph become?
It's quite common to have 1000 audio devices (for example a song with 30 audio channels, each with 30 effects devices connected), though there are typically 2-4 outputs per device and the circuits aren't incredibly complex. Instead, audio devices tend to be connected within localised scopes, so circuits (if they do exist) are more likely to be locally confined. I just need to prepare the most efficient node schedule to reduce the number of feedbacks.
A pair of audio devices with two paths (ideally) should not have mismatched feedback nodes between them
Suppose there are two nodes, M and N, with two separate paths from M to N. There should not be a feedback node on one path but not on the other, as this would desynchronise the input to N, which is highly undesirable. This aim complicates the algorithm further, but I will examine how Reason behaves (it might not actually be so strict).
This survey describes several approaches to feedback set problems but only briefly describes branch and bound. I think that branch and bound is a promising approach, so I'll expand on that description here.
In branch and bound, we explore a search tree consisting of nodes where each vertex is assigned a label 0, 1, or ?. The meaning of ? is that we don't know what label to give yet, and the root node has all vertices labeled ?. The leaves of the search tree have no vertices labeled ?. The children of a node where at least one vertex is labeled ? are determined by choosing an arbitrary vertex labeled ? and letting it be 0 in the left child and 1 in the right. This is branching.
To bound a node, we do something to determine a lower bound (because we're minimizing) on the number of vertices labeled 1 in a solution where each of the ?s is replaced by a 0 or a 1. If this lower bound is no better than the best solution that we have found so far, then there is no need to explore the subtree further. For proving optimality, the best approach, given space, is depth-first search with best-first backtracking. The depth-first search part consists of repeatedly exploring the more promising child (lower lower bound) and putting the other into a priority queue. Then, when we get stuck because we're at a leaf or because the node got pruned, we pull the most promising possibility out of the queue. We stop when the queue is empty.
One very common approach for obtaining bounds is linear programming. Instead of labeling vertices 0 or 1, it turns out that if we allow fractional labels in the interval [0, 1], then we can find a "solution" relatively efficiently. The cost of this solution is no greater than the true optimum, but it's not possible, of course, to have a node be "half feedback". Select one of the vertices with a fractional label (closest to 0.5 is often a good bet) and branch on it.
In fact, this approach is so common that most linear programming solvers provide a convenient interface to it in the form of integer programming. Unfortunately, integer programming won't work directly for us, because the program has too many constraints: one for each simple cycle. (Come to think of it, if there aren't too many simple cycles, then you could use integer programming after all.)
The linear program for feedback vertex set looks like this. The variable x_v is the solution label: 0 if v is combinatorial, 1 if v is feedback, fractional values interpolating between those two possibilities.
minimize sum_v x_v (as few feedback vertices as possible)
subject to
for all simple cycles C, sum_{v in C} x_v >= 1 (at least one vertex on each cycle)
for all v, x_v >= 0 (vertices cannot be "negative feedback")
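When the simple cycles can be enumerated (as noted above, the only case where this is directly workable), the primal program can be handed to an off-the-shelf solver as an integer program. A sketch using networkx and PuLP, both my substitutions, on the example circuit from the question:

import networkx as nx
import pulp

# The example circuit from the question.
G = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("D", "A"),
                ("E", "F"), ("F", "G"), ("G", "D")])

prob = pulp.LpProblem("min_feedback_vertex_set", pulp.LpMinimize)
x = {v: pulp.LpVariable(f"x_{v}", cat="Binary") for v in G.nodes}   # 1 = feedback vertex

prob += pulp.lpSum(x.values())                      # as few feedback vertices as possible
for cycle in nx.simple_cycles(G):                   # one constraint per simple cycle
    prob += pulp.lpSum(x[v] for v in cycle) >= 1    # each cycle needs a feedback vertex

prob.solve()
print([v for v in G.nodes if x[v].value() == 1])    # ['D'] for this example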
You actually want to solve the dual program, which, by weak LP duality, lower-bounds the optimal solution whenever it is feasible.
maximize sum_C y_C
subject to
for all vertices v, sum_{C containing v} y_C <= 1
for all C, y_C >= 0
The intuitive meaning of this program is as follows. Suppose that we identify a set of vertex-disjoint simple cycles. Each of these cycles contains at least one feedback vertex, and those vertices are distinct, so the number of disjoint cycles lower-bounds the optimum. The dual is the fractional analog of that technique (and duality works on every LP, not just this one; the dual of this LP is the first LP again).
The technique for solving this LP is called column (i.e., variable) generation. Initially, we send it to the solver with no variables. We then interact with the solver repeatedly, getting solutions and adding variables that look useful, until it becomes clear that we've reached an optimum (or stalled). The solver returns corresponding values of x_v for all v and of y_C for the C that we've told it about. To find another cycle C' that we should include, we want sum_{v in C'} x_v < 1, preferably much less. Label each arc with the value x_v, where v is its head. For each vertex, run Dijkstra to find shortest paths, then check whether the arcs into that vertex close a short cycle.
This is complicated slightly by the side constraints imposed by the current node. If a vertex is labeled 0 or 1, then we omit its constraint from the dual and use that label in place of x_v in labeling the graph for Dijkstra.
I hope that, when you determine what the side constraints are, we can devise another column generation strategy to deal with them. I'm more confident about that than I would be about modifying the combinatorial reduction strategies surveyed.
