OK, so I have this puzzle called CuFrog, which consists of filling a 3x3x3 cube with a number in each position, but leaping over a position when going from one number to the next. For instance, considering a flattened cube, the valid position to the right of (1,1) on side 1 would be (3,1) on side 1.
So I'm using constraint programming in Prolog (CLP(FD)) to do this. I've given the domain of each variable (1 to 54), I've said they must all be distinct, and, for each position, that one of the positions in the set right-left-down-up has to hold the current position's value + 1.
Also, I've given an entry point to the puzzle, which means I placed the number 1 already on the first position.
Thing is, SICStus isn't finding an answer when I label the variables. :( It seems I must be missing a constraint somewhere, or maybe I'm doing something wrong. Can anyone help?
Thanks.
So you say the CLP(FD) doesn't find a solution. Do you mean it terminates with "no", or do you mean it doesn't terminate?
The problem looks like a Hamiltonian path problem. It could be that the search needs exponential time and simply doesn't terminate in practical time.
In this particular case, adding restrictions such as symmetry-breaking heuristics could in fact reduce the search time! For example, from your starting point, you could fix the search in 2 directions; the other directions can be derived later.
So if the answer is "no", this means too many restrictions. If the answer is that it doesn't terminate, this means not enough restrictions, or a problem that is impractical to solve.
Despite all the brute force you put into searching for a path, it might later turn out that a solution is systematic. Or you might get the idea by yourself.
Bye
I have to determine a way for a robot to get out of a maze. The thing is that the layout of the maze is unknown, and the position of the exit is unknown too. The robot also starts at an unknown position in the maze.
I found 3 solutions, but I have a hard time knowing which one I should use, because in the end it seems that the solutions will be purely random anyway.
I have those 3 solutions :
1) The basic "human" strategy(?), where you put your hand on a wall and, if necessary, follow it through the whole maze. I also keep a "turn counter" variable to avoid situations where the robot loops.
2) Depth first search
3) Making the robot choose direction randomly
The random one seems the worst, because it could take forever to find the exit (but on the other hand it could also be the fastest...). I'm not sure about the other two, though.
Also, is there a way to have some kind of heuristic? Again the lack of information makes me think that it's impossible, but maybe I'm missing something.
Last thing: when the robot finds the exit, it will have to go back to its start position using A*. This means that during the first part, while looking for the exit, it will have drawn a map of the maze that it will use for the second part. Maybe this can also help choose the best algorithm for the first part, but I still don't see why one would be better.
Could someone help me, please? Thanks. (Also, sorry for my English.)
Problems like this are categorised as real-time search; perhaps the best-known example is Learning Real-Time A*, where you combine information about what you've seen before (whether you've had to backtrack or know a cheaper way to reach a state) with the actions you can take. As is the case in areas like reinforcement learning, some level of randomness helps balance exploration and exploitation.
Assuming your graph is undirected, time-invariant, and the initial and exit nodes lie in the same component, then choosing a direction at random at each vertex is equivalent to a random walk on a graph.
Regardless of whether the graph is initially known or not, this is a very well understood field of mathematics, equivalent to an absorbing Markov chain. The time to reach the exit state in such cases has a discrete phase-type distribution - often quite slow. It's also worth noting that in pathological cases it's possible to design a maze where a random walk will outperform DFS.
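To see the absorbing-chain view concretely, here is a small Monte Carlo sketch in Python (the corridor graph is a toy stand-in for a real maze):

```python
import random

# Monte Carlo illustration of a random walk's hitting time on a toy
# maze graph.  The walk moves to a uniformly random neighbour at every
# step until it reaches the exit vertex.

def random_walk_steps(adj, start, exit_node, rng):
    node, steps = start, 0
    while node != exit_node:
        node = rng.choice(adj[node])
        steps += 1
    return steps

# Corridor 0-1-2-3-4 with the exit at 4; for a path with n edges the
# expected hitting time from one end to the other is n^2 (here 16).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
rng = random.Random(0)
trials = [random_walk_steps(adj, 0, 4, rng) for _ in range(2000)]
print(sum(trials) / len(trials))  # close to 16
```

Even on this tiny corridor the walk averages 16 steps where DFS needs 4, which is the "often quite slow" behaviour in miniature.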
#beaker is right in that the first two you suggested should lead to the same result. However, you may be able to improve the search a little by keeping track of any loops you find: if the robot reaches a spot it has already visited and later needs to backtrack from a dead end, it may not need to go back as far if it has found a shortcut. Also, use the segments that have been mapped on the way out and apply Dijkstra's algorithm or A* to find the most efficient way back. There may be a faster way back along an unexplored path, but this is the safest way to get a quick result.
Obviously, implementing the checks for loops to prevent unneeded backtracking will make things more complicated to implement. Returning to the start using Dijkstra's algorithm, though, should not be as complex.
If you are feeling ambitious, now that you have found the exit you could use this information to give the robot a sense of direction, though in a randomly generated maze that may not help much.
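The return trip over the mapped segments could be sketched like this in Python, assuming the exploration produced an adjacency map with edge costs (all names here are illustrative):

```python
import heapq

# Dijkstra over the explored portion of the maze (a sketch).  The map
# built during exploration is modelled as an adjacency dict of
# (neighbour, cost) pairs; `start` is where the robot stands (the exit
# it found) and `goal` is its original position.

def dijkstra(adj, start, goal):
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in adj[node]:
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Reconstruct the path by walking the predecessor links backwards.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

adj = {
    "exit": [("a", 1)],
    "a": [("exit", 1), ("b", 1), ("c", 1)],
    "b": [("a", 1), ("home", 1)],
    "c": [("a", 1)],
    "home": [("b", 1)],
}
print(dijkstra(adj, "exit", "home"))  # ['exit', 'a', 'b', 'home']
```

With unit edge costs this reduces to BFS; A* would add a heuristic term to the priority, which only helps if the robot can estimate distance to its start.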
If I tell you the moves for a game of chess and declare who wins, why can't it be checked in polynomial time if the winner does really win? This would make it an NP problem from my understanding.
First of all: the number of positions you can set up with 32 pieces on an 8x8 board is limited. We need to consider any pawn being promoted to any other piece and include any such position, too. Of course, among all these there are some positions that cannot be reached following the rules of chess, but this does not matter. The important thing is: we have a limit. Let's name this limit simply MaxPositions.
Now for any given position, let's build up a tree as follows:
The given position is the root.
Add any position (legal chess position or not) as child.
For any of these children, add any position as child again.
Continue this way, until your tree reaches a depth of MaxPositions.
I'm now too tired to think about whether we need one additional level of depth for the idea (proof?), but heck, let's just add it. The important thing is: a tree constructed like this is finite.
Next step: from this tree, remove any subtree that is not reachable from the root via legal chess moves. Repeat this step for the remaining children, grandchildren, ..., until there is no unreachable position left in the whole tree. The number of steps must be finite, as the tree is finite.
Now do a breadth-first search and make any node a leaf if its position has been found previously. It must be marked as such (!; draw candidate?). The same goes for any mate position.
How do we find out if there is a forced mate? In any subtree, if it is your turn, at least one child must lead to a forced mate. If it is the opponent's move, every child must lead to a forced mate. This applies recursively, of course. However, as the tree is finite, this whole algorithm terminates.
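That recursion can be written down directly. Here is a sketch in Python over an abstract game tree; real chess positions and move generation are, of course, stand-ins:

```python
# The "forced mate" recursion: on our turn, SOME child must keep the
# forced mate alive; on the opponent's turn, EVERY child must.  Leaves
# are terminal positions, and is_mate_for_us tells us who won there.

def forced_win(node, our_turn, children, is_mate_for_us):
    kids = children(node)
    if not kids:
        return is_mate_for_us(node)
    if our_turn:
        # We need at least one move that leads to a forced mate.
        return any(forced_win(k, False, children, is_mate_for_us)
                   for k in kids)
    # Opponent to move: every reply must still lead to mate.
    return all(forced_win(k, True, children, is_mate_for_us)
               for k in kids)

# Tiny example tree encoded as dicts; leaves marked True are mates.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": []}
mates = {"a1": True, "a2": True, "b": False}
print(forced_win("root", True,
                 lambda n: tree.get(n, []),
                 lambda n: mates.get(n, False)))  # True: move "a" forces mate
```

Since the tree is finite, the recursion terminates, which is the whole point of the argument above.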
[censored], this whole algorithm terminates! There is some constant bounding the whole thing. So: although the bound is incredibly high (and far beyond what up-to-date hardware can handle), it is a constant (please do not ask me to calculate it...). So: our problem actually is O(1)!!!
The same for checkers, go, ...
This applies to the forced mate, so far. What is the best move? First, check whether we can find a forced mate. If so, fine, we have found the best move. If there are several, select the one requiring the fewest moves (there might still be more than one...).
If there is no such forced mate, then we need to measure the 'best' move by some means. Possibly count the number of available continuations leading to mate. Other propositions for the measure? As long as we operate on this tree from top to bottom, we still remain bounded. So again, we are O(1).
Now what did we miss? Have a look at the link in your comment again. They are talking about NxN checkers! The author varies the size of the board!
So have a look back at how we constructed the tree. I think it is obvious that the tree grows exponentially with the size of the board (try to prove it yourself...).
I know very well that this answer is not a proof that the problem is EXPTIME. Actually, I admit, it is not really an answer at all. But I think what I illustrated still gives quite a good image/impression of the complexity of the problem. And as long as no one provides a better answer, I dare to claim that this is better than nothing at all...
Addendum, considering your comment:
Allow me to refer to Wikipedia. Actually, it should be sufficient to transform the other problem in exponential time, not polynomial as in the link, as applying the transformation and then solving the resulting problem still remains exponential. But I'm not sure about the exact definition...
It is sufficient to show this for a problem which you already know is EXPTIME-complete (transforming any other problem to this one and then to the chess problem remains exponential, if both transformations are exponential).
Apparently, J.M. Robson found a way to do this for NxN checkers. It must be possible for generalized chess, too, probably by simply modifying Robson's algorithm. I do not think it is possible for classical 8x8 chess, though...
O(1) applies to classical chess only, not to generalized chess. But it is the latter which we assume not to be in NP! Actually, in my answer up to this addendum, there is one proof lacking: that the size of the bounded tree (for fixed N) does not grow faster than exponentially with growing N (so the answer actually is incomplete!).
And to prove that generalized chess is not in NP, we would have to prove that there is no polynomial algorithm solving the problem on a non-deterministic Turing machine. This I leave open again, and my answer remains even less complete...
If I tell you the moves for a game of chess and declare who wins, why can't it be checked in polynomial time if the winner does really win? This would make it an NP problem from my understanding.
Because in order to check whether the winner (White) really wins, you would also have to evaluate all possible moves that the loser (Black) could have made in order to win instead. That makes the checking exponential as well.
I want to ask if there is an easier step-by-step way to build the path and possible solutions for a game like this.
I've already searched for pathfinding algorithms like A* and Dijkstra, and I have read the articles, but I don't really understand how to implement them in a game like the one in the link above.
Please help if you know something about this. Thanks.
The language that I used is AS3 (ActionScript 3).
I would do it like that:
1) Let's use backtracking with some heuristics (I don't know how fast it would actually work).
2) Let's assume that the current state is S (a state is defined by the tiles which are left).
If no tiles are left, the solution is found.
Otherwise, you can use, let's say, depth first search to find the pairs of matching tiles.
A naive solution would be to try to match every pair, remove it, and keep searching (and do backtracking if this branch fails).
However, there are a few heuristics to speed up the search:
a) If there are only two tiles of one type and they can be removed, remove them. Do not try to remove any other pairs in this state. I claim that if a solution exists, it can be obtained by removing this pair first.
b) If all tiles of one type can be matched now (even if there are more than 2 of them), match them. Do not try to remove any other pairs in this state.
c) If neither a) nor b) applies, then you will have to try all possibilities (as in the naive version).
As you can see, these heuristics allow you to avoid branching if either a) or b) applies to the current state, so they can speed up the search(especially at the end of the game, when only a few tiles are left and almost all of them are connected).
P.S. You can also use memoization to avoid visiting the same state twice.
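Putting the pieces together, here is a sketch in Python of the backtracking search with heuristic a) and memoization. `can_match` is a toy stand-in for the real "connectable by the game's path rules" test; here any two tiles of equal type match, which keeps the example short:

```python
from collections import Counter
from functools import lru_cache
from itertools import combinations

# Toy matching rule: in the real game this would check the board
# geometry; here any two tiles of the same type can be removed.
def can_match(t1, t2):
    return t1[1] == t2[1]

def solvable(tiles):
    """Tiles are (position_id, type) pairs; True if all can be removed."""
    tiles = frozenset(tiles)

    @lru_cache(maxsize=None)  # memoization: never revisit a state
    def search(state):
        if not state:
            return True  # no tiles left: solved
        pairs = [(a, b) for a, b in combinations(sorted(state), 2)
                 if can_match(a, b)]
        counts = Counter(t[1] for t in state)
        # Heuristic a): exactly two tiles of a type that can be removed
        # -> remove them without branching on other pairs.
        for a, b in pairs:
            if counts[a[1]] == 2:
                return search(state - {a, b})
        # Otherwise branch over all matchable pairs (naive fallback).
        return any(search(state - {a, b}) for a, b in pairs)

    return search(tiles)

tiles = [(0, "bamboo"), (1, "bamboo"), (2, "wind"), (3, "wind")]
print(solvable(tiles))  # True
```

Heuristic b) would extend the non-branching case to types with more than two tiles; the structure of the search stays the same.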
Given a 20x30 'sheet of graph paper', mark any even number n of cells so that every marked cell has an odd number of marked neighboring cells, and so that all the marked cells connect, making one 'piece'.
Neighboring cells are defined as immediately adjacent cells. (All surrounding cells excluding diagonal cells).
I'm having a problem coming up with a clean algorithm. I currently have one, and it's very messy and confusing, and I just know there has to be a much better way to do it; I'm just not sure how. I'm looking for a quickly implemented algorithm because I don't have much time left for the program, and we have to code it in Ada, which isn't a strength of mine.
I am currently making use of functions I made, like so:
CanMark(cell); -- Checks if the cell can be marked.
HasProblem(cell); -- Checks if the cell has an even number of surrounding marked cells.
HasFix(cell); -- Checks if there is a sequence of cells that can be marked to eliminate currently existing problem.
I don't have the code with me at the moment but I will post when I get home.
Any help would be greatly appreciated.
Sorry for being unclear. I am just asking for a direction not for you to do my problem for me. I feel like this could be done using a graph related algorithm but don't know enough to know for sure. I don't have my code with me right at the moment, but I will certainly post it when I am able to.
I would start small, and build up. The smallest (n=1) is simply:
*
That clearly doesn't work, since there are 0 neighbors (an even number). So no solution exists for n=1. Next, try n=2. Only one choice:
**
This works.
Now what about n=3? Doesn't work, no solution for n=3.
Now, how can you add to it to make n=4? n=6? Can you form a pattern?
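While building up patterns, it helps to have a checker for the target condition. Here is a sketch in Python (assuming the intended reading: every marked cell needs an odd number of marked orthogonal neighbours, the marked cells must form one connected piece, and n must be even):

```python
# Verifier for the marking puzzle.  Cells are (row, col) pairs.

def valid(marks):
    marks = set(marks)
    n = len(marks)
    if n == 0 or n % 2 != 0:
        return False  # n must be even and positive

    def nbrs(c):
        r, col = c
        return [(r + 1, col), (r - 1, col), (r, col + 1), (r, col - 1)]

    # Every marked cell needs an odd number of marked neighbours.
    if any(sum(m in marks for m in nbrs(c)) % 2 == 0 for c in marks):
        return False

    # Connectivity check: flood fill from an arbitrary marked cell.
    seen, frontier = set(), [next(iter(marks))]
    while frontier:
        c = frontier.pop()
        if c in seen:
            continue
        seen.add(c)
        frontier.extend(m for m in nbrs(c) if m in marks)
    return seen == marks

print(valid({(0, 0), (0, 1)}))                    # True: the ** example
print(valid({(0, 0), (0, 1), (1, 0), (1, 1)}))    # False: each cell has 2
```

With a checker like this you can grow candidate patterns (dominoes joined in various ways) and test each extension mechanically.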
I'm writing a program which solves a 24-puzzle (5x5 grid) using two heuristics. The first uses how many blocks are in the incorrect place, and the second uses the Manhattan distance between each block's current place and its desired place.
I have different functions in the program which use each heuristic with A* and with a greedy search, and compare the results (so 4 different combinations in total).
I'm curious whether my program is wrong or whether it's a limitation of the puzzle. The puzzle is generated randomly by moving pieces around a few times, and most of the time (~70%) a solution is found by most of the searches, but sometimes they fail.
I can understand why greedy would fail, as it's not complete, but seeing as A* is complete this leads me to believe that there's an error in my code.
So could someone please tell me whether this is an error in my thinking or a limitation of the puzzle? Sorry if this is badly worded, I'll rephrase if necessary.
Thanks
EDIT:
So I'm fairly sure it's something I'm doing wrong. Here's a step-by-step list of how I'm doing the searches; is anything wrong here?
Create a new list for the fringe, sorted by whichever heuristic is being used
Create a set to store visited nodes
Add the initial state of the puzzle to the fringe
while the fringe isn't empty..
pop the first element from the fringe
if the node has been visited before, skip it
if node is the goal, return it
add the node to our visited set
expand the node and add all descendants back to the fringe
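The steps above can be sketched as a generic best-first search in Python (`h` is whichever heuristic is plugged in, and `neighbors` yields successor states; both are assumptions here, not your actual code). For A*, nodes are ordered by g + h; a greedy search would order by h alone:

```python
import heapq
from itertools import count

def astar(start, goal_test, neighbors, h):
    tie = count()  # tie-breaker so the heap never compares states
    fringe = [(h(start), next(tie), 0, start, [start])]
    visited = set()
    while fringe:
        _, _, g, state, path = heapq.heappop(fringe)
        if state in visited:
            continue  # skip nodes we have already expanded
        if goal_test(state):
            return path
        visited.add(state)
        for nxt in neighbors(state):
            heapq.heappush(fringe,
                           (g + 1 + h(nxt), next(tie), g + 1, nxt, path + [nxt]))
    return None  # fringe exhausted: no solution reachable

# Toy use: reach 7 from 0 by +1/+3 steps, with |7 - s| as the heuristic.
path = astar(0, lambda s: s == 7,
             lambda s: [s + 1, s + 3],
             lambda s: abs(7 - s))
print(path)  # [0, 3, 6, 7]
```

One subtlety this sketch shares with your step list: skipping any state already in the visited set is safe for finding *a* solution, but with an inconsistent heuristic it can sacrifice optimality.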
If you mean that sliding puzzle: it becomes unsolvable if you exchange two pieces of a working solution - so if you don't find a solution, this doesn't tell you anything about the correctness of your algorithm.
It may just be that your seed is flawed.
Edit: If you start with the solution and make (random) legal moves, then a correct algorithm would find a solution (as reversing the order is a solution).
It is not completely clear who invented it, but Sam Loyd popularized the 14-15 puzzle during the late 19th century; it is the 4x4 version of your 5x5.
From the Wikipedia article, a parity argument proved that half of the possible configurations are unsolvable. You are probably running into something similar when your search fails.
I'm going to assume your code is correct, and you implemented all the algorithms and heuristics correctly.
This leaves us with the "generated randomly" part of your puzzle initialization. Are you sure you are generating correct states of the puzzle? If you generate an illegal state, obviously there will be no solution.
While the steps you have listed seem a little incomplete, you have listed enough to ensure that your A* will reach a solution if there is one (albeit not necessarily an optimal one, as long as you are simply skipping already-visited nodes).
It sounds like either your puzzle generation is flawed or your algorithm isn't implemented correctly. To easily verify your puzzle generation, store the steps used to generate the puzzle, run them in reverse, and check that the result is the solved state before sending the puzzle to the search routines. If you ever generate an invalid puzzle, dump the puzzle and the expected steps and see where the problem is. If the puzzle passes and the algorithm fails, you have at least narrowed down where the problem is.
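Another cheap screen, assuming a standard 24-puzzle: for a grid of odd width, every position reachable from the solved state has an even number of inversions in its tile sequence (blank ignored). A sketch:

```python
# Solvability check for the 5x5 sliding puzzle via inversion parity.
# For odd-width grids, legal moves never change the parity of the
# number of inversions, and the solved state has zero inversions.

def solvable_5x5(board):
    """board: 25 numbers row by row, 0 for the blank."""
    tiles = [t for t in board if t != 0]
    inversions = sum(1
                     for i in range(len(tiles))
                     for j in range(i + 1, len(tiles))
                     if tiles[i] > tiles[j])
    return inversions % 2 == 0

solved = list(range(1, 25)) + [0]
print(solvable_5x5(solved))  # True

# Swapping two adjacent tiles flips the parity -> unsolvable.
swapped = solved[:]
swapped[0], swapped[1] = swapped[1], swapped[0]
print(solvable_5x5(swapped))  # False
```

Running every generated board through a check like this before searching separates "generator bug" from "search bug" immediately.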
If it turns out to be your algorithm, post a more detailed explanation of the steps you have actually implemented (not just how A* works, we all know that) - for instance, when you run the evaluation function and where you re-sort the list that acts as your queue. That will make it easier to pinpoint a problem within your implementation.