I'm currently learning about the A* search algorithm and using it to find the quickest solution to the N-Puzzle. For some random seeds of the initial starting state, the puzzle may be unsolvable, which would result in extremely long wait times while the algorithm searches the entire search space and determines that there is no solution for the given start state.
I was wondering if there is a method of precalculating whether A* will fail, so that I can avoid such a scenario. I've read a bit about how it is possible, but I can't find a direct answer as to a method for doing it.
Any guidance or options are appreciated.
I think A* does not offer you a mechanism to know whether or not a problem is solvable. Specifically for the N-Puzzle, I think this could help you check whether it can be solved or not:
http://www.geeksforgeeks.org/check-instance-8-puzzle-solvable/
It seems that if you are in a state with an odd number of inversions (for the 8-puzzle described in that link), you know for sure that the problem is infeasible for that permutation.
For the N-puzzle specifically, there are only two possible parity classes, so you just need to check which parity the current configuration belongs to.
There is an in-depth explanation of how to do this on Math StackExchange.
For general A* problems, no, there is no way to pre-compute whether the graph is solvable.
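For reference, here is a minimal sketch in Python of that parity check, assuming the blank is encoded as 0 and the board is given as a list of rows; the even-width rule below is the one usually quoted for the 15-puzzle:

    def count_inversions(tiles):
        # tiles: flat list of the board read row by row, with the blank removed
        inv = 0
        for i in range(len(tiles)):
            for j in range(i + 1, len(tiles)):
                if tiles[i] > tiles[j]:
                    inv += 1
        return inv

    def is_solvable(board):
        # board: list of rows, e.g. [[1, 2, 3], [4, 0, 6], [7, 5, 8]]
        n = len(board)
        flat = [v for row in board for v in row if v != 0]
        inv = count_inversions(flat)
        if n % 2 == 1:
            # odd width (e.g. the 3x3 8-puzzle): solvable iff inversions are even
            return inv % 2 == 0
        # even width (e.g. the 4x4 15-puzzle): parity also depends on the blank's row,
        # counted from the bottom (bottom row = 1)
        blank_row_from_bottom = n - next(r for r, row in enumerate(board) if 0 in row)
        return (inv + blank_row_from_bottom) % 2 == 1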
I am currently taking an algorithms class and we have to solve a Sudoku.
I already have a working solution with naive backtracking, but I'm much more interested in solving this puzzle with a tree data structure.
My problem is that I don't quite understand how it works. Can anyone explain to me the basics of puzzle solving with a tree?
I'm not seeking optimization. I'm looking for an explanation of algorithms like the genetic algorithm or something similar. My purpose at this point is only to learn. I have a hard time taking what I read in scientific articles and translating it into a real implementation.
I hope I've made my question clearer.
Thank you very much!
EDIT: I edited the post to be more precise.
I don't know how to solve Sudoku "with a tree", but you should try to mark as many cells as possible before guessing and backtracking. Therefore, check out this question.
As in most search problems, in Sudoku too you can associate a tree with the problem. In this case the nodes are partial assignments to the variables, and the branches correspond to extensions of the assignment in the parent node. However, it is not practical (or even feasible) to store these trees in memory; you can think of them as a conceptual tool. Backtracking actually navigates such a tree, using variable ordering to avoid exploring hopeless branches. A better (faster) solution can be obtained with constraint propagation techniques, since you know that all rows, columns, and 3x3 blocks must contain different values. A simple algorithm for that is AC-3, and it is covered in most introductory AI books, including Russell & Norvig 2010.
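To make the tree picture concrete, here is a minimal hedged sketch in Python: each recursive call is a node (a partial assignment), each candidate digit for the next empty cell is a branch, and undoing the assignment is backtracking up the tree. The 9x9 list-of-lists grid with 0 for empty cells is just an assumed representation, not part of the original question.

    def candidates(grid, r, c):
        # digits not already used in the row, column, or 3x3 block of cell (r, c)
        used = set(grid[r]) | {grid[i][c] for i in range(9)}
        br, bc = 3 * (r // 3), 3 * (c // 3)
        used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
        return [v for v in range(1, 10) if v not in used]

    def solve(grid):
        # find the next unassigned cell; if there is none, this node is a full solution
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    for v in candidates(grid, r, c):   # each value = one branch
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0                 # undo the assignment: backtrack
                    return False                        # no value fits: prune this branch
        return True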
I have a few questions regarding the semantics of terminology used when describing algorithms.
Firstly, what is meant by a 'naive' algorithm? How does this differ from other solutions to a given problem? What other forms can solutions take?
Secondly, I have heard much reference to having a 'closed-form' solution. I have no idea what this means either, but it often appears when trying to solve recurrence relations...
Thanks for your time
A naive algorithm is usually the most obvious solution when one is presented with a problem. It may not be a smart algorithm, but it will probably get the job done (...eventually).
E.g., trying to search for an element in a sorted array.
A naive algorithm would be to use a linear search.
A not-so-naive solution would be to use binary search.
A better example is substring search, where the naive algorithm is far less efficient than the Boyer–Moore or Knuth–Morris–Pratt algorithms.
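To make the first example concrete, here is a rough Python sketch of both approaches on a sorted list (the function names are just illustrative):

    def linear_search(a, target):
        # naive: O(n), checks every element in turn
        for i, x in enumerate(a):
            if x == target:
                return i
        return -1

    def binary_search(a, target):
        # not-so-naive: O(log n), but requires the list to be sorted
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if a[mid] == target:
                return mid
            if a[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1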
A closed-form solution is a simple solution that works instantly, without any loops, recursion, etc.
E.g.:
Iterative algorithm for the sum of the integers from 1 to n:

    s = 0
    for i in 1 to n
        s = s + i
    end for
    print s

Closed form (for the same problem):

    s = n * (n + 1) / 2
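As a quick Python sketch (purely illustrative), the loop and the closed form give the same answer, but the closed form does no iteration at all:

    def sum_iterative(n):
        s = 0
        for i in range(1, n + 1):   # loops n times
            s += i
        return s

    def sum_closed_form(n):
        return n * (n + 1) // 2     # a single expression, no loop

    assert sum_iterative(1000) == sum_closed_form(1000)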
A naive algorithm is a very simple algorithm, one with very simple rules - sometimes the first one that comes to mind. It may be stupid and very slow, it may not even solve the problem, or it may sometimes be the best possible. Here's an example of a problem and some "naive" algorithms:
Problem: You are in a (2-dimensional) maze. Find your way out. (meaning: to a spot with an "EXIT" sign :)
Naive algorithm 1: Start walking and turn right at every intersection you meet (until you find "EXIT").
Naive algorithm 2: Start walking and choose a random direction at every intersection you meet (until you find "EXIT").
Algorithm 1 will not even get you out of some mazes!
Algorithm 2 will get you out of all mazes (although this is rather hard to prove).
Closed form means you can give a single expression as the solution, one that solves the problem without recurrence or recursion. It should be noted that it is not always possible to find such a closed form.
Naive means just what it says: a first, stupid solution to the problem that solves it, but maybe not very efficiently in time or space. What one really considers 'naive' depends on the speaker, the context, and the weather of the next day. Often it is used to distinguish a very sophisticated solution (one that uses some kind of trick) from the obvious implementation.
I'm writing a program which solves a 24-puzzle (5x5 grid) using two heuristics. The first counts how many blocks are in the incorrect place, and the second uses the Manhattan distance between each block's current place and its desired place.
I have different functions in the program which use each heuristic with an A* and a greedy search and compares the results (so 4 different parts in total).
I'm curious whether my program is wrong or whether it's a limitation of the puzzle. The puzzle is generated randomly by moving pieces around a few times, and most of the time (~70%) a solution is found by most of the searches, but sometimes they fail.
I can understand why greedy would fail, as it's not complete, but seeing as A* is complete this leads me to believe that there's an error in my code.
So could someone please tell me whether this is an error in my thinking or a limitation of the puzzle? Sorry if this is badly worded, I'll rephrase if necessary.
Thanks
EDIT:
So I"m fairly sure it's something I'm doing wrong. Here's a step-by-step list of how I'm doing the searches, is anything wrong here?
Create a new list for the fringe, sorted by whichever heuristic is being used
Create a set to store visited nodes
Add the initial state of the puzzle to the fringe
while the fringe isn't empty..
pop the first element from the fringe
if the node has been visited before, skip it
if node is the goal, return it
add the node to our visited set
expand the node and add all descendants back to the fringe
If you mean that sliding puzzle: it becomes unsolvable if you exchange two pieces of a working solution - so if you don't find a solution, that doesn't tell you anything about the correctness of your algorithm.
It may just be that your seed (starting configuration) is flawed.
Edit: If you start with the solution and make (random) legal moves, then a correct algorithm will find a solution (as reversing the order of the moves is a solution).
It is not completely clear who invented it, but Sam Loyd popularized the 14-15 puzzle during the late 19th century; it is the 4x4 version of your 5x5.
From the Wikipedia article, a parity argument proved that half of the possible configurations are unsolvable. You are probably running into something similar when your search fails.
I'm going to assume your code is correct, and you implemented all the algorithms and heuristics correctly.
This leaves us with the "generated randomly" part of your puzzle initialization. Are you sure you are generating correct states of the puzzle? If you generate an illegal state, obviously there will be no solution.
While the steps you have listed seem a little incomplete, you have listed enough to ensure that your A* will reach a solution if there is one (albeit not necessarily an optimal one, as long as you are simply skipping already-visited nodes).
It sounds like either your puzzle generation is flawed or your algorithm isn't implemented correctly. To easily verify your puzzle generation, store the steps used to generate the puzzle, run them in reverse, and check whether the result is a solution state before allowing the puzzle to be sent to the search routines. If you ever generate an invalid puzzle, dump the puzzle and the expected steps and see where the problem is. If the puzzle passes and the algorithm fails, you have at least narrowed down where the problem is.
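A rough sketch of that check in Python (the move helpers here are hypothetical stand-ins for whatever your generator actually uses):

    def generation_is_valid(solved, moves, scrambled, apply_move, invert_move):
        # Undo the recorded scramble: apply the inverse of each recorded move in
        # reverse order to the scrambled board; a correct generator should land
        # back on the solved board.
        state = scrambled
        for move in reversed(moves):
            state = apply_move(state, invert_move(move))
        return state == solved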
If it turns out to be your algorithm, post a more detailed explanation of the steps you have actually implemented (not just how A* works, we all know that), for instance when you run the evaluation function and where you re-sort the list that acts as your queue. That will make it easier to spot a problem within your implementation.
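For reference, here is a minimal hedged A* sketch in Python showing one common way to arrange those pieces: the evaluation happens when a node is pushed, and a priority queue (keyed on f = g + h, assuming unit move costs) replaces the re-sorted list. Note that ordering by the heuristic alone would give greedy best-first search instead of A*. The interface names (h, neighbors, is_goal) are placeholders, not the asker's actual code.

    import heapq, itertools

    def a_star(start, h, neighbors, is_goal):
        # h(state): heuristic estimate; neighbors(state): iterable of successors;
        # is_goal(state): goal test. All of these names are assumptions.
        tie = itertools.count()                  # tie-breaker so states are never compared
        fringe = [(h(start), next(tie), 0, start, [start])]   # (f, tie, g, state, path)
        visited = set()
        while fringe:
            _, _, g, state, path = heapq.heappop(fringe)      # pop the lowest f = g + h
            if state in visited:                              # skip already-expanded states
                continue
            if is_goal(state):
                return path
            visited.add(state)
            for nxt in neighbors(state):                      # expand; evaluate on push
                if nxt not in visited:
                    heapq.heappush(fringe, (g + 1 + h(nxt), next(tie), g + 1, nxt, path + [nxt]))
        return None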
I need an algorithm to find the best solution of a path finding problem. The problem can be stated as:
At the starting point I can proceed along multiple different paths.
At each step there are another multiple possible choices where to proceed.
There are two conditions to evaluate at each step:
A boundary condition that determines whether a path is acceptable or not.
A condition that determines whether the path has reached the final destination and can be selected as the best one.
At each step a number of paths can be eliminated, letting only the "good" paths grow.
I hope this sufficiently describes my problem, and also a possible brute force solution.
My question is: is brute force the best/only solution to the problem? I would also appreciate some hints about the best coding structure for the algorithm.
Take a look at A*, and use the length as boundary condition.
http://en.wikipedia.org/wiki/A%2a_search_algorithm
You are looking for some kind of state space search algorithm. Without knowing more about the particular problem, it is difficult to recommend one over another.
If your space is open-ended (infinite tree search), or nearly so (chess, for example), you want an algorithm that prunes unpromising paths as well as selecting promising ones. The alpha-beta algorithm (used by many OLD chess programs) comes immediately to mind.
The A* algorithm can give good results. The key to getting good results out of A* is choosing a good heuristic (weighting function) to evaluate the current node and the various successor nodes, to select the most promising path. Simple path length is probably not good enough.
Elaine Rich's AI textbook (oldie but goodie) spent a fair amount of time on various search algorithms. Full Disclosure: I was one of the guinea pigs for the text, during my undergraduate days at UT Austin.
Did you try breadth-first search (BFS)? That is, if length is a criterion for the best path.
You will also have to modify the algorithm to disregard "unacceptable" paths.
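A rough Python sketch of that modification (next_choices, acceptable, and is_destination stand in for the asker's step generator, boundary condition, and final condition, and are assumptions):

    from collections import deque

    def bfs_paths(start, next_choices, acceptable, is_destination):
        # Breadth-first search over paths, discarding any extension that fails
        # the boundary condition as soon as it is generated.
        queue = deque([[start]])
        while queue:
            path = queue.popleft()
            if is_destination(path):
                return path                      # first hit is shortest in number of steps
            for choice in next_choices(path[-1]):
                new_path = path + [choice]
                if acceptable(new_path):         # prune "unacceptable" paths immediately
                    queue.append(new_path)
        return None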
If your problem is exactly as you describe it, you have two choices: depth-first search and breadth-first search.
Depth-first search considers a possible path, pursues it all the way to the end (or as far as it remains acceptable), and only then compares it with other paths.
Breadth-first search is probably more appropriate: at each junction you consider all possible next steps and use some score to rank the order in which each possible step is taken. This allows you to prioritise your search and find good solutions faster (but to prove you have found the best solution it takes just as long as depth-first searching, and it is less easy to parallelise).
However, your problem may also be suitable for Dijkstra's algorithm depending on the details of your problem. If it is, that is a much better approach!
This would also be a good starting point to develop your own algorithm that performs much better than iterative searching (if such an algorithm is actually possible, which it may not be!)
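If Dijkstra does fit, a minimal sketch in Python could look like this (the adjacency-dict graph format, with node labels assumed to be strings or integers, is just an assumption for the example):

    import heapq

    def dijkstra(graph, start, is_destination):
        # graph: dict mapping node -> list of (neighbor, edge_cost) pairs.
        # Returns the cheapest path from start to the first node satisfying
        # is_destination, or None if no such node is reachable.
        dist = {start: 0}
        prev = {}
        heap = [(0, start)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue                         # stale queue entry, skip it
            if is_destination(node):
                path = [node]                    # rebuild the path via the prev links
                while path[-1] in prev:
                    path.append(prev[path[-1]])
                return list(reversed(path))
            for nxt, cost in graph.get(node, []):
                nd = d + cost
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    prev[nxt] = node
                    heapq.heappush(heap, (nd, nxt))
        return None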
A* plus flood fill and dynamic programming. It is hard to implement, too hard to describe in a simple post, and too valuable to just give away, so I'm sorry I can't provide more; but searching on flood fill and dynamic programming will put you on the path if you want to go that route.
Ok, so here's the problem:
I need to find any number of item groups from a 50-100 item set that add up to 1000, 2000, ..., 10000.
Input: list of integers
An integer can be on one list only.
Any ideas on an algorithm?
Googling for "Knapsack problem" should get you quite a few hits (though they're not likely to be very encouraging -- this is quite a well known NP-complete problem).
Edit: if you want to get technical, what you're describing seems to really be the subset sum problem -- which is a special case of the knapsack problem. Of course, that's assuming I'm understanding your description correctly, which I'll admit may be open to some question.
You might find Algorithm 3.94 in The Handbook of Applied Cryptography helpful.
I'm not 100% sure what you are asking, but I've used backtracking searches for something like this before. This is a brute-force algorithm and is about the slowest possible solution, but it will work. The wiki article on Backtracking Search may help you. Basically, you can use a recursive algorithm to examine every possible combination.
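For instance, a small backtracking sketch in Python that lists the subsets summing to a target (purely illustrative; it assumes positive integers and is exponential in the worst case):

    def subsets_with_sum(items, target):
        # For each item, branch on "take it" or "skip it", and backtrack as soon
        # as the running total exceeds the target (assumes positive integers).
        results = []

        def backtrack(i, chosen, total):
            if total == target:
                results.append(list(chosen))
                return
            if i == len(items) or total > target:
                return
            chosen.append(items[i])                      # branch 1: take items[i]
            backtrack(i + 1, chosen, total + items[i])
            chosen.pop()                                 # branch 2: skip items[i]
            backtrack(i + 1, chosen, total)

        backtrack(0, [], 0)
        return results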
This is the knapsack problem. Are there any constraints on the integers you can choose from? Are they divisible? Are they all less than some given value? There may be ways to solve the problem in polynomial time given such constraints - Google will provide you with answers.
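For example, if the integers are positive and the targets are as small as stated (1000 up to 10000), the standard pseudo-polynomial dynamic program for subset sum decides feasibility quickly; a hedged sketch:

    def subset_sum_reachable(items, target):
        # reachable[s] is True if some subset of the items sums to exactly s.
        # Runs in O(len(items) * target) time, fine for targets up to 10000.
        reachable = [False] * (target + 1)
        reachable[0] = True
        for x in items:
            for s in range(target, x - 1, -1):   # go downwards so each item is used at most once
                if reachable[s - x]:
                    reachable[s] = True
        return reachable[target]

Recovering which items make up the sum needs an extra predecessor table, and the original requirement that each integer belongs to only one group makes the full problem harder than a single subset-sum query.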