Why does backtracking make an algorithm non-deterministic?

So I've had at least two professors mention that backtracking makes an algorithm non-deterministic without giving much of an explanation of why that is. I think I understand how this happens, but I have trouble putting it into words. Could somebody give me a concise explanation of the reason for this?

It's not so much the case that backtracking makes an algorithm non-deterministic.
Rather, you usually need backtracking to process a non-deterministic algorithm, since (by the definition of non-deterministic) you don't know which path to take at a particular time in your processing, but instead you must try several.

I'll just quote Wikipedia:
A nondeterministic programming language is a language which can specify, at certain points in the program (called "choice points"), various alternatives for program flow. Unlike an if-then statement, the method of choice between these alternatives is not directly specified by the programmer; the program must decide at runtime between the alternatives, via some general method applied to all choice points. A programmer specifies a limited number of alternatives, but the program must later choose between them. ("Choose" is, in fact, a typical name for the nondeterministic operator.) A hierarchy of choice points may be formed, with higher-level choices leading to branches that contain lower-level choices within them.
One method of choice is embodied in backtracking systems, in which some alternatives may "fail", causing the program to backtrack and try other alternatives. If all alternatives fail at a particular choice point, then an entire branch fails, and the program will backtrack further, to an older choice point. One complication is that, because any choice is tentative and may be remade, the system must be able to restore old program states by undoing side-effects caused by partially executing a branch that eventually failed.
From the Nondeterministic programming article.
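To make the idea of choice points concrete, here is a minimal sketch (my own, not from the article) of a "choose" operator simulated with backtracking, using Python generators:

# A minimal sketch: simulating a nondeterministic "choose" operator
# with backtracking. Each element of `choices` is one choice point.

def solve(choices, constraint, partial=()):
    """Try every alternative at each choice point; backtrack on failure."""
    if not choices:
        if constraint(partial):
            yield partial          # a branch that did not fail
        return                     # otherwise this branch fails: backtrack
    for alternative in choices[0]:
        # Tentatively commit to one alternative; the recursion restores
        # the old state automatically when this branch fails.
        yield from solve(choices[1:], constraint, partial + (alternative,))

# Example: find x, y, z in 1..5 with x + y == z (pure illustration).
solutions = solve([range(1, 6)] * 3, lambda p: p[0] + p[1] == p[2])
print(next(solutions))   # (1, 1, 2)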

Consider an algorithm for coloring a map of the world. No two adjacent countries may share a color. The algorithm starts at an arbitrary country and colors it an arbitrary color. It moves along, coloring country after country, until, "uh oh", two adjacent countries have the same color. Well, now we have to backtrack and make a new color choice. We aren't making choices the way a nondeterministic algorithm would; that isn't possible on our deterministic computers. Instead, we are simulating the nondeterministic algorithm with backtracking. A nondeterministic algorithm would have made the right choice for every country.
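A backtracking simulation of that coloring process might look like the following sketch; the tiny adjacency map and the color list are invented for illustration:

# Rough sketch of the coloring-with-backtracking idea described above.
# The tiny "map" and the color list are invented for illustration.

adjacent = {                      # hypothetical adjacency relation
    "A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"},
}
colors = ["red", "green", "blue"]

def color_map(countries, assignment):
    if not countries:
        return assignment                     # every country colored
    country, rest = countries[0], countries[1:]
    for color in colors:                      # a deterministic stand-in for "guessing"
        if all(assignment.get(n) != color for n in adjacent[country]):
            assignment[country] = color
            result = color_map(rest, assignment)
            if result is not None:
                return result
            del assignment[country]           # "uh oh": undo and backtrack
    return None                               # all choices failed here

print(color_map(list(adjacent), {}))
# {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}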

In the worst case, the running time of backtracking on a deterministic computer is factorial, i.e. it is in O(n!).
Where a non-deterministic computer could instantly guess correctly at each step, a deterministic computer has to try all possible combinations of choices.
Since it is impossible to build a non-deterministic computer, what your professor probably meant is the following:
A provably hard problem in the complexity class NP (the class of all problems that a non-deterministic computer can solve efficiently by always guessing correctly) cannot be solved more efficiently on real computers than by backtracking.
The above statement is true if the complexity classes P (all problems that a deterministic computer can solve efficiently) and NP are not the same. This is the famous P vs. NP problem. The Clay Mathematics Institute has offered a $1 million prize for its solution, and the problem has resisted proof for many years; most researchers believe that P is not equal to NP.
A simple way to sum it up: most interesting problems that a non-deterministic computer could solve efficiently by always guessing correctly are so hard that a deterministic computer would probably have to try all possible combinations of choices, i.e. use backtracking.
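To make the guess-versus-verify distinction concrete, here is a sketch using subset sum, a classic NP-complete problem: verifying a guessed certificate takes polynomial time, while the deterministic search falls back on backtracking through the choices.

# Sketch: subset sum. A nondeterministic machine could "guess" the right
# subset and verify it in polynomial time; a deterministic one backtracks.

def verify(numbers, subset, target):
    """Polynomial-time check of a guessed certificate."""
    return sum(numbers[i] for i in subset) == target

def search(numbers, target, i=0, chosen=()):
    """Deterministic backtracking: try both choices at every index."""
    if sum(numbers[j] for j in chosen) == target:
        return chosen
    if i == len(numbers):
        return None                                        # fail: backtrack
    return (search(numbers, target, i + 1, chosen + (i,))  # take numbers[i]
            or search(numbers, target, i + 1, chosen))     # or skip it

nums = [3, 9, 8, 4, 5, 7]
print(verify(nums, (0, 3, 5), 14))   # True: 3 + 4 + 7 == 14
print(search(nums, 14))              # (0, 3, 5), found after many dead ends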

Thought experiment:
1) Hidden from view, there is some distribution of electric charges; you feel the force they exert, and you measure the potential field they create. Tell me exactly the positions of all the charges.
2) Take some charges and arrange them. Tell me exactly the potential field they create.
Only the second question has a unique answer. This is the non-uniqueness of vector fields. This situation is analogous to some of the non-deterministic algorithms you are considering. Also consider, in mathematics, limits that do not exist because they give different answers depending on the direction from which you approach a discontinuity.

I wrote a maze runner that uses backtracking (of course), which I'll use as an example.
You walk through the maze. When you reach a junction, you flip a coin to decide which route to follow. If you hit a dead end, you backtrack to the junction and take another route. If you have tried them all, you return to the previous junction.
This algorithm is non-deterministic, not because of the backtracking, but because of the coin flipping.
Now change the algorithm: when you reach a junction, always try the leftmost route you haven't tried yet first. If that leads to a dead end, return to the junction and again try the leftmost route you haven't tried yet.
This algorithm is deterministic. There's no chance involved, it's predictable: you'll always follow the same route in the same maze.
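The deterministic variant can be written down directly. Here is a small sketch with a made-up grid maze, trying directions in a fixed order instead of flipping coins:

# Sketch of the deterministic version: at each junction, try routes in a
# fixed order (here: left, up, right, down) and backtrack from dead ends.
# The maze layout is invented for illustration; '#' is a wall.

MAZE = ["#########",
        "#S....#E#",
        "#.###.#.#",
        "#...#...#",
        "#########"]

def solve(r, c, visited):
    if MAZE[r][c] == "E":
        return [(r, c)]                                   # found the exit
    visited.add((r, c))
    for dr, dc in [(0, -1), (-1, 0), (0, 1), (1, 0)]:     # fixed order: no coin flips
        nr, nc = r + dr, c + dc
        if MAZE[nr][nc] != "#" and (nr, nc) not in visited:
            path = solve(nr, nc, visited)
            if path:                                      # this route worked
                return [(r, c)] + path
    return None                                           # dead end: backtrack

start = next((r, row.index("S")) for r, row in enumerate(MAZE) if "S" in row)
print(solve(*start, set()))   # same maze, same route, every single run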

If you allow backtracking, you allow infinite looping in your program, which makes it non-deterministic, since the actual path taken may always include one more loop.

Non-deterministic Turing machines (NDTMs) can take multiple branches in a single step. DTMs, on the other hand, follow a trial-and-error process.
You can think of DTMs as regular computers. In contrast, quantum computers are often compared to NDTMs and can solve non-deterministic problems much more easily (e.g. see their application in breaking cryptography). So backtracking would effectively be a linear process for them.

I like the maze analogy. Let's think of the maze, for simplicity, as a binary tree in which there is only one path out.
Now you want to try a depth first search to find the correct way out of the maze.
A non-deterministic computer would, at every branching point, duplicate/clone itself and run the further calculations in parallel. It is as if the person in the maze duplicated/cloned himself (like in the movie The Prestige) at each branching point, sending one copy of himself into the left subbranch of the tree and the other copy into the right subbranch.
The computers/persons who end up at a dead end die (terminate without an answer).
Only one computer will survive (terminate with an answer): the one who gets out of the maze.
The difference between backtracking and non-determinism is the following.
In the case of backtracking, there is only one computer alive at any given moment. It does the traditional maze-solving trick, simply marking its path with chalk, and when it reaches a dead end it just backtracks to a branching point whose subbranches it has not yet explored completely, exactly like a depth-first search.
In contrast:
A non-deterministic computer can clone itself at every branching point and check for the way out by running parallel searches in the subbranches.
So the backtracking algorithm simulates/emulates the cloning ability of the non-deterministic computer on a sequential/non-parallel/deterministic computer.

Related

Optimal algorithm to find exit of a maze with no information

I have to determine a way for a robot to get out of a maze. The thing is that the layout of the maze is unknown, and the position of the exit is unknown too. The robot also starts at an unknown position in the maze.
I found 3 solutions, but I have a hard time knowing which one I should use, because in the end it seems that the solutions will be purely random anyway.
I have those 3 solutions :
1) The basic "human" strategy(?), where you put your hand on a wall and walk through the whole maze if necessary. I also keep a "turn counter" variable to avoid situations where the robot loops.
2) Depth first search
3) Making the robot choose direction randomly
The random one seems the worst, because it could take forever to find the exit (but on the other hand, it could be the fastest too...). I'm not sure about the other two, though.
Also, is there a way to have some kind of heuristic? Again the lack of information makes me think that it's impossible, but maybe I'm missing something.
Last thing: when the robot finds the exit, it will have to go back to its start position using A*. This means that during the first part, where it looks for the exit, it will have drawn a map of the maze that it will use for the second part. Maybe this can also help in choosing the best algorithm for the first part, but yeah, I don't see why one would be better.
Could someone help me please? Thanks. (Also, sorry for my English.)
Problems like this are categorised as real-time search; perhaps the best known example is Learning Real-Time A*, where you combine information about what you've seen before (whether you've had to backtrack, or know a cheaper way to reach a state) with the actions you can take. As is the case in areas like reinforcement learning, some level of randomness helps balance exploration and exploitation.
Assuming your graph is undirected, time invariant, and the initial and exit node exist in the same component, then choosing a direction at random at each vertex is equivalent to a random walk on a graph.
Regardless of whether the graph is initially known or not, this is a very well understood field of mathematics, equivalent to an absorbing Markov chain. The time to reach the exit state in such cases has a discrete phase-type distribution, which is often quite slow; it's also worth noting that in pathological cases it's possible to design a maze where a random walk will outperform DFS.
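As a rough illustration of how slow a random walk can be, here is a small simulation sketch; the corridor graph and the trial count are invented, and the roughly (n-1)^2 expected hitting time for a path graph is a standard result:

# Sketch: estimating the expected number of steps for a random walk to
# reach an absorbing "exit" node. The graph and sizes are invented examples.
import random

def random_walk_steps(neighbors, start, exit_node, rng):
    node, steps = start, 0
    while node != exit_node:
        node = rng.choice(neighbors[node])   # pick a direction uniformly
        steps += 1
    return steps

# A simple corridor: 0 - 1 - 2 - ... - 9, with the exit at node 9.
n = 10
neighbors = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

rng = random.Random(0)
trials = [random_walk_steps(neighbors, 0, n - 1, rng) for _ in range(2000)]
print(sum(trials) / len(trials))   # close to (n-1)^2 = 81 for this corridor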
#beaker is right in that the first two you suggested should lead to the same result. However, you may be able to improve the search a little by keeping track of any loops you find: if the robot finds itself in a spot it has already visited and needs to backtrack after coming to a dead end, there may be no need to go back so far if it has found a shortcut. Also, use the segments that have been mapped on the way out and apply Dijkstra's algorithm or A* to them to find the most efficient way back. There may be a faster way back along an unexplored path, but this would be the safest way to get a quick result.
Obviously, implementing the checks for loops to prevent unneeded backtracking will make things more complicated to implement, though using Dijkstra's algorithm for the return to the start should not be as complex.
If you are feeling ambitious, now that you have found the exit you could use this information to give the robot a sense of direction, though in a randomly generated maze that may not help much.

Why is chess, checkers, Go, etc. in EXP but conjectured to be in NP?

If I tell you the moves for a game of chess and declare who wins, why can't it be checked in polynomial time if the winner does really win? This would make it an NP problem from my understanding.
First of all: the number of positions you can set up with 32 pieces on an 8x8 board is limited. We need to consider any pawn being promoted to any other piece and include every such position, too. Of course, among all these there are positions that cannot be reached by following the rules of chess, but this does not matter. The important thing is: we have a limit. Let's simply name this limit MaxPositions.
Now for any given position, let's build up a tree as follows:
The given position is the root.
Add any position (legal chess position or not) as a child.
For any of these children, add any position as a child again.
Continue this way, until your tree reaches a depth of MaxPositions.
I'm now too tired to think about whether we need one additional level of depth for the idea (proof?), but heck, let's just add it. The important thing is: the tree constructed like this is finite.
Next step: from this tree, remove any subtree that is not reachable from the root via legal chess moves. Repeat this step for the remaining children, grandchildren, ..., until there is no unreachable position left in the whole tree. The number of steps must be finite, as the tree is finite.
Now do a breadth-first search and turn any node into a leaf if its position has already been encountered. It must be marked as such (a draw candidate?). Do the same for any mate position.
How do we find out whether there is a forced mate? In any subtree, if it is your turn, there must be at least one child leading to a forced mate. If it is the opponent's move, every child must in turn lead to a forced mate. This applies recursively, of course. However, as the tree is finite, this whole algorithm terminates.
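That recursive condition can be written down compactly. Here is a sketch in Python over an abstract game tree, not real chess: the representation is invented, with "mate" marking a position where the player to move is checkmated.

# Sketch of the recursive forced-mate test described above, on an abstract
# game tree. A leaf "mate" means the player to move is checkmated; an inner
# node is simply the list of positions reachable in one move.

def forced_win(node, my_turn):
    if node == "mate":           # the player to move has been mated...
        return not my_turn       # ...which is a win for me iff it is their turn
    if not node:                 # no legal continuation and no mate
        return False
    if my_turn:                  # my move: one winning child suffices
        return any(forced_win(child, False) for child in node)
    else:                        # their move: every reply must still lose
        return all(forced_win(child, True) for child in node)

# Invented tree: my move at the root. In the first branch, every opponent
# reply leaves me a mating move; in the second, one reply escapes.
tree = [
    [["mate"], ["mate"]],        # forcing line
    [["mate"], []],              # non-forcing line
]
print(forced_win(tree, True))    # True, via the first branch

Because the tree is finite, the recursion always terminates, which is exactly the point being made here.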
[censored], this whole algorithm is bounded! There is some constant limiting the whole thing. So although the limit is incredibly high (and far beyond what current hardware can handle), it is a limit (please do not ask me to calculate it...). So: our problem actually is O(1)!!!
The same goes for checkers, Go, and so on.
So far, this covers the forced mate. What is the best move? First, check whether we can find a forced mate. If so, fine, we have found the best move. If there are several, select the one requiring the fewest moves (there might still be more than one...).
If there is no such forced mate, then we need to measure which move is 'best' by some other means, possibly by counting the number of available continuations leading to mate. Other suggestions for a measure? As long as we operate on this tree from top to bottom, we remain bounded. So again, we are O(1).
Now what did we miss? Have a look at the link in your comment again. They are talking about N x N checkers! The author varies the size of the board!
So look back at how we constructed the tree. I think it is obvious that the tree grows exponentially with the size of the board (try to prove it yourself...).
I know very well that this answer is not a proof that the problem is in EXPTIME. Actually, I admit, it is not really an answer at all. But I think what I illustrated still gives quite a good impression of the complexity of the problem. And as long as no one provides a better answer, I dare to claim that this is better than nothing at all...
Addendum, considering your comment:
Allow me to refer to Wikipedia. Actually, it should be sufficient to transform the other problem in exponential time, not polynomial time as in the link, since applying the transformation and then solving the resulting problem still remains exponential. But I'm not sure about the exact definition...
It is sufficient to show this for a problem that is already known to be EXPTIME-complete (transforming any other problem to that one and then to the chess problem remains exponential, if both transformations are exponential).
Apparently, J.M. Robson found a way to do this for N x N checkers. It must be possible for generalized chess too, probably by simply modifying Robson's algorithm. I do not think it is possible for classical 8x8 chess, though...
O(1) applies to classical chess only, not to generalized chess. But it is the latter that is conjectured not to be in NP! Actually, up to this addendum my answer was lacking one proof: that the size of the bounded tree for a given N does not grow faster than exponentially with growing N (so the answer actually is incomplete!).
And to prove that generalized chess is not in NP, we would have to prove that there is no polynomial algorithm solving the problem on a non-deterministic Turing machine. This I leave open again, so my answer remains even less complete...
If I tell you the moves for a game of chess and declare who wins, why can't it be checked in polynomial time if the winner does really win? This would make it an NP problem from my understanding.
Because in order to check that the winner (white) really wins, you would have to evaluate, at every step, all the possible moves the loser (black) could have made to win instead. A single game record does not certify that black had no better defense anywhere along the way, so the check itself becomes exponential.

How genetic algorithm is different from random selection and evaluation for fittest?

I have been learning about genetic algorithms for 2 months. I know about the process of initial population creation, selection, crossover, mutation, etc. But I could not understand how we are able to get better results in each generation and how it's different from a random search for the best solution. Below I use one example to explain my problem.
Let's take the example of the travelling salesman problem. Say we have several cities X1, X2, ..., X18 and we have to find the shortest path that visits them all. When we do the crossover after selecting the fittest individuals, how do we know that the crossover will produce a better chromosome? The same applies to mutation.
It feels like: take one arrangement of cities, calculate the total distance to travel them, then store the distance and the arrangement. Then choose another arrangement/combination; if it is better than the previous arrangement, save the current arrangement and distance, else discard it. By doing this we also get some solution.
I just want to know where the difference lies between random selection and a genetic algorithm. In a genetic algorithm, is there any criterion preventing us from selecting an arrangement/combination of cities that we have already evaluated?
I am not sure if my question is clear. But I am open, I can explain more on my question. Please let me know if my question is not clear.
A random algorithm starts with a completely blank sheet every time. A new random solution is generated each iteration, with no memory of what happened before during the previous iterations.
A genetic algorithm has a history, so it does not start with a blank sheet, except at the very beginning. Each generation the best of the solution population are selected, mutated in some way, and advanced to the next generation. The least good members of the population are dropped.
Genetic algorithms build on previous success, so they are able to advance faster than random algorithms. A classic example of a very simple genetic algorithm is the Weasel program. It finds its target far more quickly than random chance because each generation starts from a partial solution, and over time those partial solutions get closer to the required solution.
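For reference, here is a minimal Weasel-style sketch in Python; the brood size and mutation rate are arbitrary choices of mine, not Dawkins' original parameters:

# A minimal Weasel-program sketch: each generation keeps the best mutant,
# so partial solutions accumulate instead of starting from scratch.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
rng = random.Random(42)

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))   # matching characters

def mutate(s, rate=0.05):
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in s)

parent = "".join(rng.choice(ALPHABET) for _ in TARGET)   # blank-sheet start
generation = 0
while parent != TARGET:
    offspring = [mutate(parent) for _ in range(100)]     # arbitrary brood size
    parent = max(offspring, key=fitness)                 # keep the best: the "memory"
    generation += 1
print(generation, parent)   # typically a few hundred generations at most

Remove the "keep the best" step and this degenerates into blind random search, which would take astronomically long to hit the 28-character target.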
I think there are two things you are asking about. A mathematical proof that GA works, and empirical one, that would waive your concerns.
Although I am not aware of a general proof, I am quite sure that at least a good sketch of a proof was given by John Holland in his book Adaptation in Natural and Artificial Systems, for optimization problems using binary coding. It is called Holland's schema theorem. But you know, it's a heuristic, so technically it does not have to work. It basically says that short schemata in the genotype that raise the average fitness spread exponentially over successive generations. Crossover then combines them. I think the proof was given only for binary coding and has received some criticism as well.
Regarding your concerns: of course you have no guarantee that a crossover will produce a better result, just as two intelligent or beautiful parents might have stupid, ugly children. The premise of a GA is that this is less likely to happen. (As I understand it,) the proof for binary coding hinges on the theorem saying that good partial patterns will start emerging, and given a long enough genotype, such patterns residing in different specimens have a chance to be combined into one individual, improving its fitness in general.
I think this is fairly easy to understand in terms of the TSP: crossover helps to accumulate good sub-paths into one specimen. Of course it all depends on the choice of the crossover method.
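For instance, a simplified variant of order crossover (OX) tries to preserve exactly such sub-paths. A sketch in Python, with invented tours:

# Sketch of a simplified order crossover (OX) for TSP tours: a slice of
# parent 1 is kept intact (a candidate "good sub-path"), and the remaining
# cities are filled in following parent 2's order.
import random

def order_crossover(p1, p2, rng):
    a, b = sorted(rng.sample(range(len(p1)), 2))
    middle = p1[a:b]                              # inherited sub-path
    rest = [city for city in p2 if city not in middle]
    return rest[:a] + middle + rest[a:]           # still a valid permutation

rng = random.Random(1)
parent1 = ["X1", "X2", "X3", "X4", "X5", "X6"]
parent2 = ["X6", "X4", "X2", "X1", "X5", "X3"]
print(order_crossover(parent1, parent2, rng))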
Also, a GA's path towards the solution is not purely random. It moves in a certain direction, with stochastic mechanisms to escape traps (you can even lose the best solutions if you allow it). It works because it wants to move towards the current best solutions, but you have a population of specimens and they effectively share knowledge. They are all similar, but as long as you preserve diversity, new and better partial patterns can be introduced into the population and get incorporated into the best solutions. This is why diversity in the population is regarded as very important.
As a final note, please remember that GAs are a very broad topic and you can modify the basics in nearly every way you want. You can introduce elitism, taboos, niches, etc. There is no one-and-only approach/implementation.

How do I guarantee that a cellular automata generated maze is solvable/interesting?

I am writing a maze generation algorithm, and this Wikipedia article caught my eye. I decided to implement it in Java, which was a cinch. The problem I am having is that while a maze-like picture is generated, the maze often is not solvable and is often not interesting. What I mean by not interesting is that there are a vast number of unreachable places and often there are many solutions.
I implemented the 1234/3 rule (although it is easily changeable, see the comments for an explanation) with a roughly 50/50 distribution at the start. The mazes always reach an equilibrium where there is no change between time steps.
My question is: is there a way to guarantee the maze's solvability from a fixed start and end point? Also, is there a way to make the maze more interesting to solve (fewer solutions, ideally one, and few or no unreachable places)? If this is not possible with cellular automata, please tell me. Thank you.
I don't think it's possible to ensure a solvable, interesting maze through simple cellular automata, unless some specific criteria can be placed on the starting state. Cells have no knowledge of the overall shape, because each cell cannot coordinate with the group as a whole.
If you're insistent on using them, you could do some combination of modification and pathfinding after generation is finished, but other methods (like the ones shown in the Wikipedia article or this question) are simpler to implement and won't result in walls that take up a whole cell (unless you want that).
the root of the problem is that "maze quality" is a global measure, but your automaton cells are restricted to a very local knowledge of the system.
to resolve this, you have three options:
add the global information from outside. generate mazes using the automaton and random initial data, then measure the maze quality (eg using flood fill or a bunch of other maze solving techniques) and repeat until you get a result you like.
use a much more complex set of explicit rules and state. you can work out a set of rules / cell values that encode both the presence of walls and the lengths / quality of paths. for example, -1 would be a wall and a positive value would be the sum of all neighbours above and to the left. then positive values encode the path distance from top left, roughly. that's not enough, but it shows the general idea... you need to encode an algorithm about the maze "directly" in the rules of the system.
use a less complex, but still turing complete, set of rules, and encode the rules for maze generation in the initial state. for example, you could use conway's life and construct an initial state that is a "program" that implements maze generation via gliders etc etc.
if it helps any you could draw a parallel between the above and:
ghost in the machine / external user
FPGA
programming a general purpose CPU
Run a pathfinding algorithm over it. Dijkstra's algorithm would give you a sure way to explore every reachable cell and find all the solutions; A* would give you one good solution.
The difficulty of a maze can be measured by the speed at which these algorithms solve it.
You can add some dead ends in order to cut off some of the solutions.
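Either way, a solvability check is cheap to bolt on after the automaton settles. Here is a minimal flood-fill sketch in Python; the grid representation is invented for illustration:

# Sketch: after the automaton reaches equilibrium, flood-fill (BFS) from the
# start cell to test solvability, and regenerate if the end is unreachable.
from collections import deque

def solvable(grid, start, end):
    """grid[r][c] is True for open cells; flood fill outward from start."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == end:
            return True
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

grid = [[True, True, False],
        [False, True, False],
        [False, True, True]]
print(solvable(grid, (0, 0), (2, 2)))   # True: a path snakes down the middle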

dynamic fitness function for genetic algorithm

I'm not sure I completely understand genetic algorithms and how they work; I'm trying to learn via ai4r: http://ai4r.rubyforge.org/geneticAlgorithms.html
In job shop scheduling, which I believe can be solved by a GA(?), isn't the cost of any single job based on how it relates to its predecessors? I was thinking I would calculate a cost based on the placement of the chromosome, with a dynamic score of how well it is placed rather than a binary value, but I'm not sure this works.
Does anybody have any experience with this? Or does a GA only work when the difference between any two genomes is static?
I hope I have the right terminology here, as I mentioned, I'm just learning.
-----------------------update-----------------------------------
I think I'm using a bit of the wrong terminology here. I referred to 'fitness' when what I actually meant was the cost matrix.
The example I'm working from describes it like this:
Each chromosome must represent a possible solution for the problem. This class contains an array with the list of visited nodes (the cities of the tour). The size of the tour is obtained automatically from the travel costs matrix. You have to assign the costs matrix BEFORE you run the genetic search. The following costs matrix could be used to solve the problem with only 3 cities:
data_set = [ [ 0, 10, 5],
[ 6, 0, 4],
[25, 4, 0]
]
Ai4r::GeneticAlgorithm::Chromosome.set_cost_matrix(data_set)
so in my instance, I'm thinking the 'cost' of each chromosome is dynamic, based on its neighbours.
Since you asked in a comment to make this an answer, I took the liberty of summarizing my earlier responses as well so it's all in one place. The answer to the specific question of "what is a penalty term" is in item #3 below.
The way a standard genetic algorithm works is that each "chromosome" is a complete solution to the problem: in your case, an ordering for the jobs to be submitted. The confusion, I think, centers on the notion that because the individual contribution to fitness made by a particular job in that schedule varies according to the rest of the schedule, you must need something "dynamic". That's not really true. From the point of view of the GA, the only thing that has a fitness is the entire solution. So a dynamic problem is one in which the fitness of a whole schedule can change over time. Going back to the TSP, a dynamic problem would be one in which touring cities in the order A, B, C, D, E actually had a different distance each time you tried it. Even though the cost of a tour through B depends on which cities come before and after B in the tour, once you decide that, the costs are static, and because the GA only ever receives costs for entire tours, all it knows is that [A,B,C,D,E] has a constant fitness. No dynamic trickery needed.
Now, your second question was how to handle constraints like, for the TSP example, what if you need to ensure that the salesman gets to Paris by a certain time? Typically, there are three ways to try to handle this.
Never allow a solution to be generated in which he doesn't get there before 2:00. Sometimes this is easy, other times it's very hard. For instance, if the constraint was "he cannot start at city X", it's fairly easy to just not generate solutions that start with X. Often, though, simply finding valid solutions can be hard, and so this approach doesn't really work.
Allow constraints to be violated, but fix them afterward. In the TSP example, you let crossover and mutation produce any possible tour, but then scan through it to see if he gets to Paris too late. If so, swap the position of Paris with some earlier city in the tour. Again though, sometimes it can be hard to figure out a good way to repair violations.
Penalize the fitness of an infeasible solution. Here, the idea is that even if I can't prevent him from getting to Paris too late and I can't fix it if he does, I can at least make the fitness arbitrarily worse. For TSP, the fitness is the length of the tour. So you might say that if a tour gets him to Paris too late, the fitness is the length of the tour + 100. That lets the solution stay in the population (it might be very good otherwise, so you want it to have a chance to pass on some of its genes), but you make it less likely to be selected, because your selection and replacement methods pick individuals with better fitness values.
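As a rough illustration of that third option, here is a sketch in Python of a penalized fitness function for the TSP example; the distance table, the deadline constraint, and the penalty size are all invented:

# Sketch of the penalty-term idea: infeasible tours survive, but less fit.
# The distances, the deadline city, and the penalty of 100 are invented.

def tour_length(tour, dist):
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def fitness(tour, dist, deadline_city, deadline_pos, penalty=100):
    """Lower is better: tour length, plus a penalty if the constrained
    city appears too late in the tour ("gets to Paris after 2:00")."""
    length = tour_length(tour, dist)
    if tour.index(deadline_city) > deadline_pos:
        length += penalty            # stays in the population, just less fit
    return length

dist = {"A": {"B": 1, "P": 5}, "B": {"A": 1, "P": 2}, "P": {"A": 5, "B": 2}}
print(fitness(["A", "B", "P"], dist, deadline_city="P", deadline_pos=1))
# 103: tour length 3, plus 100 because "P" is visited third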
For your JSP problem, typically you're looking to minimize the makespan. The same three options are available to you if you do have some constraints. But from what I can tell, you don't really have such constraints. I think you're trying to inject too much knowledge into the process rather than letting the evolutionary algorithm come up with it on its own. That is, you don't necessarily worry about telling the GA that some arrangements of jobs are better than others. You just assign higher fitness to the better ones and let the process converge.
That said, injecting information like this is often a really good thing to do, but you want to have a good understanding of the basic algorithm first. Let's say that we know that for TSP, it's more likely that a good solution will connect cities that are close to one another. The way I would use that information inside a GA would be to generate random solutions non-uniformly (perhaps with a greedy heuristic). I might also replace the standard crossover and mutation algorithms with something customized. Mutation is typically easier to do this with than crossover. To mutate a TSP solution, I might pick two connected cities, break the connection, and then look for a way to reconnect them that was "closer". That is, if a tour is [A,B,C,D,E,F,G,H], I might pick the edge [B,C] at random, and then look for another edge, maybe [F,G], such that when I connected them crossways to get [A,B,G,D,E,F,C,H], the total tour length was lower. I could even extend that mutation beyond one step -- create a loop that keeps trying to break and reconnect edges until it can't find a shorter tour. This leads to what is usually called a hybrid GA because it's a GA hybridized with a local search; sometimes also called a Memetic Algorithm. These sorts of algorithms usually outperform a black-box GA because you're giving the algorithm "hints" to bias it towards trying things you expect to be good.
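The break-and-reconnect mutation described above is essentially what the TSP literature calls a 2-opt move. A minimal sketch, with an invented one-dimensional "map" so the optimal tour is easy to see:

# Sketch of the "break two edges and reconnect crossways" mutation (a 2-opt
# move), extended into the loop described above: keep improving until stuck.

def length(tour, dist):
    """Length of the closed tour, including the edge back to the start."""
    return sum(dist(a, b) for a, b in zip(tour, tour[1:] + tour[:1]))

def two_opt(tour, dist):
    """Repeatedly reverse segments while doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                # Reconnect the broken edges crossways by reversing
                # tour[i:j]; keep the change only if the tour got shorter.
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if length(candidate, dist) < length(tour, dist):
                    tour, improved = candidate, True
    return tour

# Invented cities on a number line, so distance is just the gap between them.
cities = [0, 3, 1, 4, 2, 5]
best = two_opt(cities, dist=lambda a, b: abs(a - b))
print(best, length(best, lambda a, b: abs(a - b)))   # optimal loop length is 10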
I think this idea of a memetic algorithm is pretty close to what you were hitting on in your original question of wondering how to deal with the fact that the contribution to fitness from a particular job depends on where the other jobs are in the schedule. The only stumbling block there is that you were a bit unlucky in that the somewhat reasonable idea of thinking of this as "dynamic" leads you a bit astray, as "dynamic" actually means something entirely different here.
So to wrap up, there's nothing "dynamic" about your problem, so the things people do with GAs for dynamic problems will be entirely unhelpful. A standard GA will work with no fancy tricks. However, the idea of using information you have about what schedules work better can be introduced into the genetic operators, and will probably result in a significantly better overall algorithm.
You'd use a GA to find, say, the best order to do a number of jobs in, or the set of jobs that makes the best use of a day's resources. So yes, they'd be related to each other.
Your fitness measure would then apply to the whole sequence, e.g. 1,3,4,5,6,2.
Look at, say, a shortest-path algorithm, and it starts to make sense.
