I am new to decidability. I read about the halting problem but didn't understand what it is actually implying; the explanation I found only confused me.
Can anyone provide a sound explanation, or at least some details? It would be of much help.
You can think of a Turing machine as a sort of theoretical computer that can execute programs. A program for a Turing machine consists of a set of states and transitions between these states. Programs running on Turing machines have access to a single input source, called the tape, which has some string (possibly empty) the program can process. All programs for Turing machines either return true (halt-accept), return false (halt-reject), or fail to ever return anything at all, like while (true) ; return true;, which never reaches its return. Depending on your definition it may also be possible for the machine to throw an exception (crash), but typically crashing is treated as something like try { /* crash */ } catch (Exception e) { return false; }, so that crashing means halt-reject.
The halting problem, then, asks whether you can write a program for a Turing machine, whose input is another program for a Turing machine together with some input string, and which halt-accepts (returns true) if the program it has been given halts on the input provided, and halt-rejects (returns false) otherwise (i.e., if that program never halts).
It turns out the answer is that no such general program exists for Turing machines. Suppose one did exist; call it M. Given a program m and an input i, it accepts if m halts on i and rejects otherwise. We can make another program specifically designed to fool M:
N(i)
1. if M(N, i) then loop forever;
2. otherwise return true
Now consider whether M works on N:
Suppose M says that N halts on input i. Then N will loop forever, and M will have been wrong.
Suppose M says that N loops forever on input i. Then N will return true and halt, so M will have been wrong.
In either case, M comes up with the wrong answer for our N, and since we made no assumptions about M other than that it exists and solves the halting problem, we may safely conclude that no machine that solves the halting problem exists.
(M may be passed into the program N as input if it is preferred not to reference M directly).
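To make the construction concrete, here is a rough sketch in Java. The halts method stands in for M and is purely hypothetical (we assume it exists only for the sake of contradiction), and the way N obtains its own source code is hand-waved:

public class HaltingParadox {
    // Hypothetical decider M, assumed (for contradiction) to exist:
    // returns true iff 'program' halts when run on 'input'.
    static boolean halts(String program, String input) {
        throw new UnsupportedOperationException("no such method can exist");
    }

    // The machine N from the argument above: ask M about N itself,
    // then do the opposite of whatever M predicted.
    static boolean n(String input) {
        String sourceOfN = "<the source code of this very method>"; // hand-waved self-reference
        if (halts(sourceOfN, input)) {
            while (true) { }   // M said "N halts", so loop forever
        }
        return true;           // M said "N loops", so halt immediately
    }
}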
If I have code for some function f (which takes one input, for simplicity), I need to decide whether the input x affects the output f(x), i.e., whether f is a constant function as defined below.
Define f to be a constant function if the output of f is invariant w.r.t. x; this should hold for ALL inputs. So, for example, if we have f(x) = 0^x, it may output 0 for all inputs except x = 0, where it may output an error. So f is not a constant function.
I can only do static analysis of the code and assume the code is Java source for simplicity.
Is this possible?
This is obviously at least as hard as solving the Halting Problem (proof left as an exercise), so the answer is "no", this is not possible.
It is almost certainly possible. In most cases. Where there aren't weird things going on.
For normal functions, the ordinary, useful kind that actually return values rather than doing their own little thing, yes.
For a simple function, not recursive, no nastiness of that sort, doing it manually, I would probably make the static-analysis equivalent of a sign chart, where I examine the code and determine every value of x that might possibly be a boundary condition or such (e.g. the code has if (x < 0) somewhere in it, so I check the function for values of x near 0). If this sort of attempt is doomed to fail please tell me before I try to use it on something.
Using brute force to grind away at it could work, unless you are working with quadruple-precision x values or something similarly sized, because then brute force could take years. Although at that point it's not really static analysis anymore.
Static analysis generally means having a computer tell you by looking at the code, not you looking at it yourself (at least not very much). Tools exist for doing this in many languages; Wikipedia has such a list, including some free or even open-source ones.
The ultimate proof that something can be done is for it to have been done already.
Since you'd call a non-terminating function non-constant, here's the reduction from your problem to the halting problem:
void does_it_halt(...);

int f(int x) {
    if (x == 1) {
        does_it_halt();
    }
    return 0;
}
Asking if f is constant is equivalent to asking if does_it_halt halts. Therefore, what you're asking for is impossible, since the halting problem is undecidable.
How would you determine that a loop is an infinite loop and break out of it?
Does anyone have an algorithm, or can anyone assist me on this one?
Thanks
There is no general-case algorithm that can determine, for every program in a Turing-complete language, whether it is in an infinite loop; this is basically the Halting Problem.
The idea of proving it is simple:
Assume you had such an algorithm A.
Build a program B that invokes A on itself [on B].
if A answers "the program will halt" - do an infinite loop
else [A answers that B doesn't halt] - halt immediately
Now, assume you invoke A on B - the answer will definitely be wrong, thus A cannot exist.
Note: the above is NOT a formal proof, just a sketch of it.
As written by others, it cannot be determined.
However, if you want to have some checking, you can use the WatchDog design pattern.
This is a separate thread that checks whether a task is still active. Your own thread should regularly give a signal to say it is alive. Make sure this signal is not set from inside the (possibly infinite) loop itself, otherwise the watchdog will think the task is still making progress.
If there was no signal, the program is inside an infinite loop or has stopped and the watchdog can act on it.
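Here is a minimal sketch of that pattern in Java; the class and method names are illustrative, not a standard API. The worker thread calls heartbeat() regularly from outside the suspect loop, and the watchdog thread raises the alarm when the heartbeat goes stale:

import java.util.concurrent.atomic.AtomicLong;

public class Watchdog {
    private final AtomicLong lastHeartbeat = new AtomicLong(System.currentTimeMillis());
    private final long timeoutMillis;

    public Watchdog(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // The worker calls this regularly, from outside any loop that might get stuck.
    public void heartbeat() {
        lastHeartbeat.set(System.currentTimeMillis());
    }

    // Separate thread that checks whether the worker is still making progress.
    public void start(Runnable onStall) {
        Thread t = new Thread(() -> {
            while (true) {
                try {
                    Thread.sleep(timeoutMillis);
                } catch (InterruptedException e) {
                    return;
                }
                if (System.currentTimeMillis() - lastHeartbeat.get() > timeoutMillis) {
                    onStall.run();   // no heartbeat seen: assume hung or stopped
                    return;
                }
            }
        });
        t.setDaemon(true);
        t.start();
    }
}

Note that a stale heartbeat only tells you the task has stopped making progress (or died); it does not prove the loop is actually infinite.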
I'm going over the proof for The Halting Problem in Intro to the Theory of Computation by Sipser and my main concern is about the proof below:
If TM M doesn't know when it's looping (it can't accept or reject, which is why a TM is only Turing-recognizable for all strings), then how could the decider H decide whether M could possibly be in a loop? The same problem will carry through when TM D does its processing.
After reading this and trying to visualize the proof I came up with this code which is a simplified version of the code in this answer to a related question:
function halts(func) {
    // Insert code here that returns "true" if "func" halts and "false" otherwise.
}

function deceiver() {
    if (halts(deceiver))
        while (true) { }
}
If halts(deceiver) returns true, deceiver will run forever, and if it returns false, deceiver will halt, which contradicts the definition of halts. Hence, the function halts is impossible.
This is a "proof by contradiction", a reductio ad absurdum. (Latin phrases are always good in theory classes... as long as they make sense, of course.)
This program H is just a program with two inputs: a string representing a program for some machine, and an input. For purposes of the proof, you simply assume the program H is correct: it will simply halt and accept if M accepts w. You don't need to think about how it would do that; in fact, we're about to prove it can't, that no such program H can exist, ...
BECAUSE
if such a program existed, we could immediately construct another program H' that H couldn't decide. But by the assumption H can decide everything, so no such program H' should exist. Forced into a contradiction, we conclude that no program defined as we defined H is possible.
By the way, the reductio method of proof is more controversial than you might expect, considering how often it's used, especially in Computer Science. You shouldn't be embarrassed to find it a little odd. The magic term is "non-constructive" and if you feel really ambitious, ask one of your professors about Errett Bishop's critique of non-constructive mathematics.
I have written a simple brainfuck interpreter in the MATLAB script language. It is fed random bf programs to execute (as part of a genetic algorithm project). The problem I face is that the program turns out to have an infinite loop in a sizeable number of cases, and hence the GA gets stuck at that point.
So, I need a mechanism to detect infinite loops and avoid executing that code in bf.
One obvious (trivial) case is when I have
[]
I can detect this and refuse to run that program.
For the non-trivial cases, I figured out that the basic idea is: to determine how one iteration of the loop changes the current cell. If the change is negative, we're eventually going to reach 0, so it's a finite loop. Otherwise, if the change is non-negative, it's an infinite loop.
Implementing this is easy for the case of a single loop, but with nested loops it becomes very complicated. For example, (in what follows (1) refers to contents of cell 1, etc. )
++++ Put 4 in 1st cell (1)
>+++ Put 3 in (2)
<[ While( (1) is non zero)
-- Decrease (1) by 2
>[ While( (2) is non zero)
- Decrement (2)
<+ Increment (1)
>]
(2) would be 0 at this point
+++ Increase (2) by 3 making (2) = 3
<] (1) was decreased by 2 and then increased by 3, so net effect is increment
and hence the code runs on and on. A naive check of the number of +'s and -'s done on cell 1, however, would say the number of -'s is more, so would not detect the infinite loop.
Can anyone think of a good algorithm to detect infinite loops, given arbitrary nesting of arbitrary number of loops in bf?
EDIT: I do know that the halting problem is unsolvable in general, but I was not sure whether there did not exist special case exceptions. Like, maybe Matlab might function as a Super Turing machine able to determine the halting of the bf program. I might be horribly wrong, but if so, I would like to know exactly how and why.
SECOND EDIT: I have written what I purport to be an infinite loop detector. It probably misses some edge cases (or less probably, somehow escapes Mr. Turing's clutches), but seems to work for me as of now.
In pseudocode form, here it goes:
subroutine bfexec(bfprogram)
begin
    Loop through the bfprogram:
        If (current character is '[')
            Find the corresponding ']'
            Store the code between the two brackets in, say, 'subprog'
            Save the value of the current cell in oldval
            Call bfexec recursively with subprog
            Save the value of the current cell in newval
            If (newval >= oldval)
                Raise an 'infinite loop' error and exit
            EndIf
            /* Do other characters' processing */
        EndIf
    EndLoop
end
Alan Turing would like to have a word with you.
http://en.wikipedia.org/wiki/Halting_problem
When I used linear genetic programming, I just used an upper bound for the number of instructions a single program was allowed to do in its lifetime. I think that this is sensible in two ways: I cannot really solve the halting problem anyway, and programs that take too long to compute are not worthy of getting more time anyway.
Let's say you did write a program that could detect whether this program would run in an infinite loop. Let's say for the sake of simplicity that this program was written in brainfuck to analyze brainfuck programs (though this is not a precondition of the following proof, because any language can emulate brainfuck and brainfuck can emulate any language).
Now let's say you extend the checker program to make a new program. This new program exits immediately when its input loops indefinitely, and loops forever when its input exits at some point.
If you input this new program into itself, what will the results be?
If this new program loops forever when given itself as input, then by its own definition it should exit immediately when run with itself as input. And vice versa. The checker program cannot possibly exist, because its very existence implies a contradiction.
As has been mentioned before, you are essentially restating the famous halting problem:
http://en.wikipedia.org/wiki/Halting_problem
Edit: I want to make clear that the above disproof is not my own, but is essentially the famous disproof Alan Turing gave back in 1936.
State in bf is a single array of chars.
If I were you, I'd take a hash of the bf interpreter state on every "]" (or once in rand(1, 100) "]"s*) and assert that the set of hashes is unique.
The second (or more) time I see a certain hash, I save the whole state aside.
The third (or more) time I see a certain hash, I compare the whole state to the saved one(s) and if there's a match, I quit.
On every input command (',') I reset my saved states and list of hashes.
An optimization is to only hash the part of state that was touched.
I haven't solved the halting problem - I'm detecting infinite loops while running the program.
*The rand is to make the check independent of loop period
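Here is a rough Java sketch of that bookkeeping, assuming the interpreter calls onLoopEnd at every ']' and onInput at every input command; the class and method names are mine, not anything from the original MATLAB interpreter:

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class CycleDetector {
    private final Set<Integer> seenHashes = new HashSet<>();
    private final Map<Integer, String> savedStates = new HashMap<>();

    // Returns true if the whole interpreter state (tape, data pointer, program
    // position) has repeated; with no input in between, that means the program
    // is in a cycle and will never halt.
    public boolean onLoopEnd(byte[] tape, int pointer, int pc) {
        String state = Arrays.toString(tape) + "|" + pointer + "|" + pc;
        int h = state.hashCode();
        if (seenHashes.add(h)) {
            return false;                  // first time this hash is seen
        }
        String previous = savedStates.get(h);
        if (previous == null) {
            savedStates.put(h, state);     // second sighting: save the full state aside
            return false;
        }
        return previous.equals(state);     // third sighting: cycle iff states match exactly
    }

    // Input can change future behaviour, so forget everything seen so far.
    public void onInput() {
        seenHashes.clear();
        savedStates.clear();
    }
}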
An infinite loop cannot be detected in general, but you can detect whether the program is taking too much time.
Implement a timeout by incrementing a counter every time you run a command (e.g. <, >, +, -). When the counter reaches some large number, which you set by observation, you can say that the program is taking a very long time to execute. For your purpose, "very long" and "infinite" are a good-enough approximation.
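For example, a minimal sketch of such a counter in Java; the limit here is arbitrary and is the kind of number you would tune by observing your own programs:

public class StepCounter {
    // Arbitrary limit, chosen by observing how long "reasonable" programs take.
    private static final long MAX_STEPS = 10_000_000L;

    private long steps = 0;

    // Call this once per executed command ('<', '>', '+', '-', '[', ']', '.', ',').
    void countStep() {
        if (++steps > MAX_STEPS) {
            throw new RuntimeException("Exceeded " + MAX_STEPS
                    + " steps; giving up and treating the program as non-terminating.");
        }
    }
}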
As already mentioned this is the Halting Problem.
But in your case there might be a solution: the Halting Problem is about Turing machines, which have unlimited memory.
If you know that you have an upper limit on memory (e.g. you know you don't use more than 10 memory cells), you can execute your program and cut it off. The idea is that bounded computation space bounds computation time (as you can't write more than one cell in one step). After you have executed as many steps as there are different memory configurations, you can stop: e.g. if you have 3 cells with 256 possible values each, there are at most 256^3 different tape states, so you can stop after executing that many steps. But be careful: there is implicit state as well, like the instruction pointer and the data pointer, which also has to be counted. You can do even better if you save every state configuration: as soon as you see one that you have already had, you have an infinite loop. This approach is definitely much better in run time, but needs much more space (here it might be suitable to hash the configurations).
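As a small illustration (assuming 8-bit cells, so 256 possible values per cell, and counting the data-pointer and instruction-pointer positions as part of the configuration), the bound can be computed like this; the helper name is mine:

import java.math.BigInteger;

public class ConfigurationBound {
    // Upper bound on the number of distinct machine configurations for a bf
    // program with a fixed amount of memory: every combination of cell values,
    // data-pointer position and instruction-pointer position.
    static BigInteger maxConfigurations(int cells, int programLength) {
        return BigInteger.valueOf(256).pow(cells)             // all possible tape contents
                .multiply(BigInteger.valueOf(cells))          // data pointer position
                .multiply(BigInteger.valueOf(programLength)); // instruction pointer position
    }

    public static void main(String[] args) {
        // Any run longer than this must have repeated a configuration, and a
        // repeated configuration (with no input in between) means an infinite loop.
        System.out.println(maxConfigurations(3, 20)); // e.g. 3 cells, 20 instructions
    }
}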
This is not the halting problem; however, it is still not reasonable to try to detect halting even in such a limited machine as a 1000-cell BF machine.
Consider this program:
+[->[>]+<[-<]+]
This program will not repeat a configuration until it has filled up the entire memory, which for just 1000 cells will take about 10^300 years.
If I remember correctly, the halting problem proof was only shown for some extreme case that involved self-reference. However, it's still trivial to show a practical example of why you can't make an infinite-loop detector.
Consider Fermat's Last Theorem. It's easy to create a program that iterates through every combination of numbers (in this case, three numbers and an exponent) and checks whether it is a counterexample to the theorem. If so it halts, otherwise it continues.
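For instance, here is a sketch of such a searcher in Java (using BigInteger to avoid overflow); it halts exactly when a counterexample to a^n + b^n = c^n with n > 2 exists, which by Wiles' proof never happens:

import java.math.BigInteger;

public class FermatSearch {
    // Halts iff some a, b, c >= 1 and n >= 3 satisfy a^n + b^n = c^n.
    // An infinite-loop detector that flagged this program would, in effect,
    // have proved the theorem.
    public static void main(String[] args) {
        for (long bound = 3; ; bound++) {                // ever-growing search box
            for (long a = 1; a <= bound; a++) {
                for (long b = 1; b <= bound; b++) {
                    for (long c = 1; c <= bound; c++) {
                        for (int n = 3; n <= bound; n++) {
                            BigInteger lhs = BigInteger.valueOf(a).pow(n)
                                    .add(BigInteger.valueOf(b).pow(n));
                            if (lhs.equals(BigInteger.valueOf(c).pow(n))) {
                                System.out.printf("Counterexample: a=%d b=%d c=%d n=%d%n", a, b, c, n);
                                return;                  // halt: counterexample found
                            }
                        }
                    }
                }
            }
        }
    }
}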
So if you had an infinite-loop detector, it would be able to prove this theorem, and many, many others (perhaps all others that can be reduced to searching for counterexamples).
In general, any program that involves iterating through numbers and only stopping under some condition would require a general theorem prover to decide whether that condition can ever be met. And that's the simplest case of looping there is.
Off the top of my head (and I could be wrong), I would think it would be a little bit difficult to detect whether or not a program has an infinite loop without actually executing the program itself.
As the conditional execution of portions of the program depends on the execution state of the program, it will be difficult to know the particular state of the program without actually executing the program.
If you don't require that a program with an infinite loop be executed, you could try having an "instructions executed" counter, and only execute a finite number of instructions. This way, if a program does have an infinite loop, the interpreter can terminate the program which is stuck in an infinite loop.