What is Undecidability and its practical application? - computation-theory

I have a vague understanding of undecidability. I get that a problem is undecidable when there exists no algorithm for it, or the Turing machine will never halt, but I can't quite visualize it. Can someone explain it in a better way? Also, I don't get why we are learning about it or what its applications are. Can someone shed some light on this topic?

The fundamental concept being explored here is decision problems. A decision problem is any problem that can be formulated as a yes/no (or true/false) question and expressed in a formal language, such as lambda calculus or the input to a Turing machine.
A problem is undecidable if no algorithm can give the right answer on every possible input. It might help to think about it from the other direction: if we have some undecidable problem and Bob comes along claiming he's got an algorithm that solves it (decides the problem), then there must be at least one input that causes Bob's algorithm to give the wrong answer or loop infinitely.
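For contrast, here's a decidable decision problem sketched in Python: "is n prime?" A single terminating procedure answers it correctly, yes or no, on every possible input.

```python
# A decidable decision problem: "is n prime?" One function answers
# correctly, yes or no, on every possible input, and always halts.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime(97))   # True
print(is_prime(100))  # False
```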
The halting problem is a decision problem - an excellent example of one that is undecidable. In fact, it was the first decision problem to be shown to be undecidable.
For the sake of keeping things concrete, let's say Alice would like to have a program for her Turing machine that can analyze any program and predict with perfect accuracy whether or not it will halt. Bob comes along with his program, which he calls P for "the Perfect Program!" He claims it'll do the job.
Skeptical, Alice takes P and starts writing her own program, which she calls Q (for Quixotic). Like P, Q takes any program as its input. When it gets its input, Q immediately calls P and asks it what the input program is going to do.
If P says the input program is going to run forever, Q prints "Done!" and immediately halts.
If P says the input program is going to halt, Q prints "Looping!" and then enters an infinite loop from which it will never return.
What happens if we run Q and give it its own source code as input (We run Q(Q))?
If P says Q will run forever, Q immediately prints "Done!" and halts, so it can't do that. It would be giving the wrong answer.
If P says Q will halt, Q prints "Looping!" and then enters an infinite loop. So P would be giving the wrong answer here too.
So no matter what P does, it can't give the correct answer. Bob claimed it solved the halting problem (it always gives the right answer, on every input)! But doing so would produce a contradiction, as we see above. So, Bob can't be telling the truth about this program P of his. And in fact, it doesn't even matter what's actually in Bob's program. It could be simple, or it could be ridiculously complex. No matter how sophisticated or clever or elegant it is, it can't solve the problem. Not with 100% accuracy, as decision problems require. No program can. That's what undecidability means.
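The argument can be sketched in Python. `halts` below is a hypothetical stand-in for Bob's program P; no real implementation of it can exist, and the sketch shows why: whatever it predicts about Q, Q does the opposite.

```python
def make_q(halts):
    """Build Alice's Q from any claimed halting oracle `halts`.

    `halts(program)` is supposed to return True if `program` would
    halt on its own source. No such oracle can actually exist.
    """
    def q(program):
        if halts(program):        # P predicts the input halts...
            print("Looping!")
            while True:           # ...so Q loops forever: P was wrong
                pass
        else:                     # P predicts the input loops...
            print("Done!")        # ...so Q halts at once: P was wrong
            return "Done!"
    return q

# Demonstrate with an "oracle" that always predicts "loops forever":
q = make_q(lambda program: False)
print(q(q))   # Q(Q) halts immediately, so that oracle was wrong about Q
```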

Addressing the "practical application" part of this question: compiler development is a good example of where undecidability shows up in real life.
Some compilers make every effort to identify bad behavior that will happen at runtime. For example, it would be pretty cool if your C compiler could tell you, 100% of the time, whether the program you have written is going to loop infinitely or hit a segmentation fault. But it can't! If it could, the compiler would be deciding an undecidable problem (the halting problem, in the case of infinite loops).
I'm not an expert in compilers, but I'd bet that having a pretty solid understanding of decidability is pretty important for researchers / engineers working on cutting edge compilers. Will you and I, as average Joes, have a practical application for decidability? Probably not. But it's still pretty cool!

Related

Python Recursion Understanding Issue

I'm a freshman in CS, and I am having a little trouble understanding recursion in Python.
I have 3 questions that I want to ask, or to confirm if I am understanding the content correctly.
1: I'm not sure about the purpose of using a base case in recursion. Does the base case serve to terminate the recursion somewhere in my program?
2: Suppose I have my recursive call above other code in my program. Will the program run the recursion first and then run the code after it?
3: How do I trace recursion properly with a reasonable rate of correctness? I personally find it really hard to trace recursion, and I can barely find any instruction online.
Thanks for answering my questions.
Yes. The idea is that the recursive call will continue to call itself (with different inputs) until it calls the base case (or one of many base cases). The base case is usually a case of your problem where the solution is trivial and doesn't rely on any other parts of the problem. It then walks through the calls backwards, building on the answer it got for the simplest version of the question.
Yes. Interacting with recursive functions from the outside is exactly the same as interacting with any other function.
Depends on what you're writing, and how comfortable you are with debugging tools. If you're trying to track what values are getting passed back and forth between recursive calls, printing all the parameters at the start of the function and the return value at the end can take you pretty far.
The easiest way to wrap your head around this stuff is to write some of your own. Start with some basic examples (the classic is the factorial function) and ask for help if you get stuck.
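For instance, here's the classic factorial with the trace prints suggested above (parameters on entry, return value on exit), which makes the wind-down to the base case and the walk back up visible:

```python
def factorial(n):
    print(f"factorial({n}) called")             # trace the parameter
    if n <= 1:                                  # base case: trivial answer
        result = 1
    else:
        result = n * factorial(n - 1)           # recursive case
    print(f"factorial({n}) returns {result}")   # trace the return value
    return result

factorial(4)   # prints the calls winding down to 1, then the returns back up
```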
Edit: If you're more math-oriented, you might look up mathematical induction (you will learn it anyway as part of cs education). It's the exact same concept, just taught a little differently.

How do programmers test their algorithm in TopCoder or other competitions?

Good programmers who write programs of moderate to high difficulty in competitions like TopCoder or ACM ICPC have to ensure the correctness of their algorithm before submission.
They are provided with some sample test cases to check the output, but how does that guarantee the program will behave correctly? They can write test cases of their own, but it won't always be possible to know the correct answer through manual calculation. How do they do it?
Update: It seems it is not quite possible to analyze and guarantee the outcome of an algorithm given the tight constraints of a competitive environment. However, if there are any manual, more common traits adopted while solving such problems, those should be enough to answer the question. Something like best practices.
In competitions, the top programmers have enough experience to read the question, and think of some test cases that should catch most of the possibilities for input.
It usually catches most of the bugs - but it is NOT 100% safe.
However, in real-life critical applications (critical systems on airplanes or in nuclear reactors, for example) there are methods to PROVE that some piece of code does what it is supposed to do.
This is the field of formal verification - which is way too complex and time-consuming to be done during a contest, but for some systems it is used because mistakes cannot be tolerated.
Some additional information:
Formal verification basically consists of 2 parts:
Manual verification - here we use proof systems such as Hoare logic and manually prove that the program does what we want it to do.
Automatic model checking - modeling the problem as a state machine and using model checking tools to verify that the module does what it is supposed to do (or does not do something "bad").
Specifying "what it should do" is usually done with temporal logic.
This is often used to verify correctness of hardware models as well. For example Intel uses it to ensure they won't get the floating point bug again.
Picture this: imagine you are a top programmer. You know a bunch of algorithms and wouldn't think twice while implementing them. You know how to modify an already known algorithm to suit the problem's needs. You are strong at estimating time and complexity, and you expect that in the worst case your tailored algorithm will run within the time and memory constraints.
At this level you simply think, use a scratchpad for about five to ten minutes, and have a super clear algorithm before you start to code. Once you finish coding, you hit compile and there is usually no compilation error, because the code is so intuitive to you.
Then based on the algorithm used and data structures used, you expect that there might be
one of the following issues.
a corner case
an overflow problem
A corner case is when you have coded for the general case, but for, say, N=1 the answer differs from the general pattern, so you write it as a special case.
An overflow is when intermediate values or results overflow a data type's limits.
You make note of any problems which arise at this point, and use this data during the Challenge phase (as in TopCoder).
Once you have checked against these two, you hit Submit.
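Both pitfalls can be sketched in Python. The function names here are made up for illustration, and since Python integers don't actually overflow, 32-bit wraparound is simulated to mimic what happens in C++ or Java.

```python
def mean_gap(values):
    # Corner case: the general formula divides by len(values) - 1,
    # which breaks when N == 1, so that case is handled specially.
    if len(values) == 1:
        return 0
    return (max(values) - min(values)) // (len(values) - 1)

def add32(a, b):
    # Overflow: simulate signed 32-bit addition, as with C++/Java ints.
    return (a + b + 2**31) % 2**32 - 2**31

print(mean_gap([7]))                        # 0, not a division by zero
print(add32(2_000_000_000, 2_000_000_000))  # -294967296, not 4000000000
```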
There's a time element to Top Coder, so it's not possible to test every combination within that constraint. They probably do the best they can and rely on experience for the rest, just as one does in real life. I don't know that it's ever possible to guarantee that a significant piece of code is error free forever.

Is it possible to predict how long a program will take to run?

I was wondering, is it possible to do that with an arbitrary program? I heard that, with some mathematics, you can estimate how long a simple algorithm, like a sorting algorithm, will take to run; but what about more complex programs?
Once I visited a large cluster at a university, which runs programs from scientists all over the world. When I asked one of the engineers how they managed to schedule when each program would be run, he said that the researchers sent, with their programs, an estimate of how long they would take to run, based on an analysis that some program had made for this purpose.
So, does this kind of program really exist? If not, how can I make a good estimate of the run time of my programs?
In general you cannot do this, because in general you cannot prove that a program will finish at all. This is known as the halting problem.
You're actually asking a few related questions at the same time, not just one single question.
Is there a Program A that, when given another arbitrary Program B as input, provides an estimate of how long Program B will take to run? No. Absolutely not. You can't even devise a Program A that will tell you whether an arbitrary Program B will ever stop.
That second version -- will Program B ever halt -- is called the Halting Problem, cleverly enough, and it's been proven that it's just not decidable. Wikipedia has a nice page on it, and the book Gödel, Escher, Bach is a very long, but very conversational and readable, exposition of the ideas involved in Gödel's Incompleteness Theorem, which is very closely related.
http://en.wikipedia.org/wiki/Halting_problem
http://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach
So if that's true, then how are the scientists coming up with those estimates? Well, they don't have arbitrary programs; they have particular programs that they have written. The halting problem is only undecidable for arbitrary programs: for many specific programs, termination can be proven. So, unless someone makes a serious error, the scientists aren't going to try to run a program that they haven't proven will stop.
Once they have proven that some program will stop, one of the major mathematical tools is called Big O notation. At a very intuitive level, Big O notation helps to develop scaling laws for how the run time of a program varies with the size of the input. As a trivial example, suppose your program is a loop, and the loop takes one arbitrary unit of time to run. If you run the loop N times, it takes N units of time. But if that loop is itself inside another loop that runs N times, the whole thing takes N*N units of time. Those two programs scale very differently. (That's a trivial example, but real examples can get quite complicated.)
http://en.wikipedia.org/wiki/Big_oh
That's a rather abstract, mathematical tool. Big O analyses are often so abstract that they simply assume that all sufficiently low-level operations take "about" the same amount of time, and Big O doesn't really give answers in terms of seconds, hours, or days anyway. In practice, real computers are also affected by hardware details, such as how long it takes to perform some very low-level operation, or worse, how long it takes to move information from one part of the machine to another, which is extremely important on multi-processor computers. So in practice, the insights from the Big O analyses are combined with a detailed knowledge of the machine the program will run on in order to come up with an estimate.
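The scaling argument above can be illustrated with a toy count of the "units of time":

```python
def single_loop(n):
    steps = 0
    for _ in range(n):
        steps += 1            # one unit of time per iteration
    return steps

def nested_loop(n):
    steps = 0
    for _ in range(n):
        for _ in range(n):    # the inner loop runs n times per outer pass
            steps += 1
    return steps

print(single_loop(1000))  # 1000 units: grows like N
print(nested_loop(1000))  # 1000000 units: grows like N*N
```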
You should research Big O notation. While it does not give a fixed number, it tells you how performance will change with input size. There are a few simple rules (e.g., if your code is in a loop of n iterations, it will take n times the loop body's time to run).
The problems are:
With complex programs there are multiple variables affecting the run time.
It does not take user interaction into account.
The same goes for network latency, etc.
So this method works well for scientific programs, where the program is heavy computation using a well-studied algorithm (and often it is just running the same algorithm over different data sets).
You can't really.
For a simple algorithm you know if something is O(n) or O(n^2). Which one it is, you can guesstimate.
However, if you've got a program running tons of different algorithms, it becomes quite hard to guesstimate. What you can do, however, is predict results based on a previous run.
Assume you first estimate your program to run for one hour, but it runs for half an hour. If you change very little between builds/releases then you'll know next time that it will run somewhere around half an hour.
If you've made radical changes then it becomes harder to find the ETA :-]

Code generation by genetic algorithms

Evolutionary programming seems to be a great way to solve many optimization problems. The idea is very easy and the implementation does not pose problems.
I was wondering if there is any way to evolutionarily create a program in ruby/python script (or any other language)?
The idea is simple:
Create a population of programs
Perform genetic operations (roulette-wheel selection or any other selection), create new programs with inheritance from best programs, etc.
Repeat step 2 until a program that satisfies our condition is found
But there are still a few problems:
How will chromosomes be represented? For example, should one cell of the chromosome be one line of code?
How will chromosomes be generated? If they are lines of code, how do we generate them to ensure that they are syntactically correct, etc.?
Example of a program that could be generated:
Create script that takes N numbers as input and returns their mean as output.
If there were any attempts to create such algorithms I'll be glad to see any links/sources.
If you are sure you want to do this, you want genetic programming, rather than a genetic algorithm. GP allows you to evolve tree-structured programs. What you would do would be to give it a bunch of primitive operations (while($register), read($register), increment($register), decrement($register), divide($result $numerator $denominator), print, progn2 (this is GP speak for "execute two commands sequentially")).
You could produce something like this:
progn2(
    progn2(
        read($1)
        while($1
            progn2(
                while($1
                    progn2( #add the input to the total
                        increment($2)
                        decrement($1)
                    )
                )
                progn2( #increment number of values entered, read again
                    increment($3)
                    read($1)
                )
            )
        )
    )
    progn2( #calculate result
        divide($1 $2 $3)
        print($1)
    )
)
You would use, as your fitness function, how close it is to the real solution. And therein lies the catch, that you have to calculate that traditionally anyway*. And then have something that translates that into code in (your language of choice). Note that, as you've got a potential infinite loop in there, you'll have to cut off execution after a while (there's no way around the halting problem), and it probably won't work. Shucks. Note also, that my provided code will attempt to divide by zero.
*There are ways around this, but generally not terribly far around it.
It can be done, but works very badly for most kinds of applications.
Genetic algorithms only work when the fitness function is continuous, i.e. you can determine which candidates in your current population are closer to the solution than others, because only then will you get improvements from one generation to the next. I learned this the hard way when I had a genetic algorithm with one strongly-weighted non-continuous component in my fitness function. It dominated all the others, and because it was non-continuous there was no gradual advancement towards greater fitness: candidates that were almost correct in that aspect were not considered more fit than ones that were completely incorrect.
Unfortunately, program correctness is utterly non-continuous. Is a program that stops with error X on line A better than one that stops with error Y on line B? Your program could be one character away from being correct, and still abort with an error, while one that returns a constant hardcoded result can at least pass one test.
And that's not even touching on the matter of the code itself being non-continuous under modifications...
Well, this is very possible, and @Jivlain correctly points out in his (nice) answer that genetic programming is what you are looking for (and not simple genetic algorithms).
Genetic programming is a field that has not reached a broad audience yet, partially because of some of the complications @MichaelBorgwardt indicates in his answer. But those are mere complications; it is far from true that this is impossible to do. Research on the topic has been going on for more than 20 years.
John Koza is one of the leading researchers on this (have a look at his 1992 work), and he demonstrated as early as 1996 how genetic programming can in some cases outperform naive GAs on some classic computational problems (such as evolving programs for cellular automata synchronization).
Here's a good Genetic Programming tutorial from Koza and Poli dated 2003.
For a recent reference you might wanna have a look at A field guide to genetic programming (2008).
Since this question was asked, the field of genetic programming has advanced a bit, and there have been some additional attempts to evolve code in configurations other than the tree structures of traditional genetic programming. Here are just a few of them:
PushGP - designed with the goal of evolving modular functions like human coders use, programs in this system store all variables and code on different stacks (one for each variable type). Programs are written by pushing and popping commands and data off of the stacks.
FINCH - a system that evolves Java byte-code. This has been used to great effect to evolve game-playing agents.
Various algorithms have started evolving C++ code, often with a step in which compiler errors are corrected. This has had mixed, but not altogether unpromising results. Here's an example.
Avida - a system in which agents evolve programs (mostly boolean logic tasks) using a very simple assembly code. Based on the older (and less versatile) Tierra.
The language isn't an issue. Regardless of the language, you have to define some higher level of mutation; otherwise it will take forever to learn.
For example, since any Ruby program can be represented as a text string, you could just randomly generate text strings and optimize that. Better would be to generate only legal Ruby programs. However, that would also take forever.
If you were trying to build a sorting program and you had high level operations like "swap", "move", etc. then you would have a much higher chance of success.
In theory, a bunch of monkeys banging on a typewriter for an infinite amount of time will output all the works of Shakespeare. In practice, it isn't a practical way to write literature. Just because genetic algorithms can solve optimization problems doesn't mean that it's easy or even necessarily a good way to do it.
The biggest selling point of genetic algorithms, as you say, is that they are dirt simple. They don't have the best performance or mathematical background, but even if you have no idea how to solve your problem, as long as you can define it as an optimization problem you will be able to turn it into a GA.
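To make the "dirt simple" point concrete, here's a bare-bones sketch for a toy problem: evolving a string toward a target, with a population of one, so really a mutation-driven hill climb rather than a full GA. Note that the fitness function here is nicely gradual, unlike program correctness:

```python
import random
import string

TARGET = "hello world"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    # Number of characters already matching the target: a smooth,
    # informative score, unlike "does the program compile and pass?"
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Replace one randomly chosen character with a random one.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

random.seed(0)
best = "".join(random.choice(ALPHABET) for _ in TARGET)
while fitness(best) < len(TARGET):
    child = mutate(best)
    if fitness(child) >= fitness(best):   # keep the fitter candidate
        best = child
print(best)   # converges to "hello world"
```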
Programs aren't really suited to GAs, precisely because code isn't good chromosome material. I have seen someone do something similar with (simpler) machine code instead of Python (although it was more of an ecosystem simulation than a GA per se), and you might have better luck if you encode your programs using automata / Lisp or something like that.
On the other hand, given how alluring GA's are and how basically everyone who looks at them asks this same question, I'm pretty sure there are already people who tried this somewhere - I just have no idea if any of them succeeded.
Good luck with that.
Sure, you could write a "mutation" program that reads a program and randomly adds, deletes, or changes some number of characters. Then you could compile the result and see if the output is better than the original program. (However we define and measure "better".) Of course 99.9% of the time the result would be compile errors: syntax errors, undefined variables, etc. And surely most of the rest would be wildly incorrect.
Try some very simple problem. Say, start with a program that reads in two numbers, adds them together, and outputs the sum. Let's say that the goal is a program that reads in three numbers and calculates the sum. Just how long and complex such a program would be of course depends on the language. Let's say we have some very high level language that lets us read or write a number with just one line of code. Then the starting program is just 4 lines:
read x
read y
total=x+y
write total
The simplest program to meet the desired goal would be something like
read x
read y
read z
total=x+y+z
write total
So through a random mutation, we have to add "read z" and "+z", a total of 9 characters including the space and the new-line. Let's make it easy on our mutation program and say it always inserts exactly 9 random characters, that they're guaranteed to be in the right places, and that it chooses from a character set of just 26 letters plus 10 digits plus 14 special characters = 50 characters. What are the odds that it will pick the correct 9 characters? 1 in 50^9 = 1 in 2.0e15. (Okay, the program would work if instead of "read z" and "+z" it inserted "read w" and "+w", but then I'm making it easy by assuming it magically inserts exactly the right number of characters and always inserts them in the right places. So I think this estimate is still generous.)
1 in 2.0e15 is a pretty small probability. Even if the program runs a thousand times a second, and you can test the output that quickly, the chance is still just 1 in 2.0e12 per second, or 1 in 5.4e8 per hour, 1 in 2.3e7 per day. Keep it running for a year and the chance of success is still only 1 in 62,000.
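The back-of-envelope arithmetic above is easy to check:

```python
# Verify the odds quoted above: 9 correct characters from a
# 50-character set, at a thousand attempts per second.
odds = 50 ** 9
print(f"1 in {odds:.1e}")                # 1 in 2.0e+15

attempts_per_second = 1_000
per_year = odds / (attempts_per_second * 3600 * 24 * 365)
print(f"1 in {per_year:.0f} per year")   # about 1 in 62,000
```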
Even a moderately competent programmer should be able to make such a change in, what, ten minutes?
Note that changes must come in at least "packets" that are correct. That is, if a mutation generates "reax z", that's only one character away from "read z", but it would still produce compile errors, and so would fail.
Likewise adding "read z" but changing the calculation to "total=x+y+w" is not going to work. Depending on the language, you'll either get errors for the undefined variable or at best it will have some default value, like zero, and give incorrect results.
You could, I suppose, theorize incremental solutions. Maybe one mutation adds the new read statement, then a future mutation updates the calculation. But without the calculation, the additional read is worthless. How will the program be evaluated to determine that the additional read is "a step in the right direction"? The only way I see to do that is to have an intelligent being read the code after each mutation and see if the change is making progress toward the desired goal. And if you have an intelligent designer who can do that, that must mean that he knows what the desired goal is and how to achieve it. At which point, it would be far more efficient to just make the desired change rather than waiting for it to happen randomly.
And this is an exceedingly trivial program in a very easy language. Most programs are, what, hundreds or thousands of lines, all of which must work together. The odds against any random process writing a working program are astronomical.
There might be ways to do something that resembles this in some very specialized applications, where you are not really making random mutations but rather making incremental modifications to the parameters of a solution. For example, suppose we have a formula with some constants whose values we don't know, and we know the correct results for some small set of inputs. We could make random changes to the constants and, if the result is closer to the right answer, continue from there; if not, go back to the previous value. But even then, I think it would rarely be productive to make random changes. It would likely be more helpful to change the constants according to a strict formula, like starting with changes by 1000s, then 100s, then 10s, etc.
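That last idea, tuning constants with a shrinking step size rather than random mutation, can be sketched like this. The helper names and the sample formula are made up for illustration:

```python
def error(f, c, inputs, expected):
    # Total distance between the formula's outputs and the known answers.
    return sum(abs(f(x, c) - y) for x, y in zip(inputs, expected))

def fit_constant(f, inputs, expected, guess=0):
    # Try changes of 1000, then 100, then 10, then 1, keeping any
    # change that moves the results closer to the known answers.
    for step in (1000, 100, 10, 1):
        improved = True
        while improved:
            improved = False
            for delta in (step, -step):
                if error(f, guess + delta, inputs, expected) < error(f, guess, inputs, expected):
                    guess += delta
                    improved = True
    return guess

# Hypothetical formula f(x, c) = c * x whose true constant is 2718:
f = lambda x, c: c * x
print(fit_constant(f, [1, 2, 3], [2718, 5436, 8154]))   # finds 2718
```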
I just want to give you a suggestion. I don't know how successful you'd be, but perhaps you could try to evolve a Core War bot with genetic programming. Your fitness function is easy: just let the bots compete in a game. You could start with well-known bots and perhaps a few random ones, then wait and see what happens.

What makes a problem more fundamental than another?

Is there any formal definition for what makes a problem more fundamental than another? Otherwise, what would be an acceptable informal definition?
An example of a problem that is more fundamental than another would be sorting vs font rendering.
When many problems can be solved using one algorithm, for instance. This is the case for any optimal algorithm for sorting. BTW, perhaps you're mixing problems and algorithms? There is a formal definition of one problem being reducible to another. See http://en.wikipedia.org/wiki/Reduction_(complexity)
The original question is a valid one, and does not have to assume/consider complexity and reducibility as @slebetman suggested. (Thus making the question more fundamental :)
If we attempt a formal definition, we could have this one: Problem P1 is more fundamental than problem P2, if a solution to P1 affects the outcome of a wider set of other problems. This likely implies that P1 will affect problems in different domains of computer science - and possibly beyond.
In practical terms, I would correct @slebetman again. Instead of "if something uses or challenges an assumption then it is less fundamental than that assumption", I would say "if a problem uses or challenges an assumption then it is less fundamental than the same problem without the assumption". I.e. sorting of Objects is more fundamental than sorting of Integers; or, font rendering on a printer is less fundamental than font rendering on any device.
If I understood your question right.
When you can solve the same problem by applying many algorithms, the algorithm that proves lightweight on both memory and CPU is considered more fundamental. And I can think of another thing: a fundamental algorithm will not use other algorithms; otherwise it would be a complex one.
The problem or solution that has more applications is more fundamental.
(Sorting has many applications; P!=NP also has many applications (or implications); rendering has only a few applications.)
Inf is right when he hints at complexity and reducibility, but it's not just about what is involved in an algorithm's implementation. It's the assumptions you make.
All pieces of code are written with assumptions about the world in which it operates. Those assumptions are more fundamental than the code written with those assumptions. In short, I would say: if something uses or challenges an assumption then it is less fundamental than that assumption.
Let's take an example: sorting. The problem with sorting is that, done the naive way, the time it takes grows very quickly with the size of the input. So sorting algorithms are developed with the aim of avoiding that blow-up. But must hard problems always be slow? That is a problem unto itself, and it is called P!=NP, which so far has been proven neither true nor false. So P!=NP is more fundamental than sorting.
But even P!=NP makes some assumptions. It assumes that NP complete problems can always be run to completion, even if it takes a long time. In other words, the running time of an NP complete problem is never infinite. This raises the question: are all problems guaranteed to have a point of completion? The answer is no; not all programs have a guaranteed point of completion, the most trivial counterexample being the infinite loop: while (1) {print "still running!"}. So can we detect all cases of programs running infinitely? No, and the proof of that is the halting problem. Hence the halting problem is more fundamental than P!=NP.
But even the halting problem makes assumptions. We assume that we are running all our programs on a particular kind of CPU. Can we treat all CPUs as equivalent? Is there a CPU out there that can be built to solve the halting problem? Well, the answer is that as long as a CPU is Turing complete, it is equivalent to all other CPUs that are Turing complete. Turing completeness is therefore a more fundamental problem than the halting problem.
We can go on and on questioning our assumptions: are all kinds of signals equivalent? Could a fluidic or mechanical computer be fundamentally different from an electronic computer? This leads us to things like Shannon's information theory and Boolean algebra, and each assumption we uncover is more fundamental than the one above it.
