Are there any Recursively Enumerable problems that are not RE-hard? - computation-theory

The computability class that I'm taking covers several languages that are in RE - REC (recursively enumerable but not recursive, i.e. recognized by a Turing machine that need not halt on non-members). It first shows that one of them, L_d (the language of Turing machines that do not accept their own encoding), is not in RE, and proves that its complement is in RE - REC. It then proves that this complement is reducible to the universal language L_u (the set of binary encodings of a Turing machine concatenated with a string that the machine accepts). It then goes on to show that L_u is RE-hard, reduces it to L_PCP (Post's Correspondence Problem), and then reduces that to Context-Free Grammar Ambiguity. Are there any problems that are in RE but not RE-hard? So far, for every problem our professor has presented in RE - REC, he has proven they are reducible to each other.

The problem you mention (with the clarification by Peter Leupold, which should be integrated into the question) is known as Post's problem. The answer is positive: in particular, all so-called "simple" sets are RE sets that are not many-one complete.
A simple set is an RE set whose complement is "immune". An immune set is an infinite set that does not contain any infinite RE subset.
This is enough to prove that a simple set cannot be complete, since the complement of a many-one complete RE set is productive, and any productive set contains an infinite RE subset, generated by its own production function.
A few simple sets are known. My favourite example is the set of non-random numbers in the sense of Kolmogorov complexity, that is, the set of all numbers that can be expressed more compactly as the index of a program generating them (on input 0). The proof that such a set is simple is not difficult and can be found in any good text on computability.
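To spell the example out a little (a sketch, stated for binary strings rather than numbers; the number version described above is analogous): write C(x) for the Kolmogorov complexity of x, the length of a shortest program that prints x, and consider

    NONRANDOM = { x : C(x) < |x| }

This set is RE: dovetail all programs shorter than x and list x as soon as one of them halts with output x. Its complement, the set of random strings, is immune. It is infinite, because there are 2^n strings of length n but at most 2^n - 1 programs of length below n, so every length has a random string. And it contains no infinite RE subset W, because "enumerate W until a string of length at least n appears, then print it" would describe a random string of length at least n using only about log n bits, which is impossible for large n. Hence NONRANDOM is simple, so it is RE but not many-one complete.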

The answer to your question is yes, because even finite languages are RE, yet they are by no means hard in the sense you mean: a finite language is decidable, and if a decidable language were RE-hard under many-one reductions, the reduction from the universal language would make the universal language decidable, which is impossible.
Maybe your question is really "Are there any recursively enumerable problems that are not RE-hard but also not recursive?" Here the answer depends on your notion of reduction. Your professor is probably using many-one reductions; in that case the answer is YES, as the other answer explains via simple sets. For coarser reductions such as Turing reductions the question is exactly Post's problem, and the Friedberg-Muchnik theorem shows the answer is still YES.

Related

Is it possible to convert all higher-order logic and dependent type in Lean/Isabelle/Coq into first-order logic?

More specifically, given an arbitrary Lean proof/theorem, is it possible to express it solely using first-order logic? If so, is it practical, i.e., will the generated FOL not be enormously large?
I have seen https://www.cl.cam.ac.uk/~lp15/papers/Automation/translations.pdf, but since I am not an expert, I am still not sure whether all of Lean's proof code can be converted.
Other mathematical proof languages are also OK.
The short answer is: yes, and it is not impractically large; this is done in particular when translating proofs to SMT solvers for sledgehammer-like tools. There is a fair amount of blowup, but it is a linear factor on the order of 2-5. You probably lose more from not having specific support for all the built-in rules and, in the case of dependent type theory, from writing down all the definitional-equality (defeq) proofs which are normally implicit.
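To give a flavour of what such a translation does (an illustrative sketch; the exact encoding varies between tools and is not spelled out in the answer above): one standard trick is to make function application an explicit first-order symbol, so a higher-order equation such as

    map f (x # xs) = f x # map f xs

becomes a purely first-order equation over an "app" function,

    app(app(map, F), cons(X, XS)) = cons(app(F, X), app(app(map, F), XS))

possibly with additional arguments or guards carrying type information. Each application node turns into one app term, which is where the roughly linear blowup comes from.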

What is the core difference between algorithm and pseudocode?

As the title says: what is the core difference between an algorithm and pseudocode?
Algorithm
An algorithm is a procedure for solving a problem in terms of the actions to be executed and the order in which those actions are to be executed. An algorithm is merely the sequence of steps taken to solve a problem. The steps are normally "sequence", "selection", "iteration", and a case-type statement.
Pseudocode
Pseudocode is an artificial and informal language that helps programmers develop algorithms. Pseudocode is a "text-based" detail (algorithmic) design tool.
The rules of pseudocode are reasonably straightforward. All statements showing "dependency" are to be indented; these include while, do, for, if, and switch. The example below illustrates this convention.
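A minimal illustration of the indentation convention (written here in Python, which reads close to pseudocode and enforces the indentation of dependent statements; the example itself is mine, not from the answer above):

    def find_first_negative(numbers):
        # sequence: one statement after another
        i = 0
        # iteration: the loop body is indented under "while"
        while i < len(numbers):
            # selection: the dependent statement is indented under "if"
            if numbers[i] < 0:
                return i
            i = i + 1
        return -1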
I think all the other answers give useful explanations and definitions, but I'm going to give mine.
An algorithm is the idea of how to obtain some result from some input. It is an abstract concept; an algorithm is not something material by itself, but more like an imagination or a deduction, a thing that only exists in the mind. In the broad sense, any sequence of steps that gives you some thing(s) from other thing(s) could be called an algorithm. For example, if the screen of your computer is dirty, "spray some glass cleaner on it and wipe it with a cloth" could be said to be an algorithm to solve the problem of how to obtain a clean screen from a dirty screen. It is important to note the difference between the problem itself (getting a clean screen) and the algorithm (wiping it with a cloth and cleaner); generally, several different algorithms are possible for the same problem. The idea of complexity is inherent to the algorithm itself, not to the problem or to the particular implementation or execution of the algorithm.
Pseudocode is a language to express algorithms. Since, as said before, algorithms are only concepts, we need to use something to express them and explain them to other people. Pseudocode is a convenient way for many computer science algorithms, because it is usually unambiguous, easy to read and somewhat similar to many programming languages. However, a specific programming language like C or Java can also be used to express an algorithm (it's just less convenient for those not familiar with that language). In other cases, pseudocode may not be the best way to express an algorithm; for example, many graph and tree algorithms can be explained more easily with drawings or diagrams. In the previous example, the algorithm to get your screen cleaned is probably better expressed in a natural language like English, because it is simple and specific enough for that case.
Obviously, terms are frequently used loosely and exchanged depending on the context, and there's no need to be nitpicky about it, but I think it is important to have the difference clear. An algorithm doesn't stop being an algorithm just because it is written in Python instead of pseudocode. Pseudocode is just a convenient and widespread communication tool to express them.
An algorithm is something (a sequence of steps) you can do. Pseudocode is a notation to describe an algorithm.
An algorithm is something that is represented in mathematical terms. It includes analysis and complexity considerations (best-, average-, and worst-case analysis, etc.). Pseudocode is a human-readable representation of a program.
From Wikipedia:
Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. 
With a pseudo language one can implement an algorithm without using a programming language such as C.
An example of a pseudo language is the flow chart.

What makes a problem more fundamental than another?

Is there any formal definition for what makes a problem more fundamental than another? Otherwise, what would be an acceptable informal definition?
An example of a problem that is more fundamental than another would be sorting vs font rendering.
One criterion: when many problems can be solved using one algorithm, for instance. This is the case for any optimal sorting algorithm. By the way, perhaps you're mixing up problems and algorithms? There is a formal definition of one problem being reducible to another; see http://en.wikipedia.org/wiki/Reduction_(complexity)
The original question is a valid one, and does not have to assume/consider complexity and reducibility as #slebetman suggested. (Thus making the question more fundamental :)
If we attempt a formal definition, we could have this one: Problem P1 is more fundamental than problem P2, if a solution to P1 affects the outcome of a wider set of other problems. This likely implies that P1 will affect problems in different domains of computer science - and possibly beyond.
In practical terms, I would again correct #slebetman. Instead of "if something uses or challenges an assumption then it is less fundamental than that assumption", I would say "if a problem uses or challenges an assumption then it is less fundamental than the same problem without the assumption". I.e., sorting of Objects is more fundamental than sorting of Integers; or, font rendering on a printer is less fundamental than font rendering on any device.
If I understood your question right.
When you can solve the same problem by applying many algorithms, the algorithm that is lightest on both memory and CPU is considered more fundamental. And I can think of another thing: a fundamental algorithm will not use other algorithms, otherwise it would be a complex one.
The problem or solution that has more applications is more fundamental.
(Sorting has many applications, P!=NP also has many applications (or implications), rendering has only a few applications.)
Inf is right to hint at complexity and reducibility, but it's not just about what is involved or included in an algorithm's implementation. It's the assumptions you make.
All pieces of code are written with assumptions about the world in which they operate. Those assumptions are more fundamental than the code written with them. In short, I would say: if something uses or challenges an assumption, then it is less fundamental than that assumption.
Let's take an example: sorting. Done the naive way, the time it takes to sort grows quickly with the input size, so sorting algorithms are developed with the aim of being faster. But which problems can be solved quickly at all, and are the hard ones (for example, the NP-complete problems) necessarily slow? That is a problem unto itself, the P!=NP question, which so far has not been proven either true or false. So P!=NP is more fundamental than sorting.
But even P!=NP makes some assumptions. It assumes that NP-complete problems can always be run to completion, even if that takes a long time; in other words, the big-O bound for an NP-complete problem is never infinite. This raises the question: are all programs guaranteed to reach a point of completion? The answer is no; the most trivial counterexample is the infinite loop: while (1) { print "still running!" }. So can we detect all cases of programs running forever? No, and the proof of that is the undecidability of the halting problem. Hence the halting problem is more fundamental than P!=NP.
But even the halting problem makes assumptions. We assume that we are running all our programs on a particular kind of CPU. Can we treat all CPUs as equivalent? Is there a CPU out there that could be built to solve the halting problem? Well, the answer is that as long as a CPU is Turing complete, it is equivalent to every other Turing-complete CPU. Turing completeness is therefore a more fundamental problem than the halting problem.
We can go on and on questioning our assumptions: are all kinds of signals equivalent? Could a fluidic or mechanical computer be fundamentally different from an electronic one? This leads us to things like Shannon's information theory and Boolean algebra, and each assumption that we uncover is more fundamental than the one above it.

How to calculate indefinite integral programmatically

I remember solving a lot of indefinite integration problems. There are certain standard methods of solving them, but nevertheless there are problems which take a combination of approaches to arrive at a solution.
But how can we achieve the solution programmatically?
For instance, look at Mathematica's online integrator app. So how would we approach writing such a program that accepts a function as an argument and returns its indefinite integral?
PS. The input function can be assumed to be continuous (i.e., it is not, for instance, sin(x)/x).
You have the Risch algorithm, which is subtly not quite an algorithm (at certain points it must decide whether two expressions are equal, i.e. whether an expression is identically zero, which is undecidable in general, akin to the ubiquitous halting problem), and it is really long to implement.
If you're into complicated stuff, solving an ordinary differential equation is actually not harder (and computing an indefinite integral is equivalent to solving y' = f(x)). There is a differential Galois theory that mimics Galois theory for polynomial equations (but with Lie groups of symmetries of solutions instead of finite groups of permutations of roots). The Risch algorithm is based on it.
The algorithm you are looking for is the Risch algorithm:
http://en.wikipedia.org/wiki/Risch_algorithm
I believe it is a bit tricky to use. This book:
http://www.amazon.com/Algorithms-Computer-Algebra-Keith-Geddes/dp/0792392590
has a description of it: a 100-page description.
You keep a set of basic forms whose integrals you know (polynomials, elementary trigonometric functions, etc.) and match them against the form of the input. This is doable if you don't need much generality: it's very easy to write a program that integrates polynomials, for example.
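For instance, here is a minimal sketch of the polynomial case (my own illustration, assuming a polynomial is represented as a list of coefficients, lowest degree first):

    from fractions import Fraction

    def integrate_poly(coeffs):
        """Integrate c0 + c1*x + c2*x^2 + ... term by term using the power rule.
        Returns the coefficients of the antiderivative, with constant of integration 0."""
        return [Fraction(0)] + [Fraction(c, k + 1) for k, c in enumerate(coeffs)]

    # 3 + 2x integrates to 3x + x^2
    print(integrate_poly([3, 2]))   # [Fraction(0, 1), Fraction(3, 1), Fraction(1, 1)]

Handling sums, products, and compositions of arbitrary elementary functions is where the real work (pattern matching, substitutions, integration by parts, and ultimately Risch-style machinery) begins.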
If you want to do it in the most general case possible, you'll have to do much of the work that computer algebra systems do. It is a lifetime's work for some people: if you look at the Risch "algorithm" mentioned in other answers, or at symbolic integration in general, you will see that entire multi-volume books have been written on the topic (Manuel Bronstein, Symbolic Integration Volume I, Springer), and very few existing computer algebra systems implement it in maximum generality.
If you really want to code it yourself, you can look at the source code of Sage or of the several projects listed among its components. Of course, it's easier to use one of these programs or, if you're writing something bigger, to use one of them as a library.
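As a usage example (sympy is my suggestion here; the answer above only mentions Sage, which itself builds on such libraries):

    from sympy import symbols, integrate, sin, exp

    x = symbols('x')
    print(integrate(x**2 * exp(x), x))   # (x**2 - 2*x + 2)*exp(x)
    print(integrate(sin(x) / x, x))      # Si(x), a non-elementary antiderivative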
These expert systems usually have a huge collection of techniques and simply try one after another.
I'm not sure about Wolfram Mathematica, but in Maple there's a command that displays all intermediate steps; if you use it, you get as output all the techniques that were tried.
Edit:
Transforming the input should not be the really tricky part: you need to write a lexer and a parser that transform the textual input into an internal representation.
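As a tiny sketch of that step (using Python's built-in ast module purely for illustration; a real computer algebra system would define its own expression types):

    import ast

    tree = ast.parse("x**2 + 3*x", mode="eval")
    print(ast.dump(tree))   # a nested BinOp/Name/Constant structure, i.e. an expression tree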
Good luck. Mathematica is a very complex piece of software, and symbolic manipulation is what it does best. If you are interested in the topic, take a look at these books:
http://www.amazon.com/Computer-Algebra-Symbolic-Computation-Elementary/dp/1568811586/ref=sr_1_3?ie=UTF8&s=books&qid=1279039619&sr=8-3-spell
Also, going to the source wouldn't hurt either. This book actually explains the inner workings of Mathematica:
http://www.amazon.com/Mathematica-Book-Fourth-Stephen-Wolfram/dp/0521643147/ref=sr_1_7?ie=UTF8&s=books&qid=1279039687&sr=1-7

Can any algorithmic puzzle be implemented in a purely functional way?

I've been contemplating programming language designs, and from the definition of Declarative Programming on Wikipedia:
This is in contrast from imperative programming, which requires a detailed description of the algorithm to be run.
and further down:
... Any style of programming that is not imperative. ...
It then goes on to express that functional languages, because they are not imperative, are declarative by their very nature.
However, this makes me wonder, are purely functional programming languages able to solve any algorithmic problem, or are the constraints based upon what functions are available in that language?
I'm mostly interested in general thoughts on the subject, although if specific examples can illustrate the point, I certainly welcome them.
According to the Church-Turing thesis,
"the three computational processes (recursion, λ-calculus, and Turing machine) were shown to be equivalent",
where the Turing machine can be read as "procedural" and the lambda calculus as "functional".
Yes, Haskell, Erlang, etc. are Turing complete languages. In principle, you don't need mutable state to solve a problem, since you can always create a new object instead of mutating the old one. Of course, Brainfuck is also Turing complete. In other words, just because an algorithm can be expressed in a functional language doesn't mean it's not horribly awkward.
OK, so Church and Turing proved it is possible, but how do we actually do it?
Rewriting imperative code in pure functional style is an exercise I frequently assign to undergraduate students:
Each mutable variable becomes a function parameter
Loops are rewritten using recursion
Each goto is expressed as a function call with arguments
Sometimes what comes out is a mess, but often the results are surprisingly elegant (a small sketch follows below). The only real trick is not to pass arguments that never change, but instead to let-bind them in the outer environment.
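A small sketch of the recipe (in Python, chosen just for illustration): an imperative sum with a mutable accumulator, and the same computation with the mutable variable turned into a parameter and the loop turned into recursion.

    def sum_imperative(xs):
        total = 0              # mutable variable
        for x in xs:           # loop
            total += x         # update in place
        return total

    def sum_functional(xs, total=0):
        if not xs:                                      # loop test becomes the base case
            return total
        return sum_functional(xs[1:], total + xs[0])    # "go back to the top of the loop" becomes a call

    print(sum_imperative([1, 2, 3]), sum_functional([1, 2, 3]))   # 6 6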
The big difference with functional style programming is that it avoids mutable state. Where imperative programming will typically update variables, functional programming will define new, read-only values.
The main place where this hits performance is algorithms that use updatable arrays. An imperative implementation can update an array element in O(1) time, while the best a purely functional implementation can achieve is O(log N) (using a balanced tree).
Note that functional languages generally have some way to use updateable arrays with O(1) access time (e.g., Haskell provides this with its state transformer monad). However, this is arguably an imperative programming method... nothing wrong with that; you want to use the best tools for a particular job, after all.
The functional style of O(log N) incremental array update is not all bad, though, as functional-style algorithms seem to lend themselves well to parallelization.
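A rough Python sketch of why the purely functional update can be O(log N) rather than O(N) (representation and names are my own illustration; real libraries use more refined structures): keep the array in an immutable balanced tree and rebuild only the path from the root to the changed position, sharing everything else with the old version.

    def build(values):
        # assumes len(values) is a power of two; a leaf is just the stored value
        if len(values) == 1:
            return values[0]
        mid = len(values) // 2
        return (build(values[:mid]), build(values[mid:]))

    def get(tree, i, size):
        if size == 1:
            return tree
        half = size // 2
        return get(tree[0], i, half) if i < half else get(tree[1], i - half, half)

    def updated(tree, i, value, size):
        # returns a new tree; only O(log size) nodes are rebuilt
        if size == 1:
            return value
        half = size // 2
        if i < half:
            return (updated(tree[0], i, value, half), tree[1])
        return (tree[0], updated(tree[1], i - half, value, half))

    t0 = build([10, 20, 30, 40])
    t1 = updated(t0, 2, 99, 4)
    print(get(t0, 2, 4), get(t1, 2, 4))   # 30 99 -- the old version is untouched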
Too long to be posted as a comment on #SteveB's answer.
Functional programming and imperative programming have equal capability: whatever one can do, the other can do. They are said to be Turing complete. The functions that a Turing machine can compute are exactly the ones that recursive function theory and λ-calculus express.
But the Church-Turing Thesis, as such, is irrelevant. It asserts that any computation can be carried out by a Turing machine. This relates an informal idea - computation - to a formal one - the Turing machine. Nobody has yet found anything we would recognise as computation that a Turing machine can't do. Will someone find such a thing in future? Who can tell.
Using state monads you can program in an imperative style in Haskell.
So the assertion that Haskell is declarative by its very nature needs to be taken with a grain of salt. On the positive side, it is then equivalent to imperative programming languages in a practical sense as well, one which doesn't completely ignore efficiency.
While I completely agree with the answer that invokes the Church-Turing thesis, it raises an interesting question. If I have a parallel computation problem (which is not algorithmic in the strict mathematical sense), such as a multiple producer/consumer queue or some network protocol between several machines, can it be adequately modeled by a Turing machine? It can be simulated, but if we simulate it we lose the very reason the problem has parallelism in it (because then we could find a simpler algorithm on the Turing machine). So if we are not to lose the parallelism inherent to the problem (and thus the reason we are interested in it), can we really remove the notion of state?
I remember reading somewhere that there are problems which are provably harder when solved in a purely functional manner, but I can't seem to find the reference.
As noted above, the primary problem is array updates. While the compiler may use a mutable array under the hood in some conditions, it must be guaranteed that only one reference to the array exists in the entire program.
Not only is this a hard mathematical fact, it is also a problem in practice, if you don't use impure constructs.
On a more subjective note, stating that all Turing complete languages are equivalent is only true in a narrow mathematical sense. Paul Graham explores the issue in Beating the Averages in the section "The Blub Paradox."
Formal results such as Turing completeness may be provably correct, but they are not necessarily useful. The travelling salesman problem may be NP-complete, and yet salesmen travel all the time. It seems they don't feel the need to follow an "optimal" path, so the theorem is irrelevant.
NOTE: I am not trying to bash functional programming, since I really like it. It is just important to remember that it is not a panacea.

Resources