Recursion vs. Iteration

I believe that all problems with iterative logic can be solved using iteration, but can we solve any problem using recursion? Can recursion always substitute for iteration? Please provide a proof with your answer if you can. Also assume that we have an infinite stack, or that we run the program on a Turing machine. I don't mind if the proof is purely theoretical (that's why I mentioned the Turing machine).

Yes, recursion can always substitute for iteration; this has been discussed before. Quoting from the linked post:
Because you can build a Turing-complete language using strictly iterative structures and a Turing-complete language using only recursive structures, the two are therefore equivalent.
Explaining a bit: we know that any computable problem can be solved by a Turing machine. And it's possible to construct a programming language A, without recursion, that is equivalent to a Turing machine. Similarly, it's possible to build a programming language B, without iteration, equal in computational power to a Turing machine.
Therefore, since both A and B are Turing-complete, we can conclude that for any iterative program there must exist an equivalent recursive program, and vice versa. This is a theoretical result, in the sense that it doesn't give you any hints on how to derive a recursive program from an arbitrary iterative one, or vice versa.
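To make the easy direction concrete, here is a minimal sketch in Python (the function names are mine, not from the post) of the mechanical translation from a loop to recursion: the loop variables become parameters, and the negated loop condition becomes the base case.

# Iterative sum of 0..n-1.
def sum_iter(n):
    total, i = 0, 0
    while i < n:
        total += i
        i += 1
    return total

# The same loop translated into recursion: the loop variables (i, total)
# become parameters; the loop exit condition becomes the base case.
def sum_rec(n, i=0, total=0):
    if i >= n:
        return total
    return sum_rec(n, i + 1, total + i)

assert sum_iter(10) == sum_rec(10) == 45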

Yes. There is a type of recursion called tail recursion, which is directly translatable to iteration. One can be converted to the other without any problem. Thus, all iterative solutions can be converted into recursive solutions.
In fact, many compilers can detect that you are doing tail recursion and convert it into loop code for efficiency.
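For example, here is a tail-recursive factorial next to the loop a tail-call-optimizing compiler would effectively turn it into (a Python sketch; note that CPython itself does not perform this optimization):

# Tail-recursive: the recursive call is the very last thing the
# function does, so no work is pending when it returns.
def fact(n, acc=1):
    if n <= 1:
        return acc
    return fact(n - 1, acc * n)

# What tail-call optimization effectively produces: the call becomes
# a jump back to the top, i.e. a loop reusing the same stack frame.
def fact_loop(n):
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc

assert fact(5) == fact_loop(5) == 120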

Recursion should be used when a loop is not needed but the method must repeat itself in certain cases. For example, zipping a folder: if there is a subfolder, the method should call itself (recurse). Recursion can substitute for iteration if you want it to, but it's not recommended. Most people just use iteration and reach for recursion only when it's needed.
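A minimal sketch of that shape in Python (walk and handle_file are illustrative names, not from the post): process each file, and recurse into each subfolder.

import os

# Process every file under folder; recurse into subfolders,
# mirroring the "zip a folder" example above.
def walk(folder, handle_file):
    for entry in os.scandir(folder):
        if entry.is_dir():
            walk(entry.path, handle_file)  # subfolder: the function calls itself
        else:
            handle_file(entry.path)        # plain file: process it

walk(".", print)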


Can every recursive algorithm be improved with dynamic programming?

I am a first-year undergraduate CS student who is looking to get into competitive programming.
Recursion involves defining and solving subproblems. As I understand it, top-down dynamic programming (DP) involves memoizing the solutions to subproblems to reduce the time complexity of the algorithm.
Can top-down DP be used to improve the efficiency of every recursive algorithm with overlapping subproblems? Where would DP fail to work, and how can I identify this?
The short answer is: Yes.
However, there are some constraints. The most obvious one is that the recursive calls must overlap, i.e., during the execution of the algorithm the recursive function must be called multiple times with the same parameters. This lets you truncate the recursion tree through memoization, so you can use memoization to reduce the number of calls.
However, this reduction in calls comes at a price. You need to store the results somewhere, so the next obvious constraint is that you need enough memory. That comes with a not-so-obvious constraint: memory access always takes some time. You first need to find where the result is stored and then perhaps copy it to some location. So in some cases it might be faster to let the recursion recalculate the result instead of loading it from somewhere. But this is very implementation-specific and can even depend on the operating system and hardware setup.
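The classic illustration is Fibonacci, whose recursive calls overlap heavily. A minimal memoization sketch in Python:

from functools import lru_cache

# Naive Fibonacci recomputes fib(k) exponentially many times.
# Caching each result truncates the recursion tree: exponential
# time becomes linear, at the cost of O(n) stored results.
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(80))  # instant; the uncached version would effectively never finish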

Does every divide-and-conquer algorithm use recursion?

I am arguing with a fellow student, who wants to convince me that a divide-and-conquer algorithm can be implemented without the use of recursion.
Is this truly the case?
Any algorithm that can be implemented with recursion can also be implemented non-recursively.
Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit stack, while iteration can be replaced with tail recursion. Which approach is preferable depends on the problem under consideration and the language used.
http://en.wikipedia.org/wiki/Recursion_%28computer_science%29#Recursion_versus_iteration
There's an important thing to understand: whether or not to use recursion is an implementation decision. Recursion does not add computing power (at least not to a Turing-complete language). Look up "tail recursion" for an easy example of how to transform a recursive function into a non-recursive one (in the case of a divide-and-conquer algorithm, you can remove at most one of the recursive calls with this method).
A function/algorithm that is computable with recursion is also computable without it. What matters is whether the language in which you implement the algorithm is Turing complete or not.
Let's take an example: the mergesort algorithm can also be implemented in a non-recursive way, using a queue as an auxiliary data structure to keep track of the pending merges.
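One possible sketch of that queue-based mergesort in Python (heapq.merge, which merges two sorted iterables, stands in for a hand-written merge):

from collections import deque
from heapq import merge

# Non-recursive mergesort: seed a queue with one-element runs, then
# repeatedly merge the two front runs and push the result to the back.
def mergesort(items):
    if not items:
        return []
    runs = deque([x] for x in items)
    while len(runs) > 1:
        a, b = runs.popleft(), runs.popleft()
        runs.append(list(merge(a, b)))
    return runs[0]

assert mergesort([5, 2, 8, 1, 9]) == [1, 2, 5, 8, 9]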

Complexity of algorithms of different programming paradigms

I know that most programming languages are Turing complete, but I wonder whether a problem can be solved with an algorithm of the same complexity in any programming language (and in particular in any programming paradigm).
To make my question more explicit with an example: is there any problem which can be solved with an imperative algorithm of complexity x (say O(n)) but cannot be solved by a functional algorithm with the same complexity (or vice versa)?
Edit: The algorithm itself can be different. The question is about the complexity of solving the problem -- using any approach available in the language.
In general, no, not all algorithms can be implemented with the same order of complexity in all languages. This can be trivially proven, for instance, with a hypothetical language that disallows O(1) access to an array. However, there aren't any algorithms (to my knowledge) that cannot be implemented with the optimal order of complexity in a functional language. The complexity analysis of an algorithm's pseudocode makes certain assumptions about what operations are legal, and what operations are O(1). If you break one of those assumptions, you can alter the complexity of the algorithm's implementation even though the language is Turing complete. Turing-completeness makes no guarantees regarding the complexity of any operation.
An algorithm has a stated runtime such as the O(n) you mention; implementations of an algorithm must adhere to that same runtime or they do not implement the algorithm. The language or implementation does not, by definition, change the algorithm and thus does not change the asymptotic runtime.
That said, certain languages and technologies might make expressing the algorithm easier and offer constant-factor speedups (or slowdowns) due to how the language gets compiled or executed.
I think your first paragraph is wrong. And I think your edit doesn't change that.
Assuming you are requiring that the observed behaviour of an implementation conforms to the time complexity of the algorithm, then...
When calculating the complexity of an algorithm assumptions are made about what operations are constant time. These assumptions are where you're going to find your clues.
Some of the more common assumptions are things like constant time array access, function calls, and arithmetic operations.
If you cannot provide those operations in a language in constant time you cannot reproduce the algorithm in a way that preserves the time complexity.
Reasonable languages can break those assumptions, and sometimes have to if they want to deal with, say, immutable data structures with shared state, concurrency, etc.
For example, Clojure uses trees to represent vectors. This means that access is not constant time (it's O(log32 n) in the size of the vector, which is not constant even though in practice it might as well be).
You can easily imagine a language having to do complicated stuff at runtime when calling a function. For example, deciding which one was meant.
Once upon a time, floating-point and multi-word integer multiplication and division were sadly not constant time (they were implemented in software). During the period when language implementations transitioned to using hardware for these operations, very reasonable implementations behaved very differently.
I'm also pretty sure you can come up with algorithms that fare very poorly in the world of immutable data structures. I've seen some optimisation algorithms that would be horribly difficult, maybe impossible or effectively so, to implement while dealing with immutability without breaking the time complexity.
For what it's worth, there are algorithms out there that assume set union and intersection are constant time... good luck implementing those algorithms in constant time. There are also algorithms that use an 'oracle' that can answer questions in constant time... good luck with those too.
I think that a language can have different basic operations that cost O(1), for example mathematical operations (+, -, *, /), variable/array access (a[i]), function calls, and anything else you can think of.
If a language does not have one of these O(1) operations (such as Brainfuck, which does not have O(1) array access), it cannot do everything C can do with the same complexity; but if a language has more O(1) operations (for example, a language with O(1) array search), it can do more than C.
I think that all "serious" languages have the same basic O(1) operations, so they can solve problems with the same complexity.
If you consider Brainfuck or the Turing machine itself, there is one fundamental operation that takes O(n) time there, although in most other languages it can be done in O(1): indexing an array.
I'm not completely sure about this, but I think you can't have a true array in functional programming either (that is, with O(1) "get element at position" and O(1) "set element at position"). Because of immutability, you can either have a structure that can be changed quickly but takes time to access, or you have to copy large parts of the structure on every change to get fast access. But I guess you could cheat around that using monads.
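A small Python illustration of the trade-off: updating a flat immutable structure means copying it, so a single update costs O(n) even though access stays O(1) (set_immutable is an illustrative name, not a standard function):

# "Setting" one element of an immutable tuple means building a new
# tuple: O(n) per update, versus O(1) for an in-place list assignment.
def set_immutable(tup, i, value):
    return tup[:i] + (value,) + tup[i + 1:]  # copies all n elements

a = (1, 2, 3)
b = set_immutable(a, 1, 9)
assert a == (1, 2, 3) and b == (1, 9, 3)     # the old version survives unchanged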
Looking at things like functional versus imperative, I doubt you'll find any real differences.
Looking at individual languages and implementations is a different story though. Ignoring, for the moment, the examples from Brainfuck and such, there are still some pretty decent examples to find.
I still remember one example from many years ago, writing APL (on a mainframe). The task was to find (and eliminate) duplicates in a sorted array of numbers. At the time, most of the programming I'd done was in Fortran, with a few bits and pieces in Pascal (still the latest and greatest thing at the time) or BASIC. I did what seemed obvious: wrote a loop that stepped through the array, comparing array[i] to array[i+1], and keeping track of a couple of indexes, copying each unique element back the appropriate number of places, depending on how many elements had already been eliminated.
While this would have worked quite well in the languages to which I was accustomed, it was barely short of a disaster in APL. The solution that worked a lot better was based more on what was easy in APL than computational complexity. Specifically, what you did was compare the first element of the array with the first element of the array after it had been "rotated" by one element. Then, you either kept the array as it was, or eliminated the last element. Repeat that until you'd gone through the whole array (as I recall, detected when the first element was smaller than the first element in the rotated array).
The difference was fairly simple: like most APL implementations (at least at the time), this one was a pure interpreter. A single operation (even one that was pretty complex) was generally pretty fast, but interpreting the input file took quite a bit of time. The improved version was much shorter and faster to interpret (e.g., APL provides the "rotate the array" thing as a single, primitive operation so that was only a character or two to interpret instead of a loop).

Is there anything which can only be achieved by recursion?

I am not sure, but I have heard of an algorithm which can only be achieved by recursion. Does anyone know of such a thing?
You can always simulate recursion by keeping your own stack. So no.
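For example, a depth-first tree traversal in which an explicit stack plays the role of the call stack (a Python sketch; the (value, children) node layout is an assumption for illustration):

# Visit every node of a tree without recursion: pushing children onto
# an explicit stack replaces the recursive calls.
def visit_all(root):
    stack = [root]
    out = []
    while stack:
        value, children = stack.pop()
        out.append(value)
        stack.extend(children)
    return out

tree = ("a", [("b", []), ("c", [("d", [])])])
print(visit_all(tree))  # ['a', 'c', 'd', 'b'] -- order depends on stack discipline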
You need to clarify what kind of recursion you are talking about. There's algorithmic recursion and there's recursion in the implementation. It is true that any recursive algorithm allows for a straightforward non-recursive implementation: one can easily do it by using the standard trick of simulating the recursion with a manual stack.
However, your question mentions algorithms only. If one assumes that it is specifically about algorithmic recursion, then the answer is yes: there are algorithms that are inherently and unavoidably recursive. In the general case it is not possible to replace an inherently recursive algorithm with a non-recursive one. The simplest way to build an inherently recursive algorithm is to start from an inherently recursive data structure. For example, say we need to traverse a tree that has only parent-to-child links (and no child-to-parent links). It is impossible to come up with a reasonable non-recursive algorithm to solve this problem.
So, that's one example for you: traversal of a tree, which has only parent-to-child links cannot be performed by a non-recursive algorithm.
Another example of an inherently recursive algorithm is the well-known QuickSort. QuickSort is always recursive, and it cannot be turned into a non-recursive algorithm, simply because if you succeeded in doing so it would no longer be QuickSort. It would be a completely different algorithm. Granted, this sounds like an exercise in pure semantics, but it is nevertheless worth mentioning.
If I remember my algorithms correctly, there is nothing doable by recursion that cannot be done with a stack and a loop. I don't have the formal proof here at my fingertips, however.
Edit: it occurs to me that, possibly, the only thing doable by recursion that is not doable with a stack+loop is a stack overflow?
The following compares a recursive vs non-recursive implementations: http://www.sparknotes.com/cs/recursion/whatisrecursion/section1.html
Excerpt:
Given that recursion is in general less efficient, why would we use it? There are two situations where recursion is the best solution:
The problem is much more clearly solved using recursion: there are many problems where the recursive solution is clearer, cleaner, and much more understandable. As long as the efficiency is not the primary concern, or if the efficiencies of the various solutions are comparable, then you should use the recursive solution.
Some problems are much easier to solve through recursion: there are some problems which do not have an easy iterative solution. Here you should use recursion. The Towers of Hanoi problem is an example of a problem where an iterative solution would be very difficult. We'll look at Towers of Hanoi in a later section of this guide.
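For reference, the classic recursive solution is only a few lines (a Python sketch):

# Move n disks from source to target: move n-1 out of the way onto the
# spare peg, move the largest disk, then move the n-1 back on top.
def hanoi(n, source, target, spare):
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)
    print(f"move disk {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)

hanoi(3, "A", "C", "B")  # 2**3 - 1 = 7 moves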
Are you just looking for a practical example of recursion? Recently my friends and I implemented the Haar Wavelet function (as an exercise to start learning Ruby), which seemed to require recursion. Unless anybody has an implementation of it without recursion?
I would imagine that any time one doesn't know the depth of the structure one is iterating over, recursion is the logical way to go. Sure, it may be doable with some hacked-up loops, but is that good code?

Complexity in Prolog programs?

In Prolog, problems are solved using backtracking. It's a declarative paradigm rather than an imperative one (like C, PHP, or Python). In this kind of language, is it worth thinking in terms of complexity?
The natural way you think about problems seems to be O(N^2), as someone pointed out in this question.
You can definitely analyze the complexity of Prolog programs, just like any other language. That particular problem you linked might be O(n^2). But not all Prolog programs will have this complexity. For example, you can easily write a SAT solver in Prolog, and that problem is NP-Complete.
It entirely depends on the problem.
e.g. summing a list of numbers is O(N), as far as I can tell.
% sum(+List, -Total): Total is the sum of the numbers in List.
sum(List, Total) :-
    sum(List, 0, Total).

% sum/3 threads an accumulator, which makes the predicate tail recursive.
sum([], Total, Total).
sum([Head|Rest], Accumulator, Total) :-
    SoFar is Head + Accumulator,
    sum(Rest, SoFar, Total).
The only actions are the addition ("is") and the recursive call, each of which should count as 1. Both are performed roughly once per item in the list, so the total is about 2N actions, which is O(N).
It's important to analyze complexity in any language, be it Prolog or an imperative language. However, I can give you some tips to speed up your Prolog programs:
Always try to make your programs tail recursive. This ensures that your program doesn't run out of stack.
Try to use cut and fail where you know you don't need any further answers.
Try using accumulators (as in the sum/3 example above).
Check out CLP(FD). It drastically reduces the search space, which speeds up your program: essentially, it eliminates bad choices before your program backtracks into them and wastes time exploring them.
Write the rule for the most likely outcome first. (This really depends on the problem, but generally the best-case rule goes first.)
