Design patterns for converting recursive algorithms to iterative ones - algorithm

Are there any general heuristics, tips, tricks, or common design paradigms that can be employed to convert a recursive algorithm to an iterative one? I know it can be done, I'm wondering if there are practices worth keeping in mind when doing so.

You can often entirely preserve the original structure of a recursive algorithm, but avoid the stack, by employing tail calls and changing to continuation-passing, as suggested by this blog entry. (I should really cook up a better standalone example.)
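For illustration, here is a minimal sketch of the idea in Python (the tree representation and the function name are mine, not from the blog entry). Note that CPython does not actually eliminate tail calls, so in Python this mainly restructures the control flow; in a language with tail-call optimization the stack would stay flat:

# Hypothetical example: summing a binary tree in continuation-passing style.
# Every call is now a tail call; the pending work lives in closures ("continuations")
# instead of in stack frames.
def tree_sum_cps(node, k=lambda total: total):
    if node is None:
        return k(0)                      # nothing left to do: hand 0 to the continuation
    left, value, right = node
    # "after the left subtree is summed, sum the right subtree, then combine"
    return tree_sum_cps(left,
                        lambda ls: tree_sum_cps(right,
                                                lambda rs: k(ls + value + rs)))

tree = ((None, 1, None), 2, (None, 3, None))
print(tree_sum_cps(tree))  # 6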

A common technique that I use when I'm in the process of replacing a recursive algorithm with an iterative one is to use a stack, pushing the parameters that would be passed to the recursive function.
Check the following articles:
Replacing Recursion With a Stack
Stacks and Recursion Elimination (pdf)

A common practice is to manage a LIFO stack that keeps a running list of what "remains to be done", and to handle the whole process in a while loop which continues until the list is empty.
With this pattern, what would be a recursion call in the true recursion model is replaced by
- a pushing of current (partially finished) task's "context" onto the stack,
- a pushing of the new task (the one which prompted recursion) onto the stack
- and to "continue" (i.e. jump to the beginning of) the while loop.
Near the head of the loop, the logic pops the most recently inserted context, and starts work on this basis.
Effectively this merely "moves" information which would otherwise have been kept in nested stack frames on the "system" stack to an application-managed stack container. It is an improvement, however, because this stack container can be allocated anywhere (the recursion limit is typically tied to limits in the "system" stack). Essentially the same work gets done, but the explicit management of a "stack" allows it to take place within a single loop construct rather than recursive calls.
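As a rough sketch of this pattern (the nested-tuple tree representation and the "action" tags are illustrative choices, not the only way to do it), here is an in-order traversal whose pending work is kept on an application-managed stack:

# A sketch of the "LIFO stack of pending work" pattern, assuming a binary tree
# stored as nested (left, value, right) tuples.
def inorder(root):
    result = []
    todo = [("visit", root)]             # what "remains to be done"
    while todo:                          # loop until nothing is pending
        action, item = todo.pop()        # pop the most recently pushed context
        if action == "emit":
            result.append(item)          # here `item` is a value, not a node
        elif item is not None:
            left, value, right = item
            # Push in reverse order so the left subtree is processed first.
            todo.append(("visit", right))    # remaining half of the current task
            todo.append(("emit", value))
            todo.append(("visit", left))     # the task that prompted "recursion"
    return result

print(inorder(((None, 1, None), 2, (None, 3, None))))  # [1, 2, 3]

Each pop resumes a partially finished "task"; pushing in reverse order makes the most recently scheduled work run first, which is exactly what the call stack would have done.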

Often general recursion can be replaced by tail recursion, by collecting partial results in an accumulator and passing it down with the recursive call. Tail recursion is essentially iterative, the recursive call can be implemented as a jump.
For example, the standard general recursive definition of factorial
factorial(n) = if n = 0 then 1 else n * factorial(n - 1)
can be replaced by
factorial(n) = f_iter(n, 1)
and
f_iter(n, a) = if n = 0 then a else f_iter(n - 1, n * a)
which is tail recursive. It is the same as
a = 1;
while (n != 0) {
a = n * a;
n = n - 1;
}
return a;

Have a look at these links for performance examples:
- Recursion VS Iteration (Looping) : Speed & Memory Comparison
- Replace Recursion with Iteration
- Recursion vs Iteration
Q: Is the recursive version usually faster?
A: No -- it's usually slower (due to the overhead of maintaining the stack).
Q: Does the recursive version usually use less memory?
A: No -- it usually uses more memory (for the stack).
Q: Then why use recursion?
A: Sometimes it is much simpler to write the recursive version (but we'll need to wait until we've discussed trees to see really good examples...).
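For a quick, informal check of those claims (illustrative only; the exact numbers depend heavily on the language and runtime), something like the following Python snippet compares a recursive and an iterative factorial:

# Informal micro-benchmark, not a rigorous study.
import timeit

def fact_rec(n):
    return 1 if n <= 1 else n * fact_rec(n - 1)

def fact_iter(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

for f in (fact_rec, fact_iter):
    t = timeit.timeit(lambda: f(500), number=1000)
    print(f.__name__, round(t, 3), "seconds")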

I generally start from the base case (every recursive function has one) and work my way backwards, storing the results in a cache (array or hashtable) if necessary.
Your recursive function solves a problem by solving smaller subproblems and using their results to solve the bigger instance of the problem. Each subproblem is broken down further, and so on, until the subproblem is so small that the solution is trivial (i.e. the base case).
The idea is to start at the base case (or base cases) and use that to build the solution for larger cases, then use those to build even larger cases, and so on, until the whole problem is solved. This does not require a stack, and can be done with loops.
A simple example (in Python):
#recursive version
def fib(n):
    if n == 0 or n == 1:
        return n
    else:
        return fib(n-1) + fib(n-2)

#iterative version
def fib2(n):
    if n == 0 or n == 1:
        return n
    prev1, prev2 = 0, 1  # start from the base case
    for i in range(n):
        cur = prev1 + prev2  # build the next case from the previous solutions
        prev1, prev2 = cur, prev1
    return cur

One pattern is Tail Recursion:
A function call is said to be tail recursive if there is nothing to do after the function returns except return its value.
Wiki.
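A small illustrative Python example of the distinction being quoted above (the list-length functions are mine, using slicing for brevity):

def length_not_tail(xs):
    if not xs:
        return 0
    # NOT a tail call: the "1 +" still has to run after the recursive call returns
    return 1 + length_not_tail(xs[1:])

def length_tail(xs, acc=0):
    if not xs:
        return acc
    # tail call: nothing is left to do except return the call's value
    return length_tail(xs[1:], acc + 1)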

Related

What is the advantage of using tail recursion here?

I have been reading articles describing how space complexity of quicksort can be reduced by using the tail recursive version but I am not able to understand how this is so. Following are the two versions :
QUICKSORT(A, p, r)
    q = PARTITION(A, p, r)
    QUICKSORT(A, p, q-1)
    QUICKSORT(A, q+1, r)

TAIL-RECURSIVE-QUICKSORT(A, p, r)
    while p < r
        q = PARTITION(A, p, r)
        TAIL-RECURSIVE-QUICKSORT(A, p, q-1)
        p = q+1
(Source - http://mypathtothe4.blogspot.com/2013/02/lesson-2-variations-on-quicksort-tail.html)
As far as I understand, both of these would cause recursive calls on both the left and right halves of the array. In both cases, only one half would be processed at a time, and therefore at any time only one recursive call would be using the stack space. I am unable to see how the tail recursive quicksort saves space.
The pseudo code above is taken from the article - http://mypathtothe4.blogspot.com/2013/02/lesson-2-variations-on-quicksort-tail.html
The explanation provided in the article confuses me even more -
Quicksort partitions a given sub-array and proceeds to recurse twice; one on the left sub-array and one on the right. Each of these recursive calls will require its own individual stream of stack space. This space is used to store the indexing variables for the array at some level of recursion. If we picture this occurring from beginning to end of execution, we can see that the stack space doubles at each layer.
So how does Tail-Recursive-Quicksort fix all of this?
Well, instead of recursing on two sub-arrays, we now only recurse on one. This eliminates the need for doubling stack space at every layer of execution. We get around this problem by using the while loop as an iterative control that performs the same task. Instead of needing the stack to save sets of variables for two recursive calls, we simply alter the same set of variables and use the single recursive call on new variables.
I don't see how the stack space doubles at every layer of execution in the case of a regular quicksort.
Note :- There is no mention of compiler optimization in the article.
A tail recursive function call allows the compiler to perform a special optimization which it normally can not with regular recursion. In a tail recursive function, the recursive call is the very last thing to be executed. In this case, instead of allocating a stack frame for each call, the compiler can rework the code to simply reuse the current stack frame, meaning a tail-recursive function will only use a single stack frame as opposed to hundreds or even thousands.
This optimization is possible because the compiler knows that once the tail recursive call is made, no previous copies of variables will be needed, because there is no more code to execute. If, for instance, a print statement followed a recursive call, the compiler would need to know the value of the variable to be printed after the recursive call returns, and thus the stack frame cannot be reused.
Here's the wiki page if you'd like more information on how this "space saving" and stack reuse actually works, along with examples: Tail Call
Edit: I didn't explain how this applies to quicksort, did I? Well, some terms are thrown around in that article which make everything all confusing (and some of it is just plain wrong). The first function given (QUICKSORT) makes a recursive call on the left, a recursive call on the right, and then exits. Notice that the recursive call on the right is the very last thing that happens in the function. If the compiler supports tail recursive optimization (explained above), only the left calls create new stack frames; all the right calls just reuse the current frame. This can save some stack frames, but can still suffer from the case where the partitioning creates a sequence of calls where tail recursion optimization doesn't matter. Plus, even though right-side calls use the same frame, the left-side calls called within the right-side calls still use the stack. In the worst case, the stack depth is N.
The second version described is not a tail recursive quicksort, but rather a quicksort where only the left sorting is done recursively, and the right sorting is done using the loop. In fact, this quicksort (as previously described by another user) cannot have the tail recursion optimization applied to it, because the recursive call is not the last thing to execute.
How does this work? When implemented correctly, the first call to quicksort is the same as a left-side call in the original algorithm. However, no right-side recursive calls are ever made. How? The loop takes care of that: instead of sorting "left then right", it sorts the left with a call, then sorts the right by continually sorting only the lefts of the right. It sounds ridiculous, but it's basically just sorting so many lefts that the rights become single elements and don't need to be sorted. This effectively removes the right recursion, making the function less recursive (pseudo-recursive, if you will).
However, the real implementation does not choose just the left side each time; it chooses the smaller side. The idea is still the same: it only does a recursive call on one side instead of both. Picking the shorter side ensures that the stack depth can never be larger than log2(N), which is the depth of a proper binary tree, because the shorter side is always at most half the size of the current array section. The implementation given by the article does not ensure this, however, because it can suffer from the same worst-case scenario of "left is the whole tree". This article actually gives a pretty good explanation of it if you're willing to do more reading: Efficient selection and partial sorting based on quicksort
The advantage, the whole point of the "mixed recursive/iterative" version, i.e. version that processes one sub-range by recursion and another sub-range by iteration, is that by choosing which of the two sub-ranges to process recursively you can guarantee that the depth of recursion will never exceed log2 N, regardless of how bad the pivot choice is.
For the TAIL-RECURSIVE-QUICKSORT pseudocode provided in the question, where the recursive processing is performed first by a literal recursive call, that recursive call should be given the shorter sub-range. That by itself will make sure that the recursion depth is limited by log2 N. So, in order to achieve the recursion-depth guarantee, the code absolutely has to compare the lengths of the sub-ranges before deciding which sub-range to process by recursive call.
A proper implementation of that approach might look as follows (borrowing your pseudocode as a starting point)
HALF-RECURSIVE-QUICKSORT(A, p, r)
    while p < r
        q = PARTITION(A, p, r)
        if (q - p < r - q)
            HALF-RECURSIVE-QUICKSORT(A, p, q-1)
            p = q+1
        else
            HALF-RECURSIVE-QUICKSORT(A, q+1, r)
            r = q-1
The TAIL-RECURSIVE-QUICKSORT pseudocode you provided does not make any attempt to compare the lengths of the sub-ranges. In that case it provides no benefit whatsoever. And no, it is not really "tail recursive": QuickSort cannot possibly be reduced to a tail-recursive algorithm.
If you do a Google search on the terms "qsort loguy higuy", you will easily find numerous instances of another popular QuickSort implementation (C standard library style) based on the very same idea of using recursion for only one of the two sub-ranges. That implementation, for 32-bit platforms, uses an explicit stack of maximum depth ~32, specifically because it can guarantee that the recursion depth will never get higher than that. (Similarly, 64-bit platforms only need a stack depth of ~64.)
The QUICKSORT version that makes two literal recursive calls is significantly worse in that regard, since repeated bad pivot choices can make it reach very high recursion depth, up to N in the worst case. With two recursive calls you cannot guarantee that the recursion depth will be limited by log2 N. A smart compiler might be able to replace the trailing call to QUICKSORT with iteration, i.e. turn your QUICKSORT into your TAIL-RECURSIVE-QUICKSORT, but it won't be smart enough to perform the sub-range length comparison.
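As an illustration of this point, here is a rough Python sketch of the HALF-RECURSIVE-QUICKSORT idea, using a simple Lomuto partition (the partition scheme is my choice for brevity). The recursive call always gets the shorter sub-range, so the recursion depth stays within about log2 N even with bad pivots:

def partition(a, p, r):
    pivot = a[r]
    i = p - 1
    for j in range(p, r):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]
    return i + 1

def half_recursive_quicksort(a, p=0, r=None):
    if r is None:
        r = len(a) - 1
    while p < r:
        q = partition(a, p, r)
        if q - p < r - q:                       # left side is shorter
            half_recursive_quicksort(a, p, q - 1)
            p = q + 1                           # keep looping on the right side
        else:                                   # right side is shorter (or equal)
            half_recursive_quicksort(a, q + 1, r)
            r = q - 1                           # keep looping on the left side

data = [5, 2, 9, 1, 7, 3]
half_recursive_quicksort(data)
print(data)  # [1, 2, 3, 5, 7, 9]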
Advantage of using tail recursion: the compiler can optimize the code and convert it to non-recursive code.
Advantage of non-recursive code over recursive code: the non-recursive code requires less memory to execute, because of the idle stack frames that recursion consumes.
Here comes the interesting part: even though compilers can theoretically perform that optimization, in practice they often don't. Even widespread platforms like .NET and Java don't perform it.
One problem that all code optimizations face is the sacrifice in debuggability: the optimized code no longer corresponds to the source code, so stack traces and exception details can't be easily understood. High-performance code or scientific applications are one thing, but for the majority of consumer applications debugging is required even after release, hence optimizations are not done that vigorously.
please see:
https://stackoverflow.com/q/340762/1043824
Why doesn't .NET/C# optimize for tail-call recursion?
https://stackoverflow.com/a/3682044/1043824
There seems to be some vocabulary confusion here.
The first version is tail-recursive, because the last statement is a recursive call:
QUICKSORT(A, p, r)
    q = PARTITION(A, p, r)
    QUICKSORT(A, p, q-1)
    QUICKSORT(A, q+1, r)
If you apply the tail-recursion optimization, which is to convert the recursion into a loop, you get the second one, which is no longer tail-recursive:
TAIL-RECURSIVE-QUICKSORT(A, p, r)
    while p < r
        q = PARTITION(A, p, r)
        TAIL-RECURSIVE-QUICKSORT(A, p, q-1)
        p = q+1
The benefit of this is that usually, you will need less stack memory. Why is that? To understand, imagine you want to sort an array with 31 items. In the highly unlikely case that all partitions are perfect, i.e. they split the array right in the middle, your recursion depth would be 4. Indeed, the first split would yield two partitions of 15 items, the second one two partitions of 7 items, the third one two of 3 items, and after the fourth everything is sorted.
But partitions are rarely this perfect. As a result, not all recursions go equally deep. In our example, you might have some that are only three levels deep, and some that are 7 or more (worst case is 30). By eliminating half of the recursions, you have a fair chance that your maximum recursion depth will be less.
As AndreyT pointed out, often the ranges are compared to make sure that the biggest partition is always processed iteratively, and the smallest one recursively. That guarantees the smallest possible recursion depth for a given input and pivot selection strategy.
But this is not always the case. Sometimes, people want to yield results as soon as possible, or want to find and sort only the first n elements. In these cases they always want to sort the first partition before the second one. Even in this situation, eliminating the tail recursion usually improves memory usage, and never makes it worse.
I don't know if this is the correct place to ask this or whether I should have posted a new question, but I have a similar doubt.
void print(int n) {
    if (n < 0) return;
    cout << " " << n;
    // The last executed statement is a recursive call
    print(n-1);
    print(n-1);
}
Is this tail recursive ?
Tail call elimination is an optimization performed by modern compilers on tail-recursive calls.
When the caller/parent function has nothing left to do in the unwinding stage after the child call finishes - i.e. the last thing it does is the recursive call itself - the compiler can replace the call with a goto and a label.
example: Our version -> Prints n to 1
void fun(int n){
    if(n==0)
        return;
    printf("%d\n",n);
    fun(n-1);
}
after optimization->
void fun(int n){
start:
    if(n==0)
        return;
    printf("%d\n",n);
    n=n-1;
    goto start;
}
Advantages of this optimization:
1. Uses very few stack frames for tail-recursive calls.
2. Consumes less memory.
3. No need to save the procedure activation record, which reduces function-call overhead.
4. No more segmentation faults in C/C++ or stack overflows in Java.

Is there a recursive function / algorithm where using stack frames is unavoidable (cannot be made fully tail recursive)?

I'm trying to come up with examples of recursive algorithms / functions that cannot be rewritten in a way that avoids using lots of stack memory (i.e. cannot be made fully tail recursive nor rewritten using a loop that doesn't use a stack). Do such functions exist?
I think quicksort might be a candidate, but am not sure if it can be rewritten to use a single tail-recursive function call.
Every algorithm that makes more than one recursive call in the general case cannot be done without a backtracking stack - either an explicit stack for the iterative version or a growing call stack - because the first call is not in tail position and always has a continuation pending.
Simple Fibonacci, quicksort and the sum of a tree are all in that category (a sketch of the tree-sum case follows below).
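To make the tree-sum case concrete, here is an illustrative Python sketch (the nested-tuple tree representation is mine): the first recursive call is not in tail position, so a direct iterative rewrite still needs an explicit, growing stack.

def tree_sum_recursive(node):
    if node is None:
        return 0
    left, value, right = node
    return tree_sum_recursive(left) + value + tree_sum_recursive(right)

def tree_sum_iterative(node):
    total, stack = 0, [node]
    while stack:                      # the explicit stack plays the role of the call stack
        node = stack.pop()
        if node is not None:
            left, value, right = node
            total += value
            stack.append(right)       # pending work we would otherwise "return to"
            stack.append(left)
    return total

tree = ((None, 1, None), 2, (None, 3, None))
print(tree_sum_recursive(tree), tree_sum_iterative(tree))  # 6 6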

recursion versus iteration

Is it correct to say that everywhere recursion is used a for loop could be used? And if recursion is usually slower what is the technical reason for ever using it over for loop iteration?
And if it is always possible to convert a recursion into a for loop is there a rule of thumb way to do it?
Recursion is usually much slower because all function calls must be stored in a stack to allow the return back to the caller functions. In many cases, memory has to be allocated and copied to implement scope isolation.
Some optimizations, like tail call optimization, make recursions faster but aren't always possible, and aren't implemented in all languages.
The main reasons to use recursion are
that it's more intuitive in many cases when it mimics our approach of the problem
that some data structures like trees are easier to explore using recursion (or would need stacks in any case)
Of course every recursion can be modeled as a kind of loop: that's what the CPU will ultimately do. And the recursion itself, more directly, means putting the function calls and scopes on a stack. But changing your recursive algorithm to a looping one might need a lot of work and make your code less maintainable: as with every optimization, it should only be attempted when profiling or other evidence shows it to be necessary.
Is it correct to say that everywhere recursion is used a for loop could be used?
Yes, because recursion in most CPUs is modeled with loops and a stack data structure.
And if recursion is usually slower what is the technical reason for using it?
It is not "usually slower": it's recursion that is applied incorrectly that's slower. On top of that, modern compilers are good at converting some recursions to loops without even being asked.
And if it is always possible to convert a recursion into a for loop is there a rule of thumb way to do it?
Write iterative programs for algorithms best understood when explained iteratively; write recursive programs for algorithms best explained recursively.
For example, searching binary trees, running quicksort, and parsing expressions in many programming languages are often explained recursively. These are best coded recursively as well. On the other hand, computing factorials and calculating Fibonacci numbers are much easier to explain in terms of iteration. Using recursion for them is like swatting flies with a sledgehammer: it is not a good idea, even when the sledgehammer does a really good job at it+.
+ I borrowed the sledgehammer analogy from Dijkstra's "Discipline of Programming".
Question :
And if recursion is usually slower what is the technical reason for ever using it over for loop iteration?
Answer :
Because some algorithms are hard to solve iteratively. Try to solve depth-first search both recursively and iteratively. You will get the idea that it is plain hard to solve DFS with iteration.
Another good thing to try out: write merge sort iteratively. It will take you quite some time.
Question :
Is it correct to say that everywhere recursion is used a for loop could be used?
Answer :
Yes. This thread has a very good answer for this.
Question :
And if it is always possible to convert a recursion into a for loop is there a rule of thumb way to do it?
Answer :
Trust me. Try to write your own version of depth-first search iteratively. You will notice that some problems are easier to solve recursively (a small sketch of both versions follows below).
Hint : Recursion is good when you are solving a problem that can be solved by divide and conquer technique.
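For comparison, here is a minimal sketch of both DFS versions in Python for plain reachability (the graph and names are illustrative; richer traversals such as post-order need noticeably more bookkeeping in the iterative form, which is the point being made above):

graph = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}

def dfs_recursive(node, visited=None):
    if visited is None:
        visited = set()
    visited.add(node)
    for neighbour in graph[node]:
        if neighbour not in visited:
            dfs_recursive(neighbour, visited)
    return visited

def dfs_iterative(start):
    visited, stack = set(), [start]
    while stack:                      # the explicit stack replaces the call stack
        node = stack.pop()
        if node not in visited:
            visited.add(node)
            stack.extend(graph[node])
    return visited

print(dfs_recursive("a") == dfs_iterative("a"))  # True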
Besides being slower, recursion can also result in stack overflow errors depending on how deep it goes.
To write an equivalent method using iteration, we must explicitly use a stack. The fact that the iterative version requires a stack for its solution indicates that the problem is difficult enough that it can benefit from recursion. As a general rule, recursion is most suitable for problems that cannot be solved with a fixed amount of memory and consequently require a stack when solved iteratively.
Having said that, recursion and iteration can produce the same outcome while following different patterns. Deciding which method works better is a case-by-case matter, and best practice is to choose based on the pattern the problem follows.
For example, to find the nth triangular number of
Triangular sequence: 1 3 6 10 15 …
Two programs that find the n-th triangular number:
Using an iterative algorithm:
//Triangular.java
import java.util.*;

class Triangular {
    public static int iterativeTriangular(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++)
            sum += i;
        return sum;
    }

    public static void main(String args[]) {
        Scanner stdin = new Scanner(System.in);
        System.out.print("Please enter a number: ");
        int n = stdin.nextInt();
        System.out.println("The " + n + "-th triangular number is: " +
                iterativeTriangular(n));
    }
}
Using a recursive algorithm:
//Triangular.java
import java.util.*;

class Triangular {
    public static int recursiveTriangular(int n) {
        if (n == 1)
            return 1;
        return recursiveTriangular(n-1) + n;
    }

    public static void main(String args[]) {
        Scanner stdin = new Scanner(System.in);
        System.out.print("Please enter a number: ");
        int n = stdin.nextInt();
        System.out.println("The " + n + "-th triangular number is: " +
                recursiveTriangular(n));
    }
}
Yes, as said by Thanakron Tandavas,
Recursion is good when you are solving a problem that can be solved by divide and conquer technique.
For example: Towers of Hanoi
- N rings of increasing size
- 3 poles
- Rings start stacked on pole 1. The goal is to move the rings so that they are stacked on pole 3... but:
- you can only move one ring at a time,
- and you can't put a larger ring on top of a smaller one.
Iterative solution is “powerful yet ugly”; recursive solution is “elegant”.
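For reference, a sketch of the recursive solution being called "elegant" here (the pole labels are arbitrary): move the top n-1 rings out of the way, move the largest ring, then move the n-1 rings back on top of it.

def hanoi(n, source="1", target="3", spare="2"):
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)   # clear the way
    print(f"move ring {n} from pole {source} to pole {target}")
    hanoi(n - 1, spare, target, source)   # put the smaller rings back on top

hanoi(3)  # prints the 7 moves for three rings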
I seem to remember my computer science professor say back in the day that all problems that have recursive solutions also have iterative solutions. He says that a recursive solution is usually slower, but they are frequently used when they are easier to reason about and code than iterative solutions.
However, in the case of more advanced recursive solutions, I don't believe it will always be possible to implement them using a simple for loop.
Most of the answers seem to assume that iterative = for loop. If your for loop is unrestricted (a la C, you can do whatever you want with your loop counter), then that is correct. If it's a real for loop (say as in Python or most functional languages where you cannot manually modify the loop counter), then it is not correct.
All (computable) functions can be implemented both recursively and using while loops (or conditional jumps, which are basically the same thing). If you truly restrict yourself to for loops, you will only get a subset of those functions (the primitive recursive ones, if your elementary operations are reasonable). Granted, it's a pretty large subset which happens to contain every single function you're likely to encounter in practice.
What is much more important is that a lot of functions are very easy to implement recursively and awfully hard to implement iteratively (manually managing your call stack does not count).
Recursion + memoization can lead to a more efficient solution compared with a pure iterative approach, e.g. check this:
http://jsperf.com/fibonacci-memoized-vs-iterative-for-large-n
Short answer: the trade-off is that recursion is faster and for loops take up less memory in almost all cases. However, there are usually ways to change the for loop or the recursion to make it run faster.

Can all recursive functions be re-written as tail-recursions? [duplicate]

Possible Duplicate:
Are there problems that cannot be written using tail recursion?
From my understanding, tail recursion is an optimization you can use when a recursive call does not need information from the recursive calls that it will spawn.
Is it possible then to implement all recursive functions using tail-recursion? What about something like DFS, where you need the innermost child to return before the parent can?
It depends on exactly what you are asking.
If you want to keep all functions as functions (no mutable state) with the same signatures, then no. The most obvious example is the quicksort, where both calls can't be tail calls.
If you can modify the function in various ways, then yes. Sometimes a local modification is sufficient - often you can add an "accumulator" that builds some expression that is returned, although, if the result involves non-commutative operations, then you need to be careful (for example, when naively constructing linked lists, the order is reversed) or you can add a stack.
Alternatively, you can do a global modification of the entire program, in which each function takes as an extra argument the function that contains future actions. This is the continuation passing that Pete is talking about.
If you are working by hand then the local modification is often fairly easy. But if you're doing automated rewriting (in a compiler for example) then it's simpler to take the global approach (it requires less "smarts").
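As a small illustration of the accumulator rewrite and the ordering caveat mentioned above (the list-copy example is mine): accumulating by prepending, which is what naive cons-style list building amounts to, comes out reversed.

def copy_list(xs):
    # plain recursion: the element is attached *after* the recursive call returns
    if not xs:
        return []
    return [xs[0]] + copy_list(xs[1:])

def copy_list_acc(xs, acc=None):
    # tail recursive with an accumulator: nothing happens after the call
    if acc is None:
        acc = []
    if not xs:
        return acc
    return copy_list_acc(xs[1:], [xs[0]] + acc)   # "cons" onto the accumulator

print(copy_list([1, 2, 3]))      # [1, 2, 3]
print(copy_list_acc([1, 2, 3]))  # [3, 2, 1] -- reversed, as the answer warns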
Yes and no.
Yes, used in conjunction with other control flow mechanisms (e.g., continuation passing) you can express any arbitrary control flow as tail recursion.
No, it is not possible to express all recursion as tail recursion unless you do supplement the tail recursion with other control flow mechanisms.
All programs can be rewritten as tail calls using continuation passing. Simply add one parameter to the tail call representing the continuation of the current execution.
Any Turing-complete language can perform the same transformation that continuation passing provides - create a Gödel number for the program and the input parameters of the point the non-tail call returns to, and pass that as a parameter to the tail call - though obviously environments where this is done for you with a continuation, co-routine or other first-class construct make it much easier.
CPS is used as a compiler optimisation and I have previously written interpreters using continuation passing. The Scheme programming language is designed to allow it to be implemented in such a fashion, with requirements in the standard for tail call optimisation and first-class continuations.
Yes you can. The transformation usually involves maintaining the necessary information explicitly, which would otherwise be maintained for us implicitly spread among the execution stack's call frames during run time.
As simple as that. Whatever the run time system does during execution implicitly, we can do explicitly by ourselves. There's no big mystery here. PCs are made of silicon and copper and steel.
It is trivial to implement DFS as a loop with an explicit worklist of states/positions/nodes to process. It is in fact defined that way - DFS replaces the popped first entry of the worklist with all the arcs coming from it (so the worklist behaves as a stack); BFS adds those arcs to the end of the worklist instead (so it behaves as a queue).
The continuation-passing style transformation leaves all the function calls in a program as tail calls after it is done. This is a simple fact of life. The continuations used will grow and shrink, but calls will all be tail calls.
We can further reify the state of process spread in continuations, as explicitly maintained data on the heap. What this achieves in the end, is explication and reification, moving implicit stuff on stack to the explicit stuff living in the heap, simplifying and demystifying the control flow.
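A small Python sketch of that worklist idea (the graph is illustrative): the same loop becomes depth-first or breadth-first depending only on which end of the worklist the next item is taken from.

from collections import deque

graph = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}

def traverse(start, depth_first=True):
    visited, worklist, order = set(), deque([start]), []
    while worklist:
        # pop from the back for DFS (stack), from the front for BFS (queue)
        node = worklist.pop() if depth_first else worklist.popleft()
        if node not in visited:
            visited.add(node)
            order.append(node)
            worklist.extend(graph[node])
    return order

print(traverse("a", depth_first=True))   # ['a', 'c', 'b', 'd']
print(traverse("a", depth_first=False))  # ['a', 'b', 'c', 'd']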
I don't know if all recursive functions can be rewritten to be tail-recursive, but many of them can. One standard method of doing this is to use an accumulator. For example, the factorial function can be written (in Common Lisp) like so:
(defun factorial (n)
  (if (<= n 1)
      1
      (* n (factorial (1- n)))))
This is recursive, but not tail recursive. It can be made tail-recursive by adding an accumulator argument:
(defun factorial-accum (accum n)
  (if (<= n 1)
      accum
      (factorial-accum (* n accum) (1- n))))
Factorials can be calculated by setting the accumulator to 1. For example, the factorial of 3 is:
(factorial-accum 1 3)
Whether all recursive functions can be rewritten as tail-recursive functions using methods like this isn't clear to me, though. But certainly many functions can be.
A recursive algorithm is an algorithm implemented in accordance with a Divide & Conquer strategy, where solving each intermediate sub-problem produces 0, 1 or more new smaller sub-problems. If these sub-problems are solved in LIFO order, you get a classic recursive algorithm.
Now, if your algorithm is known to produce only 0 or 1 sub-problems at each step, then it can easily be implemented through tail recursion. In fact, such an algorithm can easily be rewritten as an iterative algorithm and implemented by a simple loop. (Needless to add, tail recursion is just another, less explicit, way to implement iteration.)
A schoolbook example of such a recursive algorithm is the recursive approach to factorial calculation: to calculate n! you need to calculate (n-1)! first, i.e. at each recursive step you discover only one smaller sub-problem. This is the property that makes it so easy to turn the factorial computation into a truly iterative (or tail-recursive) one.
However, if you know that in the general case the number of sub-problems generated at each step is more than 1, then your algorithm is substantially recursive. It cannot be rewritten as an iterative algorithm, and it cannot be implemented through tail recursion. Any attempt to implement such an algorithm in an iterative or tail-recursive fashion will require additional LIFO storage of non-constant size for "pending" sub-problems. Such an implementation would simply obfuscate the unavoidable recursive nature of the algorithm by implementing recursion manually.
For example, such a simple problem as traversal of a binary tree with parent->child links (and no child->parent links) is a substantially recursive problem. It cannot be done by a tail-recursive algorithm, and it cannot be done by an iterative algorithm without that kind of auxiliary storage.
No, it can be done "naturally" only for functions with one recursive call. For two or more recursive calls you can of course mimic the stack frames yourself, but it will be very ugly and effectively won't be tail recursive in the sense of optimizing memory.
The point of tail recursion is that you don't want to come back to the parent's stack frame. So you simply pass that information on to the child call, which can completely replace the parent's frame instead of growing the stack.

Is there a problem that has only a recursive solution? [duplicate]

Possible Duplicates:
Is there a problem that has only a recursive solution?
Can every recursion be converted into iteration?
“Necessary” Uses of Recursion in Imperative Languages
Is there a problem that has only a recursive solution, that is, a problem that has a recursive solution, but for which an iterative solution has yet to be found, or better yet, has been proven not to exist (obviously, such a solution would not be tail-recursive)?
Replace function calls with pushing arguments onto a stack, and returns with popping off the stack, and you've eliminated recursion.
Edit: in response to "using a stack does not decrease space costs"
If a recursive algorithm can run in constant space, it can be written in a tail-recursive manner. If it is written in tail-recursive form, then any decent compiler can collapse the stack. However, this means the "convert function calls to explicit stack pushes" method also takes constant space. As an example, let's take factorial.
factorial:
def fact_rec(n):
    ' Textbook factorial function '
    if n < 2: return 1
    else: return n * fact_rec(n-1)

def fact_tail(n, product=1):
    ' Tail-recursive factorial function '
    if n < 2: return product
    else: return fact_tail(n-1, product*n)

def fact_stack(n):
    ' Explicit stack -- otherwise the same as the tail-recursive function '
    stack, product = [n], 1
    while len(stack):
        n = stack.pop()
        if n < 2: pass
        else:
            stack.append(n-1)
            product *= n
    return product
Because the stack.pop() follows the stack.append() in the loop, the stack never has more than one item in it, and so it fulfills the constant-space requirement. If you imagine using a temp variable instead of a 1-length stack, it becomes your standard iterative factorial algorithm.
Of course, there are recursive functions that can't be written in tail-recursive form. You can still convert them to iterative form with some kind of stack, but I'd be surprised if there were any guarantees on space complexity.
In response to the Ackermann function answer, this is a pretty straightforward convert-the-call-stack-into-a-real-stack problem. It also shows one benefit of the iterative version.
On my platform (Python 3.1rc2/Vista32) the iterative version calculates ack(3,7) = 1021 fine, while the recursive version overflows the stack. NB: it didn't overflow on Python 2.6.2/Vista64 on a different machine, so it seems to be rather platform-dependent.
(Community wiki because this is really a comment to another answer [if only comments supported code formatting .... ])
def ack(m, n):
    s = [m]
    while len(s):
        m = s.pop()
        if m == 0:
            n += 1
        elif n == 0:
            s.append(m-1)
            n = 1
        else:
            s.append(m-1)
            s.append(m)
            n -= 1
    return n
The Ackermann function cannot be expressed without recursion
edit: As noted in another answer, this is incorrect.
You can define a Turing Machine without recursion (right?) So recursion isn't required for a language to be Turing-complete.
In programming, recursion is really a special case of iteration - one where you use the call stack as a special means of storing state. You can rewrite any recursive method to be an iterative one. It may be more complicated or less elegant, but it's equivalent.
In mathematics, there are certain problems that require recursive techniques to arrive at an answer - some examples are finding roots (Newton's Method), computing primes, graph optimization, etc. However, even here, there's just a question of how you differentiate between the terms "iteration" and "recursion".
EDIT: As others have pointed out, there exist many functions whose definition is recursive - e.g. the Ackermann function. However, this does not mean that they cannot be computed using iterative constructs - so long as you have a Turing-complete operation set and unlimited memory.
All non np-complete problems can be solved with just sequence, decision, and iteration. Recursion should not be required, though it usually greatly simplifies the problem.
It comes down to how many lines of code it is going to take to solve the problem...
List all the files on your C:\ using recursion and then without. Sure you can do it both ways - but one way is going to be a lot simpler to understand and debug.
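For instance, a rough sketch of both versions in Python with the standard os module (the root path is up to you, e.g. C:\ on Windows; error handling is omitted):

import os

def list_files_recursive(root):
    names = []
    for entry in os.listdir(root):
        path = os.path.join(root, entry)
        if os.path.isdir(path):
            names.extend(list_files_recursive(path))   # recurse into the subdirectory
        else:
            names.append(path)
    return names

def list_files_iterative(root):
    names, pending = [], [root]
    while pending:                                     # explicit stack of directories
        directory = pending.pop()
        for entry in os.listdir(directory):
            path = os.path.join(directory, entry)
            if os.path.isdir(path):
                pending.append(path)
            else:
                names.append(path)
    return names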
No. Recursion is nothing more than a stack and you can achieve the same results by implementing a stack explicitly.
That may not be an especially satisfying answer, but you'd have to ask a much more specific question to get a better answer. For example, theory dictates that in the levels of computing there is quite a difference in the range of problems that can be solved if you have a while loop vs. only having (traditional) for loops.
I put "traditional" in there because they really mean loops that iterate a certain number of times whereas C style for (...;...;...) loops are while loops in disguise.

Resources