Recursion or Iteration? - performance
Is there a performance hit if we use a loop instead of recursion, or vice versa, in algorithms where both can serve the same purpose? E.g., checking whether a given string is a palindrome.
I have seen many programmers use recursion as a way to show off when a simple iterative algorithm would fit the bill.
Does the compiler play a vital role in deciding what to use?
Loops may achieve a performance gain for your program. Recursion may achieve a performance gain for your programmer. Choose which is more important in your situation!
It is possible that recursion will be more expensive, depending on whether the recursive function is tail recursive (i.e., the last operation is the recursive call). Tail recursion can be recognized by the compiler and optimized into its iterative counterpart (while maintaining the concise, clear implementation you have in your code).
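To make that concrete with the palindrome example from the question, here is a rough Python sketch (my own illustration, not anything prescribed by the question). Note that CPython does not actually perform tail-call optimization, so the tail-recursive shape only pays off in languages or compilers that do:

def is_palindrome_iter(s):
    # plain loop: constant extra space
    i, j = 0, len(s) - 1
    while i < j:
        if s[i] != s[j]:
            return False
        i += 1
        j -= 1
    return True

def is_palindrome_tail(s, i=0, j=None):
    # tail-recursive shape: the recursive call is the very last step,
    # so a tail-call-optimizing compiler could turn it into the loop above
    if j is None:
        j = len(s) - 1
    if i >= j:
        return True
    if s[i] != s[j]:
        return False
    return is_palindrome_tail(s, i + 1, j - 1)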
I would write the algorithm in the way that makes the most sense and is the clearest for the poor sucker (be it yourself or someone else) that has to maintain the code in a few months or years. If you run into performance issues, then profile your code, and then and only then look into optimizing by moving over to an iterative implementation. You may want to look into memoization and dynamic programming.
Comparing recursion to iteration is like comparing a Phillips-head screwdriver to a flat-head screwdriver. For the most part you could remove any Phillips-head screw with a flat head, but it would just be easier if you used the screwdriver designed for that screw, right?
Some algorithms just lend themselves to recursion because of the way they are designed (the Fibonacci sequence, traversing a tree-like structure, etc.). Recursion makes the algorithm more succinct and easier to understand (and therefore shareable and reusable).
Also, some recursive algorithms use "lazy evaluation", which makes them more efficient than their iterative counterparts. This means they only do the expensive calculations at the time the results are needed, rather than on every pass of the loop.
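For a rough feel of what lazy evaluation buys you outside Haskell, here is a small Python sketch using a generator (my own illustration, not from the articles linked below): the expensive work only happens for the items that are actually consumed.

import time

def expensive_results(items):
    # a generator: nothing is computed until a value is requested
    for item in items:
        time.sleep(0.1)        # stand-in for an expensive calculation
        yield item * item

# only the first three calculations are ever performed
for value in expensive_results(range(1_000_000)):
    print(value)
    if value >= 4:
        break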
That should be enough to get you started. I'll dig up some articles and examples for you too.
Link 1: Haskell vs PHP (Recursion vs Iteration)
Here is an example where the programmer had to process a large data set using PHP. He shows how easy it would have been to handle in Haskell using recursion, but since PHP had no easy way to accomplish the same approach, he was forced to use iteration to get the result.
http://blog.webspecies.co.uk/2011-05-31/lazy-evaluation-with-php.html
Link 2: Mastering Recursion
Most of recursion's bad reputation comes from its high cost and inefficiency in imperative languages. The author of this article talks about how to optimize recursive algorithms to make them faster and more efficient. He also goes over how to convert a traditional loop into a recursive function and the benefits of tail recursion. His closing words really summed up some of my key points, I think:
"recursive programming gives the programmer a better way of organizing
code in a way that is both maintainable and logically consistent."
https://developer.ibm.com/articles/l-recurs/
Link 3: Is recursion ever faster than looping? (Answer)
Here is a link to an answer to a Stack Overflow question that is similar to yours. The author points out that a lot of the benchmarks associated with recursion or looping are very language specific. Imperative languages are typically faster with a loop and slower with recursion, and vice versa for functional languages. I guess the main point to take from this link is that it is very difficult to answer the question in a language-agnostic, situation-blind sense.
Is recursion ever faster than looping?
Recursion is more costly in memory, as each recursive call generally requires a memory address to be pushed to the stack - so that later the program could return to that point.
Still, there are many cases in which recursion is a lot more natural and readable than loops - like when working with trees. In these cases I would recommend sticking to recursion.
Typically, one would expect the performance penalty to lie in the other direction. Recursive calls can lead to the construction of extra stack frames; the penalty for this varies. Also, in some languages like Python (more correctly, in some implementations of some languages...), you can run into stack limits rather easily for tasks you might specify recursively, such as finding the maximum value in a tree data structure. In these cases, you really want to stick with loops.
Writing good recursive functions can reduce the performance penalty somewhat, assuming you have a compiler that optimizes tail recursions, etc. (Also double check to make sure that the function really is tail recursive---it's one of those things that many people make mistakes on.)
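As a tiny hedged sketch of that pitfall (Python here purely for illustration; CPython does not optimize tail calls anyway): the first version below looks deceptively simple but is not tail recursive, because the addition still has to happen after the call returns; the second one is.

def length(xs):
    if not xs:
        return 0
    return 1 + length(xs[1:])     # NOT tail recursive: the '+ 1' runs after the call returns

def length_tail(xs, acc=0):
    if not xs:
        return acc
    return length_tail(xs[1:], acc + 1)   # tail recursive: the call is the last step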
Apart from "edge" cases (high performance computing, very large recursion depth, etc.), it's preferable to adopt the approach that most clearly expresses your intent, is well-designed, and is maintainable. Optimize only after identifying a need.
Recursion is better than iteration for problems that can be broken down into multiple, smaller pieces.
For example, to make a recursive Fibonacci algorithm, you break fib(n) down into fib(n-1) and fib(n-2) and compute both parts. Iteration only allows you to repeat a single function over and over again.
However, Fibonacci is actually a broken example and I think iteration is actually more efficient. Notice that fib(n) = fib(n-1) + fib(n-2) and fib(n-1) = fib(n-2) + fib(n-3). fib(n-1) gets calculated twice!
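To make the cost concrete, here is a rough Python sketch (my own, not part of the answer above): the naive recursive version redoes the same subproblems exponentially many times, while the iterative and memoized versions compute each value once.

from functools import lru_cache

def fib_naive(n):
    # exponential: fib_naive(n - 2) is recomputed inside fib_naive(n - 1)
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_iter(n):
    # linear: each value is computed exactly once
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

@lru_cache(maxsize=None)
def fib_memo(n):
    # recursion kept, duplicate work removed by memoization
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)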
A better example is a recursive algorithm for a tree. The problem of analyzing the parent node can be broken down into multiple smaller problems of analyzing each child node. Unlike the Fibonacci example, the smaller problems are independent of each other.
So yeah - recursion is better than iteration for problems that can be broken down into multiple, smaller, independent, similar problems.
Your performance deteriorates when using recursion because calling a method, in any language, implies a lot of preparation: the calling code posts a return address and the call parameters, some other context information such as processor registers might be saved somewhere, and at return time the called method posts a return value which is then retrieved by the caller, and any context information that was previously saved is restored. The performance difference between an iterative and a recursive approach lies in the time these operations take.
From an implementation point of view, you really start noticing the difference when the time it takes to handle the calling context is comparable to the time it takes for your method to execute. If your recursive method takes longer to execute than the calling-context management part, go the recursive way, as the code is generally more readable and easier to understand and you won't notice the performance loss. Otherwise go iterative for efficiency reasons.
I believe tail recursion in Java is not currently optimized. The details are sprinkled throughout this discussion on LtU and the associated links. It may be a feature in the upcoming version 7, but apparently it presents certain difficulties when combined with Stack Inspection, since certain frames would be missing. Stack Inspection has been used to implement the fine-grained security model since Java 2.
http://lambda-the-ultimate.org/node/1333
There are many cases where it gives a much more elegant solution over the iterative method, the common example being traversal of a binary tree, so it isn't necessarily more difficult to maintain. In general, iterative versions are usually a bit faster (and during optimization may well replace a recursive version), but recursive versions are simpler to comprehend and implement correctly.
Recursion is very useful in some situations. For example, consider the code for finding the factorial:
int factorial(int input)
{
    int x, fact = 1;
    for (x = input; x > 1; x--)
        fact *= x;
    return fact;
}
Now consider the same thing using a recursive function:
int factorial(int input)
{
    if (input == 0)
    {
        return 1;
    }
    return input * factorial(input - 1);
}
Comparing the two, we can see that the recursive version is easier to understand. But if it is not used with care it can be quite error prone, too.
Suppose we miss if (input == 0): then the code will run for some time and usually end with a stack overflow.
In many cases recursion is faster because of caching (better locality of reference), which improves performance. For example, here is an iterative version of merge sort using the traditional merge routine. It will run slower than the recursive implementation because the recursive one benefits more from caching.
Iterative implementation
public static void sort(Comparable[] a)
{
    int N = a.length;
    aux = new Comparable[N];
    for (int sz = 1; sz < N; sz = sz + sz)
        for (int lo = 0; lo < N - sz; lo += sz + sz)
            merge(a, aux, lo, lo + sz - 1, Math.min(lo + sz + sz - 1, N - 1));
}
Recursive implementation
private static void sort(Comparable[] a, Comparable[] aux, int lo, int hi)
{
    if (hi <= lo) return;
    int mid = lo + (hi - lo) / 2;
    sort(a, aux, lo, mid);
    sort(a, aux, mid + 1, hi);
    merge(a, aux, lo, mid, hi);
}
PS: this is what Professor Kevin Wayne (Princeton University) said in the Algorithms course on Coursera.
Using recursion, you're incurring the cost of a function call with each "iteration", whereas with a loop, the only thing you usually pay is an increment/decrement. So, if the code for the loop isn't much more complicated than the code for the recursive solution, loop will usually be superior to recursion.
Whether to use recursion or iteration depends on the business logic that you want to implement, though in most cases they can be used interchangeably. Most developers go for recursion because it is easier to understand.
It depends on the language. In Java you should use loops. Functional languages optimize recursion.
Recursion has the disadvantage that an algorithm written with it has O(n) space complexity (for the call stack), while the iterative approach has a space complexity of O(1). That is the advantage of using iteration over recursion.
Then why do we use recursion?
See below.
Sometimes it is easier to write an algorithm using recursion, while it is slightly harder to write the same algorithm using iteration. In that case, if you opt to follow the iterative approach, you have to handle the stack yourself.
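For a rough idea of what "handling the stack yourself" looks like, here is an illustrative Python sketch (the node shape, a dict with 'value' and 'children', is just an assumption made for the example):

def sum_tree_recursive(node):
    # the call stack keeps track of where we are in the tree
    return node['value'] + sum(sum_tree_recursive(c) for c in node['children'])

def sum_tree_iterative(root):
    # the call stack is replaced by an explicit list of nodes still to visit
    total, stack = 0, [root]
    while stack:
        node = stack.pop()
        total += node['value']
        stack.extend(node['children'])
    return total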
If you're just iterating over a list, then sure, iterate away.
A couple of other answers have mentioned (depth-first) tree traversal. It really is such a great example, because it's a very common thing to do to a very common data structure. Recursion is extremely intuitive for this problem.
Check out the "find" methods here:
http://penguin.ewu.edu/cscd300/Topic/BSTintro/index.html
Recursion is simpler (and thus more fundamental) than any possible definition of iteration. You can define a Turing-complete system with only a pair of combinators (yes, even recursion itself is a derived notion in such a system). Lambda calculus is an equally powerful fundamental system, featuring recursive functions. But if you want to define iteration properly, you need many more primitives to start with.
As for the code - no, recursive code is in fact much easier to understand and to maintain than a purely iterative one, since most data structures are recursive. Of course, in order to get it right one needs a language with support for higher-order functions and closures, at least - to get all the standard combinators and iterators in a neat way. In C++, of course, complicated recursive solutions can look a bit ugly, unless you're a hardcore user of FC++ and the like.
I would think that in (non-tail) recursion there would be a performance hit for allocating a new stack frame, etc., every time the function is called (depending on the language, of course).
it depends on "recursion depth".
it depends on how much the function call overhead will influence the total execution time.
For example, calculating the classical factorial in a recursive way is very inefficient due to:
- risk of data overflowing
- risk of stack overflowing
- function call overhead occupy 80% of execution time
while developing a min-max algorithm for position analysis in the game of chess that will analyze subsequent N moves can be implemented in recursion over the "analysis depth" (as I'm doing ^_^)
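For what it's worth, a bare-bones sketch of that kind of depth-limited minimax in Python might look like this (the game-specific pieces -- moves, apply_move, evaluate -- are made-up placeholders, not anything from the answer):

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    # recursion over the "analysis depth": each level expands one ply of moves
    if depth == 0 or not moves(state):
        return evaluate(state)
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in moves(state)]
    return max(scores) if maximizing else min(scores)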
Recursion? Where do I start? Wikipedia will tell you it's "the process of repeating items in a self-similar way."
Back in the day when I was doing C and C++, recursion was a godsend, with things like tail recursion. You'll also find many sorting algorithms use recursion. Quick sort example: http://alienryderflex.com/quicksort/
Recursion, like any other algorithm, is useful for a specific problem. Perhaps you won't find a use straight away or often, but there will be a problem you'll be glad it's available for.
In C++, if the recursive function is a templated one, the compiler has more chances to optimize it, as all the type deduction and function instantiation occurs at compile time. Modern compilers can also inline the function if possible. So with optimization flags like -O3 or -O2 in g++, recursion may have a chance to be faster than iteration. In iterative code, the compiler gets less chance to optimize it, as it is already in a more or less optimal state (if written well enough).
In my case, I was trying to implement matrix exponentiation by squaring using Armadillo matrix objects, in both a recursive and an iterative way. The algorithm can be found here: https://en.wikipedia.org/wiki/Exponentiation_by_squaring.
My functions were templated and I calculated 1,000,000 12x12 matrices raised to the power 10. I got the following result:
iterative + optimisation flag -O3 -> 2.79.. sec
recursive + optimisation flag -O3 -> 1.32.. sec
iterative + No-optimisation flag -> 2.83.. sec
recursive + No-optimisation flag -> 4.15.. sec
These results have been obtained using gcc-4.8 with c++11 flag (-std=c++11) and Armadillo 6.1 with Intel mkl. Intel compiler also shows similar results.
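For readers who do not want to dig through the Wikipedia page, here is a rough scalar Python sketch of the recursive exponentiation-by-squaring idea (the answer's actual code was templated C++ over Armadillo matrices and is not reproduced here):

def power(x, n):
    # O(log n) multiplications instead of the O(n) of a naive loop
    if n == 0:
        return 1
    half = power(x, n // 2)
    if n % 2 == 0:
        return half * half
    return half * half * x

print(power(2, 10))   # 1024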
Mike is correct. Tail recursion is not optimized out by the Java compiler or the JVM. You will always get a stack overflow with something like this:
int count(int i) {
    return i >= 100000000 ? i : count(i + 1);
}
You have to keep in mind that if you recurse too deeply you will run into a stack overflow, depending on the allowed stack size. To prevent this, make sure to provide a base case which ends your recursion.
Using just Chrome 45.0.2454.85 m, recursion seems to be a nice amount faster.
Here is the code:
(function recursionVsForLoop(global) {
    "use strict";

    // Perf test
    function perfTest() {}

    perfTest.prototype.do = function(ns, fn) {
        console.time(ns);
        fn();
        console.timeEnd(ns);
    };

    // Recursion method
    (function recur() {
        var count = 0;
        global.recurFn = function recurFn(fn, cycles) {
            fn();
            count = count + 1;
            if (count !== cycles) recurFn(fn, cycles);
        };
    })();

    // Looped method
    function loopFn(fn, cycles) {
        for (var i = 0; i < cycles; i++) {
            fn();
        }
    }

    // Tests
    var curTest = new perfTest(),
        testsToRun = 100;

    curTest.do('recursion', function() {
        recurFn(function() {
            console.log('a recur run.');
        }, testsToRun);
    });

    curTest.do('loop', function() {
        loopFn(function() {
            console.log('a loop run.');
        }, testsToRun);
    });
})(window);
RESULTS
// 100 runs using standard for loop
100x for loop run.
Time to complete: 7.683ms
// 100 runs using functional recursive approach w/ tail recursion
100x recursion run.
Time to complete: 4.841ms
When run at 300 cycles per test, recursion wins again by a bigger margin.
If the iterations are atomic and orders of magnitude more expensive than pushing a new stack frame and creating a new thread and you have multiple cores and your runtime environment can use all of them, then a recursive approach could yield a huge performance boost when combined with multithreading. If the average number of iterations is not predictable then it might be a good idea to use a thread pool which will control thread allocation and prevent your process from creating too many threads and hogging the system.
For example, in some languages, there are recursive multithreaded merge sort implementations.
But again, multithreading can be used with looping rather than recursion, so how well this combination will work depends on more factors including the OS and its thread allocation mechanism.
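As a hedged sketch of the idea (not one of the implementations the answer alludes to), the top-level split of a recursive merge sort can be handed to two worker processes in Python; whether this actually wins depends on the data size and the cost of shipping the halves to the workers:

from concurrent.futures import ProcessPoolExecutor

def merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

def merge_sort(data):
    if len(data) <= 1:
        return list(data)
    mid = len(data) // 2
    return merge(merge_sort(data[:mid]), merge_sort(data[mid:]))

def parallel_merge_sort(data):
    # only the top-level recursive split is parallelized here
    mid = len(data) // 2
    with ProcessPoolExecutor(max_workers=2) as pool:
        left, right = pool.map(merge_sort, [data[:mid], data[mid:]])
    return merge(left, right)

if __name__ == "__main__":
    print(parallel_merge_sort([5, 3, 8, 1, 9, 2]))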
I found another difference between these approaches.
It looks simple and unimportant, but it plays a very important role when you prepare for interviews and this subject comes up, so look closely.
In short:
1) iterative post-order traversal is not easy - and that makes depth-first traversal (DFT) more complex
2) cycle checks are easier with recursion
Details:
In the recursive case, it is easy to create pre- and post-order traversals:
Imagine a pretty standard question: "print all tasks that should be executed to execute the task 5, when tasks depend on other tasks"
Example:
//key-task, value-list of tasks the key task depends on
//"adjacency map":
Map<Integer, List<Integer>> tasksMap = new HashMap<>();
tasksMap.put(0, new ArrayList<>());
tasksMap.put(1, new ArrayList<>());
List<Integer> t2 = new ArrayList<>();
t2.add(0);
t2.add(1);
tasksMap.put(2, t2);
List<Integer> t3 = new ArrayList<>();
t3.add(2);
t3.add(10);
tasksMap.put(3, t3);
List<Integer> t4 = new ArrayList<>();
t4.add(3);
tasksMap.put(4, t4);
List<Integer> t5 = new ArrayList<>();
t5.add(3);
tasksMap.put(5, t5);
tasksMap.put(6, new ArrayList<>());
tasksMap.put(7, new ArrayList<>());
List<Integer> t8 = new ArrayList<>();
t8.add(5);
tasksMap.put(8, t8);
List<Integer> t9 = new ArrayList<>();
t9.add(4);
tasksMap.put(9, t9);
tasksMap.put(10, new ArrayList<>());
//task to analyze:
int task = 5;
List<Integer> res11 = getTasksInOrderDftReqPostOrder(tasksMap, task);
System.out.println(res11); // note, no reverse required
List<Integer> res12 = getTasksInOrderDftReqPreOrder(tasksMap, task);
Collections.reverse(res12);//note reverse!
System.out.println(res12);
private static List<Integer> getTasksInOrderDftReqPreOrder(Map<Integer, List<Integer>> tasksMap, int task) {
    List<Integer> result = new ArrayList<>();
    Set<Integer> visited = new HashSet<>();
    reqPreOrder(tasksMap, task, result, visited);
    return result;
}

private static void reqPreOrder(Map<Integer, List<Integer>> tasksMap, int task, List<Integer> result, Set<Integer> visited) {
    if (!visited.contains(task)) {
        visited.add(task);
        result.add(task); // pre order!
        List<Integer> children = tasksMap.get(task);
        if (children != null && children.size() > 0) {
            for (Integer child : children) {
                reqPreOrder(tasksMap, child, result, visited);
            }
        }
    }
}

private static List<Integer> getTasksInOrderDftReqPostOrder(Map<Integer, List<Integer>> tasksMap, int task) {
    List<Integer> result = new ArrayList<>();
    Set<Integer> visited = new HashSet<>();
    reqPostOrder(tasksMap, task, result, visited);
    return result;
}

private static void reqPostOrder(Map<Integer, List<Integer>> tasksMap, int task, List<Integer> result, Set<Integer> visited) {
    if (!visited.contains(task)) {
        visited.add(task);
        List<Integer> children = tasksMap.get(task);
        if (children != null && children.size() > 0) {
            for (Integer child : children) {
                reqPostOrder(tasksMap, child, result, visited);
            }
        }
        result.add(task); // post order!
    }
}
Note that the recursive post-order traversal does not require a subsequent reversal of the result. Children are printed first and your task from the question is printed last. Everything is fine. You can do a recursive pre-order traversal (also shown above), and that one will require a reversal of the result list.
Not so simple with the iterative approach! In the iterative (one stack) approach you can only easily do a pre-order traversal, so you are obliged to reverse the result list at the end:
List<Integer> res1 = getTasksInOrderDftStack(tasksMap, task);
Collections.reverse(res1);//note reverse!
System.out.println(res1);
private static List<Integer> getTasksInOrderDftStack(Map<Integer, List<Integer>> tasksMap, int task) {
    List<Integer> result = new ArrayList<>();
    Set<Integer> visited = new HashSet<>();
    Stack<Integer> st = new Stack<>();

    st.add(task);
    visited.add(task);

    while (!st.isEmpty()) {
        Integer node = st.pop();
        List<Integer> children = tasksMap.get(node);
        result.add(node);
        if (children != null && children.size() > 0) {
            for (Integer child : children) {
                if (!visited.contains(child)) {
                    st.add(child);
                    visited.add(child);
                }
            }
        }
        // If you put it here - it does not matter - it is anyway a pre-order
        // result.add(node);
    }
    return result;
}
Looks simple, no?
But it is a trap in some interviews.
It means the following: with the recursive approach, you can implement a depth-first traversal and then select which order you need, pre or post, simply by changing the location of the "print" (in our case, of the "adding to the result list"). With the iterative (one stack) approach you can easily do only a pre-order traversal, so in situations where children need to be printed first (pretty much all situations where you need to start printing from the bottom nodes, going upwards) you are in trouble. If you have that trouble, you can reverse at the end, but it will be an addition to your algorithm. And if an interviewer is looking at his watch, it may be a problem for you. There are complex ways to do an iterative post-order traversal; they exist, but they are not simple. Example: https://www.geeksforgeeks.org/iterative-postorder-traversal-using-stack/
Thus, the bottom line: I would use recursion during interviews; it is simpler to manage and to explain. You have an easy way to go from pre- to post-order traversal in any urgent case. With the iterative approach you are not that flexible.
I would use recursion and then say: "Ok, but the iterative approach gives me more direct control over the memory used; I can easily measure the stack size and disallow a dangerous overflow."
Another plus of recursion: it is simpler to avoid / notice cycles in a graph.
Example (pseudocode):
dft(n){
    mark(n)
    for(child: n.children){
        if(marked(child))
            explode - cycle found!!!
        dft(child)
    }
    unmark(n)
}
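A runnable Python version of that pseudocode might look like the sketch below (the graph shape -- a dict from node to list of children -- is my own assumption for the example):

def has_cycle(graph, node, on_path=None):
    # on_path plays the role of mark/unmark in the pseudocode above
    if on_path is None:
        on_path = set()
    if node in on_path:
        return True                    # back edge: cycle found
    on_path.add(node)
    for child in graph.get(node, []):
        if has_cycle(graph, child, on_path):
            return True
    on_path.discard(node)              # unmark(n)
    return False

print(has_cycle({1: [2], 2: [3], 3: [1]}, 1))   # True
print(has_cycle({1: [2], 2: [3], 3: []}, 1))    # False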
It may be fun to write it as recursion, or as a practice.
However, if the code is to be used in production, you need to consider the possibility of stack overflow.
Tail recursion optimization can eliminate the stack overflow, but do you want to go through the trouble of making it so? And you need to know that you can count on the optimization being available in your environment.
Every time the algorithm recurses, how much is the data size or n reduced by?
If you are reducing the size of the data or n by half every time you recurse, then in general you don't need to worry about stack overflow. Say, if it needs to be 4,000 or 10,000 levels deep for the program to stack overflow, then your data size would need to be roughly 2^4000 for your program to overflow the stack. To put that into perspective, the biggest storage devices recently can hold about 2^61 bytes, and if you have 2^61 of such devices, you are only dealing with a data size of 2^122. If you are looking at all the atoms in the universe, it is estimated that there may be fewer than 2^84. If you need to deal with all the data in the universe and their states for every millisecond since the birth of the universe, estimated to be 14 billion years ago, it may only be 2^153. So if your program can handle 2^4000 units of data or n, you can handle all the data in the universe and the program will not stack overflow. If you don't need to deal with numbers as big as 2^4000 (a 4000-bit integer), then in general you don't need to worry about stack overflow.
However, if you reduce the size of the data or n by a constant amount every time you recurse, then you can run into a stack overflow when n becomes merely 20,000. That is, the program runs well when n is 1,000, you think the program is good, and then it stack overflows some time in the future, when n is 5,000 or 20,000.
So if you have a possibility of stack overflow, try to make it an iterative solution.
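A quick back-of-the-envelope sketch of that depth difference in Python (the numbers are picked only for illustration):

def depth_halving(n):
    # recursion that halves n each call, e.g. binary search or merge sort
    return 0 if n <= 1 else 1 + depth_halving(n // 2)

def depth_linear(n):
    # recursion that shrinks n by 1 each call, e.g. a naive factorial
    return 0 if n == 0 else 1 + depth_linear(n - 1)

print(depth_halving(10**9))   # about 30 frames: nowhere near any stack limit
# depth_linear(10**9) would need a billion frames and blow the stack long before finishing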
As far as I know, Perl does not optimize tail-recursive calls, but you can fake it.
sub f{
    my($l,$r) = @_;

    if( $l >= $r ){
        return $l;
    } else {
        # return f( $l+1, $r );
        @_ = ( $l+1, $r );
        goto &f;
    }
}
When first called it will allocate space on the stack. Then it will change its arguments, and restart the subroutine, without adding anything more to the stack. It will therefore pretend that it never called its self, changing it into an iterative process.
Note that there is no "my #_;" or "local #_;", if you did it would no longer work.
"Is there a performance hit if we use a loop instead of
recursion or vice versa in algorithms where both can serve the same purpose?"
Usually yes, if you are writing in an imperative language: iteration will run faster than recursion. The performance hit is minimized in problems where the iterative solution requires manipulating stacks and popping items off a stack due to the recursive nature of the problem. There are a lot of times where the recursive implementation is much easier to read because the code is much shorter, so you do want to consider maintainability, especially in cases where the problem has a recursive nature. So take, for example:
The recursive implementation of Tower of Hanoi:
def TowerOfHanoi(n, source, destination, auxiliary):
    if n == 1:
        print("Move disk 1 from source", source, "to destination", destination)
        return
    TowerOfHanoi(n-1, source, auxiliary, destination)
    print("Move disk", n, "from source", source, "to destination", destination)
    TowerOfHanoi(n-1, auxiliary, destination, source)
Fairly short and pretty easy to read. Compare this with its iterative counterpart:
# Python3 program for iterative Tower of Hanoi
import sys

# A structure to represent a stack
class Stack:
    # Constructor to set the data of
    # the newly created tree node
    def __init__(self, capacity):
        self.capacity = capacity
        self.top = -1
        self.array = [0] * capacity

# function to create a stack of given capacity.
def createStack(capacity):
    stack = Stack(capacity)
    return stack

# Stack is full when top is equal to the last index
def isFull(stack):
    return (stack.top == (stack.capacity - 1))

# Stack is empty when top is equal to -1
def isEmpty(stack):
    return (stack.top == -1)

# Function to add an item to stack.
# It increases top by 1
def push(stack, item):
    if (isFull(stack)):
        return
    stack.top += 1
    stack.array[stack.top] = item

# Function to remove an item from stack.
# It decreases top by 1
def Pop(stack):
    if (isEmpty(stack)):
        return -sys.maxsize
    Top = stack.top
    stack.top -= 1
    return stack.array[Top]

# Function to implement legal
# movement between two poles
def moveDisksBetweenTwoPoles(src, dest, s, d):
    pole1TopDisk = Pop(src)
    pole2TopDisk = Pop(dest)

    # When pole 1 is empty
    if (pole1TopDisk == -sys.maxsize):
        push(src, pole2TopDisk)
        moveDisk(d, s, pole2TopDisk)

    # When pole2 pole is empty
    elif (pole2TopDisk == -sys.maxsize):
        push(dest, pole1TopDisk)
        moveDisk(s, d, pole1TopDisk)

    # When top disk of pole1 > top disk of pole2
    elif (pole1TopDisk > pole2TopDisk):
        push(src, pole1TopDisk)
        push(src, pole2TopDisk)
        moveDisk(d, s, pole2TopDisk)

    # When top disk of pole1 < top disk of pole2
    else:
        push(dest, pole2TopDisk)
        push(dest, pole1TopDisk)
        moveDisk(s, d, pole1TopDisk)

# Function to show the movement of disks
def moveDisk(fromPeg, toPeg, disk):
    print("Move the disk", disk, "from '", fromPeg, "' to '", toPeg, "'")

# Function to implement TOH puzzle
def tohIterative(num_of_disks, src, aux, dest):
    s, d, a = 'S', 'D', 'A'

    # If number of disks is even, then interchange
    # destination pole and auxiliary pole
    if (num_of_disks % 2 == 0):
        temp = d
        d = a
        a = temp
    total_num_of_moves = int(pow(2, num_of_disks) - 1)

    # Larger disks will be pushed first
    for i in range(num_of_disks, 0, -1):
        push(src, i)

    for i in range(1, total_num_of_moves + 1):
        if (i % 3 == 1):
            moveDisksBetweenTwoPoles(src, dest, s, d)
        elif (i % 3 == 2):
            moveDisksBetweenTwoPoles(src, aux, s, a)
        elif (i % 3 == 0):
            moveDisksBetweenTwoPoles(aux, dest, a, d)

# Input: number of disks
num_of_disks = 3

# Create three stacks of size 'num_of_disks'
# to hold the disks
src = createStack(num_of_disks)
dest = createStack(num_of_disks)
aux = createStack(num_of_disks)

tohIterative(num_of_disks, src, aux, dest)
Now, the first one is way easier to read because, surprise surprise, shorter code is usually easier to understand than code that is ten times longer. Sometimes you want to ask yourself: is the extra performance gain really worth it? Think of the hours wasted debugging the code. Is the iterative TowerOfHanoi faster than the recursive TowerOfHanoi? Probably, but not by a big margin. Would I like to program recursive problems like TowerOfHanoi using iteration? Hell no. Next we have another recursive function, the Ackermann function:
Using recursion:
def ackermann(m, n):
    if m == 0:
        # BASE CASE
        return n + 1
    elif m > 0 and n == 0:
        # RECURSIVE CASE
        return ackermann(m - 1, 1)
    elif m > 0 and n > 0:
        # RECURSIVE CASE
        return ackermann(m - 1, ackermann(m, n - 1))
Using Iteration:
callStack = [{'m': 2, 'n': 3, 'indentation': 0, 'instrPtr': 'start'}]
returnValue = None

while len(callStack) != 0:
    m = callStack[-1]['m']
    n = callStack[-1]['n']
    indentation = callStack[-1]['indentation']
    instrPtr = callStack[-1]['instrPtr']

    if instrPtr == 'start':
        print('%sackermann(%s, %s)' % (' ' * indentation, m, n))

        if m == 0:
            # BASE CASE
            returnValue = n + 1
            callStack.pop()
            continue
        elif m > 0 and n == 0:
            # RECURSIVE CASE
            callStack[-1]['instrPtr'] = 'after first recursive case'
            callStack.append({'m': m - 1, 'n': 1, 'indentation': indentation + 1, 'instrPtr': 'start'})
            continue
        elif m > 0 and n > 0:
            # RECURSIVE CASE
            callStack[-1]['instrPtr'] = 'after second recursive case, inner call'
            callStack.append({'m': m, 'n': n - 1, 'indentation': indentation + 1, 'instrPtr': 'start'})
            continue
    elif instrPtr == 'after first recursive case':
        returnValue = returnValue
        callStack.pop()
        continue
    elif instrPtr == 'after second recursive case, inner call':
        callStack[-1]['innerCallResult'] = returnValue
        callStack[-1]['instrPtr'] = 'after second recursive case, outer call'
        callStack.append({'m': m - 1, 'n': returnValue, 'indentation': indentation + 1, 'instrPtr': 'start'})
        continue
    elif instrPtr == 'after second recursive case, outer call':
        returnValue = returnValue
        callStack.pop()
        continue

print(returnValue)
And once again I will argue that the recursive implementation is much easier to understand. So my conclusion: use recursion if the problem is recursive by nature and would otherwise require manipulating items on an explicit stack.
I'm going to answer your question by designing a Haskell data structure by "induction", which is a sort of "dual" to recursion. And then I will show how this duality leads to nice things.
We introduce a type for a simple tree:
data Tree a = Branch (Tree a) (Tree a)
            | Leaf a
            deriving (Eq)
We can read this definition as saying "A tree is a Branch (which contains two trees) or is a leaf (which contains a data value)". So the leaf is a sort of minimal case. If a tree isn't a leaf, then it must be a compound tree containing two trees. These are the only cases.
Let's make a tree:
example :: Tree Int
example = Branch (Leaf 1)
                 (Branch (Leaf 2)
                         (Leaf 3))
Now, let's suppose we want to add 1 to each value in the tree. We can do this by calling:
addOne :: Tree Int -> Tree Int
addOne (Branch a b) = Branch (addOne a) (addOne b)
addOne (Leaf a) = Leaf (a + 1)
First, notice that this is in fact a recursive definition. It takes the data constructors Branch and Leaf as cases (and since Leaf is minimal and these are the only possible cases), we are sure that the function will terminate.
What would it take to write addOne in an iterative style? What will looping into an arbitrary number of branches look like?
Also, this kind of recursion can often be factored out, in terms of a "functor". We can make Trees into Functors by defining:
instance Functor Tree where
    fmap f (Leaf a)     = Leaf (f a)
    fmap f (Branch a b) = Branch (fmap f a) (fmap f b)
and defining:
addOne' = fmap (+1)
We can factor out other recursion schemes, such as the catamorphism (or fold) for an algebraic data type. Using a catamorphism, we can write:
addOne'' = cata go
  where go (Leaf a)     = Leaf (a + 1)
        go (Branch a b) = Branch a b
Related
Performance of the tail recursive functions
The variety of books, articles, blog posts suggests that rewriting recursive function into tail recursive function makes it faster. No doubts it is faster for trivial cases like generating Fibonacci numbers or calculating factorial. In such cases there is a typical approach to rewrite - by using "helper function" and additional parameter for intermediate results. TAIL RECURSION is the great description of the differences between tail recursive and not tail recursive functions and the possible way how to turn the recursive function into a tail recursive one. What is important for such rewriting - the number of function calls is the same (before/after rewriting), the difference comes from the way how those calls are optimized for tail recursion. Nevertheless, it is not always possible to convert the function into tail recursive one with such an easy trick. I would categorize such cases as below Function still can be rewritten into tail recursive but that might require additional data structures and more substantial changes in the implementation Function cannot be rewritten into tail recursive with any means but recursion still can be avoided by using loops and imitating stack (I'm not 100% sure that tail recursion is impossible in some cases and I cannot describe how identify such cases, so if there is any academical research on this subject - the link would be highly appreciated) Now let me consider specific example when function can be rewritten into tail recursive by using additional structures and changing the way algorithm works. Sample task: Print all sequences of length n containing 1 and 0 and which do not have adjacent 1s. Obvious implementation which comes to mind first is below (on each step, if current value is 0 then we generate two sequences with length n-1 otherwise we generate only sequence with length n-1 which starts from 0) def gen001(lvl: Int, result: List[Int]):Unit = { //println("gen001") if (lvl > 0) { if (result.headOption.getOrElse(0) == 0) { gen001(lvl - 1, 0 :: result) gen001(lvl - 1, 1 :: result) } else gen001(lvl - 1, 0 :: result) } else { println(result.mkString("")) } } gen001(5, List()) It's not that straightforward to avoid two function calls when current element is 0 but that can be done if we generate children for each value in the intermediate sequences starting from the sequence '01' on the level 1. After having hierarchy of auxiliary sequences for level 1..n we can reconstruct the result (printResult) starting from leaf nodes (or sequence from the last iteration in other words). 
#tailrec def g001(lvl: Int, current: List[(Int, Int)], result: List[List[(Int, Int)]]):List[List[(Int, Int)]] = { //println("g001") if (lvl > 1) { val tmp = current.map(_._1).zipWithIndex val next = tmp.flatMap(x => x._1 match {case 0 => List((0, x._2), (1, x._2)) case 1 => List((0, x._2))}) g001(lvl - 1, next, next :: result) } else result } def printResult(p: List[List[(Int, Int)]]) = { p.head.zipWithIndex.foreach(x => println(p.scanLeft((-1, x._2))((r1, r2) => (r2(r1._2)._1, r2(r1._2)._2)).tail.map(_._1).mkString(""))) } val r = g001(5, List(0,1).zipWithIndex, List(List(0,1).zipWithIndex)) println(r) printResult(r) Output List(List((0,0), (1,0), (0,1), (0,2), (1,2), (0,3), (1,3), (0,4), (0,5), (1,5), (0,6), (0,7), (1,7)), List((0,0), (1,0), (0,1), (0,2), (1,2), (0,3), (1,3), (0,4)), List((0,0), (1,0), (0,1), (0,2), (1,2)), List((0,0), (1,0), (0,1)), List((0,0), (1,1))) 00000 10000 01000 00100 10100 00010 10010 01010 00001 10001 01001 00101 10101 So now, if we compare two approaches, the first one requires much more recursive calls however on the other hand it's much more efficient in terms of memory because no additional data structures of intermediate results are required. Finally, the questions are Is there a class of recursive functions which cannot be implemented as tail recursive? If so how to identify them? Is there a class of recursive functions such as their tail recursive implementation cannot be as efficient as non tail recursive one (for example in terms of memory usage). If so how to identify them. (Function from above example seems to be in this category) Links to academical research papers are highly appreciated. PS. Thera are a number of related questions already asked and below links may be quite useful Rewrite linear recursive function as tail-recursive function https://cs.stackexchange.com/questions/56867/why-are-loops-faster-than-recursion UPDATE Quick performance test Tail recursive function can be simplified to store only auxiliary sequence on each step. That would be enough to print result. Let me take out of the picture function which prints result and provide just modified version of the tail recursive approach. def time[R](block: => R): R = { val t0 = System.nanoTime() val result = block // call-by-name val t1 = System.nanoTime() println("Elapsed time: " + (t1 - t0)/1e9 + "s") result } #tailrec def gg001(lvl: Int, current: List[Int], result: List[List[Int]]):List[List[Int]] = { //println("g001") if (lvl > 1) { val next = current.flatMap(x => x match {case 0 => List(0, 1) case 1 => List(0)}) gg001(lvl - 1, next, next :: result) } else result } time{gen001(30, List())} time{gg001(30, List(0,1), List(List(0,1)))} time{gen001(31, List())} time{gg001(31, List(0,1), List(List(0,1)))} time{gen001(32, List())} time{gg001(32, List(0,1), List(List(0,1)))} Output Elapsed time: 2.2105142s Elapsed time: 1.2582993s Elapsed time: 3.7674929s Elapsed time: 2.4024759s Elapsed time: 6.4951573s Elapsed time: 8.6575108s For some N tail recursive approach starts taking more time than the original one and if we keep increasing N further it will start failing with java.lang.OutOfMemoryError: GC overhead limit exceeded Which makes me think that overhead to manage auxiliary data structures for tail recursive approach outweighs performance gains due to less number of recursive calls as well as their optimization. 
I might not have chosen the most optimal implementation and/or data structures, and it may also be due to language-specific challenges, but the tail recursive approach does not look like the absolute best solution (in terms of execution time/resources) even for this specific task.
The class of functions in 1 is empty: any computable function written in a recursive style has a tail-recursive equivalent (at the limit, since there's a tail-recursive implementation of a Turing Machine, you can translate any computable function into a Turing Machine definition and then the tail recursive version of that function is running that definition through the tail-recursive implementation of a Turing Machine). There are likewise no functions for which tail recursion is intrinsically less efficient than non-tail recursion. In your example, for instance, it's simply not correct that "it's much more efficient in terms of memory because no additional data structures of intermediate results are required." The required additional structure of intermediate results is implicit in the call-stack (which goes away in the tail recursive version). While the call stack is likely an array (more space efficient than a linked-list) it also, because of its generality, stores more data than is required.
Let me try to answer my own question. A recursive function can easily be rewritten into a tail recursive one if it is never called more than once from the function body. Alternatively, any function can be rewritten into a loop, and a loop can be rewritten into a tail recursive function, but this requires using some auxiliary data structure, at least for the stack. Such an implementation can be slightly faster than the original approach (I believe the difference varies from one language to another), but on the other hand it will be more cumbersome and less maintainable. So practically, a tail recursive implementation makes sense when it does not require effort to implement a stack and the only additional hassle is storing intermediate results.
Understanding why Floyd's tortoise and hare algorithm works when applied to an array of integers
I was trying to solve this leetcode problem https://leetcode.com/problems/find-the-duplicate-number/ using my own implementation of the tortoise and hare algorithm, which resulted in an infinite loop when given the following array of integers: [3,1,3,4,2]
Only after tracing through my algorithm was I able to see that the slow and fast runners never take on the two duplicate values at the same time. Here is my algorithm in pseudocode:

initialize fast and slow runners to 0
while(true)
    move fast runner two indices forward
    move slow runner one index forward
    if arr[fast] == arr[slow] and fast != slow
        return arr[fast] // this is the duplicate

Now, I'm sure someone who is skilled in discrete mathematics would have been able to intuitively know that this approach would not lead to the correct solution without first having to trace through an example like I had to do. What inferences or observations could I have made that would have led me to see that this algorithm was not going to work? I'd like to know how one could intuitively identify a flaw in this logic through a series of logical statements. In other words, what's the explanation for why the two runners will never find the duplicates in this example? I feel like it may have something to do with counting, but I do not have a very strong background in discrete math. And to clarify, I have looked at the correct implementation, so I do know the correct way to solve it. I just thought that this way would have worked too, similar to applying it to linked lists, where you'd move the fast runner two nodes up and the slow runner one node up. Thank you for your help.
Floyd's tortoise algorithm works when you're detecting a cycle in a linked list. It relies on the fact that if both pointers are moving at a different pace, the gap between them will keep on increasing to a limit, after which it resets if a cycle exists. In this case, the algorithm does find a cycle, since both pointers converge to index 0 after some iterations. However, you're not looking to detect a cycle here; you're trying to find a duplicate. That's why this gets stuck in an infinite loop: it is meant to detect a cycle (which it correctly does), not to detect duplicates in its basic implementation.
To clarify, here's a sample linked list created on your sample array:

3 -> 1 -> 3 -> 4 -> 2
'--<----<----<----<-'

If you run Floyd's algorithm, you find that the cycle will get detected at index 0, since both pointers converge there. It works by checking whether fast and slow point to the same location, not whether they have the same node values (fast == slow isn't the same as fast.value == slow.value). You are attempting to check duplicates by comparing the values on the nodes, and checking that the nodes don't point to the same location. That is actually the flaw, since Floyd's algorithm works by checking whether both pointers point to the same location in order to detect a cycle. You can read this simple, informative proof to improve your intuition as to why the pointers will converge.
That's not a bad idea. Here is an implementation in Python:

class Solution:
    def findDuplicate(self, nums):
        slow, fast = 0, 0
        while True:
            slow = nums[nums[slow]]
            fast = nums[fast]
            if slow == fast:
                break
        fast = 0
        while True:
            slow, fast = nums[slow], nums[fast]
            if slow == fast:
                break
        return slow

We can also use Binary Search:

class Solution:
    def findDuplicate(self, nums):
        lo, hi = 0, len(nums) - 1
        mid = lo + (hi - lo) // 2
        while hi - lo > 1:
            count = 0
            for num in nums:
                if mid < num <= hi:
                    count += 1
            if count > hi - mid:
                lo = mid
            else:
                hi = mid
            mid = lo + (hi - lo) // 2
        return hi

In C++:

// The following block might slightly improve the execution time;
// Can be removed;
static const auto __optimize__ = []() {
    std::ios::sync_with_stdio(false);
    std::cin.tie(nullptr);
    std::cout.tie(nullptr);
    return 0;
}();

// Most of headers are already included;
// Can be removed;
#include <iostream>
#include <cstdint>
#include <vector>

static const struct Solution {
    static const int findDuplicate(const std::vector<int>& nums) {
        int slow = 0;
        int fast = 0;

        while (true) {
            slow = nums[nums[slow]];
            fast = nums[fast];

            if (slow == fast) {
                break;
            }
        }

        fast = 0;

        while (slow != fast) {
            slow = nums[slow];
            fast = nums[fast];
        }

        return slow;
    }
};
Implementing recursion without a separate function
I was wondering how (or maybe even whether it's possible) to implement recursion without a separate function that calls itself. So far, all the algorithms implementing recursion that I've seen use a separate function. I thought a lot and came up with the idea that a goto statement with some variable mutation could do the job, but I'm really unsure about that. I did some research and found information about the structured program theorem, which proves that every algorithm can be implemented with only three control structures, so such a recursion implementation must be possible, but I still cannot assemble everything into consistent knowledge and understanding of the whole would-be approach.
What you are looking for is basically expressing a recursive function in an iterative form. This can easily be done by using a stack. Here's a very simple example in C#:

int NodeCount(Node n)
{
    if (n.Visited)
        return 0; // This node was visited through another node, discard

    n.Visited = true;
    int res = 1;
    foreach (Node ni in n.Children) // Recurse on each node
    {
        res += NodeCount(ni); // Add the number of sub nodes for each node
    }
    return res;
}

Here's the exact same function in iterative form:

int NodeCount(Node root)
{
    int res = 0;
    Stack<Node> stk = new Stack<Node>();
    stk.Push(root); // We start with the root node

    while (stk.Count > 0) // While we still have nodes to visit
    {
        Node current = stk.Pop(); // Get the node on top of the stack
        current.Visited = true;   // Mark it as visited
        res++;                    // Add one to the count

        foreach (Node ni in current.Children) // Push all non visited children to the stack
        {
            if (ni.Visited == false)
                stk.Push(ni);
        }
    }
    return res;
}
"to implement recursion without a separate function that calls itself."
Yes, it is possible to convert a recursive algorithm to an iterative one. But why would you want to use a goto statement? When you make a function call, the generated assembly has a branch instruction to jump to the specific function, which acts similarly to a goto. So what you would accomplish is somewhat similar to what the machine code eventually does, with the benefit of avoiding stack frame setup and the disadvantage of the horrible spaghetti code that comes from using goto. If you want to avoid recursion, use iteration. How did you come up with this question?
You can perform a sort of recursion without a function using any stack-based language. For example, in Forth you can write the fibonacci function like this:

: fibonacci 0 1 10 0 do 2dup + rot . loop swap . . ;

This program generates 10 iterations of the fibonacci sequence, each iteration using the inputs from the previous iteration. No function is called.
Tail-recursive pow() algorithm with memoization?
I'm looking for an algorithm to compute pow() that's tail-recursive and uses memoization to speed up repeated calculations. Performance isn't an issue; this is mostly an intellectual exercise - I spent a train ride coming up with all the different pow() implementations I could, but was unable to come up with one that I was happy with that had these two properties.
My best shot was the following:

def calc_tailrec_mem(base, exp, cache_line={}, acc=1, ctr=0):
    if exp == 0:
        return 1
    elif exp == 1:
        return acc * base
    elif exp in cache_line:
        val = acc * cache_line[exp]
        cache_line[exp + ctr] = val
        return val
    else:
        cache_line[ctr] = acc
        return calc_tailrec_mem(base, exp-1, cache_line, acc * base, ctr + 1)

It works, but it doesn't memoize the results of all calculations - only those with exponents 1..exp/2 and exp.
You'll get better performance if you use the successive squaring technique described in SICP section 1.2.4 Exponentiation. It doesn't use memoization, but the general approach is O(log n) instead of O(n), so you should still see an improvement. I talk about the solution to the iterative process from exercise 1.16 here.
I don't think you're recording the correct thing in your cache; the mapping changes when you call the function with different arguments. I think you need a cache of (base, exp) -> pow(base, exp). I understand what ctr is for, and why only half of what you expect is recorded. Consider calc_tailrec_mem(2, 4): at the first level, pow(2,1) is recorded as 2, the next level is calc_tailrec_mem(2, 3, ...), and pow(2,2) is recorded. The next level is calc_tailrec_mem(2, 2, ...), but that is already saved in the cache, so the recursion stops. The function is very confusing because it's caching something completely different from what it's supposed to be calculating, due to the accumulator and ctr.
This is way too late, but for anyone out there looking for the answer, here it is:

#include <map>
using std::map;

int powMem(int base, int exp) {
    // initialized once and for all
    static map<int, int> memo;

    // base cases to stop the recursion
    if (exp == 0) return 1;
    if (exp == 1) return base;

    // check if the value was already calculated before. If yes, just return it.
    if (memo.find(exp) != memo.end())
        return memo[exp];

    // else compute it and then store it in memo for further use.
    int x = powMem(base, exp / 2);
    memo[exp] = (exp % 2 == 0) ? x * x : x * x * base;

    // return the answer
    return memo[exp];
}

This uses the memo structure - a map, to be exact - to store the already calculated values.
What is tail recursion?
Whilst starting to learn lisp, I've come across the term tail-recursive. What does it mean exactly?
Consider a simple function that adds the first N natural numbers (e.g. sum(5) = 0 + 1 + 2 + 3 + 4 + 5 = 15).
Here is a simple JavaScript implementation that uses recursion:

function recsum(x) {
    if (x === 0) {
        return 0;
    } else {
        return x + recsum(x - 1);
    }
}

If you called recsum(5), this is what the JavaScript interpreter would evaluate:

recsum(5)
5 + recsum(4)
5 + (4 + recsum(3))
5 + (4 + (3 + recsum(2)))
5 + (4 + (3 + (2 + recsum(1))))
5 + (4 + (3 + (2 + (1 + recsum(0)))))
5 + (4 + (3 + (2 + (1 + 0))))
5 + (4 + (3 + (2 + 1)))
5 + (4 + (3 + 3))
5 + (4 + 6)
5 + 10
15

Note how every recursive call has to complete before the JavaScript interpreter begins to actually do the work of calculating the sum.
Here's a tail-recursive version of the same function:

function tailrecsum(x, running_total = 0) {
    if (x === 0) {
        return running_total;
    } else {
        return tailrecsum(x - 1, running_total + x);
    }
}

Here's the sequence of events that would occur if you called tailrecsum(5) (which would effectively be tailrecsum(5, 0), because of the default second argument):

tailrecsum(5, 0)
tailrecsum(4, 5)
tailrecsum(3, 9)
tailrecsum(2, 12)
tailrecsum(1, 14)
tailrecsum(0, 15)
15

In the tail-recursive case, with each evaluation of the recursive call, the running_total is updated.
Note: The original answer used examples from Python. These have been changed to JavaScript, since Python interpreters don't support tail call optimization. However, while tail call optimization is part of the ECMAScript 2015 spec, most JavaScript interpreters don't support it.
In traditional recursion, the typical model is that you perform your recursive calls first, and then you take the return value of the recursive call and calculate the result. In this manner, you don't get the result of your calculation until you have returned from every recursive call. In tail recursion, you perform your calculations first, and then you execute the recursive call, passing the results of your current step to the next recursive step. This results in the last statement being in the form of (return (recursive-function params)). Basically, the return value of any given recursive step is the same as the return value of the next recursive call. The consequence of this is that once you are ready to perform your next recursive step, you don't need the current stack frame any more. This allows for some optimization. In fact, with an appropriately written compiler, you should never have a stack overflow snicker with a tail recursive call. Simply reuse the current stack frame for the next recursive step. I'm pretty sure Lisp does this.
An important point is that tail recursion is essentially equivalent to looping. It's not just a matter of compiler optimization, but a fundamental fact about expressiveness. This goes both ways: you can take any loop of the form

while(E) { S }; return Q

where E and Q are expressions and S is a sequence of statements, and turn it into a tail recursive function

f() = if E then { S; return f() } else { return Q }

Of course, E, S, and Q have to be defined to compute some interesting value over some variables. For example, the looping function

sum(n) {
    int i = 1, k = 0;
    while( i <= n ) {
        k += i;
        ++i;
    }
    return k;
}

is equivalent to the tail-recursive function(s)

sum_aux(n, i, k) {
    if( i <= n ) {
        return sum_aux(n, i+1, k+i);
    } else {
        return k;
    }
}

sum(n) {
    return sum_aux(n, 1, 0);
}

(This "wrapping" of the tail-recursive function with a function with fewer parameters is a common functional idiom.)
This excerpt from the book Programming in Lua shows how to make a proper tail recursion (in Lua, but should apply to Lisp too) and why it's better. A tail call [tail recursion] is a kind of goto dressed as a call. A tail call happens when a function calls another as its last action, so it has nothing else to do. For instance, in the following code, the call to g is a tail call: function f (x) return g(x) end After f calls g, it has nothing else to do. In such situations, the program does not need to return to the calling function when the called function ends. Therefore, after the tail call, the program does not need to keep any information about the calling function in the stack. ... Because a proper tail call uses no stack space, there is no limit on the number of "nested" tail calls that a program can make. For instance, we can call the following function with any number as argument; it will never overflow the stack: function foo (n) if n > 0 then return foo(n - 1) end end ... As I said earlier, a tail call is a kind of goto. As such, a quite useful application of proper tail calls in Lua is for programming state machines. Such applications can represent each state by a function; to change state is to go to (or to call) a specific function. As an example, let us consider a simple maze game. The maze has several rooms, each with up to four doors: north, south, east, and west. At each step, the user enters a movement direction. If there is a door in that direction, the user goes to the corresponding room; otherwise, the program prints a warning. The goal is to go from an initial room to a final room. This game is a typical state machine, where the current room is the state. We can implement such maze with one function for each room. We use tail calls to move from one room to another. A small maze with four rooms could look like this: function room1 () local move = io.read() if move == "south" then return room3() elseif move == "east" then return room2() else print("invalid move") return room1() -- stay in the same room end end function room2 () local move = io.read() if move == "south" then return room4() elseif move == "west" then return room1() else print("invalid move") return room2() end end function room3 () local move = io.read() if move == "north" then return room1() elseif move == "east" then return room4() else print("invalid move") return room3() end end function room4 () print("congratulations!") end So you see, when you make a recursive call like: function x(n) if n==0 then return 0 n= n-2 return x(n) + 1 end This is not tail recursive because you still have things to do (add 1) in that function after the recursive call is made. If you input a very high number it will probably cause a stack overflow.
Using regular recursion, each recursive call pushes another entry onto the call stack. When the recursion is completed, the app then has to pop each entry off all the way back down. With tail recursion, depending on the language, the compiler may be able to collapse the stack down to one entry, so you save stack space. A deep recursion can otherwise cause a stack overflow. Basically, tail recursions are able to be optimized into iteration.
The jargon file has this to say about the definition of tail recursion: tail recursion /n./ If you aren't sick of it already, see tail recursion.
Instead of explaining it with words, here's an example. This is a Scheme version of the factorial function:

(define (factorial x)
  (if (= x 0) 1
      (* x (factorial (- x 1)))))

Here is a version of factorial that is tail-recursive:

(define factorial
  (letrec ((fact (lambda (x accum)
                   (if (= x 0) accum
                       (fact (- x 1) (* accum x))))))
    (lambda (x)
      (fact x 1))))

You will notice in the first version that the recursive call to factorial is fed into the multiplication expression, and therefore the state has to be saved on the stack when making the recursive call. In the tail-recursive version there is no other S-expression waiting for the value of the recursive call, and since there is no further work to do, the state doesn't have to be saved on the stack. As a rule, Scheme tail-recursive functions use constant stack space.
Tail recursion refers to the recursive call being the last logical instruction in the recursive algorithm. Typically in recursion, you have a base case, which is what stops the recursive calls and begins popping the call stack. To use a classic example, though more C-ish than Lisp, the factorial function illustrates tail recursion. The recursive call occurs after checking the base-case condition:

    factorial(x, fac=1) {
      if (x == 1) return fac;
      else return factorial(x-1, x*fac);
    }

The initial call to factorial would be factorial(n), where fac=1 (the default value) and n is the number for which the factorial is to be calculated.
It means that rather than needing to push the instruction pointer on the stack, you can simply jump to the top of a recursive function and continue execution. This allows for functions to recurse indefinitely without overflowing the stack. I wrote a blog post on the subject, which has graphical examples of what the stack frames look like.
The best way for me to understand tail call recursion is as a special case of recursion where the last call (or the tail call) is the function itself. Comparing the examples provided in Python:

    def recsum(x):
        if x == 1:
            return x
        else:
            return x + recsum(x - 1)
    # ^ RECURSION

    def tailrecsum(x, running_total=0):
        if x == 0:
            return running_total
        else:
            return tailrecsum(x - 1, running_total + x)
    # ^ TAIL RECURSION

As you can see, in the general recursive version the final call in the code block is x + recsum(x - 1). So after calling the recsum method, there is another operation, x + ... However, in the tail recursive version, the final call (or the tail call) in the code block is tailrecsum(x - 1, running_total + x), which means the last call is made to the method itself and there is no operation after that. This point is important because tail recursion as seen here does not make the memory grow: when the underlying VM sees a function calling itself in a tail position (the last expression to be evaluated in a function), it eliminates the current stack frame, which is known as Tail Call Optimization (TCO). EDIT NB: Do bear in mind that the example above is written in Python, whose runtime does not support TCO. This is just an example to explain the point. TCO is supported in languages like Scheme, Haskell, etc.
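Since CPython performs no TCO, one well-known workaround (not part of the answer above, just a sketch) is a trampoline: instead of making the tail call directly, the function returns a zero-argument thunk, and a small driver loop keeps calling thunks, so the Python call stack never grows.

    def trampoline(fn, *args):
        # Keep "bouncing" until the function returns a real value instead of a thunk.
        result = fn(*args)
        while callable(result):
            result = result()
        return result

    def tailrecsum_step(x, running_total=0):
        if x == 0:
            return running_total
        # Return a thunk describing the tail call rather than performing it.
        return lambda: tailrecsum_step(x - 1, running_total + x)

    print(trampoline(tailrecsum_step, 100000))   # 5000050000, no RecursionError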
Here is a quick code snippet comparing two functions. The first is traditional recursion for finding the factorial of a given number. The second uses tail recursion. Very simple and intuitive to understand. An easy way to tell if a recursive function is tail recursive is if it returns an already-computed value in the base case, meaning that it doesn't return a constant like 1 or true or anything like that; it will more than likely return some variant of one of the method parameters. Another way to tell is if the recursive call is free of any addition, arithmetic, modification, etc.; meaning it's nothing but a pure recursive call.

    public static int factorial(int mynumber) {
        if (mynumber == 1) {
            return 1;
        } else {
            return mynumber * factorial(mynumber - 1);
        }
    }

    public static int tail_factorial(int mynumber, int sofar) {
        if (mynumber == 1) {
            return sofar;
        } else {
            return tail_factorial(mynumber - 1, sofar * mynumber);
        }
    }
A recursive function is a function which calls itself. It allows programmers to write efficient programs using a minimal amount of code. The downside is that recursion can cause infinite loops and other unexpected results if not written properly. I will explain both a simple recursive function and a tail recursive function. In order to write a simple recursive function, the first point to consider is when you should come out of the loop, which is the if condition. The second is what processing to do when we call our own function. From the given example:

    public static int fact(int n){
      if(n <= 1)
        return 1;
      else
        return n * fact(n - 1);
    }

Here, if(n <= 1) return 1; is the deciding factor for when to exit the loop, and else return n * fact(n - 1); is the actual processing to be done. Let me break the task down one by one for easy understanding. Let us see what happens internally if I run fact(4):

Substituting n=4: the if condition fails, so it goes to the else branch and returns 4 * fact(3). In stack memory, we have 4 * fact(3).

Substituting n=3: the if condition fails, so it returns 3 * fact(2). Remember we called 4 * fact(3), and the output of fact(3) is 3 * fact(2), so the stack now has 4 * fact(3) = 4 * 3 * fact(2). In stack memory, we have 4 * 3 * fact(2).

Substituting n=2: the if condition fails, so it returns 2 * fact(1). Remember we called 4 * 3 * fact(2), and the output of fact(2) is 2 * fact(1), so the stack now has 4 * 3 * fact(2) = 4 * 3 * 2 * fact(1). In stack memory, we have 4 * 3 * 2 * fact(1).

Substituting n=1: the if condition is true, so it returns 1. Remember we called 4 * 3 * 2 * fact(1), and the output of fact(1) is 1, so the stack has 4 * 3 * 2 * fact(1) = 4 * 3 * 2 * 1.

Finally, the result of fact(4) = 4 * 3 * 2 * 1 = 24.

The tail recursion would be:

    public static int fact(int x, int running_total) {
      if (x == 1)
        return running_total;
      else
        return fact(x - 1, running_total * x);
    }

The initial call is fact(4, 1).

Substituting x=4: the if condition fails, so it goes to the else branch and returns fact(3, 4). In stack memory, we have fact(3, 4).

Substituting x=3: the if condition fails, so it returns fact(2, 12). In stack memory, we have fact(2, 12).

Substituting x=2: the if condition fails, so it returns fact(1, 24). In stack memory, we have fact(1, 24).

Substituting x=1: the if condition is true, so it returns running_total, which is 24.

Finally, the result of fact(4, 1) = 24.
In Java, here's a possible tail recursive implementation of the Fibonacci function:

    public int tailRecursive(final int n) {
        if (n <= 2)
            return 1;
        return tailRecursiveAux(n - 2, 1, 1);
    }

    private int tailRecursiveAux(int n, int prev, int curr) {
        if (n == 0)
            return curr;
        return tailRecursiveAux(n - 1, curr, prev + curr);
    }

Contrast this with the standard recursive implementation:

    public int recursive(final int n) {
        if (n <= 2)
            return 1;
        return recursive(n - 1) + recursive(n - 2);
    }
I'm not a Lisp programmer, but I think this will help. Basically it's a style of programming such that the recursive call is the last thing you do.
Here is a Common Lisp example that does factorials using tail-recursion. Due to the stack-less nature, one could perform insanely large factorial computations ... (defun ! (n &optional (product 1)) (if (zerop n) product (! (1- n) (* product n)))) And then for fun you could try (format nil "~R" (! 25))
A tail recursive function is a recursive function where the last operation it does before returning is to make the recursive call. That is, the return value of the recursive call is immediately returned. For example, your code would look like this:

    def recursiveFunction(some_params):
        # some code here
        return recursiveFunction(some_args)  # no code after the return statement

Compilers and interpreters that implement tail call optimization or tail call elimination can optimize recursive code to prevent stack overflows. If your compiler or interpreter doesn't implement tail call optimization (such as the CPython interpreter), there is no additional benefit to writing your code this way. For example, this is a standard recursive factorial function in Python:

    def factorial(number):
        if number == 1:
            # BASE CASE
            return 1
        else:
            # RECURSIVE CASE
            # Note that `number *` happens *after* the recursive call.
            # This means that this is *not* tail call recursion.
            return number * factorial(number - 1)

And this is a tail call recursive version of the factorial function:

    def factorial(number, accumulator=1):
        if number == 0:
            # BASE CASE
            return accumulator
        else:
            # RECURSIVE CASE
            # There's no code after the recursive call.
            # This is tail call recursion:
            return factorial(number - 1, number * accumulator)

    print(factorial(5))

(Note that even though this is Python code, the CPython interpreter doesn't do tail call optimization, so arranging your code like this confers no runtime benefit.) You may have to make your code a bit more unreadable to make use of tail call optimization, as shown in the factorial example. (For example, the base case is now a bit unintuitive, and the accumulator parameter is effectively used as a sort of global variable.) But the benefit of tail call optimization is that it prevents stack overflow errors. (I'll note that you can get this same benefit by using an iterative algorithm instead of a recursive one.) Stack overflows are caused when the call stack has had too many frame objects pushed onto it. A frame object is pushed onto the call stack when a function is called, and popped off the call stack when the function returns. Frame objects contain info such as local variables and what line of code to return to when the function returns. If your recursive function makes too many recursive calls without returning, the call stack can exceed its frame object limit. (The number varies by platform; in Python it is 1000 frame objects by default.) This causes a stack overflow error. (Hey, that's where the name of this website comes from!) However, if the last thing your recursive function does is make the recursive call and return its return value, then there's no reason the current frame object needs to stay on the call stack. After all, if there's no code after the recursive function call, there's no reason to hang on to the current frame object's local variables. So we can get rid of the current frame object immediately rather than keep it on the call stack. The end result is that your call stack doesn't grow in size, and thus cannot stack overflow. A compiler or interpreter must have tail call optimization as a feature for it to be able to recognize when tail call optimization can be applied. Even then, you may have to rearrange the code in your recursive function to make use of tail call optimization, and it's up to you whether this potential decrease in readability is worth the optimization.
In short, tail recursion has the recursive call as the last statement in the function, so that nothing is left to be done after the recursive call returns. So this is a tail recursion, i.e. N(x - 1, p * x) is the last statement in the function, and the compiler is clever enough to figure out that it can be optimised into a for-loop (factorial). The second parameter p carries the intermediate product value.

    function N(x, p) {
        return x == 1 ? p : N(x - 1, p * x);
    }

This is the non-tail-recursive way of writing the same factorial (although some C++ compilers may be able to optimise it anyway):

    function N(x) {
        return x == 1 ? 1 : x * N(x - 1);
    }

And this one is not tail recursive at all, since one of its recursive calls is not in tail position:

    function F(x) {
        if (x == 1) return 0;
        if (x == 2) return 1;
        return F(x - 1) + F(x - 2);
    }

I did write a long post titled "Understanding Tail Recursion – Visual Studio C++ – Assembly View".
A tail recursion is a recursive function where the function calls itself at the end ("tail") of the function, so that no computation is done after the recursive call returns. Many compilers can optimize such a tail-recursive call into an iterative one. Consider the problem of computing the factorial of a number. A straightforward approach would be:

    factorial(n):
      if n==0 then 1
      else n*factorial(n-1)

Suppose you call factorial(4). The recursion tree would be:

    factorial(4)
       /    \
      4    factorial(3)
              /    \
             3    factorial(2)
                     /    \
                    2    factorial(1)
                            /    \
                           1    factorial(0)
                                    |
                                    1

The maximum recursion depth in the above case is O(n). However, consider the following example:

    factAux(m,n):
      if n==0 then m;
      else factAux(m*n,n-1);

    factTail(n):
      return factAux(1,n);

The recursion tree for factTail(4) would be:

    factTail(4)
        |
    factAux(1,4)
        |
    factAux(4,3)
        |
    factAux(12,2)
        |
    factAux(24,1)
        |
    factAux(24,0)
        |
        24

Here also, the maximum recursion depth is O(n), but none of the calls adds any extra variable to the stack. Hence the compiler can do away with the stack.
Here is a Perl 5 version of the tailrecsum function mentioned earlier.

    sub tail_rec_sum($;$){
      my( $x, $running_total ) = (@_, 0);

      return $running_total unless $x;

      @_ = ($x-1, $running_total+$x);
      goto &tail_rec_sum;    # throw away current stack frame
    }
This is an excerpt from Structure and Interpretation of Computer Programs about tail recursion. In contrasting iteration and recursion, we must be careful not to confuse the notion of a recursive process with the notion of a recursive procedure. When we describe a procedure as recursive, we are referring to the syntactic fact that the procedure definition refers (either directly or indirectly) to the procedure itself. But when we describe a process as following a pattern that is, say, linearly recursive, we are speaking about how the process evolves, not about the syntax of how a procedure is written. It may seem disturbing that we refer to a recursive procedure such as fact-iter as generating an iterative process. However, the process really is iterative: Its state is captured completely by its three state variables, and an interpreter need keep track of only three variables in order to execute the process. One reason that the distinction between process and procedure may be confusing is that most implementations of common languages (including Ada, Pascal, and C) are designed in such a way that the interpretation of any recursive procedure consumes an amount of memory that grows with the number of procedure calls, even when the process described is, in principle, iterative. As a consequence, these languages can describe iterative processes only by resorting to special-purpose “looping constructs” such as do, repeat, until, for, and while. The implementation of Scheme does not share this defect. It will execute an iterative process in constant space, even if the iterative process is described by a recursive procedure. An implementation with this property is called tail-recursive. With a tail-recursive implementation, iteration can be expressed using the ordinary procedure call mechanism, so that special iteration constructs are useful only as syntactic sugar.
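The excerpt refers to fact-iter without showing it. As a rough sketch (the book's version is written in Scheme; the names below simply mirror it), the "three state variables" it mentions look like this:

    def fact_iter(product, counter, max_count):
        # A recursive *procedure* that describes an iterative *process*:
        # the entire state lives in the three arguments, and the recursive
        # call is a tail call, so a tail-recursive implementation can run
        # it in constant space.
        if counter > max_count:
            return product
        return fact_iter(product * counter, counter + 1, max_count)

    def factorial(n):
        return fact_iter(1, 1, n)

    print(factorial(5))   # 120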
Tail recursion is the life you are living right now. You constantly recycle the same stack frame, over and over, because there's no reason or means to return to a "previous" frame. The past is over and done with so it can be discarded. You get one frame, forever moving into the future, until your process inevitably dies. The analogy breaks down when you consider some processes might utilize additional frames but are still considered tail-recursive if the stack does not grow infinitely.
Tail recursion is pretty fast compared to normal recursion. It is fast because the state of the ancestor calls does not need to be kept on the stack to keep track of them. In normal recursion, every ancestor call's pending state is written to the stack to keep track of it.
To understand some of the core differences between tail-call recursion and non-tail-call recursion, we can explore the .NET implementations of these techniques. Here is an article with some examples in C#, F#, and C++/CLI: Adventures in Tail Recursion in C#, F#, and C++/CLI. C# does not optimize for tail-call recursion, whereas F# does. The differences of principle involve loops vs. lambda calculus. C# is designed with loops in mind, whereas F# is built from the principles of lambda calculus. For a very good (and free) book on these principles, see Structure and Interpretation of Computer Programs, by Abelson, Sussman, and Sussman. Regarding tail calls in F#, for a very good introductory article, see Detailed Introduction to Tail Calls in F#. Finally, here is an article that covers the difference between non-tail recursion and tail-call recursion (in F#): Tail-recursion vs. non-tail recursion in F sharp. If you want to read about some of the design differences of tail-call recursion between C# and F#, see Generating Tail-Call Opcode in C# and F#. If you care enough to want to know what conditions prevent the C# compiler from performing tail-call optimizations, see this article: JIT CLR tail-call conditions.
Recursion means a function calling itself. For example:

    (define (un-ended name)
      (un-ended 'me)
      (print "How can I get here?"))

Tail recursion means recursion where the recursive call concludes the function:

    (define (un-ended name)
      (print "hello")
      (un-ended 'me))

See, the last thing the un-ended function (procedure, in Scheme jargon) does is to call itself. Another (more useful) example is:

    (define (map lst op)
      (define (helper done left)
        (if (null? left)
            done
            (helper (cons (op (car left)) done) (cdr left))))
      (reverse (helper '() lst)))

In the helper procedure, the LAST thing it does, if left is not null, is to call itself (AFTER consing something and cdr-ing something). This is basically how you map a list. Tail recursion has a great advantage: the interpreter (or compiler, depending on the language and vendor) can optimize it and transform it into something equivalent to a while loop. As a matter of fact, in the Scheme tradition, most "for" and "while" loops are done in a tail-recursive manner (there is no for or while, as far as I know).
A tail recursive function is a recursive function in which the recursive call is the last thing executed in the function. With a regular recursive function, we have a call stack, and every time we invoke the recursive function within that recursive function, we add another layer to our call stack. In normal recursion the space is O(n); tail recursion brings the space complexity from O(n) down to O(1). Tail call optimization means that it is possible to call a function from another function without growing the call stack. We should write tail recursion in recursive solutions, but certain languages do not actually support tail call optimization in their engines. Proper tail calls have been in the ECMAScript 2015 (ES6) specification, but most of the engines that compile JS have not implemented them, so you won't achieve O(1) in JS because the engine itself does not apply the optimization. As of January 1, 2020, Safari is the only browser that supports tail call optimization. Languages such as Haskell and Scheme do optimize tail recursion (Java, on the JVM, does not).

Regular recursive factorial:

    function Factorial(x) {
      // Base case: x <= 1
      if (x <= 1) {
        return 1;
      } else {
        // x is waiting for the return value of Factorial(x - 1):
        // the last thing we do is NOT the recursive call, because
        // after the recursive call we still have to multiply.
        return x * Factorial(x - 1);
      }
    }

We have 4 calls in our call stack:

    Factorial(4);                      // waiting in memory for Factorial(3)
    4 * Factorial(3);                  // waiting in memory for Factorial(2)
    4 * (3 * Factorial(2));            // waiting in memory for Factorial(1)
    4 * (3 * (2 * Factorial(1)));
    4 * (3 * (2 * 1));

We are making 4 Factorial() calls, so space is O(n); this might cause a stack overflow.

Tail recursive factorial:

    function tailFactorial(x, totalSoFar = 1) {
      // Base case: x === 0. In recursion there must be a base case,
      // otherwise the calls never stop.
      if (x === 0) {
        return totalSoFar;
      } else {
        // Nothing is waiting for tailFactorial to complete; we return
        // another call to tailFactorial() and do no additional computation
        // with what we get back from this recursive call.
        return tailFactorial(x - 1, totalSoFar * x);
      }
    }

We don't need to remember anything after we make our recursive call.
There are two basic kinds of recursions: head recursion and tail recursion. In head recursion, a function makes its recursive call and then performs some more calculations, maybe using the result of the recursive call, for example. In a tail recursive function, all calculations happen first and the recursive call is the last thing that happens. Taken from this super awesome post. Please consider reading it.
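As a small illustrative sketch (the names are made up), the two shapes described above can look like this:

    def head_sum(n):
        if n == 0:
            return 0
        # Head recursion: the recursive call happens first, and more
        # calculation (the "+ n") is performed on its result afterwards.
        return head_sum(n - 1) + n

    def tail_sum(n, acc=0):
        if n == 0:
            return acc
        # Tail recursion: all calculation happens before the call,
        # and the recursive call is the last thing that happens.
        return tail_sum(n - 1, acc + n)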
A function is tail recursive if each recursive case consists only of a call to the function itself, possibly with different arguments. Or, tail recursion is recursion with no pending work. Note that this is a programming-language independent concept. Consider the function defined as: g(a, b, n) = a * b^n A possible tail-recursive formulation is: g(a, b, n) | n is zero = a | n is odd = g(a*b, b, n-1) | otherwise = g(a, b*b, n/2) If you examine each RHS of g(...) that involves a recursive case, you'll find that the whole body of the RHS is a call to g(...), and only that. This definition is tail recursive. For comparison, a non-tail-recursive formulation might be: g'(a, b, n) = a * f(b, n) f(b, n) | n is zero = 1 | n is odd = f(b, n-1) * b | otherwise = f(b, n/2) ^ 2 Each recursive case in f(...) has some pending work that needs to happen after the recursive call. Note that when we went from g' to g, we made essential use of associativity (and commutativity) of multiplication. This is not an accident, and most cases where you will need to transform recursion to tail-recursion will make use of such properties: if we want to eagerly do some work rather than leave it pending, we have to use something like associativity to prove that the answer will be the same. Tail recursive calls can be implemented with a backwards jump, as opposed to using a stack for normal recursive calls. Note that detecting a tail call, or emitting a backwards jump is usually straightforward. However, it is often hard to rearrange the arguments such that the backwards jump is possible. Since this optimization is not free, language implementations can choose not to implement this optimization, or require opt-in by marking recursive calls with a 'tailcall' instruction and/or choosing a higher optimization setting. Some languages (e.g. Scheme) do, however, require all implementations to optimize tail-recursive functions, maybe even all calls in tail position. Backwards jumps are usually abstracted as a (while) loop in most imperative languages, and tail-recursion, when optimized to a backwards jump, is isomorphic to looping.
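For concreteness, here is a hypothetical Python transcription of g and the non-tail-recursive pair g'/f above; the only point is to show where the pending work sits.

    def g(a, b, n):
        # Tail recursive: each recursive case is a bare call to g and nothing else.
        if n == 0:
            return a
        if n % 2 == 1:
            return g(a * b, b, n - 1)
        return g(a, b * b, n // 2)

    def f(b, n):
        # Not tail recursive: the "* b" and "** 2" are pending after each call.
        if n == 0:
            return 1
        if n % 2 == 1:
            return f(b, n - 1) * b
        return f(b, n // 2) ** 2

    def g_prime(a, b, n):
        return a * f(b, n)

    assert g(3, 2, 10) == g_prime(3, 2, 10) == 3 * 2 ** 10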
This question has a lot of great answers... but I cannot help but chime in with an alternative take on how to define "tail recursion", or at least "proper tail recursion." Namely: should one look at it as a property of a particular expression in a program? Or should one look at it as a property of an implementation of a programming language? For more on the latter view, there is a classic paper by Will Clinger, "Proper Tail Recursion and Space Efficiency" (PLDI 1998), that defined "proper tail recursion" as a property of a programming language implementation. The definition is constructed to allow one to ignore implementation details (such as whether the call stack is actually represented via the runtime stack or via a heap-allocated linked list of frames). To accomplish this, it uses asymptotic analysis: not of program execution time as one usually sees, but rather of program space usage. This way, the space usage of a heap-allocated linked list vs a runtime call stack ends up being asymptotically equivalent; so one gets to ignore that programming language implementation detail (a detail which certainly matters quite a bit in practice, but can muddy the waters quite a bit when one attempts to determine whether a given implementation is satisfying the requirement to be "properly tail recursive"). The paper is worth careful study for a number of reasons: It gives an inductive definition of the tail expressions and tail calls of a program. (Such a definition, and why such calls are important, seems to be the subject of most of the other answers given here.) Here are those definitions, just to provide a flavor of the text: Definition 1: The tail expressions of a program written in Core Scheme are defined inductively as follows. The body of a lambda expression is a tail expression. If (if E0 E1 E2) is a tail expression, then both E1 and E2 are tail expressions. Nothing else is a tail expression. Definition 2: A tail call is a tail expression that is a procedure call. (A tail recursive call, or as the paper says, a "self-tail call", is a special case of a tail call where the procedure is invoked itself.) It provides formal definitions for six different "machines" for evaluating Core Scheme, where each machine has the same observable behavior except for the asymptotic space complexity class that each is in. For example, after giving definitions for machines with, respectively, 1. stack-based memory management, 2. garbage collection but no tail calls, and 3. garbage collection and tail calls, the paper continues onward with even more advanced storage management strategies, such as 4. "evlis tail recursion", where the environment does not need to be preserved across the evaluation of the last sub-expression argument in a tail call, 5. reducing the environment of a closure to just the free variables of that closure, and 6. so-called "safe-for-space" semantics as defined by Appel and Shao. In order to prove that the machines actually belong to six distinct space complexity classes, the paper, for each pair of machines under comparison, provides concrete examples of programs that will expose asymptotic space blowup on one machine but not the other. (Reading over my answer now, I'm not sure if I've managed to actually capture the crucial points of the Clinger paper. But, alas, I cannot devote more time to developing this answer right now.)
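As a toy illustration (not taken from the paper), Definition 1 and Definition 2 can be turned into a few lines of Python over a miniature AST; the tuple-based node shapes here are invented purely for the example.

    def tail_expressions(body):
        # Definition 1: the body of a lambda is a tail expression, and if an
        # (if E0 E1 E2) is a tail expression then E1 and E2 are too (E0 is not).
        tails = [body]
        if isinstance(body, tuple) and body[0] == 'if':
            _, test, then_branch, else_branch = body
            tails += tail_expressions(then_branch)
            tails += tail_expressions(else_branch)
        return tails

    def tail_calls(body):
        # Definition 2: a tail call is a tail expression that is a procedure call.
        return [e for e in tail_expressions(body)
                if isinstance(e, tuple) and e[0] == 'call']

    # Body of the tail-recursive Scheme factorial shown earlier:
    #   (if (= x 0) accum (fact (- x 1) (* accum x)))
    body = ('if', ('call', '=', 'x', 0),
            'accum',
            ('call', 'fact', ('call', '-', 'x', 1), ('call', '*', 'accum', 'x')))

    print(tail_calls(body))   # only the outer call to fact is in tail position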
Many people have already explained recursion here. I would like to cite a couple of thoughts about some advantages that recursion gives from the book “Concurrency in .NET, Modern patterns of concurrent and parallel programming” by Riccardo Terrell: “Functional recursion is the natural way to iterate in FP because it avoids mutation of state. During each iteration, a new value is passed into the loop constructor instead to be updated (mutated). In addition, a recursive function can be composed, making your program more modular, as well as introducing opportunities to exploit parallelization." Here also are some interesting notes from the same book about tail recursion: Tail-call recursion is a technique that converts a regular recursive function into an optimized version that can handle large inputs without any risks and side effects. NOTE The primary reason for a tail call as an optimization is to improve data locality, memory usage, and cache usage. By doing a tail call, the callee uses the same stack space as the caller. This reduces memory pressure. It marginally improves the cache because the same memory is reused for subsequent callers and can stay in the cache, rather than evicting an older cache line to make room for a new cache line.