Should one choose recursion over iteration? - algorithm

Why should one choose recursion over iteration when a solution has the same time complexity in both cases but better space complexity for the iterative one?

Here's a particular example of a case where there are extra considerations. Tree search algorithms can be defined recursively (because each subtree of a tree is a tree) or iteratively (with a stack). However, while a recursive search can work perfectly for finding the first leaf with a certain property or searching over all leaves, it does not lend itself to producing a well-behaved iterator: an object or function state that returns a leaf, and later when called again returns the next leaf, etc. In an iterative design the search stack can be stored as a static member of the object or function, but in a recursive design the call stack is lost whenever the function returns and is difficult or expensive to recreate.
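The iterator distinction above can be sketched in Python (the `Node` class, tree shape, and method names are illustrative assumptions): because the search stack is kept as object state rather than on the call stack, the traversal can pause after each leaf and resume later.

```python
class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []  # an empty list means this node is a leaf

class LeafIterator:
    """Yields leaves one at a time; the search stack survives between calls."""
    def __init__(self, root):
        self.stack = [root]  # explicit stack replaces the call stack

    def next_leaf(self):
        while self.stack:
            node = self.stack.pop()
            if not node.children:
                return node.value  # pause here; self.stack remembers our place
            self.stack.extend(reversed(node.children))  # visit children left-to-right
        return None  # traversal exhausted

# A small tree: root with two subtrees holding leaves 1..4
tree = Node("r", [Node("a", [Node(1), Node(2)]), Node("b", [Node(3), Node(4)])])
it = LeafIterator(tree)
```

Each call to `next_leaf` resumes exactly where the previous one stopped, which is the behavior a purely recursive search cannot offer without rebuilding its call stack.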

Iteration is more difficult to understand for some algorithms. An algorithm that is naturally expressed recursively may not be as easy to understand when expressed iteratively. It can also be difficult to convert a recursive algorithm into an iterative one, and verifying that the two versions are equivalent can be hard as well.
Recursion allows you to allocate additional automatic objects at each function call. The iterative alternative is to repeatedly dynamically allocate or resize memory blocks. On many platforms automatic allocation is much faster, to the point that its speed bonus outweighs the speed penalty and storage cost of recursive calls. (But some platforms don't support allocation of large amounts of automatic data; it's a trade-off.)
Recursion is very beneficial when the iterative solution requires you to simulate recursion with a stack. Recursion acknowledges that the compiler already manages a stack to accomplish precisely what you need. When you start managing your own, not only are you likely re-introducing the function-call overhead you intended to avoid; you're also re-inventing a wheel (with plenty of room for bugs) that already exists in a fairly bug-free form.

Some benefits of recursion
The code is more elegant (compared to loops).
Recursion is very useful for backtracking over data structures such as linked lists and binary search trees: the function calls itself, the call stack is built precisely for these recursive calls, and each call is chained to the one before it.
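As a sketch of the BST point (the `BSTNode` class and the example tree are made up for illustration), a recursive search lets each call care about just one node while the call stack chains it back to its callers:

```python
class BSTNode:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def bst_contains(node, key):
    """Recursive search: each call handles one node; the call stack
    remembers the chain of callers back up the tree."""
    if node is None:
        return False
    if key == node.key:
        return True
    # Recurse into exactly one subtree, since a BST is ordered
    return bst_contains(node.left if key < node.key else node.right, key)

# A small example tree:   5
#                        / \
#                       3   8
root = BSTNode(5, BSTNode(3), BSTNode(8))
```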

Related

Space complexity comparison between recursion and dynamic programming, which is better?

I've seen that the space complexity of recursion depends on the space used by the call stack, and that dynamic programming uses extra space to improve time complexity. So is recursion better than dynamic programming in terms of space complexity?
Not if the dynamic programming is done optimally. It just makes explicit the storage that the recursive algorithm already uses implicitly on the stack. It doesn't add any extra space needlessly (unless it's implemented suboptimally).
Consider Fibonacci calculation. The recurrence formula seems to require only two values, Fib(n+2) = Fib(n+1) + Fib(n), but due to recursion the calculation will actually use O(n) space on the stack anyway. Because of the double recursion, the time will even be exponential, whereas with dynamic programming, filling the same space from the ground up, both space and time will be linear.
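A minimal Python sketch of this comparison (the function names are my own): the doubly recursive version takes exponential time and O(n) stack depth, while the bottom-up version fills the recurrence from the ground up and keeps only the two most recent values.

```python
def fib_recursive(n):
    """Double recursion: exponential time, O(n) call-stack depth."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_bottom_up(n):
    """Bottom-up DP: linear time, O(1) extra space (just two values)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

`fib_bottom_up(50)` returns instantly, while `fib_recursive(50)` would take on the order of Fib(50) additions.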
If you pick your favorite problem that dynamic programming is appropriate for, such as subset-sum, there are generally three approaches.
Recursion
Bottom up dynamic programming.
Top down dynamic programming, aka recursion with memoization.
In terms of time complexity, recursion is usually worse by an exponential factor, and the other two are equivalent. (That is why we do dynamic programming.)
In terms of space requirements, recursion is usually the best (you only have to track the current attempt at a solution), and often (though not always) bottom-up is better than top-down by a factor of n (the size of your problem). That is because bottom-up lets you know when you're done with a particular piece of data and can throw it away.
In terms of ease of writing the code, recursion is usually easiest, followed by top-down, followed by bottom-up. (Though the memory savings of bottom up make it worthwhile to learn.)
Now you may ask: are there other possible tradeoffs between memory and performance? It isn't done very often, but there are. Do top-down and use an LRU cache (when the cache gets too big, you discard the least recently used value from it). You will get a different tradeoff, though figuring out exactly what that tradeoff is can be complicated.
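As an illustrative sketch of that tradeoff: Python's `functools.lru_cache` accepts a `maxsize` bound, which gives exactly this "top-down with an LRU cache" behavior. The subset-sum formulation and the bound of 1024 below are assumptions made for the example.

```python
from functools import lru_cache

def subset_sum(nums, target):
    """Top-down DP with a bounded LRU cache: memory is capped at `maxsize`
    entries, at the cost of possibly recomputing evicted results."""

    @lru_cache(maxsize=1024)  # the bound is arbitrary, chosen for illustration
    def can_make(i, remaining):
        if remaining == 0:
            return True
        if i == len(nums) or remaining < 0:
            return False
        # Either take nums[i] or skip it
        return can_make(i + 1, remaining - nums[i]) or can_make(i + 1, remaining)

    return can_make(0, target)
```

With `maxsize=None` this would be ordinary unbounded memoization; shrinking `maxsize` trades memory for repeated work.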

Tree : Performance comparison between stack implementation and recursive call of Traversal in BST

Well, I am currently learning data structures and algorithms. I have found two methods of traversal in a binary search tree:
(1) stack implementation
(2) recursive call method
Which one is better performance-wise?
As long as the algorithm remains the same, performance should also be the same. In your case, performance remains the same because both approaches use a stack.
In the stack implementation, the programmer explicitly maintains a stack for the traversal. In the recursive call method, the program's internal call stack is used instead.
EDIT:
and what about running time complexity?
Running-time complexity would be the same in both cases, but execution time could differ depending on the implementation. As no code/implementation was provided: in a general sense, recursion could take much longer, because recursion (implemented naively) involves pushing a stack frame, jumping, returning, and popping back from the stack.
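To make the comparison concrete, here is an in-order BST traversal written both ways in Python (a sketch; the `Node` class is an assumption). Both use a stack; only who manages it differs, so the results and asymptotics match.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder_recursive(node, out=None):
    """The program's call stack tracks where to resume after each subtree."""
    if out is None:
        out = []
    if node:
        inorder_recursive(node.left, out)
        out.append(node.key)
        inorder_recursive(node.right, out)
    return out

def inorder_iterative(root):
    """An explicit stack plays exactly the role the call stack plays above."""
    out, stack, node = [], [], root
    while stack or node:
        while node:              # dive left, remembering ancestors
            stack.append(node)
            node = node.left
        node = stack.pop()
        out.append(node.key)
        node = node.right
    return out

root = Node(2, Node(1), Node(3))
```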
For more information you can check the following links:
Is recursion faster than loops
Looping versus recursion for improved application performance

Can every recursive algorithm be improved with dynamic programming?

I am a first year undergraduate CSc student who is looking to get into competitive programming.
Recursion involves defining and solving subproblems. As I understand it, top-down dynamic programming (DP) involves memoizing the solutions to subproblems to reduce the time complexity of the algorithm.
Can top-down DP be used to improve the efficiency of every recursive algorithm with overlapping subproblems? Where would DP fail to work, and how can I identify this?
The short answer is: Yes.
However, there are some constraints. The most obvious one is that the recursive calls must overlap; i.e., during the execution of the algorithm, the recursive function must be called multiple times with the same parameters. This lets memoization truncate the recursion tree, so you can always use memoization to reduce the number of calls.
However, this reduction of calls comes at a price: you need to store the results somewhere. So the next obvious constraint is that you need enough memory. This brings a not-so-obvious constraint with it: memory access always takes time. You first need to find where the result is stored, and then perhaps copy it to some location. So in some cases it may be faster to let the recursion recompute the result instead of loading it from storage. But this is very implementation-specific and can even depend on the operating system and hardware setup.
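A small Python experiment (my own sketch, using Fibonacci as the overlapping-subproblem example) shows how memoization truncates the recursion tree, by counting function invocations:

```python
from functools import lru_cache

calls = {"plain": 0, "memo": 0}

def fib_plain(n):
    calls["plain"] += 1
    return n if n < 2 else fib_plain(n - 1) + fib_plain(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    calls["memo"] += 1  # counts only cache misses; hits never run the body
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

result_plain = fib_plain(10)  # 177 invocations for n = 10
result_memo = fib_memo(10)    # 11 invocations: each subproblem runs once
```

For fib(10) the call count drops from 177 to 11, one per distinct subproblem, at the cost of storing those 11 results.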

Are there any reasons to choose iterative algorithms over recursive ones

I am more comfortable implementing recursive methods than iterative ones. While studying for an exam, I implemented a recursive BFS (breadth-first search) using queues, but while searching online for a recursive BFS that uses queues, I kept reading that BFS is an iterative algorithm, not a recursive one. So is there any reason to choose one over the other?
Iterative is more efficient for the computer. Recursive is more efficient for the programmer and more elegant (perhaps).
The problem with recursion is that each recursive call pushes its state/frame onto the call stack, which can quickly lead to resource exhaustion (stack overflow!) for deep recursion. But the solutions are often easier to code and read.
Iterative code performs better because everything is done in the local frame. However, converting recursive code to iterative can reduce readability, due to the extra variables introduced to track the progression of the algorithm.
Choose whatever implementation is easiest to code and maintain. Only worry once you have a demonstrated problem.
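For reference, the queue-based BFS the question mentions is naturally iterative, because its pending work lives in a FIFO queue rather than on a stack. A sketch in Python, with an assumed adjacency-list representation of the graph:

```python
from collections import deque

def bfs_order(graph, start):
    """Iterative BFS: a FIFO queue holds the frontier, so no recursion
    (and no call-stack growth) is needed."""
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()   # FIFO: oldest frontier node first
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return order

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
```

The loop visits nodes level by level; swapping the deque for a stack (and `popleft` for `pop`) would turn this same skeleton into a depth-first search.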
Iterative and recursive versions have the same time complexity. The difference: recursive programs need more memory, because each recursive call pushes the state of the program onto the stack, and a stack overflow may occur; but recursive code is easier to write and maintain. You can reduce the space cost of a recursive program by using tail recursion.
Iterative implementations are usually faster. One example is the Fibonacci series: a simple loop is faster than a recursive solution.
More discussion here Recursion or Iteration?

Recursive VS Nonrecursive for binary tree traversal

What are the differences between recursive and non-recursive binary tree traversal?
Which one is better for a large tree, and why?
Thanks
Recursive functions are simpler to implement, since you only have to care about one node at a time; they use the call stack to store the state of each call.
Non-recursive functions have much lower call-stack usage, but require you to store your own record of pending nodes, and they can be far more complex than recursive functions.
The difference is that a recursive way uses the call stack whereas an iterative way uses an explicit stack (the stack data structure). What this leads to are two things:
1) If it is a large tree, a recursive way can cause stack overflow.
2) With an iterative approach, you can stop somewhere in the middle of the traversal. In other words, you can implement something like a pre-order/in-order/post-order iterator with a stack. This can be useful in some cases.
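Point 2 can be sketched in Python with a generator over an explicit stack (the `Node` class and the example tree are illustrative): because the stack lives inside the generator, the caller can stop mid-traversal and resume later.

```python
from itertools import takewhile

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder_iter(root):
    """In-order generator using an explicit stack; the traversal state
    persists between yields, so the caller can pause anywhere."""
    stack, node = [], root
    while stack or node:
        while node:              # dive left, remembering ancestors
            stack.append(node)
            node = node.left
        node = stack.pop()
        yield node.key           # pause here; stack keeps our place
        node = node.right

# A balanced example tree holding keys 1..7
root = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))

# Stop in the middle of the traversal: take keys only while they are <= 3
first_three = list(takewhile(lambda k: k <= 3, inorder_iter(root)))
```

The generator never visits the right subtree at all if the caller stops early, which is exactly the kind of partial traversal a plain recursive function cannot hand back to its caller.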

Resources