Possible Duplicate:
Are there problems that cannot be written using tail recursion?
From my understanding, tail recursion is an optimization you can use when a recursive call does not need information from the recursive calls that it will spawn.
Is it possible then to implement all recursive functions using tail-recursion? What about something like DFS, where you need the innermost child to return before the parent can?
It depends on exactly what you are asking.
If you want to keep all functions as functions (no mutable state) with the same signatures, then no. The most obvious example is quicksort, where the two recursive calls can't both be tail calls.
If you can modify the function in various ways, then yes. Sometimes a local modification is sufficient: often you can add an "accumulator" that builds up the expression that is eventually returned (although if the result involves non-commutative operations you need to be careful; for example, naively constructing a linked list this way reverses its order), or you can add a stack.
Alternatively, you can do a global modification of the entire program, in which each function takes as an extra argument the function that contains future actions. This is the continuation passing that Pete is talking about.
If you are working by hand, the local modification is often fairly easy. But if you're doing automated rewriting (in a compiler, for example), it's simpler to take the global approach (it requires less "smarts").
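To make the global rewrite concrete, here is a minimal Python sketch (my own example, not from the answers above): a naive tree sum, whose additions happen after the recursive calls, is rewritten in continuation-passing style so that every call becomes a tail call, with the pending work carried in the continuation k. Note that CPython does not actually eliminate tail calls, so this only illustrates the transformation.

def tree_sum(node):
    # Direct style: the additions happen AFTER the recursive calls,
    # so neither recursive call is a tail call.
    if node is None:
        return 0
    value, left, right = node
    return value + tree_sum(left) + tree_sum(right)

def tree_sum_cps(node, k=lambda x: x):
    # CPS version: the recursive calls and the calls to k are all tail calls;
    # the "rest of the work" lives in the continuation k instead of on the stack.
    if node is None:
        return k(0)
    value, left, right = node
    return tree_sum_cps(left,
                        lambda left_sum: tree_sum_cps(right,
                                                      lambda right_sum: k(value + left_sum + right_sum)))

# Nodes are (value, left, right) tuples.
tree = (1, (2, None, None), (3, None, (4, None, None)))
assert tree_sum(tree) == tree_sum_cps(tree) == 10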
Yes and no.
Yes, used in conjunction with other control flow mechanisms (e.g., continuation passing) you can express any arbitrary control flow as tail recursion.
No, it is not possible to express all recursion as tail recursion unless you supplement the tail recursion with other control flow mechanisms.
All programs can be rewritten as tail calls using continuation passing. Simply add one parameter to the tail call representing the continuation of the current execution.
Any Turing-complete language can perform the same transformation that continuation passing provides: create a Gödel number for the program and input parameters that the non-tail call returns to, and pass that as a parameter to the tail call. Obviously, environments where this is done for you with a continuation, co-routine or other first-class construct make it much easier.
CPS is used as a compiler optimisation and I have previously written interpreters using continuation passing. The Scheme programming language is designed to allow it to be implemented in such a fashion, with requirements in the standard for tail-call optimisation and first-class continuations.
Yes you can. The transformation usually involves maintaining the necessary information explicitly, which would otherwise be maintained for us implicitly spread among the execution stack's call frames during run time.
As simple as that. Whatever the run time system does during execution implicitly, we can do explicitly by ourselves. There's no big mystery here. PCs are made of silicon and copper and steel.
It is trivial to implement DFS as a loop with an explicit deque of states/positions/nodes to process. It is in fact defined that way: DFS replaces the popped first entry in the deque with all the arcs coming from it; BFS instead adds those arcs to the end of the deque.
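As a concrete sketch of that description (my example, not the answerer's), here is an iterative traversal in Python using an explicit deque: pushing a node's arcs at the front gives DFS, appending them at the end gives BFS.

from collections import deque

def traverse(graph, start, depth_first=True):
    # graph is assumed to be a dict mapping a node to a list of neighbours.
    pending = deque([start])   # explicit "to do" container instead of the call stack
    seen = {start}
    order = []
    while pending:
        node = pending.popleft()                       # pop the first entry
        order.append(node)
        new = [n for n in graph.get(node, []) if n not in seen]
        seen.update(new)
        if depth_first:
            pending.extendleft(reversed(new))          # DFS: its arcs replace it at the front
        else:
            pending.extend(new)                        # BFS: its arcs go to the end
    return order

g = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}   # hypothetical example graph
print(traverse(g, 'a', depth_first=True))    # ['a', 'b', 'd', 'c']
print(traverse(g, 'a', depth_first=False))   # ['a', 'b', 'c', 'd']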
The continuation-passing style transformation leaves all the function calls in a program as tail calls after it is done. This is a simple fact of life. The continuations used will grow and shrink, but calls will all be tail calls.
We can further reify the state of the process spread across continuations as explicitly maintained data on the heap. What this achieves, in the end, is explication and reification: moving implicit stuff on the stack into explicit stuff living on the heap, simplifying and demystifying the control flow.
I don't know if all recursive functions can be rewritten to be tail-recursive, but many of them can. One standard method of doing this is to use an accumulator. For example, the factorial function can be written (in Common Lisp) like so:
(defun factorial (n)
  (if (<= n 1)
      1
      (* n (factorial (1- n)))))
This is recursive, but not tail recursive. It can be made tail-recursive by adding an accumulator argument:
(defun factorial-accum (accum n)
  (if (<= n 1)
      accum
      (factorial-accum (* n accum) (1- n))))
Factorials can be calculated by setting the accumulator to 1. For example, the factorial of 3 is:
(factorial-accum 1 3)
Whether all recursive functions can be rewritten as tail-recursive functions using methods like this isn't clear to me, though. But certainly many functions can be.
A recursive algorithm is an algorithm implemented in accordance with the Divide & Conquer strategy, where solving each intermediate sub-problem produces 0, 1 or more new, smaller sub-problems. If these sub-problems are solved in LIFO order, you get a classic recursive algorithm.
Now, if your algorithm is known to produce only 0 or 1 sub-problems at each step, then it can easily be implemented through tail recursion. In fact, such an algorithm can easily be rewritten as an iterative algorithm and implemented with a simple loop. (Needless to add, tail recursion is just another, less explicit way to implement iteration.)
A schoolbook example of such a recursive algorithm is the recursive approach to factorial calculation: to calculate n! you need to calculate (n-1)! first, i.e. at each recursive step you discover only one smaller sub-problem. This is the property that makes it so easy to turn the factorial computation into a truly iterative (or tail-recursive) one.
However, if you know that in the general case the number of sub-problems generated at each step of your algorithm is more than 1, then your algorithm is substantially recursive. It cannot be rewritten as an iterative algorithm, and it cannot be implemented through tail recursion alone. Any attempt to implement such an algorithm in an iterative or tail-recursive fashion will require additional LIFO storage of non-constant size for storing "pending" sub-problems. Such implementation attempts simply obfuscate the unavoidably recursive nature of the algorithm by implementing the recursion manually.
For example, such a simple problem as traversal of a binary tree with parent->child links (and no child->parent links) is a substantially recursive problem. It cannot be done by a tail-recursive algorithm, and it cannot be done by an iterative algorithm without auxiliary LIFO storage.
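To make that "additional LIFO storage" concrete, here is a small Python sketch (my illustration, not part of the answer): an iterative pre-order traversal of a binary tree with only parent->child links. The explicit stack plays exactly the role the call stack plays in the recursive version, and its size is not constant.

def preorder_iterative(root):
    # Nodes are assumed to be (value, left, right) tuples; the explicit list
    # 'stack' is the auxiliary LIFO storage described above.
    stack = [root]
    out = []
    while stack:
        node = stack.pop()
        if node is None:
            continue
        value, left, right = node
        out.append(value)
        stack.append(right)   # pushed first so the left subtree is visited first
        stack.append(left)
    return out

tree = (1, (2, None, None), (3, None, (4, None, None)))
print(preorder_iterative(tree))   # [1, 2, 3, 4]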
No, it can be done "naturally" only for functions with one recursive call. For two or more recursive calls you can of course mimic the stack frames yourself, but it will be very ugly and effectively won't be tail recursive in the sense of saving memory.
The point of tail recursion is that you never need to come back to the parent's stack frame, so you simply pass that information on to the child's frame, which can completely replace the parent's frame instead of the stack growing.
Related
Is it recommended to use a recursive algorithm to calculate the sum of n cubes, in terms of time and space efficiency, compared to a non-recursive one?
What exactly do you mean? Summing the first n cubes is best done by computing (n^2*(n + 1)^2)/4, but if you're given a list of numbers to sum their cubes, that's not much of an option.
If you are in a language that does tail-call optimization, a tail-recursive implementation is certainly recommended. If you're not, it may still be worth writing the recursive function if that is easier for you to reason about (a very important aspect of organizing code!). But do keep in mind that a recursion of depth n will take, depending on your language, compiler, etc., anywhere from 4*n to several hundred times n bytes of memory, and stack space isn't unlimited.
I'd go for a loop in most languages: for large n because it is more resource-efficient, and for small n because I find it easier to read than a recursive version. But that's tied to my personal background and experience, and what is easier for you and whoever else needs to see your code may be completely different.
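For what it's worth, here is what the two styles look like in Python (my sketch, with hypothetical names; note that CPython performs no tail-call optimization, so the recursive version still uses stack space proportional to n):

def sum_cubes_loop(n):
    # Iterative version: constant stack space.
    total = 0
    for x in range(n + 1):
        total += x ** 3
    return total

def sum_cubes_tail(n, acc=0):
    # Tail-recursive version: fine in a language with tail-call optimization,
    # but in CPython every call still adds a stack frame.
    if n == 0:
        return acc
    return sum_cubes_tail(n - 1, acc + n ** 3)

# Both agree with the closed form n^2 * (n + 1)^2 / 4 mentioned above.
assert sum_cubes_loop(10) == sum_cubes_tail(10) == (10 ** 2 * 11 ** 2) // 4 == 3025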
It depends on what you want to accomplish. If you want it to depend on previous outcomes, you can make it recursive. Otherwise I would suggest making it non-recursive.
Most compiled languages have tail-recursion removal, and for a simple case like this it will not be a problem. Math people find it much easier to write in functional languages, and recursion comes more naturally to them. You can, however, write it very efficiently:
var sumOf0To10Cubes = Enumerable.Range(0, 11).Select(o => Math.Pow(o, 3)).Sum(); // Range(start, count): a count of 11 yields 0..10
Note that math people prefer:
Sum[x^3, {x, 0, 10}]
I have been reading articles describing how the space complexity of quicksort can be reduced by using the tail-recursive version, but I am not able to understand how this is so. Following are the two versions:
QUICKSORT(A, p, r)
    q = PARTITION(A, p, r)
    QUICKSORT(A, p, q-1)
    QUICKSORT(A, q+1, r)
TAIL-RECURSIVE-QUICKSORT(A, p, r)
    while p < r
        q = PARTITION(A, p, r)
        TAIL-RECURSIVE-QUICKSORT(A, p, q-1)
        p = q+1
(Source - http://mypathtothe4.blogspot.com/2013/02/lesson-2-variations-on-quicksort-tail.html)
As far as I understand, both of these would cause recursive calls on both the left and right halves of the array. In both cases, only one half would be processed at a time, and therefore at any time only one recursive call would be using the stack space. I am unable to see how the tail-recursive quicksort saves space.
The pseudo code above is taken from the article - http://mypathtothe4.blogspot.com/2013/02/lesson-2-variations-on-quicksort-tail.html
The explanation provided in the article confuses me even more -
Quicksort partitions a given sub-array and proceeds to recurse twice; one on the left-sub-array and one on the right. Each of these recursive calls will require its own individual stream of stack space. This space is used to store the indexing variables for the array at some level of recursion. If we picture this occurring from beginning to end of execution, we can see that the stack space doubles at each layer.

So how does Tail-Recursive-Quicksort fix all of this?

Well, instead of recursing on two sub-arrays, we now only recurse on one. This eliminates the need for doubling stack space at every layer of execution. We get around this problem by using the while loop as an iterative control that performs the same task. Instead of needing the stack to save sets of variables for two recursive calls, we simply alter the same set of variables and use the single recursive call on new variables.
I don't see how the stack space doubles at every layer of execution in the case of a regular quicksort.
Note: There is no mention of compiler optimization in the article.
A tail recursive function call allows the compiler to perform a special optimization which it normally can not with regular recursion. In a tail recursive function, the recursive call is the very last thing to be executed. In this case, instead of allocating a stack frame for each call, the compiler can rework the code to simply reuse the current stack frame, meaning a tail-recursive function will only use a single stack frame as opposed to hundreds or even thousands.
This optimization is possible because the compiler knows that once the tail recursive call is made, no previous copies of variables will be needed, because there is no more code to execute. If, for instance, a print statement followed a recursive call, the compiler would need to know the value of the variable to be printed after the recursive call returns, and thus the stack frame cannot be reused.
Here's the wiki page if you'd like more information on how this "space saving" and stack reuse actually works, along with examples: Tail Call
Edit: I didn't explain how this applies to quicksort, did I? Well, some terms are thrown around in that article which make everything all confusing (and some of it is just plain wrong). The first function given (QUICKSORT) makes a recursive call on the left, a recursive call on the right, and then exits. Notice that the recursive call on the right is the very last thing that happens in the function. If the compiler supports tail recursive optimization (explained above), only the left calls create new stack frames; all the right calls just reuse the current frame. This can save some stack frames, but can still suffer from the case where the partitioning creates a sequence of calls where tail recursion optimization doesn't matter. Plus, even though right-side calls use the same frame, the left-side calls called within the right-side calls still use the stack. In the worst case, the stack depth is N.
The second version described is not a tail-recursive quicksort, but rather a quicksort where only the left sorting is done recursively, and the right sorting is done using the loop. In fact, this quicksort (as previously described by another user) cannot have the tail recursion optimization applied to it, because the recursive call is not the last thing to execute.

How does this work? When implemented correctly, the first call to quicksort is the same as a left-side call in the original algorithm. However, no right-side recursive calls are ever made. How does this work? Well, the loop takes care of that: instead of sorting "left then right", it sorts the left with a call, then sorts the right by continually sorting only the lefts of the right. It's really ridiculous sounding, but it's basically just sorting so many lefts that the rights become single elements and don't need to be sorted. This effectively removes the right recursion, making the function less recursive (pseudo recursive, if you will).

However, the real implementation does not choose just the left side each time; it chooses the smaller side. The idea is still the same: it basically only does a recursive call on one side instead of both. Picking the shorter side will ensure that the stack depth can never be larger than log2(N), which is the depth of a proper binary tree. This is because the shorter side is always going to be at most half the size of our current array section. The implementation given by the article does not ensure this, however, because it can suffer from the same worst-case scenario of "left is the whole tree". This article actually gives a pretty good explanation of it if you're willing to do more reading: Efficient selection and partial sorting based on quicksort
The advantage, the whole point of the "mixed recursive/iterative" version, i.e. the version that processes one sub-range by recursion and the other sub-range by iteration, is that by choosing which of the two sub-ranges to process recursively you can guarantee that the depth of recursion will never exceed log2 N, regardless of how bad the pivot choice is.
For the TAIL-RECURSIVE-QUICKSORT pseudocode provided in the question, where the recursive processing is performed first by a literal recursive call, that recursive call should be given the shorter sub-range. That by itself will make sure that the recursion depth is limited by log2 N. So, in order to achieve the recursion-depth guarantee, the code absolutely has to compare the lengths of the sub-ranges before deciding which sub-range to process by recursive call.
A proper implementation of that approach might look as follows (borrowing your pseudocode as a starting point)
HALF-RECURSIVE-QUICKSORT(A, p, r)
    while p < r
        q = PARTITION(A, p, r)
        if (q - p < r - q)
            HALF-RECURSIVE-QUICKSORT(A, p, q-1)
            p = q+1
        else
            HALF-RECURSIVE-QUICKSORT(A, q+1, r)
            r = q-1
The TAIL-RECURSIVE-QUICKSORT pseudocode you provided does not make any attempt to compare the lengths of the sub-ranges. In that case it provides no benefit whatsoever. And no, it is not really "tail recursive". QuickSort cannot possibly be reduced to a tail-recursive algorithm.
If you do a Google search on the terms "qsort loguy higuy", you will easily find numerous instances of another popular QuickSort implementation (C standard library style) based on the very same idea of using recursion for only one of the two sub-ranges. That implementation for 32-bit platforms uses an explicit stack of maximum depth ~32, specifically because it can guarantee that the recursion depth will never get higher than that. (Similarly, 64-bit platforms will only need a stack depth of ~64.)
The QUICKSORT version that makes two literal recursive calls is significantly worse in that regard, since a repeated bad choice of pivot can make it reach a very high recursion depth, up to N in the worst case. With two recursive calls you cannot guarantee that the recursion depth will be limited by log2 N. A smart compiler might be able to replace the trailing call to QUICKSORT with iteration, i.e. turn your QUICKSORT into your TAIL-RECURSIVE-QUICKSORT, but it won't be smart enough to perform the sub-range length comparison.
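A runnable sketch of the same idea in Python (my own illustration of the pseudocode above, using a simple Lomuto partition, not any particular library's qsort):

def partition(a, p, r):
    # Lomuto partition: a[r] is the pivot; returns its final index.
    pivot = a[r]
    i = p - 1
    for j in range(p, r):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]
    return i + 1

def quicksort(a, p=0, r=None):
    # Recurse only on the smaller sub-range and loop over the larger one,
    # so the recursion depth is bounded by about log2(len(a)).
    if r is None:
        r = len(a) - 1
    while p < r:
        q = partition(a, p, r)
        if q - p < r - q:
            quicksort(a, p, q - 1)   # smaller side: recursive call
            p = q + 1                # larger side: handled by the loop
        else:
            quicksort(a, q + 1, r)
            r = q - 1

data = [5, 2, 9, 1, 5, 6]
quicksort(data)
print(data)   # [1, 2, 5, 5, 6, 9]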
Advantage of using tail recursion: the compiler can optimize the code and convert it into non-recursive code.
Advantage of non-recursive code over recursive code: non-recursive code requires less memory to execute than the recursive version, because of the idle stack frames that the recursion consumes.
Here comes the interesting part: even though compilers can theoretically perform that optimization, in practice they don't. Even widespread compilers such as those for .NET and Java don't perform that optimization.
One problem that all code optimizations face is the sacrifice in debuggability: the optimized code no longer corresponds to the source code, so stack traces and exception details can't be easily understood. High-performance code or scientific applications are one thing, but for the majority of consumer applications debugging is required even after release, hence optimizations are not done that vigorously.
Please see:
https://stackoverflow.com/q/340762/1043824
Why doesn't .NET/C# optimize for tail-call recursion?
https://stackoverflow.com/a/3682044/1043824
There seems to be some vocabulary confusion here.
The first version is tail-recursive, because the last statement is a recursive call:
QUICKSORT(A, p, r)
    q = PARTITION(A, p, r)
    QUICKSORT(A, p, q-1)
    QUICKSORT(A, q+1, r)
If you apply the tail-recursion optimization, which is to convert the recursion into a loop, you get the second one, which is no longer tail-recursive:
TAIL-RECURSIVE-QUICKSORT(A, p, r)
    while p < r
        q = PARTITION(A, p, r)
        TAIL-RECURSIVE-QUICKSORT(A, p, q-1)
        p = q+1
The benefit of this is that usually, you will need less stack memory. Why is that? To understand, imagine you want to sort an array with 31 items. In the highly unlikely case that all partitions are perfect, i.e. they split the array right in the middle, your recursion depth would be 4. Indeed, the first split would yield two partitions of 15 items, the second one two partitions of 7 items, the third one two of 3 items, and after the fourth everything is sorted.
But partitions are rarely this perfect. As a result, not all recursions go equally deep. In our example, you might have some that are only three levels deep, and some that are 7 or more (worst case is 30). By eliminating half of the recursions, you have a fair chance that your maximum recursion depth will be less.
As AndreyT pointed out, often the ranges are compared to make sure that the biggest partition is always processed iteratively, and the smallest one recursively. That guarantees the smallest possible recursion depth for a given input and pivot selection strategy.
But this is not always the case. Sometimes, people want to yield results as soon as possible, or want to find and sort only the first n elements. In these cases they always want to sort the first partition before the second one. Even in this situation, eliminating the tail recursion usually improves memory usage, and never makes it worse.
I don't know if this is the correct place to ask this or whether I should have posted a new question, but I have quite a similar doubt.
void print(int n) {
    if (n < 0) return;
    cout << " " << n;
    // The last executed statement is recursive call
    print(n-1);
    print(n-1);
}
Is this tail recursive?
Tail-call elimination is the optimization done by modern compilers for tail recursion.
When the caller/parent function has nothing to do in the unwinding stage after the child call has finished, i.e. the last thing it does is the recursive call itself, then a modern compiler uses goto and labels for the optimization.
Example (our version, which prints n down to 1):
void fun(int n){
    if(n==0)
        return;
    printf("%d\n",n);
    fun(n-1);
}
After optimization:
void fun(int n){
start:
    if(n==0)
        return;
    printf("%d\n",n);
    n=n-1;
    goto start;
}
Advantages of this optimization:
1. Uses very few stack frames for tail-recursive calls.
2. Consumes less memory.
3. No need to save the procedure activation record, which reduces the function-call overhead.
4. No more segmentation faults in C/C++ or stack overflows in Java.
I'm trying to come up with examples of recursive algorithms / functions that cannot be rewritten in a way that avoids using lots of stack memory (i.e. cannot be made fully tail recursive nor rewritten using a loop that doesn't use a stack). Do such functions exist?
I think quicksort might be a candidate, but am not sure if it can be rewritten to use a single tail-recursive function call.
Every algorithm that requires more than one recursive call in the general case cannot be done without either a backtracking stack for the iteration or growing the call stack, because the first call is not in tail position and will always have a continuation.
Naive Fibonacci, quicksort and summing a tree are all in that category.
Are there any general heuristics, tips, tricks, or common design paradigms that can be employed to convert a recursive algorithm to an iterative one? I know it can be done, I'm wondering if there are practices worth keeping in mind when doing so.
You can often entirely preserve the original structure of a recursive algorithm, but avoid the stack, by employing tail calls and changing to continuation-passing, as suggested by this blog entry. (I should really cook up a better standalone example.)
A common technique that I use when I'm in the process of replacing a recursive algorithm with an iterative one is to use a stack, pushing the parameters that would be passed to the recursive function.
Check the following articles:
Replacing Recursion With a Stack
Stacks and Recursion Elimination (pdf)
A common practice is to manage a LIFO stack that keeps a running list of what "remains to be done", and to handle the whole process in a while loop which continues until the list is empty.
With this pattern, what would be a recursion call in the true recursion model is replaced by
- a pushing of current (partially finished) task's "context" onto the stack,
- a pushing of the new task (the one which prompted recursion) onto the stack
- and to "continue" (i.e. jump to the beginning of) the while loop.
Near the head of the loop, the logic pops the most recently inserted context, and starts work on this basis.
Effectively this merely "moves" information which would otherwise have been kept in nested stack frames on the "system" stack to an application-managed stack container. It is an improvement, however, for this stack container can be allocated anywhere (the recursion limit is typically tied to limits in the "system" stack). Therefore essentially the same work gets done, but the explicit management of a "stack" allows it to take place within a single loop construct rather than recursive calls.
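Here is a minimal Python sketch of that pattern (my example, not the answerer's): a recursive tree sum replaced by a while loop over an application-managed stack of tasks, where a partially finished task's context is pushed back onto the stack before its sub-tasks.

def tree_sum_iterative(root):
    # Nodes are (value, left, right) tuples. Each task is either
    # ('visit', node) or ('combine', value); 'results' holds the values
    # of finished sub-tasks.
    tasks = [('visit', root)]      # the running list of what remains to be done
    results = []
    while tasks:                   # continue until the list is empty
        kind, payload = tasks.pop()            # pop the most recently inserted context
        if kind == 'visit':
            if payload is None:
                results.append(0)
            else:
                value, left, right = payload
                tasks.append(('combine', value))   # the partially finished task's context
                tasks.append(('visit', right))     # the new sub-tasks that prompted "recursion"
                tasks.append(('visit', left))
        else:  # 'combine': both sub-sums are now available
            right_sum = results.pop()
            left_sum = results.pop()
            results.append(payload + left_sum + right_sum)
    return results.pop()

tree = (1, (2, None, None), (3, None, (4, None, None)))
assert tree_sum_iterative(tree) == 10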
Often general recursion can be replaced by tail recursion, by collecting partial results in an accumulator and passing it down with the recursive call. Tail recursion is essentially iterative; the recursive call can be implemented as a jump.
For example, the standard general recursive definition of factorial
factorial(n) = if n = 0 then 1 else n * factorial(n - 1)
can be replaced by
factorial(n) = f_iter(n, 1)
and
f_iter(n, a) = if n = 0 then a else f_iter(n - 1, n * a)
which is tail recursive. It is the same as
a = 1;
while (n != 0) {
    a = n * a;
    n = n - 1;
}
return a;
Have a look at these links for performance examples
Recursion VS Iteration (Looping) : Speed & Memory Comparison
and
Replace Recursion with Iteration
and
Recursion vs Iteration
Q: Is the recursive version usually faster?
A: No -- it's usually slower (due to the overhead of maintaining the stack)

Q: Does the recursive version usually use less memory?
A: No -- it usually uses more memory (for the stack).

Q: Then why use recursion??
A: Sometimes it is much simpler to write the recursive version (but we'll need to wait until we've discussed trees to see really good examples...)
I generally start from the base case (every recursive function has one) and work my way backwards, storing the results in a cache (array or hashtable) if necessary.
Your recursive function solves a problem by solving smaller subproblems and using them to solve the bigger instance of the problem. Each subproblem is also broken down further, and so on, until the subproblem is so small that the solution is trivial (i.e. the base case).
The idea is to start at the base case(or base cases) and use that to build the solution for larger cases, and then use those to build even larger cases and so on, until the whole problem is solved. This does not require a stack, and can be done with loops.
A simple example (in Python):
#recursive version
def fib(n):
    if n==0 or n==1:
        return n
    else:
        return fib(n-1)+fib(n-2)

#iterative version
def fib2(n):
    if n==0 or n==1:
        return n
    prev1,prev2=0,1 # start from the base case
    for i in xrange(n):
        cur=prev1+prev2 #build the solution for the next case using the previous solutions
        prev1,prev2=cur,prev1
    return cur
One pattern is Tail Recursion:
A function call is said to be tail recursive if there is nothing to do after the function returns except return its value.
Wiki.
If I have a choice to use recursion or memoization to solve a problem which should I use? In other words if they are both viable solutions in that they give the correct output and can be reasonably expressed in the code I'm using, when would I use one over the other?
They are not mutually exclusive. You can use them both.
Personally, I'd build the base recursive function first, and add memoization afterwards, as an optimisation step.
The rule of thumb to use is based on the amount of overlap the subproblems have. If you're calculating fibonacci numbers (the classic recursion example) there's a lot of needless recalculation done if you use recursion.
For example, to calculate F(4), I need to know F(3) and F(2), so I calculate F(3) by calculating F(2) and F(1), and so on. If I used recursion, I just calculated F(2) and most other F(n) twice. If I use memoization, I can just look the value up.
If you're doing binary search there is no overlap between subproblems, so recursion is okay. Splitting the input array in half at each step results in two unique arrays, which represent two subproblems with no overlap. Memoization wouldn't be a benefit in cases like this.
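As an aside (my example, not the answerer's), in Python the look-up can be as simple as wrapping the recursive function in a cache such as functools.lru_cache:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each F(k) is computed once; later requests are cache look-ups,
    # so F(4) no longer recomputes F(2).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))   # 55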
Recursion has a performance penalty associated with the creation of stack frames; memoization's penalty is the caching of the results. If performance is a concern, the only way to know for sure is to test in your application.
Personally, I'd first go with the method that is the easiest to use and understand, which in my opinion is recursion, until you demonstrate a need for memoization.
Memoization is just a caching method that happens to be commonly used to optimize recursion. It cannot replace recursion.
Not sure I can say without knowing the problem. Often you'd want to use memoization with recursion. Still, memoization is likely to be significantly quicker than recursion if you can in fact use it as an alternative solution. They both have performance issues, but they vary differently with the nature of the problem/size of input.
I pick memoization because it's usually possible to access more heap memory than stack memory.
That is, if your algorithm is run on a lot of data, in most languages you'll run out of stack space recursing before you run out of space on the heap saving data.
I believe you might be confusing memoization (which is, as others have noted, an optimization strategy for recursive algorithms) with dynamic programming (which simulates a recursive solution but does not actually use recursion). If that was your question I'd say it would depend on your priorities: high runtime efficiency (dynamic programming) or high readability (memoization, as the recursive solution of the problem is still present in the code).
It depends on what you're going for. Dynamic programming (memoization) is almost always faster, often by a LOT (i.e., cubic to quadratic, or exponential to polynomial), but in my experience, recursion is easier to read and debug.
Then again, a lot of people avoid recursion like the plague, so they don't find it easy to read...
Also, (third hand?) I find that it's easiest to find the Dynamic solution after I've come up with the recursive one, so I end up doing both. But if you've already got both solutions, Dynamic may be your best bet.
I'm not sure if I've been helpful, but there you go. As in many things of algorithm choice, YMMV.
If your problem is a recursive one, what choice do you have but to recurse?
You can write your recursive function in a way that short circuits using memoization, to gain maximum speed for the second call.
I don't agree with Tomalak's assertion that with a recursive problem you have no choice but to recurse.
The best example is the above-mentioned Fibonacci series.
On my computer the recursive version of F(45) (F for Fibonacci) takes 15 seconds for 2269806339 additions, while the iterative version takes 43 additions and executes in a few microseconds.
Another well-known example is the Towers of Hanoi. After your class on the topic it may seem like recursion is the only way to solve it, but even here there is an iterative solution, although it's not as obvious as the recursive one. Even so, the iterative one is faster, mainly because no expensive stack operations are required.
In case you're interested in the non-recursive version of the Towers of Hanoi, here's the Delphi source code:
procedure TForm1.TowersOfHanoi(Ndisks: Word);
var
  I: LongWord;
begin
  for I := 1 to (1 shl Ndisks) - 1 do  { 2^Ndisks - 1 moves in total }
    Memo1.Lines.Add(Format('%4d: move from pole %d to pole %d',
      [I, (I and (I - 1)) mod 3, (I or (I - 1) + 1) mod 3]));
  Memo1.Lines.Add('done')
end;
Recursion does not need to use a significant amount of stack space if the problem can be solved using tail-recursion techniques. As said previously, it depends on the problem.
In the usual case, you are faced with two criteria which help with your decision:
Run time
Readability
Recursive code is usually slower but much more readable (not always, but most often). As was said, tail recursion can help if your language supports it; if not, there is not much you can do.
The iterative version of a recursive problem is usually faster in terms of runtime, but the code is harder to understand and, because of that, more fragile.
If both versions have the same run time and the same readability, there is no reason to choose either over the other. In this case, you have to check other things: Will this code change? How about maintenance? Are you comfortable with recursive algorithms or are they giving you nightmares?
var memoizer = function (fund, memo) {
    // 'fund' computes a value given a self-reference (shell) and an argument;
    // 'memo' caches already-computed results, seeded with the base cases.
    var shell = function (arg) {
        if (typeof memo[arg] !== 'number') {
            memo[arg] = fund(shell, arg);
        }
        return memo[arg];
    };
    return shell;
};
var fibonacci = memoizer(function (recur, n) { return recur(n - 1) + recur(n - 2); }, [0, 1]);
use both!
Combine both. Optimize your recursive solution by using memoization; that's what memoization is meant for: using memory space to speed up the recursion.