Can someone explain this code to me intuitively? - algorithm

I understand recursion and the advantages it brings to writing code efficiently. While I can write recursive functions, I cannot seem to wrap my head around how they work. I would like someone to explain recursion to me intuitively.
For example, this code:
int fact(int n)
{
    if (n < 0)
        return -1;               // guard against invalid input
    else if (n == 0)
        return 1;                // base case: 0! is 1
    else
        return n * fact(n - 1);  // recursive case
}
These are some of my questions:
Let's say n=5. On entering the function, the control goes to the last return statement, since none of the previous conditions are satisfied.
Now, roughly, the computer 'writes' something like this: 5*(fact(4))
Again, the fact() function is called and the same process gets repeated except now we have n=4.
So, how exactly does the compiler multiply 5*4 and so on down to 2, since it's not exactly 5*4 but 5*fact(4)? How does it 'remember' that it has to multiply two integers, and where does it store the temporary value, since we haven't provided any explicit data structure?
Again, let's say n=5. The same process goes on and eventually n gets decremented to 0. My question is why/how the function doesn't simply return 1, as stated in the return statement. Similar to my previous question, how does the compiler 'remember' that it also has 120 stored for displaying?
I'd be really thankful if someone explains this to me completely, so that I can understand recursion better and more intuitively.

Yeah, for beginners recursion can be quite confusing. But you are already on the right track with your explanation under "1.".
The function is called recursively until a break condition is satisfied; in this case, the break condition is satisfied when n equals 0, and at that point no more recursive calls are made. The result of each recursive call is returned to its caller, and each caller "waits" until it gets that result. That's how the algorithm "knows" who receives each result. The flow of this procedure is handled by the so-called call stack: each call gets a frame that holds its own n and its pending multiplication, and each return pops one frame off.
Hence, in your informal notation (in this example n equals 3):
3*(fact(2)) = 3*(2*fact(1)) = 3*(2*(1*fact(0))).
Now, n equals 0. The inner fact(0) therefore returns 1:
3*(2*(1*(1))) = 3*(2*(1)) = 3*(2) = 6
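If it helps, here is a small instrumented version of the function (my own C++ sketch, not from the question) that prints each call and each return, indented by recursion depth, so you can literally watch the stack grow and then unwind:

#include <cstdio>

// Instrumented factorial: the indentation shows the call depth. Each
// pending multiplication "waits" in its own stack frame until the
// deeper call returns.
int fact(int n, int depth = 0) {
    std::printf("%*scall fact(%d)\n", depth * 2, "", n);
    int result;
    if (n < 0)
        result = -1;
    else if (n == 0)
        result = 1;                           // base case reached
    else
        result = n * fact(n - 1, depth + 1);  // multiply waits for the inner call
    std::printf("%*sfact(%d) returns %d\n", depth * 2, "", n, result);
    return result;
}

int main() {
    fact(3);  // the calls print on the way down, the returns on the way back up
    return 0;
}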

You can see it a bit like this:
The function fact(int n) is like a class, and every time you call fact(int n) you create an instance of that class. By creating them (calling them) from the same function, you create a chain of instances. Once you reach the break condition, those instances start returning one by one, and each returned value is used to calculate a new value in the return statement return n*fact(n-1), e.g. return 3*fact(2);

Related

How can I handle System.StackOverflowException?

This error sometimes appears when I call a recursive function whose parameter is a random number, rand()%10, as in the code below:
private: System::Void AIrandomMove(int randomMove, String^ s)
{
    if (randomMove == 1)
    {
        if (Move(1)) // move number 1 has already been done
        {
            AIrandomMove(rand() % 10, s); // here the System.StackOverflowException appears
        }
        else
        {
            // do move number 1
        }
    }
    // same goes for == 2 || == 3 || ... || == 10
}
How can I handle this?
A proper recursive algorithm works under two assumptions:
you have a base case which terminates the recursion (so the function doesn't call itself);
you have a recursive case which invokes the function with different arguments, so that there is some progression towards the base case.
This translates into something like:
void recursive(inArgs) {
    if (condition)
        return;
    else
        recursive(outArgs);
}
It's clear that if condition never becomes true then this code never terminates (hence it will eventually cause a stack overflow).
In your situation, condition is evaluated through a random-value comparison. Now, assume condition is rand()%2 == 0. Each time it is evaluated, there is a 50% chance of it being true and a 50% chance of it being false.
This doesn't guarantee that the recursion will terminate: for every n, a path of n consecutive false evaluations exists, and its probability (1/2)^n is never zero. That's the problem with your design.
If many moves have already been made (or maybe all of them), the recursion won't end.
You don't need recursion at all in your case: you could store the available moves in a set, remove each move once it is no longer available, and shuffle the set to pick one randomly (sketched below). An even simpler fix is something like:
int chosenMove = rand() % 10;
while (Move(chosenMove)) {   // move already done: pick another
    chosenMove = rand() % 10;
}
// do move chosenMove
But this doesn't guarantee termination either, unless you make sure that a state in which no moves are available can't happen.
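Here is a sketch of the set-based idea in C++ (Move and DoMove stand in for the question's own game functions; they are assumptions, not the original API):

#include <algorithm>
#include <random>
#include <vector>

bool Move(int i);    // assumed: returns true if move i has already been done
void DoMove(int i);  // assumed: performs move i

void AIrandomMove() {
    std::vector<int> moves = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    std::mt19937 rng(std::random_device{}());
    std::shuffle(moves.begin(), moves.end(), rng);  // random order, no repeats
    for (int m : moves) {
        if (!Move(m)) {  // first not-yet-done move in the shuffled order
            DoMove(m);
            return;
        }
    }
    // Reaching this point means no move is available; handle that state here.
}

Because every move is examined at most once, this terminates even when all moves have already been made.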

Recursion transformation without stack-frame code repetition

I have the following pseudo-code:
function X(data, limit, level = 0)
{
    result = [];
    foreach (Y(data, level) as entity) {
        if (level < limit) {
            result = result + X(entity, limit, level + 1);
        } else {
            // trivial recursion case:
            result = result + Z(entity);
        }
    }
    return result;
}
which I need to turn into a plain iterative version (i.e. without recursive calls). So far I'm out of ideas on how to do that elegantly. Following this answer, I see that I must construct the entire stack frames myself, which basically means code repetition (i.e. I will place the same code again and again with different return addresses).
I also tried suggestions like these, where there is a phrase:
Find a recursive call that’s not a tail call.
Identify what work is being done between that call and its return statement.
But I do not understand how the "work" can be identified when it happens inside an internal loop.
So, my problem is that all the examples above present cases where the "work" can be easily identified, because there are no control instructions within the function. I understand the concept behind recursion at the compilation level, but what I want to avoid is code repetition. So,
My question: how do I approach the transformation of the pseudo-code above without code repetition to simulate stack frames?
It looks like an algorithm that descends a nested data structure (lists of lists) and flattens it into a single list. It would have been good to have a simple description like that in the question.
To do that, you need to keep track of multiple indices / iterators / cursors, one at each level that you've descended through. A recursive implementation does that by using the call stack. A non-recursive implementation will need a manually-implemented stack data structure where you can push/pop stuff.
Since you don't have to save context (registers) and a return address on the call stack, just the actual iterator (e.g. array index), this can be a lot more space efficient.
When you're looping over the result of Y and need to call X or Z, push the current state onto the stack. Branch back to the beginning of the foreach and call Y on the new entity. When you reach the end of a loop, pop the old state if there is any, and pick up in the middle of that loop; see the sketch below.
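In C++, that could look like the following sketch. The Node type is a made-up stand-in for the question's data, since Y and Z are abstract, so treat the names here as assumptions:

#include <cstddef>
#include <stack>
#include <utility>
#include <vector>

// A node is either a leaf value or a list of child nodes.
struct Node {
    int value = 0;               // meaningful only for leaves
    std::vector<Node> children;  // non-empty for internal nodes
};

// Iterative flatten. Each stack entry is (node, index of next child):
// exactly the per-level cursor that the recursive version keeps
// implicitly in its call frames.
std::vector<int> flatten(const Node& root) {
    std::vector<int> result;
    std::stack<std::pair<const Node*, std::size_t>> frames;
    frames.push({&root, 0});
    while (!frames.empty()) {
        const Node* node = frames.top().first;
        std::size_t next = frames.top().second;
        if (node->children.empty()) {                 // leaf: emit it and "return"
            result.push_back(node->value);
            frames.pop();
        } else if (next < node->children.size()) {
            frames.top().second = next + 1;           // remember where we were...
            frames.push({&node->children[next], 0});  // ...and descend
        } else {
            frames.pop();                             // all children done: "return"
        }
    }
    return result;
}

Note that each manual "frame" is just a pointer and an index, which is the space saving described above.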

OR between two function calls

What is the meaning of a || between two function calls,
like
{
    // some code
    return Find(n.left, req) || Find(n.right, req);
}
http://www.careercup.com/question?id=7560692
Can someone help me understand? Many thanks in advance.
It means that the expression is true if at least one of the two calls returns true (or both of them).
Depending on the programming language: Find(n.left,req) is called first, and if it returns true, the whole expression is true. If it returns false, Find(n.right,req) is called and its Boolean value becomes the result.
In Java (and C and C#) || means "lazy or". The single stroke | also means "or", but operates slightly differently.
To calculate a||b, the computer calculates the truth value (true or false) of a, and if a is true it returns true without bothering to calculate b; hence the word "lazy". Only if a is false will it check b to see if it is true (and hence whether a or b is true).
To calculate a|b, the computer works out the value of a and b first, then "ors" the answers together.
The "lazy or" || is more efficient, because it sometimes does not need to calculate b at all. The only reason you might want to use a|b is if b is actually a function (method) call, and because of some side-effect of the method you want to be sure it executes exactly once. I personally consider this poor programming technique, and on the very few occasions that I want b to always be explicitly calculated I do this explicitly and then use a lazy or.
E.g. consider a function or method foo() which returns a boolean. Instead of
boolean x = a|foo(something);
I would write
boolean c = foo(something);
boolean x = a||c;
which explicitly calls foo() exactly once, so you know what is going on.
Much better programming practice, IMHO. Indeed the best practice would be to eliminate the side effect in foo() entirely, but sometimes that's hard to do.
If you are using the lazy or ||, think about the order of evaluation. If a is easy to calculate and usually true, a||b will be more efficient than b||a, as most of the time a simple calculation of a is all that is needed. Conversely, if b is usually false and difficult to calculate, putting it first gains you little. And if one of a or b is a constant and the other a method call, put the constant first (a||foo() rather than foo()||a), as a method call will always be slower than reading a simple local variable.
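Here is a minimal, self-contained demonstration of the lazy/eager difference (a C++ sketch using nothing beyond the standard library):

#include <iostream>

// Returns its argument, but reports that it was evaluated at all.
bool noisy(const char* name, bool value) {
    std::cout << name << " evaluated\n";
    return value;
}

int main() {
    std::cout << "-- a || b --\n";
    bool lazy = noisy("a", true) || noisy("b", true);
    // prints only "a evaluated": b is skipped because a was already true

    std::cout << "-- a | b --\n";
    bool eager = noisy("a", true) | noisy("b", true);
    // prints both messages: both calls always run, regardless of their values

    std::cout << lazy << eager << "\n";  // both results are true
    return 0;
}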
Hope this helps.
Peter Webb
return Find(n.left,req)||Find(n.right,req);
means: execute the first Find, Find(n.left,req), and return true if it returns true; otherwise execute the second Find and return true if it returns true, and false otherwise.
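For context, the snippet in the question is typically part of a tree search like the following sketch (the Node type and the int payload are my guesses, not taken from the linked page):

// Hypothetical binary tree node.
struct Node {
    int value;
    Node* left;
    Node* right;
};

// True if req occurs anywhere in the subtree rooted at n. Thanks to
// the short-circuit ||, the right subtree is never searched once the
// left subtree has already found req.
bool Find(Node* n, int req) {
    if (n == nullptr) return false;
    if (n->value == req) return true;
    return Find(n->left, req) || Find(n->right, req);
}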

Can someone describe through code a practical example of backtracking with iteration instead of recursion?

Recursion makes backtracking easy, as it guarantees that you won't go through the same path again; all ramifications of your path are visited exactly once. I am trying to convert a backtracking, tail-recursive (with accumulators) algorithm to iteration. I have heard it is supposed to be easy to convert a perfectly tail-recursive algorithm to iteration, but I am stuck on the backtracking part.
Can anyone provide an example through code, so that I and others can visualize how backtracking is done? I would think that a stack is not needed here, because I have a perfectly tail-recursive algorithm using accumulators, but I may be wrong.
If the function is genuinely tail-recursive, then the transformation is as follows (and this is what a compiler which understands TCO will do for you, so you shouldn't have to do it yourself):
Original:
function func(a1, a2, a3...)
    ...                        // contains neither return nor call
    return val
    ...
    return func(x1, x2, x3...)
    ...
    ... etc.
Converted to:
function func(a1, a2, a3...)
  func:                        // label for goto (yuk!)
    ...
    return val                 // no change
    ...
    a1 = x1; a2 = x2; a3 = x3...; goto func;
    ...
    ... etc.
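For concreteness, here is that transformation applied by hand to a small tail-recursive function in C++ (my example, not from the question):

// Original: the recursive call is in tail position.
int gcd(int a, int b) {
    if (b == 0)
        return a;          // simple return: left unchanged
    return gcd(b, a % b);  // tail call
}

// Converted: assign the new argument values, then jump back to the top.
int gcd_loop(int a, int b) {
gcd:                       // label for goto (yuk!)
    if (b == 0)
        return a;
    int next_b = a % b;    // compute the new b before overwriting a
    a = b;
    b = next_b;
    goto gcd;
}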
In order to make this transformation work with mutually co-recursive functions, you need to combine them into a single function, in which each original function gets its own label. As above, simple return statements are not altered, and return foo(...) turns into assignment to the parameter variables followed by goto foo.
Of course, when combining the functions, you may need to rename local variables to avoid conflicts. And you will also lose the ability to use more than one top-level function, unless you add something like a switch statement (with gotos) at the top entry point, before any label. (In fact, in a language which allowed goto case foo, you could just use the case labels as labels.)
The use of goto is, of course, ugly. If you use a language which guarantees tail-call optimisation, or failing that, at least makes a reasonable attempt to do it and reports when it fails, then there is no motivation to replace the recursive solution, which (in my opinion) is almost always more readable.
In some cases, it's possible to replace the goto and label with something like while (1) { ... } or other such loops, but that involves replacing the gotos with continue (or equivalent), and that won't work if they're nested inside other loops. So you can actually waste quite a lot of time making the ugly transformation slightly less ugly, and still not end up with a program as readable as the original.
I'll stop proselytizing recursion now. :)
Edited (I couldn't help it, sorry)
Here's a back-tracking n-queens solution in Lua (which does do TCO), consisting of a tail-recursive solver and a tail-recursive verifier:
function solve(legal, n, first, ...)
  if first == nil                  -- Failure
  then return nil
  elseif first >= n                -- Back-track
  then return solve(legal, n, ...)
  elseif not legal(first + 1, ...) -- Continue search
  then return solve(legal, n, first + 1, ...)
  elseif n == 1 + select("#", ...) -- Success
  then return first + 1, ...
  else                             -- Forward
    return solve(legal, n, 0, first + 1, ...)
  end
end

function queens_helper(dist, first, second, ...)
  if second == nil
  then return true
  elseif first == second or first - dist == second or first + dist == second
  then return false
  else
    return queens_helper(dist + 1, first, ...)
  end
end

function queens_legal(...) return queens_helper(1, ...) end

-- in case you want to try n-rooks, although the solution is trivial.
function rooks_legal(first, second, ...)
  if second == nil then return true
  elseif first == second then return false
  else return rooks_legal(first, ...)
  end
end
function queens(n) return solve(queens_legal, n, 0) end
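Usage note: queens(4), for example, returns the columns of one valid placement as multiple return values (most recently placed row first), and nil when no solution exists, as with queens(3).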

How to recognize what is, and what is not tail recursion?

Sometimes it's simple enough (if the self-call is the last statement, it's tail recursion), but there are still cases that confuse me. A professor told me that "if there's no instruction to execute after the self-call, it's tail recursion". How about these examples (disregard the fact that they don't make much sense)?
a) This one should be tail recursive, seeing how the self-call is the last statement, and there's nothing left to execute after it.
function foo(n)
{
    if (n == 0)
        return 0;
    else
        return foo(n - 2);
}
b) But how about this one? It should be a tail call, because if the condition is true, nothing except the call will be executed; yet the call is not the last statement.
function foo(n)
{
    if (n != 0)
        return foo(n - 2);
    else
        return 0;
}
c) How about this one? In both branches, the self-call will be the last thing executed:
function foo(n)
{
    if (n == 0)
        return 0;
    else
    {
        if (n > 100)
            return foo(n - 2);
        else
            return foo(n - 1);
    }
}
It might help you to think about this in terms of how tail-call optimisations are actually implemented. That's not part of the definition, of course, but it does motivate the definition.
Typically when a function is called, the calling code will store any register values that it will need later, on the stack. It will also store a return address, indicating the next instruction after the call. It will do whatever it needs to do to ensure that the stack pointer is set up correctly for the callee. Then it will jump to the target address[*] (in this case, the same function). On return, it knows the return value is in the place specified by the calling convention (register or stack slot).
For a tail call, the caller doesn't do this. It ignores any register values, because it knows it won't need them later. It sets up the stack pointer so that the callee will use the same stack the caller did, and it doesn't set itself up as the return address, it just jumps to the target address. Thus, the callee will overwrite the same stack region, it will put its return value in the same location that the caller would have put its return value, and when it returns, it will not return to its caller, but will return to its caller's caller.
Therefore, informally, a function is tail-recursive when it is possible for a tail call optimisation to occur, and when the target of the tail call is the function itself. The effect is more or less the same as if the function contained a loop, and instead of calling itself, the tail call jumps to the start of the loop. This means there must be no variables needed after the call (and indeed no "work to do", which in a language like C++ means nothing to be destructed), and the return value of the tail call must be returned by the caller.
This is all for simple/trivial tail recursion. There are transformations that can be used to make something tail-recursive which isn't already, for example introducing extra parameters that store information used by the "bottom-most" level of recursion to do work that would otherwise be done on the "way out". So for instance:
int triangle(int n) {
    if (n == 0) return 0;
    return n + triangle(n - 1);
}
can be made tail-recursive, either by the programmer or automatically by a smart enough compiler, like this:
int triangle(int n, int accumulator = 0) {
    if (n == 0) return accumulator;
    return triangle(n - 1, accumulator + n);
}
Therefore, the former function might be described as "tail recursive" by someone who's talking about a smart enough language/compiler. Be prepared for that variant usage.
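Once in the tail-recursive form, the call can be reduced to a plain loop; this hand-written sketch shows what a tail-call-optimising compiler effectively produces:

int triangle_loop(int n) {
    int accumulator = 0;
    while (n != 0) {       // the tail call becomes a jump back to the top
        accumulator += n;
        n -= 1;
    }
    return accumulator;
}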
[*] Storing a return address, moving the stack pointer, and jumping, may or may not be wrapped up in a single opcode by the architecture, but even if not that's typically what happens.
All your functions are tail recursive.
no instruction left after the self-call
means: after the self-call, you return from the function, i.e. no more code has to be executed; it does not mean that there is no further line of code in the function.
Yep; I think your professor meant that if, on every path, the final instruction is the recursive call, then it is tail recursion.
So, all three examples are tail-recursive.
All three examples are tail recursive. Generally speaking, it is tail recursion if the result of the function (the expression following the return keyword) is a lone call to the function itself; no other operator may be involved at the outermost level of the expression. If the call to itself is only part of an expression, the machine must execute the call but then return back into the evaluation of that expression; that is, the call was not at the tail of the function execution but in the middle of an expression. This does not apply to the parameters the recursive call takes, however: anything is allowed there, including recursive calls to itself (e.g. "return foo(foo(0));"). The optimization of calls into jumps is then, of course, only possible for the outer call.
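To make the distinction concrete, here are hand-made sketches of each case:

// Tail call: the call's result is returned unchanged; nothing is pending.
int countdown(int n) {
    if (n == 0) return 0;
    return countdown(n - 1);   // tail position
}

// Not a tail call: an addition is still pending after the call returns,
// so the caller's stack frame must survive the call.
int sum_to(int n) {
    if (n == 0) return 0;
    return 1 + sum_to(n - 1);  // call sits in the middle of an expression
}

// A recursive call in argument position is allowed, as with foo(foo(0))
// above: only the outermost call is a tail call.
int outer(int n) {
    if (n <= 0) return 0;
    return outer(outer(n - 2));  // outer call: tail; inner call: not
}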
