How to assess maximum number of recursive calls before stack overflows - algorithm

Let's take a recursive function, for example factorial. Let's also assume that we have a 1 MB stack. Using pen and paper, how can I estimate the number of recursive calls to the function before the stack overflows? I'm not interested in any particular language, but rather in an abstract approach.
There are questions on SO that look similar, but most of them are concerned with a specific language, or with extending the stack size, or with estimating the limit by running a specific function, or with preventing overflow. I would like to find a mathematical way to estimate it.
I found a similar question in an algorithmic challenge but couldn't come up with any reasonable solution.
Any suggestion is highly appreciated.
EDIT
In response to the replies provided: if the language truly cannot be taken out of the equation, let's assume it's C#. Also, since we are passing a simple int or long to the function, it's not passed by reference but as a copy. Also, assume a naive implementation, without hashing, without multi-threading; an implementation that resembles the mathematical definition of the function as closely as possible:
private static long Factorial(long n)
{
    if (n < 0)
    {
        throw new ArgumentException("Negative numbers not supported");
    }
    switch (n)
    {
        case 0:
            return 1;
        case 1:
            return 1;
        default:
            return n * Factorial(n - 1);
    }
}

It highly depends on the implementation of the function: how much memory does the function use before calling itself again? When it recurses 100 times, you will also have 100 function scopes in memory, including the function arguments and local variables. It also reserves 100 places on the stack to store the return addresses.
I don't think the language can easily be taken out of the equation, because you need to know exactly how the stack is used. For example, are objects passed by reference, or are they copied as new instances onto the stack?
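To put rough numbers on the C# example above (an estimate, not a guarantee; the exact frame size depends on the platform, JIT, and build settings): on x64, each Factorial frame needs at least a return address (8 bytes) plus the long argument and whatever the JIT saves or spills, so somewhere around 32 to 64 bytes per frame is plausible. That gives 1 MB / 64 B ≈ 16,000 up to 1 MB / 32 B ≈ 32,000 calls, i.e. tens of thousands. A quick empirical cross-check (a sketch; StackOverflowException cannot be caught in .NET, so the probe logs the depth as it goes, and the last number printed before the process dies is the answer):
using System;

class StackDepthProbe
{
    static int depth;

    static long Factorial(long n)
    {
        depth++;
        if (depth % 1000 == 0)
            Console.WriteLine(depth);   // last number printed ≈ maximum depth
        if (n <= 1)
            return 1;
        return n * Factorial(n - 1);    // overflow of the product is irrelevant here
    }

    static void Main()
    {
        Factorial(long.MaxValue);       // recurse until the stack overflows
    }
}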

Related

Compiling container operations to efficient incremental code

Most programming languages nowadays provide containers like list, set, multiset, map. Operations on all elements of a container e.g. copy_if or transform normally take O(n) time. Lazy evaluation can make this sublinear if you only need the first few elements of the result, but it's back to linear if you need the full result.
Consider for example the following implementation of the DPLL algorithm for propositional satisfiability. It's essentially a translation of the pseudocode to C++, thus optimized for readability. But each step takes time linear in the number of variables, so even if it could guess all the assignments correctly the first time, total time and memory consumption would be quadratic in the number of variables, making the implementation too slow to use in practice. This is true even if we apply well-known optimizations like passing containers by constant reference and using lazy evaluation anywhere it might avoid unnecessary work.
Efficient implementations of DPLL use incremental techniques, where instead of scanning and copying entire containers, only the consequences of making a small change like assigning a single variable are computed, using O(1) time and memory per step.
A human can translate pseudocode or an unoptimized reference implementation into an efficient incremental implementation; this is what we do when we write practical SAT solvers, theorem provers, etc. The price we pay is to thenceforth work with a large, complex body of optimized code, which is much more difficult than working with a concise description of the mathematical logic.
What we ideally want is a higher-level compiler that can compile the unoptimized reference implementation into efficient incremental code.
My question is: has any work yet been done on such a thing? Are there any existing implementations, partial implementations or discussions to look at? Higher-level optimization of container operations is not entirely unknown in principle, e.g. SQL query optimizers, Haskell loop fusion, but I'm not aware of anything that tries to go as far as I'm looking for here.
Unoptimized reference implementation (simple translation of pseudocode) of DPLL:
bool dpll(map<Var, bool> m, set<set<Literal>> clauses) {
    clauses = eval(m, clauses);
    // Solved
    if (isFalse(clauses))
        return false;
    if (isTrue(clauses))
        return true;
    // Unit clause
    auto unitClauses =
        copy_if(clauses, [](set<Literal> clause) { return clause.size() == 1; });
    if (unitClauses.size()) {
        auto x = front(front(unitClauses));
        return dpll(m + makeMap(x.var, x.pol), clauses);
    }
    // Pure literal
    auto pureVars =
        copy_if(vars(clauses), [=](Var x) { return pure(x, clauses); });
    if (pureVars.size()) {
        auto x = front(pureVars);
        return dpll(m + makeMap(x, pol(x, clauses)), clauses);
    }
    // Choice
    auto x = choose(vars(clauses));
    return dpll(m + makeMap(x, false), clauses) ||
           dpll(m + makeMap(x, true), clauses);
}
Similar reference implementation of helper functions: https://github.com/russellw/ayane/blob/master/logic/dpll.cpp
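For reference, the incremental technique described in the question can at least be sketched by hand. Below is a minimal illustration (in C# for consistency with other examples on this page; all names such as IncrementalDpllState and Assign are invented): each variable keeps occurrence lists, so assigning it touches only the clauses that actually contain it. Real solvers also keep undo information for backtracking, which this sketch omits.
using System;
using System.Collections.Generic;

// Literals are encoded as +v (variable v true) or -v (variable v false).
class IncrementalDpllState
{
    readonly List<List<int>> clauses;          // unresolved literals per clause
    readonly Dictionary<int, List<int>> occurs = new Dictionary<int, List<int>>();
    readonly HashSet<int> satisfied = new HashSet<int>();

    public IncrementalDpllState(List<List<int>> input)
    {
        clauses = input;
        // Occurrence lists: variable -> indices of clauses containing it.
        for (int c = 0; c < clauses.Count; c++)
            foreach (int lit in clauses[c])
            {
                int v = Math.Abs(lit);
                if (!occurs.ContainsKey(v)) occurs[v] = new List<int>();
                occurs[v].Add(c);
            }
    }

    // Assign literal `lit` true. Only clauses containing its variable are
    // touched, so one step costs O(occurrences), not O(total formula size).
    // Returns false on conflict; newly created unit clauses are enqueued.
    public bool Assign(int lit, Queue<int> newUnits)
    {
        if (!occurs.TryGetValue(Math.Abs(lit), out List<int> touched))
            return true;
        foreach (int c in touched)
        {
            if (satisfied.Contains(c)) continue;
            if (clauses[c].Contains(lit)) { satisfied.Add(c); continue; }
            clauses[c].Remove(-lit);                    // falsified literal drops out
            if (clauses[c].Count == 0) return false;    // empty clause: conflict
            if (clauses[c].Count == 1) newUnits.Enqueue(clauses[c][0]);
        }
        return true;
    }
}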

How would I sort a queue using only one additional queue

So basically, I'm asked to sort a queue, and I can only use one helper queue and a constant amount of additional memory.
How would I sort a queue using only one additional queue?
Personally, I don't like interview questions with arbitrary restrictions like this, unless it actually reflects the conditions you have to work with at the company. I don't think it actually finds qualified candidates; or rather, I don't think it accurately eliminates unqualified ones.
When I did technical interviews for my company, all of the questions were realistic and relevant. Setting that aside.
While there are surely several ways to solve this, and using recursion is certainly one, I would hope that if you tried to solve it with recursion they would have asked you to do it over without recursion, considering they are placing limits on the memory you can use and on the data structures. Otherwise, they might as well have given you two queues, a stack, and as much memory as you want.
Here is an example of one way to solve it. At the end of this, queueB should be sorted. It uses just one extra local variable. This was done in C#.
queueB.Enqueue(queueA.Dequeue());
while (queueA.Count > 0)
{
    int next = queueA.Dequeue();
    for (int i = 0; i < queueB.Count; ++i)
    {
        if (queueB.Peek() < next)
        {
            queueB.Enqueue(queueB.Dequeue());
        }
        else
        {
            queueB.Enqueue(next);
            next = queueB.Dequeue();
        }
    }
    queueB.Enqueue(next);
}
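For reference, a self-contained driver for the loop above (a sketch assuming Queue<int>; the class name QueueSortDemo is invented). The approach is effectively an insertion sort into queueB, so it runs in O(n^2) time while using only the one extra variable:
using System;
using System.Collections.Generic;

class QueueSortDemo
{
    static void Main()
    {
        var queueA = new Queue<int>(new[] { 3, 1, 4, 1, 5 });
        var queueB = new Queue<int>();

        // The answer's loop: seed queueB, then place each element from
        // queueA into its sorted position by rotating queueB past the
        // smaller elements.
        queueB.Enqueue(queueA.Dequeue());
        while (queueA.Count > 0)
        {
            int next = queueA.Dequeue();
            for (int i = 0; i < queueB.Count; ++i)
            {
                if (queueB.Peek() < next)
                {
                    queueB.Enqueue(queueB.Dequeue());
                }
                else
                {
                    queueB.Enqueue(next);
                    next = queueB.Dequeue();
                }
            }
            queueB.Enqueue(next);
        }

        Console.WriteLine(string.Join(", ", queueB));   // 1, 1, 3, 4, 5
    }
}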

Question about string sort

I have a question from Programming Pearls.
The problem is the following:
Show how to use Lomuto's partitioning scheme to sort varying-length bit strings in time proportional to the sum of their lengths.
The algorithm is the following:
Each record in x[0..n-1] has an integer length and a pointer to the array bit[0..length-1].
Code:
void bsort(int l, int u, int depth)
{
    // Sorts x[l..u] (bounds inclusive); bits are indexed 0..length-1,
    // so the initial call is bsort(0, n-1, 0).
    if (l >= u)
        return;
    // Strings too short to have a bit at this depth are prefixes of the
    // others, so they sort first: move them to the front and shrink the range.
    for (int i = l; i <= u; i++)
        if (x[i].length <= depth)
            swap(i, l++);
    // Lomuto partition on bit `depth`: 0-bits end up in x[l..m-1],
    // 1-bits in x[m..u].
    int m = l;
    for (int i = l; i <= u; i++)
        if (x[i].bit[depth] == 0)
            swap(i, m++);
    // Recurse on each partition at the next bit position. Each string is
    // touched once per bit, so total time is proportional to the sum of lengths.
    bsort(l, m - 1, depth + 1);
    bsort(m, u, depth + 1);
}
I need the following things:
how does this algorithm work?
how do I implement it in Java?
It's essentially the same in Java. If you know Java, which I assume you do, this shouldn't take more than a few minutes to port. I'm sure we'd be more than happy to give you some pointers as to how the algorithm works, but I'd like to see some work from you first. Take a pencil and some paper and trace the code. That's going to be your best bet with recursion.

Is there a way to predict an unknown function's value based on its previous values

I have values returned by an unknown function, for example:
# this is an easy case - a parabolic function
# but in my case the function is really unknown, as it is tied to process execution time
[0, 1, 4, 9]
Is there a way to predict the next value?
Not necessarily. Your "parabolic function" might be implemented like this:
def mindscrew
  @nums ||= [0, 1, 4, 9, "cat", "dog", "cheese"]
  @nums.shift
end
You can take a guess, but to predict with certainty is impossible.
You can try using a neural networks approach. There are many articles you can find with the Google query "neural network function approximation". Many books are also available, e.g. this one.
If you just want data points
Extrapolation of data outside of known points can be estimated, but you need to accept the potential differences are much larger than with interpolation of data between known points. Strictly, both can be arbitrarily inaccurate, as the function could do anything crazy between the known points, even if it is a well-behaved continuous function. And if it isn't well-behaved, all bets are already off ;-p
There are a number of mathematical approaches to this (that have direct application to computer science) - anything from simple linear algebra to things like cubic splines; and everything in between.
If you want the function
Getting esoteric; another interesting model here is genetic programming; by evolving an expression over the known data points it is possible to find a suitably-close approximation. Sometimes it works; sometimes it doesn't. Not the language you were looking for, but Jason Bock shows some C# code that does this in .NET 3.5, here: Evolving LINQ Expressions.
I happen to have his code "to hand" (I've used it in some presentations); with something like a => a * a it will find it almost instantly, but it should (in theory) be able to find virtually any method - but without any defined maximum run length ;-p It is also possible to get into a dead end (evolutionary speaking) where you simply never recover...
Use the Wolfram Alpha API :)
Yes. Maybe.
If you have some input and output values, i.e. in your case [0,1,2,3] and [0,1,4,9], you could use response surfaces (basically function fitting, I believe) to 'guess' the actual function (in your case f(x)=x^2). If you let your guessing function be f(x)=c1*x+c2*x^2+c3, there are algorithms that will determine that c1=0, c2=1 and c3=0 given your input and output, and given the resulting function you can predict the next value.
Note that most other answers to this question are valid as well. I am just assuming that you want to fit some function to the data. In other words, I find your question quite vague; please try to pose your questions as completely as possible!
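To make the fitting step concrete, here is a minimal sketch (in C#, matching other examples on this page; Fit and its names are invented for illustration). It builds the normal equations (B^T B) c = B^T y for the chosen basis functions and solves them by Gaussian elimination; with the data above it recovers c1 = 0, c2 = 1, c3 = 0:
using System;

class LeastSquaresSketch
{
    // Least-squares fit: find coefficients c minimizing sum_i (f(x_i) - y_i)^2
    // where f(x) = c[0]*basis[0](x) + ... + c[k-1]*basis[k-1](x).
    static double[] Fit(Func<double, double>[] basis, double[] xs, double[] ys)
    {
        int k = basis.Length, n = xs.Length;
        // Normal equations as an augmented matrix A = [B^T B | B^T y].
        var A = new double[k, k + 1];
        for (int r = 0; r < k; r++)
        {
            for (int c = 0; c < k; c++)
                for (int i = 0; i < n; i++)
                    A[r, c] += basis[r](xs[i]) * basis[c](xs[i]);
            for (int i = 0; i < n; i++)
                A[r, k] += basis[r](xs[i]) * ys[i];
        }
        // Gaussian elimination with partial pivoting.
        for (int p = 0; p < k; p++)
        {
            int best = p;
            for (int r = p + 1; r < k; r++)
                if (Math.Abs(A[r, p]) > Math.Abs(A[best, p])) best = r;
            for (int c = 0; c <= k; c++)
                (A[p, c], A[best, c]) = (A[best, c], A[p, c]);
            for (int r = p + 1; r < k; r++)
            {
                double f = A[r, p] / A[p, p];
                for (int c = p; c <= k; c++)
                    A[r, c] -= f * A[p, c];
            }
        }
        // Back-substitution.
        var coef = new double[k];
        for (int p = k - 1; p >= 0; p--)
        {
            coef[p] = A[p, k];
            for (int c = p + 1; c < k; c++)
                coef[p] -= A[p, c] * coef[c];
            coef[p] /= A[p, p];
        }
        return coef;
    }

    static void Main()
    {
        // Basis {x, x^2, 1} matches f(x) = c1*x + c2*x^2 + c3 above.
        var basis = new Func<double, double>[] { x => x, x => x * x, x => 1.0 };
        var c = Fit(basis, new[] { 0.0, 1, 2, 3 }, new[] { 0.0, 1, 4, 9 });
        Console.WriteLine($"c1={c[0]:0.###} c2={c[1]:0.###} c3={c[2]:0.###}");
        // ≈ c1=0 c2=1 c3=0; the predicted next value is then f(4) = 16.
    }
}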
In general, no... unless you know it's a function of a particular form (e.g. polynomial of some degree N) and there is enough information to constrain the function.
e.g. for a more "ordinary" counterexample (see Chuck's answer) for why you can't necessarily assume n^2 without knowing it's a quadratic, you could have f(n) = n^4 - 6n^3 + 12n^2 - 6n, which for n = 0,1,2,3,4,5 gives f(n) = 0,1,4,9,40,145.
If you do know it's a particular form, there are some options... if the form is a linear combination of basis functions (e.g. f(x) = a + b*cos(x) + c*sqrt(x)) then using least-squares can get you the unknown coefficients for the best fit using those basis functions.
See also this question.
You can apply statistical methods to try and guess the next answer, but that might not work very well if the function is like this one (C):
int evil(void) {
    static int e = 0;
    if (50 == e++) {
        e = e * 100;
    }
    return e;
}
This function will return nice simple increasing numbers then ... BAM.
That's a hard problem.
You should look into recurrence relations for the special cases where such a task is possible.

Why should recursion be preferred over iteration?

Iteration is more performant than recursion, right? Then why do some people opine that recursion is better (more elegant, in their words) than iteration? I really don't see why some languages like Haskell do not allow iteration and encourage recursion. Isn't it absurd to encourage something that has bad performance (and that too when the more performant option, i.e. iteration, is available)? Please shed some light on this. Thanks.
Iteration is more performant than recursion, right?
Not necessarily.
This conception comes from many C-like languages, where calling a function, recursive or not, had a large overhead and created a new stack frame for every call.
For many languages this is not the case, and recursion is equally or more performant than an iterative version. These days, even some C compilers rewrite some recursive constructs to an iterative version, or reuse the stack frame for a tail-recursive call.
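To illustrate the tail-call point, here is a sketch in C# (note that the .NET JIT does not guarantee tail-call elimination; this shows what a tail-call-optimizing compiler can do with this shape of code, not a C# guarantee):
// Tail-recursive form: the recursive call is the last action, so a
// compiler that performs tail-call elimination can reuse the current
// stack frame instead of pushing a new one.
static long FactorialAcc(long n, long acc)
{
    if (n <= 1) return acc;
    return FactorialAcc(n - 1, acc * n);   // tail call
}

// The loop such a compiler would effectively produce:
static long FactorialLoop(long n)
{
    long acc = 1;
    while (n > 1)
    {
        acc *= n;
        n--;
    }
    return acc;
}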
Try implementing depth-first search recursively and iteratively and tell me which one gave you an easier time of it. Or merge sort. For a lot of problems it comes down to explicitly maintaining your own stack vs. leaving your data on the function stack.
I can't speak to Haskell as I've never used it, but this is to address the more general part of the question posed in your title.
Haskell does not allow iteration because iteration involves mutable state (the index).
As others have stated, there's nothing intrinsically less performant about recursion. There are some languages where it will be slower, but it's not a universal rule.
That being said, to me recursion is a tool, to be used when it makes sense. There are some algorithms that are better represented as recursion (just as some are better via iteration).
Case in point:
fib 0 = 0
fib 1 = 1
fib n = fib(n-1) + fib(n-2)
I can't imagine an iterative solution that could possibly make the intent clearer than that.
Here is some information on the pros and cons of recursion and iteration in C:
http://www.stanford.edu/~blp/writings/clc/recursion-vs-iteration.html
For me, recursion is sometimes easier to understand than iteration.
Iteration is just a special form of recursion.
Recursion is one of those things that seems elegant or efficient in theory, but in practice it is generally less efficient (unless the compiler or dynamic recompiler is changing what the code does). In general, anything that causes unnecessary subroutine calls is going to be slower, especially when more than one argument is being pushed/popped. Anything you can do to remove processor cycles, i.e. instructions the processor has to chew on, is fair game. Compilers can do a pretty good job of this these days in general, but it's always good to know how to write efficient code by hand.
Several things:
Iteration is not necessarily faster
Root of all evil: encouraging something just because it might be moderately faster is premature; there are other considerations.
Recursion often communicates your intent much more succinctly and clearly
By eschewing mutable state generally, functional programming languages are easier to reason about and debug, and recursion is an example of this.
Recursion takes more memory than iteration.
I don't think there's anything intrinsically less performant about recursion - at least in the abstract. Recursion is a special form of iteration. If a language is designed to support recursion well, it's possible it could perform just as well as iteration.
In general, recursion forces you to be explicit about the state you're bringing forward into the next iteration (it's the parameters). This can make it easier for language processors to parallelize execution. At least that's a direction that language designers are trying to exploit.
At a low level, iteration deals with the CX register to count loops, and of course data registers. Recursion not only deals with that, it also adds references to the stack pointer to keep track of the previous calls and of how to go back.
My university teacher told me that whatever you do with recursion can be done with iteration and vice versa; however, sometimes it is simpler to do it with recursion than with iteration (more elegant), but at a performance level it is better to use iteration.
In Java, recursive solutions generally outperform non-recursive ones. In C it tends to be the other way around. I think this holds in general for adaptively compiled languages vs. ahead-of-time compiled languages.
Edit:
By "generally" I mean something like a 60/40 split. It is very dependent on how efficiently the language handles method calls. I think JIT compilation favors recursion because it can choose how to handle inlining and use runtime data in optimization. It's very dependent on the algorithm and compiler in question though. Java in particular continues to get smarter about handling recursion.
Quantitative study results with Java (PDF link). Note that these are mostly sorting algorithms, and are using an older Java Virtual Machine (1.5.x if I read right). They sometimes get a 2:1 or 4:1 performance improvement by using the recursive implementation, and rarely is recursion significantly slower. In my personal experience, the difference isn't often that pronounced, but a 50% improvement is common when I use recursion sensibly.
I find it hard to reason that one is better than the other all the time.
I'm working on a mobile app that needs to do background work on the user's file system. One of the background threads needs to sweep the whole file system from time to time to keep the user's data up to date. So, for fear of stack overflow, I had written an iterative algorithm. Today I wrote a recursive one for the same job. To my surprise, the iterative algorithm is faster: recursive -> 37 s, iterative -> 34 s (working over the exact same file structure).
Recursive:
private long recursive(File rootFile, long counter) {
    long duration = 0;
    sendScanUpdateSignal(rootFile.getAbsolutePath());
    if (rootFile.isDirectory()) {
        File[] files = getChildren(rootFile, MUSIC_FILE_FILTER);
        for (int i = 0; i < files.length; i++) {
            duration += recursive(files[i], counter);
        }
        if (duration != 0) {
            dhm.put(rootFile.getAbsolutePath(), duration);
            updateDurationInUI(rootFile.getAbsolutePath(), duration);
        }
    }
    else if (!rootFile.isDirectory() && checkExtension(rootFile.getAbsolutePath())) {
        duration = getDuration(rootFile);
        dhm.put(rootFile.getAbsolutePath(), getDuration(rootFile));
        updateDurationInUI(rootFile.getAbsolutePath(), duration);
    }
    return counter + duration;
}
Iterative (an iterative depth-first search, with recursive backtracking):
private void traversal(File file) {
    int pointer = 0;
    File[] files;
    boolean hadMusic = false;
    long parentTimeCounter = 0;
    while (file != null) {
        sendScanUpdateSignal(file.getAbsolutePath());
        try {
            Thread.sleep(Constants.THREADS_SLEEP_CONSTANTS.TRAVERSAL);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        files = getChildren(file, MUSIC_FILE_FILTER);
        if (!file.isDirectory() && checkExtension(file.getAbsolutePath())) {
            hadMusic = true;
            long duration = getDuration(file);
            parentTimeCounter = parentTimeCounter + duration;
            dhm.put(file.getAbsolutePath(), duration);
            updateDurationInUI(file.getAbsolutePath(), duration);
        }
        if (files != null && pointer < files.length) {
            file = getChildren(file, MUSIC_FILE_FILTER)[pointer];
        }
        else if (files != null && pointer + 1 < files.length) {
            file = files[pointer + 1];
            pointer++;
        }
        else {
            pointer = 0;
            file = getNextSybling(file, hadMusic, parentTimeCounter);
            hadMusic = false;
            parentTimeCounter = 0;
        }
    }
}
private File getNextSybling(File file, boolean hadMusic, long timeCounter) {
    File result = null;
    // if file is /mnt, stop
    if (file.getAbsolutePath().compareTo(userSDBasePointer.getParentFile().getAbsolutePath()) == 0) {
        return result;
    }
    File parent = file.getParentFile();
    long parentDuration = 0;
    if (hadMusic) {
        if (dhm.containsKey(parent.getAbsolutePath())) {
            long savedValue = dhm.get(parent.getAbsolutePath());
            parentDuration = savedValue + timeCounter;
        }
        else {
            parentDuration = timeCounter;
        }
        dhm.put(parent.getAbsolutePath(), parentDuration);
        updateDurationInUI(parent.getAbsolutePath(), parentDuration);
    }
    // look for the next sibling
    File[] syblings = getChildren(parent, MUSIC_FILE_FILTER);
    for (int i = 0; i < syblings.length; i++) {
        if (syblings[i].getAbsolutePath().compareTo(file.getAbsolutePath()) == 0) {
            if (i + 1 < syblings.length) {
                result = syblings[i + 1];
            }
            break;
        }
    }
    // backtracking - add the parent, if it had music children
    if (result == null) {
        result = getNextSybling(parent, hadMusic, parentDuration);
    }
    return result;
}
Sure, the iterative one isn't elegant, but although it's currently implemented in an inefficient way, it is still faster than the recursive one. And I have better control over it, as I don't want it running at full speed, and it lets the garbage collector do its work more frequently.
Anyway, I won't take for granted that one method is better than the other, and I will review other algorithms that are currently recursive. But at least from the two algorithms above, the iterative one will be the one in the final product.
I think it would help to get some understanding of what performance is really about. This link shows how a perfectly reasonably-coded app actually has a lot of room for optimization - namely a factor of 43! None of this had anything to do with iteration vs. recursion.
When an app has been tuned that far, it gets to the point where the cycles saved by iteration as against recursion might actually make a difference.
Recursion is the typical implementation of iteration. It's just a lower level of abstraction (at least in Python):
class iterator(object):
    def __init__(self, max):
        self.count = 0
        self.max = max

    def __iter__(self):
        return self

    # I believe this changes to __next__ in Python 3000
    def next(self):
        if self.count == self.max:
            raise StopIteration
        else:
            self.count += 1
            return self.count - 1

# At this level, iteration is the name of the game, but
# in the implementation, recursion is clearly what's happening.
for i in iterator(50):
    print(i)
I would compare recursion to an explosive: you can reach a big result in no time. But if you use it without caution, the result can be disastrous.
I was very impressed by the proof of the complexity of the recursion that calculates Fibonacci numbers here. Recursion in that case has exponential complexity, at least (3/2)^n, while iteration is just O(n). Calculating n=46 with recursion written in C# takes half a minute! Wow...
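For comparison, the linear-time iterative version is short (a C# sketch): each Fibonacci number is computed exactly once, whereas the naive recursive definition recomputes the same subproblems exponentially many times.
static long Fib(int n)
{
    long a = 0, b = 1;          // F(0), F(1)
    for (int i = 0; i < n; i++)
    {
        long next = a + b;
        a = b;
        b = next;
    }
    return a;                   // F(n)
}
// Fib(46) returns 1836311903 essentially instantly.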
IMHO recursion should be used only if the nature of the entities is well suited to recursion (trees, syntax parsing, ...) and never for aesthetic reasons. The performance and resource consumption of any "divine" recursive code need to be scrutinized.
Iteration is more performant than recursion, right?
Yes.
However, when you have a problem which maps perfectly to a Recursive Data Structure, the better solution is always recursive.
If you try to solve the problem with iteration, you'll end up reinventing the stack and creating messier, uglier code compared to the elegant recursive version.
That said, iteration will always be faster than recursion (on a von Neumann architecture), so if you always use recursion, even where a loop would suffice, you'll pay a performance penalty.
Is recursion ever faster than looping?
"Iteration is more performant than recursion" is really language- and/or compiler-specific. The case that comes to mind is when the compiler does loop-unrolling. If you've implemented a recursive solution in this case, it's going to be quite a bit slower.
This is where it pays to be a scientist (testing hypotheses) and to know your tools...
On NTFS, the maximum UNC path length is 32K characters; under a path like C:\A\B\X\C.... more than 16K nested folders can be created.
But you cannot even count the number of folders with any recursive method; sooner or later all of them will give a stack overflow.
Only good, lightweight iterative code should be used to scan folders professionally.
Believe it or not, most top antivirus products cannot scan UNC folders at maximum depth.
