Calculating time complexity of functions when both contain each other? - complexity-theory

Consider the following C code:
int bar(int x);  /* forward declaration, needed since foo and bar are mutually recursive */

int foo(int x)
{
    if (x < 1) return 1;
    else return foo(x - 1) + bar(x);
}

int bar(int x)
{
    if (x < 2) return 1;
    else return foo(x - 1) + bar(x / 2);
}
What will be the time complexity and space complexity for each of the foo and bar functions?

The way to solve this is to look at what the functions are actually doing; there is no general rule. You just have to switch on your brain.
When you call foo(1000), how many calls to foo(999) will be made, at least?
How many calls to foo(998) will be made, at least?
...
How many calls to foo(501) will be made, at least?
How many calls to foo(500) will be made, at least?
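To make that line of questioning concrete, here is a minimal sketch (my own instrumentation, not part of the original question: the counters and the main driver are illustrative) that counts how often each function is entered:

#include <cstdio>

long long foo_calls = 0, bar_calls = 0;

int foo(int x);
int bar(int x);

int foo(int x)
{
    ++foo_calls;
    if (x < 1) return 1;
    else return foo(x - 1) + bar(x);
}

int bar(int x)
{
    ++bar_calls;
    if (x < 2) return 1;
    else return foo(x - 1) + bar(x / 2);
}

int main()
{
    foo(25);  // even this small input triggers tens of millions of calls
    std::printf("foo: %lld calls, bar: %lld calls\n", foo_calls, bar_calls);
}

Since foo(x) calls foo(x-1) directly and bar(x) calls foo(x-1) again, the number of calls roughly doubles with each increment of x, which is exactly what the questions above are driving at: the running time is exponential in x.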

Related

How to write recurrence of any recursive function?

I have seen many examples, but I couldn't figure out the right procedure for writing the recurrence of an arbitrary recursive function. For example, if I see these functions, how can I write an accurate recurrence?
void print(int n)
{
    if (n > 0)
    {
        cout << n << endl;
        print(n - 1);
    }
}
and
int power(int x, int n)
{
    if (n == 0)
        return 1;
    else
        return x * power(x, n - 1);
}
and
int fib(int n)
{
    if (n == 0 || n == 1)
        return 1;
    else
        return fib(n - 1) + fib(n - 2);
}
and
int numberofDigits(int n) {
    if ((-10 < n) && (n < 10))
        return 1;
    else
        return 1 + numberofDigits(n / 10);
}
Those are all recursive functions. Some of them (Fibonacci, factorial, etc.) have well-known recurrences you can look up on the internet.
But how do we write a recurrence ourselves?
There are generally two cases to a recursive function:
The recursive case
The "base" case
For example, in the Fibonacci case the base case was "if n is 1 or 0 then the answer is 1", while the recursive case is defined as the sum of the two previous numbers.
The way to create your own recursive function is to isolate the recursive and base cases, and then implement them. So the canonical recursive function would look like this:
type my_recursion(type var)
{
    if (/* base case */)
        return /* base case value */;
    else
        return my_recursion(/* pass something in here */);
}
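Applying that recipe to the four functions above gives their recurrences directly (my own worked summary; T(n) is the running time and c is the constant work done per call):

print:           T(n) = T(n-1) + c          => O(n)
power:           T(n) = T(n-1) + c          => O(n)
fib:             T(n) = T(n-1) + T(n-2) + c => O(1.6^n)
numberofDigits:  T(n) = T(n/10) + c         => O(log n)

In each case the base case contributes the constant, and each recursive call appears as one T(...) term with its reduced argument.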

Is it a bad practice to place conditional statements inside iterations?

So let's consider the case where there is the below snippet:
if (x == 0)
{
    for (var i = 0; i < 5; ++i)
    {
        // do something
    }
}
else
{
    for (var i = 0; i < 5; ++i)
    {
        // do something different
    }
}
As you can see, both branches iterate through the same for loop but perform different actions based on the condition. My question is: is it bad practice to have something like this:
for (var i = 0; i < 5; ++i)
{
    if (x == 0)
    {
        // do something
    }
    else
    {
        // do something else
    }
}
The reason I think this may be a bad practice is that a conditional check is executed on every iteration of the loop, whereas in the first snippet the condition is checked once and then the loop is executed. Am I mistaken?
Unless you're writing code that must run at high speed and be super efficient, go for code readability over efficiency. In this case, I'd say the second example is clearer, as it's less code and a commonly followed pattern.
I suppose another factor to consider, though, is that the second example seems to imply that x might change value during the loop, whereas the first example doesn't. This would be worth putting a comment nearby explaining the choice.
My gut instinct would agree with you that the first example would be more efficient, but in reality the compiler's optimisations are likely to make short work of examples like the above; they'll both likely be equal in performance.
Here's a pretty impressive list of optimisations that can be made on loops, to give you an idea. Branch optimisation (see the answer) may also have an effect if the loop runs for many iterations.
If you use
if (x == 0)
{
    for (var i = 0; i < 5; ++i)
    {
        // do something
    }
}
else
{
    for (var i = 0; i < 5; ++i)
    {
        // do something different
    }
}
then you have done one comparison, plus a loop performing the tasks in its scope five times.
When you use
for (var i = 0; i < 5; ++i)
{
    if (x == 0)
    {
        // do something
    }
    else
    {
        // do something else
    }
}
then the loop performs the tasks in its scope five times, and in addition the comparison is done five times.
At first sight the former results in the smaller number of instructions. But the compiler/interpreter may perform an optimization step by doing that comparison only once; this depends on the compiler/interpreter. If you have a good understanding of how the compiler/interpreter for your language works, you can "abuse" that knowledge to write readable code that still produces well-optimized output.
Yet another alternative is to work with functions. This approach is only useful if the variable x is constant throughout the loop; in other words, you don't modify it during the loop itself. That is the case in your first example. In the second example, however, x can be changed inside the loop, which leads to running either the if {} or the else {} block on different iterations, executing two different functions.
Alternative: select a function first and use it in the loop, as in the sketch below. This can be more useful if you have lots of different tasks to perform: just select a function beforehand. Some programming languages allow this, others don't, so it depends on the language itself.
// check which function to run
var runThisFunction;
if (x == 0)
    runThisFunction = function (i) { /* do something */ };
else
    runThisFunction = function (i) { /* do something else */ };

// loop 5 times, calling the chosen function with i as its argument
for (var i = 0; i < 5; ++i)
{
    runThisFunction(i);
}
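In a statically typed language the same idea might look like this C++ sketch (my own illustration; doSomething and doSomethingElse are placeholder names):

void doSomething(int i)     { /* do something */ }
void doSomethingElse(int i) { /* do something else */ }

void run(int x)
{
    // pick the function once, outside the loop...
    void (*runThisFunction)(int) = (x == 0) ? doSomething : doSomethingElse;

    // ...so the loop body contains no branch at all
    for (int i = 0; i < 5; ++i)
        runThisFunction(i);
}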

Writing a program to count the number of recursive calls

Suppose I have the following recursive function, that returns the nth fibonacci number:
private int fib(int n) {
    if (n == 1) return 0;
    if (n == 2) return 1;
    return fib(n - 1) + fib(n - 2);
}
How would I write a piece of code to return the total number of recursive calls made by this function? I was thinking of introducing a count parameter, as in fib(int n, int count = 0), or a static variable inside fib, as in static int count = 0, and incrementing count right before the recursive call. I had no success with either approach, as I couldn't return count. Is there a way to get the total number of recursive calls without modifying the original function?
You can show by induction that the number of calls made to calculate F(n) is 2 * F(n) - 1 (for the base cases n <= 2 it is 1). Try writing out the induction steps; if you can't, I will update my answer with a proof later.
So actually there is no need to run the recursive algorithm at all. There is also an O(log(n)) algorithm to calculate the n-th Fibonacci number, based on matrix exponentiation.
So with some math, you can end up with an O(log(n)) algorithm to find the number of recursive calls. If you instead keep counting with your modified recursive function, you will find it in approximately O(1.6^n) time.
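To make the closed form concrete, here is a small sketch (my own illustration, not the answerer's code) that evaluates calls(n) = 2 * F(n) - 1 with a plain O(n) loop; the matrix method mentioned above would bring this down to O(log(n)):

#include <cstdio>

// Number of calls made by the naive fib, via calls(n) = 2 * F(n) - 1,
// where F(1) = F(2) = 1 are the usual Fibonacci numbers.
long long countFibCalls(int n)
{
    if (n <= 2) return 1;
    long long prev = 1, cur = 1;   // F(1), F(2)
    for (int i = 3; i <= n; ++i) {
        long long next = prev + cur;
        prev = cur;
        cur = next;
    }
    return 2 * cur - 1;
}

int main()
{
    std::printf("%lld\n", countFibCalls(7));  // prints 25
}

For n = 7 this gives 25 calls in total, which matches the reference-counter program in the next answer (count = 25, of which 24 are recursive calls).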
You can use a reference variable to keep track of the number of times the function is called.
Why don't you try something like this:
#include <iostream>
using namespace std;

int fib(int n, int& count) {
    count++;
    if (n == 1) return 0;
    if (n == 2) return 1;
    return fib(n - 1, count) + fib(n - 2, count);
}

int main()
{
    int nth = 7;
    int count = 0;
    int num = fib(nth, count);
    cout << nth << "th fibonacci number is " << num
         << ", function calls: " << count
         << ", recursive calls: " << count - 1 << endl;
    return 0;
}
Without modifying the function? Use a proxy:
http://tutorials.jenkov.com/java-reflection/dynamic-proxies.html#proxy
https://docs.oracle.com/javase/1.5.0/docs/guide/reflection/proxy.html#examples
Foo foo = (Foo) DebugProxy.newInstance(new FooImpl());
foo.bar(null);

Base and recursive cases in Recursive Functions

I am new to recursion and want to know whether the code below can be called a recursive function. Although the function calls itself, I'm not sure whether it satisfies the condition of having base and recursive cases.
The function below sorts flight tickets based on the starting point of the trip (the trip is assumed to have multiple legs, and the tickets are shuffled, e.g. ABC -> EFG, XYZ -> MNO, EFG -> KLM, KLM -> XYZ).
public List<Ticket> sortTickets(List<Ticket> shuffled, String source, int n) {
    if (sortedList.size() != shuffled.size()) {   // sortedList is a field of the class
        n = (n < 0) ? shuffled.size() - 1 : n;
        // note: strings must be compared with equals(), not ==
        if (shuffled.get(n).source.equals(source)) {
            sortedList.add(shuffled.get(n));
            source = shuffled.get(n).destination;
            return sortTickets(shuffled, source, n - 1);
        } else {
            return sortTickets(shuffled, source, n - 1);
        }
    } else {
        return sortedList;
    }
}
Every function that calls itself is recursive (regardless of having base cases).
A recursive function without base cases will always recurse infinitely. (Note that having base cases will not necessarily prevent infinite recursion either: it might be that one of the execution paths of the function never reaches a base case.)
Your function has a base case - when sortedList.size() == shuffled.size().
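As a tiny illustration of that caveat, here is a sketch of my own: a function with a perfectly good base case that still recurses forever on one execution path:

// Has a base case (n == 0), but the path for negative n never reaches it.
int countdown(int n)
{
    if (n == 0) return 0;      // base case
    return countdown(n - 1);   // never terminates when n < 0
}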

Is there some way to speed up recursion by remembering child nodes?

For example,
Look at the code that calculates the n-th Fibonacci number:
int fib(int n)
{
    if (n == 0 || n == 1)
        return 1;
    return fib(n - 1) + fib(n - 2);
}
The problem with this code is that it will generate a stack overflow error for any number greater than 15 (on most computers).
Assume that we are calculating fib(10). In this process, say fib(5) is calculated a lot of times. Is there some way to store this in memory for fast retrieval and thereby increase the speed of recursion?
I am looking for a generic technique that can be used in almost all problems.
Yes, your insight is correct.
This is called dynamic programming. It is the usual memory/runtime trade-off.
In the case of fibo, you don't even need to cache everything:
[edit]
The author of the question seems to be looking for a general method to cache rather than a method to compute Fibonacci. Search Wikipedia or look at the code of the other posters to get this answer. Those answers are linear in time and memory.
**Here is a linear-time algorithm: O(n) time, constant memory**
in OCaml:
let fibo n =
  let rec aux = function
    | 0 -> (1, 1)
    | n -> let (cur, prec) = aux (n - 1) in (cur + prec, cur)
  in
  let (_, prec) = aux n in prec;;
in C++:
int fibo(int n) {
    if (n == 0) return 1;
    if (n == 1) return 1;
    int p = 1;    // fibo(0)
    int c = 1;    // fibo(1)
    int buff = 0;
    for (int i = 1; i < n; ++i) {
        buff = c;
        c = p + c;
        p = buff;
    }
    return c;
}
This performs in linear time. But logarithmic time is actually possible!
Roo's program is linear too, but way slower, and it uses memory.
Here is the O(log(n)) algorithm (way, way faster). The method:
If you know u(n) and u(n-1), computing u(n+1) and u(n) can be done by applying a matrix:

| u(n+1) | = | 1 1 | | u(n)   |
| u(n)   |   | 1 0 | | u(n-1) |

So that you have:

| u(n)   | = | 1 1 |^(n-1) | u(1) | = | 1 1 |^(n-1) | 1 |
| u(n-1) |   | 1 0 |       | u(0) |   | 1 0 |       | 1 |
Computing the exponential of the matrix has logarithmic complexity. Just implement the idea recursively:

M^0      = Id
M^(2p+1) = M^(2p) * M
M^(2p)   = M^p * M^p   // of course, don't compute M^p twice here
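A minimal C++ sketch of that idea (my own translation of the recurrences above; the 2x2 multiplication is hard-coded for brevity, and u(0) = u(1) = 1 as in this answer):

#include <cstdio>

struct Mat { long long a, b, c, d; };  // 2x2 matrix | a b ; c d |

Mat mul(Mat x, Mat y)
{
    return { x.a * y.a + x.b * y.c,  x.a * y.b + x.b * y.d,
             x.c * y.a + x.d * y.c,  x.c * y.b + x.d * y.d };
}

// M^n by squaring: compute M^(n/2) once, square it, and multiply by M if n is odd
Mat mpow(Mat m, int n)
{
    if (n == 0) return {1, 0, 0, 1};   // identity
    Mat half = mpow(m, n / 2);
    Mat sq = mul(half, half);
    return (n % 2 == 0) ? sq : mul(sq, m);
}

long long fib(int n)
{
    if (n == 0) return 1;
    Mat m = mpow({1, 1, 1, 0}, n - 1);
    return m.a + m.b;   // top row of | 1 1 ; 1 0 |^(n-1) applied to (1, 1)
}

int main()
{
    std::printf("%lld\n", fib(10));   // prints 89
}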
You can also just diagonalize it (not too difficult); you will find the golden ratio and its conjugate among its eigenvalues, and the result will give you an EXACT mathematical formula for u(n). It contains powers of those eigenvalues, so the complexity will still be logarithmic.
Fibo is often taken as an example to illustrate dynamic programming, but as you can see, it is not really pertinent.
#John:
I don't think it has anything to do with hashing.
#John2:
A map is a bit general, don't you think? For the Fibonacci case, all the keys are contiguous, so a vector is appropriate. And once again, there are much faster ways to compute the fibo sequence; see my code sample above.
This is called memoization, and there is a very good article about memoization that Matthew Podwysocki posted recently. It uses Fibonacci to exemplify it, and shows the code in C# as well. Read it here.
If you're using C#, and can use PostSharp, here's a simple memoization aspect for your code:
[Serializable]
public class MemoizeAttribute : PostSharp.Laos.OnMethodBoundaryAspect, IEqualityComparer<Object[]>
{
    private Dictionary<Object[], Object> _Cache;

    public MemoizeAttribute()
    {
        _Cache = new Dictionary<object[], object>(this);
    }

    public override void OnEntry(PostSharp.Laos.MethodExecutionEventArgs eventArgs)
    {
        Object[] arguments = eventArgs.GetReadOnlyArgumentArray();
        if (_Cache.ContainsKey(arguments))
        {
            eventArgs.ReturnValue = _Cache[arguments];
            eventArgs.FlowBehavior = FlowBehavior.Return;
        }
    }

    public override void OnExit(MethodExecutionEventArgs eventArgs)
    {
        if (eventArgs.Exception != null)
            return;
        _Cache[eventArgs.GetReadOnlyArgumentArray()] = eventArgs.ReturnValue;
    }

    #region IEqualityComparer<object[]> Members

    public bool Equals(object[] x, object[] y)
    {
        if (Object.ReferenceEquals(x, y))
            return true;
        if (x == null || y == null)
            return false;
        if (x.Length != y.Length)
            return false;
        for (Int32 index = 0, len = x.Length; index < len; index++)
            if (Comparer.Default.Compare(x[index], y[index]) != 0)
                return false;
        return true;
    }

    public int GetHashCode(object[] obj)
    {
        Int32 hash = 23;
        foreach (Object o in obj)
        {
            hash *= 37;
            if (o != null)
                hash += o.GetHashCode();
        }
        return hash;
    }

    #endregion
}
Here's a sample Fibonacci implementation using it:
[Memoize]
private Int32 Fibonacci(Int32 n)
{
    if (n <= 1)
        return 1;
    else
        return Fibonacci(n - 2) + Fibonacci(n - 1);
}
Quick and dirty memoization in C++:
Any recursive method type1 foo(type2 bar) { ... } is easily memoized with map<type2, type1> M.
// your original method
int fib(int n)
{
    if (n == 0 || n == 1)
        return 1;
    return fib(n - 1) + fib(n - 2);
}

// with memoization
map<int, int> M;

int fib(int n)
{
    if (n == 0 || n == 1)
        return 1;
    // only compute the value for fib(n) if we haven't before
    if (M.count(n) == 0)
        M[n] = fib(n - 1) + fib(n - 2);
    return M[n];
}
EDIT: #Konrad Rudolph
Konrad points out that std::map is not the fastest data structure we could use here. That's true: a vector<something> should be faster than a map<int, something> (though it might require more memory if the inputs to the recursive calls of the function were not consecutive integers like they are in this case), but maps are convenient to use in general.
According to Wikipedia, Fib(0) should be 0, but it does not matter here.
Here is a simple C# solution with a for loop:
ulong Fib(int n)
{
    ulong fib = 1;  // value of fib(i)
    ulong fib1 = 1; // value of fib(i-1)
    ulong fib2 = 0; // value of fib(i-2)
    for (int i = 0; i < n; i++)
    {
        fib = fib1 + fib2;
        fib2 = fib1;
        fib1 = fib;
    }
    return fib;
}
It is a pretty common trick to convert a recursion to a tail recursion and then into a loop. For more detail see, for example, this lecture (ppt).
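For reference, here is what that tail-recursive intermediate step looks like as a C++ sketch of my own (the two accumulator parameters play the roles of fib1 and fib2 in the loop above, and the recursive call is the last operation, so a tail-call-optimizing compiler can turn it into that loop):

// Tail-recursive Fibonacci: fib1 = fib(i), fib2 = fib(i-1), counting down from n.
unsigned long long fibTail(int n, unsigned long long fib1 = 1,
                                  unsigned long long fib2 = 0)
{
    if (n == 0) return fib1;
    return fibTail(n - 1, fib1 + fib2, fib1);
}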
What language is this? It doesn't overflow anything in C...
Also, you can try creating a lookup table on the heap, or use a map.
Caching is generally a good idea for this kind of thing. Since Fibonacci numbers are constant, you can cache the result once it has been calculated. A quick C-style pseudocode example:
class fibstorage {
    bool has-result(int n) { return fibresults.contains(n); }
    int get-result(int n) { return fibresults.find(n).value; }
    void add-result(int n, int v) { fibresults.add(n, v); }
    map<int, int> fibresults;
}

fib(int n) {
    if (n == 0 || n == 1)
        return 1;
    if (fibstorage.has-result(n)) {
        return fibstorage.get-result(n);   // note: look up n itself, not n-1
    }
    return ( (fibstorage.has-result(n-1) ? fibstorage.get-result(n-1) : fib(n-1))
           + (fibstorage.has-result(n-2) ? fibstorage.get-result(n-2) : fib(n-2)) );
}

calcfib(n) {
    v = fib(n);
    fibstorage.add-result(n, v);
}
This would be quite slow, as every recursion results in three lookups; however, it should illustrate the general idea.
Is this a deliberately chosen example (e.g. an extreme case you're wanting to test)?
As it's currently O(1.6^n), I just want to make sure you're looking for answers on handling the general case of this problem (caching values, etc.) and not just accidentally writing poor code :D
Looking at this specific case you could have something along the lines of:
var cache = [];
function fib(n) {
    if (n < 2) return 1;
    if (cache.length > n) return cache[n];
    var result = fib(n - 2) + fib(n - 1);
    cache[n] = result;
    return result;
}
Which degenerates to O(n) in the worst case :D
[Edit: * does not equal + :D ]
[Yet another edit: the Haskell version (because I'm a masochist or something)
fibs = 1:1:(zipWith (+) fibs (tail fibs))
fib n = fibs !! n
]
Try using a map, n is the key and its corresponding Fibonacci number is the value.
#Paul
Thanks for the info. I didn't know that. From the Wikipedia link you mentioned:
"This technique of saving values that have already been calculated is called memoization."
Yeah I already looked at the code (+1). :)
#ESRogs:
std::map lookup is O(log n), which makes it slow here. Better to use a vector:
vector<unsigned int> fib_cache = {1, 1};  // seeds fib(0) and fib(1)

unsigned int fib(unsigned int n) {
    if (fib_cache.size() <= n)
        fib_cache.push_back(fib(n - 1) + fib(n - 2));
    return fib_cache[n];
}
Others have answered your question well and accurately: you're looking for memoization.
Programming languages with tail call optimization (mostly functional languages) can do certain cases of recursion without stack overflow. It doesn't directly apply to your definition of Fibonacci, though there are tricks...
The phrasing of your question made me think of an interesting idea: avoiding stack overflow of a pure recursive function by only storing a subset of the stack frames, and rebuilding when necessary. This is only really useful in a few cases, namely if your algorithm only conditionally relies on the context (as opposed to the return value) and/or you're optimizing for memory rather than speed.
Mathematica has a particularly slick way to do memoization, relying on the fact that hashes and function calls use the same syntax:
fib[0] = 1;
fib[1] = 1;
fib[n_] := fib[n] = fib[n-1] + fib[n-2]
That's it. It caches (memoizes) fib[0] and fib[1] off the bat and caches the rest as needed. The rules for pattern-matching function calls are such that it always uses a more specific definition before a more general definition.
One more excellent resource for C# programmers on recursion, partials, currying, memoization, and their ilk is Wes Dyer's blog, though he hasn't posted in a while. He explains memoization well, with solid code examples, here:
http://blogs.msdn.com/wesdyer/archive/2007/01/26/function-memoization.aspx
The problem with this code is that it will generate a stack overflow error for any number greater than 15 (on most computers).
Really? What computer are you using? It's taking a long time at 44, but the stack is not overflowing. In fact, you're going to get a value bigger than an integer can hold (~4 billion unsigned, ~2 billion signed) before the stack overflows (at Fibonacci(46)).
This would work for what you want to do, though (it runs wicked fast):
using System;
using System.Collections.Generic;

class Program
{
    // long rather than int: Fibbonacci(46) is too big for a 32-bit int
    public static readonly Dictionary<int, long> Items = new Dictionary<int, long>();

    static void Main(string[] args)
    {
        Console.WriteLine(Fibbonacci(46).ToString());
        Console.ReadLine();
    }

    public static long Fibbonacci(int number)
    {
        if (number == 1 || number == 0)
        {
            return 1;
        }

        var minus2 = number - 2;
        var minus1 = number - 1;

        if (!Items.ContainsKey(minus2))
        {
            Items.Add(minus2, Fibbonacci(minus2));
        }

        if (!Items.ContainsKey(minus1))
        {
            Items.Add(minus1, Fibbonacci(minus1));
        }

        return (Items[minus2] + Items[minus1]);
    }
}
If you're using a language with first-class functions like Scheme, you can add memoization without changing the initial algorithm:
(define (memoize fn)
  (letrec ((get (lambda (query) '(#f)))
           (set (lambda (query value)
                  (let ((old-get get))
                    (set! get (lambda (q)
                                (if (equal? q query)
                                    (cons #t value)
                                    (old-get q))))))))
    (lambda args
      (let ((val (get args)))
        (if (car val)
            (cdr val)
            (let ((ret (apply fn args)))
              (set args ret)
              ret))))))

(define fib (memoize (lambda (x)
                       (if (< x 2) x
                           (+ (fib (- x 1)) (fib (- x 2)))))))
The first block provides a memoization facility and the second block is the fibonacci sequence using that facility. This now has an O(n) runtime (as opposed to O(2^n) for the algorithm without memoization).
Note: the memoization facility provided uses a chain of closures to look for previous invocations. At worst case this can be O(n). In this case, however, the desired values are always at the top of the chain, ensuring O(1) lookup.
As other posters have indicated, memoization is a standard way to trade memory for speed. Here is some pseudocode to implement memoization for any function (provided the function has no side effects):
Initial function code:

function (parameters)
    body (with recursive calls to calculate result)
    return result

This should be transformed to:

function (parameters)
    key = serialize parameters to string
    if (cache[key] does not exist) {
        body (with recursive calls to calculate result)
        cache[key] = result
    }
    return cache[key]
By the way, Perl has a Memoize module that does this for any function in your code that you specify.
# Compute Fibonacci numbers
sub fib {
    my $n = shift;
    return $n if $n < 2;
    fib($n - 1) + fib($n - 2);
}
In order to memoize this function all you do is start your program with
use Memoize;
memoize('fib');
# Rest of the fib function just like the original version.
# Now fib is automagically much faster ;-)
#lassevk:
This is awesome: exactly what I had been thinking about after reading about memoization in Higher Order Perl. Two things I think would be useful additions:
An optional parameter to specify a static or member method that is used for generating the key to the cache.
An optional way to change the cache object so that you could use a disk- or database-backed cache.
Not sure how to do this sort of thing with Attributes (or whether it's even possible with this sort of implementation), but I plan to try to figure it out.
(Off topic: I was trying to post this as a comment, but I didn't realize that comments have such a short allowed length, so this doesn't really fit as an 'answer'.)
