Consider two functions that each accept an unsigned integer as a parameter and return the number of digits of that number. One function is recursive and the other is non-recursive.
In terms of complexity, which implementation is better?
The language used is C/C++.
Here is the non-recursive function:
int nbOfDigitsNR(int nb) {
    int i = 0;
    while (nb != 0) {
        nb = nb / 10;
        ++i;
    }
    return i; // i is the number of digits
}
The recursive function:
int nbOfDigitsR(int nb) {
    static int i; // static: zero-initialized once, shared by all recursive calls
    if (nb != 0) {
        i = i + 1;
        nbOfDigitsR(nb / 10);
    }
    return i;
}
I suggest that the time complexity is the same, O(n), while the space complexity differs: O(n) for the recursive version, O(1) for the non-recursive one.
If one solution is recursive and the other iterative, the time complexity should be the same, provided of course that it is the same algorithm implemented twice: once recursively and once iteratively.
The difference comes in terms of space complexity and in how the programming language, in your case C++, handles recursion.
Your example illustrates exactly that. Both functions have the same time complexity, while the recursive one has greater space complexity, since C++ allocates a stack frame for each recursive call.
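To make the stack usage visible, here is a hedged sketch (my own rewrite, not the question's code) of the recursion without the static variable; each pending call occupies one stack frame, so the stack depth equals the number of digits:

int nbOfDigitsR2(int nb) {
    if (nb == 0)
        return 0;                      // recursion bottoms out: no more digits
    return 1 + nbOfDigitsR2(nb / 10);  // one live stack frame per digit
}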
You are correct about the time and space complexities if n represents the number of digits. If n represents the value of the integer, replace it with lg(n).
Saying that a function is recursive or non-recursive doesn't tell us anything about its complexity.
They could be equal, or one of them could have lower complexity; it depends entirely on the algorithm.
I have a blue and a gray car. Which one is faster?
I'm trying to solve question 11.1 in Elements of Programming Interviews (EPI) in Java: Search a Sorted Array for First Occurrence of K.
The problem description from the book:
Write a method that takes a sorted array and a key and returns the index of the first occurrence of that key in the array.
The solution they provide in the book is a modified binary search algorithm that runs in O(log n) time. I wrote my own algorithm, also based on a modified binary search, with a slight difference: it uses recursion. The problem is I don't know how to determine the time complexity of my algorithm. My best guess is that it will run in O(log n) time, because each time the function is called it reduces the size of the candidate values by half.
I've tested my algorithm against the 314 EPI test cases that are provided by the EPI Judge, so I know it works; I just don't know the time complexity. Here is the code:
public static int searchFirstOfKUtility(List<Integer> A, int k, int Lower, int Upper, Integer Index) {
    while (Lower <= Upper) {
        int M = Lower + (Upper - Lower) / 2;
        if (A.get(M) < k)
            Lower = M + 1;
        else if (A.get(M) == k) {
            Index = M;
            if (Lower != Upper)
                Index = searchFirstOfKUtility(A, k, Lower, M - 1, Index);
            return Index;
        }
        else
            Upper = M - 1;
    }
    return Index;
}
Here is the code that the test cases call to exercise my function:
public static int searchFirstOfK(List<Integer> A, int k) {
    Integer foundKey = -1;
    return searchFirstOfKUtility(A, k, 0, A.size() - 1, foundKey);
}
So, can anyone tell me what the time complexity of my algorithm would be?
Assuming that passing arguments is O(1) instead of O(n), performance is O(log(n)).
The usual theoretical approach for analyzing recursion is to apply the Master Theorem. It says that if the performance of a recursive algorithm follows the recurrence:
T(n) = a T(n/b) + f(n)
then there are 3 cases. In plain English they correspond to:
Performance is dominated by all the calls at the bottom of the recursion, so is proportional to how many of those there are.
Performance is equal between each level of recursion, and so is proportional to how many levels of recursion there are, times the cost of any layer of recursion.
Performance is dominated by the work done in the very first call, and so is proportional to f(n).
You are in case 2. Each recursive call costs the same, and so performance is dominated by the fact that there are O(log(n)) levels of recursion times the cost of each level. Assuming that passing a fixed number of arguments is O(1), that will indeed be O(log(n)).
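Concretely, for this search the recurrence (under the O(1) argument-passing assumption) is:

T(n) = T(n/2) + O(1)

i.e. a = 1, b = 2, f(n) = O(1), which unrolls into O(log(n)) levels each doing O(1) work.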
Note that this assumption holds for Java because you don't make a complete copy of the array before passing it. But it is important to be aware that it is not true in all languages. For example, I recently did a bunch of work in PL/pgSQL, where arrays are passed by value; there, your algorithm would have been O(n log(n)).
public static int div(int numItems) {
    if (numItems == 0)
        return 0;
    else
        return numItems % 2 + div(numItems / 2);
}
I am thinking that the time complexity will be logarithmic, i.e. O(log n), but I am not able to figure out how. Can anyone help me with this?
The complexity is O(log n). Imagine the number in its binary representation: integer division by 2 is equivalent to removing the least significant bit of that representation. When all bits have been consumed, the base case kicks in. Since a number has O(log n) bits in its binary representation, this is also the complexity of this recursive function.
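For instance, tracing div(13) by hand (13 is 1101 in binary):

div(13) = 1 + div(6)
        = 1 + 0 + div(3)
        = 1 + 0 + 1 + div(1)
        = 1 + 0 + 1 + 1 + div(0)
        = 3

The recursion is one call deep per bit, and the sum it returns is the number of 1-bits.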
Most recursive functions I have seen asked about (e.g. Fibonacci or Hanoi) have had O(1) base cases, but what would the time complexity be if the base case were O(n) instead of O(1)?
For example, a recursive Fibonacci with O(n) base case:
class fibonacci {
    static int fib(int n) {
        if (n <= 1) {
            for (int i = 0; i < n; i++) {
                // something
            }
            return n;
        }
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String args[]) {
        int n = 9;
        System.out.println(fib(n));
    }
}
The base case for the function that you’ve written here actually still has time complexity O(1). The reason for this is that if the base case triggers here, then n ≤ 1, so the for loop here will run at most once.
Because base cases trigger when the input size is small, it's comparatively rare to see a base case whose runtime is, say, O(n) when the input to the algorithm has size n. That would mean the base case's cost depends on the original input size rather than on the (small) input the base case actually receives, which can happen but is somewhat unusual.
A more common occurrence - albeit one I think is still pretty uncommon - would be for a recursive function to have two different parameters to it (say, n and k), where the recursion reduces n but leaves k unmodified. For example, imagine taking the code you have here and replacing the for loop on n in the base case with a for loop on k in the base case. What happens then?
This turns out to be an interesting question. In the case of this particular problem, it means that the total work done will be given by O(k) times the number of base cases triggered, plus O(1) times the number of non-base-case recursive calls. For the Fibonacci recursion, the number of base cases triggered when computing F(n) is F(n+1), and there are F(n+1) - 1 non-base-case calls, so the overall runtime would be Θ(k F(n+1) + F(n+1)) = Θ(k φ^n). For the Towers of Hanoi, you'd similarly see a scaling effect where the overall runtime would be Θ(k 2^n). But for other recursive functions the runtime might vary in different ways, depending on how those functions were structured.
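To make that concrete, here is a minimal C++ sketch of such a two-parameter variant (the name fib_with_k and the placeholder loop are mine, purely illustrative):

long long fib_with_k(int n, int k) {
    if (n <= 1) {
        for (volatile int i = 0; i < k; ++i) {
            // Theta(k) placeholder work in the base case
        }
        return n;
    }
    // k is passed along unmodified; only n shrinks
    return fib_with_k(n - 1, k) + fib_with_k(n - 2, k);
}

Every leaf of the call tree now costs Θ(k), and since there are Θ(φ^n) leaves, the total runtime is Θ(k φ^n), matching the analysis above.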
Hope this helps!
I want to know the difference (greater or lesser) between the space complexity of a recursive program and that of a non-recursive program with the lowest space complexity. I know that recursion uses a stack in its operation, but does recursion always increase space complexity? Can recursion be helpful in reducing space complexity? I would also appreciate any pointers to a good tutorial on stack use in recursion. Please also provide a short example for clarification if possible.
Recursion can increase space complexity, but it never decreases it.
Consider, for example, insertion into a binary search tree, sketched below.
The non-recursive implementation (using a while loop) uses O(1) memory.
The recursive implementation uses O(h) memory (where h is the depth of the tree).
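A minimal C++ sketch of both versions (the names and structure are mine, purely illustrative):

struct Node {
    int key;
    Node* left;
    Node* right;
};

// Non-recursive insert: O(1) extra memory (just a pointer walking down).
void insertIterative(Node*& root, int key) {
    Node** cur = &root;
    while (*cur != nullptr)
        cur = (key < (*cur)->key) ? &(*cur)->left : &(*cur)->right;
    *cur = new Node{key, nullptr, nullptr};
}

// Recursive insert: O(h) extra memory for the call stack.
Node* insertRecursive(Node* root, int key) {
    if (root == nullptr)
        return new Node{key, nullptr, nullptr};
    if (key < root->key)
        root->left = insertRecursive(root->left, key);
    else
        root->right = insertRecursive(root->right, key);
    return root;
}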
More formally:
If there is a recursive algorithm with space complexity O(X), then there is always a non-recursive algorithm with space complexity O(X) (you can simulate the recursion using an explicit stack). But it doesn't go the other way (see the example above).
You can achieve exactly the same space complexity with tail recursion as with loop iteration, provided your implementation language supports tail-call optimization (TCO). (C and C++ do not guarantee TCO, but mainstream compilers such as GCC and Clang typically perform it at -O2.)
int acc = 1;
for (int i = 1; i <= n; i++) {
    acc *= i;
}
return acc;
Is the same as:
int fact(int i, int acc) {
    if (i == 1)
        return acc;
    return fact(i - 1, i * acc); // tail call: reusable frame under TCO
}
return fact(n, 1);
Say, for example, the iterative and recursive versions of the Fibonacci series. Do they have the same time complexity?
The answer depends strongly on your implementation. For the example you gave there are several possible solutions, and I would say that the naive solution has better complexity when implemented iteratively. Here are the two implementations:
int iterative_fib(int n) {
    if (n <= 2) {
        return 1;
    }
    int a = 1, b = 1, c;
    for (int i = 0; i < n - 2; ++i) {
        c = a + b;
        b = a;
        a = c;
    }
    return a;
}

int recursive_fib(int n) {
    if (n <= 2) {
        return 1;
    }
    return recursive_fib(n - 1) + recursive_fib(n - 2);
}
In both implementations I assumed correct input, i.e. n >= 1. The first code is much longer, but its complexity is O(n), i.e. linear, while the second implementation is shorter but has exponential complexity O(fib(n)) = O(φ^n) (where φ = (1+√5)/2) and is thus much slower.
One can improve the recursive version by introducing memoization (i.e. remembering the return values of the function you have already computed). This is usually done with an array where you store the values. Here is an example:
int mem[1000]; // initialize this array with an invalid value, here -1:
               // memset(mem, -1, sizeof(mem));

int mem_fib(int n) {
    if (n <= 2) {
        return mem[n] = 1;
    }
    if (mem[n - 1] == -1) {
        mem_fib(n - 1);
    }
    if (mem[n - 2] == -1) {
        mem_fib(n - 2);
    }
    return mem[n] = mem[n - 1] + mem[n - 2];
}
Here the complexity of the recursive algorithm is linear, just like the iterative solution. The solution I introduced above is the top-down approach of the dynamic-programming solution to your problem. The bottom-up approach leads to something very similar to the solution I introduced as iterative.
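For completeness, a hedged sketch of the bottom-up version (the name bottom_up_fib is mine), reusing the mem[] convention from above:

int bottom_up_fib(int n) {
    int mem[1000]; // assumes n < 1000, like the memoized version
    mem[1] = mem[2] = 1;
    for (int i = 3; i <= n; ++i)
        mem[i] = mem[i - 1] + mem[i - 2];
    return mem[n];
}

Observing that only the last two values are ever read lets you drop the array entirely, which gives exactly the iterative solution shown first.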
There are a lot of articles on dynamic programming, including on Wikipedia.
In my experience, some problems are much harder to solve with the bottom-up approach (i.e. the iterative solution), while others are hard to solve with the top-down approach.
However, the theory states that every problem that has an iterative solution also has a recursive one with the same computational complexity (and vice versa).
Hope this answer helps.
This particular recursive algorithm for calculating the Fibonacci series is less efficient.
Consider the following situation of finding fib(4) through the recursive algorithm:
int fib(int n) {
    if (n == 0 || n == 1)
        return n;
    else
        return fib(n - 1) + fib(n - 2);
}
Now, when the above algorithm executes for n = 4, the call tree looks like this:
                    fib(4)
                   /      \
              fib(3)       fib(2)
             /      \      /     \
        fib(2)  fib(1) fib(1)  fib(0)
       /      \
  fib(1)   fib(0)
It's a call tree: for calculating fib(4) you need to calculate fib(3) and fib(2), and so on.
Notice that even for the small value 4, fib(2) is calculated twice and fib(1) three times. The number of additions grows for larger numbers.
It can be shown that the number of additions required for calculating fib(n) is
fib(n+1) - 1
This duplication is the cause of the reduced performance of this particular algorithm.
The iterative algorithm for the Fibonacci series is considerably faster, since it does not recompute these redundant values.
It may not be the same case for all algorithms, though.
If you take some recursive algorithm, you can convert it to an iterative one by storing all function-local variables in an array, effectively simulating the call stack on the heap. Done like this, there is no difference between iterative and recursive.
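As an illustration (a sketch of mine, assuming the naive fib(0) = 0, fib(1) = 1 recursion), the recursive Fibonacci can be run off an explicit heap-allocated stack:

#include <stack>

int fib_explicit_stack(int n) {
    int result = 0;
    std::stack<int> pending; // replaces the call stack
    pending.push(n);
    while (!pending.empty()) {
        int m = pending.top();
        pending.pop();
        if (m <= 1) {
            result += m;          // each base case contributes 0 or 1
        } else {
            pending.push(m - 1);  // each "recursive call" becomes a push
            pending.push(m - 2);
        }
    }
    return result;
}

The time complexity is the same exponential one as the naive recursion; only the frames have moved from the call stack to a std::stack.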
Note that there are (at least) two recursive Fibonacci algorithms, so for the example to be exact you need to specify which recursive algorithm you're talking about.
Yes, every iterative algorithm can be transformed into a recursive version and vice versa: one direction by passing continuations, the other by implementing an explicit stack structure. This is done without an increase in time complexity.
If you can optimize tail recursion, then every iterative algorithm can be transformed into a recursive one without increasing asymptotic memory complexity.
Yes, if you use exactly the same ideas underlying the algorithm, it does not matter. However, recursion is often easier to use than iteration. For instance, writing a recursive version of the Towers of Hanoi is quite easy, while transforming the recursive version into a corresponding iterative one is difficult and error-prone, even though it can be done. In fact, there is a theorem stating that every recursive algorithm can be transformed into an equivalent iterative one (doing this requires mimicking the recursion iteratively, using one or more stack data structures to hold the parameters passed to recursive invocations).
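For reference, the easy recursive Hanoi looks roughly like this (a sketch, with my own naming):

#include <cstdio>

// Move n disks from peg 'from' to peg 'to', using 'aux' as scratch space.
void hanoi(int n, char from, char to, char aux) {
    if (n == 0)
        return;
    hanoi(n - 1, from, aux, to);                          // clear the way
    std::printf("move disk %d: %c -> %c\n", n, from, to); // move the largest disk
    hanoi(n - 1, aux, to, from);                          // stack the rest back on top
}

The iterative equivalent has to manage the pending moves on an explicit stack, which is where the difficulty and error-proneness come from.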