Time complexity: O(n) vs O(2^n)

Given the following two functions, why is the time complexity of the first O(n) while that of the second is O(2^n)?
The only difference is the + 1 before the return in the second function; I don't see how that can influence the time complexity.
int f1(int n) {
    if (n == 1)
        return 1;
    return f1(f1(n - 1));
}

int f2(int n) {
    if (n == 1)
        return 1;
    return 1 + f2(f2(n - 1));
}

The key insight here is that f1 always returns 1 no matter its argument, and that f1(1) is evaluated in constant time.
Each of these functions results in two recursive calls -- an inner recursive call first, then an outer recursive call -- except when n is 1, in which case the function makes zero recursive calls.
However, since f1 always returns 1 regardless of its input, the outer recursive call it makes is always called with an argument of 1. Thus the time complexity of f1 reduces to that of the recurrence T(n) = T(n-1) + O(1), which is O(n), because the only other call it makes takes O(1) time.
When evaluating f2, on the other hand, the inner recursive call is made with argument n-1, and the outer recursive call is made with argument n-1 as well, because f2(n) yields n. You can see this by induction. By definition, f2(1) is 1. Assume f2(n) = n. Then f2(n+1) yields 1 + f2(f2(n+1-1)), which by the induction hypothesis reduces to 1 + (n+1-1), or just n+1.
Thus each call to f2(n) results in twice as many calls as f2(n-1) makes, giving the recurrence T(n) = 2T(n-1) + O(1) and hence a time complexity of O(2^n).
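To make this concrete, here is a minimal Python sketch (instrumented versions of the two functions; the call counters are my addition) that counts how many calls each function makes. It prints 2n - 1 calls for f1 and 2^n - 1 calls for f2, matching the two recurrences above:

calls1 = 0
calls2 = 0

def f1(n):
    global calls1
    calls1 += 1              # count every invocation
    if n == 1:
        return 1
    return f1(f1(n - 1))

def f2(n):
    global calls2
    calls2 += 1              # count every invocation
    if n == 1:
        return 1
    return 1 + f2(f2(n - 1))

for n in range(1, 11):
    calls1 = calls2 = 0
    f1(n)
    f2(n)
    print(n, calls1, calls2)  # calls1 = 2n - 1, calls2 = 2^n - 1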

The difference lies in what the recursive calls return, and therefore in how long the outer call takes.
In the first function, the 1 that you return is carried through all of the nested calls, so when you reach 1, the outer calls all finish immediately.
In the second function, because the return value is incremented by 1, f2(n-1) returns n-1, so when the inner call finishes, the outer call has just as much work to do again.
To visualize this, put a print statement in your function and examine the output.
In Python, that would be:
def f1(n):
    print(n)
    if n < 2:
        return 1
    return f1(f1(n - 1))

def f2(n):
    print(n)
    if n < 2:
        return 1
    return 1 + f2(f2(n - 1))

f1(10)
f2(10)


Time complexity of a recursive function

How can I calculate the time complexity and the T(n) equation of this recursive function?
Function CoeffBin(n, k)
    if (n = 1) or (k = 0) then return(1)
    else return (CoeffBin(n-1, k) + CoeffBin(n-1, k-1))
Let T(n, k) be the cost function and assume a unit cost of the statement if (n=1) or (k=0) then return(1).
Now neglecting the cost of addition, we have the recurrence
T(n, k) = 1                          if n = 1 or k = 0 (that is, T(1, k) = T(n, 0) = 1)
T(n, k) = T(n-1, k) + T(n-1, k-1)    otherwise
The solution is T(n, k) = sum_{i=0}^{k} B(n-1, i), where B denotes the binomial coefficients, so the costs follow (partial row sums of) Pascal's triangle!
For a more precise estimate, we can assume a cost a when n = 1 or k = 0, and a cost T(n-1, k) + T(n-1, k-1) + b otherwise. The solution is then (a + b) T(n, k) - b, because a binary recursion tree with X leaves contains exactly X - 1 internal (addition) nodes.
Note that, at the base level (that is, when not doing recursive calls), the function always returns 1.
So, to produce an answer of X, the program ultimately needs to do X - 1 additions, and thus makes X calls that execute the first line and X - 1 calls that execute the second line.
So, whatever the intended result of the function call is -- perhaps choose(n, k) -- once you establish what it computes, you automatically establish that the number of calls is proportional to that result.
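To check this claim, one can instrument the function and compare the call counts with the returned value; here is a minimal Python sketch (the counter names are mine, for illustration only):

def coeff_bin(n, k, counts):
    if n == 1 or k == 0:
        counts["base"] += 1       # calls executing the first line
        return 1
    counts["recursive"] += 1      # calls executing the second line (one addition each)
    return coeff_bin(n - 1, k, counts) + coeff_bin(n - 1, k - 1, counts)

for n, k in [(5, 2), (8, 3), (10, 5)]:
    counts = {"base": 0, "recursive": 0}
    x = coeff_bin(n, k, counts)
    print(x, counts)              # base == x, recursive == x - 1

Each run should show X base-case calls and X - 1 additions for a result of X, i.e. 2X - 1 calls in total.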

What is the number of operations in this method?

I am currently learning about asymptotic analysis; however, I am unsure of how many operations are in these nested loops. Would somebody be able to help me understand how to approach them? Also, what would the Big-O notation be? (This is also my first time on Stack Overflow, so please forgive any formatting errors in the code.)
public static void primeFactors(int n) {
    while (n % 2 == 0) {
        System.out.print(2 + " ");
        n /= 2;
    }
    for (int i = 3; i <= Math.sqrt(n); i += 2) {
        while (n % i == 0) {
            System.out.print(i + " ");
            n /= i;
        }
    }
}
In the worst case, the first (while) loop is O(log(n)). In the second part, the outer loop runs O(sqrt(n)) times and the inner loop runs in O(log_i(n)). Hence, the time complexity of the second part (inner and outer loops together) is:
O(sum_{i = 3}^{sqrt(n)} log_i(n))
so a first estimate for the whole algorithm would be O(sqrt(n) log(n)).
Notice, however, that n is modified inside the inner loop, and this shrinks the sqrt(n) bound of the outer loop, so the complexity of the second part is really O(sqrt(n)). Under this observation, the time complexity of the algorithm is O(sqrt(n)) + O(log(n)) == O(sqrt(n)).
First, we see that the first loop is really a special case of the inner loop that occurs in the other loop, with i equal to two. It was separated as a special case in order to be able to increase i with steps of 2 instead of 1. But from the point of view of asymptotic complexity the step by 2 makes no difference: it represents a constant coefficient, which we can ignore. And so for our analysis we can just rewrite the code to this:
public static void primeFactors(int n) {
    for (int i = 2; i <= Math.sqrt(n); i += 1) { // note the change in start and increment value
        while (n % i == 0) {
            System.out.print(i + " ");
            n /= i;
        }
    }
}
The number of times that n /= i is executed corresponds to the number of non-distinct prime divisors (prime factors counted with multiplicity) that a number has. According to this Q&A, that number is O(loglogn) on average; in any case it is at most O(logn), since every prime factor is at least 2. It is not straightforward to derive this, so I had to look it up.
We should also consider the number of times the for loop iterates. The Math.sqrt(n) bound for i can decrease as the for loop iterates: the more divisions take place, the (much) fewer iterations the for loop has to make.
We can see that, by the time the loop exits, i has surpassed the square root of the greatest prime divisor of n. In the worst case that greatest prime divisor is n itself (when n is prime). So the for loop can iterate up to the square root of n, i.e. O(√n) times; in that case the inner loop never iterates (no divisions).
We should thus see which term dominates, and we get O(√n + loglogn), which is O(√n).
The first loop divides n by 2 until it is no longer divisible by 2. The maximum number of times this can happen is log2(n).
The second loop, at first sight, seems to run the inner loop sqrt(n) times, with the inner loop itself O(log(n)), but that is actually not the case. Every time the while condition of the second loop is satisfied, n decreases drastically, and since the condition of a for loop is evaluated on each iteration, sqrt(n) decreases as well. The worst case actually happens if the while condition is never satisfied, i.e. if n is prime.
Hence the overall time complexity is O(sqrt(n)).
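If you want to see this behaviour empirically, here is a small Python port of the method (my own translation, which counts loop iterations instead of printing the factors):

def prime_factor_ops(n):
    ops = 0
    while n % 2 == 0:        # first loop: at most log2(n) iterations
        ops += 1
        n //= 2
    i = 3
    while i * i <= n:        # outer loop: the bound shrinks as n is divided down
        while n % i == 0:
            ops += 1
            n //= i
        ops += 1
        i += 2
    return ops

print(prime_factor_ops(2**20))   # 20: all the work is divisions
print(prime_factor_ops(999983))  # ~500: 999983 is prime, so about sqrt(n)/2 outer iterations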

Complexity of a recursive function inside another recursive function

I can't figure out the solution to this exercise:
Calculate the complexity of f(g(n)) + g(f(n)), with g and f defined as follows:
int f(int x) {
    if (x <= 1) return 2;
    int a = g(x) + 2 * f(x / 2);
    return 1 + x + 2 * a;
}

int g(int x) {
    int b = 0;
    if (x <= 1) return 5;
    for (int i = 1; i <= x * x; i++)
        b += i;
    return b + g(x - 1);
}
Could anyone explain to me how to get to the solution?
There are two separate steps to solving this problem. Firstly we must look at the time complexity of each function, and then the output complexity.
Time complexity
Since g is self-contained, let's look at it first.
The work done in g consists of:
x^2 executions of a loop
A recursive call with parameter x - 1
Hence one might write the time complexity recurrence relation as (using upper-case to distinguish it from the original function):
G(x) = G(x-1) + x^2, with G(1) = a for some constant a
To solve it, repeatedly self-substitute to give a summation. This is the sum of the squares of the natural numbers from 2 to x:
G(x) = a + sum_{i=2}^{x} i^2 = a + (x(x+1)(2x+1)/6 - 1)
Where in the last step we used a standard result. And thus, since a is a constant:
G(x) = O(x^3)
Next, f:
One call to g(x)
One recursive call with parameter x / 2
Some constant amount of work
Using a similar method:
F(x) = F(x/2) + G(x) + c = F(x/2) + O(x^3)
Self-substituting m times gives a geometric series:
F(x) = F(x/2^m) + O(x^3 * sum_{j=0}^{m-1} 8^(-j))
Applying the stopping condition x/2^m = 1, i.e. m = log2(x):
F(x) = F(1) + O(x^3 * (8/7) * (1 - 8^(-log2(x))))
Thus, since the term 8^(-log2(x)) = x^(-3) vanishes for large x:
F(x) = O(x^3)
Output complexity
This is basically the same process as above, with slightly different recurrence relations. I'll skip the details and just state the results (using lower case for the output functions):
g(x) = O(x^5) (each call contributes sum_{i=1}^{x^2} i = O(x^4), over x levels of recursion)
f(x) = O(x^5) (from f(x) = 4 f(x/2) + 2 g(x) + x + 1, where the g(x) term dominates)
Thus the final time complexity of f(g(n)) + g(f(n)) is:
F(g(n)) + G(n) + G(f(n)) + F(n) = O((n^5)^3) + O(n^3) + O((n^5)^3) + O(n^3) = O(n^15)
Which matches the result given by your source.
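If you want to sanity-check the exponents empirically, here is a small Python sketch (the instrumentation is mine, not from the question) that counts g's loop iterations -- its dominant cost -- and inspects its return value. Doubling x should multiply the iteration count by roughly 8 (cubic time) and the return value by roughly 32 (quintic output):

def g(x, ops):
    if x <= 1:
        return 5
    b = 0
    for i in range(1, x * x + 1):
        ops[0] += 1          # one unit of work per loop iteration
        b += i
    return b + g(x - 1, ops)

for x in (10, 20, 40):
    ops = [0]
    val = g(x, ops)
    print(x, ops[0], val)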

What will be the complexity of this code?

My code is:
#include <vector>
using std::vector;

const int N = 4;  // problem size; assumed here, since the original doesn't define it

vector<int> permutation(N);
vector<int> used(N, 0);

void outputPermutation();  // prints the current permutation (definition not shown)

// note: "try" is a reserved word in C++, so the function must be renamed to compile
void try(int which, int what) {
    // try taking the number "what" as the "which"-th element
    permutation[which] = what;
    used[what] = 1;
    if (which == N - 1)
        outputPermutation();
    else
        // try all possibilities for the next element
        for (int next = 0; next < N; next++)
            if (!used[next])
                try(which + 1, next);
    used[what] = 0;
}

int main() {
    // try all possibilities for the first element
    for (int first = 0; first < N; first++)
        try(0, first);
}
I was learning complexity from some website where I came across this code. As per my understanding, the following line iterates N times. So the complexity is O(N).
for (int first=0; first<N; first++)
Next I am considering the recursive call.
for (int next = 0; next < N; next++)
    if (!used[next])
        try(which + 1, next);
So this recursive call involves a number of steps t(n) = N·c + t(0) (where c is some constant cost per step).
So we can say that, for this step, the complexity is O(N).
Thus the total complexity is O(N·N) = O(N^2).
Is my understanding right?
Thanks!
The complexity of this algorithm is O(N!) (or even O(N! * N) if outputPermutation takes O(N), which seems likely).
This algorithm outputs all permutations of the natural numbers 0..N-1 without repetitions.
The recursive function try sequentially tries to put each still-unused element into position which, and for each choice it recursively invokes itself for the next position, until which reaches N-1. Since at each level one more element is marked as used (in order to eliminate repetitions), a call at depth which spawns (N - which - 1) recursive calls. Thus the algorithm takes N * (N - 1) * (N - 2) * ... * 1 = N! steps.
It is a recursive function. The function "try" calls itself recursively, so there is a loop in main (), a loop in try (), a loop in the recursive call to try (), a loop in the next recursive call to try () and so on.
You need to analyse very carefully what this function does, or you will get a totally wrong result (as you did). You might consider actually running this code, with values of N = 1 to 20 and measuring the time. You will see that it is most definitely not O (N^2). Actually, don't skip any of the values of N; you will see why.
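For example, a direct Python translation (mine) that counts calls instead of printing the permutations makes the factorial growth visible immediately:

import math

def count_calls(N):
    used = [False] * N
    calls = [0]
    def try_(which, what):            # mirrors the C++ try()
        calls[0] += 1
        used[what] = True
        if which != N - 1:
            for nxt in range(N):
                if not used[nxt]:
                    try_(which + 1, nxt)
        used[what] = False
    for first in range(N):
        try_(0, first)
    return calls[0]

for N in range(1, 8):
    print(N, count_calls(N), math.factorial(N))  # call counts grow like a constant times N!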

How to calculate time complexity of the following algorithm

How do I calculate the time complexity of the following algorithm? I tried, but I am getting confused because of the recursive calls.
power(real x, positive integer n)
// comment: this algorithm returns x^n, taking x and n as input
{
    if n = 1 then
        return x;
    y = power(x, floor(n/2));
    if n is odd then
        return y*y*x;   // comment: returning the product of y^2 and x
    else
        return y*y;     // comment: returning y^2
}
Can someone explain in simple steps?
To figure out the time complexity of a recursive function, you need to calculate the number of recursive calls it makes in terms of the input variable n.
In this case, each call makes at most one recursive invocation. The number of invocations is on the order of O(log2(n)), because each invocation halves n.
The rest of the body of the recursive function is O(1), because it does not depend on n. Therefore, your function has a time complexity of O(log2(n)).
Each call does a constant amount of work, and the number of times it recurses equals the number of times you can halve n before reaching 1, which is at most log2(n). Therefore the worst-case running time is O(log2(n)).
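For reference, here is a runnable Python transcription of the pseudocode (my own), with a call counter confirming the logarithmic behaviour:

def power(x, n, calls):
    calls[0] += 1                 # count this invocation
    if n == 1:
        return x
    y = power(x, n // 2, calls)   # |n/2| read as integer division
    if n % 2 == 1:
        return y * y * x
    return y * y

calls = [0]
print(power(2, 1000, calls) == 2**1000)  # True
print(calls[0])                          # 10, roughly log2(1000)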
