int foo(int n)
{
    if (n == 0)
        return 1;
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += foo(n - 1);
    return sum;
}
I have recently been learning Big-O notation.
Could someone give me an idea of how to determine this recursive function's runtime using Big-O notation, and how to present that runtime?
So think about what happens if you run foo(n) for a big n. The sum then consists of n calls to foo(n-1). At the next level of the recursion tree we have foo(n-1), which again makes (n-1) calls, this time to foo(n-2), and that happens in each of the n foo(n-1) branches of our tree. We know the tree has height n, because we keep recursing until we reach foo(n-n) = foo(0). So at every recursion step you turn one instance of foo(n) into n instances of foo(n-1).
I am not certain I should reveal the answer, since this looks like an exercise, but it seems quite obvious: just draw a few levels of the recursion tree and you'll find your answer.
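If you want to check your drawing against reality, here is a minimal sketch (the calls counter and the driver loop are my additions, not part of the original exercise) that counts how many times foo is invoked for small n:

#include <cstdio>

static long long calls = 0; // hypothetical counter, my addition

int foo(int n)
{
    ++calls;
    if (n == 0)
        return 1;
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += foo(n - 1);
    return sum;
}

int main()
{
    for (int n = 1; n <= 8; n++) {
        calls = 0;
        foo(n);
        printf("n=%d  calls=%lld\n", n, calls);
    }
}

Watching how fast the count grows as n increases by one should match what your recursion tree predicts.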
In the book Introduction to Algorithms, the naive approach to solving the rod cutting problem can be described by the following recurrence:
Let q be the maximum price that can be obtained from a rod of length n.
Let the array price[1..n] store the given prices, where price[i] is the given price for a rod of length i.
rodCut(int n)
{
    initialize q as q = INT_MIN
    for i = 1 to n
        q = max(q, price[i] + rodCut(n-i))
    return q
}
What if I solve it using the below approach:
rodCutv2(int n)
{
    if (n == 0)
        return 0
    initialize q = price[n]
    for i = 1 to n/2
        q = max(q, rodCutv2(i) + rodCutv2(n-i))
    return q
}
Is this approach correct? If yes, why do we generally use the first one? Why is it better?
NOTE:
I am just concerned with the approach to solving this problem. I know that it exhibits optimal substructure and overlapping subproblems and can be solved efficiently using dynamic programming.
The problem with the second version is that it does not make use of the price array. There is no base case for the recursion, so it will never stop. Even if you add a condition to return price[1] when n == 1, it will always return the result of cutting the rod into pieces of size 1.
Your 2nd approach is absolutely correct, and its time complexity is the same as the 1st one.
In dynamic programming we can also build a tabulation on the same approach. Here is my solution using recursion:
// Recursive rod cutting; price[i-1] is the price of a rod of length i.
int rodCut(int price[], int n) {
    if (n <= 0) return 0;
    int ans = price[n - 1];            // option: sell the rod uncut
    for (int i = 1; i <= n / 2; ++i) { // try every first cut up to n/2
        ans = max(ans, rodCut(price, i) + rodCut(price, n - i));
    }
    return ans;
}
And the solution using dynamic programming:
int rodCut(int *price, int n) {
    int ans[n + 1];                   // ans[i] = best price for a rod of length i
                                      // (variable-length array; use std::vector in portable C++)
    ans[0] = 0;                       // a rod of length zero is worth nothing
    for (int i = 1; i <= n; ++i) {
        int max_value = price[i - 1]; // option: sell the piece of length i uncut
        for (int j = 1; j <= i / 2; ++j) {
            max_value = max(max_value, ans[j] + ans[i - j]);
        }
        ans[i] = max_value;
    }
    return ans[n];
}
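A quick usage sketch, assuming the dynamic-programming function above is in scope (the sample prices are the well-known ones from CLRS, for which the optimum at n = 8 is 22):

#include <cstdio>

int rodCut(int *price, int n); // the dynamic-programming function above

int main()
{
    // price[i-1] is the price of a rod of length i
    int price[] = {1, 5, 8, 9, 10, 17, 17, 20};
    printf("%d\n", rodCut(price, 8)); // prints 22 (cut into lengths 2 + 6)
}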
Your algorithm looks almost correct - you need to be a bit careful when n is odd.
However, it's also exponential in time complexity - you make two recursive calls in each call to rodCutv2. The first algorithm uses memoisation (the price array), so avoids computing the same thing multiple times, and so is faster (it's polynomial-time).
Edit: Actually, the first algorithm isn't correct! It never stores values in prices, but I suspect that's just a typo and not intentional.
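For what it's worth, here is a minimal sketch of what actual memoisation of the first recurrence would look like (rodCutMemo, rodCutTopDown, and the memo vector are my names, not from the book; price[i-1] holds the price of a rod of length i):

#include <climits>
#include <vector>
#include <algorithm>

// Top-down rod cutting with memoization; memo[k] caches the best revenue
// for a rod of length k (-1 means "not computed yet").
int rodCutMemo(const std::vector<int> &price, int n, std::vector<int> &memo)
{
    if (n == 0)
        return 0;
    if (memo[n] != -1)
        return memo[n];
    int q = INT_MIN;
    for (int i = 1; i <= n; i++)
        q = std::max(q, price[i - 1] + rodCutMemo(price, n - i, memo));
    return memo[n] = q;
}

int rodCutTopDown(const std::vector<int> &price, int n)
{
    std::vector<int> memo(n + 1, -1);
    return rodCutMemo(price, n, memo);
}

Each length is now computed at most once, which is what brings the running time down from exponential to polynomial.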
I have a question regarding the space (memory) complexity of this particular piece of pseudocode:
int b(int n, int x) {
int sol = x;
if (n>1) {
for (int i = 1; i <= n; i++) {
sol = sol+i;
}
for (int k=0; k<3; k++) {
sol = sol + b(n/3,sol/9);
}
}
return sol;
}
The code gets called as b(n, 0).
My opinion is that the space complexity grows linearly, that is O(n), because as the input n grows, so does the number of variable declarations (sol).
Whereas a friend of mine insists it must be log(n). I didn't quite get his explanation, but he said something about the second for loop and the fact that the three recursive calls happen in sequence.
So, is n or log(n) correct?
The total number of times function b is called is O(n), but the space complexity is O(log(n)).
Recursive calls in your program cause the execution stack to grow. Every time a recursive call takes place, all local variables are pushed onto the stack (the stack size increases), and when a function returns from recursion, its local variables are popped off the stack (the stack size decreases).
So what you want to calculate here is the maximum size of the execution stack, which is the maximum depth of the recursion. That is clearly O(log(n)): the argument shrinks by a factor of 3 at each level, and because the three recursive calls run one after another, at most one of them is on the stack at any time.
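To see this concretely, here is a sketch with hypothetical depth counters (my additions) bolted onto the function; the printed maximum depth grows by one each time n triples:

#include <cstdio>
#include <algorithm>

static int depth = 0, max_depth = 0; // my additions: track recursion depth

int b(int n, int x)
{
    depth++;
    max_depth = std::max(max_depth, depth);
    int sol = x;
    if (n > 1) {
        for (int i = 1; i <= n; i++)
            sol = sol + i;
        for (int k = 0; k < 3; k++)        // the three calls run in sequence,
            sol = sol + b(n / 3, sol / 9); // so only one is on the stack at a time
    }
    depth--;
    return sol;
}

int main()
{
    // n kept small so that sol does not overflow int
    for (int n = 1; n <= 6561; n *= 3) {
        depth = max_depth = 0;
        b(n, 0);
        printf("n=%d  max stack depth=%d\n", n, max_depth);
    }
}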
I think the complexity is O(log_3(n)).
My code is:
vector<int> permutation(N);
vector<int> used(N, 0);

// note: "try" is a reserved word in C++; the snippet uses it as pseudocode-style naming
void try(int which, int what) {
    // try taking the number "what" as the "which"-th element
    permutation[which] = what;
    used[what] = 1;
    if (which == N - 1)
        outputPermutation();
    else
        // try all possibilities for the next element
        for (int next = 0; next < N; next++)
            if (!used[next])
                try(which + 1, next);
    used[what] = 0; // backtrack: mark "what" as unused again
}

int main() {
    // try all possibilities for the first element
    for (int first = 0; first < N; first++)
        try(0, first);
}
I was learning complexity from some website where I came across this code. As per my understanding, the following line iterates N times. So the complexity is O(N).
for (int first=0; first<N; first++)
Next I am considering the recursive call.
for (int next = 0; next < N; next++)
    if (!used[next])
        try(which + 1, next);
So this recursive call involves a number of steps T(n) = N·c + T(0) (where c is some constant amount of work per iteration).
From that, we can say that the complexity of this step is O(N).
Thus the total complexity is O(N * N) = O(N^2).
Is my understanding right?
Thanks!
The complexity of this algorithm is O(N!) (or even O(N! * N) if outputPermutation takes O(N), which seems likely).
This algorithm outputs all permutations of the natural numbers 0..N-1, without repetitions.
The recursive function try sequentially tries to put each of the N elements into position which, and for each attempt it recursively invokes itself for the next position, until which reaches N-1. Moreover, at recursion depth which, try is actually invoked only (N - which) times, because at each level some elements are already marked as used, which eliminates repetitions. Thus the algorithm takes N * (N - 1) * (N - 2) * ... * 1 = N! steps.
It is a recursive function. The function try calls itself recursively, so there is a loop in main(), a loop in try(), a loop in the recursive call to try(), a loop in the next recursive call to try(), and so on.
You need to analyse very carefully what this function does, or you will get a totally wrong result (as you did). You might consider actually running this code, with values of N = 1 to 20, and measuring the time. You will see that it is most definitely not O(N^2). Actually, don't skip any of the values of N; you will see why.
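Here is a sketch of that experiment (hypothetical instrumentation: I renamed try to tryNext, since try is a reserved word in C++, added a call counter, and dropped the output so only the growth is measured):

#include <cstdio>
#include <vector>

int N;
std::vector<int> permutation, used;
long long calls = 0; // my addition: counts invocations

void tryNext(int which, int what) // renamed from "try"
{
    ++calls;
    permutation[which] = what;
    used[what] = 1;
    if (which == N - 1) {
        // outputPermutation() omitted: we only measure the call count
    } else {
        for (int next = 0; next < N; next++)
            if (!used[next])
                tryNext(which + 1, next);
    }
    used[what] = 0; // backtrack
}

int main()
{
    for (N = 1; N <= 8; N++) {
        permutation.assign(N, 0);
        used.assign(N, 0);
        calls = 0;
        for (int first = 0; first < N; first++)
            tryNext(0, first);
        printf("N=%d  calls=%lld\n", N, calls);
    }
}

The call counts grow factorially, not quadratically.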
I'm familiar with other sorting algorithms and the worst I've heard of in polynomial time is insertion sort or bubble sort. Excluding the truly terrible bogosort and those like it, are there any sorting algorithms with a worse polynomial time complexity than n^2?
Here's one, implemented in C#:
public void BadSort<T>(T[] arr) where T : IComparable
{
    for (int i = 0; i < arr.Length; i++)
    {
        var shortest = i;
        // Find the first j >= i whose element is <= every element after it,
        // i.e. the position of the minimum of arr[i..] - computed the hard way.
        for (int j = i; j < arr.Length; j++)
        {
            bool isShortest = true;
            for (int k = j + 1; k < arr.Length; k++)
            {
                if (arr[j].CompareTo(arr[k]) > 0)
                {
                    isShortest = false;
                    break;
                }
            }
            if (isShortest)
            {
                shortest = j;
                break;
            }
        }
        // Swap the minimum into position i (selection sort's usual step).
        var tmp = arr[i];
        arr[i] = arr[shortest];
        arr[shortest] = tmp;
    }
}
It's basically a really naive sorting algorithm, coupled with a needlessly-complex method of calculating the index with the minimum value.
The gist is this:
For each index:
    find the element from this point forward which,
    when compared with all the elements after it, ends up being <= all of them;
    swap this "shortest" element with the element at this index.
The innermost loop (with the comparison) will be executed O(n^3) times in the worst case (for example, when the smallest remaining element always sits at the end of the array, as in [2, 3, ..., n, 1]), and every iteration of the outer loop puts one more element into the correct place, getting you just a bit closer to being fully sorted.
If you work hard enough, you could probably find a sorting algorithm with just about any complexity you want. But, as the commenters pointed out, there's really no reason to seek out an algorithm with a worst-case like this. You'll hopefully never run into one in the wild. You really have to try to come up with one this bad.
Here's an example of an elegant algorithm, called slowsort, which runs in Ω(n^(log(n)/(2+ε))) for any positive ε:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.116.9158&rep=rep1&type=pdf (section 5).
Slow Sort
Slowsort returns the sorted vector after performing SlowSort.
It is a sorting algorithm of humorous nature, and not useful in practice.
It is based on the principle of multiply and surrender, a tongue-in-cheek parody of divide and conquer.
It was published in 1986 by Andrei Broder and Jorge Stolfi in their paper Pessimal Algorithms and Simplexity Analysis.
This algorithm multiplies a single problem into multiple subproblems.
It is interesting because it is provably the least efficient sorting algorithm that can be built asymptotically, under the restriction that such an algorithm, while being slow, must still always be making progress towards a result.
void SlowSort(vector<int> &a, int i, int j)
{
    if (i >= j)
        return;
    int m = i + (j - i) / 2;  // midpoint
    int temp;
    SlowSort(a, i, m);        // recursively sort the first half
    SlowSort(a, m + 1, j);    // recursively sort the second half
    if (a[j] < a[m])          // put the larger of the two halves' maxima at the end
    {
        temp = a[j];
        a[j] = a[m];
        a[m] = temp;
    }
    SlowSort(a, i, j - 1);    // sort everything except that maximum, again
}
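A quick driver, assuming the SlowSort definition above is in scope (the sample data is mine):

#include <cstdio>
#include <vector>
using std::vector;

void SlowSort(vector<int> &a, int i, int j); // the definition above

int main()
{
    vector<int> a = {5, 1, 4, 2, 3}; // keep it tiny: slowsort is, well, slow
    SlowSort(a, 0, (int)a.size() - 1);
    for (int x : a)
        printf("%d ", x); // prints 1 2 3 4 5
    printf("\n");
}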
Here's the code I've implemented, in a nutshell. The two inner for loops should have a complexity of O(n^2), where n = vertices. I just can't figure out the overall time complexity with the outer for loop included. I think it's going to be O(E * n^2), where E is the number of edges and n is the number of vertices.
int vertices;
for (int Edges = 0; Edges < vertices - 1; Edges++)
{
    for (int i = 0; i < vertices; i++)
        for (int j = 0; j < vertices; j++)
        {
            // TASKS
        }
}
This code is for Prim's algorithm. I can post the whole code if you want. Thanks :)
Ahh!!! What is so typical about it!
In your inner loops you have variables named i & j, so you figured out the complexity easily.
Just with the addition of the Edges variable, which is not at all different from the other two, you got confused! Check the number of iterations!!!
The outer loop would run VERTICES-1 iterations.
Therefore, complexity = (VERTICES-1) * (VERTICES) * (VERTICES) = VERTICES^3 - VERTICES^2.
The program's complexity would be O(VERTICES^3), or O(n^3) where n = vertices...
Sigma notation makes things clear:
Σ (Edges = 1..V-1) Σ (i = 1..V) Σ (j = 1..V) 1 = (V-1) · V · V = O(V^3)