What is the big O notation of the following function:
n^2 + n log(n·2^n)
We can use some identities on the expression you provided:
   n^2 + n log(n·2^n)
is:
   n^2 + n[log n + log(2^n)]
is:
   n^2 + n[log n + n log 2]
Now in terms of asymptotic complexity, log n + n log 2 is O(n), so n[log n + n log 2] is O(n·n) = O(n^2), and the big O for the whole expression is:
   O(n^2 + n^2) = O(n^2)
I want to identify the time complexity of the loops below.
Are these the right thoughts about time complexity?
Loop 1
for (auto i = 1; n > 0; n -= i, i += 2) {}
My thoughts: O(n)
Because i only changes linearly, and as n → +∞, the n − i part doesn't matter.
Loop 2
for (auto i = 1; n > 0; n -= i, i += i / 2) {}
My thoughts: O(n)
Because we have a geometric progression of i:
   i_k = i_1 · (3/2)^(k − 1)
The first is O(√n)
Let's first rewrite the loop so that it does not change n, as that is confusing. Let's introduce m to take that changing role:
for (auto i = 1, m = n; m > 0; m -= i, i += 2) {}
i follows the sequence 1, 3, 5, 7, ...
After k iterations:
   m = n − Σ_{i=1..k} (2i − 1)
which is (the sum of the first k odd numbers, see Wikipedia):
   m = n − k^2
The loop ends when n − k^2 ≤ 0, i.e. when √n ≤ k. As k is a measure of the time complexity, we have O(√n).
The second is O(log n)
The value of i will indeed follow a geometric sequence, assuming i actually grows by a factor 3/2 each iteration (note that with an integer i starting at 1, i / 2 truncates to 0 and i would never change, making the loop O(n); real-valued division is assumed here). Let's again introduce m as the changing value (instead of n). The amounts subtracted are 1, 3/2, (3/2)^2, ..., so after k iterations:
   m = n − Σ_{i=0..k−1} (3/2)^i
which is (by the geometric series formula, see Wikipedia):
   m = n − ((3/2)^k − 1)/((3/2) − 1)
   m = n − 2((3/2)^k − 1)
The loop ends when n − 2((3/2)^k − 1) ≤ 0, or
   n/2 + 1 ≤ (3/2)^k, or
   log_1.5(n/2 + 1) ≤ k
Since k is a measure of the time complexity, we have O(log n).
Can you give the asymptotic analysis of this?
i = 1;
k = 1;
while (k < n) {
    k = k + i;
    i = i + 1;
}
I have tried analysing it but I got stuck at the loop.
Let j be the number of the iteration; then the values of k and i evolve as follows:

    j    k              i
    0    1              1
    1    1+1            2
    2    1+1+2          3
    3    1+1+2+3        4
    4    1+1+2+3+4      5
So we see that k is 1 + Σ_{i=1..j} i
This sum is a triangular number, and so:
   k = 1 + j(j+1)/2
And if we fix j to the total number of iterations, then:
   1 + j(j−1)/2 < n ≤ 1 + j(j+1)/2
So n is O(j^2), and thus j is O(√n).
The number of iterations j is a measure of the complexity, so the complexity is O(√n).
I have:
   f(n) = 2^(n+5) + n^2
   g(n) = 2^(n+1) − 1
I must show whether:
   f(n) = Ω(g(n)) and/or
   f(n) = O(g(n))
I know that I don't need to take the n^2 in f(n) or the −1 in g(n) into account, because 2^(n+5) and 2^(n+1) have the higher complexity. But I'm not really sure how to find the lower and the upper bound.
My approach would be to say that the +5 in f(n) and the +1 in g(n) don't change anything about the complexity, which means that both of the above statements are true and f(n) = Θ(g(n)). But I have no way to prove this.
We have
   f(n) = 2^(n+5) + n^2
   g(n) = 2^(n+1) − 1
f(n) = Ω(g(n)) is true when we can find a c such that f(n) ≥ c·g(n) for all n greater than a chosen n0. We see that even with c = 1 and n0 = 0 this is true.
f(n) = O(g(n)) is true when we can find a c such that f(n) ≤ c·g(n) for all n greater than a chosen n0:
   2^(n+5) + n^2 ≤ c·(2^(n+1) − 1)
Let's choose c = 2^5 = 32; then we must show that for large enough n:
   2^(n+5) + n^2 ≤ 2^5·(2^(n+1) − 1)
   2^(n+5) + n^2 ≤ 2·2^(n+5) − 32
   n^2 ≤ 2^(n+5) − 32
We can see that this is true for all n ≥ 0.
I'm working on my DSA. I came across a question for which the recursive function looks something like this:
private int func(int currentIndex, int[] arr, int[] memo) {
    if (currentIndex >= arr.length)
        return 0;
    if (memo[currentIndex] > -1)
        return memo[currentIndex];
    int sum = 0;
    int max = Integer.MIN_VALUE;
    for (int i = currentIndex; i < currentIndex + 3 && i < arr.length; i++) {
        sum += arr[i];
        max = Math.max(max, sum - func(i + 1, arr, memo));
    }
    memo[currentIndex] = max;
    return memo[currentIndex];
}
If I'm not using memoization, then by intuition at every step I have 3 choices, so the complexity should be 3^n. But how do I prove it mathematically?
So far I could come up with this: T(n) = T(n-1) + T(n-2) + T(n-3) + c
Also, what should be the complexity if I use memoization? I'm completely blank here.
The recurrence relation without memoization is:
   T(n) = T(n−1) + T(n−2) + T(n−3) + c
This is similar to the Tribonacci sequence, which corresponds to a complexity of about O(1.84^n).
With memoization it becomes a lot easier, as then the function runs in constant time when it is called with an argument that already has the result memoized. In practice this means that when one particular execution of the for loop has executed the first recursive call, the two remaining recursive calls will run in constant time, and so the recurrence relation simplifies to:
   T(n) = T(n−1) + c
...which is O(n).
Using a recurrence tree, solve the recurrence T(n) = T(n − 1) + O(n). Please explain it.
You build the recurrence tree by repeatedly expanding the term on the right side. This tree is actually just a chain, as each node in that tree only has one child:
O(n)
|
O(n-1)
|
O(n-2)
|
...
The height of this tree is n, and the sum of the terms is
   Σ_{i=1..n} O(i)
...which is:
   O(Σ_{i=1..n} i)
...which is (cf. triangular numbers):
   O(n(n+1)/2)
...which is:
   O(n^2).