What is the loops' time complexity?

I want to identify the time complexity of the loops below.
Are these the right thoughts about time complexity?
Loop 1
for (auto i = 1; n > 0; n -= i, i += 2) {}
My thoughts: O(n)
Because i changes only linearly, and as n → +∞ the n − i term doesn't matter.
Loop 2
for (auto i = 1; n > 0; n -= i, i += i / 2) {}
My thoughts: O(n)
Because we have a geometric progression of i:
i_n = i_1 · (3/2)^(n−1)

The first is O(√n)
Let's first rewrite it to not change n, as that is confusing. Let's introduce m to take on that changing role:
for (auto i = 1, m = n; m > 0; m -= i, i += 2) {}
i follows the sequence 1, 3, 5, 7, ...
After k iterations:
      m = n − Σ_{i=1..k} (2i − 1)
which is (by the formula for the sum of the first k odd numbers):
      n − k²
The loop ends when n − k² ≤ 0, i.e. when √n ≤ k. As k is a measure of the time complexity, we have O(√n).
The second is O(log n)
The value of i will indeed follow a geometric sequence. Let's again introduce m as the changing value (instead of n); then when k iterations have been made:
      m = n − Σ_{i=0..k−1} (3/2)^i,
which is (by the geometric series formula):
      m = n − ((3/2)^k − 1)/((3/2) − 1)
      m = n − 2((3/2)^k − 1)
The loop ends when n − 2((3/2)^k − 1) ≤ 0, or
      n/2 + 1 ≤ (3/2)^k, or
      log_{1.5}(n/2 + 1) ≤ k
Since k is a measure of the time complexity, we have O(log n).

Related

Asymptotic analysis of a program

Can you give the asymptotic analysis of this
i = 1;
k = 1;
while (k < n) {
    k = k + i;
    i = i + 1;
}
I have tried analysing but I was stuck at the inner loop
Let m be the number of iterations performed; then the values of k and i evolve as follows:

      m    k              i
      0    1              1
      1    1+1            2
      2    1+1+2          3
      3    1+1+2+3        4
      4    1+1+2+3+4      5

So we see that k is 1 + Σ_{i=1..m} i
This sum is a triangular number, and so:
      k = 1 + m(m+1)/2
And if we fix m to the total number of iterations, then:
      1 + m(m−1)/2 < n ≤ 1 + m(m+1)/2.
So n is O(m²), and thus m is O(√n).
The number of iterations m is a measure of the complexity, so it is O(√n).

What is the big O notation of the following function

What is the big O notation of the following function:
n^2 + n log(n 2^n)
We can use some identities on the expression you provided:
      n^2 + n log(n·2^n)
is:
      n^2 + n[log n + log(2^n)]
is:
      n^2 + n[log n + n log 2]
Now in terms of asymptotic complexity, O(log n + n log 2) = O(n), so then the big O for the whole expression is:
      O(n^2 + n^2) = O(n^2)

Big O Notation of the terms f(n) = 2^(n+5) + n^2 and g(n) = 2^(n+1) - 1

I have:
f(n) = 2^(n+5) + n^2
g(n) = 2^(n+1) − 1
I must show whether:
f(n) = Ω(g(n)) and/or
f(n) = O(g(n))
I know that you don't need to acknowledge the n^2 in f(n) or the −1 in g(n) because 2^(n+5) and 2^(n+1) have the higher complexity. But I'm not really sure how to find the lower and the upper bound.
My approach would be to say that the +5 in f(n) and the +1 in g(n) don't change anything about the complexity, which means that both of the above statements are true and f(n) = Θ(g(n)). But I have no way to prove this.
We have
f(n) = 2^(n+5) + n^2
g(n) = 2^(n+1) − 1
f(n) = Ω(g(n)) is true when we can find a k such that f(n) ≥ k·g(n) for all n greater than a chosen n₀.
We see that even with k = 1 and n₀ = 0 this is true.
f(n) = O(g(n)) is true when we can find a k such that f(n) ≤ k·g(n) for all n greater than a chosen n₀:
      2^(n+5) + n^2 ≤ k(2^(n+1) − 1)
Let's choose k = 2^5, then we must show that for large enough n:
      2^(n+5) + n^2 ≤ 2^5(2^(n+1) − 1)
      2^(n+5) + n^2 ≤ 2·2^(n+5) − 32
      n^2 ≤ 2^(n+5) − 32
We can see that this is true for all n greater than 1.

Time Complexity of a recursive function calling itself thrice

I'm working on my DSA. I came across a question for which the recursive func looks something like this:
private int func(int currentIndex, int[] arr, int[] memo) {
    if (currentIndex >= arr.length)
        return 0;
    if (memo[currentIndex] > -1)
        return memo[currentIndex];
    int sum = 0;
    int max = Integer.MIN_VALUE;
    for (int i = currentIndex; i < currentIndex + 3 && i < arr.length; i++) {
        sum += arr[i];
        max = Math.max(max, sum - func(i + 1, arr, memo));
    }
    memo[currentIndex] = max;
    return memo[currentIndex];
}
If I'm not using memoization, by intuition at every step I've 3 choices so the complexity should be 3^n. But how do I prove it mathematically?
So far I could come up with this: T(n) = T(n-1) + T(n-2) + T(n-3) + c
Also, what should be the complexity if I use memoization? I'm completely blank here.
The recurrence relation without memoization is:
      T(n) = T(n−1) + T(n−2) + T(n−3) + c
This is similar to the Tribonacci sequence, which corresponds to a complexity of about O(1.84^n).
With memoization it becomes a lot easier, as then the function runs in constant time when it is called with an argument that already has its result memoized. In practice this means that when one particular execution of the for loop has completed the first recursive call, the two remaining recursive calls will run in constant time, and so the recurrence relation simplifies to:
      T(n) = T(n−1) + c
...which is O(n).

Using recurrence tree, solve the recurrence T(n) = T(n βˆ’ 1) + O(n) [closed]

Please explain it. Using a recurrence tree, solve the recurrence T(n) = T(n − 1) + O(n)
You build the recurrence tree by repeatedly expanding the term on the right side. This tree is actually just a chain, as each node in that tree only has one child:
O(n)
|
O(n-1)
|
O(n-2)
|
...
The height of this tree is n, and the sum of the terms is
      Σ_{i=1..n} O(i)
...which is:
      O( Σ_{i=1..n} i )
...which is (cf. triangular numbers):
      O( n(n+1)/2 )
...which is:
      O(n^2).

Resources