Complexity of a nested geometric sequence - algorithm

The code below is actually bounded by O(n^2); could anyone please explain why?
for (int i = 1; i < n; i++) {
    int j = i;
    while (j < n) {
        // do operation with cost O(j)
        j = j * 3;
    }
}

It's not that tricky.
Your inner loop's complexity forms a geometric progression with total complexity O(n).
Without filling in the details all the way (this seems like a HW problem), note that the formula for a geometric series is
a_0 (q^k - 1) / (q - 1), (a_0 = first element, q = multiplication factor, k = number of terms),
and your q^k here is O(n).
Your outer loop is O(n).
Since it's a nested loop, and the inner term does not depend on the outer index, you can multiply.
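Filling in just the bound itself (a sketch): for a fixed i the inner loop's costs are i, 3i, 9i, ..., with the last term 3^m * i still below n, so
i + 3i + 9i + ... + 3^m * i = i * (3^(m+1) - 1) / 2 < (3/2) * 3^m * i < (3/2) * n = O(n),
and multiplying by the O(n) iterations of the outer loop gives O(n^2).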

The proof is geometric progression :)
For i > n/3, the inner loop executes only once (because in the next iteration j >= n).
So for those 2n/3 iterations of the outer loop the inner loop runs just once and does O(j) work; summed over them, that is O(n^2).
To calculate the complexity of the remaining iterations, set n = n/3 and apply the same argument again.
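Written as a recurrence (a sketch of the same argument), with T(n) the total cost:
T(n) <= c*n^2 + T(n/3) <= c*n^2 * (1 + 1/9 + 1/81 + ...) = (9/8) * c * n^2 = O(n^2),
since shrinking n by a factor of 3 shrinks the n^2 term by a factor of 9, so the unrolled series is geometric and converges.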

Methodically, using Sigma notation, you may do the following:
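(A sketch of that computation, ignoring floors and constant factors: for each i the inner index takes the values i * 3^k for k = 0, 1, ..., log_3(n/i), so the total cost is
Sum_{i=1}^{n-1} Sum_{k=0}^{log_3(n/i)} i * 3^k <= Sum_{i=1}^{n-1} (3/2) * i * 3^(log_3(n/i)) = Sum_{i=1}^{n-1} (3/2) * n = O(n^2).)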

Related

Determining the big-O of three nested for loops with if statement

What is the big-O for the following code :
y = 1;
x = 3;
for (int i = 1; i <= n; i *= 2)
    for (int j = 1; j <= i * i; j++)
        if (i % j == 0)
            for (int k = 1; k <= j; k++)
                y = y * x;
My Thoughts :
Looking at other similar questions, I think the innermost loop is O(n) and the first loop is O(log(n)); as for the middle one, it's O(n^2),
so the overall result would be O(log(n) * n^3).
Is my answer and way of thinking right? I'm new to this, so I hope I can get some help explaining how these loops work.
The innermost loop will run j times when i % j == 0. Since the middle loop runs i^2 times and the condition can only hold when j <= i, at least i^2 - i of those iterations will not satisfy it.
If we denote the number of divisors of i by tau(i), then among j <= i the condition holds only tau(i) times, which means the total work of the innermost loop equals the sum of the divisors of i, which is at most (77/16) i (see this post for the proof).
Hence, the total work of the middle loop together with the inner loop is at most (i^2 - i) + (i - tau(i)) + (77/16) i = i^2 + (77/16) i - tau(i).
We also know that tau(i) is in O(i^(1/log log i)) (see the proof here). Now, to find the complexity of the whole loop, we need to sum the last expression over i = 1, 2, 4, ..., n. Since we only need the asymptotic complexity and we have a sum, we can ignore the lower-order powers of i. Therefore, the time complexity of the whole loop is 1 + 2^2 + (2^2)^2 + ... + (2^2)^log(n) = ((2^2)^(log(n)+1) - 1) / (2^2 - 1) = Theta(n^2) (a geometric sum with ratio 2^2 and log(n) terms).
In sum, the time complexity of the specified code is Theta(n^2), which in particular is in O(n^2).
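If you want to sanity-check that conclusion empirically, a small counter along these lines (my own sketch; the class name and output format are just for illustration, and n is restricted to powers of two) tallies the work of all three loops and compares it with n^2; the ratio settles around 4/3:
public class NestedLoopCount {
    public static void main(String[] args) {
        for (int n = 1 << 4; n <= 1 << 12; n <<= 2) {
            long ops = 0;
            for (long i = 1; i <= n; i *= 2) {          // outer loop: i = 1, 2, 4, ..., n
                for (long j = 1; j <= i * i; j++) {     // middle loop: i^2 iterations
                    ops++;                              // the divisibility test itself
                    if (i % j == 0) {
                        ops += j;                       // innermost loop runs j times
                    }
                }
            }
            System.out.printf("n = %5d   ops = %12d   ops / n^2 = %.3f%n",
                    n, ops, ops / ((double) n * n));
        }
    }
}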

When do you multiply by the average Big O of nested loop?

When do we multiply the Big O of the outer loop by the average Big O of the inner loop?
for (int i = n; i > 0; i /= 2) {
    for (int j = 0; j < i; j++) {
        // statement
    }
}
Answer: O(n)
In this question, the outer loop is O(log n). Since the inner loop executes some number of times depending on 'i', the average Big O is taken. This results in a summation of n + n/2 + n/4 + ... = 2n, which is then divided by the number of outer iterations to get the average: O(2n / log n).
Hence O(log n) * O(2n / log n) = O(n).
But then why don't we need to take the average for this question?
for (i = 0; i < n; i++) {
    for (j = 0; j < i * i; j++) {
    }
}
Answer: O(n^3)
The outer loop is O(n). The inner loop is O(n^2). But why?
Doesn't the inner loop execute based on a value of 'i' in the outer loop? So why isn't the average taken like the question above? - resulting in something like O(n^2 / n) for the inner loop.
The complexity of an algorithm is determined by counting how many operations it performs when executed. For a loop which iterates k times, where m is the mean number of operations on each iteration, the total number of operations performed is always total = k * m, simply because the mean is defined to be m = total / k.
It also makes sense to do this in big O notation: if a loop iterates O(k) times, and performs a mean of O(m) operations on each iteration, then the total number of operations is O(k * m).
The problem is that in your second example, you are not calculating the mean correctly. When you say "The inner loop is O(n^2)", this is already the mean; you are considering the number of iterations of the inner loop for one iteration of the outer loop. The total number of iterations is O(n^3), and the number of iterations of the outer loop is n, so the mean is O(n^3 / n) = O(n^2) as expected.
It is more natural to do this calculation the other way around: after observing that the mean is O(n^2), and the outer loop iterates O(n) times, the total is O(n^2 * n) = O(n^3).
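Concretely (a sketch with exact counts): the inner loop performs i^2 iterations for each value of i, so
total = Sum_{i=0}^{n-1} i^2 = (n-1) * n * (2n-1) / 6 = Theta(n^3),   and   mean = total / n = Theta(n^2),
which gives the same O(n^3) answer whichever direction you compute it in.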

Time Complexity of algorithm with nested if statement and for loops

So I'm trying to solve this question, but my friends and I are a bit confused by it:
for (Int64 i = 1; i < Math.Pow(2, n); i = 2 * i)
{
    if (i <= 4 || i >= Math.Pow(2, n - 2))
    {
        for (Int64 j = n; j >= 0; j = j - 2)
        {
            // constant number of C operations
        }
    }
    else
    {
        for (Int64 j = n; j > 1; j = (Int64) Math.Ceiling((double) j / 2))
        {
            // constant number of C operations
        }
    }
}
I get that the outer loop is O(n), but I can't work out what the inner part is. I'm pretty sure it's just O(n), since the larger of the inner for loops is the top one at O(n), as opposed to the lower one, which is O(log n).
Is this the right way of thinking about it, or am I wrong?
but I can't work out what the inner part is, I'm pretty sure it's just O(n) since the larger of the inner for loops is the top one at O(n) as opposed to the lower one which is O(log n)
The top inner loop is indeed Θ(n). It does not follow that the inner part is Θ(n), at least not in the sense that you should multiply this by the Θ(n) of the outer loop, obtaining an overall Θ(n^2). This is because the top loop in the inner part is executed only a constant number of times. The Θ(n) of the top inner loop should be added to, not multiplied by, that of the outer loop.
The overall complexity of this code is Θ(n log(n)). Specifically, it is Θ(n) * Θ(log(n)) (the total cost of executing the bottom inner loop over all its runs) + Θ(n) + Θ(n) (the total cost of executing the top inner loop over all its runs).
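To make "a constant number of times" concrete (a sketch): the outer variable takes the values i = 1, 2, 4, ..., 2^(n-1), i.e. n iterations in total. The condition i <= 4 holds only for i in {1, 2, 4}, and i >= 2^(n-2) holds only for i in {2^(n-2), 2^(n-1)}, so the Θ(n) branch runs at most 5 times; the remaining roughly n - 5 iterations take the Θ(log n) branch. That gives
5 * Θ(n) + (n - 5) * Θ(log n) = Θ(n log n).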

Time complexity analysis inconsistency

I have this code :
int fun(int n)
{
    int count = 0;
    for (int i = n; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            count += 1;
    return count;
}
The time complexity of this code can be thought of as O(n) because O(n+n/2+n/4+...) = O(n)
By that logic, the time complexity of this snippet can also be argued to be O(n) :
for (i = 1; i < n; i *= 2)
    // O(1) statements
Since O(1 + 2 + 4 + ... + n/4 + n/2) = O(n). But since the loop runs log(n) times, it can be log(n) too.
Why is the former one not log(n) times for the outer loop * log(n) times for the inner loop, i.e. log(n) * log(n)?
What am I doing wrong?
The first snippet has the outer loop that executes O(log n) times, and each iteration the inner loop executes O(i) times. If you sum any number of terms of the form n / 2^k, you'll get O(n).
The second piece of code has O(log n) iterations of O(1) operations, and sum of logarithmic amount of constants is still logarithmic.
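Writing the first sum out (a sketch):
n + n/2 + n/4 + ... + 1 = Sum_{k=0}^{log n} n / 2^k <= n * Sum_{k=0}^{infinity} 1 / 2^k = 2n = O(n),
whereas the second snippet only contributes 1 + 1 + ... + 1 (log n terms) = O(log n).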
In the first example, you don't have an O(1) statement inside your loop; you have for (int j = 0; j < i; j++) count += 1. If in your second example you put the same inner loop as in the first example, you are back to the same complexity. The first loop is not O(n*log(n)); this is easy to demonstrate because you can find an upper bound of 2n operations, which is equivalent to O(n).
The time complexity of the 2nd one should not be calculated as a series O(1+2+4+..+n/4+n/2) = O(n), because it is not that series.
Notice the first one. It is being calculated as a series because one counts the number of times the inner for loop executes and then add all of them (series) to get the final time complexity.
When i=n inner for loop executes n times
When i=(n/2) inner for loop executes n/2 times
When i=(n/4) inner for loop executes n/4 times
and so on..
But in the second one, there is no series to add. It just comes down to the equation 2^x = n, which evaluates to x = log(n).
The equation 2^x = n can be obtained by noticing that i starts at 1 and is multiplied by 2 on each iteration until it reaches n.
So one needs to find out how many times 1 must be multiplied by 2 to reach n.
Thus the equation 2^x = n, which one then solves for x.
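If it helps, here is a small counting sketch (not from the original question; the class name is just for illustration) that tallies the iterations of both snippets for a power-of-two n; the first count comes out near 2n and the second is exactly log2(n):
public class LoopIterationCounts {
    public static void main(String[] args) {
        int n = 1 << 20;                      // a power of two keeps the arithmetic exact

        long first = 0;
        for (int i = n; i > 0; i /= 2)        // first snippet: i = n, n/2, n/4, ..., 1
            for (int j = 0; j < i; j++)       // inner loop does i units of work
                first++;

        long second = 0;
        for (int i = 1; i < n; i *= 2)        // second snippet: i = 1, 2, 4, ..., n/2
            second++;                         // O(1) body

        System.out.println("first  = " + first + "  (2n = " + (2L * n) + ")");
        System.out.println("second = " + second + "  (log2(n) = " + Integer.numberOfTrailingZeros(n) + ")");
    }
}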

Why Bubble sort complexity is O(n^2)?

As I understand it, the complexity of an algorithm is the maximum number of operations performed while sorting. So the complexity of Bubble sort should be the sum of an arithmetic progression (from 1 to n-1), not n^2.
The following implementation counts the number of comparisons:
public int[] sort(int[] a) {
    int operationsCount = 0;
    for (int i = 0; i < a.length; i++) {
        for (int j = i + 1; j < a.length; j++) {
            operationsCount++;
            if (a[i] > a[j]) {
                int temp = a[i];
                a[i] = a[j];
                a[j] = temp;
            }
        }
    }
    System.out.println(operationsCount);
    return a;
}
The output for an array with 10 elements is 45, so it's the sum of the arithmetic progression from 1 to 9.
So why is Bubble sort's complexity n^2, and not S(n-1)?
This is because big-O notation describes the nature of the algorithm. The major term in the expansion n * (n-1) / 2 is n^2, and so as n increases all other terms become insignificant.
You are welcome to describe it more precisely, but for all intents and purposes the algorithm exhibits behaviour that is of the order n^2. That means if you graph the time complexity against n, you will see a parabolic growth curve.
Let's do a worst case analysis.
In the worst case, the if (a[i] > a[j]) test will always be true, so the next 3 lines of code will be executed in each loop step. The inner loop goes from j = i+1 to n-1, so it will execute Sum_{j=i+1}^{n-1}{k} elementary operations (where k is a constant number of operations that involve the creation of the temp variable, array indexing, and value copying). Solving the summation gives k(n-i-1) elementary operations. The external loop repeats these k(n-i-1) elementary operations from i=0 to i=n-1 (i.e. Sum_{i=0}^{n-1}{k(n-i-1)}). So, again, solving the summation shows that the final number of elementary operations is proportional to n^2. The algorithm is quadratic in the worst case.
As you are incrementing the variable operationsCount before running any code in the inner loop, we can say that k (the number of elementary operations executed inside the inner loop) in our previous analysis is 1. So, solving Sum_{i=0}^{n-1}{n-i-1} gives n^2/2 - n/2, and substituting n with 10 gives a final result of 45, just the same result that you got by running the code.
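Writing that last sum out explicitly (a sketch):
Sum_{i=0}^{n-1} (n - i - 1) = (n-1) + (n-2) + ... + 1 + 0 = n(n-1)/2 = n^2/2 - n/2,
and for n = 10 this is 10 * 9 / 2 = 45, matching the printed operationsCount.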
Worst case scenario:
the worst case is the longest running time of the algorithm over any input of size n,
so we will consider a completely reversed (descending) list for this worst-case scenario:
int[] arr = new int[]{9, 6, 5, 3, 2};
Number of passes required to completely sort it = n - 1   // n = number of elements in the list
The 1st pass requires (n-1) swaps + the 2nd pass requires (n-2) swaps + ... + the (n-1)th pass requires (n-(n-1)) = 1 swap,
i.e. (n-1) + (n-2) + ... + 1 = (n-1)/2 * (a + l)   // sum of an AP with n-1 terms
= (n-1)/2 * ((n-1) + 1) = n(n-1)/2,
so in big-O notation this is O(n^2).
