Big O notation on some examples

Could someone help me out with the time complexity analysis of this pseudocode?
I'm looking for the worst-case complexity here, and I can't figure out whether it's O(n^4), O(n^5), or something else entirely. If you could go into detail about how exactly you solved it, that would be much appreciated.
sum = 0
for i = 1 to n do
    for j = 1 to i*i do
        if j mod i == 0 then
            for k = 1 to j do
                sum = sum + 1

First loop: it runs n times, so O(n).
Second loop: it runs i*i times, and i is on average about n/2; you could write an exact formula, but it's O(n²) per outer iteration.
Third loop: the condition j mod i == 0 holds for j = i, 2i, ..., i*i, i.e. for i of the i² values of j, so the k-loop runs on roughly a 1/i (on average about 1/n) fraction of the second loop's iterations, and when it runs it does j = O(n²) steps.
So it's O(n * n² * (1 + (1/n) * n²)), i.e. O(n^4). The 1/n comes from the fact that the third loop runs on roughly a 1/n fraction of the second loop's iterations.
It's all a ballpark estimate with no rigorous proof, but it is right; you can confirm it by running code yourself.
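To make that ballpark concrete, here is a small brute-force check in Python that counts the executed increments and compares them against n^4. This is a sketch of my own, not code from the post; the helper name count_ops is made up.

# Count how many times `sum = sum + 1` runs for a given n (hypothetical helper).
def count_ops(n):
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i * i + 1):
            if j % i == 0:        # true for j = i, 2i, ..., i*i: i of the i^2 values
                count += j        # the k-loop then performs exactly j increments
    return count

for n in (10, 20, 40):
    c = count_ops(n)
    print(n, c, c / n**4)         # the ratio settles near a constant (~1/8), i.e. Theta(n^4)

The leveling-off of the last column as n grows is exactly the Θ(n^4) behaviour estimated above.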


Run-Time complexities of the following functions

I need some help checking whether my run-time complexities for these functions are correct. I'm currently learning the concepts in class, and I've looked at videos and such, but I can't find any that explain these tougher ones, so I'm hoping I can get some help here and find out if I'm doing it right.
sum = 0
for i = 1 to n*n
    for j = 1 to i * i * i
        sum++
For this one I am thinking the answer is O(n^5), because the outer loop runs n^2 times while the inner loop runs n^3 times, and together that makes n^5.
sum = 0
for i = 1 to n^2 // O(n^2) times
    j = i
    while j > 0 // O(n+1) since the while loop will check one more time if the loop is valid
        sum++
        j = (j div 5)
For this run time I'm assuming it's going to run O(n^3 + 1) times, since the outer loop runs n^2 times and the while loop will be n+1, and together that's n^3 + 1.
for i = 1 to n // n times
    for j = 1 to n { // n^2 times
        C[i,j] = 0
        for k = 1 to n // n^3 times?
            C[i,j] = C[i,j] + A[i,k]*B[k,j]
    }
so for this one I'm thinking it's O(n^6) but I am really iffy on this one. I have seen some examples online where people will figure the loop to be O(n log n) but I am totally lost on how that is found. Any help would be greatly appreciated!
Careful: only parts of your reasoning hold up. For the first algorithm, the inner bound depends on i, so the body runs sum[i: 1..n^2] i^3 ≈ n^8/4 times, which is Θ(n^8), not O(n^5): when i reaches its largest value n^2, the inner loop runs (n^2)^3 = n^6 times, not n^3. For the third, your cumulative comments already give the answer: the innermost statement executes n·n·n = n^3 times, so this (standard matrix multiplication) is O(n^3), not O(n^6); the counts n, n^2 and n^3 are running totals, so you must not multiply them together again. The second is also off. The inner loop
while j > 0 // O(n+1) since the while loop will check one more time if the loop is valid
    sum++
    j = (j div 5)
starts with j equal to i and divides j by 5 at each iteration, so it runs about log(i) times (log base 5, but the base doesn't matter for big-O). In turn, i varies from 1 to n^2, so the total execution time is
sum[i: 1..n^2] log(i)
By the product property of logarithms this sum equals log((n^2)!). Using Stirling's approximation for the factorial, one obtains a time complexity of O(n^2 log(n^2)) = O(2 n^2 log(n)) = O(n^2 log(n)).
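As a sanity check, here is a quick empirical count of the while-loop iterations against n^2 log n, written in Python. The harness is my own sketch, not part of the original answer.

import math

# Count the while-loop iterations of the second algorithm (hypothetical harness).
def count_iters(n):
    total = 0
    for i in range(1, n * n + 1):
        j = i
        while j > 0:          # runs floor(log5(i)) + 1 times
            total += 1
            j //= 5
    return total

for n in (10, 30, 100):
    t = count_iters(n)
    print(n, t, t / (n * n * math.log(n * n, 5)))  # ratio flattens toward a constant

The third column approaching a constant as n grows is the O(n^2 log n) bound derived above.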

is this program quadratic or linear?

I have the following pseudocode:
for i=1 to 3*n
    for j=1 to i*i
        for k=1 to j
            if j mod i=1 then
                s=s+1
            endif
        next k
    next j
next i
When I analyze the number of times the statement s=s+1 is performed, assuming that this operation takes constant time, do I end up with quadratic complexity, or is it linear? The value of n can be any positive integer.
The calculations that I made are the following:
Definitely not quadratic, though still polynomial.
It goes through 3n iterations of the outer loop.
On each of those, the j-loop does up to 9n^2 iterations (j runs to i*i <= 9n^2).
On each of those, the k-loop does up to 9n^2 more (k runs to j <= 9n^2).
So I think it would be O(n^5).
When talking about running time, you should always make explicit what you are measuring the running time in terms of.
If we assume you are talking about running time in terms of n, the answer is O(n^5). This is because, once we get rid of the constant factors, what you are doing boils down to this:
do n times:
    do n^2 times:
        do n^2 times:
            do something
And n * n^2 * n^2 = n^5
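One subtlety worth noting: the question literally asks how often s=s+1 runs, and the condition j mod i = 1 holds for only about one in every i values of j, so the increment itself fires Θ(n^4) times, while the loop machinery (executing and testing the k-loop) still costs Θ(n^5). Here is a small Python sketch checking both counts; the harness is my own, and it folds the k-loop into arithmetic for speed.

# Count s = s + 1 executions and total k-loop iterations (hypothetical harness).
def counts(n):
    s_hits = 0   # executions of s = s + 1
    k_iters = 0  # iterations of the innermost loop (the dominant cost)
    for i in range(1, 3 * n + 1):
        for j in range(1, i * i + 1):
            k_iters += j       # the k-loop always runs j times
            if j % i == 1:     # when the test holds for (i, j), it holds on all j passes
                s_hits += j
    return s_hits, k_iters

for n in (4, 8, 16):
    s_hits, k_iters = counts(n)
    print(n, s_hits / n**4, k_iters / n**5)  # both ratios level off: Theta(n^4), Theta(n^5)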

theoretical analysis of comparisons

I was first asked to develop a simple sorting algorithm that sorts an array of integers in ascending order and put it into code:
int i, j;
for (i = 0; i < n - 1; i++)
{
    if (A[i] > A[i+1])
        swap(A, i+1, i);
    for (j = n - 2; j > 0; j--)
        if (A[j] < A[j-1])
            swap(A, j-1, j);
}
Now that I have the sort function, I'm asked to do a theoretical analysis for the running time of the algorithm. It says that the answer is O(n^2) but I'm not quite sure how to prove that complexity.
What I know so far is that the 1st loop runs from 0 to n-1 (so n-1 times), and the 2nd loop from n-2 down to 1 (so n-2 times).
Doing the recurrence relation:
let C(n) = the number of comparisons
for C(2) = C(n-1) + C(n-2)
= C(1) + C(0)
C(2) = 0 comparisons?
C(n) in general would then be: C(n-1) + C(n-2) comparisons?
If anyone could guide my step by step, that would be greatly appreciated.
When doing a "real" big-O time complexity analysis, you select one operation to count, naturally the one that dominates the running time. In your case you could choose either the comparison or the swap, since in the worst case there will be a lot of swaps.
Then you calculate how many times it will be invoked as a function of the input size. So in your case you are quite right with your analysis; you simply do this:
C = O((n - 1)(n - 2)) = O(n^2 - 3n + 2) = O(n^2)
I come up with these numbers by reasoning about the flow of control in your code. You have one outer for-loop, and inside it another for-loop. The outer loop iterates n - 1 times, and the inner one n - 2 times. Since they are nested, the total number of iterations is the product of the two, because for every iteration of the outer loop, the whole inner loop runs, doing n - 2 iterations.
As you might know, you always remove all but the dominating term when doing time complexity analysis.
There is a lot more to say about worst-case and average-case complexity and about lower bounds, but this will hopefully help you grasp how to reason about big-O time complexity analysis.
I've seen a lot of different techniques for actually analyzing the expression, such as your recurrence relation. However, I personally prefer to just reason about the code instead. Few algorithms have upper bounds that are hard to compute; lower bounds, on the other hand, are in general very hard to compute.
Your analysis is correct: the outer loop makes n-1 iterations, and the inner loop makes n-2.
So, for each iteration of the outer loop, you have n-2 iterations on the internal loop. Thus, the total number of steps is (n-1)(n-2) = n^2-3n+2.
The dominating term (which is what matters in big-O analysis) is n^2, so you get O(n^2) runtime.
I personally wouldn't use the recurrence method in this case. Writing the recurrence equation is usually helpful in recursive functions, but in simpler algorithms like this, sometimes it's just easier to look at the code and do some simple math.
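For completeness, here is a direct Python port of the sort (my own translation, with swap inlined) instrumented to count comparisons. If you also count the stray if before the inner loop, each of the n-1 outer iterations performs 1 + (n-2) comparisons, so the instrumented total is exactly (n-1)^2, which is still O(n^2).

import random

# Port of the sorting code above, with a comparison counter (names are my own).
def sort_count(A):
    n = len(A)
    comparisons = 0
    for i in range(n - 1):                  # outer loop: n-1 iterations
        comparisons += 1
        if A[i] > A[i + 1]:
            A[i], A[i + 1] = A[i + 1], A[i]
        for j in range(n - 2, 0, -1):       # inner loop: n-2 iterations
            comparisons += 1
            if A[j] < A[j - 1]:
                A[j], A[j - 1] = A[j - 1], A[j]
    return comparisons

for n in (5, 10, 20):
    A = random.sample(range(100), n)
    c = sort_count(A)
    print(n, c, (n - 1) ** 2, A == sorted(A))   # count matches (n-1)^2 and A ends up sorted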

Time complexity for dependent nested for loop?

Can you explain to me how to find the time complexity of this?
sum=0;
for(k=1;k<=n;k*=2)
    for(j=1;j<=k;j++)
        sum++;
So, I know the outer loop has time complexity O(log n), but since the number of iterations of the inner loop depends on the value of the outer loop's variable, the complexity of this algorithm is not O(n log n).
The book says it is O(n).
I really don't understand how it is O(n)... Can someone please explain it? I'd be really grateful if you could go into the details; a mathematical solution would help me understand better.
Just see how many times the inner loop runs:
1 + 2 + 4 + 8 + 16 +...+ n
Note that if n = 32, then this sum is 1 + 2 + 4 + 8 + 16 + 32 = 63 = 31 + 32 ~ 2n.
This is because the sum of all the terms except the last is almost equal to the last term.
Hence the overall complexity = O(n).
EDIT:
The geometric series sum (http://www.mathsisfun.com/algebra/sequences-sums-geometric.html) is of the order of:
(2^(log2(n) + 1) - 1)/(2 - 1) = 2n - 1.
The outer loop executes log2(n) times, so it is O(log2(n)).
The inner loop executes k times for each iteration of the outer loop, and k doubles on each outer iteration.
So the total number of inner loop iterations is
1 + 2 + 4 + 8 + ... + 2^(log2(n))
= 2^0 + 2^1 + 2^2 + ... + 2^(log2(n))   (a geometric series)
= (2^(log2(n) + 1) - 1)/(2 - 1)
= 2n - 1
= O(n)
So the inner loop is O(n) in total, and the overall time complexity is O(n), since O(n + log2(n)) = O(n).
UPDATE: The running time is also O(n log n), since n log n >> n for large n, but that bound is not asymptotically tight; you could say the running time is o(n log n) [little o].
I believe you should proceed as follows to formally obtain your algorithm's order of growth, using sigma notation: the outer loop variable takes the values k = 2^t for t = 0, 1, ..., floor(log2(n)), and the inner loop contributes k iterations for each such k, so
T(n) = sum[t: 0..floor(log2(n))] 2^t = 2^(floor(log2(n)) + 1) - 1 <= 2n - 1 = O(n)
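A tiny numeric check of that bound in Python (the harness is my own, not from the thread):

# Total inner-loop iterations of the doubling loop (hypothetical helper).
def inner_total(n):
    total = 0
    k = 1
    while k <= n:
        total += k     # the inner for-loop runs exactly k times
        k *= 2
    return total

for n in (2**10, 2**15, 2**20):
    print(n, inner_total(n), inner_total(n) / n)  # ratio is (2n-1)/n -> 2 for powers of two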

big oh of algorithm with nested loop

Find the big-oh characterization:
input: n
s <- 0
for i <- 1 to n^2 do
    for j <- 1 to i do
        s <- s + i
Both loops iterate n^2 times. I'm not sure if I should add or multiply the loops. Right now I think the answer is O(n^4).
The outer loop runs i from 1 to n^2, while the inner one runs j from 1 to i.
Thus the inner loop needs i steps, where i varies; let us denote the runtime of the inner loop by IL(i) = i.
To get the total runtime, note that the inner loop is executed n^2 times with varying "parameter" i. Thus we obtain the total runtime:
sum[i: 1..n^2] IL(i) = sum[i: 1..n^2] i
That is, you have to sum up all numbers from 1 to n^2. There are many basic textbooks explaining how to evaluate this.
The result is (n^2)(n^2+1)/2, which leads to an overall complexity of O(n^4).
You are correct, the answer is O(n^4). The inner loop costs i. Summing that over the outer loop costs sum[i: 1..p] i = p(p+1)/2, where p = n^2, which gives a cost of O(n^4).
Your argument is almost correct.
The number of loop iterations:
1 + 2 + ... + n^2
= (n^2+1)*(n^2)/2
= (n^4 + n^2)/2
So the answer is Θ(n^4) (as a side note it also is O(n^4)).
You can prove that formally by choosing three constants c1, c2 and n0 such that:
c1*n^4 <= (n^4 + n^2)/2 <= c2*n^4 for all n >= n0
Since the maximum value of i is n^2, that gives us O(n^2) * O(n^2) which is equal to O(n^4).
You can simply check against the basic sum formula. Your sum runs not to n but to n^2, which gives you
n^2(n^2+1)/2
and this is indeed O(n^4)
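Finally, a quick Python check (my own sketch) of that closed form and of the Θ(n^4) growth:

# Brute-force the number of inner-loop steps and compare to (n^2)(n^2+1)/2.
def steps(n):
    s = 0
    for i in range(1, n * n + 1):
        s += i                  # the inner loop performs i iterations
    return s

for n in (5, 10, 50):
    assert steps(n) == n**2 * (n**2 + 1) // 2
    print(n, steps(n), steps(n) / n**4)   # ratio tends to 1/2, i.e. Theta(n^4)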
