Running time of an algorithm in the worst case

What's the running time of the following algorithm in the worst-case, assuming that it takes a constant time c1 to do a comparison and another constant time c2 to swap two elements?
for (int i = 0; i < n; i++)
{
    for (int j = 0; j < n - 1; j++)
    {
        if (array[j] > array[j+1])
        {
            swap(array[j], array[j+1]);
        }
    }
}
I get 2 + 4n^2. Here is how I calculated it, starting from the inner loop:
The inner loop runs (n-1) times.
The first time it runs, there is the initialisation of j and the comparison of j with (n-1) to know whether to enter the loop. This gives 2 instructions.
Each time it runs, there is the comparison of j with (n-1) to know whether to continue the loop, the increment of j, the array comparison and the swapping. This gives 4 instructions which run (n-1) times, therefore 4(n-1) instructions.
The inner loop thus contains 2+4(n-1) instructions.
The outer loop runs n times.
The first time the outer loop runs, there is the initialisation of i and the comparison of i with n. This gives 2 instructions.
Each time it runs, there is the comparison of i with n, the increment of i and the inner loop. This gives (2+(2+4(n-1)))n instructions.
Altogether, there are 2+(2+(2+4(n-1)))n instructions, which gives 2+4n^2.
Is it correct?

You forgot to account for the addition of j+1 for the index in the if statement and in the swap call, and the n-1 calculation in the inner for loop's condition is an extra instruction each time.
Remember, every calculation counts as an instruction, which means that essentially every operator in your code adds an instruction, not just the comparisons, function calls, and loop control stuff.
for (int i = 0; i < n; i++)          // (1 + 1) + n(1 + 1 + innerCost)         -> (init + comp) + numLoops(comp + inc + innerCost)
{
    for (int j = 0; j < n - 1; j++)  // (1 + 2) + (n-1)(1 + 1 + 1 + innerCost) -> (init + sub + comp) + numLoops(sub + comp + inc + innerCost)
    {
        if (array[j] > array[j+1])   // 1 + 1 (1 for the comparison, 1 for the +)
        {
            swap(array[j], array[j+1]); // 1 + 1 (1 for the function call, 1 for the +)
        }
    }
}
runtime = (1+1) + n(1+1+ (1+2)+(n-1)(1+1+1+ (1+1 + 1+1)))
runtime = 2 + n( 2 + 3 +(n-1)( 3 + 2 + 2))
runtime = 2 + n( 5 +(n-1)(7))
runtime = 2 + n( 5 + 7n - 7)
runtime = 2 + n(7n-2)
runtime = 2 + 7n^2 - 2n = 7n^2 - 2n + 2
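One way to check this count is to instrument the loops with a counter. The sketch below (the function name modelCost is just for illustration) charges one unit per operation exactly as the model above does, including its worst-case assumption that the swap fires on every comparison:

```cpp
#include <cassert>

// Charges one unit per operation as counted above: init, comparison,
// increment, the "n - 1" subtraction, the "j + 1" additions, and the
// swap call. The swap cost is charged on every inner iteration,
// matching the worst-case assumption of the derivation.
long long modelCost(int n) {
    long long ops = 2;                  // i init + first i < n comparison
    for (int i = 0; i < n; i++) {
        ops += 2;                       // i < n comparison + i++
        ops += 3;                       // j init + (n - 1) subtraction + first j comparison
        for (int j = 0; j < n - 1; j++) {
            ops += 3;                   // (n - 1) subtraction + j comparison + j++
            ops += 2;                   // array comparison + its j + 1 addition
            ops += 2;                   // swap call + its j + 1 addition (worst case)
        }
    }
    return ops;
}
```

For every n this returns exactly 7n^2 - 2n + 2, matching the closed form.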

Related

How to count primitive comparison operations of for loop when initialization is non-zero?

For a for loop as follows:
for(i = 0; i < n; i++){
}
The initialization counts for 1 operation, the conditional test executes n + 1 times and the increment n times. This gives a T(n) = 1 + n + 1 + n = 2n + 2. This I understand. Where I get confused is when i is assigned a non-zero value. I assume when i = 1 then the comparison only occurs n times and results in T(n) = 1 + n + n = 2n + 1? But then what happens if i is assigned 10? Or a negative value? Are the number of comparisons still n or n + 1?
Let us replace the initialization by 0 with an initialisation by i0 to make the whole thing more general.
So the pseudo code looks like this:
i0 = 0;
for (i = i0; i < n; i++) { }
Now we can state the formula more general:
T(n) = T_init(n) + T_cmp(n) + T_inc(n)
= 1 + (n-i0+1) + (n - i0)
= 2*(n - i0 + 1)
= 2*n - 2*i0 + 2
T_init should be clear. As you said initialisation is always run only once, independent of n.
The comparison is always run directly after the increment but also once before the first loop iteration. So T_cmp(n) = T_inc(n) + 1.
You could also try it with some helper functions:
function init() {
print("init");
return i0;
}
function cmp(i,n) {
print("cmp");
return i < n;
}
function inc(i) {
print("inc");
return i+1;
}
for (i = init(); cmp(i,n); i = inc(i)) { }
This should print a line for each operation, so you can count lines to measure "time". (Well, it is pseudo code, so you will have to adapt it to your language to run it.)
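Instead of printing, you can tally the operations directly. Here is a minimal sketch (the helper name loopOps is made up for illustration; it assumes i0 <= n) that reproduces T(n) = 2n - 2*i0 + 2:

```cpp
#include <cassert>

// Counts one unit for the init, one per comparison (including the
// final failing one), and one per increment, for: for (i = i0; i < n; i++)
long long loopOps(int i0, int n) {
    long long ops = 1;                  // i = i0
    int i = i0;
    while (true) {
        ops += 1;                       // i < n comparison
        if (!(i < n)) break;
        i++;
        ops += 1;                       // i++
    }
    return ops;                         // equals 2n - 2*i0 + 2 when i0 <= n
}
```

So with i0 = 0 you get 2n + 2, with i0 = 1 you get 2n, and the comparison always runs exactly one more time than the increment.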

Big O complexity on dependent nested loops

Can I get some help in understanding how to solve this tutorial question? I still do not understand my professor's explanation. I am unsure of how to count the big O for the third/innermost loop. She explains that the answer for this algorithm is O(n^2) and that the 2nd and 3rd loops have to be seen as one loop with a big O of O(n). Can someone please explain the big O notation for the 2nd/3rd loop in basic layman's terms?
Assuming n = 2^m
for (int i = n; i > 0; i--) {
    for (int j = 1; j < n; j *= 2) {
        for (int k = 0; k < j; k++) {
        }
    }
}
As far as I understand, the first loop has a big O notation of O(n)
Second loop = log(n)
Third loop = log (n) (since the number of times it will be looped has been reduced by logn) * 2^(2^m-1)( to represent the increase in j? )
Let's add a print statement to the innermost loop.
for (int j = 1; j < n; j *= 2) {
    for (int k = 0; k < j; k++) {
        print(1)
    }
}
output for
j = 1,   1
j = 2,   1 1
j = 4,   1 1 1 1
...
j = n/2, 1 1 1 ... n/2 times.
(Since k runs while k < j, each pass prints exactly j ones, and j stops at n/2 because j < n.) The question boils down to how many 1s this will print.
That number is
2^0 + 2^1 + 2^2 + ... + n/2
= 1 + 2 + 4 + ... + n/2
= n - 1
= O(n)
assuming you know why 1 + 2 + 4 + ... + n/2 = n - 1 (a geometric series).
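You can confirm this by actually running the loops with a counter (a quick sketch, assuming n is a power of two as the question states):

```cpp
#include <cassert>

// Counts how many times the innermost body executes. The two inner
// loops contribute 1 + 2 + 4 + ... + n/2 = n - 1 per outer iteration,
// so the total is n * (n - 1), i.e. Theta(n^2) overall.
long long bodyExecutions(int n) {
    long long count = 0;
    for (int i = n; i > 0; i--)
        for (int j = 1; j < n; j *= 2)
            for (int k = 0; k < j; k++)
                count++;                // stands in for print(1)
    return count;
}
```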
O-notation is an upper bound, so saying O(n^2) is valid. It is also the least upper bound here: the two inner loops together do 1 + 2 + 4 + ... + n/2 = n - 1 work, and the outer loop multiplies that by n, so the running time is in fact Θ(n^2).
Be careful with the logarithm: log(16)^2 = 16 only by coincidence (log 16 = 4 and 4^2 = 16); in general log(n)^2 is far smaller than n. The real reason your teacher says to view the second and third loops together as O(n) is that the inner loop's trip count doubles with j, giving the geometric sum 1 + 2 + 4 + ... + n/2 = n - 1.
The second loop runs O(log(n)) times, but the third loop's trip count is not 1, 2, 3, ...; it doubles each time. So together the second and third loops do 1 + 2 + 4 + ... + 2^(log(n) - 1) = 2^log(n) - 1 = n - 1 = O(n) work.
for (int i = n; i > 0; i--) {         // runs n times
    for (int j = 1; j < n; j *= 2) {  // runs at most log(n) times, i.e. m times
        for (int k = 0; k < j; k++) { // runs j times; summed over all j this is 1 + 2 + ... + n/2 = n - 1
        }
    }
}
Hence the overall complexity is the outer loop count times the combined work of the two inner loops: n * (n - 1), which is O(n^2).
An upper bound can be loose or tight; here O(n^2) is tight, because the innermost loop is dominated by its last pass (j = n/2), not by the log(n) middle iterations.

Time complexity analysis of algorithm

Hi, I'm trying to analyze the time complexity of this algorithm, but I'm having difficulty unraveling and counting how many times the final loop will execute. I realize that the first loop runs log(n) times, but after that I can't seem to get to a sum which evaluates well. Here is the algorithm:
for (int i = 1; i <= n; i = 2*i) {
    for (int j = 1; j <= i; j = 2*j) {
        for (int k = 0; k <= j; k++) {
            // Some elementary operation here.
        }
    }
}
I cannot figure out how many times the k loop executes in general with respect to n.
Thanks for any help!
It is O(n).
1 + 2 + 4 + ... + 2^N == 2^(N + 1) - 1.
The last loop, for a specific j, executes j + 1 times (k runs from 0 to j inclusive), which is about j.
And for a specific i, the inner two loops execute about 1 + 2 + 4 + ... + i times, which is equal to about 2 * i.
So the total execution count is about 1 * 2 + 2 * 2 + 4 * 2 + ... + n * 2, which is about 4 * n, i.e. O(n).
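A quick empirical check (a sketch with the elementary operation replaced by a counter) confirms the linear growth:

```cpp
#include <cassert>

// Counts executions of the innermost body of the triple loop above.
long long elementaryOps(int n) {
    long long count = 0;
    for (int i = 1; i <= n; i = 2 * i)
        for (int j = 1; j <= i; j = 2 * j)
            for (int k = 0; k <= j; k++)
                count++;                // the elementary operation
    return count;
}
```

For n = 8 this gives 36 and for n = 16 it gives 72: doubling n doubles the count, consistent with O(n).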

What is the Big O analysis of this algorithm?

I'm working on a data structures course and I'm not sure how to proceed with this Big O analysis:
sum = 0;
for (i = 1; i < n; i++)
    for (j = 1; j < i*i; j++)
        if (j % i == 0)
            for (k = 0; k < j; k++)
                sum++;
My initial idea is that this is O(n^3) after reduction, because the innermost loop only runs when j is divisible by i, so the multiplication rule is inapplicable. Is my reasoning correct here?
Let's ignore the outer loop for a second here, and let's analyze it in terms of i.
The middle loop runs about i^2 times and invokes the inner loop whenever j % i == 0, that is, for j = i, 2i, 3i, ..., (i-1)*i, and each time the inner loop runs up to the relevant j. This means the inner loop's total running time is:
i + 2i + 3i + ... + (i-1)*i = i(1 + 2 + ... + i-1) = i* [i*(i-1)/2]
The last equality comes from sum of arithmetic progression.
The above is in O(i^3).
Repeat this for the outer loop, which runs from 1 to n, and you get a running time of O(n^4), since you actually have:
C*1^3 + C*2^3 + ... + C*(n-1)^3 = C*(1^3 + 2^3 + ... + (n-1)^3) =
= C/4 * (n^4 - 2n^3 + n^2)
The last equality comes from the formula for the sum of cubes.
And the above is in O(n^4), which is your complexity.
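To sanity-check the analysis, count the sum++ executions directly and compare against the per-i contribution i^2 * (i - 1) / 2 derived above (the function names below are just for illustration):

```cpp
#include <cassert>

// Runs the algorithm from the question, counting sum++ executions.
long long countSum(int n) {
    long long sum = 0;
    for (long long i = 1; i < n; i++)
        for (long long j = 1; j < i * i; j++)
            if (j % i == 0)
                for (long long k = 0; k < j; k++)
                    sum++;
    return sum;
}

// Closed form from the answer: for each i, the inner loop does
// i + 2i + ... + (i-1)*i = i^2 * (i - 1) / 2 work.
long long closedForm(int n) {
    long long total = 0;
    for (long long i = 1; i < n; i++)
        total += i * i * (i - 1) / 2;
    return total;
}
```

The two agree exactly, and closedForm grows like n^4 / 8, confirming O(n^4).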

Instruction execution of a C++ code

Hello, I have an algorithm in C++ and I want to count the instructions executed. The code is below:
cin >> n;
for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++)
        A[i][j] = 0;
for (i = 1; i <= n; i++)
    A[i][i] = 1;
Now, after my calculation, I got T(n) = n^2 + 8n - 5. I just need someone else to verify whether I am correct. Thanks.
Ok, let's do an analysis step by step.
The first instruction
cin >> n
counts as one operation: 1.
Then the loop
for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++)
        A[i][j] = 0;
Let's go from inside out. The j loop performs n array assignments (A[i][j] = 0), (n + 1) j <= n comparisons and n j ++ assignments. It also performs once the assignment j = 1. So in total this gives: n + (n +1) + n + 1 = 3n + 2.
Then the outer i loop performs (n + 1) i <= n comparisons, n i ++ assignments and executes n times the j loop. It also performs one i = 1 assignment. This results in: n + (n + 1) + n * (3n + 2) + 1 = 3n^2 + 4n + 2.
Finally the last for loop performs n array assignments, (n + 1) i <= n comparisons and n i ++ assignments. It also performs one assignment i = 1. This results in: n + (n + 1) + n + 1 = 3n + 2.
Now, adding up three operations we get:
(1) + (3n^2 + 4n + 2) + (3n + 2) = 3n^2 + 7n + 5 = T(n) total operations.
The time function is equivalent, assuming that assignments, comparisons, additions and cin are all done in constant time. That would yield an algorithm of complexity O(n^2).
This is of course assuming that n >= 1.
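As a check, here is an instrumented sketch (the counter-based rewrite is mine, not part of the original code) that charges one unit per assignment, comparison, and increment exactly as counted above, and reproduces T(n) = 3n^2 + 7n + 5:

```cpp
#include <cassert>

// Charges 1 for cin >> n, then 1 per init, comparison, assignment,
// and increment, mirroring the step-by-step count in the answer.
long long instructionCount(int n) {
    long long ops = 1;                  // cin >> n
    ops += 1;                           // i = 1
    for (int i = 1; ; ) {
        ops += 1;                       // i <= n comparison
        if (!(i <= n)) break;
        ops += 1;                       // j = 1
        for (int j = 1; ; ) {
            ops += 1;                   // j <= n comparison
            if (!(j <= n)) break;
            ops += 1;                   // A[i][j] = 0 assignment
            j++;
            ops += 1;                   // j++
        }
        i++;
        ops += 1;                       // i++
    }
    ops += 1;                           // i = 1 (second loop)
    for (int i = 1; ; ) {
        ops += 1;                       // i <= n comparison
        if (!(i <= n)) break;
        ops += 1;                       // A[i][i] = 1 assignment
        i++;
        ops += 1;                       // i++
    }
    return ops;
}
```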
