How to calculate the running time of an algorithm

I have the following algorithm:
for (i = 1; i < n; i++) {
    SmallPos = i;
    Smallest = Array[SmallPos];
    for (j = i + 1; j <= n; j++)
        if (Array[j] < Smallest) {
            SmallPos = j;
            Smallest = Array[SmallPos];
        }
    Array[SmallPos] = Array[i];
    Array[i] = Smallest;
}
Here is my calculation:
For the nested loops, I count
1 ("i = 1") + (n+1) ("i < n") + n ("i++")
* [1 ("j = i + 1") + (n+1) ("j <= n") + n ("j++")]
+ 3n (for the if-statement and the statements in it)
+ 4n (the 2 statements before the inner for-loop and the final 2 statements after the inner for-loop).
This gives (1 + n + 1 + n)(1 + 1 + n + n) + 7n = (2 + 2n)(2 + 2n) + 7n = 4n^2 + 15n + 4.
But unfortunately, the textbook gives T(n) = 2n^2 + 4n - 5.
Can anyone explain to me where I went wrong?

Here is a formal way to represent your algorithm mathematically (sigma notation):

T(n) = sum_{i=1}^{n-1} ( c + sum_{j=i+1}^{n} c' )

Replace c by the number of operations in the outer loop, and c' by the number of operations in the inner loop.


Time Complexity log(n) vs Big O (root n)

I am trying to analyze the code snippet below. Can its time complexity be O(log n)? I am new to asymptotic analysis; the tutorial says it is O(root n).
int p = 0;
for (int i = 1; p <= n; i++) {
    p = p + i;
}
Variable p is going to take the successive values 1, 1+2, 1+2+3, etc.
This sequence is called the sequence of triangular numbers; you can read more about it on Wikipedia or OEIS.
One thing to be noted is the formula:
1 + 2 + ... + i = i*(i+1)/2
Hence your code could be rewritten in the somewhat equivalent form:
int p = 0;
for (int i = 1; p <= n; i++)
{
    p = i * (i + 1) / 2;
}
Or, getting rid of p entirely:
for (int i = 1; (i - 1) * i / 2 <= n; i++)
{
}
Hence your code runs while (i-1)*i <= 2n. You can make the approximation (i-1)*i ≈ i^2 to see that the loop runs for about sqrt(2n) iterations.
If you are not satisfied with this approximation, you can solve for i the quadratic equation:
i^2 - i - 2n == 0
You will find that the loop runs while:
i <= (1 + sqrt(1 + 8n)) / 2 = 0.5 + sqrt(2n + 0.25)

Calculate Big-O notation with a for-loop condition

I am trying to understand the calculation of the Big-O notation of a function when it contains a for-loop with a specific condition,
for (initialization; condition; increment)
like below:
public void functE( int n )
{
    int a;
    for ( int i = 0; i < n; i++ )
        for ( int j = 0; j <= n - i; j++ )
            a = i;
}
I suppose the Big-O of this function to be O(n^2); is this valid?
Assuming a = i takes 1 time unit, you can consider each i:
i = 0 : n + 1
i = 1 : n
i = 2 : n - 1
...
i = n - 1 : 2
So in total:
T(n) = (n+1) + (n) + (n-1) + ... + 2 = n + [1 + ... + n] = n + n * (n + 1)/2 = (n^2 + 3n) / 2 = O(n^2)
Using the tabular method, the step count comes out as a quadratic; call it f(n) = (n^2 + 3n)/2.
According to the Big-Oh definition, we need f(n) <= c*g(n):
(n^2 + 3n)/2 <= (n^2 + 3n^2)/2    (since n <= n^2 for n >= 1)
(n^2 + 3n)/2 <= 2n^2
Therefore, with c = 2 and g(n) = n^2,
f(n) = O(g(n)) = O(n^2)

Big O complexity on dependent nested loops

Can I get some help understanding how to solve this tutorial question? I still do not understand my professor's explanation, and I am unsure how to count the Big O of the third/innermost loop. She explains that the answer for this algorithm is O(n^2), and that the 2nd and 3rd loops have to be seen as one loop with a Big O of O(n). Can someone please explain the Big O notation for the 2nd/3rd loops in basic layman's terms?
Assuming n = 2^m
for (int i = n; i > 0; i--) {
    for (int j = 1; j < n; j *= 2) {
        for (int k = 0; k < j; k++) {
        }
    }
}
As far as I understand, the first loop has a big O notation of O(n)
Second loop = log(n)
Third loop = log (n) (since the number of times it will be looped has been reduced by logn) * 2^(2^m-1)( to represent the increase in j? )
Let's add a print statement to the innermost loop:
for (int j = 1; j < n; j *= 2) {
    for (int k = 0; k < j; k++) {
        print(1)
    }
}
Output:
j = 1:   1
j = 2:   1 1
j = 4:   1 1 1 1
...
j = n/2: 1 1 1 ... 1 (n/2 times; since the condition is j < n, the last value of j is n/2 when n = 2^m)
The question boils down to how many 1s this prints. That number is
2^0 + 2^1 + 2^2 + ... + 2^(m-1)
= n - 1
= O(n),
since a geometric series sums to less than twice its largest term.
O-notation is an upper bound, so saying O(n^2) is valid. Be careful with the tempting tighter bound O(n * log(n) * log(n)): it would be right if the innermost loop ran log(n) times, but it actually runs j times. Summed over j = 1, 2, 4, ..., n/2, the two inner loops together do 2^0 + 2^1 + ... + 2^(m-1) = n - 1 = Θ(n) work. That is why your teacher says to view the second and third loops as O(n) together, which makes O(n^2) a tight bound for the whole algorithm.
for (int i = n; i > 0; i--) {          // runs n times
    for (int j = 1; j < n; j *= 2) {   // runs at most log2(n) = m times
        for (int k = 0; k < j; k++) {  // runs j times for each value of j
        }
    }
}
For each i, the two inner loops together run 2^0 + 2^1 + ... + 2^(m-1) = n - 1 times, so the overall count is n(n - 1).
Upper bounds can be loose or tight: O(n^2) is a tight bound here, while something like O(n^3) would be a loose one.

Complexity of triple-nested Loop

I have the following algorithm to find all triples:
for (int i = 0; i < N; i++)
    for (int j = i+1; j < N; j++)
        for (int k = j+1; k < N; k++)
            if (a[i] + a[j] + a[k] == 0)
                cnt++;
I know that I have a triple loop and I check all triples. How do I show that the number of different triples that can be chosen from N items is precisely N*(N-1)*(N-2)/6?
If we have two loops
for (int i = 0; i < N; i++)
    for (int j = i+1; j < N; j++)
        ...
when i = 0 we go through the second loop N-1 times
i = 1 => N-2 times
...
i = N-2 => 1 time
i = N-1 => 0 times
So 0 + 1 + 2 + 3 + ... + (N-2) + (N-1) = ((0 + N-1)/2)*N = (N^2 - N)/2
But how do I do the same proof with triples?
Ok, I'll do this step by step. It's more of a maths problem than anything.
Use a sample array for visualisation:
[1, 5, 7, 11, 6, 3, 2, 8, 5]
The first time the 3rd nested loop begins at 7, correct?
And the 2nd at 5, and the 1st loop at 1.
The 3rd nested loop is the important one.
It will loop through n-2 times. Then the 2nd loop increments.
At this point the 3rd loop loops through n-3 times.
We keep adding these until we get:
[(n-2) + (n-3) + ... + 2 + 1 + 0]
Then the 1st loop increments, so we begin at n-3 this time.
So we get:
[(n-3) + ... + 2 + 1 + 0]
Adding them all together we get:
[(n-2) + (n-3) + ... + 2 + 1 + 0] +
[(n-3) + ... + 2 + 1 + 0] +
[(n-4) + ... + 2 + 1 + 0] +
.
. <- (n-2 times)
.
[2 + 1 + 0] +
[1 + 0] +
[0]
We can rewrite this as:
(n-2)(1) + (n-3)(2) + (n-4)(3) + ... + (3)(n-4) + (2)(n-3) + (1)(n-2)
Which in maths notation we can write as:

sum_{k=1}^{n-2} k(n-1-k)

Make sure you look up the additive properties of summations. (Back to college maths!)
We have

= (n-1) * sum_{k=1}^{n-2} k  -  sum_{k=1}^{n-2} k^2

Remember how to convert sums to polynomials:

= (n-1) * (n-2)(n-1)/2  -  (n-2)(n-1)(2n-3)/6
= (n-1)(n-2) * [3(n-1) - (2n-3)] / 6
= n(n-1)(n-2)/6

Which has a complexity of O(n^3).
One way is to realize that the number of such triples is equal to C(n, 3):

              n!
C(n, 3) = ----------  =  (n-2)(n-1)n/6
           3!(n-3)!
Another is to count what your loops do:
for (int i = 0; i < N; i++)
    for (int j = i+1; j < N; j++)
        for (int k = j+1; k < N; k++)
You've already showed that two loops from 0 to n-1 execute n(n-1)/2 operations.
For i = 0, the inner loops execute (n-1)(n-2)/2 operations.
For i = 1, the inner loops execute (n-2)(n-3)/2 operations.
...
For i = N - 1, the inner loops execute 0 operations.
We have:
(n-1)(n-2)/2 + (n-2)(n-3)/2 + ... =
= sum(i = 1 to n) {(n - i)(n - i - 1)/2}
= 1/2 sum(i = 1 to n) {n^2 - ni - n - ni + i^2 + i}
= 1/2 sum(i = 1 to n) {n^2} - sum{ni} - 1/2 sum{n} + 1/2 sum{i^2} + 1/2 sum{i}
= 1/2 n^3 - n^2(n+1)/2 - 1/2 n^2 + n(n+1)(2n+1)/12 + n(n+1)/4
Which reduces to the right formula, but it gets too ugly to continue here. You can check that it does on Wolfram.

Instruction execution of a C++ code

Hello, I have an algorithm in C++ and I want to count the instructions executed. The code is below:
cin >> n;
for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++)
        A[i][j] = 0;
for (i = 1; i <= n; i++)
    A[i][i] = 1;
After my calculation, I got T(n) = n^2 + 8n - 5. I just need someone else to verify whether I am correct. Thanks.
Ok, let's do an analysis step by step.
The first instruction
cin >> n
counts as one operation: 1.
Then the loop
for(i=1;i<=n;i++)
for (j = 1; j <= n; j ++)
A[i][j] = 0;
Let's go from inside out. The j loop performs n array assignments (A[i][j] = 0), (n + 1) j <= n comparisons and n j ++ assignments. It also performs once the assignment j = 1. So in total this gives: n + (n +1) + n + 1 = 3n + 2.
Then the outer i loop performs (n + 1) i <= n comparisons, n i ++ assignments and executes n times the j loop. It also performs one i = 1 assignment. This results in: n + (n + 1) + n * (3n + 2) + 1 = 3n^2 + 4n + 2.
Finally the last for loop performs n array assignments, (n + 1) i <= n comparisons and n i ++ assignments. It also performs one assignment i = 1. This results in: n + (n + 1) + n + 1 = 3n + 2.
Now, adding up three operations we get:
(1) + (3n^2 + 4n + 2) + (3n + 2) = 3n^2 + 7n + 5 = T(n) total operations.
The time function is equivalent, assuming that assignments, comparisons, additions and cin are all done in constant time. That would yield an algorithm of complexity O(n^2).
This is of course assuming that n >= 1.
