How to count primitive comparison operations of for loop when initialization is non-zero? - big-o

For a for loop as follows:
for(i = 0; i < n; i++){
}
The initialization counts as 1 operation, the conditional test executes n + 1 times, and the increment executes n times. This gives T(n) = 1 + (n + 1) + n = 2n + 2. This I understand. Where I get confused is when i is assigned a non-zero value. I assume that when i = 1 the comparison only occurs n times, which gives T(n) = 1 + n + n = 2n + 1? But then what happens if i is assigned 10? Or a negative value? Is the number of comparisons still n, or n + 1?

Let us replace the initialization by 0 with an initialization by i0 to make the whole thing more general.
So the pseudo code looks like this:
i0 = 0; // or any other starting value
for (i = i0; i < n; i++) { }
Now we can state the formula more generally:
T(n) = T_init(n) + T_cmp(n) + T_inc(n)
= 1 + (n-i0+1) + (n - i0)
= 2*(n - i0 + 1)
= 2*n - 2*i0 + 2
T_init should be clear. As you said, initialisation is always run only once, independent of n.
The comparison is run directly after each increment, but also once before the first loop iteration. So T_cmp(n) = T_inc(n) + 1. Note that this assumes i0 <= n; if i0 > n, the condition is still tested once, the body never runs, and T(n) = 2.
You could also try it with some helper functions:
function init() {
print("init");
return i0;
}
function cmp(i,n) {
print("cmp");
return i < n;
}
function inc(i) {
print("inc");
return i+1;
}
for (i = init(); cmp(i,n); i = inc(i)) { }
This should print a line for each operation, so you can count lines to measure "time". (Well, it is pseudo code, so you have to adapt it to your language to run it. :)
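If counting printed lines is inconvenient, here is a minimal runnable sketch in Python (the function and counter names are mine, not from the original) that counts each primitive operation directly and checks the result against the closed formula:

```python
def loop_op_counts(i0, n):
    """Count primitive operations of: for (i = i0; i < n; i++) {}"""
    init = cmp = inc = 0
    i = i0
    init += 1          # initialisation runs exactly once
    while True:
        cmp += 1       # the condition is tested before every iteration
        if not (i < n):
            break
        i += 1
        inc += 1       # the increment runs once per completed iteration
    return init + cmp + inc

# Matches T(n) = 2*(n - i0 + 1) for any start value i0 <= n:
for i0 in (0, 1, 10, -5):
    n = 20
    assert loop_op_counts(i0, n) == 2 * (n - i0 + 1)
```

For i0 > n the count floors at 2 (one initialisation plus one failed test), so the closed formula only applies while i0 <= n.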

Related

Time Complexity log(n) vs Big O (root n)

Trying to analyze the below code snippet.
For the below code, can the time complexity be O(log n)? I am new to asymptotic analysis. The tutorial says it is O(sqrt(n)).
int p = 0;
for (int i = 1; p <= n; i++) {
    p = p + i;
}
Variable p is going to take the successive values 1, 1+2, 1+2+3, etc.
This sequence is called the sequence of triangular numbers; you can read more about it on Wikipedia or OEIS.
One thing to be noted is the formula:
1 + 2 + ... + i = i*(i+1)/2
Hence your code could be rewritten under the somewhat equivalent form:
int p = 0;
for (int i = 1; p <= n; i++)
{
p = i * (i + 1) / 2;
}
Or, getting rid of p entirely:
for (int i = 1; (i - 1) * i / 2 <= n; i++)
{
}
Hence your code runs while (i-1)*i <= 2n. You can make the approximation (i-1)*i ≈ i^2 to see that the loop runs for about sqrt(2n) iterations.
If you are not satisfied with this approximation, you can solve for i the quadratic equation:
i^2 - i - 2n == 0
You will find that the loop runs while:
i <= (1 + sqrt(1 + 8n)) / 2 = 0.5 + sqrt(2n + 0.25)
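As a quick empirical check of the sqrt(2n) estimate (a Python sketch, not part of the original answer; the function name is mine), count how many times the body of the original loop actually runs:

```python
import math

def iterations(n):
    """Count how many times the body of `for (i = 1; p <= n; i++) p += i` runs."""
    count = 0
    p = 0
    i = 1
    while p <= n:
        p += i     # p walks through the triangular numbers 1, 3, 6, 10, ...
        i += 1
        count += 1
    return count

# The iteration count stays within a small constant of sqrt(2n):
for n in (10, 100, 10_000, 1_000_000):
    assert abs(iterations(n) - math.sqrt(2 * n)) < 2
```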

What is a more formal complexity of O(n/2)?

I need to know how to analyze algorithms from the perspective of a discrete math class. This is not homework; the example is from here at 1:54. Meaning, I need to know the constants, not just the general big-O. This is the method or algorithm I'm confused with. Would it not be 3n/2 + 1?
My thought is that for the first method the for loop test runs n/2 + 1 times, and inside the for loop the statement is run n times, or maybe n/2 times. Then, add these two.
For the second method, the loop test executes n + 1 times and the statement inside runs n times. So I believe it would be (n + 1) + n = 2n + 1. This is the process I'm following for the first method, but I'm not quite as confident.
for(i = 0; i < n; i = i+2) { // n/2 + 1
do something; //n
}
for(i = 0; i < n; i++) { // n + 1
do something; //n
}
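To test the guesses above empirically, here is an instrumented Python sketch (the `step` parameter and counter names are mine) that counts tests, increments, and body executions for both loops:

```python
def count_ops(n, step):
    """Instrumented version of: for (i = 0; i < n; i = i + step) { body }"""
    comparisons = increments = body = 0
    i = 0
    while True:
        comparisons += 1          # the "i < n" test
        if not (i < n):
            break
        body += 1                 # "do something"
        i += step
        increments += 1           # "i = i + step"
    return comparisons, increments, body

# Stepping by 2: the test runs n/2 + 1 times, the body only n/2 times (even n):
assert count_ops(10, 2) == (6, 5, 5)

# Stepping by 1: the familiar n + 1 tests and n body executions:
assert count_ops(10, 1) == (11, 10, 10)
```

So for the first method the body runs n/2 times, not n times, and totalling test + increment + body gives 3n/2 + 2 including the one-time initialization; either way, only the constant factor changes, not the linear growth.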

Running time of algorithm in worst case

What's the running time of the following algorithm in the worst-case, assuming that it takes a constant time c1 to do a comparison and another constant time c2 to swap two elements?
for (int i = 0; i < n; i++)
{
for (int j = 0; j < n - 1; j++)
{
if (array[j] > array[j+1])
{
swap(array[j], array[j+1]);
}
}
}
I get 2+4n^2. How I calculate it (starting from the inner loop):
The inner loop runs (n-1) times.
The first time it runs, there is the initialisation of j and the comparison of j with (n-1) to know whether to enter the loop. This gives 2 instructions.
Each time it runs, there is the comparison of j with (n-1) to know whether to continue the loop, the increment of j, the array comparison and the swapping. This gives 4 instructions which run (n-1) times, therefore 4(n-1) instructions.
The inner loop thus contains 2+4(n-1) instructions.
The outer loop runs n times.
The first time the outer loop runs, there is the initialisation of i and the comparison of i with n. This gives 2 instructions.
Each time it runs, there is the comparison of i with n, the increment of i and the inner loop. This gives (2+(2+4(n-1)))n instructions.
Altogether, there are 2+(2+(2+4(n-1)))n instructions, which gives 2+4n^2.
Is it correct?
You forgot to account for the addition of j+1 for the index in the if statement and the swap call, and the n-1 calculation in the inner for loop will be an extra instruction.
Remember, every calculation counts as an instruction, which means that essentially every operator in your code adds an instruction, not just the comparisons, function calls, and loop control stuff.
for (int i = 0; i < n; i++) //(1 + 1) + n(1 + 1 + innerCost) (init+comp) + numLoops(comp+inc+innerCost)
{
for (int j = 0; j < n - 1; j++) //(1 + 2) + (n-1)(1 + 1 + 1 + inner) (init+comp) + numLoops(sub+comp+inc+innerCost)
{
if (array[j] > array[j+1]) // 1 + 1 (1 for comparison, 1 for +)
{
swap(array[j], array[j+1]); //1 + 1 (1 for function call, 1 for +)
}
}
}
runtime = (1+1) + n(1+1+ (1+2)+(n-1)(1+1+1+ (1+1 + 1+1)))
runtime = 2 + n( 2 + 3 +(n-1)( 3 + 2 + 2))
runtime = 2 + n( 5 +(n-1)(7))
runtime = 2 + n( 5 + 7n - 7)
runtime = 2 + n(7n-2)
runtime = 2 + 7n^2 - 2n = 7n^2 - 2n + 2
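That accounting can be double-checked mechanically. The following Python sketch (names are mine) charges one unit for every operation the model above counts, including the swap branch on every iteration, which is the worst-case assumption the model makes:

```python
def modeled_ops(n):
    """Total the cost model above: one unit per init, comparison,
    subtraction, addition, increment and call, with the swap branch
    charged on every inner iteration (worst-case assumption)."""
    ops = 1                      # int i = 0
    i = 0
    while True:
        ops += 1                 # i < n
        if not (i < n):
            break
        ops += 1                 # int j = 0
        j = 0
        while True:
            ops += 2             # n - 1, then j < n - 1
            if not (j < n - 1):
                break
            ops += 2             # j + 1, then array[j] > array[j+1]
            ops += 2             # j + 1, then the swap call
            ops += 1             # j++
            j += 1
        ops += 1                 # i++
        i += 1
    return ops

for n in (1, 2, 5, 10):
    assert modeled_ops(n) == 7 * n * n - 2 * n + 2
```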

How to calculate Running time of an algorithm

I have the following algorithm below :
for(i = 1; i < n; i++){
SmallPos = i;
Smallest = Array[SmallPos];
for(j = i + 1; j <= n; j++)
if (Array[j] < Smallest) {
SmallPos = j;
Smallest = Array[SmallPos];
}
Array[SmallPos] = Array[i];
Array[i] = Smallest;
}
Here is my calculation :
For the nested loop, I find a time complexity of
1 ("int i = 0") + n+1 ("i < n") + n ("i++")
* [1 ("j = i + 1") + n+1 ("j < n") + n ("j++")]
+ 3n (for the if-statement and the statements in it)
+ 4n (the 2 statements before the inner for-loop and the final 2 statements after the inner for-loop).
This is (1 + n + 1 + n)(1 + 1 + n + n) + 7n = (2 + 2n)(2 + 2n) + 7n = 4n^2 + 15n + 4.
But unfortunately, the text book got T(n) = 2n^2 +4n -5.
Please, anyone care to explain to me where I got it wrong?
Here is a formal manner to represent your algorithm, mathematically (Sigma Notation):
T(n) = sum_{i=1}^{n-1} ( c + sum_{j=i+1}^{n} c' )
Replace c by the number of operations in the outer loop (outside the inner loop), and c' by the number of operations in the inner loop.
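To see where the quadratic term in the textbook's 2n^2 + 4n - 5 comes from, here is a small Python check (a sketch, not the textbook's derivation) of how often the inner loop body runs:

```python
def inner_iterations(n):
    """Count how many times the inner loop body runs in:
    for i in 1..n-1: for j in i+1..n: ..."""
    count = 0
    for i in range(1, n):              # i = 1 .. n-1
        for j in range(i + 1, n + 1):  # j = i+1 .. n
            count += 1
    return count

# sum_{i=1}^{n-1} (n - i) = n*(n-1)/2: the quadratic term every
# correct operation count for this algorithm must contain.
for n in (1, 2, 10, 100):
    assert inner_iterations(n) == n * (n - 1) // 2
```

Multiplying that n(n-1)/2 by the per-iteration cost c' and adding c per outer iteration yields a polynomial of the form an^2 + bn + d; the asker and the textbook disagree only on the constants a, b, d.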

Big O for while loops

I had this question for my assignment the other day, but I was still unsure if I'm right.
for(int i = 1; i < n; i++) //n is some size
{
for(j=1; j<i; j++)
{
int k=1;
while (k<n)
{
k=k+C; //where C is a constant and >=2
}
}
}
I know the nested for loops are O(n^2), but I wasn't sure about the while loop. I assumed that the whole code would be O(n^3).
The inner loop is literally O(n/C)=O(n), so yes, overall it's O(n^3) (the second loop has an upper bound of O(n))
int k=1;
while (k<n){
k=k+C //where C is a constant and >=2
}
This will take about (n-1)/C steps (exactly ceil((n-1)/C)): write u = (k-1)/C. Then, k = Cu + 1 and the loop becomes
u=0;
while(u < (n-1)/C) {
u=u+1
}
Hence the while loop is O(n) (since C is constant)
EDIT: let me try to explain it the other way around.
Start with a dummy variable u. The loop
u=0;
while(u < MAX) {
u = u+1
}
runs MAX times.
When you let MAX = (n-1) / C, the loop is
u=0;
while(u < (n-1)/C) {
u=u+1
}
And that runs (n-1)/C times.
Now, the check u < (n-1)/C is equivalent to C*u < n-1 or C*u + 1 < n, so the loop is equivalent to
u=0;
while(C*u + 1 < n) {
u=u+1
}
Now, suppose that we rewrote this loop in terms of a variable k = C * u + 1. Then,
u=0;
k=1; // C * 0 + 1 = 1
The loop condition looks like
while(C*u + 1 < n) {   // the same test as: while(k < n) {
and the loop body is
u=u+1                  // the same update as: k=k+C, since C*(u+1) + 1 = C*u + 1 + C = old_k + C
Putting it all together:
int k=1;
while (k<n){
k=k+C
}
takes (n-1)/C steps.
Formally, you may proceed using the following methodology (Sigma Notation):
T(n) = sum_{i=1}^{n-1} sum_{j=1}^{i-1} ( a * ceil((n-1)/C) ) = a * ceil((n-1)/C) * (n-1)(n-2)/2 = O(n^3)
Where a symbolizes the number of constant operations inside the innermost loop (a = 1 if you want to count the exact number of iterations).
Well, you would need to look at how many times the while loop body is run for a given value of n and C. For example n is 10 and C is 3. The body would run 3 times: k = 1, k = 4, k = 7. For n = 100 and C = 2, the body would run 50 times: k = 1,3,5,7,9,...,91,93,95,97,99. It is a matter of counting by C until n. You should be able to calculate the Big-O complexity from that clue.
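Following that clue, a small Python check (the helper name is mine) confirms the body runs ceil((n - 1)/C) times, which is O(n) for constant C:

```python
def while_body_runs(n, C):
    """Count how many times the body of `k = 1; while (k < n) k += C` runs."""
    count = 0
    k = 1
    while k < n:
        k = k + C          # the constant-step update from the question
        count += 1
    return count

# The two examples from the paragraph above:
assert while_body_runs(10, 3) == 3     # k = 1, 4, 7
assert while_body_runs(100, 2) == 50   # k = 1, 3, 5, ..., 99

# In general the body runs ceil((n - 1) / C) times:
for n in (2, 7, 10, 101):
    for C in (2, 3, 5):
        assert while_body_runs(n, C) == -((1 - n) // C)   # ceil((n-1)/C)
```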
