What is the complexity of this pseudo-code algorithm?

I found this exercise in an exam and encountered difficulty solving the problem.
I can assume for sure the algorithm takes at least O(n) because of the for loop, but I don't know how to approach the while loop. I know that in this case I have to evaluate the worst if-else branch, and surely it is the second one.
for i = 1 ... n do
    j = n
    while i < j do
        if j mod 2 = 0 then j = j - 1
        else j = i
Intuitively, I think the total cost is O(n log n) = O(n) * O(log n).

In short: the while loop runs at most two iterations each time it is reached. This makes the algorithm O(n).
The while loop will iterate at most two times. Indeed let us take a look at the while loop:
while i < j do
    if j mod 2 = 0 then
        j = j - 1
    else
        j = i
It is clear that we only execute the while loop if i < j. Furthermore, if j mod 2 == 1 (so j is odd), then the loop sets j = i, and thus the while loop will no longer run.
If on the other hand j mod 2 == 0 (so j is even), then we decrement j. Two things can now happen: either i == j, in which case we do not perform an extra iteration; or we do perform an extra iteration, in which case the if condition fails, since decrementing an even number yields an odd number, so the else branch sets j = i and the loop ends. Since j is reset to n before each while loop, whether it takes one or two iterations is determined by the parity of n itself.
This means that regardless of the values of i and j, the body of the while loop is performed at most two times.
Since we start the while loop exactly n times (once per iteration of the for loop), the algorithm is still O(n). We make the assumption here that we can check the parity of a number and decrement a number in constant time.
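As a quick sanity check, here is a small Python sketch (my own code, not from the answer; the function name is made up) that counts the while-loop iterations directly. The total never exceeds 2n:

def count_while_iterations(n):
    """Count how often the while body runs across all i."""
    total = 0
    for i in range(1, n + 1):
        j = n
        while i < j:
            total += 1
            if j % 2 == 0:
                j -= 1
            else:
                j = i
    return total

for n in (10, 100, 1000):
    print(n, count_while_iterations(n))  # always <= 2n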

Does mod refer to modulo? In that case, the while loop will evaluate at most twice: once to decrement j, after which j mod 2 will be 1, and after setting j = i the condition i < j will be false. The cost here oscillates between one and two iterations rather than growing with the input, and is thus O(1) for this branch.

Related

How can we find the Time Complexity of the Algorithm

I want to know the time complexity of the code below; I suspect it is O(√n).
i = n, sum = 0
while (i >= 0) {
    i /= 2
    sum += i*i*i
}
I am really confused; can anyone help me out and explain?
If you're unsure, you can always use the time module to get a rough idea of the complexity. It would go something like this:
import time

start = time.time()   # put this before the loop
# ... the loop you are timing goes here ...
end = time.time()     # put this after the loop
print(end - start)    # this gives you the evaluation time of your loop
Find evaluation times for different n's and determine the complexity.
Just by looking at it, though: your loop gets executed roughly log2(n) times, and inside it you have two multiplications, a division, and an addition (so nothing complex). Therefore, I would say O(log n) is a reasonable guess.
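For completeness, here is a rough sketch of that measurement (my own code; measure is a made-up helper), using the corrected condition i > 0 discussed in the next answer, since the posted i >= 0 never terminates. A loop this fast finishes in microseconds, so in practice you would repeat it many times per n to get stable numbers:

import time

def measure(n):
    start = time.time()
    i, total = n, 0
    while i > 0:          # corrected condition; i >= 0 would loop forever
        i //= 2           # integer halving
        total += i * i * i
    return time.time() - start

for n in (10**3, 10**6, 10**9, 10**12):
    print(n, measure(n))  # times grow roughly in proportion to log2(n)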
As mentioned in the comments, I'm assuming that the code was meant to be written as
i = n, sum = 0
while (i > 0) {   // <--- change >= to >
    i /= 2
    sum += i*i*i
}
since otherwise the code would be an infinite loop. With this in mind, let's take a look at what this code is doing.
For starters, note that the sum variable isn't doing anything that impacts our time complexity. On each iteration, it gets bigger, but we're doing only O(1) work to update it. That means that the time complexity here is going to depend on how many times the loop runs. Notice that, across the different iterations of the loop, the value of i will take on the sequence
n, n / 2, n / 4, n / 8, n / 16, n / 32, ...
and, in particular, on the kth iteration of the loop the value of i will be equal to n / 2^k (ignoring rounding down, which we can safely do here). The question, then, is at which iteration of the loop we end up with n / 2^k < 1, which is when the loop will stop. Solving, we get that

n / 2^k < 1
n < 2^k
log_2 n < k

So this loop will stop as soon as the number of loop iterations k is greater than log_2 n. This means that we do Θ(log n) loop iterations, each of which does O(1) work, so the total work done is Θ(log n).
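To double-check the Θ(log n) bound empirically, here is a small Python sketch (mine, not part of the answer) that counts the loop's iterations and compares them against log2(n):

import math

def halving_iterations(n):
    """Count iterations of the corrected halving loop."""
    i, count = n, 0
    while i > 0:
        i //= 2      # integer version of i /= 2
        count += 1
    return count

for n in (10, 1000, 10**6, 10**9):
    print(n, halving_iterations(n), math.log2(n))  # count is floor(log2(n)) + 1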

For loop - big-O

I am trying to do this problem out of a book and am struggling to understand the answer.
for (i = 0; i < N; ++i) {
    if ((i % 2) == 0) {
        outVal[i] = inVals[i] * i;
    }
}
here's how I was breaking it down:
i = 0 -> executes 1 time
i < N and ++i each execute once every iteration, so 1n + 1n = 2n.
The if statement contains 2 operands, so now we are at 4n + 1.
The contents of the if statement only execute n/2 times, so we are at 4n + 1 + n/2.
However, big-O drops those terms off, leaving us with N as the answer.
Here's what I don't get: the explanation for the answer of my problem says this:
outVal[i] = inVals[i] * i; executes every other loop iteration, so the total number of operations include: 1 operation at the start of the loop, 3 operations every loop iteration, 1 operation every other loop iteration, and 1 operation for the final loop condition check.
How are there only 3 operations in the loop? I counted 4, as stated above. Please let me know the rationale behind this.
Complexity is measured by the time/space you need to accomplish a task. i < N and ++i do not take time dependent on your input variable N (the length of the loop).
You should not count how many times each operation is done and sum them all; instead, pick the one that takes the most time or space, as that is the algorithm's bottleneck. In a loop, most operations run an equal number of times, so we use the length of the loop as its time or space complexity.
The loop will run N times, so that's its complexity -> O(n)
Inside the loop, the if scope will run N/2 times, as you correctly said -> O(n/2)
But those runs already take place within the outer loop's N iterations; you do not add them on top, since they are not extra iterations.
So, the complexity of the algorithm is O(n).
Regarding the operations, the 3 are:
checking i < N,
incrementing i (++i),
evaluating the if condition.
All of them are done in every iteration.
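To make the counting concrete, here is a small Python sketch (my own illustration, not from the answer) that tallies how often the loop body runs versus how often the if body runs. The if body fires only about N/2 times, but both counts grow linearly, so the algorithm stays O(N):

def count_ops(N):
    loop_body = if_body = 0
    for i in range(N):        # the i < N check and ++i happen every iteration
        loop_body += 1
        if i % 2 == 0:        # the condition is evaluated every iteration
            if_body += 1      # the body only runs for even i
    return loop_body, if_body

print(count_ops(10))  # (10, 5)
print(count_ops(11))  # (11, 6)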

Time complexity of three code snippets where variables depend on each other

1) i = s = 1;
   while (s <= n)
   {
       i++;
       s = s + i;
   }

2) for (int i = 1; i <= n; i++)
       for (int j = 1; j <= n; j += i)
           cout << "*";

3) j = 1;
   for (int i = 1; i <= n; i++)
       for (j = j*i; j <= n; j = j + i)
           cout << "*";
Can someone explain the time complexity of these three snippets to me?
I know the answers, but I can't understand how they are derived.
1) To figure this out, we need to determine how large s is on the x'th iteration of the loop. Then we'll know how many iterations occur before the condition s > n is reached.
On the x'th iteration, the variable i has value x + 1
And the variable s has value equal to the sum of i for all previous values. So, on that iteration, s has value equal to
sum_{y = 1 .. x} (y+1) = O(x^2)
This means that we reach s > n on iteration x = O(√n). So that's the running time of the loop.
If you aren't sure why the sum is O(x^2), I once answered a similar question where the same technique applies. In this particular case you could also use the identity

sum_{y = 1 .. x} y = (x+1 choose 2) = x(x+1) / 2

This identity can easily be proved by induction on x.
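As an empirical check (my own sketch, with made-up names), the iteration count of snippet 1 indeed grows like √n:

import math

def snippet1_iterations(n):
    i = s = 1
    count = 0
    while s <= n:
        i += 1
        s += i
        count += 1
    return count

for n in (100, 10_000, 1_000_000):
    print(n, snippet1_iterations(n), math.isqrt(2 * n))  # count ≈ sqrt(2n)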
2) Try to analyze how long the inner loop runs, as a function of i and n. Since we start at one, end at n, and count up by i, it runs n/i times. So the total time for the outer loop is
sum_{i = 1 .. n} n/i = n * sum_{i = 1 .. n} 1 / i = O(n log n)
The series sum_{i = 1 .. n} 1/i is called the harmonic series. It is well known that its partial sums grow as Θ(log n) (note the infinite series itself diverges). I can't include a short proof here, but it can be shown with calculus, and this is a series you just have to know. If you want to see a simple argument, look at the "comparison test" on Wikipedia; the proof there only shows the series is >= log n, but the same technique can be used to show it is O(log n) as well.
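Likewise, a quick sketch (mine) shows that snippet 2's total work grows on the order of n log n:

import math

def snippet2_iterations(n):
    count = 0
    for i in range(1, n + 1):
        j = 1
        while j <= n:    # mirrors j += i in the for loop
            count += 1
            j += i
    return count

for n in (100, 1000, 10_000):
    print(n, snippet2_iterations(n), round(n * math.log(n)))  # same order of growth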
3) This looks like kind of a trick question. The inner loop only does real work on the first outer iteration: with i = 1 it counts j up from 1 to n and exits with j = n + 1. After that we can never re-enter the inner loop, because no later statement makes j <= n again. Meanwhile j = j * i runs once per outer iteration with i a positive number, so j ends up at least as large as n!. For any significant value of n, this will cause an overflow. Ignoring that possibility, the code performs O(n) operations in total.
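A Python version of snippet 3 (my sketch; Python's arbitrary-precision integers remove the overflow concern) confirms that all inner-loop work happens on the first outer iteration:

def snippet3_iterations(n):
    count = 0
    j = 1
    for i in range(1, n + 1):
        j *= i               # j = j * i, run once per outer iteration
        while j <= n:        # only ever true on the first pass (i = 1)
            count += 1
            j += i
    return count

for n in (10, 100, 1000):
    print(n, snippet3_iterations(n))  # count == n: everything happens at i = 1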

Calculating time complexity of inner loops

So I am learning how to calculate time complexities of algorithms from Introduction to Algorithms by Cormen. The example given in the book is insertion sort:
1. for i from 2 to length[A]
2.     value := A[i]
3.     j := i-1
4.     while j > 0 and A[j] > value
5.         A[j+1] := A[j]
6.         j := j-1
7.     A[j+1] := value
Line 1. executes n times.
Line 4., according to the book, executes sum_{i = 2 .. n} t_i times, where t_i is the number of times the while-loop test is executed for that value of i.
So, as a general rule, is every inner loop's execution time represented by a summation?
Anecdotally, most loops can be represented by a summation. In general this isn't true; for example, I could create the loop

for (int i = n; i > 1; i = (i % 2 == 0) ? i / 2 : 3 * i + 1)

i.e. initialize i to n; on each iteration, if i is even divide it by two, else multiply i by three and add one; halt when i <= 1. What's the big-O of this? Nobody knows: this is the iteration behind the famous Collatz conjecture, and whether it even terminates for every n is an open problem.
Obviously I've never seen a real-life program using such a weird loop, but I have seen while loops that increase or decrease the counter depending on some arbitrary condition that may change from iteration to iteration.
When the inner loop has a condition, the number of times the program iterates through it (t_i) differs in each iteration of the outer loop. That's why the total is the summation of all the t_i's.
However, if the inner loop is a plain for loop, the number of inner iterations is the same every time, so you just multiply the number of outer iterations by the number of inner iterations,
like this:
for i = 1 to n
    for j = 1 to m
        s++

The complexity here is O(n*m).
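To tie this back to the insertion sort example above, here is a Python sketch (my own illustration) that counts how many times the line-4 test executes; that count is exactly the summation of the t_i's:

def insertion_sort_line4_count(A):
    """Count executions of the while-loop test (line 4)."""
    tests = 0
    for i in range(1, len(A)):        # line 1 (0-based indices here)
        value = A[i]                  # line 2
        j = i - 1                     # line 3
        while True:                   # line 4: count every test
            tests += 1
            if j >= 0 and A[j] > value:
                A[j + 1] = A[j]       # line 5
                j -= 1                # line 6
            else:
                break
        A[j + 1] = value              # line 7
    return tests

print(insertion_sort_line4_count([5, 4, 3, 2, 1]))  # worst case: 2+3+4+5 = 14
print(insertion_sort_line4_count([1, 2, 3, 4, 5]))  # best case: 4 (one test per i)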

How to calculate worst case analysis of this algorithm?

sum = 0;
for (int i = 0; i < N; i++)
    for (int j = i; j >= 0; j--)
        sum++;
From what I understand, the first line is 1 operation, 2nd line is (i+1) operations, 3rd line is (i-1) operations, and 4th line is n operations. Does this mean that the running time would be 1 + (i+1)(i-1) + n? It's just these last steps that confuse me.
To analyze the algorithm you don't want to go line by line asking "how much time does this particular line contribute?" The reason is that each line doesn't execute the same number of times. For example, the innermost line is executed a whole bunch of times, compared to the first line which is run just once.
To analyze an algorithm like this, try identifying some quantity whose value is within a constant factor of the total runtime of the algorithm. In this case, that quantity would probably be "how many times does the line sum++ execute?", since if we know this value, we know the total amount of time that's spent by the two loops in the algorithm.

To figure this out, let's trace what happens in these loops. On the first iteration of the outer loop, i == 0, so the inner loop executes exactly once (counting down from 0 to 0). On the second iteration of the outer loop, i == 1, and the inner loop executes exactly twice (once with j == 1, once with j == 0). More generally, on the kth iteration of the outer loop, the inner loop executes k + 1 times. This means that the total number of iterations of the innermost loop is given by
1 + 2 + 3 + ... + N
This quantity can be shown to be equal to

N(N + 1) / 2 = (N^2 + N) / 2 = N^2/2 + N/2

Of these two terms, the N^2/2 term is the dominant growth term, so if we ignore its constant factor we get a runtime of O(N^2).
Don't look at this answer as something you should memorize - think of all of the steps required to get to the answer. We started by finding some quantity to count, and then saw how that quantity was influenced by the execution of the loops. From this, we were able to derive a mathematical expression for that quantity, which we then simplified. Finally, we took the resulting expression and determined the dominant term, which serves as the big-O for the overall function.
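As a quick sanity check (my sketch, not part of the answer), counting the sum++ executions directly reproduces N(N + 1)/2:

def count_sum_increments(N):
    total = 0
    for i in range(N):              # i = 0 .. N-1
        for j in range(i, -1, -1):  # j = i down to 0: i + 1 iterations
            total += 1
    return total

for N in (5, 10, 100):
    print(N, count_sum_increments(N), N * (N + 1) // 2)  # the two agree exactly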
Work from inside-out.
sum++
This is a single operation on its own, as it doesn't repeat.
for(int j = i; j >= 0; j--)
This loops i+1 times (j runs from i down to 0). There are several operations in there, but you probably don't mean to count individual machine instructions, so for this question treat the loop as a multiplier of i+1. Since the loop body is a single operation, the loop and its block perform i+1 operations.
for(int i = 0; i < N; i++)
This loops N times, with i taking the values 0 through N-1. Since the block performs i+1 operations on each pass, the loop performs 1 + 2 + ... + N = N(N+1)/2 operations in total. And that's your answer! If you want big-O complexity, this simplifies to O(N^2).
It's not additive: the inner loop happens once for EACH iteration of the outer loop. So it's O(n^2).
By the way, this is a good example of why we use asymptotic notation for this kind of thing: depending on the definition of "operation", the exact count could vary pretty widely. (Is sum++ a single operation, or is it "add 1 to sum giving temp; store temp in sum"?) But since all of that can be hidden in a constant factor, it's still going to be O(n^2).
No; you don't count a specific number of operations for each line and then add them up. The entire point of constructions like 'for' is to make it possible for a given line of code to run more than once. You're supposed to use thinking and logic skills to figure out how many times the line 'sum++' will run, as a function of N. Hint: it runs once for every time that the third line is encountered.
How many times is the second line encountered?
Each time the second line is encountered, the value of 'i' is set. How many times does the third line run with that value of i? Therefore, how many times will it run overall? (Hint: if I give you a different amount of money on several different occasions, how do you find out the total amount I gave you?)
Each time the third line is encountered, the fourth line happens once.
Which line happens most often? How often does it happen, in terms of N?
So what interests you is sum++ and how many times you execute it.
The final value of sum gives you that answer.
Actually your loop is just:
sum_{n = 1 .. N} n
which equals N(N+1)/2. In big-O notation this gives O(N^2).
Also, despite the title of your question, there is no worst case in your algorithm; the running time depends only on N.
Or you could say that the worst case is when N goes to infinity.
Using Sigma notation to represent your loops:

sum_{i = 0 .. N-1} sum_{j = 0 .. i} 1 = sum_{i = 0 .. N-1} (i + 1) = N(N + 1)/2 = O(N^2)
