How is runtime affected when a for() loop is inside an if() statement?

Say we have this code:
void doSomething(int n) {
    for(int i = 0; i < n; i++) {
        if (i % 7 == 0) {
            for(int j = 0; j < n; j++) {
                print("*");
            }
        }
    }
}
What are the big-O and big-omega runtimes (with proofs/work shown)?
The if() statement is blowing my mind, especially how to prove the big-omega bound (for big-O we can just ignore the condition, since it's an upper bound).
Any help is much appreciated. Thanks!

Let's begin by trying to rewrite things in a way that more clearly exposes the runtime. For example, that inner loop has a runtime of Θ(n), so let's rewrite the code like this:
for(int i = 0; i < n; i++) {
    if (i % 7 == 0) {
        do Θ(n) work
    }
}
Now, think about what happens when this loop runs. One out of every seven iterations will trigger that inner loop and do Θ(n) work; the remaining six-sevenths won't, and will do only Θ(1) work per iteration. This means that the total work done is
n × ((1 / 7) × Θ(n) + (6 / 7) × Θ(1))
= n × (Θ(n) + Θ(1))
= n × Θ(n)
= Θ(n²).
In other words, the total work done here is Θ(n²). And that makes sense, too, since essentially that if statement just reduces the work done by a factor of 1/7, and big-O notation ignores constant factors.
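If you want to sanity-check this empirically, here is a minimal counting sketch (my own instrumented variant of the code above, not part of the original question); the operation count should roughly quadruple each time n doubles, which is the signature of Θ(n²):

#include <stdio.h>

int main(void) {
    for (int n = 1000; n <= 8000; n *= 2) {
        long long ops = 0;
        for (int i = 0; i < n; i++) {
            if (i % 7 == 0) {
                for (int j = 0; j < n; j++) {
                    ops++;   /* stands in for print("*") */
                }
            }
        }
        printf("n = %5d   ops = %lld\n", n, ops);
    }
    return 0;
}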
You had originally written the question for this code:
for(int i = 0; i < n; i++) {
    if (n % 7 == 0) {
        for(int j = 0; j < n; j++) {
            print("*");
        }
    }
}
Here's my original answer:
Let's begin by trying to rewrite things in a way that more clearly exposes the runtime. For example, that inner loop has a runtime of Θ(n), so let's rewrite the code like this:
for(int i = 0; i < n; i++) {
    if (n % 7 == 0) {
        do Θ(n) work
    }
}
So now let's think about what's happening here. There are two possibilities. First, it might be the case that n is a multiple of seven. In that case, the if statement triggers every time, and on each of the n outer loop iterations we do Θ(n) work. Therefore, the total work done in the worst case will be Θ(n²). We can't tighten that bound, because as n gets larger and larger we keep running into more and more multiples of seven. Second, it might be the case that n isn't a multiple of seven. When that happens, the if statement never triggers, so we do n loop iterations that each do Θ(1) work, and in the best case we'll do Θ(n) total work.
Overall, this means that in the worst case we do Θ(n²) work, and in the best case we do Θ(n) work. We can therefore say that the runtime of the function is O(n²) and Ω(n), though I think that the more precise descriptions of the best- and worst-case runtimes are a little more informative.
The key to the analysis here is realizing that the if statement is either going to always fire or never fire. That makes it easier for us to split the reasoning into two separate cases.
Remember that a big-O analysis isn't just about multiplying a bunch of things together. It's about thinking about what the program will actually do if we were to run it, then thinking through how the logic will play out. You'll rarely be wasting your time if you try approaching big-O analysis this way.
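In that spirit, here is a minimal sketch of the original n % 7 version (my own instrumented variant, not from the answer) that makes the Θ(n²)-versus-Θ(n) gap between the two cases visible:

#include <stdio.h>

/* Count the work done by the version that tests n % 7. */
long long countOps(int n) {
    long long ops = 0;
    for (int i = 0; i < n; i++) {
        ops++;   /* the Theta(1) check on each outer iteration */
        if (n % 7 == 0) {
            for (int j = 0; j < n; j++) {
                ops++;   /* stands in for print("*") */
            }
        }
    }
    return ops;
}

int main(void) {
    printf("n = 7000 (multiple of 7):  ops = %lld\n", countOps(7000));
    printf("n = 7001 (not a multiple): ops = %lld\n", countOps(7001));
    return 0;
}

For n = 7000 this reports about 49 million operations; for n = 7001 it reports 7001, which is exactly the always-fire/never-fire split described above.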

Related

What is the worst-case asymptotic cost of this algorithm?

void Ordena(TipoItem *A, int n)
{
    TipoItem x;
    int i, j;
    for (i = 1; i < n; i++)
    {
        /* Bubble the smallest remaining key down to position i. */
        for (j = n; j > i; j--)
        {
            if (A[j].chave < A[j - 1].chave)
            {
                /* Swap A[j] and A[j - 1]: three constant-time moves. */
                x = A[j];
                A[j] = A[j - 1];
                A[j - 1] = x;
            }
        }
    }
}
I believe the worst case is when the array is in descending order, am I right?
About the asymptotic cost in terms of the number of movements, is it O(n²) or O(2n²)?
I've just started learning about asymptotic cost (as you can tell).
As you were saying, the worst-case scenario here is the one where the array is in descending order, because then the body of the if statement executes every single time. However, since we are talking about asymptotic notation, it is actually irrelevant whether or not the if body executes, because the cost of those three swap instructions is constant (i.e., O(1)). The important thing is how many times you loop over the elements of the array, and if you do the math, that happens exactly n²/2 − n/2 times (the inner loop runs n − i times for each i from 1 to n − 1). So the computational complexity is O(n²), because the dominant term is n²/2, and asymptotic notation doesn't take the multiplicative factor 1/2 into account, even though such factors can influence the actual execution time.
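For reference, here is the count worked out (my own derivation of the figure quoted above): the inner loop runs n − i times for each i from 1 to n − 1, so the total number of comparisons is

\sum_{i=1}^{n-1} (n - i) = \sum_{k=1}^{n-1} k = \frac{n(n-1)}{2} = \frac{n^2}{2} - \frac{n}{2} = O(n^2)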

Big-O & Exact Runtime

I am learning about Big-O and although I started to understand things, I still am not able to correctly measure the Big-O of an algorithm.
I've got a code:
int n = 10;
int count = 0;
int k = 0;
for (int i = 1; i < n; i++)
{
    for (int p = 200; p > 2*i; p--)
    {
        int j = i;
        while (j < n)
        {
            do
            {
                count++;
                k = count * j;
            } while (k > j);
            j++;
        }
    }
}
for which I have to determine the Big-O and the exact runtime.
Let me start: the first for loop is O(n) because it depends on the variable n.
The second for loop is nested, so the big-O so far is O(n^2).
So how do we calculate the while (j < n) loop (that makes three loops so far), and how do we calculate the do ... while (k > j) loop, which, when it runs, makes four, as in this case?
A comprehensive explanation would be really helpful.
Thank you.
Unless I'm much mistaken, this program has an infinite loop, and therefore its time complexity cannot usefully be analyzed.
In particular
do
{
    count++;
    k = count * j;
} while (k > j);
as soon as this loop is entered for the second time, with count = 2, k will be set greater than j and will remain so indefinitely (ignoring integer overflow, which will happen pretty quickly).
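To see this concretely, here is a minimal trace sketch (my own addition; the safety cap is not in the original code, it just lets the demonstration terminate):

#include <stdio.h>

int main(void) {
    int count = 1, j = 2, k;   /* state when the do-while is entered the second time */
    int cap = 5;               /* safety cap so this demo terminates */
    do {
        count++;
        k = count * j;
        printf("count = %d, k = %d, j = %d\n", count, k, j);
    } while (k > j && --cap > 0);   /* without the cap, k > j stays true forever */
    return 0;
}

Each pass multiplies a growing count by j, so k only pulls further ahead of j and the condition k > j never becomes false.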
I understand that you're learning Big-Oh notation, but creating toy examples like this probably isn't the best way to understand Big-Oh. I would recommend reading a well-known algorithms textbook where they walk you through new algorithms, explaining and analyzing the time and space complexity as they do so.
I am assuming that the while loop should be:
while (k < j)
Now, in this case, the first for loop would take O(n) time. The second loop would take O(p) time.
Now, for the third loop,
int j = i;
while (j < n){
    ...
    j++;
}
could be rewritten as
for(j=i;j<n;j++)
meaning it shall take O(n) time.
For the last loop, the value of k increases exponentially.
Consider it to be same as
for(k = count*j; k < j; j++, count++)
Hence it shall take O(log n) time.
The total time complexity is O(n^2 * p * log n).

What is the time complexity of while loops?

I'm trying to find the time complexity of while loops and I have no idea where to begin. I understand how to find the complexity class of for loops, but when it comes to while loops, I get completely lost. Any advice/tips on where to begin?
Here is an example of a problem:
x = 0;
A[n] = some array of length n;
while (x != A[i]) {
    i++;
}
When you gotta do something n times, you have to do it n times. You can use a for loop, a while loop or whatever that your programming language (or pseudo code!) offers.
Crudely, big O notation comments on the amount of work you have to do, and doesn't care about how you do it (somewhat less crudely, it comments on how the amount of work that needs to be done grows as the input grows).
(More details below)
I think you are confusing things here.
for and while are programming language constructs to express operations that are to be repeated.
Algorithm analysis is agnostic of the language of implementation, and doesn't care about the actual constructs you use while expressing the algorithm.
Consider the following:
1. Input n
2. Do OperationX n times
The Big O notation machinery helps you in commenting on the complexity of the above operation. This helps in many cases. For example, it can help you in comparing the runtime of the operation above with the following:
1. Input n
2. Input m
3. Repeat m times: do OperationX n times.
The Big O notation will tell you that the former is O(n) and the latter is O(m * n) (assuming OperationX takes a constant time).
You can write the code using the loop construct of your choice, it doesn't matter.
for(int i = 0; i < n; i++) {
    operationX;
}
is the first operation, and
i = j = 0;
while(i < n) {
    j = 0;
    while(j < m) {
        operationX;
        j++;
    }
    i++;
}
the second. Big O notation doesn't really care that the for and while were switched.
A[n] = some array of length n;
for(x = 0; x != A[i]; i++) {
}
is a for rewrite of your question with the same complexity (O(n)).

Time in Big O notation for if(N^2%N==0)

I wanted to understand how to calculate time complexity on if statements.
I got this problem:
sum = 0;
for(i=0;i<n;i++)
{
    for(j=1;j<i*i;j++)
    {
        if(j%i==0)
        {
            for(k=0;k<j;k++)
            {
                sum++;
            }
        }
    }
}
Now, I understand that lines (1) and (2) give n^3 in total, and according to my professor the total time is n^4. I also see that the if statement checks whether the remainder of n^2/n is 0, and in my opinion the for loop on line (4) should be n^2, but I don't know how to calculate it so that lines (3) through (4) come to O(n) in total. Any help is welcome. Thanks in advance.
Let's compute sum by rewriting the program and observing some math facts:
Phase 1:
for (i = 0; i < n; i++) {
    for (j = 0; j < i*i; j += i) {
        sum += j;
    }
}
Phase 2 (use arithmetic progression):
for (i = 0; i < n; i++) {
    sum += i*i * (i - 1) / 2;   /* total of the multiples of i below i*i */
}
Phase 3:
The sum of cubes is a polynomial of degree 4 in n.
So, sum = O(n^4). The original program reaches that total by adding 1 at a time, so it needs O(n^4) additions.
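Filling in the closed form behind Phase 3 (my own arithmetic, using the standard sum-of-cubes identity):

\sum_{i=0}^{n-1} \frac{i^2 (i-1)}{2} = \Theta\!\left( \sum_{i=0}^{n-1} i^3 \right) = \Theta\!\left( \left( \frac{(n-1)n}{2} \right)^{2} \right) = \Theta(n^4)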
There are 6 lines of actual code here. (You had them all on one line which was very hard to read. John Odom fixed that.)
1) Runs in O(1)
2) Runs in O(n), total is O(n)
3) Runs in O(n^2), total is O(n^3)
4) Runs in O(1); it filters the incoming iterations from O(n^3) down to O(n^2)
Edit: I missed the fact that this loop goes to n^2 instead of n.
5) Runs in O(n), but this n is the square of the original n, so it's really O(n^2); total is O(n^4)
6) Runs in O(1), total is O(n^4)
Thus the total time is O(n^4)
Note, however:
for(j=1;j<i*i;j++)
{
if(j%i==0)
This would be much better rewritten as
for(j=i;j<i*i;j+=i)
(hopefully I got the syntax right, it's been ages since I've done C.)
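The syntax is indeed valid C. Here is a small self-contained check (my own snippet, not part of the answer) that the rewrite visits exactly the same j values, i.e. the multiples of i below i*i, as the original loop-plus-if:

#include <stdio.h>

int main(void) {
    for (int n = 1; n <= 20; n++) {
        long original = 0, rewritten = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 1; j < i*i; j++)
                if (j % i == 0)            /* safe: when i == 0 the loop body never runs */
                    original += j;
            for (int j = i; j < i*i; j += i)
                rewritten += j;
        }
        printf("n = %2d  %s\n", n, original == rewritten ? "match" : "MISMATCH");
    }
    return 0;
}

Summing the visited j values on both sides and comparing gives a quick equivalence check; every n prints "match".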

big theta for quad nested loop with hash table lookup

for (int i = 0; i < 5; i++) {
    for (int j = 0; j < 5; j++) {
        for (int k = 0; k < 5; k++) {
            for (int l = 0; l < 5; l++) {
                look up in a perfect constant time hash table
            }
        }
    }
}
What would the running time of this be in big theta?
My best guess, a shot in the dark: I always see that nested for loops are O(n^k), where k is the number of loops, so these loops would be O(n^4); then would I multiply by O(1) for the constant-time lookup? What would all this be in big theta?
If you consider that accessing the hash table is really Θ(1), then this algorithm runs in Θ(1) too, because it makes only a constant number (5^4 = 625) of hash-table lookups.
However, if you change 5 to n, it will be Θ(n^4), because you'll do exactly n^4 constant-time operations.
The big-theta running time would be Θ(n^4).
Big-O is an upper bound, whereas big-theta is a tight bound. What this means is that saying the code is O(n^5) is also correct (but Θ(n^5) is not); whatever is inside the big-O just has to be asymptotically greater than or equal to n^4.
I'm assuming the 5 can be substituted with another value (i.e., is n); if not, the loops would run in constant time (O(1) and Θ(1)), since 5^4 is a constant.
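For reference, these are the standard definitions behind that distinction (my own addition, not part of the answer):

f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \le c \, g(n) \ \text{for all } n \ge n_0

f(n) = \Theta(g(n)) \iff \exists\, c_1, c_2 > 0,\ n_0 : c_1 \, g(n) \le f(n) \le c_2 \, g(n) \ \text{for all } n \ge n_0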
Using Sigma notation:

\sum_{i=0}^{4} \sum_{j=0}^{4} \sum_{k=0}^{4} \sum_{l=0}^{4} 1 = 5^4 = 625

Indeed, the instructions inside the innermost loop will execute exactly 625 times.
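And a trivial check of that count (my own snippet, with the hash lookup replaced by a counter):

#include <stdio.h>

int main(void) {
    int lookups = 0;
    for (int i = 0; i < 5; i++)
        for (int j = 0; j < 5; j++)
            for (int k = 0; k < 5; k++)
                for (int l = 0; l < 5; l++)
                    lookups++;   /* stands in for the constant-time hash lookup */
    printf("lookups = %d\n", lookups);   /* prints lookups = 625 */
    return 0;
}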
