O(f(n)) for two for loops - big-o

The first loop would give O(n), but the second loop will run forever, won't it? Would this mean the overall complexity is just O(n), or does the inner for loop count as something more than a constant?
for (int i = 0; i < n; i++) {
    for (int j = 0; j <= 42; j--) {
        System.out.println(i + j);
    }
}

The System.out.println indicates this is Java.
In Java, the int type is a 32-bit signed integer type, with values ranging from -2,147,483,648 to 2,147,483,647. Decreasing a variable that is already at the minimum value wraps it around to the maximum.
This means that the inner loop will run for 2,147,483,649 iterations (once for each value of j from 0 down to -2,147,483,648) before j wraps around, becomes larger than 42, and terminates the inner loop, as the snippet below illustrates. The inner loop is thus not infinite, as suggested in the question, but finite and constant.
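A minimal sketch (my own, not from the question) of that wrap-around behaviour:

public class WrapDemo {
    public static void main(String[] args) {
        int j = Integer.MIN_VALUE;    // -2,147,483,648, the smallest int
        j--;                          // no error: the value silently wraps around
        System.out.println(j);        // prints 2147483647
        System.out.println(j <= 42);  // prints false, so the inner loop finally stops
    }
}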
Since the inner loop always executes this many steps, it is constant, and thus the big-O notation is wholly dictated by the outer loop.
So the big-O notation for this piece of code is O(n).
If the compiler/runtime could be configured to crash when you try to underflow an int, instead of wrapping it, the program would always crash: it would only get partway through the first iteration of the outer loop before failing at the bottom of the inner loop, so it would be O(1). I do not know enough Java, however, to say whether this is possible.
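For reference, a hedged sketch of what that would look like (my addition, not part of the original answer): standard Java has no compiler or JVM switch that makes a plain j-- fail on underflow, but the exact-arithmetic helpers in java.lang.Math (Java 8+) throw instead of wrapping:

static void loopWithOverflowCheck(int n) {
    for (int i = 0; i < n; i++) {
        // Math.decrementExact throws ArithmeticException on underflow, so this
        // variant fails partway through the first outer iteration: O(1) work.
        for (int j = 0; j <= 42; j = Math.decrementExact(j)) {
            System.out.println(i + j);
        }
    }
}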
If, instead of Java, this is supposed to be in the context of another, theoretical programming language, or is read as a pure algorithmic description in which int does not actually have a lower bound, then the program does not terminate, and it does in fact not have a big-O notation at all.
More about this last paragraph here: O-notation, O(∞) = O(1)?.

Related

I have implemented two for loops using a single while loop. Should its time complexity be O(n) or O(n^2)? Would it bypass the O(n) constraint in a question?

In various coding competitions there is a constraint of O(n), so I thought of using a single while loop that works like a nested for loop. Would I be able to bypass the constraint on competitive coding platforms?
Code:
#include <bits/stdc++.h>
using namespace std;

int main() {
    int i = 0, j = 0, n;
    cin >> n;
    while (i < n && j < n) {
        cout << i << " " << j << endl;
        if (j == n - 1) {
            ++i;
            j = 0;
        } else {
            j++;
        }
    }
    return 0;
}
You can also say that your algorithm runs in constant-time O(1) with respect to
n = the number of coffee cups drunk by Stackoverflow users.
As the number of coffee cups grows, your code still takes the same time to run. This is a perfectly correct statement.
However, those constraints you talk of specify a maximum time-complexity with respect to some given, or understood, variable n. Usually the size of the input to your program, again measured in some way that is either explicitly given or implicitly understood. So no, re-defining the variables won't get you around the constraints.
That said, writing a nested loop as a single loop, such as in the way you discovered, is not useless, as there are situations where it might come in handy. But no, it will not improve your asymptotic complexity. Ultimately, you need to count the total number of operations, or measure the actual time, for various inputs, and this is what will give you your time complexity. In your case, it's O(n^2) with respect to the value of n given to your code.
You can't determine complexity by counting loop depth. Your code is still O(n^2) (where n is the value read from input), because j will be modified n^2 times, and in fact it will even print out n^2 lines.
To determine complexity, you need to count operations. A lot of the time, when you see two nested loops, they will both do O(n) iterations and O(n^2) time will result, but that is a hint only, not a universal truth.
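For example, here is a hedged sketch (my own Java example, not from the question) of two nested loops whose total work is only O(n), because the inner loop advances a shared index that never moves backwards; counting operations, not loop depth, is what gives the right bound:

static int countRuns(int[] a) {
    int runs = 0;
    int i = 0;
    while (i < a.length) {                       // outer loop
        int j = i;
        while (j < a.length && a[j] == a[i]) {   // inner loop
            j++;                                 // each element is visited exactly once overall
        }
        runs++;
        i = j;                                   // resume where the inner loop stopped
    }
    return runs;   // counts runs of equal elements; total work is O(n) despite the nesting
}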

If my termination condition in a for-loop is i < n * n, is my running time then O(n^2)?

So I'm just a bit confused about how to correctly interpret the running time of this for loop:
for (int i = 0; i < n * n; ++i) {}
I know the basics of O-notation; I'm just unsure how to correctly interpret the running time, and I couldn't find similar examples.
The problem is actually a triple-nested for loop, and I know you just multiply the running times of nested loops, but this one makes me unsure.
Yes.
n multiplied by itself is n^2, and you perform n^2 iterations.
There are no constant factors and no other considerations in this short example.
The complexity is simply O(n^2).
Note that this does not consider any hypothetical operations performed inside the loop. Also note that, if we take the loop exactly at face value, it doesn't actually do any meaningful work so we could say that it has no algorithmic complexity at all. You would need to present a real example to really say.
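Since the real problem is said to be a triple-nested loop, here is a hedged, hypothetical sketch (my own code, not the asker's actual program) of how the multiplication works once such a loop contains real work:

static long countOps(int n) {
    long ops = 0;
    for (long i = 0; i < (long) n * n; ++i) {  // the loop in question: n^2 iterations
        for (int k = 0; k < n; ++k) {          // a hypothetical inner loop: n iterations each
            ops++;                             // O(1) work per innermost step
        }
    }
    return ops;   // n^2 * n = n^3 operations in total
}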

Run time to Theta Notation

After looking at the code:
for (i = n - 1; i >= 0; i -= 2)
    for (j = 15; j < 100; j += 3)
        sum += i + j;
I would say that the run time for this in terms of Theta notation is Θ(n^2), as there are two loops and constants (i and j). Would this be correct?
I’ll put in another plug for that old asymptotic maxim of
“When in doubt, work inside-out!”
Let’s take another look at that code:
for (i = n - 1; i >= 0; i -= 2)
    for (j = 15; j < 100; j += 3)
        sum += i + j;
Let’s begin with the innermost statement, the one that adds into the variable sum. That statement’s runtime is independent of any of the other variables here, so it does Θ(1) work. So let’s rewrite the code like this:
for (i = n - 1; i >= 0; i -= 2)
    for (j = 15; j < 100; j += 3)
        do Theta(1) work
Now, let's look at that inner for loop. Notice that this loop always runs the exact same number of times (29, since j takes the values 15, 18, ..., 99), regardless of the values of the other variables. That means that this loop runs a constant number of times and does a constant amount of work, and so the net effect of that loop is to do Θ(1) work. This is shown here:
for (i = n - 1; i >= 0; i -= 2)
    do Theta(1) work
So now we're left with this final loop. Here, we see that the work done depends directly and linearly on n. Specifically, this loop does Θ(n) iterations and does Θ(1) work per iteration, and so the total work done is Θ(n).
Notice that it’s not the number of for loops that determines the runtime, but rather what those loops are doing. Counting the number of loops is a good way to get a rough estimate for the runtime, but the approach I illustrated above of working from the inside outward is more precise.
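As a hedged cross-check (my own sketch, not part of the original answer), counting the additions directly shows that the inner loop only contributes a constant factor:

static long countAdds(int n) {
    long adds = 0;
    for (int i = n - 1; i >= 0; i -= 2) {      // about n/2 iterations: Theta(n)
        for (int j = 15; j < 100; j += 3) {    // always 29 iterations: Theta(1)
            adds++;                            // stands in for sum += i + j
        }
    }
    return adds;   // roughly 29 * n / 2, i.e. Theta(n)
}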

Big O Notation - Growth Rate

I am trying to understand if my reasoning is correct:
If I am given the following snippet of code and asked to find its Big O:
for (int i = 3; i < 1000; i++)
    sum++;
I want to say O(n) because we are dealing with one for loop, and sum++ is iterated, say, n times. But then, looking at this, I realise we are not dealing with n at all, as we are given the number of times this for loop iterates... yet in my mind it would be wrong to say that this has a Big O of O(1), because the growth is linear and not constant and depends on the size of this loop (although the loop bound is 'constant'). Would I be correct in saying that this is O(n)?
Also, another one that has me thinking around which has a similar setup:
for (int i = 0; i < n * n * n; i++)
    for (int j = 0; j < i; j++)
        sum++;
Now here, again, I know that when dealing with a nested loop containing an outer and an inner loop, we would use the multiplication rule to derive our Big O. Let's assume that the inner loop was in fact j < n; then I would say that the Big O of this snippet of code is O(n^4). But as it isn't, and the second loop runs its iterations off i and not n, would it be correct to call this O(n^3)?
I think what is throwing me off is when 'n' does not appear and we're given a constant or another variable, and all of a sudden I assume n must not be considered for that section of code. Having said that, the other part of my reasoning tells me that despite not seeing an 'n', I should still treat the code as though there were an n, as the growth rate would be the same regardless of the variable?
It works best if you consider the code to always be within a function, where the function's arguments are used to calculate complexity. Thus:
// this is O(1), since it always takes the same time
void doSomething() {
    for (int i = 3; i < 1000; i++)
        sum++;
}
And
// this is O(n^6), since it only takes one argument
// and if you plot it, the curve matches t = k * n^6
void doSomethingElse(int n) {
    for (int i = 0; i < n * n * n; i++)
        for (int j = 0; j < i; j++)
            sum++;
}
In the end, the whole point of big-O is to say what the run-times (or memory footprints; but if you don't say anything, you are referring to run-times) look like as the problem size increases. It matters not what happens on the inside (although you can use that to estimate complexity) - what really matters is what you would measure outside.
Looking closer at your second snippet, it's O(n^6) because:
The outer loop runs exactly n^3 times, and the inner loop runs, on average, n^3 / 2 times.
Therefore, the inner sum++ runs k * n^3 * n^3 times (with k a constant). In big-O notation, that's O(n^6).
The first is either O(1) or simply a wrong question, just as you already worked out.
The second is O(n^6). Try to imagine the size of the inner loop. On the first iteration it will be 1, on the second 2, on the i-th it will be i, and on the last it will be n*n*n. So on average it is n*n*n/2, which is O(n*n*n). That, times the outer O(n^3), is O(n^6) overall.
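As a hedged cross-check (my own sketch, not from the answers above), you can count how many times sum++ actually executes; with m = n^3, the count is m * (m - 1) / 2, which grows like n^6 / 2:

static long countIncrements(int n) {
    long count = 0;
    long m = (long) n * n * n;          // outer bound, n^3
    for (long i = 0; i < m; i++) {
        for (long j = 0; j < i; j++) {  // i iterations on the i-th outer pass
            count++;
        }
    }
    return count;  // m * (m - 1) / 2, on the order of n^6 (only practical for small n)
}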
Although the calculation of O() for your question, by others, may be correct, here is a little more insight that should help delineate the conceptual outlook for this whole asymptotic analysis story.
I think what is throwing me off is when 'n' does not appear and we're given a constant or another variable, and all of a sudden I assume n must not be considered for that section of code.
The simplest way to understand this one is to identify if the execution of a line of code is affected by/related to the current value of n.
Had the inner loop been, let's say, j < 10 instead of j < i, the complexity would indeed have been O(n^3), along the lines of the sketch below.
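A hedged illustration of that hypothetical variant (my own code, not the asker's):

static long countWithConstantInner(int n) {
    long count = 0;
    for (long i = 0; i < (long) n * n * n; i++) {  // n^3 iterations
        for (int j = 0; j < 10; j++) {             // constant 10 iterations each
            count++;
        }
    }
    return count;   // 10 * n^3 increments, i.e. O(n^3)
}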
Why is any constant considered O(1)?
This may agreeably sound a little counter-intuitive at first however, here is a small conceptual summary to clear the air.
Let us say that your first loop runs 1000 times. Then you set it to run 10^1000 times and are tempted to believe that, hey, it surely doesn't take the same time anymore.
Fair enough! Even though it may now take your computer 5 seconds more to run the same piece of code, the time complexity still remains O(1).
What this practically means is that you can actually calculate the time that it takes your computer to execute that piece of code and it will remain constant forever (for the same configuration).
Big-Oh is actually a function of the input, not a measure of the absolute value (time/space) itself.
I hope that the above explanation also helps clarify why we actually ignore the constants in the O() notation.
Why is this Big-Oh thing so generalized, and why is it used in the first place?
I thought of including this extra info as I myself had this question in mind when learning this topic for the first time.
Asymptotic time complexity is an a priori analysis of any algorithm to understand the worst-case (Big-Oh) behavior (time/space) of that program regardless of the size of the input.
E.g., your second snippet cannot perform worse than O(n^6).
It is generalized because from one computer to another, only the constant changes, not Big-Oh.
With more experience, you will realize that practically, you want your algorithm's time complexity to be as asymptotically small as possible. Up to a polynomial function it is fine. But for large inputs, today's computers start coughing if you try to run an algorithm with exponential time complexity of the order O(k^n) or O(n^n), e.g. the Travelling Salesman and other NP-complete/NP-hard problems.
Hope this adds to the info. :)

Big Oh Notation and Calculating the Running Time for a Triple-Nested For-Loop

In Computer Science, it is very important for Computer Scientists to know how to calculate the running times of algorithms in order to optimize code. For you Computer Scientists, I pose a question.
I understand that, in terms of n, a double-nested for-loop typically has a running time of n^2 and a triple-nested for-loop typically has a running time of n^3.
However, for a case where the code looks like this, would the running time be n^4?
x = 0;
for (a = 0; a < n; a++)
    for (b = 0; b < 2 * a; b++)
        for (c = 0; c < b * b; c++)
            x++;
I simplified the running time for each line to be virtually (n+1) for the first loop, (2n+1) for the second loop, and (2n)^2 + 1 for the third loop. Assuming the terms are multiplied together, and we extract the highest term to find the Big Oh, would the running time be n^4, or would it still follow the usual running time of n^3?
I would appreciate any input. Thank you very much in advance.
You are correct: n * 2n * 4n^2 = O(n^4).
The triple nested loop only means there will be three numbers to multiply to determine the final Big O - each multiplicand itself is dependent on how much "processing" each loop does though.
In your case the first loop does O(n) operations, the second one O(2n) = O(n), and the inner loop does O(n^2) operations, so overall O(n * n * n^2) = O(n^4).
Formally, using sigma notation, you can obtain this:
sum(a=0 to n-1) of (sum(b=0 to 2a-1) of (b^2)) = Theta(sum(a=0 to n-1) of a^3) = Theta(n^4)
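As a hedged cross-check (my own sketch, not part of the answer), counting x++ directly gives a total that grows like (2/3) * n^4, confirming O(n^4):

static long countX(int n) {
    long x = 0;
    for (int a = 0; a < n; a++) {
        for (int b = 0; b < 2 * a; b++) {
            for (long c = 0; c < (long) b * b; c++) {
                x++;
            }
        }
    }
    return x;   // about (2/3) * n^4, i.e. Theta(n^4)
}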
Could this be a question for Mathematics?
My gut feeling, like BrokenGlass's, is that it is O(n^4).
EDIT: Sum of squares and Sum of cubes give a pretty good understanding of what is involved. The answer is a resounding O(n^4): sum(a=0 to n) of (sum(b=0 to 2a) of (b^2)). The inner sum is on the order of a^3. Therefore your outer sum is on the order of n^4.
Pity, I thought you might get away with some log instead of n^4. Never mind.
