In various coding competitions there is a constraint of O(n). So I thought of implementing the logic with a single while loop that works like a nested for loop. Would that let me bypass the constraint on competitive coding platforms?
Code:
#include<bits/stdc++.h>
using namespace std;

int main() {
    int i = 0, j = 0, n;
    cin >> n;
    while (i < n && j < n) {
        cout << i << " " << j << endl;
        if (j == n - 1) {
            ++i;
            j = 0;
        }
        else {
            j++;
        }
    }
    return 0;
}
You can also say that your algorithm runs in constant-time O(1) with respect to
n = the number of coffee cups drunk by Stackoverflow users.
As the number of coffee cups grows, your code still takes the same time to run. This is a perfectly correct statement.
However, those constraints you talk of specify a maximum time-complexity with respect to some given, or understood, variable n. Usually the size of the input to your program, again measured in some way that is either explicitly given or implicitly understood. So no, re-defining the variables won't get you around the constraints.
That said, writing a nested loop as a single loop, in the way you discovered, is not useless; there are situations where it comes in handy. But no, it will not improve your asymptotic complexity. Ultimately, you need to count the total number of operations, or measure the actual time, for various inputs, and that is what gives you your time complexity. In your case, it's O(n²) with respect to the value of n given to your code.
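To make that concrete, here is a small sketch (the counter variables and the sample value of n are mine, added purely for illustration) that instruments your flattened loop alongside an ordinary nested pair; both counts come out to n*n:

#include <iostream>
using namespace std;

int main() {
    int n = 5;                                  // any sample value
    long long flatCount = 0, nestedCount = 0;

    // your flattened version, instrumented with a counter
    int i = 0, j = 0;
    while (i < n && j < n) {
        ++flatCount;
        if (j == n - 1) { ++i; j = 0; } else { ++j; }
    }

    // an ordinary nested pair visiting the same (i, j) combinations
    for (int a = 0; a < n; ++a)
        for (int b = 0; b < n; ++b)
            ++nestedCount;

    cout << flatCount << " " << nestedCount << endl;   // prints 25 25, i.e. n*n both times
    return 0;
}

Either way you write it, the number of operations grows with n*n, so the single-loop form does not change the complexity.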
You can't determine complexity by counting loop depth. Your code is still O(n²) (where n is the value read from input), because j will be modified n² times, and in fact it will even print n² lines.
To determine complexity, you need to count operations. Often, when you see two nested loops, they will each do O(n) iterations and O(n²) time will result, but that is a hint only, not a universal truth.
So I'm just a bit confused about how to correctly interpret the running time of this for loop:
for (int i = 0; i < n * n; ++i) {}
I know the basics of O-notation; I'm just unsure how to correctly interpret the running time, and I couldn't find similar examples.
The actual problem is a triple nested for loop, and I know you just multiply the running times of nested loops, but this one makes me unsure.
Yes.
n multiplied by itself is n², and you perform n² iterations.
There are no constant factors and no other considerations in this short example.
The complexity is simply O(n²).
Note that this does not consider any hypothetical operations performed inside the loop. Also note that, if we take the loop exactly at face value, it doesn't actually do any meaningful work so we could say that it has no algorithmic complexity at all. You would need to present a real example to really say.
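If you want to convince yourself empirically, here is a quick sketch (the counter and the sample value of n are mine, purely for illustration) that confirms the iteration count:

#include <iostream>
using namespace std;

int main() {
    int n = 7;                        // any sample value
    long long count = 0;
    for (int i = 0; i < n * n; ++i)   // the loop from the question
        ++count;
    cout << count << endl;            // prints 49, i.e. n*n = n^2 iterations
    return 0;
}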
To find the time complexity, I set a value for n, but once I iterated through the algorithm, I was unable to determine what it was. Any suggestions on how to find a formula for it so I can determine what the big-O is?
for (int i = 0; i < 2*n; i++) {
    X
    X
}
for (int i = n; i > 0; i--) {
    X
}
X are just operations in the algorithm.
I set n to two, and it increases very fast: every time it goes through the loop, n doubles. It looks like it might be 2^n.
Since i increases by 1 each time through the first loop and n doubles each time through the loop, do you think the loop would ever terminate? (It probably would terminate when 2*n overflows, but if you're operating in an environment that, say, automatically switches to a Big Integer package when numbers exceed the size of a primitive int, the program will probably simply crash when the system runs out of memory.)
But let's say that this is not what's happening and that n is constant. You don't say whether the execution time for X depends on the value of i, but let's say it does not.
The way to approach this is to answer the question: Since i increases by 1 each time through the first loop, how many iterations will be required to reach the loop termination condition (for i to be at least 2*n)? The answer, of course, is O(n) iterations. (Constant coefficients are irrelevant for O() calculations.)
By the same reasoning, the second loop requires O(n) iterations.
Thus the overall program requires O(n + n) = O(n) iterations (and O(n) executions of X).
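If it helps to see the linear growth, here is a sketch (it treats each X as one counted unit of work, which is an assumption on my part) that prints the total number of X executions for a few values of n:

#include <iostream>
using namespace std;

int main() {
    for (int n : {10, 100, 1000}) {
        long long ops = 0;
        for (int i = 0; i < 2 * n; i++)   // first loop: two X's per iteration
            ops += 2;
        for (int i = n; i > 0; i--)       // second loop: one X per iteration
            ops += 1;
        cout << "n = " << n << ": X executed " << ops << " times" << endl;
    }
    return 0;                             // prints 50, 500, 5000 -- growth is linear in n
}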
Your time complexity should be O(n). I assume you don't have any other loops where X appears. You are using 2*n, which just doubles n for this algorithm, so your time complexity still grows linearly.
If you, for example, use Floyd's algorithm, which includes 3 nested for loops, you can draw the conclusion that it has a time complexity of O(n^3), where n is the number of elements and the exponent 3 comes from the 3 nested loops.
You may proceed like the following:
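(A sketch of that summation, counting one unit of work for each X and assuming each X is constant-time:)

\sum_{i=0}^{2n-1} 2 + \sum_{i=1}^{n} 1 = 4n + n = 5n = O(n)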
Note that you can improve your algorithm (the first loop) by avoiding multiplying n by 2 at every evaluation of the loop condition: compute 2*n once before the loop.
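For example (a minimal sketch of that tweak; it assumes the loop body does not modify n):

#include <iostream>
using namespace std;

int main() {
    int n;
    cin >> n;
    int limit = 2 * n;                 // compute 2*n once, before the loop
    for (int i = 0; i < limit; i++) {
        // X
        // X
    }
    for (int i = n; i > 0; i--) {
        // X
    }
    return 0;
}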
In Computer Science, it is very important for Computer Scientists to know how to calculate the running times of algorithms in order to optimize code. For you Computer Scientists, I pose a question.
I understand that, in terms of n, a double-nested for loop typically has a running time of n² and a triple-nested for loop typically has a running time of n³.
However, for a case where the code looks like this, would the running time be n⁴?
int x = 0;
for (int a = 0; a < n; a++)
    for (int b = 0; b < 2*a; b++)
        for (int c = 0; c < b*b; c++)
            x++;
I simplified the running time for each line to be virtually (n+1) for the first loop, (2n+1) for the second loop, and (2n)²+1 for the third loop. Assuming the terms are multiplied together, and we extract the highest term to find the Big Oh, would the running time be n⁴, or would it still follow the usual running time of n³?
I would appreciate any input. Thank you very much in advance.
You are correct: n · 2n · 4n² = O(n⁴).
The triple nested loop only means there will be three numbers to multiply to determine the final Big O - each multiplicand itself is dependent on how much "processing" each loop does though.
In your case the first loop does O(n) operations, the second one O(2n) = O(n), and the inner loop does O(n²) operations, so overall O(n · n · n²) = O(n⁴).
Formally, using Sigma Notation, you can obtain this:
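(A sketch of that derivation, counting one unit of work for each x++:)

\sum_{a=0}^{n-1} \sum_{b=0}^{2a-1} \sum_{c=0}^{b^2-1} 1 = \sum_{a=0}^{n-1} \sum_{b=0}^{2a-1} b^2 = \sum_{a=0}^{n-1} \Theta(a^3) = \Theta(n^4)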
Could this be a question for Mathematics?
My gut feeling, like BrokenGlass's, is that it is O(n⁴).
EDIT: Sum of squares and sum of cubes give a pretty good understanding of what is involved. The answer is a resounding O(n^4): sum(a=0 to n) of (sum(b=0 to 2a) of (b^2)). The inner sum is on the order of a^3. Therefore your outer sum is on the order of n^4.
Pity, I thought you might get away with some log instead of n^4. Never mind.
NOTE: I'm an ultra-newbie at algorithm analysis, so don't take any of my statements as absolute truths; anything (or everything) that I state could be wrong.
Hi, I'm reading about algorithm analysis and "Big-O notation" and I feel puzzled about something.
Suppose that you are asked to print all permutations of a char array, for [a,b,c] they would be ab, ac, ba, bc, ca and cb.
Well one way to do it would be (In Java):
for(int i = 0; i < arr.length; i++)
    for(int q = 0; q < arr.length; q++)
        if(i != q)
            System.out.println(arr[i] + " " + arr[q]);
This algorithm has a complexity of O(n²), if I'm correct.
I thought of another way of doing it:
for(int i = 0; i < arr.length; i++)
    for(int q = i+1; q < arr.length; q++)
    {
        System.out.println(arr[i] + " " + arr[q]);
        System.out.println(arr[q] + " " + arr[i]);
    }
Now this algorithm is twice as fast as the original, but unless I'm wrong, in big-O notation it's also O(n²).
Is this correct? Probably it isn't, so I'll rephrase: where am I wrong??
You are correct. O-notation gives you an idea of how the algorithm scales, not the absolute speed. If you add more possibilities, both solutions will scale the same way, but one will always be twice as fast as the other.
O(n) operations may also be slower than O(n^2) operations, for sufficiently small 'n'. Imagine your O(n) computation involves taking 5 square roots, and your O(n^2) solution is a single comparison. The O(n^2) operation will be faster for small sets of data. But when n=1000, and you are doing 5000 square roots but 1000000 comparisons, then the O(n) might start looking better.
I think most people agree the first one is O(n^2). The outer loop runs n times and the inner loop runs n times every time the outer loop runs. So the run time is O(n * n), O(n^2).
The second one is O(n^2) because the outer loop runs n times and the inner loop runs at most n-1 times. On average for this algorithm, the inner loop runs n/2 times for every outer loop, so the run time of this algorithm is O(n * n/2) => O(1/2 * n^2) => O(n^2).
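More precisely, the inner loop of the second algorithm runs n-1-i times for a given i, so the total number of inner iterations is

\sum_{i=0}^{n-1} (n - 1 - i) = \frac{n(n-1)}{2} = O(n^2)

which is about half of the first algorithm's n^2 iterations, but still O(n^2).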
Big-O notation says nothing about the speed of the algorithm except for how fast it is relative to itself when the size of the input changes.
An algorithm could be O(1) yet take a million years. Another algorithm could be O(n^2) but be faster than an O(n) algorithm for small n.
Ignoring the problem of calling your program output "permutation":
Big-O-Notation omits constant coefficients. And 2 is a constant coefficient.
So there is nothing wrong with a program that is twice as fast as the original having the same O().
You are correct. Two algorithms are equivalent in Big O notation if one of them takes a constant amount of time more ("A takes 5 minutes more than B"), or a multiple ("A takes 5 times longer than B") or both ("A takes 2 times B plus an extra 30 milliseconds") for all sizes of input.
Here is an example that uses a FUNDAMENTALLY different algorithm to do a similar sort of problem. First, the slower version, which looks much like your original example:
boolean arraysHaveAMatch = false;
for (int i = 0; i < arr1.length; i++) {
    for (int j = i; j < arr2.length; j++) {
        if (arr1[i] == arr2[j]) {
            arraysHaveAMatch = true;
        }
    }
}
That has O(n^2) behavior, just like your original (it even uses the same shortcut you discovered of starting the j index from the i index instead of from 0). Now here is a different approach:
boolean arraysHaveAMatch = false;
Set<Integer> set = new HashSet<Integer>();
for (int i = 0; i < arr1.length; i++) {
    set.add(arr1[i]);
}
for (int j = 0; j < arr2.length; j++) {
    if (set.contains(arr2[j])) {
        arraysHaveAMatch = true;
    }
}
Now, if you try running these, you will probably find that the first version runs FASTER. At least if you try with arrays of length 10. Because the second version has to deal with creating the HashSet object and all of its internal data structures, and because it has to calculate a hash code for every integer. HOWEVER, if you try it with arrays of length 10,000,000 you will find a COMPLETELY different story. The first version has to examine about 50,000,000,000,000 pairs of numbers (about (N*N)/2); the second version has to perform hash function calculations on about 20,000,000 numbers (about 2*N). In THIS case, you certainly want the second version!!
The basic idea behind Big O calculations is (1) it's reasonably easy to calculate (you don't have to worry about details like how fast your CPU is or what kind of L2 cache it has), and (2) who cares about the small problems... they're fast enough anyway: it's the BIG problems that will kill you! These aren't always the case (sometimes it DOES matter what kind of cache you have, and sometimes it DOES matter how well things perform on small data sets) but they're close enough to true often enough for Big O to be useful.
You're right about them both being big-O n squared, and you actually proved that to be true in your question when you said "Now this algorithm is twice as fast than the original." Twice as fast means multiplied by 1/2 which is a constant, so by definition they're in the same big-O set.
One way of thinking about Big O is to consider how well the different algorithms would fare even in really unfair circumstances. For instance, if one was running on a really powerful supercomputer and the other was running on a wrist-watch. If it's possible to choose an N that is so large that even though the worse algorithm is running on a supercomputer, the wrist watch can still finish first, then they have different Big O complexities. If, on the other hand, you can see that the supercomputer will always win, regardless of which algorithm you chose or how big your N was, then both algorithms must, by definition, have the same complexity.
In your algorithms, the faster algorithm was only twice as fast as the first. This is not enough of an advantage for the wrist watch to beat the supercomputer; even if N were very high, 1 million, 1 trillion, or even Graham's number, the wrist watch could never beat the supercomputer with that algorithm. The same would be true if they swapped algorithms. Therefore both algorithms, by definition of Big O, have the same complexity.
Suppose I had an algorithm to do the same thing in O(n) time. Now also suppose I gave you an array of 10,000 characters. Your algorithms would take n^2 and (1/2)n^2 time, which is 100,000,000 and 50,000,000. My algorithm would take 10,000. Clearly that factor of 1/2 isn't making a difference, since mine is so much faster. The n^2 term is said to dominate lower-order terms like n and constant factors like 1/2, essentially rendering them negligible.
Big-O notation expresses a family of functions, so saying "this thing is O(n²)" means nothing on its own.
This isn't pedantry; it is the only correct way to understand those things.
O(f) = { g | exists x_0 and c such that, for all x > x_0, g(x) <= f(x) * c }
Now, suppose that you're counting the steps that your algorithm, in the worst case, does in terms of the size of the input: call that function f.
If f is in O(n²), then you can say that your algorithm has a worst case of O(n²) (but also O(n³) or O(2^n)).
The meaninglessness of the constants follows from the definition (see that c?).
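For example, take f(x) = x² in that definition. Both the pair-printing count of roughly x² steps and the halved count of (1/2)x² steps sit inside O(x²), because a suitable c exists for each (a quick sketch using the definition above):

\tfrac{1}{2}x^2 \le 1 \cdot x^2 \quad \text{and} \quad 2x^2 \le 2 \cdot x^2 \quad \text{for all } x > 0, \quad \text{so} \quad \tfrac{1}{2}x^2 \in O(x^2) \text{ and } 2x^2 \in O(x^2).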
The best way to understand Big-O notation is to get a mathematical grasp of the idea behind the notation. Look up the dictionary meaning of the word "asymptote":
A line which approaches nearer to some curve than assignable distance, but, though infinitely extended, would never meet it.
This defines the maximum execution time (imaginary, because the asymptote meets the curve only at infinity), so whatever you do will be under that time.
With this idea in mind, you might want to look into Big-O, little-o, and Omega notation.
Always keep in mind that Big O notation represents the "worst case" scenario. In your example, the first algorithm has an average case of full outer loop * full inner loop, so it is n^2 of course. Because the second case has one instance where it is almost full outer loop * full inner loop, it has to be lumped into the same pile of n^2, since that is its worst case. From there it only gets better, and your average compared to the first function is much lower. Regardless, as n grows, your function's running time grows quadratically, and that is all Big O really tells you. The quadratic curves can vary widely, but at the end of the day they are all of the same type.