Time complexity of nested for loops - algorithm

The following sample loops are said to have O(n^2) time complexity.
Can anyone explain to me why they are O(n^2)? It seems to depend on the value of c...
loop 1:
for (int i = 1; i <= n; i += c)
{
    for (int j = 1; j <= n; j += c)
    {
        // some O(1) expressions
    }
}
loop 2:
for (int i = n; i > 0; i -= c)
{
    for (int j = i + 1; j <= n; j += c)
    {
        // some O(1) expressions
    }
}
If c = 0, the loops run an infinite number of times; similarly, as the value of c increases, the number of times the inner loops run decreases.
Can anyone explain this to me?

Each of these pieces of code takes O(n^2/c^2) time. c is probably considered a strictly positive constant here, and therefore O(n^2/c^2) = O(n^2). But it all depends on the context...
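For intuition, here is a small runnable sketch (the class name and the test values of n and c are just illustrative) that counts the actual iterations of loop 1 and compares the count against (n/c)^2:

public class StepLoopCount {
    // Counts how many times the O(1) body of loop 1 executes
    // for a given n and a strictly positive step c.
    static long countIterations(int n, int c) {
        long count = 0;
        for (int i = 1; i <= n; i += c) {
            for (int j = 1; j <= n; j += c) {
                count++; // stands in for the O(1) expressions
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 10_000;
        for (int c : new int[] {1, 2, 5, 10}) {
            long actual = countIterations(n, c);
            long estimate = (long) (n / c) * (n / c); // roughly (n/c)^2
            System.out.printf("c=%d: actual=%d, (n/c)^2=%d%n", c, actual, estimate);
        }
    }
}

For any fixed c the count scales with n^2; increasing c only shrinks the constant factor, which Big-O ignores.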

Big-O notation is a relative representation of the complexity of an algorithm.
Big-O does not say exactly how many iterations your algorithm will make in every case.
It says that in the worst case your algorithm will make on the order of n squared computations, which is useful if you have to compare two algorithms.
In your code, if we assume c is a constant, then it can be dropped from the Big-O notation, because Big-O is all about comparison and how things scale, where constant factors play no role.
But when c is not a constant, the correct Big-O notation would be O(n^2/c^2).
Read this awesome explanation of Big-O by cletus.

For every FIXED c, the time is O(n^2). The number of iterations is roughly n^2/c^2 (the loops never terminate when c = 0), and n^2/c^2 is O(n^2) for every fixed c.
If you had code where c was changed during the loop, you might get a different answer.

Related

What is the Big-O of this Code?

I thought the Big-O complexity would be n^3, but the output does not even come close to matching my Big-O:
int bigO(int[] myArray, int x) {
    int count = 0;
    for (int i = 0; i < x; i++)
        for (int j = i + 1; j < x; j++)
            for (int k = j + 1; k < x; k++) {
                System.out.println(myArray[i] + ", " + myArray[j] + ", " + myArray[k]);
                count++;
            }
    return count;
}
My apologies, I should have written "x" instead of "n".
That's because your function does not perform exactly n^3 operations.
Actually, it performs f(n) = (1/6)*n^3 - (1/2)*n^2 + (1/3)*n operations (found using polynomial fitting).
But, by definition, f(n) is O(n^3). The intuition behind this is:
(1/6)*n^3 is the dominant term, and
(1/6)*n^3 grows within a constant factor of n^3.
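As a quick sanity check, here is a sketch (class and method names are illustrative) that runs the same triple loop without the printing and compares the returned count to the closed form, which is the number of index triples i < j < k, i.e. C(x, 3):

public class TripleLoopCount {
    // The same loop structure as bigO(), minus the printing,
    // so the count can be compared to the closed-form formula.
    static long count(int x) {
        long count = 0;
        for (int i = 0; i < x; i++)
            for (int j = i + 1; j < x; j++)
                for (int k = j + 1; k < x; k++)
                    count++;
        return count;
    }

    public static void main(String[] args) {
        for (int x : new int[] {10, 100, 500}) {
            // f(x) = x^3/6 - x^2/2 + x/3 = x*(x-1)*(x-2)/6 = C(x, 3)
            long formula = (long) x * (x - 1) * (x - 2) / 6;
            System.out.printf("x=%d: actual=%d, formula=%d%n", x, count(x), formula);
        }
    }
}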
Here's a static analysis of your code. Because the loops all have different iteration ranges, it's best to start with the innermost loop and work your way outward.
The innermost for loop has n-j-1 iterations.
So if you look at the two inner loops, you have Sum (n-j-1) iterations, for j in the interval [i+1, n-1]. That is (n-(i+1)-1) + (n-(i+2)-1) + ... + (n-(n-1)-1) iterations, which equals (n-i-2) + (n-i-3) + ... + 1 + 0. This is an arithmetic series, and the result is (n-i-2)*(n-i-1)/2.
Now we sum over the outer loop and get Sum (n-i-2)*(n-i-1)/2 iterations, for i in the interval [0, n-1]. This is equal to 1/2*Sum(i^2) + (-n+3/2)*Sum(i) + (n^2/2-3n/2+1)*Sum(1). These sums are easy to calculate, and after a bit of rearranging you get n^3/6 - n^2/2 + n/3, the same formula as the one in JuanLopes's answer.
Since your function is O(n^3) (n^3/6 - n^2/2 + n/3 = O(n^3)), your code doesn't perform exactly n^3 iterations. The dominant term is n^3/6, and you will have about that many iterations.
Big-O notation is not a per-se feature of an algorithm! It describes how the running time (or space) "grows" with respect to the size of the input. Define the "size" of your input and you can compute its Big-O complexity.
As soon as you change the definition of the "size" of your input, you get a totally different complexity.
Example: an algorithm that applies a Gaussian filter to a set of images of size X*Y.
With respect to the number of images, the algorithm operates in linear time.
With respect to the global number of pixels to process, the algorithm is quadratic.
So the answer is: you didn't define your N :-)

Big O complexity of two nested loops

I'm trying to find the Big-O complexity of the following algorithm:
int i, j;
for (i = 0; i < n; i += 5)
{
    for (j = 1; j < n; j *= 3)
    {
        // O(1) code here
    }
}
n is the size of an array passed into the method. I'm struggling with this due to the i += 5 and j *= 3. I know this is probably wrong, but I tried the following...
The outer loop iterates n/5 times. Is that just O(n)?
The inner loop iterates log3(n) times, so that must be just O(log n).
Since they're nested, multiply the complexities together.
So the Big-O complexity is just O(n log(n))?
You can proceed like the following: write the total work as a double sum, Sum over i in {0, 5, 10, ..., < n} of Sum over j in {1, 3, 9, ..., < n} of 1 = (n/5) * log3(n), which is O(n log n).
Yes, you are right: the time complexity is n*(log n), with the log in base 3.
Try a very large input value for n and you will see that the graph of (n/5)*log3(n) behaves essentially like n*log(n). Hope this helps.
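If you want to convince yourself empirically, here is a small sketch (class name and test values are illustrative) that counts the iterations and compares them to (n/5)*log3(n):

public class NestedLoopCount {
    // Counts executions of the O(1) body of the two nested loops.
    static long count(int n) {
        long count = 0;
        for (int i = 0; i < n; i += 5)
            for (int j = 1; j < n; j *= 3)
                count++;
        return count;
    }

    public static void main(String[] args) {
        for (int n : new int[] {1_000, 100_000, 10_000_000}) {
            double estimate = (n / 5.0) * (Math.log(n) / Math.log(3));
            System.out.printf("n=%d: actual=%d, (n/5)*log3(n)=%.0f%n", n, count(n), estimate);
        }
    }
}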

big theta for quad nested loop with hash table lookup

for (int i = 0; i < 5; i++) {
    for (int j = 0; j < 5; j++) {
        for (int k = 0; k < 5; k++) {
            for (int l = 0; l < 5; l++) {
                // look up in a perfect constant-time hash table
            }
        }
    }
}
What would the running time of this be in big theta?
My best guess, a shot in the dark: I always see that nested for loops are O(n^k), where k is the number of loops, so these loops would be O(n^4). Then would I multiply by O(1) for the constant-time lookup? What would this all be in big theta?
If you consider that accessing a hash table really is Θ(1), then this algorithm runs in Θ(1) too, because it performs only a constant number (5^4) of hash-table lookups.
However, if you change 5 to n, it will be Θ(n^4), because you'll do exactly n^4 constant-time operations.
The big-theta running time would be Θ(n^4).
Big-O is an upper bound, whereas big-theta is a tight bound. This means that saying the code is O(n^5) is also correct (but Θ(n^5) is not); whatever is inside the big-O just has to be asymptotically greater than or equal to n^4.
I'm assuming 5 can be substituted for another value (i.e., is n); if not, the loops run in constant time (O(1) and Θ(1)), since 5^4 is constant.
Using Sigma notation: Sum_{i=0..4} Sum_{j=0..4} Sum_{k=0..4} Sum_{l=0..4} 1 = 5^4 = 625.
Indeed, the instructions inside the innermost loop will execute 625 times.
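To see both cases side by side, here is a hedged sketch (class name, the map contents, and the choice of n are all illustrative) that counts the constant-time lookups for the fixed bound 5 and for a variable bound n:

import java.util.HashMap;
import java.util.Map;

public class QuadLoopCount {
    // Counts the constant-time lookups performed by four
    // nested loops that each run `bound` times.
    static long countLookups(int bound, Map<Integer, Integer> table) {
        long count = 0;
        for (int i = 0; i < bound; i++)
            for (int j = 0; j < bound; j++)
                for (int k = 0; k < bound; k++)
                    for (int l = 0; l < bound; l++) {
                        table.get(0); // stands in for the O(1) hash lookup
                        count++;
                    }
        return count;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> table = new HashMap<>();
        table.put(0, 42);
        System.out.println(countLookups(5, table));  // 625 lookups: constant, so Θ(1) overall
        int n = 20;
        System.out.println(countLookups(n, table));  // n^4 = 160000 lookups: Θ(n^4)
    }
}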

Program complexity classes

Basically I am struggling to come to grips with operation counting and Big-O notation. I understand it is possibly one of the harder parts of computer science to understand and I have to admit I am struggling with it. Could anyone give me some help with these examples, and possibly any further help/links with Big-O?
for (i = 0; i < N; i++)
{
    for (j = i; j < N; j++)
    {
        // sequence of statements
    }
}
Here I would say the complexity is O(N²) - Quadratic
int m = -9;
for (j = 0; j < n; j += 5)
{
    if (j < m)
    {
        for (k = 1; k < n; k *= 3)
        {
            // some code
        }
    }
}
Here I would also say O(N²). The first loop takes N and the second loop takes N, so I would say the answer is O(N*N), which is equal to O(N²).
Any help and advice for further understanding would be great!!
The first is indeed O(n^2), as you suspected, assuming the 'sequence of statements' is O(1).
However, the second piece of code is O(n), since the condition j < m is never met; thus, the outer loop just iterates without actually doing anything, and the inner loop is never even reached.
As a side note, some compilers may actually optimize the second piece of code to run in O(1) by just setting the end values of the variables, but this is not the point of the question.
The second example has complexity O(N).
int m = -9;
for (j = 0; j < n; j += 5)
{
    if (j < m)
    {
        // this never executes; m is negative and j is non-negative
    }
}
First example:
The inner loop executes N times when i = 0, N-1 times when i = 1, and so on...
You can calculate the number of steps the for loops execute:
(N) + (N - 1) + (N - 2) + ... + 2 + 1
Pair the first term with the last, the second with the second-to-last, and so on: N + 1 = N+1, (N-1) + 2 = N+1, (N-2) + 3 = N+1, ... Each of the N/2 pairs sums to N+1, so
steps = N(N+1)/2 = (N^2 + N) / 2
What does the Big-O notation mean?
F(N) = O(G(N)) means that |F(N)| <= c*|G(N)| for some constant c > 0 and all sufficiently large N.
It means that the function G(N) is an upper bound on the growth rate of the function: F(N) cannot grow faster than G(N).
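Here is a quick sketch (class name and the choice of N are illustrative) that checks the N(N+1)/2 formula against the first example's loops:

public class TriangularSum {
    public static void main(String[] args) {
        int n = 1000;
        long steps = 0;
        for (int i = 0; i < n; i++)
            for (int j = i; j < n; j++)
                steps++; // stands in for the "sequence of statements"

        long formula = (long) n * (n + 1) / 2; // N(N+1)/2
        System.out.println(steps + " == " + formula); // both print 500500
    }
}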
Just to throw this out here: if we assume that the j < m clause is a mistake and ignore it, then your second example has two nested loops. However, the inner loop actually multiplies k by 3 each time, which makes the inner loop O(log n) and the pair of loops O(n log n). I'm not sure this is the answer, but you asked for further understanding, so... you know... maybe that's further.
OK, on a further note, I would really suggest you go through even a single lecture from this Introduction to Algorithms series. Believe me, you won't need to look any further.
http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-introduction-to-algorithms-sma-5503-fall-2005/video-lectures/

Complexity of algorithm

The complexity given for the following problem is O(n). Shouldn't it be O(n^2)? The outer loop is O(n) and the inner loop is also O(n), therefore n*n = O(n^2)?
The answer sheet for this question states that the answer is O(n). How is that possible?
public static void q1d(int n) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        count++;
        for (int j = 0; j < n; j++) {
            count++;
        }
    }
}
The complexity given for the following problem is O(n^2); how do you obtain that? Can someone please elaborate?
public static void q1E(int n) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        count++;
        for (int j = 0; j < n / 2; j++) {
            count++;
        }
    }
}
Thanks
The first example is O(n^2), so it seems they've made a mistake. To calculate (informally) the second example, we can do n * (n/2) = (n^2)/2 = O(n^2). If this doesn't make sense, you need to go and brush up on what it means for something to be O(n^k).
The complexity of both pieces of code is O(n*n).
FIRST
The outer loop runs n times, and the inner loop runs n times on every outer iteration, so
total = n + n + ... + n (n times) = n * n
which is O(n*n).
SECOND
The outer loop runs n times, and the inner loop runs n/2 times on every outer iteration, so
total = n/2 + n/2 + ... + n/2 (n times) = n * (n/2) = n^2/2
which is also O(n*n), since constant factors are dropped.
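For a concrete check, here is a sketch (class name and the choice of n are illustrative) that tallies count for both methods and compares the totals with n*(n+1) and n*(n/2 + 1); the extra +1 per outer iteration comes from the outer count++:

public class CountCompare {
    public static void main(String[] args) {
        int n = 1000;

        long countD = 0; // mirrors q1d: outer count++ plus n inner increments
        for (int i = 0; i < n; i++) {
            countD++;
            for (int j = 0; j < n; j++)
                countD++;
        }

        long countE = 0; // mirrors q1E: outer count++ plus n/2 inner increments
        for (int i = 0; i < n; i++) {
            countE++;
            for (int j = 0; j < n / 2; j++)
                countE++;
        }

        // Both grow quadratically; only the constant factor differs.
        System.out.println(countD + " == " + (long) n * (n + 1));     // 1001000
        System.out.println(countE + " == " + (long) n * (n / 2 + 1)); // 501000
    }
}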
The first case is definitely O(n^2).
The second is O(n^2) as well, because you omit constant factors when calculating Big-O.
Your answer sheet is wrong; the first algorithm is clearly O(n^2).
Big-Oh notation is a "worst case" (upper) bound, so when calculating the Big-Oh value we generally ignore multiplications/divisions by constants.
That being said, your second example is also O(n^2) in the worst case because, although the inner loop is "only" n/2 iterations, n is the clear bounding factor. In practice the second algorithm will perform fewer than n^2 operations, but Big-Oh is intended to be a "worst case" (i.e., maximal bounding) measurement, so the exact number of operations is ignored in favor of focusing on how the algorithm behaves as n approaches infinity.
Both are O(n^2). Your answer sheet is wrong, or you may have written the question down incorrectly.
