I have the following pseudocode:
for i = 1 to 3*n
    for j = 1 to i*i
        for k = 1 to j
            if j mod i = 1 then
                s = s + 1
            endif
        next k
    next j
next i
When I analyze the number of times the statement s=s+1 is performed, assuming that this operation takes constant time, do I end up with a quadratic complexity, or is it linear? The value of n can be any positive integer.
The calculations that I made are the following:
Definitely not quadratic, but it should at least be polynomial.
The outer loop goes through 3n iterations.
On each of those iterations, the middle loop does up to 9n^2 more.
On each of those, the inner loop does up to 9n^2 more.
So I think it would be O(n^5).
When talking about running time, you should always make explicit in terms of what variable you are measuring it.
If we assume you are talking about running time in terms of n, the answer is O(n^5). This is because, once we get rid of the constant factors, what you are doing boils down to this:
do n times:
    do n^2 times:
        do n^2 times:
            do something
And n * n^2 * n^2 = n^5
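For a concrete check, here is a small Python sketch (my own, not part of the original answer) that counts every pass through the innermost loop, i.e. every evaluation of the if-test; the count grows on the order of n^5.
def inner_passes(n):
    # Count how many times the innermost loop body (the if-test) runs.
    passes = 0
    for i in range(1, 3 * n + 1):
        for j in range(1, i * i + 1):
            passes += j        # the k-loop runs j times for this (i, j)
    return passes

for n in (2, 4, 8):
    print(n, inner_passes(n))  # grows roughly in proportion to n**5 (the ratio tends toward 2**5 = 32 as n doubles)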
I have the following algorithm:
for (int i = 1; i < n; i++)
    for (int j = 0; j < i; j++)
        if (j % i == 0) System.out.println(i + " " + j);
This will run max(n,0) times.
Would the Big-O notation be O(n)? If not, what is it and why?
Thank you.
You haven't stated what you are trying to measure with the Big-O notation. Let's assume it's time complexity. Next we have to define what the dependent variable is against which you want to measure the complexity. A reasonable choice here is the absolute value of n (as opposed to the bit-length), since you are dealing with fixed-length ints and not arbitrary-length integers.
You are right that the println is executed O(n) times, but that's counting how often a certain line is hit, it's not measuring time complexity.
It's easy to see that the if statement is hit O(n^2) times, so we have already established that the time complexity is bounded from below by Omega(n^2). As a commenter has already noted, the if-condition is only true for j=0, so I suspect that you actually meant to write i % j instead of j % i? This matters because the time complexity of the println(i + " " + j)-statement is certainly not O(1), but O(log n) (you can't possibly print x characters in less than x steps), so at first sight there is a possibility that the overall complexity is strictly worse than O(n^2).
Assuming that you meant to write i % j we could make the simplifying assumption that the condition is always true, in which case we would obtain the upper bound O(n^2 log n), which is strictly worse than O(n^2)!
However, noting that the number of divisors of n is bounded by O(Sqrt(n)), we actually have O(n^2 + n*Sqrt(n)*log(n)). But since O(Sqrt(n) * log(n)) < O(n), this amounts to O(n^2).
You can dig deeper into number theory to find tighter bounds on the number of divisors, but that doesn't make a difference since the n^2 stays the dominating factor.
So the tightest upper bound is indeed O(n^2), but it's not as obvious as it seems at first sight.
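As a rough empirical check (my own Python sketch, using the assumed i % j form and starting j at 1 to avoid division by zero), one can count how often the condition actually holds:
def divisor_hits(n):
    # Count the (i, j) pairs with 1 <= j < i < n for which i % j == 0.
    hits = 0
    for i in range(1, n):
        for j in range(1, i):
            if i % j == 0:
                hits += 1
    return hits

print(divisor_hits(1000))  # grows roughly like n*log(n), well below the n*Sqrt(n) bound used above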
max(n,0) would indeed be O(n). However, your algorithm is in O(n**2). Your first loop runs n times, and the second loop runs i times, which is on average n/2. That makes O(n**2 / 2) = O(n**2). On the other hand, unlike the runtime of the algorithm, the number of times println is reached is in O(n), as this happens exactly n times.
So, the answer depends on what exactly you want to measure.
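To make the distinction concrete, here is a small Python sketch (mine, mirroring the loop structure of the original Java code) that tallies how often the if-test runs versus how often the print branch is taken:
def count_hits(n):
    if_hits = 0
    print_hits = 0
    for i in range(1, n):
        for j in range(0, i):
            if_hits += 1
            if j % i == 0:      # true only for j == 0, as noted above
                print_hits += 1
    return if_hits, print_hits

print(count_hits(100))          # (4950, 99): n*(n-1)/2 if-tests but only n-1 prints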
What is the worst case time complexity of the following:
def fun(n):
    count = 0
    i = n
    while i > 0:
        for j in range(0, i):
            count += 1
        i //= 2   # integer halving; plain i /= 2 would turn i into a float in Python 3
    return count
What is the worst-case time complexity if n is a really big integer?
O(log(n)*n) (Guessing).
Here's how I came to my conclusion.
while i > 0:
    ...
    i //= 2
This will run log(n) times because i is halved on every pass.
for j in range(0,i):
This will run n times on the first pass, n/2 times on the second pass, and so on. So the total number of iterations of this line is n + n/2 + n/4 + ... + 1 ≈ 2n - 1.
count+=1
This is a cheap operation so is O(1).
Multiplying the log(n) passes of the while loop by the worst-case cost of one pass gives the upper bound O(log(n) * n), matching the guess above. Note, however, that the sum n + n/2 + n/4 + ... + 1 ≈ 2n - 1 already accounts for every pass of the while loop, so the total running time is in fact Θ(n); O(n log n) is a correct but loose bound.
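A quick empirical check (my own Python sketch) counts the inner-loop iterations and compares them with 2n:
def inner_iterations(n):
    # Total number of times the inner for-loop body runs across all passes.
    total = 0
    i = n
    while i > 0:
        total += i    # the for-loop runs i times on this pass
        i //= 2       # integer halving, as in fun(n)
    return total

for n in (1000, 2000, 4000):
    print(n, inner_iterations(n))   # stays below 2*n: linear, not n*log(n), growth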
This algorithm appears to have a quadratic efficiency. (Why?)
To analyze complexity, you just have to count the number of operations.
Here, there are two nested loops:
for i in 0 to n - 1 do
    for j in 0 to n - 1 do
        operation()   // Do something
    done
done
With i = 0, the operation will be run for every j in [0, n-1], that is, n times. Then i is incremented and the process repeats until i > n-1. So the operation is run n*n times in the worst case.
In the end, this code performs n^2 operations, which is why it has quadratic efficiency.
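As a quick sanity check (my own Python sketch), counting the calls directly gives exactly n*n:
def count_operations(n):
    count = 0
    for i in range(0, n):
        for j in range(0, n):
            count += 1          # stands in for operation()
    return count

print(count_operations(100))    # prints 10000 == 100 * 100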
This is a trick question; it appears to be both n^2 and 2^n at the same time, depending on whether the if statement on line j is executed.
I know the big-O complexity of this algorithm is O(n^2), but I cannot understand why.
int sum = 0;
int i = 1, j = n * n;
while (i++ < j--)
    sum++;
Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?
During every iteration you increment i and decrement j which is equivalent to just incrementing i by 2. Therefore, total number of iterations is n^2 / 2 and that is still O(n^2).
big-O complexity ignores coefficients. For example: O(n), O(2n), and O(1000n) are all the same O(n) running time. Likewise, O(n^2) and O(0.5n^2) are both O(n^2) running time.
In your situation, you're essentially incrementing your loop counter by 2 each time through your loop (since j-- has the same effect as i++). So your running time is O(0.5n^2), but that's the same as O(n^2) when you remove the coefficient.
You will have exactly n*n/2 loop iterations (or (n*n-1)/2 if n is odd).
In the big O notation we have O((n*n-1)/2) = O(n*n/2) = O(n*n) because constant factors "don't count".
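A quick simulation (my own Python sketch of the same loop) confirms the count:
def iterations(n):
    # Simulate while (i++ < j--): count how many times the body runs.
    count = 0
    i, j = 1, n * n
    while i < j:
        i += 1
        j -= 1
        count += 1
    return count

for n in (10, 11):
    print(n, iterations(n), n * n // 2)   # the iteration count equals n*n // 2 in both cases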
Your algorithm is equivalent to
while ((i += 2) < n * n)
    ...
which is O(n^2/2), and that is the same as O(n^2) because big O complexity does not care about constants.
Let m be the number of iterations taken. The loop stops when the incremented i catches up with the decremented j, so
i + m = n^2 - m
which gives
m = (n^2 - i)/2
With the initial value i = 1 this is about n^2/2 iterations, so in Big-O notation the complexity is O(n^2).
Yes, this algorithm is O(n^2).
To calculate the complexity, we have a table of common complexity classes:
O(1)
O(log n)
O(n)
O(n log n)
O(n²)
O(n^a)
O(a^n)
O(n!)
Each row represents a set of algorithms. An algorithm that is in O(1) is also in O(n), in O(n²), and so on, but not the other way around. Your algorithm executes n*n/2 statements, and
n < n·log n < n*n/2 ≤ n²
so the smallest set in the table that contains your algorithm is O(n²), because O(n) and O(n·log n) are too small.
For example, for n = 100 the loop body runs 5000 times (sum = 5000): 100 (n) < 200 (n·log₁₀ n) < 5000 (n*n/2) < 10000 (n²).
Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?
Yes! That's why it's O(n^2). By the same logic, it's a lot less than n * n * n, which makes it O(n^3). It's even O(6^n), by similar logic.
big-O gives you information about upper bounds.
I believe you are really trying to ask why the complexity is Θ(n^2) or Ω(n^2), but if you're just trying to understand what big-O is, you need to understand that it gives upper bounds on functions first and foremost.
Our prof and various materials say Summation(n) = n(n+1)/2 and hence is theta(n^2). But intuitively, we just need one loop to find the sum of the first n terms! So it has to be theta(n). What am I missing here?
All of these answers misunderstand the problem, just like the original question: the point is not to measure the runtime complexity of an algorithm for summing integers; it's about how to reason about the complexity of an algorithm which takes i steps during each pass, for i in 1..n. Consider insertion sort: on step i, the output list is already i elements long, so inserting one member of the original list takes i steps (on average). What is the complexity of insertion sort? It's the sum of all of those steps, which is the sum of i for i in 1..n. That sum is n(n+1)/2, which has an n^2 in it, thus insertion sort is O(n^2).
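To make that concrete, here is a minimal insertion-sort sketch (my own illustration, not from the post) that counts the element shifts; on a reverse-sorted input of length n the count is the full sum 1 + 2 + ... + (n-1):
def insertion_sort_shifts(a):
    # Sort a in place and return how many element shifts were needed.
    shifts = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]     # shift one element to the right
            shifts += 1
            j -= 1
        a[j + 1] = key
    return shifts

print(insertion_sort_shifts(list(range(100, 0, -1))))   # 4950 == 100*99/2, i.e. quadratic in n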
The running time of this code is Θ(1) (assuming addition/subtraction and multiplication are constant-time operations):
result = n*(n + 1)/2 // This statement executes once
The running time of the following pseudocode, which is what you described, is indeed Θ(n):
result = 0
for i from 1 up to n:
    result = result + i   // This statement executes exactly n times
Here is another way to compute it which has a running time of Θ(n²):
result = 0
for i from 1 up to n:
    for j from i up to n:
        result = result + 1   // This statement executes exactly n*(n + 1)/2 times
All three of those code blocks compute the sum of the natural numbers from 1 to n.
This Θ(n²) loop is probably the type you are being asked to analyse. Whenever you have a loop of the form:
for i from 1 up to n:
    for j from i up to n:
        // Some statements that run in constant time
You have a running time complexity of Θ(n²), because those statements execute exactly summation(n) times.
I think the problem is that you're incorrectly assuming that the summation formula has time complexity theta(n^2).
The formula has an n^2 in it, but it doesn't require a number of computations or amount of time proportional to n^2.
Summing everything up to n in a loop would be theta(n), as you say, because you would have to iterate through the loop n times.
However, calculating the result of the equation n(n+1)/2 would just be theta(1) as it's a single calculation that is performed once regardless of how big n is.
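As a tiny illustration (my own sketch), the two approaches side by side:
def sum_formula(n):
    return n * (n + 1) // 2     # Θ(1): a fixed number of arithmetic operations

def sum_loop(n):
    total = 0
    for i in range(1, n + 1):   # Θ(n): the body runs n times
        total += i
    return total

assert sum_formula(10**5) == sum_loop(10**5)   # both give 5000050000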
Summation(n) = n(n+1)/2 refers to the sum of the numbers from 1 to n. It is a mathematical formula and can be evaluated without a loop, in O(1) time. If you iterate over an array to sum all of its values, that is an O(n) algorithm.