prod = 1;
Nfour = sqrt(n) * sqrt(n);   // i.e. Nfour = n when n is a perfect square
for k = 1 to Nfour do
    if k mod sqrt(n) = 0 then
        for j = 1 to k do
            if j mod sqrt(n) = 0 then
                for m = 1 to j do
                    prod = prod * 4
How can I calculate the time complexity of this algorithm?
At first glance the three nested loops suggest O(n^3), but the two mod tests filter out most of the work: only the sqrt(n) multiples of sqrt(n) pass the test on k, and likewise for j. Writing k = a*sqrt(n) and j = b*sqrt(n), the innermost loop runs j = b*sqrt(n) times, so the total work is the sum of b*sqrt(n) over a = 1..sqrt(n) and b = 1..a, which comes out to roughly n^2/6. The time complexity is therefore O(n^2); the remaining operations take constant time.
These resources might be helpful:
1. http://bigocheatsheet.com/
2. How to find time complexity of an algorithm
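Simulating the loops makes the count concrete: the number of multiplications grows quadratically, not cubically. This is a minimal sketch (count_ops is a made-up helper name), assuming n is a perfect square so that sqrt(n) is an integer:

```python
import math

def count_ops(n):
    """Simulate the loops above, counting executions of prod = prod * 4."""
    r = math.isqrt(n)                  # sqrt(n); assumes n is a perfect square
    count = 0
    for k in range(1, r * r + 1):      # Nfour = sqrt(n) * sqrt(n)
        if k % r == 0:
            for j in range(1, k + 1):
                if j % r == 0:
                    count += j          # the innermost loop runs j times
    return count

# closed form: r^2 * (r+1) * (r+2) / 6, which is ~ n^2 / 6
print(count_ops(16))    # 80
print(count_ops(100))   # 2200
```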
Related
I have pseudo code for a function (below). I understand that if each of i, j and k were incremented by 1, the worst-case run time would be O(n^3). I am struggling to understand the impact of n/2 though - if any - on run time. Any guidance would be great.
for i = n/2; i < n; increase i by 1
    for j = 1; j < n/2; increase j by 1
        for k = 1; k < n; increase k by k*2
            Execute a Statement
Your understanding is not correct.
k is increased by k*2 (i.e. tripled) each time, which leads to logarithmic time; the complexity is actually O(n^2 * log n).
O(n/2) = O(n), so the n/2 has no impact on asymptotic growth.
If you are not sure, the general approach is to count as precisely as possible and then drop the constants.
The i loop executes n/2 times, the j loop also n/2 times, and the k loop executes about log n times.
n/2 * n/2 * log n = (n^2/4) * log n. Removing the constants gives O(n^2 * log n).
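To see the counting argument concretely, here is a small simulation (a sketch; count_iters is a made-up helper). Note that "increase k by k*2" means k becomes 3k, so strictly the inner loop runs about log base 3 of n times, which is still O(log n):

```python
def count_iters(n):
    """Simulate the three loops above, counting statement executions."""
    count = 0
    i = n // 2
    while i < n:                 # runs n/2 times
        j = 1
        while j < n // 2:        # runs n/2 - 1 times
            k = 1
            while k < n:         # k is tripled each pass: ~log_3(n) times
                count += 1
                k += k * 2       # "increase k by k*2"
            j += 1
        i += 1
    return count

print(count_iters(8))     # 4 * 3 * 2 = 24
print(count_iters(100))   # 50 * 49 * 5 = 12250
```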
The worst-case time complexity is not O(N^3).
Check the innermost for loop: k increases by k * 2 on every pass.
That means the innermost for loop takes O(lg N) time.
Other two outer loops would take O(N) time each and N/2 would not have any effect on the asymptotic growth of run time.
So, the overall time complexity would be O(N^2 * lgN)
How can I find the complexity of the following algorithm that produces the summation of a series?
series: 1+(1+2)+(1+2+3)+.......+(1+2+3+...+n)
algorithm:
for(i = 1; i <= n; i++){
    for(j = 1; j <= i; j++){
        sum = sum + j;
    }
}
To find time complexity, let's analyse how many times the core (inside the loops) is run.
The outside loop is run n times, so complexity is at least O(n).
The inside loop is run
once when i=1
twice when i=2
... n times when i=n
So the total number of times it will be run is the sum of integers between 1 and n: (n * (n+1)) / 2 = n^2 / 2 + n / 2, which is O(n^2).
Space complexity, on the other hand, is simpler in this case. Since the memory requirements don't depend on the input length, the space complexity of the algorithm above is O(1): the amount of memory needed (basically the size of sum) is the same regardless of n, assuming the result fits in sum.
Note that for the same task, different algorithms may have different complexities. As @AxelKemper correctly noted in his comment, the sum can be expressed as a single polynomial in n, so the most efficient solution has complexity O(1). The algorithm above, however, does not work that way and has a higher complexity.
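The counting argument can be checked by instrumenting the loop (a hedged sketch; triangular_series_sum is a made-up name that returns both the series value and the number of inner-loop executions):

```python
def triangular_series_sum(n):
    """The nested-loop algorithm above, instrumented to count iterations."""
    total = 0        # the series 1 + (1+2) + ... + (1+2+...+n)
    inner_runs = 0   # how many times sum = sum + j executes
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            total += j
            inner_runs += 1
    return total, inner_runs

# inner_runs equals n*(n+1)/2, matching the O(n^2) analysis
print(triangular_series_sum(10))   # (220, 55)
```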
The sum
1+(1+2)+(1+2+3)+.......+(1+2+3+...+n)
is equal to
(1/2)·1·(1+1) + (1/2)·2·(2+1) + (1/2)·3·(3+1) + ... + (1/2)·n·(n+1)
This can be rewritten as
(1/2)(1 + 2 + ... + n) + (1/2)(1 + 4 + 9 + ... + n·n)
This in turn leads to
(1/4)·n·(n+1) + (1/12)(2n^3 + 3n^2 + n)
which can be simplified to
n^3/6 + n^2/2 + n/3
Ignoring the wordlength of n, the complexity to evaluate this polynomial does not depend on n.
Therefore, the time complexity of the problem is O(1)
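A quick way to check that the closed form really matches the nested-loop sum (a sketch; both function names are made up):

```python
def series_by_loops(n):
    """Compute 1 + (1+2) + ... + (1+2+...+n) the slow, O(n^2) way."""
    total = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            total += j
    return total

def series_closed_form(n):
    """Evaluate the O(1) polynomial n^3/6 + n^2/2 + n/3, kept in integers."""
    return (n**3 + 3 * n**2 + 2 * n) // 6

# the two agree for every n
print(series_by_loops(10), series_closed_form(10))   # 220 220
```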
The time complexity of the shown algorithm is O(n^2) as explained in the accepted answer.
Say I have following algorithm:
for(int i = 1; i < N; i *= 3) {
    sum++;
}
I need to calculate the complexity using tilde-notation, which basically means that I have to find a tilde-function so that when I divide the complexity of the algorithm by this tilde-function, the limit in infinity has to be 1.
I don't think there's any need to calculate the exact complexity, we can ignore the constants and then we have a tilde-complexity.
By looking at the growth of the index, I assume that this algorithm is
~ log N
But rather than having a binary logarithmic function, the base in this case is 3.
Does this matter for the exact notation? Is the order of growth exactly the same and thus can we ignore the base when using Tilde-notation? Do I approach this correctly?
You are right, the for loop executes ceil(log_3 N) times, where log_3 N denotes the base-3 logarithm of N.
No, you cannot ignore the base when using the tilde notation.
Here's how we can derive the time complexity.
We will assume that each iteration of the for loop costs C, for some constant C>0.
Let T(N) denote the number of executions of the for-loop. Since after the j-th iteration the value of i is 3^j, the loop stops at the smallest j for which 3^j >= N. Taking base-3 logarithms of both sides gives j >= log_3 N, and because j is an integer, j = ceil(log_3 N). Thus T(N) ~ ceil(log_3 N).
Let S(N) denote the time complexity of the for-loop. The "total" time complexity is C * T(N), because each of the T(N) iterations costs C; in tilde notation we can write this as S(N) ~ C * ceil(log_3 N).
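The iteration count is easy to verify by just running the loop (a minimal sketch; tilde_count is a made-up helper):

```python
def tilde_count(N):
    """Count iterations of: for (int i = 1; i < N; i *= 3)."""
    count, i = 0, 1
    while i < N:
        count += 1
        i *= 3
    return count

# matches ceil(log_3 N): e.g. 81 = 3^4 gives 4, and 100 gives 5
print(tilde_count(81))    # 4
print(tilde_count(100))   # 5
```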
I know the big-O complexity of this algorithm is O(n^2), but I cannot understand why.
int sum = 0;
int i = 1, j = n * n;
while (i++ < j--)
    sum++;
Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?
During every iteration you increment i and decrement j which is equivalent to just incrementing i by 2. Therefore, total number of iterations is n^2 / 2 and that is still O(n^2).
big-O complexity ignores coefficients. For example: O(n), O(2n), and O(1000n) are all the same O(n) running time. Likewise, O(n^2) and O(0.5n^2) are both O(n^2) running time.
In your situation, you're essentially incrementing your loop counter by 2 each time through your loop (since j-- has the same effect as i++). So your running time is O(0.5n^2), but that's the same as O(n^2) when you remove the coefficient.
You will have exactly n*n/2 loop iterations (or (n*n-1)/2 if n is odd).
In the big O notation we have O((n*n-1)/2) = O(n*n/2) = O(n*n) because constant factors "don't count".
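The exact count is easy to confirm by simulating the loop (a sketch; loop_iterations is a made-up name, and the post-increment/decrement semantics are written out explicitly):

```python
def loop_iterations(n):
    """Count iterations of: while (i++ < j--) with i = 1, j = n*n."""
    count = 0
    i, j = 1, n * n
    while i < j:       # i++ < j-- compares the old values, then updates both
        i += 1
        j -= 1
        count += 1
    return count

# equals n*n // 2: about half of n^2, which is still O(n^2)
print(loop_iterations(4))   # 16 // 2 = 8
print(loop_iterations(3))   # 9 // 2 = 4
```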
Your algorithm is roughly equivalent to incrementing a single counter by 2 each pass:
while ((i += 2) < n * n)
    ...
which is O(n^2/2), and that is the same as O(n^2) because big-O complexity does not care about constants.
Let m be the number of iterations taken. The loop stops when the incremented i meets the decremented j, so with i starting at 1,
1 + m = n^2 - m
which gives
m = (n^2 - 1) / 2
In Big-O notation, this implies a complexity of O(n^2).
Yes, this algorithm is O(n^2).
To see why, consider the usual hierarchy of complexity classes:
O(1)
O(log n)
O(n)
O(n log n)
O(n²)
O(n^a)
O(a^n)
O(n!)
Each row represents a set of algorithms, and each set is contained in the ones below it: an algorithm in O(1) is also in O(n), O(n²), and so on, but not the reverse. Your algorithm executes about n·n/2 statements, and for large n
n < n·log n < n·n/2
while n·n/2 differs from n² only by a constant factor. So the smallest class in the list that contains your algorithm is O(n²): O(n) and O(n log n) are too small.
For example, with n = 100: n = 100, n·log₂ n ≈ 664, n·n/2 = 5000, n² = 10000.
Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?
Yes! That's why it's O(n^2). By the same logic, it's a lot less than n * n * n, which makes it O(n^3). It's even O(6^n), by similar logic.
big-O gives you information about upper bounds.
I believe you are really trying to ask why the complexity is Theta(n^2) or Omega(n^2), but if you're just trying to understand what big-O is, you first need to understand that it gives upper bounds on functions.
I have the following pseudocode:
for i = 1 to 3*n
    for j = 1 to i*i
        for k = 1 to j
            if j mod i = 1 then
                s = s + 1
            endif
        next k
    next j
next i
When I analyze the number of times the statement s=s+1 is performed, assuming that this operation takes constant time, do I end up with quadratic complexity, or is it linear? The value of n can be any positive integer.
The calculations that I made are the following:
Definitely not quadratic, but it is polynomial.
The outer loop goes through 3n iterations.
On each of those, the middle loop does up to (3n)² = 9n² iterations.
On each of those, the inner loop does up to 9n² more.
So I think it would be O(n^5).
When talking about running time, you should always make explicit in terms of what you are defining your running time.
If we assume you are talking about your running time in terms of n, the answer is O(n^5). This is because what you are doing boils down to this, when we get rid of the constant factors:
do n times:
    do n^2 times:
        do n^2 times:
            do something
And n * n^2 * n^2 = n^5
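To make the bound concrete, here is a small simulation (a hedged sketch; count_loop_work is a made-up name). It counts every pass through the innermost loop, which is what dominates the running time; note that s itself is incremented less often, since the if only fires when j mod i = 1, but the k loop runs regardless:

```python
def count_loop_work(n):
    """Count innermost-loop passes and s increments for the pseudocode above."""
    passes = 0   # total iterations of the k loop -> Theta(n^5)
    s = 0        # how many times s = s + 1 actually runs
    for i in range(1, 3 * n + 1):
        for j in range(1, i * i + 1):
            for k in range(1, j + 1):
                passes += 1
                if j % i == 1:
                    s += 1
    return passes, s

print(count_loop_work(1))   # (56, 16)
print(count_loop_work(2))   # (1183, 195)
```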