This question already has answers here:
How can I find the time complexity of an algorithm?
(10 answers)
Big O, how do you calculate/approximate it?
(24 answers)
Closed 5 years ago.
Hello, I have two functions (shown in the image), and the question is about finding their runtimes.
The given answer for the EX4 function is O(n^2) and for EX5 it is O(n^4).
I don't get this.
Question for EX4:
We have an inner loop that runs from j = 0 to i. From my perspective, the total work is 1 + 2 + ... + n = n(n+1)/2, so the inner loop alone is O(n^2).
However, we also know that the outer loop runs from i = 0 to n, which is O(n). So I thought the answer for EX4 was O(n) * O(n^2) = O(n^3). But the real answer says O(n^2). Why is that?
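Since the image isn't reproduced here, here is a guess at EX4's structure based on the description above (outer loop i = 0..n-1, inner loop j = 0..i-1). Counting the iterations shows where the O(n^3) reasoning goes wrong:

```python
# Hypothetical reconstruction of EX4 (the actual code is only in the image):
# outer loop i = 0..n-1, inner loop j = 0..i-1.
def ex4_iterations(n):
    count = 0
    for i in range(n):
        for j in range(i):
            count += 1  # one unit of inner-body work
    return count

# The sum 0 + 1 + ... + (n-1) = n(n-1)/2 already counts every execution of
# the inner body across ALL outer iterations, so it must not be multiplied
# by another factor of n.
for n in (10, 100, 1000):
    assert ex4_iterations(n) == n * (n - 1) // 2
```

In other words, n(n+1)/2 is the total for both loops combined, not for the inner loop "per outer iteration", which is why EX4 is O(n^2) and not O(n^3).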
Question for EX5:
Similarly, I thought the inner loop is n(n+1)/2 = O(n^2) and the outer loops are n*n = O(n^2), so the whole runtime becomes O(n^2) * O(n^2) = O(n^4), which matches the real answer to this question. But if I justify it this way, then my solution to EX4 doesn't make sense. Why is it O(n^4), specifically?
This question already has answers here:
Is log(n!) = Θ(n·log(n))?
(10 answers)
Closed 1 year ago.
Given two algorithms to solve a problem, one with time complexity O(log(N!)) and another with O(N), which of the two will be faster?
O(log(n!)) is the same as O(n*log(n)), so O(n) has the better worst case.
See also https://stackoverflow.com/questions/2095395/is-logn-Θn-logn
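As a numerical sanity check, one can watch the ratio log(n!) / (n log n) creep toward 1 as n grows. A quick sketch using Python's standard library, where `math.lgamma(n + 1)` equals ln(n!):

```python
import math

# ln(n!) via the log-gamma function: lgamma(n + 1) == ln(n!).
# Since ln(n!) ~ n*ln(n) - n, the ratio is roughly 1 - 1/ln(n), which
# approaches 1 slowly as n grows.
for n in (10, 1_000, 1_000_000):
    ratio = math.lgamma(n + 1) / (n * math.log(n))
    print(n, round(ratio, 4))
```

The ratio tending to a nonzero constant is exactly what Θ(n·log n) means here.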
This question already has answers here:
How can I find the time complexity of an algorithm?
(10 answers)
Closed 5 years ago.
The steps of algorithm are:
Set J = N
Repeat while J > 1
Module A.
J = J/2
Return.
I need to find the time complexity of the following in Big O notation.
Assuming Module A is an O(k) operation, observe that J decreases exponentially; more specifically, the loop always repeats about log n times (base 2, because J is halved at each iteration), so the time complexity is O(k * log n).
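A quick sketch of the loop, with Module A left as a stub (its O(k) cost is an assumption from the answer above). With integer halving the iteration count comes out to exactly floor(log2 n); with exact division it would be ceil(log2 n), and either way it is Θ(log n):

```python
import math

def halving_steps(n):
    """Count the iterations of: J = n; while J > 1: Module A; J = J / 2."""
    j, steps = n, 0
    while j > 1:
        # Module A would run here, at O(k) cost per iteration.
        j //= 2          # integer halving
        steps += 1
    return steps

for n in (2, 7, 16, 1000, 1 << 20):
    assert halving_steps(n) == math.floor(math.log2(n))
```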
This question already has answers here:
Example of O(n!)?
(16 answers)
Closed 5 years ago.
I can't seem to find any examples that use O(n!) time complexity, and I can't comprehend how it works. Please help.
A trivial example is the random sort algorithm (often called bogosort): it randomly shuffles its input until it happens to be sorted.
Of course, it has strictly no use in the real world, but still, it is O(n!).
EDIT: As pointed out in the comments, this is actually the average-case performance of the algorithm. The best case, when the input is already sorted, needs only a single sortedness check, and the worst case is unbounded, since there is no guarantee that the right permutation will ever come up.
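For concreteness, here is a minimal bogosort sketch. It is only safe to run on tiny inputs, for exactly the reasons above:

```python
import random

def bogosort(items):
    """Shuffle until sorted.  The expected number of shuffles grows like n!,
    the best case is a single sortedness check, and there is no worst-case
    bound at all."""
    items = list(items)
    while any(a > b for a, b in zip(items, items[1:])):
        random.shuffle(items)
    return items

print(bogosort([3, 1, 2]))  # [1, 2, 3]
```

On a 3-element list there are 3! = 6 permutations, so on average it takes about six shuffles to stumble on the sorted one; each extra element multiplies that by the next integer, which is where the factorial comes from.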
This question already has answers here:
What is a plain English explanation of "Big O" notation?
(43 answers)
Closed 5 years ago.
I have just started "Cracking the Coding Interview" by Gayle Laakmann McDowell. In the Big O chapter, it says we should drop the non-dominant term.
O(n^2 + n) becomes O(n^2) and O(n + log n) becomes O(n).
Well, I understand that: if we suppose n to be some large number, we can ignore the smaller term, since it will be comparatively much smaller than the dominant one.
But in this case, how can O(5*2^n + 1000*n^100) become O(2^n)?
Isn't n^100 more dominant than 2^n?
n^100, or n raised to any constant power, does not dominate 2^n: an exponential eventually outgrows every polynomial. Compare the logarithms: log2(n^100) = 100*log2(n), while log2(2^n) = n, and n grows faster than 100*log2(n).
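You can even locate the crossover point with exact integer arithmetic (a Python sketch; for these particular constants, 2^n overtakes n^100 somewhere near n = 1000):

```python
# Find the first n where 2**n exceeds n**100, using exact big integers.
# (Start at n = 2: at n = 1 both sides are trivially 2 and 1.)
crossover = next(n for n in range(2, 2000) if 2**n > n**100)
print(crossover)
assert 2**crossover > crossover**100
assert 2**(crossover - 1) <= (crossover - 1)**100
```

The crossover being far out doesn't matter to Big O: past that point 2^n wins by an ever-growing factor, so 5*2^n + 1000*n^100 is O(2^n).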
This question already has answers here:
Big O, how do you calculate/approximate it?
(24 answers)
Closed 6 years ago.
Find the c and n0.
Please explain the steps.
limit as n --> infinity of (n+1)^5 / n^5 = 1.
This is neither 0 nor infinity, so they have the same asymptotic complexity, traditionally written as O(n^5) (in fact Θ(n^5)).
This does assume that each step is constant for whatever you are measuring.
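Assuming the exercise is to show (n+1)^5 = O(n^5), as the limit above suggests, one concrete witness pair falls straight out of the inequality n + 1 <= 2n for n >= 1:

```python
# (n+1)^5 <= (2n)^5 = 32 * n^5 whenever n >= 1, so c = 32 and n0 = 1 work.
# (Smaller c also work; any c > 1 suffices if n0 is taken large enough.)
c, n0 = 32, 1
assert all((n + 1)**5 <= c * n**5 for n in range(n0, 10_000))
```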