This question already has answers here:
Is Big O(logn) log base e?
(7 answers)
Closed 8 years ago.
Most solutions to Exercise 4.4.6 of Introduction to Algorithms, 3rd edition, say
n*log3(n) = Ω(n*lg(n)).
Does this mean that log3(n) is equivalent to log2(n) when we are discussing the time complexity of algorithms?
Thanks
As far as big-O notation is concerned, the base of the logarithm makes no real difference, because of an important property called change of base: log_b(n) = log_a(n) / log_a(b) for any bases a and b.
In particular, log3(n) = log2(n) / log2(3), so changing the base of the logarithm only changes the function by a constant factor, and big-O notation ignores constant factors.
So, yes. In terms of big-Oh notation, log3(n) is equivalent to log2(n).
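As a quick sanity check (a sketch of my own, not part of the original answer), the ratio log3(n) / log2(n) is the same constant for every n:

    import math

    # The ratio log3(n) / log2(n) equals the constant 1 / log2(3) ~= 0.6309
    # for every n, so the two logarithms differ only by a constant factor.
    for n in (10, 1_000, 1_000_000):
        print(math.log(n, 3) / math.log(n, 2))  # ~0.6309 each time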
This question already has answers here:
What is a plain English explanation of "Big O" notation?
(43 answers)
Closed 5 years ago.
I have just started "Cracking the Coding Interview" by Gayle Laakmann McDowell. In the Big O chapter, it says we should drop the non-dominant terms.
O(n^2 + n) becomes O(n^2) and O(n + log n) becomes O(n).
Well, I understand that. If we suppose n to be some large number, then we can ignore the smaller term, since it will be comparatively much smaller than the dominant one.
But, in this case, how can O(5*2^n + 1000n^100) become O(2^n)?
Isn't n^100 more dominant than 2^n?
n^100, or n raised to any constant power, does not dominate 2^n. An exponential eventually outgrows every polynomial: comparing logarithms, 2^n grows like n*log(2) while n^100 grows like 100*log(n), and n outpaces 100*log(n) once n is large enough.
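You can see this numerically with a small sketch of mine (not from the original answer; Python's exact integer arithmetic avoids overflow):

    # Find the first n at which 2**n overtakes n**100.
    n = 2  # start at 2; at n = 1 we trivially have 2 > 1
    while 2**n <= n**100:
        n += 1
    print(n)  # ~996: beyond this point 2**n stays ahead forever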
This question already has answers here:
What is the difference between Θ(n) and O(n)?
(9 answers)
Closed 5 years ago.
I am taking an online class on algorithms and I had the following quiz. I got it wrong and am trying to understand the reason for the answer.
Which of the following is O(n^3)?
a) 11n + 15 lg n + 100
b) 1/3 n^2
c) 25000 n^3
d) All of the above.
The correct answer is (d), all of the above. The reason is that big-O notation provides only an upper bound on the growth rate of a function as n gets large.
I am not sure why the answer is not (c). For example, the upper bound on (b) is less than n^3.
The reason is that, formally, big-O notation is an asymptotic upper bound.
So 1/3*n^2 is O(n^2), but it is also O(n^3) and also O(2^n).
While in everyday conversation about complexity O(...) is used as a tight bound (both upper and lower), theta notation, Θ(...), is the technically correct term for that.
For more info see What is the difference between Θ(n) and O(n)?
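Here is a minimal check of the formal definition (my own sketch; the witnesses c and n0 below are choices I made for this example, not values from the answer):

    # f(n) is O(g(n)) iff there are constants c > 0 and n0 such that
    # f(n) <= c * g(n) for all n >= n0.
    # For f(n) = (1/3)*n^2 and g(n) = n^3, the witnesses c = 1, n0 = 1 work:
    def f(n): return n**2 / 3
    def g(n): return n**3

    c, n0 = 1, 1
    assert all(f(n) <= c * g(n) for n in range(n0, 100_000))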
This question already has answers here:
Which algorithm is faster O(N) or O(2N)?
(6 answers)
Closed 8 years ago.
If an algorithm iterates over a list of numbers two times before returning an answer is the runtime O(2n) or O(n)? Does the runtime of an algorithm always lack a coefficient?
Big-O notation describes the asymptotic complexity of an algorithm, most often its worst case. Any constant factors are dropped from the analysis. Hence, from a theoretical perspective, O(2n) should always be written as O(n). From the standpoint of practical implementation, however, if you can cut the work down to one iteration over the list of numbers, you will see some increase in performance.
A two-pass implementation may still be slower than one that doesn't iterate twice, but both are O(n), as the time complexity scales based only on the size of n.
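For illustration (a sketch of mine, not from the original answer; the function names are made up), here are a two-pass and a one-pass version of the same task, both O(n):

    # Both functions are O(n). The first makes two passes (~2n steps),
    # the second a single pass (~n steps); big-O treats them identically,
    # though the one-pass version may run measurably faster in practice.
    # Both assume xs is non-empty.
    def min_max_two_pass(xs):
        return min(xs), max(xs)  # pass 1 (min) and pass 2 (max) over xs

    def min_max_one_pass(xs):
        lo = hi = xs[0]
        for x in xs[1:]:
            if x < lo:
                lo = x
            elif x > hi:
                hi = x
        return lo, hi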
The convention is to ignore constant coefficients when reporting big-O time.
So if an algorithm were O(n), O(2n), or O(3n) for example, you would report O(n).
Your suspicion is correct. You leave off the coefficient. See http://en.wikipedia.org/wiki/Big_O_notation.
From the example,
Now one may apply the second rule: 6x^4 is a product of 6 and x^4 in which the first factor does not depend on x. Omitting this factor results in the simplified form x^4.
This question already has answers here:
What is a plain English explanation of "Big O" notation?
(43 answers)
Closed 9 years ago.
What is Big O notation, and why do we measure the complexity of algorithms in Big O notation?
An example would help.
You should check the Wikipedia article:
In mathematics, big O notation describes the limiting behavior of a
function when the argument tends towards a particular value or
infinity, usually in terms of simpler functions. It is a member of a
larger family of notations that is called Landau notation,
Bachmann–Landau notation (after Edmund Landau and Paul Bachmann), or
asymptotic notation. In computer science, big O notation is used to
classify algorithms by how they respond (e.g., in their processing
time or working space requirements) to changes in input size. In
analytic number theory, it is used to estimate the "error committed"
while replacing the asymptotic size, or asymptotic mean size, of an
arithmetical function, by the value, or mean value, it takes at a
large finite argument. A famous example is the problem of estimating
the remainder term in the prime number theorem.
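For a concrete example (my own illustration, not from the quoted article; the function name is made up), consider linear search:

    # Linear search performs at most len(xs) comparisons, so its
    # worst-case running time grows linearly with the input size: O(n).
    def linear_search(xs, target):
        for i, x in enumerate(xs):  # up to n iterations
            if x == target:
                return i
        return -1  # worst case: target absent after n comparisons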
This question already has answers here:
Closed 13 years ago.
Possible Duplicate:
Are there any O(1/n) algorithms?
Is it ever possible for your code to be Big O less than O(1)?
O(1) simply means a constant-time operation. That time could be 1 nanosecond or 1 million years; the notation is not a measure of absolute time. Unless, of course, you are working on the OS for a time machine, in which case perhaps your DoTimeTravel() function would have O(-1) complexity :-)
Not really. O(1) is constant time. Whether you express that as O(1), O(2), or O(0.5) makes no difference as far as big-O notation goes; they all denote the same class.
As noted in this question, it is technically possible to have an O(1/n) algorithm, but no real-world useful algorithm would satisfy this (though some algorithms do have 1/n as part of their complexity).
The only thing that would take less than O(1) (constant time) would be an operation that did absolutely nothing, and thus took zero time. But even a NOP usually takes a fixed number of cycles...
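As a contrived sketch of that 1/n idea (entirely my own; the function name and the sleep are just illustrative):

    import time

    # The routine deliberately does less as n grows, mimicking "O(1/n)".
    # Its wall-clock time still never drops below the fixed overhead of
    # the call itself, which is why nothing useful beats constant time.
    # Assumes n > 0.
    def shrinking_wait(n):
        time.sleep(1.0 / n)  # sleeps for 1/n seconds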