Do they have the same complexity, since they vary only by a constant multiplier, or should they be turned into n^3 and n^2 and compared that way?
For Big-O notation, the constant multiplier really doesn't matter. All it does is give the order of the running-time complexity.
You can consider this small example:
Say you have 3 * 100 = 300 apples and 2 * 100 = 200 apples. Surely 300 != 200, but the order of both is the same, that is, on the order of hundreds.
So by the same token, 3(log n) != 2(log n), but both 3(log n) and 2(log n) are on the order of log n, that is, O(log n).
Yes. Multiplying by a constant doesn't matter. They are both just O(log n).
In fact, this is part of the definition of big-O notation. If a function may be bounded by a polynomial in n, then as n tends to infinity, you may disregard lower-order terms of the polynomial.
Both are equivalent to O(log n). The constant does not alter the complexity.
Big O notation is defined in terms of sets of functions that share a common upper bound.
With that being said, it is also important to note that Big O can be defined mathematically as:
O(g(n)) = { f(n) : there exist positive constants c and n0 such that f(n) <= c·g(n) for all n >= n0 }
So as you can see, the particular constant doesn't really matter in Big O; all we care about is that some constant works. So both 3 log n and 2 log n can be described as O(log n).
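As a rough illustration, here is a hypothetical sketch (not code from the question; the halving loops and step counters are made up purely to demonstrate the point): both routines run a loop that halves n, so the loop executes about log2(n) times; one does three constant-time steps per iteration and the other two, giving step counts of roughly 3 log n and 2 log n, yet both are O(log n).

#include <stdio.h>

/* Roughly 3*log2(n) steps: the halving loop runs about log2(n) times
   and does 3 constant-time operations per iteration. */
long steps_3logn(long n) {
    long steps = 0;
    for (long i = n; i > 1; i /= 2)
        steps += 3;   /* three constant-time steps */
    return steps;
}

/* Roughly 2*log2(n) steps: same halving loop, 2 operations per iteration. */
long steps_2logn(long n) {
    long steps = 0;
    for (long i = n; i > 1; i /= 2)
        steps += 2;   /* two constant-time steps */
    return steps;
}

int main(void) {
    /* The two counts always stay within a fixed factor (3/2) of each
       other, which is exactly why the constant is dropped. */
    for (long n = 16; n <= 1000000; n *= 10)
        printf("n = %7ld   3logn-ish = %3ld   2logn-ish = %3ld\n",
               n, steps_3logn(n), steps_2logn(n));
    return 0;
}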
I would like to know, because I couldn't find any information online, how is an algorithm like O(n * m^2) or O(n * k) or O(n + k) supposed to be analysed?
Does only the n count?
The other terms are superfluous?
So O(n * m^2) is actually O(n)?
No, here the k and m terms are not superfluous; they have a valid existence and are essential for computing the time complexity. Together they give the code a concrete complexity.
It may seem that the terms n and k are independent of each other in the code, but combined they determine the complexity of the algorithm.
Say you have to iterate over a loop of n elements and, inside it, you have another loop of k iterations; then the overall complexity becomes O(nk).
Complexity of order O(nk); you can't discard k here:
for (i = 0; i < n; i++)
    for (j = 0; j < k; j++)
        // do something
Complexity of order O(n+k); you can't discard k here:
for (i = 0; i < n; i++)
    // do something
for (j = 0; j < k; j++)
    // do something
Complexity of order O(nm^2); you can't discard m here:
for (i = 0; i < n; i++)
    for (j = 0; j < m; j++)
        for (k = 0; k < m; k++)
            // do something
Answer to the last question ("So O(n * m^2) is actually O(n)?"):
No, O(nm^2) can't be reduced further to O(n), as that would mean m has no significance, which is not actually the case.
FORMALLY: O(f(n)) is the SET of ALL functions T(n) that satisfy:
There exist positive constants c and N such that, for all n >= N,
T(n) <= c f(n)
Here are some examples of when and why factors other than n matter.
[1] 1,000,000 n is in O(n). Proof: set c = 1,000,000, N = 0.
Big-Oh notation doesn't care about (most) constant factors. We generally leave constants out; it's unnecessary to write O(2n), because O(2n) = O(n). (The 2 is not wrong; just unnecessary.)
[2] n is in O(n^3). [That's n cubed]. Proof: set c = 1, N = 1.
Big-Oh notation can be misleading. Just because an algorithm's running time is in O(n^3) doesn't mean it's slow; it might also be in O(n). Big-Oh notation only gives us an UPPER BOUND on a function.
[3] n^3 + n^2 + n is in O(n^3). Proof: set c = 3, N = 1.
Big-Oh notation is usually used only to indicate the dominating (largest and most displeasing) term in the function. The other terms become insignificant when n is really big.
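As a quick check of example [3]: for n >= 1 we have n^2 <= n^3 and n <= n^3, so n^3 + n^2 + n <= 3n^3, which is exactly the bound with c = 3 and N = 1.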
These examples aren't generalizable, and each case may be different. That's the answer to the questions "Does only the n count? Are the other terms superfluous?"
Although there is already an accepted answer, I'd still like to provide the following input:
O(n * m^2): can be viewed as n * m * m; assuming that the bounds for n and m are similar, the complexity would be O(n^3).
Similarly,
O(n * k): would be O(n^2) (with the bounds for n and k being similar),
and
O(n + k): would be O(n) (again, with the bounds for n and k being similar).
PS: It would be better not to assume similarity between the variables, and to first understand how the variables relate to each other (e.g., m = n/2, k = 2n) before attempting to draw conclusions.
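To make that concrete, here is a small hypothetical sketch (the choice m = n is an assumption made purely for illustration, as is the fixed m = 5): when m grows like n, the O(n * m^2) loop nest really performs about n^3 iterations, whereas a small fixed m makes the same nest behave linearly in n.

#include <stdio.h>

/* Counts the iterations of the O(n * m^2) loop nest for given n and m. */
long count_iterations(long n, long m) {
    long count = 0;
    for (long i = 0; i < n; i++)
        for (long j = 0; j < m; j++)
            for (long k = 0; k < m; k++)
                count++;   /* stands in for "do something" */
    return count;
}

int main(void) {
    long n = 100;
    printf("m = n      : %ld iterations (n^3  = %ld)\n",
           count_iterations(n, n), n * n * n);
    printf("m = 5 fixed: %ld iterations (25*n = %ld)\n",
           count_iterations(n, 5), 25 * n);
    return 0;
}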
I am just a bit confused. If the time complexity of an algorithm is given by n^2 log n, what is that in big O notation? Just O(n^2), or do we keep the log?
If that's the time-complexity of the algorithm, then it is in big-O notation already, so, yes, keep the log. Asymptotically, there is a difference between O(n^2) and O((n^2)*log(n)).
A formal mathematical proof would be nice here.
Let's define the following variables and functions:
N - the input length of the algorithm,
f(N) = N^2*ln(N) - a function that describes the algorithm's execution time.
Let's determine whether the growth of this function is asymptotically bounded by N^2, i.e. whether f(N) = O(N^2).
According to the definition of the asymptotic notation [1], g(x) is an asymptotic bound for f(x) if and only if: for all sufficiently large values of x, the absolute value of f(x) is at most a positive constant multiple of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| <= M*g(x) for all x >= x0 (1)
In our case, there must exist a positive real number M and a real number N0 such that:
|N^2*ln(N)| <= M*N^2 for all N >= N0 (2)
Obviously, no such M and N0 exist, because for any arbitrarily large M there is an N0 (any N0 > e^M will do) such that
ln(N) > M for all N >= N0 (3)
Thus, we have proved that N^2*ln(N) is not in O(N^2).
References:
[1] https://en.wikipedia.org/wiki/Big_O_notation
A simple way to understand big O notation is to divide the actual number of atomic steps by the term within the big O and verify that you get a constant (or a value that is smaller than some constant).
For example, if your algorithm does 10n²⋅log n steps:
10n²⋅log n / n² = 10 log n -> not constant in n -> 10n²⋅log n is not O(n²)
10n²⋅log n / (n²⋅log n) = 10 -> constant in n -> 10n²⋅log n is O(n²⋅log n)
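As a rough numerical check of that heuristic, here is a hypothetical sketch (it simply assumes the algorithm really does 10n²⋅log n atomic steps and evaluates the two ratios above for growing n):

#include <stdio.h>
#include <math.h>

int main(void) {
    for (double n = 10; n <= 1e6; n *= 10) {
        double steps = 10 * n * n * log2(n);   /* assumed step count */
        printf("n = %8.0f   steps/n^2 = %7.1f   steps/(n^2 log n) = %.1f\n",
               n, steps / (n * n), steps / (n * n * log2(n)));
    }
    return 0;
}

Compile with -lm; the first ratio (which is just 10 log2 n) keeps climbing while the second stays at 10, matching the two lines above.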
You do keep the log because log(n) will increase as n increases and will in turn increase your overall complexity since it is multiplied.
As a general rule, you would only remove constants. So, for example, if you had O(2 * n^2), you would just say the complexity is O(n^2), because running it on a machine that is twice as powerful shouldn't influence the complexity.
In the same way, if you had complexity O(n^2 + n^2), you would reduce to the above case and just say it's O(n^2). Since log(n) grows more slowly than n^2, if you had O(n^2 + log(n)), you would say the complexity is O(n^2), because n^2 + log(n) is even less than 2 * n^2 for large n.
O(n^2 * log(n)) does not fall into the above situation so you should not simplify it.
If the complexity of some algorithm is O(n^2), it can be written as O(n*n). Is it O(n)? Absolutely not. So O(n^2 * log n) is not O(n^2). What you may want to know is that O(n^2 + log n) = O(n^2).
A simple explanation:
O(n^2 + n) can be written as O(n^2) because, when we increase n, the n term becomes insignificant next to n^2. Thus it can be written O(n^2).
Meanwhile, in O(n^2 log n), as n increases, the gap between n^2 and n^2 log n keeps growing, unlike in the above case.
Therefore, the log n stays.
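For instance, taking logs base 2: at n = 1,000, n^2 = 1,000,000 while n^2 + n = 1,001,000, a gap of about 0.1%, but n^2 log n is roughly 9,970,000, about ten times n^2. At n = 1,000,000 the additive n term is down to a 0.0001% difference, while the multiplicative log n factor has grown to about 20, and it keeps growing.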
So, clearly, log(n) is O(n). But what about (log(n))^2? What about sqrt(n) versus log(n): which bounds which?
There's a family of comparisons like this:
nᵃ (vs.) (log(n))ᵇ
I run into these comparisons a lot, and I've never come up with a good way to solve them. Hints for tactics for solving the general case?
[EDIT: I'm not talking about the computational complexity of calculating the values of these functions. I'm talking about the functions themselves. E.g., f(n) = n is an upper bound on g(n) = log(n) because g(n) ≤ c·f(n) for c = 1 and all n ≥ 1.]
log(n)^a is always O(n^b), for any positive constants a, b.
Are you looking for a proof? All such problems can be reduced to seeing that log(n) is O(n), by the following trick:
log(n)^a = O(n^b) is equivalent to:
log(n) = O(n^{b/a}), since raising to the 1/a power is an increasing function.
This is equivalent to
log(m^{a/b}) = O(m), by setting m = n^{b/a}.
This is equivalent to log(m) = O(m), since log(m^{a/b}) = (a/b)*log(m).
You can prove that log(n) = O(n) by induction, focusing on the case where n is a power of 2.
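A sketch of how that induction might go, with log meaning log base 2: for the base case n = 2^0 = 1, log(1) = 0 <= 1. For the step, assume log(2^k) = k <= 2^k; then log(2^(k+1)) = k + 1 <= 2^k + 2^k = 2^(k+1). For a general n with 2^k <= n < 2^(k+1), log(n) < k + 1 <= 2^k <= n. So log(n) <= n for all n >= 1, i.e. log(n) = O(n) with c = 1.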
log n -- O(log n)
sqrt n -- O(sqrt n)
n^2 -- O(n^2)
(log n)^2 -- O((log n)^2)
n^a versus (log(n))^b
You need either the bases or the powers to be the same. So use your math to change n^a either to log(n)^(whatever exponent makes the bases match) or to (whatever base makes the powers match)^b. There is no general case.
I run into these comparisons a lot (...)
Hints for tactics for solving the general case?
As you ask about the general case and say you run into such comparisons a lot, here is what I recommend:
Use the limit definition of Big-O notation. Once you know that
f(n) = O(g(n)) if the limit of f(n)/g(n) as n approaches +inf exists and is finite,
you can use a computer algebra system, for example the open-source Maxima; see the Maxima documentation about limits.
For more detailed info and an example, check out THIS answer.
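For instance, applied to the comparisons above: the limit of (log(n))^b / n^a as n approaches +inf is 0 for any positive constants a and b (each application of L'Hôpital's rule removes one power of the log while leaving a positive power of n in the denominator), so (log(n))^b = O(n^a). Going the other way, n^a / (log(n))^b tends to +inf, so n^a is not O((log(n))^b).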
If T(n) is O(n), then is it also correct to say T(n) is O(n^2)?
Yes; because O(n) is a subset of O(n^2).
Assuming
T(n) = O(n), n > 0
Then both of the following are true
T(n) = O(2n)
T(n) = O(n^2)
This is because both 2n and n^2 grow as quickly as or more quickly than just plain n. EDIT: As Philip correctly notes in the comments, even a value smaller than 1 can be the multiplier of n, since constant terms may be dropped (they become insignificant for large values of n; EDIT 2: as Oli says, all constants are insignificant per the definition of O). Thus the following is also true:
T(n) = O(0.2n)
In fact, n^2 grows so quickly that you can also say
T(n) = o(n^2)
But not
T(n) = Θ(n^2)
because the functions given provide an asymptotic upper bound, not an asymptotically tight bound.
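To see the set inclusion concretely: T(n) = O(n) means there are constants c and n0 with T(n) <= c·n for all n >= n0. Then for all n >= max(n0, 1) we have n <= n^2, hence T(n) <= c·n <= c·n^2, so the very same constant c witnesses T(n) = O(n^2).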
If you mean O(2 * n), then yes, O(n) == O(2n). The time taken is a linear function of the input size in both cases.
I disagree with the other answer that says O(N) = O(N*N). It is true that the O(N) function will finish in less time than the O(N*N) one, but the completion time is not a function of N*N, so it really isn't true.
I suppose the answer depends on why you are asking the question.
O, also known as Big-Oh, is an upper bound. We can say that there exist constants C and N such that T(n) < C·g(n) for all n > N, where C is a constant.
So as long as some constant C is large enough to absorb the coefficient in T(n), that statement remains valid.
According to the definition of big O (f(n) <= C*g(n), which means f(n) = O(g(n))), it could be deduced that:
f(n) <= C
f(n) <= 2C
I think there are no big differences between these two. What I could come up with is:
f(n) = 1 - 1 / n
f(n) = 2 - 1 / n
C = 1
But what differentiates these two complexities, since both are constant complexity?
Could you show some real-world code to demonstrate the differences between O(1) and O(2)?
There is no difference between O(1) and O(2). Algorithms classifying as O(1) are O(2) and vice versa. In fact, O(c1) is O(c2) for any positive constants c1 and c2.
O(c), where c is a positive constant, simply means that the runtime is bounded independently of the input or problem size. From this it is clear (informally) that O(1) and O(2) are equal.
Formally, consider a function f in O(1). Then there is a constant c such that f(n) <= c * 1 for all n. Let d = c / 2. Then f(n) <= c = (c / 2) * 2 = d * 2 which shows that f is O(2). Similarly if g is O(2) there is a constant c such that g(n) <= c * 2 for all n. Let d = 2 * c. Then g(n) <= c * 2 = d = d * 1 which shows that g is O(1). Therefore O(1) = O(2).
O(1) and O(2) are the same, as is any O(constant value).
The point being that neither rely on some function of N.
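For what it's worth, here is a small hypothetical sketch of the kind of code people have in mind (the function names and the array argument are made up for illustration): both routines take constant time regardless of the array length n, one doing roughly one unit of work and the other roughly two, and both are simply O(1), which is the same set as O(2).

#include <stddef.h>

/* One constant-time operation: read the first element.  O(1). */
int first(const int *a, size_t n) {
    (void)n;              /* the runtime does not depend on n */
    return a[0];
}

/* Two constant-time operations: read and add the first two elements
   (assumes n >= 2).  Twice the work, but still constant time,
   so also O(1), i.e. O(2). */
int sum_of_first_two(const int *a, size_t n) {
    (void)n;              /* the runtime does not depend on n */
    return a[0] + a[1];
}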
There is no difference.
In the graph that accompanied this answer (not reproduced here), the red line represents O(n) and the green curve represents O(n^2).
As you can see by the red line, the 2 and the 1 become insignificant as x increases (the green curve grows at a much faster rate). This is what Big-O notation is trying to capture; constants are relatively meaningless.
Maybe they meant that both algorithms execute in constant time regardless of input size (usually denoted as N), but one of them is twice as fast. But it's an abuse of the big-O notation.
There is no difference between O(1) and O(2).
The order-of notation is unique up to a constant. O(f(x)) means that there is some constant k such that the time would be less than kf(x).
If something is O(2), then there is some constant k such that the program takes less than 2k time. Therefore, there's another constant, k' = 2k, that works for O(1).
There is no difference between O(1) and O(2). In fact, you would not use this notation; it is more like O(N), O(N^2), O(log(N)), etc. This is just an indication of the order of magnitude of the algorithm. Put another way, O(1) would be constant in time, O(N) would be linear in the number of items (N), O(N^2) would be quadratic in the number of items, etc.
Big-O notation is generally used for asymptotic analysis of algorithm complexity, ie analysis of how the algorithm performs as n increases toward infinity. The highest order term in n of the function f(n) will be the dominant part of the function in this case.
As such, lower order terms in n are usually dropped from the function when expressed in big-O (for example, f(n)=2n^2+4 will result in O(n^2) asymptotic big-O complexity).
In the case of the highest term being constant and not dependent on n, then all these constants are effectively the same asymptotically speaking and are usually simply reduced to O(1).
Hence, O(2) would be considered equivalent to O(1).
You typically do not write O(2) or O(2n) but O(1) and O(n) instead. The difference is in actual speed, of course - 5s vs 10s for instance. I found your question a bit confusing.