I know that in Big O notation we only keep the highest-order term because we are placing a theoretical worst-case bound on time complexity, but sometimes I get confused about when we can legally drop a factor or treat it as a constant. For example, if our equation ran in
O((n^3)/3) --> we pull out the 1/3 factor, treat it as a constant, drop it, and say our algorithm runs in O(n^3) time.
What about the case of O((n^3)/((log(n))^2))? Could we pull out the 1/((log(n))^2) factor, treat it as a constant, drop it, and ultimately conclude that the algorithm is O(n^3)? It does not look like we can, but what differentiates this case from the one above? Both factors are small relative to the leading polynomial term in the numerator, but in the second case the denominator really pulls the bound down as n gets larger and larger.
At this point it is a good idea to go back to the formal definition of big O notation. Essentially, when we say that f(x) is O(g(x)) we mean that there exist a constant factor a and a starting input n0 such that f(x) <= a*g(x) for all x >= n0.
For a concrete example, to prove that 3*x^2 + 17 is O(x^2) we can use a = 20 and n0 = 1.
From this definition it becomes easy to see why constant factors get dropped: it's just a matter of adjusting a to compensate. As for your 1/log(n) question, if f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) is also O(h(n)). So yes, 10*n^3/log(n) + n is O(n^3), but that is not a tight upper bound, and it is a weaker statement than saying that 10*n^3/log(n) + n is O(n^3/log(n)). For a tight bound you would want big-Theta notation instead.
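If it helps, here is a minimal Python sketch of that definition, using the witness pair a = 20 and n0 = 1 from the example above (the range of sampled x is arbitrary):

    # A minimal sketch: numerically check the witness pair (a, n0) from the
    # example above for f(x) = 3*x^2 + 17 being O(x^2).
    def f(x):
        return 3 * x**2 + 17

    def g(x):
        return x**2

    a, n0 = 20, 1

    # For every sampled x >= n0, f(x) must not exceed a * g(x).
    assert all(f(x) <= a * g(x) for x in range(n0, 10_000))
    print("f(x) <= a*g(x) holds for all sampled x >= n0")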
Any value that is fixed and does not depend on the variable (like n) can be treated as a constant. You can pull the constants out, remove the lower-order terms, and classify the result as the complexity. Also, big O notation says that if
f(x) <= c*g(x)
for some constant c and all sufficiently large x, then f(x) is O(g(x)). For example:
n^3 * 5 -> here 5 is a constant. Complexity is O(n^3)
4*(n^3)/((log(n))^2) + 7 -> here 4 and 7 are constants, but 1/(log(n))^2 is not, so the complexity is O(n^3/(log n)^2)
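A quick Python sketch (the sampled values of n are arbitrary) shows the difference between the two kinds of factor: 1/3 is the same for every n, while 1/(log(n))^2 keeps shrinking, so it is not a constant:

    import math

    # The factor 1/3 is the same for every n, but 1/(log(n))^2 keeps shrinking,
    # so it cannot be absorbed into a constant.
    for n in (10, 10**3, 10**6, 10**9):
        print(n, 1 / 3, 1 / math.log(n) ** 2)
    # The third column tends to 0 as n grows, which is why n^3/(log n)^2 is a
    # strictly smaller growth class than n^3.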
Here's an example from Wikipedia:
Let f(x) = 6x^4 - 2x^3 + 5, then f(x) is a "big O" of x^4.
Then I swap the coefficients of x^3 and x^4, so that f(x) = -2x^4 + 6x^3 + 5. Will f(x) still be a "big O" of x^4? I assume not, since f(x) will take less time as x becomes larger. My question is:
What is the big-O complexity of the above equation? At the moment I'm ignoring -2x^4, which would make f(x) a "big O" of x^3, but I'm not sure about that.
The expression -2x^4 + 6x^3 + 5 is still O(x^4), because the leading constant is not part of the big-O expression. It should be clear that as x gets arbitrarily large, the x^3 term basically becomes irrelevant. Whether the x^4 term becomes a very large positive or negative number has no bearing on the big-O classification.
Big-O and related asymptotic classes are usually defined in terms of the absolute value of the function, so that it doesn't matter whether it is increasing or decreasing asymptotically.
So it would still be Theta(x^4), which also implies O(x^4).
In the context of time complexity analysis this isn't really relevant, though, since a program/algorithm cannot take negative time, so the function you are asking about cannot describe the execution time of a program/algorithm.
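As a sanity check, here is a small Python sketch; the constants 1 and 13 and the threshold x >= 7 are hand-picked for this example rather than canonical:

    # A sketch with hand-picked constants: |-2x^4 + 6x^3 + 5| stays between
    # 1*x^4 and 13*x^4 once x >= 7, so the polynomial is Theta(x^4).
    def f(x):
        return -2 * x**4 + 6 * x**3 + 5

    for x in range(7, 10_000):
        assert x**4 <= abs(f(x)) <= 13 * x**4
    print("Theta(x^4) bounds hold on the sampled range")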
I'm trying to wrap my head around the meaning of the Landau notation in the context of analysing an algorithm's complexity.
What exactly does the O formally mean in Big-O notation?
So the way I understand it is that O(g(x)) gives a set of functions that grow as rapidly as or slower than g(x), meaning, for example in the case of O(n^2):
where t(x) could be, for instance, x + 3 or x^2 + 5. Is my understanding correct?
Furthermore, are the following notations correct?
f(n) = n^2 + 5n - 2, f(n) is an element of O(n^2)
O(n) is a subset of O(n^2), O(n^2) is a subset of O(2^n)
I also saw the following written down by a tutor:
O(n) <= O(n^2) <= O(2^n)
What does this mean? How can you use less than or equal if the O-notation returns a set?
Could I also write something like this?
So the way I understand it is that O(g(x)) gives a set of functions which grow as rapidly as or slower than g(x).
This explanation of Big-Oh notation is correct.
f(n) = n^2 + 5n - 2, f(n) is an element of O(n^2)
Yes, we can say that. O(n^2), in plain English, represents the "set of all functions that grow as rapidly as or slower than n^2", so f(n) satisfies that requirement.
O(n) is a subset of O(n^2), O(n^2) is a subset of O(2^n)
This notation is correct and it comes from the definition. Any function that is in O(n) is also in O(n^2), since its growth rate is no faster than n^2. 2^n is an exponential time complexity, whereas n^2 is polynomial. You can take the limit of n^2 / 2^n as n goes to infinity, see that it tends to 0, and conclude that O(n^2) is a subset of O(2^n) since 2^n grows faster.
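If you want to see that limit numerically, a tiny Python sketch (sample points chosen arbitrarily) shows n^2 / 2^n collapsing toward 0:

    # n^2 / 2^n tends to 0, so any constant multiple of n^2 is eventually
    # dominated by 2^n -- which is the subset claim above.
    for n in (10, 20, 40, 80):
        print(n, n**2 / 2**n)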
O(n) <= O(n^2) <= O(2^n)
This notation is tricky. As explained here, there is no "less than or equal to" for sets. I think the tutor meant that the time complexity of the functions belonging to the set O(n) is less than (or equal to) the time complexity of the functions belonging to the set O(n^2). Anyway, this notation isn't standard, and it's best to avoid such ambiguities in textbooks.
O(g(x)) gives a set of functions which grow as rapidly as or slower than g(x)
That's technically right, but a bit imprecise. A better description adds two qualifications:
O(g(x)) gives the set of functions which are asymptotically bounded above by g(x), up to constant factors.
This may seem like a nitpick, but one inference from the imprecise definition is wrong.
The 'fixed version' of your first equation, if you make the variables match up and have one limit sign, seems to be:
This is incorrect: the ratio only has to be less than or equal to some fixed constant c > 0.
Here is the correct version:
O(n^2) = { t(n) : lim as n -> infinity of t(n)/n^2 <= c }
where c is some fixed positive real number that does not depend on n.
For example, f(n) = 3n^2 is in O(n^2): one constant c that works for this f is c = 4. Note that the requirement isn't 'for all c > 0', but rather 'for at least one constant c > 0'.
The rest of your remarks are accurate. The <= signs in that expression are an unusual usage, but it's true if <= means set inclusion. I wouldn't worry about that expression's meaning.
There are other, more subtle reasons to talk about 'boundedness' rather than growth rates. For instance, consider the cosine function. |cos(x)| is in O(1), but its derivative fluctuates between negative one and positive one even as x increases to infinity.
If you take 'growth rate' to mean something like the derivative, examples like these become tricky to talk about, but saying |cos(x)| is bounded by 2 is clear.
For an even better example, consider the logistic curve. The logistic function is O(1); however, its derivative (and hence its growth rate) is positive everywhere. It is strictly increasing, always growing, while the constant 1 has a growth rate of 0. This seems to conflict with the first definition unless you add lots of clarifying remarks about what 'grow' means.
So the logistic curve is an always-growing function that is nonetheless in O(1) (there is a plot of it at the Wikipedia link).
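Since the plot is not reproduced here, a short Python sketch makes the same point numerically (the sampled range is arbitrary):

    import math

    # Logistic function: strictly increasing, yet never exceeds 1, hence O(1).
    def logistic(x):
        return 1 / (1 + math.exp(-x))

    prev = logistic(0)
    for x in range(1, 30):          # small range keeps us clear of float rounding
        cur = logistic(x)
        assert cur > prev           # always growing ...
        assert cur < 1              # ... yet bounded above by the constant 1
        prev = cur
    print("strictly increasing and bounded by 1 on the sampled range")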
I'm trying to understand the correct answer to the following question:
(screenshot of a multiple-choice question asking whether lg n is O(log_8 n), Omega(log_8 n), and/or Theta(log_8 n))
The answer was that all of them are true, because lg n can be said to be Theta of log_8 n, which covers all three of the choices.
This is confusing to me because for any positive value of n, lg n is going to be larger than log_8 n, correct? To say that lg n is tightly bound by log_8 n would mean that lg n is both Big O of log_8 n and Big Omega of log_8 n. Or, in plain English, that lg n is no greater than k1 * log_8 n and no less than k2 * log_8 n.
My answer was that lg n was Big Omega of log_8 n, because it should never take less time. Why is this wrong?
Assuming that lg denotes the natural logarithm, one has log_8(n) = lg(n)/lg(8); thus the two functions differ only by a constant multiplicative factor.
Now, let's inspect for example the Big O case from this table, where f is lg and g stands for log_8. If we set k=lg(8), then the condition in the third column is satisfied automatically. In other words, the condition "|f| is bounded above by g (up to constant factor)" is satisfied, since, in loose terms, those two functions are in fact the same up to constant (multiplicative) factor.
The same reasoning applies to Big Omega, and therefore one obtains Big Theta as well (which you get directly with k1 = k2 = lg(8) in its definition).
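A short Python sketch, assuming as above that lg is the natural logarithm, shows that the ratio lg(n)/log_8(n) is the same constant lg(8) for every n:

    import math

    # lg(n) / log_8(n) is the same constant for every n, namely lg(8).
    for n in (2, 100, 10**6, 10**12):
        print(n, math.log(n) / math.log(n, 8))
    print("lg(8) =", math.log(8))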
I know that if I have a for loop and a nested for loop, which both iterate from 1 to n, I can multiply the run times of the two loops to get O(n^2). This is a clean and simple calculation. However, suppose you had iteration counts like so,
n = 2, k = 5
n = 3, k = 9
n = 4, k = 14
where k is the number of times the inner for loop iterates. At one point it is larger than n^2, then it is exactly n^2, then it becomes less than n^2. Assuming you cannot determine k from n, and these points of n may even be very far apart, how do you calculate Big-O?
I tried graphing the points. At one point I could say it was O(n^3), since some points exceed n^2, and further along it would be O(n^2). Which one should I choose?
You state in your question that k is:
"... At one point, it is larger than n^2"
This is the uncertainty (or non-specificity) in your question that makes it hard to answer rigorously. Anyway, for the remainder of this answer, we shall assume that what you mean by the quote above is that:
For all values of n, the value of k(n) is bounded from above by
C·n^2, for some constant C>0.
From here on, let's refer to this statement as (+).
Now, since you're mentioning Big-O notation, we'll proceed to somewhat loosely define what this actually means:
f(n) = O(g(n)) means c · g(n) is an upper bound on f(n). Thus
there exists some constant c such that f(n) is always ≤ c · g(n),
for sufficiently large n (i.e., n ≥ n0 for some constant n0).
I.e., Big-O notation is a way of describing an upper bound on the asymptotic (limiting) behaviour of our algorithm. You write in your question, however, that:
"And at one point, I could say it was O(n^3) since some points exceed n^2, and further down, it would be O(n^2)"
Now this is a very specific analysis of how the inner loop of your algorithm behaves for specific values of n, and really not something that is related to asymptotic analysis (or Big-O notation). We're not interested in how the algorithm behaves for specific values of n, but in whether we can find some general upper bound for the algorithm given that n is "sufficiently large" (n ≥ n0 for some constant n0).
Now, with these comments above, we can proceed to analysing the asymptotic behaviour of your algorithm.
We can approach this using Sigma notation, making use of statement (+) above, k(n) ≤ C·n^2:
T(n) = sum_{i=1}^{n} k(n) ≤ sum_{i=1}^{n} C·n^2 = C·n^3, i.e. T(n) = O(n^3)   (++)
The last step (++) follows from the definition of Big-O notation that we loosely stated above.
Hence, given that we interpret your information regarding k as (+), your algorithm runs in O(n^3) (which is an upper bound, not necessarily a tight one).
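To make the argument concrete, here is a small Python sketch; the function k(n) below is made up (a random count that merely respects assumption (+)), and C = 2 is an arbitrary choice:

    import random

    C = 2  # the constant from assumption (+); its exact value is arbitrary here

    def k(n):
        # Hypothetical inner-loop count: any value not exceeding C * n^2 works.
        return random.randint(0, C * n * n)

    def total_inner_iterations(n):
        # The outer loop runs n times; on each pass the inner loop runs k(n) times.
        return sum(k(n) for _ in range(n))

    for n in (10, 50, 100):
        assert total_inner_iterations(n) <= C * n**3
    print("total work stays within C * n^3 on the sampled inputs")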
I have used the Master Theorem to solve recurrence relations. I have gotten one down to Θ(3n^2 - 9n). Does this equal Θ(n^2)? I have another recurrence for which the solution is Θ(2n^3 - 100n^2). In Big-Theta notation do you always use only the largest term? So my second one would be Θ(n^3)? It just seems like 100n^2 would be more important in the second case. So will it matter if I discard it?
Any suggestions?
Yes, your assumptions are correct. The first one is Θ(n^2) and the second one is Θ(n^3). When you are using Θ notation you only keep the largest term.
In the case of your second recurrence, consider n = 1000: then n^3 = 1,000,000,000, whereas 100n^2 is just 100,000,000. As the value of n increases, n^3 becomes more and more dominant over 100n^2.
For theoretical purposes you don't need to consider the constant, however large it might be. But practical applications might prefer an algorithm with a small constant even if its complexity is higher. For example, it might be better to use an algorithm with 0.01n^3 complexity over one with 10000n^2 complexity if the value of n is not very large.
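To put numbers on that last remark (using the example figures above), the cubic algorithm only becomes the worse choice once n passes 10^6:

    # Crossover point: 0.01 * n^3 equals 10000 * n^2 exactly at n = 1_000_000.
    for n in (10**5, 10**6, 10**7):
        cubic, quadratic = 0.01 * n**3, 10000 * n**2
        print(n, cubic, quadratic, "cubic cheaper" if cubic < quadratic else "quadratic cheaper or equal")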
If we have the function f(n) = 3n^2 - 9n, the lower-order terms and constants can be ignored; we consider the higher-order term because it plays the major role in the growth of the function.
By considering only the highest-order term we can easily find an upper bound. Here is the example:
f(n) = 3n^2 - 9n
For all n >= 1,
3n^2 <= 3n^2
and -9n <= n^2,
thus f(n) = 3n^2 - 9n <= 3n^2 + n^2 <= 4n^2.
The upper bound of f(n) is 4n^2; that means that for all n >= 1, the value of f(n) is never greater than 4n^2.
Therefore f(n) = O(n^2) with c = 4 and n0 = 1. To conclude f(n) = Θ(n^2) rather than just O(n^2), note the matching lower bound: 3n^2 - 9n >= n^2 for all n >= 5.
We can also find the bound directly by ignoring the lower-order terms and constants in f(n) = 3n^2 - 9n; the result is the same, Θ(n^2).
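A quick numeric check of the sandwich bound (the sampled range is arbitrary):

    # Check n^2 <= 3n^2 - 9n <= 4n^2 for sampled n >= 5; the lower bound
    # supplies the Omega(n^2) half needed to conclude Theta(n^2).
    def f(n):
        return 3 * n**2 - 9 * n

    for n in range(5, 10_000):
        assert n**2 <= f(n) <= 4 * n**2
    print("Theta(n^2) bounds hold for 5 <= n < 10000")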