Assuming f(n) = n!, I can show that f(n) = O(n!) with C = 1 and n_0 = 1.
However, when I try to prove the right-hand side, n! = O((n+1)!), the constant I find is C >= 1/n with n_0 = 0.
Can C be in terms of n?
It is certainly the case that n! = O((n+1)!). The constant C cannot depend on n, but it does not need to: almost any fixed choice of C works (C = 1 already does), since (n+1)! exceeds n! by a factor of n+1.
Note that asking whether n! = O((n+1)!) is (strictly speaking) a different question from asking whether O(n!) = O((n+1)!), and in this case the answers differ. The latter can be read as: "Is the set of all functions bounded from above by n! the same as the set of all functions bounded from above by (n+1)!?" That is not true, because (n+1)! is not in O(n!): the ratio (n+1)!/n! = n+1 eventually exceeds any fixed constant c.
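For concreteness, here is a compact way to write down both directions discussed above, using the standard definition (f(n) = O(g(n)) iff f(n) <= c·g(n) for all n >= n_0, with c a fixed constant); this is just a sketch of the argument:

```latex
% n! = O((n+1)!): the fixed constant c = 1 already works, so c never needs to depend on n.
n! \le 1 \cdot (n+1)! \quad \text{for all } n \ge 1 \qquad (c = 1,\ n_0 = 1)

% (n+1)! \neq O(n!): no fixed constant can work, because the ratio is unbounded.
\frac{(n+1)!}{n!} = n + 1 \to \infty \quad \text{as } n \to \infty
```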
I'm trying to wrap my head around the meaning of the Landau notation in the context of analysing an algorithm's complexity.
What exactly does the O formally mean in Big-O-Notation?
So the way I understand it is that O(g(x)) gives a set of functions which grow as rapidly as, or slower than, g(x), meaning, for example in the case of O(n^2):
where t(x) could be, for instance, x + 3 or x^2 + 5. Is my understanding correct?
Furthermore, are the following notations correct?
I saw the following written down by a tutor. What does this mean? How can you use "less than or equal" if the O-notation denotes a set?
Could I also write something like this?
So the way I understand it is that O(g(x)) gives a set of functions which grow as rapidly as, or slower than, g(x).
This explanation of Big-Oh notation is correct.
f(n) = n^2 + 5n - 2, f(n) is an element of O(n^2)
Yes, we can say that. O(n^2), in plain English, represents the "set of all functions that grow as rapidly as, or slower than, n^2", so f(n) satisfies that requirement.
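For instance, one concrete (and by no means unique) choice of witnesses for f(n) = n^2 + 5n - 2 is c = 6 and n_0 = 1:

```latex
n^2 + 5n - 2 \;\le\; n^2 + 5n^2 \;=\; 6 n^2 \quad \text{for all } n \ge 1
```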
O(n) is a subset of O(n^2), O(n^2) is a subset of O(2^n)
This notation is correct and it follows from the definition. Any function that is in O(n) is also in O(n^2), since its growth rate is no faster than that of n^2. 2^n is an exponential time complexity, whereas n^2 is polynomial. You can take the limit of n^2 / 2^n as n goes to infinity, show that it is 0, and conclude that O(n^2) is a subset of O(2^n), since 2^n grows faster.
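The limit mentioned above can be evaluated, for example, with two applications of L'Hôpital's rule; a limit of 0 means n^2 is eventually dominated by 2^n (and hence by any constant multiple of it):

```latex
\lim_{n \to \infty} \frac{n^2}{2^n}
  = \lim_{n \to \infty} \frac{2n}{2^n \ln 2}
  = \lim_{n \to \infty} \frac{2}{2^n (\ln 2)^2}
  = 0
```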
O(n) <= O(n^2) <= O(2^n)
This notation is tricky. As explained here, we don't have "less than or equal to" for sets. I think the tutor meant that the time complexity of the functions in the set O(n) is less than (or equal to) the time complexity of the functions in the set O(n^2). Anyway, this notation is not standard, and it's best to avoid such ambiguities in textbooks.
O(g(x)) gives a set of functions which grow as rapidly as, or slower than, g(x)
That's technically right, but a bit imprecise. A better description includes an addendum:
O(g(x)) gives the set of functions which are asymptotically bounded above by g(x), up to constant factors.
This may seem like a nitpick, but one inference from the imprecise definition is wrong.
The 'fixed version' of your first equation, if you make the variables match up and have one limit sign, seems to be:
This is incorrect: the ratio only has to be less than or equal to some fixed constant c > 0.
Here is the correct version:

O(g(x)) = { t(x) : t(x)/g(x) <= c for all sufficiently large x, for some constant c > 0 }

where c is some fixed positive real number that does not depend on x.
For example, f(n) = 3n^2 is in O(n^2): one constant c that works for this f is c = 4. Note that the requirement isn't 'for all c > 0', but rather 'for at least one constant c > 0'.
The rest of your remarks are accurate. The <= signs in that expression are an unusual usage, but it's true if <= means set inclusion. I wouldn't worry about that expression's meaning.
There are other, more subtle reasons to talk about 'boundedness' rather than growth rates. For instance, consider the cosine function. |cos(x)| is in O(1), but its derivative keeps oscillating between negative one and positive one even as x increases to infinity.
If you take 'growth rate' to mean something like the derivative, examples like these become tricky to talk about, whereas saying |cos(x)| is bounded by 2 is clear.
For an even better example, consider the logistic curve. The logistic function is O(1); however, its derivative is positive everywhere, so the function is strictly increasing, always growing, while the constant function 1 has a growth rate of 0. This seems to conflict with the first definition unless you add a lot of clarifying remarks about what 'grow' means.
An always growing function in O(1): see the logistic curve image at the Wikipedia link.
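A quick numerical sketch of that point (the `logistic` function below is the standard 1/(1 + e^-x), written here for illustration; it is not code from the original post): the values keep increasing, yet never reach the constant bound 1.

```python
import math

def logistic(x):
    """Standard logistic function: strictly increasing, yet bounded above by 1."""
    return 1.0 / (1.0 + math.exp(-x))

xs = [0, 1, 2, 4, 8, 16]
values = [logistic(x) for x in xs]

# The function is always growing ...
assert all(a < b for a, b in zip(values, values[1:]))
# ... but it never exceeds the constant 1, so it is O(1).
assert all(v < 1.0 for v in values)
print(values)
```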
I understand that classifying with Big-O is essentially giving an "upper bound" of sorts, so I understand it graphically; what I don't understand is how to use the formal Big-O definition to solve these types of problems. Any help is appreciated.
Although there are platforms better suited to this question than SO, since it is purely mathematical, understanding it is fundamental for using big-O in a Computer Science context as well, so I like the question. And understanding your particular example will likely shed some light on what big-O is in general, as well as provide some practical intuition. So I will try to explain:
We have a function f(n) = n^2. To show that it is not in O(1), we have to show that it grows faster than a function g(n) = c, where c is some constant. In other words, that f(n) > g(n) for sufficiently large n.
What does sufficiently large mean? It means that for any constant c, we can find an N so that f(n) > g(n) for all n > N.
This is how we define asymptotic behaviour. Your constant function may be larger than f(n), but as n grows enough, you will eventually reach a point where f(n) remains larger forever. How to prove it?
Whatever constant c you choose - however large - we can construct our N. Let us say N = √c. Since c is a constant, so is √c.
Now, we have f(n) = n^2 > c = g(n) whenever n > √c.
Therefore, f(n) is not in O(1). □
The key here is that we found a constant N (that is, one which depends on our constant c but not on our variable n) such that this inequality holds. It can take some effort to wrap one's head around, but working through a few other simple examples helps build the intuition.
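Written against the formal definition, the proof above amounts to the following (a sketch in symbols; N = √c is the same witness as in the prose):

```latex
% n^2 \in O(1) would require: \exists c > 0 \ \exists n_0 \ \forall n \ge n_0 : n^2 \le c \cdot 1.
% But for every c > 0, taking N = \sqrt{c} gives
n > \sqrt{c} \;\Longrightarrow\; n^2 > c = c \cdot 1,
% so the required inequality fails for all n > \sqrt{c}, no matter which n_0 is chosen.
% Hence no pair (c, n_0) exists, and n^2 \notin O(1).
```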
My algorithm class's homework claims that O(n^3) is more efficient than O(2^n).
When I put these functions into a graphing calculator, f(x) = 2^x appears to be consistently more efficient for very large n (starting from around n = 982).
Considering that for a function f(n) = O(g(n)), it must be smaller for all n greater than some n0, wouldn't this mean that from n = 982 we could say that O(2^n) is more efficient?
2^n grows much faster than n^3. Maybe you entered the wrong values into your calculator or something like that. Also note that more efficient means less time, which means a lower value on the y-axis.
Let me show you some correct plots for those functions (using Wolfram Alpha):
At first 2^n is smaller (but only for a tiny range); after that you can see how 2^n grows beyond n^3.
After this intersection the situation never changes again and 2^n remains far greater than n^3. That also holds for the range you analysed, i.e. n > 982, as seen in the next plot (the plot for n^3 is near the x-axis):
Also note that in Big-O notation we always compare functions based on their growth. This is why something like O(n^3) does not contain only the functions f with f(x) <= x^3, but rather the functions f with f(x) <= C * x^3, where C is an arbitrary constant; it could be big, it could be small. This way the comparison is about growth, not constant factors. Also note that the condition is allowed to fail for finitely many x, but there must exist some bound x' from which point on it holds, i.e. for every x > x'.
Compare this explanation to the complete mathematical definition from Wikipedia, where their k is our C, their n is our x and their n_0 is our x':

|f(n)| <= k * g(n) for all n >= n_0, for some constants k > 0 and n_0

which defines, if true, that f(n) is in the set O(g(n)).
You confuse O(2^n) and 2^n. O(2^n) is actually C * 2^n, where C is an arbitrarily chosen positive constant. Likewise, O(n^3) is D * n^3, where D is another arbitrarily chosen positive constant. The claim "O(n^3) is more efficient than O(2^n)" means that, given any fixed C and D, it is always possible to find an n0 such that for any n >= n0, D * n^3 < C * 2^n.
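Here is a small sketch of that claim with concrete constants (the values of C and D below are arbitrary choices made for this illustration; the point is only that some n0 always exists, not these particular numbers):

```python
# For fixed constants, D * n**3 eventually stays below C * 2**n.
C = 1      # constant in front of 2**n (arbitrary choice for this sketch)
D = 1000   # constant in front of n**3 (arbitrary choice for this sketch)

n = 1
while D * n**3 >= C * 2**n:
    n += 1
n0 = n
print(f"crossover at n0 = {n0}")   # n0 = 24 for these constants

# Beyond the crossover the exponential keeps dominating (checked over a sample range).
assert all(D * n**3 < C * 2**n for n in range(n0, n0 + 1000))
```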
I know that if I have a for loop and a nested for loop, which both iterate 1 to n times, I can multiply the run times of both loops to get O(n^2). This is a clean and simple calculation. However, suppose the iterations went like this:
n = 2, k = 5
n = 3, k = 9
n = 4, k = 14
where k is the number of times the inner for loop iterates. At one point it is larger than n^2, then it is exactly n^2, then it becomes less than n^2. Assuming you cannot determine k from n, and that these points of n may even be very far apart, how do you calculate Big-O?
I tried graphing points. And at one point, I could say it was O(n^3) since some points exceed n^2, and further down, it would be O(n^2). Which one should I choose?
You state in your question that k is:
"... At one point, it is larger than n^2"
This is the uncertainty (or non-specificity) in your question that makes it hard to answer rigorously. Anyway, for the remainder of this answer, we shall assume that what you mean by the quote above is that:
For all values of n, the value of k(n) is bounded from above by C·n^2, for some constant C > 0.
From here on, let's refer to this statement as (+).
Now, since you're mentioning Big-O notation, we'll proceed to somewhat loosely define what this actually means:
f(n) = O(g(n)) means c · g(n) is an upper bound on f(n). Thus there exists some constant c such that f(n) is always ≤ c · g(n), for sufficiently large n (i.e., n ≥ n0 for some constant n0).
I.e., Big-O notation is a way to describe an upper bound on the asymptotic (limiting) behaviour of our algorithm. You write in your question, however, that:
"And at one point, I could say it was O(n^3) since some points exceed n^2, and further down, it would be O(n^2)"
Now this is a very specific analysis of how the inner loop of your algorithm behaves for specific values of n, and really not something that relates to asymptotic analysis (or Big-O notation). We're not interested in specifics about how the algorithm behaves for specific values of n, but in whether we can find some general upper bound for the algorithm given that n is "sufficiently large" (n ≥ n0 for some constant n0).
Now, with these comments above, we can proceed to analysing the asymptotic behaviour of your algorithm.
We can approach this using Sigma notation, making use of statement (+) above, k(i) ≤ C·i^2. Writing T(n) for the total number of inner-loop iterations:

T(n) = Σ_{i=1}^{n} k(i) ≤ Σ_{i=1}^{n} C·i^2 = C · n(n+1)(2n+1)/6 ≤ C·n^3 = O(n^3)   (++)

The last step (++) follows from the definition of Big-O notation that we loosely stated above.
Hence, given that we interpret your information regarding k as (+), your algorithm runs in O(n^3) (which is an upper bound, not necessarily a tight one).
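To make the bound concrete, here is a minimal sketch of such a nested loop. The inner iteration count k(i) = i^2 + 3 below is made up; the only property the analysis uses is that it satisfies (+) with C = 4 (i^2 + 3 <= 4·i^2 for i >= 1), which is exactly what forces the O(n^3) total.

```python
def total_inner_iterations(n):
    """Count the total work of a nested loop whose inner loop runs k(i) times."""
    total = 0
    for i in range(1, n + 1):   # outer loop: n iterations
        k_i = i * i + 3         # made-up k(i); only k(i) <= 4*i^2 matters for the bound
        for _ in range(k_i):    # inner loop: k(i) iterations
            total += 1
    return total

# The total stays within a constant multiple of n^3 (c = 5 works for these n).
for n in (10, 50, 200):
    assert total_inner_iterations(n) <= 5 * n**3
```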
Question 1: Under what circumstances would O(f(n)) = O(k f(n)) be the most appropriate form of time-complexity analysis?
Question 2: Working from mathematical definition of O notation, how to show that O(f(n)) = O(k f(n)), for positive constant k?
For the first Question I think it is average case and worst case form of time-complexity. Am I right? And what else should I write in that?
For the second question, I think we need to start from the mathematical definition. So is the answer something like: because multiplication by a constant just corresponds to a readjustment of the value of the arbitrary constant k in the definition of O?
My view: For the first one I think it is average case and worst case form of time-complexity. am i right? and what else do i write in that?
No! Big O notation has NOTHING to do with average case or worst case. It is only about the order of growth of a function - particularly, how quickly a function grows relative to another one. A function f can be O(n) in the average case and O(n^2) in the worst case - this just means the function behaves differently depending on its inputs, and so the two cases must be accounted for separately.
Regarding question 2, it is obvious to me from the wording of the question that you need to start with the mathematical definition of Big O. For completeness's sake, it is:
Formal Definition: f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.
(source http://www.itl.nist.gov/div897/sqg/dads/HTML/bigOnotation.html)
So, you need to work from this definition and write a mathematical proof showing that O(f(n)) = O(k·f(n)). Start by substituting k·f(n) for g(n) in the definition above; the rest should be quite easy.
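Spelling out "the rest" a little (a sketch only; the constants chosen below are just one possible choice), write T(n) for an arbitrary function in one of the two sets:

```latex
% If T(n) \in O(f(n)): there are c > 0 and n_0 with 0 \le T(n) \le c\,f(n) for n \ge n_0. Then
T(n) \le c\, f(n) = \tfrac{c}{k}\,\bigl(k\, f(n)\bigr) \quad \text{for all } n \ge n_0,
% so the constants (c/k, n_0) witness T(n) \in O(k\,f(n)).
% Conversely, if T(n) \le c\,\bigl(k\,f(n)\bigr) for n \ge n_0, then
T(n) \le (c\, k)\, f(n) \quad \text{for all } n \ge n_0,
% so (ck, n_0) witness T(n) \in O(f(n)).  Hence O(f(n)) = O(k\,f(n)) for any constant k > 0.
```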
Question 1 is a little vague, but your answer for question 2 is definitely lacking. The question says "working from the mathematical definition of O notation". This means that your instructor wants you to use the mathematical definition:
f(x) = O(g(x)) if and only if limsup [x -> infinity] |f(x)/g(x)| < infinity
And he wants you to plug in g(x) = k f(x) and prove that that inequality holds.
The general argument you posted might get you partial credit, but it is reasoning rather than mathematics, and the question is asking for mathematics.
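For what it's worth, with the limit-based definition the computation is essentially a one-liner (assuming k > 0 and that f(x) is nonzero for large x, so the ratio is defined):

```latex
\limsup_{x \to \infty} \left|\frac{f(x)}{k\, f(x)}\right| = \frac{1}{k} < \infty
\qquad\text{and}\qquad
\limsup_{x \to \infty} \left|\frac{k\, f(x)}{f(x)}\right| = k < \infty,
% so f(x) = O(k\,f(x)) and k\,f(x) = O(f(x)), i.e. O(f(x)) = O(k\,f(x)).
```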