f(n) = log10 n ∈ O(log n)
So far, according to me, c = 1 and k = 1, which I think shows that the statement above is correct, but I am not sure it is that simple, and I don't know how to break it down into steps to prove it. Does someone know how to break it down into steps?
It's not sufficient to give a pair of constants for which you think the definition of Big O is true unfortunately. :) You have to actually prove that those constants work.
Since you're asking for steps to prove this (and not for someone to prove it for you), I'll do my best to give you some steps to start with.
Firstly, for this problem, I suggest you review three pieces of information that will be helpful for you:
The definition of Big O. f(n) ∈ O(g(n)) if and only if there exist constants c > 0 and n0 such that f(n) ≤ c·g(n) for all n ≥ n0. If this definition doesn't ring a bell, then you will need to take a few steps back and review it in more depth.
Since the world of computer science generally defaults to using base 2, when we say log n, we really mean log2 n. (I'm assuming you're asking this for a computer science-related reason, but if not, you can ignore this piece and still solve your problem.)
Logarithm rules. Recall the change-of-base rule: log_a(b) = log_c(b) / log_c(a)
Now, see if you can use these three pieces of information to prove that f(n) = log10 n ∈ O(log n).
I understand that classifying something in Big-O essentially means giving an "upper bound" of sorts, so I understand it graphically; what I don't understand is how to use the formal Big-O definition to solve these types of problems. Any help is appreciated.
Although there are platforms better suited for this question than SO, since it gets purely mathematical, understanding it is fundamental for using big-O in a Computer Science context as well, so I like the question. Understanding your particular example will likely shed some light on what big-O is in general, and provide some practical intuition. So I will try to explain:
We have a function f(n) = n^2. To show that it is not in O(1), we have to show that it grows faster than a function g(n) = c where c is some constant. In other words, that f(n) > g(n) for sufficiently large n.
What does sufficiently large mean? It means that for any constant c, we can find an N so that f(n) > g(n) for all n > N.
This is how we define asymptotic behaviour. Your constant function may be larger than f(n), but as n grows enough, you will eventually reach a point where f(n) remains larger forever. How to prove it?
Whatever constant c you choose - however large - we can construct our N. Let us say N = √c. Since c is a constant, so is √c.
Now, we have f(n) = n^2 > c = g(n) whenever n > √c.
Therefore, f(n) is not in O(1). □
The key here is that we found a constant N (that is, one which depends on our constant c but not on our variable n) such that this inequality holds. It can take some work to wrap one's head around, but doing a few other simple examples helps build the intuition.
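If it helps to make this concrete, here is a small numerical sketch in Python (the choice of c and the values of n are arbitrary, purely for illustration; this illustrates the proof, it does not replace it):

    import math

    c = 1_000_000          # pick any constant, however large
    N = math.sqrt(c)       # the threshold constructed in the proof above

    for n in (10, 100, 1_000, 10_000):
        # once n > N, f(n) = n^2 is above the constant c and stays there
        print(n, n**2, n > N, n**2 > c)

Beyond N = √c, every value of n gives n^2 > c, which is exactly the claim: no constant function can stay above n^2 forever.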
Like the question says, how exactly do we always find the c and n0 for a given bound?
For example, when I had to solve the problem...
Prove that 5n^2+2n+1 = O(n^2)
I was able to look at 2n and say "this can never be greater than 2n^2 for n ≥ 1", and for the 1 I was also able to say "this can never be greater than n^2 for n ≥ 1".
By taking this into consideration, I was able to pick C = 8 and n0 = 1.
However, when I'm given a problem such as..
Prove that n^3=O(2^n) using the basic definition of Big O notation.
I have absolutely no clue what to do since the only thing I have to work with is n^3. How do I identify C and n0 for these types of problems?
You need to find an argument in each specific case; there is no general algorithm for finding a proof.
In your example we can use the fact that n^3 is even in o(2^n), which clearly implies it is in O(2^n). To see the former, consider the limit of n^3 / 2^n as n -> infinity.
Using L'Hôpital's rule three times, you see the limit is 0.
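If you also want concrete constants for the Big O statement itself, a quick numerical check (a sanity check, not a proof; the choice C = 1 and n0 = 10 is just one possibility) suggests that n^3 <= 2^n for every n >= 10, and that the ratio n^3 / 2^n shrinks toward 0 just as the limit argument predicts:

    # Sanity check for n^3 <= 1 * 2^n with n0 = 10, and for the vanishing ratio.
    for n in range(10, 31, 5):
        print(n, n**3, 2**n, n**3 <= 2**n, n**3 / 2**n)

To turn that observation into an actual proof you would still use the limit argument above (or an induction on n).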
OK, I'm a bit new at this and my mathematical proof knowledge and practice is at a novice level; I'm only a few weeks into an algorithm design and analysis paper and I find the material challenging. I am trying to wrap my head around several lab questions, and I'm hoping that if someone can help me with this one I will build a little momentum and be able to answer the harder ones under my own steam.
I have a definition: Let f, g be functions. If there exist c, n0 > 0 such that, for all n > n0, f(n) ≤ c · g(n), then f(n) is O(g(n)).
Have to prove this using the definition.
For every function f : N→N, f(n) is O(f(n)).
Now, firstly, I'm confused by the fact that g(n) isn't in this question, but it is in the harder ones, so I know it isn't a typo. I'm thinking that it is the same function, so shouldn't it be big Theta? I am very confused. Also, how to present this as a proof is quite mysterious to me. Can I do this as a direct proof?
I'd much appreciate any help.
Perhaps you are puzzled by the fact that the statement you are trying to prove uses only one function, namely f, while the definition you quote involves two different functions. In the statement, both roles are simply played by f: take g to be f itself.
That being said, the statement you are trying to prove is that every function from N to N does not asymptotically grow faster than itself, which is not so surprising. For a formal proof, let f : N -> N be such a function.
Let c := 1 and n0 := 0; let n be an integer such that n > n0. Then we obtain
f(n) = 1 * f(n) = c * f(n), hence f(n) <= c * f(n)
which, by definition, means that f(n) is O(f(n)), which was the statement to be proved.
This is a direct proof which is carried out by explicit choice of c and n0 from the definition and showing that they satisfy the condition from the definition.
As this is apparently a homework question, I suppose it is given as an example to introduce the formal definition and how to work with it, and not so much because the statement itself is interesting.
(This question was closed as a duplicate of "What does O(log n) mean exactly?", which already has many answers.)
So I have been studying Big O notation (noob here), and most of it looks like an alien language to me. I understand the basics of logs, e.g. log base 2 of 16 is the power you have to raise 2 to in order to get 16. But for binary search, O(log N) makes no sense to me: what exactly is the value of log N, and what is the base here? I have searched the internet, but the problem is everyone explains this mathematically, which I can't follow since I am not good with math. Can someone explain this to me in basic English, not alien language like "exponential"? I do know how binary search works.
Second question: [I don't even know what the symbol in f = Ω(g) means.] Can someone explain to me in plain English what is required here? I don't want the answer, just what this means.
Question:
In each of the following situations, indicate whether f = O(g), or f = Ω(g), or both (in which case f = Θ(g)).
f(n) ........................ g(n)
(a) n - 100 .................. n - 200
(b) 100n + log n ............. n + (log n)^2
(c) log 2n ................... log 3n
Update: I just realized that I studied algorithms from MIT's videos. Here is the link to the first of those videos; keep going to the next lectures as far as you want.
Clearly, Log(n) has no value without fixing what n is and what base of log we are using. The purpose of mentioning log(n) so often is to help people understand the rate of growth of a particular algorithm or piece of code. It is only to help people see things in perspective. To build your perspective, see the comparison below:
1 < log n < n < n log n < n^2 < 2^n < n! < n^n
The line above says that after some value of n on the number line, the rate of growth of the above written functions is in the order mentioned there. This way, decision makers can decide which approach they want to take in solving their problem (and students can pass their Algorithm Design and Analysis exam).
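If you want to see that ordering for yourself, a small Python sketch (the two values of n are arbitrary; the exact crossover points between the functions vary) prints each function in the chain side by side:

    import math

    # 1, log n, n, n log n, n^2, 2^n, n!, n^n: each grows faster than the one before
    for n in (10, 20):
        print(n, 1, math.log2(n), n, n * math.log2(n), n**2, 2**n,
              math.factorial(n), n**n)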
Coming to your question: when books say "binary search's run time is log(n)", they essentially mean that if you have n elements, the running time of binary search will be proportional to log(n), and if you have 17n elements then you can expect the answer from your algorithm in a time duration proportional to log(17n). In this case, the base of the log function is 2, because in binary search we have at most 2 paths to pick from at every node.
Since a log function's base can be converted to any other base by multiplying by a constant, stating the base is irrelevant: in Big O notation, constants are ignored.
Coming to the answer to your second question, images will explain it the best.
Big O is only about the upper bound on a function. In the image below, f(n) = O(g(n)). In other words, there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k.
The importance of k is that after k, this Big O bound stays true no matter what value n takes. If we can't fix such a k, we cannot say that the growth rate will always stay below the function mentioned in O(...).
The importance of c is that it lets us ignore constant factors; it is the function inside O(...) that really matters.
Omega is simply the inversion of Big O. If f(n) = O(g(n)), then g(n) = Ω(f(n)). In other words, Ω() is about your function staying above what is mentioned in Ω(...) for a given value of another 'k' and another 'c'.
The pictorial visualization is the mirror image of the Big O picture: c·g(n) now stays below f(n).
Finally, Big Theta is about finding a mathematical function that grows at the same rate as your given function. But how do you prove that it grows at the same rate? By using two constant values.
Since it grows at the same rate as your given function, you should be able to find two constants 'c1' and 'c2' such that c1 * g(n) stays below your function f(n) and c2 * g(n) stays above it.
The idea behind Big Theta is to provide a function with the same rate of growth. Note that there may be no single constant 'c' for which c * g(n) overlaps f(n) exactly; nobody is concerned with that. The only concern is to be able to sandwich f(n) between two multiples of g(n), so that we can confidently say we have found the rate of growth of f(n).
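For a concrete, made-up example of this sandwich (the functions and constants below are just an illustration, not taken from the question): f(n) = 2n^2 + 3n is Θ(n^2), because with c1 = 2, c2 = 3 and n >= 3 we have c1 * n^2 <= f(n) <= c2 * n^2. A rough numerical check:

    # Hypothetical example of the Theta "sandwich": f(n) = 2n^2 + 3n, g(n) = n^2.
    f = lambda n: 2 * n**2 + 3 * n
    g = lambda n: n**2
    c1, c2 = 2, 3

    for n in (3, 10, 100, 1_000):
        print(n, c1 * g(n) <= f(n) <= c2 * g(n))   # True for every n >= 3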
How to apply the above learned ideas to your question?
Let's take each of them one by one. You can use an online tool to plot these functions and see first-hand how they behave as you move along the number line.
f(n) = n - 100 and g(n) = n - 200
Here, the rate of growth can be found by differentiating both functions with respect to n: d(f(n))/dn = d(g(n))/dn = 1. Therefore, even though the running times f(n) and g(n) may be different, their rate of growth is the same. Can you pick 'c1' and 'c2' such that c1 * g(n) < f(n) < c2 * g(n)?
f(n) = 100n + log(n) and g(n) = n + (log(n))^2
Differentiate and tell if you can relate the functions as Big O or Big Theta or Big Omega.
f(n) = log (2n) and g(n) = log (3n)
Same as above.
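If you don't have a plotting tool handy, a rough numerical sketch in Python (not a proof, and using the same reading of (c) as above, i.e. log(2n) versus log(3n)) shows the ratio f(n)/g(n) settling toward a positive constant in all three cases, which is the signature of Big Theta:

    import math

    pairs = {
        "(a)": (lambda n: n - 100,               lambda n: n - 200),
        "(b)": (lambda n: 100 * n + math.log(n), lambda n: n + math.log(n) ** 2),
        "(c)": (lambda n: math.log(2 * n),       lambda n: math.log(3 * n)),
    }

    for label, (f, g) in pairs.items():
        # a ratio that levels off at a constant > 0 suggests f = Theta(g)
        print(label, [round(f(n) / g(n), 3) for n in (10**3, 10**6, 10**9)])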
(The images are taken from different pages on this website: http://xlinux.nist.gov/dads/HTML/)
My experience: Try to compare the growth rate of a lot of different functions. Eventually you will get the hang of it for all of them and it will become very intuitive for you. Given concentrated effort for one week or two, this concept cannot remain esoteric for anyone.
First of all, let's go through the notations. I'm assuming from the questions that
O(f) is upper bound,
Ω(f) is lower bound, and
Θ(f) is both
For O(log(N)) in this case, generally the base isn't given because the general shape of log(N) is the same regardless of the base (see, e.g., the logarithm graphs at rapidtables.com).
So if you've worked through the binary search algorithm (I suggest you do this if you haven't), you should find that the worst case scenario (upper bound) is log_2(N). So given N terms, it will take "log_2(N) computations" in the worst case in order to find the term.
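If you'd like to see where the log_2(N) comes from without any more math, you can simply count the halving steps; here is a small Python sketch (the array contents and the missing target are arbitrary choices):

    import math

    def binary_search_steps(sorted_list, target):
        """Standard binary search; returns (found, number of halving steps)."""
        lo, hi, steps = 0, len(sorted_list) - 1, 0
        while lo <= hi:
            steps += 1
            mid = (lo + hi) // 2
            if sorted_list[mid] == target:
                return True, steps
            elif sorted_list[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return False, steps

    for size in (16, 1024, 1_000_000):
        _, steps = binary_search_steps(list(range(size)), -1)  # target not present
        print(size, steps, math.log2(size))  # steps stays close to log_2(size)

Each comparison throws away half of what is left, so the number of steps grows like log_2(N) rather than like N.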
For your second question,
You are simply comparing computational run-times of f and g.
f = O(g)
is when g is an (asymptotic) upper bound on f, i.e., f will not take longer to compute than g, up to a constant factor. Alternately,
f = Ω(g)
is when g is an (asymptotic) lower bound on f, i.e., f will take at least as long to compute as g, up to a constant factor. Lastly,
f = Θ(g)
is when g is both an upper and a lower bound on f, i.e., the run times grow at the same rate.
You need to compare the two functions for each part and determine which one asymptotically takes longer to compute. As Mitch mentioned, you can check here, where this question has already been answered.
Edit: accidentally linked e^x instead of log(x)
The reason the base of the log is never specified is because it is actually completely irrelevant. You can convince yourself of this in three steps:
First, recall that log_2(x) = log_10(x)/log_10(2). But also recall that log_10(2) is a constant, which we'll call k2, so really, log_2(x) * k2 = log_10(x)
Second, recall that this is not unique to logs of base 2. The constants of conversion vary, but all the log functions are related to each other through multiplicative constants.
(You can prove this to yourself if you understand the mathematics behind log functions, or you can just work it up very quickly on a spreadsheet-- have a column of log_2(x) and a column of log_3(x) and divide them.)
Finally, remember that in Big Oh notation, constants basically drop out as being irrelevant. Trying to draw a distinction between O(log_2(N)) and O(log_3(N)) is like trying to draw a distinction between O(N) and O(2N). It is a distinction that does not matter because log_2 and log_3 are related through a constant.
Honestly, the base of the log does not matter.
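The spreadsheet check mentioned above is just as quick in Python (a sanity check, not a proof): divide a column of log_2(x) by a column of log_3(x) and the ratio never changes.

    import math

    for x in (10, 100, 10_000, 10**8):
        print(x, math.log2(x) / math.log(x, 3))   # always the same number

    print(math.log(3) / math.log(2))              # namely log(3)/log(2), about 1.585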
I've been working on a problem for several hours now, and I need clarification:
I needed to simplify (as much as possible) the following big-O expressions. For each, I put down what I thought was the correct answer. I would like solutions, but I would appreciate an explanation as well if I am incorrect. I am trying to learn Big O notation as well as possible, and I think doing these problems helped a lot. I just want to make sure I'm on the right path.
a) O(sqrt(n) + log(n)*log(n))
I thought this was O(n)
b) O(3 log_2 n + 2 log_3 n)
I thought this was O(log_3(n))
c) O(n^3 + 2n^2 +3n + 4)
I thought this was O(n^3)
Thanks for all your help!
Let's go through this one at a time.
O(sqrt(n) + log(n)*log(n)). I thought this was O(n)
You are correct that this is O(n), but that's not a particularly tight bound. Let's start with a simplifying question: which grows faster, O(sqrt(n)) or O(log(n) * log(n))? Using that information, can you drop one of the two terms from the summation?
O(3 log_2 n + 2 log_3 n). I thought this was O(log_3(n))
Remember that "big-O ignores the base of logarithms" (that is, log_b n = O(log_c n) for any b and c that are greater than one). You're technically right that it's O(log_3 n), but that's not the cleanest solution. You'd be better off saying O(log n) here.
O(n^3 + 2n^2 +3n + 4). I thought this was O(n^3)
Exactly right! This works because 2n^2 + 3n + 4 is O(n^3), so you can drop those terms from the summation. Now, can you use a similar trick to simplify your answer to part (a)?
Hope this helps!
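If the comparison in part (a) isn't obvious, a rough numerical look (not a proof; natural log here, and a different base only shifts the crossover point, not the eventual winner) shows sqrt(n) pulling ahead of log(n)*log(n) once n is large enough:

    import math

    # which term of sqrt(n) + log(n)*log(n) dominates for large n?
    for n in (100, 10**4, 10**6, 10**9):
        print(n, math.sqrt(n), math.log(n) ** 2)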
OK, the answer is long, but I was pretty thorough.
Intro:
The 1st thing you need to do is to properly define what you mean by big O. Relevant read. Traditionally it's defined only as an upper bound, but that's not very useful in computer science, at least not for a task such as yours: you could technically answer with anything that grows at least as fast as the given expression, i.e., saying O(n!) for all the questions would technically be OK.
More useful is big Theta, and in CS I have usually seen big O used with the meaning of big Theta from the read above. The difference is that your bound must be tighter and must also apply from below.
Definitions/Rules: My favourite method to calculate Big O (and Theta) is using limits. It lets you combine and compare asymptotic growth relations in a simple and straightforward manner.
Basically (the limit x -> inf is implied here and hereafter):
1. lim f(x) / g(x) = infinity - f asymptotically grows faster than g
2. lim f(x) / g(x) is a constant > 0 - f asymptotically grows at the same rate as g
3. lim f(x) / g(x) = 0 - f asymptotically grows slower than g
Number 2 is big Theta. Numbers 2 and 3 combined are the traditional Big O, as in "f belongs to O(g)" (or "f is O(g)", which is somewhat confusing wording). It means that f will not outgrow g, so g is an upper bound for it.
Now, with a little math, it is pretty easy to prove that Big O (or Theta) cares only about the fastest-growing term. This comes straight from the properties of limits.
I will use O in the sense of big Theta from now on, because everything below holds for Big O too, as it is looser.
Explanation of examples:
Your 3rd example is the easiest. You can safely drop 2n^2 + 3n + 4 because n^3 grows faster. You can prove that n^3 + 2n^2 + 3n + 4 is O(n^3) by calculating lim n^3 / (n^3 + 2n^2 + 3n + 4).
The same goes for your 2nd example, but you need to go through logarithm properties. Basically:
log_b1(x) = c * log_b2(x) - it means you can switch the base of a logarithm at the expense of a constant factor... and by the rules above, a constant factor does not change anything; it's still case 2, just with a different constant.
Your 1st example is the hardest/trickiest, because the limit is the most complicated. However, O(f+g) is either O(f) or O(g): either one grows faster, so the other can be dropped, or they asymptotically grow the same, so either one can be chosen (their fastest-growing term will be the same anyway). This means you need to check which one grows faster. You do this by calculating lim sqrt(n)/(log(n)*log(n)) and choosing according to the rules above. I think this one needs L'Hôpital's rule.
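If you want to check these limits mechanically rather than by hand, sympy (assuming it is installed) will evaluate them; this is only a sanity check of the three rules above, not a substitute for the argument:

    import sympy as sp

    n = sp.symbols('n', positive=True)

    # rule 1 (limit = infinity): sqrt(n) outgrows log(n)*log(n), settling example 1
    print(sp.limit(sp.sqrt(n) / sp.log(n)**2, n, sp.oo))
    # rule 2 (limit = positive constant): example 2 grows like log n
    print(sp.limit((3*sp.log(n, 2) + 2*sp.log(n, 3)) / sp.log(n), n, sp.oo))
    # rule 2 again: example 3 grows like n^3
    print(sp.limit((n**3 + 2*n**2 + 3*n + 4) / n**3, n, sp.oo))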
(a) is the toughest one there I think; (b) and (c) use fairly common rules for Big-Oh simplification.
For (a), I suggest making a substitution: let m = [some function of n that makes one of the two terms simpler] and rearrange to get n = [something]. You can then substitute for n in the expression, thereby getting rid of all appearances of n, and simplify it according to Big-Oh rules. Then, provided that the function you picked is an increasing function of n, you can substitute n back in and simplify further if need be.