I need to derive the Big-O complexity of this expression:
c^n + n*(log(n))^2 + (10*n)^c
where c is a constant and n is a variable.
I'm pretty sure I understand how to derive the Big-O complexity of each term individually; I just don't know how the Big-O complexity changes when the terms are combined like this.
Ideas?
Any help would be great, thanks.
The answer depends on |c|:
If |c| <= 1, it's O(n*(log(n))^2).
If |c| > 1, it's O(c^n).
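To see why the split happens at |c| = 1, compare the terms directly:
If |c| <= 1, then |c^n| <= 1 for all n, and (10*n)^c = 10^c * n^c grows no faster than 10*n, so n*(log(n))^2 is the dominant term.
If |c| > 1, then |c^n| = |c|^n grows exponentially, while n*(log(n))^2 and (10*n)^c grow only polynomially (up to log factors), so c^n dominates.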
The O() notation keeps only the fastest-growing term; think about which one will dominate for very, very large values of n. In your case (assuming c > 1), the dominant term is c^n; the others are essentially polynomial. So it's exponential complexity.
Wikipedia is your friend:
In typical usage, the formal definition of O notation is not used directly; rather, the O notation for a function f(x) is derived by the following simplification rules:
If f(x) is a sum of several terms, the one with the largest growth rate is kept, and all others omitted.
If f(x) is a product of several factors, any constants (terms in the product that do not depend on x) are omitted.
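Applying those two rules to the expression from the question (taking c > 1, so the exponential term is the big one): by the product rule, (10*n)^c = 10^c * n^c and the constant factor 10^c is dropped, leaving n^c; by the sum rule, among c^n, n*(log(n))^2 and n^c the term with the largest growth rate is the exponential c^n, so the whole expression simplifies to O(c^n).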
If f(n) = 3n + 8,
we say (or prove) that f(n) = Ω(n).
Why do we not use Ω(1) or Ω(log n) or ... to describe the growth rate of our function?
In the context of studying the complexity of algorithms, the Ω asymptotic bound can serve at least two purposes:
check if there is any chance of finding an algorithm with an acceptable complexity;
check if we have found an optimal algorithm, i.e. such that its O bound matches the known Ω bound.
For these purposes, a tight bound is preferable (for the second one, mandatory).
Also note that f(n)=Ω(n) implies f(n)=Ω(log(n)), f(n)=Ω(1) and all lower growth rates, and we needn't repeat that.
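As a concrete illustration with the function from the question: 3n + 8 >= 3n for all n >= 1, so f(n) = Ω(n); and 3n + 8 <= 11n for all n >= 1, so f(n) = O(n) as well. The two bounds match, hence f(n) = Θ(n), which is the tight statement you would normally make.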
You can actually do that. Check the Big Omega notation here and let's take Ω(log n) as an example. We have:
f(n) = 3n + 8 = Ω(log n)
because:
limsup (n -> inf) |3n + 8| / |log(n)| = inf > 0
(according to the 1914 Hardy-Littlewood definition)
or:
liminf (n -> inf) (3n + 8) / log(n) = inf > 0
(according to the Knuth definition).
For the definition of liminf and limsup symbols (with pictures) please check here.
Perhaps what was really meant is Θ (Big Theta), that is, both O() and Ω() simultaneously.
I'm trying to wrap my head around the meaning of the Landau notation in the context of analysing an algorithm's complexity.
What exactly does the O formally mean in Big-O-Notation?
So the way I understand it is that O(g(x)) gives a set of functions which grow as rapidly as or slower than g(x), meaning, for example in the case of O(n^2):
where t(x) could be, for instance, x + 3 or x^2 + 5. Is my understanding correct?
Furthermore, are the following notations correct?
I saw the following written down by a tutor. What does this mean? How can you use 'less than or equal to' if the O-notation denotes a set?
Could I also write something like this?
So the way I understand it is that O(g(x)) gives a set of functions which grow as rapidly as or slower than g(x).
This explanation of Big-Oh notation is correct.
f(n) = n^2 + 5n - 2, f(n) is an element of O(n^2)
Yes, we can say that. O(n^2), in plain English, represents the "set of all functions that grow as rapidly as or slower than n^2". So f(n) satisfies that requirement.
O(n) is a subset of O(n^2), O(n^2) is a subset of O(2^n)
This notation is correct, and it comes from the definition. Any function that is in O(n) is also in O(n^2), since its growth rate is no faster than that of n^2. 2^n is an exponential time complexity, whereas n^2 is polynomial. You can take the limit of n^2 / 2^n as n goes to infinity and show that it is 0, which proves that O(n^2) is a subset of O(2^n), since 2^n grows faster.
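For instance, writing 2^n = e^(n*ln(2)) and applying L'Hôpital's rule twice:
lim (n -> inf) n^2 / 2^n = lim (n -> inf) 2n / (2^n * ln(2)) = lim (n -> inf) 2 / (2^n * (ln(2))^2) = 0
so n^2 grows strictly slower than 2^n, and anything bounded by a constant times n^2 is also bounded by a constant times 2^n.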
O(n) <= O(n^2) <= O(2^n)
This notation is tricky. As explained here, we don't have "less than or equal to" for sets. I think the tutor meant that the time complexity of the functions belonging to the set O(n) is less than (or equal to) the time complexity of the functions belonging to the set O(n^2). Anyway, this notation isn't standard, and it's best to avoid such ambiguities in textbooks.
O(g(x)) gives a set of functions which grow as rapidly as or slower than g(x)
That's technically right, but a bit imprecise. A better description adds the necessary qualifications:
O(g(x)) gives the set of functions which are asymptotically bounded above by g(x), up to constant factors.
This may seem like a nitpick, but one inference from the imprecise definition is wrong.
The 'fixed version' of your first equation, if you make the variables match up and have one limit sign, seems to be:
lim (n -> inf) f(n) / g(n) <= 1
This is incorrect: the ratio only has to be less than or equal to some fixed constant c > 0.
Here is the correct version:
limsup (n -> inf) |f(n)| / |g(n)| <= c
where c is some fixed positive real number that does not depend on n.
For example, f(n) = 3n^2 is in O(n^2): one constant c that works for this f is c = 4. Note that the requirement isn't 'for all c > 0', but rather 'for at least one constant c > 0'.
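Spelling that example out against the corrected definition: limsup (n -> inf) 3n^2 / n^2 = 3 <= 4, so c = 4 works (as would any c >= 3), while a smaller bound such as c = 1 does not; that is exactly why the requirement reads 'for at least one constant c > 0' rather than 'for all c > 0'.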
The rest of your remarks are accurate. The <= signs in that expression are an unusual usage, but it's true if <= means set inclusion. I wouldn't worry about that expression's meaning.
There are other, more subtle reasons to talk about 'boundedness' rather than growth rates. For instance, consider the cosine function. |cos(x)| is in O(1), but its derivative fluctuates between negative one and positive one even as x increases to infinity.
If you take 'growth rate' to mean something like the derivative, examples like this become tricky to talk about, but saying |cos(x)| is bounded by 2 is clear.
For an even better example, consider the logistic curve. The logistic function is O(1); however, its derivative, and hence its growth rate, is positive everywhere. It is strictly increasing, always growing, while the constant 1 has a growth rate of 0. This seems to conflict with the first definition unless you add lots of clarifying remarks about what 'grow' means.
An always-growing function in O(1) (see the logistic curve image at the Wikipedia link).
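To make that concrete: the standard logistic function is sigma(x) = 1 / (1 + e^(-x)). Its derivative sigma'(x) = sigma(x) * (1 - sigma(x)) is strictly positive, so the function is always growing, yet 0 < sigma(x) < 1 for every x, so |sigma(x)| <= 1 for all x and sigma is in O(1). It is boundedness, not the sign of the derivative, that O(1) captures.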
I saw in one of the videos (https://www.youtube.com/watch?v=A03oI0znAoc&t=470s) that if f(n) = 2n + 3, then the Big-O is O(n).
Now my question is: if I am a developer and I am given O(n) as the upper bound of f(n), how do I know what the exact value of the upper bound is? In 2n + 3, we drop the 2 (as it is a constant) and the 3 (because it is also a constant). So if I take my function at n = 1, I can't say that g(n) = n is an upper bound at n = 1.
g(1) = 1 cannot be an upper bound for f(1) = 5. I find this hard to understand.
I know this is a partial (and probably wrong) answer.
From Wikipedia,
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
In your example,
f(n) = 2n + 3 has the same growth rate as g(n) = n
If you plot the functions, you will see that both have the same linear growth; as n -> infinity, the constant factor and the +3 offset become irrelevant to the trend.
In Big O notation, the value of f(n) = 2n + 3 at n = 1 means nothing on its own; you need to look at the trend, not discrete values.
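What the O(n) statement does give you, once you pin down the hidden constants, is a concrete inequality: for example, 2n + 3 <= 5n for all n >= 1, so f(n) = O(n) with the hidden constant equal to 5 and the bound valid from n = 1 onward. The 5 and the "from n = 1" are exactly what the O(n) shorthand deliberately hides.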
As a developer, you will consider big-O as a first indication for deciding which algorithm to use. If you have an algorithm which is, say, O(n^2), you will try to find out whether there is another one which is, say, O(n). If the problem is inherently O(n^2), then the big-O notation will not provide further help and you will need to use other criteria for your decision. However, if the problem is not inherently O(n^2) but O(n), you should discard any algorithm that happens to be O(n^2) and find an O(n) one.
So the big-O notation will help you to better classify the problem and then try to solve it with an algorithm whose complexity has the same big-O. If you are lucky enough to find two or more algorithms with this complexity, then you will need to compare them using a different criterion.
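As a small, hypothetical illustration of that decision (a Python sketch, not taken from the question): the two functions below solve the same problem, checking a list for duplicates. The first does O(n^2) comparisons, the second runs in O(n) time on average, so the big-O comparison alone already tells you which one to prefer for large inputs.

def has_duplicates_quadratic(items):
    # Compare every pair: roughly n*(n-1)/2 comparisons, i.e. O(n^2).
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # One pass with a set: each membership test is O(1) on average, i.e. O(n) overall.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False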
I know that the big-o complexity of the following equation is O(n^3)
4n^3 + 6n + 6
because the n^3 is the dominant term.
Would the big-o complexity be the same for the same function, but with a negative coefficient?
-4n^3 + 6n + 6
Actually, if you have negative terms in a big-O computation, you can ignore them, because they only reduce the running time.
In this case, the complexity would be O(n).
I don't know what kind of algorithm something like that could correspond to, but to answer the general question, you could have something like a*n^2 - b*n, which would give O(n^2) complexity.
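A quick check of that last claim: for positive constants a and b and all n >= 1, a*n^2 - b*n <= a*n^2, so the expression is bounded above by a constant times n^2 and is therefore O(n^2); and since a*n^2 - b*n >= (a/2)*n^2 once n >= 2b/a, it is in fact Θ(n^2).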
Edit:
Found a funny related question, about time travelling in algorithm solving.
Formally, we analyse monotonically increasing functions.
This is implied by the formal definitions of asymptotic complexity.
Let's look at one of the definitions, on Wikipedia:
One writes
f(x) = O(g(x)) as x -> inf
if and only if there is a positive constant M such that for all
sufficiently large values of x, f(x) is at most M multiplied by g(x)
in absolute value. That is, f(x) = O(g(x)) if and only if there exists
a positive real number M and a real number x0 such that
|f(x)| <= M|g(x)| for all x>x0
As you can see, this definition works on absolute values.
In some other sources (like books about data structures and algorithms) you might find definitions without the absolute value, but with an assumption somewhere that the analysed functions are monotonically increasing (warning: sometimes these assumptions are hidden in the book's references or implied by properties of the analysed universe).
To sum it up: asymptotic analysis is designed to be used on monotonically increasing functions. Sometimes this is enforced by an assumption, sometimes by the absolute value in the equation.
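As an illustration of the absolute-value form, applied to the -4n^3 + 6n + 6 from the question: for all n >= 1, |-4n^3 + 6n + 6| <= 4n^3 + 6n + 6 <= 16n^3, so M = 16 and x0 = 1 satisfy the definition with g(n) = n^3. Under this formal definition the expression is O(n^3), even though its value is eventually negative.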
You may find other arguments, like the one in this other SO answer, but with the same conclusion.
Order notation question, big-o notation and the like:
What do the max and min of two functions mean in terms of order notation?
for example:
DEFINITION:
"Maximum" rules: Suppose that f(n) and g(n) are positive functions for all n > n0.
Then:
O[f(n) + g(n)] = O[max (f(n),g(n)) ]
etc...
I need to use these definitions to prove something for homework. Thanks for the help!
EDIT: f(n) and g(n) are supposed to represent running times of algorithms with respect to input size
With Big-O notation, you're talking about upper bounds on computation. This means that you are only interested in the largest term of the combined function as n (the variable) tends to infinity. What's more, you drop any constant multipliers as well since the formal definition of the notation allows you to discard those parts, which is important as it allows you to focus on the algorithm's behavior and not on the implementation of the algorithm.
So, we are combining two functions through summation. Well, there are two cases (strictly three, but it's symmetric):
One function is of higher order than the other. At that point, the higher order function dominates the lesser; you can pretend that the lesser doesn't exist.
Both functions are of the same order. Then the sum is just a weighted combination of two functions with the same growth rate (we've already thrown away the scaling factors), so you end up with that same order again, with only the discarded constant factor changing.
The net result looks very much like the max() function, even though it's really not (it's actually a generalization of max over the space of functions), so it's very convenient to use the notation.
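The reason the sum collapses to a max, spelled out for positive f and g: max(f(n), g(n)) <= f(n) + g(n) <= 2 * max(f(n), g(n)) for every n, and the factor 2 is one of the constants that O() discards, so O[f(n) + g(n)] and O[max(f(n), g(n))] describe the same bound.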
It is a regular max between natural numbers. f is a function from naturals to naturals [f: N -> N], and so is g.
Thus, f(n) is in N, and so max(f(n), g(n)) is just the standard max: f(n) > g(n) ? f(n) : g(n).
O[max(f(n), g(n))] means: whichever of f and g is more 'expensive' gives the upper bound.