I'm currently taking data structures and algorithms and have started the topic of Big-O notation. When finding the simplest Big-O expression, this is the process I have been using. For example, take the expression below:
3n + n^0.3 + 0.3*n^1.3
My process of obtaining the simplest Big-O expression is the following:
Drop/ignore constant coefficients (but not constants that appear as exponents or inside a logarithm, such as the exponent in n^constant). The expression above then becomes:
n + n^0.3 + n^1.3
Look for the dominant term (fastest growth rate) by plugging a low value and a high value into n. For example:
Plug in n = 2. The expression becomes:
(2) + (2)^0.3 + (2)^1.3
2 + 1.2311 + 2.4623
Plug in n = 1000. The expression becomes:
(1000) + (1000)^0.3 + (1000)^1.3
1000 + 7.9433 + 7943.2823
Looking at the growth rates at the low and high values, the dominant term (fastest growth rate) is n^1.3, so the simplest Big-O expression for the whole thing is O(n^1.3). (A quick numeric check of this is sketched below.)
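Here is a minimal Python sketch of the same idea (illustration only; plugging in values suggests the dominant term but does not prove it). It evaluates each term of 3n + n^0.3 + 0.3*n^1.3 at growing n and prints each term's share of the total:

# Evaluate each term at growing n and show its share of the sum.
terms = {
    "3n":        lambda n: 3 * n,
    "n^0.3":     lambda n: n ** 0.3,
    "0.3*n^1.3": lambda n: 0.3 * n ** 1.3,
}

for n in (10, 1000, 1000000):
    values = {name: f(n) for name, f in terms.items()}
    total = sum(values.values())
    print(f"n = {n}:")
    for name, v in values.items():
        print(f"  {name:>10} = {v:14.2f}  ({100 * v / total:5.1f}% of sum)")

As n grows, the 0.3*n^1.3 term's share approaches 100%, which is consistent with O(n^1.3).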
Is this the right approach when finding the simplest Big-O expression? Are there other considerations that I should take or am missing when doing this approach?
I'm learning algorithms and currently trying to understand big-O notation. One of the exercise questions looks like log(n) + 10^6*n^5000 + 3^n. The task is to simplify the expression using a Θ-expression. As I understand it, this asks for the Θ of the expression, which would look like this: log(n) + 10^6*n^5000 + 3^n = Θ(n^5000)?
Your reading of the task is right, but the result is wrong! It should be Θ(3^n), as 3^n is an exponential function and grows faster than any polynomial function such as n^5000. You can also see this by taking the limit of the given function divided by 3^n as n goes to infinity.
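To see where the exponential overtakes the polynomial, here is a small Python sketch (illustration only). Direct evaluation of 10^6*n^5000 and 3^n overflows, so it compares their logarithms instead:

import math

def log_poly(n):   # log(10^6 * n^5000) = 6*log(10) + 5000*log(n)
    return 6 * math.log(10) + 5000 * math.log(n)

def log_exp(n):    # log(3^n) = n*log(3)
    return n * math.log(3)

for n in (10000, 100000, 1000000):
    winner = "3^n" if log_exp(n) > log_poly(n) else "n^5000"
    print(f"n = {n}: {winner} dominates")

The polynomial can dominate for small n, but past a threshold the exponential wins and never loses again, which is all that Θ cares about.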
I'm currently taking an algorithm class, and we're covering Big O notations and such. Last time, we talked about how
O(n^2 + 3n + 5) = O(n^2)
And I was wondering, if the same rules apply to this:
O(n^2) + O(3n) + O(5) = O(n^2)
Also, do the following notations hold?
O(n^2) + n
or
O(n^2) + Θ (3n+5)
In the former, the trailing n is outside of the O, so I'm not sure what it should mean. And in the latter notation, I'm adding O and Θ.
At least for practical purposes, the Landau O(...) can be viewed as a function (hence the appeal of its notation). This function has properties for standard operations, for example:
O(f(x)) + O(g(x)) = O(f(x) + g(x))
O(f(x)) * O(g(x)) = O(f(x) * g(x))
O(k*f(x)) = O(f(x))
for well-defined functions f(x) and g(x), and some constant k.
Thus, for your examples,
Yes: O(n^2) + O(3n) + O(5) = O(n^2)
and:
O(n^2) + n = O(n^2) + O(n) = O(n^2),
O(n^2) + Θ(3n+5) = O(n^2) + O(3n+5) = O(n^2)
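For intuition, these rules can be checked against the definition: f is in O(g) if there exist c > 0 and n0 with f(n) <= c*g(n) for all n >= n0. A minimal Python sketch (illustration only) verifying the concrete witness pair c = 2, n0 = 5 for f(n) = n^2 + 3n + 5 over a finite range:

def f(n): return n * n + 3 * n + 5   # the full expression
def g(n): return n * n               # the claimed bound

c, n0 = 2, 5
assert all(f(n) <= c * g(n) for n in range(n0, 100000))
print("f(n) <= 2*n^2 for all tested n >= 5, consistent with f in O(n^2)")

A finite check like this is not a proof, of course; here the underlying inequality 3n + 5 <= n^2 can be verified directly for all n >= 5.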
The notation:
O(n^2) + O(3n) + O(5) = O(n^2)
as well as, for example:
f(n,m) = n^2 + m^3 + O(n+m)
is abusing the equality symbol, as it violates the axiom of equality. To be more formally correct, you would need to define O(g(x)) as a set-valued function, the value of which is all functions that do not grow faster than g(x), and use set membership notation to indicate that a specific function is a member of the set.
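Concretely, the standard set definition referred to here is:

O(g) = { f : there exist constants c > 0 and n0 such that |f(n)| <= c*|g(n)| for all n >= n0 }

so the formally correct reading of f(n) = O(g(n)) is the set membership f ∈ O(g). Note that the equality written the usual way is not symmetric: every function in O(n) is in O(n^2), but not vice versa.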
Addition and multiplication are not defined for Landau symbols (big O).
In complexity theory, the Landau symbols denote sets of functions. Therefore O(*) does not represent a single function but an entire set. The + operator is not defined for sets; however, the following conventions are commonly used when analyzing functions:
O(*) + g(n)
This usually represents a set of functions where g(n) is added to every function in O(*). The resulting set can be represented in big-O notation again.
O(*) + O(**)
This is similar. However, it behaves like a kind of Cartesian product: every function from O(**) is added to every function from O(*).
O(*) + Θ(*)
The same rules apply here. However, the result usually cannot be expressed as Θ(**), because O(*) only gives an upper bound. Expressing it as O(**) is still possible.
Also, the following notations hold:
O(n^2) + n = O(n^2)
and
O(n^2) + Θ(3n+5) = O(n^2) (the sum is also Ω(n), but it has no single Θ bound)
Hope it makes sense...
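To make the set view concrete, here is a toy Python model (illustration only; the sample functions are arbitrary stand-ins for members of each class). Each class is represented by a few member functions, and the sum of two classes is formed elementwise:

# A few members of O(n^2) and of Theta(3n+5), respectively.
reps_O_n2 = [lambda n: n * n, lambda n: 3 * n * n + 7]
reps_Th_n = [lambda n: 3 * n + 5, lambda n: n]

# Elementwise "class addition": every member of one set plus every member of the other.
pairwise_sums = [lambda n, f=f, g=g: f(n) + g(n)
                 for f in reps_O_n2 for g in reps_Th_n]

# Each sum is still bounded by c*n^2, i.e. the resulting set sits inside O(n^2).
c, n0 = 5, 10
assert all(h(n) <= c * n * n for h in pairwise_sums for n in range(n0, 10000))
print("all pairwise sums remain within O(n^2)")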
I understand how to calculate a function's complexity for the most part. The same goes for determining the order of growth for a mathematical function. [I probably don't understand it as much as I think I do, which is why I'm probably asking this.] For instance:
an^3 + bn^2 + cn + d could be written as O(n^3) in big-O notation, since for large enough n the values of the terms bn^2 + cn + d are insignificant in comparison to an^3 (the constant coefficients a, b, c and d are left out as well, as their contribution to the value becomes insignificant, too).
What I don't understand is, how does this work when the leading term is involved in some sort of division? For instance:
n^3/a + bn^2 or a/n^3 + bn^2
Let n=100, a=1000 and b=10 for the former formula, then we have
n^3/a = 100^3/1000 = 1000 and bn^2 = 10*100^2 = 100,000
or even more dramatic for the latter - in this case the leading term is not only growing slowly as above, but it's also shrinking, isn't it?:
a/n^3 = 1000/100^3 = 0.001 and bn^2 = 100,000 as above.
In both cases the second term contributes much more, so isn't it n^2 that actually determines the order of growth?
It gets even more complicated (for me, at least) when the leading term is followed by a subtraction (a/n^3 - bn^2) or when the second term is also a division (n^3/a + n^2/b) or when both are divisions but in mixed order (a/n^3 + n^2/b), etc.
The list seems endless, so my general question is, how to understand and handle formulas that involve division (and subtraction) in order to determine the order of growth for a given function?
A division is just a multiplication by the multiplicative inverse, so n^3/a == n^3 * a^-1, and you can handle it the same way as any other constant coefficient.
With regard to subtraction: a*n^3 - b*n^2 <= a*n^3, so it is in O(n^3). Also, a*n^3 - b*n^2 >= (a/2)*n^3 for large enough values of n, so it is also in Omega(n^3). A more detailed explanation about subtraction can be found in: Algorithm complexity when faced with substraction in value
Big-O notation is generally used for increasing (though not necessarily monotonically increasing) functions, and a decreasing function such as a/n is not a good fit for it, though O(1/n) still seems perfectly well defined, AFAIK, and it is a subset of O(1) (unless you take into account only discrete functions). However, this has very little value for algorithm analysis, as the complexity of an algorithm cannot really shrink.
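A quick numeric check of the subtraction bound above (a sketch with arbitrary sample constants, not a proof): once n >= 2b/a, the subtracted term can no longer pull the expression below (a/2)*n^3.

a, b = 2.0, 10.0            # arbitrary positive constants for illustration
n0 = int(2 * b / a)         # threshold: a*n^3 - b*n^2 >= (a/2)*n^3 iff n >= 2b/a
for n in range(n0, n0 + 1000):
    assert a * n**3 - b * n**2 >= (a / 2) * n**3
print(f"bound holds for all tested n >= {n0}")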
There's a very simple rule for the type of questions you posted.
Suppose you're trying to find the order of growth of f(n), and you find some simple function g(n) such that
lim {n -> inf} f(n) / g(n) = k
where k is a positive finite constant. Then
f(n) = Theta(g(n))
(It's easy to see this from the calculus definitions.)
Now let's see how this applies to your examples:
lim {n -> inf} (a/n^3 + bn^2) / n^2 = b
so it's Theta(n^2).
lim {n -> inf} (an^3 - bn^2) / n^3 = a
so it's Theta(n^3).
(of course, assuming a and b are positive.)
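A numeric illustration of the limit rule (a sketch, not a proof), using the values from the question, a = 1000 and b = 10:

a, b = 1000.0, 10.0

def f(n): return a / n**3 + b * n**2   # the function to classify
def g(n): return n**2                  # the guessed growth rate

for n in (10, 1000, 100000):
    print(n, f(n) / g(n))              # the ratio tends to b = 10, so f is Theta(n^2)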
If I have an algorithm with the running time T(n) = 5n^4/100000 + n^3/100, I know that I get Θ(n^4).
Now, if I have something like T(n) = (10n^2 + 20n^4 + 100n^3)/(n^4), does this yield Θ(n^3)?
I am trying to eliminate low-order terms to use the Substitution method to prove this.
Big-Theta means that the growth is both big-O and big-Omega.
So the first case in your question is Θ(n^4), not Θ(n^3), since 5n^4/100000 + n^3/100 belongs to O(n^4) and not to O(n^3).
Second case: (10n^2 + 20n^4 + 100n^3)/n^4 = 10/n^2 + 20 + 100/n.
Thus, it's Θ(1), because the result is O(1) and Ω(1): all terms except the constant 20 tend to zero as n grows.
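A quick numeric sketch of the second case (illustration only): the simplified expression settles at the constant 20.

def T(n):
    return (10 * n**2 + 20 * n**4 + 100 * n**3) / n**4

for n in (10, 1000, 1000000):
    print(n, T(n))   # 30.1, then ~20.1, then ~20.0001: approaches the constant 20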
Somewhat similar to the Fibonacci sequence.
The running time of an algorithm is given by:
T(n) = T(n-1) + T(n-2) + T(n-3) if n > 3
T(n) = n otherwise
What is the order of this algorithm?
If calculated by the induction method, then:
T(n) = T(n-1) + T(n-2) + T(n-3)
Let us assume T(n) to be some function aⁿ
then aⁿ = aⁿ⁻¹ + aⁿ⁻² + aⁿ⁻³
=> a³ = a² + a + 1
which also gives complex solutions; the roots of the above equation, according to my calculations, are
a = 1.839286755
a = -0.419643 - 0.606291i
a = -0.419643 + 0.606291i
Now, how can I proceed further or is there any other method for this?
If I remember correctly, once you have determined the roots of the characteristic equation, T(n) can be written as a linear combination of the powers of those roots:
T(n) = A1*root1^n + A2*root2^n + A3*root3^n
So I guess the maximum complexity here will be
(maxroot)^n, where maxroot is the largest absolute value among your roots. For your case it is ~1.839^n.
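An empirical check of this (a sketch, not a proof) in Python: compute T(n) directly from the recurrence and watch the ratio of consecutive values approach the dominant root ~1.8393.

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Base case from the question: T(n) = n for n <= 3.
    return n if n <= 3 else T(n - 1) + T(n - 2) + T(n - 3)

for n in (10, 20, 30, 40):
    print(n, T(n) / T(n - 1))   # converges to ~1.8393, the real root above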
Asymptotic analysis is done on the running time of a program and tells us how the running time will grow with the input.
For Recurrence relations (like the one you mentioned), we use a two step process:
Estimate the running time using the recursion tree method.
Validate (confirm) the estimate using the substitution method.
You can find an explanation of these methods in any algorithms text (e.g., Cormen).
It can be approximated from above as 3 + 9 + 27 + ... + 3^n, since each level of the recursion at most triples the work, which gives O(3^n).