Big O notation with non-integer power - big-o

So I'm preparing for an exam, and this is from last year's exam. I wonder why this Big O notation statement is considered true.
n^4 + 10000n^4.5 = O(0.0001 * n^5)
If the power of n is not an integer, do we need to round it up?
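No rounding is needed: constants are irrelevant for Big O, and n^4.5 is eventually dominated by any constant multiple of n^5. A quick numeric sketch of the ratio f(n)/g(n) for the exact functions from the question:

```python
# f(n) = n^4 + 10000*n^4.5 and g(n) = 0.0001*n^5 from the question.
# f(n)/g(n) = 10^4/n + 10^8/sqrt(n), which tends to 0 as n grows,
# so f(n) <= c*g(n) holds for any constant c once n is large enough.

def f(n):
    return n**4 + 10000 * n**4.5

def g(n):
    return 0.0001 * n**5

for n in [10**12, 10**16, 10**20, 10**24]:
    print(n, f(n) / g(n))
# The ratio keeps shrinking, which is exactly what f = O(g) requires.
```

The exponent 4.5 never needs rounding; the comparison of growth rates works for any real exponents.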

Related

Does the constant in calculating time complexity have to be an integer?

Suppose f(n) = O(g(n)); then we say that
0 <= f(n) <= c*g(n).
My question is: does this 'c' have to be an integer? Not only for big-O notation, but for all other notations like Omega and Theta notation?
I want to solve the question n! = ω(2^n) based on that, since the expression
comes out to be n! >= c1*2^n; now I have to calculate c1 and n0 such that it holds for all n >= n0.
No, it does not have to be an integer. It can be any positive real number, including a fraction.
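To make the answer concrete, here is a small sketch with hypothetical functions showing a fractional constant doing the job (the choice f(n) = n/2 + 7 and c = 0.75 is mine, not from the question):

```python
# The constant c in 0 <= f(n) <= c*g(n) may be any positive real,
# including a fraction.  Example: f(n) = n/2 + 7 is O(n) with the
# fractional constant c = 0.75, taking n0 = 28
# (since n/2 + 7 <= 0.75*n  <=>  n >= 28).

def f(n):
    return n / 2 + 7

c, n0 = 0.75, 28
assert all(f(n) <= c * n for n in range(n0, 10000))
print("c =", c, "works for every n >=", n0)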

What is the relation between the mathematical approximation of Big O notation and the practical one in programming

I have a question about asymptotic complexity, specifically Big O notation.
Mathematically speaking: we have a function T(n) whose output is the amount of time taken by our algorithm,
for example T(n) = 1000 + 10n.
We choose a simple function F(n) = n, and for some large natural number N and constant C, T(n) <= C*F(n) for all n >= N, which is equivalent to T(n) belonging to O(F(n)) (the set of functions dominated by C*F(n)).
My specific question: I didn't get the point of upper bounds, where from a specific input N onward, all T(n) points are bounded above by C*F(n).
What is the relation between that mathematical approximation and worst/best-case complexity in computer science and Big O notation?
After reading more about the topic:
This mathematical definition says that our function T(n) (whose output is the number of instructions or steps our algorithm spends to do some specific job) is bounded above by C*F(n), where C is some specific constant > 0. So the upper bound refers to the worst-case complexity our algorithm can reach (and it can reach it, because mathematically T(n) and C*F(n) intersect at some specific large N > 0, beyond which C*F(n) stays above T(n)).
In summary, it is a way to prove that the worst-case complexity is an upper bound that our algorithm can reach.
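For the exact T(n) from the question, one concrete witness pair is C = 11 and N = 1000; a minimal check:

```python
# T(n) = 1000 + 10n from the question.  With C = 11 and N = 1000,
# T(n) <= 11*n holds for every n >= 1000, since
# 1000 + 10n <= 11n  <=>  n >= 1000.  This is exactly the "from input
# N onward, T(n) is bounded above by C*F(n)" picture.

def T(n):
    return 1000 + 10 * n

C, N = 11, 1000
assert all(T(n) <= C * n for n in range(N, 100000))
print("T(n) = O(n), witnessed by C =", C, "and N =", N)
```

Below N = 1000 the bound fails (e.g. T(999) = 10990 > 11*999 = 10989), which is why the definition only asks for the inequality past some threshold.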

Confused on Little O meaning

So what I took from the little-o page is that when we apply small-o notation we have to check whether one rate is faster than the other (small o focuses on the upper bound)?
In this case, when we apply small o:
2^n = o(3^n) will be false, as the upper bounds of 2^n and 3^n are equal in speed but not less than;
2n = o(n^2) is true, as the upper bound of n^2 is 2 and 2n does not have an upper bound.
Am I on the right track?
2^n is in o(3^n) (little o), since:
lim_{n->infinity} (2^n / 3^n) = 0
Similarly, for 2n, it is easy to show that it is in o(n^2).
An intuitive way to think of "little o" is: it's an upper bound, but not a tight one. That is, a function f(n) is in o(g(n)) if f(n) is in O(g(n)) but not in Omega(g(n)).
In your example, 2^n is in O(3^n) but not in Omega(3^n), so we can say it is in o(3^n).
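The limit test above is easy to see numerically; a small sketch for both examples from the question:

```python
# Little o via the limit test: f(n) = o(g(n)) iff f(n)/g(n) -> 0.
# For 2^n vs 3^n the ratio is (2/3)^n -> 0; for 2n vs n^2 it is 2/n -> 0.

for n in [10, 50, 100]:
    print(n, 2**n / 3**n, (2 * n) / n**2)
# Both ratios shrink toward 0, so 2^n = o(3^n) and 2n = o(n^2).
```

If the limit were a nonzero constant instead (e.g. 2n vs n, ratio 2), the functions would be Theta of each other and little o would fail.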
The only difference between big O and small o is that big O allows the function to grow at an equal pace, whereas small o states that g(x) has a strictly higher rate of growth and the two can never be equal after a specific point x' (considering f(x) = o(g(x))).
Your first example is wrong, since small o states:
for f(x) = o(g(x)),
|f(x)| < C*|g(x)| for every constant C > 0 and all x > x'.
In the case f(x) = 2^x and g(x) = 3^x, such an x' exists for every C,
because g(x) has the higher rate of growth, so 2^x = o(3^x) is in fact true.
The best way to define small o, if you understand big O, is:
a function is small o if it is big O but not big Omega
-- this is because big Omega and big O only intersect when the rates of growth of both functions are equal, so if we remove that specific case, it is small o.
However, please remember that if f(x) is o(g(x)) it is also O(g(x)); the converse does not hold.

Big O, Theta, and big Omega notation

Based on my understanding, big O is essentially similar to theta notation but can include anything bigger than the given function (e.g. n^3 = O(n^4), n^3 = O(n^5), etc.), and big Omega includes anything smaller than the given function (n^3 = Ω(n^2), etc.).
However, my professor said the other day that n^0.79 = Ω(n^0.8), while he was doing an exercise that involved the master theorem.
Why/how is this true when n^0.8 is larger than n^0.79?
You have big O and big Omega backwards. Big O is everything the "same" or smaller than the function.
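The relative growth of the two powers from the question can be checked directly; a small sketch (using only the standard definitions):

```python
# n^0.79 / n^0.8 = n^(-0.01), which tends to 0 as n grows,
# i.e. n^0.79 grows strictly slower than n^0.8.  Under the standard
# definitions that makes n^0.79 = O(n^0.8); the ratio eventually drops
# below any fixed positive constant, so no Omega-style lower bound
# with n^0.8 can hold.

for n in [10**2, 10**100, 10**300]:
    print(n, n**0.79 / n**0.8)
```

The exponent gap is tiny (0.01), so the ratio shrinks very slowly; that is why enormous values of n are needed before the difference is visible.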

Order functions of algorithms

Can someone help me understand this question? I may have it on my exam tomorrow, but I can't find a similar question on the internet or in my lectures.
First you need to express each function as a Theta(something).
For instance, for the first one: Theta((1-n)(n^3-17)) = Theta(n^4 + ...) = Theta(n^4).
For the second one: Theta(30+log(n^9)) = Theta(30 + 9logn) = Theta(logn).
These are sorted as g1, g2, because n^4 = Omega(logn).
And so on.
For the sorting: saying that g1 = Omega(g2) means that g1 grows at least as fast as g2; that is, we are defining a lower bound. So, sort them from the worst (the slowest algorithm, i.e. the fastest growth) to the best. (NB: it is strange that the exercise wants "the first to be the most preferable", but the definition of Omega leaves no doubt.)
Btw: if you want to be more formal, here is the definition of the Omega notation:
f = Omega(g) iff exist c and n0 > 0 such that forall n >= n0 we have 0 <= c*g(n) <= f(n) (in words: f grows at least as fast as g).
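The Theta classes claimed above can be sanity-checked numerically; a small sketch, assuming the first two functions are (1-n)(n^3-17) and 30+log(n^9) as in the worked examples:

```python
import math

# Checking the Theta classes claimed above:
# |(1-n)(n^3-17)| / n^4 -> 1, so the first function is Theta(n^4);
# (30 + log(n^9)) / log(n) = 30/log(n) + 9 -> 9, so the second
# function is Theta(log n).

for n in [10**3, 10**6, 10**9]:
    g1 = abs((1 - n) * (n**3 - 17))
    g2 = 30 + math.log(n**9)
    print(n, g1 / n**4, g2 / math.log(n))
# The first ratio approaches 1, the second approaches the constant 9;
# a ratio tending to a positive constant is exactly what Theta means.
```

Since n^4 = Omega(log n), this confirms the ordering g1, g2 stated above.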
First, you have to calculate the Theta notations by determining the growth class of each function, e.g. 1, log(n), n, n log(n) and so on. To do that, you of course have to expand those functions.
Having the growth class of each function, you have to order them by their goodness.
Last, you have to put these functions into relations, like g1 = Omega(g2). Just keep in mind that a function t(n) is said to be in Omega(g(n)) if t(n) is bounded below by some multiple of g(n), e.g. n³ >= n² and therefore n³ is an element of Omega(n²). This can also be written as n³ = Omega(n²).
For Theta, this answer and that one summarize what is to be found in your problem. Which function g can you find such that (say f is one of your 8 functions above):
multiplied by a constant, it bounds f asymptotically from above (written f = O(g(n)));
multiplied by (usually) another constant, it bounds f asymptotically from below (written f = Omega(g(n))).
For instance, for iv) 10^5·n, Θ(n) fits, as you can easily find two constants where k1·n bounds 10^5·n from below and k2·n bounds it from above, asymptotically. (Here f is both O(n) and Omega(n); iv) is an easy one.)
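The two-constant sandwich for the iv) example can be spelled out directly; the particular choices k1 = 10^4 and k2 = 10^6 below are mine (many others work):

```python
# Theta(n) for f(n) = 10**5 * n via the sandwich definition:
# find k1, k2 with k1*n <= f(n) <= k2*n for all n >= n0.
# Here k1 = 10**4 and k2 = 10**6 work for every n >= 1.

def f(n):
    return 10**5 * n

k1, k2 = 10**4, 10**6
assert all(k1 * n <= f(n) <= k2 * n for n in range(1, 10000))
print("f is Theta(n): sandwiched between k1*n and k2*n")
```

Since f is a constant multiple of n, any k1 <= 10^5 <= k2 works, which is what makes this the easy case.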
You need to understand that big O, big Omega, and big Theta all apply to the worst/best/average case.
For some function f(n):
Big O -> O(..) is the upper limit the function never exceeds, i.e. for large values.
Big Omega -> is the lower bound; the function never goes below it.
Big Theta is like: there are two constants c1, c2 such that:
c1 * g(n) <= f(n) <= c2 * g(n)
So, going to your sample:
i) it's of order n^4 for both big Omega and big O, hence big Theta as well.
viii) it's constant, so big O and big Omega are the same, and thus big Theta is the same.
