I'm looking for something like a chain rule for orders of magnitude. Suppose:
y = O(x)
z = O(y)
Then:
z = O(x)
But we can go more generic than this. If p is a polynomial:
y = O(x)
z = O(p(y))
Then:
z = O(p(x))
None of this seems like it would be hard to prove. But can we generalise this any further?
The proof is straightforward. Suppose p(y) = a_k y^k + ... + a_1 y + a_0 with non-negative coefficients. As y = O(x), there is a constant c such that y <= c*x for all sufficiently large x. Hence p(y) <= a_k c^k x^k + ... + a_1 c x + a_0 = f(x). If c <= 1, then f(x) <= p(x), so p(y) <= p(x). If c > 1, then f(x) <= c^k p(x), so p(y) <= c' p(x) with c' = c^k. Therefore, p(y) = O(p(x)).
Finally, since z = O(p(y)) and p(y) = O(p(x)), transitivity gives z = O(p(x)).
For a more rigorous proof, you can use mathematical induction on the degree of the polynomial p(x).
To generalize, we should look for functions f with the property that f(y) <= c'*f(x) whenever y <= c*x. One large class consists of the increasing functions f with f(cx) = \Theta(f(x)); for all of these, the transitivity above is preserved. For example, f(x) = sqrt(x) satisfies the constraint, but f(x) = 2^x does not, since 2^(cx) is not O(2^x) when c > 1.
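As a quick numeric illustration of that last claim (a sketch only; the constant c = 3 and the sampling range are example assumptions, not part of the question): the ratio f(cx)/f(x) stays bounded when f(x) = sqrt(x) but grows without bound when f(x) = 2^x.

    #include <math.h>
    #include <stdio.h>

    /* Compare the ratio f(c*x)/f(x) for f = sqrt (bounded by sqrt(c))
       and for f = 2^x (unbounded), using the example constant c = 3.
       A bounded ratio is what lets the O(.) relation pass through f. */
    int main(void) {
        double c = 3.0;
        for (double x = 1.0; x <= 64.0; x *= 2.0) {
            double sqrt_ratio = sqrt(c * x) / sqrt(x);          /* always sqrt(3) */
            double exp_ratio  = pow(2.0, c * x) / pow(2.0, x);  /* = 2^(2x), blows up */
            printf("x = %6.1f  sqrt ratio = %8.4f  2^x ratio = %e\n",
                   x, sqrt_ratio, exp_ratio);
        }
        return 0;
    }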
My Algorithms textbook has the following excerpt:
I am struggling to understand their proof that there exists a tight bound IF the limit as n goes to infinity of the ratio of two functions is a constant.
Specifically, where do they get 0.5c and 2c from?
My thoughts: A tight bound means that a function T(n) is bounded above by f(n) and below by g(n). Now let's say T(n) = n^2, f(n) = a*n^2, and g(n) = b*n^2. Then we know the tight bound of T(n) is Theta(n^2) since the ratio of f(n) and g(n) is a constant, a/b.
The formal definition of the statement "lim w(x) = c as x -> infinity" is the following:
For all epsilon > 0, there exists some N such that for all x > N, |w(x) - c| < epsilon.
Now we are given that lim f(x) / g(x) = c as x -> infinity, and that c > 0. Then c / 2 > 0.
Consider epsilon = c / 2. Then epsilon > 0, so there exists some N such that for all x > N, we have |f(x) / g(x) - c| < epsilon = c / 2. This is equivalent to saying -c/2 < f(x) / g(x) - c < c / 2, which is in turn equivalent to saying c/2 < f(x) / g(x) < 3c / 2.
Now since for all x > N, we have c/2 < f(x) / g(x), then (since we always assume that f and g are positive valued) we can conclude that for all x > N, f(x) > g(x) c/2. Thus, we have shown that f(x) = Omega(g(x)).
And similarly, since for all x > N, we have f(x) / g(x) < 3/2 c, we see that for all x > N, f(x) < g(x) (3/2 c). Then we have shown that f(x) = O(g(x)).
Thus, we see that f(x) = Theta(g(x)), as required.
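To make the c/2 and 3c/2 window concrete, here is a small numeric sketch with an assumed example pair f(x) = 3x^2 + 5x and g(x) = x^2 (so the limit of the ratio is c = 3); for large x the ratio lands strictly between c/2 and 3c/2, which is exactly the window the proof uses.

    #include <stdio.h>

    /* Example pair (an assumption for illustration): f(x) = 3x^2 + 5x,
       g(x) = x^2, so f(x)/g(x) -> c = 3. Check that the ratio stays
       inside the window (c/2, 3c/2) used in the Theta argument above. */
    double f(double x) { return 3.0 * x * x + 5.0 * x; }
    double g(double x) { return x * x; }

    int main(void) {
        double c = 3.0;
        for (double x = 10.0; x <= 1e6; x *= 10.0) {
            double ratio = f(x) / g(x);
            printf("x = %10.0f  f/g = %.6f  in (c/2, 3c/2)? %s\n",
                   x, ratio,
                   (ratio > c / 2.0 && ratio < 3.0 * c / 2.0) ? "yes" : "no");
        }
        return 0;
    }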
We define big-O notation as follows: f(x) = O(g(x)) if there exist positive constants M and x0 such that f(x) <= M g(x) for all x > x0.
I am now defining a new version of big-O notation: f(x) = O'(g(x)) if there exist positive constants M' and x0' such that f(x) <= M' g(x) for all x >= x0' (the difference being the non-strict inequality >=).
Are these two definitions equivalent? In other words, if f(x) = O(g(x)), must it be the case that f(x) = O'(g(x)), and vice versa? A proof of the equivalence would be appreciated.
Yes, both definitions are equivalent. Here is why:
If f(x) <= Mg(x) for all x > x0, then the same would hold for x >= x0 + 1 (so you can take x0 + 1 as the value of x0' in the definition of O'). Conversely, if the inequality holds for all x >= x0', it will remain true for all x > x0' (and you can take x0 = x0' in the definition of O).
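Spelled out in symbols (just a restatement of the two directions above, with the witnesses made explicit):

    \[
    f(x) \le M\,g(x)\ \ \forall x > x_0
      \;\Longrightarrow\;
    f(x) \le M\,g(x)\ \ \forall x \ge x_0 + 1
      \qquad (\text{take } M' = M,\ x_0' = x_0 + 1)
    \]
    \[
    f(x) \le M'\,g(x)\ \ \forall x \ge x_0'
      \;\Longrightarrow\;
    f(x) \le M'\,g(x)\ \ \forall x > x_0'
      \qquad (\text{take } M = M',\ x_0 = x_0')
    \]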
I am studying big O notation from this book.
The definition of big O notation is:
We say that f(x) is O(g(x)) if there are constants C and k such that |f(x)| ≤ C|g(x)| whenever x > k.
Now here is the first example:
EXAMPLE 1 Show that f(x) = x^2 + 2x + 1 is O(x^2).
Solution: We observe that we can readily estimate the size of f(x) when x > 1 because x < x^2 and 1 < x^2 when x > 1. It follows that
0 ≤ x^2 + 2x + 1 ≤ x^2 + 2x^2 + x^2 = 4x^2
whenever x > 1. Consequently, we can take C = 4 and k = 1 as witnesses to show that f(x) is O(x^2). That is, f(x) = x^2 + 2x + 1 < 4x^2 whenever x > 1. (Note that it is not necessary to use absolute values here because all functions in these equalities are positive when x is positive.)
I honestly don't know how they got C = 4; it looks like they jump straight to the equation manipulation, and my algebra skills are pretty weak. However, I found another way through the accepted answer to the question "What is an easy way for finding C and N when proving the Big-Oh of an Algorithm?", which says to add all the coefficients to find C when k = 1. So for x^2 + 2x + 1, C = 1 + 2 + 1 = 4.
Now for k = 2, I'm completely lost:
Alternatively, we can estimate the size of f(x) when x > 2. When x > 2, we have 2x ≤ x^2 and 1 ≤ x^2. Consequently, if x > 2, we have
0 ≤ x^2 + 2x + 1 ≤ x^2 + x^2 + x^2 = 3x^2.
It follows that C = 3 and k = 2 are also witnesses to the relation f(x) is O(x^2).
Can anyone explain what is happening? What method are they using?
First alternative:
C=4?
The C=4 comes from the inequalities
0 ≤ x^2 + 2x + 1 ≤ x^2 + 2x^2 + x^2 = 4x^2 = C*x^2, with C=4 (+)
The second inequality in (+) is true for all x greater than 1, since, term by term
2x < 2x^2, given x>1
1 < x^2, given x>1
k = 1?
From above, we've shown that (+) holds as long as x is larger than 1, i.e.
(+) is true given x > k, with k=1
Second alternative:
k=2?
By the statement, we want to study f(x) for x larger than 2, i.e.
Study f(x) for x > k, k=2
Given x > 2, it's apparent that
0 ≤ x^2 + 2x + 1 ≤ x^2 + x^2 + x^2 = 3x^2 = C*x^2, with C=3 (++)
since, for x>2, we have
2x = x^2 given x=2 ==> 2x < x^2 given x>2
for x=2, 1 < x^2 = 4, so 1 < x^2 for all x>2
Both examples show that f(x) is O(x^2). Using your constants C and k, recall that big-O notation for f(x) can be summarized as something along the lines of:
... we can say that f(x) is O(g(x)) if we can find a constant C such
that |f(x)| is less than C|g(x)| for all x larger than k, i.e., for all
x>k. (*)
This by no means implies that we need to find a unique pair (C, k) to prove that some f(x) is O(g(x)), just some pair (C, k) such that (*) above holds.
See e.g. the following link for some reference on how to specify the asymptotic behaviour of a function:
https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-o-notation
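If it helps to see the witnesses in action, here is a small sanity check (not a proof; the sampling range and step are arbitrary choices) that both pairs (C, k) = (4, 1) and (C, k) = (3, 2) work for f(x) = x^2 + 2x + 1 and g(x) = x^2.

    #include <stdio.h>

    /* Sample f(x) = x^2 + 2x + 1 and report any point where one of the
       two witness pairs from the example would fail. */
    double f(double x) { return x * x + 2.0 * x + 1.0; }
    double g(double x) { return x * x; }

    int main(void) {
        for (double x = 1.5; x <= 100.0; x += 0.5) {
            if (x > 1.0 && f(x) > 4.0 * g(x))
                printf("witness (C=4, k=1) fails at x = %.2f\n", x);
            if (x > 2.0 && f(x) > 3.0 * g(x))
                printf("witness (C=3, k=2) fails at x = %.2f\n", x);
        }
        printf("no counterexamples found on the sampled range\n");
        return 0;
    }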
I have the following function, and I need to prove that its time complexity is at most O(x log x):
f(x) = x log x + 3 log x^2
I need some help to solve this.
Given
f(x) = x log x + 3 log x^2
     = x log x + 6 log x    // since log(a^b) = b log a
As we know, f(x) = O(g(x)) if |f(x)| <= M |g(x)| for all x beyond some threshold, where M is a positive real number.
Take M = 7. Then for all x >= 1,
M x log x = 7 x log x >= x log x + 6 log x = (x + 6) log x,
since x log x >= log x when x >= 1. Therefore
f(x) = x log x + 3 log x^2
     = O(x log x), with witnesses M = 7 and x0 = 1.
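A quick numeric spot check of the bound just derived (not a proof; natural log is used here, but the base does not affect the O(.) claim):

    #include <math.h>
    #include <stdio.h>

    /* Check f(x) = x*log(x) + 3*log(x^2) against the bound 7*x*log(x)
       derived above, for x >= 1. */
    int main(void) {
        for (double x = 1.0; x <= 1e6; x *= 2.0) {
            double fx    = x * log(x) + 3.0 * log(x * x);
            double bound = 7.0 * x * log(x);
            printf("x = %10.0f  f(x) = %14.2f  bound = %14.2f  ok = %s\n",
                   x, fx, bound, fx <= bound ? "yes" : "no");
        }
        return 0;
    }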
I can clearly see that N^2 is bounded by c*2^N, but how do I prove it using the formal definition of big-O? I can easily prove it by mathematical induction.
Here is my attempt:
By the definition, there exist a constant C and an n0 such that for all n > n0,
f(n) <= C g(n)
where
f(n) = n^2
and
g(n) = 2^n
Should I take the log of both sides and solve for C?
And one more question, about the Fibonacci sequence: I want to solve its recurrence relation.
    int fib(int n){
        if(n<=1) return n;
        else return fib(n-1) + fib(n-2);
    }
The recurrence is:
T(n) = T(n-1) + T(n-2) + c // where c accounts for the addition
so
T(n) = T(n-2) + 2T(n-3) + T(n-4) + 3c
and one more
T(n) = T(n-3) + 3T(n-4) + 3T(n-5) + T(n-6) + 7c
then I started to get lost trying to form the general equation.
The pattern looks somewhat like Pascal's triangle?
T(n) = T(n-i) + aT(n-i-1) + bT(n-i-2) + ... + kT(n-i-i) + C
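For what it's worth, here is a small numeric check that the hand expansions above agree with the recurrence (the base cases T(0) = T(1) = 1 and the constant c = 1 are assumptions made only for this check):

    #include <stdio.h>

    /* Tabulate T(n) = T(n-1) + T(n-2) + c directly, then compare it with
       the one-step and two-step expansions written out above. */
    int main(void) {
        long T[40];
        long c = 1;
        T[0] = T[1] = 1;            /* assumed base cases */
        for (int n = 2; n < 40; n++)
            T[n] = T[n - 1] + T[n - 2] + c;

        int n = 20;
        long step1 = T[n - 2] + 2 * T[n - 3] + T[n - 4] + 3 * c;
        long step2 = T[n - 3] + 3 * T[n - 4] + 3 * T[n - 5] + T[n - 6] + 7 * c;
        printf("T(%d) = %ld, one-step expansion = %ld, two-step expansion = %ld\n",
               n, T[n], step1, step2);
        return 0;
    }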
As you point out, to see if f(x) ∈ O(g(x)) you need to find...
...some c > 0 and
...some x0
such that f(x) < c·g(x) for all x > x0.
In this case, you can pick c = 1 and x0 = 2. What you need to prove is that
x^2 < 2^x for all x > 2
At this point you can log both sides (since if log(x) > log(y), then x > y.) Assuming you're using base-2 log you get the following
log(x^2) < log(2^x)
and by standard laws of logarithms, you get
2·log(x) < x·log(2)
Since log(2) = 1 this can be written as
2·log(x) < x
If we set x = 2, we get
2·log(2) = 2
and since x grows faster than log(x) we know that 2·log(x) < x holds for all x > 2.
For the most part, the accepted answer (from aioobe) is correct, but there is an important correction that needs to be made.
Yes, for x=2, 2×log(x) = x or 2×log(2) = 2 is correct, but then he incorrectly implies that 2×log(x) < x is true for ALL x>2, which is not true.
Let's take x=3, so the inequality becomes: 2×log(3) < 3, which is false.
If you calculate it, you get 2×log(3) ≈ 3.16993, which is greater than 3.
You can clearly see this if you plot f(x) = x^2 and g(x) = 2^x, or if you plot f(x) = 2×log(x) and g(x) = x (with c=1).
Between x=2 and x=4, you can see that g(x) dips below f(x). It is only when x ≥ 4 that f(x) remains ≤ c×g(x).
So to get the correct answer, you follow the steps described in aioobe's answer, but you plot the functions to find the last intersection where f(x) = c×g(x). The x at that intersection is your x0 (together with the chosen c) for which the following is true: f(x) ≤ c×g(x) for all x ≥ x0.
So for c=1 it should be: x^2 ≤ 2^x for all x ≥ 4, i.e. x0 = 4.
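A short table makes the crossover visible (the step size of 0.5 is an arbitrary choice):

    #include <math.h>
    #include <stdio.h>

    /* Tabulate x^2 against 2^x around the crossover: with c = 1 the
       inequality x^2 <= 2^x fails on (2, 4) and holds from x = 4 on. */
    int main(void) {
        for (double x = 2.0; x <= 6.0; x += 0.5) {
            double lhs = x * x;
            double rhs = pow(2.0, x);
            printf("x = %.1f  x^2 = %6.2f  2^x = %6.2f  x^2 <= 2^x? %s\n",
                   x, lhs, rhs, lhs <= rhs ? "yes" : "no");
        }
        return 0;
    }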
To improve upon the accepted answer:
As the previous answer points out, with c = 1 the inequality only holds from x = 4 onward, so we prove that x^2 ≤ 2^x for all x ≥ 4.
Taking log (base 2) on both sides, we have to prove that:
2·log(x) ≤ x for all x ≥ 4
Thus we have to show that the function h(x) = x - 2·log(x) satisfies h(x) ≥ 0 for all x ≥ 4.
h(4) = 4 - 2·log(4) = 0
Differentiating h(x) with respect to x, we get h'(x) = 1 - 2/(x·ln(2)),
which is positive whenever x > 2/ln(2) ≈ 2.89, and in particular for all x ≥ 4. Thus h(x) keeps increasing on [4, ∞) and, since h(4) = 0,
it is hence proved that h(x) ≥ 0 for all x ≥ 4,
or x^2 ≤ 2^x for all x ≥ 4, which shows x^2 = O(2^x) with C = 1 and x0 = 4.
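As a final sanity check on the derivative argument (the sample points are arbitrary), evaluating h(x) = x - 2·log2(x) and h'(x) = 1 - 2/(x·ln 2) at a few points shows h dipping below 0 between x = 2 and x = 4, and nonnegative, increasing behaviour from x = 4 onward.

    #include <math.h>
    #include <stdio.h>

    /* Evaluate h(x) = x - 2*log2(x) and its derivative h'(x) = 1 - 2/(x*ln 2)
       at integer points: h is negative on (2, 4) and nonnegative and
       increasing from x = 4 onward. */
    int main(void) {
        for (double x = 2.0; x <= 8.0; x += 1.0) {
            double h  = x - 2.0 * log2(x);
            double hp = 1.0 - 2.0 / (x * log(2.0));
            printf("x = %.0f  h(x) = %7.4f  h'(x) = %7.4f\n", x, h, hp);
        }
        return 0;
    }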