Asymptotic analysis of functions - algorithm

I have the following function and I need to prove that it is O(x log x):
f(x) = x log x + 3 log(x^2)
I need some help to solve this.

Given,
f(x) = x log x + 3 log(x^2)
     = x log x + 6 log x // since log(a^b) = b log a
As we know, f(x) = O(g(x)) if |f(x)| <= M·|g(x)| for all sufficiently large x, where M is a positive real number.
Take M >= 7. For all x >= 1 we have log x >= 0 and x + 6 <= 7x, so
M · x log x >= 7x log x >= (x + 6) log x
            = x log x + 6 log x.
Therefore
f(x) = x log x + 3 log(x^2)
     = O(x log x).
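As a quick numeric sanity check (a sketch of mine, not part of the proof), the ratio f(x)/(x log x) should stay below the witness constant M = 7 for x >= 1, and in fact it approaches 1:

#include <stdio.h>
#include <math.h>

/* Sanity check: f(x) = x*log(x) + 3*log(x^2) = x*log(x) + 6*log(x).
   The ratio f(x)/(x*log(x)) should stay below M = 7 for x >= 1
   (and approach 1 as x grows). */
int main(void) {
    for (double x = 2; x <= 1e6; x *= 10) {
        double f = x * log(x) + 6 * log(x);
        printf("x = %10.0f   f(x)/(x log x) = %f\n", x, f / (x * log(x)));
    }
    return 0;
}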

Related

Chain rule for orders of magnitude

I'm looking for something like a chain rule for orders of magnitude. Suppose:
y = O(x)
z = O(y)
Then:
z = O(x)
But we can be more general than this. If p is a polynomial:
y = O(x)
z = O(p(y))
Then:
z = O(p(x))
None of this seems like it would be hard to prove. But can we generalise this any further?
The proof is straightforward. Suppose p(y) = a_k y^k + ... + a_1 y + a_0, with nonnegative coefficients. As y = O(x), there is a constant c such that y < c·x for all large enough x. Hence p(y) < a_k c^k x^k + ... + a_1 c x + a_0 = f(x). If c <= 1, then f(x) <= p(x), so p(y) < p(x). If c > 1, every power c^j (j <= k) is at most c^k, so f(x) <= c^k p(x); hence there is a constant c' = c^k such that p(y) < c'·p(x). Either way, p(y) = O(p(x)).
Finally, as z = O(p(y)), we have proved that z = O(p(x)).
For a more rigorous proof, you can use mathematical induction on the degree of the polynomial p.
To generalize, we should look for functions with the specific property that f(y) < c'·f(x) whenever y < c·x. One big category of such functions is the increasing functions with f(cx) = Θ(f(x)); transitivity holds for all of them. For example, f(x) = sqrt(x) satisfies the constraint, but f(x) = 2^x does not, since 2^(cx) = (2^x)^c is not Θ(2^x) when c > 1.
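A small numeric illustration of that last criterion (my own sketch, with c = 2): for f(x) = sqrt(x) the ratio f(2x)/f(x) is a constant, consistent with f(2x) = Θ(f(x)), while for f(x) = 2^x the ratio diverges:

#include <stdio.h>
#include <math.h>

/* Illustrates the f(cx) = Theta(f(x)) test with c = 2:
   sqrt(2x)/sqrt(x) is the constant sqrt(2), but 2^(2x)/2^x = 2^x diverges. */
int main(void) {
    for (double x = 10; x <= 50; x += 10) {
        printf("x = %4.0f  sqrt ratio = %8.4f   2^x ratio = %e\n",
               x, sqrt(2 * x) / sqrt(x), pow(2, 2 * x) / pow(2, x));
    }
    return 0;
}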

How to prove which of the following functions has the greater growth rate

I came across this question: prove whether the following statement is true or false.

Let f(n) = n + log n; then f(n) = O(log^2 n).

I'm unsure how to go about proving or disproving whether log^2 n is an upper bound for n. Could someone help me construct a proof?
The statement is false: n grows strictly faster than (ln n)^2. To see this, consider the function
g(x) = x(ln x)^2 ; x > 0
This function is positive and increasing for 0 < x < e^(-2).
To see why this is true, let's calculate its derivative:
g'(x) = 1·(ln x)^2 + x·2(ln x)/x
basically because the derivative of ln x is 1/x. Then
g'(x) = (ln x)((ln x) + 2)
which is positive for 0 < x < e^(-2), since both factors are negative in that interval.
So g(x) is positive and increasing in the interval (0, e^(-2)); moreover, g(x) -> 0 as x -> 0+, because the factor x vanishes faster than (ln x)^2 blows up. Therefore, for every positive constant c,
g(x) < c ; if x is small enough
which implies that
g(1/n) < c ; if n is large enough
and since g(1/n) = (1/n)(ln(1/n))^2 = (1/n)(ln n)^2, this says
(ln n)^2 < c·n ; for n large enough
As c can be taken arbitrarily small, (ln n)^2 = o(n): the bound goes in the opposite direction. In particular n is not O((ln n)^2), and hence
n + ln n is not O((ln n)^2)
so the statement is false.
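To see the gap numerically (a quick sketch, not part of the argument above), the ratio n/(ln n)^2 grows without bound, so no constant c can make n <= c·(ln n)^2 hold for all large n:

#include <stdio.h>
#include <math.h>

/* The ratio n / (ln n)^2 grows without bound, so n is not O((ln n)^2). */
int main(void) {
    for (double n = 10; n <= 1e8; n *= 100) {
        double r = n / (log(n) * log(n));
        printf("n = %12.0f   n/(ln n)^2 = %e\n", n, r);
    }
    return 0;
}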

Confused on how to find c and k for big O notation if f(x) = x^2+2x+1

I am studying big O notation from this book.
The definition of big-O notation is:
We say that f(x) is O(g(x)) if there are constants C and k such that |f(x)| ≤ C|g(x)| whenever x > k.
Now here is the first example:
EXAMPLE 1 Show that f(x) = x^2 + 2x + 1 is O(x^2).
Solution: We observe that we can readily estimate the size of f(x) when x > 1, because x < x^2 and 1 < x^2 when x > 1. It follows that
0 ≤ x^2 + 2x + 1 ≤ x^2 + 2x^2 + x^2 = 4x^2
whenever x > 1. Consequently, we can take C = 4 and k = 1 as witnesses to show that f(x) is O(x^2). That is, f(x) = x^2 + 2x + 1 < 4x^2 whenever x > 1. (Note that it is not necessary to use absolute values here because all functions in these equalities are positive when x is positive.)
I honestly don't know how they got C = 4; it looks like they jump straight to the equation manipulation, and my algebra skills are pretty weak. However, I found another way, through the accepted answer to the question "What is an easy way for finding C and N when proving the Big-Oh of an Algorithm?", which says to add all the coefficients to find C if k = 1. So for x^2 + 2x + 1, C = 1 + 2 + 1 = 4.
Now for k = 2, I'm completely lost:
Alternatively, we can estimate the size of f(x) when x > 2. When x > 2, we have 2x ≤ x^2 and 1 ≤ x^2. Consequently, if x > 2, we have
0 ≤ x^2 + 2x + 1 ≤ x^2 + x^2 + x^2 = 3x^2.
It follows that C = 3 and k = 2 are also witnesses to the relation f(x) is O(x^2).
Can anyone explain what is happening? What method are they using?
First alternative:
C=4?
The C = 4 comes from the inequalities
0 ≤ x^2 + 2x + 1 ≤ x^2 + 2x^2 + x^2 = 4x^2 = C*x^2, with C=4 (+)
The second inequality in (+) is true for all x greater than 1, since, term by term
2x < 2x^2, given x>1
1 < x^2, given x>1
k = 1?
From above, we've shown that (+) holds as long as x is larger than 1, i.e.
(+) is true given x > k, with k=1
Second alternative:
k=2?
By the statement, we want to study f(x) for x larger than 2, i.e.
Study f(x) for x > k, k=2
Given x > 2, it's apparent that
0 ≤ x^2 + 2x + 1 ≤ x^2 + x^2 + x^2 = 3x^2 = C*x^2, with C=3 (++)
since, for x>2, we have
2x = x^2 given x=2 ==> 2x < x^2 given x>2
for x=2, 1 < x^2 = 4, so 1 < x^2 for all x>2
Both examples show that f(x) is O(x^2). With your constants C and k, recall that big-O notation for f(x) can be summarized as something along the lines of:
... we can say that f(x) is O(g(x)) if we can find a constant C such
that |f(x)| is less than C|g(x)| for all x larger than k, i.e., for all
x > k. (*)
This by no means implies that we need to find a unique pair (C, k) to prove that some f(x) is O(g(x)); any pair (C, k) such that (*) above holds will do.
See e.g. the following link for some reference on how to specify the asymptotic behaviour of a function:
https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-o-notation
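Both witness pairs can also be checked mechanically. Here is a small sketch (mine, not from the book) that verifies the two inequalities over a range of x:

#include <stdio.h>

/* Verifies the two witness pairs from Rosen's example:
   x^2 + 2x + 1 <= 4x^2 for x > 1   (C = 4, k = 1)
   x^2 + 2x + 1 <= 3x^2 for x > 2   (C = 3, k = 2) */
int main(void) {
    int ok4 = 1, ok3 = 1;
    for (double x = 1.01; x <= 1000; x += 0.01) {
        double f = x * x + 2 * x + 1;
        if (f > 4 * x * x) ok4 = 0;
        if (x > 2 && f > 3 * x * x) ok3 = 0;
    }
    printf("C=4, k=1 holds: %s\n", ok4 ? "yes" : "no");
    printf("C=3, k=2 holds: %s\n", ok3 ? "yes" : "no");
    return 0;
}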

Using big-O to prove N^2 is O(2^N)

I can clearly see that N^2 is bounded by c·2^N, but how do I prove it using the formal definition of big-O? I can prove it easily by mathematical induction.
Here is my attempt..
By definition, f(n) = O(g(n)) if there exist a constant C and an n0 such that, for all n > n0,
f(n) <= C·g(n)
where
f(n) = n^2
and
g(n) = 2^n
Should I take the log of both sides and solve for C?
And one more question, about the Fibonacci sequence: I want to solve the recurrence relation.
int fib(int n){
    if (n <= 1) return n;              // base cases: fib(0) = 0, fib(1) = 1
    else return fib(n-1) + fib(n-2);   // two recursive calls plus one addition
}
The recurrence is
T(n) = T(n-1) + T(n-2) + c // where c is for the adding operation
so, expanding every term one level,
T(n) = T(n-2) + 2T(n-3) + T(n-4) + 3c
and one more
T(n) = T(n-3) + 3T(n-4) + 3T(n-5) + T(n-6) + 7c
and then I started to get lost trying to form the general equation.
The coefficients look somehow like Pascal's triangle?
T(n) = T(n-i) + aT(n-i-1) + bT(n-i-2) + ... + kT(n-2i) + C
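A quick way to see where this recurrence is heading, without forming the general expansion: a minimal numeric sketch (assuming base cases T(0) = T(1) = 1 and c = 1, which the question does not specify) shows T(n)/T(n-1) settling at the golden ratio φ ≈ 1.618, so T(n) = Θ(φ^n), the same exponential growth as fib itself.

#include <stdio.h>

/* T(n) = T(n-1) + T(n-2) + c, with assumed base cases T(0) = T(1) = 1
   and c = 1.  The ratio T(n)/T(n-1) converges to phi ~ 1.618,
   so T(n) = Theta(phi^n), the same growth as fib(n) itself. */
int main(void) {
    double T[41];
    T[0] = T[1] = 1;
    for (int n = 2; n <= 40; n++)
        T[n] = T[n - 1] + T[n - 2] + 1;  /* +1 for the addition step */
    for (int n = 10; n <= 40; n += 10)
        printf("n = %2d   T(n)/T(n-1) = %.6f\n", n, T[n] / T[n - 1]);
    return 0;
}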
As you point out, to see if f(x) ϵ O(g(x)) you need to find...
...some c > 0 and
...some x0
such that f(x) < c·g(x) for all x > x0.
In this case, you can pick c = 1 and x0 = 2. What you need to prove is that
x^2 < 2^x for all x > 2
At this point you can log both sides (since if log(x) > log(y), then x > y.) Assuming you're using base-2 log you get the following
log(x^2) < log(2^x)
and by standard laws of logarithms, you get
2·log(x) < x·log(2)
Since log(2) = 1 this can be written as
2·log(x) < x
If we set x = 2, we get
2·log(2) = 2
and since x grows faster than log(x) we know that 2·log(x) < x holds for all x > 2.
For the most part, the accepted answer (from aioobe) is correct, but there is an important correction that needs to be made.
Yes, for x=2, 2×log(x) = x or 2×log(2) = 2 is correct, but then he incorrectly implies that 2×log(x) < x is true for ALL x>2, which is not true.
Let's take x = 3, so the inequality becomes 2×log(3) < 3, which is false: if you calculate it, you get 2×log(3) ≈ 3.16993, which is greater than 3.
You can clearly see this if you plot f(x) = x^2 and g(x) = 2^x, or if you plot f(x) = 2×log(x) and g(x) = x (if c = 1).
Between x = 2 and x = 4, you can see that g(x) dips below f(x). It is only when x ≥ 4 that f(x) remains ≤ c×g(x).
So to get the correct answer, follow the steps described in aioobe's answer, but plot the functions to find the last intersection where f(x) = c×g(x). The x at that intersection is your x0 (together with the chosen c), for which the following is true: f(x) ≤ c×g(x) for all x ≥ x0.
So for c = 1 it should be: for all x ≥ 4, i.e., x0 = 4.
To make the accepted answer's argument rigorous, with the corrected threshold:
You have to prove that x^2 < 2^x for all x > 4 (as shown above, the inequality fails between x = 2 and x = 4).
Taking the base-2 log of both sides, it suffices to prove that
2·log(x) < x for all x > 4
Thus we have to show that the function h(x) = x − 2·log(x) is > 0 for all x > 4.
h(4) = 4 − 2·log(4) = 0
Differentiating h(x) with respect to x, we get h'(x) = 1 − 2/(x·ln(2)).
For all x > 2/ln(2) ≈ 2.89, h'(x) is greater than 0, so h(x) is strictly increasing on [4, ∞), and since h(4) = 0,
it is hence proved that h(x) > 0 for all x > 4,
or x^2 < 2^x for all x > 4
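A brute-force check at integer points (a small sketch of mine) confirms the crossover: the two functions tie at x = 2 and x = 4, x^2 is ahead only at x = 3, and 2^x stays ahead for all x > 4:

#include <stdio.h>

/* Check x^2 vs 2^x at integer points: 2^x falls behind at x = 3
   and takes the lead for good at x = 4. */
int main(void) {
    for (int x = 1; x <= 10; x++) {
        long long sq = (long long)x * x;
        long long pw = 1LL << x;
        printf("x = %2d   x^2 = %4lld   2^x = %5lld   %s\n",
               x, sq, pw, sq <= pw ? "x^2 <= 2^x" : "x^2 >  2^x");
    }
    return 0;
}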

Problem from Growth of Functions

I am reading the book "Discrete Mathematics and Its Applications" by Kenneth H. Rosen.
I am now in the chapter Growth of Functions and trying its exercises [5th edition, page 142].
I am stuck here:
Determine whether these functions are O(x^2) (big oh of x squared):
[1] f(x) = 2^x,
[2] f(x) = (x^4)/2
[3] f(x) = floor(x) * floor(x)
I cannot do the 1st one. Will anybody please help?
I have done the 2nd and 3rd as follows. Please check and comment.
[2]
Suppose f(x) = (x^4)/2 is O(x^2), i.e., there are constants C and k with
|(x^4)/2| <= C·|x^2| for x > k
then |x^2| <= 2C for x > k
or x <= sqrt(2C) for x > k
or x <= C1 [C1 = sqrt(2C), a constant]
but no constant C1 can bound x from above for all x > k, a contradiction;
so f(x) is not O(x^2)
[3]
f(x) = floor(x) · floor(x) <= x · x for x > 1
or f(x) <= x^2 for x > 1
so f(x) is O(x^2), taking C = 1 and k = 1 as witnesses
Please help me with the 1st one.
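As a side note, the witness pair in [3] can be spot-checked numerically (a small sketch of mine):

#include <stdio.h>
#include <math.h>

/* Spot-check for [3]: floor(x)*floor(x) <= x*x for x > 1, i.e. C = 1, k = 1. */
int main(void) {
    int ok = 1;
    for (double x = 1.001; x < 100; x += 0.137) {
        if (floor(x) * floor(x) > x * x) ok = 0;
    }
    printf("floor(x)^2 <= x^2 for sampled x in (1, 100): %s\n", ok ? "yes" : "no");
    return 0;
}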
f(x) is in O(g(x)) exactly when there are fixed constants a and b with f(x) <= a·g(x) + b for all large x. One method from elementary calculus is to divide and take the limit:
lim(f(x)/g(x)) as x->inf
If this is zero, then f(x) grows strictly slower than x^2. If it's any other finite number, then the two functions are in the same big-O class (and any constant a little larger than that number works as your C-witness). If it's infinity, then f(x) is NOT O(x^2).
The simpler approach is just to look at the largest power of x (for polynomials), or just to know that k^x grows faster than x^k for every fixed k greater than one. (Hint: this is your answer to [1].)
Big-O notation is something you should be able to eyeball. In fact, if you aren't yet comfortable with the relations between these algebraic functions, you're better off dropping the algorithms class and taking a math class first, until you're solid in calculus. Then you will have no trouble at all with big-O notation.
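The limit test described above is easy to approximate numerically. Here is a sketch (mine) evaluating the three ratios f(x)/x^2 from the exercise at increasing x:

#include <stdio.h>
#include <math.h>

/* Approximates lim f(x)/x^2 for the three functions in the exercise:
   2^x / x^2 diverges (not O(x^2)); (x^4/2)/x^2 = x^2/2 diverges (not O(x^2));
   floor(x)^2 / x^2 tends to 1 (O(x^2), with C slightly above 1). */
int main(void) {
    for (double x = 10; x <= 1000; x *= 10) {
        printf("x = %6.0f  2^x/x^2 = %e  (x^4/2)/x^2 = %e  floor^2/x^2 = %f\n",
               x, pow(2, x) / (x * x), (x * x) / 2, pow(floor(x), 2) / (x * x));
    }
    return 0;
}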
