How to prove that |sin(n!)| = Theta(1)? - big-o

I was given this statement in an Algorithms and Data Structures course. It looks obvious, but how exactly can I prove it?
Even on a graph we can see that |sin(n!)| never goes above 1 or below 0.

Proof that |sin(n!)| is O(1):
We have two functions:
f(n) = |sin(n!)|
g(n) = 1
and we have to prove that for every n > 0 the statement f(n) <= g(n) holds.
Let's use induction for such case:
Base case - For n = 1, f(1) = |sin(1)| ≈ 0.84 and g(1) = 1, so f(1) <= g(1).
Inductive step - Suppose that f(n) <= g(n) for n = 1, 2, 3, ..., k. Now let's show that f(k+1) <= g(k+1) holds as well.
n! ranges from 1 to infinity.
sin(x) by definition lies in [-1, 1], so sin(n!) lies in that range as well.
Taking the absolute value restricts |sin(n!)| to [0, 1].
Whatever k is, f(k+1) is still in the range [0, 1], therefore f(k+1) <= g(k+1), and so the statement holds for every k.
As @Berthur mentioned in the comments, we would also have to prove that |sin(n!)| has a positive, non-zero lower bound. We cannot show that, because |sin(n)| can take values arbitrarily close to zero.
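As a quick sanity check of the upper bound (an illustration only, not a proof), here is a minimal Python sketch that prints |sin(n!)| for small n. Note that for larger n the floating-point value of n! is far too imprecise for sin(n!) to be meaningful, so this only illustrates the [0, 1] range:

import math

# Print |sin(n!)| for small n to illustrate that it always stays in [0, 1].
# Caveat: for larger n, the float value of n! loses all precision, so sin(n!)
# computed this way says nothing about the true value.
for n in range(1, 16):
    print(n, abs(math.sin(math.factorial(n))))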

Related

Solving the following recurrence: T(n) = T(n/3) + T(n/2) + sqrt(n)

I am trying to solve the following recurrence:
T(n) = T(n/3) + T(n/2) + sqrt(n)
I currently have done the following but am not sure if I am on the right track:
T(n) <= 2T(n/2) + sqrt(n)
T(n) <= 4T(n/4) + sqrt(n/2) + sqrt(n)
T(n) <= 8T(n/8) + sqrt(n/4) + sqrt(n/2) + sqrt(n)
so, n/(2^k) = 1, and the sqrt portion is a geometric series, which simplifies to: (a(1-r^k))/(1-r)
k = log2(n), and the number of leaves is 2^k = 2^(log2(n)) = n, but:
I am not sure how to combine the result of 2^(log2(n)) with the sqrt(n) portion.
A good initial attempt would be to identify the upper and lower bounds of the time complexity function. These are given by:
These two functions are much easier to solve for than T(n) itself. Consider the slightly more general function:
When do we stop recursing? We need a stopping condition. Since it is not given, we can assume it is n = 1 without loss of generality (you'll hopefully see how). Therefore the number of terms, m, is given by:
Therefore we can obtain the lower and upper bounds for T(n):
Can we do better than this? i.e. obtain the exact relationship between n and T(n)?
From my previous answer here, we can derive a binomial summation formula for T(n):
Where
C is such that n = C is the stopping condition for T(n). If not given, we can assume C = 1 without loss of generality.
In your example, f(n) = sqrt(n), c1 = c2 = 1, a = 3, b = 2. Therefore:
How do we evaluate the inner sum? Consider the standard formula for a binomial expansion, with positive exponent m:
Thus we replace x, y with the corresponding values in the formula, and get:
Where we arrived at the last two steps with the standard geometric series formula and logarithm rules. Note that the exponent is consistent with the bounds we found before.
Some numerical tests to confirm the relationship:
N T(N)
--------------------
500000 118537.6226
550000 121572.4712
600000 135160.4025
650000 141671.5369
700000 149696.4756
750000 165645.2079
800000 168368.1888
850000 181528.6266
900000 185899.2682
950000 191220.0292
1000000 204493.2952
Plot of log T(N) against log N:
The gradient of such a plot m is such that T(N) ∝ N^m, and we see that m = 0.863, which is quite close to the theoretical value of 0.861.
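For reference, a minimal Python sketch of such a numerical test (not the code used to produce the table above). It evaluates the recurrence directly under the assumed stopping condition T(n) = 0 for n < 1 and estimates the exponent from two sample points:

import math
from functools import lru_cache

# T(n) = T(n/3) + T(n/2) + sqrt(n), assuming T(n) = 0 once n drops below 1.
@lru_cache(maxsize=None)
def T(n):
    if n < 1:
        return 0.0
    return T(n / 3) + T(n / 2) + math.sqrt(n)

# Estimate the exponent m in T(N) ~ N^m from the slope on a log-log scale.
n1, n2 = 500_000, 1_000_000
m = (math.log(T(n2)) - math.log(T(n1))) / (math.log(n2) - math.log(n1))
print(T(n1), T(n2), m)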

How to solve a recurrence relation such as $T(n) = T(n/2) + T(n/4) + O(m)$

I want to get a tighter bound for this recurrence, which involves the two variables m and n.
From my previous answer here, we can derive a binomial summation formula for T(n):
Where
C is such that n = C is the stopping condition for T(n).
In your specific example, the constants are: c1 = 1, c2 = 1, a = 2, b = 4, f(n) = O(m). Since O(m) has no dependence on n, we can simply replace the f term with it.
How do we evaluate the inner sum? Recall the binomial expansion for integer powers:
Setting a = b = 1 we get:
Thus:
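Separately from the closed form above, here is a rough numerical sketch of the recurrence (an illustration only), treating the O(m) term as a fixed constant and assuming a hypothetical stopping condition T(n) = 0 for n < 1:

from functools import lru_cache

M = 1.0  # treat the O(m) term as a fixed constant for this illustration

# T(n) = T(n/2) + T(n/4) + M, assuming T(n) = 0 once n drops below 1.
@lru_cache(maxsize=None)
def T(n):
    if n < 1:
        return 0.0
    return T(n / 2) + T(n / 4) + M

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, T(n))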

Order relationship with respect to n^(nmod6)

What is the order relationship between f(n) = 10n and g(n) = n^(nmod6)?
I know that I can think of f(n) as just n, but thinking about g(n) confuses me, because won't n mod 6 change with the different values of n? For example, n = 6 would make g(n) = n^0 = 1, but when n = 5, g(n) = n^5. How can I think of this with respect to the Big-Oh, Big-Theta, and Big-Omega relationships?
(n mod 6) can only take values from 0 to 5, so g(n) is bounded above by n^5, and bounded below by 1. So it would be O(n^5) and Omega(1). It does not have a workable Big-Theta.
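A small Python sketch that tabulates g(n) = n^(n mod 6) for the first few n, showing how it keeps jumping between 1 (whenever n is a multiple of 6) and n^5 (whenever n mod 6 == 5):

# g(n) = n^(n mod 6) oscillates between 1 and n^5, so no single Theta bound fits.
for n in range(1, 20):
    print(n, n % 6, n ** (n % 6))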

Time complexity of the program using recurrence equation

I want to find out the time complexity of the following program using recurrence equations:
int g(int x);  /* forward declaration so that f can call g */

int f(int x)
{
    if (x < 1) return 1;
    else return f(x - 1) + g(x);
}

int g(int x)
{
    if (x < 2) return 1;
    else return f(x - 1) + g(x / 2);
}
I wrote its recurrence equation and tried to solve it, but it keeps getting more complex:
T(n) =T(n-1)+g(n)+c
=T(n-2)+g(n-1)+g(n)+c+c
=T(n-3)+g(n-2)+g(n-1)+g(n)+c+c+c
=T(n-4)+g(n-3)+g(n-2)+g(n-1)+g(n)+c+c+c+c
...
After the kth expansion:
= kc + g(n) + g(n-1) + g(n-2) + ... + g(n-k+1) + T(n-k)
Suppose that at the kth step the input becomes 1.
Then n - k = 1, so k = n - 1.
Now I end up with this:
T(n) = (n-1)c + g(n) + g(n-1) + g(n-2) + ... + g(2) + T(1)
I'm not able to solve it any further.
Anyway, if we count the number of function calls in this program, it is easy to see that the time complexity is exponential, but I want to prove it using the recurrence. How can that be done?
The explanation in Answer 1 looks correct; I did similar work.
The most difficult task with this code is writing its recurrence equation. I have drawn another diagram and identified some patterns; I think this diagram can help us work out what the recurrence equation could be.
And I came up with this equation, but I am not sure if it is right. Please help:
T(n) = 2*T(n-1) + c*log(n)
Ok, I think I have been able to prove that f(x) = Theta(2^x) (note that the time complexity is the same). This also proves that g(x) = Theta(2^x) as f(x) > g(x) > f(x-1).
First as everyone noted, it is easy to prove that f(x) = Omega(2^x).
Now we have the relation that f(x) <= 2 f(x-1) + f(x/2) (since f(x) > g(x))
We will show that, for sufficiently large x, there is some constant K > 0 such that
f(x) <= K*H(x), where H(x) = (2 + 1/x)^x
This implies that f(x) = Theta(2^x), as H(x) = Theta(2^x), which itself follows from the fact that H(x)/2^x -> sqrt(e) as x-> infinity (wolfram alpha link of the limit).
Now (warning: heavier math, perhaps cs.stackexchange or math.stackexchange is better suited)
according to wolfram alpha (click the link and see series expansion near x = infinity),
H(x) = exp(x ln(2) + 1/2 + O(1/x))
And again, according to wolfram alpha (click the link (different from above) and see the series expansion for x = infinity), we have that
H(x) - 2H(x-1) = [1/2x + O(1/x^2)]exp(x ln(2) + 1/2 + O(1/x))
and so
[H(x) - 2H(x-1)]/H(x/2) -> infinity as x -> infinity
Thus, for sufficiently large x (say x > L) we have the inequality
H(x) >= 2H(x-1) + H(x/2)
Now there is some K (dependent only on L (for instance K = f(2L))) such that
f(x) <= K*H(x) for all x <= 2L
Now we proceed by (strong) induction (you can revert to natural numbers if you want to)
f(x+1) <= 2f(x) + f((x+1)/2)
By induction, the right side is
<= 2*K*H(x) + K*H((x+1)/2)
And we proved earlier that
2*H(x) + H((x+1)/2) <= H(x+1)
Thus f(x+1) <= K * H(x+1)
Using memoisation, both functions can easily be computed in O(n) time. But the program takes at least O(2^n) time, and is thus a very inefficient way of computing f(n) and g(n).
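For illustration, here is one possible memoised version in Python (a sketch, not part of the original program); with caching, each argument is evaluated only once, so both values are obtained quickly:

from functools import lru_cache

# Memoised versions of the two functions from the question.
@lru_cache(maxsize=None)
def f(x):
    if x < 1:
        return 1
    return f(x - 1) + g(x)

@lru_cache(maxsize=None)
def g(x):
    if x < 2:
        return 1
    return f(x - 1) + g(x // 2)

print(f(100))  # computed almost instantly thanks to memoisation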
To prove that the program takes at most O((2 + epsilon)^n) time for any epsilon > 0:
Let F(n) and G(n) be the number of function calls that are made in evaluating f(n) and g(n), respectively. Clearly (counting the addition as 1 function call):
F(0) = 1; F(n) = F(n-1) + G(n) + 1
G(1) = 1; G(n) = F(n-1) + G(n/2) + 1
Then one can prove:
F and G are monotonic
F > G
Define H(1) = 2; H(n) = 2 * H(n-1) + H(n/2) + 1
clearly, H > F
for all n, H(n) > 2 * H(n-1)
hence H(n/2) / H(n-1) -> 0 for sufficiently large n
hence H(n) < (2 + epsilon) * H(n-1) for all epsilon > 0 and sufficiently large n
hence H in O((2 + epsilon)^n) for any epsilon > 0
(Edit: originally I concluded here that the upper bound is O(2^n). That is incorrect, as nhahtdh pointed out, but see below.)
So this is the best I can prove... Because G < F < H, they are also in O((2 + epsilon)^n) for any epsilon > 0.
Postscript (after seeing Mr Knoothe's solution): Because IMHO a good mathematical proof gives insight rather than lots of formulas, and SO exists for all those future generations (hi gals!):
For many algorithms, calculating f(n+1) involves twice (thrice, ...) the amount of work of f(n), plus something more. If this something more becomes relatively smaller with increasing n (which is often the case), using a fixed epsilon as above is not optimal.
Replacing the epsilon above by some decreasing function ε(n) of n will in many cases (if ε decreases fast enough, say ε(n) = 1/n) yield an upper bound O((2 + ε(n))^n) = O(2^n).
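A short Python sketch (an illustration, not part of the original answer) that evaluates the call-count recurrences F and G defined above and prints the consecutive ratios F(n)/F(n-1); they stay above 2 and drift down toward 2, consistent with the (2 + epsilon)^n bound:

from functools import lru_cache

# Call-count recurrences from the answer:
#   F(0) = 1; F(n) = F(n-1) + G(n) + 1
#   G(1) = 1; G(n) = F(n-1) + G(n//2) + 1
@lru_cache(maxsize=None)
def F(n):
    if n <= 0:
        return 1
    return F(n - 1) + G(n) + 1

@lru_cache(maxsize=None)
def G(n):
    if n <= 1:
        return 1
    return F(n - 1) + G(n // 2) + 1

for n in range(2, 41):
    print(n, F(n) / F(n - 1))  # ratios stay above 2 and approach 2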
Let f(0)=0 and g(0)=0
From the function we have,
f(x) = f(x - 1) + g(x)
g(x) = f(x - 1) + g(x/2)
Substituting g(x) in f(x) we get,
f(x) = f(x-1) + f(x -1) + g(x/2)
∴f(x) = 2f(x-1) + g(x/2)
Expanding this we get,
f(x) = 2f(x-1)+f(x/2-1)+f(x/4-1)+ ... + f(1)
Let s(x) be a function defined as follows,
s(x) = 2s(x-1)
Now clearly f(x)=Ω(s(x)).
The complexity of s(x) is O(2^x).
Therefore f(x) = Ω(2^x).
I think it is clear that f(n) > 2^n, because f(n) > h(n) = 2h(n-1) = 2^n.
Now I claim that for every n, there is an ε such that:
f(n) < (2+ε)^n. To see this, let's do it by induction, but to make it easier to follow I'll first use ε = 1 to show f(n) <= 3^n, and then I'll extend it.
We will use strong induction: suppose that for every m < n we have f(m) < 3^m; then we have:
f(n) = 2f(n-1) + f(n/2 -1) + f(n/4 -1) + ... + f(1-1)
but for this part:
A = f(n/2 -1) + f(n/4 -1)+ ... +f(1-1)
we have:
f(n/2) = 2f(n/2 -1) + f(n/4 -1) + ... + f(1-1) ==>
A <= f(n/2) [1]
So we can rewrite f(n):
f(n) = 2f(n-1) + A < 2f(n-1) +f(n/2),
Now let back to our claim:
f(n) < 2*3^(n-1) + 2*3^(n/2)==>
f(n) < 2*3^(n-1) + 3^(n-1) ==>
f(n) < 3^n. [2]
By [2], the proof of f(n) ∈ O(3^n) is complete.
But if you want to extend this to the form (2+ε)^n, just use [1] in the same inequality; then we will have
for ε > 1/(2+ε)^(n/2 - 1) → f(n) < (2+ε)^n. [3]
Also, by [3], you can say that for every n there is an ε such that f(n) < (2+ε)^n; in fact, there is a constant ε such that for n > n0, f(n) ∈ O((2+ε)^n). [4]
Now we can use Wolfram Alpha like @Knoothe, by setting ε = 1/n; then we will have:
f(n) < (2+1/n)^n, which results in f(n) < e*2^n, and by our simple lower bound from the start we have f(n) ∈ Θ(2^n). [5]
P.S.: I didn't calculate epsilon exactly, but you can do it simply with pen and paper. I think this epsilon is not exactly right, but it is easy to find; if it turns out to be hard, tell me and I'll write it out.
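As a small numerical illustration of the Θ(2^n) claim (using the base cases from the original program, f = 1 for x < 1 and g = 1 for x < 2, rather than the f(0) = g(0) = 0 convention above), the ratio f(n)/2^n can be tabulated; it settles down to a constant:

# Bottom-up evaluation of f and g with the original program's base cases,
# printing f(n) / 2^n, which levels off to a constant.
N = 50
f = [0] * (N + 1)
g = [0] * (N + 1)
f[0] = 1           # f(x) = 1 for x < 1
g[0] = g[1] = 1    # g(x) = 1 for x < 2
f[1] = f[0] + g[1]
for n in range(2, N + 1):
    g[n] = f[n - 1] + g[n // 2]
    f[n] = f[n - 1] + g[n]
for n in range(5, N + 1, 5):
    print(n, f[n] / 2 ** n)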

Proving a recurrence relation with induction

I've been having trouble with an assignment I received in the course I am following.
The assignment in question:
Use induction to prove that when n >= 2 is an exact power of 2, the solution of
the recurrence:
T(n) = { 2           if n = 2,
         2T(n/2) + n if n = 2^k with k > 1 }
is T(n) = n log(n).
NOTE: the logarithms in the assignment have base 2.
The base case here is obvious: when n = 2, we have 2 = 2 log(2).
However, I am stuck on the inductive step and am not sure how to solve it.
Step. Let us assume that the statement holds for 2^m for all m <= k and let us show it for 2^{k+1}.
Then, T(2^{k+1}) = 2T(2^k) + 2^{k+1}.
By the inductive assumption T(2^k) = 2^k*log(2^k), i.e., T(2^k) = k*2^k (since the logarithms have base 2 here).
Hence, T(2^{k+1}) = 2*k*2^k + 2^{k+1} = 2^{k+1}*(k+1), which can be written as 2^{k+1}*log(2^{k+1}), completing the proof.
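A quick Python check of the closed form against the recurrence for the first few powers of 2 (just an illustration; the induction above is the actual proof):

import math

# T(2) = 2; T(n) = 2*T(n/2) + n for n = 2^k with k > 1.
def T(n):
    if n == 2:
        return 2
    return 2 * T(n // 2) + n

for k in range(1, 11):
    n = 2 ** k
    print(n, T(n), n * math.log2(n))  # the last two columns agree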
