Calculating the Recurrence Relation T(n) = sqrt(n * T(sqrt(n)) + n)

I think the complexity of this recurrence is O(n^(2/3)), by a change of variable and induction, but I'm not sure. Is this solution correct?

This is a fascinating recurrence, and it does not solve to Θ(n). Rather, it appears to solve to Θ(n^(2/3)).
To give an intuition for why this isn't likely to be Θ(n), let's imagine that we're dealing with a really, really large value of n. Then since
T(n) = (n · T(√n) + n)^(1/2),
under the assumption that T(√n) ≈ √n, we'd get that
T(n) ≈ (n · √n + n)^(1/2)
= (n^(3/2) + n)^(1/2)
≈ n^(3/4).
In other words, assuming that T(n) = Θ(n) gives back a different growth rate for T(n) as n gets large, so the assumption is inconsistent with itself.
On the other hand, let's assume that T(n) = Θ(n^(2/3)). Then T(√n) ≈ (√n)^(2/3) = n^(1/3), and the same calculation gives us that
T(n) = (n · T(√n) + n)^(1/2)
= (n · n^(1/3) + n)^(1/2)
≈ (n^(4/3))^(1/2)
= n^(2/3),
which is consistent with itself.
To validate this, I wrote a short program that printed out different values of T(n) given different inputs and plotted the results. Here's the version of T(n) that I wrote up:
#include <math.h>

/* Evaluate the recurrence numerically, with real-valued arguments. */
double T(double n) {
    if (n <= 2) return n;                 /* base case; see note below */
    return sqrt(n * T(sqrt(n)) + n);
}
I decided to use 2 as a base case, since repeatedly taking square roots will never let n drop to one. I also decided to use real-valued arguments rather than discrete integer values just to make the math easier.
If you plot the values of T(n), you get this curve:
[plot of T(n) versus n omitted]
This doesn't look like what I'd expect from a linear plot. To figure out what this was, I plotted it on a log/log plot, which has the nice property that all polynomial functions get converted to straight lines whose slope is equal to the exponent. Here's the result:
[log/log plot omitted]
I consulted my Handy Neighborhood Regression Software and asked it to determine the slope of this line. Here's what it gave back:
Slope: 0.653170918815869
R²: 0.999942627574643
That's a very good fit, and the slope of 0.653 is pretty close to 2/3. So that's more empirical evidence supporting that the recurrence solves to Θ(n^(2/3)).
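If you want to reproduce the experiment, a minimal self-contained harness along these lines works (the sampling points, the doubling step, and the hand-rolled least-squares fit are my own arbitrary choices, not anything canonical):

#include <math.h>
#include <stdio.h>

double T(double n) {
    if (n <= 2) return n;
    return sqrt(n * T(sqrt(n)) + n);
}

int main(void) {
    /* Sample (lg n, lg T(n)) at geometrically spaced points and fit a line;
       on a log/log plot, the slope estimates the exponent of the polynomial. */
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    int k = 0;
    for (double n = 4; n <= 1e15; n *= 2) {
        double x = log2(n), y = log2(T(n));
        sx += x; sy += y; sxx += x * x; sxy += x * y;
        k++;
    }
    double slope = (k * sxy - sx * sy) / (k * sxx - sx * sx);
    printf("fitted slope: %f\n", slope);  /* should land near 2/3 */
    return 0;
}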
All that's left to do now is to work out the math. We'll solve this recurrence using a series of substitutions.
First, I'm generally not that comfortable working with exponents in the way that this recurrence uses them, so let's take the log of both sides. (Throughout this exposition, I'll use lg n to mean log_2 n.)
lg T(n) = lg (n · T(√n) + n)^(1/2)
= (1/2) lg (n · T(√n) + n)
= (1/2) lg (n · (T(√n) + 1))
= (1/2) lg (T(√n) + 1) + (1/2) lg n
≈ (1/2) lg T(√n) + (1/2) lg n
Now, let's define S(n) = lg T(n). Then we have
S(n) = lg T(n)
≈ (1/2) lg T(√n) + (1/2) lg n
= (1/2) S(√n) + (1/2) lg n
That's a lot easier to work with, though we still have the problem of the recurrence shrinking by powers each time. To address this, let's do one more substitution, which is a fairly common one when working with these sorts of expressions. Let's define R(n) = S(2^n). Then we have that
R(n) = S(2^n)
≈ (1/2) S(√(2^n)) + (1/2) lg 2^n
= (1/2) S(2^(n/2)) + (1/2) n
= (1/2) R(n/2) + (1/2) n
Great! All that's left to do now is to solve R(n).
Now, there is a slight catch here. We could immediately use the Master Theorem to conclude that R(n) = Θ(n). The problem with this is that just knowing that R(n) = Θ(n) won't allow us to determine what T(n) is. Specifically, let's suppose that we just know R(n) = Θ(n). Then we could say that
S(n) = S(2^(lg n)) = R(lg n) = Θ(log n)
to get that S(n) = Θ(log n). However, we get stuck when trying to solve for T(n) in terms of S(n). Specifically, we know that
T(n) = 2^(S(n)) = 2^(Θ(log n)),
but we cannot go from this to saying that T(n) = Θ(n). The reason is that the hidden coefficient in the Θ(log n) is significant here. Specifically, if S(n) = k lg n, then we have that
2^(k lg n) = 2^(lg n^k) = n^k,
so the leading coefficient of the logarithm will end up determining the exponent on the polynomial. As a result, when solving R, we need to determine the exact coefficient of the linear term, which translates into the exact coefficient of the logarithmic term for S.
So let's jump back to R(n), which we know is
R(n) ≈ (1/2) R(n/2) + (1/2) n.
If we iterate this a few times, we see this pattern:
R(n) ≈ (1/2) R(n/2) + (1/2) n
≈ (1/2)((1/2) R(n/4) + (1/4) n) + (1/2) n
= (1/4) R(n/4) + (1/8) n + (1/2) n
≈ (1/4)((1/2) R(n/8) + (1/8) n) + (1/8) n + (1/2) n
= (1/8) R(n/8) + (1/32) n + (1/8) n + (1/2) n.
The pattern appears to be that, after k iterations, we get
R(n) ≈ (1/2^k) R(n/2^k) + n(1/2 + 1/8 + 1/32 + 1/128 + ... + 1/2^(2k−1)).
This means we should look at the sum
(1/2) + (1/8) + (1/32) + (1/128) + ...
This is
(1/2)(1 + 1/4 + 1/16 + 1/64 + ... )
which, as the sum of a geometric series, solves to
(1/2)(4/3)
= 2/3.
Hey, look! It's the 2/3 we were talking about earlier. This means that R(n) works out to approximately (2/3)n + c for some constant c that depends on the base case of the recurrence. Therefore, we see that
T(n) = 2^(S(n))
= 2^(S(2^(lg n)))
= 2^(R(lg n))
≈ 2^((2/3) lg n + c)
= 2^(lg n^(2/3) + c)
= 2^c · 2^(lg n^(2/3))
= 2^c · n^(2/3)
= Θ(n^(2/3)),
which matches the theoretically predicted and empirically observed values from earlier.
This was a very fun problem to work through and I'll admit I'm surprised by the answer! I am a bit nervous, though, that I may have missed something when going from
lg T(n) = (1/2) lg (T(√n) + 1) + (1/2) lg n
to
lg T(n) ≈ (1/2) lg T(√n) + (1/2) lg n.
It's possible that this +1 term actually introduces some other term into the recurrence that I didn't recognize. For example, is there an O(log log n) term that arises as a result? That wouldn't surprise me, given that we have a recurrence that shrinks by a square root. However, I've done some simple data explorations and I'm not seeing any terms in there that look like there's a double log involved.
Hope this helps!

We know that:
T(n) = sqrt(n) * sqrt(T(sqrt(n)) + 1)
Hence:
T(n) < sqrt(n) * sqrt(T(sqrt(n)) + T(sqrt(n)))
where the 1 is replaced by T(sqrt(n)) (valid once T(sqrt(n)) >= 1). So,
T(n) < sqrt(2) * sqrt(n) * sqrt(T(sqrt(n)))
Now, to find an upper bound we need to solve the following recurrence relation:
G(n) = sqrt(2n) * sqrt(G(sqrt(n)))
To solve this, we expand it (suppose n = 2^(2^k) and G(2) = 1; the constant factors are treated loosely here, which is fine for a Θ-bound):
G(n) ≈ (2n)^(1/2) * (2n)^(1/8) * (2n)^(1/32) * ... * (2n)^(1/2^(2k-1)) =>
G(n) ≈ (2n)^(1/2 + 1/8 + 1/32 + ... + 1/2^(2k-1))
If we take a factor of 1/2 out of 1/2 + 1/8 + 1/32 + ..., we get (1/2)(1 + 1/4 + 1/16 + ...).
As 1 + 1/4 + 1/16 + ... is a geometric series with ratio 1/4, it sums to 4/3 in the limit, so the exponent tends to (1/2)(4/3) = 2/3. Therefore G(n) = Theta(n^(2/3)) and T(n) = O(n^(2/3)).
Notice that since sqrt(n) * sqrt(T(sqrt(n))) < T(n), we can show, similarly to the previous case, that T(n) = Omega(n^(2/3)). It means T(n) = Theta(n^(2/3)).
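As a quick numerical sanity check of this sandwich (a sketch of my own; the real-valued base case matches the code in the answer above), the ratio T(n)/n^(2/3) should stay bounded between positive constants:

#include <math.h>
#include <stdio.h>

double T(double n) {
    if (n <= 2) return n;
    return sqrt(n * T(sqrt(n)) + n);
}

int main(void) {
    /* If T(n) = Theta(n^(2/3)), this ratio stays bounded and roughly levels off. */
    for (double n = 1e3; n <= 1e15; n *= 1e3)
        printf("n = %.0e   T(n) / n^(2/3) = %f\n", n, T(n) / pow(n, 2.0 / 3.0));
    return 0;
}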

Related

Calculating the Recurrence Relation T(n)=T(n / [(log n)^2]) + Θ(1)

I tried to solve this problem for many hours, and I think the solution is O(log n/[log (log n)^2]), but I'm not sure. Is this solution correct?
Expand the equation once:
T(n) = T(n / ((log n)^2 * (log(n / (log n)^2))^2)) + 2·Θ(1)
     = T(n / ((log n)^2 * (log n - 2 log log n)^2)) + 2·Θ(1)
Since log n - 2 log log n < log n, each step divides the argument by (log of the current argument)^2, which is at most (log n)^2. Hence, if we count how many times we can divide n by log^2(n) before reaching 1, that count is a lower bound on the height of the expansion, and hence on T(n).
Hence, the height of the expansion tree will be k such that
n = (log^2(n))^k = log^(2k)(n) => (take a log)
log(n) = 2k log(log(n)) => k = log(n) / (2 log(log(n)))
Therefore, T(n) = Omega(log(n)/log(log(n))).
For the upper bound, note that the argument at the i-th step is less than n / log^i(n) (as if we had divided by log(n) at each step instead of log^2(n)); so the number of times we can divide n by log(n) gives an upper bound on the height, and hence on T(n). Hence, as:
n = log^k(n) => log(n) = k log(log(n)) => k = log(n) / log(log(n))
we can say T(n) = O(log(n) / log(log(n))).
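A small experiment makes both bounds concrete. This sketch (mine; the base-2 logs and the cutoff at 4 are arbitrary choices) counts the actual number of divisions and prints the two bounds next to it:

#include <math.h>
#include <stdio.h>

int main(void) {
    for (double n = 1e6; n <= 1e200; n *= 1e48) {
        double m = n;
        int steps = 0;
        while (m > 4) {          /* stop at a small constant to avoid the slow tail */
            double l = log2(m);
            m /= l * l;          /* one step: divide by log^2(current argument) */
            steps++;
        }
        double lower = log2(n) / (2 * log2(log2(n)));
        double upper = log2(n) / log2(log2(n));
        printf("n = %.0e   steps = %3d   lg n/(2 lg lg n) = %6.1f   lg n/lg lg n = %6.1f\n",
               n, steps, lower, upper);
    }
    return 0;
}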

Big-O for T(N) = 2T(N − 1) + N, T(1) = 2

How to get big-O for this?
T(N) = 2T(N − 1) + N, T(1) = 2
I got two possible answers, O(2^N) or O(N^2), but I am not sure how to solve it correctly.
Divide T(N) by 2^N and name the result:
S(N) = T(N)/2^N
From the definition of T(N) we get
S(N) = S(N-1) + N/2^N (eq.1)
meaning that S(N) increases, but quickly converges to a constant (the increments N/2^N decay fast enough, roughly geometrically, that their sum converges). So,
T(N)/2^N -> constant
or
T(N) = O(2^N)
Detailed proof
In the comment below Paul Hankin suggests how to complete the proof. Take eq.1 and sum from N=2 to N=M
sum_{N=2}^M S(N) = sum_{N=2}^M S(N-1) + sum_{N=2}^M N/2^N
                 = sum_{N=1}^{M-1} S(N) + sum_{N=2}^M N/2^N
thus, after canceling the terms S(2), S(3), ..., S(M-1) that appear on both sides, we get
S(M) = S(1) + sum_{N=2}^M N/2^N
and since the series on the right converges (its terms are eventually bounded by 1/N^2, and the sum of 1/N^2 is known to converge), S(M) converges to a finite constant.
It's a math problem and Leandro Caniglia is right.
let b(n) = T(n) / 2^n
thus b(n) = b(n-1) + n / 2^n = b(n-2) + n / 2^n + (n-1) / 2^(n-1) = ...
The terms i / 2^i shrink fast enough that the series sum of i / 2^i converges (to 2, in fact).
So the sum of them has a limit and must be smaller than some constant.
thus b(n) < C.
thus T(n) < 2^n * C.
It is obvious that T(n) >= 2^n.
So T(n) is O(2^n)
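A few lines of code make the convergence of b(n) = T(n)/2^n visible; in this sketch (my own check) the ratio climbs toward 5/2, consistent with the exact closed form T(N) = (5/2)·2^N - N - 2 for T(1) = 2, which is easily verified by induction:

#include <stdio.h>

int main(void) {
    /* T(N) = 2*T(N-1) + N with T(1) = 2; track b(N) = T(N) / 2^N. */
    double t = 2.0, p = 2.0;   /* t = T(N), p = 2^N, starting at N = 1 */
    for (int n = 2; n <= 40; n++) {
        t = 2.0 * t + n;
        p *= 2.0;
        if (n % 8 == 0)
            printf("N = %2d   T(N)/2^N = %.12f\n", n, t / p);
    }
    return 0;
}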
Check by plugging each candidate answer into the equation.
2^N =? 2 · 2^(N-1) + N = 2^N + N
or
N^2 =? 2 (N-1)^2 + N
Keeping only the dominant terms, you have
2^N ~ 2^N
or
N^2 ~ 2 N^2.
Only the first is consistent.
Conclude.

Finding these three algorithms' run times

Hi, I am having a tough time showing the running time of these three recurrences for T(n). Assumptions include T(0) = 0.
1) This one I know is close to Fibonacci, so I think it's close to O(n) time, but I'm having trouble showing that:
T(n) = T(n-1) + T(n-2) + 1
2) This one I am stumped on, but I think it's roughly O(log log n):
T(n) = T([sqrt(n)]) + n, for n >= 1, where [sqrt(n)] denotes the floor of sqrt(n).
3) I believe this one is roughly O(n log log n):
T(n) = 2T(n/2) + (n/(log n)) + n.
Thanks for the help in advance.
T(n) = T(n-1) + T(n-2) + 1
Assuming T(0) = 0 and T(1) = a, for some constant a, we notice that T(n) - T(n-1) = T(n-2) + 1. That is, the growth rate of the function is given by the function itself, which suggests this function has exponential growth.
Let T'(n) = T(n) + 1. Then T'(n) = T'(n-1) + T'(n-2), by the above recurrence relation, and we have eliminated the troublesome constant term. T(n) and T'(n) differ only by the additive constant 1, so assuming they are both non-decreasing (they are), they have the same asymptotic complexity, albeit with different constants n0.
To show T'(n) has asymptotic growth of O(b^n), we would need some base cases, then the hypothesis that the condition holds for all n up to, say, k - 1, and then we'd need to show it holds for k; that is, cb^(k-2) + cb^(k-1) <= cb^k. We can divide through by cb^(k-2) to simplify this to 1 + b <= b^2. Rearranging, we get b^2 - b - 1 >= 0; the roots are (1 ± sqrt(5))/2, and we must discard the negative one since we cannot use a negative number as the base for our exponent. So for b >= (1+sqrt(5))/2, T'(n) may be O(b^n). A similar thought experiment will show that for b <= (1+sqrt(5))/2, T'(n) may be Omega(b^n). Thus, for b = (1+sqrt(5))/2 only, T'(n) may be Theta(b^n).
Completing the proof by induction that T(n) = O(b^n) is left as an exercise.
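As a quick empirical companion to the induction (a sketch of mine; the base cases T(0) = 0 and T(1) = 1 are purely illustrative, since the question only fixes T(0)), the ratio of consecutive values converges to (1 + sqrt(5))/2 ≈ 1.618:

#include <stdio.h>

int main(void) {
    /* T(n) = T(n-1) + T(n-2) + 1, with T(0) = 0 and T(1) = 1 for illustration. */
    double prev = 0.0, cur = 1.0;
    for (int n = 2; n <= 48; n++) {
        double next = cur + prev + 1.0;
        prev = cur;
        cur = next;
        if (n % 8 == 0)
            printf("n = %2d   T(n)/T(n-1) = %.10f\n", n, cur / prev);
    }
    return 0;
}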
T(n) = T([sqrt(n)]) + n
Obviously, T(n) is at least linear, assuming the boundary conditions require T(n) to be nonnegative. We might guess that T(n) is Theta(n) and try to prove it. Base case: let T(0) = a and T(1) = b. Then T(2) = b + 2 and T(4) = b + 6. In both cases, a choice of c >= 1.5 will work to make T(n) <= cn (for small enough b). Suppose that whatever our fixed value of c is works for all n up to and including k. We must show that T(k+1) = T([sqrt(k+1)]) + (k+1) <= c(k+1). We know that T([sqrt(k+1)]) <= c·sqrt(k+1) from the induction hypothesis. So T(k+1) <= c·sqrt(k+1) + (k+1), and c·sqrt(k+1) + (k+1) <= c(k+1) can be rewritten as cx + x^2 <= cx^2 (with x = sqrt(k+1)); dividing through by x (OK since k > 1) we get c + x <= cx, and solving this for c we get c >= x/(x-1) = sqrt(k+1)/(sqrt(k+1)-1). This approaches 1 as k grows, so for large enough n, any constant c > 1 will work.
Making this proof totally rigorous by fixing the following points is left as an exercise:
making sure enough base cases are proven so that all assumptions hold
distinguishing the cases where (a) k + 1 is a perfect square (hence [sqrt(k+1)] = sqrt(k+1)) and (b) k + 1 is not a perfect square (hence sqrt(k+1) - 1 < [sqrt(k+1)] < sqrt(k+1)).
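Numerically, the Theta(n) claim is easy to watch (a sketch of mine, with T(1) = 1 as an arbitrary base case): T(n)/n drops toward 1 as n grows:

#include <math.h>
#include <stdio.h>

/* T(n) = T(floor(sqrt(n))) + n, with T(1) = 1 as an arbitrary base case. */
long long T(long long n) {
    if (n <= 1) return 1;
    return T((long long)sqrt((double)n)) + n;
}

int main(void) {
    for (long long n = 10; n <= 1000000000000LL; n *= 100)
        printf("n = %13lld   T(n)/n = %f\n", n, (double)T(n) / (double)n);
    return 0;
}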
T(n) = 2T(n/2) + (n/(log n)) + n
This T(n) satisfies T(n) > 2T(n/2) + n, which we know is the recurrence for the running time of Mergesort, which by the Master theorem is Θ(n log n), so we know our complexity is no less than that.
Indeed, by the master theorem: T(n) = 2T(n/2) + (n/(log n)) + n = 2T(n/2) + n(1 + 1/(log n)), so
a = 2
b = 2
f(n) = n(1 + 1/(log n)) is Θ(n) (for n > 2, it is always between n and 2n)
f(n) = Θ(n) = Θ(n^(log_2 2) * log^0 n)
We're in case 2 of the Master Theorem still, so the asymptotic bound is the same as for Mergesort, Theta(n log n).
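A bottom-up iteration over powers of two (my own check, with T(2) = 1 chosen arbitrarily) shows T(n)/(n lg n) flattening out, consistent with Theta(n log n):

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Iterate T(n) = 2T(n/2) + n/lg(n) + n over n = 4, 8, ..., 2^50, with T(2) = 1. */
    double t = 1.0;
    for (int e = 2; e <= 50; e++) {
        double n = pow(2.0, e);
        t = 2.0 * t + n / e + n;          /* lg n = e exactly when n = 2^e */
        if (e % 10 == 0)
            printf("n = 2^%2d   T(n)/(n lg n) = %f\n", e, t / (n * e));
    }
    return 0;
}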

Calculating Big O complexity of Recursive Algorithms

Somehow, I find that it is much harder to derive Big O complexities for recursive algorithms compared to iterative algorithms. Do provide some insight about how I should go about solving these 2 questions.
*assume that submethod has linear complexity
def myMethod(n)
  if (n > 0)
    submethod(n)
    myMethod(n / 2)
  end
end

def myMethod(k, n)
  if (n > 0)
    submethod(k)
    myMethod(k, n / 2)
  end
end
For your first problem, the recurrence will be:
T(n) = n + T(n/2)
T(n/2) = n/2 + T(n/4)
...
...
...
T(2) = 2 + T(1)
T(1) = 1 + T(0)   // assuming 1/2 equals 0 (integer division)
adding up, we get:
T(n) = n + n/2 + n/4 + n/8 + ... + 1 + T(0)
= n(1 + 1/2 + 1/4 + 1/8 + ...) + k    // writing k = T(0)
= n * (1/(1 - 1/2)) + k               // geometric series, a/(1 - r), as the number of terms tends to infinity
= 2n + k
Therefore, T(n) = O(n). Remember, I have assumed n tends to infinity, because this is what we do in asymptotic analysis.
For your second problem, it's easy to see that we perform k primitive operations every time until n becomes 0. This happens log(n) times. Therefore, T(n) = O(k * log(n)).
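If you prefer to just count, here's a sketch (my own instrumentation; submethod is stubbed to charge n units of work, per the stated linear-cost assumption) that tallies the total work for both versions:

#include <math.h>
#include <stdio.h>

static long long work = 0;

/* Stub: the problem says submethod is linear, so charge n units of work. */
void submethod(long long n) { work += n; }

void myMethod1(long long n) {
    if (n > 0) {
        submethod(n);
        myMethod1(n / 2);
    }
}

void myMethod2(long long k, long long n) {
    if (n > 0) {
        submethod(k);
        myMethod2(k, n / 2);
    }
}

int main(void) {
    long long n = 1000000, k = 5;
    work = 0; myMethod1(n);
    printf("version 1: total work = %lld   (about 2n = %lld)\n", work, 2 * n);
    work = 0; myMethod2(k, n);
    printf("version 2: total work = %lld   (about k*lg(n) = %.0f)\n",
           work, (double)k * log2((double)n));
    return 0;
}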
All you need to do is count how many times a basic operation is executed. This is true for analysing any kind of algorithm. In your case, we will count the number of times submethod is called.
You could break down the running time of the call myMethod(n) as 1 + myMethod(n / 2), which you can further break down to 1 + (1 + myMethod(n / 4)). At some point you will reach the base case, at the log(n)-th step. That gives you an algorithm that runs in O(log n).
The second one is no different, since k is constant all the time, it will again take log(n) time, assuming submethod takes constant time regardless of its input.

Is log(n!) = Θ(n·log(n))?

I am to show that log(n!) = Θ(n·log(n)).
A hint was given that I should show the upper bound with n^n and show the lower bound with (n/2)^(n/2). This does not seem all that intuitive to me. Why would that be the case? I can definitely see how to convert n^n to n·log(n) (i.e. log both sides of an equation), but that's kind of working backwards.
What would be the correct approach to tackle this problem? Should I draw the recursion tree? There is nothing recursive about this, so that doesn't seem like a likely approach.
Remember that
log(n!) = log(1) + log(2) + ... + log(n-1) + log(n)
You can get the upper bound by
log(1) + log(2) + ... + log(n) <= log(n) + log(n) + ... + log(n)
= n*log(n)
And you can get the lower bound by doing a similar thing after throwing away the first half of the sum:
log(1) + ... + log(n/2) + ... + log(n) >= log(n/2) + ... + log(n)
= log(n/2) + log(n/2+1) + ... + log(n-1) + log(n)
>= log(n/2) + ... + log(n/2)
= n/2 * log(n/2)
I realize this is a very old question with an accepted answer, but none of these answers actually use the approach suggested by the hint.
It is a pretty simple argument:
n! (= 1*2*3*...*n) is a product of n numbers each less than or equal to n. Therefore it is less than the product of n numbers all equal to n; i.e., n^n.
Half of the numbers -- i.e. n/2 of them -- in the n! product are greater than or equal to n/2. Therefore their product is greater than the product of n/2 numbers all equal to n/2; i.e. (n/2)^(n/2).
Take logs throughout to establish the result.
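Both halves of the argument can be watched numerically; this sketch (mine) compares lg(n!) = lg(1) + ... + lg(n) with the two bounds from the hint:

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Running sum of lg(i) gives lg(n!); compare against (n/2)lg(n/2) and n lg n. */
    double lg_fact = 0.0;
    for (int i = 1; i <= 1000000; i++) {
        lg_fact += log2((double)i);
        if (i % 250000 == 0) {
            double n = i;
            printf("n = %7.0f   (n/2)lg(n/2) = %12.0f <= lg(n!) = %12.0f <= n lg n = %12.0f\n",
                   n, (n / 2) * log2(n / 2), lg_fact, n * log2(n));
        }
    }
    return 0;
}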
Sorry, I don't know how to use LaTeX syntax on stackoverflow..
See Stirling's Approximation:
ln(n!) = n*ln(n) - n + O(ln(n))
where the last 2 terms are less significant than the first one.
For lower bound,
lg(n!) = lg(n)+lg(n-1)+...+lg(n/2)+...+lg2+lg1
>= lg(n/2) + lg(n/2) + ... + lg(n/2) + ((n-1)/2) lg 2   (drop the last term, lg 1 = 0; bound the first n/2 terms below by lg(n/2) and the remaining (n-1)/2 terms below by lg 2, which makes the cancellation easier later)
= (n/2) lg(n/2) + ((n-1)/2) lg 2
= (n/2) lg n - n/2 + (n-1)/2        (since lg 2 = 1 and lg(n/2) = lg n - 1)
= (n/2) lg n - 1/2
lg(n!) >= (1/2) (n lg n - 1)
Combining both bounds :
1/2 (n lg n - 1) <= lg(n!) <= n lg n
By choosing a lower-bound constant smaller than 1/2 (say, 1/4), we can compensate for the -1 inside the bracket for large enough n.
Thus lg(n!) = Theta(n lg n)
Helping you further, where Mick Sharpe left you:
Its derivation is quite simple: see http://en.wikipedia.org/wiki/Logarithm -> Group Theory
log(n!) = log(n * (n-1) * (n-2) * ... * 2 * 1) = log(n) + log(n-1) + ... + log(2) + log(1)
Think of n as infinitely big. What is infinity minus one? Or minus two? And so on.
log(inf) + log(inf) + log(inf) + ... = inf * log(inf)
And then think of inf as n.
Thanks, I found your answers convincing, but in my case I must use the Θ properties:
log(n!) = Θ(n·log n) => log(n!) = O(n log n) and log(n!) = Ω(n log n)
To verify the problem I found this page, where the whole process is explained: http://www.mcs.sdsmt.edu/ecorwin/cs372/handouts/theta_n_factorial.htm
http://en.wikipedia.org/wiki/Stirling%27s_approximation
Stirling approximation might help you. It is really helpful in dealing with problems on factorials related to huge numbers of the order of 10^10 and above.
This might help:
e^(ln(x)) = x
and
(l^m)^n = l^(m*n)
If you reframe the problem, you can solve this with calculus! This method was originally shown to me by Arthur Breitman (https://twitter.com/ArthurB/status/1436023017725964290).
First, take the integral of log(x) from 1 to n; it is n*log(n) - n + 1. This gives a tight upper bound since log is monotonic and, for every point n, the integral from n to n+1 of log(x) is greater than log(n) * 1. You can similarly craft the lower bound using log(x-1), since for every point n, 1*log(n) is greater than the integral from x = n-1 to n of log(x). The integral of log(x) from 0 to n-1 is (n-1)*(log(n-1) - 1), i.e. n*log(n-1) - n - log(n-1) + 1.
These are very tight bounds!
