I'm having trouble fully understanding how to write the recurrence for the expected running time of a randomized algorithm.
I believe I'm doing it correctly, but if someone could look over it, that'd be a huge help.
Here's the pseudocode for the algorithm:
printIntegers(A, n)   // input: an array A of n integers
    if A.length > 0
        for i = 1 to n
            print A[i]
        randInt = rand(1, 10)
        if randInt != 10
            return
        else
            printIntegers(A, n-1)
The only random part is the random number generator producing a value between 1 and 10. I'm trying to understand how that translates into the recurrence.
I'm thinking:
T(n) = O(n)             with probability 9/10  (randInt != 10)
T(n) = T(n-1) + O(n)    with probability 1/10  (randInt == 10)
       T(n-2) + O(n)
       ...
       T(0) + O(n)
This makes sense in my head, and then the expected running time would be O(n). Am I approaching this correctly?
Note that the termination check should use n, not A.length, since the latter does not change across the recursive calls.
The probability that the recursion is invoked on any given call is 1/10: if the random number generator is truly uniform, the number 10 comes up 1/10 of the time. Likewise, with probability 9/10 there is no recursion. The O(n) term appears in both cases, so taking expected values the recurrence becomes:
T(n) = (0.9 + 0.1) * O(n) + 0.1 * T(n-1)
= O(n) + 0.1 * T(n-1)
= O(n) + 0.1 * (O(n-1) + 0.1 * T(n-2))
= O(n) + 0.1 * O(n-1) + 0.1^2 * O(n-2) +...
= O(n) * (1 + 0.1 + 0.1^2 +...+ 0.1^(n-1)) + 0.1^(n-1) * T(1)
= O(n) * (1 - 0.1^n)/0.9 + K
The above is O(n * (1 - 0.1^n)/0.9), which is essentially O(n), depending upon your accuracy needs.
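As a sanity check, here is a small simulation, a sketch of my own rather than part of the original question (the method names are made up), that counts the print operations one run performs and averages over many trials. The average should settle near 10n/9:

    import java.util.Random;

    public class ExpectedCost {
        static final Random RNG = new Random();

        // Count the print operations of one run of printIntegers(A, n):
        // each call prints n entries, then recurses on n-1 with probability 1/10.
        static long runOnce(int n) {
            long ops = 0;
            while (n > 0) {
                ops += n;                        // the for-loop prints n entries
                if (RNG.nextInt(10) != 9) break; // "randInt != 10": return, prob. 9/10
                n--;                             // "randInt == 10": recurse on n-1
            }
            return ops;
        }

        public static void main(String[] args) {
            int n = 1000, trials = 100_000;
            long total = 0;
            for (int t = 0; t < trials; t++) total += runOnce(n);
            System.out.printf("average ops = %.2f, 10n/9 = %.2f%n",
                              (double) total / trials, 10.0 * n / 9);
        }
    }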
First note that, unrolling the expected-cost recurrence T(n) = n + T(n-1)/10 (counting one unit of work per printed entry):
T(n) = n + (n-1)/10 + (n-2)/10^2 + ... + 1/10^{n-1}
Then, bounding T(n) above:
T(n) = n + (n-1)/10 + (n-2)/10^2 + ... + 1/10^{n-1}
< n + n/10 + n/10^2 + ... + n/10^{n-1}
= n(1 + 1/10 + ... + 1/10^{n-1})
< n(1 + 1/10 + 1/10^2 + ...)
= n/(1 - 1/10) = 10n/9
and bounding it below:
T(n) = n + (n-1)/10 + (n-2)/10^2 + ... + 1/10^{n-1}
> n
So n < T(n) < 10n/9 and T(n) is Theta(n).
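To see these bounds numerically, you can evaluate the series directly (a throwaway check of my own):

    public class SeriesBounds {
        public static void main(String[] args) {
            for (int n : new int[]{10, 100, 1000}) {
                // T(n) = n + (n-1)/10 + (n-2)/10^2 + ... + 1/10^(n-1)
                double t = 0, pow = 1;
                for (int k = 0; k < n; k++) {
                    t += (n - k) / pow;
                    pow *= 10;
                }
                System.out.printf("n = %4d   T(n) = %.6f   10n/9 = %.6f%n",
                                  n, t, 10.0 * n / 9);
            }
        }
    }

For each n, T(n) lands between n and 10n/9, creeping toward 10n/9 from below, exactly as the bounds predict.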
I need help solving T(n) = T(n/4) + T(n/3) + 2n using the iteration method (recursion tree). I am thinking it would be either Θ(2n) or Θ(n)?
It's straightforward. Since T is increasing, we have the following two inequalities:
T(n) > 2T(n/4) + 2n
and
T(n) < 2T(n/3) + 2n
Now find the upper and the lower bound by expansion. From the two cases, you will find that T(n) = Theta(n).
For example, for the upper bound T'(n) = 2T'(n/3) + 2n we have the following expansion:
T'(n) = 2T'(n/3) + 2n = 2^2 T'(n/3^2) + (1 + 2/3) * 2n
By induction we can show that:
T'(n) = 2^(log_3 n) T'(1) + (1 + 2/3 + (2/3)^2 + ...) * 2n
      < n + 6n = 7n
because 2^(log_3 n) < 2^(log_2 n) = n (taking T'(1) = 1), and (1 + 2/3 + (2/3)^2 + ...) is a geometric series with ratio 2/3, so its sum is 1/(1 - 2/3) = 3 as n goes to infinity.
You can do the same analysis for the lower bound of T(n).
Therefore, since c_1 * n <= T(n) <= c_2 * n, we can conclude that T(n) is in Theta(n).
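If you want an empirical cross-check, a memoized tabulation of the recurrence (a sketch of mine; the base case T(n) = 1 for n <= 1 is an assumption, and n/4, n/3 use integer division) shows T(n)/n leveling off at a constant:

    import java.util.HashMap;
    import java.util.Map;

    public class RecurrenceTable {
        static final Map<Long, Double> MEMO = new HashMap<>();

        // T(n) = T(n/4) + T(n/3) + 2n, assumed base case T(n) = 1 for n <= 1
        static double t(long n) {
            if (n <= 1) return 1;
            Double cached = MEMO.get(n);
            if (cached != null) return cached;
            double value = t(n / 4) + t(n / 3) + 2.0 * n;
            MEMO.put(n, value);
            return value;
        }

        public static void main(String[] args) {
            for (long n = 10; n <= 100_000_000L; n *= 10)
                System.out.printf("n = %9d   T(n)/n = %.4f%n", n, t(n) / n);
        }
    }

The ratio levels off below 4.8, the fixed point of c = c/4 + c/3 + 2 for the exact (real-valued) recurrence; integer division discards some work, so the empirical constant comes out a bit smaller.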
I have the function:
int ps(int n) {
    if (n == 1) return 1;
    else return Extra(n) + ps(n/4) + ps(n/4);
}
Extra(n) is O(n)
I have tried to find T(n) for this function, writing T(n) = T(n) + 2T(n/4), and I have calculated the complexity using the master theorem: it is O(n).
But I don't know how to find the complexity using back substitution.
First, your recurrence is wrong: you left out the cost of Extra(n). The correct recurrence is T(n) = 2T(n/4) + n. Now, this recurrence is easy to solve by substitution:
T(n) = 2T(n/4) + n = 2 (2T(n/16) + n/4) + n = 2^2 T(n/16) + n/2 + n
     = 2^2 (2T(n/64) + n/16) + n/2 + n = 2^3 T(n/64) + n/4 + n/2 + n
Now, by mathematical induction, if we suppose n = 4^k, you can find that:
T(n) = n + n/2 + n/4 + ... + n/2^(k-1) + 2^k T(1)
     = n (1 + 1/2 + 1/4 + ... + 1/2^(k-1)) + sqrt(n) T(1) <= 2n + sqrt(n) T(1)
The bound on the sum comes from it being a geometric series with ratio 1/2, and the sqrt(n) term (2^k = sqrt(4^k) = sqrt(n)) is of lower order. Hence, T(n) is in Theta(n), and in particular O(n).
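To make the substitution argument concrete, here is a quick check of my own (the base case T(1) = 1 is an assumption). On powers of 4 the recurrence works out to exactly 2n - sqrt(n), safely below 2n:

    public class BackSubstitution {
        // T(n) = 2 T(n/4) + n, with assumed base case T(1) = 1
        static long t(long n) {
            if (n <= 1) return 1;
            return 2 * t(n / 4) + n;
        }

        public static void main(String[] args) {
            for (long n = 4; n <= (1L << 30); n *= 4)
                System.out.printf("n = %10d   T(n) = %11d   2n = %11d%n",
                                  n, t(n), 2 * n);
        }
    }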
So I've got this algorithm whose time complexity I need to calculate. It goes like this:
for i = 1 to n do
    k = i
    while (k <= n) do
        FLIP(A[k])
        k = k + i
where A is an array of booleans, and FLIP simply flips the current value, so it is O(1).
Now I understand that the inner while loop should be called
n/1 + n/2 + n/3 + ... + n/n
times. If I'm correct, is there a formula out there for such a calculation?
Pretty confused here.
The more exact computation is T(n) = sum over i = 1 to n of (n-i)/i, approximately (because k starts from i). Hence the total is n + n/2 + ... + n/n - n = n(1 + 1/2 + ... + 1/n) - n. We know that 1 + 1/2 + ... + 1/n = H(n), and H(n) = Theta(log n). Hence, T(n) = Theta(n log n); the -n has no effect on the asymptotic computational cost, since n = o(n log n).
Let's say we want to calculate the sum
n + n/2 + n/3 + ... + n/n
=> n (1 + 1/2 + 1/3 + ... + 1/n)
The bracketed part (1 + 1/2 + 1/3 + ... + 1/n) is the well-known harmonic series, and I'm afraid there is no simple closed-form formula for it.
So the given problem boils down to calculating the harmonic sum above. Although this sum can't be computed exactly in closed form, you can still find an asymptotic upper bound for it: it is O(log n).
Hence the answer to the above problem is O(n log n).
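Even without a closed form, a direct count makes the n log n growth visible. This sketch (my own, not from the answers) counts the actual inner-loop iterations and compares them with n ln n:

    public class HarmonicLoop {
        public static void main(String[] args) {
            for (int n : new int[]{1_000, 100_000, 10_000_000}) {
                long iterations = 0;
                for (int i = 1; i <= n; i++)
                    for (long k = i; k <= n; k += i) // the inner while loop
                        iterations++;
                System.out.printf("n = %9d   iterations = %11d   n ln n = %14.0f%n",
                                  n, iterations, n * Math.log(n));
            }
        }
    }

The ratio of the two columns approaches 1 as n grows.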
What is the time complexity for the following function?
for (int i = 0; i < a.size; i++) {
    for (int j = i; j < a.size; j++) {
        // ...
    }
}
I think it is less than O(n^2) because we aren't iterating over all of the elements in the second for loop. I believe the time complexity comes out to something like this:
n[ (n) + (n-1) + (n-2) + ... + (n-n) ]
But when I solve this formula it comes out to be
n^2 - n + n^2 - 2n + n^2 - 3n + ... + n^2 - n^2
Which doesn't seem correct at all. Can somebody tell me exactly how to solve this problem, and where I went wrong?
That is O(n^2). If you consider the iteration where i = a.size() - 1 and work your way backwards (i = a.size() - 2, i = a.size() - 3, etc.), you are looking at the following sum of iteration counts, where n = a.size():
1 + 2 + 3 + 4 + ... + n
The sum of this series is n(n+1)/2, which is O(n^2). Note that big-O notation ignores constants and takes the highest polynomial power when it is applied to a polynomial function.
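A direct count confirms the formula (a quick sketch of my own; n stands in for a.size()):

    public class TriangularCount {
        public static void main(String[] args) {
            int n = 1000;
            long count = 0;
            for (int i = 0; i < n; i++)
                for (int j = i; j < n; j++) // the inner loop shrinks as i grows
                    count++;
            System.out.println(count);                  // 500500
            System.out.println((long) n * (n + 1) / 2); // n(n+1)/2 = 500500
        }
    }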
It will run for:
1 + 2 + 3 + .. + n
Which is 1/2 n(n+1), which gives us O(n^2).
Big-O notation keeps only the dominant term, neglecting constants too.
Big-O is only enough to compare algorithms on the same variation of a problem, under the same complexity analysis standard, when the dominant terms are different.
If the dominant terms are the same, you need to compare the exact running-time expressions (or Big-Theta with constants), which reveal the minor differences.
Example
A:
for i = 1 .. n
    for j = i .. n
        ..
B:
for i = 1 .. n
    for j = 1 .. n
        ..
We have
Time(A) = 1/2 n(n+1) ~ O(n^2)
Time(B) = n^2 ~ O(n^2)
O(A) = O(B)
T(A) < T(B)
Analysis
To visualize how we got 1 + 2 + 3 + .. + n:
for i = 1 .. n:
    print "(1 + "
    sum = 0
    for j = i .. n:
        sum++
    print sum ") + "
will print the following:
(1 + n) + (1 + (n-1)) + ... + (1 + 2) + (1 + 1)
where each term is 1 (the outer iteration's own constant work) plus the inner-loop count, i.e.
n + (n + (n-1) + ... + 2 + 1)
= n + 1/2 n(n+1)
= 1/2 n^2 + 3/2 n
Yes, the number of iterations is strictly less than n^2, but it's still Θ(n^2). It will eventually be greater than n^k for any k<2, and it will eventually be less than n^k for any k>2.
(As a side note, computer scientists often say big-O when they really mean big-theta (Θ). It's technically correct to say that almost every algorithm you've seen has O(n!) running time; all reasonable algorithms have running times that grow no more quickly than n!. But it's not really useful to say that the complexity is O(n!) if it's also O(n log n), so by some kind of Gricean maxim we assume that when someone says an algorithm's complexity is O(f(x)), the f(x) is as small as possible.)
I have the following worked out:
T(n) = T(n - 1) + n = O(n^2)
Now when I work this out I find that the bound is very loose. Have I done something wrong or is it just that way?
You also need a base case for your recurrence relation.
T(1) = c
T(n) = T(n-1) + n
To solve this, you can first guess a solution and then prove it works using induction.
T(n) = (n + 1) * n / 2 + c - 1
First the base case. When n = 1 this gives c as required.
For other n:
T(n)
= (n + 1) * n / 2 + c - 1
= ((n - 1) + 2) * n / 2 + c - 1
= ((n - 1) * n / 2) + (2 * n / 2) + c - 1
= (n * (n - 1) / 2 + c - 1) + (2 * n / 2)
= T(n - 1) + n
So the solution works.
To get the guess in the first place, notice that your recurrence relationship generates the triangular numbers when c = 1:
T(1) = 1:
*
T(2) = 3:
*
**
T(3) = 6:
*
**
***
T(4) = 10:
*
**
***
****
etc.
Intuitively a triangle is roughly half of a square, and in Big-O notation the constants can be ignored so O(n^2) is the expected result.
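If you'd rather check the guess by machine than by hand, iterating the recurrence and comparing it against the closed form also works (a throwaway sketch of my own; the choice c = 5 is arbitrary):

    public class ClosedFormCheck {
        public static void main(String[] args) {
            long c = 5;  // arbitrary base case T(1) = c
            long t = c;
            for (long n = 2; n <= 20; n++) {
                t += n;                                  // T(n) = T(n-1) + n
                long closed = (n + 1) * n / 2 + c - 1;   // the guessed solution
                System.out.printf("n = %2d   T(n) = %3d   closed form = %3d%n",
                                  n, t, closed);
            }
        }
    }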
Think of it this way:
In each "iteration" of the recursion you do O(n) work.
Each iteration has n-1 work to do, until n = base case. (I'm assuming base case is O(n) work)
Therefore, assuming the base case is a constant independant of n, there are O(n) iterations of the recursion.
If you have n iterations of O(n) work each, O(n)*O(n) = O(n^2).
Your analysis is correct. If you'd like more info on this way of solving recursions, look into Recursion Trees. They are very intuitive compared to the other methods.
The solution is pretty easy for this one. You have to unroll the recursion:
T(n) = T(n-1) + n = T(n-2) + (n - 1) + n =
= T(n-3) + (n-2) + (n-1) + n = ... =
= T(0) + 1 + 2 + ... + (n-1) + n
You have an arithmetic progression here, and the sum is 1/2 * n * (n+1). Technically you are missing the boundary condition here, but with any constant boundary condition you see that the recursion is O(n^2).
Looks about right, but it will depend on the base case T(1). You will do n steps to get from T(n) down to T(0), and each time the added term is anywhere between 0 and n, for an average of about n/2, so the total is roughly n * n/2 = (n^2)/2 = O(n^2).