Assume that the worst-case runtime of an algorithm can be described as:
T(n) = O(n) + O(r^2) + O(n-r)
With n being the input size and r being the index at which a partition was created per the algorithm.
Can this equation be simplified further? If the variables were all n then it would be O(n^2) but can the same idea be applied when r is involved?
Because r is between 0 and n, the term O(n-r) is dominated by O(n), so you can write T(n) = O(n) + O(r^2), which is the same as T(n) = O(n + r^2). Strictly speaking, since the bound depends on both parameters, the exact form is T(n,r) = O(n + r^2).
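As a sanity check, the dominance argument can be written out (a short sketch in LaTeX, using only the fact 0 <= r <= n stated above):

    0 \le r \le n \implies n - r \le n \implies O(n-r) \subseteq O(n)

    T(n,r) = O(n) + O(r^2) + O(n-r) = O(n) + O(r^2) = O(n + r^2)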
I have the following "divide and conquer" algorithm A1.
A1 divides a problem of size n into 4 sub-problems of size n/4.
Then it solves them and combines the solutions in 12n time.
How can I write the recurrence that gives the running time of the algorithm?
Answering the question "How can I write the recurrence that gives the running time of the algorithm":
You should write it this way:
Let T(n) denote the running time of your algorithm on an input of size n. Then:
T(n) = 4*T(n/4) + 12n
Although the master theorem does give a shortcut to the answer, it is important to understand the derivation of the Big O runtime. Divide and conquer recurrence relations are written in the form T(n) = q * T(n/j) + cn, where q is the number of subproblems, j is the factor by which the input shrinks in each subproblem, and cn is the time it takes to divide, combine, and otherwise process the data at each level. That last term could also be cn^2 or just c, whatever the cost of the non-recursive work actually is.
In your case, you have 4 subproblems of size n/4, with the work at each level taking 12n time, giving the recurrence relation T(n) = 4 * T(n/4) + 12n. From this recurrence we can derive the runtime of the algorithm. Since it is a divide and conquer recurrence, we can assume the base case is T(1) = 1.

To solve the recurrence, I will use a technique called substitution (repeated unrolling). We know that T(n) = 4 * T(n/4) + 12n, so we substitute for T(n/4): T(n/4) = 4 * T(n/16) + 12(n/4). Plugging this into the original equation gives T(n) = 4 * (4 * T(n/16) + 12n/4) + 12n, which simplifies to T(n) = 4^2 * T(n/16) + 2 * 12n.

We still have more levels of work to capture, so we substitute again, this time for T(n/16), and get T(n) = 4^3 * T(n/64) + 3 * 12n. The pattern emerges: after k substitutions, T(n) = 4^k * T(n/4^k) + k * 12n. We want to unroll all the way down to the base case T(1), giving T(n) = 4^k * T(1) + k * 12n.

This equation captures the total work across all levels of the divide and conquer algorithm, but it still contains the unknown k, and we want it in terms of n. We find k by solving n/4^k = 1, since that is the point where the algorithm is called on an input of size one. Solving for k gives k = log4(n), which means we have made log4(n) substitutions. Plugging that in for k gives T(n) = 4^(log4 n) * T(1) + log4(n) * 12n, which simplifies to T(n) = n * 1 + log4(n) * 12n. Since this is Big O analysis and log4(n) is O(log2(n)) by the change-of-base property of logarithms, we get T(n) = n + 12n * log(n), which means T(n) is in O(n log n).
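To sanity-check this derivation numerically, here is a small sketch (not from the original answer; it assumes n is a power of 4 and the base case T(1) = 1):

    import math

    def T(n):
        # T(n) = 4*T(n/4) + 12n, with base case T(1) = 1
        if n == 1:
            return 1
        return 4 * T(n // 4) + 12 * n

    # If T(n) = Theta(n log n), the ratio T(n) / (n * log2(n))
    # should level off at a constant as n grows.
    for k in range(2, 11):
        n = 4 ** k
        print(n, T(n) / (n * math.log2(n)))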
The recurrence relation that best describes the algorithm is given by:
T(n) = 4*T(n/4) + 12n
where T(n) is the running time of the given algorithm for an input of size n, 4 is the number of subproblems, and n/4 is the size of each subproblem.
Using the Master Theorem, the time complexity works out to Theta(n*log n).
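For completeness, here is the Master Theorem step written out (a sketch, assuming the standard statement of the theorem):

    a = 4, \quad b = 4, \quad f(n) = 12n, \qquad n^{\log_b a} = n^{\log_4 4} = n

    f(n) = \Theta(n^{\log_b a}) \implies T(n) = \Theta(n \log n) \quad \text{(case 2)}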
Given this algorithm, I am required to:
Find the recurrence for the expected running time.
Find the tightest upper bound possible.
I am actually a bit lost, so I'd appreciate it if someone could help...
Recurrence for the worst case: T(n) = T(n/2) + n
Recurrence for the best case: T(n) = T(1) + n
Recurrence for the expected case: T(n) = T(n/4) + n
Worst case: 2n = O(n)
Best case: n = O(n)
Expected case: 4n/3 = O(n)
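To see where the constants 2n and 4n/3 come from, unroll the recurrences into geometric series (a sketch, assuming n is a power of 2, respectively 4, and a constant-time base case):

    \text{Worst case: } T(n) = n + \tfrac{n}{2} + \tfrac{n}{4} + \cdots < \tfrac{n}{1 - 1/2} = 2n

    \text{Expected case: } T(n) = n + \tfrac{n}{4} + \tfrac{n}{16} + \cdots < \tfrac{n}{1 - 1/4} = \tfrac{4n}{3}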
Some people here seem to be confused about the log(n) factor. A log(n) factor would only appear if the recurrence were T(n) = 2T(n/2) + n, i.e. if the function called itself TWICE recursively, each time with half the input.
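A quick sketch that illustrates the difference (the helper names are made up for the example, and integer division stands in for n/2):

    def t_single(n):
        # T(n) = T(n/2) + n: one recursive call on half the input -> O(n)
        return n if n <= 1 else n + t_single(n // 2)

    def t_double(n):
        # T(n) = 2*T(n/2) + n: two recursive calls -> O(n log n)
        return n if n <= 1 else n + 2 * t_double(n // 2)

    for n in (2**10, 2**15, 2**20):
        # First ratio approaches 2; second grows like log2(n).
        print(n, t_single(n) / n, t_double(n) / n)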
I have this algorithm:
    S(n)
      if n = 1 then return(0)
      else
        S(n/3)
        x <- 0
        while x <= 3n^3 do
          x <- x + 3
        S(n/3)
Is T(n) = 2 * T(n/3) + n^3 the correct recurrence relation?
Is T(n) = O(n^3) the running time?
The recurrence expression is correct. The time complexity of the algorithm is O(n^3).
The recurrence stops at T(1).
Running an example for n = 27 helps derive a general expression:
T(n) = 2*T(n/3)+n^3 =
= 2*(2*T(n/9)+(n/3)^3)+n^3 =
= 2*(2*(2*T(n/27)+(n/9)^3)+(n/3)^3)+n^3 =
= ... =
= 2*(2*2*T(n/27)+2*(n/9)^3+(n/3)^3)+n^3 =
= 2*2*2*T(n/27)+2*2*(n/9)^3+2*(n/3)^3+n^3
From this example we can see that, after k substitutions, the general expression is given by:

    T(n) = 2^k * T(n/3^k) + 2^(k-1) * (n/3^(k-1))^3 + ... + 2 * (n/3)^3 + n^3

Which is equivalent to:

    T(n) = 2^k * T(n/3^k) + n^3 * (1 + 2/27 + (2/27)^2 + ... + (2/27)^(k-1))

Which, in turn, setting k = log_3(n) so that T(n/3^k) = T(1) = 1 and summing the geometric series, can be solved to the following closed form:

    T(n) = 2^(log_3(n)) + (27/25) * n^3 * (1 - (2/27)^(log_3(n)))
The dominating term in this expression is (27/25)*n^3. Note that 2^(log_3(n)) = n^(log_3(2)), which is about n^0.63 and grows much more slowly than n^3, so it does not affect the bound. Thus, the recurrence is O(n^3).
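A quick numeric check of the closed form (a sketch; it assumes the base case T(1) = 1 and n a power of 3):

    def T(n):
        # T(n) = 2*T(n/3) + n^3, with base case T(1) = 1
        return 1 if n == 1 else 2 * T(n // 3) + n ** 3

    # If the closed form is right, T(n) / n^3 should approach 27/25 = 1.08.
    for k in range(1, 8):
        n = 3 ** k
        print(n, T(n) / n ** 3)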
2 * T(n/3) + n^3
Yes, I think this is a correct recurrence relation.
Time complexity:
    while x <= 3n^3 do
      x <- x + 3
This loop has a time complexity of O(n^3). Also, at each step the function calls itself twice, each time with input n/3, so the subproblem sizes at successive depths form the series
n, n/3, n/9, ...
The total work, adding up the levels, is

    n^3 + (2/27) * n^3 + (2/27)^2 * n^3 + ...

This series is bounded by k*n^3 where k is a constant.

Proof: the ratio between consecutive terms is 2/27, which is less than 1/2, so the sum is bounded above by the geometric series with ratio 1/2, whose sum is 2*n^3. Hence the upper bound is at most 2*n^3.
So in my opinion the complexity = O(n^3).
I'm taking Data Structures and Algorithm course and I'm stuck at this recursive equation:
T(n) = logn*T(logn) + n
Obviously this can't be handled with the Master Theorem, so I was wondering if anybody has ideas for solving this recurrence. I'm pretty sure it should be solved with a change of variable, like taking n to be 2^m, but I couldn't manage to find a good substitution.
The answer is Theta(n). To prove something is Theta(n), you have to show that it is both Omega(n) and O(n). Omega(n) is obvious in this case because T(n) >= n. To show that T(n) = O(n), first:
Pick a large finite value N such that log(n)^2 < n/100 for all n>N. This is possible because log(n)^2=o(n).
Pick a constant C>100 such that T(n)<Cn for all n<=N. This is possible due to the fact that N is finite.
We will show inductively that T(n) < Cn for all n > N. Since log(n) < n, the induction hypothesis applies to T(log(n)), and we have:

    T(n) = n + log(n) * T(log(n))
         < n + log(n) * C * log(n)
         = n + C * log(n)^2
         < n + (C/100) * n
         = C * (1/100 + 1/C) * n
         < (C/50) * n
         < C * n
In fact, for this function it is even possible to show that T(n) = n + o(n) using a similar argument.
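A numeric sketch of this claim (not a proof; it rounds the logarithm down to an integer, uses base 2, and picks an arbitrary small base case):

    import math
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        # T(n) = log(n) * T(log(n)) + n, with a small cutoff as base case
        if n <= 2:
            return n
        m = int(math.log2(n))
        return m * T(m) + n

    # If T(n) = n + o(n), the ratio T(n)/n should approach 1.
    for n in (10**3, 10**6, 10**9, 10**12):
        print(n, T(n) / n)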
This is by no means an official proof, but I think it goes like this.
The key is the + n part. Because of it, T is bounded below by Omega(n). So let's assume that T(n) = O(n) and check that the assumption is consistent.
Substitute into the original relation
    T(n) = (log n) * O(log n) + n
         = O(log^2(n)) + O(n)
         = O(n)
So it still holds.
I am trying to prove the following worst-case scenario for the Quicksort algorithm but am having some trouble. Initially, we have an array of size n, where n = ij. The idea is that at every partition step of Quicksort, you end up with two sub-arrays, one of size i and the other of size i(j-1), where i is an integer constant greater than 0. I have drawn out the recursion trees of some examples and understand why this is a worst-case scenario and why the running time is Theta(n^2). To prove this, I've used the iteration method to solve the recurrence equation:
T(n) = T(ij) = m if j = 1
T(n) = T(ij) = T(i) + T(i(j-1)) + cn if j > 1
T(i) = m
T(2i) = m + m + c*2i = 2m + 2ci
T(3i) = m + 2m + 2ci + 3ci = 3m + 5ci
So it looks like the recurrence is:
    T(n) = j*m + c*i * ((1 + 2 + ... + j) - 1)
At this point, I'm a bit lost as to what to do. It looks like the summation at the end will result in roughly j^2 when expanded out, but I need to show that it somehow equals n^2. Any explanation of how to continue would be appreciated.
Note that the quicksort worst-case scenario is when the partition produces two subproblems of size 0 and n-1. In this scenario, counting the c*n cost of the partition step itself, you have these recurrence equations at each level:

    T(n)   = T(n-1) + T(0) + c*n        <-- at the first level of the tree
    T(n-1) = T(n-2) + T(0) + c*(n-1)    <-- at the second level of the tree
    T(n-2) = T(n-3) + T(0) + c*(n-2)    <-- at the third level of the tree
    .
    .
    .
The sum of the per-level costs is an arithmetic series:

    T(n) = c * (1 + 2 + ... + n) = c * n*(n+1)/2 ~ n^2 (for n -> +inf)
It is O(n^2).
It's a problem of simple mathematics. The complexity, as you have calculated correctly, is

    O(jm + ij^2)

What you have found is a parameterized complexity. The standard O(n^2) is contained in it as follows: assuming i = 1, you have the standard base case, so m = O(1) and j = n, which gives O(n^2). If you substitute ij = n, you get O(nm/i + n^2/i). Now remember that m is a function of i, depending on which algorithm you use as the base case, so m = f(i), and you are left with O(n*f(i)/i + n^2/i). Note also that, since there is no linear-time algorithm for general sorting, f(i) = Omega(i*log i), which gives O(n*log i + n^2/i). So you have only one degree of freedom, namely i. Check that for any value of i you cannot reduce this below n*log n, which is the best bound for comparison-based sorting.
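Written out, the substitution from the previous paragraph looks like this (a sketch using the answer's bound f(i) = Omega(i log i)):

    j = \frac{n}{i} \implies jm + ij^2 = \frac{n f(i)}{i} + \frac{n^2}{i} = O\left(n \log i + \frac{n^2}{i}\right)

In particular, for constant i the n^2/i term dominates, which recovers the Theta(n^2) bound the question asks about.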
Now, what confuses me is that you are doing a worst-case analysis of quicksort, and this is not the way it is usually done. Worst-case analysis matters most for randomized quicksort, and the worst case always occurs when the split is maximally unbalanced, i.e. i = 1, which gives the O(n^2) bound. An elegant treatment is given in the book Randomized Algorithms by R. Motwani and P. Raghavan; alternatively, if you are a programmer, look at Cormen (CLRS).