I am trying to find the runtime of the following recurrence using iterative substitution:
T(n) = T(n/2) + T(n/3) + n
The issue is that there are two T(n/x) terms, and finding a general form for this case has proven to be quite challenging.
Is there a general guideline one should follow using iterative substitution for cases like this?
This recurrence is in the class of Akra–Bazzi recurrences. Following the formula (find p such that (1/2)^p + (1/3)^p = 1; since (1/2) + (1/3) = 5/6 < 1, we have p < 1), the solution is T(n) = Theta(n).
Alternatively, suppose that T(1) = c0. Then you can prove that T(n) <= M*n with M = max(6, c0) by induction: the inductive step is T(n) = T(n/2) + T(n/3) + n <= M*n/2 + M*n/3 + n = (5M/6 + 1)*n <= M*n, which holds exactly when M >= 6.
You can also use the substitution rule. Here's how:
T(n) = T(n/2) + T(n/3) + n
= n+(n/2+n/3)+T(n/(2*2))+T(n/(2*3))+T(n/(3*2))+T(n/(3*3))
= n+(n/2+n/3)+(n/(2*2)+n/(2*3)+n/(3*2)+n/(3*3))
+T(n/(2*2*2))+T(n/(2*2*3))
+T(n/(2*3*2))+T(n/(2*3*3))
+T(n/(3*2*2))+T(n/(3*2*3))
+T(n/(3*3*2))+T(n/(3*3*3))=
...
= n * (1 + 5/6 + (5/6)^2 + (5/6)^3 + (5/6)^4 + ...)   (each level contributes 5/6 of the previous level's work, since n/2 + n/3 = (5/6)*n)
<= 6 * n   (the geometric series sums to 1/(1 - 5/6) = 6; the actual expansion is a finite truncation of the series, and taking n = 2^k*3^k keeps every division exact)
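If you want to sanity-check the 6*n bound numerically, here is a minimal Python sketch (my own code, not from the answer; it assumes integer floor division with T(0) = 0 and T(1) = 1, which the recurrence leaves unspecified):

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n: int) -> int:
    # T(n) = T(n/2) + T(n/3) + n, assuming T(0) = 0 and T(1) = 1
    if n <= 1:
        return n
    return T(n // 2) + T(n // 3) + n

for n in [10, 100, 10_000, 1_000_000]:
    print(n, T(n), T(n) / n)  # the ratio stays below 6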
Nothing formal here, but note that T(n/3) <= T(n/2) for a non-decreasing T, so
T(n) <= 2T(n/2) + n // which is O(nlog(n))
So O(nlog(n)) is at least a valid upper bound for your recurrence, though not a tight one (other answers show the tight bound is O(n)).
Also what is the base case?
I realize that solving the recurrence T(n) = T(n/3) + c with the Master theorem gives the answer Big Theta(log n). However, I want to know more and find the base of the logarithm. I tried reading about the Master theorem to find out about the base, but could not find more information on Wikipedia (https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)).
How would I solve this using recursion tree or substitution method for solving recurrences?
You can assume n = 2^K and T(0) = 0.
Don't set n = 2^k, but n = 3^k:
thus T(3^k) = T(3^{k-1}) + c
Writing w_k = T(3^k), the recurrence becomes w_k = w_{k-1} + c.
Assuming T(1) = 1, so that w_0 = 1,
the general term is w_k = ck + 1,
and you conclude T(n) = c*log_3(n) + 1,
and thus T(n) = O(log_3(n)) = O(log n), since changing the base of the logarithm only changes the constant factor.
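A minimal Python sketch of this derivation (my own code, assuming c = 1 and T(1) = 1 as above):

import math

def T(n: int, c: int = 1) -> int:
    # T(n) = T(n/3) + c, with T(1) = 1
    if n <= 1:
        return 1
    return T(n // 3, c) + c

for k in range(1, 8):
    n = 3 ** k
    print(n, T(n), math.log(n, 3) + 1)  # T(3^k) = k + 1 = log_3(n) + 1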
T(n) = T(n/3) + O(1) = T(n/9) + O(1) + O(1) = T(n/27) + O(1) + O(1) + O(1) = …
After log3(n) steps, the T(...) term reaches the base case and vanishes, so T(n) = O(log(n)).
I don't think I have fully understood recurrences in algorithm analysis.
Well, the n in the recurrence can also be changed into n^2 or n^3. Are those just the same as the n case?
If applicable, what is the typical method for finding the best bounds on the running time?
I also figured out that T(n) = T(0.8n) + n = O(n).
When solving recurrence relations, the most common way is to repeatedly replace functions by their expressions, so that T(n) = T(0.8n) + n = T(0.64n) + 0.8n + n = ... = (1 + 0.8 + 0.64 + 0.512 + ...)n. This is a typical infinite geometric progression; summing it gives T(n) = n/(1 - 0.8) = 5n = O(n).
When we change the n in the original expression to n^x, where x is an arbitrary non-zero constant, we can substitute t = n^x; then T(t) = O(t) by the same argument, so the n^x case works out the same way as the n case: T(n^x) = O(n^x).
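Here is a minimal Python sketch of that geometric-series argument (my own code; it cuts the recursion off once the subproblem size drops below 1):

def T(n: float) -> float:
    # T(n) = T(0.8*n) + n, stopping once the subproblem is trivial
    if n < 1:
        return 0.0
    return T(0.8 * n) + n

for n in [10, 1_000, 1_000_000]:
    print(n, T(n), T(n) / n)  # the ratio approaches 1/(1 - 0.8) = 5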
I have the following "divide and conquer" algorithm A1.
A1 divides a problem of size n into 4 sub-problems of size n/4.
Then it solves them and composes the solutions in 12n time.
How can I write the recurrence equation that gives the runtime of the algorithm?
Answering the question "How can I write the recurrence equation that gives the runtime of the algorithm":
You should write it this way:
Let T(n) denote the run time of your algorithm for an input of size n.
T(n) = 4*T(n/4) + 12*n;
Although the Master theorem does give a shortcut to the answer, it is imperative to understand the derivation of the Big O runtime. Divide and conquer recurrence relations are written in the form T(n) = q * T(n/j) + cn, where q is the number of subproblems, j is the factor by which we divide the data for each subproblem, and cn is the time it takes to divide/combine/manipulate the subproblems at each level. The cn term could also be cn^2 or c, whatever the runtime of that work would be.
In your case, you have 4 subproblems of size n/4 with each level being solved in 12n time giving a recurrence relation of T(n) = 4 * T(n/4) + 12n. From this recurrence, we can then derive the runtime of the algorithm. Given it is a divide and conquer relation, we can assume that the base case is T(1) = 1.
To solve the recurrence, I will use a technique called substitution. We know that T(n) = 4 * T(n/4) + 12n, so we substitute for T(n/4): T(n/4) = 4 * T(n/16) + 12(n/4). Plugging this into the equation gives T(n) = 4 * (4 * T(n/16) + 12n/4) + 12n, which simplifies to T(n) = 4^2 * T(n/16) + 2 * 12n.
We still have more levels of work to capture, so we substitute for T(n/16) and get T(n) = 4^3 * T(n/64) + 3 * 12n. The pattern emerges: after k substitutions, T(n) = 4^k * T(n/4^k) + k * 12n. We want to go all the way down to the base case T(1), which gives T(n) = 4^k * T(1) + k * 12n. This equation captures the total work in the divide and conquer algorithm, but we still have an unknown variable k that we want in terms of n. We get k by solving n/4^k = 1, the point where we are calling the algorithm on a single element. Solving for k gives k = log4(n), which means we make log4(n) substitutions.
Plugging that in for k gives T(n) = 4^(log4 n) * T(1) + log4(n) * 12n, which simplifies to T(n) = n * 1 + log4(n) * 12n. Since this is Big O analysis and log4(n) is in O(log2(n)) by the change-of-base property of logarithms, we get T(n) = n + 12n * log(n), which means T(n) is in O(n log n).
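To double-check the algebra, here is a minimal Python sketch (my own code, using the base case T(1) = 1 assumed above) that compares the recursive definition against the closed form for powers of 4:

import math

def T(n: int) -> int:
    # T(n) = 4*T(n/4) + 12n, with T(1) = 1
    if n == 1:
        return 1
    return 4 * T(n // 4) + 12 * n

for k in range(1, 8):
    n = 4 ** k
    closed = n + 12 * n * math.log(n, 4)  # n*T(1) + 12n*log_4(n)
    print(n, T(n), closed)  # the two columns agree (up to float rounding)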
The recurrence relation that best describes the algorithm is:
T(n) = 4*T(n/4) + 12*n
where T(n) = run time of the given algorithm for an input of size n, 4 = number of subproblems, and n/4 = size of each subproblem.
Using the Master theorem with a = 4, b = 4 and f(n) = 12n: n^(log_4 4) = n = Theta(f(n)), so case 2 applies and the time complexity is Theta(n*log n).
In the Master theorem we're given a "plug-in" formula to find the big O, provided the recurrence satisfies some conditions.
However, what if we have problems like the ones below? Can anyone show me how to do a step-by-step derivation? And what topics would help me learn more about these types of questions? Assume that the person asking this question knows nothing about induction.
T(n) = T(n^(1/2)) + 1
T(n) = T(n-1) + 1
T(n) = T(n-1) + n^c, where c is a natural number > 1
T(n) = T(n-1) + c^n, where c is a natural number > 1
You'll need to know a little math to do some of these. You can figure out what the recursion looks like when you expand it out all the way to the base case, e.g. for T(n) = T(n-1) + n^c you get T(n) = 1^c + 2^c + ... + n^c, but then you need to know some math in order to know that this is O(n^(c+1)). (The easiest way to see this is by bounding the sum above and below in terms of integrals of x^c). Similarly for T(n) = T(n-1) + c^n you easily get T(n) = c^1 + c^2 + ... + c^n but you again need to use some calculus or something to figure out that this is T(n) = O(c^n).
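A minimal Python sketch of the n^c case (my own code; for c = 2 the sum 1^2 + ... + n^2 equals n(n+1)(2n+1)/6, so the ratio to n^3 tends to 1/3, matching the O(n^(c+1)) bound):

def T(n: int, c: int = 2) -> int:
    # T(n) = T(n-1) + n^c with T(0) = 0 unrolls to 1^c + 2^c + ... + n^c
    return sum(i ** c for i in range(1, n + 1))

for n in [10, 100, 1000]:
    print(n, T(n) / n ** 3)  # tends to 1/(c+1) = 1/3 for c = 2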
For T(n) = T(n^(1/2)) + 1 you need to count how many times you apply the recurrence before you get to the base case. Again math helps here. When you take square-root, the logarithm gets cut in half. So you want to know how many times you can cut the logarithm in half until you get to the base case. This is O(log log n).
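And a small sketch of the square-root case (my own helper name), counting root-taking steps down to a base case of 2:

import math

def sqrt_steps(n: float) -> int:
    # count applications of sqrt until n drops to the base case
    steps = 0
    while n > 2:
        n = math.sqrt(n)
        steps += 1
    return steps

for e in [4, 16, 512]:
    n = 2.0 ** e
    print(e, sqrt_steps(n), math.log2(e))  # steps == log2(log2(n))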
You can expand upon the formula and work on it:
For example:
T(n) = T(n-1) + 1
T(n) = [T(n-2) + 1] + 1
...
T(n) = 1 + 1 + 1 ... (n times)
So T(n) = O(n).
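As a sanity check, the same unrolling in Python (my own code, assuming T(0) = 0):

def T(n: int) -> int:
    # T(n) = T(n-1) + 1 with T(0) = 0 unrolls to exactly n
    return 0 if n == 0 else T(n - 1) + 1

print([T(n) for n in range(6)])  # [0, 1, 2, 3, 4, 5]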
First of all sorry for asking such a basic question.
But I am having difficulties understanding the substitution method for solving recurrences. I am following Introduction to Algorithms (CLRS). I am not able to find enough examples, and the ambiguity is my main concern, especially in the induction step. In other textbooks we have to prove that f(n) implies f(n+1), but in CLRS this step is missing, or maybe I am not getting the example. Please explain step by step how to prove that O(n^2) is the solution for the recurrence T(n) = T(n-1) + n.
It's the general steps of the substitution method that I want to understand. If you could shed some light on strong mathematical induction and provide links to material on the substitution method, that would be helpful as well.
In the substitution method, simply replace every occurrence of T(k) by T(k-1) + k, and do it iteratively.
T(n) = T(n-1) + n
     = (T(n-2) + (n-1)) + n
     = T(n-3) + (n-2) + (n-1) + n
     = ...
     = 1 + 2 + ... + (n-1) + n
From the formula for the sum of an arithmetic progression, this is n(n+1)/2, so T(n) is in O(n^2).
Note that the substitution method is usually used to get an intuition about what the complexity is; to formally prove it, you will probably need a different tool, such as mathematical induction.
The formal proof will go something like this:
Claim: T(n) <= n^2
Base: T(1) = 1 <= 1^2
Hypothesis: the claim is true for every `k < n` (strong induction).
T(n) = T(n-1) + n <= (by the hypothesis) (n-1)^2 + n = n^2 - 2n + 1 + n = n^2 - n + 1 <= n^2 (for n >= 1)
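Finally, a minimal Python check of both the closed form and the bound (my own code, assuming T(1) = 1 as in the base case above):

def T(n: int) -> int:
    # T(n) = T(n-1) + n, with T(1) = 1
    return 1 if n == 1 else T(n - 1) + n

for n in [1, 10, 100, 500]:
    assert T(n) == n * (n + 1) // 2  # arithmetic-series closed form
    assert T(n) <= n * n             # the bound proved by induction
print("all checks pass")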