I need help finding the complexity of a recursive algorithm. I know that in order to solve this I have to find the recurrence relation and then apply the Master Theorem. To my knowledge, finding the recurrence is straightforward when only one parameter is involved; in this case there are two parameters (i, j). Consider the function below, called on (A, 1, n):
integer stuff(integer[] A, integer i, integer j) {
    if i ≥ j then return i - j
    integer h ← 0
    for integer k ← 1 to floor((j - i + 1)/3) do {
        h ← h + 1
        return stuff(A, i, i + h) + stuff(A, j - h, j) - stuff(A, i + h + 1, j - h - 1)
    }
}
Assuming various things, I guessed the relation to be:
T(1) = k
T(n) = T(n/3) + T(n/3) + T(n/3) + 1/3*n = 3*T(n/3) + 1/3*n
I assumed that because it looks like the function is called on three parts, each of which is one third of n, with h = O(n/3):
First call: (i + h) - i = h ~ n/3
Second call: j - (j - h) = h ~ n/3
Third call: (j - h - 1) - (i + h + 1) = j - i - 2h - 2 ~ n/3 (which I only assumed)
Even though I can try to guess the relation and make sense of it, I don't know how to prove it formally.
If my guess is correct, how do you reach that conclusion? If not, what am I missing?
Sorry for the long question; thanks in advance.
Since you return inside the for loop, the loop only ever runs its first iteration: the function reaches the return statement immediately, so the loop itself contributes only constant work before the result is returned.
Also, the proof of the recurrence relation comes from your analysis: applying a counting principle from combinatorics to the three subranges proves the final result.
Moreover, if you correct the pseudocode and put the return at the end of the function, the complexity is T(n) = 3T(n/3) + Θ(n) (as you analyzed). Then, from the Master Theorem, you get T(n) = Θ(n log n).
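For reference, here is the Master Theorem bookkeeping for that corrected recurrence (a sketch; the constant factor on the n term doesn't matter):

a = 3, b = 3, f(n) = Θ(n)
n^(log_b a) = n^(log_3 3) = n
f(n) = Θ(n^(log_b a))                      → case 2 of the Master Theorem
T(n) = Θ(n^(log_b a) * log n) = Θ(n log n)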
I am trying to solve the recurrence of the quicksort algorithm by the substitution method. I cannot find any way to prove that this will lead to O(n²). What further steps do I have to take to make this work?
The worst case of quicksort is when you choose a pivot element which is the minimum or maximum element from the array, so that all remaining elements go to one side of the partition, and the other side of the partition is empty. In this case, the non-empty partition has a size of (n - 1), and it takes linear time (kn for some constant k > 0) to do the partitioning itself, so the recurrence relation is
T(n) = T(n - 1) + T(0) + kn
If we guess that T(n) = an² + bn + c for some constants a, b, c, then we can substitute:
an² + bn + c = [ a(n - 1)² + b(n - 1) + c ] + [ c ] + kn
where the two square-bracketed terms are T(n - 1) and T(0) respectively. By expanding the brackets and equating coefficients, we get
an² = an²
bn = -2an + bn + kn
c = a - b + 2c
It follows that there is a family of solutions, parameterised by c = T(0), where a = k/2 and b = k/2 + c. This family of solutions can be written exactly as
T(n) = (k/2) n² + (k/2 + c) n + c
which is not just O(n²), but Θ(n²), meaning the running time is a quadratic function, not merely bounded above by a quadratic function. Note that the actual value of c doesn't change the asymptotic behaviour of the function, so long as k > 0 (i.e. the partitioning step does take a positive amount of time).
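As a quick numerical sanity check (my own sketch, not part of the answer; it assumes k = 1 and c = 1), we can iterate the recurrence directly and compare it against the closed form:

#include <iostream>

int main() {
    const double k = 1.0, c = 1.0;  // assumed partition constant and T(0)
    double T = c;                   // T(0) = c
    for (int n = 1; n <= 10; ++n) {
        T = T + c + k * n;          // T(n) = T(n-1) + T(0) + kn
        double closed = (k / 2) * n * n + (k / 2 + c) * n + c;
        std::cout << "n=" << n << "  recurrence=" << T
                  << "  closed form=" << closed << "\n";
    }
}

The two columns agree exactly, as the coefficient matching predicts.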
I'm interested in calculating the following code's time and space complexity, but I seem to be struggling a lot. I know that the deepest the recursion can reach is n, so the space should be O(n). However, I have no idea how to calculate the time complexity; I don't know how to write the formula when the recursion has a form like f(f(n-1)).
If it were something like return f3(n-1) + f3(n-1), then I know it would be O(2^n), since T(n) = 2T(n-1). Correct?
Here's the code:
int f3(int n)
{
    if (n <= 2)
        return 1;
    f3(1 + f3(n - 2));
    return n - 1;
}
Thank you for your help!
Notice that f3(n) = n - 1 for every n > 2. So in the line f3(1 + f3(n-2)), first f3(n-2) is computed, which returns n - 3, and then f3(1 + (n - 3)) = f3(n-2) is computed again!
So f3(n) computes f3(n-2) twice, along with some O(1) overhead.
This gives the recurrence T(n) = 2T(n-2) + c for some constant c, where T(n) is the running time of f3(n).
Solving the recurrence, we get T(n) = O(2^(n/2)).
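To see the doubling concretely, here is a sketch with a call counter added (the counter and driver are my additions, not part of the original code):

#include <iostream>

long long calls = 0;  // counts invocations of f3

int f3(int n) {
    ++calls;
    if (n <= 2)
        return 1;
    f3(1 + f3(n - 2));  // inner call returns n - 3, so this is f3(n - 2) again
    return n - 1;
}

int main() {
    for (int n = 4; n <= 20; n += 2) {
        calls = 0;
        f3(n);
        // the count roughly doubles each time n grows by 2, i.e. Theta(2^(n/2))
        std::cout << "n=" << n << "  calls=" << calls << "\n";
    }
}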
What is the time complexity of the recursive solution in the code below, taken from http://www.geeksforgeeks.org/dynamic-programming-set-32-word-break-problem/ ?
// returns true if string can be segmented into space separated
// words, otherwise returns false
bool wordBreak(string str)
{
    int size = str.size();

    // Base case
    if (size == 0) return true;

    // Try all prefixes of lengths from 1 to size
    for (int i = 1; i <= size; i++)
    {
        // str.substr(0, i) is the prefix (of the input string) of
        // length 'i'. We first check whether the current prefix is
        // in the dictionary. Then we recursively check the remaining
        // string str.substr(i, size-i), the suffix of length size-i.
        if (dictionaryContains(str.substr(0, i)) &&
            wordBreak(str.substr(i, size - i)))
            return true;
    }

    // If we have tried all prefixes and none of them worked
    return false;
}
I'm thinking it's n^2, because for n calls to the method, the worst case does (n-1) work (it iterates over the rest of the string recursively?). Or is it exponential / n!?
I'm having a tough time figuring out the Big-O of recursive functions such as these. Any help is much appreciated!
The answer is exponential, to be precise O(2^(n-2)).
In each call you invoke the recursive function on suffixes of length 1, 2, ..., n-1 (in the worst case). To do the work for length n, you recursively do the work for all lengths n-1, n-2, ..., 1. So if T(n) is the time complexity of the current call, internally you do work equal to the sum of T(n-1), T(n-2), ..., T(1).
Mathematically:
T(n) = T(n-1) + T(n-2) + ... + T(1)
T(1) = T(2) = 1
If you really don't know how to solve this, an easier way to solve the above recurrence is to just substitute values.
T(1) = T(2) = 1
T(3) = T(1) + T(2) = 1+1 =2; // 2^1
T(4) = T(1)+ T(2) + T(3) = 1+1+2 =4; //2^2
T(5) = T(1) + T(2) +T(3) +T(4) = 1+1+2+4 =8; //2^3
So if you substitute the first few values, it becomes clear that the time complexity is 2^(n-2).
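The pattern can also be proved directly by telescoping the sum, a step the substitution hides:

T(n) = T(n-1) + T(n-2) + ... + T(1)
     = T(n-1) + [T(n-2) + ... + T(1)]
     = T(n-1) + T(n-1)    (the bracketed sum is exactly T(n-1))
     = 2T(n-1)

so T(n) = 2^(n-3) * T(3) = 2^(n-2) for n ≥ 3.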
The short version:
The worst-case runtime of this function is Θ(2^n), which is surprising because it ignores the quadratic amount of work done by each recursive call simply splitting the string into pieces and checking which prefixes are words.
The longer version: let's imagine we have an input string consisting of n copies of the letter a followed by the letter b (we'll abbreviate this as aⁿb), and create a dictionary containing the words a, aa, aaa, ..., aⁿ.
Now, what will the recursion do?
First, notice that none of the recursive calls will return true, because there's no way to account for the b at the end of the string. This means that each recursive call will be to a string of the form aᵏb. Let's denote the amount of time required to process such a string as T(k). Each one of these calls will fire off k smaller calls, one to each suffix of aᵏb.
However, we also have to account for the other contributors to the runtime. In particular, calling string::substr to form a substring of length k takes time O(k). We also need to factor in the cost of checking if a prefix is a word. The code isn't shown here for how to do this, but assuming we use a trie or a hash table we can assume that the cost of checking if a string of length k is a word is O(k) as well. This means that, at each point where we make a recursive call, we will do O(n) work - some amount of work to check if the prefix is a word, and some amount of work to form the substring corresponding to the suffix.
Therefore, we get that
T(k) = T(0) + T(1) + T(2) + ... + T(k-1) + O(k^2)
Here, the first part of the recurrence corresponds to each of the recursive calls, and the second part of the recurrence accounts for the cost of making each of the substrings. (There are k substrings, each of which takes time O(k) to process.) Our goal is to solve this recurrence, and just for simplicity we'll assume that T(0) = 1.
To do this, we'll use the "expand and contract" technique. Let's write out the values of T(k) and T(k+1) next to each other:
T(k) = T(0) + T(1) + T(2) + ... + T(k-1) + O(k^2)
T(k+1) = T(0) + T(1) + T(2) + ... + T(k-1) + T(k) + O((k+1)^2)
Subtracting this first expression from the second gives us that
T(k+1) - T(k) = T(k) + O(k),
or that
T(k+1) = 2T(k) + O(k).
How did we get O(k) out of the difference of two O(k^2) terms? It's because (k + 1)^2 - k^2 = 2k + 1 = O(k).
This is an easier recurrence to work with, since each term just depends on the previous one. For simplicity, we're going to assume that the O(k) term is literally just k, giving the recurrence
T(k+1) = 2T(k) + k.
This recurrence solves to T(k) = 2^(k+1) - k - 1. To see this, we can use a quick inductive argument. Specifically:
T(0) = 1 = 2 - 1 = 2^(0+1) - 0 - 1
T(k+1) = 2T(k) + k = 2(2^(k+1) - k - 1) + k
       = 2^(k+2) - 2k - 2 + k
       = 2^(k+2) - k - 2
       = 2^((k+1)+1) - (k+1) - 1
Therefore, we get that our runtime is Θ(2^n), since we can ignore the lower-order n term.
I was very surprised to see this, because it means that the quadratic work done by each recursive call does not factor into the overall runtime! I would have initially guessed that the runtime would be something like Θ(n · 2^n) before doing this analysis. :-)
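If you want to check this empirically, here is a small harness (my own sketch; the global dictionary, call counter, and dictionaryContains below are stand-ins, not the original article's code) that counts calls to wordBreak on the worst-case input aⁿb:

#include <iostream>
#include <string>
#include <unordered_set>

std::unordered_set<std::string> dict;  // stand-in dictionary
long long calls = 0;                   // counts invocations of wordBreak

bool dictionaryContains(const std::string& word) {
    return dict.count(word) > 0;
}

bool wordBreak(std::string str) {
    ++calls;
    int size = str.size();
    if (size == 0) return true;
    for (int i = 1; i <= size; i++) {
        if (dictionaryContains(str.substr(0, i)) &&
            wordBreak(str.substr(i, size - i)))
            return true;
    }
    return false;
}

int main() {
    for (int n = 2; n <= 18; n += 2) {
        dict.clear();
        for (int k = 1; k <= n; k++)
            dict.insert(std::string(k, 'a'));  // words a, aa, ..., a^n
        calls = 0;
        wordBreak(std::string(n, 'a') + 'b');  // worst-case input a^n b
        // calls roughly double each time n increases by 1, i.e. Theta(2^n)
        std::cout << "n=" << n << "  calls=" << calls << "\n";
    }
}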
I believe the answer should actually be O(2^(n-1)). You can see a proof of this, as well as a worst-case example, here:
https://leetcode.com/problems/word-break/discuss/169383/The-Time-Complexity-of-The-Brute-Force-Method-Should-Be-O(2n)-and-Prove-It-Below
One intuitive way I think about the complexity here is: how many ways are there to add spaces to the string, i.e. to break the word?
For a 4-letter word:
(no. of ways to break at index 0-1) * (no. of ways to break at index 1-2) * (no. of ways to break at index 2-3) = 2 * 2 * 2.
The 2 signifies the two options at each gap: either you break there or you don't.
O(2^(n-1)) is then the recursive complexity of wordBreak ;)
Consider the element uniqueness problem, in which we are given a range i, i+1, ..., j of indices for an array A, and we want to determine if the elements of this range, A[i], A[i+1], ..., A[j], are all unique, that is, there is no repeated element in this group of array entries. Consider the following (inefficient) recursive algorithm.
public static boolean isUnique(int[] A, int start, int end) {
    if (start >= end) return true;  // the range is too small for repeats

    // check recursively if first part of array A is unique
    if (!isUnique(A, start, end - 1))  // there is a duplicate in A[start],...,A[end-1]
        return false;

    // check recursively if second part of array A is unique
    if (!isUnique(A, start + 1, end))  // there is a duplicate in A[start+1],...,A[end]
        return false;

    return (A[start] != A[end]);  // check if first and last are different
}
Let n denote the number of entries under consideration, that is, let n = end − start + 1. What is an upper bound on the asymptotic running time of this code fragment for large n? Provide a brief and precise explanation. (You lose marks if you do not explain.) To begin your explanation, you may state how many recursive calls the algorithm will make before it terminates and analyze the number of operations per invocation. Alternatively, you may provide the recurrence characterizing the running time of this algorithm and then solve it using the iterative substitution technique.
This question is from a sample practice exam for an Algorithms class. This is my current answer; can someone please help verify whether I'm on the right track?
Answer:
The recurrence equation:
T(n) = 1 if n = 1,
T(n) = 2T(n-1) if n > 1
After solving using iterative substitution I got
2^k * T(n-k), which I solved to O(2^(n-1)) and simplified to O(2^n).
Your recurrence relation should be T(n) = 2T(n-1) + O(1) with T(1) = O(1). However, this doesn't change the asymptotics; the solution is still T(n) = O(2^n). To see this, you can expand the recurrence relation to get T(n) = O(1) + 2(O(1) + 2(O(1) + ...)), so you have T(n) = O(1) * (1 + 2 + 4 + ... + 2^n) = O(1) * (2^(n+1) - 1) = O(2^n).
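Written out line by line, the expansion looks like this (a sketch, writing c for the O(1) term):

T(n) = 2T(n-1) + c
     = 4T(n-2) + 2c + c
     = 8T(n-3) + 4c + 2c + c
     ...
     = 2^(n-1) * T(1) + c * (2^(n-2) + ... + 2 + 1)
     = 2^(n-1) * T(1) + c * (2^(n-1) - 1)
     = O(2^n)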
I am somewhat confused by the running-time analysis of a program here which has recursive calls that depend on an RNG (randomly generated number).
Let's begin with the pseudocode, and then I will go into what I have thought about so far.
Func1(A, i, j)
/* A is an array of at least j integers */
    if (i ≥ j) then return (0);
    n ← j − i + 1;               /* n = number of elements from i to j */
    k ← Random(n);
    s ← 0;                       /* constant time */
    for r ← i to j do
        A[r] ← A[r] − A[i] − A[j];   /* constant time */
        s ← s + A[r];                /* constant time */
    end
    s ← s + Func1(A, i, i+k-1);      /* recursive call 1 */
    s ← s + Func1(A, i+k, j);        /* recursive call 2 */
    return (s);
Okay, now let's get into the math I have tried so far. I'll try not to be too pedantic here, as this is just a rough, estimated analysis of the expected running time.
First, let's consider the worst case. Note that k = Random(n) is at least 1 and at most n. The worst case is therefore when k = 1 is picked, which makes the total running time T(n) = cn + T(1) + T(n-1). This works out to roughly cn^2 time overall (you can use Wolfram to solve recurrence relations if you are stuck or rusty, although this one is fairly simple).
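Unrolling that worst-case recurrence (a quick sketch) confirms the quadratic estimate:

T(n) = cn + T(1) + T(n-1)
     = c*(n + (n-1) + ... + 2) + (n-1)*T(1) + T(1)
     = c*(n(n+1)/2 - 1) + n*T(1)
     = Θ(n^2)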
Now, here is where I get somewhat confused. For the expected running time, we have to base the analysis on the probability distribution of the random number k: we sum the running times for the different values of k, each weighted by its probability. By lemma (or hopefully intuitive logic): the probability of any one randomly generated k, with k between 1 and n, is 1/n.
Therefore, (in my opinion/analysis) the expected running time is:
ET(n) = cn + (1/n) * Sum(from k=1 to n-1) of (ET(k-1) + ET(n-k))
Let me explain a bit. The cn is simply for the loop, which runs from i to j. The summation represents all the possible values of k, and the (1/n) factor is there because the probability of any one k is 1/n. The terms inside the summation are the running times of the recursive calls of Func1. The first term is ET(k-1) because that recursive call loops from i to k-1 (roughly ck) and then possibly calls Func1 again. The second term represents the second recursive call, which loops from i+k to j, i.e. over n-k elements.
Upon expanding the summation, I find that ET(n) is of order n^2. However, as a test case, plugging in k = n/2 gives a total running time for Func1 of roughly n log(n). This is why I am confused: how can that be, if the estimated running time is of order n^2? Am I considering a "good" case by plugging in n/2 for k? Or am I thinking about k in the wrong way?
The expected time complexity is ET(n) = O(n log n). The following is a proof I derived myself; please point out any errors:
ET(n) = P(k=1)*(ET(1)+ET(n-1)) + P(k=2)*(ET(2)+ET(n-2)) + ... + P(k=n-1)*(ET(n-1)+ET(1)) + c*n
As the RNG is uniformly random, P(k=x) = 1/n for all x,
hence ET(n) = 1/n*(2*ET(1) + 2*ET(2) + ... + 2*ET(n-1)) + c*n
ET(n) = 2/n*sum(ET(i)) + c*n,            i in (1, n-1)
ET(n-1) = 2/(n-1)*sum(ET(i)) + c*(n-1),  i in (1, n-2)
sum(ET(i)) i in (1, n-2) = (ET(n-1) - c*(n-1))*(n-1)/2
ET(n) = 2/n*(sum(ET(i)) i in (1, n-2) + ET(n-1)) + c*n
ET(n) = 2/n*((ET(n-1) - c*(n-1))*(n-1)/2 + ET(n-1)) + c*n
ET(n) = 2/n*((n+1)/2*ET(n-1) - c*(n-1)*(n-1)/2) + c*n
ET(n) = (n+1)/n*ET(n-1) + c*n - c*(n-1)*(n-1)/n
ET(n) = (n+1)/n*ET(n-1) + c*(2n-1)/n = (n+1)/n*ET(n-1) + Θ(1)
Solving the recurrence (dividing both sides by n+1 makes it telescope; write c for the Θ(1) constant):
ET(n)/(n+1) = ET(n-1)/n + c/(n+1)
ET(n)/(n+1) = ET(1)/2 + c*(1/3 + 1/4 + ... + 1/(n+1))
1/3 + 1/4 + ... + 1/(n+1) = O(log n)
ET(n) = (n+1)*ET(1)/2 + c*(n+1)*O(log n)
ET(n) = O(n log n)
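As an empirical cross-check (my own sketch, not part of the proof; the array updates are dropped since they don't affect the recursion, only the constant factor), we can count the loop iterations of Func1 over many random runs:

#include <cmath>
#include <iostream>
#include <random>

std::mt19937 rng(12345);  // fixed seed for reproducibility
long long ops = 0;        // total loop iterations (the c*n term)

void func1(int i, int j) {
    if (i >= j) return;
    int n = j - i + 1;
    ops += n;                                    // the for loop costs Theta(n)
    std::uniform_int_distribution<int> d(1, n);  // k = Random(n), uniform in 1..n
    int k = d(rng);
    func1(i, i + k - 1);                         // recursive call 1 (k elements)
    func1(i + k, j);                             // recursive call 2 (n-k elements)
}

int main() {
    const int trials = 200;
    for (int n = 1000; n <= 16000; n *= 2) {
        double total = 0;
        for (int t = 0; t < trials; ++t) {
            ops = 0;
            func1(1, n);
            total += ops;
        }
        double avg = total / trials;
        // the ratio avg/(n*log n) settles near a constant: ET(n) = O(n log n)
        std::cout << "n=" << n << "  avg ops=" << avg
                  << "  avg/(n log n)=" << avg / (n * std::log(n)) << "\n";
    }
}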