Runtime complexity of recursive permutation function - algorithm

I wrote this code that returns all the permutations of the provided string. Now I want to calculate its run-time complexity and need help with that.
The code recursively calls the permutationRecursively function N times (once for every character of the string st), and then there are two for loops: one loops through all the permutations returned by the recursive call (i.e. for 'a' it will be ['a'], for 'ab' it will be ['ab', 'ba'], and so on), and the other loops over every insertion position within each permutation. I am really confused about this part. What will be the complexity of this specific part?
I assume that for all the recursive calls it will be O(N), and then for the inner loops it will be O(A*B). So the total would be O(N*A*B). Is that correct?
import time

def permutationRecursively(st):
    if len(st) < 2:
        return [st]
    else:
        # Permutations of everything except the last character.
        permutations = permutationRecursively(st[0:-1])
        newPermutations = []
        wordToInsert = st[-1]
        for permutationPair in permutations:
            # Insert the last character at every possible position.
            for index in range(len(permutationPair) + 1):
                newPermutations.append(permutationPair[0:index] + wordToInsert + permutationPair[index:])
        return newPermutations

start_time = time.time()
permutationRecursively("abbc")
print("--- %s seconds ---" % (time.time() - start_time))

Your function works by first recursively calling itself on an input of size n-1. Then it loops through each element of the result (of which there are (n-1)!), and for each element it does O(n²) work, since the inner loop runs n times (range(len(permutationPair)+1) has n values) and each string concatenation costs O(n).
Hence we get the following recurrence relation for the time complexity T(n):
T(n) = T(n-1) + (n-1)! n²
The asymptotic behaviour of this relation is as follows (the sum Σ (k-1)! k² is dominated by its last term):
T(n) ∈ Θ((n-1)! n²) = Θ(n·n!)
So, in particular, T(n) ∉ O(n!).
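For a quick numerical sanity check, here is a minimal Python sketch that evaluates the recurrence T(n) = T(n-1) + (n-1)! n² directly and compares it against n·n!; if the Θ(n·n!) bound is right, the ratio should settle near a constant:

import math

def T(n):
    # Evaluate the recurrence T(n) = T(n-1) + (n-1)! * n^2 with T(1) = 1.
    if n < 2:
        return 1
    return T(n - 1) + math.factorial(n - 1) * n ** 2

for n in range(2, 12):
    # For T(n) = Theta(n * n!) this ratio approaches a constant (here 1).
    print(n, T(n) / (n * math.factorial(n)))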

Related

How will summing a sub array affect time complexity in a nested for loop?

Trying to calculate time complexity of some simple code but I do not know how to calculate time complexity while summing a sub array. The code is as follows:
for i = 1 to n {
    for j = i+1 to n {
        s = sum(A[i...j])
        B[i,j] = s
    }
}
So I know the nested for loops inevitably give us O(n^2), and I believe the total work done by the sum calls inside the inner loop is also O(n^2). However, I think the time complexity for the whole algorithm is O(n^3). How do I get there with this information? Thank you!
I like to think of for loops as summations. As such, the number of steps (written as a function, T(n)) is:
T(n) = \sum_{i=1}^n numStepsInInnerForLoop
Here, I'm using something written in pseudo-MathJax, and have written the outer for loop as a summation from i=1 to n of the number of steps in the inner for loop (the one from i+1 to n). You can think of this analogously as summing the number of steps in the inner for loop, from i=1 to n. Substituting in numStepsInInnerForLoop results in:
T(n) = \sum_{i=1}^n [\sum_{j=i+1}^n numStepsOfSumFunction]
This function now represents the number of steps where both for loops have been fleshed out as summations. Assuming that s = sum(A[i...j]) takes j-i+1 steps and B[i,j]=s takes just one step, we can substitute numStepsOfSumFunction with these more useful parameters and the equation now becomes:
T(n) = \sum_{i=1}^n [\sum_{j=i+1}^n (j-i+1 + 1)]
When you solve these summations (using the kind of formulas you see on this summation tutorial page) you'll get a cubic function for T(n) which corresponds to O(n^3).
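To make the summations concrete, here is a small sketch that tallies the steps exactly as modelled above (j-i+1 steps for the sum, one for the assignment) and divides by n^3:

def count_steps(n):
    # Charge j - i + 1 steps for sum(A[i...j]) and 1 step for B[i,j] = s.
    steps = 0
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            steps += (j - i + 1) + 1
    return steps

for n in (10, 100, 1000):
    # A cubic step count means steps / n**3 levels off at a constant (1/6 here).
    print(n, count_steps(n) / n ** 3)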
Your reasoning leads me to believe that you're running this algorithm on an array of size n. If so, then every time you call the sum method in the inner for loop, you're calling it on a specific range of indices (i to j). Across the iterations of this inner loop, the sum method iterates through 1, 2, 3, ..., then finally n elements in the last iteration as j increases from i + 1 to n. Note that this is when i = 1. As i increases, it won't necessarily go from 1, 2, 3, ... to n anymore, since it will only go up to n - i elements. Big O, though, is the worst case, so we use this scenario.
1 + 2 + 3 + ... + n gives us O(n^2). The runtime of the sum method depends on the values of i and j; however, when run inside the loops with the given bounds, the total cost of all the calls to sum across one full pass of the inner for loop is O(n^2). And finally, since the inner for loop is executed n times (once per iteration of the outer loop), the total time complexity for the whole algorithm is O(n^3).

Time Complexity of Recursive Algorithm Array

I have a recursive algorithm, shown below, which computes the smallest element in the array.
ALGORITHM F_min1(A[0..n-1])
//Input: An array A[0..n-1] of real numbers
If n = 1
    return A[0]
else
    temp ← F_min1(A[0..n-2])
    If temp ≤ A[n-1]
        return temp
    else
        return A[n-1]
I think the recurrence relation should be
T(n)=T(n-1) + n
However, I am not sure about the + n part. I want to be sure in which cases the recurrence is T(n) = T(n-1) + 1 and in which cases it is T(n) = T(n-1) + n.
The recurrence should be
T(1) = 1,
T(n) = T(n-1) + 1
because besides the recursive call on the smaller array, all the computational effort (reading the last entry of A and doing the comparison) takes constant time in the unit-cost measure. The algorithm can be understood as divide-and-conquer, where the divide part is splitting the array into a prefix and the last entry; the conquer part, a single comparison, cannot take more than constant time here. In short, there is no case where linear work is done after the recursive call.
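For concreteness, here is a direct Python transcription of the pseudocode (a sketch; note that it passes an index instead of slicing, because a call that copied the prefix with A[0:n-1] would pay O(n) for the copy alone, and that is exactly the situation in which the recurrence becomes T(n) = T(n-1) + n):

def f_min1(A, n=None):
    # F_min1: minimum of A[0..n-1]; passing n avoids copying subarrays,
    # so the work per call besides the recursion is O(1).
    if n is None:
        n = len(A)
    if n == 1:
        return A[0]                                   # T(1) = 1
    temp = f_min1(A, n - 1)                           # T(n-1)
    return temp if temp <= A[n - 1] else A[n - 1]     # + O(1)

print(f_min1([3.5, -1.0, 2.25, 0.5]))                 # -1.0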

Time complexity of some recursive and non-recursive algorithms

I have two pseudo-code algorithms:
RandomAlgorithm(modVec[0 to n − 1])
    b = 0;
    for i = 1 to n do
        b = 2·b + modVec[n − i];
    for i = 1 to b do
        modVec[i mod n] = modVec[(i + 1) mod n];
    return modVec;
Second:
AnotherRecursiveAlgo(multiplyVec[1 to n])
    if n ≤ 2 do
        return multiplyVec[1] × multiplyVec[1];
    return multiplyVec[1] × multiplyVec[n] +
           AnotherRecursiveAlgo(multiplyVec[1 to n/3]) +
           AnotherRecursiveAlgo(multiplyVec[2n/3 to n]);
I need to analyse the time complexity for these algorithms:
For the first algorithm I got that the first loop is O(n). The second loop has a best case and a worst case: the best case is O(1), when the loop runs only once, and the worst case happens when the first loop has produced a big b. I don't know how to write this idea down as a time complexity; I get b = sum(from k=0 to n-1) of 2^k · modVec[k] and I get stuck there.
For the second algorithm, I just don't get how to work out the time complexity; it should depend on n, so I think we need a recurrence.
Thanks for the help.
The first problem is a little strange, all right.
If it helps, envision modVec as an array of 1's and 0's.
In this case, the first loop converts this array to a value.
This is O(n).
For instance, (1, 1, 0, 1, 1) will yield b = 27.
Your second loop runs b times. The dominating term for the value of b is 2^(n-1), i.e. b = O(2^n); in the worst case (all ones) b = 2^n − 1. The assignment you do inside the loop is O(1), so the second loop costs O(2^n) and dominates the O(n) first loop.
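A minimal sketch of that first phase (assuming, as above, that modVec holds 1's and 0's):

def first_loop(modVec):
    # First loop of RandomAlgorithm: reads modVec as a binary number whose
    # most significant bit is the last entry. For an all-ones vector of
    # length n, b = 2**n - 1, so the second loop then runs Theta(2**n) times.
    b = 0
    n = len(modVec)
    for i in range(1, n + 1):
        b = 2 * b + modVec[n - i]
    return b

print(first_loop([1, 1, 0, 1, 1]))   # 27, matching the example above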
The second algorithm does depend on n. Your base case is a simple multiplication, O(1). The recursion step has three terms:
simple multiplication
recur on n/3 elements
recur on n/3 elements (from 2n/3 to the end is n/3 elements)
Just as binary partitions give recursion depth log_2(n), this one gives depth log_3(n). But the two recursive calls per level do matter here: the number of calls doubles at each of the log_3(n) levels, for 2^(log_3 n) = n^(log_3 2) ≈ n^0.63 calls in total. By the Master theorem, T(n) = 2T(n/3) + O(1) = Θ(n^(log_3 2)).
Does that push you to a solution?
First Algorithm
To me this boils down to O(First-For-Loop) + O(Second-For-Loop).
O(First-For-Loop) is simply O(n).
O(Second-For-Loop) interestingly depends on n. Therefore, to me it can be depicted as O(f(n)), where f(n) is some function of n; from the code presented, f(n) is the value b computed by the first loop, which can be as large as 2^n − 1.
The answer consequently becomes O(n) + O(f(n)). This boils down to O(n) or O(f(n)), depending upon which one is larger and more dominant (the lower-order terms don't matter in big-O notation).
Second Algorithm
In this case, I see that each call to the function does three things: a simple multiplication, which is O(1) and won't matter, and two recursive calls.
So each call results in 2 additional recursions, but each one is on an input a third of the size. The doubling therefore stops after log_3(n) levels, not n levels, so the total number of calls is O(2^(log_3 n)) = O(n^(log_3 2)), not O(2^n).
Consequently, the time complexity of this algorithm is O(n^(log_3 2)).
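As a numerical check of that bound, here is a sketch that mirrors only the recursion pattern (two calls, each on a third of the input) and counts the calls; the ratio to n^(log_3 2) stays bounded instead of exploding like 2^n:

import math

def count_calls(n):
    # Two recursive calls on inputs of a third the size, O(1) work per call.
    if n <= 2:
        return 1
    return 1 + 2 * count_calls(n // 3)

for n in (3 ** 6, 3 ** 8, 3 ** 10):
    # Bounded ratio => the call count grows like n ** log_3(2), not 2 ** n.
    print(n, count_calls(n) / n ** math.log(2, 3))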

Forming recurrence relations

I have a question on forming recurrence relations and calculating the time complexity.
If we have a recurrence relation T(n) = 2T(n/2) + c, then it means that the constant amount of work c is divided into 2 parts T(n/2) + T(n/2) when drawing the recursion tree.
Now consider the recurrence relation for factorial, which is T(n) = n*T(n-1) + c. If we follow the above method, then we should divide the work into n instances of T(n-1) and then evaluate the time complexity. However, if we calculate it this way, the answer will be O(n^n), because we will have O(n^n) recursive calls, which is wrong.
So my question is: why can't we use the same approach of dividing the problem into subparts as in the first case?
Let a recurrence relation be T(n) = a * T(n/b) + O(n).
This recurrence implies that there is a recursive function which:
divides the original problem into a subproblems
the size of each subproblem will be n/b if the current problem size is n
does O(n) additional work at each call (dividing the problem and combining the subproblem results); when the subproblems become trivial (too easy to solve), no further recursion is needed and they are solved directly.
When we say that original problem is divided into a subproblems, we mean that there are a recursive calls in the function body.
So, for instance, if the function is:
int f(int n)
{
    if(n <= 1)
        return n;
    return f(n-1) + f(n-2);
}
we say that the problem (of size n) is divided into 2 subproblems, of sizes n-1 and n-2. The recurrence relation would be T(n) = T(n-1) + T(n-2) + c. This is because there are 2 recursive calls, with different arguments.
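For contrast, here is a small instrumented sketch (in Python, though the snippets in this answer are C-style) showing that two recursive calls per invocation really do make the number of calls grow exponentially:

def f(n, counter):
    # T(n) = T(n-1) + T(n-2) + c: two recursive calls per invocation.
    counter[0] += 1
    if n <= 1:
        return n
    return f(n - 1, counter) + f(n - 2, counter)

c = [0]
f(20, c)
print(c[0])   # 21891 calls for n = 20 -- exponential growth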
But, if the function is like:
int f(int n)
{
    if(n <= 2)
        return n;
    return n * f(n-1);
}
we say that the problem (of size n) is divided into only 1 subproblem, which is of size n-1. This is because there is only 1 recursive call.
So, the recurrence relation would be T(n) = T(n-1) + c.
If we multiplied T(n-1) by n, as might seem natural, we would be claiming that n recursive calls were made, which is not the case.
Remember, our main motive for forming recurrence relations is to perform (asymptotic) complexity analysis of recursive functions. Even though it may seem that n cannot be discarded from the relation because it depends on the input size, the factor n does not play the same role in the running time as it does in the function's result.
But if you are talking about the value returned by the function, that is f(n) = n * f(n-1). There we do multiply by n, because it is an actual value used in the computation.
Now, coming to the c in T(n) = T(n-1) + c: it merely says that when we solve a problem of size n, we must solve a smaller problem of size n-1 and also do a constant amount of other work (comparisons, a multiplication, returning values).
We can never divide a "constant amount of work c" into two parts T(n/2) and T(n/2), even using the recursion-tree method. What we are in fact dividing is the problem into two halves. The same c amount of work is needed in each recursive call at each level of the recursion tree.
If there were a recurrence relation like T(n) = 2T(n/2) + O(n), where the amount of work to be done depends on the input size, then the amount of work to be done at each level will be halved at the next level, just like you described.
But if the recurrence relation were T(n) = T(n-1) + O(n), we would not be dividing the amount of work into two halves at the next recursion level. We would just be shrinking the problem by one at each successive level (an n-sized problem becomes an (n-1)-sized one), so the per-level work decreases by a constant amount, not by half.
To check how the amount of work will change with recursion, apply substitution method to your recurrence relation.
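To see the difference between the running time T(n) and the returned value f(n) concretely, here is a small instrumented sketch (again in Python) of the factorial function above:

def f(n, counter):
    # One recursive call per invocation: T(n) = T(n-1) + c,
    # so the call count grows linearly, even though f(n) = n * f(n-1) = n!.
    counter[0] += 1
    if n <= 2:
        return n
    return n * f(n - 1, counter)

c = [0]
print(f(10, c))   # 3628800, i.e. 10! -- the value is factorial
print(c[0])       # 9 -- but only O(n) calls were made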
I hope I have answered your question.

Time complexity for algorithm

I'd like to know the Big Oh for the following algorithm
public List<String> getPermutations(String s){
    if(s.length()==1){
        List<String> base = new ArrayList<String>();
        base.add(String.valueOf(s.charAt(0)));
        return base;
    }
    List<String> combos = createPermsForCurrentChar(s.charAt(0),
            getPermutations(s.substring(1)));
    return combos;
}

private List<String> createPermsForCurrentChar(char a, List<String> temp){
    List<String> results = new ArrayList<String>();
    for(String tempStr : temp){
        // A string of length m has m+1 insertion positions, hence <=.
        for(int i=0;i<=tempStr.length();i++){
            String prefix = tempStr.substring(0, i);
            String suffix = tempStr.substring(i);
            results.add(prefix + a + suffix);
        }
    }
    return results;
}
Here's what I think: getPermutations is called n times, where n is the length of the string.
My understanding is that
createPermsForCurrentChar is O(l * m), where l is the length of the list temp and m is the length of each string in temp.
However, since we are looking at worst-case analysis, m <= n and l <= n!.
The length of the temp list keeps growing with each recursive call, and so does the number of characters in each string in temp.
Does this mean that the time complexity of this algorithm is O(n * n! * n)? Or is it O(n * n * n)?
Well, I will just write this up as an answer instead of having a long list of comments.
Denote the run time of getPerm on a string of length n by T(n). Observe that inside getPerm, it calls getPerm on a string of length n-1, so clearly
T(n)=T(n-1) + [run time of createPerm]
Note that createPerm has 2 nested loops. The outer loop iterates over the result of getPerm on a string of length n-1, which is a list of (n-1)! strings, and for each such string the inner loop runs n times (a string of n-1 characters has n insertion positions), each iteration building one new string, counted here as one step. From this, we get that
[run time of createPerm] = n (n-1)! = n!
Substituting this into the previous equation gives
T(n) = T(n-1) + n!
T(1) = 1 from the exit condition. We can just expand to find the solution (or, alternatively, use Z-transform: Can not figure out complexity of this recurrence). Since it is a simple equation, expanding is faster:
T(n) = n! + (n-1)! + (n-2)! + ... + 1!
The n! term dominates the sum (the whole sum is at most 2·n! for n ≥ 2), so
T(n) = Θ(n!)
Exercise: prove this by induction! :p
Does this make sense? Let's think about it. We are creating all permutations of n characters, and there are n! of them: http://en.wikipedia.org/wiki/Permutation.
EDIT: note that T(n) = Θ(n!) in particular means T(n) is O(n!).
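To check the factorial bound empirically, here is a minimal Python mirror of the Java code above, counting one step per string built, as in the analysis:

import math

def perms(s, counter):
    # Mirror of getPermutations/createPermsForCurrentChar; counter[0]
    # tallies every string built by the inner loop.
    if len(s) == 1:
        return [s[0]]
    temp = perms(s[1:], counter)
    results = []
    for t in temp:
        for i in range(len(t) + 1):   # n insertion positions for length n-1
            counter[0] += 1
            results.append(t[:i] + s[0] + t[i:])
    return results

for n in range(2, 8):
    c = [0]
    out = perms("abcdefg"[:n], c)
    print(n, len(out), c[0], math.factorial(n))   # n! strings; steps ~ Theta(n!)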
I'm not the best with combinatorics, but let me count the work, where n is the number of characters in your string; the count will come out factorial rather than polynomial.
My logic is this:
The number of times that
getPermutations(String)
is called is related to the call:
createPermsForCurrentChar(s.charAt(0), getPermutations(s.substring(1)));
On the first call you pass arguments (charAt(0), substring of length s.length-1), then (charAt(1), substring of length s.length-2)..., for O(n) calls.
What's more important is the # of elements in List temp each time we enter createPermsForCurrentChar.
First, let's analyze the function as a standalone thing:
Say there are k elements in List<String> temp, each of length L. Note that within a single call, every string in temp has the same length, so the lengths are not 1 through k.
The outer for loop will iterate k times; this is easy.
The inner for loop will iterate L+1 times for each string, once per insertion position.
So createPermsForCurrentChar does O(k·L) work. What we want to know now is this: how many elements will be in List<String> temp for each call, and how long are they?
When we finally reach the base case of the recursion, temp holds a single one-character string, so k = 1 and L = 1.
As the calls pop off the stack, each level multiplies the list size by the number of insertion positions: 1 string of length 1 becomes 2 strings of length 2, then 6 strings of length 3, then 24 of length 4, and so on. At the level whose strings have length L there are L! of them, so k grows factorially, not as 1, 2, 3, ..., n.
The work at the level that produces strings of length L is therefore (L-1)!·L = L!, and summing over all levels gives 2! + 3! + ... + n! = Θ(n!), since the last term dominates. If we also charge O(L) for building each new string, the total becomes Θ(n·n!).
Either way, the algorithm is factorial, not O(n^3); this agrees with the answer above.
