Trying to analyze the runtime complexity of the following algorithm:
Problem: We have an m * n array A consisting of lower-case letters and a target string s. The goal is to determine whether the target string appears in A or not.
algorithm:
for (int i = 0; i < m; i++) {
    for (int j = 0; j < n; j++) {
        if (A[i][j] is equal to the first character of s) search(i, j, s)
    }
}

boolean search(int i, int j, target s) {
    if (the current position within s equals the length of s) then we have found the target
    loop over the four possible directions from (i, j): {p, q} = {i+1, j} or {i-1, j} or {i, j+1} or {i, j-1};
        if the coordinate (p, q) has never been visited before:
            search(p, q, target s)
}
One runtime complexity analysis that I read is the following:
At each position in the array A, we are first presented with 4 possible directions to explore. After the first step we are only given 3 possible choices, because we never go back to the cell we just came from. So the worst-case runtime complexity is O(m * n * 3**len(s)).
However, I disagree with this analysis: even though we are only presented with 3 possible choices in each round, we still need to spend one operation to check whether a direction has already been visited. For instance, in Java you would probably use a boolean array to track whether a spot has been visited, so knowing whether a spot has been visited requires a conditional check, and that costs one operation. The analysis above does not seem to take this into account.
What should be the runtime complexity?
update:
Let us suppose that the length of the target string is l and the runtime complexity at a given position in the matrix is T(l). Then we have:
T(l) = 4T(l - 1) + 4 = 4(3T(l - 2) + 4) + 4 = 4(3(3T(l - 3) + 4) + 4) + 4 = ... = 4 * 3**(l - 1) + 4 + 4*4 + 4*3*4 + ...
The +4 comes from the fact that in each round we loop over four directions, in addition to the (up to) three recursive calls.
What should be the runtime complexity?
The mentioned analysis is correct and the complexity is indeed O(m * n * 3**len(s)).
For instance, in java you probably just use a boolean array to track whether one spot has been visited before, so in order to know whether a spot has been visited or not, one needs a conditional check, and that costs one operation.
That is correct and does not contradict the analysis.
The worst case we can construct is a matrix filled with a single letter a and a string aaaa....aaaax (many letters a and one x at the end). If m, n and len(s) are large enough, almost every call of the search function will generate 3 recursive calls of itself. Each of those calls will generate another 3 calls (which gives us a total of 9 calls at depth 2), each of them will generate another 3 calls (27 calls at depth 3), and so on. Checking the current string character, the conditional checks and spawning a recursive call are all O(1), so the complexity of the whole search function is O(3**len(s)).
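To see this blow-up concretely, here is a small Python sketch (my own code, with assumed names, not part of the original algorithm) that counts search() calls on that worst case: a board full of the letter a and a target "aa...ax" that can never fully match.

    def count_calls(m, n, s):
        board = [['a'] * n for _ in range(m)]
        calls = 0

        def search(i, j, k, visited):
            nonlocal calls
            calls += 1
            if k == len(s):
                return True
            if i < 0 or j < 0 or i >= m or j >= n:
                return False
            if (i, j) in visited or board[i][j] != s[k]:
                return False
            visited.add((i, j))
            found = (search(i + 1, j, k + 1, visited) or
                     search(i - 1, j, k + 1, visited) or
                     search(i, j + 1, k + 1, visited) or
                     search(i, j - 1, k + 1, visited))
            visited.remove((i, j))
            return found

        for i in range(m):
            for j in range(n):
                search(i, j, 0, set())
        return calls

    # the call count grows roughly by a factor of 3 per extra character:
    for length in range(4, 9):
        print(length, count_calls(5, 5, 'a' * (length - 1) + 'x'))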
The solution is brute force. We have to touch each point on the board; that makes O(m*n) operations.
Now, for each point, we have to run dfs() to check whether the word exists. So we get
O(m * n * timeComplexityOf dfs)
Here is the dfs written in Python; let's examine its time complexity:
# assumes the globals board (2-D grid of letters), word (target string),
# ROWS, COLS (grid dimensions) and path (a set of visited coordinates)
def dfs(r, c, i):
    # O(1)
    if i == len(word):
        return True
    # O(1)
    # path is a set, implemented as a hash table,
    # so the time complexity of a lookup in it is O(1)
    if r < 0 or c < 0 or r >= ROWS or c >= COLS or word[i] != board[r][c] or (r, c) in path:
        return False
    # O(1)
    path.add((r, c))
    # O(1) to spawn each call
    res = (dfs(r + 1, c, i + 1) or
           dfs(r - 1, c, i + 1) or
           dfs(r, c + 1, i + 1) or
           dfs(r, c - 1, i + 1))
    # O(1)
    path.remove((r, c))
    return res
Since dfs calls itself recursively, think about how many dfs calls will be on the call stack; in the worst case it is the length of the word. That's why
O(m * n * word.length)
Related
I have a number n and a set of numbers S ⊆ {1..n} with size s (which is substantially smaller than n). I want to sample a number k ∈ {1..n} with equal probability, but the number is not allowed to be in the set S.
I am trying to solve the problem in at worst O(log n + s). I am not sure whether it's possible.
A naive approach is creating an array of numbers from 1 to n excluding all numbers in S and then pick one array element. This will run in O(n) and is not an option.
Another approach may be just generating random numbers ∈ {1..n} and rejecting them if they are contained in S. This has no deterministic bound on the running time, since a number in S could be drawn again and again. But on average this might be a practical solution if s is substantially smaller than n.
Say s is sorted. Generate a random number between 1 and n-s, call it k. We've chosen the k'th element of {1,...,n} - s. Now we need to find it.
Use binary search on s to find the count of the elements of s <= k. This takes O(log |s|). Add this to k. In doing so, we may have passed or arrived at additional elements of s. We can adjust for this by incrementing our answer for each such element that we pass, which we find by checking the next larger element of s from the point we found in our binary search.
E.g., n = 100, s = {1,4,5,22}, and our random number is 3. So our approach should return the third element of [2,3,6,7,...,21,23,24,...,100], which is 6. Binary search finds that 1 element is at most 3, so we increment to 4. Now we compare to the next larger element of s, which is 4, so we increment to 5. Repeating this finds 5 in s, so we increment to 6. We check s once more, see that 6 isn't in it, and stop.
E.g., n = 100, s = {1,4,5,22}, and our random number is 4. So our approach should return the fourth element of [2,3,6,7,...,21,23,24,...,100] which is 7. Binary search finds that 2 elements are at most 4, so we increment to 6. Now we compare to the next larger element of s which is 5 so increment to 7. We check s once more, see that the next number is > 7, so we stop.
If we assume that "s is substantially smaller than n" means |s| <= log(n), then we will increment at most log(n) times, and in any case at most s times.
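A short Python sketch of this procedure (my own code, not the answer author's), assuming s is given as a sorted list:

    import bisect
    import random

    def sample_excluding_sorted(n, s):
        # pick the k'th smallest allowed value, k uniform in 1 .. n - |s|
        k = random.randint(1, n - len(s))
        idx = bisect.bisect_right(s, k)   # how many excluded values are <= k
        k += idx
        # walk past any further excluded values we have now reached or passed
        while idx < len(s) and s[idx] <= k:
            k += 1
            idx += 1
        return k

For example, with n = 100 and s = [1, 4, 5, 22], an initial draw of 3 ends up returning 6, matching the walkthrough above.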
If s is not sorted then we can do the following. Create an array of bits of size s. Generate k. Parse s and do two things: 1) count the number of elements < k, call this r. At the same time, set the i'th bit to 1 if k+i is in s (0 indexed so if k is in s then the first bit is set).
Now, increment k a number of times equal to r plus the number of set bits in the array with an index <= the number of times incremented so far.
E.g., n = 100, s = {1,4,5,22}, and our random number is 4. So our approach should return the fourth element of [2,3,6,7,...,21,23,24,...,100] which is 7. We parse s and 1) note that 1 element is below 4 (r=1), and 2) set our array to [1, 1, 0, 0]. We increment once for r=1 and an additional two times for the two set bits, ending up at 7.
This is O(s) time, O(s) space.
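A sketch of this unsorted variant in Python (my own code; bits[i] records whether k + i is excluded, and r counts the excluded values strictly below k):

    import random

    def sample_excluding_unsorted(n, S):
        s = len(S)
        k = random.randint(1, n - s)
        bits = [False] * s
        r = 0
        for v in S:
            if v < k:
                r += 1
            elif v - k < s:              # v == k + i for some 0 <= i < s
                bits[v - k] = True
        inc = r
        i = 0
        while i <= inc and i < s:        # count set bits whose index we have reached
            if bits[i]:
                inc += 1
            i += 1
        return k + inc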
This is an O(1) solution with O(s) initial setup that works by mapping each non-allowed number > s to an allowed number <= s.
Let S be the set of non-allowed values, S(i), where i = [1 .. s] and s = |S|.
Here's a two part algorithm. The first part constructs a hash table based only on S in O(s) time, the second part finds the random value k ∈ {1..n}, k ∉ S in O(1) time, assuming we can generate a uniform random number in a contiguous range in constant time. The hash table can be reused for new random values and also for new n (assuming S ⊂ { 1 .. n } still holds of course).
To construct the hash H: first set j = 1, then iterate over S(i), the elements of S (they do not need to be sorted). If S(i) > s, then advance j until j ∉ S, add the key-value pair (S(i), j) to the hash table, and finally increment j.
To find a random value k, first generate a uniform random value in the range s + 1 to n, inclusive. If k is a key in H, then k = H(k). I.e., we do at most one hash lookup to ensure k is not in S.
Python code to generate the hash:
def substitute(S):
    H = dict()
    j = 1
    for s in S:
        if s > len(S):
            while j in S:
                j += 1
            H[s] = j
            j += 1
    return H
For the actual implementation to be O(s), one might need to convert S into something like a frozenset to ensure the test for membership is O(1) and also move the len(S) loop invariant out of the loop. Assuming the j in S test and the insertion into the hash (H[s] = j) are constant time, this has complexity O(s).
The generation of a random value is simply:
import random

def myrand(n, s, H):
    k = random.randint(s + 1, n)
    return (H[k] if k in H else k)
If one is only interested in a single random value per S, then the algorithm can be optimized to improve the common case, while the worst case remains the same. This still requires S be in a hash table that allows for a constant time "element of" test.
def rand_not_in(n, S):
    k = random.randint(len(S) + 1, n)
    if k not in S:
        return k
    j = 1
    for s in S:
        if s > len(S):
            while j in S:
                j += 1
            if s == k:
                return j
            j += 1
Optimizations are: Only generate the mapping if the random value is in S. Don't save the mapping to a hash table. Short-circuit the mapping generation when the random value is found.
Actually, the rejection method seems like the practical approach.
Generate a number in 1...n and check whether it is forbidden; regenerate until the generated number is not forbidden.
The probability of a single rejection is p = s/n.
Thus the expected number of random number generations is 1 + p + p^2 + p^3 + ... which is 1/(1-p), which in turn is equal to n/(n-s).
Now, if s is much less than n, or even as large as s = n/2, this expected number is at most 2.
It would take s almost equal to n to make this infeasible in practice.
Multiply the expected time by log s if you use a tree-set to check whether the number is in the set, or by just 1 (expected value again) if it is a hash-set. So the average time is O(1) or O(log s) depending on the set implementation. There is also O(s) memory for storing the set, but unless the set is given in some special way, implicitly and concisely, I don't see how it can be avoided.
(Edit: As per comments, you do this only once for a given set.
If, additionally, we are out of luck, and the set is given as a plain array or list, not some fancier data structure, we get O(s) expected time with this approach, which still fits into the O(log n + s) requirement.)
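A minimal sketch of the rejection loop (assuming S is already a set or frozenset, so the membership test is expected O(1)):

    import random

    def sample_not_in(n, S):
        while True:
            k = random.randint(1, n)
            if k not in S:
                return k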
If attacks against the unbounded algorithm are a concern (and only if they truly are), the method can include a fall-back algorithm for the cases when a certain fixed number of iterations didn't provide the answer.
Similarly to how IntroSort is QuickSort but falls back to HeapSort if the recursion depth gets too high (which is almost certainly a result of an attack resulting in quadratic QuickSort behavior).
Find all numbers that are in the forbidden set and less than or equal to n-s. Call this array A.
Find all numbers that are not in the forbidden set and greater than n-s. Call this array B. This can be done in O(s) if the set is sorted.
Note that the lengths of A and B are equal, and create the mapping map[A[i]] = B[i].
Generate a number t up to n-s. If map[t] exists, return it; otherwise return t.
This works with O(s) insertions into a map plus 1 lookup, which is either O(s) on average (hash map) or O(s log s) (tree map).
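A Python sketch of this mapping idea (function names are my own; S is assumed to be a set so that membership tests are O(1), and building B only touches the top s values):

    import random

    def build_map(n, S):
        s = len(S)
        A = [v for v in S if v <= n - s]                        # forbidden, low range
        B = [v for v in range(n - s + 1, n + 1) if v not in S]  # allowed, high range
        return dict(zip(A, B))                                  # |A| == |B|

    def sample(n, S, remap):
        t = random.randint(1, n - len(S))
        return remap.get(t, t)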
I'm working through the textbook Algorithms (Dasgupta, C. H. Papadimitriou, and U. V. Vazirani), where I'm trying to solve problem 2.23.
However, I'm not sure whether my solution is correct. I appreciate any insights!
Given some constraints:
n = 2^k,k ∈ N
Runtime = O(n log n)
Use at most O(1) additional memory.
I would like to write pseudocode for a function which returns the majority element of an array B = [b_1,...,b_n] of length n, where only tests for equality and inequality (b_i = b_j) can be used.
An element x is called majority element if it occurs more than n/2 times:
MajorityElement(B, x) := |{i | i ∈ {1,...,n}, b_i = x}| > n/2
Given a subarray B_l,r = [b_l, ..., b_r], I thought to use a divide-and-conquer algorithm:
function GetMajorityElement(B, l, r, x):
    if x = 1:                          -- so here I check if the array has only one element
        return B[1]
    else if l < r:                     -- here I check if left element < right element
        midelement <- (l + r - 1) / 2
        B_lefthalf  <- B[:midelement]
        B_righthalf <- B[midelement:]
        MEL = GetMajorityElement(B_lefthalf)    -- recursively repeat
        MER = GetMajorityElement(B_righthalf)
        if MEL is a majority element of B:
            return MEL
        if MER is a majority element of B:
            return MER
        return 'no majority'
I assume the algorithm runs: T (n) = 2T(n/2) + O(n) = O(n log n).
I appreciate any insights/hints and corrections. Thanks!
To find the majority element, we can use a classic algorithm called Moore's voting algorithm.
The algorithm is:
Initialize the index and count of the majority candidate:
    maj_index = 0, count = 1
Loop for i = 1 to size - 1:
    (a) If a[maj_index] == a[i]
            count++
    (b) Else
            count--
    (c) If count == 0
            maj_index = i
            count = 1
Return a[maj_index]
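A Python sketch of this voting procedure (my own code). Note that the scan only produces a candidate; if the array is not guaranteed to contain a majority element, a second O(n) counting pass is needed to verify it:

    def majority_element(a):
        maj_index, count = 0, 1
        for i in range(1, len(a)):
            if a[maj_index] == a[i]:
                count += 1
            else:
                count -= 1
            if count == 0:
                maj_index, count = i, 1
        candidate = a[maj_index]
        # verification pass: O(n) time, O(1) extra space
        if sum(1 for x in a if x == candidate) > len(a) // 2:
            return candidate
        return None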
I think the described algorithm won't work. The decision "is a majority element of B" is not clear and might be quite complicated. You will need at least the counts of both majority candidates in both halves: an element that is only a minority in the first half can still, together with its occurrences in the other half, be the overall majority.
Proposed solution:
Divide and conquer can be used for sorting. Afterwards, scanning the sorted array while counting and keeping the most frequent element (until at least half of the sorted array has been looked at) will find the majority. But you will need O(n) additional space if you are not allowed to change the order of the elements in the original array.
Additionally you have to be aware that the recursion might cost additional space.
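One way to realize the sort-then-scan idea above (my own sketch; it uses the fact that a majority element, if it exists, must occupy the middle position of a sorted copy, and confirms it with one counting pass):

    def majority_by_sorting(B):
        s = sorted(B)                   # O(n log n) time, O(n) extra space
        candidate = s[len(s) // 2]
        if s.count(candidate) > len(s) // 2:
            return candidate
        return None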
I'm trying to understand what the Big-O is of the code below.
What the code is supposed to do
Essentially, I am trying to select a subset (of maximum size selectionSize) of random nodes and any edges that exist between them. The selection of random nodes is done in the while loop. After having done that, I want to select any edges that exist between the selected nodes.
What I think it is & why
I think the running time is O(n^2), where n = selectionSize. The reason is: even though I can increase the number of elements in nodes (e.g. make it 10000), I don't believe it can affect the algorithm, since I am only looping through at most selectionSize elements. The only reason I am a bit worried this is wrong is the while loop, where I select random elements from the list until I have enough. While this can take quite long (because it is random), I don't think it affects the overall output in terms of time.
Edit: ugh, on second thought... never mind... The nodes size does affect it (since node.getNeighbors() can return at most as many neighbours as there are nodes). So I think that if selectionSize is equal to the size of nodes, the running time is O(n^2), where n = size of nodes.
Any tips/hints would be appreciated.
Code
// nodes and selectionSize are my input:
int[] nodes = {1,2,3...,1000}; // 1000 elements
int selectionSize = 500; // This can be at most the size of the elements (in this case 1000)
run_algo(nodes, selectionSize);

public void run_algo(int[] nodes, int selectionSize) {
    randomNodesList = {};
    while(randomNodesList.size() < selectionSize) {
        randomNode = selectRandomNode(nodes); // Assume O(1)
        if(!randomNodesList.exists(randomNode)) { // Assume O(1)
            randomNodesList.push_back(randomNode); // Assume O(1)
        }
    }
    foreach(node in randomNodesList) {
        foreach(neighbor in node.getNeighbors()) { // Max. nodes size (in this case 1000)
            if (!randomNodesList.exists(neighbor)) { // Assume O(1)
                AddEdge(node, neighbor); // Takes O(1) time
            }
        }
    }
}
If selectRandomNode(nodes) works with replacement (the same node can be picked twice), then the big O is not defined: there is no deterministic upper bound, since the loop can keep picking already-selected nodes again and again.
If it works without replacement, then it's O(n^2) (in the worst case, every node may be connected to every other node).
Notes on selecting without replacement:
Consider the case when you are given an array of size n, say A and an empty array, B. All the elements in A are unique.
The task is to fill B with n elements randomly selected from A. It is desired that there should be at least k unique elements in B.
It can be shown that the probability of having more than k unique items increases with increasing n (the equations are given below).
Thus, in practice, the probability of the loop finishing in a single pass (i.e. in fewer than n steps) gets large as the difference between n and k increases.
It's very intuitive if you think about it; the math is just the cherry on top.
from math import comb  # available in Python 3.8+

def k_numerator(n, k):
    # inclusion-exclusion count of the sequences of n draws that use
    # every one of a fixed set of k values at least once
    res = 0
    sign = 1
    for i in range(k, 0, -1):
        pow_term = (i ** n)
        comb_term = comb(k, i)
        prod = pow_term * comb_term
        prod = prod * sign
        res = res + prod
        sign = sign * -1
    return res

def p_exactly_k(n, k):
    """
    Returns the probability of `B` containing exactly `k` unique elements
    (also see notes above)
    """
    return (comb(n, k) * k_numerator(n, k)) / (n ** n)
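As a usage example (assuming the functions above), the probability that 20 draws with replacement from 20 distinct items yield at least 15 distinct values is:

    print(sum(p_exactly_k(20, k) for k in range(15, 21)))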
I am not 100% sure I understand this right, but let's break it down:
The while loop runs selectionSize times in the best case and n times in the worst case (where n is the number of nodes).
Therefore the size of randomNodesList is in O(n).
In a simple graph a node can have at most n-1 neighbours, so the whole double loop is in O(n^2) (because of n * (n-1)).
axiom is right: it is in fact not possible to give a deterministic upper bound for this algorithm. It's nondeterministic; it depends on your random numbers.
I was given a tricky question.
Given:
A = [a1,a2,...an] (list of positive integers with length "n")
r (positive integer)
Find a list of { *, + } operators
O = [o1,o2,...on-1]
so that if we placed those operators between the elements of "A", the resulting expression would evaluate to "r". Only one solution is required.
So for example if
A = [1,2,3,4]
r = 14
then
O = [*, +, *]
I've implemented a simple recursive solution with some optimisation, but of course it's exponential, O(2^n) time, so for an input of length 40 it runs for ages.
I wanted to ask if any of you know a sub-exponential solution for this?
Update
Elements of A are between 0 and 10000;
r can be arbitrarily big.
Let A and B be positive integers. Then A + B ≤ A × B + 1.
This little fact can be used to construct a very efficient algorithm.
Let's define a graph. The graph nodes correspond to operations lists, for example, [+, ×, +, +, ×]. There is an edge from graph node X to graph node Y if the Y can be obtained by changing a single + to a × in X. The graph has a source at the node corresponding to [+, +, ..., +].
Now perform a breadth-first search from the source node, constructing the graph as you go. When expanding a node [+, ×, +, +, ×], for example, you (optionally construct then) connect to the nodes [×, ×, +, +, ×], [+, ×, ×, +, ×], and [+, ×, +, ×, ×]. Do not expand to a node if the result of evaluating it is greater than r + k(O), where k(O) is the number of +'s in the operation list O. This is because of the "+ 1" in the fact at the beginning of the answer - consider the case of a = [1, 1, 1, 1, 1], r = 1.
This approach uses O(n * 2^n) time and O(2^n) space (where both are potentially very loose worst-case bounds). This is still an exponential algorithm; however, I think you will find it performs very reasonably for non-sinister inputs. (I suspect this problem is NP-complete, which is why I am happy with this "non-sinister inputs" escape clause.)
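Here is a Python sketch of this breadth-first search (my own code, not the answer author's; like the fact above, it assumes the entries are positive):

    from collections import deque

    def evaluate(a, ops):
        # evaluate a with the given +/* operators, respecting precedence
        total, prod = 0, a[0]
        for x, op in zip(a[1:], ops):
            if op == '*':
                prod *= x
            else:
                total += prod
                prod = x
        return total + prod

    def search_ops(a, r):
        start = ('+',) * (len(a) - 1)
        seen = {start}
        queue = deque([start])
        while queue:
            ops = queue.popleft()
            if evaluate(a, ops) == r:
                return list(ops)
            for i, op in enumerate(ops):          # flip one '+' to a '*'
                if op == '+':
                    child = ops[:i] + ('*',) + ops[i + 1:]
                    # prune children whose value exceeds r plus their number of +'s
                    if child not in seen and evaluate(a, child) <= r + child.count('+'):
                        seen.add(child)
                        queue.append(child)
        return None

    # e.g. search_ops([1, 2, 3, 4], 14) returns ['*', '+', '*']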
Here's an O(rn^2)-time, O(rn)-space DP approach. If r << 2^n then this will have better worst-case behaviour than exponential-time branch-and-bound approaches, though even then the latter may still be faster on many instances. This is pseudo-polynomial time, because it takes time proportional to the value of part of its input (r), not its size (which would be log2(r)). Specifically it needs rn bits of memory, so it should give answers in a few seconds for up to around rn < 1,000,000,000 and n < 1000 (e.g. n = 100, r = 10,000,000).
The key observation is that any formula involving all n numbers has a final term that consists of some number i of factors, where 1 <= i <= n. That is, any formula must be in one of the following n cases:
(a formula on the first n-1 terms) + a[n]
(a formula on the first n-2 terms) + a[n-1] * a[n]
(a formula on the first n-3 terms) + a[n-2] * a[n-1] * a[n]
...
a[1] * a[2] * ... * a[n]
Let's call the "prefix" of a[] consisting of the first i numbers P[i]. If we record, for each 0 <= i <= n-1, the complete set of values <= r that can be reached by some formula on P[i], then based on the above, we can quite easily compute the complete set of values <= r that can be reached by P[n]. Specifically, let X[i][j] be a true or false value that indicates whether the prefix P[i] can achieve the value j. (X[][] could be stored as an array of n size-(r+1) bitmaps.) Then what we want to do is compute X[n][r], which will be true if r can be reached by some formula on a[], and false otherwise. (X[n][r] isn't quite the full answer yet, but it can be used to get the answer.)
X[1][a[1]] = true. X[1][j] = false for all other j. For any 2 <= i <= n and 0 <= j <= r, we can compute X[i][j] using
X[i][j] = X[i - 1][j - a[i]] ||
          X[i - 2][j - a[i-1]*a[i]] ||
          X[i - 3][j - a[i-2]*a[i-1]*a[i]] ||
          ... ||
          X[1][j - a[2]*a[3]*...*a[i]] ||
          (a[1]*a[2]*...*a[i] == j)
Note that the last line is an equality test that compares the product of all i numbers in P[i] to j, and returns true or false. There are i <= n "terms" (rows) in the expression for X[i][j], each of which can be computed in constant time (note in particular that the multiplications can be built up in constant time per row), so computing a single value X[i][j] can be done in O(n) time. To find X[n][r], we need to calculate X[i][j] for every 1 <= i <= n and every 0 <= j <= r, so there is O(rn^2) overall work to do. (Strictly speaking we may not need to compute all of these table entries if we use memoization instead of a bottom-up approach, but many inputs will require us to compute a large fraction of them anyway, so it's likely that the latter is faster by a small constant factor. Also a memoization approach requires keeping an "already processed" flag for each DP cell -- which doubles the memory usage when each cell is just 1 bit!)
Reconstructing a solution
If X[n][r] is true, then the problem has a solution (satisfying formula), and we can reconstruct one in O(n^2) time by tracing back through the DP table, starting from X[n][r], at each location looking for any term that enabled the current location to assume the value "true" -- that is, any true term. (We could do this reconstruction step faster by storing more than a single bit per (i, j) combination -- but since r is allowed to be "arbitrarily big", and this faster reconstruction won't improve the overall time complexity, it probably makes more sense to go with the approach that uses the fewest bits per DP table entry.) All satisfying solutions can be reconstructed this way, by backtracking through all true terms instead of just picking any one -- but there may be an exponential number of them.
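A compact Python sketch of this DP and the traceback (my own code, 0-based, using sets of reachable values instead of bitmaps; the bitset, zero and one tricks described under Speedups below would be needed for real performance):

    def reachable_sets(a, r):
        # X[i] = set of values <= r reachable by some formula on the prefix a[:i]
        n = len(a)
        X = [set() for _ in range(n + 1)]
        if a[0] <= r:
            X[1].add(a[0])
        for i in range(2, n + 1):
            prod = 1
            for d in range(1, i):            # final term = a[i-d] * ... * a[i-1]
                prod *= a[i - d]
                if prod <= r:
                    X[i].update(j + prod for j in X[i - d] if j + prod <= r)
            prod *= a[0]                     # d = i: one product over the whole prefix
            if prod <= r:
                X[i].add(prod)
        return X

    def find_operators(a, r):
        # returns a list of len(a)-1 operators ('+'/'*') whose formula equals r, or None
        X = reachable_sets(a, r)
        if r not in X[len(a)]:
            return None
        ops, i, j = [], len(a), r
        while i > 1:
            prod = 1
            for d in range(1, i):
                prod *= a[i - d]
                if prod <= j and (j - prod) in X[i - d]:
                    ops = ['+'] + ['*'] * (d - 1) + ops
                    i, j = i - d, j - prod
                    break
            else:
                # no split works, so the whole remaining prefix is one product equal to j
                ops = ['*'] * (i - 1) + ops
                i = 1
        return ops

    # e.g. find_operators([1, 2, 3, 4], 14) returns ['*', '+', '*']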
Speedups
There are two ways that calculation of an individual X[i][j] value can be sped up. First, because all the terms are combined with ||, we can stop as soon as the result becomes true, since no later term can make it false again. Second, if there is no zero anywhere to the left of i, we can stop as soon as the product of the final numbers becomes larger than r, since there's no way for that product to be decreased again.
When there are no zeroes in a[], that second optimisation is likely to be very important in practice: it has the potential to make the inner loop much smaller than the full i-1 iterations. In fact if a[] contains no zeroes, and its average value is v, then after k terms have been computed for a particular X[i][j] value the product will be around v^k -- so on average, the number of inner loop iterations (terms) needed drops from n to log_v(r) = log(r)/log(v). That might be much smaller than n, in which case the average time complexity for this model drops to O(rn*log(r)/log(v)).
[EDIT: We actually can save multiplications with the following optimisation :)]
8/32/64 X[i][j]s at a time: X[i][j] is independent of X[i][k] for k != j, so if we are using bitsets to store these values, we can calculate 8, 32 or 64 of them (or maybe more, with SSE2 etc.) in parallel using simple bitwise OR operations. That is, we can calculate the first term of X[i][j], X[i][j+1], ..., X[i][j+31] in parallel, OR them into the results, then calculate their second terms in parallel and OR them in, etc. We still need to perform the same number of subtractions this way, but the products are all the same, so we can reduce the number of multiplications by a factor of 8/32/64 -- as well as, of course, the number of memory accesses. OTOH, this makes the first optimisation from the previous paragraph harder to accomplish -- you have to wait until an entire block of 8/32/64 bits have become true before you can stop iterating.
Zeroes: Zeroes in a[] may allow us to stop early. Specifically, if we have just computed X[i][r] for some i < n and found it to be true, and there is a zero anywhere to the right of position i in a[], then we can stop: we already have a formula on the first i numbers that evaluates to r, and we can use that zero to "kill off" all numbers to the right of position i by creating one big product term that includes all of them.
Ones: An interesting property of any a[] entry containing the value 1 is that it can be moved to any other position in a[] without affecting whether or not there is a solution. This is because every satisfying formula either has a * on at least one side of this 1, in which case it multiplies some other term and has no effect there, and would likewise have no effect anywhere else; or it has a + on both sides (imagine extra + signs before the first position and after the last), in which case it might as well be added in anywhere.
So, we can safely shunt all 1 values to the end of a[] before doing anything else. The point of doing this is that now we don't have to evaluate these rows of X[][] at all, because they only influence the outcome in a very simple way. Suppose there are m < n ones in a[], which we have moved to the end. Then after computing the m+1 values X[n-m][r-m], X[n-m][r-m+1], X[n-m][r-m+2], ..., X[n-m][r], we already know what X[n][r] must be: if any of them are true, then X[n][r] must be true, otherwise (if they are all false) it must be false. This is because the final m ones can add anywhere from 0 up to m to a formula on the first n-m values. (But if a[] consists entirely of 1s, then at least 1 must be "added" -- they can't all multiply some other term.)
Here is another approach that might be helpful. It is sometimes known as a "meet-in-the-middle" algorithm and runs in O(n * 2^(n/2)). The basic idea is this. Suppose n = 40 and you know that the middle slot is a +. Then, you can brute force all N := 2^20 possibilities for each side. Let A be a length N array storing the possible values of the left side, and similarly let B be a length N array storing the values for the right side.
Then, after sorting A and B, it is not hard to efficiently check for whether any two of them sum to r (e.g. for each value in A, do a binary search on B, or you can even do it in linear time if both arrays are sorted). This part takes O(N * log N) = O(n * 2^(n/2)) time.
Now, this was all assuming the middle slot is a +. If not, then it has to be a *, and you can combine the middle two elements into one (their product), reducing the problem to n = 39. Then you try the same thing, and so on. If you analyze it carefully, you should get O(n * 2^(n/2)) as the asymptotic complexity, since actually the largest term dominates.
You need to do some bookkeeping to actually recover the +'s and *'s, which I have left out to simplify the explanation.
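A decision-only Python sketch of this meet-in-the-middle idea (my own code; it likewise leaves out the bookkeeping needed to recover the operators):

    from bisect import bisect_left

    def eval_all(nums):
        # all 2^(len(nums)-1) values obtainable by placing + or * between the numbers
        results = []
        def go(idx, total, prod):
            if idx == len(nums):
                results.append(total + prod)
                return
            go(idx + 1, total, prod * nums[idx])   # place '*' before nums[idx]
            go(idx + 1, total + prod, nums[idx])   # place '+' before nums[idx]
        go(1, 0, nums[0])
        return results

    def reachable(a, r):
        nums = list(a)
        while len(nums) >= 2:
            mid = len(nums) // 2
            left = sorted(eval_all(nums[:mid]))
            # case 1: the middle slot is '+', so some left value must equal r - (right value)
            for v in eval_all(nums[mid:]):
                i = bisect_left(left, r - v)
                if i < len(left) and left[i] == r - v:
                    return True
            # case 2: the middle slot is '*': fuse the two middle numbers and repeat
            nums[mid - 1:mid + 1] = [nums[mid - 1] * nums[mid]]
        return nums[0] == r

    # e.g. reachable([1, 2, 3, 4], 14) returns True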
My question is: given a list L of length n, and an integer i such that 0 <= i < n!, how can you write a function perm(L, i) to produce the ith permutation of L in O(n) time? What I mean by ith permutation is just the ith permutation in some implementation-defined ordering that must have the properties:
For any i and any 2 lists A and B, perm(A, i) and perm(B, i) must both map the jth element of A and B to an element in the same position for both A and B.
For any inputs (A, i), (A, j) perm(A, i)==perm(A, j) if and only if i==j.
NOTE: this is not homework. In fact, I solved this 2 years ago, but I've completely forgotten how, and it's killing me. Also, here is a broken attempt I made at a solution:
def perm(s, i):
    n = len(s)
    perm = [0]*n
    itCount = 0
    for elem in s:
        perm[i%n + itCount] = elem
        i = i / n
        n -= 1
        itCount += 1
    return perm
ALSO NOTE: the O(n) requirement is very important. Otherwise you could just generate the n! sized list of all permutations and just return its ith element.
def perm(sequence, index):
    sequence = list(sequence)
    result = []
    for x in range(len(sequence)):
        idx = index % len(sequence)
        index //= len(sequence)
        result.append(sequence[idx])
        # constant time non-order preserving removal
        sequence[idx] = sequence[-1]
        del sequence[-1]
    return result
Based on the algorithm for shuffling, but we take the least significant part of the number each time to decide which element to take instead of a random number. Alternatively, think of it as the problem of converting a number to some arbitrary base, except that the base shrinks for each additional digit.
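As a usage example of the function above, indices 0 through 5 enumerate six distinct orderings of a three-element sequence:

    for i in range(6):
        print(i, perm("abc", i))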
Could you use factoradics? You can find an illustration via this MSDN article.
Update: I wrote an extension of the MSDN algorithm that finds i'th permutation of n things taken r at a time, even if n != r.
A computational minimalistic approach (written in C-style pseudocode):
function perm(list,i){
for(a=list.length;a;a--){
list.switch(a-1,i mod a);
i=i/a;
}
return list;
}
Note that implementations relying on removing elements from the original list tend to run in O(n^2) time, at best O(n*log(n)) given a special tree style list implementation designed for quickly inserting and removing list elements.
Rather than shrinking the original list and keeping it in order, the above code just moves an element from the end into the vacant location. It still makes a perfect 1:1 mapping between index and permutation, just a slightly more scrambled one, and it runs in pure O(n) time.
So, I think I finally solved it. Before I read any answers, I'll post my own here.
def perm(L, i):
    n = len(L)
    if n == 1:
        return L
    else:
        split = i % n
        return [L[split]] + perm(L[:split] + L[split+1:], i // n)
There are n! permutations. The first character can be chosen from L in n ways. Each of those choices leave (n-1)! permutations among them. So this idea is enough for establishing an order. In general, you will figure out what part you are in, pick the appropriate element and then recurse / loop on the smaller L.
The argument that this works correctly is by induction on the length of the sequence. (sketch) For a length of 1, it is trivial. For a length of n, you use the above observation to split the problem into n parts, each with a question on an L' with length (n-1). By induction, all the L's are constructed correctly (and in linear time). Then it is clear we can use the IH to construct a solution for length n.
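A quick sanity check of this recursive perm() (my own test code): for a small list of distinct items, the n! indices should each produce a different permutation.

    from itertools import permutations
    from math import factorial

    L = ['a', 'b', 'c', 'd']
    results = {tuple(perm(L, i)) for i in range(factorial(len(L)))}
    assert len(results) == factorial(len(L))
    assert results == set(permutations(L))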