Dynamic Programming a Count of Subsequences with Property?

Consider a dynamic programming problem that asks how many distinct subsequences (not necessarily contiguous) of a sequence S have a certain property P of value p0.
The range of P is small and finite, and there is an efficient way of calculating P:
P(s1 + s2) = f(P(s1), P(s2))
where + denotes sequence concatenation.
One way to do this would be to count, for each prefix S[1] + S[2] + ... + S[k] of S and each property value px, how many subsequences of that prefix have property px. (Store this in Count[px][k])
So the recursion is:
Count[px][k] = Count[px][k-1] // not using element S[k];
P pq = f(px,P(S[k])); // calculate property pq of appending element S[k]
Count[pq][k] += Count[px][k-1] // add count of P(prefix+S[k])
and the answer is then:
return Count[p0][S.length]
This works when the elements of S are pairwise distinct; however, it counts subsequences that have equal value but are built from elements at different positions more than once.
How can I modify this algorithm so that it counts equal subsequences exactly once? (i.e. only counts distinct subsequences)

Suppose the sequence is of characters and S[k] is the letter x.
The problem is that you have double counted all sequences that don't use S[k] but nevertheless end with x (this can only happen if x appeared earlier in the sequence).
Suppose the last time x appeared was at element S[j].
All the distinct sequences that end with x are obtained by taking all the distinct sequences up to position j-1 and appending x to each of them.
We can therefore correct for the double counting by subtracting this count.
Count[px][k] = Count[px][k-1] // not using element S[k]
P pq = f(px,P(S[k])) // property pq of appending element S[k]
j = index of previous location in string where S[j]==S[k]
Count[pq][k] += Count[px][k-1] // using element S[k]
Count[pq][k] -= Count[px][j-1] // Removing double counts
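As a concrete illustration (not the original poster's code), here is a minimal Python sketch of the corrected recurrence. Purely for example's sake it takes the property to be the subsequence length modulo m, which composes under concatenation like any valid f; m and p0 are placeholder names, and the result includes the empty subsequence whenever p0 is its property value.
def count_distinct_subsequences(S, m, p0):
    count = [0] * m          # count[p] = distinct subsequences of the prefix seen so far with property p
    count[0] = 1             # the empty subsequence (length 0)
    last = {}                # last[c] = snapshot of count just before the previous occurrence of character c
    for ch in S:
        prev = count[:]      # Count[.][k-1]
        for px in range(m):
            pq = (px + 1) % m            # f(px, P(ch)) for the "length mod m" property
            count[pq] += prev[px]        # subsequences extended by ch
            if ch in last:
                count[pq] -= last[ch][px]    # subtract the double-counted ones
        last[ch] = prev
    return count[p0]

# count_distinct_subsequences("aba", 3, 2) counts the distinct subsequences of length 2: "ab", "ba", "aa" -> 3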

Related

Data structure to check if a static array does not contain an element of a given range

I've been stuck for hours on the following homework question for my data structures class:
You are given a static set S (i.e., S never changes) of n integers from {1, . . . , u}.
Describe a data structure of size O(n log u) that can answer the following queries in O(1) time:
Empty(i, j) - returns TRUE if and only if there is no element in S that is between i and j (where i and j are integers in {1, . . . , u}).
At first I thought of using a y-fast trie.
Using a y-fast trie we can achieve O(n) space and O(log log u) query time (by finding the successor of i and checking whether it's bigger than j).
But O(log log u) is not O(1)...
Then I thought maybe we could sort the array and create a second array of size n+1 holding the ranges that are not in the array; in the query we would then check whether [i, j] is a sub-range of one of those ranges. But I couldn't think of any way to do that which uses O(n log u) space and answers the query in O(1).
I have no idea how to solve this and I feel like I'm not even close to the solution, any help would be nice.
We can create an x-fast-trie of S (this takes O(n log u) space) and store in each node the maximum and minimum leaf value in its subtree. Now we can use that to answer the Empty query in O(1), like this:
Empty(i, j)
We first calculate xor(i, j). The number of leading zeros in that number is the number of leading bits that i and j share; let's call this number k. Now take the first k bits of i (or of j, they are equal) and check in the x-fast-trie hash table whether there is a node that equals those bits. If there isn't, we return TRUE, because any number between i and j would also have those same k leading bits, and since no number in S has those leading bits, there is no number of S between i and j. If there is such a node, let's call it X.
If X->left->maximum < i and X->right->minimum > j we return TRUE, otherwise FALSE. If the condition holds, every element under the left child is smaller than i and every element under the right child is bigger than j, so nothing lies between i and j; if it fails, some element lies between i and j.
Sorry for bad English
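Here is a rough Python sketch of this query logic. It models only the part of the x-fast-trie that the query needs: for every prefix length k, a hash table mapping the k-bit prefixes occurring in S to the minimum and maximum leaf value below them (O(n log u) entries in total). Building the real linked trie is omitted, and the names (levels, w for the number of bits of u) are illustrative, not from the original answer.
def build_levels(S, w):
    # levels[k] maps each k-bit prefix present in S to (min, max) of the values under it
    levels = [dict() for _ in range(w + 1)]
    for x in S:
        for k in range(w + 1):
            prefix = x >> (w - k)
            lo, hi = levels[k].get(prefix, (x, x))
            levels[k][prefix] = (min(lo, x), max(hi, x))
    return levels

def empty(levels, w, i, j):
    if i > j:
        i, j = j, i
    if i == j:
        return i not in levels[w]
    k = w - (i ^ j).bit_length()                   # number of leading bits i and j share
    prefix = i >> (w - k)
    if prefix not in levels[k]:
        return True                                # nothing in S starts with this prefix
    left = levels[k + 1].get(prefix << 1)          # subtree where the next bit is 0
    right = levels[k + 1].get((prefix << 1) | 1)   # subtree where the next bit is 1
    left_ok = left is None or left[1] < i          # everything on the left is below i
    right_ok = right is None or right[0] > j       # everything on the right is above j
    return left_ok and right_ok

# levels = build_levels({5, 9}, 4); empty(levels, 4, 6, 8) -> True; empty(levels, 4, 5, 9) -> False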
You haven't clarified whether the numbers are given sorted or not. If not, sort them, which will take O(n log n).
Find the upper bound of i, say x. Find the lower bound of j, say y.
Now just check 4 numbers: those at indices x, x+1, y-1 and y. If any of these numbers of the given array is between i and j, return true. Otherwise return false.
If the given set/array is not sorted, this approach requires an additional O(n log n) to sort it. Memory required is O(n). For each query, it's O(1).
Consider a data structure consisting of
an array A[1,...,u] of size u such that A[i]=1 if i is present in S, and A[i]=0 otherwise. This array can be constructed from set S in O(n).
an array B[1,...,u] of size u which stores cumulative sum of A i.e. B[i] = A[1]+...+A[i]. This array can be constructed in O(u) from A using the relation B[i] = B[i-1] + A[i] for all i>1.
a function empty(i,j) which returns the desired Boolean query. If i==1, then define count = B[j], otherwise take count = B[j]-B[i-1]. Note that count gives the number of distinct elements in S lying in range [i,j]. Once we have count, simply return count==0. Clearly, each query takes O(1).
Edit: As pointed out in comments, the size of this data structure is O(u), which doesn't match the constraints. But I hope it gives others an approximate target to shoot at.
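For concreteness, here is a literal Python version of this answer (with the same O(u) space caveat); the function names are mine, not from the original.
def build(S, u):
    A = [0] * (u + 1)                  # A[i] = 1 iff i is present in S
    for x in S:
        A[x] = 1
    B = [0] * (u + 1)                  # B[i] = A[1] + ... + A[i]
    for i in range(1, u + 1):
        B[i] = B[i - 1] + A[i]
    return B

def empty(B, i, j):
    count = B[j] - (B[i - 1] if i > 1 else 0)   # number of elements of S in [i, j]
    return count == 0

# B = build({3, 7}, 10); empty(B, 3, 7) -> False; empty(B, 4, 6) -> True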
This isn't a solution, but it's too long to fit in a comment. It's an idea for a more specific version of the task that may help with the generic task from the question.
The specific task is the same except for one point: u = 1024. Also, this isn't a final solution; it's a rough sketch (for the specific task).
Data structure creation:
Create a bitmask M for U = { 1, ..., u }, e.g. M = 0000.....100001, where Mᵥ = 1 when Uᵥ ∊ S and 0 otherwise.
Store bitmask M as an array G of unsigned 32-bit integers (32 items); each item of G holds 32 bits of M.
Build a summary bitmask H where Hᵣ = 0 when Gᵣ = 0 and 1 otherwise.
Convert G to a hash map from r to Gᵣ that keeps entries only for Gᵣ != 0.
Empty(i, j) {
    I = i / 32          // index of the word of G that contains bit i
    J = j / 32          // index of the word of G that contains bit j
    if I != J {
        // every word strictly between I and J must be empty, and so must
        // the tail of word I (from bit i%32 up) and the head of word J (up to bit j%32)
        if (H restricted to bits I+1 .. J-1) != 0: return false
        if (G[I] restricted to bits i%32 .. 31) != 0: return false
        if (G[J] restricted to bits 0 .. j%32) != 0: return false
    } else {
        if (G[I] restricted to bits i%32 .. j%32) != 0: return false
    }
    return true
}
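A hypothetical Python rendering of this sketch, using the word array G and the summary mask H described above; the helper bits() and the zero-based bit positions are my own choices, not part of the original answer.
WORD = 32

def build(S, u=1024):
    # G[r] holds bits r*32 .. r*32+31 of the membership mask; bit v-1 represents value v
    G = [0] * ((u + WORD - 1) // WORD)
    for x in S:
        v = x - 1
        G[v // WORD] |= 1 << (v % WORD)
    # summary mask: bit r of H is set iff word G[r] is non-zero
    H = 0
    for r, g in enumerate(G):
        if g:
            H |= 1 << r
    return G, H

def bits(lo, hi):
    # mask with bits lo..hi set (empty mask if lo > hi)
    return 0 if lo > hi else ((1 << (hi - lo + 1)) - 1) << lo

def empty(G, H, i, j):
    a, b = i - 1, j - 1                  # zero-based bit positions of i and j
    I, J = a // WORD, b // WORD
    if I != J:
        if H & bits(I + 1, J - 1):       # a full word strictly in between is non-empty
            return False
        if G[I] & bits(a % WORD, WORD - 1):
            return False
        if G[J] & bits(0, b % WORD):
            return False
        return True
    return not (G[I] & bits(a % WORD, b % WORD))

# G, H = build({5, 700}); empty(G, H, 6, 600) -> True; empty(G, H, 600, 800) -> False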

Maximum sum in array with special conditions

Assume we have an array with n elements (n % 3 == 0). In each step, a number is taken from the array: either the leftmost or the rightmost one. If you choose the leftmost one, this element is added to the sum and the two rightmost numbers are removed, and vice versa.
Example: A = [100,4,2,150,1,1], sum = 0.
1. Take the leftmost element. A = [4,2,150], sum = 0+100 = 100
2. Take the rightmost element. A = [], sum = 100+150 = 250
So the result for A should be 250 and the sequence would be Left, Right.
How can I calculate the maximum sum I can get in an array? And how can I determine the sequence in which I have to extract the elements?
I guess this problem can best be solved with dynamic programming and the concrete sequence can then be determined by backtracking.
The underlying problem can be solved via dynamic programming as follows. The state space can be defined by letting
M(i,j) := maximum value attainable by choosing from the subarray of
A starting at index i and ending at index j
for any i, j in {1, ..., N} where `N` is the number of elements
in the input.
where the recurrence relation is as follows.
M(i,j) = max { M(i+1, j-2) + A[i], M(i+2, j-1) + A[j] }
Here, the first value corresponds to the choice of taking the first element of the subarray while the second value corresponds to the choice of taking the last element. The base cases are the empty subarrays, i.e. M(i,j) = 0 whenever j < i; since the subarray length shrinks by 3 at each step and the initial length is divisible by 3, the recursion always bottoms out at an empty subarray.
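A minimal memoized sketch of this recurrence in Python (0-based indices, assuming len(A) is a multiple of 3), which also recovers the Left/Right sequence by backtracking as suggested in the question; the function names are illustrative.
from functools import lru_cache

def max_sum(A):
    @lru_cache(maxsize=None)
    def M(i, j):
        if j < i:                              # empty subarray
            return 0
        return max(M(i + 1, j - 2) + A[i],     # take the leftmost element
                   M(i + 2, j - 1) + A[j])     # take the rightmost element
    # backtrack to recover the sequence of moves
    moves, i, j = [], 0, len(A) - 1
    while i <= j:
        if M(i + 1, j - 2) + A[i] >= M(i + 2, j - 1) + A[j]:
            moves.append("Left")
            i, j = i + 1, j - 2
        else:
            moves.append("Right")
            i, j = i + 2, j - 1
    return M(0, len(A) - 1), moves

# max_sum([100, 4, 2, 150, 1, 1]) -> (250, ['Left', 'Right'])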

Find longest positive substrings in binary string

Let's assume I have a string like 100110001010001. I'd like to find substrings that:
are as long as possible,
have a positive total sum (> 0, counting 1 as +1 and 0 as -1).
So the longest substrings that have more 1s than 0s.
For example for the string above 100110001010001 it would be: [10011]000[101]000[1]
Actually it would be enough to find the total length of those, in this case: 9.
Unfortunately I have no clue how this could be done other than by brute force. Any ideas, please?
As posted now, your question seems a bit unclear. The total length of valid substrings that are "as long as possible" could mean different things: for example, among other options, it could be (1) a list of the longest valid extension to the left of each index (which would allow overlaps in the list), (2) the longest combination of non-overlapping such longest left-extensions, (3) the longest combination of non-overlapping, valid substrings (where each substring is not necessarily the longest possible).
I will outline a method for (3) since it easily transforms to (1) or (2). Finding the longest left-extension from each index with more ones than zeros can be done in O(n log n) time and O(n) additional space (for just the longest valid substring in O(n) time, see here: Finding the longest non-negative sub array). With that preprocessing, finding the longest combination of valid, non-overlapping substrings can be done with dynamic programming in somewhat optimized O(n^2) time and O(n) additional space.
We start by traversing the string, storing sums representing the partial sum up to and including s[i], counting zeros as -1. We insert each partial sum in a binary tree where each node also stores an array of indexes where the value occurs, and the leftmost index of a value less than the node's value. (A substring from s[a] to s[b] has more ones than zeros if the prefix sum up to b is greater than the prefix sum up to a.) If a value is already in the tree, we add the index to the node's index array.
Since we are traversing from left to right, only when a new lowest value is inserted into the tree is the leftmost-index-of-lower-value updated — and it's updated only for the node with the previous lowest value. This is because any nodes with a lower value would not need updating; and if any nodes with lower values were already in the tree, any nodes with higher values would already have stored the index of the earliest one inserted.
The longest valid substring to the left of each index extends to the leftmost index with a lower prefix sum, which can be easily looked up in the tree.
To get the longest combination, let f(i) represent the longest combination up to index i. Then f(i) equals the maximum of the length of each valid left extension possible to index j added to f(j-1).
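The preprocessing step (the longest valid substring ending at each index) can also be done without the tree: because adjacent prefix sums differ by exactly 1, the leftmost index with a smaller prefix sum is simply the first occurrence of the current prefix sum minus 1. Here is a rough Python sketch of that simpler O(n) variant; the names are illustrative, and the f(i) recurrence above can then be run on top of best.
def longest_left_extensions(s):
    prefix = 0
    first = {0: 0}                 # first index (in 0..n) at which each prefix sum occurs
    best = [0] * len(s)            # best[i] = length of the longest valid substring ending at i
    for i, ch in enumerate(s):
        prefix += 1 if ch == '1' else -1
        if prefix not in first:
            first[prefix] = i + 1
        j = first.get(prefix - 1)  # leftmost position whose prefix sum is smaller
        best[i] = (i + 1) - j if j is not None else 0
    return best

# longest_left_extensions("100110001010001")[4] -> 5, i.e. the substring "10011"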
Dynamic programming.
We have a string. If it is positive, that's our answer. Otherwise we need to trim each end until it goes positive, and find each pattern of trims. So for each length (N-1, N-2, N-3, etc.) we've got N - length possible paths (trim from a, trim from b), each of which gives us a state. When a state goes positive, we've found our substring.
So keep two lists of integers, representing what happens if we trim entirely from a or entirely from b. Then backtrack: if we trim 1 from a, we must trim all the rest from b; if we trim two from a, we must trim one fewer from b. Is there a combination that allows us to go positive?
We can eliminate quickly because the answer must be at a maximum, either max trimming from a or max trimming from b. If the other trim allows us to go positive, that's the result.
pseudocode:
N = length(string);
Nones = countones(string);
Nzeros = N - Nones;
if(Nones > Nzeros)
return string
vector<int> cuta;
vector<int> cutb;
int besta = Nones - Nzeros;
int bestb = Nones - Nzeros;
cuta.push_back(besta);
cutb.push_back(bestb);
bestia = 0;
bestib = 0;
for(i=0;i<N;i++)
{
cuta.push_back( string[i] == 1 ? cuta.back() - 1 : cuta.back() +1);
cutb.push_back( string[N-i-1] == 1 ? cutb.back() -1 : cutb.back()+1);
if(cuta.back() > besta)
{
besta = cuta.back();
bestia = i;
}
if(cutb.back() > bestb)
{
bestb = cutb.back();
bestib = i;
}
// checks, is a cut from wholly from a or b going to send us positive
if(besta == 1)
answer = substring(string, bestia, N);
if(bestb == 1)
answer = substring(string, 0, N - bestib);
// if not, is a combined cut from current position to the
// the peak in the other distribution going to send us positive?
if(Nones - Nzeros + besta + cutb.back() == 1)
{
answer = substring(string, bestia, N - i);
}
if(Nones - Nzeros + cuta.back() + bestb == 1)
{
answer = substring(string, i, N - bestib);
}
}
/*if we get here the string was all zeros and no positive substring */
This is untested and the final checks are a bit fiddly and I might have made an error somewhere, but the algorithm should work more or less as described.

On-demand algorithm to return successive combinations of k elements from n

This post shows how to write an algorithm to spit out, at one time, all combinations of k elements from n, avoiding permutations. But how would one write an algorithm that, on demand, gives the next combination (obviously, without precomputing and storing them)? It would be initialized with the ordered set of symbols n and an integer k, and would then be called to return the next combination.
Pseudocode or a good English narrative would be fine for my purposes - I'm not fluent in much beyond Perl and C and a bit of Java.
ORIGINAL WORDING
(SKIP TO THE UPDATE BELOW)
Let's assume that the n elements are the integers 1..n.
Represent every k-combination in increasing order (this representation gets rid of permutations inside the k-combination.)
Now consider the lexicographic order between k-combinations (of n elements). In other words, {i_1..i_k} < {j_1..j_k} if there exists an index t such that
i_s = j_s for all s < t and
i_t < j_t.
If {i_1..i_k} is a k-combination, define next{i_1..i_k} to be the next element w.r.t. the lexicographic order.
Here is how to compute next{i_1..i_k}:
Find the largest index r such that i_r + 1 < i_{r+1}
If no index satisfies this condition and i_k < n, set r := k
If none of the above conditions can be satisfied, there is no next (and the k-combination equals {n-k+1, n-k+2,... ,n})
If r satisfies the first condition, set next to be {i_1, ..., i_{r-1}, i_r + 1, i_{r+1}, ..., i_k}
If r = k (second condition), set next := {i_1, ..., i_{k-1}, i_k + 1}.
UPDATE (Many thanks to #rici for improving the solution)
Let's assume that the n elements are the integers 1..n.
Represent every k-combination in increasing order (this representation gets rid of permutations inside the k-combination.)
Now consider the lexicographic order between k-combinations (of n elements). In other words, {i_1..i_k} < {j_1..j_k} if there exists an index t such that
i_s = j_s for all s < t and
i_t < j_t.
If {i_1..i_k} is a k-combination, define next{i_1..i_k} to be the next element w.r.t. the lexicographic order.
Note that with this order the smallest k-combination is {1..k} and the largest {n-k+1, n-k+2,... ,n}.
Here is how to compute next{i_1..i_k}
Find the largest index r such that i_r can be incremented by 1.
Increment the value at index r and reset the following elements with consecutive values starting at i_r + 2.
Repeat until no position can be incremented.
More precisely:
If i_k < n, increment i_k by 1 (i.e., replace i_k with i_k + 1)
If i_k = n, find the largest index r such that i_r + 1 < i_{r+1}. Then increment i_r by 1 and reset the following positions to {i_r + 2, i_r + 3, ..., i_r + k + 1 - r}
Repeat until you reach {n-k+1, n-k+2,... ,n}
Note the recursive character of the algorithm: every time it increments the least significant position the tail is reset to the lexicographically smallest sequence that starts with the value just incremented.
Smalltalk code
SequenceableCollection >> #nextChoiceFrom: n
| next k r ar |
k := self size.
(self at: 1) = (n - k + 1) ifTrue: [^nil].
next := self shallowCopy.
r := (self at: k) = n
ifTrue: [(1 to: k-1) findLast: [:i | (self at: i) + 1 < (self at: i+1)]]
ifFalse: [k].
ar := self at: r.
r to: k do: [:i |
ar := ar + 1.
next at: i put: ar].
^next
Here's a prose description of how to do this. Start with your favorite iterative algorithm for generating all combinations. Then turn each loop variable into a state variable, and package it all into a class. Construct an instance of the class with k and n and initialize each state variable according to the algorithm.
You can implement most of these algorithms as you've described by converting them to an Iterator Pattern. This requires you to save the state of the algorithm between successive nextElement() calls.
If your language has support for coroutines, you may be able to convert the code more easily. Python and C# both have a yield keyword that can be used to transfer control back to the calling function while retaining the state of algorithm you're executing.
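For example, here is a small Python generator implementing the successor scheme from the answer above: the loop state is frozen at each yield, so the caller pulls one k-combination at a time. This is an illustrative sketch, not library code.
def combinations(n, k):
    c = list(range(1, k + 1))               # smallest k-combination of 1..n
    while True:
        yield tuple(c)
        # rightmost position that can still be incremented
        r = next((i for i in range(k - 1, -1, -1) if c[i] < n - (k - 1 - i)), None)
        if r is None:                        # c was {n-k+1, ..., n}: we are done
            return
        c[r] += 1
        for i in range(r + 1, k):            # reset the tail to consecutive values
            c[i] = c[i - 1] + 1

# for combo in combinations(5, 3): print(combo)   # (1, 2, 3), (1, 2, 4), ..., (3, 4, 5)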

Minimal number of swaps?

There is a string of N characters of types A and B (the same amount of each type). What is the minimal number of swaps to make sure that no two adjacent characters are the same, if we can only swap two adjacent characters?
For example, input is:
AAAABBBB
The minimal number of swaps is 6 to make the array ABABABAB. But how would you solve it for any kind of input? I can only think of an O(N^2) solution. Maybe some kind of sort?
If we just need to count the swaps, then we can do it in O(N).
Let's assume for simplicity that array X of N elements should become ABAB... .
GetCount()
swaps = 0, i = -1, j = -1
for(k = 0; k < N; k++)
if(k % 2 == 0)
i = FindIndexOf(A, max(k, i))
X[k] <-> X[i]
swaps += i - k
else
j = FindIndexOf(B, max(k, j))
X[k] <-> X[j]
swaps += j - k
return swaps
FindIndexOf(element, index)
while(index < N)
if(X[index] == element) return index
index++
return -1; // should never happen if count of As == count of Bs
Basically, we run from left to right, and if a misplaced element is found, it gets exchanged with the correct element (e.g. abBbbbA** --> abAbbbB**) in O(1). At the same time, swaps are counted as if the sequence of adjacent elements had been swapped instead. Variables i and j are used to cache the indices of the next A and B respectively, to make sure that all calls to FindIndexOf together run in O(N).
If we need to actually perform the swaps (not just count them), then we cannot do better than O(N^2).
The rough idea is the following. Consider your sample AAAABBBB: one of the Bs needs O(N) swaps to get to the A B ... position, another B needs O(N) swaps to get to the A B A B ... position, etc. So we get O(N^2) in total.
Observe that if any solution would swap two instances of the same letter, then we can find a better solution by dropping that swap, which necessarily has no effect. An optimal solution therefore only swaps differing letters.
Let's view the string of letters as an array of indices of one kind of letter (arbitrarily chosen, say A) into the string. So AAAABBBB would be represented as [0, 1, 2, 3] while ABABABAB would be [0, 2, 4, 6].
We know two instances of the same letter will never swap in an optimal solution. This lets us always safely identify the first (left-most) instance of A with the first element of our index array, the second instance with the second element, etc. It also tells us our array is always in sorted order at each step of an optimal solution.
Since each step of an optimal solution swaps differing letters, we know our index array evolves at each step only by incrementing or decrementing a single element at a time.
An initial string of length n = 2k will have an array representation A of length k. An optimal solution will transform this array to either
ODDS = [1, 3, 5, ..., 2k - 1]
or
EVENS = [0, 2, 4, ..., 2k - 2]
Since we know in an optimal solution instances of a letter do not pass each other, we can conclude an optimal solution must spend min(abs(ODDS[0] - A[0]), abs(EVENS[0] - A[0])) swaps to put the first instance in correct position.
By realizing the EVENS or ODDS choice is made only once (not once per letter instance), and summing across the array, we can count the minimum number of needed swaps as
define count_swaps(length, initial, goal)
total = 0
for i from 0 to length - 1
total += abs(goal[i] - initial[i])
end
return total
end
define count_minimum_needed_swaps(k, A)
return min(count_swaps(k, A, EVENS), count_swaps(k, A, ODDS))
end
Notice the number of loop iterations implied by count_minimum_needed_swaps is 2 * k = n; it runs in O(n) time.
By noting which term is smaller in count_minimum_needed_swaps, we can also tell which of the two goal states is optimal.
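For reference, here is a direct Python transcription of the pseudocode above (0-indexed string; the function and variable names are mine):
def count_minimum_needed_swaps(s):
    a_positions = [i for i, ch in enumerate(s) if ch == 'A']
    k = len(a_positions)
    def cost(goal):
        return sum(abs(g - a) for g, a in zip(goal, a_positions))
    odds = range(1, 2 * k, 2)      # goal positions of the As for BABA...
    evens = range(0, 2 * k, 2)     # goal positions of the As for ABAB...
    return min(cost(odds), cost(evens))

# count_minimum_needed_swaps("AAAABBBB") -> 6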
Since you know N, you can simply write a loop that generates the values with no swaps needed.
#define N 4
char array[N + N];
for (size_t z = 0; z < N + N; z++)
{
array[z] = 'B' - ((z & 1) == 0);
}
return 0; // The number of swaps
#Nemo and #AlexD are right that actually performing the swaps takes O(N^2). #Nemo misunderstood the problem, though: we are looking for a reordering in which no two adjacent characters are the same, so we cannot simply say that an A appearing after a B is out of order.
Let's look at the minimum number of swaps.
We don't care whether our first character is A or B, because we can apply the same algorithm with A and B exchanged everywhere. So let's assume that the word WORD_N has length 2N, with N As and N Bs, and starts with an A. (I am using length 2N to simplify the calculations.)
What we will do is try to move the next B right after this A, without worrying about the positions of the other characters, because then we will have reduced the problem to reordering a new word WORD_{N-1}. Let's also assume that the next B is not already just after the A when the word has more than 2 characters, because in that case this step is done and we reduce the problem to the remaining characters, WORD_{N-1}.
In the worst case the next B is as far away as possible, which is just past the first half of the word (there are only N As), so we need at most N-1 swaps to put this B after the A (possibly fewer). Then our word can be reduced to WORD_N = [A B WORD_{N-1}].
We see that we have to perform this step at most N-1 times, because the last word (WORD_1) is already ordered, and each successive word is shorter, so the i-th step needs at most N-i swaps. In total we make at most
N_swaps = (N-1)*N/2
swaps, where N is half of the length of the initial word. For the example AAAABBBB we have N = 4, and (N-1)*N/2 = 6 matches the count found above.
Let's see why we can apply the same algorithm to WORD_{N-1}, still assuming that its first character is A. What matters here is that its first character is the same as in the pair we have already ordered. We can be sure that the first character of WORD_{N-1} is A, because it was the character right next to the first character of the initial word; and if it were B, at most one swap of those two characters (or none) would already give us a WORD_{N-1} starting with the same character as WORD_N, while keeping the first two characters of WORD_N different, at the cost of at most 1 swap.
I think this answer is similar to the answer by phs, just in Haskell. The idea is that the resulting indices for the A's (or B's) are known, so all we need to do is calculate how far each starting index has to move and sum the total.
Haskell code:
Prelude Data.List> let is = elemIndices 'B' "AAAABBBB"
in minimum
$ map (sum . zipWith ((abs .) . (-)) is) [[1,3..],[0,2..]]
6 --output
