My friend was asked this question in an interview:
We have a vector of integers consisting only of 0s and 1s. A delete consists of selecting consecutive equal numbers and removing them; the remaining parts are then attached to each other. For example, if the vector is [0,1,1,0], then after removing [1,1] we get [0,0]. If no consecutive equal elements exist, removing a single element also costs one delete.
We need to write a function that returns the minimum number of deletes to make the vector empty.
Example 1:
Input: [0,1,1,0]
Output: 2
Explanation: [0,1,1,0] -> [0,0] -> []
Example 2:
Input: [1,0,1,0]
Output: 3
Explanation: [1,0,1,0] -> [0,1,0] -> [0,0] -> [].
Example 3:
Input: [1,1,1]
Output: 1
Explanation: [1,1,1] -> []
I am unsure of how to solve this question. I feel that we can use a greedy approach:
Remove all consecutive equal elements and increment the delete counter for each;
Remove the b from triples of the form <a, b, c> where a == c and a != b; if there had been multiple consecutive bs, they would have been deleted in step (1) above. Increment the delete counter once, as we delete one b.
Repeat steps (1) and (2) as long as we can.
Increment delete counter once for each of the remaining elements in the vector.
But I am not sure if this would work. Could someone please confirm if this is the right approach? If not, how do we solve this?
Hint
You can simplify this problem greatly by noticing the following fact: a chain of consecutive zeros or ones can be shortened or lengthened without changing the final solution. For example, the following two vectors have the same solution:
[1, 0, 1]
[1, 0, 0, 0, 0, 0, 0, 1]
With that in mind, the solution becomes simpler. So I encourage you to pause and try to figure it out!
Solution
With the previous remark, we can reduce the problem to vectors of alternating zeros and ones. In fact, since zero and one have no special meaning here, it suffices to solve for all such vectors that start with... say, a one.
[] # number of steps: 0
[1] # number of steps: 1
[1, 0] # number of steps: 2
[1, 0, 1] # number of steps: 2
[1, 0, 1, 0] # number of steps: 3
[1, 0, 1, 0, 1] # number of steps: 3
[1, 0, 1, 0, 1, 0] # number of steps: 4
[1, 0, 1, 0, 1, 0, 1] # number of steps: 4
We notice a pattern: the solution seems to be floor(n / 2) + 1 for n > 1, where n is the length of the sequence. But can we prove it?
Proof
We will proceed by induction. Suppose you have a solution for a vector of length n - 2; then any move you make (except deleting one of the two elements at the edges of the vector) will have the following result.
[..., 0, 1, 0, 1, 0 ...]
            ^--------- delete this one
Result:
[..., 0, 1, 1, 0, ...]
But we already mentioned that a chain of consecutive zeros or ones can be shortened or lengthened without changing the final solution. So the result of the deletion is in fact equivalent to now having to solve for:
[..., 0, 1, 0, ...]
What we did is one deletion on n elements, and we arrived at a case which is equivalent to solving for n - 2 elements. So the solution for a vector of size n is...
Solution(n) = Solution(n - 2) + 1
= [floor((n - 2) / 2) + 1] + 1
= floor(n / 2) + 1
Keeping in mind that the solutions for [1] and [1, 0] are respectively 1 and 2, this concludes our proof. Notice that [] turns out to be an edge case.
Interestingly enough, this proof also shows us that the optimal sequence of deletions for a given vector is highly non-unique. You can simply delete any block of ones or zeros, except for the first and last ones, and you will end up with an optimal solution.
Conclusion
In conclusion, given an arbitrary vector of ones and zeros, the smallest number of deletions you will need can be computed by counting the number of groups of consecutive ones or zeros. If n is that group count, the answer is floor(n / 2) + 1 for n > 1 (and n itself for n ≤ 1).
Just for fun, here is a Python implementation to solve this problem.
from itertools import groupby

def solution(vector):
    n = 0
    for group in groupby(vector):
        n += 1
    return n // 2 + 1 if n > 1 else n
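For example, this reproduces the three examples above:

>>> [solution(v) for v in ([0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 1])]
[2, 3, 1]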
Intuition: if we remove all the segments of one of the two values, the remaining elements are all of the other value, and removing them takes only one more operation.
Choosing the value that is not the starting one for these segment removals leads to the optimal result.
Solution:
Take the value other than the starting one as a flag.
Count the number of contiguous segments of the flag in the vector.
The answer is the above count + 1 (once the flag segments are removed, everything left equals the starting value and forms one segment, costing one more operation); a short code sketch follows the examples below.
So, the answer is:
answer = Count of contiguous segments of flag + 1
Example 1:
[0,1,1,0]
flag = 1
Count of subsegments with flag = 1
So, answer = 1 + 1 = 2
Example 2:
[1,0,1,0]
flag = 0
Count of subsegments with flag = 2
So, answer = 2 + 1 = 3
Example 3:
[1,1,1]
flag = 0
Count of subsegments with flag = 0
So, answer = 0 + 1 = 1
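Here is a rough Python sketch of this counting idea (the function name min_deletes is made up, and this is sketch code rather than a vetted implementation):

from itertools import groupby

def min_deletes(vector):
    # Count the contiguous segments of the non-starting value (the "flag"),
    # then add one delete for the remaining elements, which all equal the
    # starting value and merge into a single block.
    if not vector:
        return 0
    flag = 1 - vector[0]
    flag_segments = sum(1 for value, _ in groupby(vector) if value == flag)
    return flag_segments + 1

# min_deletes([0, 1, 1, 0]) == 2, min_deletes([1, 0, 1, 0]) == 3, min_deletes([1, 1, 1]) == 1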
Related
I have n pairs of numbers: ( p[1], s[1] ), ( p[2], s[2] ), ... , ( p[n], s[n] )
where p[i] is an integer greater than 1, and s[i] is an integer with 0 <= s[i] < p[i].
Is there any way to determine the minimum positive integer a such that for each pair:
( s[i] + a ) mod p[i] != 0
Is there anything better than brute force?
It is possible to do better than brute force. Brute force would be O(A·n), where A is the minimum valid value for a that we are looking for.
The approach described below uses a min-heap and achieves O(n·log(n) + A·log(n)) time complexity.
First, notice that any value of a of the form (p[i] - s[i]) + k * p[i] leads to a remainder equal to zero in the ith pair, for every non-negative integer k. Thus, the numbers of that form are invalid values for a (the solution that we are looking for is different from all of them).
The proposed algorithm is an efficient way to generate the numbers of that form (for all i and k), i.e. the invalid values for a, in increasing order. As soon as the current value differs from the previous one by more than 1, it means that there was a valid a in-between.
The pseudocode below details this approach.
1. construct a min-heap from all the following pairs (p[i] - s[i], p[i]),
where the heap comparator is based on the first element of the pairs.
2. a0 = 0; maxA = lcm of all p[i]   (starting from a0 = 0 ensures the returned a is strictly positive)
3. Repeat
3a. Retrieve and remove the root of the heap, (a, p[i]).
3b. If a - a0 > 1 then the result is a0 + 1. Exit.
3c. if a is at least maxA, then no solution exists. Exit.
3d. Insert into the heap the value (a + p[i], p[i]).
3e. a0 = a
Remark: it is possible for such an a to not exist. If a valid a is not found below LCM(p[1], p[2], ... p[n]), then it is guaranteed that no valid a exists.
I'll show below an example of how this algorithm works.
Consider the following (p, s) pairs: { (2, 1), (5, 3) }.
The first pair indicates that a should avoid values like 1, 3, 5, 7, ..., whereas the second pair indicates that we should avoid values like 2, 7, 12, 17, ... .
The min-heap initially contains the first element of each sequence (step 1 of the pseudocode) -- the current heap contents are shown in brackets below:
[1], 3, 5, 7, ...
[2], 7, 12, 17, ...
We retrieve and remove the head of the heap, i.e., the minimum among the two bracketed values, which is 1. We add into the heap the next element from that sequence, so the heap now contains the elements 2 and 3:
1, [3], 5, 7, ...
[2], 7, 12, 17, ...
We again retrieve the head of the heap, this time the value 2, and add the next element of that sequence (7) into the heap:
1, [3], 5, 7, ...
2, [7], 12, 17, ...
The algorithm continues; we next retrieve the value 3 and add 5 into the heap:
1, 3, [5], 7, ...
2, [7], 12, 17, ...
Finally, we retrieve the value 5. At this point we realize that the value 4 is not among the invalid values for a, and thus it is the solution we are looking for.
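Here is a rough Python sketch of the heap-based pseudocode above, using heapq (the function name smallest_valid_a and the lcm computation are my own; treat it as a sketch):

import heapq
from math import gcd
from functools import reduce

def smallest_valid_a(pairs):
    # pairs: list of (p, s) with p > 1 and 0 <= s < p.
    # Returns the smallest positive a with (s + a) % p != 0 for every pair,
    # or None if no such a exists below the lcm of the p values.
    max_a = reduce(lambda x, y: x * y // gcd(x, y), (p for p, _ in pairs))
    # The first invalid value contributed by (p, s) is p - s; later ones follow every p steps.
    heap = [(p - s, p) for p, s in pairs]
    heapq.heapify(heap)
    prev = 0                              # start at 0 so the result is strictly positive
    while True:
        a, p = heapq.heappop(heap)
        if a - prev > 1:
            return prev + 1
        if a >= max_a:
            return None
        heapq.heappush(heap, (a + p, p))
        prev = a

# smallest_valid_a([(2, 1), (5, 3)]) == 4, matching the walkthrough above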
I can think of two different solutions. First:
p_max = lcm(p[0], p[1], ..., p[n]) - 1;
for a = 0 to p_max:
    zero_found = false;
    for i = 0 to n:
        if (s[i] + a) mod p[i] == 0:
            zero_found = true;
            break;
    if !zero_found:
        return a;
return -1;
I suppose this is the one you call "brute force". Notice that p_max is the least common multiple of the p[i]s, minus 1 (the solution is either in the closed interval [0, p_max], or it does not exist). The complexity of this solution is O(n * p_max) in the worst case (plus the running time for calculating the lcm!). There is a better solution regarding time complexity, but it uses an additional binary array - a classical time-space tradeoff. Its idea is similar to the Sieve of Eratosthenes, but for remainders instead of primes :)
p_max = lcm(p[0], p[1], ..., p[n]) - 1;
int remainders[p_max + 1] = {0};
for i = 0 to n:
    int rem = s[i] - p[i];
    if s[i] == 0:
        rem = 0;                 // a = 0 already gives (s[i] + a) mod p[i] == 0
    while rem >= -p_max:
        remainders[-rem] = 1;
        rem -= p[i];
for a = 0 to p_max:
    if !remainders[a]:
        return a;
return -1;
Explanation of the algorithm: first, we create an array remainders that indicates whether a certain negative remainder exists in the whole set. What is a negative remainder? It's simple: notice that 6 = 2 mod 4 is equivalent to 6 = -2 mod 4. If remainders[i] == 1, it means that if we add i to one of the s[j], we get a multiple of p[j] (i.e., remainder 0, which is what we want to avoid). The array is populated with all possible negative remainders, up to -p_max. Now all we have to do is search for the first i such that remainders[i] == 0 and return it, if it exists - notice that the solution does not have to exist. In the problem text, you indicated that you are searching for the minimum positive integer; I don't see why zero would not fit (if all s[i] are positive). However, if that is a strong requirement, just change the final loop to start from 1 instead of 0, and increase p_max by one.
The complexity of this algorithm is n + sum(p_max / p[i]) = n + p_max * sum(1 / p[i]), where i goes from 0 to n. Since every p[i] is at least 2, this is asymptotically better than the brute force solution.
An example for better understanding: suppose that the input is (5,4), (5,1), (2,0). p_max is lcm(5,5,2) - 1 = 10 - 1 = 9, so we create array with 10 elements, initially filled with zeros. Now let's proceed pair by pair:
from the first pair, we have remainders[1] = 1 and remainders[6] = 1
second pair gives remainders[4] = 1 and remainders[9] = 1
last pair gives remainders[0] = 1, remainders[2] = 1, remainders[4] = 1, remainders[6] = 1 and remainders[8] = 1.
Therefore, first index with zero value in the array is 3, which is a desired solution.
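And here is a short Python sketch of the sieve idea (the helper name first_valid_a is made up; as in the explanation above, it returns the first valid a >= 0, or -1 if none exists):

from math import gcd
from functools import reduce

def first_valid_a(pairs):
    # pairs: list of (p, s) with p > 1 and 0 <= s < p.
    p_max = reduce(lambda x, y: x * y // gcd(x, y), (p for p, _ in pairs)) - 1
    invalid = [False] * (p_max + 1)
    for p, s in pairs:
        a = (p - s) % p               # smallest a >= 0 with (s + a) % p == 0
        while a <= p_max:
            invalid[a] = True
            a += p
    for a in range(p_max + 1):
        if not invalid[a]:
            return a
    return -1

# first_valid_a([(5, 4), (5, 1), (2, 0)]) == 3, as in the example above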
I would like to know, what is the best approach to solve this problem:
Given x, y, and y integers a1, a2, a3, ..., ay, find all combinations satisfying
a1 ± a2 ± ... ± ay = x, with y < 20.
My recent approach is to find all permutations of 1s and 0s stored in a table T and then, depending on whether T[i] is 1 or 0, add or subtract ai from the sum. The problem is that there are n! permutations of an n-element array. Hence, for a 20-element array, I would have to check 20! possibilities, most of which are repeated. Could you please suggest any potential approach to solving my problem?
There are only 2^20 (just over a million) binary vectors of length 20, rather than the infeasible 20!. You should be able to brute-force that few in less than a second, especially if you use a Gray code, which allows you to pass from one candidate sum to the next in a single step (e.g. to go from a + b - c - d to a + b - c + d, just add 2*d).
The excellent branch and bound idea of #MikeWise would be good if y gets much larger. Generate a tree starting with a root node of 0. Give it children -a1 and +a1. Then 4 grandchildren by adding and subtracting a2, and so on. If you ever get farther from the target x than the sum of the remaining ai, you can prune that branch. In the worst case, this might be slightly worse than the Gray-code based brute force (because you need to do so much more processing at each node), but in the best case you might be able to prune away most possibilities.
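Here is a rough Python sketch of that branch-and-bound idea (my own illustration, with a made-up function name): it extends a partial signed sum one term at a time and prunes a branch as soon as the remaining terms cannot bridge the gap to the target.

def signed_sums_bb(nums, target):
    # suffix[i] = |nums[i]| + |nums[i+1]| + ..., used for the pruning test
    suffix = [0] * (len(nums) + 1)
    for i in range(len(nums) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + abs(nums[i])
    results = []

    def extend(i, total, signs):
        if i == len(nums):
            if total == target:
                results.append([s * a for s, a in zip(signs, nums)])
            return
        if abs(target - total) > suffix[i]:    # prune: target unreachable from here
            return
        extend(i + 1, total + nums[i], signs + [1])
        extend(i + 1, total - nums[i], signs + [-1])

    extend(0, 0, [])
    return results

# e.g. len(signed_sums_bb(list(range(1, 21)), 100)) should match the 2865 count
# reported for the Gray-code version below.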
On Edit: Here is some Python code. First I define a generator which, given an integer n, successively returns which bit position needs to flip to step through a Gray code:
def grayBit(n):
    code = [0]*n
    odd = True
    done = False
    while not done:
        if odd:
            code[0] = 1 - code[0] #flip bit
            odd = False
            yield 0
        else:
            i = code.index(1)
            if i == n-1:
                done = True
            else:
                code[i+1] = 1 - code[i+1]
                odd = True
                yield i+1
(This uses an algorithm which I learned years ago in the excellent book "Constructive Combinatorics" by Stanton and White).
Then -- I use this to return all solutions (as lists consisting of the input list of numbers with negative signs inserted as needed). The key point is that I can take the current bit-to-flip and either add or subtract twice the corresponding number:
def signedSums(nums, target):
    n = len(nums)
    patterns = []
    total = sum(nums)
    pattern = [1]*n
    if target == total: patterns.append([x*y for x,y in zip(nums,pattern)])
    deltas = [2*i for i in nums]
    for i in grayBit(n):
        if pattern[i] == 1:
            total -= deltas[i]
        else:
            total += deltas[i]
        pattern[i] = -1 * pattern[i]
        if target == total: patterns.append([x*y for x,y in zip(nums,pattern)])
    return patterns
Typical output:
>>> signedSums([1,2,3,4,5,9],6)
[[1, -2, -3, -4, 5, 9], [1, 2, 3, -4, -5, 9], [-1, 2, -3, 4, -5, 9], [1, 2, 3, 4, 5, -9]]
It only takes about a second to evaluate:
>>> len(signedSums([i for i in range(1,21)],100))
2865
Hence there are 2865 ways to add or subtract the integers in the range 1,2,..,20 to get a net sum of 100.
I assumed that a1 can be either added or subtracted (instead of just added, which is what your question implies if taken literally). Note that if you really want to insist that a1 occurs positively, then you could just subtract it from x and apply the above algorithm to the rest of the list and the adjusted target.
Finally, it is not too hard to see that if you solve the subset-sum problem with the set of weights {2*a1, 2*a2, 2*a3, ..., 2*ay} and with a target sum of x + a1 + a2 + ... + ay, then the selected subsets correspond exactly to the positions where positive signs occur in solutions of the original problem. Thus your problem is easily reducible to the subset-sum problem, and it is therefore NP-complete to determine whether it has any solutions (and NP-hard to list them all).
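To illustrate the reduction concretely, here is a small (deliberately brute-force) sketch with a made-up name: choosing the positions that receive a '+' sign is exactly a subset-sum instance over the doubled weights with target x + a1 + ... + ay.

from itertools import combinations

def signed_sums_via_subset_sum(nums, target):
    total = sum(nums)
    goal = target + total                      # subset-sum target over doubled weights
    n = len(nums)
    solutions = []
    for r in range(n + 1):
        for plus in combinations(range(n), r):
            if sum(2 * nums[i] for i in plus) == goal:
                chosen = set(plus)
                solutions.append([nums[i] if i in chosen else -nums[i] for i in range(n)])
    return solutions

# signed_sums_via_subset_sum([1, 2, 3, 4, 5, 9], 6) returns the same four sign patterns
# as signedSums above (possibly in a different order).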
We have conditions:
a1 ± a2 ± ... ± ay = x, y<20 [1]
First of all, I would generalize condition [1], allowing every 'a', including 'a1', to be ±:
±a1 ± a2 ± ... ± ay = x [2]
If we have the solutions for [2], we can easily get the solutions for [1]: keep only the combinations where a1 appears with a plus sign.
To solve [2] we can use the following approach:
combinations list x
    | x == 0 && null list = [[]]
    | null list = []
    | otherwise = plusCombinations ++ minusCombinations
    where
        a = head list
        rest = tail list
        plusCombinations = map (\c -> a:c) $ combinations rest (x-a)
        minusCombinations = map (\c -> -a:c) $ combinations rest (x+a)
Explanation:
The first guard checks whether x has reached zero and all numbers from the list have been used. This means a solution has been found, and we return a single (empty) combination: [[]]
The second guard checks whether the list is empty; since x is not 0 at that point, no solution can be found, and we return no combinations: []
The third branch means we have two alternatives: use ai with '+' or with '-', so we concatenate the plus and minus combinations
Example output:
*Main> combinations [1,2,3,4] 2
[[1,2,3,-4],[-1,2,-3,4]]
*Main> combinations [1,2,3,4] 3
[]
*Main> combinations [1,2,3,4] 4
[[1,2,-3,4],[-1,-2,3,4]]
Given the list of numbers
1 15 2 5 10
I need to obtain
1 2 5 10 15
The only operation I can do is "move the number X to position Y".
In the above example I only need to do "move the number 15 to position 5".
I would like to minimize the number of operations but I can't find/remember a classical algorithm for that, given the operation available.
Some background :
I'm interacting with an API for a kanban-like service.
I have about 600 cards and some actions on our bug-tracker can imply a reordering of these 600 cards in the kanban (multiple cards can move at the same time if the priority of a project is changed)
I can do it in 600 calls to the API but I'm trying to reduce that number as much as possible.
Lemma: The minimum number of (delete element, insert element) pairs you can perform to sort a list L (in increasing order) is:
S_min(L) = |L| - |LIC(L)|
where LIC(L) is the Longest Increasing Subsequence of L.
Thus, you have to:
Establish the LIC of your list.
Remove the elements not in it and insert them back at the appropriate position (using binary search).
Proof:
By induction.
For a list of size 1, the longest increasing subsequence is of length... 1! The list is already sorted so the number of (del,ins) pairs required is
|L| - |LIC(L)| = 1 - 1 = 0
Now let L_n be a list of length n, 1 ≤ n. Let L_{n+1} be the list obtained by adding an element e_{n+1} to the left of L_n.
This element may or may not influence the Longest Increasing Subsequence. Let's try to see how...
Let i_{n,1} and i_{n,2} be the first two elements of LIC(L_n) (*):
If e_{n+1} > i_{n,2}, then LIC(L_{n+1}) = LIC(L_n)
If e_{n+1} ≤ i_{n,1}, then LIC(L_{n+1}) = e_{n+1} || LIC(L_n)
Else, LIC(L_{n+1}) = LIC(L_n) - i_{n,1} + e_{n+1}. We keep the LIC with the highest first element. This is done by removing i_{n,1} from the LIC and replacing it with e_{n+1}.
In the first case, we delete e_{n+1}, so we are left with sorting L_n. By the induction hypothesis, this requires n - |LIC(L_n)| (deletion, insertion) pairs. We then have to insert e_{n+1} at the appropriate position. Thus:
S_min(L_{n+1}) = 1 + S_min(L_n)
S_min(L_{n+1}) = 1 + n - |LIC(L_n)|
S_min(L_{n+1}) = |L_{n+1}| - |LIC(L_{n+1})|
In the second case, we ignore e_{n+1}. We begin by deleting the elements not in LIC(L_n). These elements have to be inserted again! There are
S_min(L_n) = |L_n| - |LIC(L_n)|
such elements.
Now, we just have to take care to insert them in the right order (relative to e_{n+1}). In the end, it requires:
S_min(L_{n+1}) = |L_n| - |LIC(L_n)|
S_min(L_{n+1}) = |L_n| + 1 - (|LIC(L_n)| + 1)
Since we have |LIC(L_{n+1})| = |LIC(L_n)| + 1 and |L_{n+1}| = |L_n| + 1, we get in the end:
S_min(L_{n+1}) = |L_{n+1}| - |LIC(L_{n+1})|
The last case can be proved by considering the list L'_n obtained by removing i_{n,1} from L_{n+1}. In that case LIC(L'_n) = LIC(L_{n+1}), and thus:
|LIC(L'_n)| = |LIC(L_n)| (1)
From there, we can sort L'_n, which takes |L'_n| - |LIC(L'_n)| pairs by the induction hypothesis. The previous equality (1) leads to the result.
(*): If |LIC(L_n)| < 2, then i_{n,2} doesn't exist. Just ignore the comparisons with it. In that case, only cases 2 and 3 apply... The result is still valid.
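As a concrete illustration of the lemma (a sketch, not part of the proof; the name min_moves is made up), here is an O(n log n) computation of just the count S_min(L) via patience sorting, using a longest non-decreasing subsequence:

from bisect import bisect_right

def min_moves(lst):
    # tails[k] = smallest possible last value of a non-decreasing subsequence of length k+1
    tails = []
    for x in lst:
        i = bisect_right(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(lst) - len(tails)

# min_moves([1, 15, 2, 5, 10]) == 1, min_moves([4, 3, 2, 1]) == 3, min_moves([1, 2, 4, 3]) == 1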
One possible solution is to find the longest increasing subsequence and move only elements that aren't inside it.
I can't prove it's optimal, but it is easy to prove it is correct and better than N swaps.
Here is a proof-of-concept in Python 2. I implemented it as an O(n^2) algorithm, but I'm pretty sure it can be reduced to O(n log n).
from operator import itemgetter

def LIS(V):
    # O(n^2) longest non-decreasing subsequence; yields its indices in reverse order.
    T = [1]*(len(V))
    P = [-1]*(len(V))
    for i, v in enumerate(V):
        for j in xrange(i-1, -1, -1):
            if T[j]+1 > T[i] and V[j] <= V[i]:
                T[i] = T[j] + 1
                P[i] = j
    i, _ = max(enumerate(T), key=itemgetter(1))
    while i != -1:
        yield i
        i = P[i]

def complement(L, n):
    # Indices in range(n) that are not in the sorted index list L.
    for a, b in zip([-1] + L, L + [n]):
        for i in range(a+1, b):
            yield i

def find_moves(V):
    n = len(V)
    L = list(LIS(V))[::-1]
    SV = sorted(range(n), key=lambda i: V[i])
    moves = [(x, SV.index(x)) for x in complement(L, n)]
    while len(moves):
        a, b = moves.pop()
        yield a, b
        # shift the remaining source indices to account for the move just made
        moves = [(x-(x>a)+(x>b), y) for x, y in moves]

def make_and_print_moves(V):
    print 'Initial array:', V
    for a, b in find_moves(V):
        x = V.pop(a)
        V.insert(b, x)
        print 'Move {} to {}. Result: {}'.format(a, b, V)
    print '***'

make_and_print_moves([1, 15, 2, 5, 10])
make_and_print_moves([4, 3, 2, 1])
make_and_print_moves([1, 2, 4, 3])
It outputs something like:
Initial array: [1, 15, 2, 5, 10]
Move 1 to 4. Result: [1, 2, 5, 10, 15]
***
Initial array: [4, 3, 2, 1]
Move 3 to 0. Result: [1, 4, 3, 2]
Move 3 to 1. Result: [1, 2, 4, 3]
Move 3 to 2. Result: [1, 2, 3, 4]
***
Initial array: [1, 2, 4, 3]
Move 3 to 2. Result: [1, 2, 3, 4]
***
I am given a function rand5() that generates, with a uniform distribution, a random integer in the closed interval [1,5]. How can I use rand5(), and nothing else, to create a function rand7(), which generates integers in [1,7] (again, uniformly distributed) ?
I searched stackoverflow, and found many similar questions, but not exactly like this one.
My initial attempt was rand5() + 0.5*rand5() + 0.5*rand5(). But this won't generate integers from 1 to 7 with uniform probability. Any answers, or links to answers, are very welcome.
Note that a perfect uniform distribution cannot be achieved with a bounded number of rand5() invocations, because for every k: 5^k % 7 != 0 - so you will always have some "spare" elements.
Here is a solution that uses an unbounded number of rand5() invocations:
Draw two numbers, x1 and x2. There are 5*5 = 25 possible outcomes for this.
Note that 25/7 ~= 3.57. Choose 3*7 = 21 combinations such that each of them is mapped to one number in [1,7]; for the other 4 combinations, redraw (see the sketch after the mapping below).
For example:
(1,1), (1,2), (2,1): 1
(3,1), (1,3), (3,2): 2
(3,3), (1,4), (4,1): 3
(2,4), (4,2), (3,4): 4
(4,3), (4,4), (1,5): 5
(5,1), (2,5), (5,2): 6
(5,3), (3,5), (4,5): 7
(5,4), (5,5), (2,3), (2,2): redraw
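Here is a small Python sketch of the same rejection idea; it uses a different (but equivalent) way of accepting 21 of the 25 outcomes, so the concrete mapping differs from the table above:

import random

def rand5():
    return random.randint(1, 5)

def rand7():
    while True:
        idx = (rand5() - 1) * 5 + (rand5() - 1)   # uniform over [0, 24]
        if idx < 21:                              # accept 21 outcomes, 3 per value in [1, 7]
            return idx % 7 + 1
        # otherwise: redraw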
Here's a simple way:
Use rand5() to generate a sequence of three random integers from the set { 1, 2, 4, 5 } (i.e., throw away any 3 that is generated).
If all three numbers are in the set { 1, 2 }, discard the sequence and return to step 1.
For each number in the sequence, map { 1, 2} to 0 and { 4, 5 } to 1. Use these as the three bit values for a 3-bit number. Because the bits cannot all be 0, the number will be in the range [1, 7]. Because each bit is 0 or 1 with equal probability, the distribution over [1, 7] should be uniform.
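A possible Python sketch of this bit-based scheme, assuming a rand5() like the one above (sketch code, not a polished implementation):

def rand7_bits():
    while True:
        bits = []
        for _ in range(3):
            r = rand5()
            while r == 3:                 # keep only draws from {1, 2, 4, 5}
                r = rand5()
            bits.append(0 if r <= 2 else 1)
        value = bits[0] * 4 + bits[1] * 2 + bits[2]
        if value != 0:                    # the all-zero pattern is discarded
            return value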
OK, I had to think about it for a while, but it is actually not that hard. Imagine that instead of rand5 you had rand2, which outputs either 0 or 1. You can make rand2 out of rand5 by simply doing
rand2() {
    r = rand5()
    while (r == 3)        // reject 3 so that 0 and 1 are equally likely
        r = rand5()
    if (r > 3) return 1
    else return 0
}
Now, using rand2 multiple times, build a tree to get rand7. For example, starting from [1,2,3,4,5,6,7], a throw of rand2 which gives 0 subsets you to [1,2,3,4]; after another throw of rand2 which gives 1 you subset to [3,4]; and a final throw of 1 gives the output of rand7 as 4. In general this tree trick can work to take a rand2 and map to randx, where x is any integer.
Here's one meta-trick which comes in handy for lots of these problems: the bias is introduced when we treat the terms differently in some fashion, so if we treat them all the same at each step and perform operations only on the set, we'll stay out of trouble.
We have to call rand5() at least once (obviously!), but if we branch on that bad things happen unless we're clever. So instead let's call it once for each of the 7 possibilities:
In [126]: import random
In [127]: def r5():
.....: return random.randint(1, 5)
.....:
In [128]: [r5() for i in range(7)]
Out[128]: [3, 1, 3, 4, 1, 1, 2]
Clearly each of these terms was equally likely to be any of these numbers... but only one of them happened to be 2, so if our rule had been "choose whichever term rand5() returns 2 for" then it would have worked. Or 4, or whatever; if we simply loop long enough, that will happen. So there are lots of ways to come up with something that works. Here (in pseudocode -- this is terrible Python) is one way:
import random, collections

def r5():
    return random.randint(1, 5)

def r7():
    left = range(1, 8)
    while True:
        if len(left) == 1:
            return left[0]
        rs = [r5() for n in left]
        m = max(rs)
        how_many_at_max = rs.count(m)
        if how_many_at_max == len(rs):
            # all the same: try again
            continue
        elif how_many_at_max == 1:
            # hooray!
            return left[rs.index(m)]
        # keep only the non-maximals
        left = [l for l, r in zip(left, rs) if r != m]
which gives
In [189]: collections.Counter(r7() for _ in xrange(10**6))
Out[189]: Counter({7: 143570, 5: 143206, 4: 142827, 2: 142673, 6: 142604, 1: 142573, 3: 142547})
Given a set A containing n positive integers, how can I find the smallest integer >= 0 that can be obtained using all the elements in the set? Each element can be either added to or subtracted from the total.
A few examples to make this clear.
A = [ 2, 1, 3]
Result = 0 (2 + 1 - 3)
A = [1, 2, 0]
Result = 1 (-1 + 2 + 0)
A = [1, 2, 1, 7, 6]
Result = 1 (1 + 2 - 1 - 7 + 6)
You can solve it by using Boolean Integer Programming. There are several algorithms (e.g. Gomory or branch and bound) and free libraries (e.g. LP-Solve) available.
Calculate the sum of the list and call it s. Double the numbers in the list. Say the doubled numbers are a,b,c. Then you have the following equation system:
Boolean x,y,z
a*x+b*y+c*z >= s
Minimize ax+by+cz!
The boolean variables indicate whether the corresponding number should be added (when true) or subtracted (when false). The optimal objective value minus s is then the answer to the original problem.
[Edit]
I should mention that, by switching to the complemented variables (e.g. u = 1 - x), the transformed problem can be seen as a "knapsack problem" as well:
Boolean u, v, w
a*u + b*v + c*w <= (a + b + c) - s
Maximize a*u + b*v + c*w!
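Just as an illustration (this is a direct set-based dynamic program of mine, not the integer-programming formulation above, and the function name is made up), the answer can also be computed by tracking all reachable signed totals:

def smallest_nonneg_total(nums):
    reachable = {0}
    for a in nums:
        reachable = {t + a for t in reachable} | {t - a for t in reachable}
    return min(t for t in reachable if t >= 0)

# smallest_nonneg_total([2, 1, 3]) == 0
# smallest_nonneg_total([1, 2, 0]) == 1
# smallest_nonneg_total([1, 2, 1, 7, 6]) == 1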