Minimum common remainder of division - algorithm

I have n pairs of numbers: ( p[1], s[1] ), ( p[2], s[2] ), ... , ( p[n], s[n] )
where p[i] is an integer greater than 1 and s[i] is an integer with 0 <= s[i] < p[i].
Is there any way to determine the minimum positive integer a such that for each pair:
( s[i] + a ) mod p[i] != 0
Anything better than brute force?

It is possible to do better than brute force. Brute force would be O(A·n), where A is the minimum valid value for a that we are looking for.
The approach described below uses a min-heap and achieves O(n·log(n) + A·log(n)) time complexity.
First, notice that any a of the form (p[i] - s[i]) + k * p[i], for a non-negative integer k, leads to a remainder equal to zero in the ith pair. Thus, the numbers of that form are invalid a values (the solution that we are looking for is different from all of them).
The proposed algorithm is an efficient way to generate the numbers of that form (for all i and k), i.e. the invalid values for a, in increasing order. As soon as the current value differs from the previous one by more than 1, it means that there was a valid a in-between.
The pseudocode below details this approach.
1. construct a min-heap from all the following pairs (p[i] - s[i], p[i]),
where the heap comparator is based on the first element of the pairs.
2. a0 = -1; maxA = lcm(p[i])
3. Repeat
3a. Retrieve and remove the root of the heap, (a, p[i]).
3b. If a - a0 > 1 then the result is a0 + 1. Exit.
3c. if a is at least maxA, then no solution exists. Exit.
3d. Insert into the heap the value (a + p[i], p[i]).
3e. a0 = a
Remark: it is possible for such an a to not exist. If a valid a is not found below LCM(p[1], p[2], ... p[n]), then it is guaranteed that no valid a exists.
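For concreteness, here is a minimal Python sketch of this heap-based procedure (not part of the original answer; the function name min_valid_a and the choice to start the scan at 0 so that the result is strictly positive are my own):
import heapq
from math import gcd
from functools import reduce

def min_valid_a(pairs):
    # pairs: list of (p, s) with p > 1 and 0 <= s < p
    max_a = reduce(lambda acc, p: acc * p // gcd(acc, p), (p for p, _ in pairs), 1)  # lcm of all p[i]
    heap = [(p - s, p) for p, s in pairs]  # (first positive invalid a for this pair, its period p)
    heapq.heapify(heap)
    prev = 0  # last invalid value seen; starting at 0 keeps the answer strictly positive
    while True:
        a, p = heapq.heappop(heap)
        if a - prev > 1:
            return prev + 1  # gap found: prev + 1 is valid for every pair
        if a >= max_a:
            return None      # no valid a below lcm(p[i]), so none exists
        heapq.heappush(heap, (a + p, p))
        prev = a

print(min_valid_a([(2, 1), (5, 3)]))  # 4, as in the walkthrough below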
I'll show below an example of how this algorithm works.
Consider the following (p, s) pairs: { (2, 1), (5, 3) }.
The first pair indicates that a should avoid values like 1, 3, 5, 7, ..., whereas the second pair indicates that we should avoid values like 2, 7, 12, 17, ... .
The min-heap initially contains the first element of each sequence (step 1 of the pseudocode) -- the values currently in the heap are shown in brackets:
[1], 3, 5, 7, ...
[2], 7, 12, 17, ...
We retrieve and remove the head of the heap, i.e., the minimum of the two bracketed values, which is 1. We add into the heap the next element from that sequence, so the heap now contains the elements 2 and 3:
1, [3], 5, 7, ...
[2], 7, 12, 17, ...
We again retrieve the head of the heap, this time the value 2, and add the next element of that sequence (7) into the heap:
1, [3], 5, 7, ...
2, [7], 12, 17, ...
The algorithm continues: we next retrieve the value 3 and add 5 into the heap:
1, 3, [5], 7, ...
2, [7], 12, 17, ...
Finally, we retrieve the value 5. Since it differs from the previously retrieved value (3) by more than 1, the value 4 is not among the invalid values for a, so that is the solution that we are looking for.

I can think of two different solutions. First:
p_max = lcm(p[0], p[1], ..., p[n]) - 1;
for a = 0 to p_max:
    zero_found = false;
    for i = 0 to n:
        if (s[i] + a) mod p[i] == 0:
            zero_found = true;
            break;
    if !zero_found:
        return a;
return -1;
I suppose this is the one you call "brute force". Notice that p_max is the least common multiple of the p[i]s, minus 1 (the solution is either in the closed interval [0, p_max], or it does not exist). The complexity of this solution is O(n * p_max) in the worst case (plus the running time for calculating the lcm!). There is a better solution regarding the time complexity, but it uses an additional binary array - a classical time-space tradeoff. Its idea is similar to the Sieve of Eratosthenes, but for remainders instead of primes :)
p_max = lcm(p[0], p[1], ..., p[n]) - 1;
int remainders[p_max + 1] = {0};
for i = 0 to n:
    int rem = (s[i] == 0) ? 0 : s[i] - p[i];
    while rem >= -p_max:
        remainders[-rem] = 1;
        rem -= p[i];
for i = 0 to p_max:
    if !remainders[i]:
        return i;
return -1;
Explanation of the algorithm: first, we create an array remainders that will indicate whether a certain negative remainder appears in the whole set. What is a negative remainder? It's simple: notice that 6 = 2 mod 4 is equivalent to 6 = -2 mod 4. If remainders[i] == 1, it means that adding i to one of the s[j] yields a multiple of p[j] (which is 0 modulo p[j], and that is what we want to avoid). The array is populated with all possible negative remainders, up to -p_max. Now all we have to do is search for the first i such that remainders[i] == 0 and return it, if it exists - notice that the solution does not have to exist. In the problem text, you have indicated that you are searching for the minimum positive integer, but I don't see why zero would not fit (if all s[i] are positive). However, if that is a strong requirement, just change the final loop to start from 1 instead of 0, and increment p_max.
The complexity of this algorithm is n + sum(p_max / p[i]) = n + p_max * sum(1 / p[i]), where i goes from 0 to n. Since all the p[i]s are at least 2, this is asymptotically better than the brute-force solution.
An example for better understanding: suppose that the input is (5,4), (5,1), (2,0). p_max is lcm(5,5,2) - 1 = 10 - 1 = 9, so we create an array with 10 elements, initially filled with zeros. Now let's proceed pair by pair:
from the first pair, we have remainders[1] = 1 and remainders[6] = 1
second pair gives remainders[4] = 1 and remainders[9] = 1
last pair gives remainders[0] = 1, remainders[2] = 1, remainders[4] = 1, remainders[6] = 1 and remainders[8] = 1.
Therefore, the first index with a zero value in the array is 3, which is the desired solution.
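For reference, a runnable Python sketch of this sieve idea (my own illustration, marking invalid offsets directly rather than negative remainders; the name min_valid_a_sieve is made up):
from math import gcd
from functools import reduce

def min_valid_a_sieve(pairs):
    # pairs: list of (p, s); returns the smallest a >= 0 with (s + a) % p != 0 for every pair
    p_max = reduce(lambda acc, p: acc * p // gcd(acc, p), (p for p, _ in pairs), 1) - 1
    invalid = [False] * (p_max + 1)
    for p, s in pairs:
        a = (p - s) % p          # smallest a >= 0 that makes (s + a) % p == 0
        while a <= p_max:
            invalid[a] = True
            a += p
    for a, bad in enumerate(invalid):
        if not bad:
            return a
    return None                  # every candidate below lcm(p[i]) is invalid

print(min_valid_a_sieve([(5, 4), (5, 1), (2, 0)]))  # 3, matching the example above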

Related

What is the sublist array that can give us maximum 'flip-flop' sum?

My problem is that I'm given an array of length l.
Let's say this is my array: [1,5,4,2,9,3,6]; let's call this A.
This array can have multiple sub-arrays of adjacent (contiguous) elements, so we can have [1,5,4] or [2,9,3,6] and so on. The length of each sub-array does not matter.
But the trick is the sum part. We cannot just add all the numbers; it works like a flip-flop. So for the sublist [2,9,3,6] the sum would be 2 - 9 + 3 - 6 = -10, which is pretty small.
what would be the sublist (or sub-array if you like) of this array A that produces the maximum sum?
one possible way would be (from intuition) that the sublist [4,2,9] will output a decent result : [4, -2, 9] = (add all the elements) = 11.
The question is, how to come up with a result like this?
what is the sub-array that gives us the maximum flip-flop sum?
and mainly, what is the algorithm that takes any array as an input and outputs a sub-array with all numbers being adjacent and with the maximum sum?
I haven't come up with anything but I'm pretty sure I should pick either dynamic programming or divide and conquer to solve this issue. again, I don't know, I may be totally wrong.
The problem can indeed be solved using dynamic programming, by keeping track of the maximum sum ending at each position.
However, since the current element can be either added to or subtracted from a sum (depending on the length of the subsequence), we will keep track of the maximum sums ending here, separately, for both even as well as odd subsequence lengths.
The code below (implemented in python) does that (please see comments in the code for additional details).
The time complexity is O(n).
a = [1, 5, 4, 2, 9, 3, 6]

# initialize the best sequences which end at element a[0]
# best sequence with odd length ending at the current position
best_ending_here_odd = a[0]  # the sequence sum value
best_ending_here_odd_start_idx = 0
# best sequence with even length ending at the current position
best_ending_here_even = 0  # the sequence sum value
best_ending_here_even_start_idx = 1

best_sum = 0
best_start_idx = 0
best_end_idx = 0

for i in range(1, len(a)):
    # add/subtract the current element to the best sequences that
    # ended in the previous element
    best_ending_here_even, best_ending_here_odd = \
        best_ending_here_odd - a[i], best_ending_here_even + a[i]
    # swap starting positions (since a sequence which had odd length when it
    # was ending at the previous element has even length now, and vice-versa)
    best_ending_here_even_start_idx, best_ending_here_odd_start_idx = \
        best_ending_here_odd_start_idx, best_ending_here_even_start_idx

    # we can always make a sequence of even length with sum 0 (empty sequence)
    if best_ending_here_even < 0:
        best_ending_here_even = 0
        best_ending_here_even_start_idx = i + 1

    # update the best known sub-sequence if it is the case
    if best_ending_here_even > best_sum:
        best_sum = best_ending_here_even
        best_start_idx = best_ending_here_even_start_idx
        best_end_idx = i
    if best_ending_here_odd > best_sum:
        best_sum = best_ending_here_odd
        best_start_idx = best_ending_here_odd_start_idx
        best_end_idx = i

print(best_sum, best_start_idx, best_end_idx)
For the example sequence in the question, the above code outputs the following flip-flop sub-sequence:
4 - 2 + 9 - 3 + 6 = 14
As quertyman wrote, we can use dynamic programming. This is similar to Kadane's algorithm but with a few twists. We need a second temporary variable to keep track of trying each element both as an addition and as a subtraction. Note that a subtraction must be preceded by an addition but not vice versa. O(1) space, O(n) time.
JavaScript code:
function f(A){
  let prevAdd = [A[0], 1]        // sum, length
  let prevSubt = [0, 0]
  let best = [0, -1, 0, null]    // sum, idx, len, op
  let add
  let subt

  for (let i=1; i<A.length; i++){
    // Try adding
    add = [A[i] + prevSubt[0], 1 + prevSubt[1]]
    if (add[0] > best[0])
      best = [add[0], i, add[1], ' + ']

    // Try subtracting
    if (prevAdd[0] - A[i] > 0)
      subt = [prevAdd[0] - A[i], 1 + prevAdd[1]]
    else
      subt = [0, 0]
    if (subt[0] > best[0])
      best = [subt[0], i, subt[1], ' - ']

    prevAdd = add
    prevSubt = subt
  }

  return best
}

function show(A, sol){
  let [sum, i, len, op] = sol
  let str = A[i] + ' = ' + sum

  for (let l=1; l<len; l++){
    str = A[i-l] + op + str
    op = op == ' + ' ? ' - ' : ' + '
  }

  return str
}

var A = [1, 5, 4, 2, 9, 3, 6]

console.log(JSON.stringify(A))

var sol = f(A)

console.log(JSON.stringify(sol))
console.log(show(A, sol))
Update
Per OP's request in the comments, here is some theoretical elaboration on the general recurrence (pseudocode): let f(i, subtract) represent the maximum sum up to and including the element indexed at i, where subtract indicates whether or not the element is subtracted or added. Then:
// Try subtracting
f(i, true) = if f(i-1, false) - A[i] > 0
             then f(i-1, false) - A[i]
             otherwise 0

// Try adding
f(i, false) = A[i] + f(i-1, true)

(Note that when f(i-1, true) evaluates to zero, the best ending at i as an addition is just A[i].)
The recurrence only depends on the evaluation at the previous element, which means we can code it with O(1) space, just saving the very last evaluation after each iteration, and updating the best so far (including the sequence's ending index and length if we want).
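A compact Python sketch of this recurrence (my own illustration; it returns only the maximum sum, without tracking the start index or length):
def max_flip_flop_sum(A):
    # f_sub: best sum ending at i with A[i] subtracted
    # f_add: best sum ending at i with A[i] added (it starts, or extends, a subsequence)
    f_sub, f_add = 0, A[0]
    best = max(0, A[0])
    for x in A[1:]:
        f_sub, f_add = max(f_add - x, 0), x + f_sub
        best = max(best, f_sub, f_add)
    return best

print(max_flip_flop_sum([1, 5, 4, 2, 9, 3, 6]))  # 14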

Coin change with split into two sets

I'm trying to figure out how to solve a problem that seems like a tricky variation of a common algorithmic problem but requires additional logic to handle specific requirements.
Given a list of coins and an amount, I need to count the total number of possible ways to extract the given amount using an unlimited supply of the available coins (this is the classical change-making problem, https://en.wikipedia.org/wiki/Change-making_problem, easily solved using dynamic programming) that also satisfy some additional requirements:
the extracted coins are splittable into two sets of equal size (but not necessarily of equal sum)
the order of elements inside a set doesn't matter, but the order of the sets does.
Examples
Amount of 6 euros and coins [1, 2]: solutions are 4
[(1,1), (2,2)]
[(1,1,1), (1,1,1)]
[(2,2), (1,1)]
[(1,2), (1,2)]
Amount of 8 euros and coins [1, 2, 6]: solutions are 7
[(1,1,2), (1,1,2)]
[(1,2,2), (1,1,1)]
[(1,1,1,1), (1,1,1,1)]
[(2), (6)]
[(1,1,1), (1,2,2)]
[(2,2), (2,2)]
[(6), (2)]
So far I have tried different approaches, but the only way I found was to collect all the possible solutions (using dynamic programming) and then filter out the non-splittable solutions (those with an odd number of coins) and the duplicates. I'm quite sure there is a combinatorial way to calculate the total number of duplications, but I can't figure out how.
(The following method first enumerates partitions. My other answer generates the assignments in a bottom-up fashion.) If you'd like to count splits of the coin exchange according to coin count, and exclude redundant assignments of coins to each party (for example, where splitting 1 + 2 + 2 + 1 into two parts of equal cardinality is only either (1,1) | (2,2), (2,2) | (1,1) or (1,2) | (1,2) and element order in each part does not matter), we could rely on enumeration of partitions where order is disregarded.
However, we would need to know the multiset of elements in each partition (or an aggregate of similar ones) in order to count the possibilities of dividing them in two. For example, to count the ways to split 1 + 2 + 2 + 1, we would first count how many of each coin we have:
Python code:
def partitions_with_even_number_of_parts_as_multiset(n, coins):
    results = []

    def C(m, n, s, p):
        if n < 0 or m <= 0:
            return
        if n == 0:
            if not p:
                results.append(s)
            return
        C(m - 1, n, s, p)
        _s = s[:]
        _s[m - 1] += 1
        C(m, n - coins[m - 1], _s, not p)

    C(len(coins), n, [0] * len(coins), False)
    return results
Output:
=> partitions_with_even_number_of_parts_as_multiset(6, [1,2,6])
=> [[6, 0, 0], [2, 2, 0]]
(the second entry, [2, 2, 0], represents two 1's and two 2's)
Now since we are counting the ways to choose half of these, we need to find the coefficient of x^2 in the polynomial multiplication
(x^2 + x + 1) * (x^2 + x + 1) = ... 3x^2 ...
which represents the three ways to choose two from the multiset count [2,2]:
2,0 => 1,1
0,2 => 2,2
1,1 => 1,2
In Python, we can use numpy.polymul to multiply polynomial coefficients. Then we lookup the appropriate coefficient in the result.
For example:
import numpy

def count_split_partitions_by_multiset_count(multiset):
    coefficients = (multiset[0] + 1) * [1]
    for i in xrange(1, len(multiset)):
        coefficients = numpy.polymul(coefficients, (multiset[i] + 1) * [1])
    return coefficients[ sum(multiset) / 2 ]
Output:
=> count_split_partitions_by_multiset_count([2,2,0])
=> 3
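Putting the two functions together (this combination is my own sketch of how the pieces are meant to be used, written in the same Python 2 style as the code above; the name count_splits is made up) reproduces the first example from the question:
def count_splits(amount, coins):
    # sum, over every multiset of coins with an even number of parts,
    # of the number of ways to hand half of the parts to the first set
    total = 0
    for multiset in partitions_with_even_number_of_parts_as_multiset(amount, coins):
        total += count_split_partitions_by_multiset_count(multiset)
    return total

print count_splits(6, [1, 2])  # 4, matching the question's first example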
(Posted a similar answer here.)
Here is a table implementation and a little elaboration on algrid's beautiful answer. This produces an answer for f(500, [1, 2, 6, 12, 24, 48, 60]) in about 2 seconds.
The simple declaration of C(n, k, S) = sum(C(n - s_i, k - 1, S[i:])) means adding all the ways to get to the current sum, n using k coins. Then if we split n into all ways it can be partitioned in two, we can just add all the ways each of those parts can be made from the same number, k, of coins.
The beauty of fixing the subset of coins we choose from to a diminishing list means that any arbitrary combination of coins will only be counted once - it will be counted in the calculation where the leftmost coin in the combination is the first coin in our diminishing subset (assuming we order them in the same way). For example, the arbitrary subset [6, 24, 48], taken from [1, 2, 6, 12, 24, 48, 60], would only be counted in the summation for the subset [6, 12, 24, 48, 60] since the next subset, [12, 24, 48, 60] would not include 6 and the previous subset [2, 6, 12, 24, 48, 60] has at least one 2 coin.
Python code:
import time

def f(n, coins):
    t0 = time.time()
    min_coins = min(coins)
    m = [[[0] * len(coins) for k in xrange(n / min_coins + 1)] for _n in xrange(n + 1)]

    # Initialize base case
    for i in xrange(len(coins)):
        m[0][0][i] = 1

    for i in xrange(len(coins)):
        for _i in xrange(i + 1):
            for _n in xrange(coins[_i], n + 1):
                for k in xrange(1, _n / min_coins + 1):
                    m[_n][k][i] += m[_n - coins[_i]][k - 1][_i]

    result = 0
    for a in xrange(1, n + 1):
        b = n - a
        for k in xrange(1, n / min_coins + 1):
            result = result + m[a][k][len(coins) - 1] * m[b][k][len(coins) - 1]

    total_time = time.time() - t0
    return (result, total_time)

print f(500, [1, 2, 6, 12, 24, 48, 60])

Minimum number of special moves to sort number

Given the list of numbers
1 15 2 5 10
I need to obtain
1 2 5 10 15
The only operation I can do is "move the number X at position Y".
In the above example I only need to do "move the number 15 at position 5".
I would like to minimize the number of operations but I can't find/remember a classical algorithm for that, given the operation available.
Some background :
I'm interacting with an API for a kanban-like service.
I have about 600 cards and some actions on our bug-tracker can imply a reordering of these 600 cards in the kanban (multiple cards can move at the same time if the priority of a project is changed)
I can do it in 600 calls to the API but I'm trying to reduce that number as much as possible.
Lemma: The minimum number of (delete element, insert element) pairs you can perform to sort a list L (in increasing order) is:
S_min(L) = |L| - |LIC(L)|
Where LIC(L) is the Longest Increasing Subsequence.
Thus, you have to:
Establish the LIC of your list.
Remove the elements not in it and insert them back at the appropriate position (using binary search).
Proof:
By induction.
For a list of size 1, the longest increasing subsequence is of length... 1! The list is already sorted so the number of (del,ins) pairs required is
|L| - |LIC(L)| = 1 - 1 = 0
Now let L_n be a list of length n, n ≥ 1. Let L_{n+1} be the list obtained by adding an element e_{n+1} to the left of L_n.
This element may or may not influence the Longest Increasing Subsequence. Let's try to see how...
Let i_{n,1} and i_{n,2} be the first two elements of LIC(L_n) (*):
If e_{n+1} > i_{n,2}, then LIC(L_{n+1}) = LIC(L_n)
If e_{n+1} ≤ i_{n,1}, then LIC(L_{n+1}) = e_{n+1} || LIC(L_n)
Else, LIC(L_{n+1}) = LIC(L_n) - i_{n,1} + e_{n+1}. We keep the LIC with the highest first element. This is done by removing i_{n,1} from the LIC and replacing it with e_{n+1}.
In the first case, we delete e_{n+1}, so we are left with sorting L_n. By the induction hypothesis, this requires n - |LIC(L_n)| (deletion, insertion) pairs. We then have to insert e_{n+1} at the appropriate position. Thus:
S_min(L_{n+1}) = 1 + S_min(L_n)
S_min(L_{n+1}) = 1 + n - |LIC(L_n)|
S_min(L_{n+1}) = |L_{n+1}| - |LIC(L_{n+1})|
In the second case, we ignore e_{n+1}. We begin by deleting the elements not in LIC(L_n). These elements have to be inserted again! There are
S_min(L_n) = |L_n| - |LIC(L_n)|
such elements.
Now, we just have to take care to insert them in the right order (relative to e_{n+1}). In the end, it requires:
S_min(L_{n+1}) = |L_n| - |LIC(L_n)|
S_min(L_{n+1}) = |L_n| + 1 - (|LIC(L_n)| + 1)
Since we have |LIC(L_{n+1})| = |LIC(L_n)| + 1 and |L_{n+1}| = |L_n| + 1, we get in the end:
S_min(L_{n+1}) = |L_{n+1}| - |LIC(L_{n+1})|
The last case can be proved by considering the list L'_n obtained by removing i_{n,1} from L_{n+1}. In that case LIC(L'_n) = LIC(L_{n+1}), and thus:
|LIC(L'_n)| = |LIC(L_n)| (1)
From there, we can sort L'_n, which takes |L'_n| - |LIC(L'_n)| pairs by the induction hypothesis. The previous equality (1) leads to the result.
(*): If |LIC(L_n)| < 2, then i_{n,2} doesn't exist. Just ignore the comparisons with it. In that case, only case 2 and case 3 apply... The result is still valid.
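For the counting part alone, here is a small illustration (not from the answer above, just a sketch under the stated lemma) that computes |L| - |LIC(L)| in O(n log n) with patience sorting; the function name min_moves_to_sort is made up:
from bisect import bisect_right

def min_moves_to_sort(L):
    # tails[k] = smallest possible tail of a non-decreasing subsequence of length k+1
    tails = []
    for x in L:
        k = bisect_right(tails, x)
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    # by the lemma, the minimum number of (delete, insert) pairs is |L| - |LIC(L)|
    return len(L) - len(tails)

print(min_moves_to_sort([1, 15, 2, 5, 10]))  # 1
print(min_moves_to_sort([4, 3, 2, 1]))       # 3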
One possible solution is to find the longest increasing subsequence and move only elements that aren't inside it.
I can't prove it's optimal, but it is easy to prove it is correct and better than N swaps.
Here is a proof-of-concept in Python 2. I implemented it as an O(n^2) algorithm, but I'm pretty sure it can be reduced to O(n log n).
from operator import itemgetter

def LIS(V):
    T = [1]*(len(V))
    P = [-1]*(len(V))
    for i, v in enumerate(V):
        for j in xrange(i-1, -1, -1):
            if T[j]+1 > T[i] and V[j] <= V[i]:
                T[i] = T[j] + 1
                P[i] = j
    i, _ = max(enumerate(T), key=itemgetter(1))
    while i != -1:
        yield i
        i = P[i]

def complement(L, n):
    # yield every index in 0..n-1 that is not in the (sorted) index list L
    for a, b in zip([-1] + L, L + [n]):
        for i in range(a+1, b):
            yield i

def find_moves(V):
    n = len(V)
    L = list(LIS(V))[::-1]
    SV = sorted(range(n), key=lambda i: V[i])
    moves = [(x, SV.index(x)) for x in complement(L, n)]
    while len(moves):
        a, b = moves.pop()
        yield a, b
        moves = [(x-(x>a)+(x>b), y) for x, y in moves]

def make_and_print_moves(V):
    print 'Initial array:', V
    for a, b in find_moves(V):
        x = V.pop(a)
        V.insert(b, x)
        print 'Move {} to {}. Result: {}'.format(a, b, V)
    print '***'

make_and_print_moves([1, 15, 2, 5, 10])
make_and_print_moves([4, 3, 2, 1])
make_and_print_moves([1, 2, 4, 3])
It outputs something like:
Initial array: [1, 15, 2, 5, 10]
Move 1 to 4. Result: [1, 2, 5, 10, 15]
***
Initial array: [4, 3, 2, 1]
Move 3 to 0. Result: [1, 4, 3, 2]
Move 3 to 1. Result: [1, 2, 4, 3]
Move 3 to 2. Result: [1, 2, 3, 4]
***
Initial array: [1, 2, 4, 3]
Move 3 to 2. Result: [1, 2, 3, 4]
***

Maximum continuous achievable number

The problem
Definitions
Let's define a natural number N as a writable number (WN) for a number set U in the M-based numeral system if it can be written in this numeral system by concatenating members of U, using each member no more than once. More strictly: N = CONCAT(u_1, ..., u_k), where each u_i is a member of U, each member is used at most once, and CONCAT means concatenation.
Let's define a natural number N as a continuous achievable number (CAN) for a symbol set U in the M-based numeral system if it is a WN-number for U and M and N-1 is also a CAN-number for U and M. (An equivalent definition: N is CAN for U and M if all of 0 .. N are WN for U and M.)
Issue
Let us have a set U of S natural numbers (we are treating zero as a natural number) and a natural number M, M>1. The problem is to find the maximum CAN (MCAN) for the given U and M. The given set U may contain duplicates - but each duplicate can not be used more than once, of course (i.e. if U contains {x, y, y, z}, then each y could be used 0 or 1 time, so y could be used 0..2 times in total). Also, U is expected to be valid in the M-based numeral system (i.e. it can not contain the symbols 8 or 9 in any member if M=8). And, of course, members of U are numbers, not single symbols for M (so 11 is a valid member for M=10) - otherwise the problem would be trivial.
My approach
I have in mind a simple algorithm for now, which simply checks whether the current number is CAN via:
1. Check if 0 is WN for the given U and M. If yes, go to 2; otherwise we're done, MCAN is null.
2. Check if 1 is WN for the given U and M. If yes, go to 3; otherwise we're done, MCAN is 0.
...
So, this algorithm tries to build this whole sequence. I doubt this part can be improved, but maybe it can? Now, how to check whether a number is a WN. This is also some kind of 'substitution brute force'. I have an implementation of that for M=10 (in fact, since we're dealing with strings, any other M is not a problem) as a PHP function:
//$mNumber is our N, $rgNumbers is our U
function isWriteable($mNumber, $rgNumbers)
{
    if(in_array((string)$mNumber, $rgNumbers=array_map('strval', $rgNumbers), true))
    {
        return true;
    }
    for($i=1; $i<=strlen((string)$mNumber); $i++)
    {
        foreach($rgKeys = array_keys(array_filter($rgNumbers, function($sX) use ($mNumber, $i)
        {
            return $sX==substr((string)$mNumber, 0, $i);
        })) as $iKey)
        {
            $rgTemp = $rgNumbers;
            unset($rgTemp[$iKey]);
            if(isWriteable(substr((string)$mNumber, $i), $rgTemp))
            {
                return true;
            }
        }
    }
    return false;
}
So we're trying one piece and then checking, recursively, whether the rest can be written. If it can not be written, we try the next member of U. I think this is a point which can be improved.
Specifics
As you see, the algorithm tries to build all numbers before N and check whether they are WN. But the only goal is to find MCAN, so the questions are:
Maybe a constructive algorithm is excessive here? And, if so, what other options could be used?
Is there a quicker way to determine whether a number is WN for given U and M? (This point may be moot if the previous point has a positive answer and we won't need to build and check all numbers before N.)
Samples
U = {4, 1, 5, 2, 0}
M = 10
then MCAN = 2 (3 couldn't be reached)
U = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 11}
M = 10
then MCAN = 21 (everything before can be reached; for 22 there are not two '2' symbols available in total).
Hash the digit count for digits from 0 to m-1. Hash the numbers greater than m that are composed of one repeated digit.
MCAN is bound by the smallest digit for which all combinations of that digit for a given digit count cannot be constructed (e.g., X000,X00X,X0XX,XX0X,XXX0,XXXX), or (digit count - 1) in the case of zero (for example, for all combinations of four digits, combinations are needed for only three zeros; for a zero count of zero, MCAN is null). Digit counts are evaluated in ascending order.
Examples:
1. MCAN (10, {4, 1, 5, 2, 0})
3 is the smallest digit for which a digit-count of one cannot be constructed.
MCAN = 2
2. MCAN (10, {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 11})
2 is the smallest digit for which a digit-count of two cannot be constructed.
MCAN = 21
3. (from Alma Do Mundo's comment below) MCAN (2, {0,0,0,1,1,1})
1 is the smallest digit for which all combinations for a digit-count of four
cannot be constructed.
MCAN = 1110
4. (example from No One in Particular's answer)
MCAN (2, {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1111,11111111})
1 is the smallest digit for which all combinations for a digit-count of five
cannot be constructed.
MCAN = 10101
The recursion steps I've made are:
If the digit string is available in your alphabet, mark it used and return immediately
If the digit string is of length 1, return failure
Split the string in two and try each part
This is my code:
$u = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 11];
echo ncan($u), "\n"; // 21

// the functions
function satisfy($n, array $u)
{
    if (!empty($u[$n])) { // step 1
        --$u[$n];
        return $u;
    } elseif (strlen($n) == 1) { // step 2
        return false;
    }
    // step 3
    for ($i = 1; $i < strlen($n); ++$i) {
        $u2 = satisfy(substr($n, 0, $i), $u);
        if ($u2 !== false) {
            // return the remaining pool so that callers can keep consuming from it
            $u3 = satisfy(substr($n, $i), $u2);
            if ($u3 !== false) {
                return $u3;
            }
        }
    }
    return false;
}

function is_can($n, $u)
{
    return satisfy($n, $u) !== false;
}

function ncan($u)
{
    $umap = array_reduce($u, function(&$result, $item) {
        @$result[$item]++; // count how many times each member is available
        return $result;
    }, []);
    $i = -1;
    while (is_can($i + 1, $umap)) {
        ++$i;
    }
    return $i;
}
Here is another approach:
1) Order the set U with regard to the usual numerical ordering for base M.
2) If there is a symbol between 0 and (M-1) which is missing, then the first missing symbol is the first number which is NOT writable.
3) Find the first symbol which has the least number of entries in the set U. From this we get an upper bound on the first number which is NOT writable: that digit repeated (count + 1) times. For example, if M = 4 and U = { 0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3 }, then the number 333 is not writable. This gives us our upper bound.
4) So, if the first element of the set U which has the smallest number of occurrences is x and it has C occurrences, then we can clearly represent any number with C digits (since every digit has at least C entries).
5) Now we ask whether there is any number less than x repeated (C+1) times which can't be written. Well, any (C+1)-digit number has either (C+1) copies of the same symbol or at most C copies of any one symbol. Since x is minimal from step 3, (C+1) copies of y for y < x can be done, and (C) copies of a plus one b can be done for any distinct a, b, since they each have at least C copies.
The above method works for set elements of only 1 symbol. However, we now see that it becomes more complex if multi-symbol elements are allowed. Consider the following case:
U = { 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1111,11111111}
Define c(A,B) = the number of 'A' symbols of 'B' length.
So for our example, c(0,1) = 15, c(0,2) = 0, c(0,3) = 0, c(0,4) = 0, ...
c(1,1) = 3, c(1,2) = 0, c(1,3) = 0, c(1,4) = 1, c(1,5) = 0, ..., c(1,8) = 1
The shortest string of 0's we can't make has length 16. The shortest string of 1's we can't make also has length 16:
1 = 1
11 = 1+1
111 = 1+1+1
1111 = 1111
11111 = 1+1111
111111 = 1+1+1111
1111111 = 1+1+1+1111
11111111 = 11111111
111111111 = 1+11111111
1111111111 = 1+1+11111111
11111111111 = 1+1+1+11111111
111111111111 = 1111+11111111
1111111111111 = 1+1111+11111111
11111111111111 = 1+1+1111+11111111
111111111111111 = 1+1+1+1111+11111111
But can we make the string 11111101111? We can't, because the last run of 1's (1111) needs the only element with four 1's in a row. Once we take that, we can't make the first run of 1's (111111), because we only have an 8 (which is too big) or three 1's of length 1, which are too few.
So for multi-symbols, we need another approach.
We know from sorting and ordering our strings what is the minimum length we can't do for a given symbol. (In the example above, it would be 16 zeros or 16 ones.) So this is our upper bound for an answer.
What we have to do now is start at 1 and count up in base M. For each number, we write it in base M and then determine if we can make it from our set U. We do this by using the same approach used in the coin change problem: dynamic programming. (See for example http://www.geeksforgeeks.org/dynamic-programming-set-7-coin-change/ for the algorithm.) The only difference is that in our case we only have a finite number of each element, not an infinite supply.
Instead of subtracting the amount we are using like in the coin change problem, we strip the matching symbol off of the front of the string we are trying to match. (This is the opposite of our addition - concatenation.)
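A minimal Python sketch of this count-up-and-check idea (my own illustration, using simple recursive prefix stripping with backtracking instead of a full DP table; the names writable, to_base and mcan are made up, and members of U are treated as digit strings in base M):
from collections import Counter

def writable(digits, pool):
    # can `digits` be split into members of `pool`, each used at most as often as available?
    if not digits:
        return True
    for piece in list(pool):
        if pool[piece] > 0 and digits.startswith(piece):
            pool[piece] -= 1
            ok = writable(digits[len(piece):], pool)
            pool[piece] += 1
            if ok:
                return True
    return False

def to_base(n, m):
    symbols = "0123456789abcdefghijklmnopqrstuvwxyz"
    s = ""
    while True:
        s = symbols[n % m] + s
        n //= m
        if n == 0:
            return s

def mcan(u, m):
    pool = Counter(str(x) for x in u)     # how many times each member is available
    n = 0
    while writable(to_base(n, m), pool):
        n += 1
    return n - 1 if n else None           # None means even 0 is not writable (MCAN is null)

print(mcan([4, 1, 5, 2, 0], 10))                      # 2
print(mcan([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 11], 10))   # 21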

Allocate an array of integers proportionally compensating for rounding errors

I have an array of non-negative values. I want to build an array of values whose sum is 20, such that they are proportional to the first array.
This would be an easy problem, except that I want the proportional array to sum to exactly 20, compensating for any rounding error.
For example, the array
input = [400, 400, 0, 0, 100, 50, 50]
would yield
output = [8, 8, 0, 0, 2, 1, 1]
sum(output) = 20
However, most cases are going to have a lot of rounding errors, like
input = [3, 3, 3, 3, 3, 3, 18]
naively yields
output = [1, 1, 1, 1, 1, 1, 10]
sum(output) = 16 (ouch)
Is there a good way to apportion the output array so that it adds up to 20 every time?
There's a very simple answer to this question: I've done it many times. After each assignment into the new array, you reduce the values you're working with as follows:
Call the first array A, and the new, proportional array B (which starts out empty).
Call the sum of A elements T
Call the desired sum S.
For each element of the array (i) do the following:
a. B[i] = round(A[i] / T * S). (rounding to nearest integer, penny or whatever is required)
b. T = T - A[i]
c. S = S - B[i]
That's it! Easy to implement in any programming language or in a spreadsheet.
The solution is optimal in that the resulting array's elements will never be more than 1 away from their ideal, non-rounded values. Let's demonstrate with your example:
T = 36, S = 20. B[1] = round(A[1] / T * S) = 2. (ideally, 1.666....)
T = 33, S = 18. B[2] = round(A[2] / T * S) = 2. (ideally, 1.666....)
T = 30, S = 16. B[3] = round(A[3] / T * S) = 2. (ideally, 1.666....)
T = 27, S = 14. B[4] = round(A[4] / T * S) = 2. (ideally, 1.666....)
T = 24, S = 12. B[5] = round(A[5] / T * S) = 2. (ideally, 1.666....)
T = 21, S = 10. B[6] = round(A[6] / T * S) = 1. (ideally, 1.666....)
T = 18, S = 9. B[7] = round(A[7] / T * S) = 9. (ideally, 10)
Notice that, comparing every value in B with its ideal value in parentheses, the difference is never more than 1.
It's also interesting to note that rearranging the elements in the array can result in different corresponding values in the resulting array. I've found that arranging the elements in ascending order is best, because it results in the smallest average percentage difference between actual and ideal.
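A short Python sketch of this procedure (my own illustration; note that Python 3's built-in round() uses banker's rounding, so the sketch rounds half up explicitly, matching the hand computation above):
from math import floor

def apportion(values, target):
    # values: non-negative numbers; target: desired integer sum of the result
    remaining_total = float(sum(values))
    remaining_target = target
    result = []
    for v in values:
        if remaining_total > 0:
            share = int(floor(v / remaining_total * remaining_target + 0.5))  # round half up
        else:
            share = 0
        result.append(share)
        remaining_total -= v
        remaining_target -= share
    return result

print(apportion([3, 3, 3, 3, 3, 3, 18], 20))          # [2, 2, 2, 2, 2, 1, 9]
print(apportion([400, 400, 0, 0, 100, 50, 50], 20))   # [8, 8, 0, 0, 2, 1, 1]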
Your problem is similar to proportional representation, where you want to share N seats (in your case 20) among parties proportionally to the votes they obtain, in your case [3, 3, 3, 3, 3, 3, 18].
There are several methods used in different countries to handle the rounding problem. My code below uses the Hagenbach-Bischoff quota method used in Switzerland, which basically allocates the seats remaining after an integer division by (N+1) to parties which have the highest remainder:
def proportional(nseats, votes):
    """assign n seats proportionally to votes using the Hagenbach-Bischoff quota
    :param nseats: int number of seats to assign
    :param votes: iterable of int or float weighting each party
    :result: list of ints seats allocated to each party
    """
    quota = sum(votes) / (1. + nseats)  # force float
    frac = [vote / quota for vote in votes]
    res = [int(f) for f in frac]
    n = nseats - sum(res)  # number of seats remaining to allocate
    if n == 0: return res  # done
    if n < 0: return [min(x, nseats) for x in res]  # see siamii's comment
    # give the remaining seats to the n parties with the largest remainder
    remainders = [ai - bi for ai, bi in zip(frac, res)]
    limit = sorted(remainders, reverse=True)[n - 1]
    # n parties with a remainder larger than limit get an extra seat
    for i, r in enumerate(remainders):
        if r >= limit:
            res[i] += 1
            n -= 1  # attempt to handle perfect equality
            if n == 0: return res  # done
    raise  # should never happen
However this method doesn't always give the same number of seats to parties with perfect equality as in your case:
proportional(20,[3, 3, 3, 3, 3, 3, 18])
[2,2,2,2,1,1,10]
You have set 3 incompatible requirements. An integer-valued array proportional to [1,1,1] cannot be made to sum to exactly 20. You must choose to break one of the "sum to exactly 20", "proportional to input", and "integer values" requirements.
If you choose to break the requirement for integer values, then use floating point or rational numbers. If you choose to break the exact sum requirement, then you've already solved the problem. Choosing to break proportionality is a little trickier. One approach you might take is to figure out how far off your sum is, and then distribute corrections randomly through the output array. For example, if your input is:
[1, 1, 1]
then you could first make it sum as well as possible while still being proportional:
[7, 7, 7]
and since 20 - (7+7+7) = -1, choose one element to decrement at random:
[7, 6, 7]
If the error was 4, you would choose four elements to increment.
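A tiny Python sketch of this correction idea (my own illustration; it assumes the error is small enough that each randomly chosen element can absorb one unit):
import random

def proportional_with_random_correction(values, target):
    # scale and round first, then spread the leftover error over random positions
    total = sum(values)
    out = [round(v * target / total) for v in values]
    error = target - sum(out)
    step = 1 if error > 0 else -1
    for i in random.sample(range(len(out)), abs(error)):
        out[i] += step
    return out

print(proportional_with_random_correction([1, 1, 1], 20))  # e.g. [7, 6, 7]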
A naïve solution that doesn't perform well, but will provide the right result...
Write an iterator that, given an array with eight integers (candidate) and the input array, outputs the index of the element that is farthest away from being proportional to the others (pseudocode):
function next_index(candidate, input)
    // Calculate weights
    for i in 1 .. 8
        w[i] = candidate[i] / input[i]
    end for
    // find the smallest weight
    min = infinity
    min_index = 1
    for i in 1 .. 8
        if w[i] < min then
            min = w[i]
            min_index = i
        end if
    end for
    return min_index
end function
Then just do this
result = [0, 0, 0, 0, 0, 0, 0, 0]
result[next_index(result, input)]++ for 1 .. 20
If there is no optimal solution, it'll skew towards the beginning of the array.
Using the approach above, you can reduce the number of iterations by rounding down (as you did in your example) and then just use the approach above to add what has been left out due to rounding errors:
result = <<approach using rounding down>>
while sum(result) < 20
    result[next_index(result, input)]++
So the answers and comments above were helpful... particularly the decreasing sum comment from #Frederik.
The solution I came up with takes advantage of the fact that, for an input array v, sum(v_i * 20) is divisible by sum(v). So for each value in v, I multiply by 20 and divide by the sum. I keep the quotient and accumulate the remainder. Whenever the accumulator is greater than sum(v), I add one to the value. That way I'm guaranteed that all the remainders get rolled into the results.
Is that legible? Here's the implementation in Python:
def proportion(values, total):
    # set up by getting the sum of the values and starting
    # with an empty result list and accumulator
    sum_values = sum(values)
    new_values = []
    acc = 0
    for v in values:
        # for each value, find quotient and remainder
        q, r = divmod(v * total, sum_values)
        if acc + r < sum_values:
            # if the accumulator plus remainder is too small, just add and move on
            acc += r
        else:
            # we've accumulated enough to go over sum(values), so add 1 to result
            if acc > r:
                # add to previous
                new_values[-1] += 1
            else:
                # add to current
                q += 1
            acc -= sum_values - r
        # save the new value
        new_values.append(q)
    # accumulator is guaranteed to be zero at the end
    print new_values, sum_values, acc
    return new_values
(I added an enhancement that if the accumulator > remainder, I increment the previous value instead of the current value)
