Complex Combinatorial Conditions on Dynamic Programming - algorithm

I am exploring how a Dynamic Programming design approach relates to the underlying combinatorial properties of problems.
For this, I am looking at the canonical instance of the coin change problem: Let S = [d_1, d_2, ..., d_m] and n > 0 be a requested amount. In how many ways can we add up to n using nothing but the elements in S?
If we follow a Dynamic Programming approach to design an algorithm for this problem that would allow for a solution with polynomial complexity, we would start by looking at the problem and how it is related to smaller and simpler sub-problems. This would yield a recursive relation describing an inductive step that expresses the problem in terms of the solutions to its related subproblems. We can then use either memoization or tabulation to evaluate this recursive relation efficiently, in a top-down or a bottom-up manner, respectively.
A recursive relation to solve this instance of the problem could be the following (Python 3.6 syntax and 0-based indexing):
def C(S, m, n):
    if n < 0:
        return 0
    if n == 0:
        return 1
    if m <= 0:
        return 0
    count_wout_high_coin = C(S, m - 1, n)
    count_with_high_coin = C(S, m, n - S[m - 1])
    return count_wout_high_coin + count_with_high_coin
This recursive relation yields the correct number of solutions, disregarding order. However, this relation:
def C(S, n):
    if n < 0:
        return 0
    if n == 0:
        return 1
    return sum([C(S, n - coin) for coin in S])
yields the correct number of solutions while regarding order.
I am interested in capturing more subtle combinatorial patterns through a recursion relation that can be further optimized via memoization/tabulation.
For example, this relation:
def C(S, m, n, p):
    if n < 0:
        return 0
    if n == 0 and not p:
        return 1
    if n == 0 and p:
        return 0
    if m == 0:
        return 0
    return C(S, m - 1, n, p) + C(S, m, n - S[m - 1], not p)
yields a solution disregarding order but counting only solutions with an even number of summands. The same relation can be modified to regard order while still counting only solutions with an even number of summands:
def C(S, n, p):
    if n < 0:
        return 0
    if n == 0 and not p:
        return 1
    if n == 0 and p:
        return 0
    return sum([C(S, n - coin, not p) for coin in S])
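As an aside on the memoization/tabulation point above: any of these relations memoizes directly once S is hashable. A minimal sketch for this last relation, using functools.lru_cache and passing S as a tuple (the name C_memo is mine):

from functools import lru_cache

@lru_cache(maxsize=None)
def C_memo(S, n, p):
    # S must be a tuple so that all arguments are hashable.
    if n < 0:
        return 0
    if n == 0:
        return 0 if p else 1
    return sum(C_memo(S, n - coin, not p) for coin in S)

# Example: ordered ways to write 6 with an even number of summands.
print(C_memo((1, 2, 6), 6, False))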
However, what if we have more than one person among whom we want to split the coins? Say I want to split n among 2 persons s.t. each person gets the same number of coins, regardless of the total sum each gets. From the 14 solutions, only 7 include an even number of coins, so that I can split them evenly. But I want to exclude redundant assignments of coins to each person. For example, 1 + 2 + 2 + 1 and 1 + 2 + 1 + 2 are different solutions when order matters, BUT they represent the same split of coins between two persons, i.e. person B would get 1 + 2 = 2 + 1 either way. I am having a hard time coming up with a recursion to count splits in a non-redundant manner.

(Before I elaborate on a possible answer, let me just point out that counting the splits of the coin exchange, for even n, by sum rather than coin-count would be more or less trivial since we can count the number of ways to exchange n / 2 and multiply it by itself :)
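A minimal sketch of that remark, reusing the question's first, order-disregarding relation and taking "multiply it by itself" literally (the helper names are mine):

def count_unordered(S, m, n):
    # The order-disregarding relation from the question.
    if n < 0:
        return 0
    if n == 0:
        return 1
    if m <= 0:
        return 0
    return count_unordered(S, m - 1, n) + count_unordered(S, m, n - S[m - 1])

def count_equal_sum_splits(S, n):
    # Splits where each person receives sum n / 2 (n assumed even).
    assert n % 2 == 0
    half = count_unordered(S, len(S), n // 2)
    return half * half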
Now, if you'd like to count splits of the coin exchange according to coin count, and exclude redundant assignments of coins to each person (for example, where splitting 1 + 2 + 2 + 1 into two equal size parts is only either (1,1) | (2,2), (2,2) | (1,1) or (1,2) | (1,2) and element order in each part does not matter), we could rely on your first enumeration of partitions where order is disregarded.
However, we would need to know the multiset of elements in each partition (or an aggregate of similar ones) in order to count the possibilities of dividing them in two. For example, to count the ways to split 1 + 2 + 2 + 1, we would first count how many of each coin we have:
def partitions_with_even_number_of_parts_as_multiset(n, coins):
    results = []

    def C(m, n, s, p):
        if n < 0 or m <= 0:
            return
        if n == 0:
            if not p:
                results.append(s)
            return
        C(m - 1, n, s, p)
        _s = s[:]
        _s[m - 1] += 1
        C(m, n - coins[m - 1], _s, not p)

    C(len(coins), n, [0] * len(coins), False)
    return results
Output:
=> partitions_with_even_number_of_parts_as_multiset(6, [1,2,6])
=> [[6, 0, 0], [2, 2, 0]]
(the second entry, [2, 2, 0], represents two 1's and two 2's)
Now since we are counting the ways to choose half of these, we need to find the coefficient of x^2 in the polynomial multiplication
(x^2 + x + 1) * (x^2 + x + 1) = ... 3x^2 ...
which represents the three ways to choose two from the multiset count [2,2]:
2,0 => 1,1
0,2 => 2,2
1,1 => 1,2
In Python, we can use numpy.polymul to multiply polynomial coefficients. Then we look up the appropriate coefficient in the result.
For example:
import numpy

def count_split_partitions_by_multiset_count(multiset):
    coefficients = (multiset[0] + 1) * [1]
    for i in range(1, len(multiset)):
        coefficients = numpy.polymul(coefficients, (multiset[i] + 1) * [1])
    return coefficients[sum(multiset) // 2]
Output:
=> count_split_partitions_by_multiset_count([2,2,0])
=> 3
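Putting the two pieces together, a usage sketch (my own glue code, reusing the two functions above):

def count_splits(n, coins):
    # Sum the number of halvings over every partition of n with an even number of parts.
    total = 0
    for multiset in partitions_with_even_number_of_parts_as_multiset(n, coins):
        total += count_split_partitions_by_multiset_count(multiset)
    return total

print(count_splits(6, [1, 2, 6]))  # 1 (six 1's) + 3 (two 1's and two 2's) = 4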

Here is a table implementation and a little elaboration on algrid's beautiful answer. This produces an answer for f(500, [1, 2, 6, 12, 24, 48, 60]) in about 2 seconds.
The simple declaration of C(n, k, S) = sum(C(n - s_i, k - 1, S[i:])) means adding all the ways to get to the current sum, n using k coins. Then if we split n into all ways it can be partitioned in two, we can just add all the ways each of those parts can be made from the same number, k, of coins.
The beauty of fixing the subset of coins we choose from to a diminishing list means that any arbitrary combination of coins will only be counted once - it will be counted in the calculation where the leftmost coin in the combination is the first coin in our diminishing subset (assuming we order them in the same way). For example, the arbitrary subset [6, 24, 48], taken from [1, 2, 6, 12, 24, 48, 60], would only be counted in the summation for the subset [6, 12, 24, 48, 60] since the next subset, [12, 24, 48, 60] would not include 6 and the previous subset [2, 6, 12, 24, 48, 60] has at least one 2 coin.
Python code:
import time

def f(n, coins):
    t0 = time.time()
    min_coins = min(coins)
    m = [[[0] * len(coins) for k in range(n // min_coins + 1)] for _n in range(n + 1)]
    # Initialize base case
    for i in range(len(coins)):
        m[0][0][i] = 1
    for i in range(len(coins)):
        for _i in range(i + 1):
            for _n in range(coins[_i], n + 1):
                for k in range(1, _n // min_coins + 1):
                    m[_n][k][i] += m[_n - coins[_i]][k - 1][_i]
    result = 0
    for a in range(1, n + 1):
        b = n - a
        for k in range(1, n // min_coins + 1):
            result = result + m[a][k][len(coins) - 1] * m[b][k][len(coins) - 1]
    total_time = time.time() - t0
    return (result, total_time)

print(f(500, [1, 2, 6, 12, 24, 48, 60]))


Calculating the numbers whose binary representation has exactly required number 1's

Okay so the problem is finding a positive integer n such that there are exactly m numbers in n+1 to 2n (both inclusive) whose binary representation has exactly k 1s.
Constraints: m<=10^18 and k<=64. Also answer is less than 10^18.
Now I can't think of an efficient way of solving this other than going through each integer and calculating its binary 1 count over the required interval, but that would take too long. So is there any other way to go about this?
You're correct to suspect that there's a more efficient way.
Let's start with a slightly simpler subproblem. Absent some really clever
insights, we're going to need to be able to find the number of integers in
[n+1, 2n] that have exactly k bits set in their binary representation. To
keep things short, let's call such integers "weight-k" integers (for motivation for this terminology, look up Hamming weight). We can
immediately simplify our counting problem: if we can count all weight-k integers in [0, 2n]
and we can count all weight-k integers in [0, n], we can subtract one count
from the other to get the number of weight-k integers in [n+1, 2n].
So an obvious subproblem is to count how many weight-k integers there are
in the interval [0, n], for given nonnegative integers k and n.
A standard technique for a problem of this kind is to look for a way to break
it down into smaller subproblems of the same kind; this is one aspect of
what's often called dynamic programming. In this case, there's an easy way of
doing so: consider the even numbers in [0, n] and the odd numbers in [0, n]
separately. Every even number m in [0, n] has exactly the same weight as
m/2 (because by dividing by two, all we do is remove a single zero
bit). Similarly, every odd number m has weight exactly one more than the
weight of (m-1)/2. With some thought about the appropriate base cases, this
leads to the following recursive algorithm (in this case implemented in Python,
but it should translate easily to any other mainstream language).
def count_weights(n, k):
    """
    Return number of weight-k integers in [0, n] (for n >= 0, k >= 0)
    """
    if k == 0:
        return 1  # 0 is the only weight-0 value
    elif n == 0:
        return 0  # only considering 0, which doesn't have positive weight
    else:
        from_even = count_weights(n//2, k)
        from_odd = count_weights((n-1)//2, k-1)
        return from_even + from_odd
There's plenty of scope for mistakes here, so let's test our fancy recursive
algorithm against something less efficient but more direct (and, I hope, more
obviously correct):
def weight(n):
    """
    Number of 1 bits in the binary representation of n (for n >= 0).
    """
    return bin(n).count('1')

def count_weights_slow(n, k):
    """
    Return number of weight-k integers in [0, n] (for n >= 0, k >= 0)
    """
    return sum(weight(m) == k for m in range(n+1))
The results of comparing the two algorithms look convincing:
>>> count_weights(100, 5)
11
>>> count_weights_slow(100, 5)
11
>>> all(count_weights(n, k) == count_weights_slow(n, k)
... for n in range(1000) for k in range(10))
True
However, our supposedly fast count_weights function doesn't scale well to numbers
of the size you need:
>>> count_weights(2**64, 5) # takes a few seconds on my machine
7624512
>>> count_weights(2**64, 6) # minutes ...
74974368
>>> count_weights(2**64, 10) # gave up waiting ...
But here's where a second key idea of dynamic programming comes in: memoize!
That is, keep a record of the results of previous calls, in case we need to use
them again. It turns out that the chain of recursive calls made will tend to
repeat lots of calls, so there's value in memoizing. In Python, this is
trivially easy to do, via the functools.lru_cache decorator. Here's our new
version of count_weights. All that's changed is the import and the decorator line at the top:
from functools import lru_cache

@lru_cache(maxsize=None)
def count_weights(n, k):
    """
    Return number of weight-k integers in [0, n] (for n >= 0, k >= 0)
    """
    if k == 0:
        return 1  # 0 is the only weight-0 value
    elif n == 0:
        return 0  # only considering 0, which doesn't have positive weight
    else:
        from_even = count_weights(n//2, k)
        from_odd = count_weights((n-1)//2, k-1)
        return from_even + from_odd
Now testing on those larger examples again, we get results much more quickly,
without any noticeable delay.
>>> count_weights(2**64, 10)
151473214816
>>> count_weights(2**64, 32)
1832624140942590534
>>> count_weights(5853459801720308837, 27)
356506415596813420
So now that we have an efficient way to count, we have an inverse problem to
solve: given k and m, find an n such that count_weights(2*n, k) -
count_weights(n, k) == m. This one turns out to be especially easy, since the
quantity count_weights(2*n, k) - count_weights(n, k) is monotonically
increasing with n (for fixed k), and more specifically increases by either
0 or 1 every time n increases by 1. I'll leave the proofs of those
facts to you, but here's a demo:
>>> for n in range(10, 30): print(n, count_weights(n, 3))
...
10 1
11 2
12 2
13 3
14 4
15 4
16 4
17 4
18 4
19 5
20 5
21 6
22 7
23 7
24 7
25 8
26 9
27 9
28 10
29 10
This means that we're guaranteed to be able to find a solution. There may be multiple solutions, so we'll aim to find the smallest one (though it would be equally easy to find the largest one). Bisection search gives us a crude but effective way to do this. Here's the code:
def solve(m, k):
    """
    Find the smallest n >= 0 such that [n+1, 2n] contains exactly
    m weight-k integers.
    Assumes that m >= 1 (for m = 0, the answer is trivially n = 0).
    """
    def big_enough(n):
        """
        Target function for our bisection search solver.
        """
        diff = count_weights(2*n, k) - count_weights(n, k)
        return diff >= m

    low = 0
    assert not big_enough(low)
    # Initial phase: expand interval to identify an upper bound.
    high = 1
    while not big_enough(high):
        high *= 2
    # Bisection phase.
    # Loop invariant: big_enough(high) is True and big_enough(low) is False
    while high - low > 1:
        mid = (high + low) // 2
        if big_enough(mid):
            high = mid
        else:
            low = mid
    return high
Testing the solution:
>>> n = solve(5853459801720308837, 27)
>>> n
407324170440003813446
Let's double check that n:
>>> count_weights(2*n, 27) - count_weights(n, 27)
5853459801720308837
Looks good. And if we got our search right, this should be the smallest
n that works:
>>> count_weights(2*(n-1), 27) - count_weights(n-1, 27)
5853459801720308836
There are plenty of other opportunities for optimizations and cleanups in the
above code, and other ways to tackle the problem, but I hope this gives you a
starting point.
The OP commented that they needed to do this in C, where memoization isn't immediately available without using an external library. Here's a variant of count_weights that doesn't need memoization. It's achieved by (a) tweaking the recursion in count_weights so that the same n is used in both recursive calls, and then (b) returning, for a given n, the values of count_weights(n, k) for all k for which the answer is nonzero. In effect, we're just moving the memoization into an explicit list.
Note: as written, the code below needs Python 3.
def count_all_weights(n):
    """
    Return frequencies of weights of all integers in [0, n],
    as a list. The kth entry in the list gives the count
    of weight-k integers in [0, n].

    Example
    -------
    >>> count_all_weights(16)
    [1, 5, 6, 4, 1]
    """
    if n == 0:
        return [1]
    else:
        wm = count_all_weights((n-1)//2)
        weights = [wm[0], *(wm[i]+wm[i+1] for i in range(len(wm)-1)), wm[-1]]
        if n % 2 == 0:
            weights[bin(n).count('1')] += 1
        return weights
An example call:
>>> count_all_weights(7590)
[1, 13, 78, 286, 714, 1278, 1679, 1624, 1139, 559, 182, 35, 3]
This function should be good enough even for larger n: count_all_weights(10**18) takes less than half a millisecond on my machine.
Now the bisection search will work as before, replacing the call to count_weights(n, k) with count_all_weights(n)[k] (and similarly for count_weights(2*n, k)).
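For concreteness, here's a minimal sketch of that substitution (the name solve_no_memo is mine; it keeps the same bisection structure as solve above and assumes m >= 1):

def solve_no_memo(m, k):
    def count(n, k):
        weights = count_all_weights(n)
        # Weights beyond the length of the list do not occur in [0, n].
        return weights[k] if k < len(weights) else 0

    def big_enough(n):
        return count(2 * n, k) - count(n, k) >= m

    low, high = 0, 1
    while not big_enough(high):
        high *= 2
    while high - low > 1:
        mid = (high + low) // 2
        if big_enough(mid):
            high = mid
        else:
            low = mid
    return high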
Finally, another possibility is to break up the interval [0, n] into a succession of smaller and smaller subintervals, where each subinterval has length a power of two. For example, we'd break the interval [0, 101] into [0, 63], [64, 95], [96, 99] and [100, 101]. The advantage of this is that we can easily compute how many weight-k integers there are in any one of these subintervals by counting combinations. For example, in [0, 63] we have all possible 6-bit combinations, so if we're after weight-3 integers, we know there must be exactly 6-choose-3 (i.e., 20) of them. And in [64, 95], we know each integer starts with a 1-bit, and then after excluding that 1-bit we have all possible 5-bit combinations, so again we know how many integers there are in this interval with any given weight.
Applying this idea, here's a complete, fast, all-in-one function that solves your original problem. It has no recursion and no memoization.
def solve(m, k):
    """
    Given nonnegative integers m and k, find the smallest
    nonnegative integer n such that the closed interval
    [n+1, 2*n] contains exactly m weight-k integers.

    Note that for k small there may be no solution:
    if k == 0 then we have no solution unless m == 0,
    and if k == 1 we have no solution unless m is 0 or 1.
    """
    # Deal with edge cases.
    if k < 2 and k < m:
        raise ValueError("No solution")
    elif k == 0 or m == 0:
        return 0
    k -= 1

    # Find upper bound on n, and generate a subset of
    # Pascal's triangle as we go.
    rows = []
    high, row = 1, [1] + [0] * k
    while row[k] < m:
        rows.append((high, row))
        high, row = high * 2, [1, *(row[i]+row[i+1] for i in range(k))]

    # Bisect to find first n that works.
    low = mlow = weight = 0
    while rows:
        high, row = rows.pop()
        mmid = mlow + row[k - weight]
        if mmid < m:
            low, mlow, weight = low + high, mmid, weight + 1
    return low + 1

Number of different binary sequences of length n generated using exactly k flip operations

Consider a binary sequence b of length N. Initially, all the bits are set to 0. We define a flip operation with 2 arguments, flip(L,R), such that:
All bits with indices between L and R are "flipped", meaning a bit with value 1 becomes a bit with value 0 and vice-versa. More exactly, for all i in range [L,R]: b[i] = !b[i].
Nothing happens to bits outside the specified range.
You are asked to determine the number of possible different sequences that can be obtained using exactly K flip operations modulo an arbitrary given number, let's call it MOD.
More specifically, each test contains on the first line a number T, the number of queries to be given. Then there are T queries, each one being of the form N, K, MOD with the meaning from above.
1 ≤ N, K ≤ 300 000
T ≤ 250
2 ≤ MOD ≤ 1 000 000 007
Sum of all N-s in a test is ≤ 600 000
time limit: 2 seconds
memory limit: 65536 kbytes
Example :
Input :
1
2 1 1000
Output :
3
Explanation :
There is a single query. The initial sequence is 00. We can do the following operations :
flip(1,1) ⇒ 10
flip(2,2) ⇒ 01
flip(1,2) ⇒ 11
So there are 3 possible sequences that can be generated using exactly 1 flip.
Some quick observations that I've made, although I'm not sure they are totally correct :
If K is big enough, that is, if we have a big enough number of flips at our disposal, we should be able to obtain 2^n sequences.
If K=1, then the result we're looking for is N(N+1)/2. It's also C(n,1)+C(n,2), where C is the binomial coefficient.
Currently trying a brute force approach to see if I can spot a rule of some kind. I think this is a sum of some binomial coefficients, but I'm not sure.
I've also come across a somewhat simpler variant of this problem, where the flip operation only flips a single specified bit. In that case, the result is
C(n,k)+C(n,k-2)+C(n,k-4)+...+C(n,(1 or 0)). Of course, there's the special case where k > n, but it's not a huge difference. Anyway, it's pretty easy to understand why that happens. I guess it's worth noting.
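For reference, that simpler variant's formula can be evaluated directly (a sketch of my own; math.comb needs Python 3.8+, and I treat terms with j > n as zero, which I believe is how the k > n special case works out):

from math import comb

def single_bit_flip_count(n, k):
    # C(n, k) + C(n, k - 2) + ... + C(n, 1 or 0), with C(n, j) = 0 for j > n.
    return sum(comb(n, j) for j in range(k % 2, min(k, n) + 1, 2))

# Example: n = 3, k = 2 -> C(3, 0) + C(3, 2) = 4 reachable sequences.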
Here are a few ideas:
We may assume that no flip operation occurs twice (otherwise, we can assume that it did not happen). It does affect the number of operations, but I'll talk about it later.
We may assume that no two segments intersect. Indeed, if L1 < L2 < R1 < R2, we can just do the (L1, L2 - 1) and (R1 + 1, R2) flips instead. The case when one segment is inside the other is handled similarly.
We may also assume that no two segments touch each other. Otherwise, we can glue them together and reduce the number of operations.
These observations give the following formula for the number of different sequences one can obtain by flipping exactly k segments without "redundant" flips: C(n + 1, 2 * k) (we choose 2 * k ends of segments. They are always different. The left end is exclusive).
If we were allowed to perform no more than K flips, the answer would be the sum over k = 0...K of C(n + 1, 2 * k).
Intuitively, it seems that it's possible to transform a sequence of no more than K flips into a sequence of exactly K flips (for instance, we can flip the same segment two more times and add 2 operations. We can also split a segment of more than two elements into two segments and add one operation).
Running a brute force search (I know that it's not a real proof, but it looks correct combined with the observations mentioned above) suggests that the answer is this sum minus 1 if n or k is equal to 1, and exactly the sum otherwise.
That is, the result is C(n + 1, 0) + C(n + 1, 2) + ... + C(n + 1, 2 * K) - d, where d = 1 if n = 1 or k = 1 and 0 otherwise.
Here is code I used to look for patterns running a brute force search and to verify that the formula is correct for small n and k:
reachable = set()
was = set()

def other(c):
    """
    returns '1' if c == '0' and '0' otherwise
    """
    return '0' if c == '1' else '1'

def flipped(s, l, r):
    """
    Flips the [l, r] segment of the string s and returns the result
    """
    res = s[:l]
    for i in range(l, r + 1):
        res += other(s[i])
    res += s[r + 1:]
    return res

def go(xs, k):
    """
    Exhaustive search. was is used to speed up the search to avoid checking the
    same string with the same number of remaining operations twice.
    """
    p = (xs, k)
    if p in was:
        return
    was.add(p)
    if k == 0:
        reachable.add(xs)
        return
    for l in range(len(xs)):
        for r in range(l, len(xs)):
            go(flipped(xs, l, r), k - 1)

def calc_naive(n, k):
    """
    Counts the number of reachable sequences by running an exhaustive search
    """
    xs = '0' * n
    global reachable
    global was
    was = set()
    reachable = set()
    go(xs, k)
    return len(reachable)

def fact(n):
    return 1 if n == 0 else n * fact(n - 1)

def cnk(n, k):
    if k > n:
        return 0
    return fact(n) // fact(k) // fact(n - k)

def solve(n, k):
    """
    Uses the formula shown above to compute the answer
    """
    res = 0
    for i in range(k + 1):
        res += cnk(n + 1, 2 * i)
    if k == 1 or n == 1:
        res -= 1
    return res

if __name__ == '__main__':
    # Checks that the formula gives the right answer for small values of n and k
    for n in range(1, 11):
        for k in range(1, 11):
            assert calc_naive(n, k) == solve(n, k)
This solution is much better than the exhaustive search. For instance, it can run in O(N * K) time per test case if we compute the coefficients using Pascal's triangle. Unfortunately, it is not fast enough. I know how to solve it more efficiently for prime MOD (using Lucas' theorem), but I do not have a solution for the general case.
Multiplicative modular inverses can't solve this problem immediately as k! or (n - k)! may not have an inverse modulo MOD.
Note: I assumed that C(n, m) is defined for all non-negative n and m and is equal to 0 if n < m.
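For illustration, here's a minimal sketch (my own code, not the original) of that O(N * K) approach for an arbitrary MOD, building row n + 1 of Pascal's triangle in place and applying the formula above:

def solve_mod(n, k, mod):
    width = min(2 * k, n + 1) + 1           # only columns 0 .. 2k (or n + 1) are needed
    row = [0] * width
    row[0] = 1
    for _ in range(n + 1):                  # build rows 1 .. n + 1 of Pascal's triangle
        for j in range(width - 1, 0, -1):
            row[j] = (row[j] + row[j - 1]) % mod
    res = sum(row[2 * i] for i in range(k + 1) if 2 * i < width) % mod
    if k == 1 or n == 1:
        res = (res - 1) % mod
    return res

print(solve_mod(2, 1, 1000))  # 3, matching the sample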
I think I know how to solve it for an arbitrary MOD now.
Let's factorize MOD into prime factors p1^a1 * p2^a2 * ... * pn^an. Now we can solve this problem modulo each prime power independently and combine the results using the Chinese remainder theorem.
Let's fix a prime p. Let's assume that p^a|MOD (that is, we need to get the result modulo p^a). We can precompute all p-free parts of the factorial and the maximum power of p that divides the factorial for all 0 <= n <= N in linear time using something like this:
powers = [0] * (N + 1)
p_free = list(range(N + 1))
p_free[0] = 1
cur_p = p
while cur_p <= N:          # cur_p runs over the powers of p that are <= N
    i = cur_p
    while i <= N:
        powers[i] += 1
        p_free[i] //= p
        i += cur_p
    cur_p *= p
Now the p-free part of the factorial is the product of p_free[i] for all i <= n and the power of p that divides n! is the prefix sum of the powers.
Now we can divide two factorials: the p-free part is coprime with p^a so it always has an inverse. The powers of p are just subtracted.
We're almost there. One more observation: we can precompute the inverses of p-free parts in linear time. Let's compute the inverse for the p-free part of N! using Euclid's algorithm. Now we can iterate over all i from N to 0. The inverse of the p-free part of i! is the inverse for i + 1 times p_free[i] (it's easy to prove it if we rewrite the inverse of the p-free part as a product using the fact that elements coprime with p^a form an abelian group under multiplication).
This algorithm runs in O(N * number_of_prime_factors + the time to solve the system using the Chinese remainder theorem + sqrt(MOD)) time per test case. Now it looks good enough.
You're on a good path with binomial-coefficients already. There are several factors to consider:
Think of your number as a binary-string of length n. Now we can create another array counting the number of times a bit will be flipped:
[0, 1, 0, 0, 1] number
[a, b, c, d, e] number of flips.
But even numbers of flips all lead to the same result, and so do all odd numbers of flips. So basically the relevant part of the distribution can be represented mod 2.
Logical next question: how many different combinations of even and odd values are available? We'll take care of the ordering later on; for now, just assume the flipping-array is ordered descending for simplicity. We start off with k as the only flipping-number in the array. Now we want to add a flip. Since the whole flipping-array is used mod 2, we need to remove two from the value of k to achieve this and insert them into the array separately. E.g.:
[5, 0, 0, 0] mod 2 [1, 0, 0, 0]
[3, 1, 1, 0] [1, 1, 1, 0]
[4, 1, 0, 0] [0, 1, 0, 0]
As the last example shows (remember we're operating modulo 2 in the final result), moving a single 1 doesn't change the number of flips in the final outcome. Thus we always have to flip an even number of bits in the flipping-array. If k is even, so will be the number of flipped bits, and vice versa, no matter what the value of n is.
So now the question is of course how many different ways of filling the array are available? For simplicity we'll start with mod 2 right away.
Obviously we start with 1 flipped bit if k is odd, otherwise with 0. And we always add 2 flipped bits. We can continue with this until we either have flipped all n bits (or at least as many as we can flip)
v = (k % 2 == n % 2) ? n : n - 1
or we can't spread k further over the array.
v = k
Putting this together:
noOfAvailableFlips:
    if k < n:
        return k
    else:
        return (k % 2 == n % 2) ? n : n - 1
So far so good: there are always v / 2 flipping-arrays (mod 2) that differ by the number of flipped bits. Now we come to the next part: permuting these arrays. This is just a simple permutation-function (permutation with repetition, to be precise):
flipArrayNo(flippedbits):
    return factorial(n) / (factorial(flippedbits) * factorial(n - flippedbits))
Putting it all together:
solutionsByFlipping(n, k):
    res = 0
    for i in [k % 2, noOfAvailableFlips(), step=2]:
        res += flipArrayNo(i)
    return res
This also shows that for sufficiently large numbers we can't obtain 2^n sequences, for the simple reason that we cannot arrange operations as we please. The number of flips that actually affect the outcome will always be either even or odd, depending upon k. There's no way around this. The best result one can get is 2^(n-1) sequences.
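A direct Python translation of the pseudocode above (a sketch; I pass n explicitly rather than treating it as a global):

from math import comb

def no_of_available_flips(n, k):
    if k < n:
        return k
    return n if k % 2 == n % 2 else n - 1

def flip_array_no(n, flipped_bits):
    # n! / (flipped_bits! * (n - flipped_bits)!), i.e. a binomial coefficient
    return comb(n, flipped_bits)

def solutions_by_flipping(n, k):
    res = 0
    for i in range(k % 2, no_of_available_flips(n, k) + 1, 2):
        res += flip_array_no(n, i)
    return res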
For completeness, here's a dynamic program. It can deal easily with an arbitrary modulus since it is based on sums, but unfortunately I haven't found a way to speed it up beyond O(n * k).
Let a[n][k] be the number of binary strings of length n with k non-adjacent blocks of contiguous 1s that end in 1. Let b[n][k] be the number of binary strings of length n with k non-adjacent blocks of contiguous 1s that end in 0.
Then:
# we can append 1 to any arrangement of k non-adjacent blocks of contiguous 1's
# that ends in 1, or to any arrangement of (k-1) non-adjacent blocks of contiguous
# 1's that ends in 0:
a[n][k] = a[n - 1][k] + b[n - 1][k - 1]
# we can append 0 to any arrangement of k non-adjacent blocks of contiguous 1's
# that ends in either 0 or 1:
b[n][k] = b[n - 1][k] + a[n - 1][k]
# complete answer would be sum (a[n][i] + b[n][i]) for i = 0 to k
I wonder if the following observations might be useful: (1) a[n][k] and b[n][k] are zero when n < 2*k - 1, and (2) on the flip side, for values of k greater than ⌊(n + 1) / 2⌋ the overall answer seems to be identical.
Python code (full matrices are defined for simplicity, but I think only one row of each would actually be needed, space-wise, for a bottom-up method):
a = [[0] * 11 for i in range(0, 11)]
b = [([1] + [0] * 10) for i in range(0, 11)]

def f(n, k):
    return fa(n, k) + fb(n, k)

def fa(n, k):
    global a
    if a[n][k] or n == 0 or k == 0:
        return a[n][k]
    elif n == 2*k - 1:
        a[n][k] = 1
        return 1
    else:
        a[n][k] = fb(n-1, k-1) + fa(n-1, k)
        return a[n][k]

def fb(n, k):
    global b
    if b[n][k] or n == 0 or n == 2*k - 1:
        return b[n][k]
    else:
        b[n][k] = fb(n-1, k) + fa(n-1, k)
        return b[n][k]

def g(n, k):
    return sum([f(n, i) for i in range(0, k+1)])

# example
print(g(10, 10))

for i in range(0, 11):
    print(a[i])
print()
for i in range(0, 11):
    print(b[i])

Counting Inversions In An Array - Special Case

The inversion count for an array indicates how far (or close) the array is from being sorted. If the array is already sorted, the inversion count is 0. If the array is sorted in reverse order, the inversion count is the maximum.
Formally speaking, two elements a[i] and a[j] form an inversion if a[i] > a[j] and i < j. Example:
The sequence 2, 4, 1, 3, 5 has three inversions (2, 1), (4, 1), (4, 3).
Now, there are various algorithms to solve this in O(n log n).
There is a special case where the array only has 3 types of elements - 1, 2 and 3. Now, is it possible to count the inversions in O(n) ?
Eg 1,1,3,2,3,1,3
Yes it is. Just take 3 integers a, b, c, where a is the number of 1's encountered till now, b is the number of 2's encountered till now, and c is the number of 3's encountered till now. Given this, follow the algorithm below (I assume the numbers are given in an array arr of size n, with 1-based indexing; also, the following is just pseudocode):
no_of_inv = 0
a = 0
b = 0
c = 0
for i from 1 to n:
    if arr[i] == 1:
        no_of_inv = no_of_inv + b + c
        a++
    else if arr[i] == 2:
        no_of_inv = no_of_inv + c
        b++
    else:
        c++
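A runnable Python version of that pseudocode (a sketch, iterating over the array directly):

def count_inversions_123(arr):
    no_of_inv = a = b = c = 0   # a, b, c count the 1's, 2's and 3's seen so far
    for x in arr:
        if x == 1:
            no_of_inv += b + c  # every earlier 2 or 3 forms an inversion with this 1
            a += 1
        elif x == 2:
            no_of_inv += c      # every earlier 3 forms an inversion with this 2
            b += 1
        else:
            c += 1
    return no_of_inv

print(count_inversions_123([1, 1, 3, 2, 3, 1, 3]))  # 4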
(This algorithm is extremely similar to Sasha's. I just wanted to provide an explanation as well.)
Every inversion (i, j) satisfies 0 ≤ i < j < n. Let's define S[j] to be the number of inversions of the form (i, j); that is, S[j] is the number of times A[i] > A[j] for 0 ≤ i < j. Then the total number of inversions is T = S[0] + S[1] + … + S[n - 1].
Let C[x][j] be the number of times A[i] > x for 0 ≤ i < j. Then S[j] = C[A[j]][j] for all j. If we can compute the 3n values C[x][j] in linear time, then we can compute S in linear time.
Here is some Python code:
>>> import numpy as np
>>> A = np.array([1, 1, 3, 2, 3, 1, 3])
>>> C = {x: np.cumsum(A > x) for x in np.unique(A)}
>>> T = sum(C[A[j]][j] for j in range(len(A)))
>>> print(T)
4
This could be made more efficient (although not in asymptotic terms) by not storing all C values at once. The algorithm really only needs a single pass through the array. I have chosen to present it this way because it is the most concise.
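For illustration, here's one way to do that single pass (my own sketch, keeping a running count of earlier, larger elements for each distinct value; with only the values 1, 2, 3 this stays O(n)):

def count_inversions_single_pass(A):
    values = sorted(set(A))
    greater_so_far = {x: 0 for x in values}   # earlier elements strictly greater than x
    total = 0
    for a in A:
        total += greater_so_far[a]
        # a is now an "earlier element" for every smaller value
        for x in values:
            if x < a:
                greater_so_far[x] += 1
            else:
                break
    return total

print(count_inversions_single_pass([1, 1, 3, 2, 3, 1, 3]))  # 4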

Given k sorted numbers, what is the minimum cost to turn them into consecutive numbers?

Suppose we are given a sorted list of k numbers. Now, we want to convert this sorted list into a list of consecutive numbers. The only operation allowed is to increase or decrease a number by one, and each such operation increases the total cost by one.
Now, how to minimize the total cost while converting the list as mentioned?
One idea that I have is to get the median of the sorted list and arrange the numbers around the median. After that just add the absolute difference between the corresponding numbers in the newly created list and the original list. But, this is just an intuitive method. I don't have any proof of it.
P.S.:
Here's an example-
Sorted list: -96, -75, -53, -24.
We can convert this list into a consecutive list by various methods.
The optimal one is: -58, -59, -60, -61
Cost: 90
This is a sub-part of a problem from Topcoder.
Let's assume that the solution is in increasing order and m, M are the minimum and maximum value of the sorted list. The other case will be handled the same way.
Each solution is defined by the number assigned to the first element. If this number is very small then increasing it by one will reduce the cost. We can continue increasing this number until the cost grows. From this point the cost will continuously grow. So the optimum will be a local minimum and we can find it by using binary search. The range we are going to search will be [m - n, M + n] where n is the number of elements:
l = [-96, -75, -53, -24]

# Cost if initial value is x
def cost(l, x):
    return sum(abs(i - v) for i, v in enumerate(l, x))

def find(l):
    a, b = l[0] - len(l), l[-1] + len(l)
    while a < b:
        m = (a + b) // 2
        if cost(l, m + 1) >= cost(l, m) <= cost(l, m - 1):  # Local minimum
            return m
        if cost(l, m + 1) < cost(l, m):
            a = m + 1
        else:
            b = m - 1
    return b
Testing:
>>> initial = find(l)
>>> list(range(initial, initial + len(l)))
[-60, -59, -58, -57]
>>> cost(l, initial)
90
Here is a simple solution:
Let's assume that these numbers are x, x + 1, ..., x + n - 1. Then the cost is the sum over i = 0 ... n - 1 of abs(a[i] - (x + i)). Let's call it f(x).
f(x) is piecewise linear and approaches infinity as x approaches +infinity or -infinity. This means that its minimum is reached at one of the breakpoints, i.e. at an x where some term abs(a[i] - (x + i)) is zero.
Those breakpoints are x = a[0], a[1] - 1, a[2] - 2, ..., a[n - 1] - (n - 1). So we can just try all of them and pick the best.
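A minimal sketch of that enumeration (my own code; the cost function is equivalent to the one in the previous answer):

def cost(l, x):
    # Cost of turning l into the consecutive run x, x + 1, ..., x + len(l) - 1.
    return sum(abs(v - (x + i)) for i, v in enumerate(l))

def find_simple(l):
    # The optimum is attained at one of the breakpoints a[i] - i.
    candidates = [v - i for i, v in enumerate(l)]
    return min(candidates, key=lambda x: cost(l, x))

l = [-96, -75, -53, -24]
x = find_simple(l)
print(x, cost(l, x))  # -76 90 (the minimum 90 is attained on a whole flat segment of x values)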

Fast algorithm/formula for serial range of modulo of co-prime numbers

This problem is one part of a larger project; to simplify, here is the formulation. There are two positive co-prime integers a and b, with a < b. The multiples of a, for multipliers 1 through b-1, are listed, each reduced modulo b:
a mod b, 2*a mod b, 3*a mod b, ..., (b-1)*a mod b
Now, there is another integer, say n (1 <= n < b). Among the first n numbers in the list, we have to find how many are less than some m (1 <= m < b). This can be done with a brute force approach, giving O(n).
An example:
a=6, b=13, n=8, m=6
List is:
6, 12, 5, 11, 4, 10, 3, 9, 2, 8, 1, 7
This is a permutation of the numbers from 1 to 12, because taking multiples of a modulo b, when a and b are co-prime, produces a permutation of 1 to b-1 (and of 0 to b-1 if we also include the multiplier 0). If we take a = 2, b = 13, then the list would have been 2, 4, 6, 8, 10, 12, 1, 3, 5, 7, 9, 11, which shows a pattern. Whereas if a and b are very large (in my project they can go up to 10^20), then I have no idea how to deduce a pattern from such large numbers.
Now getting back to the example, we take the first n = 8 numbers from the list, which gives
6, 12, 5, 11, 4, 10, 3, 9
Comparing each with m = 6 gives a total of 3 numbers less than m, as shown by the indicator list below
0, 0, 1, 0, 1, 0, 1, 0
where 0 refers to not being less than m and 1 refers to being less than m.
Since the algorithm above is O(n), which is not acceptable for the range [0, 10^20], can the community give a hint/clue/tip to help me reach an O(log n) solution, or even better an O(1) solution?
(Warning: I got a little twitchy about the range of multipliers not being [0, n), so I adjusted it. It's easy enough to compensate.)
I'm going to sketch, with tested Python code, an implementation that runs in time O(log max {a, b}). First, here are some utility functions and a naive implementation.
from math import gcd  # fractions.gcd was removed in Python 3.9; math.gcd is equivalent here
from random import randrange

def coprime(a, b):
    return gcd(a, b) == 1

def floordiv(a, b):
    return a // b

def ceildiv(a, b):
    return floordiv(a + b - 1, b)

def count1(a, b, n, m):
    assert 1 <= a < b
    assert coprime(a, b)
    assert 0 <= n < b + 1
    assert 0 <= m < b + 1
    return sum(k * a % b < m for k in range(n))
Now, how can we speed this up? The first improvement is to partition the multipliers into disjoint ranges such that, within a range, the corresponding multiples of a are between two multiples of b. Knowing the lowest and highest values, we can count via a ceiling division the number of multiples less than m.
def count2(a, b, n, m):
    assert 1 <= a < b
    assert coprime(a, b)
    assert 0 <= n < b + 1
    assert 0 <= m < b + 1
    count = 0
    first = 0
    while 0 < n:
        count += min(ceildiv(m - first, a), n)
        k = ceildiv(b - first, a)
        n -= k
        first = first + k * a - b
    return count
This isn't fast enough. The second improvement is to replace most of the while loop with a recursive call. In the code below, j is the number of iterations that are "complete" in the sense that there is a wraparound. term3 accounts for the remaining iteration, using logic that resembles count2.
Each of the complete iterations contributes floor(m / a) or floor(m / a) + 1 residues under the threshold m. Whether we get the + 1 depends on what first is for that iteration. first starts at 0 and changes by a - (b % a) modulo a on each iteration through the while loop. We get the + 1 whenever it's under some threshold, and this count is computable via a recursive call.
def count3(a, b, n, m):
    assert 1 <= a < b
    assert coprime(a, b)
    assert 0 <= n < b + 1
    assert 0 <= m < b + 1
    if 1 == a:
        return min(n, m)
    j = floordiv(n * a, b)
    term1 = j * floordiv(m, a)
    term2 = count3(a - b % a, a, j, m % a)
    last = n * a % b
    first = last % a
    term3 = min(ceildiv(m - first, a), (last - first) // a)
    return term1 + term2 + term3
The running time can be analyzed analogously to the Euclidean GCD algorithm.
Here's some test code to provide evidence for my claims of correctness. Remember to delete the assertions before testing performance.
def test(p, f1, f2):
    assert 3 <= p
    for t in range(100):
        while True:
            b = randrange(2, p)
            a = randrange(1, b)
            if coprime(a, b):
                break
        for n in range(b + 1):
            for m in range(b + 1):
                args = (a, b, n, m)
                print(args)
                assert f1(*args) == f2(*args)

if __name__ == '__main__':
    test(25, count1, count2)
    test(25, count1, count3)
