Number of different binary sequences of length n generated using exactly k flip operations - algorithm

Consider a binary sequence b of length N. Initially, all the bits are set to 0. We define a flip operation with 2 arguments, flip(L,R), such that:
All bits with indices between L and R are "flipped", meaning a bit with value 1 becomes a bit with value 0 and vice-versa. More exactly, for all i in range [L,R]: b[i] = !b[i].
Nothing happens to bits outside the specified range.
You are asked to determine the number of possible different sequences that can be obtained using exactly K flip operations modulo an arbitrary given number, let's call it MOD.
More specifically, each test contains on the first line a number T, the number of queries to be given. Then there are T queries, each one being of the form N, K, MOD with the meaning from above.
1 ≤ N, K ≤ 300 000
T ≤ 250
2 ≤ MOD ≤ 1 000 000 007
Sum of all N-s in a test is ≤ 600 000
time limit: 2 seconds
memory limit: 65536 kbytes
Example :
Input :
1
2 1 1000
Output :
3
Explanation :
There is a single query. The initial sequence is 00. We can do the following operations :
flip(1,1) ⇒ 10
flip(2,2) ⇒ 01
flip(1,2) ⇒ 11
So there are 3 possible sequences that can be generated using exactly 1 flip.
Some quick observations that I've made, although I'm not sure they are totally correct :
If K is big enough, that is, if we have a big enough number of flips at our disposal, we should be able to obtain 2^n sequences.
If K=1, then the result we're looking for is N(N+1)/2. It's also C(n,1)+C(n,2), where C is the binomial coefficient.
Currently trying a brute force approach to see if I can spot a rule of some kind. I think this is a sum of some binomial coefficients, but I'm not sure.
I've also come across a somewhat simpler variant of this problem, where the flip operation only flips a single specified bit. In that case, the result is
C(n,k)+C(n,k-2)+C(n,k-4)+...+C(n,(1 or 0)). Of course, there's the special case where k > n, but it's not a huge difference. Anyway, it's pretty easy to understand why that happens. I guess it's worth noting.
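For concreteness, here's a tiny sketch of that single-bit-flip sum (the helper name is made up; it assumes k <= n so the special case doesn't kick in):

from math import comb

def single_bit_flip_count(n, k):
    # Strings reachable with exactly k single-bit flips: the number of set bits
    # must have the same parity as k and be at most k (assumes k <= n).
    return sum(comb(n, j) for j in range(k % 2, k + 1, 2))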

Here are a few ideas:
We may assume that no flip operation occurs twice (otherwise, we can assume that it did not happen). It does affect the number of operations, but I'll talk about it later.
We may assume that no two segments intersect. Indeed, if L1 < L2 < R1 < R2, we can just do the (L1, L2 - 1) and (R1 + 1, R2) flips instead. The case when one segment is inside the other is handled similarly.
We may also assume that no two segments touch each other. Otherwise, we can glue them together and reduce the number of operations.
These observations give the following formula for the number of different sequences one can obtain by flipping exactly k segments without "redundant" flips: C(n + 1, 2 * k) (we choose 2 * k ends of segments. They are always different. The left end is exclusive).
If we were allowed to perform no more than K flips, the answer would be the sum over k = 0...K of C(n + 1, 2 * k).
Intuitively, it seems that it's possible to transform a sequence of no more than K flips into a sequence of exactly K flips (for instance, we can flip the same segment two more times and add 2 operations. We can also split a segment of more than two elements into two segments and add one operation).
Running the brute force search suggests (I know that it's not a real proof, but it looks correct combined with the observations mentioned above) that the answer is this sum minus 1 if n or k is equal to 1, and exactly the sum otherwise.
That is, the result is C(n + 1, 0) + C(n + 1, 2) + ... + C(n + 1, 2 * K) - d, where d = 1 if n = 1 or k = 1 and 0 otherwise.
Here is the code I used to look for patterns by running a brute-force search, and to verify that the formula is correct for small n and k:
reachable = set()
was = set()

def other(c):
    """
    returns '1' if c == '0' and '0' otherwise
    """
    return '0' if c == '1' else '1'

def flipped(s, l, r):
    """
    Flips the [l, r] segment of the string s and returns the result
    """
    res = s[:l]
    for i in range(l, r + 1):
        res += other(s[i])
    res += s[r + 1:]
    return res

def go(xs, k):
    """
    Exhaustive search. was is used to speed up the search to avoid checking the
    same string with the same number of remaining operations twice.
    """
    p = (xs, k)
    if p in was:
        return
    was.add(p)
    if k == 0:
        reachable.add(xs)
        return
    for l in range(len(xs)):
        for r in range(l, len(xs)):
            go(flipped(xs, l, r), k - 1)

def calc_naive(n, k):
    """
    Counts the number of reachable sequences by running an exhaustive search
    """
    xs = '0' * n
    global reachable
    global was
    was = set()
    reachable = set()
    go(xs, k)
    return len(reachable)

def fact(n):
    return 1 if n == 0 else n * fact(n - 1)

def cnk(n, k):
    if k > n:
        return 0
    return fact(n) // fact(k) // fact(n - k)

def solve(n, k):
    """
    Uses the formula shown above to compute the answer
    """
    res = 0
    for i in range(k + 1):
        res += cnk(n + 1, 2 * i)
    if k == 1 or n == 1:
        res -= 1
    return res

if __name__ == '__main__':
    # Checks that the formula gives the right answer for small values of n and k
    for n in range(1, 11):
        for k in range(1, 11):
            assert calc_naive(n, k) == solve(n, k)
This solution is much better than the exhaustive search. For instance, it can run in O(N * K) time per test case if we compute the coefficients using Pascal's triangle. Unfortunately, it is not fast enough. I know how to solve it more efficiently for prime MOD (using Lucas' theorem), but I do not have a solution for the general case.
Multiplicative modular inverses can't solve this problem immediately as k! or (n - k)! may not have an inverse modulo MOD.
Note: I assumed that C(n, m) is defined for all non-negative n and m and is equal to 0 if n < m.
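For reference, here is a rough sketch of that O(N * K) Pascal's-triangle computation (the function name solve_pascal is made up); it only needs additions, so any MOD works:

def solve_pascal(n, k, mod):
    # C(n+1, 0) + C(n+1, 2) + ... + C(n+1, 2k) mod MOD, minus 1 when n == 1 or k == 1.
    width = min(n + 1, 2 * k)            # we never need C(n+1, j) for j > 2k
    row = [1] + [0] * width              # row of C(0, *), truncated to 'width' columns
    for i in range(1, n + 2):            # build up to the row of C(n+1, *)
        for j in range(min(i, width), 0, -1):
            row[j] = (row[j] + row[j - 1]) % mod
    res = sum(row[2 * i] for i in range(min(k, width // 2) + 1)) % mod
    if n == 1 or k == 1:
        res = (res - 1) % mod
    return res

print(solve_pascal(2, 1, 1000))   # 3, matching the example above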
I think I know how to solve it for an arbitrary MOD now.
Let's factorize MOD into prime factors p1^a1 * p2^a2 * ... * pn^an. Now we can solve this problem for each prime power independently and combine the results using the Chinese remainder theorem.
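As a sketch of that combining step (the helper name crt_combine is made up; the moduli are the pairwise-coprime prime powers):

def crt_combine(residues, moduli):
    # moduli are pairwise coprime (here: the prime powers p1^a1, ..., pn^an)
    x, m = 0, 1
    for r, pa in zip(residues, moduli):
        t = (r - x) * pow(m, -1, pa) % pa   # solve x + m*t ≡ r (mod pa)
        x += m * t
        m *= pa
    return x % m

print(crt_combine([2, 4], [8, 9]))   # 58: the unique value mod 72 that is 2 mod 8 and 4 mod 9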
Let's fix a prime p. Let's assume that p^a|MOD (that is, we need to get the result modulo p^a). We can precompute all p-free parts of the factorial and the maximum power of p that divides the factorial for all 0 <= n <= N in linear time using something like this:
powers = [0] * (N + 1)
p_free = list(range(N + 1))
p_free[0] = 1
cur_p = p
while cur_p <= N:          # iterate over the powers of p that are <= N
    i = cur_p
    while i <= N:
        powers[i] += 1
        p_free[i] //= p    # strip one factor of p (integer division)
        i += cur_p
    cur_p *= p
Now the p-free part of the factorial is the product of p_free[i] for all i <= n and the power of p that divides n! is the prefix sum of the powers.
Now we can divide two factorials: the p-free part is coprime with p^a so it always has an inverse. The powers of p are just subtracted.
We're almost there. One more observation: we can precompute the inverses of the p-free parts in linear time. Let's compute the inverse for the p-free part of N! using Euclid's algorithm. Now we can iterate over all i from N down to 0: the inverse of the p-free part of i! is the inverse for (i + 1)! times p_free[i + 1] (it's easy to prove if we rewrite the inverse of the p-free part as a product, using the fact that the elements coprime with p^a form an abelian group under multiplication).
This algorithm runs in O(N * number_of_prime_factors + the time to solve the system using the Chinese remainder theorem + sqrt(MOD)) time per test case. Now it looks good enough.
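Putting the prime-power part together, a rough self-contained sketch might look like this (all names here are made up, and pow(x, -1, m) needs Python 3.8+):

def make_binom_mod_prime_power(N, p, a):
    mod = p ** a
    # p-free parts and p-exponents of 1..N, as in the snippet above
    powers = [0] * (N + 1)
    p_free = list(range(N + 1))
    p_free[0] = 1
    cur_p = p
    while cur_p <= N:
        for i in range(cur_p, N + 1, cur_p):
            powers[i] += 1
            p_free[i] //= p
        cur_p *= p
    # prefix products / sums give the p-free part and p-exponent of each factorial
    fact_free = [1] * (N + 1)
    fact_pow = [0] * (N + 1)
    for i in range(1, N + 1):
        fact_free[i] = fact_free[i - 1] * p_free[i] % mod
        fact_pow[i] = fact_pow[i - 1] + powers[i]
    # inverses of all fact_free[i] in linear time, starting from fact_free[N]
    inv_free = [1] * (N + 1)
    inv_free[N] = pow(fact_free[N], -1, mod)
    for i in range(N, 0, -1):
        inv_free[i - 1] = inv_free[i] * p_free[i] % mod
    def binom(n, k):
        if k < 0 or k > n:
            return 0
        e = fact_pow[n] - fact_pow[k] - fact_pow[n - k]   # power of p in C(n, k)
        res = fact_free[n] * inv_free[k] % mod * inv_free[n - k] % mod
        return res * pow(p, e, mod) % mod
    return binom

binom = make_binom_mod_prime_power(10, 2, 3)
print(binom(6, 3))   # 4, i.e. C(6, 3) = 20 reduced mod 2^3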

You're on a good path with binomial-coefficients already. There are several factors to consider:
Think of your number as a binary-string of length n. Now we can create another array counting the number of times a bit will be flipped:
[0, 1, 0, 0, 1] number
[a, b, c, d, e] number of flips.
But all even numbers of flips lead to the same result, and so do all odd numbers of flips. So basically the relevant part of the distribution can be represented mod 2.
Logical next question: how many different combinations of even and odd values are available? We'll take care of the ordering later on; for now just assume the flipping-array is ordered descending for simplicity. We start off with k as the only flipping-number in the array. Now we want to add a flip. Since the whole flipping-array is used mod 2, we need to remove two from the value of k to achieve this and insert them into the array separately. E.g.:
[5, 0, 0, 0] mod 2 [1, 0, 0, 0]
[3, 1, 1, 0] [1, 1, 1, 0]
[4, 1, 0, 0] [0, 1, 0, 0]
As the last example shows (remember we're operating modulo 2 in the final result), moving a single 1 doesn't change the number of flips in the final outcome. Thus we always have to flip an even number of bits in the flipping-array. If k is even, so will the number of flipped bits be, and the same applies vice versa, no matter what the value of n is.
So now the question is of course how many different ways of filling the array are available? For simplicity we'll start with mod 2 right away.
Obviously we start with 1 flipped bit if k is odd, otherwise with 0. And we always add 2 flipped bits at a time. We can continue with this until we have either flipped all n bits (or at least as many as we can flip)
v = (k % 2 == n % 2) ? n : n - 1
or we can't spread k further over the array.
v = k
Putting this together:
noOfAvailableFlips:
    if k < n:
        return k
    else:
        return (k % 2 == n % 2) ? n : n - 1
So far so good: there are always v / 2 flipping-arrays (mod 2) that differ in the number of flipped bits. Now we come to the next part, permuting these arrays. This is just a simple permutation function (permutation with repetition, to be precise):
flipArrayNo(flippedbits):
    return factorial(n) / (factorial(flippedbits) * factorial(n - flippedbits))
Putting it all together:
solutionsByFlipping(n, k):
    res = 0
    for i from k % 2 to noOfAvailableFlips() step 2:
        res += flipArrayNo(i)
    return res
This also shows that for sufficiently large numbers we can't obtain 2^n sequences, for the simple reason that we cannot arrange operations as we please. The number of flips that actually affect the outcome will always be either even or odd depending upon k. There's no way around this. The best result one can get is 2^(n-1) sequences.

For completeness, here's a dynamic program. It can deal easily with an arbitrary modulo since it is based on sums, but unfortunately I haven't found a way to speed it up beyond O(n * k).
Let a[n][k] be the number of binary strings of length n with k non-adjacent blocks of contiguous 1s that end in 1. Let b[n][k] be the number of binary strings of length n with k non-adjacent blocks of contiguous 1s that end in 0.
Then:
# we can append 1 to any arrangement of k non-adjacent blocks of contiguous 1's
# that ends in 1, or to any arrangement of (k-1) non-adjacent blocks of contiguous
# 1's that ends in 0:
a[n][k] = a[n - 1][k] + b[n - 1][k - 1]
# we can append 0 to any arrangement of k non-adjacent blocks of contiguous 1's
# that ends in either 0 or 1:
b[n][k] = b[n - 1][k] + a[n - 1][k]
# complete answer would be sum (a[n][i] + b[n][i]) for i = 0 to k
I wonder if the following observations might be useful: (1) a[n][k] and b[n][k] are zero when n < 2*k - 1, and (2) on the flip side, for values of k greater than ⌊(n + 1) / 2⌋ the overall answer seems to be identical.
Python code (full matrices are defined for simplicity, but I think only one row of each would actually be needed, space-wise, for a bottom-up method):
a = [[0] * 11 for i in range(0, 11)]
b = [([1] + [0] * 10) for i in range(0, 11)]

def f(n, k):
    return fa(n, k) + fb(n, k)

def fa(n, k):
    global a
    if a[n][k] or n == 0 or k == 0:
        return a[n][k]
    elif n == 2*k - 1:
        a[n][k] = 1
        return 1
    else:
        a[n][k] = fb(n-1, k-1) + fa(n-1, k)
        return a[n][k]

def fb(n, k):
    global b
    if b[n][k] or n == 0 or n == 2*k - 1:
        return b[n][k]
    else:
        b[n][k] = fb(n-1, k) + fa(n-1, k)
        return b[n][k]

def g(n, k):
    return sum([f(n, i) for i in range(0, k + 1)])

# example
print(g(10, 10))
for i in range(0, 11):
    print(a[i])
print()
for i in range(0, 11):
    print(b[i])


How to get the intuition behind the solution?

I was solving the below problem from USACO training. I found this really fast solution, which I am unable to fully absorb.
Problem: Consider an ordered set S of strings of N (1 <= N <= 31) bits. Bits, of course, are either 0 or 1.
This set of strings is interesting because it is ordered and contains all possible strings of length N that have L (1 <= L <= N) or fewer bits that are `1'.
Your task is to read a number I (1 <= I <= sizeof(S)) from the input and print the Ith element of the ordered set for N bits with no more than L bits that are `1'.
sample input: 5 3 19
output: 10110
The two solutions I could think of:
Firstly, the brute force solution, which goes through all possible combinations of bits, stores the strings whose count of '1's is less than or equal to L, and returns the Ith string.
Secondly, we can generate all arrangements of '1's over the 5 positions with counts ranging from 0 to L, sort the strings in increasing order and return the Ith string.
The best Solution:
The OP who posted the solution has used combinations instead of permutations. According to him, the total number of strings possible is 5C0 + 5C1 + 5C2 + 5C3.
So at every position i of the string, we decide whether to include the ith bit in our output or not, based on the total number of ways we have to build the rest of the string. Below is a dry run of the entire approach for the above input.
N = 5, L = 3, I = 19
00000
at i = 0, for the rem string, we have 4C0 + 4C1 + 4C2 + 4C3 = 15
It says that, there are 15 other numbers possible with the last 4 positions. as 15 is less than 19, our first bit has to be set.
N = 5, L = 2, I = 4
10000
at i = 1, we have 3C0 + 3C1 + 3C2 (as we have used 1 from L) = 7
as 7 is greater than 4, we cannot set this bit.
N = 5, L = 2, I = 4
10000
at i = 2 we have 2C0 + 2C2 = 2
as 2 <= I(4), we take this bit in our output.
N = 5, L = 1, I = 2
10100
at i = 3, we have 1C0 + 1C1 = 2
as 2 <= I(2) we can take this bit in our output.
as L == 0, we stop and 10110 is our answer. I was amazed to find this solution. However, I am finding it difficult to get the intuition behind this solution.
How does this solution sort-of zero in directly to the Ith number in the set?
Why does the order of the bits not matter in the combinations of set bits?
Suppose we have precomputed the number of strings of length n with k or fewer bits set. Call that S(n, k).
Now suppose we want the i'th string (in lexicographic order) of length N with L or fewer bits set.
All the strings with the most significant bit zero come before those with the most significant bit 1. There's S(N-1, L) strings with the most significant bit zero, and S(N-1, L-1) strings with the most significant bit 1. So if we want the i'th string, if i<=S(N-1, L), then it must have the top bit zero and the remainder must be the i'th string of length N-1 with at most L bits set, and otherwise it must have the top bit one, and the remainder must be the (i-S(N-1, L))'th string of length N-1 with at most L-1 bits set.
All that remains to code is to precompute S(n, k), and to handle the base cases.
You can figure out a combinatorial solution to S(n, k) as your friend did, but it's more practical to use a recurrence relation: S(n, k) = S(n-1, k) + S(n-1, k-1), and S(0, k) = S(n, 0) = 1.
Here's code that does all that, and as an example prints out all 8-bit numbers with 3 or fewer bits set, in lexicographic order. If i is out of range, then it raises an IndexError exception, although in your question you assume i is always in range, so perhaps that's not necessary.
S = [[1] * 32 for _ in range(32)]
for n in range(1, 32):
    for k in range(1, 32):
        S[n][k] = S[n-1][k] + S[n-1][k-1]

def ith_string(n, k, i):
    if n == 0:
        if i != 1:
            raise IndexError
        return ''
    elif i <= S[n-1][k]:
        return "0" + ith_string(n-1, k, i)
    elif k == 0:
        raise IndexError
    else:
        return "1" + ith_string(n-1, k-1, i - S[n-1][k])

print([ith_string(8, 3, i) for i in range(1, 94)])

Generate one permutation from an index

Is there an efficient algorithm to generate a permutation from one index provided? The permutations do not need to have any specific ordering and it just needs to return every permutation once per every possible index. The set I wish to permute is all integers from 0~255.
If I understand the question correctly, the problem is as follows: You are given two integers n and k, and you want to find the kth permutation of n integers. You don't care about it being the kth lexicographical permutation, but it's just easier to be lexicographical so let's stick with that.
This is not too bad to compute. The base permutation is 1,2,3,4...n. This is the k=0 case. Consider what happens if you were to swap the 1 and 2: by moving the 1, you are passing up every single permutation where 1 goes first, and there are (n-1)! of those (since you could have permuted 2,3,4..n if you fixed the 1 in place). Thus, the algorithm is as follows:
for i from 1 to n:
    j = k / (n-i)!   // integer division, so rounded down
    k -= j * (n-i)!
    place down the jth unplaced number
This iteratively produces the kth lexicographical permutation: it repeatedly solves a sub-problem with a smaller set of numbers to place, decrementing k along the way.
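A minimal Python sketch of that pseudocode (using 0-based k, and assuming 0 <= k < n!):

from math import factorial

def kth_permutation(n, k):
    # k-th (0-indexed) lexicographic permutation of 1..n
    unplaced = list(range(1, n + 1))
    result = []
    for i in range(1, n + 1):
        block = factorial(n - i)     # permutations sharing the next choice
        j = k // block               # index of the next unplaced number
        k -= j * block
        result.append(unplaced.pop(j))
    return result

print(kth_permutation(3, 4))   # [3, 1, 2], the 5th of the 6 permutations of 1..3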
There is an implementation in python in module more-itertools: nth_permutation.
Here is an implementation, adapted from the code of more_itertools.nth_permutation:
from sympy import factorial

def nth_permutation(iterable, index):
    pool = list(iterable)
    n = len(pool)
    c = factorial(n)
    index = index % c
    result = [0] * n
    q = index
    for d in range(1, n + 1):
        q, i = divmod(q, d)
        if 0 <= n - d < n:
            result[n - d] = i
        if q == 0:
            break
    return tuple(map(pool.pop, result))

print(nth_permutation(range(6), 360))
# (3, 0, 1, 2, 4, 5)

Calculating the numbers whose binary representation has exactly required number 1's

Okay so the problem is finding a positive integer n such that there are exactly m numbers in n+1 to 2n (both inclusive) whose binary representation has exactly k 1s.
Constraints: m<=10^18 and k<=64. Also answer is less than 10^18.
Now I can't think of an efficient way of solving this other than going through each integer in the required interval and counting its binary 1s, but that would take too long. So is there any other way to go about this?
You're correct to suspect that there's a more efficient way.
Let's start with a slightly simpler subproblem. Absent some really clever
insights, we're going to need to be able to find the number of integers in
[n+1, 2n] that have exactly k bits set in their binary representation. To
keep things short, let's call such integers "weight-k" integers (for motivation for this terminology, look up Hamming weight). We can
immediately simplify our counting problem: if we can count all weight-k integers in [0, 2n]
and we can count all weight-k integers in [0, n], we can subtract one count
from the other to get the number of weight-k integers in [n+1, 2n].
So an obvious subproblem is to count how many weight-k integers there are
in the interval [0, n], for given nonnegative integers k and n.
A standard technique for a problem of this kind is to look for a way to break
it down into smaller subproblems of the same kind; this is one aspect of
what's often called dynamic programming. In this case, there's an easy way of
doing so: consider the even numbers in [0, n] and the odd numbers in [0, n]
separately. Every even number m in [0, n] has exactly the same weight as
m/2 (because by dividing by two, all we do is remove a single zero
bit). Similarly, every odd number m has weight exactly one more than the
weight of (m-1)/2. With some thought about the appropriate base cases, this
leads to the following recursive algorithm (in this case implemented in Python,
but it should translate easily to any other mainstream language).
def count_weights(n, k):
    """
    Return number of weight-k integers in [0, n] (for n >= 0, k >= 0)
    """
    if k == 0:
        return 1  # 0 is the only weight-0 value
    elif n == 0:
        return 0  # only considering 0, which doesn't have positive weight
    else:
        from_even = count_weights(n//2, k)
        from_odd = count_weights((n-1)//2, k-1)
        return from_even + from_odd
There's plenty of scope for mistakes here, so let's test our fancy recursive
algorithm against something less efficient but more direct (and, I hope, more
obviously correct):
def weight(n):
    """
    Number of 1 bits in the binary representation of n (for n >= 0).
    """
    return bin(n).count('1')

def count_weights_slow(n, k):
    """
    Return number of weight-k integers in [0, n] (for n >= 0, k >= 0)
    """
    return sum(weight(m) == k for m in range(n+1))
The results of comparing the two algorithms look convincing:
>>> count_weights(100, 5)
11
>>> count_weights_slow(100, 5)
11
>>> all(count_weights(n, k) == count_weights_slow(n, k)
... for n in range(1000) for k in range(10))
True
However, our supposedly fast count_weights function doesn't scale well to the
size numbers you need:
>>> count_weights(2**64, 5) # takes a few seconds on my machine
7624512
>>> count_weights(2**64, 6) # minutes ...
74974368
>>> count_weights(2**64, 10) # gave up waiting ...
But here's where a second key idea of dynamic programming comes in: memoize!
That is, keep a record of the results of previous calls, in case we need to use
them again. It turns out that the chain of recursive calls made will tend to
repeat lots of calls, so there's value in memoizing. In Python, this is
trivially easy to do, via the functools.lru_cache decorator. Here's our new
version of count_weights. All that's changed is the import and the decorator line at the top:
from functools import lru_cache

@lru_cache(maxsize=None)
def count_weights(n, k):
    """
    Return number of weight-k integers in [0, n] (for n >= 0, k >= 0)
    """
    if k == 0:
        return 1  # 0 is the only weight-0 value
    elif n == 0:
        return 0  # only considering 0, which doesn't have positive weight
    else:
        from_even = count_weights(n//2, k)
        from_odd = count_weights((n-1)//2, k-1)
        return from_even + from_odd
Now testing on those larger examples again, we get results much more quickly,
without any noticeable delay.
>>> count_weights(2**64, 10)
151473214816
>>> count_weights(2**64, 32)
1832624140942590534
>>> count_weights(5853459801720308837, 27)
356506415596813420
So now we have an efficient way to count, we've got an inverse problem to
solve: given k and m, find an n such that count_weights(2*n, k) -
count_weights(n, k) == m. This one turns out to be especially easy, since the
quantity count_weights(2*n, k) - count_weights(n, k) is monotonically
increasing with n (for fixed k), and more specifically increases by either
0 or 1 every time n increases by 1. I'll leave the proofs of those
facts to you, but here's a demo:
>>> for n in range(10, 30): print(n, count_weights(n, 3))
...
10 1
11 2
12 2
13 3
14 4
15 4
16 4
17 4
18 4
19 5
20 5
21 6
22 7
23 7
24 7
25 8
26 9
27 9
28 10
29 10
This means that we're guaranteed to be able to find a solution. There may be multiple solutions, so we'll aim to find the smallest one (though it would be equally easy to find the largest one). Bisection search gives us a crude but effective way to do this. Here's the code:
def solve(m, k):
    """
    Find the smallest n >= 0 such that [n+1, 2n] contains exactly
    m weight-k integers.
    Assumes that m >= 1 (for m = 0, the answer is trivially n = 0).
    """
    def big_enough(n):
        """
        Target function for our bisection search solver.
        """
        diff = count_weights(2*n, k) - count_weights(n, k)
        return diff >= m
    low = 0
    assert not big_enough(low)
    # Initial phase: expand interval to identify an upper bound.
    high = 1
    while not big_enough(high):
        high *= 2
    # Bisection phase.
    # Loop invariant: big_enough(high) is True and big_enough(low) is False
    while high - low > 1:
        mid = (high + low) // 2
        if big_enough(mid):
            high = mid
        else:
            low = mid
    return high
Testing the solution:
>>> n = solve(5853459801720308837, 27)
>>> n
407324170440003813446
Let's double check that n:
>>> count_weights(2*n, 27) - count_weights(n, 27)
5853459801720308837
Looks good. And if we got our search right, this should be the smallest
n that works:
>>> count_weights(2*(n-1), 27) - count_weights(n-1, 27)
5853459801720308836
There are plenty of other opportunities for optimizations and cleanups in the
above code, and other ways to tackle the problem, but I hope this gives you a
starting point.
The OP commented that they needed to do this in C, where memoization isn't immediately available without using an external library. Here's a variant of count_weights that doesn't need memoization. It's achieved by (a) tweaking the recursion in count_weights so that the same n is used in both recursive calls, and then (b) returning, for a given n, the values of count_weights(n, k) for all k for which the answer is nonzero. In effect, we're just moving the memoization into an explicit list.
Note: as written, the code below needs Python 3.
def count_all_weights(n):
    """
    Return frequencies of weights of all integers in [0, n],
    as a list. The kth entry in the list gives the count
    of weight-k integers in [0, n].
    Example
    -------
    >>> count_all_weights(16)
    [1, 5, 6, 4, 1]
    """
    if n == 0:
        return [1]
    else:
        wm = count_all_weights((n-1)//2)
        weights = [wm[0], *(wm[i]+wm[i+1] for i in range(len(wm)-1)), wm[-1]]
        if n % 2 == 0:
            weights[bin(n).count('1')] += 1
        return weights
An example call:
>>> count_all_weights(7590)
[1, 13, 78, 286, 714, 1278, 1679, 1624, 1139, 559, 182, 35, 3]
This function should be good enough even for larger n: count_all_weights(10**18) takes less than a half a millisecond on my machine.
Now the bisection search will work as before, replacing the call to count_weights(n, k) with count_all_weights(n)[k] (and similarly for count_weights(2*n, k)).
Finally, another possibility is to break up the interval [0, n] into a succession of smaller and smaller subintervals, where each subinterval has length a power of two. For example, we'd break the interval [0, 101] into [0, 63], [64, 95], [96, 99] and [100, 101]. The advantage of this is that we can easily compute how many weight-k integers there are in any one of these subintervals by counting combinations. For example, in [0, 63] we have all possible 6-bit combinations, so if we're after weight-3 integers, we know there must be exactly 6-choose-3 (i.e., 20) of them. And in [64, 95], we know each integer starts with a 1-bit, and then after excluding that 1-bit we have all possible 5-bit combinations, so again we know how many integers there are in this interval with any given weight.
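As a warm-up for the full solution below, here is a short counting-only sketch (the function name is made up) that adds one binomial coefficient per set bit of n, i.e. one per power-of-two subinterval:

from math import comb

def count_weights_comb(n, k):
    total = 0
    ones = 0                                   # 1-bits fixed so far by the prefix of n
    for bit in reversed(range(n.bit_length())):
        if n & (1 << bit):
            if k >= ones:
                total += comb(bit, k - ones)   # subinterval: prefix, a 0, then 'bit' free bits
            ones += 1
    return total + (ones == k)                 # account for n itself

print(count_weights_comb(100, 5))   # 11, matching count_weights(100, 5) above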
Applying this idea, here's a complete, fast, all-in-one function that solves your original problem. It has no recursion and no memoization.
def solve(m, k):
    """
    Given nonnegative integers m and k, find the smallest
    nonnegative integer n such that the closed interval
    [n+1, 2*n] contains exactly m weight-k integers.
    Note that for k small there may be no solution:
    if k == 0 then we have no solution unless m == 0,
    and if k == 1 we have no solution unless m is 0 or 1.
    """
    # Deal with edge cases.
    if k < 2 and k < m:
        raise ValueError("No solution")
    elif k == 0 or m == 0:
        return 0
    k -= 1
    # Find upper bound on n, and generate a subset of
    # Pascal's triangle as we go.
    rows = []
    high, row = 1, [1] + [0] * k
    while row[k] < m:
        rows.append((high, row))
        high, row = high * 2, [1, *(row[i]+row[i+1] for i in range(k))]
    # Bisect to find first n that works.
    low = mlow = weight = 0
    while rows:
        high, row = rows.pop()
        mmid = mlow + row[k - weight]
        if mmid < m:
            low, mlow, weight = low + high, mmid, weight + 1
    return low + 1

Given k sorted numbers, what is the minimum cost to turn them into consecutive numbers?

Suppose we are given a sorted list of k numbers. Now, we want to convert this sorted list into a list of consecutive numbers. The only operation allowed is to increase/decrease a number by one, and each such operation increases the total cost by one.
Now, how to minimize the total cost while converting the list as mentioned?
One idea that I have is to get the median of the sorted list and arrange the numbers around the median. After that, just add up the absolute differences between the corresponding numbers in the newly created list and the original list. But this is just an intuitive method; I don't have any proof of it.
P.S.:
Here's an example-
Sorted list: -96, -75, -53, -24.
We can convert this list into a consecutive list by various methods.
The optimal one is: -58, -59, -60, -61
Cost: 90
This is a sub-part of a problem from Topcoder.
Let's assume that the solution is in increasing order and m, M are the minimum and maximum value of the sorted list. The other case will be handled the same way.
Each solution is defined by the number assigned to the first element. If this number is very small then increasing it by one will reduce the cost. We can continue increasing this number until the cost grows. From this point the cost will continuously grow. So the optimum will be a local minimum and we can find it by using binary search. The range we are going to search will be [m - n, M + n] where n is the number of elements:
l = [-96, -75, -53, -24]

# Cost if initial value is x
def cost(l, x):
    return sum(abs(i - v) for i, v in enumerate(l, x))

def find(l):
    a, b = l[0] - len(l), l[-1] + len(l)
    while a < b:
        m = (a + b) // 2
        if cost(l, m + 1) >= cost(l, m) <= cost(l, m - 1):  # Local minimum
            return m
        if cost(l, m + 1) < cost(l, m):
            a = m + 1
        else:
            b = m - 1
    return b
Testing:
>>> initial = find(l)
>>> range(initial, initial + len(l))
[-60, -59, -58, -57]
>>> cost(l, initial)
90
Here is a simple solution:
Let's assume that these numbers are x, x + 1, ..., x + n - 1. Then the cost is the sum over i = 0 ... n - 1 of abs(a[i] - (x + i)). Let's call it f(x).
f(x) is piecewise linear and it approaches infinity as x approaches +infinity or -infinity. It means that its minimum is reached at one of the breakpoints (the endpoints of the linear pieces).
The end points are a[0], a[1] - 1, a[2] - 2, ..., a[n - 1] - (n - 1). So we can just try all of them and pick the best.
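A quick sketch of that approach (the function name is made up), checked against the example above:

def min_cost_consecutive(a):
    # a is sorted; f(x) is piecewise linear, so only x = a[i] - i need checking
    def cost(x):
        return sum(abs(v - (x + i)) for i, v in enumerate(a))
    return min(cost(a[i] - i) for i in range(len(a)))

print(min_cost_consecutive([-96, -75, -53, -24]))   # 90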

Creating a random number generator from a coin toss

Yesterday I had this interview question, which I couldn't fully answer:
Given a function f() = 0 or 1 with a perfect 1:1 distribution, create a function f(n) = 0, 1, 2, ..., n-1 each with probability 1/n
I could come up with a solution for the case where n is a natural power of 2, i.e. use f() to generate the k = log2(n) bits of a binary number. But this obviously wouldn't work for, say, n = 5, as that could also generate 5, 6 or 7, which we do not want.
Does anyone know a solution?
You can build an RNG for the smallest power of two that is at least n, as you described. Then whenever this algorithm generates a number larger than n-1, throw that number away and try again. This is called the method of rejection.
Addition
The algorithm is
Let m = 2^k >= n where k is as small as possible.
do
    Let r = random number in 0 .. m-1 generated by k coin flips
while r >= n
return r
The probability that this loop stops with at most i iterations is bounded by 1 - (1/2)^i. This goes to 1 very rapidly: The loop is still running after 30 iterations with probability less than one-billionth.
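A minimal Python sketch of this basic rejection method (coin_flip here merely stands in for the given f()):

import random

def coin_flip():
    # stand-in for the perfect 0/1 source f() from the question
    return random.randint(0, 1)

def uniform_int(n):
    # rejection: take k coin flips, retry whenever the k-bit number is >= n
    k = (n - 1).bit_length()      # smallest k with 2^k >= n
    while True:
        r = 0
        for _ in range(k):
            r = (r << 1) | coin_flip()
        if r < n:
            return r

print([uniform_int(5) for _ in range(10)])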
You can decrease the expected number of iterations with a slightly modified algorithm:
Choose p >= 1
Let m = 2^k >= p*n where k is as small as possible.
do
    Let r = random number in 0 .. m-1 generated by k coin flips
while r >= p*n
return floor(r / p)
For example if we are trying to generate 0 .. 4 (n = 5) with the simpler algorithm, we would reject 5, 6 and 7, which is 3/8 of the results. With p = 3 (for example), pn = 15, we'd have m = 16 and would reject only 15, or 1/16 of the results. The price is needing four coin flips rather than 3 and a division op. You can continue to increase p and add coin flips to decrease rejections as far as you wish.
Another interesting solution can be derived through a Markov Chain Monte Carlo technique, the Metropolis-Hastings algorithm. This would be significantly more efficient if a large number of samples were required but it would only approach the uniform distribution in the limit.
initialize x[0] arbitrarily
for i = 1, 2, ..., N:
    if f() == 1: x[i] = (x[i-1] + 1) % n
    else:        x[i] = (x[i-1] - 1 + n) % n
For large N the vector x will contain uniformly distributed numbers between 0 and n. Additionally, by adding in an accept/reject step we can simulate from an arbitrary distribution, but you would need to simulate uniform random numbers on [0,1] as a sub-procedure.
from math import floor

def gen(a, b):
    # coin_flip() is the fair 0/1 source from the question.
    # Halve the real interval [a, b) with coin flips until it contains a
    # single integer, which is then returned.
    min_possible = float(a)
    max_possible = float(b)
    while True:
        floor_min_possible = floor(min_possible)
        floor_max_possible = floor(max_possible)
        if max_possible.is_integer():
            floor_max_possible -= 1
        if floor_max_possible == floor_min_possible:
            return floor_max_possible
        mid = (min_possible + max_possible) / 2
        if coin_flip():
            min_possible = mid
        else:
            max_possible = mid
My #RandomNumberGenerator #RNG
With any f(x) that gives random ints from 1 to x, we can get random ints from 1 to k, for any k:
choose ints p & q so that p^q is as small as possible, while p is a factor of x & p^q >= k;
Lbl A
i = 0 & s = 1; while i < q {
    s += ((f(x) - 1) mod p) * p^i;
    i++;
}
if s > k, goto A, else return s
//** about notation/terms:
rand = random
int = integer
mod is (from) modulo arithmetic
Lbl is a “Label”, from the Basic language, & serves as a coordinate for executing code. After the while loop, if s > k, then “goto A” means return to the point of code where it says “Lbl A”, & resume. If you return to Lbl A & process the code again, it resets the values of i to 0 & s to 1.
i is an iterator for powers of p, & s is a sum.
"s+= foo" means "let s now equal what it used to be + foo".
"i++" means "let i now equal what it used to be + 1".
f(x) returns random integers from 1 to x. **//
I figured out/invented/solved it on my own, around 2008. The method is discussed as common knowledge here. Does anyone know since when the random number generator rejection method has been common knowledge? RSVP.
