Question from a face-to-face interview at MS:
Determine the number of integral solutions of
x1 + x2 + x3 + x4 + x5 = N
where 0 <= xi <= N
So basically we need to count the ways of writing N as an ordered sum of 5 non-negative integers
Supposed to be solved with paper and pencil. Did not make much headway though, does anybody have a solution for this?
Assume numbers are strictly > 0.
Consider an integer segment [0, N]. The problem is to split it into 5 segments of positive length. Imagine we do that by putting 4 splitter dots between adjacent numbers. How many ways are there to do that? C(N-1, 4).
Now, some of the numbers can be 0. Let k be the number of non-zero numbers. We can choose them in C(5,k) ways, each choice having C(N-1, k-1) splittings (k positive parts need k-1 splitter dots). Accumulating over all k in the [1, 5] range, we get
Sum[ C(5,k) * C(N-1,k-1); k = 1 to 5 ] = C(N+4, 4)
@Grigor Gevorgyan indeed gives the right way to figure out the solution.
Think about the case
1 <= xi
That's exactly dividing N points into 5 segments, which is equivalent to inserting 4 "splitter dots" into the N-1 possible places (between adjacent numbers). So the answer is C(N-1,4).
Then what about the case
0 <= xi
?
If you take the solution for X+5 points with
1 <= xi
whose answer is C(N-1,4) = C(X+5-1,4) = C(X+4,4) (here N = X+5),
then you simply remove one point from each set, and you have a solution for X points with
0 <= xi
which means the answer is exactly C(X+4,4).
Topcoder tutorials
Look for the section "Combination with repetition": the specific case is explained under that section with a diagrammatic illustration. (A picture is worth quite a few words!)
You have the answer here.
It is a classical problem:
Number of options to put N balls in M boxes = C(M+N-1, N).
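For small N this stars-and-bars count is easy to sanity-check by brute force; here is a short sketch (function name is my own):

```python
from itertools import product
from math import comb

def count_solutions_brute(n):
    # Count 5-tuples (x1..x5) of non-negative integers with x1+...+x5 = n.
    return sum(1 for xs in product(range(n + 1), repeat=5) if sum(xs) == n)

# Stars and bars: the count is C(n + 5 - 1, 5 - 1) = C(n + 4, 4).
for n in range(8):
    assert count_solutions_brute(n) == comb(n + 4, 4)
```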
The combinations solution is more appropriate if a pen and paper solution was asked. It's also the classic solution. Here is a dynamic programming solution.
Let dp[i, N] = number of solutions of x1 + x2 + ... +xi = N.
Let's take x1 + x2 = N:
We have the solutions:
0 + N = N
1 + N - 1 = N
...
N + 0 = N
So dp[2, N] = N + 1 solutions.
Let's take x1 + x2 + x3 = N:
We have the solutions:
0 + (0 + N) = N
0 + (1 + N - 1) = N
...
0 + (N + 0) = N
...
Notice that there are N + 1 solutions thus far. Moving on:
1 + (0 + N - 1) = N
1 + (1 + N - 2) = N
...
1 + (N - 1 + 0) = N
...
Notice that there are another N solutions. Moving on:
...
N - 1 + (0 + 1) = N
N - 1 + (1 + 0) = N
=> +2 solutions
N + (0 + 0) = N
=> +1 solution
So we have dp[3, N] = dp[2, N] + dp[2, N - 1] + dp[2, N - 2] + ... + dp[2, 0].
Also notice that dp[k, 0] = 1
Computed naively, each entry of a row needs O(N) summations, so computing dp[k, N] this way costs O(k*N^2).
To keep the complexity of each row O(N), store s[i] = sum of the first i elements on the previous row, so that each entry is a single lookup; this brings the total down to O(k*N). The memory used can also be reduced to O(N).
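A sketch of that prefix-sum DP in Python (names are mine); the row is updated in place, so each row costs O(N), for O(k*N) time and O(N) memory overall:

```python
def count_sums(k, n):
    # dp[m] = number of solutions of x1 + ... + xi = m for the current i.
    dp = [1] * (n + 1)          # i = 1: exactly one solution for every m
    for _ in range(2, k + 1):
        prefix = 0
        for m in range(n + 1):  # new dp[m] = old dp[0] + dp[1] + ... + dp[m]
            prefix += dp[m]     # reads the old value before overwriting it
            dp[m] = prefix
    return dp[n]
```

count_sums(5, N) agrees with the combinatorial answer C(N+4, 4).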
How can I find the expected value for the expression, in the form P/Q?
Given:
N integers
2 operators, 'Bitwise OR' and '+'
We can use either operator, with equal probability, between each pair of consecutive integers to form the expression.
Currently, the solution I have in mind is to generate every possible expression using the operators and then calculate the expected value from the value of each expression.
But as N grows, this approach fails. Is there an alternative that is efficient in terms of time complexity?
Note: For this question: 'Bitwise OR' has higher priority than '+' operator.
There can be at max 10^5 integers.
Example:
Input
1 2 3
Output
19/4
The different ways are:
1+2+3 = 6
1+2|3 = 4
1|2+3 = 6
1|2|3 = 3
All these ways have probability = 1/4
So expected value will be 19/4
The important observation is that every + splits its left and right parts into sections that can be processed independently.
Let the array of numbers be a[1…N]. Define f(i) to be the expectation value obtained from a[i…N]. What we want to find is f(1).
Note that the first + sign in [i…N] will appear right after the i-th element with probability 1/2, after the (i+1)-th element with probability 1/4, and so on. Just take the bitwise OR of the elements before that + and add the expectation value of what remains.
Thus we have the recurrence
f(i) = sum_{j = i to N-1} (or(a[i…j]) + f(j+1))/(2^(j-i+1))
+ or(a[i…N])/(2^(N-i))
This should be easy to implement efficiently without errors.
For the example array [1,2,3]:
f(3) = or(a[3…3]) = 3
f(2) = (or(a[2…2])+f(3))/2 + or(a[2…3])/2 = 5/2 + 3/2 = 4
f(1) = (or(a[1…1])+f(2))/2 + (or(a[1…2])+f(3))/4 + or(a[1…3])/4 = 5/2 + 6/4 + 3/4 = 19/4
The answer is found to be 19/4, as expected.
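For completeness, a direct O(n²) transcription of this recurrence using exact fractions — indices are 0-based here, and the names are my own:

```python
from fractions import Fraction

def expected_value(a):
    n = len(a)
    f = [Fraction(0)] * (n + 1)      # f[i] = expectation of a[i..n-1]
    for i in range(n - 1, -1, -1):
        acc = a[i]                   # running OR of a[i..j]
        val = Fraction(0)
        for j in range(i, n - 1):    # first '+' sits right after position j
            val += (acc + f[j + 1]) / 2 ** (j - i + 1)
            acc |= a[j + 1]
        val += Fraction(acc, 2 ** (n - 1 - i))  # no '+' at all: OR of the rest
        f[i] = val
    return f[0]

# expected_value([1, 2, 3]) == Fraction(19, 4)
```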
First of all, since there are 2ⁿ⁻¹ expressions (two possible operators on each of the n-1 places between numbers) and they are all equally probable, the expected value is the sum of all the expressions divided by 2ⁿ⁻¹. So the problem boils down to calculating the sum of the expressions.
An O(n²) algorithm
Let x_1, x_2, ..., x_n be the input numbers.
Let S_k be the sum of all expressions formed by inserting | or + between every pair of consecutive numbers in the list x_1, x_2, ..., x_k.
Let N_k be the number of all such expressions. N_k = 2 ^ (k - 1).
Let's see how we can use S_1, S_2, ..., S_(k-1) to calculate S_k.
The idea is to divide all possible expressions by the position of the last "+" in them.
The sum of the expressions of the form "... + x_k" is
S_(k-1) + x_k * N_(k-1)
The sum of the expressions of the form "... + x_(k-1) | x_k" is
S_(k-2) + (x_(k-1) | x_k) * N_(k-2)
The sum of the expressions of the form "... + x_(k-2) | x_(k-1) | x_k" is
S_(k-3) + (x_(k-2) | x_(k-1) | x_k) * N_(k-3)
...and so on until the single expression x_1 | x_2 | ... | x_k.
Here is a Python implementation of the algorithm.
numbers = [1, 2, 3]  # The input numbers.
totals = [0]  # The partial sums. For every k > 0 totals[k] is S_k.
for i in range(len(numbers)):  # Processing the numbers one by one.
    new_total = 0
    last_summand = 0  # last_summand is numbers[j] | ... | numbers[i]
    for j in range(i, 0, -1):  # j is the position of the last plus in the expression.
        # On every iteration new_total is increased by the sum of the
        # expressions of the form "... + numbers[j] | ... | numbers[i]".
        last_summand |= numbers[j]
        new_total += totals[j] + last_summand * (2 ** (j - 1))
    last_summand |= numbers[0]
    new_total += last_summand  # Handling the expression with no pluses at all.
    totals.append(new_total)

# Now the last element in totals is the sum of all expressions.
print(str(totals[-1]) + '/' + str(2 ** (len(numbers) - 1)))
Further optimization: O(n*log(M))
The problem has two properties that can be used to create a faster algorithm.
If S_n is the sum of the expressions formed by the numbers x_1, x_2, ..., x_n, then 2*S_n is the sum of the expressions formed by the numbers 2*x_1, 2*x_2, ..., 2*x_n.
If x_1, x_2, ..., x_n and y_1, y_2, ..., y_n are such that x_k & y_m == 0 for any k and m, and SX_n is the sum of the expressions formed by x_1, x_2, ..., x_n, and SY_n is the sum of the expressions formed by y_1, y_2, ..., y_n, then SX_n + SY_n is the sum of the expressions formed by x_1+y_1, x_2+y_2, ..., x_n+y_n.
Which means, the problem can be reduced to finding the sum of the expressions for 1-bit numbers. Every bit position from 0 to 31 can be processed separately, and after the solutions are found we can simply add them.
Let x_1, x_2, ..., x_n be one-bit numbers (every x_i is either 0 or 1).
Let S_k be the sum of the expressions formed by x_1, x_2, ..., x_k.
Let N0_k be the number of such expressions where the last summand equals 0.
Let N1_k be the number of such expressions where the last summand equals 1.
Here are the recurrence relations that allow finding S_k, N0_k and N1_k knowing only x_k, S_(k-1), N0_(k-1) and N1_(k-1):
k = 1, x_1 = 0:
S_1 = 0
N0_1 = 1
N1_1 = 0
k = 1, x_1 = 1:
S_1 = 1
N0_1 = 0
N1_1 = 1
k > 1, x_k = 0:
S_k = S_(k-1) * 2
N0_k = N0_(k-1) * 2 + N1_(k-1)
N1_k = N1_(k-1)
k > 1, x_k = 1:
S_k = S_(k-1) * 2 + N0_(k-1) * 2 + N1_(k-1)
N0_k = 0
N1_k = N0_(k-1) * 2 + N1_(k-1) * 2
Since S_n can be found in O(n) and it needs to be found for every bit position, the time complexity of the whole algorithm is O(n*log(M)), where M is the upper bound on the numbers.
An implementation:
numbers = [1, 2, 3]
max_bits_in_number = 31

def get_bit(x, k):
    return (x >> k) & 1

total_sum = 0
for bit_index in range(max_bits_in_number):
    bit = get_bit(numbers[0], bit_index)
    expression_sum = bit
    expression_count = (1 - bit, bit)
    for i in range(1, len(numbers)):
        bit = get_bit(numbers[i], bit_index)
        if bit == 0:
            expression_sum = expression_sum * 2
            expression_count = (expression_count[0] * 2 + expression_count[1],
                                expression_count[1])
        else:
            expression_sum = expression_sum * 2 + expression_count[0] * 2 + expression_count[1]
            expression_count = (0, expression_count[0] * 2 + expression_count[1] * 2)
    total_sum += expression_sum * 2 ** bit_index

print(str(total_sum) + '/' + str(2 ** (len(numbers) - 1)))
Find the n-th number that either contains the digit k or is divisible by k (2 <= k <= 9).
Example: if n = 15 and k = 3, the answer is 33; the sequence is 3, 6, 9, 12, 13, 15, 18, 21, 23, 24, 27, 30, 31, 32, 33.
I started following the sequence but couldn't formulate it:
for multiples of 3 -> 3+3+3+4+3+3+4+3+3+4
for containing digit 3 ->
{
range in diff = 100 -> 1+1+1+10+1+1+1+1+1+1 = f(n) say;
range in diff = 1000 ->
f(n)+f(n)+f(n)+10*f(n)+f(n)+f(n)+f(n)+f(n)+f(n)+f(n) = ff(n) say
range in diff = 10000 ->
ff(n) + ff(n) + ff(n) + 10*ff(n)+ff(n) + ff(n) + ff(n)+ff(n) + ff(n) + ff(n)
same goes further.
}
I have to answer in better than O(n), or in O(1) if possible. Please don't suggest methods like checking every number in a loop. Thanks.
Edit: I have searched everywhere but couldn't find this answered anywhere, so it's not a duplicate.
Here's one way to think about it that could point you along at least one direction (or, alternatively, a wild-goose chase). Separate the two questions and remove overlapping results:
(1) How many j-digit numbers are divisible by k? floor((10^j - 1) / k) - floor((10^(j-1) - 1) / k)
(2) How many j-digit numbers include the digit k? 9 * 10^(j-1) - 8 * 9^(j-1)
Now we need to subtract the j-digit numbers that are both divisible by k and include the digit k. But how many are there?
Use divisibility rules to consider the different cases. For example:
k = 2
If k is the rightmost digit, any combination of the previous j-1 digits would work.
Otherwise, only combinations with 0,4,6 or 8 as the rightmost digit would work.
k = 5
If k is the rightmost digit, any combination of the previous j-1 digits would work.
Otherwise, only combinations with 0 or 5 as the rightmost digit would work.
etc.
(Addendum: I asked the combinatoric question on math.stackexchange and got some interesting answers. And here's a link to the OP's question on math.stackexchange: https://math.stackexchange.com/questions/1884303/the-n-th-number-that-contains-the-digit-k-or-is-divisible-by-k-2-le-k-l )
Following up on גלעד ברקן's answer: if you have an O(1) way of calculating d(j, k) = how many numbers up to j contain the digit k but are not divisible by k, then you can calculate e(j, k) = how many numbers up to j contain the digit k or are divisible by k as floor(j/k) + d(j, k).
This allows you to find f(n, k) with binary search, since k <= f(n, k) <= k*n and f(n, k) is the smallest j with e(j, k) = n: you essentially try to guess which j will yield the expected n, in O(log n) tries.
I agree with גלעד ברקן's observation that divisibility rules are the way to calculate d(j, k) efficiently, but they are not trivial to implement, except for k = 5 and k = 2.
I strongly doubt that you can improve on O(log n) for this problem, and it may not even be reachable for some values of k.
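A sketch of the binary-search wrapper. The counting function e is written here as a brute-force loop purely for illustration; a real implementation would replace it with the O(log j) digit counting described above:

```python
def e(j, k):
    # Brute-force stand-in: how many numbers in [1, j] contain the digit k
    # or are divisible by k.  Replace with digit DP for a real solution.
    return sum(1 for m in range(1, j + 1)
               if m % k == 0 or str(k) in str(m))

def f(n, k):
    # Smallest j with e(j, k) >= n; since each member of the sequence
    # increments e by exactly 1, this j is the n-th member.
    lo, hi = k, k * n            # k <= f(n, k) <= k * n
    while lo < hi:
        mid = (lo + hi) // 2
        if e(mid, k) >= n:
            hi = mid
        else:
            lo = mid + 1
    return lo
```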
This is more complex than I thought, but I think I figured out a solution for the simplest case (k = 2).
First I tried to simplify by asking the following question: Which position in the sequence have the numbers 10^i * k where i = 1, 2, 3, ...? For k = 2 the numbers are 20, 200, 2000, ...
i k n
1 2 20/2 = 10
2 2 200/2 + 2* 5 = 110
3 2 2000/2 + 2* 50 + 18* 5 = 1190
4 2 20000/2 + 2*500 + 18*50 + 162*5 = 12710
i 2 10^i + 2*10^(i-1)/2 + 18*10^(i-2)/2 + 162*10^(i-3)/2 + ?*10^(i-4)/2 + ...
In the last line I tried to express the pattern. The first part is the count of numbers divisible by 2. Then there are i-1 additional parts for the odd numbers with a 2 in the first position, second position, and so on. The difficult part is to calculate the factors (2, 18, 162, ...).
Here a function returning the new factor for any i:
f(i) = 2 * 10^(i-2) - sum(10^(i-x-1)*f(x), x from 2 to i-1) = 2 * 9^(i-2) [thanks @m69]
f(2) = 2
f(3) = 2*10 - (1*2) = 18
f(4) = 2*100 - (10*2 + 1*18) = 162
f(5) = 2*1000 - (100*2 + 10*18 + 1*162) = 1458
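These positions and factors can be checked by brute force (a quick script of my own, for k = 2):

```python
def position_of(m, k=2):
    # Position of m in the sequence of numbers that contain the digit k
    # or are divisible by k (brute force, for checking only).
    return sum(1 for x in range(1, m + 1)
               if x % k == 0 or str(k) in str(x))

assert position_of(20) == 10
assert position_of(200) == 110
assert position_of(2000) == 1190
# The factors 2, 18, 162, 1458 are 2 * 9^(i-2):
assert [2 * 9 ** (i - 2) for i in range(2, 6)] == [2, 18, 162, 1458]
```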
So using this information we can come up with the following algorithm:
Find the highest number 10^i*2 which does not exceed the position. (If n is in the range [positionOf(10^i*2), positionOf(10^i*2) + (10^i)] then we already know the solution: 10^i*2 + (n - positionOf(10^i*2)). E.g. if we find that i=2 we know that the next 100 values are all in the sequence: [201, 300], so if 110 <= n <= 210, then the solution is 200+(n-110) = n+90.)
int nn = positionOf(10^i * 2);
int s = 10^i * 2;
for (int ii = i; ii >= 0; ii--) {
    for (int j = 1; j < 10; j++) {
        if (j == 1 || j == 6) {
            if (n <= nn + 10^ii)
                return s + (n - nn);
            nn += 10^ii;
            s += 10^ii;
            int tmp = positionOf(10^ii);
            if (nn + tmp > n)
                break;
            nn += tmp;
            s += 10^ii;
        } else {
            int tmp = positionOf(10^ii * 2);
            if (nn + tmp > n)
                break;
            nn += tmp;
            s += 10^ii * 2;
        }
    }
}
return s;
This is only untested, incomplete pseudo-code (I know you can't use ^ in Java); ii = 1 or 0 needs to be treated as a special case, which is missing, and how to find i isn't shown either, or the answer would become too long.
This can be solved using binary search + digit DP,
with time complexity O(log n).
For a solution, see: https://ideone.com/poxhzd
This question already has answers here:
Sum of Digits till a number which is given as input
(2 answers)
Closed 6 years ago.
Problem:
Find the sum of the digits of all the numbers from 1 to N (both ends included)
Time Complexity should be O(logN)
For N = 10 the sum is 1+2+3+4+5+6+7+8+9+(1+0) = 46
For N = 11 the sum is 1+2+3+4+5+6+7+8+9+(1+0)+(1+1) = 48
For N = 12 the sum is 1+2+3+4+5+6+7+8+9+(1+0)+(1+1) +(1+2)= 51
This recursive solution works like a charm, but I'd like to understand the rationale for reaching such a solution. I believe it's based on finite induction, but can someone show exactly how to solve this problem?
I've pasted (with minor modifications) the aforementioned solution:
static long Solution(long n)
{
    if (n <= 0)
        return 0;
    if (n < 10)
        return (n * (n + 1)) / 2; // sum of arithmetic progression
    long x = long.Parse(n.ToString().Substring(0, 1)); // first digit
    long y = long.Parse(n.ToString().Substring(1)); // remaining digits
    int power = (int)Math.Pow(10, n.ToString().Length - 1);
    // how to reach this recursive solution?
    return (power * Solution(x - 1))
        + (x * (y + 1))
        + (x * Solution(power - 1))
        + Solution(y);
}
Unit test (which is NOT O(logN)):
long count = 0;
for (int i = 1; i <= N; i++)
{
    foreach (var c in i.ToString().ToCharArray())
        count += int.Parse(c.ToString());
}

Or:

Enumerable.Range(1, N).SelectMany(
    n => n.ToString().ToCharArray().Select(
        c => int.Parse(c.ToString())
    )
).Sum();
This is actually an O(n^log10(2))-time solution (log10(2) is approximately 0.3). Not sure if that matters. We have n = xy, where xy denotes digit concatenation, not multiplication. Here are the four key lines with commentary underneath.
return (power * Solution(x - 1))
This counts the contribution of the x place for the numbers from 1 inclusive to x*power exclusive. This recursive call doesn't contribute to the complexity because it returns in constant time.
+ (x * (y + 1))
This counts the contribution of the x place for the numbers from x*power inclusive to n inclusive.
+ (x * Solution(power - 1))
This counts the contribution of the lower-order places for the numbers from 1 inclusive to x*power exclusive. This call is on a number one digit shorter than n.
+ Solution(y);
This counts the contribution of the lower-order places for the numbers from x*power inclusive to n inclusive. This call is on a number one digit shorter than n.
We get the time bound from applying Case 1 of the Master Theorem. To get the running time down to O(log n), we can compute Solution(power - 1) analytically. I don't remember offhand what the closed form is.
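The closed form is in fact simple: for power = 10^d, the digits of 1 .. 10^d - 1 sum to 45*d*10^(d-1), since each of the d positions cycles through 0..9 exactly 10^(d-1) times. Substituting this for the Solution(power - 1) call leaves a single recursive branch, giving O(log n). A quick check of the identity (my own sketch):

```python
def digit_sum_below_power(d):
    # Sum of the digits of all numbers from 1 to 10^d - 1: each of the d
    # positions takes every value 0..9 exactly 10^(d-1) times, and
    # 0+1+...+9 = 45, so each position contributes 45 * 10^(d-1).
    return 45 * d * 10 ** (d - 1)

# Brute-force verification for small d.
for d in range(1, 5):
    assert digit_sum_below_power(d) == sum(
        sum(int(c) for c in str(m)) for m in range(1, 10 ** d))
```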
After thinking for a while (and finding similar answers), I managed to work out the rationale, which also led me to another solution.
Definitions
Let S(n) be the sum of the digits of all numbers 1 <= k <= n.
Let D(k) be the plain digit sum of k only.
(I'll omit parentheses for clarity, so consider Dx = D(x).)
If n >= 10, let's decompose n by splitting off the last digit: n = 10*k + r (k, r being integers, 0 <= r <= 9).
We need to sum S(n) = S(10*k + r) = S(10*k) + D(10*k+1) + ... + D(10*k+r)
The first part, S(10*k), follows a pattern:
S(10*1) = D1+D2+...+D10 = (1+2+3+...+9)*1 + D10
S(10*2) = D1+D2+...+D20 = (1+2+3+...+9)*2 + 1*9 + D10 + D20
S(10*3) = D1+D2+...+D30 = (1+2+3+...+9)*3 + 1*9 + 2*9 + D10 + D20 + D30
So S(10*k) = (1+2+3+...+9)*k + 9*S(k-1) + S(k-1) + D(10*k) = 45*k + 10*S(k-1) + D(10*k)
Regarding the last part, we know that D(10*k+x) = D(10*k)+D(x) = D(k)+x, so this last part can be simplified:
D(10*k+1) + ... + D(10*k+r) = D(k)+1 + D(k)+2 + ... D(k)+r = rD(k) + (1+2+...+r) = rD(k) + r*(1+r)/2
So, adding both parts of the equation (and grouping D(k)) we have:
S(n) = 45*k + 10*S(k-1) + (1+r)D(k) + r*(1+r)/2
And replacing k and r we have:
S(n) = 45*k + 10*S((n/10)-1) + (1+n%10)*D(n/10) + n%10*(1+n%10)/2
Pseudocode:
S(n):
    if n = 0: return 0
    if n < 10: return n*(1+n)/2
    k = n / 10  # integer division; decompose n = 10*k + r
    r = n % 10
    return 45*k + 10*S(k-1) + (1+r)*D(k) + r*(1+r)/2
D(n):
    just sum the digits of n
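A direct Python transcription of this pseudocode (integer division throughout):

```python
def D(n):
    # Plain digit sum of n.
    return sum(int(c) for c in str(n))

def S(n):
    # Sum of the digits of all numbers from 1 to n inclusive.
    if n <= 0:
        return 0
    if n < 10:
        return n * (n + 1) // 2
    k, r = divmod(n, 10)        # n = 10*k + r
    return 45 * k + 10 * S(k - 1) + (1 + r) * D(k) + r * (1 + r) // 2
```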
First algorithm (the one from the original question) in C#
static BigInteger Solution(BigInteger n)
{
    if (n <= 0)
        return 0;
    if (n < 10)
        return (n * (n + 1)) / 2; // sum of arithmetic progression
    long x = long.Parse(n.ToString().Substring(0, 1)); // first digit
    long y = long.Parse(n.ToString().Substring(1)); // remaining digits
    BigInteger power = BigInteger.Pow(10, n.ToString().Length - 1);
    var log = Math.Round(BigInteger.Log10(power)); // BigInteger.Log10 can give rounding errors like 2.99999
    return (power * Solution(x - 1)) // x place for 1 (inclusive) to x*power (exclusive); constant time since x < 10
        + (x * (y + 1)) // x place for x*power (inclusive) to n (inclusive)
        // + (x * Solution(power - 1)) // lower-order places for 1 to x*power - 1, replaced below by its closed form
        + (x * 45 * new BigInteger(log) * BigInteger.Pow(10, (int)log - 1))
        + Solution(y); // lower-order places for x*power (inclusive) to n (inclusive)
}
Second algorithm (deduced from formula above) in C#
static BigInteger Solution2(BigInteger n)
{
    if (n <= 0)
        return 0;
    if (n < 10)
        return (n * (n + 1)) / 2; // sum of arithmetic progression
    BigInteger r = BigInteger.ModPow(n, 1, 10); // decompose n = 10*k + r
    BigInteger k = BigInteger.Divide(n, 10);
    return 45 * k
        + 10 * Solution2(k - 1) // 10*S((n/10)-1)
        + (1 + r) * (k.ToString().ToCharArray().Select(x => int.Parse(x.ToString())).Sum()) // (1+n%10)*D(n/10)
        + (r * (r + 1)) / 2; // n%10*(1+n%10)/2
}
EDIT: According to my tests, it runs faster than both the original version (which used recursion twice) and the version modified to calculate Solution(power - 1) in a single step.
PS: I'm not sure, but I guess that if I had split off the first digit of the number instead of the last, I might have arrived at a solution like the original algorithm.
Given an array a of n integers, count how many subsequences (non-consecutive as well) have sum % k = 0:
1 <= k < 100
1 <= n <= 10^6
1 <= a[i] <= 1000
An O(n^2) solution is easily possible, however a faster way O(n log n) or O(n) is needed.
This is the subset sum problem.
A simple solution is this:
s = 0
dp[x] = how many subsequences we can build with sum x
dp[0] = 1, 0 elsewhere
for i = 1 to n:
    s += a[i]
    for j = s down to a[i]:
        dp[j] = dp[j] + dp[j - a[i]]
Then you can simply return the sum of all dp[x] such that x % k == 0. This has a high complexity though: about O(n*S), where S is the sum of all of your elements. The dp array must also have size S, which you probably can't even afford to declare for your constraints.
A better solution is to not iterate over sums larger than or equal to k in the first place. To do this, we will use 2 dp arrays:
dp1, dp2 = arrays of size k
dp1[0] = dp2[0] = 1, 0 elsewhere
for i = 1 to n:
    mod_elem = a[i] % k
    for j = 0 to k - 1:
        dp2[j] = dp2[j] + dp1[(j - mod_elem + k) % k]
    copy dp2 into dp1
return dp1[0]
Its complexity is O(n*k), which is fine for these constraints. (Note that dp1[0] counts the empty subsequence as well; subtract 1 if only non-empty subsequences should be counted.)
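The same O(n*k) idea in Python, using a rolling array instead of the explicit dp1/dp2 pair (names are mine):

```python
def count_divisible_subsequences(a, k):
    # dp[j] = number of subsequences (including the empty one) whose sum
    # is congruent to j mod k.
    dp = [0] * k
    dp[0] = 1
    for x in a:
        m = x % k
        # Each element either joins an existing subsequence or doesn't.
        dp = [dp[j] + dp[(j - m) % k] for j in range(k)]
    return dp[0] - 1   # drop the empty subsequence

# count_divisible_subsequences([2, 4, 6], 3) == 3: {6}, {2,4}, {2,4,6}
```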
There's an O(n + k^2 lg n)-time algorithm. Compute a histogram c(0), c(1), ..., c(k-1) of the input array mod k (i.e., there are c(r) elements that are r mod k). Then compute
k-1
product (1 + x^r)^c(r) mod (1 - x^k)
r=0
as follows, where the constant term of the reduced polynomial is the answer (note that it counts the empty subsequence too).
Rather than evaluating each factor with a fast exponentiation method and then multiplying, we turn things inside out. If all c(r) are zero, then the answer is 1. Otherwise, recursively evaluate
k-1
P = product (1 + x^r)^(floor(c(r)/2)) mod (1 - x^k).
r=0
and then compute
k-1
Q = product (1 + x^r)^(c(r) - 2 floor(c(r)/2)) mod (1 - x^k),
r=0
in time O(k^2) for the latter computation by exploiting the sparsity of the factors. The result is P^2 Q mod (1 - x^k), computed in time O(k^2) via naive convolution.
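Here is a simpler (and asymptotically worse: roughly O(k^2 log c(r)) per residue) sketch of the same polynomial product, raising each factor by plain square-and-multiply with exponents reduced mod k; it is handy for checking the fancier recursion (names are mine):

```python
def count_mod_k(a, k):
    # Multiply (1 + x^r)^c(r) over residues r, with every polynomial taken
    # mod (1 - x^k), i.e. exponents wrap around mod k.  The constant term
    # counts subsequences (including the empty one) with sum divisible by k.
    def mul(p, q):
        out = [0] * k
        for i, pi in enumerate(p):
            if pi:
                for j, qj in enumerate(q):
                    out[(i + j) % k] += pi * qj
        return out

    counts = [0] * k
    for x in a:
        counts[x % k] += 1

    result = [0] * k
    result[0] = 1
    for r, c in enumerate(counts):
        base = [0] * k
        base[0] += 1
        base[r] += 1              # base = 1 + x^r  (for r = 0 this is 2)
        while c:                  # square-and-multiply exponentiation
            if c & 1:
                result = mul(result, base)
            base = mul(base, base)
            c >>= 1
    return result[0]
```

Remember the constant term includes the empty subsequence; subtract 1 for non-empty counts.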
Traverse a and count each a[i] mod k; there will be k such counts (treat a residue of 0 as the part k).
Recurse and memoize over the partitions of k, 2*k, 3*k, ... with parts less than or equal to k; for each partition, multiply the appropriate binomial coefficients of the counts (a part that appears m times with count c contributes C(c, m)), and sum the products.
For example, if k were 10, some of the partitions would be 1+2+7 and 1+2+3+4; but while memoizing, we would only need to calculate once how many pairs mod k in the array produce (1 + 2).
For example, k = 5, a = {1,4,2,3,5,6}:
counts of a[i] mod k: {1,2,1,1,1}
products over partitions of k:
5 => 1
4,1 => 2
3,2 => 1
3,1,1 => C(2,2) = 1
products over partitions of 2 * k with parts <= k:
5,4,1 => 2
5,3,2 => 1
5,3,1,1 => 1
4,3,2,1 => 2
products over partitions of 3 * k with parts <= k:
5,4,3,2,1 => 2
answer = 13
{5} {1,4} {4,6} {2,3} {1,6,3}
{1,4,5} {4,6,5} {2,3,5} {1,4,2,3} {4,6,2,3} {1,6,3,5}
{1,4,2,3,5} {4,6,2,3,5}
Here is the problem from SPOJ. It states:
For a string of n bits x1, x2, x3, ..., xn, the adjacent bit count of the string, AdjBC(x), is given by
x1*x2 + x2*x3 + x3*x4 + ... + x(n-1)*xn
which counts the number of times a 1 bit is adjacent to another 1 bit. For example:
AdjBC(011101101) = 3
AdjBC(111101101) = 4
AdjBC(010101010) = 0
And the question is: write a program which takes as input integers n and k and returns the number of bit strings x of n bits (out of 2ⁿ) that satisfy AdjBC(x) = k.
I have no idea how to solve this problem. Can you help me solve it?
Thanks
Often in combinatorial problems, it helps to look at the set of values it produces. Using brute force I calculated the following table:
k 0 1 2 3 4 5 6
n +----------------------------
1 | 2 0 0 0 0 0 0
2 | 3 1 0 0 0 0 0
3 | 5 2 1 0 0 0 0
4 | 8 5 2 1 0 0 0
5 | 13 10 6 2 1 0 0
6 | 21 20 13 7 2 1 0
7 | 34 38 29 16 8 2 1
The first column is the familiar Fibonacci sequence and satisfies the recurrence relation f(n, 0) = f(n-1, 0) + f(n-2, 0).
The other columns satisfy the recurrence relation f(n, k) = f(n-1, k) + f(n-1, k-1) + f(n-2, k) - f(n-2, k-1).
With this, you can do some dynamic programming:
INPUT: n, k
row1 <- [2,0,0,0,...] (k+1 elements)
row2 <- [3,1,0,0,...] (k+1 elements)
repeat (n-2) times
for j = k downto 1 do
row1[j] <- row2[j] + row2[j-1] + row1[j] - row1[j-1]
row1[0] <- row1[0] + row2[0]
swap row1 and row2
return row2[k]
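A direct Python version of this table-filling (my own transcription):

```python
def adj_bc_count(n, k):
    # Number of n-bit strings with AdjBC = k, from the recurrence
    # f(n, k) = f(n-1, k) + f(n-1, k-1) + f(n-2, k) - f(n-2, k-1).
    row1 = [2] + [0] * k                 # n = 1
    if n == 1:
        return row1[k]
    row2 = ([3, 1] + [0] * max(0, k - 1))[:k + 1]   # n = 2
    for _ in range(n - 2):
        new = [row1[0] + row2[0]]        # Fibonacci column (k = 0)
        for j in range(1, k + 1):
            new.append(row2[j] + row2[j - 1] + row1[j] - row1[j - 1])
        row1, row2 = row2, new
    return row2[k]

# adj_bc_count(7, 2) == 29, matching the brute-force table above.
```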
As a hint you can split it up into two cases: numbers ending in 0 and numbers ending in 1.
def f(n, k):
    return f_ending_in_0(n, k) + f_ending_in_1(n, k)

def f_ending_in_0(n, k):
    if n == 1: return k == 0
    return f(n - 1, k)

def f_ending_in_1(n, k):
    if n == 1: return k == 0
    return f_ending_in_0(n - 1, k) + f_ending_in_1(n - 1, k - 1)
This gives the correct output but takes a long time to execute. You can apply standard dynamic programming or memoization techniques to get this to perform fast enough.
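For instance, a memoized variant of the hint above (using functools.lru_cache; the explicit guard for k < 0 and the integer base cases are my additions):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f_ending_in_0(n, k):
    if n == 1:
        return 1 if k == 0 else 0
    return f(n - 1, k)

@lru_cache(maxsize=None)
def f_ending_in_1(n, k):
    if n == 1:
        return 1 if k == 0 else 0
    if k < 0:
        return 0
    return f_ending_in_0(n - 1, k) + f_ending_in_1(n - 1, k - 1)

def f(n, k):
    # Split on the last bit: strings ending in 0 plus strings ending in 1.
    return f_ending_in_0(n, k) + f_ending_in_1(n, k)
```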
I am late to the party, but I have a linear time complexity solution.
For me this is more of a mathematical problem. You can read the detailed solution in this blog post written by me. What follows is a brief outline. I wish I could put some LaTeX here, but SO doesn't allow that.
Suppose for given n and k, our answer is given by the function f(n,k). Using Beggar's Method, we can arrive at the following formula
f(n,k) = SUM C(k+a-1,a-1)*C(n-k+1-a,a), where a runs from 1 to (n-k+1)/2
Here C(p,q) denotes binomial coefficients.
So to get our answer, we have to calculate both binomial coefficients for each value of a. We can precompute the binomial table beforehand; that approach then gives our answer in O(n^2), since we have to build the table.
We can improve the time complexity by using the recursion formula C(p,q) = (p * C(p-1,q-1))/q to calculate the current value of binomial coefficients from their values in previous loop.
Our final code looks like this:
long long x = n - k, y = 1, p = n - k + 1, ans = 0;
ans += x * y;
for (int a = 2; a <= p / 2; a++)
{
    x = (x * (p - 2*a + 1) * (p - 2*a + 2)) / (a * (p - a + 1));
    y = (y * (k + a - 1)) / (a - 1);
    ans += x * y;
}
You can find the complete accepted solution in my GitHub repository.
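Since the repository isn't reproduced here, a small Python check of the formula itself (valid for k >= 1; for k = 0 one must add 1 for the all-zeros string):

```python
from math import comb

def f(n, k):
    # f(n, k) = sum over a of C(k+a-1, a-1) * C(n-k+1-a, a),
    # a = 1 .. (n-k+1)//2, counting a blocks of consecutive 1s.
    p = n - k + 1
    return sum(comb(k + a - 1, a - 1) * comb(p - a, a)
               for a in range(1, p // 2 + 1))

# f(7, 2) == 29, agreeing with the brute-force table in the earlier answer.
```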