Unlucky Numbers - algorithm

Unlucky Numbers (NOT HOMEWORK)
A number is considered unlucky if its decimal representation contains only the digits 4 and 7. Our goal is to count such numbers in an inclusive range of positive integers [a, b].
For Example:
Input : a = 10 b = 20
Output : 0
Input : a = 30 b = 50
Output : 2 (44, 47)
Below is the code I tried, using a static-array approach: I first calculate all possible unlucky numbers that fit in a 32-bit integer, which is done in O(n), and a later sequential scan obtains the count, again an O(n) operation. Is there a better approach that does not need a static array?
#include <stdio.h>

#define MAX_UNLUCKY 1022

static int unlucky[MAX_UNLUCKY];

int main(void) {
    int i, k;
    int a, b;
    printf("Enter the numbers : \n");
    scanf("%d", &a);
    scanf("%d", &b);
    unlucky[0] = 4;
    unlucky[1] = 7;
    k = 1;
    /* each entry appends a 4 or a 7 (alternating via k) to its parent */
    for (i = 2; i < MAX_UNLUCKY; ++i)
        unlucky[i] = unlucky[(i >> 1) - 1] * 10 + unlucky[k ^= 1];
    for (i = 0; i < MAX_UNLUCKY; ++i)
        if (unlucky[i] >= a) break;   /* first unlucky number >= a (inclusive range) */
    for (k = i; k < MAX_UNLUCKY; ++k) {
        if (unlucky[k] > b) break;
        printf("Unlucky number = %d\n", unlucky[k]);
    }
    printf("Total number of unlucky numbers in this range is %d\n", k - i);
    return 0;
}

Consider the following:
How many numbers are there between
binary 100 and binary 111?
100, 101, 110, 111 (4 = 0b111 - 0b100 + 1)
That's exactly how many unlucky numbers there are between 744 and 777 (744, 747, 774, 777): map digit 4 to 0 and digit 7 to 1.
Now:
700 and 800 have the same number of unlucky numbers between them as 744 and 777.
744 is the smallest unlucky number greater than 700 and 777 is the largest unlucky number smaller than 800.
No need to generate numbers, just subtraction.
For cases like a = 10, b = 800, handle each digit length separately: first find your number for 10-100 and then for 100-800 (otherwise you would mix numbers of different lengths):
For 10-100:
a = 44
b = 77
0b11 - 0b00 + 1 = 4 (44, 47, 74, 77)
For 100-800:
a = 444
b = 777
0b111 - 0b000 + 1 = 8 (444, 447, 474, 477, 744, 747, 774, 777)
So between 10 and 800: 4 + 8 = 12 numbers, which is also correct.
This is also O(1) time & space if you find the auxiliary numbers efficiently, which shouldn't be too hard...
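That subtraction idea can be turned directly into a counting function; here is a sketch in Python (the function names are mine, not from the answer above). Every unlucky number of length L corresponds to an L-bit binary string via 4→0 and 7→1, so counting unlucky numbers up to x is a sum of 2^L over shorter lengths plus a binary-style digit comparison at the longest length.

```python
def count_unlucky_upto(x):
    """Count integers in [1, x] whose decimal digits are all 4s and 7s."""
    if x < 4:
        return 0
    s = str(x)
    n = len(s)
    # every shorter length L contributes 2**L unlucky numbers
    total = sum(2 ** L for L in range(1, n))
    # digit-by-digit count of n-digit strings over {4, 7} that are <= s
    for i, ch in enumerate(s):
        d = int(ch)
        for a in (4, 7):
            if a < d:
                total += 2 ** (n - 1 - i)
        if d not in (4, 7):
            break
    else:
        total += 1  # x itself is unlucky
    return total

def count_unlucky(a, b):
    """Count unlucky numbers in the inclusive range [a, b]."""
    return count_unlucky_upto(b) - count_unlucky_upto(a - 1)
```

For the worked example above, count_unlucky(10, 800) gives the same 12.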

Related

Find The quotient of a number

There is a given number N; I have to find out how many integers give a quotient of one under repeated division of N.
For example:
N = 8
The numbers are: 2, as 8/2 = 4, 4/2 = 2, 2/2 = 1
5, as 8/5 = 1
6, as 8/6 = 1
7 and 8
My approach:
All the numbers from N/2+1 to N give quotient 1, so:
Answer: N/2 + check the numbers from 2 to sqrt(N)
Time complexity: O(sqrt(N))
Is there any better way to do this, since the number can be up to 10^12 and there can be many queries?
Can it be O(1) or O(40) (because 2^40 exceeds 10^12)?
A test harness to verify functionality and assess order of complexity.
Edit as needed - it's wiki.
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

unsigned long long nn = 0;

unsigned repeat_div(unsigned n, unsigned d) {
    for (;;) {
        nn++;
        unsigned q = n / d;
        if (q <= 1) return q;
        n = q;
    }
}

unsigned num_repeat_div2(unsigned n) {
    unsigned count = 0;
    for (unsigned d = 2; d <= n; d++) {
        count += repeat_div(n, d);
    }
    return count;
}

unsigned num_repeat_div2_NM(unsigned n) {
    unsigned count = 0;
    if (n > 1) {
        count += (n + 1) / 2;
        unsigned hi = (unsigned) sqrt(n);
        for (unsigned d = 2; d <= hi; d++) {
            count += repeat_div(n, d);
        }
    }
    return count;
}

unsigned num_repeat_div2_test(unsigned n) {
    // number of integers whose repetitive division with n gives quotient one
    unsigned count = 0;
    // increment nn per code's tightest loop
    ...
    return count;
}

///
unsigned (*method_rd[])(unsigned) = { num_repeat_div2, num_repeat_div2_NM,
    num_repeat_div2_test };
#define RD_N (sizeof method_rd / sizeof method_rd[0])

unsigned test_rd(unsigned n, unsigned long long *iteration) {
    unsigned count = 0;
    for (unsigned i = 0; i < RD_N; i++) {
        nn = 0;
        unsigned this_count = method_rd[i](n);
        iteration[i] += nn;
        if (i > 0 && this_count != count) {
            printf("Oops %u %u %u\n", i, count, this_count);
            exit(-1);
        }
        count = this_count;
        // printf("rd[%u](%u) = %u. Iterations:%llu\n", i, n, count, nn);
    }
    return count;
}

void tests_rd(unsigned lo, unsigned hi) {
    unsigned long long total_iterations[RD_N] = {0};
    unsigned long long total_count = 0;
    for (unsigned n = lo; n <= hi; n++) {
        total_count += test_rd(n, total_iterations);
    }
    for (unsigned i = 0; i < RD_N; i++) {
        printf("Sum rd(%u,%u) --> %llu. Total iterations: %llu\n", lo, hi,
            total_count, total_iterations[i]);
    }
}

int main(void) {
    tests_rd(2, 10 * 1000);
    return 0;
}
If you'd like O(1) lookup per query, the hash table of naturals less than or equal to 10^12 that are powers of other naturals will not be much larger than 2,000,000 elements. Create it by iterating over the bases from 2 to 1,000,000, incrementing the value of keys already seen: bases from 1,000,000 down to 10,001 need only be squared; bases from 10,000 down to 1,001 need at most be cubed; below that, as has been mentioned, there can be at most 40 operations for the smallest base.
Each value in the table will represent the number of base/power configurations (e.g., 512 -> 2, corresponding to 2^9 and 8^3).
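As a sketch of that construction at a much smaller limit (power_table is a name I chose; the real table would be built once with limit = 10^12):

```python
from collections import defaultdict

def power_table(limit):
    """Map each perfect power <= limit to the number of base**exp
    (exp >= 2) representations it has."""
    table = defaultdict(int)
    base = 2
    while base * base <= limit:
        value = base * base
        while value <= limit:
            table[value] += 1
            value *= base
        base += 1
    return table
```

For example, power_table(1000)[512] is 2, matching the 2^9 / 8^3 example above.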
First off, your algorithm is not O(sqrt(N)), as you are ignoring the number of times you divide by each of the checked numbers. If the number being checked is k, the number of divisions before the result is obtained (by the method described above) is O(log(k)). Hence the complexity becomes N/2 + (log(2) + log(3) + ... + log(sqrt(N))) = O(log(N) * sqrt(N)).
Now that we have got that out of the way, the algorithm may be improved. Observe that repeated division gives a final quotient of 1 for a checked number k exactly when k^t <= N < 2 * k^t, where t = floor(log_k(N)). Note the strict < on the right-hand side.
Now, to figure out t, you can use the Newton-Raphson method or a Taylor series to get logarithms very quickly, and a complexity measure is mentioned here. Let us call that C(N). So the complexity will be C(2) + C(3) + ... + C(sqrt(N)). If you can ignore the cost of computing the log, you can get this down to O(sqrt(N)).
For example, in the above case for N = 8:
2^3 <= 8 < 2 * 2^3 : 1
floor(log_3(8)) = 1 and 8 does not satisfy 3^1 <= 8 < 2 * 3^1 : 0
floor(log_4(8)) = 1 and 8 does not satisfy 4^1 <= 8 < 2 * 4^1 : 0
4 extra coming in from the numbers 5, 6, 7 and 8, since t = 1 for these numbers.
Note that we did not need to check 3 and 4 (both are above sqrt(8)), but I have done so to illustrate the point. And you can verify that each of the numbers in [N/2+1 .. N] satisfies the above inequality and hence needs to be added.
If you use this approach, we can eliminate the repeated divisions and get the complexity down to O(sqrt(N)), assuming the cost of computing logarithms is negligible.
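Here is a sketch of that approach in Python, hedged in one respect: it finds t with an integer power loop rather than a fast logarithm, so each check costs O(log N) multiplications instead of the O(1) assumed above (count_quotient_one is a name I chose):

```python
def count_quotient_one(n):
    """Count the d >= 2 for which repeated division of n by d ends at quotient 1."""
    if n < 2:
        return 0
    count = n - n // 2          # every d in (n/2, n] gives n // d == 1 immediately
    d = 2
    while d * d <= n:
        p = d                   # p becomes the largest power of d that is <= n
        while p * d <= n:
            p *= d
        if n < 2 * p:           # i.e. d**t <= n < 2 * d**t
            count += 1
        d += 1
    return count
```

For N = 8 this returns 5 (the numbers 2, 5, 6, 7, 8 from the question).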
Let's see: since the number can be up to 10^12, what you can do is precompute, for each base i from 2 to 10^6, its powers up to exponent 40 (an array of 40 per base suffices, since 2^40 exceeds 10^12). Then, for each length len between 1 and 40, check whether the number can be expressed via i^len for some base i between 2 and 10^6.
So time complexity O(40 * Query).

Most efficient method of generating a random number with a fixed number of bits set

I need to generate a random number, but it needs to be selected from the set of binary numbers with equal numbers of set bits. E.g. choose a random byte value with exactly 2 bits set...
00000000 - no
00000001 - no
00000010 - no
00000011 - YES
00000100 - no
00000101 - YES
00000110 - YES
...
=> Set of possible numbers 3, 5, 6...
Note that this is a simplified set of numbers. Think more along the lines of 'Choose a random 64-bit number with exactly 40 bits set'. Each number from the set must be equally likely to arise.
Do a random selection from the set of all bit positions, then set those bits.
Example in Python:
import random

def random_bits(word_size, bit_count):
    number = 0
    for bit in random.sample(range(word_size), bit_count):
        number |= 1 << bit
    return number
Results of running the above 10 times:
0xb1f69da5cb867efbL
0xfceff3c3e16ea92dL
0xecaea89655befe77L
0xbf7d57a9b62f338bL
0x8cd1fee76f2c69f7L
0x8563bfc6d9df32dfL
0xdf0cdaebf0177e5fL
0xf7ab75fe3e2d11c7L
0x97f9f1cbb1f9e2f8L
0x7f7f075de5b73362L
I have found an elegant solution: random dichotomy.
The idea is that, on average:
AND with a random number halves the number of set bits,
OR with a random number adds 50% more set bits.
C code to compile with gcc (to have __builtin_popcountll):
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/// Return a random number, with nb_bits bits set out of the width LSB
uint64_t random_bits(uint8_t width, uint8_t nb_bits)
{
    assert(nb_bits <= width);
    assert(width <= 64);
    uint64_t x_min = 0;
    uint64_t x_max = width == 64 ? (uint64_t)-1 : (1ULL << width) - 1;
    int n = 0;
    while (n != nb_bits)
    {
        // generate a random value of at least width bits
        // (random() returns 31 random bits)
        uint64_t x = random();
        if (width > 31)
            x ^= (uint64_t)random() << 31;
        if (width > 62)
            x ^= (uint64_t)random() << 33;
        x = x_min | (x & x_max); // x_min is a subset of x, which is a subset of x_max
        n = __builtin_popcountll(x);
        printf("x_min = 0x%016" PRIX64 ", %d bits\n", x_min, __builtin_popcountll(x_min));
        printf("x_max = 0x%016" PRIX64 ", %d bits\n", x_max, __builtin_popcountll(x_max));
        printf("x     = 0x%016" PRIX64 ", %d bits\n\n", x, n);
        if (n > nb_bits)
            x_max = x;
        else
            x_min = x;
    }
    return x_min;
}
In general fewer than 10 loops are needed to reach the requested number of bits (and with luck it can take 2 or 3 loops). The corner cases (nb_bits = 0, 1, width-1, width) work, even though handling them specially would be faster.
Example of result:
x_min = 0x0000000000000000, 0 bits
x_max = 0x1FFFFFFFFFFFFFFF, 61 bits
x = 0x1492717D79B2F570, 33 bits
x_min = 0x0000000000000000, 0 bits
x_max = 0x1492717D79B2F570, 33 bits
x = 0x1000202C70305120, 14 bits
x_min = 0x0000000000000000, 0 bits
x_max = 0x1000202C70305120, 14 bits
x = 0x0000200C10200120, 7 bits
x_min = 0x0000200C10200120, 7 bits
x_max = 0x1000202C70305120, 14 bits
x = 0x1000200C70200120, 10 bits
x_min = 0x1000200C70200120, 10 bits
x_max = 0x1000202C70305120, 14 bits
x = 0x1000200C70201120, 11 bits
x_min = 0x1000200C70201120, 11 bits
x_max = 0x1000202C70305120, 14 bits
x = 0x1000200C70301120, 12 bits
width = 61, nb_bits = 12, x = 0x1000200C70301120
Of course, you need a good prng. Otherwise you can face an infinite loop.
Say the number of bits to set is b and the word size is w. I would create a vector v of length w with the first b values set to 1 and the rest set to 0. Then just shuffle v.
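In Python, that shuffle approach is only a few lines (random_k_bits_shuffle is a name I chose):

```python
import random

def random_k_bits_shuffle(word_size, bit_count):
    """Shuffle bit_count ones and (word_size - bit_count) zeros,
    then pack the shuffled list into an integer."""
    bits = [1] * bit_count + [0] * (word_size - bit_count)
    random.shuffle(bits)
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value
```

Since the shuffle is uniform over permutations, every word with exactly bit_count set bits is equally likely.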
Here is another option which is very simple and reasonably fast in practice.
choose a bit at random
if it is already set
    do nothing
else
    set it
    increment count
end if
Repeat until count equals the number of bits you want set.
This will only be slow when the number of bits you want set (call it k) is more than half the word length (call it N). In that case, use the algorithm to set N - k bits instead and then flip all the bits in the result.
I bet the expected running time here is pretty good, although I am too lazy/stupid to compute it precisely right now. But I can bound it as less than 2*k... The expected number of flips of a coin to get "heads" is two, and each iteration here has a better than 1/2 chance of succeeding.
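A sketch of this set-random-bits approach, including the complement trick for k > N/2 described above (random_k_bits_reject is a name I chose):

```python
import random

def random_k_bits_reject(word_size, k):
    """Pick random bit positions, skipping positions already set, until k
    bits are set; for k > word_size/2, place the zeros instead and
    complement the result so the loop stays fast."""
    if k > word_size // 2:
        mask = (1 << word_size) - 1
        return mask ^ random_k_bits_reject(word_size, word_size - k)
    value = 0
    count = 0
    while count < k:
        bit = 1 << random.randrange(word_size)
        if not value & bit:
            value |= bit
            count += 1
    return value
```
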
If you don't have the convenience of Python's random.sample, you might do this in C using the classic sequential sampling algorithm:
unsigned long k_bit_helper(int n, int k, unsigned long bit, unsigned long accum) {
    if (!(n && k))
        return accum;
    if (k > rand() % n)
        return k_bit_helper(n - 1, k - 1, bit + bit, accum + bit);
    else
        return k_bit_helper(n - 1, k, bit + bit, accum);
}

unsigned long random_k_bits(int k) {
    return k_bit_helper(64, k, 1, 0);
}
unsigned long random_k_bits(int k) {
return k_bit_helper(64, k, 1, 0);
}
The cost of the above will be dominated by the cost of generating the random numbers (true in the other solutions, also). You can optimize this a bit if you have a good prng by batching: for example, since you know that the random numbers will be in steadily decreasing ranges, you could get the random numbers for n through n-3 by getting a random number in the range 0..(n * (n - 1) * (n - 2) * (n - 3)) and then extracting the individual random numbers:
r = randint(0, n * (n - 1) * (n - 2) * (n - 3) - 1);
rn = r % n; r /= n
rn1 = r % (n - 1); r /= (n - 1);
rn2 = r % (n - 2); r /= (n - 2);
rn3 = r % (n - 3); r /= (n - 3);
The maximum value of n is presumably 64, so the maximum value of the product above is certainly less than 2^24 (64 * 63 * 62 * 61 is about 15 million). Indeed, if you used a 64-bit prng, you could extract as many as 10 random numbers out of it. However, don't do this unless you know the prng you use produces independently random bits.
I have another suggestion based on enumeration: choose a random number i between 1 and n choose k, and generate the i-th combination. For example, for n = 6, k = 3 the 20 combinations are:
000111
001011
010011
100011
001101
010101
100101
011001
101001
110001
001110
010110
100110
011010
101010
110010
011100
101100
110100
111000
Let's say we randomly choose combination number 7. We first check whether it has a 1 in the last position: it has, because the first 10 (5 choose 2) combinations have. We then recursively check the remaining positions. Here is some C++ code:
word ithCombination(int n, int k, word i) {
// i is zero-based
word x = 0;
word b = 1;
while (k) {
word c = binCoeff[n - 1][k - 1];
if (i < c) {
x |= b;
--k;
} else {
i -= c;
}
--n;
b <<= 1;
}
return x;
}
word randomKBits(int k) {
word i = randomRange(0, binCoeff[BITS_PER_WORD][k] - 1);
return ithCombination(BITS_PER_WORD, k, i);
}
To be fast, we use precalculated binomial coefficients in binCoeff. The function randomRange returns a random integer between the two bounds (inclusively).
I did some timings (source). With the C++11 default random number generator, most time is spent in generating random numbers. Then this solution is fastest, since it uses the absolute minimum number of random bits possible. If I use a fast random number generator, then the solution by mic006 is fastest. If k is known to be very small, it's best to just randomly set bits until k are set.
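For comparison, the same unranking idea can be sketched in Python, with math.comb standing in for the precomputed binCoeff table:

```python
import math
import random

def ith_combination(n, k, i):
    """Return the i-th (zero-based) n-bit word with exactly k bits set,
    in the same order as the C++ version above."""
    x = 0
    b = 1
    while k:
        c = math.comb(n - 1, k - 1)   # combinations with a 1 in this position
        if i < c:
            x |= b
            k -= 1
        else:
            i -= c
        n -= 1
        b <<= 1
    return x

def random_k_bits(n, k):
    """Uniformly random n-bit word with exactly k bits set."""
    return ith_combination(n, k, random.randrange(math.comb(n, k)))
```

Indices 0 and 19 give the first and last entries of the n = 6, k = 3 listing above (0b000111 and 0b111000).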
Not exactly an algorithm suggestion, but I just found a really neat solution in JavaScript to get random bits directly from Math.random output bits using an ArrayBuffer.
//Swap var out with const and let for maximum performance! I like to use var because of prototyping ease
var randomBitList = function(n){
var floats = Math.ceil(n/64)+1;
var buff = new ArrayBuffer(floats*8);
var floatView = new Float64Array(buff);
var int8View = new Uint8Array(buff);
var intView = new Int32Array(buff);
for(var i = 0; i < (floats-1)*2; i++){
floatView[floats-1] = Math.random();
int8View[(floats-1)*8] = int8View[(floats-1)*8+4];
intView[i] = intView[(floats-1)*2];
}
this.get = function(idx){
var i = idx>>5;//divide by 32
var j = idx%32;
return (intView[i]>>j)&1;
//return Math.random()>0.5?0:1;
};
this.getBitList = function(){
var arr = [];
for(var idx = 0; idx < n; idx++){
var i = idx>>5;//divide by 32
var j = idx%32;
arr[idx] = (intView[i]>>j)&1;
}
return arr;
}
};

Minimal positive number divisible by N

1 <= N <= 1000
How do you find the minimal positive number that is divisible by N and whose digit sum equals N?
For example:
N:Result
1:1
10:190
And the algorithm shouldn't take more than 2 seconds.
Any ideas (pseudocode, Pascal, C++ or Java)?
Let f(len, sum, mod) be a boolean meaning: we can build a number (maybe with leading zeros) that has length len+1, digit sum equal to sum, and remainder mod when divided by n.
Then f(len, sum, mod) = OR over i from 0 to 9 of f(len-1, sum-i, mod - i*10^len). You can then find the minimal l such that f(l, n, n) is true. After that, just find the first digit, then the second, and so on.
#include <cstring>
#include <iostream>
#include <string>
using namespace std;

#define FOR(i, a, b) for(int i = a; i < b; ++i)
#define REP(i, N) FOR(i, 0, N)
#define FILL(a,v) memset(a,v,sizeof(a))

const int maxlen = 120;
const int maxn = 1000;
int st[maxlen];
int n;
bool can[maxlen][maxn+1][maxn+1];
bool was[maxlen][maxn+1][maxn+1];

bool f(int l, int s, int m)
{
    m = m % n;
    if (m < 0)
        m += n;
    if (s == 0)
        return (m == 0);
    if (s < 0)
        return false;
    if (l < 0)
        return false;
    if (was[l][s][m])
        return can[l][s][m];
    was[l][s][m] = true;
    can[l][s][m] = false;
    REP(i, 10)
        if (f(l-1, s-i, m - st[l]*i))
        {
            can[l][s][m] = true;
            return true;
        }
    return false;
}

string build(int l, int s, int m)
{
    if (l < 0)
        return "";
    m = m % n;
    if (m < 0)
        m += n;
    REP(i, 10)
        if (f(l-1, s-i, m - st[l]*i))
        {
            return char('0'+i) + build(l-1, s-i, m - st[l]*i);
        }
    return "";
}

int main(int argc, char** argv)
{
    ios_base::sync_with_stdio(false);
    cin >> n;
    FILL(was, false);
    st[0] = 1;
    FOR(i, 1, maxlen)
        st[i] = (st[i-1]*10) % n;
    REP(i, maxlen)
        if (f(i, n, n))
        {
            cout << build(i, n, n) << endl;
            break;
        }
    return 0;
}
NOTE that this uses ~250 MB of memory.
EDIT: I found a test where this solution runs a bit too long: 999 takes almost 5 s.
Update: I had understood that the result was supposed to be between 0 and 1000, but no: with larger inputs the naive algorithm can take a considerable amount of time. The output for 80 would be 29999998880.
You don't need a fancy algorithm. A loop that checks your condition for 1000 numbers will take less than 2 seconds on any reasonably modern computer, even in interpreted languages.
If you want to make it smart, you only need to check numbers that are multiples of N. To further restrict the search space, the remainders of N and the result have to be equal when divided by 9 (a number is congruent to its digit sum mod 9). This means that now you have to check only one number in every 9N.
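A sketch of that filtered search (min_multiple_with_digit_sum and the safety cap limit are my own naming; the mod-9 test just skips the digit-sum computation for multiples that cannot qualify):

```python
def digit_sum(x):
    return sum(int(d) for d in str(x))

def min_multiple_with_digit_sum(n, limit=10**7):
    """Smallest multiple of n with digit sum n, or None if none is <= limit.
    A number is congruent to its digit sum mod 9, so a multiple whose
    remainder mod 9 differs from n's can be skipped immediately."""
    x = n
    while x <= limit:
        if x % 9 == n % 9 and digit_sum(x) == n:
            return x
        x += n
    return None
```
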
Sure, pseudo-code, since it smells like homework :-)
def findNum (n):
    testnum = n
    while testnum <= 1000:
        tempnum = testnum
        sum = 0
        while tempnum > 0:
            sum = sum + (tempnum mod 10)
            tempnum = int (tempnum / 10)
        if sum == n:
            return testnum
        testnum = testnum + n
    return -1
It takes about 15 thousandths of a second when translated to Python so well under your two-second threshold. It works by basically testing every multiple of N less than or equal to 1000.
The test runs through each digit in the number adding it to a sum then, if that sum matches N, it returns the number. If no number qualifies, it returns -1.
As test cases, I used:
n findNum(n) Justification
== ========== =============
1 1 1 = 1 * 1, 1 = 1
10 190 190 = 19 * 10, 10 = 1 + 9 + 0
13 247 247 = 13 * 19, 13 = 2 + 4 + 7
17 476 476 = 17 * 28, 17 = 4 + 7 + 6
99 -1 none needed
Now, that only checks multiples up to 1000. Checking all the numbers needed for the larger results starts to take much more than two seconds, no matter what language you use. You may be able to find a faster algorithm, but I'd like to suggest something else.
You will probably not find a faster algorithm than what it would take to simply look up the values in a table. So, I would simply run a program once to generate output along the lines of:
int numberDesired[] = {
0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
190, 209, 48, 247, 266, 195, 448, 476, 198, 874,
...
-1, -1};
and then just plug that into a new program so that it can use a blindingly fast lookup.
For example, you could do that with some Python like:
print "int numberDesired[] = {"
for i in range (0, 10):
    s = " /* %4d-%4d */"%(i*10,i*10+9)
    for j in range (0, 10):
        s = "%s %d,"%(s,findNum(i*10+j))
    print s
print "};"
which generates:
int numberDesired[] = {
/* 0- 9 */ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
/* 10- 19 */ 190, 209, 48, 247, 266, 195, 448, 476, 198, 874,
/* 20- 29 */ 3980, 399, 2398, 1679, 888, 4975, 1898, 999, 7588, 4988,
/* 30- 39 */ 39990, 8959, 17888, 42999, 28798, 57995, 29988, 37999, 59888, 49998,
/* 40- 49 */ 699880, 177899, 88998, 99889, 479996, 499995, 589996, 686999, 699888, 788998,
/* 50- 59 */ 9999950, 889899, 1989988, 2989889, 1999998, 60989995, 7979888, 5899899, 8988898, 8888999,
/* 60- 69 */ 79999980, 9998998, 19999898, 19899999, 59989888, 69999995, 67999998, 58999999, 99899888, 79899999,
:
};
That will take a lot longer than two seconds, but here's the thing: you only have to run it once, then cut and paste the table into your code. Once you have the table, it will most likely blow away any algorithmic solution.
The maximum digit sum you have to worry about is 1000. Since 1000 / 9 ≈ 111, the answer has on the order of a hundred digits, which is actually not a lot, so I think the following should work:
Consider the following data structure:
entry { int r, sum, prev, lastDigit; }
Hold a queue of entry where initially you have r = 1 mod N, sum = 1, prev = -1, lastDigit = 1; r = 2 mod N, sum = 2, prev = -1, lastDigit = 2 etc.
When you extract an entry x from the queue:
for i = 0 to 9 do
    y = new entry
    y.r = (x.r * 10 + i) % N
    y.sum = x.sum + i
    y.prev = <position of x in the queue>
    y.lastDigit = i
    if y.r == 0 and y.sum == N
        // you found your multiple: use the prev and lastDigit entries to rebuild it
    if y.sum < N then
        queue.add(y)
This is basically a BFS on the digits. Since the maximum sum you care about is small, this should be pretty efficient.
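The BFS over (digit sum, remainder) states described above might be sketched like this (smallest_with_digit_sum is a name I chose):

```python
from collections import deque

def smallest_with_digit_sum(n):
    """BFS over (digit_sum, remainder mod n) states, appending digits;
    the first path that reaches (n, 0) gives the smallest such number."""
    parent = {}                        # state -> (previous state, digit appended)
    queue = deque()
    for d in range(1, 10):             # a leading digit cannot be 0
        state = (d, d % n)
        if state not in parent:
            parent[state] = (None, d)
            queue.append(state)
    while queue:
        s, r = queue.popleft()
        if s == n and r == 0:
            # rebuild the number by walking back through the parents
            digits = []
            state = (s, r)
            while state is not None:
                prev, d = parent[state]
                digits.append(str(d))
                state = prev
            return int("".join(reversed(digits)))
        for d in range(10):
            ns, nr = s + d, (r * 10 + d) % n
            if ns <= n and (ns, nr) not in parent:
                parent[(ns, nr)] = ((s, r), d)
                queue.append((ns, nr))
    return -1
```

BFS finds a shortest digit string first, and expanding digits in increasing order makes the first path found the numerically smallest among the shortest ones.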
After thinking about it a bit, I think I have found the expected answer.
Think of it as a graph. From any number, you can make a new number by multiplying it by 10 and adding any of the digits 0-9. Use BFS to reach the smallest number first.
For every node, maintain the digit sum and the remainder. Using these values you can move to the adjacent nodes, and they also help you avoid reaching useless states again and again. To print the number, you can use these values to trace your steps back.
Complexity is O(n^2); in the worst case the table is completely filled. (See code.)
Note: the code takes the number of test cases first. Works under 0.3 s for n <= 1000.
[Edit]: AC on SPOJ in 6.54 s. Test files have 50 numbers.
#include <cstdio>
#include <queue>
#include <cstring>
using namespace std;
#define F first
#define S second
#define N 1100
#define mp make_pair

queue<pair<int, int> > Q;
short sumTrace[N][N], mulTrace[N][N];

void print(int sum, int mul) {
    if (sumTrace[sum][mul] == 42) return;
    print(sum - sumTrace[sum][mul], mulTrace[sum][mul]);
    printf("%d", sumTrace[sum][mul]);
}

void solve(int n) {
    Q.push(mp(0, 0));
    sumTrace[0][0] = 42; // any number greater than 9
    while (1) {
        int sum = Q.front().F;
        int mul = Q.front().S;
        if (sum == n && mul == 0) break;
        Q.pop();
        for (int i = 0; i < 10; i++) {
            int nsum = sum + i;
            if (nsum > n) break;
            int nmul = (mul * 10 + i) % n;
            if (sumTrace[nsum][nmul] == -1) {
                Q.push(mp(nsum, nmul));
                sumTrace[nsum][nmul] = i;
                mulTrace[nsum][nmul] = mul;
            }
        }
    }
    print(n, 0);
    while (!Q.empty()) Q.pop();
}

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        memset(sumTrace, -1, sizeof sumTrace);
        solve(n);
        printf("\n");
    }
    return 0;
}

Algorithm to calculate the number of 1s for a range of numbers in binary

So I just got back from the ACM programming competition and did pretty well, but there was one problem that not one team got.
The Problem.
Start with an integer N0 which is greater than 0. Let N1 be the number of ones in the binary representation of N0. So, if N0 = 27, N1 = 4. For all i > 0, let Ni be the number of ones in the binary representation of Ni-1. This sequence will always converge to one. For any starting number, N0, let K be the minimum value of i >= 0 for which Ni = 1. For example, if N0 = 31, then N1 = 5, N2 = 2, N3 = 1, so K = 3.
Given a range of consecutive numbers and a value of X, how many numbers in the range have a K value equal to X?
Input
There will be several test cases in the input. Each test case will consist of three integers on a single line:
LO HI X
Where LO and HI (1 <= LO <= HI <= 10^18) are the lower and upper limits of a range of integers, and X (0 <= X <= 10) is the target value for K. The input will end with a line of three 0s.
Output
For each test case output a single integer, representing the number of integers in the range from LO to HI (inclusive) which have a K value equal to X. Print each integer on its own line with no spaces. Do not print any blank lines between answers.
Sample Input
31 31 3
31 31 1
27 31 1
27 31 2
1023 1025 1
1023 1025 2
0 0 0
Sample Output
1
0
0
3
1
1
If you guys want, I can include our answer or the full problem, because finding the count for a small range is easy. But I will give you a hint first: your program needs to run in seconds, not minutes. We had a correct solution, but not an algorithm efficient enough for a range like
48238 10^18 9
Anyway, good luck, and if the community likes these, we had some more we could not solve that could be good brain teasers for you guys. The competition allows you to use Python, C++, or Java; all three are acceptable in an answer.
So as a hint, my coach said to think of how binary numbers count rather than checking every bit. I think that gets us a lot closer.
I think a key is first understanding the pattern of K values and how rapidly it grows. Basically, you have:
K(1) = 0
K(X) = K(bitcount(X))+1 for X > 1
So finding the smallest X values for a given K we see
K(1) = 0
K(2) = 1
K(3) = 2
K(7) = 3
K(127) = 4
K(170141183460469231731687303715884105727) = 5
So for an example like 48238 10^18 9 the answer is trivially 0. K=0 only for 1, and K=1 only for powers of 2, so in the range of interest, we'll pretty much only see K values of 2, 3 or 4, and never see K >= 5
edit
Ok, so we're looking for an algorithm to count the number of values with K=2,3,4 in a range of values LO..HI without iterating over the entire range. So the first step is to find the number of values in the range with bitcount(x)==i for i = 1..59 (since we only care about values up to 10^18 and 10^18 < 2^60). So break down the range lo..hi into subranges that are a power of 2 in size and differ only in their lower n bits -- a range of the form x*(2^n)..(x+1)*(2^n)-1. We can break down the arbitrary lo..hi range into such subranges easily. For each such subrange there will be choose(n, i) values with i+bitcount(x) set bits.
So we just add all the subranges together to get a vector of counts for 1..59, which we then iterate over, adding together those elements with the same K value to get our answer.
edit (fixed again to be C89 compatible and work for lo=1/k=0)
Here's a C program to do what I previously described:
#include <stdio.h>
#include <string.h>
#include <assert.h>

int bitcount(long long x) {
    int rv = 0;
    while (x) { rv++; x &= x - 1; }
    return rv;
}

long long choose(long long m, long long n) {
    long long rv = 1;
    int i;
    for (i = 0; i < n; i++) {
        rv *= m - i;
        rv /= i + 1;
    }
    return rv;
}

void bitcounts_p2range(long long *counts, long long base, int l2range) {
    int i;
    assert((base & ((1LL << l2range) - 1)) == 0);
    counts += bitcount(base);
    for (i = 0; i <= l2range; i++)
        counts[i] += choose(l2range, i);
}

void bitcounts_range(long long *counts, long long lo, long long hi) {
    int l2range = 0;
    while (lo + (1LL << l2range) - 1 <= hi) {
        if (lo & (1LL << l2range)) {
            bitcounts_p2range(counts, lo, l2range);
            lo += 1LL << l2range;
        }
        l2range++;
    }
    while (l2range >= 0) {
        if (lo + (1LL << l2range) - 1 <= hi) {
            bitcounts_p2range(counts, lo, l2range);
            lo += 1LL << l2range;
        }
        l2range--;
    }
    assert(lo == hi + 1);
}

int K(int x) {
    int rv = 0;
    while (x > 1) {
        x = bitcount(x);
        rv++;
    }
    return rv;
}

int main() {
    long long counts[64];
    long long lo, hi, total;
    int i, k;
    while (scanf("%lld%lld%d", &lo, &hi, &k) == 3) {
        if (lo == 0 && hi == 0 && k == 0) break; /* terminating line of three 0s */
        if (lo < 1 || lo > hi || k < 0) break;
        total = 0;
        if (lo == 1) {
            lo++;
            if (k == 0) total++;
        }
        memset(counts, 0, sizeof(counts));
        bitcounts_range(counts, lo, hi);
        for (i = 1; i < 64; i++)
            if (K(i) + 1 == k)
                total += counts[i];
        printf("%lld\n", total);
    }
    return 0;
}
which runs just fine for values up to 2^63-1 (LONGLONG_MAX).
For 48238 1000000000000000000 3 it gives 513162479025364957, which certainly seems plausible
edit
giving the inputs of
48238 1000000000000000000 1
48238 1000000000000000000 2
48238 1000000000000000000 3
48238 1000000000000000000 4
gives outputs of
44
87878254941659920
513162479025364957
398959266032926842
Those add up to 999999999999951763 which is correct. The value for k=1 is correct (there are 44 powers of two in that range 2^16 up to 2^59). So while I'm not sure the other 3 values are correct, they're certainly plausible.
The idea behind this answer can help you develop a very fast solution. For ranges 0..2^N, the complexity of a potential algorithm would be O(N) in the worst case (assuming that the complexity of long arithmetic is O(1)). If programmed correctly, it should easily handle N = 1000000 in a matter of milliseconds.
Imagine we have the following values:
LO = 0; (0000000000000000000000000000000)
HI = 2147483647; (1111111111111111111111111111111)
The lowest possible N1 in the range LO..HI is 0.
The highest possible N1 in the range LO..HI is 31.
So the N2..NN part of the computation is done for only one of 32 values (i.e. 0..31),
which can be done simply, even without a computer.
Now let's compute the number of values with N1=X in the range LO..HI.
When we have X = 0 we have count(N1=X) = 1; this is the following value:
1 0000000000000000000000000000000
When we have X = 1 we have count(N1=X) = 31; these are the following values:
01 1000000000000000000000000000000
02 0100000000000000000000000000000
03 0010000000000000000000000000000
...
30 0000000000000000000000000000010
31 0000000000000000000000000000001
When we have X = 2 we have the following pattern:
1100000000000000000000000000000
How many unique strings can be formed with 29 '0's and 2 '1's?
Imagine the rightmost '1'(#1) is cycling from left to right, we get the following picture:
01 1100000000000000000000000000000
02 1010000000000000000000000000000
03 1001000000000000000000000000000
...
30 1000000000000000000000000000001
Now we've got 30 unique strings while moving '1'(#1) from left to right; it is now impossible to
create a unique string by moving '1'(#1) in any direction. This means we should move '1'(#2) to the right;
let's also reset the position of '1'(#1) as far left as possible while keeping the string unique. We get:
01 0110000000000000000000000000000
now we do the cycling of '1'(#1) once again
02 0101000000000000000000000000000
03 0100100000000000000000000000000
...
29 0100000000000000000000000000001
Now we've got 29 unique strings, continuing this whole operation 28 times we get the following expression
count(N1=2) = 30 + 29 + 28 + ... + 1 = 465
When we have X = 3 the picture remains similar but we are moving '1'(#1), '1'(#2), '1'(#3)
Moving the '1'(#1) creates 29 unique strings, when we start moving '1'(#2) we get
29 + 28 + ... + 1 = 435 unique strings, after that we are left to process '1'(#3) so we have
29 + 28 + ... + 1 = 435
28 + ... + 1 = 406
...
+ 1 = 1
435 + 406 + 378 + 351 + 325 + 300 + 276 +
253 + 231 + 210 + 190 + 171 + 153 + 136 +
120 + 105 + 091 + 078 + 066 + 055 + 045 +
036 + 028 + 021 + 015 + 010 + 006 + 003 + 001 = 4495
Let's try to solve the general case, i.e. when we have N zeros and M ones.
The overall number of permutations of a string of length (N + M) is (N + M)!
The number of duplicate arrangements of the '0's is N!
The number of duplicate arrangements of the '1's is M!
Thus the overall number of unique strings formed of N zeros and M ones is:

F(N, M) = (N + M)! / (N! * M!)

For N = 28 zeros and M = 3 ones (a 31-bit string):

F(28, 3) = 31! / (28! * 3!) = 8222838654177922817725562880000000 / (304888344611713860501504000000 * 6) = 4495

Edit:
F(N, M) = Binomial(N + M, M)
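The hand-computed counts above can be cross-checked against this formula with Python's math.comb:

```python
import math

def F(n_zeros, m_ones):
    """Number of distinct bit strings with n_zeros zeros and m_ones ones."""
    return math.comb(n_zeros + m_ones, m_ones)

print(F(29, 2))  # 465, the X = 2 count for a 31-bit word
print(F(28, 3))  # 4495, the X = 3 count for a 31-bit word
```
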
Now let's consider a real life example:
LO = 43797207; (0000010100111000100101011010111)
HI = 1562866180; (1011101001001110111001000000100)
So how do we apply our unique-permutations formula to this example? We don't know how
many '1's are located below LO and how many '1's are located above HI.
So let's count these permutations below LO and above HI.
Let's remember how we cycled '1'(#1), '1'(#2), ...
1111100000000000000000000000000 => 2080374784
1111010000000000000000000000000 => 2046820352
1111001000000000000000000000000 => 2030043136
1111000000000000000000000000001 => 2013265921
1110110000000000000000000000000 => 1979711488
1110101000000000000000000000000 => 1962934272
1110100100000000000000000000000 => 1954545664
1110100010000000000000000000001 => 1950351361
As you see, this cycling process decreases the decimal values smoothly. So we need to count the number of
cycles until we reach the HI value. But we shouldn't count these values one by one, because
the worst case can generate up to 32!/(16!*16!) = 601080390 cycles, which we would be cycling for a very long time :)
So we need to cycle chunks of '1's at once.
For our example, we would want to count the number of cycles of the transformation
1111100000000000000000000000000 => 1011101000000000000000000000000
1011101001001110111001000000100
So how many cycles does this transformation take:
1111100000000000000000000000000 => 1011101000000000000000000000000
Let's see, the transformation:
1111100000000000000000000000000 => 1110110000000000000000000000000
is equal to the following set of cycles:
01 1111100000000000000000000000000
02 1111010000000000000000000000000
...
27 1111000000000000000000000000001
28 1110110000000000000000000000000
So we need 28 cycles to transform
1111100000000000000000000000000 => 1110110000000000000000000000000
How many cycles do we need to transform
1111100000000000000000000000000 => 1101110000000000000000000000000
performing the following moves:
1110110000000000000000000000000 28 cycles
1110011000000000000000000000000 27 cycles
1110001100000000000000000000000 26 cycles
...
1110000000000000000000000000011 1 cycle
and 1 more cycle to arrive at:
1101110000000000000000000000000 1 cycle
thus receiving 28 + 27 + ... + 1 + 1 = 406 + 1 cycles.
But we have seen the value 406 before: it was the number of unique permutations,
computed for 2 '1's and 27 '0's. This means that the number of cycles while moving
11100000000000000000000000000 => 01110000000000000000000000000
is equal to moving
_1100000000000000000000000000 => _0000000000000000000000000011
plus one additional cycle
so this means that if we have M zeros and a chunk of U '1's that we want to move one step to the right,
we will need to perform the following number of cycles:
              (U - 1 + M)!
f(U, M) = 1 + --------------
              M! * (U - 1)!
Edit:
f(U, M) = 1 + Binomial(U - 1 + M, M)
Now let's come back to our real-life example:
LO = 43797207; (0000010100111000100101011010111)
HI = 1562866180; (1011101001001110111001000000100)
so what we want to do is count the number of cycles needed to perform the following
transformation (suppose N1 = 6):
1111110000000000000000000000000 => 1011101001000000000000000000000
1011101001001110111001000000100
this is equal to:
1011101001000000000000000000000 1011101001000000000000000000000
------------------------------- -------------------------------
_111110000000000000000000000000 => _011111000000000000000000000000 f(5, 25) = 118756
_____11000000000000000000000000 => _____01100000000000000000000000 f(2, 24) = 301
_______100000000000000000000000 => _______010000000000000000000000 f(1, 23) = 24
________10000000000000000000000 => ________01000000000000000000000 f(1, 22) = 23
thus resulting in 119104 'lost' cycles located above HI.
Regarding LO, the direction of cycling actually makes no difference,
so for LO we can do the cycling in reverse:
0000010100111000100101011010111 0000010100111000100101011010111
------------------------------- -------------------------------
0000000000000000000000000111___ => 0000000000000000000000001110___ f(3, 25) = 2926
00000000000000000000000011_____ => 00000000000000000000000110_____ f(2, 24) = 301
Thus resulting in 3227 'lost' cycles located below LO. This means that:
overall number of lost cycles = 119104 + 3227 = 122331
overall number of all possible cycles = F(6, 25) = 736281
N1 in range 43797207..1562866180 is equal to 736281 - 122331 = 613950
I won't provide the remaining part of the solution; it is not that hard to grasp. Good luck!
I think this is a problem in discrete mathematics.
Assume LOW is 0;
otherwise we can add a function that subtracts the count for the numbers below LOW.
From the numbers shown, I understand the longest number will consist of up to 60 binary digits at most.
alg(HIGH,k)
l=len(HIGH)
sum=0;
for(i=0;i<l;i++)
{
count=(l choose i);
nwia=numbers_with_i_above(i,HIGH);
if canreach(i,k) sum+=(count-nwia);
}
All the numbers appear, and none is listed twice.
numbers_with_i_above is trivial to implement.
canreach with numbers up to 60 is easy.
len is the length of the binary representation.
Zobgib,
The key to this problem is not how rapidly K's pattern grows, but HOW it grows. The first step is to understand (as your coach said) how binary numbers count, as this determines everything about how K is determined. Binary numbers follow a distinct pattern when you count the number of positive bits: a single, progressively repeating pattern. I am going to demonstrate in an unusual way...
Assume i is an integer value. Assume b is the number of positive bits in i
i = 1;
b = 1;
i = 2; 3;
b = 1; 2;
i = 4; 5; 6; 7;
b = 1; 2; 2; 3;
i = 8; 9; 10; 11; 12; 13; 14; 15;
b = 1; 2; 2; 3; 2; 3; 3; 4;
i = 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31;
b = 1; 2; 2; 3; 2; 3; 3; 4; 2; 3; 3; 4; 3; 4; 4; 5;
I assure you, this pattern holds to infinity, but if needed you
should be able to find or construct a proof easily.
If you look at the data above, you'll notice a distinct pattern related to 2^n. Each time you reach an integer power of 2, the pattern resets: it repeats each term of the previous pattern, and then each term of the previous pattern incremented by 1. As such, to get K, you just apply the new number to the pattern above. The key is to find a single (efficient) expression for the number of bits.
For demonstration, yet again, you can further extrapolate a new pattern off of this, because it is static and follows the same progression. Below is the original data modified with its K value (based on the recursion).
Assume i is an integer value. Assume b is the number of positive bits in i
i = 1;
b = 1;
K = 1;
i = 2; 3;
b = 1; 2;
K = 1; 2;
i = 4; 5; 6; 7;
b = 1; 2; 2; 3;
K = 1; 2; 2; 3;
i = 8; 9; 10; 11; 12; 13; 14; 15;
b = 1; 2; 2; 3; 2; 3; 3; 4;
K = 1; 2; 2; 3; 2; 3; 3; 2;
i = 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31;
b = 1; 2; 2; 3; 2; 3; 3; 4; 2; 3; 3; 4; 3; 4; 4; 5;
K = 1; 2; 2; 3; 2; 3; 3; 2; 2; 3; 3; 2; 3; 2; 2; 3;
If you notice, K follows a similar patterning, with a special condition... Every time b is a power of 2, it actually lowers the K value by 2. So, if you follow a binary progression, you should be able to easily map your K values. Since this pattern depends on powers of 2, and on finding the nearest power of 2 and starting there, I propose the following solution. Take your LOW value and find the nearest power of 2 (p) such that 2^p <= LOW. This can be done by "counting the bits" for just the lowest number. Again, once you know which exponent it is, you don't have to count the bits for any other number. You just increment through the pattern and you will have your b and hence K (which follows the same pattern).
Note: If you are particularly observant, you can use the previous b or K to determine the next. If the current i is odd, add 1 to the previous b. If the current i is divisible by 4, then you decrement b by either 1 or 2, dependent upon whether it's in the first 1/2 of the pattern or second half. And, of course, if i is a power of 2, start over at 1.
Fuzzical Logic
Pseudo-code Example (non-Optimized)
{ var LOW, HIGH
var power = 0
//Get Nearest Power Of 2
for (var i = 0 to 60) {
// Compare using bitwise AND
if (LOW bitAND (2 ^ i) = (2 ^ i)) {
if ((2 ^ i) <= LOW) {
set power to i
}
else {
// Found the Power: end the for loop
set i to 61
}
}
}
// Automatically 1 at a Power of 2
set numOfBits to 1
array numbersWithPositiveBits with 64 integers = 0
// Must create the pattern from Power of 2
set foundLOW to false
for (var j = (2^power) to HIGH) {
set lenOfPattern to (power + 1)
// Don't record until we have found the LOW value
if ((foundLOW is false) bitAND (j is equal to LOW)) {
set foundLOW to true
}
// If j is odd, increment numOfBits
if ((1 bitAND j) is equal to 1) {
increment numOfBits
}
else if (j modulus 4 == 0) {
decrement numOfBits accordingly //Figure this one out yourself, please
}
else if (j is equal to (2 ^ (power + 1))) {
// We are at the next power
increment power
// Start pattern over
set numOfBits to 1
}
// Record if appropriate
if (foundLOW is equal to true) {
increment element numOfBits in array numbersWithPositiveBits
}
}
// From here, derive your K values.
You can solve this efficiently as follows:
ret = 0;
for (i = 1; i <= 64; i++) {
if (computeK(i) != desiredK) continue;
ret += numBelow(HIGH, i) - numBelow(LO - 1, i);
}
return ret;
The function numBelow(high, numSet) computes the number of integers less than or equal to high and greater than zero that have numSet bits set. To implement numBelow(high, numSet) efficiently, you can use something like the following:
numBelow(high, numSet) {
    t = floor(lg(high));                   // position of the highest set bit
    ret = 0;
    if (numBitsSet(high) == numSet) ret++; // count high itself
    // For each set bit of high: keep the bits above it, clear it, and
    // choose the numSet bits still needed from the t positions below it.
    while (numSet >= 0 && t >= 0) {
        if (((1 << t) & high) != 0) {
            ret += nchoosek(t, numSet);    // nchoosek(n, k) = 0 when k > n
            numSet--;
        }
        t--;
    }
    return ret;
}
This is a full working example in C++17:
#include <bits/stdc++.h>
using namespace std;
#define BASE_MAX 61
typedef unsigned long long ll;
ll combination[BASE_MAX][BASE_MAX];
vector<vector<ll>> NK(4);
int count_bit(ll n) {
int ret = 0;
while (n) {
if (n & 1) {
ret++;
}
n >>= 1;
}
return ret;
}
int get_leftmost_bit_index(ll n) {
int ret = 0;
while (n > 1) {
ret++;
n >>= 1;
}
return ret;
}
void pre_calculate() {
for (int i = 0; i < BASE_MAX; i++)
combination[i][0] = 1;
for (int i = 1; i < BASE_MAX; i++) {
for (int j = 1; j < BASE_MAX; j++) {
combination[i][j] = combination[i - 1][j] + combination[i - 1][j - 1];
}
}
NK[0].push_back(1);
for (int i = 2; i < BASE_MAX; i++) {
int bitCount = count_bit(i);
if (find(NK[0].begin(), NK[0].end(), bitCount) != NK[0].end()) {
NK[1].push_back(i);
}
}
for (int i = 1; i < BASE_MAX; i++) {
int bitCount = count_bit(i);
if (find(NK[1].begin(), NK[1].end(), bitCount) != NK[1].end()) {
NK[2].push_back(i);
}
}
for (int i = 1; i < BASE_MAX; i++) {
int bitCount = count_bit(i);
if (find(NK[2].begin(), NK[2].end(), bitCount) != NK[2].end()) {
NK[3].push_back(i);
}
}
}
ll how_many_numbers_have_n_bit_in_range(ll lo, ll hi, int bit_count) {
if (bit_count == 0) {
if (lo == 0) return 1;
else return 0;
}
if (lo == hi) {
return count_bit(lo) == bit_count;
}
int lo_leftmost = get_leftmost_bit_index(lo); // 100 -> 2
int hi_leftmost = get_leftmost_bit_index(hi); // 1101 -> 3
if (lo_leftmost == hi_leftmost) {
return how_many_numbers_have_n_bit_in_range(lo & ~(1LL << lo_leftmost), hi & ~(1LL << hi_leftmost),
bit_count - 1);
}
if (lo != 0) {
return how_many_numbers_have_n_bit_in_range(0, hi, bit_count) -
how_many_numbers_have_n_bit_in_range(0, lo - 1, bit_count);
}
ll ret = combination[hi_leftmost][bit_count];
ret += how_many_numbers_have_n_bit_in_range(1LL << hi_leftmost, hi, bit_count);
return ret;
}
int main(void) {
pre_calculate();
while (true) {
ll LO, HI;
int X;
scanf("%llu%llu%d", &LO, &HI, &X);
if (LO == 0 && HI == 0 && X == 0)
break;
switch (X) {
case 0:
cout << (LO == 1) << endl;
break;
case 1: {
int ret = 0;
ll power2 = 1;
for (int i = 0; i < BASE_MAX; i++) {
power2 *= 2;
if (power2 > HI)
break;
if (power2 >= LO)
ret++;
}
cout << ret << endl;
break;
}
case 2:
case 3:
case 4: {
vector<ll> &addedBitsSizes = NK[X - 1];
ll ret = 0;
for (auto bit_count_to_added: addedBitsSizes) {
ll result = how_many_numbers_have_n_bit_in_range(LO, HI, bit_count_to_added);
ret += result;
}
cout << ret << endl;
break;
}
default:
cout << 0 << endl;
break;
}
}
return 0;
}

Find set of numbers in one collection that adds up to a number in another

For a game I'm making I have a situation where I have a list of numbers – say [7, 4, 9, 1, 15, 2] (named A for this) – and another list of numbers – say [11, 18, 14, 8, 3] (named B) – provided to me. The goal is to find all combinations of numbers in A that add up to a number in B. For example:
1 + 2 = 3
1 + 7 = 8
2 + 9 = 11
4 + 7 = 11
1 + 2 + 4 + 7 = 14
1 + 2 + 15 = 18
2 + 7 + 9 = 18
...and so on. (For purposes of this, 1 + 2 is the same as 2 + 1.)
For small lists like this, it's trivial to just brute-force the combinations, but I'm facing the possibility of seeing thousands to tens of thousands of these numbers and will be using this routine repeatedly over the lifespan of the application. Is there any kind of elegant algorithm available to accomplish this in reasonable time with 100% coverage? Failing that, are there any decent heuristics that can give me a "good enough" set of combinations in a reasonable amount of time?
I'm looking for an algorithm in pseudo-code or in any decently popular and readable language (note the "and" there....;) or even just an English description of how one would go about implementing this kind of search.
Edited to add:
Lots of good information provided so far. Thanks, guys! Summarizing for now:
The problem is NP-Complete so there is no way short of brute force to get 100% accuracy in reasonable time.
The problem can be viewed as a variant of either the subset sum or knapsack problems. There are well-known heuristics for both which may be adaptable to this problem.
Keep the ideas coming! And thanks again!
This problem is NP-Complete... It is a variation of the subset-sum problem, which is known to be NP-Complete (actually, the subset-sum problem is easier than yours).
Read here for more information:
http://en.wikipedia.org/wiki/Subset_sum_problem
As said in the comments, with numbers ranging only from 1 to 30 the problem has a fast solution. I tested it in C; for your given example it only needs milliseconds and will scale very well. The complexity is O(n+k), where n is the length of list A and k the length of list B, but with a high constant factor (there are 28,598 possible combinations summing to a value from 1 to 30).
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define WIDTH 30000
#define MAXNUMBER 30
int create_combination(unsigned char comb[WIDTH][MAXNUMBER+1],
int n,
unsigned char i,
unsigned char len,
unsigned char min,
unsigned char sum) {
unsigned char j;
if (len == 1) {
if (n+1>=WIDTH) {
printf("not enough space!\n");
exit(-1);
}
comb[n][i] = sum;
for (j=0; j<=i; j++)
comb[n+1][j] = comb[n][j];
n++;
return n;
}
for (j=min; j<=sum/len; j++) {
comb[n][i] = j;
n = create_combination(comb, n, i+1, len-1, j, sum-j);
}
return n;
}
int main(void)
{
unsigned char A[6] = { 7, 4, 9, 1, 15, 2 };
unsigned char B[5] = { 11, 18, 14, 8, 3 };
unsigned char combinations[WIDTH][MAXNUMBER+1];
unsigned char needed[WIDTH][MAXNUMBER];
unsigned char numbers[MAXNUMBER];
unsigned char sums[MAXNUMBER];
unsigned char i, j, k;
int n=0, m;
memset(combinations, 0, sizeof combinations);
memset(needed, 0, sizeof needed);
memset(numbers, 0, sizeof numbers);
memset(sums, 0, sizeof sums);
// create array with all possible combinations
// combinations[n][0] stores the sum
for (i=2; i<=MAXNUMBER; i++) {
for (j=2; j<=i; j++) {
for (k=1; k<=MAXNUMBER; k++)
combinations[n][k] = 0;
combinations[n][0] = i;
n = create_combination(combinations, n, 1, j, 1, i);
}
}
// count quantity of any summands in each combination
for (m=0; m<n; m++)
for (i=1; i<=MAXNUMBER && combinations[m][i] != 0; i++)
needed[m][combinations[m][i]-1]++;
// count quantity of any number in A
for (m=0; m<6; m++)
if (numbers[A[m]-1] < MAXNUMBER)
numbers[A[m]-1]++;
// collect possible sums from B
for (m=0; m<5; m++)
sums[B[m]-1] = 1;
for (m=0; m<n; m++) {
// check if sum is in B
if (sums[combinations[m][0]-1] == 0)
continue;
// check if enough summands from current combination are in A
for (i=0; i<MAXNUMBER; i++) {
if (numbers[i] < needed[m][i])
break;
}
if (i<MAXNUMBER)
continue;
// output result
for (j=1; j<=MAXNUMBER && combinations[m][j] != 0; j++) {
printf(" %s %d", j>1 ? "+" : "", combinations[m][j]);
}
printf(" = %d\n", combinations[m][0]);
}
return 0;
}
1 + 2 = 3
1 + 7 = 8
2 + 9 = 11
4 + 7 = 11
1 + 4 + 9 = 14
1 + 2 + 4 + 7 = 14
1 + 2 + 15 = 18
2 + 7 + 9 = 18
Sounds like a knapsack problem (see http://en.wikipedia.org/wiki/Knapsack_problem). On that page they also explain that the problem is NP-complete in general.
I think this means that if you want to find ALL valid combinations, you simply need a lot of time.
This is a small generalization of the subset sum problem. In general, it is NP-complete, but as long as all the numbers are integers and the maximum number in B is relatively small, the pseudo-polynomial solution described in the Wikipedia article I linked should do the trick.
