I need help with problem C. Although I have a solution, I want to ask: if we want to minimize the sweets, then
after assigning ans += b[i] * M sweets for every boy, can we say that this is already the minimum number?
But after that we are assigning again: ans += g[i] - b[N].
Here b[N] is the largest value in the group of boys.
Please help!
#include <bits/stdc++.h>
using namespace std;
const int maxn = 1e5 + 10;
int N, M;
long long g[maxn], b[maxn];
int main() {
scanf("%d%d", &N, &M);
for(int i = 1; i <= N; i ++)
scanf("%lld", &b[i]);
for(int i = 1; i <= M; i ++) {
scanf("%lld", &g[i]);
}
sort(b + 1, b + 1 + N);
sort(g + 1, g + 1 + M);
if(b[N] > g[1]) printf("-1\n");
else {
long long ans = 0;
for(int i = N; i >= 1; i --)
ans += b[i] * M;
for(int i = 2; i <= M; i ++)
ans += g[i] - b[N];
if(g[1] != b[N]) ans += g[1] - b[N - 1];
printf("%lld\n", ans);
}
return 0;
}
First we sort the boys by their minimum given and the girls by their maximum received sweets.
Then there are 3 steps:
1 - Each boy gives his minimum sweets to each girl.
for(int i = N; i >= 1; i --)
ans += b[i] * M;
boys' conditions are fulfilled.
2 - Each girl except the first one gets her maximum sweets from the last boy (who has the largest minimum; this way we minimize the sweets given in this step). But he has already given his minimum to all of the girls in the previous step, so that's why he only gives the remaining g[i] - b[N] sweets here.
Notice that he cannot also give the maximum to the first girl, because he needs to give exactly his minimum to somebody.
for(int i = 2; i <= M; i ++)
ans += g[i] - b[N];
girls' conditions (except the first one) are fulfilled.
3 - Somebody still needs to give the first girl her maximum sweets. The optimum is for the boy with the 2nd largest minimum to give it to her. But this is only needed if the first girl's maximum is greater than the last boy's minimum; otherwise her condition was already fulfilled.
if(g[1] != b[N])
ans += g[1] - b[N - 1];
The first girl's condition is fulfilled.
Maybe it's more understandable in this format:
for(int i = 1; i < N; i ++)
ans += b[i] * M;
for(int i = 2; i <= M; i ++)
ans += g[i];
ans += b[N];
if(g[1] != b[N])
ans += g[1] - b[N - 1];
... however, there are a few corner cases which are not handled in this algorithm: what if there is only 1 boy and/or 1 girl? (But that's only a few extra if/else branches.)
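To see the three steps with concrete numbers (a small hypothetical input of my own, not from the problem statement): N = 2 boys with minimums b = {1, 2} and M = 2 girls with maximums g = {3, 4}, both already sorted. Since b[N] = 2 <= g[1] = 3, a solution exists. Step 1: ans = 1*2 + 2*2 = 6. Step 2: ans += g[2] - b[N] = 4 - 2, so ans = 8. Step 3: g[1] = 3 != b[N] = 2, so ans += g[1] - b[N-1] = 3 - 1, giving ans = 10. And 10 is indeed achievable: the last boy gives 2 and 4, the first boy gives 3 and 1.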
I was wondering how I would work out the time complexity (T(n)) of a piece of code, for example the one below, in terms of the number of operations.
for( int i = n; i > 0; i /= 2 ) {
for( int j = 1; j < n; j *= 2 ) {
for( int k = 0; k < n; k += 2 ) {
... // constant number of operations
}
}
}
I'm sure it's simple, but this concept wasn't taught very well by my lecturer and I really want to know how to work out the time complexity!
Thank you in advance!
To approach this, one method is to break down the complexity of your three loops individually.
A key observation we can make is that:
(P): The number of steps in each of the loops does not depend on the value of the "index" of its parent loop.
Let's call
f(n) the number of operations aggregated in the outer loop (1)
g(n) in the intermediate inner loop (2)
h(n) in the innermost loop (3).
for( int i = n; i > 0; i /= 2 ) { // (1): f(n)
for( int j = 1; j < n; j *= 2 ) { // (2): g(n)
for( int k = 0; k < n; k += 2 ) { // (3): h(n)
// constant number of operations // => (P)
}
}
}
Loop (1)
Number of steps
i gets the values n, n/2, n/4, ... (integer division) until n/2^k = 0, which happens as soon as 2^k > n, at which point you exit the loop.
Another way to say it is that you have step 1 (i = n), step 2 (i = n/2), step 3 (i = n/4), ..., step k (i = n/2^(k-1)), then you exit the loop. These are k steps.
Now what is the value of k? The loop exits as soon as 2^k > n, so k is the smallest such integer, i.e. 2^(k-1) <= n < 2^k. This makes k = INT(log2(n)) + 1, or loosely speaking k = log2(n).
Cost of each step
Now how many operations do you have for each individual step?
At step i, it is g(i) = g(n) according to the notations we chose and the property (P).
Loop (2)
Number of steps
You have step (1) (j = 1 = 2^0), step (2) (j = 2^1), step (3) (j = 2^2), etc., until you reach step (p) (j = 2^(p-1)), where p is the smallest integer such that 2^p >= n, or loosely speaking p = log2(n).
Cost of each step
The cost of step j is h(j) = h(n) according to the notations we chose and the property (P).
Loop (3)
Number of steps
Again, let's count the steps: step 1 (k = 0), step 2 (k = 2), step 3 (k = 4), ..., until k = n - 1 or k = n - 2. This amounts to n / 2 steps (up to rounding).
Cost of each step
Because of (P), it is constant. Let's call this constant K.
All loops altogether
The number of aggregated operations is
T(n) = f(n) = sum(i = 0, i < log2(n), g(i))
= sum(i = 0, i < log2(n), g(n))
= log2(n).g(n)
= log2(n).sum(j = 0, j < log2(n), h(j))
= log2(n).log2(n).h(n)
= log2(n).log2(n).(n/2).K
So T(n) = (K/2).(log2(n))^2.n
Write a method, add a counter, return the result:
int nIterator (int n) {
int counter = 0;
for( int i = n; i > 0; i /= 2 ) {
for( int j = 1; j < n; j *= 2 ) {
for( int k = 0; k < n; k += 2 ) {
++counter;
}
}
}
return counter;
}
Run it for rapidly increasing n and record the results in a readable form:
int old = 0;
for (int i = 0, j=1; i < 18; ++i, j*=2) {
int res = nIterator (j);
double quote = (old == 0) ? 0.0 : (res*1.0)/old;
System.out.printf ("%6d %10d %3f\n", j, res, quote);
old=res;
}
Result:
1 0 0,000000
2 2 0,000000
4 12 6,000000
8 48 4,000000
16 160 3,333333
32 480 3,000000
64 1344 2,800000
128 3584 2,666667
256 9216 2,571429
512 23040 2,500000
1024 56320 2,444444
2048 135168 2,400000
4096 319488 2,363636
8192 745472 2,333333
16384 1720320 2,307692
32768 3932160 2,285714
65536 8912896 2,266667
131072 20054016 2,250000
So while n increases by a factor of 2, the counter at first grows by more than a factor of 2², but the growth factor then drops rapidly towards something not much higher than 2. This should help you find the way.
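For reference, the counter above also has an exact closed form (my own derivation from the loop bounds, not something the measurement itself proves): the outer loop runs floor(log2 n) + 1 times, the middle loop runs ceil(log2 n) times, and the innermost loop runs ceil(n / 2) times, so counter(n) = (floor(log2 n) + 1) * ceil(log2 n) * ceil(n / 2). For example, n = 1024 gives 11 * 10 * 512 = 56320 and n = 131072 gives 18 * 17 * 65536 = 20054016, matching the table, and the whole thing is Theta(n * (log2 n)^2).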
I have the following pseudo-code which I want to determine its run-time T(n).
Can someone please give me the steps I should follow?
Here is the code:
i := 1;
while (i <= n)
    j := i;
    x := x + A[i];
    while (j > 0)
        y := x / (2*j);
        j := j / 2;   // Assume here that this returns the floor of the quotient
    i := 2 * i;
return y;
I think it's O(log log N) if my calculation is not wrong.
First, we omit the lines that do not affect the number of loop iterations, and assume that random access to the array is O(1).
i := 1;
while (i <= n)
    j := i;
    while (j > 0)
        j := j / 2;
    i := 2 * i;
return y;
Then we assume that all the operations in one line are done in O(1); we add them up and get the result.
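To make the "add them up" step concrete (my own working, under the same assumptions): the outer loop runs for i = 2^0, 2^1, ..., 2^floor(log2 n), i.e. about log2(n) + 1 passes; on the pass where i = 2^t, the inner loop halves j from 2^t down to 1, i.e. t + 1 iterations. So the total is roughly the sum of (t + 1) for t = 0 .. floor(log2 n), which is Theta((log n)^2).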
Besides the math analysis, we can also check it computationally: we run the fragment for several sizes and see how the time grows (I also omitted the operations that do not affect the loop counts).
#include <iostream>
using namespace std;
int main(){
long long i, j, n;
double cost;
clock_t start, finish;
i = 1;
n = 1000000000000;
start = clock();
while (i <= n) {
j = i;
while (j > 0) {
j = j / 2;
}
i = 2 * i;
}
finish = clock();
cout << (double)(finish - start)/CLOCKS_PER_SEC << endl;
}
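A variant of the same idea, as a sketch of my own rather than part of the answer above: count the iterations instead of timing them, for several sizes, so the growth rate is visible directly.

#include <iostream>
#include <cmath>
using namespace std;

// Count how many times the innermost statement runs for a given n.
long long countIterations(long long n) {
    long long counter = 0;
    for (long long i = 1; i <= n; i *= 2)      // outer while (i <= n), i = 2 * i
        for (long long j = i; j > 0; j /= 2)   // inner while (j > 0), j = j / 2
            ++counter;
    return counter;
}

int main() {
    // Print the count next to (log2 n)^2 for comparison; they grow at the
    // same rate, up to a constant factor.
    for (long long n = 2; n <= (1LL << 40); n *= 1024) {
        double lg = log2((double) n);
        cout << n << "  " << countIterations(n) << "  " << lg * lg << endl;
    }
    return 0;
}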
I was wondering if anyone could explain to me the semantics of analyzing programs. I get how to do simple ones, but there are some more complicated ones I am not sure how to do. For example here is a question from my book. We are given 6 little segments of code and told to analyze them:
(1). sum = 0;
for( i = 0; i < n; ++i )
++sum;
(2). sum = 0;
for( i = 0; i < n; ++i )
for( j = 0; j < n; ++j )
++sum;
(3). sum = 0;
for( i = 0; i < n; ++i )
for( j = 0; j < n * n; ++j )
++sum;
(4). sum = 0;
for( i = 0; i < n; ++i )
for( j = 0; j < i; ++j )
++sum;
(5). sum = 0;
for( i = 0; i < n; ++i )
for( j = 0; j < i * i; ++j )
for( k = 0; k < j; ++k )
++sum;
(6). sum = 0;
for( i = 1; i < n; ++i )
for( j = 1; j < i * i; ++j )
if( j % i == 0 )
for( k = 0; k < j; ++k )
++sum;
I understand 1, 2, and 4. Those ones are easy. The ones I don't get are 3, 5, and 6.
(1) runs n times, so that one is in Big-Oh(n). (2) has two for loops which both run n times each, so this one is Big-Oh(n^2). (4) is something I have seen before: the inner loop runs as many times as the value of i, so if i = 1 the loop runs once, if i = 2 it runs twice, and so on, giving the pattern 1 + 2 + 3 + ... + n = n(n + 1) / 2, which means the whole thing is in Big-Oh(n^2). I am unsure how to go about (3) with the n * n in the condition. That is also the reason I am not sure how to go about (5) with the i * i in there. As for (6), not only do we have the i * i, but there is also an if statement that may or may not run. What do I do with that? Can anyone help explain how to do those ones? Thanks.
UPDATE I had an idea about 3. The outer for loop in that one runs n times, the inner for loop runs n^2 times. So for that one, would we have n * n^2 which is n^3? So would that one be in Big-Oh(n^3)?
You figured the third one out. The same reasoning applies to all of them. Actually, your aim is to find the final value of sum in terms of n.
For the fifth one, that is to find the sum: ∑_{i=0}^{n} ( ∑_{j=0}^{i^2} j ).
The inner summation is just (i^2 * (i^2 + 1)) / 2, which follows from the simple summation formula.
Then you have ∑_{i=0}^{n} (i^2 * (i^2 + 1)) / 2, which is equal to [ ∑_{i=0}^{n} i^4 + ∑_{i=0}^{n} i^2 ] / 2.
This is O(N^5), looking at the dominating term ∑_{i=0}^{n} i^4, which you can think of as an integral: you just increase the exponent and get N^5. The sixth one is left as an exercise; just imagine what the sum will be in the end.
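If it helps to see (5) concretely, here is a small empirical check of my own (not from the book): it runs fragment (5) for a few values of n and prints sum / n^5, which should approach a constant (about 1/10) as n grows, consistent with O(N^5).

#include <cstdio>

int main() {
    for (long long n = 10; n <= 80; n *= 2) {
        long long sum = 0;
        // Fragment (5) from the question.
        for (long long i = 0; i < n; ++i)
            for (long long j = 0; j < i * i; ++j)
                for (long long k = 0; k < j; ++k)
                    ++sum;
        double n5 = (double) n * n * n * n * n;
        printf("n = %2lld  sum = %12lld  sum/n^5 = %f\n", n, sum, sum / n5);
    }
    return 0;
}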
So I just got back from the ACM programming competition and did pretty well, but there was one problem that not one team got.
The Problem.
Start with an integer N0 which is greater than 0. Let N1 be the number of ones in the binary representation of N0. So, if N0 = 27, N1 = 4. For all i > 0, let Ni be the number of ones in the binary representation of Ni-1. This sequence will always converge to one. For any starting number N0, let K be the minimum value of i >= 0 for which Ni = 1. For example, if N0 = 31, then N1 = 5, N2 = 2, N3 = 1, so K = 3.
Given a range of consecutive numbers and a value of X, how many numbers in the range have a K value equal to X?
Input
There will be several test cases in the input. Each test case will consist of three integers on a single line:
LO HI X
Where LO and HI (1 <= LO <= HI <= 10^18) are the lower and upper limits of a range of integers, and X (0 <= X <= 10) is the target value for K. The input will end with a line of three 0s.
Output
For each test case output a single integer, representing the number of integers in the range from LO to HI (inclusive) which have a K value equal to X in the input. Print each Integer on its own line with no spaces. Do not print any blank lines between answers.
Sample Input
31 31 3
31 31 1
27 31 1
27 31 2
1023 1025 1
1023 1025 2
0 0 0
Sample Output
1
0
0
3
1
1
If you guys want, I can include our answer or our approach. Counting over a small range is easy, but I will give you a hint first: your program needs to run in seconds, not minutes. We had a correct solution, but not an algorithm efficient enough for a range similar to
48238 10^18 9
Anyway, good luck, and if the community likes these, we had some more we could not solve that could be good brain teasers for you guys. The competition allows you to use Python, C++, or Java; all three are acceptable in an answer.
As a hint, my coach said to think of how binary numbers count rather than checking every bit. I think that gets us a lot closer.
I think a key is first understanding the pattern of K values and how rapidly it grows. Basically, you have:
K(1) = 0
K(X) = K(bitcount(X))+1 for X > 1
So, finding the smallest X value for each K, we see
K(1) = 0
K(2) = 1
K(3) = 2
K(7) = 3
K(127) = 4
K(170141183460469231731687303715884105727) = 5
So for an example like 48238 10^18 9 the answer is trivially 0. K=0 only for 1, and K=1 only for powers of 2, so in the range of interest, we'll pretty much only see K values of 2, 3 or 4, and never see K >= 5
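As a quick sanity check of that recurrence and the list above, here is a small C++ sketch of my own (not part of this answer); __builtin_popcountll is a GCC/Clang builtin:

#include <cstdio>

// K(1) = 0, K(X) = K(bitcount(X)) + 1 for X > 1.
int K(unsigned long long x) {
    int k = 0;
    while (x > 1) {
        x = __builtin_popcountll(x); // number of set bits in x
        ++k;
    }
    return k;
}

int main() {
    // Smallest values listed above: expect K = 0, 1, 2, 3, 4.
    unsigned long long xs[] = {1ULL, 2ULL, 3ULL, 7ULL, 127ULL};
    for (int i = 0; i < 5; ++i)
        printf("K(%llu) = %d\n", xs[i], K(xs[i]));
    return 0;
}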
edit
Ok, so we're looking for an algorithm to count the number of values with K=2,3,4 in a range of values LO..HI without iterating over the entire range. So the first step is to find the number of values in the range with bitcount(x) == i for i = 1..59 (since we only care about values up to 10^18 and 10^18 < 2^60). So break down the range lo..hi into subranges that are a power of 2 in size and differ only in their lower n bits -- ranges of the form x*(2^n)..(x+1)*(2^n)-1. We can break the arbitrary lo..hi range down into such subranges easily. For each such subrange there will be choose(n, i) values with i+bitcount(x) set bits.
So we just add all the subranges together to get a vector of counts for 1..59, which we then iterate over, adding together those elements with the same K value to get our answer.
edit (fixed again to be C89 compatible and work for lo=1/k=0)
Here's a C program to do what I previously described:
#include <stdio.h>
#include <string.h>
#include <assert.h>
int bitcount(long long x) {
int rv = 0;
while(x) { rv++; x &= x-1; }
return rv; }
long long choose(long long m, long long n) {
long long rv = 1;
int i;
for (i = 0; i < n; i++) {
rv *= m-i;
rv /= i+1; }
return rv; }
void bitcounts_p2range(long long *counts, long long base, int l2range) {
int i;
assert((base & ((1LL << l2range) - 1)) == 0);
counts += bitcount(base);
for (i = 0; i <= l2range; i++)
counts[i] += choose(l2range, i); }
void bitcounts_range(long long *counts, long long lo, long long hi) {
int l2range = 0;
while (lo + (1LL << l2range) - 1 <= hi) {
if (lo & (1LL << l2range)) {
bitcounts_p2range(counts, lo, l2range);
lo += 1LL << l2range; }
l2range++; }
while (l2range >= 0) {
if (lo + (1LL << l2range) - 1 <= hi) {
bitcounts_p2range(counts, lo, l2range);
lo += 1LL << l2range; }
l2range--; }
assert(lo == hi+1); }
int K(int x) {
int rv = 0;
while(x > 1) {
x = bitcount(x);
rv++; }
return rv; }
int main() {
long long counts[64];
long long lo, hi, total;
int i, k;
while (scanf("%lld%lld%d", &lo, &hi, &k) == 3) {
if (lo < 1 || lo > hi || k < 0) break; /* the terminating 0 0 0 line has lo < 1; k == 0 itself is a valid query */
total = 0;
if (lo == 1) {
lo++;
if (k == 0) total++; }
memset(counts, 0, sizeof(counts));
bitcounts_range(counts, lo, hi);
for (i = 1; i < 64; i++)
if (K(i)+1 == k)
total += counts[i];
printf("%lld\n", total); }
return 0; }
which runs just fine for values up to 2^63-1 (LLONG_MAX).
For 48238 1000000000000000000 3 it gives 513162479025364957, which certainly seems plausible
edit
giving the inputs of
48238 1000000000000000000 1
48238 1000000000000000000 2
48238 1000000000000000000 3
48238 1000000000000000000 4
gives outputs of
44
87878254941659920
513162479025364957
398959266032926842
Those add up to 999999999999951763 which is correct. The value for k=1 is correct (there are 44 powers of two in that range 2^16 up to 2^59). So while I'm not sure the other 3 values are correct, they're certainly plausible.
The idea behind this answer can help you develop a very fast solution. For ranges 0..2^N, the complexity of such an algorithm would be O(N) in the worst case (assuming that the complexity of long arithmetic is O(1)). If programmed correctly, it should easily handle N = 1000000 in a matter of milliseconds.
Imagine we have the following values:
LO = 0; (0000000000000000000000000000000)
HI = 2147483647; (1111111111111111111111111111111)
The lowest possible N1 in range LO..HI is 0
The highest possible N1 in range LO..HI is 31
So the computation of the N2..NN part is done for only one of 32 values (i.e. 0..31),
which can be done simply, even without a computer.
Now let's compute the count of values with N1 = X in the range LO..HI.
When X = 0 we have count(N1=X) = 1; it is the following value:
1 0000000000000000000000000000000
When X = 1 we have count(N1=X) = 31; these are the following values:
01 1000000000000000000000000000000
02 0100000000000000000000000000000
03 0010000000000000000000000000000
...
30 0000000000000000000000000000010
31 0000000000000000000000000000001
When we have X = 2 we have the following pattern:
1100000000000000000000000000000
How many unique strings can be formed with 29 '0's and 2 '1's?
Imagine the rightmost '1'(#1) is cycling from left to right, we get the following picture:
01 1100000000000000000000000000000
02 1010000000000000000000000000000
03 1001000000000000000000000000000
...
30 1000000000000000000000000000001
Now we've got 30 unique strings while moving '1'(#1) from left to right, and it is now impossible to
create a new unique string by moving '1'(#1) in any direction. This means we should move '1'(#2) to the right;
let's also reset the position of '1'(#1) as far left as possible while keeping the string unique. We get:
01 0110000000000000000000000000000
now we do the cycling of '1'(#1) once again
02 0101000000000000000000000000000
03 0100100000000000000000000000000
...
29 0100000000000000000000000000001
Now we've got 29 unique strings; continuing this whole operation 28 more times, we get the following expression
count(N1=2) = 30 + 29 + 28 + ... + 1 = 465
When we have X = 3 the picture remains similar but we are moving '1'(#1), '1'(#2), '1'(#3)
Moving '1'(#1) creates 29 unique strings; when we start moving '1'(#2) as well, we get
29 + 28 + ... + 1 = 435 unique strings; after that we are left to process '1'(#3), so we have
29 + 28 + ... + 1 = 435
28 + ... + 1 = 406
...
+ 1 = 1
435 + 406 + 378 + 351 + 325 + 300 + 276 +
253 + 231 + 210 + 190 + 171 + 153 + 136 +
120 + 105 + 091 + 078 + 066 + 055 + 045 +
036 + 028 + 021 + 015 + 010 + 006 + 003 + 001 = 4495
Let's try to solve the general case, i.e. when we have N zeros and M ones.
The overall number of permutations of a string of length (N + M) is (N + M)!.
The number of duplicates caused by the identical '0' characters is N!,
and the number of duplicates caused by the identical '1' characters is M!,
thus the overall number of unique strings formed of N zeros and M ones is

F(N, M) = (N + M)! / (N! * M!)

For the example above (a 31-character string with 28 zeros and 3 ones) this gives F(28, 3) = 31! / (28! * 3!) = 4495.
Edit:
F(N, M) = Binomial(N + M, M)
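As a quick cross-check of this formula against the counts worked out by hand above (465 for 2 ones and 4495 for 3 ones in a 31-character string), here is a small sketch of my own, not part of the original answer:

#include <cstdio>

// Binomial(n, k) for small arguments; each intermediate division is exact.
unsigned long long binomial(int n, int k) {
    unsigned long long r = 1;
    for (int i = 0; i < k; ++i)
        r = r * (n - i) / (i + 1);
    return r;
}

int main() {
    printf("F(29, 2) = Binomial(31, 2) = %llu\n", binomial(31, 2)); // 465
    printf("F(28, 3) = Binomial(31, 3) = %llu\n", binomial(31, 3)); // 4495
    return 0;
}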
Now let's consider a real life example:
LO = 43797207; (0000010100111000100101011010111)
HI = 1562866180; (1011101001001110111001000000100)
So how do we apply our unique-permutations formula to this example, given that we don't know
how many of these permutations are located below LO and how many are located above HI?
Let's count the permutations below LO and above HI.
Let's remember how we cycled '1'(#1), '1'(#2), ...
1111100000000000000000000000000 => 2080374784
1111010000000000000000000000000 => 2046820352
1111001000000000000000000000000 => 2030043136
1111000000000000000000000000001 => 2013265921
1110110000000000000000000000000 => 1979711488
1110101000000000000000000000000 => 1962934272
1110100100000000000000000000000 => 1954545664
1110100010000000000000000000001 => 1950351361
As you see, this cycling process decreases the decimal values smoothly. So we need to count the number of
cycles until we reach the HI value. But we shouldn't be counting these values one by one, because
the worst case can generate up to 32!/(16!*16!) = 601080390 cycles, which would take very long to cycle through :)
So we need to cycle chunks of '1's at once.
In our example we would want to count the number of cycles of the transformation
1111100000000000000000000000000 => 1011101000000000000000000000000
1011101001001110111001000000100
So how many cycles does the transformation
1111100000000000000000000000000 => 1011101000000000000000000000000
take?
Let's see: the transformation
1111100000000000000000000000000 => 1110110000000000000000000000000
is equal to the following set of cycles:
01 1111100000000000000000000000000
02 1111010000000000000000000000000
...
27 1111000000000000000000000000001
28 1110110000000000000000000000000
So we need 28 cycles to transform
1111100000000000000000000000000 => 1110110000000000000000000000000
How many cycles do we need to transform
1111100000000000000000000000000 => 1101110000000000000000000000000
Performing the following moves, we need:
1110110000000000000000000000000 28 cycles
1110011000000000000000000000000 27 cycles
1110001100000000000000000000000 26 cycles
...
1110000000000000000000000000011 1 cycle
and 1 cycle for receiving:
1101110000000000000000000000000 1 cycle
thus we get 28 + 27 + ... + 1 + 1 = 406 + 1 cycles,
but we have seen this value before: it was the number of unique permutations
computed for 2 '1's and 27 '0's. This means that the number of cycles while moving
11100000000000000000000000000 => 01110000000000000000000000000
is equal to moving
_1100000000000000000000000000 => _0000000000000000000000000011
plus one additional cycle
so this means that if we have M zeros and N ones and want to move a chunk of U '1's one position to the right, we will need to
perform the following number of cycles:
f(U, M) = 1 + (U - 1 + M)! / (M! * (U - 1)!)
Edit:
f(U, M) = 1 + Binomial(U - 1 + M, M)
Now let's come back to our real life example:
LO = 43797207; (0000010100111000100101011010111)
HI = 1562866180; (1011101001001110111001000000100)
So what we want to do is count the number of cycles needed to perform the following
transformations (suppose N1 = 6):
1111110000000000000000000000000 => 1011101001000000000000000000000
1011101001001110111001000000100
this is equal to:
1011101001000000000000000000000 1011101001000000000000000000000
------------------------------- -------------------------------
_111110000000000000000000000000 => _011111000000000000000000000000 f(5, 25) = 118756
_____11000000000000000000000000 => _____01100000000000000000000000 f(2, 24) = 301
_______100000000000000000000000 => _______010000000000000000000000 f(1, 23) = 24
________10000000000000000000000 => ________01000000000000000000000 f(1, 22) = 23
thus resulting in 119104 'lost' cycles which are located above HI
Regarding LO, there is actually no difference in which direction we are cycling,
so for computing LO we can do the cycling in reverse:
0000010100111000100101011010111 0000010100111000100101011010111
------------------------------- -------------------------------
0000000000000000000000000111___ => 0000000000000000000000001110___ f(3, 25) = 2926
00000000000000000000000011_____ => 00000000000000000000000110_____ f(2, 24) = 301
Thus we get 3227 'lost' cycles which are located below LO. This means that
the overall number of lost cycles = 119104 + 3227 = 122331
the overall number of all possible cycles = F(6, 25) = 736281
count(N1=6) in the range 43797207..1562866180 is equal to 736281 - 122331 = 613950
I won't provide the remaining part of the solution; it is not that hard to grasp. Good luck!
I think it's a problem in discrete mathematics.
I assume LOW is 0;
otherwise we can add a function for summing the numbers below LOW.
From the numbers shown, I understand the longest number will consist of at most 60 binary digits.
alg(HIGH,k)
l=len(HIGH)
sum=0;
for(i=0;i<l;i++)
{
count=(l choose i);
nwia=numbers_with_i_above(i,HIGH);
if canreach(i,k) sum+=(count-nwia);
}
All the numbers appear,
none is listed twice.
numbers_with_i_above is trivial,
canreach with numbers up to 60 is easy,
and len is the length of the binary representation.
Zobgib,
The key to this problem is not understanding how rapidly K grows, but HOW it grows. The first step in this is to understand (as your coach said) how binary numbers count, as this determines everything about how K is determined. Binary numbers follow a distinct pattern when counting the number of positive bits. It's a single, progressive, repetitive pattern. I am going to demonstrate in an unusual way...
Assume i is an integer value. Assume b is the number of positive bits in i
i = 1;
b = 1;
i = 2; 3;
b = 1; 2;
i = 4; 5; 6; 7;
b = 1; 2; 2; 3;
i = 8; 9; 10; 11; 12; 13; 14; 15;
b = 1; 2; 2; 3; 2; 3; 3; 4;
i = 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31;
b = 1; 2; 2; 3; 2; 3; 3; 4; 2; 3; 3; 4; 3; 4; 4; 5;
I assure you, this pattern holds to infinity, but if needed you
should be able to find or construct a proof easily.
If you look at the data above, you'll notice a distinct pattern related to 2^n. Each time you reach an integer power of 2, the pattern resets by including each term of the previous pattern, and then each term of the previous pattern incremented by 1. As such, to get K, you just apply the new number to the pattern above. The key is to find a single (efficient) expression for your number of bits.
For demonstration, yet again, you can further extrapolate a new pattern off of this, because it is static and follows the same progression. Below is the original data modified with its K value (based on the recursion).
Assume i is an integer value. Assume b is the number of positive bits in i
i = 1;
b = 1;
K = 0;
i = 2; 3;
b = 1; 2;
K = 1; 2;
i = 4; 5; 6; 7;
b = 1; 2; 2; 3;
K = 1; 2; 2; 3;
i = 8; 9; 10; 11; 12; 13; 14; 15;
b = 1; 2; 2; 3; 2; 3; 3; 4;
K = 1; 2; 2; 3; 2; 3; 3; 2;
i = 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31;
b = 1; 2; 2; 3; 2; 3; 3; 4; 2; 3; 3; 4; 3; 4; 4; 5;
K = 1; 2; 2; 3; 2; 3; 3; 2; 2; 3; 3; 2; 3; 2; 2; 3;
If you notice, K follows a similar patterning, with a special condition... every time b is a power of 2, it actually lowers the K value by 2. So, if you follow a binary progression, you should be able to easily map your K values. Since this pattern is dependent on powers of 2, and on finding the nearest power of 2 and starting there, I propose the following solution. Take your LOW value and find the nearest power of 2 (p) such that 2^p < LOW. This can be done by "counting the bits" for just the lowest number. Again, once you know which exponent it is, you don't have to count the bits for any other number. You just increment through the pattern and you will have your b and hence K (which follows the same pattern).
Note: If you are particularly observant, you can use the previous b or K to determine the next. If the current i is odd, add 1 to the previous b. If the current i is divisible by 4, then you decrement b by either 1 or 2, dependent upon whether it's in the first 1/2 of the pattern or second half. And, of course, if i is a power of 2, start over at 1.
Fuzzical Logic
Pseudo-code Example (non-Optimized)
{ var LOW, HIGH
var power = 0
//Get Nearest Power Of 2
for (var i = 0 to 60) {
// Compare using bitwise AND
if (LOW bitAND (2 ^ i) = (2 ^ i)) {
if ((2 ^ i) <= LOW) {
set power to i
}
else {
// Found the Power: end the for loop
set i to 61
}
}
}
// Automatically 1 at a Power of 2
set numOfBits to 1
array numbersWithPositiveBits with 64 integers = 0
// Must create the pattern from Power of 2
set foundLOW to false
for (var j = (2^power) to HIGH) {
set lenOfPatten to (power + 1)
// Don't record until we have found the LOW value
if ((foundLOW is false) bitAND (j is equal to LOW)) {
set foundLOW to true
}
// If j is odd, increment numOfBits
if ((1 bitAND j) is equal to 1) {
increment numOfBits
}
else if (j modulus 4 == 0) {
decrement numOfBits accordingly //Figure this one out yourself, please
}
else if ((j - (2^power)) == (power + 1)) {
// We are at the next power
increment power
// Start pattern over
set numOfBits to 1
}
// Record if appropriate
if (foundLOW is equal to true) {
increment element numOfBits in array numbersWithPositiveBits
}
}
// From here, derive your K values.
You can solve this efficiently as follows:
ret = 0;
for (i = 1; i <= 64; i++) {
if (computeK(i) != desiredK) continue;
ret += numBelow(HIGH, i) - numBelow(LO - 1, i);
}
return ret;
The function numBelow(high, numSet) computes the number of integers less than or equal to high and greater than zero that have numSet bits set. To implement numBelow(high, numSet) efficiently, you can use something like the following:
numBelow(high, numSet) {
t = floor(lg(high));
ret = 0;
if (numBitsSet(high) == numSet) ret++;
while (numSet > 0 && t > 0) {
ret += nchoosek(t - 1, numSet);
numSet--;
while (--t > 0 && (((1 << t) & high) == 0));
}
return ret;
}
This is a full working example in C++17:
#include <bits/stdc++.h>
using namespace std;
#define BASE_MAX 61
typedef unsigned long long ll;
ll combination[BASE_MAX][BASE_MAX];
vector<vector<ll>> NK(4);
int count_bit(ll n) {
int ret = 0;
while (n) {
if (n & 1) {
ret++;
}
n >>= 1;
}
return ret;
}
int get_leftmost_bit_index(ll n) {
int ret = 0;
while (n > 1) {
ret++;
n >>= 1;
}
return ret;
}
void pre_calculate() {
for (int i = 0; i < BASE_MAX; i++)
combination[i][0] = 1;
for (int i = 1; i < BASE_MAX; i++) {
for (int j = 1; j < BASE_MAX; j++) {
combination[i][j] = combination[i - 1][j] + combination[i - 1][j - 1];
}
}
NK[0].push_back(1);
for (int i = 2; i < BASE_MAX; i++) {
int bitCount = count_bit(i);
if (find(NK[0].begin(), NK[0].end(), bitCount) != NK[0].end()) {
NK[1].push_back(i);
}
}
for (int i = 1; i < BASE_MAX; i++) {
int bitCount = count_bit(i);
if (find(NK[1].begin(), NK[1].end(), bitCount) != NK[1].end()) {
NK[2].push_back(i);
}
}
for (int i = 1; i < BASE_MAX; i++) {
int bitCount = count_bit(i);
if (find(NK[2].begin(), NK[2].end(), bitCount) != NK[2].end()) {
NK[3].push_back(i);
}
}
}
ll how_many_numbers_have_n_bit_in_range(ll lo, ll hi, int bit_count) {
if (bit_count == 0) {
if (lo == 0) return 1;
else return 0;
}
if (lo == hi) {
return count_bit(lo) == bit_count;
}
int lo_leftmost = get_leftmost_bit_index(lo); // 100 -> 2
int hi_leftmost = get_leftmost_bit_index(hi); // 1101 -> 3
if (lo_leftmost == hi_leftmost) {
return how_many_numbers_have_n_bit_in_range(lo & ~(1LL << lo_leftmost), hi & ~(1LL << hi_leftmost),
bit_count - 1);
}
if (lo != 0) {
return how_many_numbers_have_n_bit_in_range(0, hi, bit_count) -
how_many_numbers_have_n_bit_in_range(0, lo - 1, bit_count);
}
ll ret = combination[hi_leftmost][bit_count];
ret += how_many_numbers_have_n_bit_in_range(1LL << hi_leftmost, hi, bit_count);
return ret;
}
int main(void) {
pre_calculate();
while (true) {
ll LO, HI;
int X;
scanf("%lld%lld%d", &LO, &HI, &X);
if (LO == 0 && HI == 0 && X == 0)
break;
switch (X) {
case 0:
cout << (LO == 1) << endl;
break;
case 1: {
int ret = 0;
ll power2 = 1;
for (int i = 0; i < BASE_MAX; i++) {
power2 *= 2;
if (power2 > HI)
break;
if (power2 >= LO)
ret++;
}
cout << ret << endl;
break;
}
case 2:
case 3:
case 4: {
vector<ll> &addedBitsSizes = NK[X - 1];
ll ret = 0;
for (auto bit_count_to_added: addedBitsSizes) {
ll result = how_many_numbers_have_n_bit_in_range(LO, HI, bit_count_to_added);
ret += result;
}
cout << ret << endl;
break;
}
default:
cout << 0 << endl;
break;
}
}
return 0;
}