Double Squares: counting numbers which are sums of two perfect squares - algorithm

Source: Facebook Hacker Cup Qualification Round 2011
A double-square number is an integer X which can be expressed as the sum of two perfect squares. For example, 10 is a double-square because 10 = 3² + 1². Given X, how can we determine the number of ways in which it can be written as the sum of two squares? For example, 10 can only be written as 3² + 1² (we don't count 1² + 3² as being different). On the other hand, 25 can be written as 5² + 0² or as 4² + 3².
You need to solve this problem for 0 ≤ X ≤ 2,147,483,647.
Examples:
10 => 1
25 => 2
3 => 0
0 => 1
1 => 1

Factor the number n and check whether it has a prime factor p ≡ 3 (mod 4) with odd valuation. It has such a factor if and only if n is not a sum of two squares.
The number of solutions has a closed-form expression involving the divisors of n. See this, Theorem 3, for a precise statement.
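As a hedged illustration of how that characterization could be turned into a counter (not taken from the linked theorem; the fold from the classical ordered/signed count 4·∏(e_i + 1) down to unordered pairs 0 ≤ a ≤ b, and all names below, are my own derivation):

#include <cmath>
#include <cstdint>
#include <iostream>

bool isSquare(std::int64_t x) {
    std::int64_t r = static_cast<std::int64_t>(std::sqrt(static_cast<double>(x)));
    while (r * r > x) --r;
    while ((r + 1) * (r + 1) <= x) ++r;
    return r * r == x;
}

std::int64_t countDoubleSquares(std::int64_t n) {
    if (n == 0) return 1;                       // 0 = 0^2 + 0^2
    std::int64_t m = n, B = 1;                  // B = product of (e_i + 1) over primes p_i = 1 (mod 4)
    for (std::int64_t p = 2; p * p <= m; ++p) {
        if (m % p) continue;
        int e = 0;
        while (m % p == 0) { m /= p; ++e; }
        if (p % 4 == 1) B *= e + 1;
        if (p % 4 == 3 && (e & 1)) return 0;    // odd power of a 3 (mod 4) prime: no representation
    }
    if (m > 1) {                                // one remaining prime factor above sqrt(n)
        if (m % 4 == 1) B *= 2;
        if (m % 4 == 3) return 0;               // its exponent is 1, which is odd
    }
    std::int64_t adj = (isSquare(n) ? 1 : 0) + (n % 2 == 0 && isSquare(n / 2) ? 1 : 0);
    return (B + adj) / 2;                       // fold signed/ordered solutions down to 0 <= a <= b
}

int main() {
    std::cout << countDoubleSquares(10) << ' '   // 1
              << countDoubleSquares(25) << ' '   // 2
              << countDoubleSquares(3)  << '\n'; // 0
}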

Here is my simple answer in O(sqrt(n)) complexity
x^2 + y^2 = n
x^2 = n-y^2
x = sqrt(n - y^2)
x should be an integer, so (n - y^2) should be a perfect square. Loop over y in [0, sqrt(n)] and check whether (n - y^2) is a perfect square or not.
Pseudocode :
count = 0
for y in [0, sqrt(n)]
    if isPerfectSquare(n - y^2)
        count++
return (count + 1) / 2   // integer division; the +1 rounds up for the case n = y^2 + y^2, which the loop counts only once

Here's a much simpler solution:
create a list of squares in the given range (that's 46,341 values, 0² through 46340², for the bound given)
for each square value x
if the list contains a value y such that x + y = target value (i.e. does [target - x] exist in the list)
output √x, √y as a solution (the roots can be stored in a std::map lookup created in the first step)
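A hedged sketch of that lookup idea (using an unordered_map rather than std::map; the names are mine):

#include <cstdint>
#include <iostream>
#include <unordered_map>

int countViaLookup(std::int64_t target) {
    std::unordered_map<std::int64_t, std::int64_t> rootOf;   // square -> its root
    for (std::int64_t r = 0; r * r <= target; ++r) rootOf[r * r] = r;
    int count = 0;
    for (const auto& kv : rootOf) {
        auto it = rootOf.find(target - kv.first);             // is target - x also a square?
        if (it != rootOf.end() && kv.second <= it->second)    // root(x) <= root(y) avoids double counting
            ++count;
    }
    return count;
}

int main() { std::cout << countViaLookup(25) << '\n'; }        // prints 2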

Looping through all pairs (a, b) is infeasible given the constraints on X. There is a faster way though!
For fixed a, we can work out b: b = √(X − a²). b won't always be an integer though, so we have to check this. Due to precision issues, perform the check with a small tolerance: if b is x.99999, we can be fairly certain it's an integer. So we loop through all possible values of a and count all cases where b is an integer. We need to be careful not to double-count, so we place the constraint a ≤ b. For X = a² + b², a will be at most √(X/2) under this constraint.
Here is an implementation of this algorithm in C++:
const double EPS = 1e-9; // small tolerance for the floating-point checks (any similar value works)
int count = 0;
// add EPS to avoid flooring x.99999 to x
for (int a = 0; a <= sqrt(X/2) + EPS; a++) {
int b2 = X - a*a; // b^2
int b = (int) (sqrt(b2) + EPS);
if (abs(b - sqrt(b2)) < EPS) // check b is an integer
count++;
}
cout << count << endl;

Here's a version which is trivially O(sqrt(N)) and avoids all loop-internal branches.
Start by generating all squares up to the limit, which is easily done without any multiplications, then initialize an l and an r index.
In each iteration you calculate the sum, then update the two indices and the count based on a comparison with the target value. This is sqrt(N) iterations to generate the table and at most sqrt(N) iterations of the search loop. Estimated running time with a reasonable compiler is at most 10 clock cycles per sqrt(N), so for a maximum input value of 2^31 (sqrt(N) ≈ 46341) this corresponds to less than 500K clock cycles, i.e. well under a millisecond:
unsigned countPairs(unsigned n)
{
unsigned sq = 0, i;
unsigned square[65536];
for (i = 0; sq <= n; i++) {
square[i] = sq;
sq += i+i+1;
}
int l = 0, r = i - 1;   // signed indices, so r can drop below zero when n == 0
unsigned count = 0;
do {
unsigned sum = square[l] + square[r];
l += sum <= n; // Increment l if the sum is <= N
count += sum == n; // Increment the count if a match
r -= sum >= n; // Decrement r if the sum is >= N
} while (l <= r);
return count;
}
A good compiler can note that the three compares at the end are all using the same operands so it only needs a single CMP opcode followed by three different conditional move operations (CMOVcc).

I was in a hurry, so I solved it using a rather brute-force approach (very similar to marcog's) using Python 2.6.
import math

def is_perfect_square(x):
    rt = int(math.sqrt(x))
    return rt*rt == x

def double_squares(n):
    rng = int(math.sqrt(n))
    ways = 0
    for i in xrange(rng+1):
        if is_perfect_square(n - i*i):
            ways += 1
    if ways % 2 == 0:
        ways = ways // 2
    else:
        ways = ways // 2 + 1
    return ways
Note: ways will be odd when the number is twice a perfect square (the i with n = 2*i*i is counted only once), hence the rounding up.

The number of solutions (x,y) of
x^2+y^2=n
over the integers is exactly 4 times (d1(n) − d3(n)), where d1(n) and d3(n) count the divisors of n congruent to 1 and 3 mod 4 respectively. For example, n = 25 has divisors 1, 5, 25, all ≡ 1 (mod 4), so there are 4·(3 − 0) = 12 integer solutions: (±5, 0), (0, ±5), (±3, ±4) and (±4, ±3).
Similar identities exist also for the problems
x^2 + 2y^2 = n
and
x^2 + y^2 + z^2 + w^2 = n.

Related

Are there any effective many-to-one algorithms without using modulo operator?

Given a set containing 1..N, I am trying to map the elements fairly into one of M slots (N > M). I think this is a many-to-one mapping problem.
The naive solution is to use the modulo operator; given N = 10 and M = 3, we can map like this:
N M
1 % 3 = 1 (assign to 2nd slot)
2 % 3 = 2 (assign to 3rd slot)
...
9 % 3 = 0 (assign to 1st slot)
This solution seems pretty fair but uses an expensive operator. Is there any existing algorithm to take care of this kind of problem?
Thanks in advance!
It is debatable whether % is a slow operator, but bit manipulation is faster. If you are happy to map into a number of bins that is a power of two, M = 2^k, then you just mask out the lower k bits:
x & (M - 1);
or
x & ((1 << k)-1);
If the number of bins is a Mersenne prime, M = 2^s − 1, there is also a quick way to get the remainder:
unsigned int mod_Mersenne(unsigned int x, unsigned int s)
{
    unsigned int p = (1u << s) - 1;
    // 2^s = 1 (mod p), so x = (x >> s) + (x & p) (mod p); fold until the value fits,
    // then map a leftover p itself to 0 (a single folding step can miss these cases)
    while (x > p)
        x = (x & p) + (x >> s);
    return (x == p) ? 0 : x;
}
I believe you can also do it branchless, but I don’t remember how.
If you need to bin the numbers in sequence, as in your example, and if you can choose M to be the value range of a smaller unsigned integer type (e.g. 256 for unsigned char), you can also exploit the fact that unsigned integer types handle overflow like modulo, so you could do something like
unsigned char i = 0; // M = 256 (probably)
for (int j = 0; j < N; j++, i++)
bin[i]++; // do something with the bin
When i moves past the size of an unsigned char it wraps around to zero.
This is only guaranteed for unsigned types, so don't use a signed integer here. And beware that a char doesn't have to be eight bits, but you can check (it very likely is).
Generally, unsigned arithmetic behaves as if you had already taken the modulo, so you can exploit that whenever you can choose M to match a word size.
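A tiny illustration of that wrap-around behaviour, assuming a 16-bit unsigned short:

#include <cstdint>
#include <iostream>

int main() {
    std::uint16_t i = 65530;        // M = 2^16 = 65536
    for (int j = 0; j < 10; ++j, ++i)
        std::cout << i << ' ';      // prints 65530 ... 65535 0 1 2 3
    std::cout << '\n';
}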
Modulus m = n % M with constant M is typically implemented directly from the definition
m = n - M*(n/M)
which can easily be regarded as expensive - at least in comparison to bit masking.
For division by a constant, sophisticated compilers typically implement another algorithm (due to Granlund and Montgomery), which first computes an approximation by reciprocal multiplication, m' = (n * R) >> K, and then one or two adjustment stages to fix corner cases where that approximation can be off by one (or possibly two).
This suggests a few improvements:
carefully skipping the adjustment stages, offsetting the (1 << K)/M coefficient by some value so that the top bits of the product, 0 <= m'' = (n * R) >> K < M, fall purely within the wanted range.
considering whether the mapping function actually needs to be a modulus: if it's sufficient that 0 <= m'' < M, the final multiply-and-subtract m = n - M*m'' can be left out.
For N=10, M=3, suitable coefficients are R = 256/3 = 85 and K = 8, which map the values n = 0..9 to m = 0..2 with m = n * 85 >> 8 as
// n = 0 1 2 3 4 5 6 7 8 9
// m = 0 0 0 0 1 1 1 2 2 2 (approximation of n/3)
(The smallest coefficients that give the same set of output values are, by the way, R = 16/3 = 5 and K = 4.)
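A hedged sketch of that multiply-shift mapping (function and variable names are mine); note it approximates n/3 rather than n % 3, exactly as in the table above:

#include <iostream>

unsigned bin_of(unsigned n, unsigned R, unsigned K) {
    return (n * R) >> K;            // m'' = (n * R) >> K, no division or modulo
}

int main() {
    // N = 10, M = 3: R = 85, K = 8 as in the text
    for (unsigned n = 0; n < 10; ++n)
        std::cout << n << " -> " << bin_of(n, 85, 8) << '\n';   // 0..3 -> 0, 4..6 -> 1, 7..9 -> 2
}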

Maximization of N modulo k when N is fixed, and k<=N [duplicate]

Given two numbers n and k, find x, 1 <= x <= k that maximises the remainder n % x.
For example, n = 20 and k = 10 the solution is x = 7 because the remainder 20 % 7 = 6 is maximum.
My solution to this is :
int n, k;
cin >> n >> k;
int max = 0;
for(int i = 1; i <= k; ++i)
{
int xx = n - (n / i) * i; // or int xx = n % i;
if(max < xx)
max = xx;
}
cout << max << endl;
But my solution is O(k). Is there any more efficient solution to this?
Not asymptotically faster, but faster, simply by going backwards and stopping when you know that you cannot do better.
Assume k is less than n (otherwise just output k).
int max = 0;
for(int i = k; i > 0 ; --i)
{
int xx = n - (n / i) * i; // or int xx = n % i;
if(max < xx)
max = xx;
if (i < max)
break; // all remaining values will be smaller than max, so break out!
}
cout << max << endl;
(This can be further improved by doing the for loop as long as i > max, thus eliminating one conditional statement, but I wrote it this way to make it more obvious)
Also, check Garey and Johnson's Computers and Intractability book to make sure this is not NP-Complete (I am sure I remember some problem in that book that looks a lot like this). I'd do that before investing too much effort on trying to come up with better solutions.
This problem is equivalent to finding the maximum of the function f(x) = n % x over the given range. Plotted, f is a sawtooth: on each interval n/(m+1) < x <= n/m its value is n - m*x.
From that shape it is clear that we can get the maximum sooner if we start with x = k and then decrease x while it makes any sense (until x = max + 1). It also shows that for x larger than sqrt(n) we don't need to decrease x sequentially; instead we can jump immediately to the preceding local maximum.
int maxmod(const int n, int k)
{
int max = 0;
while (k > max + 1 && k > 4.0 * std::sqrt(n))
{
max = std::max(max, n % k);
k = std::min(k - 1, 1 + n / (1 + n / k));
}
for (; k > max + 1; --k)
max = std::max(max, n % k);
return max;
}
The magic constant 4.0 improves performance by decreasing the number of iterations of the first (expensive) loop.
Worst-case time complexity can be estimated as O(min(k, sqrt(n))). But for large enough k this estimate is probably too pessimistic: we can find the maximum much sooner, and if k is significantly greater than sqrt(n) we need only 1 or 2 iterations to find it.
I did some tests to determine how many iterations are needed in the worst case for different values of n:
n max.iterations (both/loop1/loop2)
10^1..10^2 11 2 11
10^2..10^3 20 3 20
10^3..10^4 42 5 42
10^4..10^5 94 11 94
10^5..10^6 196 23 196
up to 10^7 379 43 379
up to 10^8 722 83 722
up to 10^9 1269 157 1269
Growth rate is noticeably better than O(sqrt(n)).
For k > n the problem is trivial (take x = n+1).
For k < n, think about the graph of remainders n % x. It looks the same for all n: the remainders fall to zero at the harmonics of n: n/2, n/3, n/4, after which they jump up, then smoothly decrease towards the next harmonic.
A good candidate is the rightmost local maximum below k, given by x = n//((n//k)+1)+1 (where // is integer division); for small k an earlier local maximum can still be larger, which is why the other answers keep scanning.
waves hands around
No value of x which is a factor of n can produce the maximum n%x, since if x is a factor of n then n%x=0.
Therefore, you would like a procedure which avoids considering any x that is a factor of n. But this means you want an easy way to know whether x is a factor. If that were possible, you would be able to do an easy prime factorization.
Since there is no known easy way to do prime factorization, there cannot be an "easy" way to solve your problem (I don't think you're going to find a single formula; some kind of search will be necessary).
That said, the prime factorization literature has cunning ways of getting factors quickly relative to a naive search, so perhaps it can be leveraged to answer your question.
Nice little puzzle!
Starting with the two trivial cases.
for n < k: any x s.t. n < x <= k solves.
for n = k: x = floor(k / 2) + 1 solves.
My attempts.
for n > k:
x = n
while (x > k) {
x = ceil(n / 2)
}
^---- Did not work.
x = floor(float(n) / (floor(float(n) / k) + 1)) + 1
x = ceil(float(n) / (floor(float(n) / k) + 1)) - 1
^---- "Close" (whatever that means), but did not work.
My pride inclines me to mention that I was first to utilize the greatest k-bounded harmonic, given by 1.
Solution.
In line with other answers I simply check harmonics (term courtesy of #ColonelPanic) of n less than k, limiting by the present maximum value (courtesy of #TheGreatContini). This is the best of both worlds, and I've tested it with random integers between 0 and 10000000 with success.
int maximalModulus(int n, int k) {
if (n < k) {
return n;
}
else if (n == k) {
return n % (k / 2 + 1);
}
else {
int max = -1;
int i = (n / k) + 1;
int x = 1;
while (x > max + 1) {
x = (n / i) + 1;
if (n%x > max) {
max = n%x;
}
++i;
}
return max;
}
}
Performance tests:
http://cpp.sh/72q6
Sample output:
Average number of loops:
bruteForce: 516
theGreatContini: 242.8
evgenyKluev: 2.28
maximalModulus: 1.36 // My solution
I'm wrong for sure, but it looks to me that it depends on whether n < k or not.
I mean, if n < k, n%(n+1) gives you the maximum, so x = (n+1).
Well, on the other hand, you can start from j = k and go back evaluating n%j until it's equal to n, thus x = j is what you are looking for and you'll get it in max k steps... Too much, is it?
Okay, we want to know the divisor that gives the maximum remainder;
let n be the number to be divided and i the divisor.
We are interested in finding the maximum remainder when n is divided by i, for all i < n.
We know that remainder = n - (n/i) * i // equivalent to n%i
From the above equation, to get the maximum remainder we have to minimize (n/i)*i.
The minimum of n/i for any i < n is 1.
Note that n/i == 1, for i < n, if and only if i > n/2.
Now we have i > n/2.
The least possible value greater than n/2 is n/2+1.
Therefore, the divisor that gives the maximum remainder is i = n/2+1.
Here is the code in C++
#include <iostream>
using namespace std;
int maxRemainderDivisor(int n){
n = n>>1;
return n+1;
}
int main(){
int n;
cin>>n;
cout<<maxRemainderDivisor(n)<<endl;
return 0;
}
Time complexity: O(1)

Binary numbers with two ones inside

Is there an algorithm that finds all binary numbers between a and b in which there are exactly two ones?
For example:
a = 5
b = 10
find(a, b)
It will find
5 = 00000101
6 = 00000110
9 = 00001001
10 = 00001010
A bit-hacking trick that iterates through all bit-patterns containing the same number of 1-bits looks as follows:
unsigned next_combination(unsigned x)
{
unsigned u = x & -x;
unsigned v = u + x;
x = v + (((v ^ x) / u) >> 2);
return x;
}
It generates the values in ascending order. It takes the previous value and transforms it into the next one with the same number of 1-bits. This means that you just have to start from the minimal bit combination that is greater or equal to a and iterate until you encounter a value greater than b.
Of course, in this form it will only work if your a and b are within the range of unsigned.
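A hedged usage sketch for the original example (the find wrapper is mine; it simply steps up from the smallest two-bit value, which is cheap since an unsigned holds at most C(32,2) = 496 such values, and it assumes b is well below the top of the unsigned range):

#include <iostream>

// next_combination() as given above: next value with the same number of 1-bits.
unsigned next_combination(unsigned x) {
    unsigned u = x & -x;
    unsigned v = u + x;
    return v + (((v ^ x) / u) >> 2);
}

// Print all values in [a, b] with exactly two set bits.
void find(unsigned a, unsigned b) {
    unsigned x = 3;                          // smallest value with two 1-bits
    while (x < a) x = next_combination(x);
    while (x <= b) {
        std::cout << x << '\n';
        x = next_combination(x);
    }
}

int main() { find(5, 10); }                  // prints 5 6 9 10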
These numbers are of the form
2^m + 2^n
with m > n.
You can find them by exhaustive search on m, n.
M = 2
while M <= b:
    N = 1
    while N < M and M + N <= b:
        if a <= M + N:
            print M + N
        N += N
    M += M
This can probably be slightly optimized to avoid searching when 2^m < a, but the benefit will be tiny: the complexity is O(log²b), which is already small.

Find minimum sum that cannot be formed

Given positive integers from 1 to N, where N can go up to 10^9. Some K of these integers are missing; K can be at most 10^5. I need to find the minimum sum that can't be formed from the remaining N-K elements in an efficient way.
Example: say we have N=5, which means we have {1,2,3,4,5}; let K=2 with missing elements {3,5}, so the remaining array is {1,2,4}. The minimum sum that can't be formed from these remaining elements is 8, because:
1=1
2=2
3=1+2
4=4
5=1+4
6=2+4
7=1+2+4
So how to find this un-summable minimum?
I know how to find this if I can store all the remaining elements, using this approach:
We can use something similar to Sieve of Eratosthenes, used to find primes. Same idea, but with different rules for a different purpose.
Store the numbers from 0 to the sum of all the numbers, and cross off 0.
Then take numbers, one at a time, without replacement.
When we take the number Y, then cross off every number that is Y plus some previously-crossed off number.
When we have done this for every number that is remaining, the smallest un-crossed-off number is our answer.
However, its space requirement is high. Can there be a better and faster way to do this?
Here's an O(sort(K))-time algorithm.
Let 1 ≤ x1 ≤ x2 ≤ … ≤ xm be the integers not missing from the set. For all i from 0 to m, let yi = x1 + x2 + … + xi be the partial sum of the first i terms. If it exists, let j be the least index such that yj + 1 is less than the next element x(j+1); otherwise, let j = m. It is possible to show via induction that the minimum sum that cannot be made is yj + 1 (the hypothesis is that, for all i from 0 to j, the numbers x1, x2, …, xi can make all of the sums from 0 to yi and no others).
To handle the fact that the missing numbers are specified, there is an optimization that handles several consecutive numbers in constant time. I'll leave it as an exercise.
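For the simpler setting where the non-missing numbers can be listed explicitly, the induction above turns into a short greedy; a hedged sketch follows (the run-length optimization for "1..N minus K missing values" remains the exercise mentioned):

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

std::int64_t minUnreachable(std::vector<std::int64_t> xs) {
    std::sort(xs.begin(), xs.end());
    std::int64_t reach = 0;                   // every sum in [0, reach] can be made
    for (std::int64_t x : xs) {
        if (x > reach + 1) break;             // gap: reach + 1 can never be made
        reach += x;                           // now every sum in [0, reach + x] can be made
    }
    return reach + 1;
}

int main() {
    std::cout << minUnreachable({1, 2, 4}) << '\n';   // prints 8, as in the example
}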
Let X be a bitvector initialized to zero. For each number Ni you set X = (X | (X << Ni)) | (1 << (Ni - 1)), i.e. you can make Ni itself, and you can increase any value you could make previously by Ni.
This will set a '1' for every value you can make.
Running time is linear in N, and bitvector operations are fast.
process 1: X = 00000001
process 2: X = (00000001 | 00000001 << 2) | (00000010) = 00000111
process 4: X = (00000111 | 00000111 << 4) | (00001000) = 01111111
First number you can't make is 8.
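A hedged sketch of the same idea with a single 64-bit word standing in for the bitvector (here bit k means "sum k is reachable", including the empty sum 0, so the picture is shifted by one relative to the example above, but the first unreachable value comes out the same):

#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> nums = {1, 2, 4};       // the remaining numbers
    std::uint64_t reach = 1;                  // bit 0: the empty sum 0 is reachable
    for (int x : nums)
        reach |= reach << x;                  // every old sum can be increased by x
    int s = 0;
    while (reach >> s & 1) ++s;               // first unreachable sum
    std::cout << s << '\n';                   // prints 8 for {1, 2, 4}
}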
Here is my O(K lg K) approach. I didn't test it very much because of lazy-overflow, sorry about that. If it works for you, I can explain the idea:
#include <iostream>
#include <algorithm>
#include <cstdlib>
using namespace std;

const int MAXK = 100003;
int n, k;
int a[MAXK];
long long sum(long long a, long long b) { // sum of elements from a to b
return max(0ll, b * (b + 1) / 2 - a * (a - 1) / 2);
}
void answer(long long ans) {
cout << ans << endl;
exit(0);
}
int main()
{
cin >> n >> k;
for (int i = 1; i <= k; ++i) {
cin >> a[i];
}
a[0] = 0;
a[k+1] = n+1;
sort(a, a+k+2);
long long ans = 0;
for (int i = 1; i <= k+1; ++i) {
// interval of existing numbers [lo, hi]
int lo = a[i-1] + 1;
int hi = a[i] - 1;
if (lo <= hi && lo > ans + 1)
break;
ans += sum(lo, hi);
}
answer(ans + 1);
}
EDIT: well, thank God #DavidEisenstat wrote the description of the approach I used in his answer, so I don't have to write it. Basically, what he mentions as an exercise is not adding the "existing numbers" one by one, but all at the same time. Before this, you just need to check whether any of them breaks the invariant, which can be done using binary search. Hope it helped.
EDIT2: as #DavidEisenstat pointed out in the comments, the binary search is not needed, since only the first number in every interval of existing numbers can break the invariant. Modified the code accordingly.

Counting the bits set in the Fibonacci number system?

We know that each non-negative integer can be represented uniquely as a sum of Fibonacci numbers (here we are concerned with the minimal representation, i.e. no consecutive Fibonacci numbers appear in the representation and each Fibonacci number is used at most once).
For example:
1-> 1
2-> 10
3->100
4->101, here f1=1 , f2=2 and f(n)=f(n-1)+f(n-2);
so each decimal number can be represented in the Fibonacci system as a binary sequence. If we write all natural numbers successively in the Fibonacci system, we obtain a sequence like this: 110100101… This is called the "Fibonacci bit sequence of natural numbers".
My task is counting the number of times bit 1 appears in the first N bits of this sequence. Since N can take values from 1 to 10^15, can I do this without storing the Fibonacci sequence?
For example: if N is 5, the answer is 3.
So this is just a preliminary sketch of an algorithm. It works when the upper bound is itself a Fibonacci number, but I'm not sure how to adapt it for general upper bounds. Hopefully someone can improve upon this.
The general idea is to look at the structure of the Fibonacci encodings. Here are the first few numbers:
0
1
10
100
101
1000
1001
1010
10000
10001
10010
10100
10101
100000
The invariant in each of these numbers is that there's never a pair of consecutive 1s. Given this invariant, we can increment from one number to the next using the following pattern:
If the last digit is 0, set it to 1.
If the last digit is 1, then since there aren't any consecutive 1s, set the last digit to 0 and the next digit to 1.
Eliminate any doubled 1s by setting them both to 0 and setting the next digit to a 1, repeating until all doubled 1s are eliminated.
The reason that this is important is that property (3) tells us something about the structure of these numbers. Let's revisit the first few Fibonacci-encoded numbers once more. Look, for example, at the first three numbers:
00
01
10
Now, look at all four-bit numbers:
1000
1001
1010
The next number will have five digits, as shown here:
1011 → 1100 → 10000
The interesting detail to notice is that the number of numbers with four digits is equal to the number of values with at most two digits. In fact, we get the four-digit numbers by just prefixing the at-most-two-digit numbers with 10.
Now, look at three-digit numbers:
000
001
010
100
101
And look at five-digit numbers:
10000
10001
10010
10100
10101
Notice that the five-digit numbers are just the three-digit numbers with 10 prefixed.
This gives us a very interesting way of counting how many 1s there are. Specifically, if you look at (k+2)-digit numbers, each of them is just a number of at most k digits with 10 prefixed to it. This means that if there are B 1s total in all of the numbers with at most k digits, then the numbers with exactly k+2 digits contribute B plus the count of at-most-k-digit numbers, since we're just replaying the sequence with an extra 1 prepended to each number.
We can exploit this to compute the number of 1s in the Fibonacci codings that have at most k digits in them. The trick is as follows - for each number of digits we keep track of
How many numbers have at most that many digits (call this N(d)), and
How many 1s are represented numbers with at most d digits (call this B(d)).
We can use this information to compute these two pieces of information for one more digit. It's a beautiful DP recurrence. Initially, we seed it as follows. For one digit, N(d) = 2 and B(d) is 1, since for one digit the numbers are 0 and 1. For two digits, N(d) = 3 (there's just one two-digit number, 10, and the two one-digit numbers 0 and 1) and B(d) is 2 (one from 1, one from 10). From there, we have that
N(d + 2) = N(d) + N(d + 1). This is because the number of numbers with up to d + 2 digits is the number of numbers with up to d + 1 digits (N(d + 1)), plus the numbers formed by prefixing 10 to numbers with d digits (N(d))
B(d + 2) = B(d + 1) + B(d) + N(d) (The number of total 1 bits in numbers of length at most d + 2 is the total number of 1 bits in numbers of length at most d + 1, plus the extra we get from numbers of just d + 2 digits)
For example, we get the following:
d N(d) B(d)
---------------------
1 2 1
2 3 2
3 5 5
4 8 10
5 13 20
We can actually check this. For 1-digit numbers, there are a total of 1 one bit used. For 2-digit numbers, there are two ones (1 and 10). For 3-digit numbers, there are five 1s (1, 10, 100, 101). For four-digit numbers, there are 10 ones (the five previous, plus 1000, 1001, 1010). Extending this outward gives us the sequence that we'd like.
This is extremely easy to compute - we can compute the value for k digits in time O(k) with just O(1) memory usage if we reuse space from before. Since the Fibonacci numbers grow exponentially quickly, this means that if we have some number N and want to find the sum of all 1s bits to the largest Fibonacci number smaller than N, we can do so in time O(log N) and space O(1).
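A minimal sketch of that DP (variable names are mine), reproducing the table above and extendable to any digit count:

#include <cstdint>
#include <iostream>

int main() {
    std::int64_t N_prev = 2, B_prev = 1;   // d = 1: numbers 0 and 1, one 1-bit
    std::int64_t N_cur  = 3, B_cur  = 2;   // d = 2: adds "10", one more 1-bit
    for (int d = 3; d <= 10; ++d) {
        std::int64_t N_next = N_cur + N_prev;            // N(d) = N(d-1) + N(d-2)
        std::int64_t B_next = B_cur + B_prev + N_prev;   // B(d) = B(d-1) + B(d-2) + N(d-2)
        std::cout << d << ' ' << N_next << ' ' << B_next << '\n';   // 3 5 5, 4 8 10, 5 13 20, ...
        N_prev = N_cur;  B_prev = B_cur;
        N_cur  = N_next; B_cur  = B_next;
    }
}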
That said, I'm not sure how to adapt this to work with general upper bounds. However, I'm optimistic that there is some way to do it. This is a beautiful recurrence and there just has to be a nice way to generalize it.
Hope this helps! Thanks for an awesome problem!
Let's solve 3 problems. Each one is harder than the previous, and each uses the result of the previous.
1. How many ones are set if you write down every number from 0 to fib[i]-1.
Call this dp[i]. Let's look at the numbers
0
1
10
100
101
1000
1001
1010 <-- we want to count ones up to here
10000
If you write all numbers up to fib[i]-1, first you write all numbers up to fib[i-1]-1 (dp[i-1]), then you write the last block of numbers. There are exactly fib[i-2] of those numbers, each has a one on the first position, so we add fib[i-2], and if you erase those ones
000
001
010
then remove the leading zeros, and you can see that each number from 0 to fib[i-2]-1 is written down. The number of ones there is equal to dp[i-2], which gives us:
dp[i] = fib[i-2] + dp[i-2] + dp[i-1];
2. How many ones are set if you write down every number from 0 to n.
0
1
10
100
101
1000
1001 <-- we want to count ones up to here
1010
Let's call this solNumber(n).
Suppose that your number is f[i] + x, where f[i] is the maximum possible Fibonacci number not exceeding it. Then the answer is dp[i] + (x + 1) + solNumber(x): dp[i] counts the ones below f[i], each of the x+1 numbers from f[i] to f[i]+x contributes a leading one, and solNumber(x) counts the rest. This can be proved in the same way as in point 1.
3. How many ones are set in the first n digits.
3a. How many numbers have representation length exactly l?
If l = 1 the answer is 1; otherwise it is fib[l-1] (with fib[0] = fib[1] = 1 as in the code below).
You can note that if you erase the leading one and then all leading zeros, you get each number from 0 to fib[l-1]-1 - exactly fib[l-1] numbers.
//End of 3a
Now you can find the number m such that, if you write all numbers from 1 to m, their total length is <= n, but if you write all numbers from 1 to m+1, the total length is > n. Solve the problem manually for m+1 and add solNumber(m).
All 3 problems are solved in O(log n).
#include <iostream>
using namespace std;
#define FOR(i, a, b) for(int i = a; i < b; ++i)
#define RFOR(i, b, a) for(int i = b - 1; i >= a; --i)
#define REP(i, N) FOR(i, 0, N)
#define RREP(i, N) RFOR(i, N, 0)
typedef long long Long;
const int MAXL = 75; // 30 is too small for N up to 10^15; Fibonacci numbers up to about 2e15 are needed
long long fib[MAXL];
//How many ones there are if you write down the representations of the first fib[i]-1 natural numbers
long long dp[MAXL];
void buildDP()
{
fib[0] = 1;
fib[1] = 1;
FOR(i,2,MAXL)
fib[i] = fib[i-1] + fib[i-2];
dp[0] = 0;
dp[1] = 0;
dp[2] = 1;
FOR(i,3,MAXL)
dp[i] = fib[i-2] + dp[i-2] + dp[i-1];
}
//How many ones there are if you write down the representations of the first n natural numbers
Long solNumber(Long n)
{
if(n == 0)
return n;
Long res = 0;
RREP(i,MAXL)
if(n>=fib[i])
{
n -= fib[i];
res += dp[i];
res += (n+1);
}
return res;
}
int solManual(Long num, Long n)
{
int cr = 0;
RREP(i,MAXL)
{
if(n == 0)
break;
if(num>=fib[i])
{
num -= fib[i];
++cr;
}
if(cr != 0)
--n;
}
return cr;
}
Long num(int l)
{
if(l<=2)
return 1;
return fib[l-1];
}
Long sol(Long n)
{
//length of fibonacci representation
int l = 1;
//total accumulated length
Long cl = 0;
while(num(l)*l + cl <= n)
{
cl += num(l)*l;
++l;
}
//Number of digits, that represent numbers with maxlength
Long nn = n - cl;
//Number of full numbers;
Long t = nn/l;
//The last full number
n = fib[l] + t-1;
return solNumber(n) + solManual(n+1, nn%l);
}
int main(int argc, char** argv)
{
ios_base::sync_with_stdio(false);
buildDP();
Long n;
while(cin>>n)
cout<<"ANS: "<<sol(n)<<endl;
return 0;
}
Compute m, the number responsible for the (N+1)th bit of the sequence. Compute the contribution of m to the count.
We have reduced the problem to counting the number of one bits in the range [1, m). In the style of interval trees, partition this range into O(log N) subranges, each having an associated glob like 10100???? that matches the representations of exactly the numbers belonging to that range. It is easy to compute the contribution of the prefixes.
We have reduced the problem to counting the total number T(k) of one bits in all Fibonacci words of length k (i.e., the ???? part of the globs). T(k) is given by the following recurrence.
T(0) = 0
T(1) = 1
T(k) = T(k - 1) + T(k - 2) + F(k - 2)
Mathematica says there's a closed form solution, but it looks awful and isn't needed for this polylog(N)-time algorithm.
This is not a full answer but it does outline how you can do this calculation without using brute force.
The Fibonacci representation of Fn is a 1 followed by n-1 zeros.
For the numbers from Fn up to but not including F(n+1), the number of 1's consists of two parts:
There are F(n-1) such numbers, so there are F(n-1) leading 1's.
The digits after those leading 1's are just the Fibonacci representations of all numbers up to but not including F(n-1).
So, if we call the total number of 1 bits in the sequence up to but not including the nth Fibonacci number a(n), then we have the following recursion:
a(n+1) = a(n) + F(n-1) + a(n-1)
You can also easily get the number of bits in the sequence up to Fn.
If it takes k Fibonacci numbers to get to (but not pass) N, then you can count those bits with the above formula, and after some further manipulation reduce the problem to counting the number of bits in the remaining sequence.
[Edit]: Basically I have followed the property that any number n to be represented in Fibonacci base can be broken up as n = x + (n - x), where x is the largest Fibonacci number not exceeding n. Using this property, any number can be broken into bit form.
First step is finding the decimal number such that Nth bit ends in it.
We can see that all numbers between fibonacci number F(n) and F(n+1) will have same number of bits. Using this, we can pre-calculate a table and find the appropriate number.
Lets say that you have the decimal number D at which there is the Nth bit.
Now, let X be the largest fibonacci number lesser than or equal to D.
To find the set bits for all numbers from 1 to D we represent them as ...
X+0, X+1, X+2, ..., X + (D-X). So all the X's will be represented by a 1 at the end, and we have broken the problem into a much smaller sub-problem: we need to find all set bits up to D-X. We keep doing this recursively. Using the same logic, we can build a table with the appropriate set-bit count for all Fibonacci numbers (up to the limit). We use this table for finding the number of set bits from 1 to X.
So,
Findsetbits(D) { // finds number of set bits from 1 to D
    find x; // largest Fibonacci number less than or equal to D
    ans = tablesetbits[x];
    ans += 1 * (D - x + 1); // all 1s at the end due to x+0, x+1, ...
    ans += Findsetbits(D - x);
    return ans;
}
I tried some examples by hand and saw the pattern.
I have coded a rough solution which I have checked by hand for N <= 35. It works pretty fast for large numbers, though I can't be sure that it is correct. If it is an online judge problem, please give the link to it.
#include<iostream>
#include<vector>
#include<map>
#include<algorithm>
using namespace std;
#define pb push_back
typedef long long LL;
vector<LL>numbits;
vector<LL>fib;
vector<LL>numones;
vector<LL>cfones;
void init() {
fib.pb(1);
fib.pb(2);
int i = 2;
LL c = 1;
while ( c < 100000000000000LL ) {
c = fib[i-1] + fib[i-2];
i++;
fib.pb(c);
}
}
LL answer(LL n) {
if (n <= 3) return n;
int a = (lower_bound(fib.begin(),fib.end(),n))-fib.begin();
int c = 1;
if (fib[a] == n) {
c = 0;
}
LL ans = cfones[a-1-c] ;
return ans + answer(n - fib[a-c]) + 1 * (n - fib[a-c] + 1);
}
int fillarr(vector<int>& a, LL n) {
if (n == 0)return -1;
if (n == 1) {
a[0] = 1;
return 0;
}
int in = lower_bound(fib.begin(),fib.end(),n) - fib.begin(),v=0;
if (fib[in] != n) v = 1;
LL c = n - fib[in-v];
a[in-v] = 1;
fillarr(a, c);
return in-v;
}
int main() {
init();
numbits.pb(1);
int b = 2;
LL c;
for (int i = 1; i < fib.size()-2; i++) {
c = fib[i+1] - fib[i] ;
c = c*(LL)b;
b++;
numbits.pb(c);
}
for (int i = 1; i < numbits.size(); i++) {
numbits[i] += numbits[i-1];
}
numones.pb(1);
cfones.pb(1);
numones.pb(1);
cfones.pb(2);
numones.pb(1);
cfones.pb(5);
for (int i = 3; i < fib.size(); i++ ) {
LL c = 0;
c += cfones[i-2]+ 1 * fib[i-1];
numones.pb(c);
cfones.pb(c + cfones[i-1]);
}
for (int i = 1; i < numones.size(); i++) {
numones[i] += numones[i-1];
}
LL N;
cin>>N;
if (N == 1) {
cout<<1<<"\n";
return 0;
}
// find the integer just before Nth bit
int pos;
for (int i = 0;; i++) {
if (numbits[i] >= N) {
pos = i;
break;
}
}
LL temp = (N-numbits[pos-1])/(pos+1);
LL temp1 = (N-numbits[pos-1]);
LL num = fib[pos]-1 + (temp1>0?temp+(temp1%(pos+1)?1:0):0);
temp1 -= temp*(pos+1);
if(!temp1) temp1 = pos+1;
vector<int>arr(70,0);
int in = fillarr(arr, num);
int sub = 0;
for (int i = in-(temp1); i >= 0; i--) {
if (arr[i] == 1)
sub += 1;
}
cout<<"\nNumber answer "<<num<<" "<<answer(num) - sub<<"\n";
return 0;
}
Here is O((log n)^3).
Let's compute how many numbers fit in the first N bits
Imagine that we have the function:
long long number_of_all_bits_in_sequence(long long M);
It computes the length of the "Fibonacci bit sequence of natural numbers" created by all numbers that aren't greater than M.
With this function we could use binary search to find how many numbers fit in the first N bits.
How many bits are 1's in the representation of the first M numbers
Let's create a function which calculates how many numbers <= M have a 1 at the k-th bit.
long long kth_bit_equal_1(long long M, int k);
First let's preprocess the results of this function for all small values, say M <= 1000000.
Implementation for M > PREPROCESS_LIMIT:
long long kth_bit_equal_1(long long M, int k) {
if (M <= PREPROCESS_LIMIT) return preprocess_result[M][k];
long long fib_number = greatest_fib_which_isnt_greater_than(M);
int fib_index = index_of_fib_in_fibonacci_sequence(fib_number);
if (fib_index < k) {
// all numbers are smaller than k-th fibbonacci number
return 0;
}
if (fib_index == k) {
// only numbers between [fib_number, M] have k-th bit set to 1
return M - fib_number + 1;
}
if (fib_index > k) {
long long result = 0;
// all numbers between [fib_number, M] have bit at fib_index set to 1
// so let's subtract fib_number from all numbers in this interval
// now this interval is [0, M - fib_number]
// let's calculate how many numbers in this interval have the k-th bit set.
result += kth_bit_equal_1(M - fib_number, k);
// don't forget about remaining numbers (interval [1, fib_number - 1])
result += kth_bit_equal_1(fib_number - 1, k);
return result;
}
}
Complexity of this function is O(M / PREPROCESS_LIMIT).
Notice that in the recurrence one of the addends is always a call at a Fibonacci number minus one:
kth_bit_equal_1(fib_number - 1, k);
So if we memoize all computed results, the complexity improves to T(N) = T(N/2) + O(1), i.e. T(N) = O(log N).
Let's get back to number_of_all_bits_in_sequence
We can slightly modify kth_bit_equal_1 so it also counts the bits equal to 0.
Here's a way to count all the one digits in the set of numbers up to a given digit-length bound. This seems to me to be a reasonable starting point for a solution.
Consider 10 digits. Start by writing;
0000000000
Now we can turn some number of these zeros into ones, keeping the last digit always as a 0. Consider the possibilities case by case.
0 There's just one way to choose 0 of these to be ones. Summing the 1-bits in this one case gives 0.
1 There are {9 choose 1} ways to turn one of the zeros into a one. Each of these contributes 1.
2 There are {8 choose 2} ways to turn two of the zeros into ones. Each of these contributes 2.
...
5 There are {5 choose 5} ways to turn five of the zeros into ones. Each of these contributes 5 to the bit count.
It's easy to think of this as a tiling problem. The string of 10 zeros is a 10x1 board, which we want to tile with 1x1 squares and 2x1 dominoes. Choosing some number of the zeros to be ones is then the same as choosing some of the tiles to be dominoes. My solution is closely related to Identity 4 in "Proofs that really count" by Benjamin and Quinn.
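A hedged sketch of that count, for the string length of 10 used above (the binom helper and the loop bound are my own scaffolding):

#include <cstdint>
#include <iostream>

std::int64_t binom(int n, int k) {
    if (k < 0 || k > n) return 0;
    std::int64_t r = 1;
    for (int i = 1; i <= k; ++i) r = r * (n - k + i) / i;   // exact at each step
    return r;
}

int main() {
    const int L = 10;
    std::int64_t totalOnes = 0;
    for (int j = 0; 2 * j <= L; ++j)
        totalOnes += j * binom(L - j, j);   // {9 choose 1}*1 + {8 choose 2}*2 + ... + {5 choose 5}*5
    std::cout << totalOnes << '\n';
}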
Second step: Now try to use the above construction to solve the original problem.
Suppose we want to count the one bits in the first 100100010 bits (the number is in Fibonacci representation, of course). Start by overcounting the sum for all ways to replace the x's with zeros and ones in 10xxxxx0. To overcompensate for the overcounting, subtract the count for 10xxx0. Continue the procedure of overcounting and overcompensation.
This problem has a dynamic solution, as illustrated by the tested algorithm below.
Some points to keep in mind, which are evident in the code:
The best solution for each number i is obtained either by using a Fibonacci number f with f == i,
or, when f is less than i, by using the largest such f together with the greatest number n <= f such that i = f + n.
Note that the fib sequence is memoized over the entire algorithm.
public static int[] fibonacciBitSequenceOfNaturalNumbers(int num) {
int[] setBits = new int[num + 1];
setBits[0] = 0;//anchor case of fib seq
setBits[1] = 1;//anchor case of fib seq
int a = 1, b = 1;//anchor case of fib seq
for (int i = 2; i <= num; i++) {
int c = b;
while (c < i) {
c = a + b;
a = b;
b = c;
}//fib
if (c == i) {
setBits[i] = 1;
continue;
}
c = a;
int tmp = c;//to optimize further, make tmp the fib before a
while (c + tmp != i) {
tmp--;
}
setBits[i] = 1 + setBits[tmp];
}//done
return setBits;
}
Test with:
public static void main(String... args) {
int[] arr = fibonacciBitSequenceOfNaturalNumbers(23);
//print result
for(int i=1; i<arr.length; i++)
System.out.format("%d has %d%n", i, arr[i]);
}
RESULT OF TEST: i has x set bits
1 has 1
2 has 1
3 has 1
4 has 2
5 has 1
6 has 2
7 has 2
8 has 1
9 has 2
10 has 2
11 has 2
12 has 3
13 has 1
14 has 2
15 has 2
16 has 2
17 has 3
18 has 2
19 has 3
20 has 3
21 has 1
22 has 2
23 has 2
EDIT BASED ON COMMENT:
//to return total number of set between 1 and n inclusive
//instead of returning as in original post, replace with this code
int total = 0;
for(int i: setBits)
total+=i;
return total;
