Recent Google interview puzzle on bitwise operations

This is a recent interview question from Google:
We define f(X, Y) as number of different corresponding bits in binary
representation of X and Y. For example, f(2, 7) = 2, since binary
representation of 2 and 7 are 010 and 111, respectively. The first and
the third bit differ, so f(2, 7) = 2.
You are given an array of N positive integers, A1, A2 ,…, AN. Find sum
of f(Ai, Aj) for all pairs (i, j) such that 1 ≤ i, j ≤ N
For example:
A=[1, 3, 5]
We return
f(1, 1) + f(1, 3) + f(1, 5) + f(3, 1) + f(3, 3) + f(3, 5) + f(5, 1) +
f(5, 3) + f(5, 5) =
0 + 1 + 1 + 1 + 0 + 2 + 1 + 2 + 0 = 8
I could think of this solution, which is O(n^2):
int numSetBits(unsigned int A) {
    int count = 0;
    while (A != 0) {
        A = A & (A - 1);   // clear the lowest set bit
        count++;
    }
    return count;
}
int count_diff_bits(int a, int b)
{
    int x = a ^ b;
    return numSetBits(x);
}
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++) {
        sum += count_diff_bits(A[i], A[j]);
    }
Another approach I can think of is (considering that each element contains only one binary digit):
Start from the end of the array
keep a count of 1's and 0's found so far
If the current element is 1, then it will contribute count_of_zeros to the final sum
Continue like this till we reach the start of the array.
Is this approach correct?

Iterate the array, and count number of "on" bits in each bit index, for example [1, 3, 5]:
0 0 1
0 1 1
1 0 1
-----
1 1 3
Now, for each bit counter, calculate:
[bit count] * [array size - bit count] * 2
and sum for all bits...
With example above:
3 * (3 - 3) * 2 = 0
1 * (3 - 1) * 2 = 4
1 * (3 - 1) * 2 = 4
total = 8
To show why this works, lets look at a subset of the problem, using a single bit. Let's see what happens if we have an array with: [1, 1, 0, 0, 1, 0, 1]. Our count is 4 and size is 7. If we examine the first bit with all the bits in the array (including self, as in the question), we get:
1 xor 1 = 0
1 xor 1 = 0
1 xor 0 = 1
1 xor 0 = 1
1 xor 1 = 0
1 xor 0 = 1
1 xor 1 = 0
As can be seen, the contribution of this bit is the number of "off" bits. The same holds true for any other "on" bit. We could say that each "on" bit counts as the number of "off" bits:
[bit count] * [array size - bit count]
And where does the multiplication by 2 come from? Well, since we do the same with the "off" bits, except that for these, the contribution is the number of "on" bits:
[array size - bit count] * [bit count]
which of course is the same as above, and we can just multiply...
Complexity is O(n*k) where k is number of bits (32 in your code).

#include <bits/stdc++.h>
#define MOD 1000000007ll
using namespace std;
typedef long long LL;
int solve(int arr[], int n) {
    int ans = 0;
    // traverse over all bits
    for(int i = 0; i < 31; i++) {
        // count number of elements with ith bit set
        long long count = 0;
        for(int j = 0; j < n; j++)
            if ( arr[j] & ( 1 << i ) ) count++;
        // add count * (n - count) * 2 to the answer
        ans += (count * ((LL)n - count) * 2ll) % MOD;
        if(ans >= MOD) ans -= MOD;
    }
    return ans;
}
int main() {
    int arr[] = {1, 3, 5};
    int n = sizeof arr / sizeof arr[0];
    cout << solve(arr, n) << endl;
    return 0;
}

Related

XOR queries on a given array

Given an array of n integers, indexed from 1 to n. The task is to perform Q given queries, and print the sum of the array after each query.
We can perform three types of operations:
1 X: add X to the array (its index will be n+1, n+2,...)
2 Y: remove element with index Y from the array
3 Z: for each element i in the array, perform i^Z (i xor Z)
Example:
Input
arr[] = {2, 3, 9, 5, 6, 6}, Q = 5
1 3
3 5
2 2
3 2
2 7
Output: 34 37 31 27 23
Explanation:
1 3 -> arr[] = {2, 3, 9, 5, 6, 6, 3} -> sum = 34
3 5 -> arr[] = {7, 6, 12, 0, 3, 3, 6} -> sum = 37
2 2 -> arr[] = {7, 12, 0, 3, 3, 6} -> sum = 31
3 2 -> arr[] = {5, 14, 2, 1, 1, 4} -> sum = 27
2 7 -> arr[] = {5, 14, 2, 1, 1} -> sum = 23
P.S.: I'm trying to solve the problem with a segment tree, but I can't update the tree with the XOR operator. Is there any other way to solve this problem? I'm trying to solve it in O(n log n).
Assuming your numbers do not exceed some standard constant like 2^32 or 2^64, we can do this in constant time per query, by counting the bits separately.
You will need to:
Remember how many numbers there are in the array
Remember how many lit bits there are at every position of the binary representation.
So here's your example, expanded into bits, with the least significant ones at the top:
2 3 9 5 6 6 3 | sum
-------------------------
0 1 1 1 0 0 1 | 4
1 1 0 0 1 1 1 | 5
0 0 0 1 1 1 0 | 3
0 0 1 0 0 0 0 | 1
Now, that means that there are
4 "first" bits lit
5 "second" bits lit
3 "third" bits lit and
1 "fourth" bit lit.
The number of numbers is 7.
The sum of these numbers is 34
We now xor this with 5, which is 0101 in binary, so there will now be
7 - 4 = 3 "first" bits lit
5 "second" bits lit
7 - 3 = 4 "third" bits lit
1 "fourth" bit lit
If we sum this up, we get 3 * 2^0 + 5 * 2^1 + 4 * 2^2 + 1 * 2^3 = 37 (where now by ^ I mean exponentiation as opposed to xor).
So this is what you do every time the xor operation pops up. Adding and removing numbers is the easy part because you go over their bits and accordingly adjust the counts of lit "i-th" bits in the array.
Thanks to Maurycyt I have solved the problem. Below is my code in case anyone needs it.
#include <bits/stdc++.h>
using namespace std;

const int MAX = 1e5 + 5;
const int MAXBIT = 31;      // values are assumed to fit in 31 bits
int n, q, num, xor_add;
int arr[MAX], sum[MAXBIT];

long long getSum() {
    long long res = 0;
    for(int i = 0; i < MAXBIT; i++)
        res += (long long)sum[i] * (1LL << i);  // sum[i] elements currently have bit i set
    return res;
}

void updateXor(int x) {      // query type 3: xor every element with x
    xor_add ^= x;
    for(int i = 0; i < MAXBIT; i++)
        if(x & (1 << i)) sum[i] = num - sum[i]; // that bit flips in every element
}

void add(int x) {            // query type 1: append x
    ++num;
    arr[n++] = x ^ xor_add;  // store it so that (stored ^ xor_add) is always the current value
    for(int i = 0; i < MAXBIT; i++)
        if(x & (1 << i)) sum[i]++;
}

void remv(int id) {          // query type 2: remove the element originally at index id (1-based)
    --num;
    int x = arr[id - 1] ^ xor_add;  // its current value
    for(int i = 0; i < MAXBIT; i++)
        if(x & (1 << i)) sum[i]--;
}

int main() {
    cin >> n >> q;
    num = n;
    for(int i = 0; i < n; i++) cin >> arr[i];
    for(int i = 0; i < MAXBIT; i++)
        for(int j = 0; j < n; j++)
            if(arr[j] & (1 << i)) sum[i]++;
    while(q--) {
        int id, x;
        cin >> id >> x;
        if(id == 1) add(x);
        else if(id == 2) remv(x);
        else updateXor(x);
        cout << getSum() << '\n';
    }
    return 0;
}
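For the worked example above, the program reads n and q first, then the initial array, then one query per line, so the input would be:
6 5
2 3 9 5 6 6
1 3
3 5
2 2
3 2
2 7
and it prints 34, 37, 31, 27 and 23, one per line.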

Replace operators of equation, so that the sum is equal to zero

I'm given an equation like this one:
n = 7
1 + 1 - 4 - 4 - 4 - 2 - 2
How can I optimally replace operators so that the sum of the equation equals zero, or print -1 if it is impossible? I thought of one algorithm, but it is not optimal: I have an idea to brute-force all cases with complexity O(n*2^n), but n < 300.
Here is the link of the problem: http://codeforces.com/gym/100989/problem/M.
You can solve this with dynamic programming. Keep a map of all possible partial sums (mapping to the minimum number of changes to reach this sum), and then update it one number at a time.
Here's a concise Python solution:
def signs(nums):
    xs = {nums[0]: 0}
    for num in nums[1:]:
        ys = dict()
        for d, k in xs.items():
            for cost, n in enumerate([num, -num]):
                ys[d + n] = min(ys.get(d + n, 1e100), k + cost)
        xs = ys
    return xs.get(0, -1)

print(signs([1, 1, -4, -4, -4, -2, -2]))
In theory this has exponential complexity in the worst case (since the number of partial sums can double at each step). However, if (as here) the given numbers are always (bounded) small ints, then the number of distinct partial sums grows only linearly, and the program works in O(n^2) time.
A somewhat more optimised version uses a sorted array of (subtotal, cost) instead of a dict. One can discard partial sums that are too large or too small (making it impossible to end up at 0 assuming all of the remaining elements are between -300 and +300). This runs approximately twice as fast, and is a more natural implementation to port to a lower-level language than Python for maximum speed.
def merge(xs, num):
    i = j = 0
    ci = 0 if num >= 0 else 1
    cj = 0 if num < 0 else 1
    num = abs(num)
    while j < len(xs):
        if xs[i][0] + num < xs[j][0] - num:
            yield (xs[i][0] + num, xs[i][1] + ci)
            i += 1
        elif xs[i][0] + num > xs[j][0] - num:
            yield (xs[j][0] - num, xs[j][1] + cj)
            j += 1
        else:
            yield (xs[i][0] + num, min(xs[i][1] + ci, xs[j][1] + cj))
            i += 1
            j += 1
    while i < len(xs):
        yield (xs[i][0] + num, xs[i][1] + ci)
        i += 1

def signs2(nums):
    xs = [(nums[0], 0)]
    for i in range(1, len(nums)):
        limit = (len(nums) - 1 - i) * 300
        xs = [x for x in merge(xs, nums[i]) if -limit <= x[0] <= limit]
    for x, c in xs:
        if x == 0:
            return c
    return -1

print(signs2([1, 1, -4, -4, -4, -2, -2]))
Here is the implementation in C++:
#include <bits/stdc++.h>
using namespace std;

int a[] = {1, -1, 4, -4};
int n = sizeof a / sizeof a[0];

int solve() {
    // M maps a reachable partial sum to the minimum number of sign changes
    unordered_map<int, int> M;
    M[a[0]] = 0;                       // the sign of the first term cannot be changed
    for(int i = 1; i < n; ++i) {
        unordered_map<int, int> next;  // rebuild the map so every sum uses a[0..i]
        for(const auto &p : M) {
            int k = p.first, d = p.second;
            // keep the original sign of a[i]
            auto it = next.find(k + a[i]);
            if(it == next.end() || it->second > d) next[k + a[i]] = d;
            // flip the sign of a[i]: costs one change
            it = next.find(k - a[i]);
            if(it == next.end() || it->second > d + 1) next[k - a[i]] = d + 1;
        }
        M = move(next);
    }
    return M.count(0) ? M[0] : -1;
}
What I can think of:
You calculate the original equation. This results in -14.
Now you sort the numbers (taking into account their + or -)
When the equation results in a negative number, you look for the largest numbers to fix the equation. When a number is too large, you skip it.
orig_eq = -14
After sorting:
-4, -4, -4, -2, -2, 1, 1
You loop over this and select each number if the equation orig_eq - current number is closer to zero.
This way you can select the numbers whose signs should be changed.

Number of contiguous subarrays in which element of array is max

Given an array of n integers, I need to find, for each element of the array, the number of contiguous subarrays that have that element as their maximum element.
Elements can repeat.
Is there a way to do it in less than O(n^2)?
O(n log n) or O(n)?
Example-
If array is {1,2,3}. Then-
For '1': 1 Such subarray {1}.
For '2': 2 Such subarrays {2},{1,2}
For '3': 3 Such subarrays {3},{2,3},{1,2,3}
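For reference, a brute-force O(n^3) checker is handy for validating the faster answers below on small inputs; with repeated elements it credits every index that attains the subarray maximum, since the question leaves ties unspecified (the function name bruteForce is just for illustration):

#include <cstdio>
#include <vector>

// For every subarray, find its maximum and credit each index that attains it.
std::vector<long long> bruteForce(const std::vector<int>& a) {
    int n = (int)a.size();
    std::vector<long long> cnt(n, 0);
    for (int l = 0; l < n; ++l)
        for (int r = l; r < n; ++r) {
            int mx = a[l];
            for (int k = l; k <= r; ++k) if (a[k] > mx) mx = a[k];
            for (int k = l; k <= r; ++k) if (a[k] == mx) ++cnt[k];
        }
    return cnt;
}

int main() {
    for (long long c : bruteForce({1, 2, 3})) printf("%lld ", c);  // prints: 1 2 3
    printf("\n");
    return 0;
}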
I am having a hard time trying to explain my solution in words, so I will just add the code; it will explain itself:
#include <iostream>
#include <fstream>
using namespace std;
#define max 10000
int main(int argc, const char * argv[]) {
    ifstream input("/Users/appleuser/Documents/Developer/xcode projects/SubArrayCount/SubArrayCount/input.in");
    int n, arr[max], before[max]={0}, after[max]={0}, result[max];
    input >> n;
    for (int i=0; i<n; i++)
        input >> arr[i];
    for (int i=0; i<n; i++)
        for (int j=i-1; j>=0 && arr[j]<arr[i]; j-=before[j]+1)
            before[i] += before[j]+1;
    for (int i=n-1; i>=0; i--)
        for (int j=i+1; j<n && arr[j]<arr[i]; j+=after[j]+1)
            after[i] += after[j]+1;
    for (int i=0; i<n; i++)
        result[i] = (before[i]+1)*(after[i]+1);
    for (int i=0; i<n; i++)
        cout << result[i] << " ";
    cout << endl;
    return 0;
}
Explanation for (before[i]+1)*(after[i]+1):
for each value we need the number of elements that lie before it and are less than it, and the number of elements that lie after it and are less than it.
      |  0   1   2   3   4   5  ...   count of smaller numbers appearing before the value
------+------------------------------
  0   |  1   2   3   4   5   6
  1   |  2   4   6   8  10  12
  2   |  3   6   9  12  15  18
  3   |  4   8  12  16  20  24
  4   |  5  10  15  20  25  30
  5   |  6  12  18  24  30  36
  .   |
  .   |
  .   |
count of smaller numbers appearing after the value
Example: for a number that has 3 smaller values before it and 4 smaller values after it, the answer is V(3,4) = 20 = (3+1) * (4+1).
please, let me know the results.
Did you manage to find the source link of the problem?
You could store subarray sizes in another array (arr2) to save yourself recalculating them.
arr2 must be the length of the max value in arr1,
i.e. -
Take the array {1,2,4,6,7,8}
arr2 is declared like this:
arr2 = []
for i in range(max(arr1)):
    arr2.append(0)
Now, the algorithm goes like this:
Say you hit the number 6.
Since 6-1=5 does not exist, it has a default value of 0 corresponding to index 5 in arr2, because nothing has been added there yet. So you store 0+1=1 in position 6 of arr2. Then you hit the number 7. You check if 7-1=6 exists in arr2. It does, with a value of 1. Hence you add the value 1+1=2 to position 7 in arr2.
For each value in arr2 we simply add this to the count. We can do so simultaneously with a count variable.
This algorithm is O(n).
Here is my O(N) time Java solution using a Stack. The basic idea is to move from left to right keeping track of subarrays ending at "i", and then right to left keeping track of subarrays starting from "i":
public int[] countSubarrays(int[] arr) {
    Stack<Integer> stack = new Stack<>();
    int[] ans = new int[arr.length];
    for(int i = 0; i < arr.length; i++) {
        while(!stack.isEmpty() && arr[stack.peek()] < arr[i]) {
            ans[i] += ans[stack.pop()];
        }
        stack.push(i);
        ans[i]++;
    }
    stack.clear();
    int[] temp = new int[arr.length];
    for(int i = arr.length - 1; i >= 0; i--) {
        while(!stack.isEmpty() && arr[stack.peek()] < arr[i]) {
            int idx = stack.pop();
            ans[i] += temp[idx];
            temp[i] += temp[idx];
        }
        stack.push(i);
        temp[i]++;
    }
    return ans;
}
You haven't specified how repeating elements are handled / what that element is (the element at that precise position in the array, or any element in the array with the same value).
Assuming the problem is about the element at a precise index, this can be solved easily in linear time for a single index:
define ctSubarrays(int[] in, int at)
    int minInd = at, maxInd = at;
    // search for the minimum index (lowest index reachable through elements smaller than in[at])
    for (; minInd > 0 && in[minInd - 1] < in[at]; minInd--);
    // search for the maximum index (highest index reachable through elements smaller than in[at])
    for (; maxInd < length(in) - 1 && in[maxInd + 1] < in[at]; maxInd++);
    // now we've got the bounds of the largest window meeting all constraints
    // next step: get the number of possible subarrays containing in[at]
    int spaceMin = at - minInd;
    int spaceMax = maxInd - at;
    return (spaceMin + 1) * (spaceMax + 1);
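For concreteness, here is a runnable C++ rendering of the sketch above (same logic, with the hypothetical name ctSubarrays kept from the pseudocode); it returns the count for a single index:

#include <cstdio>
#include <vector>

// Expand left and right past strictly smaller neighbours, then multiply the
// (extent + 1) choices of left endpoint by the (extent + 1) choices of right endpoint.
long long ctSubarrays(const std::vector<int>& in, int at) {
    int minInd = at, maxInd = at;
    while (minInd > 0 && in[minInd - 1] < in[at]) --minInd;
    while (maxInd < (int)in.size() - 1 && in[maxInd + 1] < in[at]) ++maxInd;
    long long spaceMin = at - minInd, spaceMax = maxInd - at;
    return (spaceMin + 1) * (spaceMax + 1);
}

int main() {
    std::vector<int> a = {1, 2, 3};
    for (int i = 0; i < (int)a.size(); ++i) printf("%lld ", ctSubarrays(a, i));  // prints: 1 2 3
    printf("\n");
    return 0;
}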
Let's look at an example.
{4, 5, 6, 3, 2}
Iterating from the beginning to the end we can detect a single increasing subarray, {4, 5, 6}, and two single elements, 3 and 2.
So we're detecting subarrays of lengths 3, 1, and 1.
The first subarray {4, 5, 6} gives us 6 possible choices, i.e. 1 + 2 + 3 = 6. That's the key.
For any increasing subarray of length N we can calculate the number of choices as N * (N + 1)/2.
Therefore we have 3 * (3 + 1)/2 + 1 * (1 + 1)/2 + 1 * (1 + 1)/2, i.e. 6 + 1 + 1 = 8.
Since we only need a single pass, we have an O(N) algorithm.
If the array is sorted,
count = 1;
for (i = 1 to n-1){
if(a[i-1] == a[i]){
count = count + 1;
}else if(a[i-1] + 1 == a[i]){
count of sub arrays for a[i-1] = count;
count = count + 1;
}else{
count of sub arrays for a[i-1] = count;
count = 1;
}
}
count of sub arrays for a[n-1] = count;
If the array is not sorted,
Assumption 3:If the array is like {3,1,2,3} then #sub arrays for 3 is 3
aMin = min(a);//O(n)
aMax = max(a);
len = (aMax - aMin + 1);
create array b of size len;
for (j = 0 to len-1){
b[j] = 0;
}
count = 1;
for (i = 1 to n-1){
if(a[i-1] == a[i]){
count = count + 1;
}else if(a[i-1] + 1 == a[i]){
if(b[a[i-1] - aMin] < count){
b[a[i-1] - aMin] = count;
}
count = count + 1;
}else{
if(b[a[i-1] - aMin] < count){
b[a[i-1] - aMin] = count;
}
count = 1;
}
}
if(b[a[n-1] - aMin] < count){
b[a[n-1] - aMin] = count;
}
for (i = 0 to n-1){
count of sub arrays for a[i] = b[a[i] - aMin];
}
This will work even if the array contains negative integers
If Assumption 3 fails according to your problem, and it is like,
Assumption 4:If the array is like {3,1,2,3} then #sub arrays for 3 is 4
{3}, {1,2,3}, {2,3}, {3}
Modify the above code by replacing
if(b[a[i-1] - aMin] < count){
b[a[i-1] - aMin] = count;
}
with this
b[a[i-1] - aMin] = b[a[i-1] - aMin] + count;
Create a value-to-index map and traverse from bottom to top - maintain an augmented tree of intervals. Each time an index is added, adjust the appropriate interval and calculate the total from the relevant segment. For example:
A = [5,1,7,2,3] => {1:1, 2:3, 3:4, 5:0, 7:2}
indexes interval total sub-arrays with maximum exactly
1 (1,1) 1 => 1
1,3 (3,3) 2 => 1
1,3,4 (3,4) 3 => 2
1,3,4,0 (0,1) 5 => 2
1,3,4,0,2 (0,4) 7 => 3 + 2*3 = 9
Insertion and deletion in augmented trees are of O(log n) time-complexity. Worst-case total time-complexity is O(n log n).
Using JavaScript. Not sure of the Big O notation, but here I'm looping over the list and then starting 2 loops: one counting down from i, and the other counting up from i+1.
function countSubarrays(arr) {
    let countArray = [];
    for (let i = 0; i < arr.length; i++) {
        // This will count downwards starting at i
        let left = 0;
        for (let j = i; j >= 0; j--) {
            if (arr[j] > arr[i]) { break; }
            left++;
        }
        // This will count upwards starting at i+1 so that you don't get a duplicate of the first value
        let right = 1;
        for (let j = i + 1; j < arr.length; j++) {
            if (arr[j] >= arr[i]) { break; }
            right++;
        }
        // each choice of left endpoint pairs with each choice of right endpoint
        countArray.push(left * right);
    }
    return countArray;
}

Number of submatrices containing all zeros

Is there a way to find the number of rectangular submatrices containing all zeros with a complexity smaller than O(n^3), where n is the dimension of the given matrix?
Here is an O(n² log n) solution.
First, let's convert the main problem to something like this:
For a given histogram, count the rectangles that fit entirely under it (i.e. rectangles containing only zeros).
How do we convert the problem?
For each position, calculate the height of the column of consecutive zeros ending at that position (extending upwards).
Example:
10010 01101
00111 12000
00001 -> 23110
01101 30020
01110 40001
It can easily be found in O(n²):
for(int i = 1; i <= n; i++)
    for(int j = 1; j <= m; j++)
        up[i][j] = arr[i][j] ? 0 : 1 + up[i - 1][j];
Now we can consider each row as a histogram with the given heights.
Let's solve the problem for a histogram.
Our goal is to travel over all heights from left to right, and at each step we are going to update an array L.
For each height, this array contains the maximum width such that a rectangle of that width and that height can be formed ending at the current position and extending to the left.
Consider example:
0
0   0
0 000
00000
000000
000000   -> heights: 6 3 4 4 5 2
L[6]: 1 0 0 0 0 0
L[5]: 1 0 0 0 1 0
L[4]: 1 0 1 2 3 0
L[3]: 1 2 3 4 5 0
L[2]: 1 2 3 4 5 6
L[1]: 1 2 3 4 5 6
steps: 1 2 3 4 5 6
As you can see, if we add up all those numbers we will receive the answer for the given histogram.
We can simply update array L in O(n); however, we can also do it in O(log n) by using a segment tree (with lazy propagation) that can add on an interval, set a value on an interval, and get the sum of an interval.
In each step we just add 1 to the interval [1, height], set 0 on the interval [height + 1, maxHeight], and get the sum of the interval [1, maxHeight].
height - height of the current column in the histogram.
maxHeight - maximum height of a column in the histogram.
And that's how you can get an O(n² log n) solution :)
Here is main code in C++:
const int MAXN = 1000;
int n;
int arr[MAXN + 5][MAXN + 5]; // stores given matrix
int up[MAXN + 5][MAXN + 5]; // heights of columns of zeros
long long answer;
long long calculate(int *h, int maxh) { // solve it for histogram
clearTree();
long long result = 0;
for(int i = 1; i <= n; i++) {
add(1, h[i]); // add 1 to [1, h[i]]
set(h[i] + 1, maxh); // set 0 in [h[i] + 1, maxh];
result += query(); // get sum from [1, maxh]
}
return result;
}
int main() {
ios_base::sync_with_stdio(0);
cin >> n;
for(int i = 1; i <= n; i++)
for(int j = 1; j <= n; j++)
cin >> arr[i][j]; // read the data
for(int i = 1; i <= n; i++)
for(int j = 1; j <= n; j++)
up[i][j] = arr[i][j] ? 0 : 1 + up[i - 1][j]; // calculate values of up
for(int i = 1; i <= n; i++)
answer += calculate(up[i], i); // calculate for each row
cout << answer << endl;
}
Here is the beginning of code, segment tree:
#include <iostream>
using namespace std;
// interval-interval tree that stores sums
const int p = 11;
int sums[1 << p];
int lazy[1 << p];
int need[1 << p];
const int M = 1 << (p - 1);
void update(int node) {
if(need[node] == 1) { // add
sums[node] += lazy[node];
if(node < M) {
need[node * 2] = need[node * 2] == 2 ? 2 : 1;
need[node * 2 + 1] = need[node * 2 + 1] == 2 ? 2 : 1;
lazy[node * 2] += lazy[node] / 2;
lazy[node * 2 + 1] += lazy[node] / 2;
}
} else if(need[node] == 2) { // set
sums[node] = lazy[node];
if(node < M) {
need[node * 2] = need[node * 2 + 1] = 2;
lazy[node * 2] = lazy[node] / 2;
lazy[node * 2 + 1] = lazy[node] / 2;
}
}
need[node] = 0;
lazy[node] = 0;
}
void insert(int node, int l, int r, int lq, int rq, int value, int id) {
update(node);
if(lq <= l && r <= rq) {
need[node] = id;
lazy[node] = value * (r - l + 1);
update(node);
return;
}
int mid = (l + r) / 2;
if(lq <= mid) insert(node * 2, l, mid, lq, rq, value, id);
if(mid + 1 <= rq) insert(node * 2 + 1, mid + 1, r, lq, rq, value, id);
sums[node] = sums[node * 2] + sums[node * 2 + 1];
}
int query() {
return sums[1]; // we only need to know sum of the whole interval
}
void clearTree() {
for(int i = 1; i < 1 << p; i++)
sums[i] = lazy[i] = need[i] = 0;
}
void add(int left, int right) {
insert(1, 0, M - 1, left, right, 1, 1);
}
void set(int left, int right) {
insert(1, 0, M - 1, left, right, 0, 2);
}
// end of the tree

Algorithm to calculate the number of 1s for a range of numbers in binary

So I just got back from the ACM Programming competition and did pretty well, but there was one problem that not one team got.
The Problem.
Start with an integer N0 which is greater than 0. Let N1 be the number of ones in the binary representation of N0. So, if N0 = 27, N1 = 4. For all i > 0, let Ni be the number of ones in the binary representation of Ni-1. This sequence will always converge to one. For any starting number, N0, let K be the minimum value of i >= 0 for which Ni = 1. For example, if N0 = 31, then N1 = 5, N2 = 2, N3 = 1, so K = 3.
Given a range of consecutive numbers and a value of X how many numbers in the range have a K value equal to X?
Input
There will be several test cases in the input. Each test case will consist of three integers on a single line:
LO HI X
Where LO and HI (1 <= LO <= HI <= 10^18) are the lower and upper limits of a range of integers, and X (0 <= X <= 10) is the target value for K. The input will end with a line of three 0s.
Output
For each test case output a single integer, representing the number of integers in the range from LO to HI (inclusive) which have a K value equal to X in the input. Print each Integer on its own line with no spaces. Do not print any blank lines between answers.
Sample Input
31 31 3
31 31 1
27 31 1
27 31 2
1023 1025 1
1023 1025 2
0 0 0
Sample Output
1
0
0
3
1
1
If you guys want I can include our answer, because finding the count for a small range is easy, but I will give you a hint first: your program needs to run in seconds, not minutes. We had a working solution, but not an algorithm efficient enough for a range like
48238 10^18 9
Anyway, good luck, and if the community likes these, we had some more we could not solve that could be good brain teasers for you. The competition allows you to use Python, C++, or Java; all three are acceptable in an answer.
So as a hint, my coach said to think of how binary numbers count rather than checking every bit. I think that gets us a lot closer.
I think a key is first understanding the pattern of K values and how rapidly it grows. Basically, you have:
K(1) = 0
K(X) = K(bitcount(X))+1 for X > 1
So finding the smallest X values for a given K we see
K(1) = 0
K(2) = 1
K(3) = 2
K(7) = 3
K(127) = 4
K(170141183460469231731687303715884105727) = 5
So for an example like 48238 10^18 9 the answer is trivially 0. K=0 only for 1, and K=1 only for powers of 2, so in the range of interest, we'll pretty much only see K values of 2, 3 or 4, and never see K >= 5
edit
Ok, so we're looking for an algorithm to count the number of values with K=2,3,4 in a range of values LO..HI without iterating over the entire range. So the first step is to find the number of values in the range with bitcount(x)==i for i = 1..59 (since we only care about values up to 10^18 and 10^18 < 2^60). So break down the range lo..hi into subranges that are a power of 2 in size and differ only in their lower n bits -- a range of the form x*(2^n)..(x+1)*(2^n)-1. We can break down the arbitrary lo..hi range into such subranges easily. For each such subrange there will be choose(n, i) values with i+bitcount(x) set bits.
So we just add all the subranges together to get a vector of counts for 1..59, which we then iterate over, adding together those elements with the same K value to get our answer.
edit (fixed again to be C89 compatible and work for lo=1/k=0)
Here's a C program to do what I previously described:
#include <stdio.h>
#include <string.h>
#include <assert.h>
int bitcount(long long x) {
int rv = 0;
while(x) { rv++; x &= x-1; }
return rv; }
long long choose(long long m, long long n) {
long long rv = 1;
int i;
for (i = 0; i < n; i++) {
rv *= m-i;
rv /= i+1; }
return rv; }
void bitcounts_p2range(long long *counts, long long base, int l2range) {
int i;
assert((base & ((1LL << l2range) - 1)) == 0);
counts += bitcount(base);
for (i = 0; i <= l2range; i++)
counts[i] += choose(l2range, i); }
void bitcounts_range(long long *counts, long long lo, long long hi) {
int l2range = 0;
while (lo + (1LL << l2range) - 1 <= hi) {
if (lo & (1LL << l2range)) {
bitcounts_p2range(counts, lo, l2range);
lo += 1LL << l2range; }
l2range++; }
while (l2range >= 0) {
if (lo + (1LL << l2range) - 1 <= hi) {
bitcounts_p2range(counts, lo, l2range);
lo += 1LL << l2range; }
l2range--; }
assert(lo == hi+1); }
int K(int x) {
int rv = 0;
while(x > 1) {
x = bitcount(x);
rv++; }
return rv; }
int main() {
long long counts[64];
long long lo, hi, total;
int i, k;
while (scanf("%lld%lld%d", &lo, &hi, &k) == 3) {
if (lo < 1 || lo > hi || k < 0) break;
if (lo == 0 && hi == 0 && k == 0) break;
total = 0;
if (lo == 1) {
lo++;
if (k == 0) total++; }
memset(counts, 0, sizeof(counts));
bitcounts_range(counts, lo, hi);
for (i = 1; i < 64; i++)
if (K(i)+1 == k)
total += counts[i];
printf("%lld\n", total); }
return 0; }
which runs just fine for values up to 2^63-1 (LLONG_MAX).
For 48238 1000000000000000000 3 it gives 513162479025364957, which certainly seems plausible
edit
giving the inputs of
48238 1000000000000000000 1
48238 1000000000000000000 2
48238 1000000000000000000 3
48238 1000000000000000000 4
gives outputs of
44
87878254941659920
513162479025364957
398959266032926842
Those add up to 999999999999951763 which is correct. The value for k=1 is correct (there are 44 powers of two in that range 2^16 up to 2^59). So while I'm not sure the other 3 values are correct, they're certainly plausible.
The idea behind this answer can help you develop a very fast solution. For ranges 0..2^N the complexity of a potential algorithm would be O(N) in the worst case (assuming that the complexity of long arithmetic is O(1)). If programmed correctly it should easily handle N = 1000000 in a matter of milliseconds.
Imagine we have the following values:
LO = 0; (0000000000000000000000000000000)
HI = 2147483647; (1111111111111111111111111111111)
The lowest possible N1 in range LO..HI is 0
The highest possible N1 in range LO..HI is 31
So the N2..NN part of the computation only has to be done for one of 32 possible values of N1 (i.e. 0..31),
which can be done simply, even without a computer.
Now let's compute the amount of numbers with N1=X in the range of values LO..HI.
When we have X = 0 we have count(N1=X) = 1; this is the following value:
1 0000000000000000000000000000000
When we have X = 1 we have count(N1=X) = 31; these are the following values:
01 1000000000000000000000000000000
02 0100000000000000000000000000000
03 0010000000000000000000000000000
...
30 0000000000000000000000000000010
31 0000000000000000000000000000001
When we have X = 2 we have the following pattern:
1100000000000000000000000000000
How many unique strings can be formed with 29 '0's and 2 '1's?
Imagine the rightmost '1'(#1) is cycling from left to right, we get the following picture:
01 1100000000000000000000000000000
02 1010000000000000000000000000000
03 1001000000000000000000000000000
...
30 1000000000000000000000000000001
Now we've got 30 unique strings while moving the '1'(#1) from left to right; it is now impossible to
create a new unique string by moving the '1'(#1) in any direction. This means we should move '1'(#2) to the right,
and let's also reset the position of '1'(#1) as far left as possible while keeping the string unique. We get:
01 0110000000000000000000000000000
now we do the cycling of '1'(#1) once again
02 0101000000000000000000000000000
03 0100100000000000000000000000000
...
29 0100000000000000000000000000001
Now we've got 29 unique strings, continuing this whole operation 28 times we get the following expression
count(N1=2) = 30 + 29 + 28 + ... + 1 = 465
When we have X = 3 the picture remains similar but we are moving '1'(#1), '1'(#2), '1'(#3)
Moving the '1'(#1) creates 29 unique strings, when we start moving '1'(#2) we get
29 + 28 + ... + 1 = 435 unique strings, after that we are left to process '1'(#3) so we have
29 + 28 + ... + 1 = 435
28 + ... + 1 = 406
...
+ 1 = 1
435 + 406 + 378 + 351 + 325 + 300 + 276 +
253 + 231 + 210 + 190 + 171 + 153 + 136 +
120 + 105 + 091 + 078 + 066 + 055 + 045 +
036 + 028 + 021 + 015 + 010 + 006 + 003 + 001 = 4495
Let's try to solve the general case, i.e. when we have N zeros and M ones.
The overall amount of permutations of a string of length (N + M) is equal to (N + M)!
The amount of '0' duplicates in this string is equal to N!
The amount of '1' duplicates in this string is equal to M!
thus the overall amount of unique strings formed of N zeros and M ones is

              (N + M)!           31!
  F(N, M) = -------------  =>  ---------- = 4495
             (N!) * (M!)        3! * 28!

(here N = 28 zeros and M = 3 ones, matching the 31-character strings above).
Edit:
F(N, M) = Binomial(N + M, M)
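As a quick sanity check of the formula, a few lines of C++ evaluate Binomial(N + M, M) with 64-bit arithmetic and reproduce the 4495 figure for 28 zeros and 3 ones (the helper name binomial is just for this sketch):

#include <cstdio>

// Multiplicative binomial coefficient; each partial product is itself a
// binomial coefficient, so the division is always exact.
long long binomial(long long n, long long k) {
    long long r = 1;
    for (long long i = 1; i <= k; ++i) r = r * (n - k + i) / i;
    return r;
}

int main() {
    printf("%lld\n", binomial(28 + 3, 3));  // F(28, 3) = 4495
    return 0;
}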
Now let's consider a real life example:
LO = 43797207; (0000010100111000100101011010111)
HI = 1562866180; (1011101001001110111001000000100)
So how do we apply our unique permutations formula to this example? We don't know how
many '1's are located below LO and how many '1's are located above HI.
So let's count these permutations below LO and above HI.
Let's remember how we cycled '1'(#1), '1'(#2), ...
1111100000000000000000000000000 => 2080374784
1111010000000000000000000000000 => 2046820352
1111001000000000000000000000000 => 2030043136
1111000000000000000000000000001 => 2013265921
1110110000000000000000000000000 => 1979711488
1110101000000000000000000000000 => 1962934272
1110100100000000000000000000000 => 1954545664
1110100010000000000000000000001 => 1950351361
As you can see, this cycling process decreases the decimal values smoothly. So we need to count the amount of
cycles until we reach the HI value. But we shouldn't be counting these values one by one, because
the worst case can generate up to 32!/(16!*16!) = 601080390 cycles, which would take very long :)
So we need to cycle chunks of '1's at once.
Having our example we would want to count the amount of cycles of a transformation
1111100000000000000000000000000 => 1011101000000000000000000000000
1011101001001110111001000000100
So how many cycles does the transformation
1111100000000000000000000000000 => 1011101000000000000000000000000
require? Let's see. The transformation
1111100000000000000000000000000 => 1110110000000000000000000000000
is equal to the following set of cycles:
01 1111100000000000000000000000000
02 1111010000000000000000000000000
...
27 1111000000000000000000000000001
28 1110110000000000000000000000000
So we need 28 cycles to transform
1111100000000000000000000000000 => 1110110000000000000000000000000
How many cycles do we need to transform
1111100000000000000000000000000 => 1101110000000000000000000000000
performing following moves we need:
1110110000000000000000000000000 28 cycles
1110011000000000000000000000000 27 cycles
1110001100000000000000000000000 26 cycles
...
1110000000000000000000000000011 1 cycle
and 1 cycle for receiving:
1101110000000000000000000000000 1 cycle
thus receiving 28 + 27 + ... + 1 + 1 = 406 + 1
but we have seen this value before and it was the result for the amount of unique permutations, which was
computed for 2 '1' and 27 '0'. This means that amount of cycles while moving
11100000000000000000000000000 => 01110000000000000000000000000
is equal to moving
_1100000000000000000000000000 => _0000000000000000000000000011
plus one additional cycle
so this means if we have M zeros and N ones and want to move the chunk of U '1' to the right we will need to
perform the following amount of cycles:
                  (U - 1 + M)!
  f(U, M) = 1 + ----------------
                  M! * (U - 1)!
Edit:
f(U, M) = 1 + Binomial(U - 1 + M, M)
Now let's come back to our real life example:
LO = 43797207; (0000010100111000100101011010111)
HI = 1562866180; (1011101001001110111001000000100)
so what we want to do is count the amount of cycles needed to perform the following
transformations (suppose N1 = 6)
1111110000000000000000000000000 => 1011101001000000000000000000000
1011101001001110111001000000100
this is equal to:
1011101001000000000000000000000 1011101001000000000000000000000
------------------------------- -------------------------------
_111110000000000000000000000000 => _011111000000000000000000000000 f(5, 25) = 118756
_____11000000000000000000000000 => _____01100000000000000000000000 f(2, 24) = 301
_______100000000000000000000000 => _______010000000000000000000000 f(1, 23) = 24
________10000000000000000000000 => ________01000000000000000000000 f(1, 22) = 23
thus resulting 119104 'lost' cycles which are located above HI
Regarding LO, there is actually no difference in which direction we are cycling,
so for computing LO we can do the reverse cycling:
0000010100111000100101011010111 0000010100111000100101011010111
------------------------------- -------------------------------
0000000000000000000000000111___ => 0000000000000000000000001110___ f(3, 25) = 2926
00000000000000000000000011_____ => 00000000000000000000000110_____ f(2, 24) = 301
Thus resulting in 3227 'lost' cycles which are located below LO. This means that
overall amount of lost cycles = 119104 + 3227 = 122331
overall amount of all possible cycles = F(6, 25) = 736281
count of numbers with N1=6 in range 43797207..1562866180 is equal to 736281 - 122331 = 613950
I won't provide the remaining part of the solution. It is not that hard to grasp the remaining part. Good luck!
I think it's a problem in discrete mathematics.
Assuming LOW is 0
(otherwise we can insert a function for summing the numbers below LOW),
and from the numbers shown I understand the longest number will consist of up to 60 binary digits at most:
alg(HIGH, k)
    l = len(HIGH)
    sum = 0;
    for (i = 0; i < l; i++)
    {
        count = (l choose i);
        nwia = numbers_with_i_above(i, HIGH);
        if canreach(i, k) sum += (count - nwia);
    }
all the numbers appear
none is listed twice
numbers_with_i_above is trivial
canreach with numbers up to 60 is easy
len is the length of a binary representation
Zobgib,
The key to this problem is not understanding how rapidly K's pattern grows, but HOW it grows. The first step in this is to understand (as your coach said) how binary numbers count, as this determines everything about how K is determined. Binary numbers follow a pattern that is distinct when counting the number of positive bits. It's a single progressive repetitive pattern. I am going to demonstrate in an unusual way...
Assume i is an integer value. Assume b is the number of positive bits in i
i = 1;
b = 1;
i = 2; 3;
b = 1; 2;
i = 4; 5; 6; 7;
b = 1; 2; 2; 3;
i = 8; 9; 10; 11; 12; 13; 14; 15;
b = 1; 2; 2; 3; 2; 3; 3; 4;
i = 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31;
b = 1; 2; 2; 3; 2; 3; 3; 4; 2; 3; 3; 4; 3; 4; 4; 5;
I assure you, this pattern holds to infinity, but if needed you
should be able to find or construct a proof easily.
If you look at the data above, you'll notice a distinct pattern related to 2^n. Each time you have an integer exponent of 2, the pattern will reset by including each term of the previous pattern, and then each term of the previous pattern incremented by 1. As such, to get K, you just apply the new number to the pattern above. The key is to find a single (efficient) expression to get your number of bits.
For demonstration, yet again, you can further extrapolate a new pattern off of this, because it is static and follows the same progression. Below is the original data modified with its K value (based on the recursion).
Assume i is an integer value. Assume b is the number of positive bits in i
i = 1;
b = 1;
K = 1;
i = 2; 3;
b = 1; 2;
K = 1; 2;
i = 4; 5; 6; 7;
b = 1; 2; 2; 3;
K = 1; 2; 2; 3;
i = 8; 9; 10; 11; 12; 13; 14; 15;
b = 1; 2; 2; 3; 2; 3; 3; 4;
K = 1; 2; 2; 3; 2; 3; 3; 2;
i = 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31;
b = 1; 2; 2; 3; 2; 3; 3; 4; 2; 3; 3; 4; 3; 4; 4; 5;
K = 1; 2; 2; 3; 2; 3; 3; 2; 2; 3; 3; 2; 3; 2; 2; 3;
If you notice, K follows a similar pattern, with a special condition... Every time b is a power of 2, it actually lowers the K value by 2. So, if you follow a binary progression, you should be able to easily map your K values. Since this pattern is dependent on powers of 2, and the pattern depends upon finding the nearest power of 2 and starting there, I propose the following solution. Take your LOW value and find the nearest power of 2 (p) such that 2^p < LOW. This can be done by "counting the bits" for just the lowest number. Again, once you know which exponent it is, you don't have to count the bits for any other number. You just increment through the pattern and you will have your b and hence K (which follows the same pattern).
Note: If you are particularly observant, you can use the previous b or K to determine the next. If the current i is odd, add 1 to the previous b. If the current i is divisible by 4, then you decrement b by either 1 or 2, dependent upon whether it's in the first half of the pattern or the second half. And, of course, if i is a power of 2, start over at 1.
Fuzzical Logic
Pseudo-code Example (non-Optimized)
{ var LOW, HIGH
var power = 0
//Get Nearest Power Of 2
for (var i = 0 to 60) {
// Compare using bitwise AND
if (LOW bitAND (2 ^ i) = (2 ^ i)) {
if ((2 ^ i) <= LOW) {
set power to i
}
else {
// Found the Power: end the for loop
set i to 61
}
}
}
// Automatically 1 at a Power of 2
set numOfBits to 1
array numbersWithPositiveBits with 64 integers = 0
// Must create the pattern from Power of 2
set foundLOW to false
for (var j = (2^power) to HIGH) {
set lenOfPatten to (power + 1)
// Don't record until we have found the LOW value
if ((foundLOW is false) bitAND (j is equal to LOW)) {
set foundLOW to true
}
// If j is odd, increment numOfBits
if ((1 bitAND j) is equal to 1) {
increment numOfBits
}
else if (j modulus 4 == 0) {
decrement numOfBits accordingly //Figure this one out yourself, please
}
else if ((j - (2^power)) == (power + 1)) {
// We are at the next power
increment power
// Start pattern over
set numOfBits to 1
}
// Record if appropriate
if (foundLOW is equal to true) {
increment element numOfBits in array numbersWithPositiveBits
}
}
// From here, derive your K values.
You can solve this efficiently as follows:
ret = 0;
for (i = 1; i <= 64; i++) {
if (computeK(i) != desiredK) continue;
ret += numBelow(HIGH, i) - numBelow(LO - 1, i);
}
return ret;
The function numBelow(high, numSet) computes the number of integers less than or equal to high and greater than zero that have numSet bits set. To implement numBelow(high, numSet) efficiently, you can use something like the following:
numBelow(high, numSet) {
t = floor(lg(high));
ret = 0;
if (numBitsSet(high) == numSet) ret++;
while (numSet > 0 && t > 0) {
ret += nchoosek(t - 1, numSet);
numSet--;
while (--t > 0 && (((1 << t) & high) == 0));
}
return ret;
}
This is a full working example in C++17:
#include <bits/stdc++.h>
using namespace std;
#define BASE_MAX 61
typedef unsigned long long ll;
ll combination[BASE_MAX][BASE_MAX];
vector<vector<ll>> NK(4);
int count_bit(ll n) {
int ret = 0;
while (n) {
if (n & 1) {
ret++;
}
n >>= 1;
}
return ret;
}
int get_leftmost_bit_index(ll n) {
int ret = 0;
while (n > 1) {
ret++;
n >>= 1;
}
return ret;
}
void pre_calculate() {
for (int i = 0; i < BASE_MAX; i++)
combination[i][0] = 1;
for (int i = 1; i < BASE_MAX; i++) {
for (int j = 1; j < BASE_MAX; j++) {
combination[i][j] = combination[i - 1][j] + combination[i - 1][j - 1];
}
}
NK[0].push_back(1);
for (int i = 2; i < BASE_MAX; i++) {
int bitCount = count_bit(i);
if (find(NK[0].begin(), NK[0].end(), bitCount) != NK[0].end()) {
NK[1].push_back(i);
}
}
for (int i = 1; i < BASE_MAX; i++) {
int bitCount = count_bit(i);
if (find(NK[1].begin(), NK[1].end(), bitCount) != NK[1].end()) {
NK[2].push_back(i);
}
}
for (int i = 1; i < BASE_MAX; i++) {
int bitCount = count_bit(i);
if (find(NK[2].begin(), NK[2].end(), bitCount) != NK[2].end()) {
NK[3].push_back(i);
}
}
}
ll how_many_numbers_have_n_bit_in_range(ll lo, ll hi, int bit_count) {
if (bit_count == 0) {
if (lo == 0) return 1;
else return 0;
}
if (lo == hi) {
return count_bit(lo) == bit_count;
}
int lo_leftmost = get_leftmost_bit_index(lo); // 100 -> 2
int hi_leftmost = get_leftmost_bit_index(hi); // 1101 -> 3
if (lo_leftmost == hi_leftmost) {
return how_many_numbers_have_n_bit_in_range(lo & ~(1LL << lo_leftmost), hi & ~(1LL << hi_leftmost),
bit_count - 1);
}
if (lo != 0) {
return how_many_numbers_have_n_bit_in_range(0, hi, bit_count) -
how_many_numbers_have_n_bit_in_range(0, lo - 1, bit_count);
}
ll ret = combination[hi_leftmost][bit_count];
ret += how_many_numbers_have_n_bit_in_range(1LL << hi_leftmost, hi, bit_count);
return ret;
}
int main(void) {
pre_calculate();
while (true) {
ll LO, HI;
int X;
scanf("%lld%lld%d", &LO, &HI, &X);
if (LO == 0 && HI == 0 && X == 0)
break;
switch (X) {
case 0:
cout << (LO == 1) << endl;
break;
case 1: {
int ret = 0;
ll power2 = 1;
for (int i = 0; i < BASE_MAX; i++) {
power2 *= 2;
if (power2 > HI)
break;
if (power2 >= LO)
ret++;
}
cout << ret << endl;
break;
}
case 2:
case 3:
case 4: {
vector<ll> &addedBitsSizes = NK[X - 1];
ll ret = 0;
for (auto bit_count_to_added: addedBitsSizes) {
ll result = how_many_numbers_have_n_bit_in_range(LO, HI, bit_count_to_added);
ret += result;
}
cout << ret << endl;
break;
}
default:
cout << 0 << endl;
break;
}
}
return 0;
}

Resources