Can anyone suggest an O(N) algorithm that ORs the results of the pairwise XOR operations on N numbers? For example, let N = 3 and the numbers be 5, 7 and 9:
5 ^ 7 = 2, 7 ^ 9 = 14, 9 ^ 5 = 12
2|14|12 = 14
(^ denotes the XOR operation and | the OR operation)
If A[i] differs from A[j] in k-th bit then:
either A[i] differs from A[1] in k-th bit
or A[j] differs from A[1] in k-th bit
So, N operations are enough:
// A[i] = array of source numbers, i = 1..N
Result = 0
for i = 2 to N
    Result = Result OR (A[i] XOR A[1])
print(Result)
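The pseudocode above translates directly into Python (the function name is my own):

```python
def or_of_pairwise_xors(a):
    # XORing every element against a[0] covers all pairs, because
    # a[i] ^ a[j] == (a[i] ^ a[0]) ^ (a[j] ^ a[0]): any bit where a[i]
    # and a[j] differ must differ from a[0] in one of the two terms.
    result = 0
    for x in a[1:]:
        result |= x ^ a[0]
    return result

print(or_of_pairwise_xors([5, 7, 9]))  # → 14, as in the example above
```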
The bits are independent, so you can compute the result for each bit separately.
So given a sequence of bits, we want to know the OR of all pairwise XORs. The OR is 1 iff at least one of the pairwise XORs is 1. This is the case iff you have at least one 0 and at least one 1 in your input sequence. It is very easy to check whether this is the case.
Total time: O(wN) where w is the maximum bit length of any of your input numbers.
Assuming that w is smaller than the word size, we can also solve it in O(N): Just solve all bits in parallel:
have_one = 0
have_zero = 0
for x in input:
    have_one |= x
    have_zero |= ~x
return have_one & have_zero
have_one has a bit set exactly at those positions where some input has a 1. have_zero has a bit set exactly at those positions where some input has a 0. The result is 1 at those positions where both are the case.
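As a runnable sketch (the function wrapper and its name are mine; note that Python's unbounded integers make ~x negative, but the final AND with have_one keeps only the valid bits):

```python
def or_of_pairwise_xors_parallel(numbers):
    have_one = 0   # bit set where some input has a 1
    have_zero = 0  # bit set where some input has a 0
    for x in numbers:
        have_one |= x
        have_zero |= ~x
    return have_one & have_zero

print(or_of_pairwise_xors_parallel([5, 7, 9]))  # → 14, matching the first example
```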
I was solving the below problem from USACO Training. I found this really fast solution which I am unable to absorb fully.
Problem: Consider an ordered set S of strings of N (1 <= N <= 31) bits. Bits, of course, are either 0 or 1.
This set of strings is interesting because it is ordered and contains all possible strings of length N that have L (1 <= L <= N) or fewer bits that are `1'.
Your task is to read a number I (1 <= I <= sizeof(S)) from the input and print the Ith element of the ordered set for N bits with no more than L bits that are `1'.
sample input: 5 3 19
output: 10110
The two solutions I could think of:
Firstly, the brute-force solution, which goes through all possible combinations of bits, stores the strings whose count of '1's is less than or equal to L, and returns the Ith string.
Secondly, we can find all placements of '1's among the 5 positions for each count in the range 0 to L, sort the strings in increasing order, and return the Ith string.
The best solution:
The OP who posted the solution used combinations instead of permutations. According to him, the total number of strings possible is 5C0 + 5C1 + 5C2 + 5C3.
So at every position i of the string, we decide whether to include the ith bit in our output or not, based on the total number of ways we have to build the rest of the string. Below is a dry run of the entire approach for the above input.
N = 5, L = 3, I = 19
00000
at i = 0, for the remaining string, we have 4C0 + 4C1 + 4C2 + 4C3 = 15
That is, 15 other numbers are possible with the last 4 positions. As 15 is less than 19, our first bit has to be set, and I becomes 19 - 15 = 4.
N = 5, L = 2, I = 4
10000
at i = 1, we have 3C0 + 3C1 + 3C2 (as we have used 1 from L) = 7
as 7 is greater than 4, we cannot set this bit.
N = 5, L = 2, I = 4
10000
at i = 2 we have 2C0 + 2C2 = 2
as 2 <= I(4), we take this bit in our output.
N = 5, L = 1, I = 2
10100
at i = 3, we have 1C0 + 1C1 = 2
as 2 <= I(2) we can take this bit in our output.
As L == 0, we stop, and 10110 is our answer. I was amazed to find this solution; however, I am finding it difficult to get the intuition behind it.
How does this solution sort-of zero in directly to the Ith number in the set?
Why does the order of the bits not matter in the combinations of set bits?
Suppose we have precomputed the number of strings of length n with k or fewer bits set. Call that S(n, k).
Now suppose we want the i'th string (in lexicographic order) of length N with L or fewer bits set.
All the strings with the most significant bit zero come before those with the most significant bit one. There are S(N-1, L) strings with the most significant bit zero, and S(N-1, L-1) strings with the most significant bit one. So if we want the i'th string: if i <= S(N-1, L), it must have the top bit zero and the remainder must be the i'th string of length N-1 with at most L bits set; otherwise it must have the top bit one, and the remainder must be the (i - S(N-1, L))'th string of length N-1 with at most L-1 bits set.
All that remains to code is to precompute S(n, k), and to handle the base cases.
You can figure out a combinatorial solution to S(n, k) as your friend did, but it's more practical to use a recurrence relation: S(n, k) = S(n-1, k) + S(n-1, k-1), and S(0, k) = S(n, 0) = 1.
Here's code that does all that, and as an example prints out all 8-bit numbers with 3 or fewer bits set, in lexicographic order. If i is out of range, then it raises an IndexError exception, although in your question you assume i is always in range, so perhaps that's not necessary.
S = [[1] * 32 for _ in range(32)]
for n in range(1, 32):
    for k in range(1, 32):
        S[n][k] = S[n-1][k] + S[n-1][k-1]

def ith_string(n, k, i):
    if n == 0:
        if i != 1:
            raise IndexError
        return ''
    elif i <= S[n-1][k]:
        return "0" + ith_string(n-1, k, i)
    elif k == 0:
        raise IndexError
    else:
        return "1" + ith_string(n-1, k-1, i - S[n-1][k])
print([ith_string(8, 3, i) for i in range(1, 94)])
Suppose I am given a number n. I want to find all the even numbers which are less than n and also have a greater exponent of 2 in their prime factorization than the exponent of 2 in the prime factorization of n.
If n = 18 the answer is 4, i.e. the numbers 4, 8, 12, 16.
Using a for loop from i = 2 to n - 1 and checking every i exceeds the time limit in the code.
My approach is to count the number of times n can be divided by 2. But the constraint is n <= 10^18, so I think the answer should be an O(1) operation. Can anyone help me find a formula or algorithm to compute the answer as fast as possible?
First assume n is an odd number. Obviously every even number less than n also has a greater exponent of 2 in its factorization, so the answer will be equal to (n−1) / 2.
Now suppose n is equal to 2 times some odd number p. There are (p−1) / 2 even numbers that are smaller than p, so it follows that there are also (p−1) / 2 numbers smaller than n that are divisible by at least 2^2.
In general, given any number n that is equal to 2^k times some odd number q, there will be (q−1) / 2 numbers that are smaller than n and have a larger exponent of 2 (> 2^k) in their factorization.
So a function like this should work:
def count_smaller_numbers_with_greater_power_of_2_as_a_factor(n):
    assert n > 0
    while n % 2 == 0:
        n >>= 1
    return (n - 1) // 2
Example 1 (n = 18)
Since n is even, keep dividing it by 2 until you get an odd number. This only takes one step (because n / 2 = 9)
Count the number of even numbers that are less than 9. This is equal to (9−1) / 2 = 4
Example 2 (n = 10^18)
In this case, n = 2^18 × 5^18. So if we keep halving n until we get an odd number, the result will be 5^18.
The number of even numbers that are less than 5^18 is equal to (5^18 − 1) / 2 = 1907348632812
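A brute-force cross-check of this formula (the helper names are mine, and count_fast mirrors the function above):

```python
def count_fast(n):
    # strip factors of 2, then count even numbers below the odd part
    while n % 2 == 0:
        n >>= 1
    return (n - 1) // 2

def count_brute(n):
    # directly count even m < n whose exponent of 2 exceeds n's
    def exp2(m):
        e = 0
        while m % 2 == 0:
            m >>= 1
            e += 1
        return e
    target = exp2(n)
    return sum(1 for m in range(2, n, 2) if exp2(m) > target)

print(count_fast(18))  # → 4, i.e. the numbers 4, 8, 12, 16
```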
Your division count is limited by the constant 64 (since 10^18 < 2^64), and O(64) = O(1) in complexity theory.
The number of twos in a value's factorization is equal to the number of trailing zero bits in the binary representation of that value, so you can use bit operations (like & 1 and the right shift >>) to accelerate the code a bit, or apply some bit tricks.
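For instance, in Python the trailing-zero count (the exponent of 2) can be read off with the n & -n trick, which isolates the lowest set bit (the function name is mine):

```python
def exponent_of_two(n):
    # n & -n isolates the lowest set bit; its bit length minus one is
    # the number of trailing zero bits, i.e. the exponent of 2 in n
    assert n > 0
    return (n & -n).bit_length() - 1

print(exponent_of_two(18))  # 18 = 2 * 9, so → 1
print(exponent_of_two(96))  # 96 = 2^5 * 3, so → 5
```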
First, suppose n = 2^k * something. Find out k:
long k = 0;
while(n % 2 == 0) { n >>= 1; k++; }
n <<= k; // restore the original value of n
Now that you know k, multiply 2^k by 2 to get the first power of 2 greater than 2^k:
long next_power = 1L << (k + 1); // same as 2^(k + 1), using a long literal to avoid int overflow
And lastly, print all the multiples of next_power that are below n; these are exactly the numbers whose exponent of 2 is greater than k:
for(long i = next_power; i < n; i += next_power) cout << i << endl;
EXAMPLE: n = 18
k will be 1, because 18 = 2^1 * 9 and the while will finish there.
next_power will be 4 (= 1 << (k + 1) = 2 ^ (k + 1)).
for(long i = next_power; i < n; i += next_power) cout<<i<<endl; will print 4, 8, 12 and 16.
This is very easy to do with a gcd trick I found:
You can find the count with // 4. So for 10^18:
In [298]: pow(10,18)//4
Out[298]: 250000000000000000
And the count for 18 by // 4, which is 4.
Find the numbers that meet your criteria. You can check by using my algorithm below, taking the len of the resulting array, and comparing it with num // 4 to see that it is the answer you're looking for: an exact match. You'll notice that it's every fourth number that has an exponent of 2 greater than one. So the count of numbers can be found with // 4.
import math

def lars_last_modulus_powers_of_two(hm):
    return math.gcd(hm, 1 << hm.bit_length())

def findevennumberswithexponentgreaterthan2lessthannum(hm):
    if hm % 2 != 0:
        return "only for use of even numbers"
    vv = []
    for x in range(hm, 1, -2):
        if lars_last_modulus_powers_of_two(x) != 2:
            vv.append(x)
    return vv
Result:
In [3132]: findevennumberswithexponentgreaterthan2lessthannum(18)
Out[3132]: [16, 12, 8, 4]
This is the fastest way to do it, as you skip the modulo work on the way to the answer. You instantly get the number you need with lars_last_modulus_powers_of_two(num), which is one operation per number.
Here are some examples to show the answer is right:
In [302]: len(findevennumberswithexponentgreaterthan2lessthannum(100))
Out[302]: 25
In [303]: 100//4
Out[303]: 25
In [304]: len(findevennumberswithexponentgreaterthan2lessthannum(1000))
Out[304]: 250
In [305]: 1000//4
Out[305]: 250
In [306]: len(findevennumberswithexponentgreaterthan2lessthannum(23424))
Out[306]: 5856
In [307]: 23424//4
Out[307]: 5856
I found this problem in a hiring contest(which is over now). Here it is:
You are given two natural numbers N and X. You are required to create an array of N natural numbers such that the bitwise XOR of these numbers is equal to X. The sum of all the natural numbers in the array should be as small as possible.
If there exist multiple arrays, print the smallest one
Array A < Array B if
A[i] < B[i] for some index i, and A[j] = B[j] for all indices j less than i
Sample Input: N=3, X=2
Sample output : 1 1 2
Explanation: We have to print 3 natural numbers having the minimum sum. Thus the N numbers are [1 1 2].
My approach:
If N is odd, I put N-1 ones in the array (so that their xor is zero) and then put X
If N is even, I put N-1 ones again and then put X-1 (if X is odd) or X+1 (if X is even)
But this algorithm failed most of the test cases. For example, when N=4 and X=6 my output is 1 1 1 7, but it should be 1 1 2 4.
Anyone knows how to make the array sum minimum?
In order to have the minimum sum, you need to make sure that when your target is X, you are not cancelling the bits of X and recreating them again, because that would increase the sum. For this, you have to create the bits of X one by one (ideally) from the end of the array. So, for your example of N=4 and X=6, we have (I use ^ to show xor):
X = 6 = 110 (binary) = 2 + 4. Note that 2^4 = 6 as well, because these numbers don't share any common bits. So, the output is 1 1 2 4.
So, we start by creating the most significant bits of X from the end of the output array. Then, we also have to handle the corner cases for different values of N. I'm going with a number of different examples to make the idea clear:
A) X=14, N=5:
X=1110=8+4+2. So, the array is 1 1 2 4 8.
B) X=14, N=6:
X=8+4+2. The array should be 1 1 1 1 2 12.
C) X=15, N=6:
X=8+4+2+1. The array should be 1 1 1 2 4 8.
D) X=15, N=5:
The array should be 1 1 1 2 12.
E) X=14, N=2:
The array should be 2 12. Because 12 = 4^8
So, we go as follows. We compute the number of powers of 2 in X. Let this number be k.
Case 1 - If k >= n (example E): we start by picking the smallest powers from left to right and merge the remaining ones into the last position in the array.
Case 2 - If k < n (examples A, B, C, D): we compute h = n - k. If h is odd we set h = n - k + 1. Now, we start by putting h 1's at the beginning of the array. The number of places left is then at most k, so we can follow the idea of Case 1 for the remaining positions. Note that in Case 2, instead of adding an odd number of 1's we add an even number of 1's (an even count of 1's cancels under XOR) and then do some merging at the end. This guarantees that the array sum is the smallest it can be.
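A Python sketch of this construction (the function name is mine; it covers the cases walked through above and assumes the padded case leaves at least one free slot, so it does not handle every corner case):

```python
def min_sum_xor_array(n, x):
    # split x into its set-bit powers of two, smallest first
    powers = [1 << i for i in range(x.bit_length()) if (x >> i) & 1]
    k = len(powers)
    if k >= n:
        # Case 1: keep the smallest powers, merge the rest into the
        # last slot (the bits are disjoint, so their sum equals their xor)
        return powers[:n - 1] + [sum(powers[n - 1:])]
    # Case 2: pad with an even number of 1's, which cancel under xor
    pad = (n - k) if (n - k) % 2 == 0 else (n - k + 1)
    slots = n - pad
    return [1] * pad + powers[:slots - 1] + [sum(powers[slots - 1:])]

print(min_sum_xor_array(4, 6))  # → [1, 1, 2, 4]
```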
We have to consider that we have to minimize the sum of the array for the solution, and that is the key point.
First calculate the set bits in X. If the count of set bits is greater than or equal to N, then split X into N integers based on its set bits, like:
X = 15, N = 2
The set bits in 15 are 4, so the solution is 1 14.
If N = 3 the solution is 1 2 12.
This minimizes the array sum too.
In the other case, if the set bits are fewer than N:
calculate difference = N - setbits(X)
If the difference is even, then add 1's as needed and apply the above algorithm; all the added 1's will cancel out.
If the difference is odd, then add 1's, but now you have to take care of that one extra 1 in the answer array.
Check for the corner cases too.
def power_of_two?(n)
  n & (n - 1) == 0
end
This method checks if a given number n is a power of two.
How does this work? I don't understand the usage of &
& is called the bitwise AND operator.
The AND operator walks through the binary representations of the two supplied integers bit by bit. If the bits at the same position in both integers are 1, the resulting integer will have that bit set to 1. If not, the bit will be set to 0:
(a = 18).to_s(2) #=> "10010"
(b = 20).to_s(2) #=> "10100"
(a & b).to_s(2) #=> "10000"
If the number is a power of two already, then one less will result in a binary number that only has the lower-order bits set, so ANDing the two gives zero because they share no set bits.
Example with 8: 1000 & (1000 - 1) --> (1000 & 0111) --> 0000
To understand it follow "How does this bitwise operation check for a power of 2?".
Example through IRB:
>> 4.to_s(2)
=> "100"
>> 3.to_s(2)
=> "11"
>> 4 & 3
=> 0
>>
This is why you can say 4 is a power of 2.
The "&" is the bit-wise "AND" operator (see http://calleerlandsson.com/2014/02/06/rubys-bitwise-operators/). It compares two numbers, as explained in the following example:
Suppose that n=4 (which is a power of two). This means that n-1=3. In binary (which I'm writing with ones and zeros in quotes like "1101011101" so we can see the bits) we have n="100" and n-1="011".
The bit-wise AND of these two numbers is 0="000" (in the following, each column only contains a single 1, never two 1s)
100 <-- this is n, n=4
011 <-- this is n-1, n-1=3
---
000 <-- this is n & (n-1)
As another example, now lets say that n=14 (not a power of two) and so n-1=13. In that case n="1110" and n-1="1101", and we have n & (n-1) = 12
1110 <-- this is n, n=14
1101 <-- this is n-1, n-1=13
----
1100 <-- this is n & (n-1)
In the above example, the first two columns of n and n-1 both contain a 1, thus the AND of those columns is one.
Okay, let's consider one final example where n is again a power of two; this should make it abundantly clear, if it is not already, why power_of_two? is written as it is. Suppose that n=16 (which is a power of two). This means that n-1=15, so we have n="10000" and n-1="01111".
The bit-wise AND of these two numbers is 0="00000" (in the following, each column only contains a single 1, never two 1s)
10000 <-- this is n, n=16
01111 <-- this is n-1, n-1=15
-----
00000 <-- this is n & (n-1)
Caveat: In the special case that n=0, the function "power_of_two?" will return True even though n=0 is not a power of two. This is because 0 is represented as a bit string of all zeros, and anything ANDed with zero is zero.
So, in general, the function "power_of_two?" will return True if and only if n is a power of two or n is zero. The above examples only illustrate this fact, they do not prove it... However, it is the case.
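That behaviour, including the n = 0 caveat, is easy to confirm by brute force (sketched in Python rather than Ruby; the names are mine):

```python
def power_of_two_check(n):
    # the same test the Ruby method performs
    return n & (n - 1) == 0

actual_powers = {1 << i for i in range(11)}  # 1, 2, 4, ..., 1024
flagged = {n for n in range(1025) if power_of_two_check(n)}
print(flagged - actual_powers)  # → {0}: zero is the only false positive
```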
We wish to prove that
n & (n-1) == 0
if and only if n is a power of 2.
We may assume that n is an integer greater than 1. (In fact, I will use this assumption to obtain a contradiction.)
If n is a power of 2, its binary representation has 1 at bit-offset
p = log2(n)
and 0s at all lower-order bit positions j, j < p. Moreover, since (n-1)+1 = n, n-1 must have 1's at all bit offsets j, 0 <= j < p. Therefore,
n & (n-1) == 0
It remains to prove that if n is not a power of 2 and
n & m == 0
then m != n-1. I assume that m = n-1 and will obtain a contradiction, thereby completing the proof.
n's most significant bit is of course 1. Since n is not a power of 2, n has at least one other bit equal to 1. Among those 1-bits, consider the one at the most significant bit position j.
Since n & (n-1) == 0, n-1 must have a 0 at position j of its binary representation. When we add 1 to n-1, to make it equal n, it must have a 1 at offset j, meaning that n-1 must have 1's in all bit positions < j. Moreover, (n-1)+1 has zeroes in all bit positions < j after 1 is added. But since n = (n-1)+1, that can only be true if j == 0, since n & (n-1) == 0. Hence, for this to be true, n's most-significant and least-significant bits must both equal 1 and all other bits must equal zero. However, since n = (n-1)+1, that would imply n-1 == 0 and hence that n == 1, the needed contradiction.
(Whew! There's got to be an easier proof!)
The procedure for subtracting one from a binary number is, starting from the least significant bit:
if the bit is 0 - turn it into 1 and continue to the next significant bit
if the bit is 1 - turn it into 0 and stop.
This means that if there is more than one 1 digit in a number, not all digits will be toggled (since you stopped before you reached the most significant 1).
Let us say the lowest 1 in our number n is at position i. The bits at positions 0 through i are exactly the ones that change when we subtract one; everything above them does not change. So if we shift n right by i+1, we get the part of the number which did not change when we decreased one; let's call that m. If we shift n-1 right by i+1, we must get the same number m, exactly because it is the part that did not change:
n >> (i+1) == m
(n - 1) >> (i+1) == m
Shifting two numbers right by the same amount also shifts right, by the same amount, the result of &ing them, and we assumed n & (n-1) == 0, so:
(n >> (i+1)) & ((n - 1) >> (i+1)) == 0 >> (i+1)
But 0 >> (i+1) is 0, no matter the shift, so:
(n >> (i+1)) & ((n - 1) >> (i+1)) == 0
Let's put m where we know it is:
m & m == 0
But we also know that:
m & m == m # for any m
So m == 0!
Therefore n & (n - 1) == 0 if and only if there is at most one 1 bit in the number n.
The only numbers which have at most one 1 bit are all the (non-negative) powers of 2 (a leading 1 and a non-negative number of zeroes after it), and the number 0.
QED
In the case of a power of two, the number takes the binary form of a single 1 bit followed by zeros. When decremented, such a value becomes a run of 1's strictly below that bit, so the bitwise AND of the two shares no set bits and is zero. E.g.
0b1000 & (0b1000 - 1) = 0b1000 & 0b111 = 0
So, whatever (num - 1) might become, the key here is that by decreasing num we clear its highest set bit.
On the other hand, if a number is not a power of two, the result must be non-zero.
The reason behind this is that the subtraction can always be carried out without touching the highest bit, because there will always be a lower non-zero bit in the way, and so at least the highest bit makes its way into the mask and will show up in the result.
I need to loop through all n-bit integers which have at most k bits ON (bits set to 1), where 0 < n <= 32 and 0 <= k <= n. For example, if n = 4 and k = 2, these numbers are (in binary digits): 0000, 0001, 0010, 0100, 1000, 0011, 0101, 0110, 1001, 1010, 1100. The order in which these numbers are looped through is not important, but each is visited only once.
Currently I am using this straightforward algorithm:
for x = 0 to (2^n - 1)
count number of bits 1 in x
if count <= k
do something with x
end if
end for
I think this algorithm is inefficient because it has to loop through too many numbers. For example, if n = 32 and k = 2 then it has to loop through 2^32 numbers to find only 529 numbers (which have <= 2 bits 1).
My question is: is there any more efficient algorithm to do this?
You are going to need to make your own bitwise counting algorithm for incrementing the loop counter. Basically, for calculating the next number in the sequence, if there are fewer than k '1' bits, increment normally, if there are k '1' bits, pretend the '0' bits after the least significant '1' don't exist and increment normally.
Another way of saying it is that with a standard counter you add 1 at the least significant bit and carry. In your case, when there are k '1's, you add the 1 at the lowest '1' bit instead.
For instance if k is 2 and you have 1010 ignore the last 0 and increment the 101 so you get 110 and then add in the 0 for 1100.
Here is pseudocode for incrementing the number:
Count 1 bits in current number
If number of 1's is < k
    number = number + 1
Else
    shift_number = number of 0 bits after least significant 1 bit
    number = number >> shift_number
    number = number + 1
    number = number << shift_number
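The same increment can be sketched in Python (the names are mine; it assumes k >= 1, and uses x & -x to isolate the lowest set bit when counting trailing zeros):

```python
def next_with_at_most_k_bits(x, k):
    if bin(x).count("1") < k:
        return x + 1
    # already k ones: drop the zeros below the lowest 1, increment
    # what is left, then shift the zeros back in
    shift = (x & -x).bit_length() - 1
    return ((x >> shift) + 1) << shift

def numbers_with_at_most_k_bits(n, k):
    x = 0
    while x < (1 << n):
        yield x
        x = next_with_at_most_k_bits(x, k)

print(list(numbers_with_at_most_k_bits(4, 2)))
# → [0, 1, 2, 3, 4, 5, 6, 8, 9, 10, 12]
```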
Take the answer to Bit hack to generate all integers with a given number of 1s and loop over [1,k]. That will generate each integer with up to k bits once.
If you have to set 2 bits out of 4, the lower set bit can be at position at most n - k = 2 (counting positions 0..3) and the higher set bit anywhere above it.
So you could loop with 2 loops
for lowest in 0 to (n - k)
    for highest in lowest + 1 to (n - 1)
        (0000).setBit(lowest).setBit(highest)
Since you don't want to write 16 loops for 16 bits, you might transform this idea into a recursive one.
Combinatorics.
If you have integers with n bit length and r bits set then there are nCr such numbers. Simply use a combination generator and iterate over the combinations as appropriate.
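In Python, for instance, itertools.combinations over the bit positions does exactly that (the function name is mine):

```python
from itertools import combinations

def numbers_with_k_or_fewer_bits(n, k):
    # one combination of set-bit positions per generated number,
    # so the loop never filters anything out
    for r in range(k + 1):
        for positions in combinations(range(n), r):
            yield sum(1 << p for p in positions)

print(sorted(numbers_with_k_or_fewer_bits(4, 2)))
# → [0, 1, 2, 3, 4, 5, 6, 8, 9, 10, 12]
```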
You can use a while loop as shown below. This loop iterates only as many times as there are bits set to 1. In case your number of set bits is fixed, you can use a break.
countbits = 0
while num > 0
    num = num & (num - 1)
    countbits = countbits + 1
end while
e.g.:
if num is 64 (1000000) it will loop only once,
if num is 72 (1001000) then 2 times