I'm having difficulty simplifying the following function into several atomic binary operations. It feels like it should be possible, but I've been scratching my head for a few hours already:
public UInt32 reverse_xor_lshift(UInt32 y, Int32 shift)
{
    var x = y & (UInt32)((1 << shift) - 1);
    for (int i = 0; i < (32 - shift); i++) {
        var bit = ((x & (1 << i)) >> i) ^ ((y & (1 << (shift + i))) >> (shift + i));
        x |= (UInt32)(bit << (shift + i));
    }
    return x;
}
What the function does is compute the inverse of Z = X ^ (X << Y); in other words, reverse_xor_lshift(Z, Y) == X.
You can invert it with far fewer operations, though in a harder-to-understand way, by using the same technique as used in converting back from Gray code:
Apply the transformation z ^= z << i, where i starts at shift and doubles every iteration.
In pseudocode:
    i = shift
    while (i < 32)
        z ^= z << i
        i *= 2
This works because in the first step you XOR the (unaffected) lowest bits back into the place where they were "XORed in", thus "XORing them out". The part that has been restored to the original is then twice as wide. The new number is of the form x ^ (x << k) ^ (x << k) ^ (x << 2k) = x ^ (x << 2k), which is the same situation again but with twice the offset, so the same trick will work again, decoding yet more of the original bits.
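Here is that pseudocode as a compilable sketch (my C translation, not code from the answer; it assumes shift >= 1, since shift == 0 gives z = x ^ x = 0 and nothing is recoverable):

#include <stdint.h>

/* Inverse of z = x ^ (x << shift). Each pass turns x ^ (x << i) into
   x ^ (x << 2i); once the offset reaches 32 the shifted term vanishes
   (mod 2^32) and only x remains. */
uint32_t reverse_xor_lshift(uint32_t z, int shift)
{
    for (int i = shift; i < 32; i *= 2)
        z ^= z << i;
    return z;
}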
Having a bit of trouble fully understanding a basic algo which takes a number x and swaps the bits at positions i and j. The algo is this well-known one:
def swap_bits(x, i, j):
    if (x >> i) & 1 != (x >> j) & 1:
        bit_mask = (1 << i) | (1 << j)
        x ^= bit_mask
    return x
As I understand it, the algo works by
checking whether the bits at positions i and j are different. If not, we're done, because swapping identical bits is the same as doing nothing
if they are different, swapping them by flipping both bits, which we can do with XOR.
What I don't fully understand is how the constructing of the bit mask works. I get that the goal of the mask is to identify the subset of bits we want to toggle, but why is (1 << i) | (x << j) the way to do that? I think I see it for a second, then I lose it.
EDIT:
Think I see it now. We're simply creating two binary numbers, one with a bit set in position i and one with a bit set in position j. By ORing these, we get a number with bits set in both positions i and j (e.g. i = 1, j = 2 gives 0b010 | 0b100 = 0b110). We can apply this mask to our input x because XORing a bit with 1 flips it: it yields 0 when the bit is 1, and 1 when the bit is 0, which swaps the (differing) bits.
Your initial intuition that something looks fishy is correct. There's a typo:
>>> def swap_bits(x, i, j):
...     if (x >> i) & 1 != (x >> j) & 1:
...         bit_mask = (1 << i) | (x << j)
...         x ^= bit_mask
...     return x
...
>>> swap_bits(0x55555, 1, 2)
1048579
>>> hex(swap_bits(0x55555, 1, 2))
'0x100003'
>>>
The answer should have been 0x55553. A corrected version would have
bit_mask = (1 << i) | (1 << j)
I agree with one of the comments that this method begs for an if-less implementation. In C:
unsigned swap_bits(unsigned val, int i, int j) {
    unsigned b = ((val >> i) ^ (val >> j)) & 1;  /* 1 iff the two bits differ */
    return ((b << i) | (b << j)) ^ val;          /* flip both bits only if they differ */
}
Given a rectangular grid and a point, I need an algorithm for visiting all points in a zigzag manner.
So, I'm looking for a function f that generates the below plot if run like this:
loop:
    new_x, new_y = f(x, y, minx, miny, maxx, maxy)
    if new_x == x and new_y == y:
        end loop
    x, y = new_x, new_y
Can someone help me with such an algorithm?
Be warned, I count from 1:
If you are on an odd-numbered row step to the right.
If you are on an even-numbered row step to the left.
If you are at the end of a row step up.
This is a bit fiddly to code, but I can't see any particular problems; a sketch follows below.
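To make that concrete, here is one way f could look (a sketch of my own in C, not the answerer's code; I count rows from miny with 0-based parity so that the first row steps right, matching the rules above, and returning the input point unchanged signals the end, per the question's loop convention):

typedef struct { int x, y; } Point;

Point f(int x, int y, int minx, int miny, int maxx, int maxy)
{
    int going_right = ((y - miny) % 2) == 0;  /* rows alternate direction */
    if (going_right && x < maxx)
        return (Point){ x + 1, y };           /* step right */
    if (!going_right && x > minx)
        return (Point){ x - 1, y };           /* step left */
    if (y < maxy)
        return (Point){ x, y + 1 };           /* end of row: step up */
    return (Point){ x, y };                   /* done: same point ends the loop */
}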
Assuming that 0 <= X < L and 0 <= Y, given an index N, you can find the coordinates as follows:
    Y = floor(N / L)
    X = (Y is even) ? N mod L : L - (N mod L) - 1
--Edit--
I notice that this doesn't comply with your loop-structure constraint, but it may be helpful anyway.
Maybe something like this?
#include <iostream>
#include <utility>

std::pair<size_t, size_t>
foo (size_t N, size_t k) {
    const auto r = k / N;
    const auto c = (r & 1) == 0 ? k % N : N - k % N - 1;
    return {r, c};
}

int
main () {
    const size_t N = 10;
    for (size_t i = 0; i < N * N; ++i) {
        auto p = foo (N, i);
        std::cout << "(" << p.first << ", " << p.second << ")\n";
    }
    std::cout << std::endl;
}
During a job interview some time ago, I was asked to calculate the number of positive (i.e. set to "1") bits in a bit-vector structure (like an unsigned integer or long). My solution was rather straightforward in C#:
int CountBits(uint input)
{
    int reply = 0;
    uint dirac = 1;
    while (input != 0)
    {
        if ((input & dirac) > 0) reply++;
        input &= ~dirac;
        dirac <<= 1;
    }
    return reply;
}
Then I was asked to solve the task without using any shifts: neither explicit (like "<<" or ">>") nor implicit (like multiplying by 2) ones. The "brute force" solution using the sequence of powers of 2 (like 0, 1, 2, 4, 8, 16, etc.) wouldn't do either.
Does somebody know such an algorithm?
As far as I understood it, it should be a more or less generic algorithm that does not depend on the size of the input bit vector. All other bitwise operations and any math functions are allowed.
There is the x & (x-1) hack that, if you think about it for a while, clears the last set 1-bit in an integer. The rest is trivial.
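Spelled out as a minimal sketch in C (my illustration of that hack, often credited to Kernighan; the loop contains no shifts and runs once per set bit):

int count_bits(unsigned x)
{
    int count = 0;
    while (x != 0) {
        x &= x - 1;  /* clears the lowest set bit */
        count++;
    }
    return count;
}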
Some processors have a population count instruction. If not, I believe this is the fastest method (for 32-bits):
int NumberOfSetBits(int i) {
    i = i - ((i >> 1) & 0x55555555);
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333);
    return (((i + (i >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24;
}
See this link for a full explanation: http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel
As for doing it without shifts, I think using a lookup table would be the best answer:
static unsigned char BitsSetTable256[256];  // per-byte bit counts, filled in once

int NumberOfSetBits(int i) {
    unsigned char *p = (unsigned char *) &i;
    return BitsSetTable256[p[0]] + BitsSetTable256[p[1]] +
           BitsSetTable256[p[2]] + BitsSetTable256[p[3]];
}

// To initially generate the table algorithmically:
BitsSetTable256[0] = 0;
for (int i = 0; i < 256; i++) {
    BitsSetTable256[i] = (i & 1) + BitsSetTable256[i / 2];
}
In the same way as Anthony Blake described, but a bit more readable, I guess.
#include <stdint.h>

uint32_t bitsum(uint32_t x)
{
    // leaves: sum adjacent bits (0101 vs 1010)
    x = (x & 0x55555555) + ((x >> 1) & 0x55555555);
    // 2nd level: sum adjacent 2-bit counts (0011 vs 1100)
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
    // 3rd level: sum adjacent nybbles (each count fits in 3 bits)
    //x = (x & 0x0f0f0f0f) + ((x >> 4) & 0x0f0f0f0f);
    x = (x & 0x07070707) + ((x >> 4) & 0x07070707);
    /*
    // 4th level (bytes)
    //x = (x & 0x00ff00ff) + ((x >> 8) & 0x00ff00ff);
    x = (x & 0x000f000f) + ((x >> 8) & 0x000f000f);
    // 5th level (16-bit words)
    //return (x & 0x0000ffff) + ((x >> 16) & 0x0000ffff);
    return (x & 0x0000001f) + ((x >> 16) & 0x0000001f);
    */
    // Each octet now holds its own bit count (at most 8). Multiplying by
    // 0x01010101 gives, in each octet, the sum of that octet and all lower
    // ones, so the grand total lands in the top octet.
    return (x * 0x01010101) >> 24;
}
Suppose you have two numbers, both signed integers, and you want to sum them but can't use your language's conventional + and - operators. How would you do that?
Based on http://www.ocf.berkeley.edu/~wwu/riddles/cs.shtml
Not mine, but cute: it lets pointer arithmetic do the addition, since &ptr[b] is just ptr + b, i.e. the address a + b.

int a = 42;
int b = 17;
char *ptr = (char*)a;
int result = (int)&ptr[b];
Use bitwise operations, just like adder circuits do.
Cringe. Nobody builds an adder from 1-bit adders anymore.
unsigned add(unsigned a, unsigned b) {
    unsigned sum, carry;
    do {
        sum = a ^ b;     /* add each bit, ignoring carries */
        carry = a & b;   /* bits that generate a carry */
        a = sum;
        b = carry << 1;  /* carries move one position left */
    } while (b);
    return sum;
}
Of course, arithmetic here is assumed to be unsigned modulo 2^n or two's complement. It's only guaranteed to work in C if you convert to unsigned, perform the calculation unsigned, and then convert back to signed.
Since ++ and -- are not + and - operators:
int add(int lhs, int rhs) {
    if (lhs < 0)
        while (lhs++) --rhs;
    else
        while (lhs--) ++rhs;
    return rhs;
}
Using bitwise logic:
int sum = 0;
int carry = 0;
int i = 0;
while (n1 > 0 || n2 > 0) {
    int b1 = n1 % 2;
    int b2 = n2 % 2;
    int sumBits = b1 ^ b2 ^ carry;
    sum |= sumBits << i;  // place the result bit at its own position
    carry = (b1 & b2) | (b1 & carry) | (b2 & carry);
    n1 /= 2;
    n2 /= 2;
    i++;
}
sum |= carry << i;        // don't drop the final carry
Here's something different from what's been posted already. Use the facts that:
log (a^b) = b * log a
e^a * e^b = e^(a + b)
So:
log (e^(a + b)) = log(e^a * e^b) = a + b (if the log is base e)
So just find log(e^a * e^b).
Of course this is just theoretical; in practice it is going to be inefficient and most likely inexact too.
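For amusement only, a sketch of it in C (my illustration; exp() overflows for even moderate inputs and floating-point rounding creeps in quickly, as noted above):

#include <math.h>

/* a + b computed through logarithms: log(e^a * e^b) = log(e^(a + b)) = a + b */
double add_via_logs(double a, double b)
{
    return log(exp(a) * exp(b));
}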
If we're obeying the letter of the rules:
a += b;
Otherwise http://www.geekinterview.com/question_details/67647 has a pretty complete list.
This version has a restriction on the number range; it relies on 2^32 ≡ 1 (mod 2^32 - 1), so a * 2^32 + b ≡ a + b, and is only exact while both operands are non-negative and a + b < 0xFFFFFFFF:

(((int64_t)a << 32) | ((int64_t)b & INT64_C(0xFFFFFFFF))) % 0xFFFFFFFF
This also counts under the "letter of the rules" category.
Simple example in Python, complete with a simple test:
NUM_BITS = 32

def adder(a, b, carry):
    sum = a ^ b ^ carry
    carry = (a & b) | (carry & (a ^ b))
    #print("%d + %d = %d (carry %d)" % (a, b, sum, carry))
    return sum, carry

def add_two_numbers(a, b):
    carry = 0
    result = 0
    for n in range(NUM_BITS):
        mask = 1 << n
        bit_a = (a & mask) >> n
        bit_b = (b & mask) >> n
        sum, carry = adder(bit_a, bit_b, carry)
        result = result | (sum << n)
    return result

if __name__ == '__main__':
    assert add_two_numbers(2, 3) == 5
    assert add_two_numbers(57, 23) == 80
    for a in range(10):
        for b in range(10):
            result = add_two_numbers(a, b)
            print("%d + %d == %d" % (a, b, result))
            assert result == a + b
In Common Lisp:
(defun esoteric-sum (a b)
  (let ((and (logand a b)))
    (if (zerop and)
        ;; No carrying necessary.
        (logior a b)
        ;; Combine the partial sum with the carried bits again.
        (esoteric-sum (logxor a b) (ash and 1)))))
It takes the bitwise-and of the numbers, which identifies the bits that need to carry. If no bits require carrying, it returns the bitwise-or of the operands. Otherwise, it shifts the carried bits one position to the left and combines them again with the bitwise-exclusive-or of the numbers (which sums all the bits that don't need to carry), repeating until no more carrying is necessary.
Here's an iterative alternative to the recursive form above:
(defun esoteric-sum-iterative (a b)
  (loop for first = a then (logxor first second)
        for second = b then (ash and 1)
        for and = (logand first second)
        until (zerop and)
        finally (return (logior first second))))
Note that the function needs another concession to overcome Common Lisp's reluctance to employ fixed-width two's complement arithmetic—normally an immeasurable asset—but I'd rather not cloud the form of the function with that accidental complexity.
If you need more detail on why that works, please ask a more detailed question to probe the topic.
Not very creative, I know, but in Python:
sum([a,b])
I realize that this might not be the most elegant solution to the problem, but I figured out a way to do this using the len(list) function as a substitute for the addition operator.
'''
Addition without operators: This program obtains two integers from the user
and then adds them together without using operators. This is one of the 'hard'
questions from 'Cracking the Coding Interview' by Gayle Laakmann McDowell.
'''
print('Welcome to addition without a plus sign!')
item1 = int(input('Please enter the first number: '))
item2 = int(input('Please enter the second number: '))

item1_list = []
item2_list = []
total = 0
total_list = []
marker = 'x'
placeholder = 'placeholder'

while len(item1_list) < item1:
    item1_list.append(marker)
while len(item2_list) < item2:
    item2_list.append(marker)

item1_list.insert(1, placeholder)
item1_list.insert(1, placeholder)

for item in range(1, len(item1_list)):
    total_list.append(item1_list.pop())
for item in range(1, len(item2_list)):
    total_list.append(item2_list.pop())

total = len(total_list)
print('The sum of', item1, 'and', item2, 'is', total)
#include <stdio.h>

int main()
{
    int n1 = 5, n2 = 55, i = 0;
    int sum = 0;
    int carry = 0;
    while (n1 > 0 || n2 > 0)
    {
        int b1 = n1 % 2;
        int b2 = n2 % 2;
        int sumBits = b1 ^ b2 ^ carry;
        sum = sum | (sumBits << i);
        i++;
        carry = (b1 & b2) | (b1 & carry) | (b2 & carry);
        n1 /= 2;
        n2 /= 2;
    }
    sum = sum | (carry << i);
    printf("%d", sum);
    return 0;
}
I am trying to do bit reversal in a byte. I use the code below:
static int BitReversal(int n)
{
    int u0 = 0x55555555; // 01010101010101010101010101010101
    int u1 = 0x33333333; // 00110011001100110011001100110011
    int u2 = 0x0F0F0F0F; // 00001111000011110000111100001111
    int u3 = 0x00FF00FF; // 00000000111111110000000011111111
    int u4 = 0x0000FFFF; // 00000000000000001111111111111111
    int x, y, z;
    x = n;
    y = (x >> 1) & u0;
    z = (x & u0) << 1;
    x = y | z;
    y = (x >> 2) & u1;
    z = (x & u1) << 2;
    x = y | z;
    y = (x >> 4) & u2;
    z = (x & u2) << 4;
    x = y | z;
    y = (x >> 8) & u3;
    z = (x & u3) << 8;
    x = y | z;
    y = (x >> 16) & u4;
    z = (x & u4) << 16;
    x = y | z;
    return x;
}
It can reverse the bits (on a 32-bit machine), but there is a problem.
For example, if the input is 10001111101, I want to get 10111110001, but this method reverses the whole word including the leading 0s. The output is 10111110001000000000000000000000.
Is there any method to reverse only the actual number? I do not want to convert it to a string and reverse it, then convert again. Is there any pure math or bit-operation method?
Get the highest set bit's position using a similar approach, then shift the reversed result to the right by 32 - #bits, and voilà!
Cheesy way is to shift until you get a 1 on the right:
if (x != 0) {
    while ((x & 1) == 0) {
        x >>= 1;
    }
}
Note: You should switch all the variables to unsigned int. As written you can have unwanted sign-extension any time you right shift.
One method could be to find the number of leading zero bits in n, left-shift n by that amount, and then run it through your algorithm above.
It's assuming all 32 bits are significant and reversing the whole thing. You COULD try to make it guess the number of significant bits by finding the highest 1, but that isn't necessarily accurate, so I'd suggest you modify the function so it takes a second parameter indicating the number of significant bits. Then, after reversing the bits, just shift them to the right, as in the sketch below.
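A compilable sketch of that suggestion (my own C rendition, not the answerer's code; instead of taking the significant-bit count as a parameter it derives it itself, and it assumes a full 32-bit reversal routine like the one in the question, declared here as reverse32):

#include <stdint.h>

uint32_t reverse32(uint32_t n);  /* assumed: e.g. a port of BitReversal above */

/* Reverse only the significant bits of n; requires n > 0. */
uint32_t reverse_significant(uint32_t n)
{
    int bits = 0;
    for (uint32_t t = n; t != 0; t >>= 1)
        bits++;                  /* bits = position of the highest 1, plus one */
    return reverse32(n) >> (32 - bits);
}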
Try using Java's Integer.reverse(int x).