Algorithm for bitwise fiddling

If I have a 32-bit binary number and I want to replace its lower 16 bits with a 16-bit number that I have, while keeping the upper 16 bits, to produce a new binary number, how can I do this using simple bitwise operators?
For example the 32-bit binary number is:
1010 0000 1011 1111 0100 1000 1010 1001
and the lower 16-bit I have is:
0000 0000 0000 0001
so the result is:
1010 0000 1011 1111 0000 0000 0000 0001
how can I do this?

You do this in two steps:
Mask out the bits that you want to replace (AND it with 0s)
Fill in the replacements (OR it with the new bits)
So in your case,
i32 number;
i32 mask_lower_16 = 0xFFFF0000;
i16 newValue;
number = (number AND mask_lower_16) OR newValue;
In an actual programming language implementation, you may also need to address sign extension of the 16-bit value. In Java, for example, you have to mask off the upper 16 bits that appear when the short is widened to int, like this:
short v = (short) 0xF00D;
int number = 0x12345678;
number = (number & 0xFFFF0000) | (v & 0x0000FFFF);
System.out.println(Integer.toHexString(number)); // "1234f00d"

(original32BitNumber & 0xFFFF0000) | 16bitNumber

Well, I could tell you the answer. But perhaps this is homework. So I won't.
Consider that you have a few options:
| // bitwise OR
^ // bitwise XOR
& // bitwise AND
Maybe draw up a little table and decide which one will give you the right result (when you operate on the right section of your larger binary number).

Use & to mask off the low bits, then | to merge the 16-bit value into the 32-bit value:
uint a = 0xa0bf68a9;
short b = 1;
uint result = (a & 0xFFFF0000) | (ushort)b;
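For a runnable check, here is a minimal Go sketch using the values from the question (the & 0xFFFF matters if the 16-bit value is negative, since the conversion to a wider type sign-extends):
package main

import "fmt"

func main() {
    a := uint32(0xA0BF48A9) // 1010 0000 1011 1111 0100 1000 1010 1001
    b := int16(1)           // the new lower 16 bits
    // Keep the upper half of a, then OR in b; mask b to 16 bits in case
    // the int16 -> uint32 conversion sign-extends a negative value.
    result := (a & 0xFFFF0000) | (uint32(b) & 0xFFFF)
    fmt.Printf("%08X\n", result) // A0BF0001
}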

Related

Go Float32bits() result not expected

e.g. 16777219.0, decimal to bits:
16777219 -> 1 0000 0000 0000 0000 0000 0011
Mantissa: 23 bit
0000 0000 0000 0000 0000 001
Exponent:
24+127 = 151
151 -> 10010111
The result should be:
0_10010111_000 0000 0000 0000 0000 0001
1001011100000000000000000000001
but:
fmt.Printf("%b\n", math.Float32bits(float32(16777219.0)))
// 1001011100000000000000000000010
Why is the Go Float32bits() result not as expected?
reference:
base-conversion.ro
update:
fmt.Printf("16777216.0:%b\n", math.Float32bits(float32(16777216.0)))
fmt.Printf("16777217.0:%b\n", math.Float32bits(float32(16777217.0)))
//16777216.0:1001011100000000000000000000000
//16777217.0:1001011100000000000000000000000
fmt.Printf("16777218.0:%b\n", math.Float32bits(float32(16777218.0)))
//16777218.0:1001011100000000000000000000001
fmt.Printf("16777219.0:%b\n", math.Float32bits(float32(16777219.0)))
fmt.Printf("16777220.0:%b\n", math.Float32bits(float32(16777220.0)))
fmt.Printf("16777221.0:%b\n", math.Float32bits(float32(16777221.0)))
//16777219.0:1001011100000000000000000000010
//16777220.0:1001011100000000000000000000010
//16777221.0:1001011100000000000000000000010
fmt.Printf("000:%f\n", math.Float32frombits(0b_10010111_00000000000000000000000))
// 000:16777216.000000
fmt.Printf("001:%f\n", math.Float32frombits(0b_10010111_00000000000000000000001))
// 001:16777218.000000
fmt.Printf("010:%f\n", math.Float32frombits(0b_10010111_00000000000000000000010))
// 010:16777220.000000
fmt.Printf("011:%f\n", math.Float32frombits(0b_10010111_00000000000000000000011))
// 011:16777222.000000
What are the rules?
Go gives the correct IEEE-754 binary floating point result - round to nearest, ties to even.
The Go Programming Language Specification
float32 the set of all IEEE-754 32-bit floating-point numbers
Decimal 16777219 is binary 1000000000000000000000011.
For 32-bit IEEE-754 binary floating point, the 24-bit integer mantissa (with the excess 25th bit shown after the binary point) is
100000000000000000000001.1
Round to nearest, ties to even gives
100000000000000000000010
Removing the implicit one bit for the 23-bit mantissa gives
00000000000000000000010
package main

import (
    "fmt"
    "math"
)

func main() {
    const n = 16777219
    fmt.Printf("%d\n", n)
    fmt.Printf("%b\n", n)
    f := float32(n)
    fmt.Printf("%g\n", f)
    fmt.Printf("%b\n", math.Float32bits(f))
}
https://go.dev/play/p/yMaVkuiSJ5A
16777219
1000000000000000000000011
1.677722e+07
1001011100000000000000000000010
Why the result is not expected: You're expecting the wrong result.
The IEEE 754 standard specifies that, per Wikipedia:
if the number falls midway, it is rounded to the nearest value with an even least significant digit.
So 16777219, which is midway between 16777218 and 16777220, must be rounded. The "round up" option, 16777220, gives an even LSB and is the correct result you're observing.
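To look at the three IEEE-754 fields of the rounded value directly, here is a small Go sketch (a quick check, not from the answers above; the 1/8/23 field split is the standard float32 layout):
package main

import (
    "fmt"
    "math"
)

func main() {
    bits := math.Float32bits(float32(16777219))
    sign := bits >> 31
    exponent := (bits >> 23) & 0xFF // biased by 127
    mantissa := bits & 0x7FFFFF
    fmt.Printf("sign=%d exponent=%d mantissa=%023b\n", sign, exponent, mantissa)
    // sign=0 exponent=151 mantissa=00000000000000000000010
}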

Is there an algorithm to bit-shift every n-bit-long section of a number without overflow?

For example, if I have a number 0101 1111 and I want to shift every 4 bit long section to the left to get 1010 1110. While I could just modulo off each section to get two 4-bit numbers, is there an algorithm that doesn't need to do this?
A naive approach
A first naive approach is to slice the 4-bit groups and process them individually. The expected result is obtained with the following for the first group of 4 bits:
(((x & 0xf) // take only 4 bits
<< 1) // shift them by 1
& 0xf) // get rid of potential overflow
For the (n+1)-th group of 4 bits, it's
(((x & (0xf<<(n*4)))
<< 1)
& (0xf<<(n*4)))
Since this is designed so that there is no overlap around the 4 bits being processed, you can iterate and binary-OR the partial results.
A less naive approach
Another approach is to simply shift the full x by 1, causing every 4 bit group to be shifted at once:
0101 1111 -> 1011 1110
We can then easily get rid of the overflow, and at the same time make sure that 0's are injected on the left, by clearing every 4th bit in the result of the shift:
1011 1110
& 1110 1110
---------
1010 1110
1110 is e in hexadecimal, so you need to build a mask with as many e nibbles as there are 4-bit segments: 0xee if it's just 8 bits, 0xeeeeeeeeeeeeeeee if it's 64 bits. This mask was suggested in the comments; the above is the explanation.
Be careful if your underlying data type is signed, because of the sign bit: do this processing on unsigned integers to avoid any surprise.
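Here is a minimal Go sketch of that masked-shift approach for the 8-bit example from the question (widen the 0xee... mask to match the width of your type):
package main

import "fmt"

func main() {
    x := uint8(0b0101_1111)
    // Shift everything left by one, then clear the lowest bit of each
    // 4-bit group, which is where the neighbouring group's bit spilled in.
    result := (x << 1) & 0xEE
    fmt.Printf("%08b\n", result) // 10101110
}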
Here is one way.
int bits = 0b1111_0001_0011_0111;
int result = 0;
int m = 0b1111;
while (m != 0) {
    result |= ((bits & m) << 1) & m;
    m <<= 4;
}
System.out.printf("%-7s = %s%n", "src", Integer.toBinaryString(bits));
System.out.printf("%-7s = %s%n", "result", Integer.toBinaryString(result));
Prints
src = 1111000100110111
result = 1110001001101110

How to bitmask a number (in hex) using the AND operator?

I know that you can mask bits off by ANDing a value with 0. However, how can I mask certain nibbles while maintaining others? In other words, if I have 0x000f0b7c and I want to mask everything but the b (so that my result would be 0x00000b00), how would I use AND to do this? Would it require multiple steps?
You can better understand boolean operations if you represent values in binary form.
The AND operation between two binary digits returns 1 if both the binary digits have a value of 1, otherwise it returns 0.
Suppose you have two binary digits a and b, you can build the following "truth table":
a | b | a AND b
---+---+---------
0 | 0 | 0
1 | 0 | 0
0 | 1 | 0
1 | 1 | 1
The masking operation consists of ANDing a given value with a "mask" where every bit that needs to be preserved is set to 1, while every bit to discard is set to 0.
This is done by ANDing each bit of the given value with the corresponding bit of the mask.
The given value, 0xf0b7c, can be converted as follows:
f 0 b 7 c (hex)
1111 0000 1011 0111 1100 (bin)
If you want to preserve only the bits corresponding to the "b" value (bits 8..11) you can mask it this way:
f 0 b 7 c
1111 0000 1011 0111 1100
0000 0000 1111 0000 0000
The value 0000 0000 1111 0000 0000 can be converted to hex and has a value of 0xf00.
So if you calculate "0xf0b7c AND 0xf00" you obtain 0xb00.
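A quick way to check that mask in Go (a one-line sketch using the constants from the question):
package main

import "fmt"

func main() {
    value := 0x000F0B7C
    mask := 0x00000F00 // keep only the "b" nibble (bits 8..11)
    fmt.Printf("%#x\n", value&mask) // 0xb00
}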

How do you convert little Endian to big Endian with bitwise operations?

I get that you'd want to do something like take the first four bits, put them on a stack (reading from left to right), then put them in a register and shift them x times to move them to the right part of the number?
Something like
1000 0000 | 0000 0000 | 0000 0000 | 0000 1011
Stack: bottom - 1101 - top
shift it 28 times to the left
Then do something similar with the last four bits but shift to the right and store in a register.
Then you and that with an empty return value of 0
Is there an easier way?
Yes there is. Check out the _byteswap functions/intrinsics, and/or the bswap instruction.
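For comparison, here is a minimal Go sketch that does the swap with shifts and masks, checked against math/bits.ReverseBytes32, which plays the role of those byte-swap intrinsics in Go:
package main

import (
    "fmt"
    "math/bits"
)

func main() {
    x := uint32(0x12345678)
    // Manual byte swap: move each byte to its mirrored position.
    swapped := (x>>24)&0xFF | (x>>8)&0xFF00 | (x<<8)&0xFF0000 | (x<<24)&0xFF000000
    fmt.Printf("%#08x %#08x\n", swapped, bits.ReverseBytes32(x))
    // 0x78563412 0x78563412
}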
You could do it this way. For example, the input is 0010 1000 and the desired output is 1000 0010 (the two nibbles swapped). Store the input in a variable x:
unsigned char x = 0x28;            // 0010 1000
unsigned char i = x >> 4;          // high nibble moves to the low position
unsigned char j = (x << 4) & 0xF0; // low nibble moves to the high position
unsigned char k = i | j;
printf("%02x\n", k);               // prints 82, i.e. 1000 0010

the nth gray code

The formula for calculating the nth Gray code is:
(n-1) XOR (floor((n-1)/2))
(Source: Wikipedia)
I encoded it as:
int gray(int n)
{
    n--;
    return n ^ (n >> 1);
}
Can someone explain how the above formula works, or possibly its derivation?
If you look at the binary counting sequence, you'll note that neighboring codes differ in a run of trailing bits (with no holes), so if you XOR them, a pattern of several trailing 1's appears. Also, when you shift the numbers right, their XOR is shifted right as well: (A xor B)>>N == (A>>N) xor (B>>N).
N      N>>1   gray = N ^ (N>>1)
0000   0000   0000
 xor = 0001    xor = 0000    xor = 0001
0001   0000   0001
 xor = 0011    xor = 0001    xor = 0010
0010   0001   0011
 xor = 0001    xor = 0000    xor = 0001
0011   0001   0010
 xor = 0111    xor = 0011    xor = 0100
0100   0010   0110
(each "xor =" row shows the XOR of the values directly above and below it in that column)
The XOR patterns of the original values (the N column) and of the shifted values (the N>>1 column) differ in a single bit. This means that if you XOR them, you get a pattern with exactly one bit set. So,
(A xor B) xor (A>>1 xor B>>1) == (A xor A>>1) xor (B xor B>>1) == gray(A) xor gray(B)
Since XOR gives us 1's in the differing bits, this proves that neighbouring codes differ in only a single bit, and that's the main property of a Gray code we want.
For completeness, it should also be shown that N can be restored from its N ^ (N>>1) value: knowing the n-th bit of the code, we can restore the (n-1)-th bit using XOR.
A_[bit n-1] = A_[bit n] xor gray(A)_[bit n-1]
Starting from the largest bit (which is XORed with 0), we can thus restore the whole number.
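Here is a small Go sketch checking both claims, the single-bit difference between neighbours and the bit-by-bit decoding (not part of the original answer):
package main

import (
    "fmt"
    "math/bits"
)

func gray(n uint32) uint32 { return n ^ (n >> 1) }

// ungray restores n from gray(n), starting from the top bit
// (which is XORed with 0) and working downwards.
func ungray(g uint32) uint32 {
    n := g & (1 << 31)
    for i := 30; i >= 0; i-- {
        // A[bit i] = A[bit i+1] xor gray(A)[bit i]
        n |= (((n >> uint(i+1)) ^ (g >> uint(i))) & 1) << uint(i)
    }
    return n
}

func main() {
    ok := true
    for n := uint32(0); n < 1<<16; n++ {
        if bits.OnesCount32(gray(n)^gray(n+1)) != 1 { // neighbours differ in exactly one bit
            ok = false
        }
        if ungray(gray(n)) != n { // the code can be decoded again
            ok = false
        }
    }
    fmt.Println(ok) // true
}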
Prove by induction.
Hint: The 1<<kth to (1<<(k+1))-1th values are twice the 1<<(k-1)th to (1<<k)-1th values, plus either zero or one.
Edit: That was way too confusing. What I really mean is,
gray(2*n) and gray(2*n+1) are 2*gray(n) and 2*gray(n)+1 in some order.
The Wikipedia entry you refer to explains the equation in a very circuitous manner.
However, it helps to start with this:
Therefore the coding is stable, in the sense that once a binary number appears in Gn it appears in the same position in all longer lists; so it makes sense to talk about the reflective Gray code value of a number: G(m) = the m-th reflecting Gray code, counting from 0.
In other words, G_n(m) & (2^(n-1) - 1) is either G_(n-1)(m & (2^(n-1) - 1)) or ~G_(n-1)(m & (2^(n-1) - 1)). For example, G(3) & 1 is either G(1) or ~G(1). Now, we know that G_n(m) & (2^(n-1) - 1) will be the reflected (bitwise inverse) form if m is greater than 2^(n-1) - 1.
In other words:
G(m, bits), k= 2^(bits - 1)
G(m, bits)= m>=k ? (k | ~G(m & (k - 1), bits - 1)) : G(m, bits - 1)
G(m, 1) = m
Working out the math in its entirety, you get (m ^ (m >> 1)) for the zero-based Gray code.
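As a Go sketch, here is one way to read that recursion: for the upper half, index the (bits-1)-bit list from the end (the reflection) and set the top bit; this reproduces m ^ (m >> 1):
package main

import "fmt"

// grayRec builds the reflected Gray code recursively: for m in the upper
// half, take the (bits-1)-bit code of the mirrored index and set the top bit.
func grayRec(m, bits uint) uint {
    if bits == 1 {
        return m
    }
    k := uint(1) << (bits - 1)
    if m >= k {
        return k | grayRec((k-1)-(m&(k-1)), bits-1)
    }
    return grayRec(m, bits-1)
}

func main() {
    for m := uint(0); m < 16; m++ {
        fmt.Printf("%2d: %04b %04b\n", m, grayRec(m, 4), m^(m>>1))
    }
}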
Incrementing a number, when you look at it bitwise, flips all trailing ones to zeros and the last zero to one. That's a whole lot of bits flipped, and the purpose of Gray code is to make it exactly one. This transformation makes both numbers (before and after increment) equal on all the bits being flipped, except the highest one.
Before:
011...11
+ 1
---------
100...00
After:
010...00
+ 1
---------
110...00
^ <-------- this is the only bit that differs
(it might be flipped in both numbers by carry over from a higher position)
n ^ (n >> 1) is easier to compute, but it seems that only changing the trailing 011..1 to 010..0 (i.e. zeroing the whole trailing block of 1's except the highest 1) and 10..0 to 11..0 (i.e. flipping the highest 0 in the trailing 0's) is enough to obtain a Gray code.

Resources