I want to create a method which, given a number n, computes n % 16. The thing which makes it hard for me is that I must not use any of the math operators (+, -, /, *, %).
Since 16 is 2^4 you can obtain the same result by truncating the value to the 4 least significant bits.
So:
x & 0xF is equivalent to x % 16
This is valid only because you are working with a power of two.
The key here is that 16 is a power of two, and so you can exploit the fact that computers utilise binary representations to achieve what you want with a bitwise operator.
Consider how multiples of 16 look when represented in binary:
0001 0000 // n = 16, n%16 = 0
0010 0000 // n = 32, n%16 = 0
0011 0000 // n = 48, n%16 = 0
0100 0000 // n = 64, n%16 = 0
Now take a look at some numbers for which n % 16 would be non-zero:
0000 0111 // n = 7, n%16 = 7
0001 0111 // n = 23, n%16 = 7
0010 0001 // n = 33, n%16 = 1
0100 0001 // n = 65, n%16 = 1
Notice that the remainder is simply the least significant 4 bits (a nibble). Therefore we simply need to construct a bitwise expression that keeps these bits intact whilst masking all other bits to zero. This can be achieved by performing a bitwise AND with the value 15 (binary 1111, hex 0xF):
x = n & 0xF
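The question doesn't name a language, so here is a minimal sketch in Go (the language used further down this page) that checks the equivalence. One caveat: in Go the % operator can return a negative remainder, while n & 0xF is always in 0..15, so the identity holds only for non-negative n.

```go
package main

import "fmt"

func main() {
	// For non-negative n, the low nibble is exactly the remainder
	// after dividing by 16, so n & 0xF equals n % 16.
	for _, n := range []int{0, 7, 16, 23, 33, 48, 65, 255} {
		fmt.Printf("n=%3d  n%%16=%2d  n&0xF=%2d\n", n, n%16, n&0xF)
	}
}
```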
Related
fmt.Println(^1)
Why does this print -2?
The ^ operator is the bitwise complement operator. Spec: Arithmetic operators:
For integer operands, the unary operators +, -, and ^ are defined as follows:
+x is 0 + x
-x negation is 0 - x
^x bitwise complement is m ^ x with m = "all bits set to 1" for unsigned x
and m = -1 for signed x
So 1 in binary is a single 1 bit preceded by all zeros:
0000000000000000000000000000000000000000000000000000000000000001
So the bitwise complement is a single 0 bit preceded by all ones:
1111111111111111111111111111111111111111111111111111111111111110
^1 is an untyped constant expression. When it is passed to a function, it has to be converted to a type. Since 1 is an untyped integer constant, its default type int is used. int in Go is represented using two's complement, where negative numbers start with a 1 bit: the all-ones pattern is -1, the pattern one smaller (in binary) is -2, and so on.
The bit pattern above is the 2's complement representation of -2.
To print the bit patterns and type, use this code:
fmt.Println(^1)
fmt.Printf("%T\n", ^1)
fmt.Printf("%064b\n", 1)
i := ^1
fmt.Printf("%064b\n", uint(i))
It outputs (try it on the Go Playground):
-2
int
0000000000000000000000000000000000000000000000000000000000000001
1111111111111111111111111111111111111111111111111111111111111110
Okay, this has to do with the way we represent signed numbers in computation.
For a 4-bit signed number, you get:

  D |  B
----+------
 -8 | 1000
 -7 | 1001
 -6 | 1010
 -5 | 1011
 -4 | 1100
 -3 | 1101
 -2 | 1110
 -1 | 1111
  0 | 0000
  1 | 0001
  2 | 0010
  3 | 0011
  4 | 0100
  5 | 0101
  6 | 0110
  7 | 0111
You can see here that 1 is equivalent to 0001 (nothing changes), but -1 is equal to 1111. The unary ^ operator XORs its operand with all ones (that is, with -1). Therefore:
0001
1111 xor
-------
1110 -> That is actually -2.
All this is because of the two's complement convention that we use to do calculations with negative numbers. Of course, this can be extrapolated to longer binary numbers.
You can test this by using the Windows Calculator in programmer mode to do a bitwise XOR calculation.
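The same result can be reproduced directly in Go: since unary ^ on a signed value is defined as XOR with -1, writing the XOR out explicitly gives the identical answer. A minimal sketch:

```go
package main

import "fmt"

func main() {
	// Unary ^ on a signed int is XOR with -1 (all bits set),
	// so ^1 flips every bit of 1, producing the pattern for -2.
	fmt.Println(^1)     // -2
	fmt.Println(-1 ^ 1) // -2: the same operation written as a binary XOR
}
```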
I am trying to understand
256 bits in hexadecimal is 32 bytes, or 64 characters in the range 0-9 or A-F
How can a 32 bytes string be 64 characters in the range 0-9 or A-F?
What does 32 bytes mean?
I would assume that bits mean a digit 0 or 1, so 256 bits would be 256 digits of either 0 or 1.
I know that 1 byte equals 8 bits, so is 32 bytes 32 digits, each of either 0, 1, 2, 3, 4, 5, 6, or 7 (i.e. 8 different values)?
I do know a little about different bases (e.g. that binary has 0 and 1, decimal has 0-9, hexadecimal has 0-9 and A-F, etc.), but I still fail to understand why 256 bits in hexadecimal can be 32 bytes or 64 characters.
I know it's quite basic in computer science, so I have to read up on this, but can you give a brief explanation?
A single hexadecimal character represents 4 bits.
0 = 0000
1 = 0001
2 = 0010
3 = 0011
4 = 0100
5 = 0101
6 = 0110
7 = 0111
8 = 1000
9 = 1001
A = 1010
B = 1011
C = 1100
D = 1101
E = 1110
F = 1111
Two hexadecimal characters can represent a byte (8 bits).
How can a 32 bytes string be 64 characters in the range 0-9 or A-F?
Keep in mind that the hexadecimal representation is an EXTERNAL depiction of the bit settings. If a byte contains 01001010, we can say that it is 4A in hex. The characters 4A are not stored in the byte. It's like in mathematics where we use the depictions "e" and "π" to represent numbers.
What does 32 bytes mean?
1 Byte = 8 bits. 32 bytes = 256 bits.
I know that you can bitmask by ANDing a value with 0. However, how can I bitmask certain nibbles while maintaining others? In other words, if I have 0x000f0b7c and I want to mask everything but the b (so that my result would be 0x00000b00), how would I use AND to do this? Would it require multiple steps?
You can better understand boolean operations if you represent values in binary form.
The AND operation between two binary digits returns 1 if both the binary digits have a value of 1, otherwise it returns 0.
Suppose you have two binary digits a and b, you can build the following "truth table":
a | b | a AND b
---+---+---------
0 | 0 | 0
1 | 0 | 0
0 | 1 | 0
1 | 1 | 1
The masking operation consists of ANDing a given value with a "mask" where every bit that needs to be preserved is set to 1, while every bit to discard is set to 0.
This is done by ANDing each bit of the given value with the corresponding bit of the mask.
The given value, 0xf0b7c, can be converted as follows:
f 0 b 7 c (hex)
1111 0000 1011 0111 1100 (bin)
If you want to preserve only the bits corresponding to the "b" value (bits 8..11) you can mask it this way:
f 0 b 7 c
1111 0000 1011 0111 1100
0000 0000 1111 0000 0000
The value 0000 0000 1111 0000 0000 can be converted to hex and has a value of 0xf00.
So if you calculate "0xf0b7c AND 0xf00" you obtain 0xb00.
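The same calculation can be checked in Go; the mask 0xf00 has 1s only in bit positions 8..11, so the AND keeps the b nibble and clears everything else. A minimal sketch:

```go
package main

import "fmt"

func main() {
	value := 0x000f0b7c
	mask := 0x00000f00 // 1s only where the nibble to keep lives (bits 8..11)
	fmt.Printf("%#x\n", value&mask) // prints 0xb00
}
```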
the formula for calculating nth gray code is :
(n-1) XOR (floor((n-1)/2))
(Source: wikipedia)
I encoded it as:
int gray(int n)
{
    n--;
    return n ^ (n >> 1);
}
Can someone explain how the above formula works, or possibly its derivation?
If you look at the binary counting sequence, you'll note that neighboring codes differ in their last several bits (with no holes), so if you XOR them, a pattern of several trailing 1's appears. Also, when you shift numbers right, the XORs are shifted right as well: (A xor B)>>N == A>>N xor B>>N.
N N>>1 gray
0000 . 0000 . 0000 .
| >xor = 0001 >xor = 0000 >xor = 0001
0001 . 0000 . 0001 .
|| >xor = 0011 | >xor = 0001 >xor = 0010
0010 . 0001 . 0011 .
| >xor = 0001 >xor = 0000 >xor = 0001
0011 . 0001 . 0010 .
||| >xor = 0111 || >xor = 0011 >xor = 0100
0100 0010 0110
The original XOR results and the shifted results differ in a single bit (marked with a dot above). This means that if you XOR them, you'll get a pattern with a single bit set. So,
(A xor B) xor (A>>1 xor B>>1) == (A xor A>>1) xor (B xor B>>1) == gray (A) xor gray (B)
As XOR gives us 1s in the differing bits, this proves that neighbouring codes differ in only a single bit, and that's the main property of Gray code we want.
For completeness, it should also be proven that N can be restored from its N ^ (N>>1) value: knowing the n'th bit of the code, we can restore the (n-1)'th bit using XOR.
A_[bit n-1] = A_[bit n] xor gray(A)_[bit n-1]
Starting from the largest bit (which is XORed with 0), we can thus restore the whole number.
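The restoration argument above can be turned into code. This sketch is in Go for consistency with the rest of the page (the helper names grayEncode/grayDecode are my own); the decoder folds the XORs down from the top bit, which amounts to n = g ^ (g>>1) ^ (g>>2) ^ ...

```go
package main

import "fmt"

// grayEncode returns the zero-based reflected Gray code of n.
func grayEncode(n uint) uint { return n ^ (n >> 1) }

// grayDecode inverts the encoding. Each bit of n is the XOR of the
// Gray bit with the already-recovered higher bit; folding that down
// gives n = g ^ (g>>1) ^ (g>>2) ^ ...
func grayDecode(g uint) uint {
	var n uint
	for ; g != 0; g >>= 1 {
		n ^= g
	}
	return n
}

func main() {
	for n := uint(0); n < 8; n++ {
		g := grayEncode(n)
		fmt.Printf("n=%d  gray=%03b  decoded=%d\n", n, g, grayDecode(g))
	}
}
```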
Prove by induction.
Hint: The 1<<kth to (1<<(k+1))-1th values are twice the 1<<(k-1)th to (1<<k)-1th values, plus either zero or one.
Edit: That was way too confusing. What I really mean is,
gray(2*n) and gray(2*n+1) are 2*gray(n) and 2*gray(n)+1 in some order.
The Wikipedia entry you refer to explains the equation in a very circuitous manner.
However, it helps to start with this:
Therefore the coding is stable, in the sense that once a binary number appears in Gn it appears in the same position in all longer lists; so it makes sense to talk about the reflective Gray code value of a number: G(m) = the m-th reflecting Gray code, counting from 0.
In other words, Gn(m) & 2^n-1 is either Gn-1(m & 2^n-1) or ~Gn-1(m & 2^n-1). For example, G(3) & 1 is either G(1) or ~G(1). Now, we know that Gn(m) & 2^n-1 will be the reflected (bitwise inverse) if m is greater than 2^n-1.
In other words:
G(m, bits), k= 2^(bits - 1)
G(m, bits)= m>=k ? (k | ~G(m & (k - 1), bits - 1)) : G(m, bits - 1)
G(m, 1) = m
Working out the math in its entirety, you get (m ^ (m >> 1)) for the zero-based Gray code.
Incrementing a number, when you look at it bitwise, flips all trailing ones to zeros and the last zero to one. That's a whole lot of bits flipped, and the purpose of Gray code is to make it exactly one. This transformation makes both numbers (before and after increment) equal on all the bits being flipped, except the highest one.
Before:
011...11
+ 1
---------
100...00
After:
010...00
+ 1
---------
110...00
^<--------This is the only bit that differs
(might be flipped in both numbers by carry over from higher position)
n ^ (n >> 1) is easier to compute, but it seems that merely changing the trailing 011..1 to 010..0 (i.e. zeroing the whole trailing block of 1's except the highest 1) and 10..0 to 11..0 (i.e. flipping the highest 0 in the trailing 0's) is enough to obtain a Gray code.
If I have a 32-bit binary number and I want to replace its lower 16 bits with a 16-bit number that I have, keeping the upper 16 bits, to produce a new number: how can I do this using simple bitwise operators?
For example the 32-bit binary number is:
1010 0000 1011 1111 0100 1000 1010 1001
and the lower 16-bit I have is:
0000 0000 0000 0001
so the result is:
1010 0000 1011 1111 0000 0000 0000 0001
how can I do this?
You do this in two steps:
Mask out the bits that you want to replace (AND it with 0s)
Fill in the replacements (OR it with the new bits)
So in your case,
i32 number;
i32 mask_lower_16 = 0xFFFF0000; // clears the lower 16 bits, keeps the upper 16
i16 newValue;
number = (number AND mask_lower_16) OR newValue;
In actual programming language implementation, you may also need to address the issue of sign extension on the 16-bit value. In Java, for example, you have to mask the upper 16 bits of the short like this:
short v = (short) 0xF00D;
int number = 0x12345678;
number = (number & 0xFFFF0000) | (v & 0x0000FFFF);
System.out.println(Integer.toHexString(number)); // "1234f00d"
(original32BitNumber & 0xFFFF0000) | 16bitNumber
Well, I could tell you the answer. But perhaps this is homework. So I won't.
Consider that you have a few options:
| // bitwise OR
^ // bitwise XOR
& // bitwise AND
Maybe draw up a little table and decide which one will give you the right result (when you operate on the right section of your larger binary number).
Use & to mask off the low bits, then | to merge the 16-bit value with the 32-bit value:
uint a = 0xa0bf48a9;
short b = 1;
uint result = (a & 0xFFFF0000) | b;
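For completeness, the same mask-then-merge as runnable Go (the concrete values are taken from the question above):

```go
package main

import "fmt"

func main() {
	a := uint32(0xa0bf48a9) // 1010 0000 1011 1111 0100 1000 1010 1001
	b := uint16(1)          // 0000 0000 0000 0001
	// Clear the low 16 bits of a, then OR in the replacement.
	result := (a & 0xFFFF0000) | uint32(b)
	fmt.Printf("%#x\n", result) // prints 0xa0bf0001
}
```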