Go Float32bits() result not as expected - go

e.g. converting 16777219.0 from decimal to bits:
16777219 -> 1 0000 0000 0000 0000 0000 0011
Mantissa (23 bits):
0000 0000 0000 0000 0000 001
Exponent:
24+127 = 151
151 -> 10010111
Result should be:
0_10010111_000 0000 0000 0000 0000 0001
1001011100000000000000000000001
but:
fmt.Printf("%b\n", math.Float32bits(float32(16777219.0)))
// 1001011100000000000000000000010
Why is the Go Float32bits() result not what I expected?
reference:
base-conversion.ro
update:
fmt.Printf("16777216.0:%b\n", math.Float32bits(float32(16777216.0)))
fmt.Printf("16777217.0:%b\n", math.Float32bits(float32(16777217.0)))
//16777216.0:1001011100000000000000000000000
//16777217.0:1001011100000000000000000000000
fmt.Printf("16777218.0:%b\n", math.Float32bits(float32(16777218.0)))
//16777218.0:1001011100000000000000000000001
fmt.Printf("16777219.0:%b\n", math.Float32bits(float32(16777219.0)))
fmt.Printf("16777220.0:%b\n", math.Float32bits(float32(16777220.0)))
fmt.Printf("16777221.0:%b\n", math.Float32bits(float32(16777221.0)))
//16777219.0:1001011100000000000000000000010
//16777220.0:1001011100000000000000000000010
//16777221.0:1001011100000000000000000000010
fmt.Printf("000:%f\n", math.Float32frombits(0b_10010111_00000000000000000000000))
// 000:16777216.000000
fmt.Printf("001:%f\n", math.Float32frombits(0b_10010111_00000000000000000000001))
// 001:16777218.000000
fmt.Printf("010:%f\n", math.Float32frombits(0b_10010111_00000000000000000000010))
// 010:16777220.000000
fmt.Printf("011:%f\n", math.Float32frombits(0b_10010111_00000000000000000000011))
// 011:16777222.000000
What are the rules?

Go gives the correct IEEE-754 binary floating point result - round to nearest, ties to even.
The Go Programming Language Specification
float32 the set of all IEEE-754 32-bit floating-point numbers
Decimal
16777219
is binary
1000000000000000000000011
For IEEE-754 32-bit binary floating point, the mantissa (significand) holds only 24 bits, so the 25-bit value must be rounded; written with a 24-bit integer part it is
100000000000000000000001.1 × 2^1
Round to nearest, ties to even gives
100000000000000000000010
Removing the implicit one bit for the 23-bit mantissa gives
00000000000000000000010
package main

import (
    "fmt"
    "math"
)

func main() {
    const n = 16777219
    fmt.Printf("%d\n", n)
    fmt.Printf("%b\n", n)
    f := float32(n)
    fmt.Printf("%g\n", f)
    fmt.Printf("%b\n", math.Float32bits(f))
}
https://go.dev/play/p/yMaVkuiSJ5A
16777219
1000000000000000000000011
1.677722e+07
1001011100000000000000000000010
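A small sketch makes the spacing concrete: from 2^24 upward, adjacent float32 values are 2 apart, so the odd integers in between have to round to a neighbour. A minimal Go illustration using math.Nextafter32:

package main

import (
    "fmt"
    "math"
)

func main() {
    f := float32(16777216) // 2^24: all 24 mantissa bits are used for the integer part
    next := math.Nextafter32(f, math.MaxFloat32)
    fmt.Println(next - f) // 2: the next representable float32 is 2 away
}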

Why is the result not as expected? Because you're expecting the wrong result.
The IEEE 754 standard specifies that, per Wikipedia:
if the number falls midway, it is rounded to the nearest value with an even least significant digit.
So when rounding 16777219, which is midway between 16777218 and 16777220, the "round up" option 16777220 gives an even least-significant mantissa bit, and that is the correct result you're observing.
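As a quick check of the ties-to-even rule, both midway values 16777219 and 16777221 convert to a float32 whose stored mantissa ends in an even (0) bit; a minimal Go sketch:

package main

import (
    "fmt"
    "math"
)

func main() {
    for _, n := range []int{16777219, 16777221} {
        f := float32(n)
        bits := math.Float32bits(f)
        fmt.Printf("%d -> %g (mantissa LSB = %d)\n", n, f, bits&1)
    }
    // Both print 1.677722e+07 with mantissa LSB = 0, i.e. 16777220.
}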

Related

Why does the arithmetic expression result get truncated?

If I run $((0x100 - 0 & 0xff)), I got 0.
However $((0x100 - 0)) gives me 256.
Why did the result from the first expression get truncated?
Because & is a bitwise operator, and there are no matching bits in 0x100 and 0xff.
What that means is that it looks at the bits that make up your numbers, and you get a 1 back in each position where both inputs have a 1.
So if you do $((0x06 & 0x03))
In binary you end up with
6 = 0110
3 = 0011
So when you bitwise AND those together, you'll get
0010 (binary) or 0x02
For the numbers you have, there are no bits in common:
0x100 in binary is
0000 0001 0000 0000
0xff in binary is
0000 0000 1111 1111
If you bitwise and them together, there are no matching bits, so you'll end up with
0000 0000 0000 0000
Interestingly, it does the subtraction before it does the bitwise AND operation (I expected it to be the other way around):
$((0x100 - 1 & 0xff)) gives 255 or 0xff because 0x100 - 1 = 0xff
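The same examples in Go, for comparison (note that Go's & binds tighter than -, so the parentheses below reproduce the shell's evaluation order explicitly):

package main

import "fmt"

func main() {
    fmt.Println(0x06 & 0x03)        // 2: only bit 1 is set in both operands
    fmt.Println(0x100 & 0xff)       // 0: the operands share no bits
    fmt.Println((0x100 - 0) & 0xff) // 0: the shell does the subtraction first
    fmt.Println((0x100 - 1) & 0xff) // 255: 0x100 - 1 = 0xff
}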

Why does the CRC of "1" yield the generator polynomial itself?

While testing a CRC implementation, I noticed that the CRC of 0x01 usually (?) seems to be the polynomial itself. When trying to manually do the binary long division however, I keep ending up losing the leading "1" of the polynomial, e.g. with a message of "0x01" and the polynomial "0x1021", I would get
1 0000 0000 0000 (zero padded value)
(XOR) 1 0000 0010 0001
-----------------
0 0000 0010 0001 = 0x0021
But any sample implementation (I'm dealing with XMODEM-CRC here) results in 0x1021 for the given input.
Looking at https://en.wikipedia.org/wiki/Computation_of_cyclic_redundancy_checks, I can see how the XOR step of the upper bit leaving the shift register with the generator polynomial will cause this result. What I don't get is why this step is performed in that manner at all, seeing as it clearly alters the result of a true polynomial division?
I just read http://www.ross.net/crc/download/crc_v3.txt and noticed that in section 9, there is mention of an implicitly prepended 1 to enforce the desired polynomial width.
In my example case, this means that the actual polynomial used as divisor would not be 0x1021, but 0x11021. This results in the leading "1" being dropped, and the remainder being the "intended" 16-bit polynomial:
1 0000 0000 0000 0000 (zero padded value)
(XOR) 1 0001 0000 0010 0001
-----------------
0 0001 0000 0010 0001 = 0x1021
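To make that concrete, here is a minimal bit-by-bit CRC-16/XMODEM sketch in Go (polynomial 0x1021, initial value 0, no reflection - the usual XMODEM parameters); feeding it the single byte 0x01 returns the polynomial itself:

package main

import "fmt"

// crc16XMODEM processes the input one bit at a time through a 16-bit
// shift register, XORing in the low 16 bits of the implicitly 17-bit
// polynomial 0x11021 whenever the top bit shifts out.
func crc16XMODEM(data []byte) uint16 {
    var crc uint16
    for _, b := range data {
        crc ^= uint16(b) << 8
        for i := 0; i < 8; i++ {
            if crc&0x8000 != 0 {
                crc = crc<<1 ^ 0x1021
            } else {
                crc <<= 1
            }
        }
    }
    return crc
}

func main() {
    fmt.Printf("%#04x\n", crc16XMODEM([]byte{0x01})) // 0x1021
}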

How to bitmask a number (in hex) using the AND operator?

I know that you can bitmask by ANDing a value with 0. However, how can I bitmask certain nibbles while maintaining others? In other words, if I have 0x000f0b7c and I wanted to mask everything but the b (so my result would be 0x00000b00), how would I use AND to do this? Would it require multiple steps?
You can better understand boolean operations if you represent values in binary form.
The AND operation between two binary digits returns 1 if both the binary digits have a value of 1, otherwise it returns 0.
Suppose you have two binary digits a and b, you can build the following "truth table":
a | b | a AND b
---+---+---------
0 | 0 | 0
1 | 0 | 0
0 | 1 | 0
1 | 1 | 1
The masking operation consists of ANDing a given value with a "mask" where every bit that needs to be preserved is set to 1, while every bit to discard is set to 0.
This is done by ANDing each bit of the given value with the corresponding bit of the mask.
The given value, 0xf0b7c, can be converted as follows:
f 0 b 7 c (hex)
1111 0000 1011 0111 1100 (bin)
If you want to preserve only the bits corresponding to the "b" value (bits 8..11) you can mask it this way:
f 0 b 7 c
1111 0000 1011 0111 1100
0000 0000 1111 0000 0000
The value 0000 0000 1111 0000 0000 can be converted to hex and has a value of 0xf00.
So if you calculate "0xf0b7c AND 0xf00" you obtain 0xb00.
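In Go, the whole operation is a single AND with the mask (written here as 0x00000f00 to match the example above):

package main

import "fmt"

func main() {
    value := 0x000f0b7c
    mask := 0x00000f00 // 1s only where bits should be preserved (the "b" nibble, bits 8..11)
    fmt.Printf("%#010x\n", value&mask) // 0x00000b00
}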

Are these endian transformations correct?

I am struggling to figure this out, I am trying to represent a 32bit variable in both big and little endian. For the sake of argument let's say we try the number, "666."
Big Endian: 0010 1001 1010 0000 0000 0000 0000
Little Endian: 0000 0000 0000 0000 0010 1001 1010
Is this correct, or is my thinking wrong here?
666 (decimal) as 32-bit binary is represented as:
[0000 0000] [0000 0000] [0000 0010] [1001 1010] (big endian, most significant byte first)
[1001 1010] [0000 0010] [0000 0000] [0000 0000] (little endian, least significant byte first)
(I have used square brackets to group 4-bit nibbles into bytes)
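A short Go sketch (using encoding/binary) shows the two byte orders for 666:

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    var be, le [4]byte
    binary.BigEndian.PutUint32(be[:], 666)
    binary.LittleEndian.PutUint32(le[:], 666)
    fmt.Printf("big endian:    %08b\n", be) // [00000000 00000000 00000010 10011010]
    fmt.Printf("little endian: %08b\n", le) // [10011010 00000010 00000000 00000000]
}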

Algorithm for bitwise fiddling

If I have a 32-bit binary number and I want to replace its lower 16 bits with a 16-bit number that I have, keeping the upper 16 bits, to produce a new binary number - how can I do this using simple bitwise operators?
For example the 32-bit binary number is:
1010 0000 1011 1111 0100 1000 1010 1001
and the lower 16-bit I have is:
0000 0000 0000 0001
so the result is:
1010 0000 1011 1111 0000 0000 0000 0001
how can I do this?
You do this in two steps:
Mask out the bits that you want to replace (AND it with 0s)
Fill in the replacements (OR it with the new bits)
So in your case,
i32 number;
i32 mask_lower_16 = 0xFFFF0000;
i16 newValue;
number = (number AND mask_lower_16) OR newValue;
In an actual programming-language implementation, you may also need to address the issue of sign extension on the 16-bit value. In Java, for example, you have to mask the upper 16 bits of the short like this:
short v = (short) 0xF00D;
int number = 0x12345678;
number = (number & 0xFFFF0000) | (v & 0x0000FFFF);
System.out.println(Integer.toHexString(number)); // "1234f00d"
(original32BitNumber & 0xFFFF0000) | 16bitNumber
Well, I could tell you the answer. But perhaps this is homework. So I won't.
Consider that you have a few options:
| // bitwise OR
^ // bitwise XOR
& // bitwise AND
Maybe draw up a little table and decide which one will give you the right result (when you operate on the right section of your larger binary number).
use & to mask off the low bits and then | to merge the 16 bit value with the 32 bit value
uint a = 0xa0bf48a9;
short b = 1;
uint result = (a & 0xFFFF0000) | b;
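The same mask-then-merge expressed as a runnable Go sketch (constants chosen to match the example in the question):

package main

import "fmt"

func main() {
    var a uint32 = 0xA0BF48A9 // 1010 0000 1011 1111 0100 1000 1010 1001
    var b uint16 = 0x0001     // the 16-bit replacement value

    // AND clears the low 16 bits, OR merges in the new value.
    result := a&0xFFFF0000 | uint32(b)

    fmt.Printf("%032b\n", result) // 10100000101111110000000000000001
}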
