IPv4 Address BigEndian ByteOrder - go

I'm a bit confused about how Go's binary package in the standard library writes an integer into a []byte with big-endian ordering.
For reference, below is the method in the standard library I'm confused with:
func (bigEndian) PutUint32(b []byte, v uint32) {
	_ = b[3] // early bounds check to guarantee safety of writes below
	b[0] = byte(v >> 24)
	b[1] = byte(v >> 16)
	b[2] = byte(v >> 8)
	b[3] = byte(v)
}
Suppose I have an IPv4 address represented as an unsigned 32-bit integer, such as 236194314.
With big-endian ordering, I expect this to be represented as the 4-byte slice [10 10 20 14].
However, PutUint32 stores the most significant byte at the last index, b[3] = byte(v), resulting in [14 20 10 10].
Is there any specific explanation for this?

The number 236194314 is 0E 14 0A 0A in hex, so its most significant byte is indeed 14 decimal (0x0E), and PutUint32 correctly writes it to b[0]. Your IPv4 address, represented as an unsigned 32-bit integer, comes in already byte-reversed.
The problem happened before you converted to a byte slice.
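For illustration, here is a minimal sketch (using encoding/binary from the standard library) showing that PutUint32 does put the most significant byte first, and that the integer actually corresponding to 10.10.20.14 is 0x0A0A140E:
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	buf := make([]byte, 4)

	// 236194314 == 0x0E140A0A, so the big-endian bytes are [14 20 10 10].
	binary.BigEndian.PutUint32(buf, 236194314)
	fmt.Println(buf) // [14 20 10 10]

	// The integer that corresponds to 10.10.20.14 is 0x0A0A140E.
	binary.BigEndian.PutUint32(buf, 0x0A0A140E)
	fmt.Println(buf) // [10 10 20 14]
}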

Get value of one bit from 32 bits

How do you apply a mask to get only one bit after you shift right? Does it depend on how many positions you shifted right?
In a 32-bit structure I'm trying to get the value of the 9th bit and the 10th bit.
x := uint32(11537664)
0000 0000 1011 0000 0000 1101 0000 0000
          ^^
So for the 9th bit, if I shift right 23 bits I need to mask one byte? That seems to isolate the 9th bit because I'm getting a value of 1.
(x >> 23) & 0xff
9th bit...should be 1... looks ok.
00000000000000000000000000000001
0x1
So to get the 10th bit, which should be 0, I am shifting one less bit, which does put the 0 all the way to the right. But there is a 1 after it which needs to be masked. I figured one byte plus one bit for the mask, but I'm still seeing the bit in position two, so that can't be right.
(x >> 22) & 0x1ff
10th bit... should be 0, but this shift and mask does not look correct.
00000000000000000000000000000010
^ This bit I don't want.
0x2
Link to example:
https://play.golang.org/p/zqofCAAKDZz
package main

import (
	"fmt"
)

func bin(i uint32) {
	fmt.Printf("%032b\n", i)
}

func hex(i uint32) {
	fmt.Printf("0x%x\n", i)
}

func show(i uint32) {
	bin(i)
	hex(i)
	fmt.Println()
}

func main() {
	x := uint32(11537664)
	fmt.Println("Data")
	show(x)
	fmt.Println("First 8 bits.")
	show(x >> 24)
	fmt.Println("9th bit...should be 1")
	show((x >> 23) & 0xff)
	fmt.Println("10th bit... should be 0")
	show((x >> 22) & 0x1ff)
}
After the shift you get the number 0b10, and you only need the lowest bit. So why are you masking with 0x1ff? That mask has 9 one bits, so it leaves the lowest 9 bits unchanged (unmasked).
Instead, mask with 0b01 = 0x01. That keeps only the lowest bit and zeroes all the others:
show((x >> 22) & 0x01)
Try it on the Go Playground.
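For completeness, a minimal runnable sketch (same x as in the question) applying the single-bit mask to both positions:
package main

import "fmt"

func main() {
	x := uint32(11537664)
	fmt.Printf("%032b\n", x)
	fmt.Println("9th bit: ", (x>>23)&0x01) // 1
	fmt.Println("10th bit:", (x>>22)&0x01) // 0
}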
Also note that if you just want to test whether a certain bit is one or zero, you don't necessarily have to shift. Masking with a bitmask that contains a single one at that position is enough; you can then compare the masking result with zero.
The proper bitmask for testing the nth bit is simply 1<<n (where bits are zero-indexed). The two bits you want to test are the 22nd and 23rd bits.
See this example:
x := uint32(11537664)
fmt.Printf("x : %032b\n", x)
fmt.Println()
const mask22 = 1 << 22
fmt.Printf("mask22 : %032b\n", mask22)
fmt.Printf("22. bit: %032b %t\n", x&mask22, x&mask22 != 0)
fmt.Println()
const mask23 = 1 << 23
fmt.Printf("mask23 : %032b\n", mask23)
fmt.Printf("23. bit: %032b %t\n", x&mask23, x&mask23 != 0)
It outputs (try it on the Go Playground):
x : 00000000101100000000110100000000
mask22 : 00000000010000000000000000000000
22. bit: 00000000000000000000000000000000 false
mask23 : 00000000100000000000000000000000
23. bit: 00000000100000000000000000000000 true

Bitwise NOT: how to do it without the leading ffffffff

When doing bitwise NOT, I get a lot of leading ff. How do I do this correctly?
space := " "
str := "12345678999298765432179.170.184.81"
sp := len(str) % 4
if sp > 0 {
	str = str + space[0:4-sp]
}
fmt.Println(str, len(str))
hx := hex.EncodeToString([]byte(str))
ln := len(hx)
a, _ := strconv.ParseUint(hx[0:8], 16, 0)
for i := 8; i < ln; i += 8 {
	b, _ := strconv.ParseUint(hx[i:i+8], 16, 0)
	a = a ^ b
}
xh := strconv.FormatUint(^a, 16)
fmt.Println(xh)
output
ffffffffc7c7dbcb
I need only
c7c7dbcb
You get a lot of leading ff because your number a is in fact only 32 bits "large" but is held in a 64-bit uint64 value. (You're processing numbers of 8 hex digits = 4 bytes of data = 32 bits.) It has 4 leading 0 bytes, which when negated turn into ff. You can verify this with:
fmt.Printf("a %#x\n", a)
Outputs:
a 0x38382434
To get rid of those leading ff, convert the result to uint32:
xh := strconv.FormatUint(uint64(uint32(^a)), 16)
fmt.Println(xh)
(Converting back to uint64 is needed because strconv.FormatUint() requires a uint64.)
This outputs:
c7c7dbcb
Another option is to apply a 0xffffffff bitmask:
xh = strconv.FormatUint(^a&0xffffffff, 16)
fmt.Println(xh)
Also note that you could print it using fmt.Printf() (or fmt.Sprintf() if you need it as a string) with the %08x verb, which also adds leading zeros should the input have more than 3 leading 0 bits (in which case strconv.FormatUint() would not add leading hex zeros):
fmt.Printf("%08x", uint32(^a))
This outputs the same. Try the examples on the Go Playground.
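For reference, a minimal runnable sketch combining the code from the question with the fixes above (the four-space padding string is an assumption; the question's original spacing may differ):
package main

import (
	"encoding/hex"
	"fmt"
	"strconv"
)

func main() {
	space := "    " // padding spaces (assumed width)
	str := "12345678999298765432179.170.184.81"
	if sp := len(str) % 4; sp > 0 {
		str = str + space[0:4-sp]
	}
	hx := hex.EncodeToString([]byte(str))
	a, _ := strconv.ParseUint(hx[0:8], 16, 0)
	for i := 8; i < len(hx); i += 8 {
		b, _ := strconv.ParseUint(hx[i:i+8], 16, 0)
		a ^= b
	}
	fmt.Println(strconv.FormatUint(^a, 16))                 // ffffffffc7c7dbcb
	fmt.Println(strconv.FormatUint(uint64(uint32(^a)), 16)) // c7c7dbcb
	fmt.Printf("%08x\n", uint32(^a))                        // c7c7dbcb
}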

Why does golang RGBA.RGBA() method use | and <<?

In the golang color package, there is a method to get r,g,b,a values from an RGBA object:
func (c RGBA) RGBA() (r, g, b, a uint32) {
	r = uint32(c.R)
	r |= r << 8
	g = uint32(c.G)
	g |= g << 8
	b = uint32(c.B)
	b |= b << 8
	a = uint32(c.A)
	a |= a << 8
	return
}
If I were to implement this simple function, I would just write this
func (c RGBA) RGBA() (r, g, b, a uint32) {
	r = uint32(c.R)
	g = uint32(c.G)
	b = uint32(c.B)
	a = uint32(c.A)
	return
}
What's the reason r |= r << 8 is used?
From the excellent "The Go image package" blog post:
[...] the channels have a 16-bit effective range: 100% red is represented by
RGBA returning an r of 65535, not 255, so that converting from CMYK or
YCbCr is not as lossy. Third, the type returned is uint32, even though
the maximum value is 65535, to guarantee that multiplying two values
together won't overflow.
and
Note that the R field of an RGBA is an 8-bit alpha-premultiplied color in the range [0, 255]. RGBA satisfies the Color interface by multiplying that value by 0x101 to generate a 16-bit alpha-premultiplied color in the range [0, 65535]
So if we look at the bit representation of a color with the value c.R = 10101010 then this operation
r = uint32(c.R)
r |= r << 8
effectively copies the first byte to the second byte.
  00000000000000000000000010101010 (r)
| 00000000000000001010101000000000 (r << 8)
------------------------------------
  00000000000000001010101010101010 (r |= r << 8)
This is equivalent to a multiplication with the factor 0x101 and distributes all 256 possible values evenly across the range [0, 65535].
The color.RGBA type implements the RGBA method to satisfy the color.Color interface:
type Color interface {
	// RGBA returns the alpha-premultiplied red, green, blue and alpha values
	// for the color. Each value ranges within [0, 0xffff], but is represented
	// by a uint32 so that multiplying by a blend factor up to 0xffff will not
	// overflow.
	//
	// An alpha-premultiplied color component c has been scaled by alpha (a),
	// so has valid values 0 <= c <= a.
	RGBA() (r, g, b, a uint32)
}
Now the RGBA type represents the colour channels with the uint8 type, giving a range of [0, 0xff]. Simply converting these values to uint32 would not extend the range up to [0, 0xffff].
An appropriate conversion would be something like:
r = uint32((float64(c.R) / 0xff) * 0xffff)
However, they want to avoid the floating point arithmetic. Luckily 0xffff / 0xff is 0x0101, so we can simplify the expression (ignoring the type conversions for now):
r = c.R * 0x0101
= c.R * 0x0100 + c.R
= (c.R << 8) + c.R # multiply by power of 2 is equivalent to shift
= (c.R << 8) | c.R # equivalent, since bottom 8 bits of first operand are 0
And that's essentially what the code in the standard library is doing.
Converting a value in the range 0 to 255 (an 8-bit RGB component) to a value in the range 0 to 65535 (a 16-bit RGB component) would be done by multiplying the 8-bit value by 65535/255. 65535/255 is exactly 257, which is hex 101, so multiplying a one-byte value by 65535/255 can be done by shifting that byte value left 8 bits and ORing it with the original value.
(There's nothing Go-specific about this; similar tricks are done elsewhere, in other languages, when converting 8-bit RGB/RGBA components to 16-bit RGB/RGBA components.)
To convert from 8 to 16 bits per RGB component, copy the byte into both the high and the low byte of the 16-bit value: e.g., 0x03 becomes 0x0303 and 0xFE becomes 0xFEFE, so that the 8-bit values 0 through 255 (0xFF) produce 16-bit values 0 through 65,535 (0xFFFF) with an even distribution of values.
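A quick way to convince yourself of the equivalence is a small self-contained check (not from the original answers):
package main

import "fmt"

func main() {
	// For every 8-bit value, r |= r << 8 equals multiplying by 0x101,
	// mapping 0x00 -> 0x0000 and 0xFF -> 0xFFFF.
	for v := uint32(0); v <= 0xff; v++ {
		r := v
		r |= r << 8
		if r != v*0x101 {
			fmt.Println("mismatch at", v)
		}
	}
	fmt.Printf("0x03 -> %04x\n", 0x03|0x03<<8) // 0303
	fmt.Printf("0xfe -> %04x\n", 0xfe|0xfe<<8) // fefe
}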

go - encoding unsigned 16 bit float in binary

In Go, how can I encode a float into a byte array as a 16 bit unsigned float with 11 explicit bits of mantissa and 5 bits of explicit exponent?
There doesn't seem to be a clean way to do it. The only thing I can think of is encoding it as in Convert byte array "[]uint8" to float64 in GoLang and manually truncating the bits.
Is there a "go" way to do this?
Here's the exact definition:
A 16 bit unsigned float with 11 explicit bits of mantissa and 5 bits of explicit exponent
The bit format is loosely modeled after IEEE 754. For example, 1 microsecond is represented as 0x1, which has an exponent of zero, presented in the 5 high-order bits, and a mantissa of 1, presented in the 11 low-order bits. When the explicit exponent is greater than zero, an implicit high-order 12th bit of 1 is assumed in the mantissa. For example, a floating value of 0x800 has an explicit exponent of 1, as well as an explicit mantissa of 0, but then has an effective mantissa of 4096 (the 12th bit is assumed to be 1). Additionally, the actual exponent is one less than the explicit exponent, and the value represents 4096 microseconds. Any values larger than the representable range are clamped to 0xFFFF.
I am not sure whether I understand the encoding correctly (see my comment on the original question), but here is a function which may do what you want:
func EncodeFloat(seconds float64) uint16 {
	us := math.Floor(1e6*seconds + 0.5)
	if us < 0 {
		panic("cannot encode negative value")
	} else if us > (1<<30)*4095+0.5 {
		return 0xffff
	}
	usInt := uint64(us)
	expBits := uint16(0)
	if usInt >= 2048 {
		exp := uint16(1)
		for usInt >= 4096 {
			exp++
			usInt >>= 1
		}
		usInt -= 2048
		expBits = exp << 11
	}
	return expBits | uint16(usInt)
}
(code is at http://play.golang.org/p/G599VOBMcL )
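To sanity-check the encoding, here is a sketch of the inverse: it simply reverses the EncodeFloat function above (under the same reading of the spec, which the answer itself is not certain about) and returns microseconds:
func DecodeFloat(v uint16) uint64 {
	mantissa := uint64(v & 0x7ff)
	exp := v >> 11
	if exp == 0 {
		return mantissa // no implicit bit, the value is the mantissa itself
	}
	// Explicit exponent > 0: an implicit high-order mantissa bit is assumed,
	// and the actual exponent is one less than the explicit exponent.
	return (mantissa + 2048) << (exp - 1)
}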

Breaking a 32 bit integer into 8 bit chucks for Radix Sort

I am basically a beginner in Computer Science. Please forgive me if I ask elementary questions. I am trying to understand radix sort. I read that a 32-bit unsigned integer can be broken down into 4 8-bit chunks. After that, all it takes is "4 passes" to complete the radix sort. Can somebody please show me an example of how this breakdown (32 bits into 4 8-bit chunks) works? Maybe with a 32-bit integer like 2147507648.
Thanks!
You would divide the 32-bit integer up into 4 pieces of 8 bits. Extracting those pieces is a matter of using some of the bitwise operators available in C:
uint32_t x = 2147507648;
uint8_t chunk1 = x & 0x000000ff; //lower 8 bits
uint8_t chunk2 = (x & 0x0000ff00) >> 8;
uint8_t chunk3 = (x & 0x00ff0000) >> 16;
uint8_t chunk4 = (x & 0xff000000) >> 24; //highest 8 bits
2147507648 decimal is 0x80005DC0 hex. You can pretty much eyeball those 8-bit chunks out of the hex representation, since each hex digit represents 4 bits, so two of them together represent 8 bits.
So chunk1 is 0xC0, chunk2 is 0x5D, chunk3 is 0x00 and chunk4 is 0x80.
It's done as follows:
2147507648
=> 0x80005DC0 (hex value of 2147507648)
=> 0x80 0x00 0x5D 0xC0
=> 128 0 93 192
To do this, you'd need bitwise operations as nos suggested.
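Since the rest of this collection is in Go, here is the same extraction as a small Go sketch, using the shift-and-mask that an LSD radix sort would apply on each of its four passes:
package main

import "fmt"

func main() {
	x := uint32(2147507648) // 0x80005DC0

	// During pass p of an LSD radix sort (p = 0..3), the bucket index
	// for x is its p-th byte, extracted with a shift and an 8-bit mask.
	for p := uint(0); p < 4; p++ {
		chunk := uint8(x >> (8 * p) & 0xff)
		fmt.Printf("pass %d: chunk 0x%02X (%d)\n", p, chunk, chunk)
	}
}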
