Convert [8]byte to a uint64 - go

Hi all. I'm encountering what seems to be a very strange problem. (It could be that it's far past when I should be asleep and I'm overlooking something obvious.)
I have a []byte with length 8 as the result of some hex decoding, and I need to produce a uint64 from it in order to use it. I have tried using binary.Uvarint() from encoding/binary to do so, but it seems to use only the first byte in the slice. Consider the following example.
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
	num, _ := binary.Uvarint(array[0:8])
	fmt.Printf("%v, %x\n", array, num)
}
Here it is on play.golang.org.
When that is run, it displays num as 0, even though, in hex, it should be 000108000801ab01. Furthermore, if one catches the second return value from binary.Uvarint(), it is the number of bytes read from the buffer, which to my knowledge should be 8, yet it is actually 1.
Am I interpreting this wrong? If so, what should I be using instead?
Thanks, you all. :)

You're decoding with a function that doesn't do what you need:
Varints are a method of encoding integers using one or more bytes;
numbers with smaller absolute value take a smaller number of bytes.
For a specification, see
http://code.google.com/apis/protocolbuffers/docs/encoding.html.
It's not a plain fixed-width encoding but a very specific variable-length encoding. That's why decoding stops at the first byte whose value is less than 0x80, i.e. whose high bit is clear.
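You can see this with the question's own array: its first byte is 0x00, whose continuation (high) bit is clear, so the varint ends after a single byte:

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
	// 0x00 has its high bit clear, so the varint is complete after
	// one byte: value 0, one byte consumed.
	num, n := binary.Uvarint(array)
	fmt.Println(num, n) // 0 1
}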
As pointed out by Stephen, binary.BigEndian and binary.LittleEndian provide useful functions to decode directly:
type ByteOrder interface {
	Uint16([]byte) uint16
	Uint32([]byte) uint32
	Uint64([]byte) uint64
	PutUint16([]byte, uint16)
	PutUint32([]byte, uint32)
	PutUint64([]byte, uint64)
	String() string
}
So you may use
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
	num := binary.LittleEndian.Uint64(array)
	fmt.Printf("%v, %x", array, num)
}
or, if you want to check errors instead of panicking (thanks to jimt for pointing out this problem with the direct solution):
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

func main() {
	array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
	var num uint64
	err := binary.Read(bytes.NewBuffer(array[:]), binary.LittleEndian, &num)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%v, %x", array, num)
}
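Note that the expected value in the question, 000108000801ab01, corresponds to big-endian byte order, so if that's the number you're after, swap in binary.BigEndian (a minimal variation of the first example):

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
	// Big-endian: most significant byte first.
	num := binary.BigEndian.Uint64(array)
	fmt.Printf("%016x\n", num) // 000108000801ab01
}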

If you don't care about byte order, you can try this:
arr := [8]byte{1,2,3,4,5,6,7,8}
num := *(*uint64)(unsafe.Pointer(&arr[0]))
http://play.golang.org/p/aM2r40ANQC
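Keep in mind the result of this trick depends on the CPU's byte order; a self-contained version of the same fragment (the commented value assumes a little-endian machine):

package main

import (
	"fmt"
	"unsafe"
)

func main() {
	arr := [8]byte{1, 2, 3, 4, 5, 6, 7, 8}
	// Reinterpret the array's memory as a uint64; the result depends
	// on the CPU's byte order (0x0807060504030201 on little-endian).
	num := *(*uint64)(unsafe.Pointer(&arr[0]))
	fmt.Printf("%#x\n", num)
}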

If you look at the implementation of Uvarint you will see that it is not as straightforward a conversion as you might expect.
To be honest, I haven't yet figured out what kind of byte format it expects (see edit).
But writing your own is close to trivial:
// Uvarint interprets buf as a big-endian integer, consuming at most 8 bytes.
func Uvarint(buf []byte) (x uint64) {
	for i, b := range buf {
		x = x<<8 + uint64(b)
		if i == 7 {
			return
		}
	}
	return
}
Edit
The byte format is not one I am familiar with.
It is a variable-width encoding where the highest bit of each byte is a flag.
If set to 0, that byte is the last in the sequence.
If set to 1, the encoding continues with the next byte.
Only the lower 7 bits of each byte are used to build the uint64 value. The first byte sets the lowest 7 bits of the uint64, the following byte bits 7-13, and so on.
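A decoder for that scheme, following the description above, might look like this (a minimal sketch without the overflow checks the stdlib performs; decodeUvarint is a hypothetical name):

// decodeUvarint decodes the variable-width format described above:
// 7 payload bits per byte, high bit set means "more bytes follow".
func decodeUvarint(buf []byte) (x uint64, n int) {
	var shift uint
	for i, b := range buf {
		if b < 0x80 { // high bit clear: this is the last byte
			return x | uint64(b)<<shift, i + 1
		}
		x |= uint64(b&0x7f) << shift
		shift += 7
	}
	return 0, 0 // buffer too small
}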

Related

Color operation in go

There are some simple color operations here, but the output is wrong. I'm just wondering what happened.
main.go:
package main

import (
	"fmt"
	"image/color"
)

func main() {
	startColor := color.RGBA{0x34, 0xeb, 0x64, 0xff}
	endColor := color.RGBA{0x34, 0xc9, 0xeb, 0xff}
	fmt.Printf("%d-%d=%d\n", endColor.G, startColor.G, endColor.G-startColor.G)
}
output:
201-235=222
color.RGBA.G is a uint8. Since 235 is bigger than 201 and a uint8 can't store a negative number like -34, the value wraps around instead: 201 - 235 = -34, which wraps to 256 - 34 = 222.
There's nothing color specific about the situation.
You get the same answer (222) with:
var g1, g2 uint8 = 0xc9, 0xeb
fmt.Println(g1 - g2)
So nothing unusual, just standard Go unsigned integer overflow wrapping. It isn't even undefined behavior.
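If you actually want the signed difference, convert to a wider signed type before subtracting:

var g1, g2 uint8 = 0xc9, 0xeb
// Converting to int first preserves the sign of the result.
fmt.Println(int(g1) - int(g2)) // -34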

How to add an ASCII value of a byte to an integer in golang?

I'd like to add a []byte variable's ASCII value to an integer. For example, I first read all input into a []byte buffer, and then I find a number string "123" in it. I could then compute the integer as ('1' - '0')*100 + ('2' - '0')*10 + ('3' - '0'). But I cannot assign byte variables to integers directly. How can I do that by any means? Thank you very much :)
In Go, you can convert a byte slice to a string with the string() conversion and then use strconv.Atoi on it. Presumably, you also want to use slice operations to isolate just the part of the input you want to convert.
package main

import (
	"fmt"
	"strconv"
)

func main() {
	data := []byte{0x20, 0x31, 0x32, 0x33, 0x20} // " 123 " -- embedded number
	// The string(...) conversion yields a string from a byte buffer.
	// The number starts at index 1 and ends before index 4.
	str := string(data[1:4])
	i, err := strconv.Atoi(str)
	if err != nil {
		fmt.Printf("Error %v\n", err)
		return
	}
	fmt.Printf("value %v\n", i)
}
Prints
value 123
And since Go has nicely hygienic practices, errors are handled too.
If you need to read an integer from a stream of bytes, the quickest way is just to scan it with fmt. Example:
n := 0
_, err := fmt.Scanf("%d", &n)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("you entered %d\n", n)

Go - Convert 2 byte array into a uint16 value

If I have a slice of bytes in Go, similar to this:
numBytes := []byte { 0xFF, 0x10 }
How would I convert it to its uint16 value (0xFF10, 65296)?
You may use binary.BigEndian.Uint16(numBytes), like in this working sample code (output in comments):
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	numBytes := []byte{0xFF, 0x10}
	u := binary.BigEndian.Uint16(numBytes)
	fmt.Printf("%#X %[1]v\n", u) // 0XFF10 65296
}
and see inside binary.BigEndian.Uint16(b []byte):
func (bigEndian) Uint16(b []byte) uint16 {
	_ = b[1] // bounds check hint to compiler; see golang.org/issue/14808
	return uint16(b[1]) | uint16(b[0])<<8
}
I hope this helps.
To combine two bytes into a uint16:
x := uint16(numBytes[i])<<8 | uint16(numBytes[i+1])
where i is the starting position of the uint16. So if your slice always has exactly two items, it would be x := uint16(numBytes[0])<<8 | uint16(numBytes[1]).
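If you have a longer slice of big-endian byte pairs, the same expression extends naturally to a loop (a minimal sketch; decodeBE16 is a hypothetical helper):

// decodeBE16 converts a slice of big-endian byte pairs into uint16 values.
func decodeBE16(b []byte) []uint16 {
	out := make([]uint16, 0, len(b)/2)
	for i := 0; i+1 < len(b); i += 2 {
		out = append(out, uint16(b[i])<<8|uint16(b[i+1]))
	}
	return out
}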
Firstly, you have a slice, not an array - an array has a fixed size and would be declared like this: [2]byte.
If you just have a 2-byte slice, I wouldn't do anything fancy, I'd just do
numBytes := []byte{0xFF, 0x10}
n := int(numBytes[0])<<8 + int(numBytes[1])
fmt.Printf("n = 0x%04X = %d\n", n, n)
Playground
EDIT: Just noticed you wanted uint16 - replace int with that in the above!
You can use the following unsafe conversion:
*(*uint16)(unsafe.Pointer(&numBytes[0]))
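Note that this reads the bytes in the machine's native order, so on a little-endian CPU it yields 0x10FF rather than the 0xFF10 asked for; a runnable sketch to see for yourself:

package main

import (
	"fmt"
	"unsafe"
)

func main() {
	numBytes := []byte{0xFF, 0x10}
	// Native byte order: on a little-endian CPU this prints 0x10ff,
	// not the big-endian 0xff10 the question asks for.
	u := *(*uint16)(unsafe.Pointer(&numBytes[0]))
	fmt.Printf("%#x\n", u)
}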

Sum hexadecimal on Golang

Thanks for reading my question.
I am trying to compute an ASTM checksum in Go, but I couldn't figure out (nor could Google tell me) how to convert a string or byte to a numeric value that I can sum.
Please let me request help, thanks.
In Go, how do I convert a character to a hexadecimal value that allows performing a sum?
Example:
// Convert character "a" to hex 0x61 (I understand this will not work for my case, as it becomes a string.)
hex := fmt.Sprintf("%x", "a")
// Sum the 0x61 with 0x01 so it becomes 0x62 = "b"
fmt.Printf("%v", hex + 0x01)
Thank you so much and please have a nice day.
Thanks to everyone answering my question! peterSO's and ANisus's answers both solved my problem. Please let me choose ANisus's reply as the answer, as it includes the ASTM special characters. I wish StackOverflow could choose multiple answers. Thanks to everybody answering me, and please have a nice day!
Intermernet's answer shows you how to convert a hexadecimal string into an int value.
But your question seems to suggest that you want to get the code point value of the letter 'a' and then do arithmetic on that value. To do this, you don't need hexadecimal. You can do the following:
package main

import "fmt"

func main() {
	// Get the code point value of 'a', which is 0x61
	val := 'a'
	// Sum the 0x61 with 0x01 so it will become 0x62 = 'b'
	fmt.Printf("%v", string(val+0x01))
}
Result:
b
Playground: http://play.golang.org/p/SbsUHIcrXK
Edit:
Doing the actual ASTM checksum from a string using the algorithm described here can be done with the following code:
package main

import (
	"fmt"
)

const (
	ETX = 0x03
	ETB = 0x17
	STX = 0x02
)

func ASTMCheckSum(frame string) string {
	var sumOfChars uint8
	// Take each byte in the string and add its value.
	for i := 0; i < len(frame); i++ {
		byteVal := frame[i]
		sumOfChars += byteVal
		if byteVal == STX {
			sumOfChars = 0
		}
		if byteVal == ETX || byteVal == ETB {
			break
		}
	}
	// Return as hex value in upper case.
	return fmt.Sprintf("%02X", sumOfChars)
}

func main() {
	data := "\x025R|2|^^^1.0000+950+1.0|15|||^5^||V||34001637|20080516153540|20080516153602|34001637\r\x033D\r\n"
	//fmt.Println(data)
	fmt.Println(ASTMCheckSum(data))
}
Result:
3D
Playground: http://play.golang.org/p/7cbwryZk8r
You can use ParseInt from the strconv package.
ParseInt interprets a string s in the given base (2 to 36) and returns the corresponding value i. If base == 0, the base is implied by the string's prefix: base 16 for "0x", base 8 for "0", and base 10 otherwise.
package main

import (
	"fmt"
	"strconv"
)

func main() {
	start := "a"
	result, err := strconv.ParseInt(start, 16, 0)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x", result+1)
}
Playground
You do not want to "convert a character to hex", because hexadecimal (and decimal and binary and all other base-N representations of integers) exist for displaying numbers to humans and reading them back in. A computer is free to store the numbers it operates on in any form it wishes; most (all?) real-world computers store them in binary form, using bits, but they don't have to.
What I'm leading you to is that you actually want to convert your character, which represents a number in hexadecimal notation (a "display form"), to a number (what computers operate on). For this, you can either use the strconv package as already suggested or roll your own simple conversion code. Or you can just mimic the one in the encoding/hex standard package; see its fromHexChar function.
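That helper is unexported, but rolling your own looks like this (a minimal sketch modeled on encoding/hex's fromHexChar):

// fromHexChar converts a hex digit to its numeric value, returning
// false if the byte is not a valid hexadecimal character.
func fromHexChar(c byte) (byte, bool) {
	switch {
	case '0' <= c && c <= '9':
		return c - '0', true
	case 'a' <= c && c <= 'f':
		return c - 'a' + 10, true
	case 'A' <= c && c <= 'F':
		return c - 'A' + 10, true
	}
	return 0, false
}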
For example,
package main

import "fmt"

func ASTMCheckSum(data []byte) []byte {
	cs := byte(0)
	for _, b := range data {
		cs += b
	}
	return []byte(fmt.Sprintf("%02X", cs))
}

func main() {
	data := []byte{0x01, 0x08, 0x1f, 0xff, 0x07}
	fmt.Printf("%x\n", data)
	cs := ASTMCheckSum(data)
	fmt.Printf("%s\n", cs)
}
Output:
01081fff07
2E

Decoding data from a byte slice to Uint32

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

func main() {
	aa := uint(0xFFFFFFFF)
	fmt.Println(aa)
	byteNewbuf := []byte{0xFF, 0xFF, 0xFF, 0xFF}
	buf := bytes.NewBuffer(byteNewbuf)
	tt, _ := binary.ReadUvarint(buf)
	fmt.Println(tt)
}
I need to convert a 4-byte array to a uint32, but why are the results not the same?
Go version: 1.1 beta
You can do this with one of the ByteOrder objects from the encoding/binary package. For instance:
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	aa := uint(0x7FFFFFFF)
	fmt.Println(aa)
	slice := []byte{0xFF, 0xFF, 0xFF, 0x7F}
	tt := binary.LittleEndian.Uint32(slice)
	fmt.Println(tt)
}
If your data is in big endian format, you can instead use the same methods on binary.BigEndian.
tt := uint32(buf[0])<<24 | uint32(buf[1])<<16 | uint32(buf[2])<<8 | uint32(buf[3])
for BE, or
tt := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
for LE.
A [u]varint is a different kind of encoding (32-bit numbers can take as many as 5 bytes in encoded form, 64-bit numbers up to 10).
No need to create a buffer for a []byte; use Varint or Uvarint directly on the byte slice instead.
You're also throwing away the error returned by the function. The second result indicates how many bytes were read, or whether there was a problem. There is a problem when decoding 0xff, 0xff, 0xff, 0xff as a uvarint.
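Checking that result makes the failure visible; a quick check of the question's input:

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

func main() {
	buf := bytes.NewBuffer([]byte{0xFF, 0xFF, 0xFF, 0xFF})
	tt, err := binary.ReadUvarint(buf)
	// All four bytes have the continuation bit set, so the reader runs
	// out of input and returns an error instead of a decoded value.
	fmt.Println(tt, err) // 0 EOF
}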
Here is how to use the encoding/binary package to do what you want. Note that you don't want to use any of the var functions as those do variable length encoding.
Playground version
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"log"
)

func main() {
	aa := uint(0xFFFFFF0F)
	fmt.Println(aa)
	tt := uint32(0)
	byteNewbuf := []byte{0x0F, 0xFF, 0xFF, 0xFF}
	buf := bytes.NewBuffer(byteNewbuf)
	err := binary.Read(buf, binary.LittleEndian, &tt)
	if err != nil {
		log.Fatalf("Decode failed: %s", err)
	}
	fmt.Println(tt)
}
Result is
4294967055
4294967055
Numeric types
byte alias for uint8
Since byte is an alias for uint8, your question, "Need to convert 4 bytes array to uint32", has already been answered:
How to convert [4]uint8 into uint32 in Go?
Package binary
[Uvarints and] Varints are a method of encoding integers using one
or more bytes; numbers with smaller absolute value take a smaller
number of bytes. For a specification, see
http://code.google.com/apis/protocolbuffers/docs/encoding.html.
Since Uvarints are a peculiar form of integer representation and storage, you should only use the ReadUvarint function on values that have been written with the Uvarint function.
For example,
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

func main() {
	buf := make([]byte, 10)
	x := uint64(0xFFFFFFFF)
	fmt.Printf("%2d %2d %v\n", x, len(buf), buf)
	n := binary.PutUvarint(buf, x)
	buf = buf[:n]
	fmt.Printf("%2d %2d %v\n", x, len(buf), buf)
	y, err := binary.ReadUvarint(bytes.NewBuffer(buf))
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%2d %2d %v\n", y, len(buf), buf)
}
Output:
4294967295 10 [0 0 0 0 0 0 0 0 0 0]
4294967295 5 [255 255 255 255 15]
4294967295 5 [255 255 255 255 15]
