Sum hexadecimal in Go

Thanks for reading my question.
I am trying to compute an ASTM checksum in Go, but I couldn't figure out, by myself or with Google, how to convert a string or byte into a numeric value that I can sum. Please let me request help, thanks.
In Go, how can I convert a character to its numeric (hexadecimal) value so that I can perform a sum on it?
Example:
// Convert the character "a" to hex 0x61. (I understand this will not
// work for my case, as the result is a string.)
hex := fmt.Sprintf("%x", "a")

// Sum the 0x61 with 0x01 so it becomes 0x62 = "b".
// (This does not compile: hex is a string, not a number.)
fmt.Printf("%v", hex + 0x01)
Thank you so much and please have a nice day.
Thanks to everyone who answered my question! peterSO's and ANisus's answers both solved my problem. Please let me choose ANisus's reply as the answer, as it includes the ASTM special characters. I wish Stack Overflow allowed choosing multiple answers. Thanks everybody, and please have a nice day!

Intermernet's answer shows you how to convert a hexadecimal string into an int value.
But your question seems to suggest that you want to get the code point value of the letter 'a' and then do arithmetic on that value. To do this, you don't need hexadecimal. You can do the following:
package main

import "fmt"

func main() {
    // Get the code point value of 'a', which is 0x61.
    val := 'a'

    // Sum the 0x61 with 0x01 so it becomes 0x62 = 'b'.
    fmt.Printf("%v", string(val+0x01))
}
Result:
b
Playground: http://play.golang.org/p/SbsUHIcrXK
Edit:
Doing the actual ASTM checksum from a string using the algorithm described here can be done with the following code:
package main

import (
    "fmt"
)

const (
    ETX = 0x03
    ETB = 0x17
    STX = 0x02
)

func ASTMCheckSum(frame string) string {
    var sumOfChars uint8

    // Take each byte in the string and add its value.
    for i := 0; i < len(frame); i++ {
        byteVal := frame[i]
        sumOfChars += byteVal
        if byteVal == STX {
            sumOfChars = 0
        }
        if byteVal == ETX || byteVal == ETB {
            break
        }
    }

    // Return the checksum as a hex value in upper case.
    return fmt.Sprintf("%02X", sumOfChars)
}

func main() {
    data := "\x025R|2|^^^1.0000+950+1.0|15|||^5^||V||34001637|20080516153540|20080516153602|34001637\r\x033D\r\n"
    //fmt.Println(data)
    fmt.Println(ASTMCheckSum(data))
}
Result:
3D
Playground: http://play.golang.org/p/7cbwryZk8r

You can use ParseInt from the strconv package.
ParseInt interprets a string s in the given base (2 to 36) and returns the corresponding value i. If base == 0, the base is implied by the string's prefix: base 16 for "0x", base 8 for "0", and base 10 otherwise.
package main

import (
    "fmt"
    "strconv"
)

func main() {
    start := "a"
    result, err := strconv.ParseInt(start, 16, 0)
    if err != nil {
        panic(err)
    }
    fmt.Printf("%x", result+1) // b
}
Playground

You do not want to "convert a character to hex", because hexadecimal (like decimal, binary, and every other base-N representation) exists for displaying numbers to humans and reading them back. A computer is free to store the numbers it operates on in any form it wishes; most (all?) real-world computers happen to store them in binary form, using bits, but they don't have to.
What I'm leading you to is that you actually want to convert a character that represents a number in hexadecimal notation (the "display form") into a number (what computers operate on). For this, you can either use the strconv package as already suggested or roll your own simple conversion code. Or you can just grab one from the encoding/hex standard package; see its fromHexChar function.
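For illustration, here is a minimal sketch of such a hand-rolled conversion, modeled on the unexported fromHexChar helper in encoding/hex (the function name here is my own):

package main

import "fmt"

// hexCharToByte converts one hexadecimal digit to its numeric value,
// mirroring the unexported fromHexChar in encoding/hex.
func hexCharToByte(c byte) (byte, bool) {
    switch {
    case '0' <= c && c <= '9':
        return c - '0', true
    case 'a' <= c && c <= 'f':
        return c - 'a' + 10, true
    case 'A' <= c && c <= 'F':
        return c - 'A' + 10, true
    }
    return 0, false
}

func main() {
    v, ok := hexCharToByte('a')
    fmt.Println(v, ok)      // 10 true
    fmt.Printf("%x\n", v+1) // b
}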

For example,
package main
import "fmt"
func ASTMCheckSum(data []byte) []byte {
    cs := byte(0)
    for _, b := range data {
        cs += b
    }
    return []byte(fmt.Sprintf("%02X", cs))
}

func main() {
    data := []byte{0x01, 0x08, 0x1f, 0xff, 0x07}
    fmt.Printf("%x\n", data)
    cs := ASTMCheckSum(data)
    fmt.Printf("%s\n", cs)
}
Output:
01081fff07
2E

Related

How to convert an sha3 hash to a big integer in golang

I generated a hash value using sha3, and I need to convert it to a big.Int value. Is it possible, or is there a method to get the integer value of the hash?
The following code throws the error "cannot convert type hash.Hash to type int64":
package main

import (
    "fmt"
    "math/big"

    "golang.org/x/crypto/sha3"
)

func main() {
    chall := "hello word"
    b := []byte(chall)
    h := sha3.New224()
    h.Write(b)
    d := make([]byte, 16)
    h.Sum(d)
    val := big.NewInt(int64(h)) // error: cannot convert h (type hash.Hash) to type int64
    fmt.Println(val)
}
TL;DR:
The hash returned by sha3.New224() cannot be converted to a uint64 directly.
There are many hash types, of differing sizes. The Go standard library picks a very generic interface to cover all types of hashes: https://golang.org/pkg/hash/#Hash
type Hash interface {
    io.Writer
    Sum(b []byte) []byte
    Reset()
    Size() int
    BlockSize() int
}
Having said that, some Go hash implementations optionally include extra methods, like hash.Hash64:
type Hash64 interface {
    Hash
    Sum64() uint64
}
others may implement encoding.BinaryMarshaler:
type BinaryMarshaler interface {
    MarshalBinary() (data []byte, err error)
}
which one can use to preserve a hash state.
sha3.New224() does not implement the above two interfaces, but the crc64 hash does; see the sketch below.
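For instance, a minimal sketch of saving crc64 state via MarshalBinary (assuming a reasonably recent standard library, in which hash/crc64 implements encoding.BinaryMarshaler):

package main

import (
    "encoding"
    "fmt"
    "hash/crc64"
)

func main() {
    h := crc64.New(crc64.MakeTable(crc64.ISO))
    h.Write([]byte("partial input"))

    // crc64's hash implements encoding.BinaryMarshaler, so its
    // internal state can be saved now and restored later.
    m, ok := h.(encoding.BinaryMarshaler)
    if !ok {
        panic("crc64 hash does not implement encoding.BinaryMarshaler")
    }
    state, err := m.MarshalBinary()
    if err != nil {
        panic(err)
    }
    fmt.Printf("saved %d bytes of hash state\n", len(state))
}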
To do a runtime check:
h64, ok := h.(hash.Hash64)
if ok {
    fmt.Printf("64-bit: %d\n", h64.Sum64())
}
Working example: https://play.golang.org/p/uLUfw0gMZka
(See Peter's comment for the simpler version of this.)
Interpreting a series of bytes as a big.Int is the same as interpreting a series of decimal digits as an arbitrarily large number. For example, to convert the digits 1234 into a "number", you'd do this:
Start with 0
Multiply by 10 = 0
Add 1 = 1
Multiply by 10 = 10
Add 2 = 12
Multiply by 10 = 120
Add 3 = 123
Multiply by 10 = 1230
Add 4 = 1234
The same applies to bytes. The "digits" are just base-256 rather than base-10:
// Assuming d holds the digest bytes, e.g. d := h.Sum(nil):
val := big.NewInt(0)
for i := 0; i < h.Size(); i++ {
    val.Lsh(val, 8)
    val.Add(val, big.NewInt(int64(d[i])))
}
(Lsh is a left-shift. Left shifting by 8 bits is the same as multiplying by 256.)
Playground
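The simpler version alluded to above is presumably big.Int.SetBytes, which interprets a byte slice as a big-endian unsigned integer in one call; a minimal sketch:

package main

import (
    "fmt"
    "math/big"

    "golang.org/x/crypto/sha3"
)

func main() {
    h := sha3.New224()
    h.Write([]byte("hello word"))
    sum := h.Sum(nil) // the finished digest as a byte slice

    // SetBytes interprets sum as a big-endian unsigned integer.
    val := new(big.Int).SetBytes(sum)
    fmt.Println(val)
}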

Go - Convert a 2-byte array into a uint16 value

If I have a slice of bytes in Go, similar to this:
numBytes := []byte { 0xFF, 0x10 }
How would I convert it to its uint16 value (0xFF10, 65296)?
You may use binary.BigEndian.Uint16(numBytes), as in this working sample (output shown in comments):
package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    numBytes := []byte{0xFF, 0x10}
    u := binary.BigEndian.Uint16(numBytes)
    fmt.Printf("%#X %[1]v\n", u) // 0XFF10 65296
}
and see inside binary.BigEndian.Uint16(b []byte):
func (bigEndian) Uint16(b []byte) uint16 {
    _ = b[1] // bounds check hint to compiler; see golang.org/issue/14808
    return uint16(b[1]) | uint16(b[0])<<8
}
I hope this helps.
To combine two bytes into a uint16:

x := uint16(numBytes[i])<<8 | uint16(numBytes[i+1])

where i is the starting position of the uint16 in the slice. So if your slice always holds exactly two items, it would be x := uint16(numBytes[0])<<8 | uint16(numBytes[1]).
Firstly, you have a slice, not an array; an array has a fixed size and would be declared like this: [2]byte.
If you just have a 2-byte slice, I wouldn't do anything fancy; I'd just do

numBytes := []byte{0xFF, 0x10}
n := int(numBytes[0])<<8 + int(numBytes[1])
fmt.Printf("n = 0x%04X = %d\n", n, n)
Playground
EDIT: Just noticed you wanted uint16 - replace int with that in the above!
You can use the following unsafe conversion:

*(*uint16)(unsafe.Pointer(&numBytes[0]))

Note that this reads the two bytes in the machine's native byte order, so on little-endian hardware it yields 0x10FF rather than 0xFF10.
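A runnable sketch of that approach (the wrapping program is my own; the output comment assumes little-endian hardware):

package main

import (
    "fmt"
    "unsafe"
)

func main() {
    numBytes := []byte{0xFF, 0x10}

    // Reinterpret the first two bytes as a uint16 in native byte order.
    u := *(*uint16)(unsafe.Pointer(&numBytes[0]))
    fmt.Printf("%#X\n", u) // 0X10FF on little-endian hardware
}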

Two's complement and fmt.Printf

So computers use Two's complement to internally represent signed integers. I.e., -5 is represented as ^5 + 1 = "1111 1011".
However, trying to print the binary representation, e.g. the following code:
var i int8 = -5
fmt.Printf("%b", i)
Outputs -101. Not quite what I'd expect. Is the formatting different or is it not using Two's complement after all?
Interestingly, converting to an unsigned int results in the "correct" bit pattern:
var u uint8 = uint8(i)
fmt.Printf("%b", u)

Output is 11111011, exactly the two's complement of -5.
So it seems the value really is stored internally in two's complement, but the formatting prints the unsigned magnitude 101 and prepends a -.
Can somebody clarify this?
I believe the answer lies in how the fmt package formats binary numbers, rather than in the internal representation.
If you take a look at fmt.integer, one of the very first things the function does is convert a negative signed integer to a positive one:

negative := signedness == signed && a < 0
if negative {
    a = -a
}

There is then logic to prepend a - to the string that is output.
In other words, -101 really is a - prepended to 101, the binary representation of 5.
Note: fmt.integer is called from pp.fmtInt64 in print.go, itself called from pp.printArg in the same file.
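So to see the two's-complement bit pattern, convert the value to the unsigned type of the same width before formatting; a minimal sketch:

package main

import "fmt"

func main() {
    var i int8 = -5

    // uint8(i) reinterprets the same bits as unsigned, so %b
    // prints the two's-complement pattern instead of -101.
    fmt.Printf("%08b\n", uint8(i)) // 11111011
}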
Here is a method without using unsafe:
package main

import (
    "fmt"
    "math/bits"
)

func unsigned8(x uint8) []byte {
    b := make([]byte, 8)
    for i := range b {
        if bits.LeadingZeros8(x) == 0 {
            b[i] = 1 // the current high bit is set
        }
        x = bits.RotateLeft8(x, 1)
    }
    return b
}

func signed8(x int8) []byte {
    return unsigned8(uint8(x))
}

func main() {
    b := signed8(-5)
    fmt.Println(b) // [1 1 1 1 1 0 1 1]
}
In this case you could also use [8]byte, but returning a slice as above is better if you have a positive integer and want to trim the leading zeros; see the sketch below.
https://golang.org/pkg/math/bits#RotateLeft
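A minimal sketch of that trimming, reusing unsigned8 from above (the trimming logic is my own):

package main

import (
    "bytes"
    "fmt"
    "math/bits"
)

func unsigned8(x uint8) []byte {
    b := make([]byte, 8)
    for i := range b {
        if bits.LeadingZeros8(x) == 0 {
            b[i] = 1
        }
        x = bits.RotateLeft8(x, 1)
    }
    return b
}

func main() {
    b := unsigned8(5) // [0 0 0 0 0 1 0 1]

    // Trim the leading zeros, keeping a single digit when the value is 0.
    if i := bytes.IndexByte(b, 1); i != -1 {
        b = b[i:]
    } else {
        b = b[len(b)-1:]
    }
    fmt.Println(b) // [1 0 1]
}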
Another way to print the bit pattern of a negative number is to reinterpret it as unsigned through an unsafe pointer (a plain uint8(n) conversion achieves the same):
package main

import (
    "fmt"
    "strconv"
    "unsafe"
)

func bInt8(n int8) string {
    return strconv.FormatUint(uint64(*(*uint8)(unsafe.Pointer(&n))), 2)
}

func main() {
    fmt.Println(bInt8(-5))
}
Output
11111011

How can I convert a zero-terminated byte array to string?

I need to read [100]byte to transfer a bunch of string data.
Because not all of the strings are precisely 100 characters long, the remaining part of the byte array is padded with 0s.
If I convert [100]byte to string by: string(byteArray[:]), the trailing 0s are displayed as ^#^#s.
In C, the string will terminate upon 0, so what's the best way to convert this byte array to string in Go?
Methods that read data into byte slices return the number of bytes read. You should save that number and then use it to create your string. If n is the number of bytes read, your code would look like this:
s := string(byteArray[:n])
To convert the full string, this can be used:
s := string(byteArray[:len(byteArray)])
This is equivalent to:
s := string(byteArray[:])
If for some reason you don't know n, you could use the bytes package to find it, assuming your input doesn't have a null character embedded in it.
n := bytes.Index(byteArray[:], []byte{0})
Or as icza pointed out, you can use the code below:
n := bytes.IndexByte(byteArray[:], 0)
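Putting that together, a minimal runnable sketch (note that IndexByte returns -1 when no NUL is present, which must be handled):

package main

import (
    "bytes"
    "fmt"
)

func main() {
    var byteArray [100]byte
    copy(byteArray[:], "hello")

    // Find the first NUL; fall back to the full length if there is none.
    n := bytes.IndexByte(byteArray[:], 0)
    if n == -1 {
        n = len(byteArray)
    }
    s := string(byteArray[:n])
    fmt.Println(len(s), s) // 5 hello
}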
Use:
s := string(byteArray[:])
Simplistic solution:

str := fmt.Sprintf("%s", byteArray)

I'm not sure how performant this is, though. Note that it keeps the padding zero bytes in the result.
For example,
package main
import "fmt"
func CToGoString(c []byte) string {
    n := -1
    for i, b := range c {
        if b == 0 {
            break
        }
        n = i
    }
    return string(c[:n+1])
}

func main() {
    c := [100]byte{'a', 'b', 'c'}
    fmt.Println("C: ", len(c), c[:4])
    g := CToGoString(c[:])
    fmt.Println("Go:", len(g), g)
}
Output:
C: 100 [97 98 99 0]
Go: 3 abc
The following code is looking for '\0', and under the assumptions of the question the array can be considered sorted, since all non-'\0' bytes precede all '\0' bytes. This assumption won't hold if the array can contain '\0' within the data.
Find the location of the first zero-byte using a binary search, then slice.
You can find the zero-byte like this:
package main

import "fmt"

func FirstZero(b []byte) int {
    // Assumes all non-zero bytes precede all zero bytes.
    if len(b) == 0 || b[0] == '\000' {
        return 0
    }
    min, max := 0, len(b) // invariant: b[min] != 0
    for min+1 < max {
        mid := (min + max) / 2
        if b[mid] == '\000' {
            max = mid
        } else {
            min = mid
        }
    }
    return max
}

func main() {
    b := []byte{1, 2, 3, 0, 0, 0}
    fmt.Println(FirstZero(b)) // 3
}
It may be faster just to naively scan the byte array looking for the zero-byte, especially if most of your strings are short.
When you do not know the exact length of the non-nil bytes in the array, you can trim the NULs first:

string(bytes.Trim(arr[:], "\x00"))

Note that Trim removes leading as well as trailing \x00 bytes.
Use this:

bytes.NewBuffer(byteArray[:]).String()
Only use the following for performance tuning.
package main

import (
    "fmt"
    "unsafe"
)

func BytesToString(b []byte) string {
    return *(*string)(unsafe.Pointer(&b))
}

func StringToBytes(s string) []byte {
    return *(*[]byte)(unsafe.Pointer(&s))
}

func main() {
    b := []byte{'b', 'y', 't', 'e'}
    s := BytesToString(b)
    fmt.Println(s)

    b = StringToBytes(s)
    fmt.Println(string(b))
}
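As an aside, on Go 1.20 or newer, the unsafe package itself provides unsafe.String and unsafe.SliceData for the bytes-to-string direction, avoiding any reliance on slice and string header layout; a minimal sketch:

package main

import (
    "fmt"
    "unsafe"
)

// BytesToString shares b's memory with the returned string.
// Requires Go 1.20+ for unsafe.String and unsafe.SliceData.
func BytesToString(b []byte) string {
    return unsafe.String(unsafe.SliceData(b), len(b))
}

func main() {
    fmt.Println(BytesToString([]byte{'b', 'y', 't', 'e'}))
}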
Though not extremely performant, the only readable solution is:
// Split by the separator and take the first part.
// This has all the characters up to the null, excluding the null itself.
retByteArray := bytes.Split(byteArray[:], []byte{0})[0]

// OR, if you want a true C-like string, including the null character:
retByteArray := bytes.SplitAfter(byteArray[:], []byte{0})[0]
A full example to have a C-style byte array:
package main

import (
    "bytes"
    "fmt"
)

func main() {
    var byteArray = [6]byte{97, 98, 0, 100, 0, 99}
    cStyleString := bytes.SplitAfter(byteArray[:], []byte{0})[0]
    fmt.Println(cStyleString) // [97 98 0]
}
A full example to have a Go style string excluding the nulls:
package main

import (
    "bytes"
    "fmt"
)

func main() {
    var byteArray = [6]byte{97, 98, 0, 100, 0, 99}
    goStyleString := string(bytes.Split(byteArray[:], []byte{0})[0])
    fmt.Println(goStyleString) // ab
}
This allocates a slice of byte slices, so keep an eye on performance if it is used heavily or repeatedly.
Use slices instead of arrays for reading. For example, io.Reader accepts a slice, not an array.
Use slicing instead of zero padding.
Example:
buf := make([]byte, 100)
n, err := myReader.Read(buf)
if n == 0 && err != nil {
    log.Fatal(err)
}
consume(buf[:n]) // consume() sees an exact (not padded) slice of the read data
Here is an option that removes the null bytes:
package main

import "golang.org/x/sys/windows"

func main() {
    b := []byte{'M', 'a', 'r', 'c', 'h', 0}
    s := windows.ByteSliceToString(b)
    println(s == "March") // true
}
https://pkg.go.dev/golang.org/x/sys/unix#ByteSliceToString
https://pkg.go.dev/golang.org/x/sys/windows#ByteSliceToString

Convert [8]byte to a uint64

I'm encountering what seems to be a very strange problem. (It could be that it's far past when I should be asleep, and I'm overlooking something obvious.)
I have a []byte with length 8 as a result of some hex decoding, and I need to produce a uint64 in order to use it. I have tried using binary.Uvarint(), from encoding/binary, to do so, but it seems to only use the first byte in the slice. Consider the following example.
package main
import (
    "encoding/binary"
    "fmt"
)

func main() {
    array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
    num, _ := binary.Uvarint(array[0:8])
    fmt.Printf("%v, %x\n", array, num)
}
Here it is on play.golang.org.
When that is run, it displays num as 0, even though, in hex, it should be 000108000801ab01. Furthermore, the second value returned by binary.Uvarint() is the number of bytes read from the buffer, which, to my knowledge, should be 8, even though it is actually 1.
Am I interpreting this wrong? If so, what should I be using instead?
Thanks, you all. :)
You're decoding with a function that doesn't do what you need:
Varints are a method of encoding integers using one or more bytes;
numbers with smaller absolute value take a smaller number of bytes.
For a specification, see
http://code.google.com/apis/protocolbuffers/docs/encoding.html.
It's not a standard fixed-width encoding but a very specific variable-length encoding. That's why decoding stops at the first byte whose high bit is clear, i.e. whose value is less than 0x80.
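To make that concrete, a small sketch of Uvarint's behavior (the example values are my own):

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    // 0x00 has its high bit clear, so Uvarint treats it as a complete
    // one-byte varint and reports that it consumed a single byte.
    num, n := binary.Uvarint([]byte{0x00, 0x01, 0x08})
    fmt.Println(num, n) // 0 1

    // For comparison, 256 encoded as a varint takes two bytes.
    buf := make([]byte, binary.MaxVarintLen64)
    w := binary.PutUvarint(buf, 256)
    fmt.Println(buf[:w]) // [128 2]
}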
As pointed out by Stephen, binary.BigEndian and binary.LittleEndian provide useful functions to decode directly:
type ByteOrder interface {
    Uint16([]byte) uint16
    Uint32([]byte) uint32
    Uint64([]byte) uint64
    PutUint16([]byte, uint16)
    PutUint32([]byte, uint32)
    PutUint64([]byte, uint64)
    String() string
}
So you may use
package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
    num := binary.LittleEndian.Uint64(array)
    fmt.Printf("%v, %x", array, num)
}
or (if you want to check errors instead of panicking; thanks to jimt for pointing out this problem with the direct solution):
package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
)

func main() {
    array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
    var num uint64
    err := binary.Read(bytes.NewBuffer(array[:]), binary.LittleEndian, &num)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Printf("%v, %x", array, num)
}
If you don't care about byte order, you can try this:
arr := [8]byte{1,2,3,4,5,6,7,8}
num := *(*uint64)(unsafe.Pointer(&arr[0]))
http://play.golang.org/p/aM2r40ANQC
If you look at the function for Uvarint, you will see that it is not as straightforward a conversion as you might expect.
To be honest, I haven't yet figured out what kind of byte format it expects (see edit).
But to write your own is close to trivial:
func Uvarint(buf []byte) (x uint64) {
    // Interpret up to the first 8 bytes as a big-endian uint64.
    for i, b := range buf {
        x = x<<8 + uint64(b)
        if i == 7 {
            return
        }
    }
    return
}
Edit
The byte format is not one I was familiar with.
It is a variable-width encoding where the highest bit of each byte is a flag.
If set to 0, that byte is the last in the sequence.
If set to 1, the encoding continues with the next byte.
Only the lower 7 bits of each byte are used to build the uint64 value. The first byte supplies the lowest 7 bits of the uint64, the following byte bits 7 through 13, and so on; a worked example follows below.
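A worked decode under that description (my own illustration): the bytes [0x80 0x02] contribute 0 from the low 7 bits of the first byte plus 2<<7 from the second, i.e. 256.

package main

import "fmt"

func main() {
    buf := []byte{0x80, 0x02} // the varint encoding of 256
    var x uint64
    var shift uint
    for _, b := range buf {
        x |= uint64(b&0x7F) << shift // the low 7 bits form the next group
        if b&0x80 == 0 {             // high bit clear: this is the last byte
            break
        }
        shift += 7
    }
    fmt.Println(x) // 256
}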
