Two's complement and fmt.Printf - go

So computers use two's complement to represent signed integers internally. I.e., -5 is represented as ^5 + 1 = "1111 1011".
However, trying to print the binary representation, e.g. the following code:
var i int8 = -5
fmt.Printf("%b", i)
Outputs -101. Not quite what I'd expect. Is the formatting different or is it not using Two's complement after all?
Interestingly, converting to an unsigned int results in the "correct" bit pattern:
var u uint8 = uint8(i)
fmt.Printf("%b", u)
Output is 11111011 - exactly the 2s complement of -5.
So it seems to me the value really is stored using two's complement internally, but the formatting is printing the unsigned 5 and prepending a -.
Can somebody clarify this?

I believe the answer lies in how the fmt package formats binary numbers, rather than in the internal representation.
If you take a look at fmt.integer, one of the first things the function does is convert a negative signed integer to a positive one:
negative := signedness == signed && a < 0
if negative {
    a = -a
}
There is then logic further down to prepend a - to the string that is output.
In other words, -101 really is - followed by 5 in binary.
Note: fmt.integer is called from pp.fmtInt64 in print.go, which is itself called from pp.printArg in the same file.

Here is a method without using unsafe:
package main

import (
    "fmt"
    "math/bits"
)

func unsigned8(x uint8) []byte {
    b := make([]byte, 8)
    for i := range b {
        if bits.LeadingZeros8(x) == 0 {
            b[i] = 1
        }
        x = bits.RotateLeft8(x, 1)
    }
    return b
}

func signed8(x int8) []byte {
    return unsigned8(uint8(x))
}

func main() {
    b := signed8(-5)
    fmt.Println(b) // [1 1 1 1 1 0 1 1]
}
In this case you could also use [8]byte, but the above is better if you have a positive integer and want to trim the leading zeros.
https://golang.org/pkg/math/bits#RotateLeft8

One way to get at the underlying bit pattern of a negative number is to go through an unsafe pointer:
package main

import (
    "fmt"
    "strconv"
    "unsafe"
)

func bInt8(n int8) string {
    return strconv.FormatUint(uint64(*(*uint8)(unsafe.Pointer(&n))), 2)
}

func main() {
    fmt.Println(bInt8(-5))
}
Output
11111011

Related

How to convert an sha3 hash to a big integer in golang

I generated a hash value using sha3 and I need to convert it to a big.Int value. Is it possible? Or is there a method to get the integer value of the hash?
The following code throws an error that it cannot convert type hash.Hash to type int64:
package main

import (
    "math/big"
    "golang.org/x/crypto/sha3"
    "fmt"
)

func main() {
    chall := "hello word"
    b := byte[](chall)
    h := sha3.New244()
    h.Write(chall)
    h.Write(b)
    d := make([]byte, 16)
    h.Sum(d)
    val := big.NewInt(int64(h))
    fmt.Println(val)
}
TL;DR;
The value returned by sha3.New224() is a hash.Hash, which cannot be represented as a uint64.
There are many hash types, of differing sizes. The Go standard library picks a very generic interface to cover all types of hashes: https://golang.org/pkg/hash/#Hash
type Hash interface {
    io.Writer
    Sum(b []byte) []byte
    Reset()
    Size() int
    BlockSize() int
}
Having said that, some Go hash implementations optionally include extra methods, like hash.Hash64:
type Hash64 interface {
    Hash
    Sum64() uint64
}
others may implement encoding.BinaryMarshaler:
type BinaryMarshaler interface {
    MarshalBinary() (data []byte, err error)
}
which one can use to preserve a hash state.
sha3.New224() does not implement either of the above two interfaces, but the crc64 hash does.
To do a runtime check:
h64, ok := h.(hash.Hash64)
if ok {
    fmt.Printf("64-bit: %d\n", h64.Sum64())
}
Working example: https://play.golang.org/p/uLUfw0gMZka
(See Peter's comment for the simpler version of this.)
Interpreting a series of bytes as a big.Int is the same as interpreting a series of decimal digits as an arbitrarily large number. For example, to convert the digits 1234 into a "number", you'd do this:
Start with 0
Multiply by 10 = 0
Add 1 = 1
Multiply by 10 = 10
Add 2 = 12
Multiply by 10 = 120
Add 3 = 123
Multiply by 10 = 1230
Add 4 = 1234
The same applies to bytes. The "digits" are just base-256 rather than base-10:
val := big.NewInt(0)
for i := 0; i < h.Size(); i++ {
    val.Lsh(val, 8)
    val.Add(val, big.NewInt(int64(d[i])))
}
(Lsh is a left-shift. Left shifting by 8 bits is the same as multiplying by 256.)
Playground

Float64 type printing as int in Golang

Surprisingly I couldn't find anyone else having this same issue; I tried simply initializing a float64 in Go and printing it, then attempting a string conversion and printing that. Neither output was accurate.
I've attempted this with many fractions, including those which don't resolve to repeating decimals, as well as simply writing out the float and printing (e.g. num := 1.5 then fmt.Println(num) gives output 1).
package main

import (
    "fmt"
    "strconv"
)

func main() {
    var num float64
    num = 5/3
    fmt.Printf("%v\n", num)
    numString := strconv.FormatFloat(num, 'f', -1, 64)
    fmt.Println(numString)
}
Expected:
// Output:
1.66
1.66
Actual:
// Output:
1
1
The Go Programming Language Specification
Integer literals
An integer literal is a sequence of digits representing an integer
constant.
Floating-point literals
A floating-point literal is a decimal representation of a
floating-point constant. It has an integer part, a decimal point, a
fractional part, and an exponent part. The integer and fractional part
comprise decimal digits; the exponent part is an e or E followed by an
optionally signed decimal exponent. One of the integer part or the
fractional part may be elided; one of the decimal point or the
exponent may be elided.
Arithmetic operators
For two integer values x and y, the integer quotient q = x / y and
remainder r = x % y satisfy the following relationships:
x = q*y + r and |r| < |y|
with x / y truncated towards zero.
You wrote, using integer literals and arithmetic (x / y truncates towards zero):
package main

import (
    "fmt"
    "strconv"
)

func main() {
    var num float64
    num = 5 / 3 // float64(int(5)/int(3))
    fmt.Printf("%v\n", num)
    numString := strconv.FormatFloat(num, 'f', -1, 64)
    fmt.Println(numString)
}
Playground: https://play.golang.org/p/PBqSbpHvuSL
Output:
1
1
You should write, using floating-point literals and arithmetic:
package main

import (
    "fmt"
    "strconv"
)

func main() {
    var num float64
    num = 5.0 / 3.0 // float64(float64(5.0) / float64(3.0))
    fmt.Printf("%v\n", num)
    numString := strconv.FormatFloat(num, 'f', -1, 64)
    fmt.Println(numString)
}
Playground: https://play.golang.org/p/Hp1nac358HK
Output:
1.6666666666666667
1.6666666666666667

How to transform a string into an ASCII string like in C?

I have to do a cryptography project for my school and I chose Go for this project!
I read the docs, but I only know C, so it's kinda hard for me right now.
First, I needed to collect the program arguments, which I did. I stored all the arguments in string variables like:
var text, base string = os.Args[1], os.Args[6]
Now, I need to store the ASCII numbers in an array of int. For example, in C I would have done something like this:
int arr[18];
char str[18] = "Hi Stack OverFlow";
arr[i] = str[i] - 96;
So how could I do that in Go?
Thanks !
Here's an example that is similar to the other answer but avoids importing additional packages.
Create a slice of int with the length equal to the string's length. Then iterate over the string to extract each character as int and assign it to the corresponding index in the int slice. Here's code (also on the Go Playground):
package main

import "fmt"

func main() {
    s := "Hi Stack OverFlow"
    fmt.Println(StringToInts(s))
}

// makes a slice of int and stores each char from string
// as int in the slice
func StringToInts(s string) (intSlice []int) {
    intSlice = make([]int, len(s))
    for i := range s {
        intSlice[i] = int(s[i])
    }
    return
}
Output of the above program is:
[72 105 32 83 116 97 99 107 32 79 118 101 114 70 108 111 119]
The StringToInts function in the above should do what you want. Though it returns a slice (not an array) of int, it should satisfy your usecase.
My guess is that you want something like this:
package main

import (
    "fmt"
    "strings"
)

// transform transforms ASCII letters to numbers.
// Letters in the English (basic Latin) alphabet, both upper and lower case,
// are represented by a number between one and twenty-six. All other characters,
// including space, are represented by the number zero.
func transform(s string) []int {
    n := make([]int, 0, len(s))
    other := 'a' - 1
    for _, r := range strings.ToLower(s) {
        if 'a' > r || r > 'z' {
            r = other
        }
        n = append(n, int(r-other))
    }
    return n
}

func main() {
    s := "Hi Stack OverFlow"
    fmt.Println(s)
    n := transform(s)
    fmt.Println(n)
}
Output:
Hi Stack OverFlow
[8 9 0 19 20 1 3 11 0 15 22 5 18 6 12 15 23]
Take A Tour of Go and see if you can understand what the program does.

Sum hexadecimal on Golang

Thanks for reading my question.
I am trying to compute an ASTM checksum in Go, but I couldn't figure out (by myself or with Google) how to convert a string or byte to a hexadecimal value I can sum.
Please let me request help, thanks.
In Go, how can I convert a character to a hexadecimal value that allows performing a sum?
Example:
// Convert character "a" to hex 0x61 (I understand this will not work for my case, as it becomes a string.)
hex := fmt.Sprintf("%x", "a")
// sum the 0x61 with 0x01 so it will become 0x62 = "b"
fmt.Printf("%v", hex+0x01)
Thank you so much and please have a nice day.
Thanks to everyone answering my question! peterSO's and ANisus' answers both solved my problem. Please let me choose ANisus' reply as the answer, as it includes the ASTM special characters. I wish StackOverflow could accept multiple answers. Thanks everybody, and please have a nice day!
Intermernet's answer shows you how to convert a hexadecimal string into an int value.
But your question seems to suggest that you want to get the code point value of the letter 'a' and then do arithmetic on that value. To do this, you don't need hexadecimal. You can do the following:
package main

import "fmt"

func main() {
    // Get the code point value of 'a', which is 0x61
    val := 'a'
    // sum the 0x61 with 0x01 so it will become 0x62 = 'b'
    fmt.Printf("%v", string(val+0x01))
}
Result:
b
Playground: http://play.golang.org/p/SbsUHIcrXK
Edit:
Doing the actual ASTM checksum from a string using the algorithm described here can be done with the following code:
package main

import (
    "fmt"
)

const (
    ETX = 0x03
    ETB = 23
    STX = 0x02
)

func ASTMCheckSum(frame string) string {
    var sumOfChars uint8

    // take each byte in the string and add the values
    for i := 0; i < len(frame); i++ {
        byteVal := frame[i]
        sumOfChars += byteVal
        if byteVal == STX {
            sumOfChars = 0
        }
        if byteVal == ETX || byteVal == ETB {
            break
        }
    }

    // return as hex value in upper case
    return fmt.Sprintf("%02X", sumOfChars)
}

func main() {
    data := "\x025R|2|^^^1.0000+950+1.0|15|||^5^||V||34001637|20080516153540|20080516153602|34001637\r\x033D\r\n"
    //fmt.Println(data)
    fmt.Println(ASTMCheckSum(data))
}
Result:
3D
Playground: http://play.golang.org/p/7cbwryZk8r
You can use ParseInt from the strconv package.
ParseInt interprets a string s in the given base (2 to 36) and returns the corresponding value i. If base == 0, the base is implied by the string's prefix: base 16 for "0x", base 8 for "0", and base 10 otherwise.
package main

import (
    "fmt"
    "strconv"
)

func main() {
    start := "a"
    result, err := strconv.ParseInt(start, 16, 0)
    if err != nil {
        panic(err)
    }
    fmt.Printf("%x", result+1)
}
Playground
You do not want to "convert a character to hex", because hexadecimal (and decimal and binary and all other base-N representations of integers) exist for displaying numbers to humans and reading them back. A computer is free to store the numbers it operates on in any form it wishes; while most (all?) real-world computers store them in binary form using bits, they don't have to.
What I'm leading you to, is that you actually want to convert your character representing a number using hexadecimal notation ("display form") to a number (what computers operate on). For this, you can either use the strconv package as already suggested or roll your own simple conversion code. Or you can just grab one from the encoding/hex standard package—see its fromHexChar function.
For example,
package main

import "fmt"

func ASTMCheckSum(data []byte) []byte {
    cs := byte(0)
    for _, b := range data {
        cs += b
    }
    return []byte(fmt.Sprintf("%02X", cs))
}

func main() {
    data := []byte{0x01, 0x08, 0x1f, 0xff, 0x07}
    fmt.Printf("%x\n", data)
    cs := ASTMCheckSum(data)
    fmt.Printf("%s\n", cs)
}
Output:
01081fff07
2E

Idiomatic Type Conversion in Go

I was playing around with Go and was wondering what the best way is to perform idiomatic type conversions in Go. Basically my problem lies with automatic type conversions between uint8, uint64, and float64. From my experience with other languages, a multiplication of a uint8 with a uint64 will yield a uint64 value, but not so in Go.
Here is an example that I build and I ask if this is the idiomatic way of writing this code or if I'm missing an important language construct.
package main

import (
    "fmt"
    "math"
)

const Width = 64

func main() {
    var index uint32
    var bits uint8
    index = 100
    bits = 3
    var c uint64
    // This is the line of interest vvvv
    c = uint64(math.Ceil(float64(index*uint32(bits)) / float64(Width)))
    fmt.Println("Test: %v\n", c)
}
From my point of view the calculation of the ceiling value seems unnecessary complex because of all the explicit type conversions.
Thanks!
There are no implicit type conversions for non-constant values.
You can write
var x float64
x = 1
But you cannot write
var x float64
var y int
y = 1
x = y
See the spec for reference.
There's a good reason, to not allow automatic/implicit type conversions, as they can
become very messy and one has to learn many rules to circumvent the various caveats
that may occur. Take the Integer Conversion Rules in C for example.
For example,
package main

import "fmt"

func CeilUint(a, b uint64) uint64 {
    return (a + (b - 1)) / b
}

func main() {
    const Width = 64
    var index uint32 = 100
    var bits uint8 = 3
    var c uint64 = CeilUint(uint64(index)*uint64(bits), Width)
    fmt.Println("Test:", c)
}
Output:
Test: 5
To add to @nemo's terrific answer: the convenience of automatic conversion between numeric types in C is outweighed by the confusion it causes. See https://golang.org/doc/faq#conversions. That's why you can't even convert from int to int32 implicitly. See https://stackoverflow.com/a/13852456/12817546.
package main

import (
    . "fmt"
    . "strconv"
)

func main() {
    i := 71
    c := []interface{}{byte(i), []byte(string(i)), float64(i), i, rune(i), Itoa(i), i != 0}
    checkType(c)
}

func checkType(s []interface{}) {
    for k := range s {
        Printf("%T %v\n", s[k], s[k])
    }
}
byte(i) creates a uint8 with a value of 71, []byte(string(i)) a []uint8 with [71], float64(i) float64 71, i int 71, rune(i) int32 71, Itoa(i) string 71 and i != 0 a bool with a value of true.
Since Go won't convert numeric types automatically for you (see https://stackoverflow.com/a/13851553/12817546), you have to convert between types manually. See https://stackoverflow.com/a/41419962/12817546. Note: Itoa(i) stands for "Integer to ASCII". See the comment in https://stackoverflow.com/a/10105983/12817546.
