Idiomatic Type Conversion in Go

I was playing around with Go and was wondering what the best way is to perform idiomatic type conversions. Basically my problem lies with automatic type conversions between uint8, uint64, and float64. From my experience with other languages, multiplying a uint8 by a uint64 yields a uint64 value, but not so in Go.
Here is an example that I built, and I ask whether this is the idiomatic way of writing this code or whether I'm missing an important language construct.
package main

import (
	"fmt"
	"math"
)

const Width = 64

func main() {
	var index uint32
	var bits uint8
	index = 100
	bits = 3
	var c uint64
	// This is the line of interest vvvv
	c = uint64(math.Ceil(float64(index*uint32(bits)) / float64(Width)))
	fmt.Printf("Test: %v\n", c) // Printf, not Println, since a format verb is used
}
From my point of view, the calculation of the ceiling value seems unnecessarily complex because of all the explicit type conversions.
Thanks!

There are no implicit type conversions for non-constant values.
You can write
var x float64
x = 1
But you cannot write
var x float64
var y int
y = 1
x = y
See the spec for reference.
There's a good reason not to allow automatic/implicit type conversions: they can become very messy, and one would have to learn many rules to circumvent the various caveats that may occur. Take the integer conversion rules in C, for example.

For example,
package main

import "fmt"

func CeilUint(a, b uint64) uint64 {
	return (a + (b - 1)) / b
}

func main() {
	const Width = 64
	var index uint32 = 100
	var bits uint8 = 3
	var c uint64 = CeilUint(uint64(index)*uint64(bits), Width)
	fmt.Println("Test:", c)
}
Output:
Test: 5

To add to nemo's terrific answer: the convenience of automatic conversion between numeric types in C is outweighed by the confusion it causes. See https://golang.org/doc/faq#conversions. That's why you can't even convert from int to int32 implicitly. See https://stackoverflow.com/a/13852456/12817546.
package main

import (
	. "fmt"
	. "strconv"
)

func main() {
	i := 71
	c := []interface{}{byte(i), []byte(string(i)), float64(i), i, rune(i), Itoa(i), i != 0}
	checkType(c)
}

func checkType(s []interface{}) {
	for k := range s {
		Printf("%T %v\n", s[k], s[k])
	}
}
byte(i) creates a uint8 with a value of 71, []byte(string(i)) a []uint8 containing [71], float64(i) a float64 71, i an int 71, rune(i) an int32 71, Itoa(i) the string "71", and i != 0 a bool with a value of true.
Since Go won't convert numeric types automatically for you (see https://stackoverflow.com/a/13851553/12817546), you have to convert between types manually. See https://stackoverflow.com/a/41419962/12817546. Note, Itoa is short for "Integer to ASCII". See the comment in https://stackoverflow.com/a/10105983/12817546.

Related

How to convert an sha3 hash to a big integer in golang

I generated a hash value using sha3 and I need to convert it to a big.Int value. Is it possible? Or is there a method to get the integer value of the hash?
The following code throws the error cannot convert type hash.Hash to type int64:
package main

import (
	"fmt"
	"math/big"

	"golang.org/x/crypto/sha3"
)

func main() {
	chall := "hello word"
	b := []byte(chall)
	h := sha3.New224()
	h.Write(b)
	d := make([]byte, 16)
	h.Sum(d)
	val := big.NewInt(int64(h)) // compile error: cannot convert type hash.Hash to type int64
	fmt.Println(val)
}
TL;DR:
A sha3.New224() hash cannot be represented as a uint64 (or any integer type) directly.
There are many hash types - and of differing sizes. Go standard library picks a very generic interface to cover all type of hashes: https://golang.org/pkg/hash/#Hash
type Hash interface {
	io.Writer
	Sum(b []byte) []byte
	Reset()
	Size() int
	BlockSize() int
}
That said, some Go hash implementations optionally include extra methods, like hash.Hash64:
type Hash64 interface {
	Hash
	Sum64() uint64
}
others may implement encoding.BinaryMarshaler:
type BinaryMarshaler interface {
	MarshalBinary() (data []byte, err error)
}
which one can use to preserve a hash state.
sha3.New224() does not implement the above two interfaces, but the crc64 hash does.
To do a runtime check:
h64, ok := h.(hash.Hash64)
if ok {
	fmt.Printf("64-bit: %d\n", h64.Sum64())
}
Working example: https://play.golang.org/p/uLUfw0gMZka
(See Peter's comment for the simpler version of this.)
Interpreting a series of bytes as a big.Int is the same as interpreting a series of decimal digits as an arbitrarily large number. For example, to convert the digits 1234 into a "number", you'd do this:
Start with 0
Multiply by 10 = 0
Add 1 = 1
Multiply by 10 = 10
Add 2 = 12
Multiply by 10 = 120
Add 3 = 123
Multiply by 10 = 1230
Add 4 = 1234
The same applies to bytes. The "digits" are just base-256 rather than base-10:
val := big.NewInt(0)
for i := 0; i < h.Size(); i++ {
	val.Lsh(val, 8)
	val.Add(val, big.NewInt(int64(d[i])))
}
(Lsh is a left-shift. Left shifting by 8 bits is the same as multiplying by 256.)
Playground

How to convert a slice to alias slice in go?

I defined my Int type as int.
I want to convert a slice of Int to a slice of int, but got a compile error:
cannot convert c (type []Int) to type []int
How can I fix this?
package main

import (
	"fmt"
)

type Int int

func main() {
	var c = []Int{}
	var x = []int(c)
	fmt.Println(len(x))
}
Your Int type is not an alias of int, it's a new type with int being its underlying type. This type of conversion is not supported / allowed by the language spec. More specifically, converting a slice type to another where the element type is different is not allowed.
The safe way
If you only need an []int "view" of the []Int, the safe way to "convert" would be to create a copy of the []Int slice but with a type of []int, and use a for range loop and convert each individual element from Int to int type:
var c = []Int{1, 2}
x := make([]int, len(c))
for i, v := range c {
	x[i] = int(v)
}
fmt.Println(x)
Output (try it on the Go Playground):
[1 2]
The unsafe way
There is also an "unsafe" way:
var c = []Int{1, 2}
var x []int = *(*[]int)(unsafe.Pointer(&c))
fmt.Println(x)
Output is the same. Try this one on the Go Playground.
What happens here is that the address of c (which is &c) is converted to unsafe.Pointer (all pointers can be converted to this), which then is converted to *[]int (unsafe.Pointer can be converted to any pointer type), and then this pointer is dereferenced which gives a value of type []int. In this case it is safe because the memory layout of []Int and []int is identical (because Int has int as its underlying type), but in general, use of package unsafe should be avoided whenever possible.
If Int were a "true" alias
Note that if Int were a "true" alias of int, the conversion would not even be needed:
var c = []Int{1, 2}
var x []int = c
fmt.Println(x)
Output is the same as above (try it on the Go Playground). The reason this works is that writing []Int is identical to writing []int; they are the same type, so you don't even need a conversion here.
By using a slice type
Also note that if you create a new type with []int as its underlying type, you can use a type conversion:
type IntSlice []int

func main() {
	var c = IntSlice{1, 2}
	var x []int = []int(c)
	fmt.Println(x)
}
Output is again the same. Try this one on the Go Playground.
The problem is that you are not creating Int as an alias. Doing
type Int int
will create Int as a new type that can't interoperate with int.
The proper way to create Int as an alias is
type Int = int
With this change your program is fine.
Technically, type Int int does not define an alias, but a completely new type. Even though Int and int now have identical underlying types and can be converted to each other, that does not apply to slices. More about allowed conversions is in the spec.
Actually, a slice simply points to an underlying array of the designated element type (in this case the types are different: Int and int). So unless the underlying element type is the same, a conversion won't work. Just to illustrate, something like this would work though:
package main

import (
	"fmt"
)

type Int int
type IntSl []int

func main() {
	var c = IntSl{2, 3, 4}
	var x []int
	x = []int(c)

	var a Int
	var b int
	a = 1
	b = int(a)
	fmt.Println(len(x), a, b, c)
}
Playground : https://play.golang.org/p/ROOX1XoXg1j
As icza points out, there's the unsafe way, and of course you can always do the conversion by looping over each of the elements, which can be expensive.

Confused with Type conversions in golang

I recently tried to learn golang. But I got confused with this code from https://tour.golang.org/basics/13.
package main

import (
	"fmt"
	"math"
)

func main() {
	var x, y int = 3, 4
	var f float64 = math.Sqrt(float64(x*x + y*y))
	var z uint = uint(f)
	fmt.Println(x, y, z)
}
That one works well. Then I tried
var f = math.Sqrt(9 + 16)
which also works. But when I change it to var f = math.Sqrt(x*x + y*y), it no longer compiles. It says cannot use x * x + y * y (type int) as type float64 in argument to math.Sqrt.
I have a JavaScript background, and I somehow can't understand the code above.
The math.Sqrt function signature:
func Sqrt(x float64) float64
requires that you pass float64
In this case:
var f float64 = math.Sqrt(float64(x*x + y*y))
You are converting to float64 directly
In this case:
var f = math.Sqrt(x*x + y*y)
you are passing an int, when float64 is required.
In this case:
var f = math.Sqrt(9 + 16)
The compiler is able to infer the type, and pass float64 for you.
But when we pass a number directly, is it automatically converted?
No, not really *). Your "direct numbers" are called constants in Go, and constants are often untyped and of (almost) arbitrary precision. There are special rules for constants: a constant 5 and the integer a defined by a := 5 behave differently, because 5 is a constant with special rules and not an int.
Constant expressions like 9 + 16 are evaluated at compile time, as if you had typed 25. This 25 is still a constant.
While Go does not have automatic conversions between types, it does have automatic conversions from constants to several types. The constant 25 can be converted to float64 or int, uint8 or even complex128 automatically.
Please read the blog post https://blog.golang.org/constants and the official language spec for a full explanation and all the details: https://golang.org/ref/spec#Constants. They explain the strange notion of "untyped integer" better than I could.
*) "not really" because it is not helpful to think about it that way. The treatment of constants is special in Go: most other languages treat 3 + 5 as a sum of two ints resulting in an int, while Go sees two untyped integer constants and evaluates the expression into a new arbitrary-precision, untyped constant. Only later are constants converted to actual integers.

Why am I getting a compile error 'cannot use ... as type uint8 in argument to ...' when the parameter is an int

I am new to Go and was working through a problem in The Go Programming Language. The code should create GIF animations out of random Lissajous figures, with the images being produced in the different colors from palette:
// Copyright © 2016 Alan A. A. Donovan & Brian W. Kernighan.
// License: https://creativecommons.org/licenses/by-nc-sa/4.0/
// Run with "web" command-line argument for web server.
// See page 13.
//!+main
// Lissajous generates GIF animations of random Lissajous figures.
package main

import (
	"image"
	"image/color"
	"image/gif"
	"io"
	"math"
	"math/rand"
	"os"
)

//!-main
// Packages not needed by version in book.
import (
	"log"
	"net/http"
	"time"
)

//!+main
// #00ff55
var palette = []color.Color{color.RGBA{0x00, 0xff, 0x55, 0xFF}, color.Black, color.RGBA{0x00, 0x00, 0xff, 0xFF}, color.RGBA{0xff, 0x00, 0xff, 0xFF}}

const (
	whiteIndex = 0 // first color in palette
)

func main() {
	//!-main
	// The sequence of images is deterministic unless we seed
	// the pseudo-random number generator using the current time.
	// Thanks to Randall McPherson for pointing out the omission.
	rand.Seed(time.Now().UTC().UnixNano())
	if len(os.Args) > 1 && os.Args[1] == "web" {
		//!+http
		handler := func(w http.ResponseWriter, r *http.Request) {
			lissajous(w)
		}
		http.HandleFunc("/", handler)
		//!-http
		log.Fatal(http.ListenAndServe("localhost:8000", nil))
		return
	}
	//!+main
	lissajous(os.Stdout)
}

func lissajous(out io.Writer) {
	const (
		cycles  = 5     // number of complete x oscillator revolutions
		res     = 0.001 // angular resolution
		size    = 100   // image canvas covers [-size..+size]
		nframes = 64    // number of animation frames
		delay   = 8     // delay between frames in 10ms units
	)
	freq := rand.Float64() * 3.0 // relative frequency of y oscillator
	anim := gif.GIF{LoopCount: nframes}
	phase := 0.0 // phase difference
	colorIndex := 2
	for i := 0; i < nframes; i++ {
		rect := image.Rect(0, 0, 2*size+1, 2*size+1)
		img := image.NewPaletted(rect, palette)
		for t := 0.0; t < cycles*2*math.Pi; t += res {
			x := math.Sin(t)
			y := math.Sin(t*freq + phase)
			img.SetColorIndex(size+int(x*size+0.5), size+int(y*size+0.5), colorIndex)
			colorIndex++
		}
		phase += 0.1
		anim.Delay = append(anim.Delay, delay)
		anim.Image = append(anim.Image, img)
	}
	gif.EncodeAll(out, &anim) // NOTE: ignoring encoding errors
}

//!-main
Here is the error I am getting
lissajous/main.go:76: cannot use colorIndex (type int) as type uint8 in argument to img.SetColorIndex
Is there a difference between int and uint8 types or something?
The type of colorIndex is int. The argument type is uint8. An int cannot be assigned to a uint8. Here are some options for fixing the program:
Declare colorIndex as an untyped constant.
const colorIndex = 2
Declare colorIndex as uint8 type:
colorIndex := uint8(3)
Convert the value at the call:
img.SetColorIndex(size+int(x*size+0.5), size+int(y*size+0.5), uint8(colorIndex))
You can replace all uses of uint8 in this answer with byte because byte is an alias for uint8.
In variable declarations a default type is used, and in your case colorIndex := 2 means colorIndex becomes an int, not a uint8.
From the docs ( https://golang.org/ref/spec#Short_variable_declarations ):
"If a type is present, each variable is given that type. Otherwise, each variable is given the type of the corresponding initialization value in the assignment. If that value is an untyped constant, it is first converted to its default type;..."
"var i = 42 // i is int"
and then
"An untyped constant has a default type which is the type to which the constant is implicitly converted in contexts where a typed value is required, for instance, in a short variable declaration such as i := 0 where there is no explicit type. The default type of an untyped constant is bool, rune, int, float64, complex128 or string respectively, depending on whether it is a boolean, rune, integer, floating-point, complex, or string constant."
So to get a uint8, you should either explicitly declare colorIndex as a uint8 (var colorIndex uint8 = 2) or convert to uint8 inside the call to img.SetColorIndex:
img.SetColorIndex(size+int(x*size+0.5), size+int(y*size+0.5), uint8(colorIndex))

Can I make type redefinition more optimal?

I have such code:
package main

import "fmt"

type Speed float64
type Distance float64
type Time float64

func speed(a Distance, b Time) Speed {
	return Speed(float64(a) / float64(b))
}

func main() {
	s := Distance(123.0)
	t := Time(300)
	fmt.Println(speed(s, t))
}
Can I make it more optimal by somehow removing the conversions to float64 in the speed function?
No, you cannot avoid converting your distance and time back into floats, because division is not defined for those types. And, as said before, Go is strongly typed.
So, in your case you'd have to put conversions everywhere (not a good idea). Defining your own types is good when you want to attach custom methods to them, but their purpose is not solely to hide the underlying type behind a custom name.
However, not all types work this way. If you define a custom map type, you can still use the bracket operators without any conversion:
type Map map[string]string

func main() {
	m := Map(make(map[string]string))
	m["answer"] = "42"
	fmt.Printf("m's type is %T and answer is %s\n", m, m["answer"])
	// Output:
	// m's type is main.Map and answer is 42
}
Also, when initializing your custom types, a conversion is unnecessary:
type Speed float64
type Distance float64

func main() {
	var s Distance = 123.0
	var t Time = 300
	// ...
}
This compiles and works perfectly. What happens behind the scenes is that the literal 123.0 is considered an untyped float and 300 an untyped int.
I know this sounds weird, but basically those values are not typed, so Go tries to fit them into the type on the left. This is why you can write var f float64 = 1 even though 1 is not a float. But you can't write var f float64 = int(1), because int(1) is a typed int, which cannot be assigned to a float64.
This is why the following won't work:
func main() {
	var distance float64 = 123.0
	var time float64 = 300
	var s Distance = distance
	var t Time = time
	// ...
}
You can't make implicit conversions between custom types: Go is strongly typed.
I know this is just a small example, but maybe you really don't need those custom types?
package main

import "fmt"

func speed(distance float64, time float64) float64 {
	return distance / time
}

func main() {
	s := 123.0
	t := 300.0
	fmt.Println(speed(s, t))
}
