No panic when converting int to uint? - go

I'm confused about the following type conversion. I would expect both uint conversions to panic.
a := -1
_ = uint(a) // why no panic?
_ = uint(-1) // panics: constant -1 overflows uint
Why doesn't it panic in line 2?
https://play.golang.org/p/jcfDL8km2C

As mentioned in issue 6923:
T(c) where T is a type and c is a constant means to treat c as having type T rather than one of the default types.
It gives an error if c can not be represented in T, except that for float and complex constants we quietly round to T as long as the value is not too large.
Neither of these compiles, because -1 cannot be (implicitly) converted to a uint:
const x uint = -1
var x uint = -1
_ = uint(a) // why no panic?
Because a is not an untyped constant but a typed variable (an int). Note also that the message is a compile-time error, not a runtime panic: constant conversions are checked at compile time, while conversions of non-constant values are never range-checked. See Playground and "what's wrong with Golang constant overflows uint64":
package main

import "fmt"

func main() {
	a := -1
	_ = uint(a) // why no panic?
	var b uint
	b = uint(a)
	fmt.Println(b)
	// _ = uint(-1) // compile error: main.go:7: constant -1 overflows uint
}
Result: 4294967295 (on a 32-bit system) or 18446744073709551615 (on a 64-bit system), as commented by starriet.
There are specific rules for the conversion of non-constant numeric values:
When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended.
It is then truncated to fit in the result type's size.
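A minimal runnable sketch of that rule (my own example, not part of the quoted answers):
package main

import "fmt"

func main() {
	a := -1
	// a is sign-extended to implicit infinite precision,
	// then truncated to the width of the target type.
	fmt.Println(uint(a))  // 18446744073709551615 on 64-bit platforms
	fmt.Println(uint8(a)) // 255: the same rule, truncated to 8 bits
}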

Related

Printing type of the numeric constant causes overflow

I am new to Go and currently following A Tour of Go. I am at the Numeric Constants page. Below is a trimmed-down version of the code that runs on that page:
package main

import "fmt"

const Big = 1 << 100

func needFloat(x float64) float64 {
	return x * 0.1
}

func main() {
	fmt.Println(needFloat(Big))
	// fmt.Printf("Type of Big %T", Big)
}
This code compiles successfully with the output 1.2676506002282295e+29.
The following code, however, will not compile, giving an error:
package main

import "fmt"

const Big = 1 << 100

func needFloat(x float64) float64 {
	return x * 0.1
}

func main() {
	fmt.Println(needFloat(Big))
	fmt.Printf("Type of Big %T", Big)
}
Output:
./prog.go:9:13: constant 1267650600228229401496703205376 overflows int
Why do you think this happened? I hope you will kindly explain.
The constant Big is an untyped constant. An untyped constant can be arbitrarily large and doesn't have to fit into any predefined type's limits; it is converted to a concrete type only in the context where it is used.
The function needFloat takes a float64 argument, so in that call Big is converted to a float64 and used that way.
When you pass Big to Printf, the untyped integer constant defaults to int (had it been a floating-point constant, the default would be float64), and the value overflows int. Pass it as float64(Big), and it should work.
I guess the reason is that Big is converted to float64 right before being passed to needFloat, but defaults to int before the Printf. As proof, the following compiles correctly:
package main

import "fmt"

const Big = 1 << 100

func main() {
	fmt.Printf("Type of Big %T", float64(Big))
}
Hope this helps.
The untyped constant n must be converted to a type before it can be assigned to the interface{} parameter in the call to fmt.Println:
func Println(a ...interface{}) (n int, err error)
When the type can't be inferred from the context, an untyped constant is converted to a bool, int, float64, complex128, string or rune, depending on the form of the constant.
In this case the constant is an integer, but n is larger than the maximum value of an int.
However, n can be represented as a float64.
const n = 9876543210 * 9876543210
fmt.Println(float64(n))
For exact representation of big numbers, the math/big package implements arbitrary-precision arithmetic. It supports signed integers, rational numbers and floating-point numbers.
This is taken from https://yourbasic.org/golang/gotcha-constant-overflows-int/.
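As a small illustration of that last point (my own sketch, not part of the quoted article), math/big holds 1 << 100 exactly:
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// big.Int has arbitrary precision, so 1 << 100 is representable exactly.
	n := new(big.Int).Lsh(big.NewInt(1), 100)
	fmt.Println(n) // 1267650600228229401496703205376
}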

confusion about convert `uint8` to `int8`

I want to convert a uint8 to an int8, so I write the constant 0xfc and try to convert it with int8(0xfc). However, the code raises a compile error:
package main

import (
	"fmt"
)

func main() {
	a := int8(0xfc) // compile error: constant 252 overflows int8
	b := a
	fmt.Println(b)
}
But if I defer the type conversion until after the assignment, the code works:
package main

import (
	"fmt"
)

func main() {
	a := 0xfc
	b := int8(a) // ok
	fmt.Println(b)
}
My question:
Is there any difference between these two codes?
Why does the first one raise a compile error?
See https://golang.org/ref/spec#Constant_expressions:
The values of typed constants must always be accurately representable by values of the constant type. The following constant expressions are illegal:
uint(-1) // -1 cannot be represented as a uint
int(3.14) // 3.14 cannot be represented as an int
int64(Huge) // 1267650600228229401496703205376 cannot be represented as an int64
Four * 300 // operand 300 cannot be represented as an int8 (type of Four)
Four * 100 // product 400 cannot be represented as an int8 (type of Four)
See https://blog.golang.org/constants:
Not all integer values can fit in all integer types. There are two problems that might arise: the value might be too large, or it might be a negative value being assigned to an unsigned integer type. For instance, int8 has range -128 through 127, so constants outside of that range can never be assigned to a variable of type int8:
var i8 int8 = 128 // Error: too large.
Similarly, uint8, also known as byte, has range 0 through 255, so a large or negative constant cannot be assigned to a uint8:
var u8 uint8 = -1 // Error: negative value.
This type-checking can catch mistakes like this one:
type Char byte
var c Char = '世' // Error: '世' has value 0x4e16, too large.
If the compiler complains about your use of a constant, it's likely a real bug like this.
My actual goal is to convert a byte to an int32 while parsing a binary file. I may encounter the byte 0xfc and need to convert it to an int8 first so that the sign is preserved when widening to int32.
Yes, this is the way to go:
var b byte = 0xff
i32 := int32(int8(b))
fmt.Println(i32) // -1
Is there any difference between these two codes?
The first example uses a constant expression. The second uses plain expressions. Constant expressions are evaluated at compile time with different rules from plain expressions.
Why does the first one raise a compile error?
The int8(0xfc) is a typed constant expression. The values of typed constants must always be accurately representable by values of the constant type. The compiler reports an error because the value 252 cannot be represented by a value of type int8.
Based on comments on other answers, I see that the goal is to get an int32 from a byte with sign extension. Given a byte variable b, use the expression int32(int8(b)) to get the int32 value with sign extension.
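A runnable sketch of that difference (my own example; it assumes, as the asker does, that sign extension is wanted):
package main

import "fmt"

func main() {
	var b byte = 0xfc
	// A direct conversion zero-extends the byte.
	fmt.Println(int32(b)) // 252
	// Going through int8 first reinterprets the top bit as a sign,
	// so the value is sign-extended to int32.
	fmt.Println(int32(int8(b))) // -4
}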

golang how can I convert uint64 to int64? [duplicate]

This question already has answers here:
Convert uint64 to int64 without loss of information
(3 answers)
Closed 5 years ago.
Can anyone help me convert a uint64 to an int64?
// fmt.Println(int64(18446744073709551615))
// compile error: constant 18446744073709551615 overflows int64
var x uint64 = 18446744073709551615
var y int64 = int64(x)
fmt.Println(y) // -1
// just like a (C) signed long long
// How can I use the full signed range,
// -9223372036854775808 to +9223372036854775807?

func BytesToInt(b []byte) int {
	bytesBuffer := bytes.NewBuffer(b)
	var tmp int32
	binary.Read(bytesBuffer, binary.BigEndian, &tmp) // NOTE: error ignored
	return int(tmp)
}
What you are asking (to store 18,446,744,073,709,551,615 as an int64 value) is impossible.
A uint64 stores non-negative integers and has 64 bits available to hold information. It can therefore store any integer between 0 and 18,446,744,073,709,551,615 (2^64 - 1).
An int64 uses one bit to hold the sign, leaving 63 bits to hold information about the number. It can store any value between -9,223,372,036,854,775,808 and +9,223,372,036,854,775,807 (-2^63 and 2^63 - 1).
Both types can hold 18,446,744,073,709,551,616 unique integers; it is just that the uint64 range starts at zero, whereas the int64 values straddle zero.
To hold 18,446,744,073,709,551,615 as a signed integer would require 65 bits.
In your conversion, no information from the underlying bytes is lost. The difference in the integer values returned is due to how the two types interpret and display the bits: uint64 treats them as a plain unsigned integer, whereas int64 interprets them as a two's-complement signed value.
var x uint64 = 18446744073709551615
var y int64 = int64(x)
fmt.Printf("uint64: %v = %#[1]x, int64: %v = %#x\n", x, y, uint64(y))
// uint64: 18446744073709551615 = 0xffffffffffffffff
// int64: -1 = 0xffffffffffffffff
x -= 100
y -= 100
fmt.Printf("uint64: %v = %#[1]x, int64: %v = %#x\n", x, y, uint64(y))
// uint64: 18446744073709551515 = 0xffffffffffffff9b
// int64: -101 = 0xffffffffffffff9b
https://play.golang.com/p/hlWqhnC9Dh
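If wrap-around is not acceptable, a common guard (my own sketch, not from the original answers) is to range-check before converting:
package main

import (
	"fmt"
	"math"
)

// toInt64 converts x to int64, reporting an error instead of wrapping
// when x is larger than math.MaxInt64.
func toInt64(x uint64) (int64, error) {
	if x > math.MaxInt64 {
		return 0, fmt.Errorf("%d overflows int64", x)
	}
	return int64(x), nil
}

func main() {
	if v, err := toInt64(18446744073709551615); err != nil {
		fmt.Println(err) // 18446744073709551615 overflows int64
	} else {
		fmt.Println(v)
	}
}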

Why am I getting a compile error 'cannot use ... as type uint8 in argument to ...' when the parameter is an int

I am new to Go and was working through a problem in The Go Programming Language. The code should create GIF animations of random Lissajous figures, with the images drawn in different colors from the palette:
// Copyright © 2016 Alan A. A. Donovan & Brian W. Kernighan.
// License: https://creativecommons.org/licenses/by-nc-sa/4.0/
// Run with "web" command-line argument for web server.
// See page 13.
//!+main

// Lissajous generates GIF animations of random Lissajous figures.
package main

import (
	"image"
	"image/color"
	"image/gif"
	"io"
	"math"
	"math/rand"
	"os"
)

//!-main
// Packages not needed by version in book.
import (
	"log"
	"net/http"
	"time"
)

//!+main

// #00ff55
var palette = []color.Color{
	color.RGBA{0x00, 0xff, 0x55, 0xFF},
	color.Black,
	color.RGBA{0x00, 0x00, 0xff, 0xFF},
	color.RGBA{0xff, 0x00, 0xff, 0xFF},
}

const (
	whiteIndex = 0 // first color in palette
)

func main() {
	//!-main
	// The sequence of images is deterministic unless we seed
	// the pseudo-random number generator using the current time.
	// Thanks to Randall McPherson for pointing out the omission.
	rand.Seed(time.Now().UTC().UnixNano())

	if len(os.Args) > 1 && os.Args[1] == "web" {
		//!+http
		handler := func(w http.ResponseWriter, r *http.Request) {
			lissajous(w)
		}
		http.HandleFunc("/", handler)
		//!-http
		log.Fatal(http.ListenAndServe("localhost:8000", nil))
		return
	}
	//!+main
	lissajous(os.Stdout)
}

func lissajous(out io.Writer) {
	const (
		cycles  = 5     // number of complete x oscillator revolutions
		res     = 0.001 // angular resolution
		size    = 100   // image canvas covers [-size..+size]
		nframes = 64    // number of animation frames
		delay   = 8     // delay between frames in 10ms units
	)
	freq := rand.Float64() * 3.0 // relative frequency of y oscillator
	anim := gif.GIF{LoopCount: nframes}
	phase := 0.0 // phase difference
	colorIndex := 2
	for i := 0; i < nframes; i++ {
		rect := image.Rect(0, 0, 2*size+1, 2*size+1)
		img := image.NewPaletted(rect, palette)
		for t := 0.0; t < cycles*2*math.Pi; t += res {
			x := math.Sin(t)
			y := math.Sin(t*freq + phase)
			img.SetColorIndex(size+int(x*size+0.5), size+int(y*size+0.5), colorIndex)
			colorIndex++
		}
		phase += 0.1
		anim.Delay = append(anim.Delay, delay)
		anim.Image = append(anim.Image, img)
	}
	gif.EncodeAll(out, &anim) // NOTE: ignoring encoding errors
}

//!-main
Here is the error I am getting:
lissajous/main.go:76: cannot use colorIndex (type int) as type uint8 in argument to img.SetColorIndex
Is there a difference between int and uint8 types or something?
The type of colorIndex is int. The argument type is uint8. An int cannot be assigned to a uint8. Here are some options for fixing the program:
Declare colorIndex as an untyped constant (this only compiles if you also remove the colorIndex++ line, since constants cannot be incremented):
const colorIndex = 2
Declare colorIndex with type uint8:
colorIndex := uint8(2)
Convert the value at the call:
img.SetColorIndex(size+int(x*size+0.5), size+int(y*size+0.5), uint8(colorIndex))
You can replace all uses of uint8 in this answer with byte because byte is an alias for uint8.
In variable declarations a default type is used; in your case colorIndex := 2 makes colorIndex an int, not a uint8.
From the docs ( https://golang.org/ref/spec#Short_variable_declarations ):
"If a type is present, each variable is given that type. Otherwise, each variable is given the type of the corresponding initialization value in the assignment. If that value is an untyped constant, it is first converted to its default type;..."
"var i = 42 // i is int"
and then
"An untyped constant has a default type which is the type to which the constant is implicitly converted in contexts where a typed value is required, for instance, in a short variable declaration such as i := 0 where there is no explicit type. The default type of an untyped constant is bool, rune, int, float64, complex128 or string respectively, depending on whether it is a boolean, rune, integer, floating-point, complex, or string constant."
So to get a uint8, you should either explicitly declare colorIndex as uint8 (var colorIndex uint8 = 2) or convert it to uint8 in the img.SetColorIndex call:
img.SetColorIndex(size+int(x*size+0.5), size+int(y*size+0.5), uint8(colorIndex))
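A small sketch (my own) of how the default type comes into play:
package main

import "fmt"

func main() {
	n := 2          // the untyped constant 2 gets its default type, int
	const m = 2     // m stays untyped and adapts to the context
	var u uint8 = m // fine: 2 is representable as a uint8
	fmt.Printf("%T %T\n", n, u) // int uint8
}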

Idiomatic Type Conversion in Go

I was playing around with Go and was wondering what the best way is to perform idiomatic type conversions in Go. Basically my problem lies with automatic type conversions between uint8, uint64, and float64. From my experience with other languages, a multiplication of a uint8 with a uint64 will yield a uint64 value, but not so in Go.
Here is an example that I built; is this the idiomatic way of writing this code, or am I missing an important language construct?
package main

import (
	"fmt"
	"math"
)

const Width = 64

func main() {
	var index uint32
	var bits uint8
	index = 100
	bits = 3
	var c uint64
	// This is the line of interest vvvv
	c = uint64(math.Ceil(float64(index*uint32(bits)) / float64(Width)))
	fmt.Printf("Test: %v\n", c)
}
From my point of view the calculation of the ceiling value seems unnecessarily complex because of all the explicit type conversions.
Thanks!
There are no implicit type conversions for non-constant values.
You can write
var x float64
x = 1
But you cannot write
var x float64
var y int
y = 1
x = y
See the spec for reference.
There's a good reason not to allow automatic/implicit type conversions: they can become very messy, and one has to learn many rules to work around the various caveats that may occur. Take the integer conversion rules in C, for example.
For example,
package main

import "fmt"

func CeilUint(a, b uint64) uint64 {
	return (a + (b - 1)) / b
}

func main() {
	const Width = 64
	var index uint32 = 100
	var bits uint8 = 3
	var c uint64 = CeilUint(uint64(index)*uint64(bits), Width)
	fmt.Println("Test:", c)
}
Output:
Test: 5
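The (a + (b - 1)) / b trick works because integer division truncates: adding b - 1 pushes any nonzero remainder past the next multiple of b. A quick check of the identity against floating-point math (my own sketch):
package main

import (
	"fmt"
	"math"
)

func main() {
	a, b := uint64(300), uint64(64)
	fmt.Println((a + (b - 1)) / b)                  // 5
	fmt.Println(math.Ceil(float64(a) / float64(b))) // 5
}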
To add to @nemo's terrific answer: the convenience of automatic conversion between numeric types in C is outweighed by the confusion it causes. See https://Golang.org/doc/faq#conversions. That's why you can't even convert from int to int32 implicitly. See https://stackoverflow.com/a/13852456/12817546.
package main

import (
	. "fmt"
	. "strconv"
)

func main() {
	i := 71
	c := []interface{}{byte(i), []byte(string(rune(i))), float64(i), i, rune(i), Itoa(i), i != 0}
	checkType(c)
}

func checkType(s []interface{}) {
	for k := range s {
		Printf("%T %v\n", s[k], s[k])
	}
}
byte(i) creates a uint8 with a value of 71, []byte(string(rune(i))) a []uint8 containing [71], float64(i) a float64 71, i an int 71, rune(i) an int32 71, Itoa(i) the string "71", and i != 0 a bool with a value of true.
Since Go won't convert numeric types automatically for you (see https://stackoverflow.com/a/13851553/12817546), you have to convert between types explicitly. See https://stackoverflow.com/a/41419962/12817546. Note that Itoa is short for "integer to ASCII". See the comment in https://stackoverflow.com/a/10105983/12817546.
