I'm building a Lisp, and I want 32-bit integers to automatically switch to 64-bit integers if a computation would otherwise cause them to overflow. And likewise, for 64-bit overflows, switch to arbitrary-precision integers.
The problem I have is that I don't know what the "correct" way is to detect an integer overflow.
a, b := 2147483647, 2147483647
c := a + b
How can I efficiently check if c overflowed?
I have considered always converting to 64-bit values to do the calculation, then downsizing again afterwards when possible, but that seems expensive and memory-wasteful for something as primitive and core to the language as basic arithmetic.
For example, to detect 32-bit integer overflow in addition:
package main
import (
"errors"
"fmt"
"math"
)
var ErrOverflow = errors.New("integer overflow")
func Add32(left, right int32) (int32, error) {
if right > 0 {
if left > math.MaxInt32-right {
return 0, ErrOverflow
}
} else {
if left < math.MinInt32-right {
return 0, ErrOverflow
}
}
return left + right, nil
}
func main() {
var a, b int32 = 2147483327, 2147483327
c, err := Add32(a, b)
if err != nil {
// handle overflow
fmt.Println(err, a, b, c)
}
}
Output:
integer overflow 2147483327 2147483327 0
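The questioner's plan is to retry at the next width when this error fires. The same check scales directly to 64 bits; here is a minimal sketch (Add64 is my naming, reusing ErrOverflow and math from above), with math/big as the final fallback:
func Add64(left, right int64) (int64, error) {
    if right > 0 {
        if left > math.MaxInt64-right {
            return 0, ErrOverflow
        }
    } else {
        if left < math.MinInt64-right {
            return 0, ErrOverflow
        }
    }
    return left + right, nil
}
On ErrOverflow, the runtime would promote to arbitrary-precision integers, e.g. new(big.Int).Add(big.NewInt(left), big.NewInt(right)), which cannot overflow.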
For 32-bit integers, the standard way is, as you said, to convert to 64 bits and then downsize again [1]:
package main
func add32(x, y int32) (int32, int32) {
    sum64 := int64(x) + int64(y)
    // sum64 >> 31 is 0 or -1 when the sum fits in an int32
    // (for non-negative and negative sums respectively), and
    // 1 or -2 when it overflows.
    return x + y, int32(sum64 >> 31)
}
func main() {
{
s, c := add32(2147483646, 1)
println(s == 2147483647, c == 0)
}
{
s, c := add32(2147483647, 1)
println(s == -2147483648, c == 1)
}
}
However, if you don't like that, you can use some bit operations [2]:
func add32(x, y int32) (int32, int32) {
    sum := x + y
    // Parentheses matter here: | binds more loosely than >> in Go,
    // so the carry bits (x&y | (x|y)&^sum) must be grouped before
    // the shift; the result matches sum64 >> 31 above.
    return sum, (x&y | (x|y)&^sum) >> 30
}
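A quick check, using my own test calls that mirror the ones for the 64-bit version above, that the bit-twiddling variant reports the same carries:
func main() {
    {
        s, c := add32(2147483646, 1)
        println(s == 2147483647, c == 0)
    }
    {
        s, c := add32(2147483647, 1)
        println(s == -2147483648, c == 1)
    }
}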
[1] https://github.com/golang/go/blob/go1.16.3/src/math/bits/bits.go#L368-L373
[2] https://github.com/golang/go/blob/go1.16.3/src/math/bits/bits.go#L380-L387
I imported the math library in my program, and I was trying to find the minimum of three numbers in the following way:
v1[j+1] = math.Min(v1[j]+1, math.Min(v0[j+1]+1, v0[j]+cost))
where v1 is declared as:
t := "stackoverflow"
v1 := make([]int, len(t)+1)
However, when I run my program I get the following error:
./levenshtein_distance.go:36: cannot use int(v0[j + 1] + 1) (type int) as type float64 in argument to math.Min
I thought it was weird because I have another program where I write
fmt.Println(math.Min(2,3))
and that program outputs 2 without complaining.
So I ended up converting the values to float64 so that math.Min could work:
v1[j+1] = math.Min(float64(v1[j]+1), math.Min(float64(v0[j+1]+1), float64(v0[j]+cost)))
With this approach, I got the following error:
./levenshtein_distance.go:36: cannot use math.Min(int(v1[j] + 1), math.Min(int(v0[j + 1] + 1), int(v0[j] + cost))) (type float64) as type int in assignment
So to get rid of the problem, I just converted the result back to int.
I thought this was extremely inefficient and hard to read:
v1[j+1] = int(math.Min(float64(v1[j]+1), math.Min(float64(v0[j+1]+1), float64(v0[j]+cost))))
I also wrote a small minInt function, but I think this should be unnecessary, because other programs that use math.Min with integer arguments work just fine; so I concluded this had to be a problem in my program and not in the library per se.
Is there anything that I'm doing terribly wrong?
Here's a program that you can use to reproduce the issues above, line 36 specifically:
package main
import (
"math"
)
func main() {
LevenshteinDistance("stackoverflow", "stackexchange")
}
func LevenshteinDistance(s string, t string) int {
if s == t {
return 0
}
if len(s) == 0 {
return len(t)
}
if len(t) == 0 {
return len(s)
}
v0 := make([]int, len(t)+1)
v1 := make([]int, len(t)+1)
for i := 0; i < len(v0); i++ {
v0[i] = i
}
for i := 0; i < len(s); i++ {
v1[0] = i + 1
for j := 0; j < len(t); j++ {
cost := 0
if s[i] != t[j] {
cost = 1
}
v1[j+1] = int(math.Min(float64(v1[j]+1), math.Min(float64(v0[j+1]+1), float64(v0[j]+cost))))
}
for j := 0; j < len(v0); j++ {
v0[j] = v1[j]
}
}
return v1[len(t)]
}
Until Go 1.18 a one-off function was the standard way; for example, the stdlib's sort.go does it near the top of the file:
func min(a, b int) int {
if a < b {
return a
}
return b
}
You might still want or need to use this approach so your code works on Go versions below 1.18!
Starting with Go 1.18, you can write a generic min function which is just as efficient at run time as the hand-coded single-type version, but works with any ordered type, i.e. any type supporting the < and > operators (constraints.Ordered comes from the golang.org/x/exp/constraints module):
package main

import (
    "fmt"

    "golang.org/x/exp/constraints"
)

func min[T constraints.Ordered](a, b T) T {
    if a < b {
        return a
    }
    return b
}

func main() {
    fmt.Println(min(1, 2))          // 1
    fmt.Println(min(1.5, 2.5))      // 1.5
    fmt.Println(min("Hello", "世界")) // Hello
}
There's been discussion of updating the stdlib to add generic versions of existing functions, but if that happens it won't be until a later version.
math.Min(2, 3) happened to work because numeric constants in Go are untyped. Beware of treating float64s as a universal number type in general, though, since integers above 2^53 will get rounded if converted to float64.
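A standalone sketch (not from the answer above) showing that rounding edge:
package main

import "fmt"

func main() {
    n := int64(1)<<53 + 1 // 9007199254740993
    f := float64(n)       // rounds to the nearest representable float64
    fmt.Println(n, int64(f), n == int64(f))
    // Output: 9007199254740993 9007199254740992 false
}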
There is no built-in min or max function for integers, but it's simple to write your own. Thanks to variadic functions, we can even compare more than two integers with just one call:
func MinOf(vars ...int) int {
min := vars[0]
for _, i := range vars {
if min > i {
min = i
}
}
return min
}
Usage:
MinOf(3, 9, 6, 2)
Similarly here is the max function:
func MaxOf(vars ...int) int {
max := vars[0]
for _, i := range vars {
if max < i {
max = i
}
}
return max
}
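Usage, mirroring the MinOf example above:
MaxOf(3, 9, 6, 2) // 9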
For example,
package main
import "fmt"
func min(x, y int) int {
if x < y {
return x
}
return y
}
func main() {
t := "stackoverflow"
v0 := make([]int, len(t)+1)
v1 := make([]int, len(t)+1)
cost := 1
j := 0
v1[j+1] = min(v1[j]+1, min(v0[j+1]+1, v0[j]+cost))
fmt.Println(v1[j+1])
}
Output:
1
Though the question is quite old, maybe my package imath can be helpful for someone who does not like reinventing the wheel. There are a few functions for finding the minimum of two integers: ix.Min (for int), i8.Min (for int8), ux.Min (for uint), and so on. The package can be obtained with go get, imported into your project by URL, and its functions referred to as typeabbreviation.FuncName, for example:
package main
import (
"fmt"
"<Full URL>/go-imath/ix"
)
func main() {
a, b := 45, -42
fmt.Println(ix.Min(a, b)) // Output: -42
}
As the accepted answer states, with the introduction of generics in Go 1.18 it's now possible to write a generic function that provides min/max for different numeric types (there is no such function built into the language; constraints.Ordered below comes from the golang.org/x/exp/constraints module). And with variadic arguments we can support comparing 2 elements or a longer list of elements.
func Min[T constraints.Ordered](args ...T) T {
min := args[0]
for _, x := range args {
if x < min {
min = x
}
}
return min
}
func Max[T constraints.Ordered](args ...T) T {
max := args[0]
for _, x := range args {
if x > max {
max = x
}
}
return max
}
Example calls:
Max(1, 2) // 2
Max(4, 5, 3, 1, 2) // 5
You could use https://github.com/pkg/math:
package main

import (
    "fmt"

    "github.com/pkg/math"
)

func main() {
    a, b := 45, -42
    fmt.Println(math.Min(a, b)) // Output: -42
}
Since the issue has already been resolved, I would like to add a few words. Always remember that the math package in Go operates on float64. You can use a type conversion to turn an int into a float64. Keep type ranges in mind, though: for example, you cannot fit a float64 into an int16 if the number exceeds the int16 limit of 32767. Last but not least, when you convert a float to an int in Go, the decimal part is truncated without any rounding.
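A small sketch of that truncation behavior (variables are used because a constant expression like int(3.999) would not even compile):
package main

import "fmt"

func main() {
    f, g := 3.999, -3.999
    fmt.Println(int(f)) // 3: the fractional part is dropped, not rounded
    fmt.Println(int(g)) // -3: truncation is toward zero
}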
If you want the minimum of a set of N integers you can use (assuming N > 0):
import "sort"
func min(set []int) int {
sort.Slice(set, func(i, j int) bool {
return set[i] < set[j]
})
return set[0]
}
where the second argument to sort.Slice is your less function, that is, the function that decides when element i of the passed slice is less than element j. Note that this sorts the slice in place, so it mutates its argument and costs O(n log n), where a simple linear scan would be O(n).
Check it out here in Go Playground: https://go.dev/play/p/lyQYlkwKrsA
Here is my Go code: http://play.golang.org/p/CDUagFZ-rk
package main
import "fmt"
func main() {
var max int = 0
for i := 0; i < 1000000; i++ {
var len int = GetCollatzSeqLen(i)
if len > max {
max = len
}
}
fmt.Println(max)
}
func GetCollatzSeqLen(n int) int {
var len int = 1
for n > 1 {
len++
if n%2 == 0 {
n = n / 2
} else {
n = 3*n + 1
}
}
return len
}
On my local machine, when I run the program, I get 525 as the output. When I run it on the Go Playground, the output is 476.
I am wondering what's different.
It's because of the implementation-specific size of int, 32 or 64 bits: with a 32-bit int, the intermediate values of 3*n + 1 can exceed 2147483647 and wrap around, corrupting some sequences (a wrapped negative n ends the loop early). Use int64 for consistent results. For example,
package main
import "fmt"
func main() {
var max int64 = 0
for i := int64(0); i < 1000000; i++ {
var len int64 = GetCollatzSeqLen(i)
if len > max {
max = len
}
}
fmt.Println(max)
}
func GetCollatzSeqLen(n int64) int64 {
var len int64 = 1
for n > 1 {
len++
if n%2 == 0 {
n = n / 2
} else {
n = 3*n + 1
}
}
return len
}
Output:
525
Playground: http://play.golang.org/p/0Cdic16edP
The Go Programming Language Specification
Numeric types
int32 the set of all signed 32-bit integers (-2147483648 to 2147483647)
int64 the set of all signed 64-bit integers (-9223372036854775808 to 9223372036854775807)
The value of an n-bit integer is n bits wide and represented using
two's complement arithmetic.
There is also a set of predeclared numeric types with
implementation-specific sizes:
uint either 32 or 64 bits
int same size as uint
To see the implementation-specific size of int, run this program.
package main
import (
"fmt"
"runtime"
"strconv"
)
func main() {
fmt.Println(
"For "+runtime.GOARCH+" the implementation-specific size of int is",
strconv.IntSize, "bits.",
)
}
Output:
For amd64 the implementation-specific size of int is 64 bits.
On Go Playground: http://play.golang.org/p/7O6dEdgDNd
For amd64p32 the implementation-specific size of int is 32 bits.
Is there a built-in function to convert a uint to a slice of binary integers {0,1} ?
>> convert_to_binary(2)
[1, 0]
I am not aware of such a function; however, you can use strconv.FormatUint for that purpose.
Example (on play):
func Bits(i uint64) []byte {
    bits := []byte{}
    // strconv.FormatUint(i, 2) gives the binary digits as ASCII '0'/'1';
    // subtracting '0' turns each digit rune into a 0 or 1 byte.
    for _, b := range strconv.FormatUint(i, 2) {
        bits = append(bits, byte(b-'0'))
    }
    return bits
}
FormatUint will return the string representation of the given uint in a base, in this case 2, so we're encoding it in binary. The returned string for i=2 looks like this: "10". In bytes this is [49 48], since '1' is 49 and '0' is 48 in ASCII and Unicode. So we just need to iterate over the string, subtracting 48 (i.e. '0') from each rune and converting it to a byte.
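Calling it on the question's example input (this assumes the Bits function above plus an fmt import):
fmt.Println(Bits(2)) // [1 0]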
Here is another method:
package main
import (
"bytes"
"fmt"
"math/bits"
)
func unsigned(x uint) []byte {
    // Walk all bits.UintSize bits, most significant first, by rotating
    // the top bit into view on each iteration.
    b := make([]byte, bits.UintSize)
    for i := range b {
        if bits.LeadingZeros(x) == 0 { // top bit set?
            b[i] = 1
        }
        x = bits.RotateLeft(x, 1)
    }
    return b
}

func trimUnsigned(x uint) []byte {
    // The slice holds raw 0/1 values, so trim the zero byte "\x00"
    // (string(0) in the original is the same value, less readably).
    return bytes.TrimLeft(unsigned(x), "\x00")
}
func main() {
b := trimUnsigned(2)
fmt.Println(b) // [1 0]
}
https://golang.org/pkg/math/bits#LeadingZeros
I'm using levigo, the leveldb bindings for Go. My keys are int64's and need to be kept sorted. By default, leveldb uses a bytewise comparator so I'm trying to use varint encoding.
func i2b(x int64) []byte {
    b := make([]byte, binary.MaxVarintLen64)
    n := binary.PutVarint(b, x)
    return b[:n]
}
My keys are not being sorted correctly. I wrote the following as a test.
var prev int64 = 0
for i := int64(1); i < 1e5; i++ {
if bytes.Compare(i2b(i), i2b(prev)) <= 0 {
log.Fatalf("bytewise: %d > %d", prev, i)
}
prev = i
}
output: bytewise: 127 > 128
playground
I'm not sure where the problem is. Am I doing the encoding wrong? Is varint not the right encoding to use?
EDIT:
Big-endian fixed-width encoding is bytewise comparable:
func i2b(x int64) []byte {
b := make([]byte, 8)
binary.BigEndian.PutUint64(b, uint64(x))
return b
}
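Worth noting: this holds as long as the keys are non-negative. A negative int64 has its sign bit set, so bytewise it compares greater than any non-negative value; flipping the sign bit before encoding restores numeric order. A hypothetical sketch (i2bSigned is my name, not from the question):
func i2bSigned(x int64) []byte {
    b := make([]byte, 8)
    // XORing with 1<<63 flips the sign bit, mapping int64 order
    // onto unsigned bytewise order.
    binary.BigEndian.PutUint64(b, uint64(x)^(1<<63))
    return b
}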
The varint encoding is not bytewise comparable* with respect to the order of the values it carries. One option for writing the ordering/collating function (cmp below) is, for example:
package main
import (
"encoding/binary"
"log"
)
func i2b(x int64) []byte {
var b [binary.MaxVarintLen64]byte
return b[:binary.PutVarint(b[:], x)]
}
func cmp(a, b []byte) int64 {
    x, n := binary.Varint(a)
    if n <= 0 {
        log.Fatal(n)
    }
    y, n := binary.Varint(b)
    if n <= 0 {
        log.Fatal(n)
    }
    // Compare explicitly instead of returning x - y, which can
    // overflow for operands of opposite sign.
    switch {
    case x < y:
        return -1
    case x > y:
        return 1
    }
    return 0
}
func main() {
var prev int64 = 0
for i := int64(1); i < 1e5; i++ {
if cmp(i2b(i), i2b(prev)) <= 0 {
log.Fatal("fail")
}
prev = i
}
}
Playground
(*) The reason is (also) the bit fiddling performed: binary.PutVarint zig-zag encodes the signed value and then emits it in 7-bit groups, least significant group first, so byte order does not follow numeric order.
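To see it concretely, here is a small standalone sketch printing the encodings behind the question's "bytewise: 127 > 128" output:
package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    b := make([]byte, binary.MaxVarintLen64)
    for _, v := range []int64{127, 128} {
        n := binary.PutVarint(b, v)
        fmt.Printf("%d -> % x\n", v, b[:n])
    }
    // Output:
    // 127 -> fe 01
    // 128 -> 80 02
    // 0xfe > 0x80, so the encoding of 127 sorts after that of 128.
}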