I am trying to convert a string to uint on 32-bit Ubuntu using the following code, but it always converts to uint64 despite my explicitly passing 32 as the argument to the function. In the code below, mw is an object from the ImageMagick binding; it returns uint when mw.GetImageWidth() and mw.GetImageHeight() are called, and its resize function accepts arguments of type uint.
width := strings.Split(imgResize, "x")[0]
height := strings.Split(imgResize, "x")[1]
mw := imagick.NewMagickWand()
defer mw.Destroy()
err = mw.ReadImageBlob(img)
if err != nil {
log.Fatal(err)
}
var masterWidth uint = mw.GetImageWidth()
var masterHeight uint = mw.GetImageHeight()
wd, _ := strconv.ParseUint(width, 10, 32)
ht, _ := strconv.ParseUint(height, 10, 32)
if masterWidth < wd || masterHeight < ht {
err = mw.ResizeImage(wd, ht, imagick.FILTER_BOX, 1)
if err != nil {
panic(err)
}
}
The error is:
# command-line-arguments
test.go:94: invalid operation: masterWidth < wd (mismatched types uint and uint64)
goImageCode/test.go:94: invalid operation: masterHeight < ht (mismatched types uint and uint64)
goImageCode/test.go:100: cannot use wd (type uint64) as type uint in argument to mw.ResizeImage
goImageCode/AmazonAWS.go:100: cannot use ht (type uint64) as type uint in argument to mw.ResizeImage
Package strconv
func ParseUint
func ParseUint(s string, base int, bitSize int) (n uint64, err error)
ParseUint is like ParseInt but for unsigned numbers.
func ParseInt
func ParseInt(s string, base int, bitSize int) (i int64, err error)
ParseInt interprets a string s in the given base (2 to 36) and returns
the corresponding value i. If base == 0, the base is implied by the
string's prefix: base 16 for "0x", base 8 for "0", and base 10
otherwise.
The bitSize argument specifies the integer type that the result must
fit into. Bit sizes 0, 8, 16, 32, and 64 correspond to int, int8,
int16, int32, and int64.
The errors that ParseInt returns have concrete type *NumError and
include err.Num = s. If s is empty or contains invalid digits, err.Err
= ErrSyntax and the returned value is 0; if the value corresponding to s cannot be represented by a signed integer of the given size, err.Err
= ErrRange and the returned value is the maximum magnitude integer of the appropriate bitSize and sign.
The bitSize argument only specifies the integer type that the result must fit into; the return type of ParseUint is always uint64. The size of uint is implementation-defined, either 32 or 64 bits, so you must convert the result to uint yourself. For example,
package main
import (
"fmt"
"strconv"
)
func main() {
width := "42"
u64, err := strconv.ParseUint(width, 10, 32)
if err != nil {
fmt.Println(err)
}
wd := uint(u64)
fmt.Println(wd)
}
Output:
42
Here is a short way to convert a string to a uint.
import (
"strconv"
)
...
func StringToUint(s string) uint {
i, _ := strconv.Atoi(s) // error ignored: invalid input yields 0
return uint(i)
}
Here's my code:
package main
import (
"fmt"
"reflect"
"strconv"
)
func main() {
i, _ := strconv.ParseInt("10", 10, 8)
fmt.Println(reflect.TypeOf(i))
}
I expect i to be 8 bits long (per the 3rd argument to strconv.ParseInt). However, it is int64 (and the docs state that strconv.ParseInt returns int64).
What's the point of ParseInt if it always returns int64 (why not just use Atoi)?
Note this from the function's doc:
The bitSize argument specifies the integer type that the result must
fit into. Bit sizes 0, 8, 16, 32, and 64 correspond to int, int8,
int16, int32, and int64. For a bitSize below 0 or above 64 an error is
returned.
So it's guaranteed that you can convert your result to an int8 with int8(i).
Go doesn't have generics yet, so having a single ParseInt that can accept pointers to multiple integer types is difficult. Instead, the guarantee is provided through the bitSize argument.
Package strconv
import "strconv"
func ParseInt
func ParseInt(s string, base int, bitSize int) (i int64, err error)
ParseInt interprets a string s in the given base (0, 2 to 36) and bit
size (0 to 64) and returns the corresponding value i.
If base == 0, the base is implied by the string's prefix: base 16 for
"0x", base 8 for "0", and base 10 otherwise. For bases 1, below 0 or
above 36 an error is returned.
The bitSize argument specifies the integer type that the result must
fit into. Bit sizes 0, 8, 16, 32, and 64 correspond to int, int8,
int16, int32, and int64. For a bitSize below 0 or above 64 an error is
returned.
The errors that ParseInt returns have concrete type *NumError and
include err.Num = s. If s is empty or contains invalid digits, err.Err
= ErrSyntax and the returned value is 0; if the value corresponding to s cannot be represented by a signed integer of the given size, err.Err
= ErrRange and the returned value is the maximum magnitude integer of the appropriate bitSize and sign.
For example,
package main
import (
"fmt"
"strconv"
)
func main() {
i64, err := strconv.ParseInt("10", 10, 8)
if err != nil {
panic(err)
}
fmt.Printf("%[1]d %[1]T\n", i64)
i8 := int8(i64)
fmt.Printf("%[1]d %[1]T\n", i8)
}
Playground: https://play.golang.org/p/HSHtUnC7qql
Output:
10 int64
10 int8
In Go, we often use functions to hide implementation details.
For example,
package main
import (
"fmt"
"strconv"
)
func ParseInt8(s string, base int) (int8, error) {
i64, err := strconv.ParseInt(s, base, 8)
return int8(i64), err
}
func main() {
i8, err := ParseInt8("10", 10)
if err != nil {
panic(err)
}
fmt.Printf("%[1]d %[1]T\n", i8)
}
Playground: https://play.golang.org/p/HdA3O71U54z
Output:
10 int8
I think what you are really asking is what is the point of the 3rd parameter to ParseInt().
It's to save you having to check for overflow manually like this:
i, err := strconv.Atoi(intString)
if err != nil || i < -128 || i > 127 {
// handle error
}
i8 := int8(i)
I'm struggling a bit with this piece of Go code. I have been searching all over the place, but can't understand what is wrong with it.
Error message is: syntax error: unexpected int at end of statement
for that line near the bottom: func (TOHLCV TOHLCVs) Len() int {
I also get this error message for the second-to-last line of code:
syntax error: non-declaration statement outside function body
I mention it in case the two errors are related.
package main
import (
"fmt"
"time"
"strconv"
//from https://github.com/pplcc/plotext/
"log"
"os"
"github.com/360EntSecGroup-Skylar/excelize"
"github.com/pplcc/plotext/custplotter"
"gonum.org/v1/plot"
"github.com/pplcc/plotext"
"gonum.org/v1/plot/vg/vgimg"
"gonum.org/v1/plot/vg/draw"
)
// Len implements the Len method of the TOHLCVer interface.
func (TOHLCV TOHLCVs) Len() int {
return len(TOHLCV)
func main() {
//read excel file******************************************
xlsx, err := excelize.OpenFile("/media/Snaps/test snaps.xlsm")
if err != nil {
fmt.Println(err)
return
}
//read all rows into df
df := xlsx.GetRows("ticker_2")
type TOHLCVer interface {
// Len returns the number of time, open, high, low, close, volume tuples.
Len() int
// TOHLCV returns an time, open, high, low, close, volume tuple.
TOHLCV(int) (float64, float64, float64, float64, float64, float64)
}
type TOHLCVs []struct{ T, O, H, L, C, V float64 }
// Len implements the Len method of the TOHLCVer interface.
func (TOHLCV TOHLCVs) Len() int {
return len(TOHLCV)
}
df3 := make(TOHLCVs, 60) // create slice for 60 rows
idx := 0
This code is adapted from:
https://github.com/pplcc/plotext/blob/master/custplotter/tohlcv.go
Function declarations need to be moved out of other functions, like this:
package main
import (
"fmt"
"github.com/360EntSecGroup-Skylar/excelize"
)
type TOHLCVer interface {
// Len returns the number of time, open, high, low, close, volume tuples.
Len() int
// TOHLCV returns an time, open, high, low, close, volume tuple.
TOHLCV(int) (float64, float64, float64, float64, float64, float64)
}
type TOHLCVs []struct{ T, O, H, L, C, V float64 }
// Len implements the Len method of the TOHLCVer interface.
func (TOHLCV TOHLCVs) Len() int {
return len(TOHLCV)
}
func main() {
//read excel file******************************************
xlsx, err := excelize.OpenFile("/media/Snaps/test snaps.xlsm")
if err != nil {
fmt.Println(err)
return
}
//read all rows into df
df := xlsx.GetRows("ticker_2")
df3 := make(TOHLCVs, 60) // create slice for 60 rows
idx := 0
_, _, _ = df, df3, idx // placeholder use so this example compiles (df, df3, and idx are otherwise unused)
}
Type declarations can be inside a function. But, in this case, it makes more sense for them to be outside. There are some situations where it's helpful to declare a function inside another function, as sketched after these examples:
Passing a function as an argument: https://play.golang.org/p/4NgeUvsexto
Assigning an anonymous function to a variable: https://play.golang.org/p/r1DF9_iP0-k
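Here is a minimal, illustrative sketch of both patterns (the helper name apply is made up for the example):
package main

import "fmt"

// apply demonstrates passing a function as an argument.
func apply(xs []int, f func(int) int) []int {
	out := make([]int, len(xs))
	for i, x := range xs {
		out[i] = f(x)
	}
	return out
}

func main() {
	// An anonymous function assigned to a variable inside another function.
	double := func(x int) int { return x * 2 }
	fmt.Println(apply([]int{1, 2, 3}, double)) // prints [2 4 6]
}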
(I'm not sure about the exact logic you're looking for; the above code doesn't do anything yet. I'll also caution against creating an interface unless you need it.)
So, based on the answers of @Tyler Bui-Palsulich and @aec, my code now looks like below and gives no more error messages :-). Thanks all!
package main
import (
"fmt"
"time"
"strconv"
//from https://github.com/pplcc/plotext/
"log"
"os"
"github.com/360EntSecGroup-Skylar/excelize"
"github.com/pplcc/plotext/custplotter"
//"github.com/pplcc/plotext/examples"
"gonum.org/v1/plot"
"github.com/pplcc/plotext"
"gonum.org/v1/plot/vg/vgimg"
"gonum.org/v1/plot/vg/draw"
)
// Len implements the Len method of the TOHLCVer interface.
//func (TOHLCV TOHLCVs) Len() int {
// return len(TOHLCV)
//}
type TOHLCVer interface {
// Len returns the number of time, open, high, low, close, volume tuples.
Len() int
// TOHLCV returns an time, open, high, low, close, volume tuple.
TOHLCV(int) (float64, float64, float64, float64, float64, float64)
}
type TOHLCVs []struct{ T, O, H, L, C, V float64 }
// Len implements the Len method of the TOHLCVer interface.
func (TOHLCV TOHLCVs) Len() int {
return len(TOHLCV)
}
// TOHLCV implements the TOHLCV method of the TOHLCVer interface.
func (TOHLCV TOHLCVs) TOHLCV(i int) (float64, float64, float64, float64, float64, float64) {
return TOHLCV[i].T, TOHLCV[i].O, TOHLCV[i].H, TOHLCV[i].L, TOHLCV[i].C, TOHLCV[i].V
}
func main() {
start := time.Now()
//create data for each chart****************************************************
//******************************************************************************
//read excel file******************************************
xlsx, err := excelize.OpenFile("/media/hugues/M.2 windows/Hugues/Snaps/test snaps.xlsm")
if err != nil {
fmt.Println(err)
return
}
//read all rows into df
df := xlsx.GetRows("ticker_2")
df3 := make(TOHLCVs, 60) // create slice for 60 rows
idx := 0
for _, row := range df[1:61] { // read 60 rows
df3[idx].T, err = strconv.ParseFloat(row[28], 64)
df3[idx].O, err = strconv.ParseFloat(row[29], 64)
df3[idx].H, err = strconv.ParseFloat(row[30], 64)
df3[idx].L, err = strconv.ParseFloat(row[31], 64)
df3[idx].C, err = strconv.ParseFloat(row[32], 64)
df3[idx].V, err = strconv.ParseFloat(row[33], 64)
idx++
}
I am trying to parse a string into an integer in Go. The problem I ran into is with the documentation; the syntax mentioned is as follows:
ParseInt(s string, base int, bitSize int)
where s is the string to be parsed and base is implied by the string's prefix: base 16 for "0x", base 8 for "0", and base 10 otherwise.
The bitSize parameter is where I am facing a problem. As per the documentation of ParseInt, it specifies the integer type that the result must fit into. Bit sizes 0, 8, 16, 32, and 64 correspond to int, int8, int16, int32, and int64.
But for all of these values (0, 8, 16, 32, and 64) I am getting the same return type, i.e. int64.
Could anyone point out what I am missing?
Code: https://play.golang.org/p/F3LbUh_maY
As per the documentation:
func ParseInt(s string, base int, bitSize int) (i int64, err error)
ParseInt always returns int64, no matter what. Moreover:
The bitSize argument specifies the integer type that the result must
fit into
So basically your bitSize parameter only says that the value you are parsing must fit into that bit size after parsing; if it doesn't, an out-of-range error occurs.
Like in this Playground example: strconv.ParseInt("192", 10, 8) (as you can see, the parsed value would be bigger than the maximum value of int8).
If you want a specific type, just convert afterwards with int8(i) (or int16(i), int32(i), and so on).
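A quick self-contained sketch of that out-of-range case (the value 192 is chosen only for illustration):
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// 192 does not fit into an int8 (max 127), so ParseInt reports ErrRange
	// and returns the maximum value for the requested bitSize.
	i, err := strconv.ParseInt("192", 10, 8)
	fmt.Println(i, err) // 127 strconv.ParseInt: parsing "192": value out of range
}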
P.S. Since you touched on the topic of converting to a specific intX, I would point out that it is also possible to convert to the unsigned types (with ParseUint).
ParseInt always returns an int64, and you need to convert the result to your desired type. When you pass 32 as the third argument, then you'll get a parse error if the parsed value won't fit into an int32, but the returned type is always int64.
For example:
i, err := strconv.ParseInt("9207", 10, 32)
if err != nil {
panic(err)
}
result := int32(i)
fmt.Printf("Parsed int is %d\n", result)
You can use Sscan:
package main
import "fmt"
func main() {
{
var n int8
fmt.Sscan("100", &n)
fmt.Println(n == 100)
}
{
var n int16
fmt.Sscan("100", &n)
fmt.Println(n == 100)
}
{
var n int32
fmt.Sscan("100", &n)
fmt.Println(n == 100)
}
{
var n int64
fmt.Sscan("100", &n)
fmt.Println(n == 100)
}
}
https://golang.org/pkg/fmt#Sscan
I have just started to learn Go, and I wrote a prime test program using the ProbablyPrime method.
package main
import (
"fmt"
"math/big"
"math"
"os"
"strconv"
)
func prime_test(n int64, certainty int)(bool,float64){
var probobility float64
i := big.NewInt(n)
isPrime := i.ProbablyPrime(certainty)
probobility = 1 - 1/math.Pow(4,10)
return isPrime, probobility
}
func why_not_prime(n int64)(int64){
var i int64
for i=2 ; i<n/2; i++ {
if n%i == 0 {return i}
}
return i
}
func main() {
var n int64
var certainty int
var isPrime bool
var probobility float64
if len(os.Args) > 1 {
n,_ = strconv.ParseInt(os.Args[1],64,64)
certainty,_ = strconv.Atoi(os.Args[2])
}
isPrime, probobility = prime_test(n,certainty)
if isPrime {
fmt.Printf("%d is probably %0.8f%% a prime.", n, probobility*100)
} else {
var i int64
i = why_not_prime(n)
fmt.Printf("%d is a composite because it can be divided by %d", n, i)
}
}
The code compiles successfully, but when I run it, it always returns 0 is a composite because it can be divided by 2.
I guess there's something wrong with the command-line argument parsing. How can I fix it?
The problem is with this line:
n,_ = strconv.ParseInt(os.Args[1],64,64)
The documentation of ParseInt(s string, base int, bitSize int) (i int64, err error) states:
ParseInt interprets a string s in the given base (2 to 36) and returns the corresponding value i.
The base can be at most 36, and you pass 64. In this case an error is returned (which you discard by using the blank identifier _), and n keeps the zero value, which is 0, hence you see the output:
0 is a composite because it can be divided by 2
Solution:
Change the line in question to this:
n, _ = strconv.ParseInt(os.Args[1], 10, 64)
and it should work. Also, you should not discard errors, because you will run into cases like this. Instead, handle them properly like this:
var err error
n, err = strconv.ParseInt(os.Args[1], 10, 64)
if err != nil {
log.Fatal(err)
}
Note:
Also note that the first argument, os.Args[0], is the name of the executable; since you expect and work with 2 extra arguments, you should check whether the length of os.Args is greater than 2, not 1:
if len(os.Args) > 2 {
// os.Args[1] and os.Args[2] is valid
}
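Putting both fixes together, a minimal self-contained sketch (the usage message is an assumption):
package main

import (
	"fmt"
	"log"
	"os"
	"strconv"
)

func main() {
	if len(os.Args) <= 2 {
		log.Fatal("usage: primetest <n> <certainty>")
	}
	n, err := strconv.ParseInt(os.Args[1], 10, 64) // base 10, must fit int64
	if err != nil {
		log.Fatal(err)
	}
	certainty, err := strconv.Atoi(os.Args[2])
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(n, certainty)
}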
I have a function which receives a []byte, but what I have is an int. What is the best way to go about this conversion?
err = a.Write([]byte(myInt))
I guess I could go the long way and get it into a string and put that into bytes, but it sounds ugly and I guess there are better ways to do it.
I agree with Brainstorm's approach: assuming that you're passing a machine-friendly binary representation, use the encoding/binary library. The OP suggests that binary.Write() might have some overhead. Looking at the source for the implementation of Write(), I see that it makes some runtime decisions for maximum flexibility.
func Write(w io.Writer, order ByteOrder, data interface{}) error {
// Fast path for basic types.
var b [8]byte
var bs []byte
switch v := data.(type) {
case *int8:
bs = b[:1]
b[0] = byte(*v)
case int8:
bs = b[:1]
b[0] = byte(v)
case *uint8:
bs = b[:1]
b[0] = *v
...
Right? Write() takes a very generic data third argument, and that imposes some overhead, as the Go runtime is then forced to switch on the type at run time. Since Write() is making runtime decisions that you simply don't need in your situation, maybe you can just call the encoding functions directly and see if it performs better.
Something like this:
package main
import (
"encoding/binary"
"fmt"
)
func main() {
bs := make([]byte, 4)
binary.LittleEndian.PutUint32(bs, 31415926)
fmt.Println(bs)
}
Let us know how this performs.
Otherwise, if you're just trying to get an ASCII representation of the integer, you can get the string representation (probably with strconv.Itoa) and convert that string to the []byte type.
package main
import (
"fmt"
"strconv"
)
func main() {
bs := []byte(strconv.Itoa(31415926))
fmt.Println(bs)
}
Check out the "encoding/binary" package. Particularly the Read and Write functions:
binary.Write(a, binary.LittleEndian, int64(myInt)) // binary.Write needs a fixed-size type, so convert the int first
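A minimal self-contained sketch of that call (the writer here is assumed to be a bytes.Buffer):
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

func main() {
	myInt := 1234
	a := new(bytes.Buffer)
	// binary.Write only accepts fixed-size types, so int is converted to int64.
	if err := binary.Write(a, binary.LittleEndian, int64(myInt)); err != nil {
		fmt.Println("binary.Write failed:", err)
	}
	fmt.Println(a.Bytes()) // [210 4 0 0 0 0 0 0]
}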
Sorry, this might be a bit late, but I think I found a better implementation in the Go docs.
buf := new(bytes.Buffer)
var num uint16 = 1234
err := binary.Write(buf, binary.LittleEndian, num)
if err != nil {
fmt.Println("binary.Write failed:", err)
}
fmt.Printf("% x", buf.Bytes())
I thought the int type would have some method for turning an int into bytes, but the first thing I found was this math/big approach:
https://golang.org/pkg/math/big/
var f int = 52452356235 // int
var s = big.NewInt(int64(f)) // int to big Int
var b = s.Bytes() // big Int to bytes
// b - byte slice
var r = big.NewInt(0).SetBytes(b) // bytes to big Int
var i int = int(r.Int64()) // big Int to int
https://play.golang.org/p/VAKSGw8XNQq
However, this method uses the absolute value.
If you spend 1 byte more, you can transfer the sign:
func IntToBytes(i int) []byte{
if i > 0 {
return append(big.NewInt(int64(i)).Bytes(), byte(1))
}
return append(big.NewInt(int64(i)).Bytes(), byte(0))
}
func BytesToInt(b []byte) int{
if b[len(b)-1]==0 {
return -int(big.NewInt(0).SetBytes(b[:len(b)-1]).Int64())
}
return int(big.NewInt(0).SetBytes(b[:len(b)-1]).Int64())
}
https://play.golang.org/p/mR5Sp5hu4jk
or the newer version (https://play.golang.org/p/7ZAK4QL96FO)
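A quick usage sketch of those two helpers (this assumes the IntToBytes and BytesToInt functions above; the round trip preserves the sign):
b := IntToBytes(-42)
fmt.Println(b)             // absolute-value bytes followed by a trailing sign byte
fmt.Println(BytesToInt(b)) // -42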
(The package also provides a function for filling an existing slice.)
https://golang.org/pkg/math/big/#Int.FillBytes
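A short self-contained sketch of FillBytes (the 8-byte width is chosen only for illustration):
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// FillBytes writes the absolute value into an existing fixed-width buffer,
	// left-padding with zeros; it panics if the value does not fit.
	buf := make([]byte, 8)
	big.NewInt(65535).FillBytes(buf)
	fmt.Println(buf) // [0 0 0 0 0 0 255 255]
}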
Adding this option for dealing with a basic uint8 to []byte conversion:
foo := 255 // 1 - 255
ufoo := uint16(foo)
far := []byte{0,0}
binary.LittleEndian.PutUint16(far, ufoo)
bar := int(far[0]) // back to int
fmt.Println("foo, far, bar : ",foo,far,bar)
output :
foo, far, bar : 255 [255 0] 255
Here is another option, based on the Go source code [1]:
package main
import (
"encoding/binary"
"fmt"
"math/bits"
)
func encodeUint(x uint64) []byte {
buf := make([]byte, 8)
binary.BigEndian.PutUint64(buf, x)
return buf[bits.LeadingZeros64(x) >> 3:]
}
func main() {
for x := 0; x <= 64; x += 8 {
buf := encodeUint(1<<x-1)
fmt.Println(buf)
}
}
Result:
[]
[255]
[255 255]
[255 255 255]
[255 255 255 255]
[255 255 255 255 255]
[255 255 255 255 255 255]
[255 255 255 255 255 255 255]
[255 255 255 255 255 255 255 255]
Much faster than math/big:
BenchmarkBig-12 28348621 40.62 ns/op
BenchmarkBit-12 731601145 1.641 ns/op
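A sketch of benchmarks along those lines (this assumes the encodeUint function from the snippet above and belongs in a _test.go file; the math/big variant uses big.NewInt(...).Bytes()):
package main

import (
	"math/big"
	"testing"
)

var sink []byte // keeps the compiler from optimizing the calls away

func BenchmarkBig(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = big.NewInt(1<<40 - 1).Bytes()
	}
}

func BenchmarkBit(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = encodeUint(1<<40 - 1)
	}
}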
https://github.com/golang/go/blob/go1.16.5/src/encoding/gob/encode.go#L113-L117
You can try musgo_int. All you need to do is to cast your variable:
package main
import (
"github.com/ymz-ncnk/musgo_int"
)
func main() {
var myInt int = 1234
// from int to []byte
buf := make([]byte, musgo_int.Int(myInt).SizeMUS())
musgo_int.Int(myInt).MarshalMUS(buf)
// from []byte to int
_, err := (*musgo_int.Int)(&myInt).UnmarshalMUS(buf)
if err != nil {
panic(err)
}
}
Convert an integer to a byte slice.
import (
"bytes"
"encoding/binary"
"log"
)
func IntToBytes(num int64) []byte {
buff := new(bytes.Buffer)
bigOrLittleEndian := binary.BigEndian
err := binary.Write(buff, bigOrLittleEndian, num)
if err != nil {
log.Panic(err)
}
return buff.Bytes()
}
Maybe the simplest way is to use protobuf; see Protocol Buffer Basics: Go.
Define a message like:
message MyData {
int32 id = 1;
}
Read more in Defining your protocol format.
// Write
out, err := proto.Marshal(mydata)
Read more in Writing a Message.
Try the math/big package to convert a byte slice to an int and to convert an int to a byte slice.
package main
import (
"fmt"
"math/big"
)
func main() {
// Convert int to []byte
var int_to_encode int64 = 65535
var bytes_array []byte = big.NewInt(int_to_encode).Bytes()
fmt.Println("bytes array", bytes_array)
// Convert []byte to int
var decoded_int int64 = new(big.Int).SetBytes(bytes_array).Int64()
fmt.Println("decoded int", decoded_int)
}
This is the most straightforward (and shortest, and safest, and maybe most performant) way; buf.Bytes() is of type []byte.
var val uint32 = 42
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.LittleEndian, val)
if err != nil {
fmt.Println("binary.Write failed:", err)
}
fmt.Printf("% x\n", buf.Bytes())
see also https://stackoverflow.com/a/74819602/589493
What's wrong with converting it to a string?
[]byte(fmt.Sprintf("%d", myint))
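If the fmt allocation bothers you, strconv.AppendInt produces the same ASCII bytes directly (a minimal sketch; myint is assumed to be an int as in the question):
package main

import (
	"fmt"
	"strconv"
)

func main() {
	myint := 31415926
	b := strconv.AppendInt(nil, int64(myint), 10)
	fmt.Println(string(b)) // "31415926"
}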