Convert an integer to a float number - go

How do I convert an integer value to float64 type?
I tried
float(integer_value)
But this does not work, and I can't find any package on golang.org that does this.
How do I get float64 values from integer values?

There is no float type in Go. It looks like you want float64. You could also use float32 if you only need a single-precision floating-point value.
package main

import "fmt"

func main() {
    i := 5
    f := float64(i)
    fmt.Printf("f is %f\n", f)
}

Just for the sake of completeness, here is a link to the Go language specification, which describes all types. In your case, the relevant section is numeric types:
uint8 the set of all unsigned 8-bit integers (0 to 255)
uint16 the set of all unsigned 16-bit integers (0 to 65535)
uint32 the set of all unsigned 32-bit integers (0 to 4294967295)
uint64 the set of all unsigned 64-bit integers (0 to 18446744073709551615)
int8 the set of all signed 8-bit integers (-128 to 127)
int16 the set of all signed 16-bit integers (-32768 to 32767)
int32 the set of all signed 32-bit integers (-2147483648 to 2147483647)
int64 the set of all signed 64-bit integers (-9223372036854775808 to 9223372036854775807)
float32 the set of all IEEE-754 32-bit floating-point numbers
float64 the set of all IEEE-754 64-bit floating-point numbers
complex64 the set of all complex numbers with float32 real and imaginary parts
complex128 the set of all complex numbers with float64 real and imaginary parts
byte alias for uint8
rune alias for int32
Which means that you need to use float64(integer_value).
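As a minimal sketch (my addition, not part of the original answer), converting the same integer to both floating-point types looks like this:
package main

import "fmt"

func main() {
    integer_value := 42
    f64 := float64(integer_value) // double precision
    f32 := float32(integer_value) // single precision; may lose precision for very large ints
    fmt.Println(f64, f32)         // 42 42
}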

Just do this:
package main

import "fmt"

func main() {
    a := 70
    afloat := float64(a)
    fmt.Printf("type of a is %T\n", a)           // will print int
    fmt.Printf("type of afloat is %T\n", afloat) // will print float64
}

intutils.ToFloat32
// ToFloat32 converts an int to a float32.
func ToFloat32(in int) float32 {
    return float32(in)
}

// ToFloat64 converts an int to a float64.
func ToFloat64(in int) float64 {
    return float64(in)
}
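A minimal usage sketch (my addition), with the helper repeated locally so it runs standalone:
package main

import "fmt"

// ToFloat64 mirrors the helper above for the purpose of this sketch.
func ToFloat64(in int) float64 {
    return float64(in)
}

func main() {
    f := ToFloat64(7)
    fmt.Printf("%T %v\n", f, f) // float64 7
}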

Proper parentheses placement is key:
package main

import (
    "fmt"
)

func main() {
    var payload uint32
    var fpayload float32
    payload = 1320

    // works: convert first, then divide in floating point
    fpayload = float32(payload) / 100.0
    fmt.Printf("%T = %d, %T = %f\n", payload, payload, fpayload, fpayload)

    // doesn't work as intended: payload / 100.0 is evaluated as uint32 division
    // (the untyped constant 100.0 becomes uint32), so the result is truncated
    // to 13 before the conversion to float32
    fpayload = float32(payload / 100.0)
    fmt.Printf("%T = %d, %T = %f\n", payload, payload, fpayload, fpayload)
}
results:
uint32 = 1320, float32 = 13.200000
uint32 = 1320, float32 = 13.000000
The Go Playground

Type conversions T(x), where T is the desired type of the result, are quite simple in Go.
In my program, I scan an integer i from user input, perform a type conversion on it, and store the result in the variable f. The output prints the float64 equivalent of the int input. The float32 type is also available in Go.
Code:
package main

import "fmt"

func main() {
    var i int
    fmt.Println("Enter an Integer input: ")
    fmt.Scanf("%d", &i)
    f := float64(i)
    fmt.Printf("The float64 representation of %d is %f\n", i, f)
}
Output:
>>> Enter an Integer input:
>>> 232332
>>> The float64 representation of 232332 is 232332.000000

Related

Why these two structs have different size in the memory?

Suppose I have these two structs:
package main

import (
    "fmt"
    "unsafe"
)

type A struct {
    int8
    int16
    bool
}

type B struct {
    int8
    bool
    int16
}

func main() {
    fmt.Println(unsafe.Sizeof(A{}), unsafe.Sizeof(B{})) // 6 4
}
The size of A is 6 bytes; however, the size of B is 4 bytes.
I assume that it's related to their layout in memory, but I'm not sure I understand why it behaves like this.
Isn't this something the compiler can detect and optimize (rearrange the field order)?
Padding due to alignment.
The Go Programming Language Specification
Size and alignment guarantees
For the numeric types, the following sizes are guaranteed:
type size in bytes
byte, uint8, int8 1
uint16, int16 2
uint32, int32, float32 4
uint64, int64, float64, complex64 8
complex128 16
The following minimal alignment properties are guaranteed:
For a variable x of any type: unsafe.Alignof(x) is at least 1.
For a variable x of struct type: unsafe.Alignof(x) is the largest of all the values unsafe.Alignof(x.f) for each field f of x, but at least 1.
For a variable x of array type: unsafe.Alignof(x) is the same as the alignment of a variable of the array's element type.
A struct or array type has size zero if it contains no fields (or elements, respectively) that have a size greater than zero. Two distinct zero-size variables may have the same address in memory.
For example,
package main

import (
    "fmt"
    "unsafe"
)

type A struct {
    x int8
    y int16
    z bool
}

type B struct {
    x int8
    y bool
    z int16
}

func main() {
    var a A
    fmt.Println("A:")
    fmt.Println("Size: ", unsafe.Sizeof(a))
    fmt.Printf("Address: %p %p %p\n", &a.x, &a.y, &a.z)
    fmt.Printf("Offset: %d %d %d\n", unsafe.Offsetof(a.x), unsafe.Offsetof(a.y), unsafe.Offsetof(a.z))
    fmt.Println()

    var b B
    fmt.Println("B:")
    fmt.Println("Size: ", unsafe.Sizeof(b))
    fmt.Printf("Address: %p %p %p\n", &b.x, &b.y, &b.z)
    fmt.Printf("Offset: %d %d %d\n", unsafe.Offsetof(b.x), unsafe.Offsetof(b.y), unsafe.Offsetof(b.z))
}
Playground: https://play.golang.org/p/_8yDMungDg0
Output:
A:
Size: 6
Address: 0x10410020 0x10410022 0x10410024
Offset: 0 2 4
B:
Size: 4
Address: 0x10410040 0x10410041 0x10410042
Offset: 0 1 2
You may be matching an external struct, perhaps in another language. It's up to you to tell the compiler what you want. The compiler doesn't guess.
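As a small illustrative sketch (my addition, not part of the original answer), printing the alignment of each field type makes the padding in A visible on typical targets: int16 must start at an even offset, so a one-byte hole follows the int8, and the struct size is rounded up to a multiple of its largest alignment.
package main

import (
    "fmt"
    "unsafe"
)

func main() {
    var (
        i8  int8
        i16 int16
        b   bool
    )
    // Alignments: int8 and bool align to 1 byte, int16 aligns to 2 bytes.
    fmt.Println(unsafe.Alignof(i8), unsafe.Alignof(i16), unsafe.Alignof(b)) // 1 2 1
    // A{int8, int16, bool}: offsets 0, 2, 4 -> size 6 (one padding byte after
    // the int8, one trailing byte to round the size up to a multiple of 2).
    // B{int8, bool, int16}: offsets 0, 1, 2 -> size 4 (no padding needed).
}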

golang how can I convert uint64 to int64? [duplicate]

This question already has answers here:
Convert uint64 to int64 without loss of information
(3 answers)
Closed 5 years ago.
Can anyone help me with converting uint64 to int64?
// fmt.Println(int64(18446744073709551615))
// constant 18446744073709551615 overflows int64
var x uint64 = 18446744073709551615
var y int64 = int64(x)
fmt.Println(y) // -1
// just like a (C) signed long long
// How can I get a value in the signed range,
// i.e. -9223372036854775808 to +9223372036854775807?
func BytesToInt(b []byte) int {
    bytesBuffer := bytes.NewBuffer(b)
    var tmp int32
    binary.Read(bytesBuffer, binary.BigEndian, &tmp)
    return int(tmp)
}
What you are asking (to store 18,446,744,073,709,551,615 as an int64 value) is impossible.
A uint64 stores positive integers and has 64 bits available to hold information. It can therefore store any positive integer between 0 and 18,446,744,073,709,551,615 (2^64-1).
An int64 uses one bit to hold the sign, leaving 63 bits to hold information about the number. It can store any value between -9,223,372,036,854,775,808 and +9,223,372,036,854,775,807 (-2^63 and 2^63-1).
Both types can hold 18,446,744,073,709,551,616 unique integers; it is just that the uint64 range starts at zero, whereas the int64 values straddle zero.
To hold 18,446,744,073,709,551,615 as a signed integer would require 65 bits.
In your conversion, no information from the underlying bytes is lost. The difference in the integer values returned is due to how the two types interpret and display the bits: uint64 treats them as a plain unsigned integer, whereas int64 interprets them using two's complement.
var x uint64 = 18446744073709551615
var y int64 = int64(x)
fmt.Printf("uint64: %v = %#[1]x, int64: %v = %#x\n", x, y, uint64(y))
// uint64: 18446744073709551615 = 0xffffffffffffffff
// int64: -1 = 0xffffffffffffffff
x -= 100
y -= 100
fmt.Printf("uint64: %v = %#[1]x, int64: %v = %#x\n", x, y, uint64(y))
// uint64: 18446744073709551515 = 0xffffffffffffff9b
// int64: -101 = 0xffffffffffffff9b
https://play.golang.com/p/hlWqhnC9Dh
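If you actually need to reject values that do not fit, a minimal sketch (my addition) is to compare against math.MaxInt64 before converting:
package main

import (
    "fmt"
    "math"
)

func uint64ToInt64(x uint64) (int64, error) {
    // Values above math.MaxInt64 cannot be represented as int64.
    if x > math.MaxInt64 {
        return 0, fmt.Errorf("%d overflows int64", x)
    }
    return int64(x), nil
}

func main() {
    fmt.Println(uint64ToInt64(42))                   // 42 <nil>
    fmt.Println(uint64ToInt64(18446744073709551615)) // 0 and an overflow error
}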

Why am I getting a compile error 'cannot use ... as type uint8 in argument to ...' when the parameter is an int

I am new to Go and was working through a problem in The Go Programming Language. The code should create GIF animations out of random Lissajous figures, with the images produced in the different colors from the palette:
// Copyright © 2016 Alan A. A. Donovan & Brian W. Kernighan.
// License: https://creativecommons.org/licenses/by-nc-sa/4.0/

// Run with "web" command-line argument for web server.
// See page 13.
//!+main

// Lissajous generates GIF animations of random Lissajous figures.
package main

import (
    "image"
    "image/color"
    "image/gif"
    "io"
    "math"
    "math/rand"
    "os"
)

//!-main
// Packages not needed by version in book.
import (
    "log"
    "net/http"
    "time"
)

//!+main

// #00ff55
var palette = []color.Color{color.RGBA{0x00, 0xff, 0x55, 0xFF}, color.Black, color.RGBA{0x00, 0x00, 0xff, 0xFF}, color.RGBA{0xff, 0x00, 0xff, 0xFF}}

const (
    whiteIndex = 0 // first color in palette
)

func main() {
    //!-main
    // The sequence of images is deterministic unless we seed
    // the pseudo-random number generator using the current time.
    // Thanks to Randall McPherson for pointing out the omission.
    rand.Seed(time.Now().UTC().UnixNano())

    if len(os.Args) > 1 && os.Args[1] == "web" {
        //!+http
        handler := func(w http.ResponseWriter, r *http.Request) {
            lissajous(w)
        }
        http.HandleFunc("/", handler)
        //!-http
        log.Fatal(http.ListenAndServe("localhost:8000", nil))
        return
    }
    //!+main
    lissajous(os.Stdout)
}

func lissajous(out io.Writer) {
    const (
        cycles  = 5     // number of complete x oscillator revolutions
        res     = 0.001 // angular resolution
        size    = 100   // image canvas covers [-size..+size]
        nframes = 64    // number of animation frames
        delay   = 8     // delay between frames in 10ms units
    )
    freq := rand.Float64() * 3.0 // relative frequency of y oscillator
    anim := gif.GIF{LoopCount: nframes}
    phase := 0.0 // phase difference
    colorIndex := 2
    for i := 0; i < nframes; i++ {
        rect := image.Rect(0, 0, 2*size+1, 2*size+1)
        img := image.NewPaletted(rect, palette)
        for t := 0.0; t < cycles*2*math.Pi; t += res {
            x := math.Sin(t)
            y := math.Sin(t*freq + phase)
            img.SetColorIndex(size+int(x*size+0.5), size+int(y*size+0.5), colorIndex)
            colorIndex++
        }
        phase += 0.1
        anim.Delay = append(anim.Delay, delay)
        anim.Image = append(anim.Image, img)
    }
    gif.EncodeAll(out, &anim) // NOTE: ignoring encoding errors
}

//!-main
Here is the error I am getting
lissajous/main.go:76: cannot use colorIndex (type int) as type uint8 in argument to img.SetColorIndex
Is there a difference between int and uint8 types or something?
The type of colorIndex is int. The argument type is uint8. An int cannot be assigned to a uint8. Here are some options for fixing the program:
Declare colorIndex as an untyped constant.
const colorIndex = 2
Declare colorIndex as uint8 type:
colorIndex := uint8(3)
Convert the value at the call:
img.SetColorIndex(size+int(x*size+0.5), size+int(y*size+0.5), uint8(colorIndex))
You can replace all uses of uint8 in this answer with byte because byte is an alias for uint8.
In a short variable declaration without an explicit type, the untyped constant's default type is used; in your case colorIndex := 2 makes colorIndex an int, not a uint8.
From the docs ( https://golang.org/ref/spec#Short_variable_declarations ):
"If a type is present, each variable is given that type. Otherwise, each variable is given the type of the corresponding initialization value in the assignment. If that value is an untyped constant, it is first converted to its default type;..."
"var i = 42 // i is int"
and then
"An untyped constant has a default type which is the type to which the constant is implicitly converted in contexts where a typed value is required, for instance, in a short variable declaration such as i := 0 where there is no explicit type. The default type of an untyped constant is bool, rune, int, float64, complex128 or string respectively, depending on whether it is a boolean, rune, integer, floating-point, complex, or string constant."
So to get uint8, you should either explicitly declare colorIndex as uint8 (var colorIndex uint8 = 2) or convert to uint8 in the img.SetColorIndex call:
img.SetColorIndex(size+int(x*size+0.5), size+int(y*size+0.5), uint8(colorIndex))
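A minimal sketch (my addition) that makes the default-type rule visible with %T:
package main

import "fmt"

func main() {
    colorIndex := 2        // untyped constant 2 gets its default type, int
    var explicit uint8 = 2 // the same constant assigned to a typed variable becomes uint8
    fmt.Printf("%T %T\n", colorIndex, explicit) // int uint8
}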

Parse string to specific type of int (int8, int16, int32, int64)

I am trying to parse a string into an integer in Go. The problem I found is that, according to the documentation, its syntax is as follows:
ParseInt(s string, base int, bitSize int)
where s is the string to be parsed, and base is implied by the string's prefix: base 16 for "0x", base 8 for "0", and base 10 otherwise.
The bitSize parameter is where I am facing a problem. As per the documentation of ParseInt, it specifies the integer type that the result must fit into; bit sizes 0, 8, 16, 32, and 64 correspond to int, int8, int16, int32, and int64.
But for all of the values 0, 8, 16, 32, and 64, I am getting the same return type, i.e. int64.
Could anyone point out what I am missing?
Code: https://play.golang.org/p/F3LbUh_maY
As per documentation
func ParseInt(s string, base int, bitSize int) (i int64, err error)
ParseInt always returns int64, no matter what. Moreover:
The bitSize argument specifies the integer type that the result must fit into
So basically your bitSize parameter only says that the string value you are parsing must fit into that many bits after parsing. If it does not, you get an out-of-range error.
Like in this Playground: strconv.ParseInt("192", 10, 8) (as you can see, the parsed value would be bigger than the maximum value of int8).
If you want a specific type, just convert afterwards: int8(i), int16(i), int32(i).
P.S. Since you touched on converting to a specific intX, I would point out that it is also possible to convert to unsigned integer types.
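A minimal sketch (my addition) of the out-of-range case mentioned above; per the strconv docs, the returned error wraps strconv.ErrRange when the value does not fit in the requested bit size:
package main

import (
    "fmt"
    "strconv"
)

func main() {
    // 192 does not fit in an int8 (max 127), so ParseInt reports a range error.
    i, err := strconv.ParseInt("192", 10, 8)
    fmt.Println(i, err)

    // 100 fits, so the conversion to int8 afterwards is safe.
    j, _ := strconv.ParseInt("100", 10, 8)
    n := int8(j)
    fmt.Println(n)
}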
ParseInt always returns an int64, and you need to convert the result to your desired type. When you pass 32 as the third argument, then you'll get a parse error if the parsed value won't fit into an int32, but the returned type is always int64.
For example:
i, err := strconv.ParseInt("9207", 10, 32)
if err != nil {
    panic(err)
}
result := int32(i)
fmt.Printf("Parsed int is %d\n", result)
You can use Sscan:
package main

import "fmt"

func main() {
    {
        var n int8
        fmt.Sscan("100", &n)
        fmt.Println(n == 100)
    }
    {
        var n int16
        fmt.Sscan("100", &n)
        fmt.Println(n == 100)
    }
    {
        var n int32
        fmt.Sscan("100", &n)
        fmt.Println(n == 100)
    }
    {
        var n int64
        fmt.Sscan("100", &n)
        fmt.Println(n == 100)
    }
}
https://golang.org/pkg/fmt#Sscan
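Note that the snippets above ignore Sscan's error return. A minimal sketch (my addition) of checking it, which also catches values that do not fit the target type:
package main

import "fmt"

func main() {
    var n int8
    if _, err := fmt.Sscan("300", &n); err != nil {
        fmt.Println("scan failed:", err) // 300 does not fit in an int8
    }

    var m int16
    if _, err := fmt.Sscan("300", &m); err == nil {
        fmt.Println(m) // 300
    }
}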

Idiomatic Type Conversion in Go

I was playing around with Go and was wondering what the best way is to perform idiomatic type conversions in Go. Basically my problem lies with automatic type conversions between uint8, uint64, and float64. From my experience with other languages, a multiplication of a uint8 with a uint64 will yield a uint64 value, but not so in Go.
Here is an example that I built; is this the idiomatic way of writing this code, or am I missing an important language construct?
package main

import (
    "fmt"
    "math"
)

const Width = 64

func main() {
    var index uint32
    var bits uint8
    index = 100
    bits = 3
    var c uint64
    // This is the line of interest vvvv
    c = uint64(math.Ceil(float64(index*uint32(bits)) / float64(Width)))
    fmt.Printf("Test: %v\n", c)
}
From my point of view, the calculation of the ceiling value seems unnecessarily complex because of all the explicit type conversions.
Thanks!
There are no implicit type conversions for non-constant values.
You can write
var x float64
x = 1
But you cannot write
var x float64
var y int
y = 1
x = y
See the spec for reference.
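For completeness (my addition), the explicit conversion that does compile:
var x float64
var y int
y = 1
x = float64(y) // explicit conversion required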
There's a good reason not to allow automatic/implicit type conversions: they can become very messy, and one has to learn many rules to circumvent the various caveats that may occur. Take the integer conversion rules in C, for example.
For example,
package main

import "fmt"

func CeilUint(a, b uint64) uint64 {
    return (a + (b - 1)) / b
}

func main() {
    const Width = 64
    var index uint32 = 100
    var bits uint8 = 3
    var c uint64 = CeilUint(uint64(index)*uint64(bits), Width)
    fmt.Println("Test:", c)
}
Output:
Test: 5
To add to nemo's terrific answer: the convenience of automatic conversion between numeric types in C is outweighed by the confusion it causes. See https://Golang.org/doc/faq#conversions. That's why you can't even convert from int to int32 implicitly. See https://stackoverflow.com/a/13852456/12817546.
package main

import (
    . "fmt"
    . "strconv"
)

func main() {
    i := 71
    c := []interface{}{byte(i), []byte(string(i)), float64(i), i, rune(i), Itoa(i), i != 0}
    checkType(c)
}

func checkType(s []interface{}) {
    for k := range s {
        Printf("%T %v\n", s[k], s[k])
    }
}
byte(i) creates a uint8 with a value of 71, []byte(string(i)) a []uint8 with [71], float64(i) float64 71, i int 71, rune(i) int32 71, Itoa(i) string 71 and i != 0 a bool with a value of true.
Since Go won't convert numeric types automatically for you (see https://stackoverflow.com/a/13851553/12817546), you have to convert between types manually. See https://stackoverflow.com/a/41419962/12817546. Note that Itoa(i) converts an integer to its ASCII decimal string. See the comment in https://stackoverflow.com/a/10105983/12817546.
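A minimal sketch (my addition) highlighting the difference between string(i) and strconv.Itoa(i) from the snippet above: converting an integer directly to a string interprets it as a Unicode code point, while Itoa produces the decimal text. (It is written as string(rune(i)) here, since direct int-to-string conversion is flagged by go vet in current Go versions.)
package main

import (
    "fmt"
    "strconv"
)

func main() {
    i := 71
    fmt.Println(string(rune(i))) // "G"  (71 is the code point for 'G')
    fmt.Println(strconv.Itoa(i)) // "71" (decimal representation)
}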

Resources