Couldn't convert <nil> into type ...? - go

I tried using database/sql to query a database row into a Go type; my code snippet follows:
type User struct {
	user_id     int64
	user_name   string
	user_mobile string
	password    string
	email       interface{}
	nickname    string
	level       byte
	locked      bool
	create_time string
	comment     string // convert <nil> to *string error
}
func TestQueryUser(t *testing.T) {
	db := QueryUser(driverName, dataSourceName)
	stmtResults, err := db.Prepare(queryAll)
	if err != nil {
		t.Fatal(err) // Prepare can fail; don't defer Close on a nil statement
	}
	defer stmtResults.Close()

	var r *User = new(User)
	arr := []interface{}{
		&r.user_id, &r.user_name, &r.user_mobile, &r.password, &r.email,
		&r.nickname, &r.level, &r.locked, &r.create_time, &r.comment,
	}
	err = stmtResults.QueryRow(username).Scan(arr...)
	if err != nil {
		t.Error(err.Error())
	}
	fmt.Println(r.email)
}
MySQL:
As you can see, some fields can hold NULL values, so I had to use interface{} for those fields in the Go User struct, which converts NULL to nil.
--- FAIL: TestQueryUser (0.00s)
user_test.go:48: sql: Scan error on column index 9: unsupported Scan, storing driver.Value type <nil> into type *string
Does somebody have a better way? Or must I change the MySQL fields and set DEFAULT ''?

First, the short answer: the sql package has types for this, for example sql.NullString for a nullable string column (and, as you'd guess, sql.NullInt64, sql.NullBool, and so on), and you should use them in your struct.
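For illustration, a minimal sketch of the question's struct with the nullable columns switched to sql.NullString (which columns are nullable is my guess from the question; import database/sql):

type User struct {
	user_id     int64
	user_name   string
	user_mobile string
	password    string
	email       sql.NullString // was interface{}; a NULL arrives as Valid == false
	nickname    string
	level       byte
	locked      bool
	create_time string
	comment     sql.NullString // was string; this column caused the Scan error
}

// After Scan, read the value through the wrapper:
// if r.comment.Valid { fmt.Println(r.comment.String) }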
The long one: there are two interfaces available in Go for this. The first is Scanner and the other is Valuer. For any special type in the database (for example, I use this mostly with JSONB in Postgres) you create a type and implement these two interfaces (or one of them) on it.
Scanner is used when you call the Scan function: the data from the database driver, normally a []byte, is the input, and you are responsible for handling it. Valuer is used when the value is an input in a query; the result is normally a slice of bytes (and an error). If you only need to read data, Scanner is enough, and vice versa: if you only need to write the value as a query parameter, Valuer is enough.
For an example implementation, I recommend looking at the types in the sql package.
There is also an example below of a type to use with the JSONB/JSON type in PostgreSQL:
// GenericJSONField is used to handle generic JSON data in Postgres.
type GenericJSONField map[string]interface{}

// Scan converts the JSON field from the database into our type.
func (v *GenericJSONField) Scan(src interface{}) error {
	var b []byte
	switch src := src.(type) {
	case []byte:
		b = src
	case string:
		b = []byte(src)
	case nil:
		// NULL column: leave the map nil instead of failing in Unmarshal
		*v = nil
		return nil
	default:
		return errors.New("unsupported type")
	}
	return json.Unmarshal(b, v)
}

// Value returns the JSON representation to store in the database.
func (v GenericJSONField) Value() (driver.Value, error) {
	return json.Marshal(v)
}
The driver value is often []byte, but string and nil are acceptable too, so this can handle nullable fields as well.
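For completeness, a usage sketch, assuming db is an open *sql.DB and a profiles table with a JSONB attrs column (both made up for illustration):

type Profile struct {
	ID    int64
	Attrs GenericJSONField
}

var p Profile
// database/sql calls GenericJSONField.Scan for us while reading the row.
err := db.QueryRow(`SELECT id, attrs FROM profiles WHERE id = $1`, 1).
	Scan(&p.ID, &p.Attrs)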

Related

How to use pgtype.Numeric with gorm and sqlite3?

I need to store very large, high-precision numbers with GORM, and using pgtype.Numeric seems like the best bet. However, I cannot, because I get an error: sql: Scan error on column index 4, name "foo": cannot scan int64
My model looks something like this:
type Model struct {
	gorm.Model
	Foo *pgtype.Numeric `gorm:"not null"`
}
Not sure if using pgtype.Numeric is the best choice (that's what I've seen everyone else use), or if I'm doing something wrong. Thanks!
The code that caused the error:
package main

import (
	"math/big"

	"github.com/jackc/pgtype"
	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

type Model struct {
	gorm.Model
	Foo *pgtype.Numeric `gorm:"not null"`
}

func main() {
	db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
	if err != nil {
		panic("failed to connect database")
	}

	// Migrate the schema
	db.AutoMigrate(&Model{})

	// Create
	db.Create(&Model{Foo: &pgtype.Numeric{Int: big.NewInt(10000000), Status: pgtype.Present}})

	var m Model
	db.First(&m) // this line causes the error
}
SQLite3 does not support big integers, so there is no way you can accomplish that directly. I ran the code, and the foo column is created as:
`foo` numeric NOT NULL
which in SQLite (https://www.sqlite.org/datatype3.html) means:
A column with NUMERIC affinity may contain values using all five storage classes... If the TEXT value is a well-formed integer literal that is too large to fit in a 64-bit signed integer, it is converted to REAL.
So your big int will be turned into a float64. Good thing it panicked instead of losing accuracy silently.
What you can do is convert the big int to a string or bytes first and store that.
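A minimal sketch of that idea, using a made-up wrapper type that implements the usual sql.Scanner and driver.Valuer interfaces (imports database/sql/driver, fmt, and math/big assumed):

// BigIntString stores a big.Int in the database as its decimal string.
type BigIntString struct {
	big.Int
}

// Value implements driver.Valuer: write the number as text.
func (b BigIntString) Value() (driver.Value, error) {
	return b.String(), nil
}

// Scan implements sql.Scanner: parse the text back into the big.Int.
func (b *BigIntString) Scan(src interface{}) error {
	var s string
	switch src := src.(type) {
	case string:
		s = src
	case []byte:
		s = string(src)
	default:
		return fmt.Errorf("unsupported type %T", src)
	}
	if _, ok := b.SetString(s, 10); !ok {
		return fmt.Errorf("invalid integer %q", s)
	}
	return nil
}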
When debugging the sql.Scanner interface during database deserialization, you can see that the value from the database arrives either as int64 or float64, which then leads to the corresponding error message.
A possible solution is to use a text data type in the database by adding type:text to the field tag:
`gorm:"type:text;"`
Using the github.com/shopspring/decimal package, you can conveniently create a decimal number with the NewFromString function.
The adapted code to insert the data:
num, err := decimal.NewFromString("123456789012345.12345678901")
if err != nil {
	panic(err)
}
db.Create(&Model{Foo: &num})
The model structure might then look something like this:
type Model struct {
	gorm.Model
	Foo *decimal.Decimal `gorm:"not null;type:text;"`
}
This would result in the following schema, with the foo column created as:
`foo` text NOT NULL
If you set a breakpoint in decimal.Scan, you can see that the value arrives from the database as a string, as expected, and the decimal is created with NewFromString (see Decimal's Scan method).
If you add this line of code to the end of the main function
fmt.Println(m.Foo)
it would result in the following output in the debug console:
123456789012345.12345678901
Complete Program
Your complete program, slightly adapted to the above points, would then look something like this:
package main

import (
	"fmt"

	"github.com/shopspring/decimal"
	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

type Model struct {
	gorm.Model
	Foo *decimal.Decimal `gorm:"not null;type:text;"`
}

func main() {
	db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
	if err != nil {
		panic("failed to connect database")
	}

	// Migrate the schema
	db.AutoMigrate(&Model{})

	// Create
	num, err := decimal.NewFromString("123456789012345.12345678901")
	if err != nil {
		panic(err)
	}
	db.Create(&Model{Foo: &num})

	var m Model
	db.First(&m)
	fmt.Println(m.Foo)
}
pgtype.Numeric and SQLite
If a PostgreSQL database is used, GORM can be used together with pgtype.Numeric to handle decimal numbers like 123456789012345.12345678901. You just need to use the numeric data type on the Postgres side with the desired precision (e.g. numeric(50,15)).
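The model for the Postgres case might then look like this (a sketch; the column type in the tag is my assumption, mirroring the text tag used above):

type Model struct {
	gorm.Model
	Foo *pgtype.Numeric `gorm:"not null;type:numeric(50,15)"`
}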
After all, this is exactly what pgtype is for; see the pgtype README, where it says:
pgtype is the type system underlying the https://github.com/jackc/pgx PostgreSQL driver.
However, if you use a text data type in SQLite for the reasons mentioned above, pgtype.Numeric will not work with SQLite. An attempt with the above number writes 12345678901234512345678901e-11 to the DB, and when reading it back the following error occurs:
sql: Scan error on column index 4, name "foo": 12345678901234512345678901e-11 is not a number

Error - redigo.Scan: Cannot convert from Redis bulk string to *string

I have a struct like this:
type User struct {
	Nickname *string `json:"nickname"`
	Phone    *string `json:"phone"`
}
Values are placed in Redis with the HMSET command (values can be nil).
Now I'm trying to scan the values into a struct:
values, err := redis.Values(Cache.Do("HMGET", "key", "nickname", "phone"))
var usr User
_, err = redis.Scan(values, &usr.Nickname, &usr.Phone)
But I get an error
redigo.Scan: cannot assign to dest 0: cannot convert from Redis bulk string to *string
Please tell me what I'm doing wrong?
The Scan documentation says:
The values pointed at by dest must be an integer, float, boolean, string, []byte, interface{} or slices of these types.
The application passes a pointer to a *string to the function. A *string is not one of the supported types.
There are two approaches for fixing the problem. The first is to allocate string values and pass pointers to the allocated string values to Scan:
usr := User{Nickname: new(string), Phone: new(string)}
_, err := redis.Scan(values, usr.Nickname, usr.Phone)
The second approach is to change the type of the struct fields to string:
type User struct {
	Nickname string `json:"nickname"`
	Phone    string `json:"phone"`
}
...
var usr User
_, err := redis.Scan(values, &usr.Nickname, &usr.Phone)
The doc says that []byte is the type for a bulk string, not *string. You have two options here:
change the particular field type to []byte,
or use a temporary variable of type []byte in the scan, then store the retrieved data in the struct's field (a sketch follows below).
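A sketch of the second option, keeping the original *string fields (redigo's Scan leaves a destination untouched when the source element is nil, so the nil checks below distinguish missing fields):

var nickname, phone []byte
if _, err := redis.Scan(values, &nickname, &phone); err != nil {
	// handle the error
}
var usr User
if nickname != nil {
	s := string(nickname)
	usr.Nickname = &s
}
if phone != nil {
	s := string(phone)
	usr.Phone = &s
}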

How can I convert a JSON string to a byte array?

I need some help with unmarshaling. I have this example code:
package main

import (
	"encoding/json"
	"fmt"
)

type Obj struct {
	Id   string `json:"id"`
	Data []byte `json:"data"`
}

func main() {
	byt := []byte(`{"id":"someID","data":["str1","str2"]}`)

	var obj Obj
	if err := json.Unmarshal(byt, &obj); err != nil {
		panic(err)
	}
	fmt.Println(obj)
}
What I'm trying to do here is unmarshal bytes into the struct, where one field's type is []byte. The error I get:
panic: json: cannot unmarshal string into Go struct field Obj.data of
type uint8
That's probably because the parser sees that the data field is a slice and tries to represent "str1" as a single byte (type uint8?).
How do I store the whole data value as one byte array? I want to unmarshal the value into a slice of strings later. I don't put a slice of strings in the struct because the type can change (array of strings, int, string, etc.); I want this to be universal.
My first recommendation would be to just use []string instead of []byte if you know the input is going to be an array of strings.
If data is going to be a JSON array of various types, then your best option is []interface{} instead: Go will happily unmarshal the JSON for you, and you can perform runtime checks to cast the values into more specifically typed variables on an as-needed basis.
If []byte really is what you want, use json.RawMessage, which has underlying type []byte but also implements the methods for JSON parsing. I believe this may be what you want, as it will accept whatever ends up in data. Of course, you then have to parse Data manually to figure out what actually IS in there.
One possible bonus is that this skips any heavy parsing, because it just copies the bytes over. When you want to use the data, unmarshal it into a []interface{} and use a type switch on the individual values.
https://play.golang.org/p/og88qb_qtpSGJ
package main

import (
	"encoding/json"
	"fmt"
)

type Obj struct {
	Id   string          `json:"id"`
	Data json.RawMessage `json:"data"`
}

func main() {
	byt := []byte(`{"id":"someID","data":["str1","str2", 1337, {"my": "obj", "id": 42}]}`)

	var obj Obj
	if err := json.Unmarshal(byt, &obj); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", obj)
	fmt.Printf("Data: %s\n", obj.Data)

	// use it
	var d []interface{}
	if err := json.Unmarshal(obj.Data, &d); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", d)

	for _, v := range d {
		// you need a type switch to determine the type and be able to use most of these
		switch real := v.(type) {
		case string:
			fmt.Println("I'm a string!", real)
		case float64:
			fmt.Println("I'm a number!", real)
		default:
			fmt.Printf("Unaccounted for: %+v\n", v)
		}
	}
}
Your question is:
convert a byte array to a struct with a field of type []byte
But you do not have a byte array; you have a string array, so your question is not the same as your example. Let's answer the question as asked; more solutions are possible depending on how far you want to diverge from your original requirements.
One string can be converted to one byte slice; two strings first need to be joined into one string. That is problem one. The second problem is the square brackets in your JSON string.
This unmarshals without error. Note, though, that encoding/json treats a []byte field as base64: "str1str2" happens to be valid base64, so it decodes, but the resulting bytes are the base64 decoding rather than the literal characters:
byt := []byte(`{"id":"someID","data":"str1str2"}`)

var obj Obj
if err := json.Unmarshal(byt, &obj); err != nil {
	panic(err)
}
fmt.Println(obj)

Solution for type assertion without generics with Golang?

I'm using gorm, and it allows many data types like int, uint, int8, uint8, and so on.
Then I have a template function like this:
f["UNIX2STR"] = func(t interface{}, f string) string {
switch t.(type) {
case int:
return time.Unix(int64(t.(int)), 0).Format(f)
case uint:
return time.Unix(int64(t.(uint)), 0).Format(f)
case uint8:
return time.Unix(int64(t.(uint8)), 0).Format(f)
case *int:
return time.Unix(int64(*t.(*int)), 0).Format(f)
case *uint:
return time.Unix(int64(*t.(*uint)), 0).Format(f)
case *uint8:
return time.Unix(int64(*t.(*uint8)), 0).Format(f)
.....
default:
return ""
}
// return time.Unix(int64(t), 0).Format(f)
}
It converts all integer types to a formatted string.
So what am I supposed to do? List all gorm-supported int types and cast them to int64?
I have searched for days for a solution that converts an interface{} to its true type without type assertions, but found nothing.
I've not used gorm, but I think something like this could solve your problem:
func formatUnix(t interface{}, f string) (string, error) {
	timestampStr := fmt.Sprint(t)
	timestamp, err := strconv.ParseInt(timestampStr, 10, 64)
	if err != nil {
		return "", err
	}
	return time.Unix(timestamp, 0).Format(f), nil
}
Rather than listing all potential types, it simply converts the interface{} to a string using fmt.Sprint() and then converts the string to an int64 using strconv.ParseInt().
Based on your comments, it sounds like you're concerned with converting any numeric type to a string. This is easily done with fmt.Sprint:
stringValue := fmt.Sprint(i) // i is any type
But this has nothing to do with GORM.
On the other hand, if your problem is that GORM is returning an unpredictable type, just change your select statement to always return a string. For example, for MySQL, something like:
SELECT CAST(someNumberColumn AS CHAR) AS stringColumn
or
SELECT CAST(someNumberColumn AS SIGNED) AS intColumn
I do not think this is a problem with Go or gorm. I am a bit baffled that you save your Unix timestamps in many different formats. By the way, a Unix timestamp is 32-bit, so there is no point in converting (or saving in the first place) any 8-bit ints.
A solution would be to use a unified data type (int64) for all timestamps in your structs. After that, your formatting func can accept int64 instead of interface{}, without the need for any type assertions; a sketch follows below.
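A minimal sketch of that suggestion (the Post struct and CreatedAt field are made-up names for illustration):

type Post struct {
	CreatedAt int64 // Unix timestamp, the same type everywhere
}

f["UNIX2STR"] = func(t int64, f string) string {
	return time.Unix(t, 0).Format(f)
}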

How to insert an array in SQLite?

I have struct like:
type Foo struct {
	bars []string
}
Since SQLite3 doesn't have an array data type, can we store []string as a string and return it as a slice of strings when retrieving? I was trying to implement it like below, but I was getting an error because of a type mismatch. What needs to be done here?
Edit: I have changed the code, and it looks like it's working:
type strArray []string

func (strarr strArray) Value() (driver.Value, error) {
	if strarr != nil {
		resarr := strings.Join(strarr, "")
		return resarr, nil
	}
	return nil, nil
}
Complementary to database/sql/driver.Valuer, you also need to implement database/sql.Scanner for reading your type back from the database.
When you think about how to implement it, it's obvious that in the Valuer you should Join your slice with some delimiter character/string (one not occurring in the data, of course) so you can Split it back when retrieving.
Assuming such a delimiter would be ; (my wild guess), the code for reading would look like:
func (a *strArray) Scan(value interface{}) error {
	if value == nil {
		return nil // the value from the db was NULL
	}
	s, ok := value.(string)
	if !ok {
		return fmt.Errorf("failed to cast value to string: %v", value)
	}
	*a = strings.Split(s, ";")
	return nil
}
For writing, you'd need to use strings.Join(strarr, ";") in the Valuer implementation.
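Concretely, a sketch of that corrected Valuer, using the assumed ; delimiter:

func (strarr strArray) Value() (driver.Value, error) {
	if strarr == nil {
		return nil, nil // store NULL for a nil slice
	}
	return strings.Join(strarr, ";"), nil
}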
A less trivial implementation would marshal your slice and encode the resulting bytes as a string somehow (base32/64? JSON?). In any case, you must not lose track of where the distinct slice elements begin and end when saving them to the database.
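For instance, a JSON-based sketch: no delimiter is needed, because the encoding preserves the element boundaries (imports encoding/json and fmt assumed):

func (strarr strArray) Value() (driver.Value, error) {
	if strarr == nil {
		return nil, nil
	}
	b, err := json.Marshal(strarr)
	return string(b), err
}

func (a *strArray) Scan(value interface{}) error {
	if value == nil {
		return nil
	}
	s, ok := value.(string)
	if !ok {
		return fmt.Errorf("failed to cast value to string: %v", value)
	}
	return json.Unmarshal([]byte(s), a)
}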
