How to insert an array in SQLite? - Go

I have a struct like:
type Foo struct {
    bars []string
}
Since sqlite3 doesn't support an array data type, can we store a []string as a string and turn it back into a slice of strings when retrieving? I was trying to implement it like below, but I'm getting an error because of a type mismatch. What needs to be done here?
Edit: I have changed the code and it looks like it's working:
type strArray []string

func (strarr strArray) Value() (driver.Value, error) {
    if strarr != nil {
        resarr := strings.Join(strarr, "")
        return resarr, nil
    }
    return nil, nil
}

Complementary to database/sql/driver.Valuer, you also need to implement database/sql.Scanner for reading your type back from the database.
When you think about how to implement it, it's obvious that in the Valuer you should Join your slice with some delimiter character/string (one that doesn't occur in the data, of course) so that you can Split it back apart when retrieving.
Assuming that delimiter is ; (my wild guess), the code for reading would look like:
func (a *strArray) Scan(value interface{}) error {
    if value == nil {
        return nil // case when value from the db was NULL
    }
    s, ok := value.(string)
    if !ok {
        return fmt.Errorf("failed to cast value to string: %v", value)
    }
    *a = strings.Split(s, ";")
    return nil
}
For writing, you'd need to use strings.Join(strarr, ";") in the Valuer implementation.
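For completeness, a minimal sketch of that Valuer on the strArray type from the question, assuming the ; delimiter guessed above and that no element contains it:
import (
    "database/sql/driver"
    "strings"
)

// Value joins the elements with ";" so the Scan method shown above can split them back.
func (strarr strArray) Value() (driver.Value, error) {
    if strarr == nil {
        return nil, nil // store NULL for a nil slice
    }
    return strings.Join(strarr, ";"), nil
}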
A less trivial implementation would be to marshal your slice and encode the resulting bytes as a string somehow (base32/64? JSON?). In any case, you must not lose the information about where the distinct slice elements begin and end when saving them to the database.
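A rough sketch of that JSON-based alternative (it would replace the delimiter-based pair above and, like the answer's Scan, assumes the driver hands back a string):
import (
    "database/sql/driver"
    "encoding/json"
    "fmt"
)

// Value stores the slice as a JSON array string, so element boundaries
// survive regardless of what characters the elements contain.
func (strarr strArray) Value() (driver.Value, error) {
    if strarr == nil {
        return nil, nil
    }
    b, err := json.Marshal([]string(strarr))
    if err != nil {
        return nil, err
    }
    return string(b), nil
}

// Scan decodes the JSON array back into the slice.
func (strarr *strArray) Scan(value interface{}) error {
    if value == nil {
        return nil
    }
    s, ok := value.(string)
    if !ok {
        return fmt.Errorf("failed to cast value to string: %v", value)
    }
    return json.Unmarshal([]byte(s), (*[]string)(strarr))
}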

Related

Generic Go code to retrieve multiple rows from BigQuery

I am writing some utils to retrieve multiple rows from BigQuery in a generic way using Go.
e.g.
type User struct{ name, surname string }
type Car struct{ model, platenumber string }
query1:="SELECT name, surname FROM UserTable"
query2:="SELECT model, platenumber FROM CarTable"
cars, _ := query2.GetResults()
users, _ := query1.GetResults()
OR
cars := []Car{}
query2.GetResults(cars) // and it would append to the slice
I am unsure about the signature of GetResults. I somehow need to pass the type to the BigQuery library so it can retrieve the data and map it onto the struct correctly, but at the same time I need to keep it generic so it can be used for different types.
At the moment my GetResults looks like this. It doesn't work; the error is:
bigquery: cannot convert *interface {} to ValueLoader (need pointer to []Value, map[string]Value, or struct)[]
But I cannot pass the struct directly, as I want to keep it generic.
func (s *Query) GetResults() ([]interface{}, error) {
    var result []interface{}
    job, err := s.Run()
    if err != nil {
        s.log.Error(err, "error in running the query")
        return nil, err
    }
    it, err := job.ReadData()
    if err != nil {
        s.log.Error(err, "error in reading the data")
        return nil, err
    }
    var row interface{}
    for {
        err := it.Next(&row)
        if err != nil {
            fmt.Print(err)
            break
        }
        result = append(result, row)
    }
    return result, nil
}
Is there another way to achieve this? Or is the right approach simply not to create a method like that?
I've tried quite a lot of different things: with and without pointers, with and without slices, modifying the args, returning a new list. Nothing seems to work, and all of that feels a bit wrong given how "easy" what I am trying to achieve should be.
I've also looked into doing the following
GetResults[T any]() ([]T, error)
But that's excluded, as GetResults is part of an interface (and we can't define type parameters on a method of an interface). And I can't/don't want to make the whole interface generic, as that impacts other interfaces.
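One way to sidestep that restriction is to move the type parameter onto a package-level function instead of a method. A minimal sketch, assuming the standard cloud.google.com/go/bigquery RowIterator and that T is a struct the package can map rows onto:
import (
    "cloud.google.com/go/bigquery"
    "google.golang.org/api/iterator"
)

// GetResults drains a RowIterator into a slice of T; the bigquery package
// maps each row onto T's exported fields (or `bigquery:"..."` tags).
func GetResults[T any](it *bigquery.RowIterator) ([]T, error) {
    var result []T
    for {
        var row T
        err := it.Next(&row)
        if err == iterator.Done {
            break
        }
        if err != nil {
            return nil, err
        }
        result = append(result, row)
    }
    return result, nil
}
Callers would then write users, err := GetResults[User](it) and cars, err := GetResults[Car](it), keeping the interface itself free of type parameters.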

How do you pass a slice of *interface{} as arguments?

I want to use Scan() in package sql, but the number of columns, and hence the number of arguments, will change at runtime. This is the signature of Scan():
func (rs *Rows) Scan(dest ...interface{}) error
According to the documentation, *interface{} is one of the types accepted by Scan(). So I want to create a []*interface{} slice and expand it as the arguments.
This is what I thought would work:
func query(database *sql.DB) {
    rows, _ := database.Query("select * from testTable")
    for rows.Next() {
        data := make([]*interface{}, 2)
        err := rows.Scan(data...) // Compilation error
        fmt.Printf("%v%v\n", *data[0], *data[1])
        if err != nil {
            fmt.Println(err.Error())
        }
    }
}
Compilation fails with cannot use data (type []*interface {}) as type []interface {} in argument to rows.Scan. I thought that data... would expand to &data[0], &data[1], but apparently not. I don't understand the error message. *interface{} is compatible with interface{}, so why can't I expand the slice of pointers to interface types?
This works:
func query(database *sql.DB) {
    rows, _ := database.Query("select * from testTable")
    for rows.Next() {
        data := make([]*interface{}, 2)
        err := rows.Scan(&data[0], &data[1]) // Only changed this line
        fmt.Printf("%v%v\n", *data[0], *data[1]) // Outputs "[48][116 101 120 116]"
        if err != nil {
            fmt.Println(err.Error())
        }
    }
}
I can't use this however, because the number of columns is unknown at compile time. How can I write this code so that I can pass a variable number of *interface{} to rows.Scan()?
First, you should not use a []*interface{} (a slice of pointers to interface); use a []interface{} whose elements are pointers. []*interface{} is a different type from []interface{}. Just create a slice of interfaces where each element is a pointer to a concrete type.
Here is a snippet showing how to do this:
var x int
var s string
data := []interface{}{&x, &s}
rows.Scan(data...)
Note the use of the ... operator to expand the slice into variadic arguments.
Here are some related questions that will explain a bit more:
golang: slice of struct != slice of interface it implements?
Cannot convert []string to []interface {}
If you really want to pass a []*interface{} (perhaps you don't know the concrete types of the output), you must first wrap each *interface{} in an interface{}:
values := make([]interface{}, columnsCount)
for i := range values {
    values[i] = new(interface{})
}
Individual values passed to a ...interface{} parameter are automatically wrapped in an interface{}, but just as []int... won't satisfy ...interface{}, neither will []*interface{}....
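Putting the two pieces together for a row whose column count is only known at runtime (a minimal sketch; printRow is a made-up helper name):
import (
    "database/sql"
    "fmt"
)

func printRow(rows *sql.Rows) error {
    cols, err := rows.Columns()
    if err != nil {
        return err
    }
    values := make([]interface{}, len(cols))
    for i := range values {
        values[i] = new(interface{}) // each element is a *interface{} wrapped in an interface{}
    }
    if err := rows.Scan(values...); err != nil {
        return err
    }
    for i, v := range values {
        fmt.Printf("%s = %v\n", cols[i], *(v.(*interface{})))
    }
    return nil
}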

How can I convert a JSON string to a byte array?

I need some help with unmarshaling. I have this example code:
package main

import (
    "encoding/json"
    "fmt"
)

type Obj struct {
    Id   string `json:"id"`
    Data []byte `json:"data"`
}

func main() {
    byt := []byte(`{"id":"someID","data":["str1","str2"]}`)
    var obj Obj
    if err := json.Unmarshal(byt, &obj); err != nil {
        panic(err)
    }
    fmt.Println(obj)
}
What I'm trying to do here is convert the bytes into the struct, where one field's type is []byte. The error I get:
panic: json: cannot unmarshal string into Go struct field Obj.data of type uint8
That's probably because the parser already sees that the "data" field is a slice and tries to represent "str1" as some character bytecode (type uint8?).
How do I store the whole data value as one byte array? I want to unmarshal the value into a slice of strings later. I don't put a slice of strings into the struct because the type can change (array of strings, int, string, etc.); I want this to be universal.
My first recommendation would be for you to just use []string instead of []byte if you know the input type is going to be an array of strings.
If data is going to be a JSON array with various types, then your best option is to use []interface{} instead - Go will happily unmarshal the JSON for you and you can perform checks at runtime to cast those into more specific typed variables on an as-needed basis.
If []byte really is what you want, use json.RawMessage, whose underlying type is []byte but which also implements the methods for JSON parsing. I believe this may be what you want, as it will accept whatever ends up in data. Of course, you then have to manually parse Data to figure out just what actually IS in there.
One possible bonus is that this skips any heavy parsing because it just copies the bytes over. When you want to use this data for something, you use a []interface{}, then use a type switch to use individual values.
https://play.golang.org/p/og88qb_qtpSGJ
package main

import (
    "encoding/json"
    "fmt"
)

type Obj struct {
    Id   string          `json:"id"`
    Data json.RawMessage `json:"data"`
}

func main() {
    byt := []byte(`{"id":"someID","data":["str1","str2", 1337, {"my": "obj", "id": 42}]}`)
    var obj Obj
    if err := json.Unmarshal(byt, &obj); err != nil {
        panic(err)
    }
    fmt.Printf("%+v\n", obj)
    fmt.Printf("Data: %s\n", obj.Data)
    // use it
    var d []interface{}
    if err := json.Unmarshal(obj.Data, &d); err != nil {
        panic(err)
    }
    fmt.Printf("%+v\n", d)
    for _, v := range d {
        // you need a type switch to determine the type and be able to use most of these
        switch real := v.(type) {
        case string:
            fmt.Println("I'm a string!", real)
        case float64:
            fmt.Println("I'm a number!", real)
        default:
            fmt.Printf("Unaccounted for: %+v\n", v)
        }
    }
}
Your question is:
convert bytes array to struct with a field of type []byte
But you do not have a byte array, you have a string array, so your question is not the same as your example. Let me answer the question as asked; there are several possible solutions, depending on how far you want to diverge from your original requirements.
One string can be converted to one byte slice, but two strings first need to be joined into one string. That is problem one. The second problem is the square brackets in your JSON string.
This unmarshals without an error; it implicitly converts the string in the JSON to a byte slice (note that encoding/json treats a []byte field as base64, so "str1str2" is decoded as base64 rather than copied character by character):
byt := []byte(`{"id":"someID","data":"str1str2"}`)
var obj Obj
if err := json.Unmarshal(byt, &obj); err != nil {
    panic(err)
}
fmt.Println(obj)

How to access nested JSON key values in Golang

Team,
I'm new to programming.
I have data available after unmarshaling the JSON as shown below, which has nested key values. Flat key values I am able to access; how do I access the nested key values?
Here is the data after unmarshaling the byte slice:
tables:[map[name:basic__snatpool_members] map[name:net__snatpool_members] map[name:optimizations__hosts] map[columnNames:[name] name:pool__hosts rows:[map[row:[ry.hj.com]]]] traffic_group:/Common/traffic-group-1
Flat key values I am able to access using the following code:
p.TrafficGroup = m["traffic_group"].(string)
Here is the complete function:
func dataToIapp(name string, d *schema.ResourceData) bigip.Iapp {
    var p bigip.Iapp
    var obj interface{}
    jsonblob := []byte(d.Get("jsonfile").(string))
    err := json.Unmarshal(jsonblob, &obj)
    if err != nil {
        fmt.Println("error", err)
    }
    m := obj.(map[string]interface{}) // Important: to access properties
    p.Name = m["name"].(string)
    p.Partition = m["partition"].(string)
    p.InheritedDevicegroup = m["inherited_devicegroup"].(string)
    return p
}
Note: This may not work with your JSON structure. I inferred what it would be based on your question but without the actual structure, I cannot guarantee this to work without modification.
If you want to access them in a map, you need to assert that the interface pulled from the first map is actually a map. So you would need to do this:
tmp := m["tables"]
tables, ok := tmp.(map[string]string)
if !ok {
    // error handling here
}
r.Name = tables["name"]
But instead of accessing the unmarshaled JSON as a map[string]interface{}, why don't you create structs that match your JSON output?
type JSONRoot struct {
    Name                 string            `json:"name"`
    Partition            string            `json:"partition"`
    InheritedDevicegroup string            `json:"inherited_devicegroup"`
    Tables               map[string]string `json:"tables"` // Ideally, this would be a map of structs
}
Then in your code:
func dataToIapp(name string, d *schema.ResourceData) bigip.Iapp {
    var p bigip.Iapp
    obj := &JSONRoot{}
    jsonblob := []byte(d.Get("jsonfile").(string))
    err := json.Unmarshal(jsonblob, obj)
    if err != nil {
        fmt.Println("error", err)
    }
    p.Name = obj.Name
    p.Partition = obj.Partition
    p.InheritedDevicegroup = obj.InheritedDevicegroup
    p.Name = obj.Tables["name"]
    return p
}
JSON objects are unmarshaled into map[string]interface{} and JSON arrays into []interface{}; the same applies to nested objects/arrays.
So, for example, if a key/index maps to a nested object, you need to type-assert the value to map[string]interface{}, and if the key/index maps to an array of objects, you first need to assert the value to []interface{} and then each element to map[string]interface{}.
e.g. (for brevity this code is not guarding against panic)
tables := obj.(map[string]interface{})["tables"]
table1 := tables.([]interface{})[0]
name := table1.(map[string]interface{})["name"]
namestr := name.(string)
However, if the JSON you are parsing is not dynamic but instead has a specific structure, you should define a struct type that mirrors that structure and unmarshal the JSON into that, as sketched below.
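For example, structs mirroring the dump in the question could look roughly like this (field types are guessed from that dump, so adjust them to the real JSON):
type TableRow struct {
    Row []string `json:"row"`
}

type Table struct {
    Name        string     `json:"name"`
    ColumnNames []string   `json:"columnNames"`
    Rows        []TableRow `json:"rows"`
}

type Payload struct {
    Tables       []Table `json:"tables"`
    TrafficGroup string  `json:"traffic_group"`
}

// Inside dataToIapp, instead of the map assertions:
var p Payload
if err := json.Unmarshal(jsonblob, &p); err != nil {
    fmt.Println("error", err)
}
fmt.Println(p.Tables[0].Name, p.TrafficGroup)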
All you have to do is repeatedly access the map via type switching or assertion:
switch val := m["tables"].(type) {
case string:
    fmt.Println("tables is a string")
case int:
    fmt.Println("tables is an integer")
// This is your case, since JSON is unmarshaled into []interface{} and map[string]interface{}
case []interface{}:
    fmt.Println("tables is a slice of interface{}")
    for _, tb := range val {
        if m, ok := tb.(map[string]interface{}); ok {
            // Now it's accessible
            fmt.Println(m["name"])
        }
    }
default:
    fmt.Println("unknown type")
}
You might want to handle errors better than this.
To read more, check out my writing from a while ago https://medium.com/code-zen/dynamically-creating-instances-from-key-value-pair-map-and-json-in-go-feef83ab9db2.

Couldn't convert <nil> into type ...?

I tried using database/sql to query a database row into a Go type; my code snippet follows:
type User struct {
    user_id     int64
    user_name   string
    user_mobile string
    password    string
    email       interface{}
    nickname    string
    level       byte
    locked      bool
    create_time string
    comment     string // convert <nil> to *string error
}
func TestQueryUser(t *testing.T) {
    db := QueryUser(driverName, dataSourceName)
    stmtResults, err := db.Prepare(queryAll)
    defer stmtResults.Close()
    var r *User = new(User)
    arr := []interface{}{
        &r.user_id, &r.user_name, &r.user_mobile, &r.password, &r.email,
        &r.nickname, &r.level, &r.locked, &r.create_time, &r.comment,
    }
    err = stmtResults.QueryRow(username).Scan(arr...)
    if err != nil {
        t.Error(err.Error())
    }
    fmt.Println(r.email)
}
MySQL:
As you can see, some fields have NULL values, so I have to use the interface{} type in the Go User struct, which converts NULL to nil.
--- FAIL: TestQueryUser (0.00s)
user_test.go:48: sql: Scan error on column index 9: unsupported Scan, storing driver.Value type <nil> into type *string
Does somebody have a better way? Or must I change the MySQL field and set its DEFAULT to ' '?
First, the short answer: there are types in the sql package for this, for example sql.NullString (for a nullable string column; likewise NullInt64, NullBool, and so on), and you should use them in your struct.
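A minimal sketch of that, reusing the field and variable names from the question (only the nullable columns are shown):
type User struct {
    user_id int64
    email   sql.NullString // nullable column, was interface{}
    comment sql.NullString // nullable column, was string
    // ... remaining fields as before
}

// After Scan has filled r, check Valid before using the value:
if r.comment.Valid {
    fmt.Println(r.comment.String)
} else {
    fmt.Println("comment is NULL")
}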
The long one: there are two interfaces available for this in Go. The first is Scanner and the other is Valuer; for any special type in the database (for example, I use this mostly with JSONB in Postgres), you create a type and implement these two interfaces (or one of them) on it.
The Scanner is used when you call the Scan function: the data from the database driver, normally a []byte, is the input, and you are responsible for handling it. The Valuer is used when the value is an input in a query; the result is normally a slice of bytes (and an error). If you only need to read data, Scanner is enough, and vice versa: if you only need to write the parameter in a query, Valuer is enough.
For an example implementation, I recommend looking at the types in the sql package.
Also, here is an example of a type to use with the JSONB/JSON type in PostgreSQL:
// GenericJSONField is used to handle generic JSON data in Postgres.
type GenericJSONField map[string]interface{}

// Scan converts the JSON column value into our type.
func (v *GenericJSONField) Scan(src interface{}) error {
    var b []byte
    switch src := src.(type) {
    case []byte:
        b = src
    case string:
        b = []byte(src)
    case nil:
        return nil // the column was NULL; leave the map nil
    default:
        return errors.New("unsupported type")
    }
    return json.Unmarshal(b, v)
}

// Value returns the JSON representation to store in the database.
func (v GenericJSONField) Value() (driver.Value, error) {
    return json.Marshal(v)
}
The driver value is often []byte, but string and nil are also acceptable, so this can handle nullable fields too.
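A hypothetical usage sketch (the items table, meta column, and helper names are made up for illustration; the $1 placeholders assume a Postgres driver):
type Item struct {
    ID   int64
    Meta GenericJSONField
}

// QueryRow/Scan calls GenericJSONField.Scan, unmarshaling the JSONB column into the map.
func loadItem(db *sql.DB, id int64) (Item, error) {
    var it Item
    err := db.QueryRow(`SELECT id, meta FROM items WHERE id = $1`, id).Scan(&it.ID, &it.Meta)
    return it, err
}

// Exec calls GenericJSONField.Value, which marshals the map back to JSON for storage.
func saveMeta(db *sql.DB, id int64, meta GenericJSONField) error {
    _, err := db.Exec(`UPDATE items SET meta = $1 WHERE id = $2`, meta, id)
    return err
}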
