I'm using the standard Go database/sql package to interface with AWS Athena.
My query returns, for each record, a UUID (string) and an array of emails.
Here is the code:
package main

import (
	"database/sql"
	_ "encoding/json"
	"fmt"

	_ "github.com/segmentio/go-athena"
)

type Contact struct {
	userid string
	emails []string
}

func main() {
	fmt.Println("hello")
	db, err := sql.Open("athena", "db=example")
	if err != nil {
		panic(err)
	}

	rows, err := db.Query("SELECT userid, transform(value.emails, x -> x.value) FROM database LIMIT 10")
	// Returns
	// Row 1: "abc-123", ["email1@gmail.com", "email2@gmail.com"]
	// Row 2: "def-456", ["email3@gmail.com"]
	if err != nil {
		panic(err)
	}

	for rows.Next() {
		var contact Contact
		rows.Scan(&contact.userid, &contact.emails)
		fmt.Println(contact)
	}
}
However, I get this error in the for loop:
panic: unknown type `array` with value [email1@gmail.com]
I'm confused by the array type mentioned and can't make sense of the error. How can I map the returned list of emails to a slice of strings in the Contact struct?
Athena supports structural data types, for example the structural data type array:
Structural types
ARRAY < data_type >
From the message you get, I assume the emails column is of type ARRAY<VARCHAR>. In addition, segmentio/go-athena panics on unsupported operations, such as Begin for a transaction (transactions are not supported in Athena).
To read the data into a Go slice you have to add some logic yourself. See read "SELECT *" columns into []string in go or Read a Postgresql array directly into a Golang Slice for a starter. As you can see with the pq driver, reading an array may be implemented quite differently from just scanning a row.
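One possible workaround, sketched below, is to sidestep the driver's array handling entirely: have Athena serialize the array to a JSON string inside the query (Presto's json_format(CAST(... AS JSON))), scan it as a plain string, and decode it in Go. This is a minimal sketch based on the question's query and Contact type, not tested against go-athena; for it to compile, encoding/json must be imported normally rather than with a blank identifier.
// Ask Athena to return the array as a JSON string so the driver
// only has to scan an ordinary string column.
rows, err := db.Query(`SELECT userid, json_format(CAST(transform(value.emails, x -> x.value) AS JSON)) FROM database LIMIT 10`)
if err != nil {
	panic(err)
}
defer rows.Close()

for rows.Next() {
	var contact Contact
	var rawEmails string
	if err := rows.Scan(&contact.userid, &rawEmails); err != nil {
		panic(err)
	}
	// Decode the JSON array into the []string field.
	if err := json.Unmarshal([]byte(rawEmails), &contact.emails); err != nil {
		panic(err)
	}
	fmt.Println(contact)
}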
Related
I need to store very large, high-precision numbers with GORM, and using a pgtype.Numeric seems like the best bet. However, I cannot, because I get an error: sql: Scan error on column index 4, name "foo": cannot scan int64
My model looks something like this:
type Model struct {
	gorm.Model
	Foo *pgtype.Numeric `gorm:"not null"`
}
I'm not sure if pgtype.Numeric is the best choice (it's what I've seen everyone else use) or if I'm doing something wrong. Thanks!
The code that caused the error:
package main

import (
	"math/big"

	"github.com/jackc/pgtype"
	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

type Model struct {
	gorm.Model
	Foo *pgtype.Numeric `gorm:"not null"`
}

func main() {
	db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
	if err != nil {
		panic("failed to connect database")
	}

	// Migrate the schema
	db.AutoMigrate(&Model{})

	// Create
	db.Create(&Model{Foo: &pgtype.Numeric{Int: big.NewInt(10000000), Status: pgtype.Present}})

	var m Model
	db.First(&m) // this line causes the error
}
SQLite does not support big integers, so there is no way you can accomplish that directly. I ran the code and the foo column is created as:
`foo` numeric NOT NULL
which in SQLite (https://www.sqlite.org/datatype3.html) means:
A column with NUMERIC affinity may contain values using all five storage classes... If the TEXT value is a well-formed integer literal that is too large to fit in a 64-bit signed integer, it is converted to REAL.
So your big int will be turned into a float64. It's a good thing it panicked instead of silently losing accuracy.
What you can do is convert the big int to a string or to bytes first and store that.
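For example, a minimal sketch of that idea (an alternative, hypothetical model; untested) that stores the big.Int's decimal string and converts explicitly on both sides:
type Model struct {
	gorm.Model
	Foo string `gorm:"not null"` // holds the big.Int's decimal representation as text
}

// Write: convert the big int to a string first.
n := big.NewInt(10000000)
db.Create(&Model{Foo: n.String()})

// Read: parse the string back into a big.Int.
var m Model
db.First(&m)
parsed, ok := new(big.Int).SetString(m.Foo, 10)
if !ok {
	panic("stored value is not a valid integer")
}
fmt.Println(parsed)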
When debugging the sql.Scanner interface used for deserialization from the database, you can see that the value arrives from the database as either an int64 or a float64, which then leads to the corresponding error message.
A possible solution is to use a text data type in the database, by adding type:text to the field tag (note there must be no space after gorm:, or the tag is ignored):
`gorm:"type:text;"`
Using the github.com/shopspring/decimal package, you can conveniently create a decimal number using the NewFromString function.
The adapted code to insert the data:
num, err := decimal.NewFromString("123456789012345.12345678901")
if err != nil {
	panic(err)
}
db.Create(&Model{Foo: &num})
The model structure might then look something like this:
type Model struct {
	gorm.Model
	Foo *decimal.Decimal `gorm:"not null;type:text;"`
}
This would result in the following schema (screenshot omitted; foo becomes a text column).
If one sets a breakpoint in decimal.Scan, one can see that the value arrives from the database as a string, as expected, resulting in the creation of the decimal with NewFromString (see Decimal's Scan method).
If you add this line of code to the end of the main function
fmt.Println(m.Foo)
it would result in the following output in the debug console:
123456789012345.12345678901
Complete Program
Your complete program, slightly adapted to the above points, would then look something like this:
package main

import (
	"fmt"

	"github.com/shopspring/decimal"
	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

type Model struct {
	gorm.Model
	Foo *decimal.Decimal `gorm:"not null;type:text;"`
}

func main() {
	db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
	if err != nil {
		panic("failed to connect database")
	}

	// Migrate the schema
	db.AutoMigrate(&Model{})

	// Create
	num, err := decimal.NewFromString("123456789012345.12345678901")
	if err != nil {
		panic(err)
	}
	db.Create(&Model{Foo: &num})

	var m Model
	db.First(&m)
	fmt.Println(m.Foo)
}
pgtype.Numeric and SQLite
If a PostgreSQL database is used, gorm can be used together with pgtype.Numeric to handle decimal numbers like 123456789012345.12345678901. You just need to use the numeric data type on the Postgres side with the appropriate desired precision (e.g. numeric(50,15)).
After all, this is exactly what pgtype is for, see the pgtype readme where it says:
pgtype is the type system underlying the https://github.com/jackc/pgx PostgreSQL driver.
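For illustration, a hypothetical Postgres-side model along those lines (untested sketch; the precision/scale is just an example):
type Model struct {
	gorm.Model
	// numeric(50,15) is an example; adjust precision and scale as needed
	Foo *pgtype.Numeric `gorm:"not null;type:numeric(50,15)"`
}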
However, if you use a text data type in SQLite for the reasons mentioned above, pgtype.Numeric will not work with SQLite. An attempt with the above number writes 12345678901234512345678901e-11 to the DB and when reading it out the following error occurs:
sql: Scan error on column index 4, name "foo": 12345678901234512345678901e-11 is not a number
I am developing an API using Go which connects to a MySQL database for query execution. I am using GORM for database operations, but I am stuck on printing the SELECT query output for tables whose column names I don't know.
My use case is that I need to run the query on multiple tables where I have no idea what their column names and types are, so I cannot pre-define a struct for all the current and future tables which might get added.
Is there a way to print/save the SELECT query output without a pre-defined struct?
I tried using an empty struct, but it didn't help me.
P.S.: I am a beginner in Go.
type Testing struct{}

var test Testing

dsn := fmt.Sprintf("%v:%v@tcp(%v:%v)/%v", myds.DBuser, myds.DBpassword, myds.DBhost, myds.DBport, myds.DBname)
db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{})
if err != nil {
	fmt.Println(err)
}

tx := db.Raw(query).Scan(&test)
if tx.Error != nil {
	fmt.Println(tx.Error)
}
fmt.Println(test)
You can use an anonymous struct
Let's say you have a struct:
type User struct {
	FirstName string
	LastName  string
}
Query:
SELECT CONCAT(first_name,last_name) AS full_name from users;
Notice the new column full_name
you can simply do
var fullName = struct{ FullName string }{}
Notice the Pascal case: FullName has to be the field name. A capital letter in the middle represents an underscore, and the field must be exported (capitalized) so it can be accessed outside the package: full_name (query) maps to FullName (field).
Pass this fullName object as a bucket to your Scan and it should work:
db.Raw(query).Scan(&fullName)
EDIT:
Your query will have some result, right? Let me assume that you have the columns
column_one, column_two, ..., column_n
Now, to get the data from all the columns (or only selected ones, if you want), you simply have to define fields in the anonymous struct with the matching names. In our case:
struct{ ColumnOne, ColumnTwo, ..., ColumnN interface{} }{}
P.S.: I have used interface{}; you can use concrete types depending on the data your columns return.
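For instance, a rough sketch of that with two hypothetical columns (column and table names invented):
// Hypothetical column names; adjust to your query.
row := struct {
	ColumnOne interface{}
	ColumnTwo interface{}
}{}
db.Raw(`SELECT column_one, column_two FROM some_table LIMIT 1`).Scan(&row)
fmt.Println(row.ColumnOne, row.ColumnTwo)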
It worked for me using a map type with interface{} values. This let me save the SELECT query results without a pre-defined struct or the column names.
dsn := fmt.Sprintf("%v:%v@tcp(%v:%v)/%v", myds.DBuser, myds.DBpassword, myds.DBhost, myds.DBport, myds.DBname)
db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{})
if err != nil {
	fmt.Println(err)
}

var result []map[string]interface{}
tx := db.Raw(query).Scan(&result)
if tx.Error != nil {
	fmt.Println(tx.Error)
	return
}

bytes, _ := json.Marshal(result)
fmt.Println(string(bytes))
I am using Go to open a file with multiple JSON entries, parse the file into a slice with a custom type, then insert the slice data into an Oracle database. According to the godror documentation at https://godror.github.io/godror/doc/tuning.html, I should be able to feed a slice into the insert command and have the database/sql Exec method iterate through the struct for me. I am at a loss for how to do this; I am sure there is a simple solution.
To slightly complicate things, I have a database column for the host name of the computer the app is running on, which is not in the struct. This column should be filled in for every row the app inserts. Is there a more elegant way to do this than to add a hostname field to my struct and set the running system's host name on it, over and over again?
What follows is a simplified version of my code.
package main

import (
	"database/sql"
	"log"
	"os"

	_ "github.com/godror/godror"
)

type MyType struct {
	// fields must be exported for encoding/json to fill them
	Var1 string `json:"var1"`
	Var2 string `json:"var2"`
}

func main() {
	hostname, err := os.Hostname()
	if err != nil {
		//log.Println("Error when getting host name")
		log.Fatal(err)
	}

	mySlice := parseFile("/path/to/file", false)

	db, err := sql.Open("godror", "user/pass@oraHost/oraDb")

	query := `INSERT INTO mytable (var1, var2, host) VALUES (:1, :2, :3)`

	// this is the line where everything breaks down, and I am not sure
	// what should go here.
	_, err = db.Exec(query, mySlice[var1], mySlice[var2], hostname)
}

func parseFile(filePath string, deleteFile bool) []MyType {
	// a few lines of code that opens a text file, parses it into a slice
	// of type MyType, and returns it
}
Not sure if you already went through it, but does the test case TestExecuteMany in https://github.com/godror/godror/blob/master/z_test.go help? It has example usage for array insert:
res, err := tx.ExecContext(ctx,
	`INSERT INTO `+tbl+ //nolint:gas
		` (f_id, f_int, f_num, f_num_6, F_num_5_2, F_vc, F_dt)
	VALUES
		(:1, :2, :3, :4, :5, :6, :7)`,
	ids, ints, nums, int32s, floats, strs, dates)
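Applied to the question's code, the idea is to pass one slice per column. A rough sketch (untested, assuming godror's array binding and the exported Var1/Var2 fields from above); repeating the hostname once per row also answers the second part of the question without adding a field to the struct:
// Build one slice per column from the slice of structs.
var1s := make([]string, len(mySlice))
var2s := make([]string, len(mySlice))
hosts := make([]string, len(mySlice))
for i, r := range mySlice {
	var1s[i] = r.Var1
	var2s[i] = r.Var2
	hosts[i] = hostname // same host name for every row
}

// godror binds each slice as a column, executing the statement once per row.
_, err = db.Exec(
	`INSERT INTO mytable (var1, var2, host) VALUES (:1, :2, :3)`,
	var1s, var2s, hosts,
)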
And for batch insert of structs, see https://github.com/jmoiron/sqlx.
Team,
I'm new to programming. I have data available after unmarshaling the JSON, shown below, which has nested key values. I am able to access the flat key values; how do I access the nested ones?
Here is the unmarshaled data, dumped below:
tables:[map[name:basic__snatpool_members] map[name:net__snatpool_members] map[name:optimizations__hosts] map[columnNames:[name] name:pool__hosts rows:[map[row:[ry.hj.com]]]] traffic_group:/Common/traffic-group-1
I am able to access the flat key values with the following code:
p.TrafficGroup = m["traffic_group"].(string)
Here is the complete function:
func dataToIapp(name string, d *schema.ResourceData) bigip.Iapp {
	var p bigip.Iapp
	var obj interface{}

	jsonblob := []byte(d.Get("jsonfile").(string))
	err := json.Unmarshal(jsonblob, &obj)
	if err != nil {
		fmt.Println("error", err)
	}

	m := obj.(map[string]interface{}) // Important: to access properties
	p.Name = m["name"].(string)
	p.Partition = m["partition"].(string)
	p.InheritedDevicegroup = m["inherited_devicegroup"].(string)

	return p
}
Note: This may not work with your JSON structure. I inferred what it would be based on your question but without the actual structure, I cannot guarantee this to work without modification.
If you want to access them in a map, you need to assert that the interface pulled from the first map is actually a map. So you would need to do this:
tmp := m["tables"]
tables, ok := tmp.(map[string]string)
if !ok {
	// error handling here
}
r.Name = tables["name"] // a map[string]string value needs no further assertion
But instead of accessing the unmarshaled JSON as a map[string]interface{}, why don't you create structs that match your JSON output?
type JSONRoot struct {
	Name                 string            `json:"name"`
	Partition            string            `json:"partition"`
	InheritedDevicegroup string            `json:"inherited_devicegroup"`
	Tables               map[string]string `json:"tables"` // Ideally, this would be a map of structs
}
Then in your code:
func dataToIapp(name string, d *schema.ResourceData) bigip.Iapp {
	var p bigip.Iapp
	obj := &JSONRoot{}

	jsonblob := []byte(d.Get("jsonfile").(string))
	err := json.Unmarshal(jsonblob, obj)
	if err != nil {
		fmt.Println("error", err)
	}

	p.Name = obj.Name
	p.Partition = obj.Partition
	p.InheritedDevicegroup = obj.InheritedDevicegroup

	p.Name = obj.Tables["name"]

	return p
}
JSON objects are unmarshaled into map[string]interface{} and JSON arrays into []interface{}; the same applies to nested objects/arrays.
So for example if a key/index maps to a nested object you need to type assert the value to map[string]interface{} and if the key/index maps to an array of objects you first need to assert the value to []interface{} and then each element to map[string]interface{}.
e.g. (for brevity this code is not guarding against panic)
tables := obj.(map[string]interface{})["tables"]
table1 := tables.([]interface{})[0]
name := table1.(map[string]interface{})["name"]
namestr := name.(string)
However, if it's the case that the json you are parsing is not dynamic but instead has a specific structure you should define a struct type that mirrors that structure and unmarshal the json into that.
All you have to do is repeatedly access the map via type switching or assertion:
switch val := m["tables"].(type) {
case string:
	fmt.Println("tables is a string")
case int:
	fmt.Println("tables is an integer")
// This is your case, since JSON arrays are unmarshaled into []interface{}
// and JSON objects into map[string]interface{}
case []interface{}:
	fmt.Println("tables is a slice of interface{}")
	for _, tb := range val {
		if m, ok := tb.(map[string]interface{}); ok {
			// Now it's accessible
			fmt.Println(m["name"])
		}
	}
default:
	fmt.Println("unknown type")
}
You might want to handle errors better than this.
To read more, check out my writing from a while ago https://medium.com/code-zen/dynamically-creating-instances-from-key-value-pair-map-and-json-in-go-feef83ab9db2.
I tried using database/sql to query a database row into a Go type; my code snippet follows:
type User struct {
	user_id     int64
	user_name   string
	user_mobile string
	password    string
	email       interface{}
	nickname    string
	level       byte
	locked      bool
	create_time string
	comment     string // convert <nil> to *string error
}
func TestQueryUser(t *testing.T) {
	db := QueryUser(driverName, dataSourceName)

	stmtResults, err := db.Prepare(queryAll)
	defer stmtResults.Close()

	var r *User = new(User)
	arr := []interface{}{
		&r.user_id, &r.user_name, &r.user_mobile, &r.password, &r.email,
		&r.nickname, &r.level, &r.locked, &r.create_time, &r.comment,
	}
	err = stmtResults.QueryRow(username).Scan(arr...)
	if err != nil {
		t.Error(err.Error())
	}

	fmt.Println(r.email)
}
MySQL: (table screenshot omitted)
As you can see, some fields have NULL values, so I had to use the interface{} type in the Go User struct, which converts NULL to nil.
--- FAIL: TestQueryUser (0.00s)
user_test.go:48: sql: Scan error on column index 9: unsupported Scan, storing driver.Value type <nil> into type *string
Does somebody have a better way? Or must I change the MySQL fields and set their DEFAULT to ''?
First, the short answer: there are types in the sql package for this, for example sql.NullString (for a nullable string column; likewise sql.NullInt64, sql.NullBool, and so on), and you should use them in your struct.
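Applied to the question's struct, a minimal sketch (only the nullable columns changed):
type User struct {
	user_id int64
	// ...
	email   sql.NullString // was interface{}
	comment sql.NullString // was string
}

// After Scan, check Valid before using the value.
if r.comment.Valid {
	fmt.Println(r.comment.String)
} else {
	fmt.Println("comment is NULL")
}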
The long one: there are two interfaces available for this in Go. The first is Scanner and the other is Valuer; for any special type in the database (for example, I use this mostly with JSONB in Postgres) you create a type and implement these two interfaces (or one of them) on it.
The Scanner is used when you call the Scan function: the data from the database driver, normally a []byte, is the input, and you are responsible for handling it. The Valuer is used when the value is an input in a query; the result is "normally" a slice of bytes (and an error). If you only need to read data, Scanner is enough, and vice versa: if you only need to pass the value as a query parameter, Valuer is enough.
For an example implementation, I recommend looking at the types in the sql package.
Also, here is an example of a type to use with the JSONB/JSON type in PostgreSQL:
// GenericJSONField is used to handle generic JSON data in Postgres
type GenericJSONField map[string]interface{}

// Scan converts the JSON field into our type
func (v *GenericJSONField) Scan(src interface{}) error {
	var b []byte
	switch s := src.(type) {
	case []byte:
		b = s
	case string:
		b = []byte(s)
	case nil:
		b = make([]byte, 0)
	default:
		return errors.New("unsupported type")
	}

	return json.Unmarshal(b, v)
}

// Value tries to get the string slice representation in the database
func (v GenericJSONField) Value() (driver.Value, error) {
	return json.Marshal(v)
}
The driver value is often []byte, but string and nil are acceptable too, so this can handle nullable fields as well.
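A hypothetical usage sketch (table and column names invented): once Scan and Value are implemented, the type works directly with database/sql on both the read and write side:
var id int64
var meta GenericJSONField
err := db.QueryRow(`SELECT id, meta FROM items WHERE id = $1`, 1).Scan(&id, &meta)
// meta now holds the decoded JSON object; GenericJSONField also works
// as a query parameter thanks to Value.
_, err = db.Exec(`UPDATE items SET meta = $1 WHERE id = $2`, meta, 1)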