godror SQL driver and a slice of structs - Oracle

I am using Go to open a file with multiple JSON entries, parse the file into a slice of a custom type, then insert the slice data into an Oracle database. According to the godror documentation at https://godror.github.io/godror/doc/tuning.html, I should be able to feed a slice into the insert command and have the database/sql Exec method iterate through the struct for me. I am at a loss for how to do this. I am sure there is a simple solution.
To slightly complicate things, I have a database column that is not in the struct: the host name of the computer the app is running on. This column should be filled in for every row the app inserts. In other words, every row of this table needs a column containing the host name of the machine the app is running on. Is there a more elegant way to do this than adding a 'hostname' field to my struct and setting it to the running system's host name, over and over again?
What follows is a simplified version of my code.
package main

import (
    "database/sql"
    "log"
    "os"

    _ "github.com/godror/godror"
)

type MyType struct {
    Var1 string `json:"var1"`
    Var2 string `json:"var2"`
}

func main() {
    hostname, err := os.Hostname()
    if err != nil {
        log.Fatal(err)
    }
    mySlice := parseFile("/path/to/file", false)
    db, err := sql.Open("godror", "user/pass@oraHost/oraDb")
    if err != nil {
        log.Fatal(err)
    }
    query := `INSERT INTO mytable (var1, var2, host) VALUES (:1, :2, :3)`
    // this is the line where everything breaks down, and I am not sure
    // what should go here.
    _, err = db.Exec(query, mySlice[var1], mySlice[var2], hostname)
}

func parseFile(filePath string, deleteFile bool) []MyType {
    // a few lines of code that opens a text file, parses it into a slice
    // of type MyType, and returns it
}

Not sure if you already went through it, but does the test case TestExecuteMany in https://github.com/godror/godror/blob/master/z_test.go help? It has example usage for array insert.
res, err := tx.ExecContext(ctx,
    `INSERT INTO `+tbl+ //nolint:gas
        ` (f_id, f_int, f_num, f_num_6, F_num_5_2, F_vc, F_dt)
    VALUES
        (:1, :2, :3, :4, :5, :6, :7)`,
    ids, ints, nums, int32s, floats, strs, dates)
For batch insert of structs, see https://github.com/jmoiron/sqlx. A concrete sketch for the question's code follows below.
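To make the array-insert idea concrete for the question's code: godror binds one slice per placeholder, so you build one slice per column, plus a hostname slice of the same length for the extra column. A minimal sketch, assuming the MyType struct, mytable, and connection string from the question (none of which I have verified against a real database):

package main

import (
    "database/sql"
    "log"
    "os"

    _ "github.com/godror/godror"
)

type MyType struct {
    Var1 string `json:"var1"`
    Var2 string `json:"var2"`
}

// insertAll inserts all rows in a single round trip. godror's array
// binding takes one slice per placeholder, so we split the struct
// slice into per-column slices and repeat the hostname once per row.
func insertAll(db *sql.DB, rows []MyType, hostname string) error {
    var1s := make([]string, len(rows))
    var2s := make([]string, len(rows))
    hosts := make([]string, len(rows))
    for i, r := range rows {
        var1s[i] = r.Var1
        var2s[i] = r.Var2
        hosts[i] = hostname
    }
    _, err := db.Exec(
        `INSERT INTO mytable (var1, var2, host) VALUES (:1, :2, :3)`,
        var1s, var2s, hosts,
    )
    return err
}

func main() {
    hostname, err := os.Hostname()
    if err != nil {
        log.Fatal(err)
    }
    db, err := sql.Open("godror", "user/pass@oraHost/oraDb")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    if err := insertAll(db, []MyType{{Var1: "a", Var2: "b"}}, hostname); err != nil {
        log.Fatal(err)
    }
}

Repeating the hostname in a slice keeps the extra column at the insert site, so there is no need to add a hostname field to the struct itself.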

Related

Creating prepared statements with Snowflake driver in Go

I am trying to create prepared statements using the Snowflake driver for Go: https://github.com/snowflakedb/gosnowflake
However, every time I try to prepare a statement that requires strings, it fails to bind them to the overall query.
Here is an example:
import (
    "database/sql"

    _ "github.com/snowflakedb/gosnowflake"
)

func getData(db *sql.DB) error {
    tbl := "DATASET"
    s := "Bob"
    stmt := "INSERT INTO ? SELECT * FROM OTHER_TBL WHERE name = ?"
    _, err := db.Exec(stmt, tbl, s)
    if err != nil {
        return err
    }
    return nil
}
The query always fails with a syntax error stating it didn't recognize the ?. I've also tried the .Prepare() method and I get the same result.
I've also tried single-quoting the ? placeholder, but in my actual query I have several ? and it doesn't bind any of them.
I'm left with only having success if I use fmt.Sprintf(), but I'd like to avoid that method.
In your snippet, you are parameterizing the table name.
The Snowflake driver implements the database/sql interfaces. In my experience with this library, parameterizing table or column names is not allowed: placeholders only bind values.
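A minimal sketch of the usual workaround: bind only the values, and splice the table name into the SQL string after validating it against an allowlist (the allowlist and names here are hypothetical, not part of the original question):

import (
    "database/sql"
    "fmt"

    _ "github.com/snowflakedb/gosnowflake"
)

var allowedTables = map[string]bool{"DATASET": true}

func getData(db *sql.DB, tbl, name string) error {
    // Identifiers cannot be bound as parameters, so validate before
    // interpolating to avoid SQL injection through the table name.
    if !allowedTables[tbl] {
        return fmt.Errorf("table %q not allowed", tbl)
    }
    stmt := "INSERT INTO " + tbl + " SELECT * FROM OTHER_TBL WHERE name = ?"
    _, err := db.Exec(stmt, name) // only the value is a placeholder
    return err
}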

How to use pgtype.Numeric with gorm and sqlite3?

I need to store very large, high-precision numbers with GORM, and using a pgtype.Numeric seems like the best bet. However, I cannot, because I get an error: sql: Scan error on column index 4, name "foo": cannot scan int64
My model looks something like this:
type Model struct {
    gorm.Model
    Foo *pgtype.Numeric `gorm:"not null"`
}
Not sure if using pgtype.Numeric is the best approach (that's what I've seen everyone else use), or if I'm doing something wrong. Thanks!
The code that caused the error:
package main

import (
    "math/big"

    "github.com/jackc/pgtype"
    "gorm.io/driver/sqlite"
    "gorm.io/gorm"
)

type Model struct {
    gorm.Model
    Foo *pgtype.Numeric `gorm:"not null"`
}

func main() {
    db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
    if err != nil {
        panic("failed to connect database")
    }
    // Migrate the schema
    db.AutoMigrate(&Model{})
    // Create
    db.Create(&Model{Foo: &pgtype.Numeric{Int: big.NewInt(10000000), Status: pgtype.Present}})
    var m Model
    db.First(&m) // this line causes the error
}
SQLite3 does not support big integers, so there is no way you can accomplish that directly. I ran the code and the foo column is created as:
`foo` numeric NOT NULL
which in SQLite (https://www.sqlite.org/datatype3.html) means:
A column with NUMERIC affinity may contain values using all five storage classes... If the TEXT value is a well-formed integer literal that is too large to fit in a 64-bit signed integer, it is converted to REAL.
So your big int will turn into a float64. The good thing is that it panicked instead of losing accuracy silently.
What you can do is convert the big int to string or bytes first and store that.
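For illustration, a minimal sketch of that idea for the integer-only case: store the big.Int as its decimal string in a TEXT column and parse it back after scanning. The model and field names are made up for the example, and db is the *gorm.DB from the question's program:

// Store the big.Int's decimal representation in a TEXT column.
type BigModel struct {
    gorm.Model
    Foo string `gorm:"not null;type:text"`
}

n := big.NewInt(10000000)
db.Create(&BigModel{Foo: n.String()})

var m BigModel
db.First(&m)
v, ok := new(big.Int).SetString(m.Foo, 10) // parse the text back
if !ok {
    panic("invalid integer stored in foo")
}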
When debugging the sql.Scanner interface used for database deserialization, it is noticeable that the value from the database arrives as either int64 or float64, which then leads to the corresponding error message.
A possible solution is to use a text data type in the database, by adding type:text to the field's tag:
`gorm:"type:text;"`
Using the github.com/shopspring/decimal package, you can conveniently create a decimal number with the NewFromString function.
The adapted code to insert the data:
num, err := decimal.NewFromString("123456789012345.12345678901")
if err != nil {
    panic(err)
}
db.Create(&Model{Foo: &num})
The model structure might then look something like this:
type Model struct {
    gorm.Model
    Foo *decimal.Decimal `gorm:"not null;type:text;"`
}
This would result in the foo column being created as:
`foo` text NOT NULL
If you insert a breakpoint in decimal.Scan, you can see that the value comes from the database as a string, as expected, resulting in the creation of a decimal with NewFromString (see Decimal's Scan method).
If you add this line of code to the end of the main function
fmt.Println(m.Foo)
it would result in the following output in the debug console:
123456789012345.12345678901
Complete Program
Your complete program, slightly adapted to the above points, would then look something like this:
package main

import (
    "fmt"

    "github.com/shopspring/decimal"
    "gorm.io/driver/sqlite"
    "gorm.io/gorm"
)

type Model struct {
    gorm.Model
    Foo *decimal.Decimal `gorm:"not null;type:text;"`
}

func main() {
    db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
    if err != nil {
        panic("failed to connect database")
    }
    // Migrate the schema
    db.AutoMigrate(&Model{})
    // Create
    num, err := decimal.NewFromString("123456789012345.12345678901")
    if err != nil {
        panic(err)
    }
    db.Create(&Model{Foo: &num})
    var m Model
    db.First(&m)
    fmt.Println(m.Foo)
}
pgtype.Numeric and SQLite
If a PostgreSQL database is used, gorm can be used together with pgtype.Numeric to handle decimal numbers like 123456789012345.12345678901. You just need to use the numeric data type on the Postgres side with the appropriate desired precision (e.g. numeric(50,15)).
After all, this is exactly what pgtype is for, see the pgtype readme where it says:
pgtype is the type system underlying the https://github.com/jackc/pgx PostgreSQL driver.
However, if you use a text data type in SQLite for the reasons mentioned above, pgtype.Numeric will not work with SQLite. An attempt with the above number writes 12345678901234512345678901e-11 to the DB, and when reading it back the following error occurs:
sql: Scan error on column index 4, name "foo": 12345678901234512345678901e-11 is not a number
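For the PostgreSQL case, the model might look something like the sketch below; the column type and precision are illustrative choices, not taken from the question:

// Assumes a PostgreSQL database; numeric(50,15) is an example precision.
type PgModel struct {
    gorm.Model
    Foo *pgtype.Numeric `gorm:"not null;type:numeric(50,15)"`
}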

How to use Go / GORM to print SELECT query output without pre-defined struct

I am developing an API using Go which connects to a MySQL database for some query execution. I am using GORM for database operations. But I am stuck at printing the SELECT query output for tables whose column names I don't have.
My use case is that I need to run the query on multiple tables where I have no idea what their column names and types are, so I cannot pre-define a struct for all the current and future tables which might get added.
Is there a way to print/save the SELECT query output without a pre-defined struct?
I tried using an empty struct but it didn't help me.
P.S.: I am a beginner in Go
type Testing struct{}

var test Testing
dsn := fmt.Sprintf("%v:%v@tcp(%v:%v)/%v", myds.DBuser, myds.DBpassword, myds.DBhost, myds.DBport, myds.DBname)
db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{})
if err != nil {
    fmt.Println(err)
}
tx := db.Raw(query).Scan(&test)
if tx.Error != nil {
    fmt.Println(tx.Error)
}
fmt.Println(test)
You can use an anonymous struct.
Let's say you have a struct:
type User struct {
    FirstName string
    LastName  string
}
Query:
SELECT CONCAT(first_name, last_name) AS full_name FROM users;
Notice the new column full_name.
You can simply do:
var fullName = struct{ FullName string }{}
Notice the Pascal case: FullName has to be the field name. A capital letter in the middle maps to an underscore in the column name, and the field must be exported (public) so it can be accessed from outside the package. So the query column full_name maps to the field FullName.
Pass this fullName object as a bucket to your Scan and it should work:
db.Raw(query).Scan(&fullName)
EDIT:
Your query will have some result, right? Let's assume it has columns
column_one, column_two, ... column_n
Now, to get the data from all the columns (or just the ones you want), you simply define fields in the anonymous struct with matching names. In our case:
struct{ ColumnOne, ColumnTwo, ..ColumnN interface{} }{}
P.S. I have used interface{}; you can use concrete types depending on the data your columns return.
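A minimal sketch of that approach against a hypothetical table (the table and column names are made up; db is the *gorm.DB from the question):

// Scan two columns of unknown type into interface{} fields.
row := struct {
    ColumnOne interface{}
    ColumnTwo interface{}
}{}
tx := db.Raw("SELECT column_one, column_two FROM some_table LIMIT 1").Scan(&row)
if tx.Error != nil {
    fmt.Println(tx.Error)
}
fmt.Printf("%+v\n", row)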
It worked for me by using a map type with interface{}. This let me save the SELECT query results without a pre-defined struct or the column names.
dsn := fmt.Sprintf("%v:%v@tcp(%v:%v)/%v", myds.DBuser, myds.DBpassword, myds.DBhost, myds.DBport, myds.DBname)
db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{})
if err != nil {
    fmt.Println(err)
}
var result []map[string]interface{}
tx := db.Raw(query).Scan(&result)
if tx.Error != nil {
    fmt.Println(tx.Error)
    return
}
bytes, _ := json.Marshal(result)
fmt.Println(string(bytes))

sql.Rows to slice of strings

I'm using the standard Go sql package to interface with AWS Athena.
My query returns, for each record, a uuid (string) and an array of emails.
Here is the code:
package main

import (
    "database/sql"
    "fmt"

    _ "github.com/segmentio/go-athena"
)

type Contact struct {
    userid string
    emails []string
}

func main() {
    fmt.Println("hello")
    db, err := sql.Open("athena", "db=example")
    if err != nil {
        panic(err)
    }
    rows, err := db.Query("SELECT userid, transform(value.emails, x -> x.value) FROM database LIMIT 10")
    // Returns
    // Row 1: "abc-123", ["email1@gmail.com", "email2@gmail.com"]
    // Row 2: "def-456", ["email3@gmail.com"]
    if err != nil {
        panic(err)
    }
    for rows.Next() {
        var contact Contact
        rows.Scan(&contact.userid, &contact.emails)
        fmt.Println(contact)
    }
}
However, I get this error in the for loop:
panic: unknown type `array` with value [email1@gmail.com]
I'm confused about the array type mentioned and I can't make sense of the error.
How can I map the list of emails returned to a slice of strings in the Contact struct?
Athena supports structural data types. For example the structural data type array:
Structural types
ARRAY < data_type >
From the message you get, I assume the emails column is of type ARRAY<VARCHAR>. In addition, segmentio/go-athena panics on unsupported operations, such as Begin for a transaction (transactions are not supported in Athena). To read the data into a Go slice you have to add some logic yourself. See read "SELECT *" columns into []string in go or Read a Postgresql array directly into a Golang Slice for a starting point. As the pq driver shows, reading an array may be implemented differently than just scanning a row.
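One way to add that logic is a custom sql.Scanner. A minimal sketch, assuming the driver hands the array over as a single string like "[a, b]" (the exact wire format depends on the driver, so verify it before relying on the parsing below):

import (
    "fmt"
    "strings"
)

// StringSlice implements sql.Scanner so an array column can be scanned
// into a []string-shaped value.
type StringSlice []string

func (s *StringSlice) Scan(src interface{}) error {
    str, ok := src.(string)
    if !ok {
        return fmt.Errorf("StringSlice: unsupported type %T", src)
    }
    str = strings.Trim(str, "[]") // drop the surrounding brackets
    if str == "" {
        *s = nil
        return nil
    }
    parts := strings.Split(str, ",")
    for i := range parts {
        parts[i] = strings.TrimSpace(parts[i])
    }
    *s = StringSlice(parts)
    return nil
}

In the loop you would then scan through the wrapper type; since emails is a []string, its pointer converts directly:

rows.Scan(&contact.userid, (*StringSlice)(&contact.emails))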

Query result is memory address

I'm new to Go and still confused about pointers, but I have followed the instructions for querying multiple rows, and the result I get back is a series of memory addresses instead of actual values.
This same structure, minus the rows.Next(), works just fine for a single user, so I'm confused about the origin of the problem here.
Ultimately I'm trying to use the results of the function in a template, but I'm trying to figure out its structure so I can range over it in my HTML.
For example, if I run the code below, I get something like: &{[0xc... 0xc... 0xc...]}
type User struct {
    Id   int    `json:"int"`
    Name string `json:"name"`
    Role string `json:"role"`
}

type Users struct {
    Users []*User
}

func getUsers(company string) *Users {
    users := Users{}
    rows, err := db.Query("SELECT Id, Name, Role...")
    // Check err
    defer rows.Close()
    for rows.Next() {
        user := &User{}
        err = rows.Scan(&user.Id, &user.Name, &user.Role)
        // Check err
        users.Users = append(users.Users, user)
    }
    err = rows.Err()
    // Check err
    return &users
}
This is how I'm attempting to use the function
func userView(w http.ResponseWriter, r *http.Request) {
    res := getUsers("test") // Should return 3 results
    fmt.Println(res.Users)
}
The problem isn't in your fetching of the data, it's in your display of the data. fmt.Println() prints memory addresses when given pointers, so it's behaving exactly as expected. Note that fmt only dereferences a pointer at the top level of an argument, not pointers nested inside a slice, so you need to dereference each element yourself. If you instead do:
for _, u := range res.Users {
    fmt.Printf("%+v\n", *u)
}
you'll get the field values you expect.
If you're planning to use a template, then you should do so--your template should be able to access the fields of each User just fine.
But the short answer is: Your testing method is invalid.
Type Users holds a slice of pointers. If you print the return value of getUsers, it looks like a bunch of memory addresses. This is OK.
If you want to print something more meaningful, write a String() method for Users in which you dereference each pointer and build a string containing the struct fields.
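A minimal sketch of such a String() method, following the User struct above (assumes the fmt and strings packages are imported):

// String implements fmt.Stringer, so fmt.Println(res) and the %v verb
// print field values instead of pointer addresses.
func (u *Users) String() string {
    var b strings.Builder
    for _, usr := range u.Users {
        fmt.Fprintf(&b, "{Id:%d Name:%s Role:%s}\n", usr.Id, usr.Name, usr.Role)
    }
    return b.String()
}

Since getUsers returns *Users, defining the method on the pointer receiver means fmt.Println(res) picks it up directly.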
