jmoiron/sqlx, ...interface{}, and abstracting some boilerplate - go

I thought I'd try to be a little bit "clever" and abstract some of my boilerplate SQL code (using sqlx -- https://github.com/jmoiron/sqlx). The idea is to pass in a function pointer that processes the resulting rows, along with the SQL string and the args that produce those rows. As it happens, the code works fine provided I strip out the "sqlArgs ...interface{}" parameter, but the "cleverer" version errors with the statement
sql: converting Exec argument $1 type: unsupported type []interface {}, a slice of interface
Here are two versions, the first one that errors, the second that works but without parameterization:
//GetRows (doesn't work)
func GetRows(parseRows func(*sqlx.Rows), sql string, sqlArgs ...interface{}) {
	db := sqlx.MustConnect("mysql", ConnString)
	defer db.Close()
	rows, err := db.Queryx(sql, sqlArgs)
	if err != nil {
		panic(err)
	}
	defer rows.Close()
	parseRows(rows)
}
//GetRows ... (works, but doesn't allow parameterization)
func GetRows(fp func(*sqlx.Rows), sql string) {
	db := sqlx.MustConnect("mysql", ConnString)
	defer db.Close()
	rows, err := db.Queryx(sql)
	if err != nil {
		panic(err)
	}
	defer rows.Close()
	fp(rows)
}
The idea is to call the code something like this:
func getUser(userID string) User {
	var users []User
	parseRows := func(rows *sqlx.Rows) {
		for rows.Next() {
			var u User
			err := rows.StructScan(&u)
			if err != nil {
				panic(err)
			}
			users = append(users, u)
		}
	}
	sql := "SELECT * FROM users WHERE userid = ?;"
	sqlutils.GetRows(parseRows, sql, userID)
	if len(users) == 1 {
		return users[0]
	}
	return User{}
}
I guess my code doesn't actually pass the userID through from call to call; instead it passes an []interface{} wrapping it, which the sql package can't handle. I'm not sure about that, however. In any case, is there any way to accomplish this idea? Thanks.
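That hunch is right: `db.Queryx(sql, sqlArgs)` passes the whole `[]interface{}` as a single driver argument, which is exactly what the error message says. Spreading the slice with `db.Queryx(sql, sqlArgs...)` forwards its elements individually. A runnable sketch of the difference (`firstArgType` is a stand-in defined here just to show what a variadic callee receives; it is not an sqlx API):

```go
package main

import "fmt"

// firstArgType stands in for db.Queryx and reports the dynamic type
// of the first driver argument it receives.
func firstArgType(query string, args ...interface{}) string {
	return fmt.Sprintf("%T", args[0])
}

func main() {
	sqlArgs := []interface{}{"user42"}

	// Passing the slice itself: args[0] is the whole slice, which the
	// sql package rejects ("unsupported type []interface {}").
	fmt.Println(firstArgType("SELECT ...", sqlArgs)) // []interface {}

	// Spreading with ...: args[0] is the string "user42".
	fmt.Println(firstArgType("SELECT ...", sqlArgs...)) // string
}
```

So the only change GetRows needs is `rows, err := db.Queryx(sql, sqlArgs...)`.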

Related

type checking without hardcoding type or storing type information in array

I'm trying to figure out whether Go provides a way to avoid hardcoding the type in each switch branch without reflection or run-time type checking.
The idea is that GetGeneric shouldn't know anything about the type it receives: the caller tells GetGeneric to use type X, and GetGeneric passes it
to stmt.Get, which populates the model data.
In the code below, stmt.Get requires the address of a concrete type.
Here is a small example. I really don't like that I need to hardcode
a type for each model as a separate case branch; it defeats the entire purpose, and I'm restricted to the interface that stmt.Get provides.
func (pdb *PersistentDb) GetGeneric(ctx context.Context, sqlStmt string,
	model interface{}, args ...interface{}) (interface{}, error) {
	if pdb == nil {
		return nil, fmt.Errorf("you need to initialize the persistent db first")
	}
	db, err := pdb.connectx(ctx)
	if err != nil {
		callistolog.Errorf("Failed to connect to database: %v", err)
		return nil, err
	}
	defer db.Close()
	stmt, errPrepare := db.Preparex(sqlStmt)
	if errPrepare != nil {
		callistolog.Errorf("Failed to prepare sql statement: %v", errPrepare)
		return nil, errPrepare
	}
	defer stmt.Close()
	switch t := model.(type) {
	case models.Role:
		err = stmt.Get(&t, args...)
		return t, err
	case models.Tenants:
		err = stmt.Get(&t, args...)
		return t, err
	// here I need to add a branch for each models.Type
	default:
		callistolog.Errorf("got unknown type")
	}
	return model, fmt.Errorf("unknown type")
}
One option is a run-time check via reflection, but I was wondering whether I can somehow declare an array/slice of types and collapse the switch into a loop.
Something like (pseudocode):
if type T is in an array of models:
    take T as generic and make it a concrete type T_hat
    pass T_hat's address to stmt.Get
In some languages you can have an array of Object[], etc., and check each object's type.
In this example, the caller provides the model as a generic interface, and inside GetGeneric the model is type-checked against an array of known types; if it matches, it becomes concrete so stmt.Get accepts it.
For example, how can I store type information (not a value) in a slice and then dispatch to
stmt.Get(&t, args...)
types := []interface{}{model, model1, model2}
for _, v := range types {
}
Thank you
Assuming you are using sqlx, adding another layer of reflection makes no sense; instead, use it as it was designed to be used.
func (pdb *PersistentDb) GetGeneric(ctx context.Context, sqlStmt string, model interface{}, args ...interface{}) error {
	if pdb == nil {
		return fmt.Errorf("you need to initialize the persistent db first")
	}
	db, err := pdb.connectx(ctx)
	if err != nil {
		callistolog.Errorf("Failed to connect to database: %v", err)
		return err
	}
	defer db.Close()
	stmt, errPrepare := db.Preparex(sqlStmt)
	if errPrepare != nil {
		callistolog.Errorf("Failed to prepare sql statement: %v", errPrepare)
		return errPrepare
	}
	defer stmt.Close()
	if err := stmt.Get(model, args...); err != nil {
		callistolog.Error(err)
		return err
	}
	return nil
}
role := models.Role{}
if err := pdb.GetGeneric(ctx, "select ...", &role, args...); err != nil {
	return err
}
fmt.Println(role)
// ...
tenants := models.Tenants{}
if err := pdb.GetGeneric(ctx, "select ...", &tenants, args...); err != nil {
	return err
}
fmt.Println(tenants)
Note that preparing a statement manually is unnecessary if you don't intend to reuse it.

Cannot use args (type []string) as type []interface {} [duplicate]

This question already has answers here:
Type converting slices of interfaces
(9 answers)
Closed 3 years ago.
This is my Go sqlite insert function; I'm using the package "github.com/mattn/go-sqlite3".
func Insert(args ...string) (err error) {
	db, err := sql.Open("sqlite3", "sqlite.db")
	if err != nil {
		return
	}
	q, err := db.Prepare(args[0])
	if err != nil {
		return
	}
	_, err = q.Exec(args[1:]...)
	return
}
func main() {
	err := Insert("INSERT INTO table(first,last) VALUES(?,?)", "Nantha", "nk")
	if err != nil {
		fmt.Println(err.Error())
		return
	}
}
I'm getting this error:
cannot use args (type []string) as type []interface {} in argument to
q.Exec
The error is pretty clear: the function expects type []interface{}, but you're passing in a value of type []string. You have to convert the []string to []interface{} before passing it to Exec, and the way to do that is to loop over the strings and add each one to a new slice of interface{}.
https://golang.org/doc/faq#convert_slice_of_interface
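Concretely, the conversion loop the FAQ describes looks like this (a small self-contained sketch; `toInterfaceSlice` is a name chosen here, not part of any package):

```go
package main

import "fmt"

// toInterfaceSlice copies a []string into a []interface{} element by
// element; Go performs no implicit conversion between the two slice types.
func toInterfaceSlice(ss []string) []interface{} {
	out := make([]interface{}, len(ss))
	for i, s := range ss {
		out[i] = s
	}
	return out
}

func main() {
	params := toInterfaceSlice([]string{"Nantha", "nk"})
	fmt.Printf("%T %v\n", params, params) // []interface {} [Nantha nk]
	// q.Exec(params...) would now compile.
}
```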
As an alternative approach, you can change the Insert argument types.
func Insert(query string, args ...interface{}) (err error) {
	db, err := sql.Open("sqlite3", "sqlite.db")
	if err != nil {
		return err
	}
	q, err := db.Prepare(query)
	if err != nil {
		return err
	}
	_, err = q.Exec(args...)
	return err
}
func main() {
	err := Insert("INSERT INTO table(first,last) VALUES(?,?)", "Nantha", "nk")
	if err != nil {
		fmt.Println(err.Error())
		return
	}
}
Please note that you're using the database/sql package incorrectly. Many of the objects returned from that package's functions/methods need to be closed to release the underlying resources.
This is true for *sql.DB returned by Open, *sql.Stmt returned by Prepare, *sql.Rows returned by Query, etc.
So your function should look closer to something like this:
func Insert(query string, args ...interface{}) (err error) {
	db, err := sql.Open("sqlite3", "sqlite.db")
	if err != nil {
		return err
	}
	defer db.Close()
	q, err := db.Prepare(query)
	if err != nil {
		return err
	}
	defer q.Close()
	_, err = q.Exec(args...)
	return err
}
Also note that sql.DB is reusable, that means that you don't have to sql.Open a new instance every time you need to talk to the database.
From the docs on Open:
The returned DB is safe for concurrent use by multiple goroutines and
maintains its own pool of idle connections. Thus, the Open function
should be called just once. It is rarely necessary to close a DB.
If you keep doing it the way you're doing it, opening a new DB every time you call Insert or any other function that talks to the database, your program will perform worse than if you had a single DB that your functions reuse.

How to dump huge csv data (4GB) into mysql

If anyone has tried this before using Go, please share the idea with code; it would be really appreciated.
I wrote a few lines, but they are slow:
// This is to read the csv file
func usersFileLoader(filename string, channel chan User) {
	defer close(channel)
	file, err := os.Open(filename)
	if err != nil {
		panic(err)
	}
	defer file.Close()
	var user User
	reader := csv.NewReader(file)
	for {
		err := Unmarshal(reader, &user)
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		channel <- user
	}
}
// This is to insert csv file
func saveUser(channel <-chan User, db *sql.DB) {
	stmt, err := db.Prepare(`
		INSERT INTO Users( id, name, address) values ( ?, ?, ?)`)
	if err != nil {
		log.Fatal(err)
	}
	for usr := range channel {
		_, err := stmt.Exec(
			usr.ID,
			usr.Name,
			usr.Address,
		)
		if err != nil {
			log.Fatal(err)
		}
	}
}
// here is the struct of the user
type User struct {
	ID      int    `csv:"id"`
	Name    string `csv:"name"`
	Address string `csv:"address"`
}
// here is my main func
func main() {
	db := DBconnect(ConnectionString(dbConfig()))
	defer db.Close()
	channel := make(chan User)
	go usersFileLoader("../user.csv", channel)
	saveUser(channel, db)
}
// This code works, but it's slow for me.
Share your thoughts and ideas.
I wouldn't attempt to use Go's built in standard library functions for loading a very large CSV file into MySQL (unless, of course, you are simply trying to learn how they work).
For best performance I would simply use MySQL's built in LOAD DATA INFILE functionality.
For example:
result, err := db.Exec("LOAD DATA INFILE ?", filename)
if err != nil {
	log.Fatal(err)
}
n, err := result.RowsAffected()
if err != nil {
	log.Fatal(err)
}
log.Printf("%d rows inserted\n", n)
If you haven't used LOAD DATA INFILE before, note carefully the documentation regarding LOCAL. Depending on your server configuration and permissions, you might need to use LOAD DATA LOCAL INFILE instead. (If you intend to use Docker containers, for instance, you will absolutely need to use LOCAL.)

GO: Return map from SQL query

I am querying a MySQL database in a Go function and want to return key/value pairs in a map, but can't quite figure out how to accomplish this. So far I have this function:
func GetData(callIds []string) map[string]Records {
	//db insert
	db, err := sql.Open("mysql", mySql)
	if err != nil {
		fmt.Printf(err.Error())
	}
	defer db.Close()
	//db query
	var foo string
	err = db.QueryRow("select foo from bardata where callId = %v", 1).Scan(&foo)
	if err != nil {
		fmt.Printf(err.Error())
	}
	fmt.Println(foo)
	return nil
}
I want to return a map with the key being callId and value being foo for each row returned from the query.
First, you need to build up your query. As it is, you're not even using your function input. Since we have a variable number of arguments, we need to do a little work to construct the right number of placeholders:
query := `select callid, foo from bardata where callid in (` +
	strings.Repeat(`?,`, len(callIds)-1) + `?)`
then, execute with the values passed in:
args := make([]interface{}, len(callIds))
for i, id := range callIds {
	args[i] = id
}
rows, err := db.Query(query, args...)
if err != nil {
	// handle it
}
defer rows.Close()
then collect the results:
ret := map[string]string{}
for rows.Next() {
	var callid, foo string
	err = rows.Scan(&callid, &foo)
	if err != nil {
		// handle it
	}
	ret[callid] = foo
}
return ret
Caveats:
This will cause a placeholder mismatch error if callIds is an empty slice. If that's possible, then you need to detect it and handle it separately (maybe by returning an error or an empty map — querying the DB shouldn't be necessary).
This returns a map[string]string where the values are whatever "foo" is. In your question you have the function returning a map[string]Records but there's no information about what a Records might be or how to fetch one.
You might want to handle sql.ErrNoRows differently from other errors.
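The placeholder construction plus the empty-slice guard from the first caveat can be factored into a small helper (the `placeholders` name is made up here):

```go
package main

import (
	"fmt"
	"strings"
)

// placeholders returns "?,?,...,?" with n placeholders, or "" for n <= 0,
// avoiding the placeholder-mismatch problem an empty callIds slice
// would cause with the strings.Repeat expression above.
func placeholders(n int) string {
	if n <= 0 {
		return ""
	}
	return strings.Repeat(`?,`, n-1) + `?`
}

func main() {
	callIds := []string{"a", "b", "c"}
	query := `select callid, foo from bardata where callid in (` +
		placeholders(len(callIds)) + `)`
	fmt.Println(query) // select callid, foo from bardata where callid in (?,?,?)
}
```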
package main

import (
	"fmt"

	"github.com/bobby96333/goSqlHelper"
)

func main() {
	fmt.Println("hello")
	conn, err := goSqlHelper.MysqlOpen("user:password@tcp(127.0.0.1:3306)/dbname")
	checkErr(err)
	row, err := conn.QueryRow("select * from table where col1 = ? and col2 = ?", "123", "abc")
	checkErr(err)
	if *row == nil {
		fmt.Println("no row found")
	} else {
		fmt.Printf("%+v", row)
	}
}

func checkErr(err error) {
	if err != nil {
		panic(err)
	}
}
output:
&map[col1:abc col2:123]

Go func (*DB) Query return when such a row does not exist

Signature is func (db *DB) Query(query string, args ...interface{}) (*Rows, error).
What does Go's func (*DB) Query return if the query and call are:
rows, err := db.Query("SELECT username FROM userstable WHERE username=$1", registerInstance.Username)
when there is no such row in the table userstable?
Does it return a non-nil error, or an empty result with a nil error, the non-nil error being returned only when an actual error occurs?
In this case, you'll definitely want to use QueryRow instead of Query (assuming you'd only ever get one user with the same username).
From http://go-database-sql.org/retrieving.html
Go defines a special error constant, called sql.ErrNoRows, which is returned from QueryRow() when the result is empty. This needs to be handled as a special case in most circumstances. An empty result is often not considered an error by application code, and if you don’t check whether an error is this special constant, you’ll cause application-code errors you didn’t expect.
When using Query, you'll be looping over the results with something like:
rows, err := db.Query("SELECT username FROM userstable WHERE username=$1", registerInstance.Username)
if err != nil {
	log.Fatal(err)
}
defer rows.Close()
var users []User
for rows.Next() {
	user := User{}
	err := rows.Scan(&user.Username)
	if err != nil {
		log.Fatal(err)
	}
	users = append(users, user)
}
if len(users) == 0 {
	// handle this case
}