I know that inserting multiple rows at once is more efficient:
INSERT INTO test(n1, n2, n3)
VALUES(v1, v2, v3),(v4, v5, v6),(v7, v8, v9);
How do I do that in Go?
data := []map[string]string{
{"v1":"1", "v2":"1", "v3":"1"},
{"v1":"2", "v2":"2", "v3":"2"},
{"v1":"3", "v2":"3", "v3":"3"},
}
// I do not want to do it like this:
for _, v := range data {
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES(?, ?, ?)"
stmt, _ := db.Prepare(sqlStr)
res, _ := stmt.Exec(v["v1"], v["v2"], v["v3"])
}
I could use string concatenation, but that's not good. db.Prepare is safer, right?
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES"
for k, v := range data {
if k == 0 {
sqlStr += fmt.Sprintf("(%v, %v, %v)", v["v1"], v["v2"], v["v3"])
} else {
sqlStr += fmt.Sprintf(",(%v, %v, %v)", v["v1"], v["v2"], v["v3"])
}
}
res, _ := db.Exec(sqlStr)
I need a safe and efficient way to insert multiple rows at once.
Why not something like this? (Writing this here without testing, so there might be syntax errors.)
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES "
vals := []interface{}{}
for _, row := range data {
sqlStr += "(?, ?, ?),"
vals = append(vals, row["v1"], row["v2"], row["v3"])
}
//trim the last ,
sqlStr = sqlStr[0:len(sqlStr)-1]
//prepare the statement
stmt, _ := db.Prepare(sqlStr)
//format all vals at once
res, _ := stmt.Exec(vals...)
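A consolidated sketch of that idea with error handling, wrapped in a hypothetical helper (it assumes a *sql.DB named db and the data slice from the question):
func bulkInsert(db *sql.DB, data []map[string]string) error {
	sqlStr := "INSERT INTO test(n1, n2, n3) VALUES "
	vals := []interface{}{}

	for _, row := range data {
		sqlStr += "(?, ?, ?),"
		vals = append(vals, row["v1"], row["v2"], row["v3"])
	}

	// Trim the trailing comma.
	sqlStr = strings.TrimSuffix(sqlStr, ",")

	stmt, err := db.Prepare(sqlStr)
	if err != nil {
		return err
	}
	defer stmt.Close()

	_, err = stmt.Exec(vals...)
	return err
}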
For Postgres, lib/pq supports bulk imports: https://godoc.org/github.com/lib/pq#hdr-Bulk_imports
The same can also be achieved with the code below, but where it is really helpful is when you try to perform a bulk conditional update (change the query accordingly).
For performing similar bulk inserts against Postgres, you can use the following function.
// ReplaceSQL replaces each occurrence of the search pattern with an increasing $n sequence
func ReplaceSQL(old, searchPattern string) string {
tmpCount := strings.Count(old, searchPattern)
for m := 1; m <= tmpCount; m++ {
old = strings.Replace(old, searchPattern, "$"+strconv.Itoa(m), 1)
}
return old
}
So the above sample becomes:
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES "
vals := []interface{}{}
for _, row := range data {
sqlStr += "(?, ?, ?),"
vals = append(vals, row["v1"], row["v2"], row["v3"])
}
//trim the last ,
sqlStr = strings.TrimSuffix(sqlStr, ",")
//Replacing ? with $n for postgres
sqlStr = ReplaceSQL(sqlStr, "?")
//prepare the statement
stmt, _ := db.Prepare(sqlStr)
//format all vals at once
res, _ := stmt.Exec(vals...)
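For genuinely large loads on Postgres, the lib/pq bulk-import support linked above (COPY) avoids building a huge VALUES list entirely. A minimal sketch, assuming the same test(n1, n2, n3) table and data slice from the question, and a *sql.DB opened via lib/pq (requires import "github.com/lib/pq"):
txn, err := db.Begin()
if err != nil {
	log.Fatal(err)
}

// pq.CopyIn builds the COPY ... FROM STDIN statement for the given table and columns.
stmt, err := txn.Prepare(pq.CopyIn("test", "n1", "n2", "n3"))
if err != nil {
	log.Fatal(err)
}

for _, row := range data {
	if _, err := stmt.Exec(row["v1"], row["v2"], row["v3"]); err != nil {
		log.Fatal(err)
	}
}

// An Exec with no arguments flushes the buffered rows to the server.
if _, err := stmt.Exec(); err != nil {
	log.Fatal(err)
}
if err := stmt.Close(); err != nil {
	log.Fatal(err)
}
if err := txn.Commit(); err != nil {
	log.Fatal(err)
}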
GORM v2 (released on 30 August 2020) now supports batch inserts.
// Pass slice data to method Create, GORM will generate a single SQL statement
// to insert all the data and backfill primary key values,
// hook methods will be invoked too.
var users = []User{{Name: "jinzhu1"}, {Name: "jinzhu2"}, {Name: "jinzhu3"}}
DB.Create(&users)
for _, user := range users {
user.ID // 1,2,3
}
For more details refer to the official documentation here: https://gorm.io/docs/create.html.
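For large slices, GORM v2 can also split the insert into several statements with CreateInBatches (a sketch; the batch size of 100 is arbitrary):
var users = []User{{Name: "jinzhu1"}, {Name: "jinzhu2"}, {Name: "jinzhu3"}}

// Inserts the records in batches of 100 rows per INSERT statement.
DB.CreateInBatches(users, 100)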
If you enable multiStatements, then you can execute multiple statements at once.
With that, you should be able to handle multiple inserts.
https://github.com/go-sql-driver/mysql#multistatements
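A sketch of what enabling it might look like with go-sql-driver/mysql (the DSN credentials and database name are placeholders):
import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func openDB() *sql.DB {
	// multiStatements=true lets one Exec call carry several ';'-separated statements.
	// Be careful: it also widens the blast radius of SQL injection, so keep using placeholders.
	db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/testdb?multiStatements=true")
	if err != nil {
		log.Fatal(err)
	}
	return db
}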
After extensive research this worked for me:
var values []interface{}
for _, scope := range scopes {
values = append(values, scope.ID, scope.Code, scope.Description)
}
sqlStr := `INSERT INTO scopes (application_id, scope, description) VALUES %s`
sqlStr = setupBindVars(sqlStr, "(?, ?, ?)", len(scopes))
_, err = s.db.ExecContext(ctx, sqlStr, values...)
// helper function to replace ? with the right number of sets of bind vars
func setupBindVars(stmt, bindVars string, count int) string {
bindVars += ","
stmt = fmt.Sprintf(stmt, strings.Repeat(bindVars, count))
return strings.TrimSuffix(stmt, ",")
}
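With three scopes, for example, the helper expands the statement to INSERT INTO scopes (application_id, scope, description) VALUES (?, ?, ?),(?, ?, ?),(?, ?, ?), and values then supplies the nine arguments in matching order.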
From https://gorm.io/docs/create.html#Batch-Insert
Code sample:
var users = []User{{Name: "jinzhu1"}, {Name: "jinzhu2"}, {Name: "jinzhu3"}}
DB.Create(&users)
This is an efficient way to do it in a transaction, which makes the network call only on commit.
func insert(requestObj []models.User) (bool, error) {
tx := db.Begin()
defer func() {
if r := recover(); r != nil {
tx.Rollback()
}
}()
for _, obj := range requestObj {
if err := tx.Create(&obj).Error; err != nil {
logging.AppLogger.Errorf("Failed to create user")
tx.Rollback()
return false, err
}
}
err := tx.Commit().Error
if err != nil {
return false, err
}
return true, nil
}
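GORM also provides a Transaction helper that handles the commit and rollback bookkeeping for you; a sketch of the same loop using it (still one INSERT per row, just less boilerplate):
func insert(requestObj []models.User) error {
	return db.Transaction(func(tx *gorm.DB) error {
		for _, obj := range requestObj {
			if err := tx.Create(&obj).Error; err != nil {
				// Returning a non-nil error rolls the whole transaction back.
				return err
			}
		}
		// Returning nil commits the transaction.
		return nil
	})
}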
I ended up with this, after combining the feedback on posted answers:
const insertQuery = "INSERT INTO test(n1, n2, n3) VALUES "
const rowSQL = "(?, ?, ?)"
var inserts []string
var vals []interface{}
for _, row := range data {
inserts = append(inserts, rowSQL)
vals = append(vals, row["v1"], row["v2"], row["v3"])
}
sqlStr := insertQuery + strings.Join(inserts, ",")
//prepare the statement
stmt, _ := db.Prepare(sqlStr)
//close stmt after use
defer stmt.Close()
//format all vals at once
res, _ := stmt.Exec(vals...)
Related question:
I had an API that I wrote in Python Flask for the backend of the website and app, which works fine. I recently learned Go and rewrote the whole API in Go. I expected much lower CPU and memory utilization from the Go binary, but MariaDB is now at almost 99% utilization.
I tried limiting max connections, max timeout, max idle time, etc. (all the options in the GitHub page), still no use. I have the connection as a global variable in the code, and I defer result.Close() after every db.Prepare and db.Query. I know Go is much faster than Python, so it makes sense that it sends more requests to the server, but this is only a test environment and it shouldn't cause that much CPU utilization. Any suggestions on how to deal with MariaDB in Go?
FYI: the original site has been working since 2015 and has at least a million rows of data. I could recreate the database using GORM and insert the data again, but I really want to just use pure SQL (no ORM, thank you).
func getfulldate(c *fiber.Ctx) error {
pid := c.FormValue("pid")
result, err := db.Prepare("select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName) as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone) as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate from profile p left join (select phID, pID, phone from phonefix group by pID) p2 on p.pID = p2.pID left join (select pID, weight, height from bmifix group by pID) B on p.pID = B.pID, religion r where r.rgID = p.rgID and p.pID = ? ")
defer result.Close()
if err != nil {
return c.JSON(fiber.Map{"access_token": "wrong"})
}
rows, err := result.Query(pid)
defer rows.Close()
if err != nil {
return c.JSON(fiber.Map{"access_token": "wrong"})
}
columns, err := rows.Columns()
if err != nil {
return err
}
count := len(columns)
tableData := make([]map[string]interface{}, 0)
values := make([]interface{}, count)
valuePtrs := make([]interface{}, count)
for rows.Next() {
for i := 0; i < count; i++ {
valuePtrs[i] = &values[i]
}
rows.Scan(valuePtrs...)
entry := make(map[string]interface{})
for i, col := range columns {
var v interface{}
val := values[i]
b, ok := val.([]byte)
if ok {
v = string(b)
} else {
v = val
}
entry[col] = v
}
tableData = append(tableData, entry)
}
currentTime := time.Now().Format("2006-01-02")
result, err = db.Prepare("select viID,state as done,dob from visitfix where patientID = ?")
defer result.Close()
if err != nil {
return c.JSON(fiber.Map{"access_token": "wrong"})
}
rows, err = result.Query(pid)
defer rows.Close()
if err != nil {
return c.JSON(fiber.Map{"access_token": "wrong"})
}
columns = []string{"viID", "done", "dob"}
count = len(columns)
tableDatas := make([]map[string]interface{}, 0)
values = make([]interface{}, count)
valuePtrs = make([]interface{}, count)
for rows.Next() {
for i := 0; i < count; i++ {
valuePtrs[i] = &values[i]
}
rows.Scan(valuePtrs...)
entry := make(map[string]interface{})
for i, col := range columns {
var v interface{}
val := values[i]
b, ok := val.([]byte)
if ok {
v = string(b)
} else {
v = val
}
if i == 2 {
var state string
format := "2006-1-2"
datea, err := time.Parse(format, string(b))
if err != nil {
return err
}
mydate := datea.Format("2006-01-02")
if mydate == currentTime {
state = "today"
}
if mydate < currentTime {
state = "older"
}
if mydate > currentTime {
state = "newer"
}
entry["state"] = state
}
entry[col] = v
}
tableDatas = append(tableDatas, entry)
}
alldata := [][]map[string]interface{}{tableData, tableDatas}
dat, err := json.Marshal(alldata)
if err != nil {
return err
}
return c.SendString(string(dat))
}
The Go process in itself should not have any different effect on (any) database than any other language, barring better driver implementations for retrieving the row data (the cursor implementation of MySQL was, and might still be, broken).
As for the cursor usage: This can spread the load over a longer time but you would need a serious amount of data to notice a difference between driver implementations and languages. Compiled languages will be able to put a higher load on the db at that point in time. But again: This would be rare.
The primary candidate to look at here is most likely indexing. You are querying:
result, err := db.Prepare(`select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName)
as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone)
as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate
from profile p
left join (select phID, pID, phone from phonefix group by pID) p2 on p.pID = p2.pID
left join (select pID, weight, height from bmifix group by pID) B on p.pID = B.pID, religion r
where r.rgID = p.rgID and p.pID = ? `)
You want to have indexes on p.pID, r.rgID and p.rgID.
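A sketch of what adding those indexes might look like (the index names are made up; p.pID and r.rgID may already be primary keys, in which case only the profile.rgID index is new, so check with SHOW INDEX first):
// Hypothetical index names; skip any column that is already covered by a primary key.
stmts := []string{
	"CREATE INDEX idx_profile_pid ON profile (pID)",
	"CREATE INDEX idx_religion_rgid ON religion (rgID)",
	"CREATE INDEX idx_profile_rgid ON profile (rgID)",
}
for _, s := range stmts {
	if _, err := db.Exec(s); err != nil {
		log.Println(err)
	}
}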
Then also the group by on pID in the left joins might be executed in a suboptimal fashion (run an explain and check the execution plan).
An improvement could be to remove the GROUP BY clauses: you don't use any aggregate function, so there is no need for the GROUP BY in the left joins:
result, err := db.Prepare(`select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName)
as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone)
as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate
from profile p
left join (select phID, pID, phone from phonefix) p2 on p.pID = p2.pID
left join (select pID, weight, height from bmifix) B on p.pID = B.pID
left join religion r on r.rgID = p.rgID
where p.pID = ? `)
Since you are always retrieving a single pID, and that seems to be a unique record, the following might be even better (can't test, I don't have your DB :)): if possible, switch to inner joins, which should outperform the left joins.
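A sketch of that inner-join variant (untested; it assumes every profile row actually has matching religion, phonefix and bmifix rows, otherwise the left joins must stay):
result, err := db.Prepare(`select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName)
as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone)
as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate
from profile p
inner join (select phID, pID, phone from phonefix) p2 on p.pID = p2.pID
inner join (select pID, weight, height from bmifix) B on p.pID = B.pID
inner join religion r on r.rgID = p.rgID
where p.pID = ? `)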
I'm working on some code that's meant to dump MySQL data to a .csv file. I'd like to pass in a command line arg that allows the user to choose which ID is used for the query, e.g. go run main.go 2 would run the query
SELECT * FROM table where id = 2;
I know that Go has the os package where I can then pass something like:
args := os.Args
if len(args) < 2 {
fmt.Println("Supply ID")
os.Exit(1)
}
testID := os.Args[1]
fmt.Println(testID)
}
Here's the code I'm currently working on. How can I pass a command line argument to the Query?
rows, _ := db.Query("SELECT * FROM table where id = ?;")
err := sqltocsv.WriteFile("table.csv",rows)
if err != nil {
panic(err)
}
columns, _ := rows.Columns()
count := len(columns)
values := make([]interface{}, count)
valuePtrs := make([]interface{}, count)
for rows.Next() {
for i := range columns {
valuePtrs[i] = &values[i]
}
rows.Scan(valuePtrs...)
for i, col := range columns {
val := values[i]
b, ok := val.([]byte)
var v interface{}
if ok {
v = string(b)
} else {
v = val
}
fmt.Println(col, v)
}
}
}
Just add your parameters to Query:
rows, _ := db.Query("SELECT * FROM table where id = ?;", os.Args[1])
testID needs to be an interface{} type and added to the Query:
var testID interface{}
testID = os.Args[1]
And add to Query
rows, _ := db.Query("SELECT * FROM table where id = ?;", testID)
Edit:
Why interface{}?
The Query function accepts interface{} types as arguments. More info
This code delivers what is, AFAIK, correct JSON output [{},{}], but each appended row replaces all the previous rows, so the result shows only copies of the last row.
var rows *sql.Rows
rows, err = db.Query(query)
cols, _ := rows.Columns()
colnames, _ := rows.Columns()
vals := make([]interface{}, len(cols))
for i, _ := range cols {
vals[i] = &cols[i]
}
m := make(map[string]interface{})
for i, val := range vals {
m[colnames[i]] = val
}
list := make([]map[string]interface{}, 0)
for rows.Next() {
err = rows.Scan(vals...)
list = append(list, m)
}
json, _ := json.Marshal(list)
fmt.Fprintf(w,"%s\n", json)
This is what happens behind the scenes looping through the rows:
loop 1: {"ID":"1","NAME":"John Doe"}
loop 2: {"ID":"2","NAME":"Jane Doe"}{"ID":"2","NAME":"Jane Doe"}
loop 3: {"ID":"3","NAME":"Donald Duck"}{"ID":"3","NAME":"Donald Duck"}{"ID":"3","NAME":"Donald Duck"}
The rows.Scan fetches the correct values, but it appends AND replaces all previous values.
The final output is this:
[{"ID":"3","NAME":"Donald Duck"},{"ID":"3","NAME":"Donald Duck"},{"ID":"3","NAME":"Donald Duck"}]
But it should be this:
[{"ID":"1","NAME":"John Doe"},{"ID":"2","NAME":"Jane Doe"},{"ID":"3","NAME":"Donald Duck"}]
What am I doing wrong?
You may downvote this, but please explain why. I am still a newbie on Golang and want to learn.
I fixed it and explained with comments what you did wrong:
// 1. Query
var rows *sql.Rows
rows, err = db.Query(query)
cols, _ := rows.Columns()
// 2. Iterate
list := make([]map[string]interface{}, 0)
for rows.Next() {
vals := make([]interface{}, len(cols))
for i := range cols {
// Previously you assigned vals[i] a pointer to a column name cols[i].
// This meant that every time you did rows.Scan(vals...),
// rows.Scan would see pointers to cols and modify them.
// Since cols are the same for all rows, they shouldn't be modified.
// Here we assign a pointer to an empty string to vals[i],
// so rows.Scan can fill it.
var s string
vals[i] = &s
// This is effectively like saying:
// var string1, string2 string
// rows.Scan(&string1, &string2)
// except that the above only scans two string columns,
// and we allow as many string columns as the query returned us, i.e. len(cols).
}
err = rows.Scan(vals...)
// Don't forget to check errors.
if err != nil {
log.Fatal(err)
}
// Make a new map before appending it.
// Remember maps aren't copied by value, so if we declared
// the map m outside of the rows.Next() loop, we would be appending
// and modifying the same map for each row, so all rows in list would look the same.
m := make(map[string]interface{})
for i, val := range vals {
m[cols[i]] = val
}
list = append(list, m)
}
// 3. Print.
b, _ := json.MarshalIndent(list, "", "\t")
fmt.Printf("%s\n", b)
Don't worry, this was hard for me to understand when I was a beginner as well.
Now, something fun:
var list []map[string]interface{}
rows, err := db.Queryx(query)
for rows.Next() {
row := make(map[string]interface{})
err = rows.MapScan(row)
if err != nil {
log.Fatal(err)
}
list = append(list, row)
}
b, _ := json.MarshalIndent(list, "", "\t")
fmt.Printf("%s\n", b)
This does the same as the code above it, but with sqlx. A bit simpler, no?
sqlx is an extension on top of database/sql with methods to scan rows directly to maps and structs, so you don't have to do that manually.
I think your model looks nicer as a struct:
type Person struct {
ID int
Name string
}
var people []Person
rows, err := db.Queryx(query)
for rows.Next() {
var p Person
err = rows.StructScan(&p)
if err != nil {
log.Fatal(err)
}
people = append(people, p)
}
Don't you think?
I have the following:
func getSlice(distinctSymbols []string) []symbols {
// Prepare connection
stmt1, err := db.Prepare("Select count(*) from stockticker_daily where symbol = $1;")
checkError(err)
defer stmt1.Close()
stmt2, err := db.Prepare("Select date from stockticker_daily where symbol = $1 order by date asc limit 1;")
checkError(err)
defer stmt2.Close()
var symbolsSlice []symbols
c := make(chan symbols)
for _, symbol := range distinctSymbols {
go worker(symbol, stmt1, stmt2, c)
symbolsFromChannel := <-c
symbolsSlice = append(symbolsSlice, symbolsFromChannel)
}
return symbolsSlice
}
func worker(symbol string, stmt1 *sql.Stmt, stmt2 *sql.Stmt, symbolsChan chan symbols) {
var countdp int
var earliestdate string
row := stmt1.QueryRow(symbol)
if err := row.Scan(&countdp); err != nil {
log.Fatal(err)
}
row = stmt2.QueryRow(symbol)
if err := row.Scan(&earliestdate); err != nil {
log.Fatal(err)
}
symbolsChan <- symbols{symbol, countdp, earliestdate}
}
Please take a look at the first function. I know it won't work as I expect, since the line symbolsFromChannel := <-c blocks until it receives from the channel, so the loop won't start the next go worker until the current one has sent its result. What is the best or correct way to do this?
Just do the loop twice, e.g.
for _, symbol := range distinctSymbols {
go worker(symbol, stmt1, stmt2, c)
}
for range distinctSymbols {
symbolsSlice = append(symbolsSlice, <-c)
}
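If you'd rather not rely on the two loops having matching counts, an alternative sketch is to close the channel once all workers are done and range over it (this adds a sync.WaitGroup; untested against your types):
var wg sync.WaitGroup
c := make(chan symbols)

for _, symbol := range distinctSymbols {
	wg.Add(1)
	go func(s string) {
		defer wg.Done()
		worker(s, stmt1, stmt2, c)
	}(symbol)
}

// Close the channel once every worker has sent its result,
// so the range loop below terminates.
go func() {
	wg.Wait()
	close(c)
}()

for s := range c {
	symbolsSlice = append(symbolsSlice, s)
}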
I have a function to delete a "User" node and return the count of deleted nodes, but it always returns -1.
func DeleteUser(userid int) (int, error) {
stmt := `
MATCH (u:User) WHERE u.userId = {userid} delete u RETURN count(u) ;
`
params := neoism.Props{"userid": userid}
res := -1
cq := neoism.CypherQuery{
Statement: stmt,
Parameters: params,
Result: &res,
}
err := conn.Cypher(&cq)
return res, err
}
1) res must be of type []struct
2) Don't use ";" at the end of the query.
stmt := `MATCH (u:User) WHERE u.userId = {userid} delete u RETURN count(u)`
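Putting both fixes together, the function might look like this (a sketch; the json tag has to match the column name the query returns, assumed here to be count(u)):
func DeleteUser(userid int) (int, error) {
	stmt := `
	MATCH (u:User) WHERE u.userId = {userid} delete u RETURN count(u)
	`
	params := neoism.Props{"userid": userid}

	// neoism unmarshals each result row into a struct whose json tags
	// match the returned column names.
	res := []struct {
		Count int `json:"count(u)"`
	}{}

	cq := neoism.CypherQuery{
		Statement:  stmt,
		Parameters: params,
		Result:     &res,
	}

	if err := conn.Cypher(&cq); err != nil {
		return -1, err
	}
	if len(res) == 0 {
		return -1, nil
	}
	return res[0].Count, nil
}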