Related
I had an API I wrote in Python Flask as the backend of a website and app, and it works fine. I recently learned Go and rewrote the whole API in Go. I expected much lower CPU and memory utilization from the Go binary, but MariaDB is now at almost 99% CPU utilization.
I tried limiting the max connections, max timeout, max idle time, and every other option on the driver's GitHub page, still to no avail. I keep the connection as a global variable in the code, and I defer result.Close() after every db.Prepare and db.Query. I know Go is much faster than Python, so it makes sense that it sends more requests to the server, but this is only a test environment; it shouldn't cause that much CPU utilization. Any suggestions on how to deal with MariaDB from Go?
FYI: the original site has been running since 2015 and has at least a million rows of data. I could recreate the database using GORM and reinsert the data, but I really want to stick to plain SQL (no ORM, thank you).
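For reference, the pool limits mentioned above map to these database/sql calls; a minimal sketch, with placeholder values to tune for your workload:

db.SetMaxOpenConns(20)                 // cap concurrent connections to MariaDB
db.SetMaxIdleConns(10)                 // keep a few connections warm for reuse
db.SetConnMaxLifetime(5 * time.Minute) // recycle connections periodically
db.SetConnMaxIdleTime(time.Minute)     // close idle connections (Go 1.15+)

Here is the handler in question: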
func getfulldate(c *fiber.Ctx) error {
pid := c.FormValue("pid")
result, err := db.Prepare("select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName) as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone) as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate from profile p left join (select phID, pID, phone from phonefix group by pID) p2 on p.pID = p2.pID left join (select pID, weight, height from bmifix group by pID) B on p.pID = B.pID, religion r where r.rgID = p.rgID and p.pID = ? ")
defer result.Close()
if err != nil {
return c.JSON(fiber.Map{"access_token": "wrong"})
}
rows, err := result.Query(pid)
defer rows.Close()
if err != nil {
return c.JSON(fiber.Map{"access_token": "wrong"})
}
columns, err := rows.Columns()
if err != nil {
return err
}
count := len(columns)
tableData := make([]map[string]interface{}, 0)
values := make([]interface{}, count)
valuePtrs := make([]interface{}, count)
for rows.Next() {
for i := 0; i < count; i++ {
valuePtrs[i] = &values[i]
}
if err := rows.Scan(valuePtrs...); err != nil {
return err
}
entry := make(map[string]interface{})
for i, col := range columns {
var v interface{}
val := values[i]
b, ok := val.([]byte)
if ok {
v = string(b)
} else {
v = val
}
entry[col] = v
}
tableData = append(tableData, entry)
}
currentTime := time.Now().Format("2006-01-02")
result, err = db.Prepare("select viID,state as done,dob from visitfix where patientID = ?")
defer result.Close()
if err != nil {
return c.JSON(fiber.Map{"access_token": "wrong"})
}
rows, err = result.Query(pid)
defer rows.Close()
if err != nil {
return c.JSON(fiber.Map{"access_token": "wrong"})
}
columns = []string{"viID", "done", "dob"}
count = len(columns)
tableDatas := make([]map[string]interface{}, 0)
values = make([]interface{}, count)
valuePtrs = make([]interface{}, count)
for rows.Next() {
for i := 0; i < count; i++ {
valuePtrs[i] = &values[i]
}
if err := rows.Scan(valuePtrs...); err != nil {
return err
}
entry := make(map[string]interface{})
for i, col := range columns {
var v interface{}
val := values[i]
b, ok := val.([]byte)
if ok {
v = string(b)
} else {
v = val
}
if i == 2 {
var state string
format := "2006-1-2"
datea, err := time.Parse(format, string(b))
if err != nil {
return err
}
mydate := datea.Format("2006-01-02")
if mydate == currentTime {
state = "today"
}
if mydate < currentTime {
state = "older"
}
if mydate > currentTime {
state = "newer"
}
entry["state"] = state
}
entry[col] = v
}
tableDatas = append(tableDatas, entry)
}
alldata := [][]map[string]interface{}{tableData, tableDatas}
dat, err := json.Marshal(alldata)
if err != nil {
return err
}
return c.SendString(string(dat))
}
The Go process in itself should not have any different effect on (any) database than any other language, barring better driver implementations for retrieving the row data (the cursor implementation for MySQL was, and might still be, broken).
As for cursor usage: it can spread the load over a longer time, but you would need a serious amount of data to notice a difference between driver implementations and languages. Compiled languages will be able to put a higher load on the DB at that point in time, but again, this would be rare.
The primary candidate to look at here is most likely indexing. You are querying:
result, err := db.Prepare(`select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName)
as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone)
as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate
from profile p
left join (select phID, pID, phone from phonefix group by pID) p2 on p.pID = p2.pID
left join (select pID, weight, height from bmifix group by pID) B on p.pID = B.pID, religion r
where r.rgID = p.rgID and p.pID = ? `)
You want to have indexes on p.pID, r.rgID, and p.rgID.
The GROUP BY on pID in the left joins might also be executed in a suboptimal fashion (run an EXPLAIN and check the execution plan).
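A sketch of those indexes, assuming the table and column names from the query; the index names are made up, and profile.pID may already be the primary key, in which case it needs no extra index:

// One-off DDL, e.g. in a migration; names and columns are assumptions.
for _, ddl := range []string{
	"CREATE INDEX idx_profile_rgid ON profile (rgID)",
	"CREATE INDEX idx_phonefix_pid ON phonefix (pID)",
	"CREATE INDEX idx_bmifix_pid ON bmifix (pID)",
} {
	if _, err := db.Exec(ddl); err != nil {
		log.Println(err)
	}
}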
An improvement could be to remove the GROUP BY clauses: you don't use any aggregate function, so there is no need for the GROUP BY in the left joins:
result, err := db.Prepare(`select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName)
as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone)
as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate
from profile p
left join (select phID, pID, phone from phonefix) p2 on p.pID = p2.pID
left join (select pID, weight, height from bmifix) B on p.pID = B.pID
left join religion r on r.rgID = p.rgID
where p.pID = ? `)
Since you are always retrieving a single pID, and that seems to be a unique record, the following might be even better (I cannot test it: I don't have your DB): if possible, switch to inner joins, which should outperform the left joins.
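A sketch of that inner-join variant, with the same cannot-test caveat (religion becomes an inner join, since every profile row appears to reference one):

result, err := db.Prepare(`select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName)
as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone)
as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate
from profile p
inner join religion r on r.rgID = p.rgID
left join (select phID, pID, phone from phonefix) p2 on p.pID = p2.pID
left join (select pID, weight, height from bmifix) B on p.pID = B.pID
where p.pID = ? `)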
I'm working on some code that's meant to dump MySQL data to a .csv file. I'd like to pass in a command-line argument that lets the user choose which ID is used in the query, e.g. go run main.go 2 would run the query
SELECT * FROM table where id = 2;
I know that Go has the os package, with which I can do something like:
args := os.Args
if len(args) < 2 {
fmt.Println("Supply ID")
os.Exit(1)
}
testID := os.Args[1]
fmt.Println(testID)
Here's the code I'm currently working on. How can I pass a command-line argument to the Query?
rows, _ := db.Query("SELECT * FROM table where id = ?;")
err := sqltocsv.WriteFile("table.csv",rows)
if err != nil {
panic(err)
}
columns, _ := rows.Columns()
count := len(columns)
values := make([]interface{}, count)
valuePtrs := make([]interface{}, count)
for rows.Next() {
for i := range columns {
valuePtrs[i] = &values[i]
}
rows.Scan(valuePtrs...)
for i, col := range columns {
val := values[i]
b, ok := val.([]byte)
var v interface{}
if ok {
v = string(b)
} else {
v = val
}
fmt.Println(col, v)
}
}
Just add your parameters to Query:
rows, _ := db.Query("SELECT * FROM table where id = ?;", os.Args[1])
testID needs to be an interface{} type, which is then added to Query:
var testID interface{}
testID = os.Args[1]
And add to Query
rows, _ := db.Query("SELECT * FROM table where id = ?;", testID)
Edit:
Why interface{}?
The Query function accepts interface{} values as arguments. More info
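Putting it together, a minimal end-to-end sketch (assuming the go-sql-driver/mysql driver and the joho/sqltocsv package; the DSN and table name are placeholders). Note that sqltocsv.WriteFile consumes the rows, so don't iterate them again afterwards:

package main

import (
	"database/sql"
	"fmt"
	"log"
	"os"

	_ "github.com/go-sql-driver/mysql"
	"github.com/joho/sqltocsv"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Println("Supply ID")
		os.Exit(1)
	}
	db, err := sql.Open("mysql", "user:pass@/dbname") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// the command-line argument is bound to the ? placeholder
	rows, err := db.Query("SELECT * FROM table WHERE id = ?", os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	// WriteFile drains the rows into the CSV file
	if err := sqltocsv.WriteFile("table.csv", rows); err != nil {
		log.Fatal(err)
	}
}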
I inserted into a PostgreSQL table a UUID created with go.uuid:
import (
	"github.com/satori/go.uuid"
)

func main() {
	sStmt := `insert into basicuserinfo (cn, csn, ccn, appUserAccountID)
	          values ($1, $2, $3, $4)`
	stmt, err := db.Prepare(sStmt)
	if err != nil {
		log.Fatal(err)
	}
	defer stmt.Close()

	fmt.Println("# Inserting values")
	usid := uuid.Must(uuid.NewV4())
	fmt.Println(usid.String())
	res, err := stmt.Exec(cn, csn, ccn, usid)
	if err != nil || res == nil {
		log.Fatal(err)
	}
}
And actually a row is inserted in the postgreSQL db:
cn | csn | ccn | id
---+-----+-----+--------------------------------------
   |     |     | 2412fcd3-8712-4a0f-830a-d77bd4cf2195
In order to query with Go variables, I first tried to use a prepared statement.
First, using db.PrepareContext:
sel := "SELECT appUserAccountID FROM basicuserinfo WHERE cn = $1 AND csn = $2 AND ccn = $3"
stmt, err = db.PrepareContext(ctx,sel)
if err != nil {
log.Fatal(err)
}
var utenteID string
err = stmt.QueryRowContext(ctx,cn, csn, ccn).Scan(&utenteID)
log.Printf("ID= %s\n", utenteID)
fmt.Println("aaaaaaaaa") // Just to check that the this line gets executed
Its execution remains "idle", in stand-by, without ever ending or producing any output:
marco#pc01:~/go/marcoGolang/goPostgres$ go run pqExample00.go
# Inserting values
Then I tried this way, with db.Prepare():
stmt, err = db.Prepare(sel)
if err != nil {
log.Fatal(err)
}
res, err = stmt.Exec(cnv, csnv, ccnv)
In this second case the execution seems to end successfully, but I do not know how to convert the sql.Result into a form that can be handled, for example simply printed with fmt.Println() or processed further.
So... how do I correctly query and handle a UUID created with go.uuid and inserted into PostgreSQL 11?
Looking forward to your kind help.
Marco
Unless you have a reason for using PrepareContext, you could just do this:
db, err := sql.Open("postgres", connectionString)
sel := "SELECT appUserAccountID FROM basicuserinfo WHERE cn = $1 AND csn = $2 AND ccn = $3"
var idn string
err = db.QueryRow(sel, cn, csn, ccn).Scan(&idn)
if err != nil {
log.Fatal(err)
}
fmt.Printf("idn: %v\n", idn)
I have the following:
func getSlice(distinctSymbols []string) []symbols {
// Prepare connection
stmt1, err := db.Prepare("Select count(*) from stockticker_daily where symbol = $1;")
checkError(err)
defer stmt1.Close()
stmt2, err := db.Prepare("Select date from stockticker_daily where symbol = $1 order by date asc limit 1;")
checkError(err)
defer stmt2.Close()
var symbolsSlice []symbols
c := make(chan symbols)
for _, symbol := range distinctSymbols {
go worker(symbol, stmt1, stmt2, c)
symbolsFromChannel := <-c
symbolsSlice = append(symbolsSlice, symbolsFromChannel)
}
return symbolsSlice
}
func worker(symbol string, stmt1 *sql.Stmt, stmt2 *sql.Stmt, symbolsChan chan symbols) {
var countdp int
var earliestdate string
row := stmt1.QueryRow(symbol)
if err := row.Scan(&countdp); err != nil {
log.Fatal(err)
}
row = stmt2.QueryRow(symbol)
if err := row.Scan(&earliestdate); err != nil {
log.Fatal(err)
}
symbolsChan <- symbols{symbol, countdp, earliestdate}
}
Please take a look at the first function. I know it won't work as I expect, since the line symbolsFromChannel := <-c will block until it receives from the channel, so the loop that starts the worker goroutines will not continue past the first iteration until then. What is the best or correct way to do this?
Just do the loop twice, e.g.
for _, symbol := range distinctSymbols {
go worker(symbol, stmt1, stmt2, c)
}
for range distinctSymbols {
symbolsSlice = append(symbolsSlice, <-c)
}
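An alternative, not from the answer above, is to let a sync.WaitGroup close the channel once every worker has sent its result, and range over the channel; a sketch using the question's types:

c := make(chan symbols)
var wg sync.WaitGroup
for _, symbol := range distinctSymbols {
	wg.Add(1)
	go func(s string) {
		defer wg.Done()
		worker(s, stmt1, stmt2, c)
	}(symbol)
}
// close c after all workers have delivered their result
go func() {
	wg.Wait()
	close(c)
}()
for s := range c {
	symbolsSlice = append(symbolsSlice, s)
}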
I know that inserting multiple rows at once is more efficient:
INSERT INTO test(n1, n2, n3)
VALUES(v1, v2, v3),(v4, v5, v6),(v7, v8, v9);
How do I do that in Go?
data := []map[string]string{
{"v1":"1", "v2":"1", "v3":"1"},
{"v1":"2", "v2":"2", "v3":"2"},
{"v1":"3", "v2":"3", "v3":"3"},
}
// I do not want to do it this way
for _, v := range data {
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES(?, ?, ?)"
stmt, _ := db.Prepare(sqlStr)
res, _ := stmt.Exec(v["v1"], v["v2"], v["v3"])
}
I could splice the values into the string, but that's not good; db.Prepare is safer, right?
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES"
for k, v := range data {
if k == 0 {
sqlStr += fmt.Sprintf("(%v, %v, %v)", v["v1"], v["v2"], v["v3"])
} else {
sqlStr += fmt.Sprintf(",(%v, %v, %v)", v["v1"], v["v2"], v["v3"])
}
}
res, _ := db.Exec(sqlStr)
I need a safer and more efficient way to insert multiple rows at once.
Why not something like this? (Written here without testing, so there might be syntax errors.)
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES "
vals := []interface{}{}
for _, row := range data {
sqlStr += "(?, ?, ?),"
vals = append(vals, row["v1"], row["v2"], row["v3"])
}
//trim the last ,
sqlStr = sqlStr[0:len(sqlStr)-1]
//prepare the statement
stmt, _ := db.Prepare(sqlStr)
//format all vals at once
res, _ := stmt.Exec(vals...)
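One caveat: MySQL's binary protocol limits a single prepared statement to 65,535 placeholders (the parameter count is a 16-bit field), so very large batches have to be split into chunks.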
For Postgres, lib/pq supports bulk imports: https://godoc.org/github.com/lib/pq#hdr-Bulk_imports
The same can also be achieved with the code below, which is especially helpful when performing a bulk conditional update (change the query accordingly).
To perform similar bulk inserts with Postgres, you can use the following function.
// ReplaceSQL replaces each occurrence of the given pattern with an increasing $n placeholder sequence
func ReplaceSQL(old, searchPattern string) string {
tmpCount := strings.Count(old, searchPattern)
for m := 1; m <= tmpCount; m++ {
old = strings.Replace(old, searchPattern, "$"+strconv.Itoa(m), 1)
}
return old
}
So the above sample becomes:
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES "
vals := []interface{}{}
for _, row := range data {
sqlStr += "(?, ?, ?),"
vals = append(vals, row["v1"], row["v2"], row["v3"])
}
//trim the last ,
sqlStr = strings.TrimSuffix(sqlStr, ",")
//Replacing ? with $n for postgres
sqlStr = ReplaceSQL(sqlStr, "?")
//prepare the statement
stmt, _ := db.Prepare(sqlStr)
//format all vals at once
res, _ := stmt.Exec(vals...)
Gorm V2 (released on 30th August 2020) now supports batch inserts.
// Pass slice data to method Create, GORM will generate a single SQL statement
// to insert all the data and backfill primary key values,
// hook methods will be invoked too.
var users = []User{{Name: "jinzhu1"}, {Name: "jinzhu2"}, {Name: "jinzhu3"}}
DB.Create(&users)
for _, user := range users {
	fmt.Println(user.ID) // 1, 2, 3
}
For more details refer to the official documentation here: https://gorm.io/docs/create.html.
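For very large slices, GORM v2 can also split the insert into fixed-size batches with CreateInBatches; a brief sketch:

// one INSERT statement per batch of 100 users instead of one giant statement
DB.CreateInBatches(users, 100)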
If you enable multi statements, you can execute multiple statements at once. With that, you should be able to handle multiple inserts.
https://github.com/go-sql-driver/mysql#multistatements
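A sketch of what that looks like with go-sql-driver/mysql; the DSN is a placeholder, and note that the driver's README warns that multi statements increase the SQL injection risk, so only embed trusted values:

db, err := sql.Open("mysql", "user:pass@/dbname?multiStatements=true")
if err != nil {
	log.Fatal(err)
}
// two INSERTs sent to the server in a single round trip
_, err = db.Exec("INSERT INTO test(n1, n2, n3) VALUES(1, 2, 3); INSERT INTO test(n1, n2, n3) VALUES(4, 5, 6)")
if err != nil {
	log.Fatal(err)
}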
After extensive research this worked for me:
var values []interface{}
for _, scope := range scopes {
values = append(values, scope.ID, scope.Code, scope.Description)
}
sqlStr := `INSERT INTO scopes (application_id, scope, description) VALUES %s`
sqlStr = setupBindVars(sqlStr, "(?, ?, ?)", len(scopes))
_, err = s.db.ExecContext(ctx, sqlStr, values...)
// setupBindVars replaces the %s in stmt with n comma-separated copies of bindVars
func setupBindVars(stmt, bindVars string, n int) string {
	bindVars += ","
	stmt = fmt.Sprintf(stmt, strings.Repeat(bindVars, n))
	return strings.TrimSuffix(stmt, ",")
}
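For example, with len(scopes) == 3 the statement expands to:

INSERT INTO scopes (application_id, scope, description) VALUES (?, ?, ?),(?, ?, ?),(?, ?, ?)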
From https://gorm.io/docs/create.html#Batch-Insert
Code sample:
var users = []User{{Name: "jinzhu1"}, {Name: "jinzhu2"}, {Name: "jinzhu3"}}
DB.Create(&users)
And this runs the inserts inside a single transaction, so nothing becomes permanent until the commit succeeds:
func insert(requestObj []models.User) (bool, error) {
tx := db.Begin()
defer func() {
if r := recover(); r != nil {
tx.Rollback()
}
}()
for _, obj := range requestObj {
if err := tx.Create(&obj).Error; err != nil {
logging.AppLogger.Errorf("Failed to create user")
tx.Rollback()
return false, err
}
}
err := tx.Commit().Error
if err != nil {
return false, err
}
return true, nil
}
I ended up with this, after combining the feedback on posted answers:
const insertQuery = "INSERT INTO test(n1, n2, n3) VALUES "
const rowBindVars = "(?, ?, ?)"
var inserts []string
var vals []interface{}
for _, row := range data {
	inserts = append(inserts, rowBindVars)
	vals = append(vals, row["v1"], row["v2"], row["v3"])
}
sqlStr := insertQuery + strings.Join(inserts, ",")
// prepare the statement
stmt, _ := db.Prepare(sqlStr)
// close stmt after use
defer stmt.Close()
// bind all vals at once
res, _ := stmt.Exec(vals...)