Go-SQL-Driver causing very high MariaDB CPU utilization - go

I have an API that I originally wrote in Python Flask as the backend for a website and an app, and it works fine. I recently learned Go and rewrote the whole API in Go. I expected much lower CPU and memory utilization from the Go binary, but MariaDB is now at almost 99% CPU utilization.
I tried limiting the max connections, max timeout, max idle time, and the other options from the driver's GitHub page, but it made no difference. I have the connection as a global variable in the code, and I defer result.Close() after every db.Prepare and db.Query. I know Go is much faster than Python, so it makes sense that it sends more requests to the server, but this is only a test environment and shouldn't cause that much CPU utilization. Any suggestions on how to deal with MariaDB in Go?
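For reference, the limits I mean are the database/sql pool settings; the values below are just examples, not the exact ones I used:
db.SetMaxOpenConns(10)
db.SetMaxIdleConns(5)
db.SetConnMaxLifetime(5 * time.Minute)
db.SetConnMaxIdleTime(time.Minute)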
FYI: the original site has been working since 2015 and has well over a million rows of data. I could recreate the database using GORM and insert the data again, but I really want to just use plain SQL (no ORM, thank you).
func getfulldate(c *fiber.Ctx) error {
    pid := c.FormValue("pid")
    result, err := db.Prepare("select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName) as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone) as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate from profile p left join (select phID, pID, phone from phonefix group by pID) p2 on p.pID = p2.pID left join (select pID, weight, height from bmifix group by pID) B on p.pID = B.pID, religion r where r.rgID = p.rgID and p.pID = ? ")
    defer result.Close()
    if err != nil {
        return c.JSON(fiber.Map{"access_token": "wrong"})
    }
    rows, err := result.Query(pid)
    defer rows.Close()
    if err != nil {
        return c.JSON(fiber.Map{"access_token": "wrong"})
    }
    columns, err := rows.Columns()
    if err != nil {
        return err
    }
    count := len(columns)
    tableData := make([]map[string]interface{}, 0)
    values := make([]interface{}, count)
    valuePtrs := make([]interface{}, count)
    for rows.Next() {
        for i := 0; i < count; i++ {
            valuePtrs[i] = &values[i]
        }
        rows.Scan(valuePtrs...)
        entry := make(map[string]interface{})
        for i, col := range columns {
            var v interface{}
            val := values[i]
            b, ok := val.([]byte)
            if ok {
                v = string(b)
            } else {
                v = val
            }
            entry[col] = v
        }
        tableData = append(tableData, entry)
    }
    currentTime := time.Now().Format("2006-01-02")
    result, err = db.Prepare("select viID,state as done,dob from visitfix where patientID = ?")
    defer result.Close()
    if err != nil {
        return c.JSON(fiber.Map{"access_token": "wrong"})
    }
    rows, err = result.Query(pid)
    defer rows.Close()
    if err != nil {
        return c.JSON(fiber.Map{"access_token": "wrong"})
    }
    columns = []string{"viID", "done", "dob"}
    count = len(columns)
    tableDatas := make([]map[string]interface{}, 0)
    values = make([]interface{}, count)
    valuePtrs = make([]interface{}, count)
    for rows.Next() {
        for i := 0; i < count; i++ {
            valuePtrs[i] = &values[i]
        }
        rows.Scan(valuePtrs...)
        entry := make(map[string]interface{})
        for i, col := range columns {
            var v interface{}
            val := values[i]
            b, ok := val.([]byte)
            if ok {
                v = string(b)
            } else {
                v = val
            }
            if i == 2 {
                var state string
                format := "2006-1-2"
                datea, err := time.Parse(format, string(b))
                if err != nil {
                    return err
                }
                mydate := datea.Format("2006-01-02")
                if mydate == currentTime {
                    state = "today"
                }
                if mydate < currentTime {
                    state = "older"
                }
                if mydate > currentTime {
                    state = "newer"
                }
                entry["state"] = state
            }
            entry[col] = v
        }
        tableDatas = append(tableDatas, entry)
    }
    alldata := [][]map[string]interface{}{tableData, tableDatas}
    dat, err := json.Marshal(alldata)
    if err != nil {
        return err
    }
    return c.SendString(string(dat))
}

A Go process in itself should not have any different effect on a (any) database than any other language, barring better driver implementations for retrieving the row data (the MySQL cursor implementation was, and might still be, broken).
As for cursor usage: this can spread the load over a longer time, but you would need a serious amount of data to notice a difference between driver implementations and languages. A compiled language will be able to put a higher load on the DB at a given point in time, but again, this would be rare.
The primary candidate to look at here is most likely indexing. You are querying:
result, err := db.Prepare(`select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName)
as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone)
as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate
from profile p
left join (select phID, pID, phone from phonefix group by pID) p2 on p.pID = p2.pID
left join (select pID, weight, height from bmifix group by pID) B on p.pID = B.pID, religion r
where r.rgID = p.rgID and p.pID = ? `)
You want to have indexes on p.pID, r.rgID and p.rgID.
The GROUP BY on pID in the left joins might also be executed in a suboptimal fashion (run an EXPLAIN and check the execution plan).
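For illustration only, adding the suggested indexes could look roughly like this; the index names are made up, and p.pID may well already be the primary key, in which case skip that one:
// Hypothetical index DDL for the columns mentioned above; adjust names to the real schema.
for _, stmt := range []string{
    "ALTER TABLE profile ADD INDEX idx_profile_pid (pID)",
    "ALTER TABLE profile ADD INDEX idx_profile_rgid (rgID)",
    "ALTER TABLE religion ADD INDEX idx_religion_rgid (rgID)",
} {
    if _, err := db.Exec(stmt); err != nil {
        log.Println("adding index:", err)
    }
}
Prefixing the slow SELECT with EXPLAIN (run once with a real pID) then shows whether MariaDB actually uses those indexes for the joins.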
An improvement could be to remove the GROUP BY clauses: you don't use any aggregate function, so there is no need for the GROUP BY in the left joins:
result, err := db.Prepare(`select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName)
as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone)
as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate
from profile p
left join (select phID, pID, phone from phonefix) p2 on p.pID = p2.pID
left join (select pID, weight, height from bmifix) B on p.pID = B.pID
left join religion r on r.rgID = p.rgID
where p.pID = ? `)
Since you are always retrieving a single pID, and that seems to be a unique record, the following might be even better (I can't test it: I don't have your DB :)): if possible, switch to inner joins; they should outperform a left join.
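A sketch of that inner-join variant (untested, and it assumes every profile row really does have a matching religion row, as the original comma join implies):
result, err := db.Prepare(`select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName)
as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone)
as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate
from profile p
inner join religion r on r.rgID = p.rgID
left join (select pID, phone from phonefix) p2 on p.pID = p2.pID
left join (select pID, weight, height from bmifix) B on p.pID = B.pID
where p.pID = ? `)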

Related

Command line argument for Query

I'm working on some code that's meant to dump MySQL data to a .csv file. I'd like to pass in a command line argument that lets the user choose which ID is used in the query, e.g. go run main.go 2 would run the query
SELECT * FROM table where id = 2;
I know that Go has the os package where I can then pass something like:
args := os.Args
if len(args) < 2 {
fmt.Println("Supply ID")
os.Exit(1)
}
testID := os.Args[1]
fmt.Println(testID)
}
Here's the code I'm currently working on. How can I set a command line argument for the Query?
rows, _ := db.Query("SELECT * FROM table where id = ?;")
err := sqltocsv.WriteFile("table.csv", rows)
if err != nil {
    panic(err)
}
columns, _ := rows.Columns()
count := len(columns)
values := make([]interface{}, count)
valuePtrs := make([]interface{}, count)
for rows.Next() {
    for i := range columns {
        valuePtrs[i] = &values[i]
    }
    rows.Scan(valuePtrs...)
    for i, col := range columns {
        val := values[i]
        b, ok := val.([]byte)
        var v interface{}
        if ok {
            v = string(b)
        } else {
            v = val
        }
        fmt.Println(col, v)
    }
}
}
Just add your parameters to Query:
rows, _ := db.Query("SELECT * FROM table where id = ?;", os.Args[1])
testID needs to be an interface{} type that you add to Query:
var testID interface{}
testID = os.Args[1]
and then add it to Query:
rows, _ := db.Query("SELECT * FROM table where id = ?;", testID)
Edit:
Why interface{}?
The Query function accepts interface{} types as arguments. More info
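Putting it together, a minimal sketch of the whole program might look like this (the DSN and table name are placeholders, the MySQL driver is assumed, and sqltocsv is github.com/joho/sqltocsv as in the question):
package main

import (
    "database/sql"
    "fmt"
    "log"
    "os"

    _ "github.com/go-sql-driver/mysql"
    "github.com/joho/sqltocsv"
)

func main() {
    if len(os.Args) < 2 {
        fmt.Println("Supply ID")
        os.Exit(1)
    }

    // Placeholder DSN; replace with real credentials.
    db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/mydb")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // os.Args[1] is passed as the query parameter.
    rows, err := db.Query("SELECT * FROM table_name WHERE id = ?", os.Args[1])
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()

    if err := sqltocsv.WriteFile("table.csv", rows); err != nil {
        log.Fatal(err)
    }
}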

How to declare a SQL row, if else statement not declared problem

I have some code like this below.
var sql string
if pnt.Type == "newType" {
    sql = `select code, count(*) count from (
select code
from code_table
where start >= ? and end <= ?
union
select code
from code_table
where start >= ? and end <= ?
) a group by code`
    rows, err := pnt.readConn("testdb").Query(sql, start, end, start, end)
} else {
    sql = `select code, count(*) count from code_table where start >= ? and end <= ? group by code`
    rows, err := pnt.readConn("testdb").Query(sql, start, end)
}
if err == nil {
    defer rows.Close()
    for rows.Next() {
        var code, count int
        rows.Scan(&code, &count)
    }
} else {
    log.Println(err)
}
This will give me an error something like this "Variable not declared for rows, err"...
I've tried declaring "var err error" and within the if else statement, I use = instead of :=
something like this
var err error
rows, err = pnt.switchConn("base", "read").Query(sql, start, end)
However, I still can't declare rows, because I get different kinds of errors when I try. I tried declaring it as a string, but no luck.
This is my first time using Go, and the if/else thing is giving me a hard time; why can't I just use := inside an if/else statement? As you can see, I can't use rows, err := outside the if/else statement, because the two branches pass different numbers of parameters.
You are facing issues because of the scope of the variable.
In Golang, := creates a new variable inside a scope.
rows, err := pnt.ReadConn("testdb").Query(sql, start, end, start, end)
This creates new rows and err variables in the if block, which won't be accessible outside the if block.
Shorthand declarations in Go
The fix:
var sqlStr string // named sqlStr so it doesn't shadow the database/sql package
var err error
var rows *sql.Rows
if pnt.Type == "newType" {
    sqlStr = `select code, count(*) count from (
select code
from code_table
where start >= ? and end <= ?
union
select code
from code_table
where start >= ? and end <= ?
) a group by code`
    rows, err = pnt.ReadConn("testdb").Query(sqlStr, start, end, start, end)
} else {
    sqlStr = `select code, count(*) count from code_table where start >= ? and end <= ? group by code`
    rows, err = pnt.ReadConn("testdb").Query(sqlStr, start, end)
}
if err == nil {
    defer rows.Close()
    for rows.Next() {
        var code, count int
        rows.Scan(&code, &count)
    }
} else {
    log.Println(err)
}
In golang ":=" mean you declare a variable and assign them a this value GO will automatically detect his type so :
Exemples
variable := 15
It’s the same
var variable int = 15
So when you do this rows, err := pnt.switchConn("base", "read").Query(sql, start, end, start, end)
} else {
sql =select code, count(*) count from code_table where start >= ? and end <= ?group by code
rows, err := pnt.switchConn("base", "read").Query(sql, start, end)
}
You declare the same variable rows and err twice
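A stripped-down illustration of the same scope rule (db and someCondition are placeholder names here):
var rows *sql.Rows
var err error
if someCondition {
    rows, err = db.Query("select 1") // plain = assigns to the outer variables
} else {
    rows, err = db.Query("select 2")
}
// rows and err are still visible (and set) here; with := inside the branches
// they would have gone out of scope at each closing brace.
if err != nil {
    log.Println(err)
}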

Gorm multiple transactions to remove old search conditions

bg := Db.Begin()
UDebt := make([]UserDebt, 0)
page, _ := strconv.Atoi(c.DefaultPostForm("page", "1"))
limit, _ := strconv.Atoi(c.DefaultPostForm("limit", "20"))
db := Db.Model(&UDebt).Preload("User")
start := c.PostForm("start")
if start != "" {
db = db.Where("datetime >= ?", start)
bg = bg.Where("datetime >= ?", start)
}
debts := make([]UserDebt,0)
bg.Debug().Set("gorm:query_option", "FOR UPDATE").Limit(limit).Offset(page).Find(&debts)
// show sql: SELECT * FROM `user_debt` WHERE (datetime >= '2019-06-16 00:00:00') LIMIT 20 OFFSET 1 FOR UPDATE
// I hope this is a new connection without any conditions.
bg.Debug().Model(&UserBet{}).Where("id in (?)",arrayID).Update("is_read",1)
// show sql: UPDATE `user_bet` SET `is_read` = '1' WHERE (datetime >= '2019-06-16 00:00:00') AND (id in ('17','18','19','20','21','22'))
bg.Commit()
I want the second SQL statement to drop the datetime condition, but it picks up the search condition from the first query. How do I remove this condition while still running both queries inside a transaction?
I'd suggest having two separate query objects:
bg := Db.Begin()
UDebt := make([]UserDebt, 0)
page, _ := strconv.Atoi(c.DefaultPostForm("page", "1"))
limit, _ := strconv.Atoi(c.DefaultPostForm("limit", "20"))
// Use the bg object so this is all done in the transaction
db := bg.Model(&UDebt).Preload("User")
start := c.PostForm("start")
if start != "" {
// Don't change the original bg object
db = db.Where("datetime >= ?", start)
}
debts := make([]UserDebt,0)
// Use the newly created db object to store the query options for that
db.Debug().Set("gorm:query_option", "FOR UPDATE").Limit(limit).Offset(page).Find(&debts)
bg.Debug().Model(&UserBet{}).Where("id in (?)",arrayID).Update("is_read",1)
bg.Commit()

How to append last sql row to a list without replacing previous rows in Golang

This code delivers what is AFAIK valid JSON output ([{},{}]), but each appended row replaces all the previous rows, so the result shows only copies of the last row.
var rows *sql.Rows
rows, err = db.Query(query)
cols, _ := rows.Columns()
colnames, _ := rows.Columns()
vals := make([]interface{}, len(cols))
for i, _ := range cols {
    vals[i] = &cols[i]
}
m := make(map[string]interface{})
for i, val := range vals {
    m[colnames[i]] = val
}
list := make([]map[string]interface{}, 0)
for rows.Next() {
    err = rows.Scan(vals...)
    list = append(list, m)
}
json, _ := json.Marshal(list)
fmt.Fprintf(w, "%s\n", json)
This is what happens behind the scenes looping through the rows:
loop 1: {"ID":"1","NAME":"John Doe"}
loop 2: {"ID":"2","NAME":"Jane Doe"}{"ID":"2","NAME":"Jane Doe"}
loop 3: {"ID":"3","NAME":"Donald Duck"}{"ID":"3","NAME":"Donald Duck"}{"ID":"3","NAME":"Donald Duck"}
The rows.Scan fetches the correct values, but it appends AND replaces all previous values.
The final output is this:
[{"ID":"3","NAME":"Donald Duck"},{"ID":"3","NAME":"Donald Duck"},{"ID":"3","NAME":"Donald Duck"}]
But it should be this:
[{"ID":"1","NAME":"John Doe"},{"ID":"2","NAME":"Jane Doe"},{"ID":"3","NAME":"Donald Duck"}]
What am I doing wrong?
You may downvote this, but please explain why. I am still a newbie on Golang and want to learn.
I fixed it and explained with comments what you did wrong:
// 1. Query
var rows *sql.Rows
rows, err = db.Query(query)
cols, _ := rows.Columns()

// 2. Iterate
list := make([]map[string]interface{}, 0)
for rows.Next() {
    vals := make([]interface{}, len(cols))
    for i, _ := range cols {
        // Previously you assigned vals[i] a pointer to a column name cols[i].
        // This meant that everytime you did rows.Scan(vals),
        // rows.Scan would see pointers to cols and modify them.
        // Since cols are the same for all rows, they shouldn't be modified.
        // Here we assign a pointer to an empty string to vals[i],
        // so rows.Scan can fill it.
        var s string
        vals[i] = &s
        // This is effectively like saying:
        //   var string1, string2 string
        //   rows.Scan(&string1, &string2)
        // Except the above only scans two string columns
        // and we allow as many string columns as the query returned us — len(cols).
    }
    err = rows.Scan(vals...)
    // Don't forget to check errors.
    if err != nil {
        log.Fatal(err)
    }
    // Make a new map before appending it.
    // Remember maps aren't copied by value, so if we declared
    // the map m outside of the rows.Next() loop, we would be appending
    // and modifying the same map for each row, so all rows in list would look the same.
    m := make(map[string]interface{})
    for i, val := range vals {
        m[cols[i]] = val
    }
    list = append(list, m)
}

// 3. Print.
b, _ := json.MarshalIndent(list, "", "\t")
fmt.Printf("%s\n", b)
Don't worry, this was hard for me to understand when I was a beginner as well.
Now, something fun:
var list []map[string]interface{}
rows, err := db.Queryx(query)
for rows.Next() {
row := make(map[string]interface{})
err = rows.MapScan(row)
if err != nil {
log.Fatal(err)
}
list = append(list, row)
}
b, _ := json.MarshalIndent(list, "", "\t")
fmt.Printf("%s\n", b)
This does the same as the code above it, but with sqlx. A bit simpler, no?
sqlx is an extension on top of database/sql with methods to scan rows directly to maps and structs, so you don't have to do that manually.
I think your model looks nicer as a struct:
type Person struct {
ID int
Name string
}
var people []Person
rows, err := db.Queryx(query)
for rows.Next() {
var p Person
err = rows.StructScan(&p)
if err != nil {
log.Fatal(err)
}
people = append(people, p)
}
Don't you think?

How to insert multiple data at once

I know that inserting multiple rows at once is more efficient:
INSERT INTO test(n1, n2, n3)
VALUES(v1, v2, v3),(v4, v5, v6),(v7, v8, v9);
How do I do that in Go?
data := []map[string]string{
{"v1":"1", "v2":"1", "v3":"1"},
{"v1":"2", "v2":"2", "v3":"2"},
{"v1":"3", "v2":"3", "v3":"3"},
}
//I do not want to do it
for _, v := range data {
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES(?, ?, ?)"
stmt, _ := db.Prepare(sqlStr)
res, _ := stmt.Exec(v["v1"], v["v2"], v["v3"])
}
I could use string splicing, but that's not good. db.Prepare is safer, right?
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES"
for k, v := range data {
if k == 0 {
sqlStr += fmt.Sprintf("(%v, %v, %v)", v["v1"], v["v2"], v["v3"])
} else {
sqlStr += fmt.Sprintf(",(%v, %v, %v)", v["v1"], v["v2"], v["v3"])
}
}
res, _ := db.Exec(sqlStr)
I need a safe and efficient way to insert multiple rows at once.
Why not something like this? (Writing this here without testing, so there might be syntax errors):
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES "
vals := []interface{}{}
for _, row := range data {
sqlStr += "(?, ?, ?),"
vals = append(vals, row["v1"], row["v2"], row["v3"])
}
//trim the last ,
sqlStr = sqlStr[0:len(sqlStr)-1]
//prepare the statement
stmt, _ := db.Prepare(sqlStr)
//format all vals at once
res, _ := stmt.Exec(vals...)
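One caveat: a single prepared statement can hold at most 65,535 placeholders in MySQL/MariaDB, so for very large slices it may be safer to send the rows in chunks. A sketch, assuming the same data and db as above:
const chunkSize = 1000
for start := 0; start < len(data); start += chunkSize {
    end := start + chunkSize
    if end > len(data) {
        end = len(data)
    }
    sqlStr := "INSERT INTO test(n1, n2, n3) VALUES "
    vals := []interface{}{}
    for _, row := range data[start:end] {
        sqlStr += "(?, ?, ?),"
        vals = append(vals, row["v1"], row["v2"], row["v3"])
    }
    // trim the last , and send this chunk in one statement
    sqlStr = strings.TrimSuffix(sqlStr, ",")
    if _, err := db.Exec(sqlStr, vals...); err != nil {
        log.Println(err)
    }
}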
For Postgres, lib/pq supports bulk imports: https://godoc.org/github.com/lib/pq#hdr-Bulk_imports
The same can also be achieved with the code below, which is especially helpful when you need to perform a bulk conditional update (change the query accordingly).
For similar bulk inserts with Postgres, you can use the following function.
// ReplaceSQL replaces the instance occurrence of any string pattern with an increasing $n based sequence
func ReplaceSQL(old, searchPattern string) string {
tmpCount := strings.Count(old, searchPattern)
for m := 1; m <= tmpCount; m++ {
old = strings.Replace(old, searchPattern, "$"+strconv.Itoa(m), 1)
}
return old
}
So above sample becomes
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES "
vals := []interface{}{}
for _, row := range data {
sqlStr += "(?, ?, ?),"
vals = append(vals, row["v1"], row["v2"], row["v3"])
}
//trim the last ,
sqlStr = strings.TrimSuffix(sqlStr, ",")
//Replacing ? with $n for postgres
sqlStr = ReplaceSQL(sqlStr, "?")
//prepare the statement
stmt, _ := db.Prepare(sqlStr)
//format all vals at once
res, _ := stmt.Exec(vals...)
Gorm V2 (released on 30th August 2020) now supports batch insert query.
// Pass slice data to method Create, GORM will generate a single SQL statement
// to insert all the data and backfill primary key values,
// hook methods will be invoked too.
var users = []User{{Name: "jinzhu1"}, {Name: "jinzhu2"}, {Name: "jinzhu3"}}
DB.Create(&users)
for _, user := range users {
user.ID // 1,2,3
}
For more details refer to the official documentation here: https://gorm.io/docs/create.html.
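If the slice is very large, GORM v2 also has CreateInBatches, which splits the insert into several statements; the batch size below is arbitrary:
// Inserts the users in batches of 100 rows per INSERT statement.
DB.CreateInBatches(users, 100)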
If you enable multiStatements, then you can execute multiple statements at once. With that, you should be able to handle multiple inserts.
https://github.com/go-sql-driver/mysql#multistatements
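For example, the flag goes into the DSN (the DSN below is an assumed example). Note that the driver's README warns this increases the risk of SQL injection, so the sketch uses literal values rather than ? placeholders:
// Enable multiStatements in the go-sql-driver DSN.
db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/testdb?multiStatements=true")
if err != nil {
    log.Fatal(err)
}
defer db.Close()

// Two INSERTs sent in one call; values are hard-coded here for simplicity.
_, err = db.Exec("INSERT INTO test(n1, n2, n3) VALUES(1, 1, 1); INSERT INTO test(n1, n2, n3) VALUES(2, 2, 2)")
if err != nil {
    log.Fatal(err)
}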
After extensive research this worked for me:
var values []interface{}
for _, scope := range scopes {
values = append(values, scope.ID, scope.Code, scope.Description)
}
sqlStr := `INSERT INTO scopes (application_id, scope, description) VALUES %s`
sqlStr = setupBindVars(sqlStr, "(?, ?, ?)", len(scopes))
_, err = s.db.ExecContext(ctx, sqlStr, values...)
// helper function to replace ? with the right number of sets of bind vars
func setupBindVars(stmt, bindVars string, len int) string {
bindVars += ","
stmt = fmt.Sprintf(stmt, strings.Repeat(bindVars, len))
return strings.TrimSuffix(stmt, ",")
}
From https://gorm.io/docs/create.html#Batch-Insert
Code sample:
var users = []User{{Name: "jinzhu1"}, {Name: "jinzhu2"}, {Name: "jinzhu3"}}
DB.Create(&users)
This runs the inserts inside a transaction, so nothing is committed until tx.Commit() and everything can be rolled back on error:
func insert(requestObj []models.User) (bool, error) {
tx := db.Begin()
defer func() {
if r := recover(); r != nil {
tx.Rollback()
}
}()
for _, obj := range requestObj {
if err := tx.Create(&obj).Error; err != nil {
logging.AppLogger.Errorf("Failed to create user")
tx.Rollback()
return false, err
}
}
err := tx.Commit().Error
if err != nil {
return false, err
}
return true, nil
}
I ended up with this, after combining the feedback on posted answers:
const insertQuery = "INSERT INTO test(n1, n2, n3) VALUES "
const rowValues = "(?, ?, ?)"
var inserts []string
var vals []interface{}
for _, row := range data {
    inserts = append(inserts, rowValues)
    vals = append(vals, row["v1"], row["v2"], row["v3"])
}
sqlStr := insertQuery + strings.Join(inserts, ",")
// prepare the statement
stmt, _ := db.Prepare(sqlStr)
// close stmt after use
defer stmt.Close()
// format all vals at once
res, _ := stmt.Exec(vals...)
