I have an API that I originally wrote in Python with Flask as the backend for a website and app, and it works fine. I recently learned Go and rewrote the whole API in Go. I expected much lower CPU and memory utilization from the Go binary, but now MariaDB sits at almost 99% CPU utilization.
I have tried limiting the max connections, max timeout, max idle time, and every other option documented on the driver's GitHub page, to no avail. The connection is a global variable in the code, and I defer result.Close() after every db.Prepare and db.Query. I know Go is much faster than Python, so it makes sense that it issues more requests to the server, but this is only a test environment; it shouldn't cause that much CPU load. Any suggestions on how to deal with MariaDB from Go?
FYI: the original site has been running since 2015 and has well over a million rows of data. I could recreate the database with GORM and reinsert the data, but I really want to stick with plain SQL (no ORM, thank you).
func getfulldate(c *fiber.Ctx) error {
	pid := c.FormValue("pid")
	// Check the Prepare error before deferring Close: if Prepare fails,
	// result is nil and the deferred Close would panic.
	result, err := db.Prepare("select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName) as fullname, gender, bID, married, barcode, comment, address, if(p2.phone is null, 0, p2.phone) as phone, rName, occupation, weight, height, cast(Birthdate as date) as Birthdate from profile p left join (select phID, pID, phone from phonefix group by pID) p2 on p.pID = p2.pID left join (select pID, weight, height from bmifix group by pID) B on p.pID = B.pID, religion r where r.rgID = p.rgID and p.pID = ?")
	if err != nil {
		return c.JSON(fiber.Map{"access_token": "wrong"})
	}
	defer result.Close()
	rows, err := result.Query(pid)
	if err != nil {
		return c.JSON(fiber.Map{"access_token": "wrong"})
	}
	defer rows.Close()
	columns, err := rows.Columns()
	if err != nil {
		return err
	}
	count := len(columns)
	tableData := make([]map[string]interface{}, 0)
	values := make([]interface{}, count)
	valuePtrs := make([]interface{}, count)
	for rows.Next() {
		for i := 0; i < count; i++ {
			valuePtrs[i] = &values[i]
		}
		if err := rows.Scan(valuePtrs...); err != nil {
			return err
		}
		entry := make(map[string]interface{})
		for i, col := range columns {
			var v interface{}
			val := values[i]
			if b, ok := val.([]byte); ok {
				v = string(b)
			} else {
				v = val
			}
			entry[col] = v
		}
		tableData = append(tableData, entry)
	}
	if err := rows.Err(); err != nil {
		return err
	}

	currentTime := time.Now().Format("2006-01-02")
	visits, err := db.Prepare("select viID, state as done, dob from visitfix where patientID = ?")
	if err != nil {
		return c.JSON(fiber.Map{"access_token": "wrong"})
	}
	defer visits.Close()
	visitRows, err := visits.Query(pid)
	if err != nil {
		return c.JSON(fiber.Map{"access_token": "wrong"})
	}
	defer visitRows.Close()
	columns = []string{"viID", "done", "dob"}
	count = len(columns)
	tableDatas := make([]map[string]interface{}, 0)
	values = make([]interface{}, count)
	valuePtrs = make([]interface{}, count)
	for visitRows.Next() {
		for i := 0; i < count; i++ {
			valuePtrs[i] = &values[i]
		}
		if err := visitRows.Scan(valuePtrs...); err != nil {
			return err
		}
		entry := make(map[string]interface{})
		for i, col := range columns {
			var v interface{}
			val := values[i]
			b, ok := val.([]byte)
			if ok {
				v = string(b)
			} else {
				v = val
			}
			// Column 2 is dob; parse it only when the driver returned bytes,
			// otherwise b would still hold data from a previous column.
			if i == 2 && ok {
				datea, err := time.Parse("2006-01-02", string(b))
				if err != nil {
					return err
				}
				var state string
				switch mydate := datea.Format("2006-01-02"); {
				case mydate == currentTime:
					state = "today"
				case mydate < currentTime:
					state = "older"
				default:
					state = "newer"
				}
				entry["state"] = state
			}
			entry[col] = v
		}
		tableDatas = append(tableDatas, entry)
	}
	if err := visitRows.Err(); err != nil {
		return err
	}

	alldata := [][]map[string]interface{}{tableData, tableDatas}
	dat, err := json.Marshal(alldata)
	if err != nil {
		return err
	}
	return c.SendString(string(dat))
}
The Go process in itself should not have any different effect on the database than any other language, barring better driver implementations for retrieving row data (the cursor implementation for MySQL was, and might still be, broken).
As for cursor usage: it can spread the load over a longer time, but you would need a serious amount of data to notice a difference between driver implementations and languages. A compiled language will be able to put a higher load on the database at a given point in time, but again, that would be rare.
The primary candidate to look at here is most likely indexing. You are querying:
result, err := db.Prepare(`select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName)
as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone)
as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate
from profile p
left join (select phID, pID, phone from phonefix group by pID) p2 on p.pID = p2.pID
left join (select pID, weight, height from bmifix group by pID) B on p.pID = B.pID, religion r
where r.rgID = p.rgID and p.pID = ? `)
You want to have indexes on p.pID, r.rgID and p.rgID.
The GROUP BY on pID in the left-joined subqueries might also be executed in a suboptimal fashion (run an EXPLAIN and check the execution plan).
An improvement could be to remove the GROUP BY clauses: you are not using any aggregate function, so the GROUP BY in the left joins serves no purpose:
result, err := db.Prepare(`select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName)
as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone)
as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate
from profile p
left join (select phID, pID, phone from phonefix) p2 on p.pID = p2.pID
left join (select pID, weight, height from bmifix) B on p.pID = B.pID
left join religion r on r.rgID = p.rgID
where p.pID = ? `)
Since you are always retrieving a single pID, and that seems to be a unique record, the following might be even better (I cannot test it, as I don't have your DB): if possible, switch to inner joins, which should outperform the left joins.
I wrote a very simple database query function in Go. The pseudocode is like this:
var ctx = context.Background()

conn, err := sql.Open("firebirdsql", "SYSDBA:masterkey@localhost/"+BdLocation)
if err != nil {
	log.Fatal(err)
}
defer conn.Close()

rows, err := conn.QueryContext(ctx, "<sql query>")
// my first attempt was to detect an empty result here
switch {
case err == sql.ErrNoRows:
	< empty result logic >
	rows.Close()
	return
case err != nil:
	log.Fatal(err)
}
defer rows.Close()

// very 'primitive' logic, the only one that worked
var count int
for rows.Next() {
	< process row >
	count++
}

// tried to detect the empty result here as well
if err := rows.Err(); err != nil {
	if err == sql.ErrNoRows {
		log.Println("Zero rows found")
	} else {
		panic(err)
	}
}

// this worked
if count == 0 {
	< empty result logic >
} else {
	< return result for processing >
}
It compiles and it works, but only because I use the count variable as a flag. I think I am following the documentation (and examples) correctly, yet err is always nil, even with an empty result. As a test I inserted syntax errors into the query, and those were detected.
Am I missing something, or does it just not work the way I think it does?
Thanks for your help.
You can check for sql.ErrNoRows (https://golang.org/pkg/database/sql/#pkg-variables), but note that the standard library only returns it from QueryRow(...).Scan(...); a plain Query that matches no rows returns a nil error, and rows.Next() simply yields false on the first call.
And as you are using a Firebird SQL driver, it could also be some other error. We assume that every driver implements and returns what the database/sql contract specifies, but that may not be true.
Check the actual error and verify it against the Firebird driver.
I'm working on some code that dumps MySQL data to a .csv file. I'd like to pass in a command-line argument that lets the user choose which ID is used in the query, e.g. go run main.go 2 would run the query
SELECT * FROM table where id = 2;
I know that Go has the os package, where I can do something like:
args := os.Args
if len(args) < 2 {
	fmt.Println("Supply ID")
	os.Exit(1)
}
testID := os.Args[1]
fmt.Println(testID)
Here's the code I'm currently working on. How can I pass a command-line argument into the query?
rows, _ := db.Query("SELECT * FROM table where id = ?;")
err := sqltocsv.WriteFile("table.csv", rows)
if err != nil {
	panic(err)
}
columns, _ := rows.Columns()
count := len(columns)
values := make([]interface{}, count)
valuePtrs := make([]interface{}, count)
for rows.Next() {
	for i := range columns {
		valuePtrs[i] = &values[i]
	}
	rows.Scan(valuePtrs...)
	for i, col := range columns {
		val := values[i]
		var v interface{}
		if b, ok := val.([]byte); ok {
			v = string(b)
		} else {
			v = val
		}
		fmt.Println(col, v)
	}
}
Just add your parameters to Query:
rows, _ := db.Query("SELECT * FROM table where id = ?;", os.Args[1])
Alternatively, store testID as an interface{} and pass it to Query:
var testID interface{}
testID = os.Args[1]
And add it to Query:
rows, _ := db.Query("SELECT * FROM table where id = ?;", testID)
Edit:
Why interface{}? Query accepts variadic interface{} arguments, so any value can be passed, including a plain string such as os.Args[1] with no conversion. More info
bg := Db.Begin()
UDebt := make([]UserDebt, 0)
page, _ := strconv.Atoi(c.DefaultPostForm("page", "1"))
limit, _ := strconv.Atoi(c.DefaultPostForm("limit", "20"))
db := Db.Model(&UDebt).Preload("User")
start := c.PostForm("start")
if start != "" {
	db = db.Where("datetime >= ?", start)
	bg = bg.Where("datetime >= ?", start)
}
debts := make([]UserDebt, 0)
bg.Debug().Set("gorm:query_option", "FOR UPDATE").Limit(limit).Offset(page).Find(&debts)
// show sql: SELECT * FROM `user_debt` WHERE (datetime >= '2019-06-16 00:00:00') LIMIT 20 OFFSET 1 FOR UPDATE

// I hope this is a new connection without any conditions.
bg.Debug().Model(&UserBet{}).Where("id in (?)", arrayID).Update("is_read", 1)
// show sql: UPDATE `user_bet` SET `is_read` = '1' WHERE (datetime >= '2019-06-16 00:00:00') AND (id in ('17','18','19','20','21','22'))
bg.Commit()
I want the second SQL statement to run without the datetime condition, but it inherits the search condition from the first query. How do I remove this condition while still running inside the transaction?
I'd suggest having two separate query objects:
bg := Db.Begin()
UDebt := make([]UserDebt, 0)
page, _ := strconv.Atoi(c.DefaultPostForm("page", "1"))
limit, _ := strconv.Atoi(c.DefaultPostForm("limit", "20"))

// Use the bg object so this is all done in the transaction
db := bg.Model(&UDebt).Preload("User")
start := c.PostForm("start")
if start != "" {
	// Don't change the original bg object
	db = db.Where("datetime >= ?", start)
}
debts := make([]UserDebt, 0)

// Use the newly created db object, which carries the query conditions
db.Debug().Set("gorm:query_option", "FOR UPDATE").Limit(limit).Offset(page).Find(&debts)
bg.Debug().Model(&UserBet{}).Where("id in (?)", arrayID).Update("is_read", 1)
bg.Commit()
I want to run a function until it returns 0.
value, _ := FuncX()
if value != 0 {
	value, _ := FuncX()
	if value != 0 {
		value, _ := FuncX()
		if value != 0 ....
	}
}
seems like a pretty ugly way to do it. What's a better way?
Here is a more complex loop header than the others offered, although having nothing in the loop body may trigger coder OCD:
for value, _ := FuncX(); value != 0; value, _ = FuncX() {
}
In fact, this is usually how I read files line by line in Go:
// Assume we have some bufio.Reader named buf already created
for line, err := buf.ReadString('\n'); err == nil; line, err = buf.ReadString('\n') {
	// Do stuff with the line.
}
If you need line or err outside the loop you just predeclare them and replace the := with =.
for {
	value, _ := FuncX()
	if value == 0 {
		break
	}
}
You can use a loop like:
value, _ := FuncX()
for value != 0 {
	value, _ = FuncX() // note the = rather than := so value is reassigned, not shadowed
}