Gorm multiple transactions to remove old search conditions - go

bg := Db.Begin()
UDebt := make([]UserDebt, 0)
page, _ := strconv.Atoi(c.DefaultPostForm("page", "1"))
limit, _ := strconv.Atoi(c.DefaultPostForm("limit", "20"))
db := Db.Model(&UDebt).Preload("User")
start := c.PostForm("start")
if start != "" {
db = db.Where("datetime >= ?", start)
bg = bg.Where("datetime >= ?", start)
}
debts := make([]UserDebt,0)
bg.Debug().Set("gorm:query_option", "FOR UPDATE").Limit(limit).Offset(page).Find(&debts)
// show sql: SELECT * FROM `user_debt` WHERE (datetime >= '2019-06-16 00:00:00') LIMIT 20 OFFSET 1 FOR UPDATE
// I hope this is a new connection without any conditions.
bg.Debug().Model(&UserBet{}).Where("id in (?)",arrayID).Update("is_read",1)
// show sql: UPDATE `user_bet` SET `is_read` = '1' WHERE (datetime >= '2019-06-16 00:00:00') AND (id in ('17','18','19','20','21','22'))
bg.Commit()
I want the second SQL statement to drop the datetime condition, but it picks up the search condition from the first query. How do I remove this condition while still running both statements in the same transaction?

I'd suggest having two separate query objects:
bg := Db.Begin()
UDebt := make([]UserDebt, 0)
page, _ := strconv.Atoi(c.DefaultPostForm("page", "1"))
limit, _ := strconv.Atoi(c.DefaultPostForm("limit", "20"))
// Use the bg object so this is all done in the transaction
db := bg.Model(&UDebt).Preload("User")
start := c.PostForm("start")
if start != "" {
// Don't change the original bg object
db = db.Where("datetime >= ?", start)
}
debts := make([]UserDebt,0)
// Use the newly created db object to store the query options for that
db.Debug().Set("gorm:query_option", "FOR UPDATE").Limit(limit).Offset(page).Find(&debts)
bg.Debug().Model(&UserBet{}).Where("id in (?)",arrayID).Update("is_read",1)
bg.Commit()
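This works because in GORM v1 every chain method such as Where returns a new *gorm.DB, so the transaction object only accumulates a condition when you assign the result back to it, which is exactly what the bg = bg.Where(...) line in the question did. A minimal sketch of the principle (same names as above):
tx := Db.Begin()
filtered := tx.Where("datetime >= ?", start) // new query object; tx itself stays condition-free
filtered.Find(&debts)                        // SELECT ... WHERE datetime >= ...
tx.Model(&UserBet{}).Where("id in (?)", arrayID).Update("is_read", 1) // no datetime condition
tx.Commit()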

Related

Go-SQL-Driver causing maria-db CPU utilization very high

I had an API that I wrote in Python Flask for the backend of a website and app, and it works fine. I recently learned Go and rewrote the whole API in Go. I expected much lower CPU and memory utilization from the Go binary, but MariaDB is now at almost 99% utilization.
I tried limiting max connections, max timeout, max idle time, and the other options from the driver's GitHub page, but to no avail. I have the connection as a global variable in the code, and I defer result.Close() after every db.Prepare and db.Query. I know Go is much faster than Python, so it makes sense that it sends more requests to the server, but this is only a test environment and it shouldn't cause that much CPU utilization. Any suggestion on how to deal with MariaDB in Go?
FYI: the original site has been working since 2015 and has well over a million rows of data. I could recreate the database using GORM and insert the data again, but I really want to just use plain SQL (no ORM, thank you).
func getfulldate(c *fiber.Ctx) error {
pid := c.FormValue("pid")
result, err := db.Prepare("select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName) as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone) as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate from profile p left join (select phID, pID, phone from phonefix group by pID) p2 on p.pID = p2.pID left join (select pID, weight, height from bmifix group by pID) B on p.pID = B.pID, religion r where r.rgID = p.rgID and p.pID = ? ")
if err != nil {
return c.JSON(fiber.Map{"access_token": "wrong"})
}
// defer the Close only after checking err; Close on a nil statement would panic
defer result.Close()
rows, err := result.Query(pid)
if err != nil {
return c.JSON(fiber.Map{"access_token": "wrong"})
}
defer rows.Close()
columns, err := rows.Columns()
if err != nil {
return err
}
count := len(columns)
tableData := make([]map[string]interface{}, 0)
values := make([]interface{}, count)
valuePtrs := make([]interface{}, count)
for rows.Next() {
for i := 0; i < count; i++ {
valuePtrs[i] = &values[i]
}
rows.Scan(valuePtrs...)
entry := make(map[string]interface{})
for i, col := range columns {
var v interface{}
val := values[i]
b, ok := val.([]byte)
if ok {
v = string(b)
} else {
v = val
}
entry[col] = v
}
tableData = append(tableData, entry)
}
currentTime := time.Now().Format("2006-01-02")
result, err = db.Prepare("select viID,state as done,dob from visitfix where patientID = ?")
if err != nil {
return c.JSON(fiber.Map{"access_token": "wrong"})
}
defer result.Close()
rows, err = result.Query(pid)
if err != nil {
return c.JSON(fiber.Map{"access_token": "wrong"})
}
defer rows.Close()
columns = []string{"viID", "done", "dob"}
count = len(columns)
tableDatas := make([]map[string]interface{}, 0)
values = make([]interface{}, count)
valuePtrs = make([]interface{}, count)
for rows.Next() {
for i := 0; i < count; i++ {
valuePtrs[i] = &values[i]
}
rows.Scan(valuePtrs...)
entry := make(map[string]interface{})
for i, col := range columns {
var v interface{}
val := values[i]
b, ok := val.([]byte)
if ok {
v = string(b)
} else {
v = val
}
if i == 2 {
var state string
format := "2006-1-2"
datea, err := time.Parse(format, string(b))
if err != nil {
return err
}
mydate := datea.Format("2006-01-02")
if mydate == currentTime {
state = "today"
}
if mydate < currentTime {
state = "older"
}
if mydate > currentTime {
state = "newer"
}
entry["state"] = state
}
entry[col] = v
}
tableDatas = append(tableDatas, entry)
}
alldata := [][]map[string]interface{}{tableData, tableDatas}
dat, err := json.Marshal(alldata)
if err != nil {
return err
}
return c.SendString(string(dat))
}
The Go process in itself should not affect the database any differently than any other language, barring better driver implementations for retrieving row data (the cursor implementation for MySQL was, and might still be, broken).
As for cursor usage: it can spread the load over a longer time, but you would need a serious amount of data to notice a difference between driver implementations and languages. Compiled languages can put a higher load on the DB at a given point in time, but again: this would be rare.
The primary candidate to look at here is most likely indexing. You are querying:
result, err := db.Prepare(`select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName)
as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone)
as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate
from profile p
left join (select phID, pID, phone from phonefix group by pID) p2 on p.pID = p2.pID
left join (select pID, weight, height from bmifix group by pID) B on p.pID = B.pID, religion r
where r.rgID = p.rgID and p.pID = ? `)
You want to have indexes on p.pID, r.rgID and p.rgID.
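For instance (the index names here are made up, and a primary key on pID may already cover some of these):
ALTER TABLE profile ADD INDEX idx_profile_rgid (rgID);
ALTER TABLE religion ADD INDEX idx_religion_rgid (rgID);
ALTER TABLE phonefix ADD INDEX idx_phonefix_pid (pID);
ALTER TABLE bmifix ADD INDEX idx_bmifix_pid (pID);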
Also, the GROUP BY on pID in the left joins might be executed in a suboptimal fashion (run an EXPLAIN and check the execution plan).
An improvement could be to remove the GROUP BY clauses: you don't use any aggregate function, so there is no need for the GROUP BY in the left joins:
result, err := db.Prepare(`select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName)
as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone)
as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate
from profile p
left join (select phID, pID, phone from phonefix) p2 on p.pID = p2.pID
left join (select pID, weight, height from bmifix) B on p.pID = B.pID
left join religion r on r.rgID = p.rgID
where p.pID = ? `)
Since you are always retrieving a single pID, and that seems to be a unique record, the following might be even better (I can't test it: I don't have your DB :)): if possible, switch to inner joins; they should outperform the left joins.
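A sketch of that inner-join variant (untested; only equivalent if every profile is guaranteed matching phonefix and bmifix rows, otherwise keep the left joins for those two tables):
result, err := db.Prepare(`select concat(p.firstName, ' ', p.middle, ' ', p.lastName, ' ', p.forthName)
as fullname,gender,bID,married,barcode,comment,address,if(p2.phone is null, 0, p2.phone)
as phone,rName,occupation,weight,height,cast(Birthdate as date) as Birthdate
from profile p
inner join phonefix p2 on p.pID = p2.pID
inner join bmifix B on p.pID = B.pID
inner join religion r on r.rgID = p.rgID
where p.pID = ?`)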

Data inconsistency when writing to Galera-clustered MariaDB with go-sql-driver/mysql

Description
Data inconsistency is observed when two Go programs are concurrently writing to the same DB tables through different MariaDB instances. The MariaDB instances are clustered using Galera.
The test database looks as follows:
CREATE TABLE `Inc` (
`Id` int(11) NOT NULL,
`Cnt` int(11) NOT NULL DEFAULT 0,
PRIMARY KEY (`Id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
CREATE TABLE `Role` (
`Name` varchar(64) COLLATE utf8_unicode_ci DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
INSERT INTO `Inc` (`Id`, `Cnt`) VALUES ('1', 0);
The Go program runs 10 goroutines, each of which performs the following DB operations:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
SELECT `Cnt` FROM `Inc` WHERE `Id`=1 FOR UPDATE;
INSERT INTO `Role` (`Name`) VALUES(<some string unique to this transaction>);
UPDATE `Inc` SET `Cnt`=<previous select result + 1> WHERE `Id`=1 and `Cnt`=<previous select result>;
COMMIT;
Simply put, the golang program reads Inc.Cnt, inserts a unique row to Role, and then increases Inc.Cnt by one. When the program completes, Inc.Cnt should be identical to the number of rows in Role:
mysql> START TRANSACTION; SELECT `Cnt` FROM `Inc`; SELECT COUNT(*) FROM `Role`; COMMIT;
Query OK, 0 rows affected (0.27 sec)
+------+
| Cnt |
+------+
| 30 |
+------+
1 row in set (0.27 sec)
+----------+
| count(*) |
+----------+
| 30 |
+----------+
1 row in set (0.27 sec)
Query OK, 0 rows affected (0.27 sec)
But if two programs run concurrently against two different MariaDB instances, Inc.Cnt becomes smaller than the number of rows in Role. Stranger still, sometimes the update statement returns no changed rows, as if the current transaction were not isolated from other transactions. Furthermore, even if I make the program roll back when zero changed rows are returned, the row inserted in the same transaction still remains.
I first suspected that this is because multi-master Galera does not support the SERIALIZABLE isolation level. It does support the SNAPSHOT isolation level, which checks for transaction conflicts at commit time. With this optimistic approach, it's understandable that concurrent writes lead to many deadlocks. But I still think data consistency should not break, even at the SNAPSHOT isolation level.
Is this expected behaviour of a Galera cluster, or a mistake in the Galera configuration? Or a mistake in using go-sql-driver/mysql, or a known limitation/bug? I didn't see the above anomaly when the number of goroutines was one.
Any suggestion will be appreciated.
Environment
Golang version: go1.13.5 linux/amd64
MySQL library: go-sql-driver/mysql v1.5.0
MariaDB docker image: mariadb:10.4.13
Num of goroutines per program: 10 (when set to 1, no inconsistency happens)
MariaDB configuration
[mysqld]
user = mysql
server_id = 11 # unique to this node in the Galera cluster
port = 13306
collation-server = utf8_unicode_ci
character-set-server = utf8
skip-character-set-client-handshake
skip_name_resolve = ON
default_storage_engine = InnoDB
binlog_format = ROW
log_bin = /var/log/mysql/mysql-bin.log
relay_log = /var/log/mysql/mysql-relay-bin.log
log_slave_updates = on
innodb_flush_log_at_trx_commit = 0
innodb_flush_method = O_DIRECT
innodb_file_per_table = 1
innodb_autoinc_lock_mode = 2
innodb_lock_schedule_algorithm = FCFS # MariaDB >10.1.19 and >10.2.3 only
innodb_rollback_on_timeout = 1
innodb_print_all_deadlocks = ON
bind-address = 0.0.0.0
wsrep_on = ON
wsrep_provider = /usr/lib/libgalera_smm.so
wsrep_sst_method = mariabackup
wsrep_gtid_mode = ON
wsrep_gtid_domain_id = 9999
wsrep_cluster_name = mycluster
wsrep_cluster_address = gcomm://172.24.50.27:14567,172.24.52.27:14567,172.24.54.27:14567 # all three MariaDB instances that constitute this Galera cluster
wsrep_sst_auth = repl:secret
wsrep_node_address = 172.24.50.27:14567
wsrep_provider_options = "ist.recv_addr=172.24.50.27:14568;socket.ssl_cert=/etc/mysql/certificates/maria-server-cert.pem;socket.ssl_key=/etc/mysql/certificates/maria-server-key.pem;socket.ssl_ca=/etc/mysql/certificates/maria-ca.pem"
wsrep_sst_receive_address = 172.24.50.27:14444
wsrep_log_conflicts = ON
gtid_domain_id = 9011 # unique to this node in the Galera cluster
ssl_cert = /etc/mysql/certificates/maria-server-cert.pem
ssl_key = /etc/mysql/certificates/maria-server-key.pem
ssl_ca = /etc/mysql/certificates/maria-ca.pem
# File Key Management
plugin_load_add = file_key_management
file_key_management_filename = /etc/mysql/encryption/keyfile.enc
file_key_management_filekey = FILE:/etc/mysql/encryption/keyfile.key
file_key_management_encryption_algorithm = AES_CTR
# Enables table encryption, but allows unencrypted tables to be created
innodb_encrypt_tables = OFF
# Encrypt the Redo Log
innodb_encrypt_log = ON
# Binary Log Encryption
encrypt_binlog=ON
Golang writer
package main
import (
"context"
"database/sql"
"fmt"
"github.com/go-sql-driver/mysql"
_ "github.com/go-sql-driver/mysql"
"os"
"strconv"
"sync"
)
func task(ctx context.Context, prefix string, num int, c *sql.Conn) error {
_, err := c.ExecContext(ctx, "SET SESSION wsrep_sync_wait=15")
if err != nil {
return err
}
tx, err := c.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable})
if err != nil {
return err
}
var count int
incRes, err := tx.Query("SELECT Cnt FROM Inc WHERE Id=? FOR UPDATE", 1)
if err != nil {
tx.Rollback()
return err
}
defer incRes.Close()
for incRes.Next() {
err := incRes.Scan(&count)
if err != nil {
tx.Rollback()
return err
}
}
res, err := tx.ExecContext(ctx, "INSERT INTO Role (Name) VALUES (?)", fmt.Sprintf("%s-%03d", prefix, num))
if err != nil {
tx.Rollback()
return err
}
affected, err := res.RowsAffected()
if err != nil {
tx.Rollback()
return err
}
if affected != 1 {
tx.Rollback()
return fmt.Errorf("inconsistency: inserted row is %d", affected)
}
res, err = tx.ExecContext(ctx, "UPDATE Inc SET Cnt = ? WHERE Id=? AND Cnt=?", count+1, 1, count)
if err != nil {
tx.Rollback()
return err
}
affected, err = res.RowsAffected()
if err != nil {
tx.Rollback()
return err
}
if affected != 1 {
tx.Rollback()
return fmt.Errorf("inconsistency: inserted row is %d", affected)
}
err = tx.Commit()
if err != nil {
tx.Rollback()
return err
}
return nil
}
func main() {
if len(os.Args) != 5 {
fmt.Println("Usage: %s <db host> <db port> <count>")
return
}
host := os.Args[1]
port, err := strconv.Atoi(os.Args[2])
if err != nil {
panic(err)
}
count, err := strconv.Atoi(os.Args[3])
if err != nil {
panic(err)
}
prefix := os.Args[4]
db := "test"
user := "root"
pwd := "secret"
parallelism := 10
driver := "mysql"
connstr := fmt.Sprintf("%s:%s#tcp(%s:%d)/%s", user, pwd, host, port, db)
dbconn, err := sql.Open(driver, connstr)
if err != nil {
panic(err)
}
defer dbconn.Close()
wg := sync.WaitGroup{}
for thread := 0; thread < parallelism; thread++ {
wg.Add(1)
go func(thread int) {
ctx := context.Background()
defer wg.Done()
for i := 0; i<count; i++ {
conn, err := dbconn.Conn(ctx)
if err != nil {
fmt.Println(err)
}
err = task(ctx, fmt.Sprintf("%s-%d", prefix, thread), i, conn)
if err != nil {
fmt.Printf("error for %s-%d-%d: %s\n",prefix, thread, i, err)
_, ok := err.(*mysql.MySQLError)
if !ok {
break
}
}
conn.Close()
}
}(thread)
}
wg.Wait()
}
Run the program
$ go build
$ ./native 172.24.54.27 13306 50 first
EDITED
As suggested, I added the session configuration SET SESSION wsrep_sync_wait=15 before starting the transaction. The SQL statements are sent as expected (I checked with Wireshark), but that doesn't change the test results.
EDITED 2
After adding the go-sql-driver option interpolateParams=true, we no longer see data inconsistency. That option avoids prepared statements, making each statement fully specified. We suspect that prepared statements are not handled as expected in a Galera multi-master situation. The changed code is as follows:
driver := "mysql"
connstr := fmt.Sprintf("%s:%s@tcp(%s:%d)/%s?interpolateParams=true", user, pwd, host, port, db)
It seems that avoiding prepared statements is a workaround. We'd like to know whether this is a limitation of Galera cluster or our misconfiguration/misuse, so this question is not answered yet.
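One side note on the deadlocks mentioned above: Galera's optimistic certification reports write conflicts as deadlock errors (MySQL error number 1213), so a hot-row workload like this generally needs a retry loop regardless of the consistency question. A hedged sketch (the helper is hypothetical, not part of the driver; it assumes "errors" and go-sql-driver/mysql are imported):
// retryOnConflict retries f while it fails with a deadlock error (1213),
// which Galera uses to signal certification conflicts.
func retryOnConflict(attempts int, f func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		var mysqlErr *mysql.MySQLError
		if !errors.As(err, &mysqlErr) || mysqlErr.Number != 1213 {
			return err // not a certification conflict; don't retry
		}
	}
	return err
}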

How to declare a SQL row, if else statement not declared problem

I have a code like this below.
var sql string
if pnt.Type == "newType" {
sql = `select code, count(*) count from (
select code
from code_table
where start >= ? and end <= ?
union
select code
from code_table
where start >= ? and end <= ?
) a group by code`
rows, err := pnt.readConn("testdb").Query(sql, start, end, start, end)
} else {
sql = `select code, count(*) count from code_table where start >= ? and end <= ? group by code`
rows, err := pnt.readConn("testdb").Query(sql, start, end)
}
if err == nil {
defer rows.Close()
for rows.Next() {
var code, count int
rows.Scan(&code, &count)
}
} else {
log.Println(err)
}
This will give me an error something like this "Variable not declared for rows, err"...
I've tried declaring "var err error" and within the if else statement, I use = instead of :=
something like this
var err error
rows, err = pnt.switchConn("base", "read").Query(sql, start, end)
However, I still can't declare rows, because I get a different kind of error for that. I tried declaring it as a string, but no luck.
This is my first time using Go, and the if/else thing is giving me a hard time. Why can't I just use := inside an if/else statement? As you can see, I can't use rows, err := outside the if/else statement, because the two branches take different numbers of parameters.
You are facing issues because of variable scope.
In Go, := creates a new variable inside a scope.
rows, err := pnt.ReadConn("testdb").Query(sql, start, end, start, end)
creates new rows and err variables inside the if block, which won't be accessible outside it.
Shorthand declarations in Go
The fix,
var rows *sql.Rows // declared first: the sql string variable below would otherwise shadow the sql package name
var sql string
var err error
if pnt.Type == "newType" {
sql = `select code, count(*) count from (
select code
from code_table
where start >= ? and end <= ?
union
select code
from code_table
where start >= ? and end <= ?
) a group by code`
rows, err = pnt.ReadConn("testdb").Query(sql, start, end, start, end)
} else {
sql = `select code, count(*) count from code_table where start >= ? and end <= ? group by code`
rows, err = pnt.ReadConn("testdb").Query(sql, start, end)
}
if err == nil {
defer rows.Close()
for rows.Next() {
var code, count int
rows.Scan(&code, &count)
}
} else {
log.Println(err)
}
In golang ":=" mean you declare a variable and assign them a this value GO will automatically detect his type so :
Exemples
variable := 15
It’s the same
var variable int = 15
So when you write:
rows, err := pnt.switchConn("base", "read").Query(sql, start, end, start, end)
} else {
sql = `select code, count(*) count from code_table where start >= ? and end <= ? group by code`
rows, err := pnt.switchConn("base", "read").Query(sql, start, end)
}
you declare new rows and err variables twice, once in each branch, and neither is visible outside its block.
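A minimal illustration of that scoping rule:
package main

import "fmt"

func main() {
	x := 1
	if true {
		x := 2 // := declares a NEW x that shadows the outer one
		fmt.Println(x) // prints 2
	}
	fmt.Println(x) // prints 1; the inner x no longer exists here
}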

GORM Count returns 0

Problem Statement
Problem 1
I am getting a 0 when trying to retrieve the count.
The query expression is perfect, by which I assume that the query being built is correct.
However, the result is 0 irrespective of the query.
Problem 2
I am required to specify the db name in the Table clause as Table(dbname.tbname)
This isn't required in any other queries; it is required only when using Count().
What might I be missing?
package main
import (
"fmt"
"github.com/jinzhu/gorm"
_ "github.com/jinzhu/gorm/dialects/mssql"
_ "github.com/jinzhu/gorm/dialects/mysql"
_ "github.com/jinzhu/gorm/dialects/postgres"
_ "github.com/jinzhu/gorm/dialects/sqlite"
)
var db *gorm.DB
var err error
func main() {
db, err = gorm.Open("mysql", "gorm:gorm@tcp(localhost:9910)/gorm?charset=utf8&parseTime=True")
if err != nil {
panic(err)
}
db.LogMode(true)
var count int
var db = "db"
var tb = "tb"
var columnname = "somecol"
var date = "2014-01-02"
err := db.Table(db+"."+tb).Where("? >= ?", columnname, date+" 00:01:00").Count(&count)
if err != nil {
Error.Println(err.Error.Error())
}
fmt.Println("The Count is \n", count)
}
Update 1
The following works.
But as I understand it, that approach takes the result as an *sql.Row and then retrieves the value with Scan.
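A guess at what that working variant looks like (gorm v1 API; note the column name is concatenated into the query rather than bound with ?, since placeholders bind values, not identifiers):
row := db.Table(dbName + "." + tb).
	Select("count(*)").
	Where(columnname+" >= ?", date+" 00:01:00").
	Row()
if err := row.Scan(&count); err != nil {
	fmt.Println(err)
}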
But I don't understand why ...Count(&count) is giving a run-time error.

How to insert multiple data at once

I know that inserting multiple rows at once is more efficient:
INSERT INTO test(n1, n2, n3)
VALUES(v1, v2, v3),(v4, v5, v6),(v7, v8, v9);
How to do that in golang?
data := []map[string]string{
{"v1":"1", "v2":"1", "v3":"1"},
{"v1":"2", "v2":"2", "v3":"2"},
{"v1":"3", "v2":"3", "v3":"3"},
}
// I don't want to do it this way
for _, v := range data {
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES(?, ?, ?)"
stmt, _ := db.Prepare(sqlStr)
res, _ := stmt.Exec(v["v1"], v["v2"], v["v3"])
}
Splicing the values into the string works, but it's not good; db.Prepare is safer, right?
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES"
for k, v := range data {
if k == 0 {
sqlStr += fmt.Sprintf("(%v, %v, %v)", v["v1"], v["v2"], v["v3"])
} else {
sqlStr += fmt.Sprintf(",(%v, %v, %v)", v["v1"], v["v2"], v["v3"])
}
}
res, _ := db.Exec(sqlStr)
I need a safe and efficient way to insert multiple rows at once.
why not something like this? (writing here without testing so there might be syntax errors):
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES "
vals := []interface{}{}
for _, row := range data {
sqlStr += "(?, ?, ?),"
vals = append(vals, row["v1"], row["v2"], row["v3"])
}
//trim the last ,
sqlStr = sqlStr[0:len(sqlStr)-1]
//prepare the statement
stmt, _ := db.Prepare(sqlStr)
//format all vals at once
res, _ := stmt.Exec(vals...)
For Postgres, lib pq supports bulk imports: https://godoc.org/github.com/lib/pq#hdr-Bulk_imports
The same can also be achieved with the code below, which is really helpful when you need to perform a bulk conditional update (change the query accordingly).
To perform similar bulk inserts for Postgres, you can use the following function.
// ReplaceSQL replaces each occurrence of the given string pattern with an increasing $n-based sequence
func ReplaceSQL(old, searchPattern string) string {
tmpCount := strings.Count(old, searchPattern)
for m := 1; m <= tmpCount; m++ {
old = strings.Replace(old, searchPattern, "$"+strconv.Itoa(m), 1)
}
return old
}
So above sample becomes
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES "
vals := []interface{}{}
for _, row := range data {
sqlStr += "(?, ?, ?),"
vals = append(vals, row["v1"], row["v2"], row["v3"])
}
//trim the last ,
sqlStr = strings.TrimSuffix(sqlStr, ",")
//Replacing ? with $n for postgres
sqlStr = ReplaceSQL(sqlStr, "?")
//prepare the statement
stmt, _ := db.Prepare(sqlStr)
//format all vals at once
res, _ := stmt.Exec(vals...)
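For completeness, a sketch of the COPY-based bulk import from the lib/pq docs linked above (same test table and data as the earlier examples):
txn, err := db.Begin()
if err != nil {
	log.Fatal(err)
}
stmt, err := txn.Prepare(pq.CopyIn("test", "n1", "n2", "n3"))
if err != nil {
	log.Fatal(err)
}
for _, row := range data {
	if _, err = stmt.Exec(row["v1"], row["v2"], row["v3"]); err != nil {
		log.Fatal(err)
	}
}
// an Exec with no arguments flushes the buffered rows to the server
if _, err = stmt.Exec(); err != nil {
	log.Fatal(err)
}
if err = stmt.Close(); err != nil {
	log.Fatal(err)
}
if err = txn.Commit(); err != nil {
	log.Fatal(err)
}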
Gorm V2 (released on 30th August 2020) now supports batch insert query.
// Pass slice data to method Create, GORM will generate a single SQL statement
// to insert all the data and backfill primary key values,
// hook methods will be invoked too.
var users = []User{{Name: "jinzhu1"}, {Name: "jinzhu2"}, {Name: "jinzhu3"}}
DB.Create(&users)
for _, user := range users {
user.ID // 1,2,3
}
For more details refer to the official documentation here: https://gorm.io/docs/create.html.
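GORM v2 also provides CreateInBatches if the slice is large and you want to cap the size of each generated INSERT:
// splits users into INSERT statements of at most 100 rows each
DB.CreateInBatches(users, 100)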
If you enable multi statements, then you can execute multiple statements at once.
With that, you should be able to handle multiple inserts.
https://github.com/go-sql-driver/mysql#multistatements
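For example (hypothetical DSN; multiStatements is off by default, partly because it widens the SQL injection surface, so only enable it for trusted statements):
db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/test?multiStatements=true")
if err != nil {
	log.Fatal(err)
}
// two INSERTs in a single round trip
_, err = db.Exec("INSERT INTO test(n1, n2, n3) VALUES(1, 1, 1); INSERT INTO test(n1, n2, n3) VALUES(2, 2, 2)")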
After extensive research this worked for me:
var values []interface{}
for _, scope := range scopes {
values = append(values, scope.ID, scope.Code, scope.Description)
}
sqlStr := `INSERT INTO scopes (application_id, scope, description) VALUES %s`
sqlStr = setupBindVars(sqlStr, "(?, ?, ?)", len(scopes))
_, err = s.db.ExecContext(ctx, sqlStr, values...)
// helper function to replace the %s placeholder with the right number of sets of bind vars
func setupBindVars(stmt, bindVars string, n int) string {
bindVars += ","
stmt = fmt.Sprintf(stmt, strings.Repeat(bindVars, n))
return strings.TrimSuffix(stmt, ",")
}
From https://gorm.io/docs/create.html#Batch-Insert
Code sample:
var users = []User{{Name: "jinzhu1"}, {Name: "jinzhu2"}, {Name: "jinzhu3"}}
DB.Create(&users)
This is an efficient way to do it in a transaction: all inserts run inside one transaction and are committed once at the end (or rolled back on error).
func insert(requestObj []models.User) (bool, error) {
tx := db.Begin()
defer func() {
if r := recover(); r != nil {
tx.Rollback()
}
}()
for _, obj := range requestObj {
if err := tx.Create(&obj).Error; err != nil {
logging.AppLogger.Errorf("Failed to create user")
tx.Rollback()
return false, err
}
}
err := tx.Commit().Error
if err != nil {
return false, err
}
return true, nil
}
I ended up with this, after combining the feedback on posted answers:
insertQuery := "INSERT INTO test(n1, n2, n3) VALUES "
const row = "(?, ?, ?)"
var inserts []string
var vals []interface{}
for _, v := range data {
inserts = append(inserts, row)
vals = append(vals, v["v1"], v["v2"], v["v3"])
}
sqlStr := insertQuery + strings.Join(inserts, ",")
//prepare the statement
stmt, _ := db.Prepare(sqlStr)
//close stmt after use
defer stmt.Close()
//format all vals at once
res, _ := stmt.Exec(vals...)
