As far as I can tell, this code delivers correctly structured JSON output ([{},{}]), but each appended row also replaces all previous rows, so the result shows only copies of the last row.
var rows *sql.Rows
rows, err = db.Query(query)
cols, _ := rows.Columns()
colnames, _ := rows.Columns()
vals := make([]interface{}, len(cols))
for i := range cols {
    vals[i] = &cols[i]
}
m := make(map[string]interface{})
for i, val := range vals {
    m[colnames[i]] = val
}
list := make([]map[string]interface{}, 0)
for rows.Next() {
    err = rows.Scan(vals...)
    list = append(list, m)
}
json, _ := json.Marshal(list)
fmt.Fprintf(w, "%s\n", json)
This is what happens behind the scenes while looping through the rows:
loop 1: {"ID":"1","NAME":"John Doe"}
loop 2: {"ID":"2","NAME":"Jane Doe"}{"ID":"2","NAME":"Jane Doe"}
loop 3: {"ID":"3","NAME":"Donald Duck"}{"ID":"3","NAME":"Donald Duck"}{"ID":"3","NAME":"Donald Duck"}
rows.Scan fetches the correct values, but each iteration appends a new entry AND replaces all previously appended values.
The final output is this:
[{"ID":"3","NAME":"Donald Duck"},{"ID":"3","NAME":"Donald Duck"},{"ID":"3","NAME":"Donald Duck"}]
But it should be this:
[{"ID":"1","NAME":"John Doe"},{"ID":"2","NAME":"Jane Doe"},{"ID":"3","NAME":"Donald Duck"}]
What am I doing wrong?
You may downvote this, but please explain why. I am still a newbie on Golang and want to learn.
I fixed it and explained with comments what you did wrong:
// 1. Query.
var rows *sql.Rows
rows, err = db.Query(query)
cols, _ := rows.Columns()

// 2. Iterate.
list := make([]map[string]interface{}, 0)
for rows.Next() {
    vals := make([]interface{}, len(cols))
    for i := range cols {
        // Previously you assigned vals[i] a pointer to a column name, cols[i].
        // That meant that every time you called rows.Scan(vals...),
        // rows.Scan saw pointers to cols and modified them.
        // Since cols are the same for all rows, they shouldn't be modified.
        // Here we assign vals[i] a pointer to an empty string instead,
        // so rows.Scan can fill it.
        var s string
        vals[i] = &s
        // This is effectively like saying:
        //   var string1, string2 string
        //   rows.Scan(&string1, &string2)
        // except the above only scans two string columns,
        // while we allow as many string columns as the query returned us: len(cols).
    }
    err = rows.Scan(vals...)
    // Don't forget to check errors.
    if err != nil {
        log.Fatal(err)
    }
    // Make a new map before appending it.
    // Remember maps aren't copied by value: if we declared
    // the map m outside of the rows.Next() loop, we would be appending
    // and modifying the same map for each row, so all rows in list would look the same.
    m := make(map[string]interface{})
    for i, val := range vals {
        m[cols[i]] = val
    }
    list = append(list, m)
}

// 3. Print.
b, _ := json.MarshalIndent(list, "", "\t")
fmt.Printf("%s\n", b)
Don't worry, this was hard for me to understand when I was a beginner as well.
Now, something fun:
var list []map[string]interface{}
rows, err := db.Queryx(query)
for rows.Next() {
    row := make(map[string]interface{})
    err = rows.MapScan(row)
    if err != nil {
        log.Fatal(err)
    }
    list = append(list, row)
}
b, _ := json.MarshalIndent(list, "", "\t")
fmt.Printf("%s\n", b)
This does the same as the code above it, but with sqlx. A bit simpler, no?
sqlx is an extension on top of database/sql with methods to scan rows directly to maps and structs, so you don't have to do that manually.
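If you haven't set up sqlx before, here is a minimal sketch of the setup the examples above and below assume (the driver name and DSN are placeholders):

package main

import (
    "log"

    "github.com/jmoiron/sqlx"
    _ "github.com/lib/pq" // any database/sql driver works
)

func openDB() *sqlx.DB {
    // sqlx.Connect opens the database and pings it; the returned
    // *sqlx.DB embeds *sql.DB and adds Queryx, MapScan, StructScan, etc.
    db, err := sqlx.Connect("postgres", "user=foo dbname=bar sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    return db
}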
I think your model looks nicer as a struct:
type Person struct {
    ID   int
    Name string
}

var people []Person
rows, err := db.Queryx(query)
for rows.Next() {
    var p Person
    err = rows.StructScan(&p)
    if err != nil {
        log.Fatal(err)
    }
    people = append(people, p)
}
Don't you think?
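One note if you marshal that slice the way the earlier examples do: the JSON keys come from the exported field names, so you'd get "ID" and "Name". If you want the uppercase "NAME" key from your expected output, add json struct tags (the tags here are an assumption about the output you want):

type Person struct {
    ID   int    `json:"ID"`
    Name string `json:"NAME"`
}

b, _ := json.MarshalIndent(people, "", "\t")
fmt.Printf("%s\n", b)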
I have the following:
func getSlice(distinctSymbols []string) []symbols {
    // Prepare connection
    stmt1, err := db.Prepare("Select count(*) from stockticker_daily where symbol = $1;")
    checkError(err)
    defer stmt1.Close()
    stmt2, err := db.Prepare("Select date from stockticker_daily where symbol = $1 order by date asc limit 1;")
    checkError(err)
    defer stmt2.Close()
    var symbolsSlice []symbols
    c := make(chan symbols)
    for _, symbol := range distinctSymbols {
        go worker(symbol, stmt1, stmt2, c)
        symbolsFromChannel := <-c
        symbolsSlice = append(symbolsSlice, symbolsFromChannel)
    }
    return symbolsSlice
}
func worker(symbol string, stmt1 *sql.Stmt, stmt2 *sql.Stmt, symbolsChan chan symbols) {
var countdp int
var earliestdate string
row := stmt1.QueryRow(symbol)
if err := row.Scan(&countdp); err != nil {
log.Fatal(err)
}
row = stmt2.QueryRow(symbol)
if err := row.Scan(&earliestdate); err != nil {
log.Fatal(err)
}
symbolsChan <- symbols{symbol, countdp, earliestdate}
}
Please take a look at the first function. I know it won't work as I expect, since the line symbolsFromChannel := <-c blocks until it receives from the channel, so the next worker goroutine isn't started until the previous one has sent its result. What is the best or correct way to do that?
Just do the loop twice, e.g.
for _, symbol := range distinctSymbols {
    go worker(symbol, stmt1, stmt2, c)
}
for range distinctSymbols {
    symbolsSlice = append(symbolsSlice, <-c)
}
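The first loop starts all the workers; the second receives exactly one result per symbol, so every value sent on the channel is collected. If you prefer the synchronization to be explicit, here is a sketch using sync.WaitGroup and a buffered channel (the buffer size is an assumption so the workers never block on send):

var wg sync.WaitGroup
c := make(chan symbols, len(distinctSymbols)) // buffered: sends never block
for _, symbol := range distinctSymbols {
    wg.Add(1)
    go func(s string) {
        defer wg.Done()
        worker(s, stmt1, stmt2, c)
    }(symbol)
}
wg.Wait() // all workers have sent their results
close(c)
for s := range c {
    symbolsSlice = append(symbolsSlice, s)
}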
I have a function to delete a "User" node and return the count of deleted nodes, but it always returns -1.
func DeleteUser(userid int) (int, error) {
    stmt := `
    MATCH (u:User) WHERE u.userId = {userid} delete u RETURN count(u) ;
    `
    params := neoism.Props{"userid": userid}
    res := -1
    cq := neoism.CypherQuery{
        Statement:  stmt,
        Parameters: params,
        Result:     &res,
    }
    err := conn.Cypher(&cq)
    return res, err
}
1) res must be of type []struct.
2) Don't use ";" at the end of the query.
stmt := `MATCH (u:User) WHERE u.userId = {userid} delete u RETURN count(u)`
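Putting both fixes together, a minimal sketch (the json tag is an assumption and must match the column name the query returns, here count(u)):

res := []struct {
    Count int `json:"count(u)"`
}{}
cq := neoism.CypherQuery{
    Statement:  `MATCH (u:User) WHERE u.userId = {userid} delete u RETURN count(u)`,
    Parameters: neoism.Props{"userid": userid},
    Result:     &res,
}
err := conn.Cypher(&cq)
// After a successful call, res[0].Count holds the number of deleted nodes.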
I am trying to use maps, and a slice of those maps, to store rows returned from a database query. But what I get on every iteration of rows.Next(), and in the final result, is a slice containing the same single row from the query repeated. It seems the problem is that I store the cols in the same memory location each time, yet I could not resolve it until now.
What is the thing I am missing here?
The source code is as follows:
package main

import (
    "database/sql"
    "fmt"
    "log"
    "reflect"

    _ "github.com/lib/pq"
)

var myMap = make(map[string]interface{})
var mySlice = make([]map[string]interface{}, 0)

func main() {
    fmt.Println("this is my go playground.")

    // DB Connection
    db, err := sql.Open("postgres", "user=postgres dbname=proj2-dbcruddb-dev password=12345 sslmode=disable")
    if err != nil {
        log.Fatalln(err)
    }

    rows, err := db.Query("SELECT id, username, password FROM userstable")
    defer rows.Close()
    if err != nil {
        log.Fatal(err)
    }

    colNames, err := rows.Columns()
    if err != nil {
        log.Fatal(err)
    }

    cols := make([]interface{}, len(colNames))
    colPtrs := make([]interface{}, len(colNames))
    for i := 0; i < len(colNames); i++ {
        colPtrs[i] = &cols[i]
    }

    for rows.Next() {
        err = rows.Scan(colPtrs...)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("cols: ", cols)
        for i, col := range cols {
            myMap[colNames[i]] = col
        }
        mySlice = append(mySlice, myMap)
        // Do something with the map
        for key, val := range myMap {
            fmt.Println("Key:", key, "Value:", val, "Value Type:", reflect.TypeOf(val))
        }
        fmt.Println("myMap: ", myMap)
        fmt.Println("mySlice: ", mySlice)
    }
    fmt.Println(mySlice)
}
This is because what you are storing in the slice is a pointer to a map rather than a copy of the map.
From Go maps in action:
Map types are reference types, like pointers or slices...
Since you create the map outside the loop that updates it, you keep overwriting the data in the map with new data and appending a pointer to the same map to the slice each time. Thus you get multiple copies of the same thing in your slice (the last record read from your table).
To handle this, move var myMap = make(map[string]interface{}) into the for rows.Next() loop so a new map is created on each iteration and then appended to the slice.
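A minimal sketch of that fix applied to your loop:

for rows.Next() {
    err = rows.Scan(colPtrs...)
    if err != nil {
        log.Fatal(err)
    }
    // A fresh map per row: each element of mySlice now refers
    // to its own map instead of one shared map.
    myMap := make(map[string]interface{})
    for i, col := range cols {
        myMap[colNames[i]] = col
    }
    mySlice = append(mySlice, myMap)
}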
I know that inserting multiple rows at once is more efficient:
INSERT INTO test(n1, n2, n3)
VALUES(v1, v2, v3),(v4, v5, v6),(v7, v8, v9);
How do I do that in Go?
data := []map[string]string{
    {"v1": "1", "v2": "1", "v3": "1"},
    {"v1": "2", "v2": "2", "v3": "2"},
    {"v1": "3", "v2": "3", "v3": "3"},
}

// I don't want to do it one row at a time:
for _, v := range data {
    sqlStr := "INSERT INTO test(n1, n2, n3) VALUES(?, ?, ?)"
    stmt, _ := db.Prepare(sqlStr)
    res, _ := stmt.Exec(v["v1"], v["v2"], v["v3"])
}
I could splice the values into the string myself, but that's not good; db.Prepare is safer, right?
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES"
for k, v := range data {
if k == 0 {
sqlStr += fmt.Sprintf("(%v, %v, %v)", v["v1"], v["v2"], v["v3"])
} else {
sqlStr += fmt.Sprintf(",(%v, %v, %v)", v["v1"], v["v2"], v["v3"])
}
}
res, _ := db.Exec(sqlStr)
I need a safe and efficient way to insert multiple rows at once.
Why not something like this? (Writing here without testing, so there might be syntax errors.)
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES "
vals := []interface{}{}
for _, row := range data {
sqlStr += "(?, ?, ?),"
vals = append(vals, row["v1"], row["v2"], row["v3"])
}
//trim the last ,
sqlStr = sqlStr[0:len(sqlStr)-1]
//prepare the statement
stmt, _ := db.Prepare(sqlStr)
//format all vals at once
res, _ := stmt.Exec(vals...)
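For the three-row data above, the built statement would look like this (MySQL-style ? placeholders), with nine values passed to Exec:

INSERT INTO test(n1, n2, n3) VALUES (?, ?, ?),(?, ?, ?),(?, ?, ?)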
For Postgres, lib/pq supports bulk imports: https://godoc.org/github.com/lib/pq#hdr-Bulk_imports
The same can be achieved with the code below, but where it really helps is when you need to perform a bulk conditional update (change the query accordingly).
To perform similar bulk inserts for Postgres, you can use the following function.
// ReplaceSQL replaces each occurrence of the given pattern with an
// increasing $n-based sequence: ? becomes $1, $2, $3, ...
func ReplaceSQL(old, searchPattern string) string {
    tmpCount := strings.Count(old, searchPattern)
    for m := 1; m <= tmpCount; m++ {
        old = strings.Replace(old, searchPattern, "$"+strconv.Itoa(m), 1)
    }
    return old
}
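For example, assuming the helper above:

fmt.Println(ReplaceSQL("INSERT INTO test(n1, n2, n3) VALUES (?, ?, ?),(?, ?, ?)", "?"))
// Output: INSERT INTO test(n1, n2, n3) VALUES ($1, $2, $3),($4, $5, $6)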
So the above sample becomes:
sqlStr := "INSERT INTO test(n1, n2, n3) VALUES "
vals := []interface{}{}
for _, row := range data {
sqlStr += "(?, ?, ?),"
vals = append(vals, row["v1"], row["v2"], row["v3"])
}
//trim the last ,
sqlStr = strings.TrimSuffix(sqlStr, ",")
//Replacing ? with $n for postgres
sqlStr = ReplaceSQL(sqlStr, "?")
//prepare the statement
stmt, _ := db.Prepare(sqlStr)
//format all vals at once
res, _ := stmt.Exec(vals...)
GORM v2 (released on 30 August 2020) now supports batch insert queries.
// Pass slice data to method Create. GORM will generate a single SQL statement
// to insert all the data and backfill primary key values;
// hook methods will be invoked too.
var users = []User{{Name: "jinzhu1"}, {Name: "jinzhu2"}, {Name: "jinzhu3"}}
DB.Create(&users)

for _, user := range users {
    fmt.Println(user.ID) // 1, 2, 3
}
For more details refer to the official documentation here: https://gorm.io/docs/create.html.
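GORM v2 also provides CreateInBatches if you want to cap how many rows go into each generated INSERT; a brief sketch:

// Insert the users in batches of 100 records per statement.
DB.CreateInBatches(users, 100)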
If you enable multiStatements, then you can execute multiple statements at once.
With that, you should be able to handle multiple inserts.
https://github.com/go-sql-driver/mysql#multistatements
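For go-sql-driver/mysql that's a DSN parameter; a sketch with placeholder credentials:

// multiStatements=true lets one Exec/Query call contain several statements.
db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/dbname?multiStatements=true")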
After extensive research this worked for me:
var values []interface{}
for _, scope := range scopes {
    values = append(values, scope.ID, scope.Code, scope.Description)
}

sqlStr := `INSERT INTO scopes (application_id, scope, description) VALUES %s`
sqlStr = setupBindVars(sqlStr, "(?, ?, ?)", len(scopes))

_, err = s.db.ExecContext(ctx, sqlStr, values...)

// helper function to replace %s with the right number of sets of bind vars
func setupBindVars(stmt, bindVars string, count int) string {
    bindVars += ","
    stmt = fmt.Sprintf(stmt, strings.Repeat(bindVars, count))
    return strings.TrimSuffix(stmt, ",")
}
From https://gorm.io/docs/create.html#Batch-Insert
Code sample:
var users = []User{{Name: "jinzhu1"}, {Name: "jinzhu2"}, {Name: "jinzhu3"}}
DB.Create(&users)
This is an efficient way to run the inserts in one transaction, which is only committed at the end:
func insert(requestObj []models.User) (bool, error) {
    tx := db.Begin()
    defer func() {
        if r := recover(); r != nil {
            tx.Rollback()
        }
    }()

    for _, obj := range requestObj {
        if err := tx.Create(&obj).Error; err != nil {
            logging.AppLogger.Errorf("Failed to create user")
            tx.Rollback()
            return false, err
        }
    }

    err := tx.Commit().Error
    if err != nil {
        return false, err
    }
    return true, nil
}
I ended up with this, after combining the feedback on posted answers:
const insertQuery = "INSERT INTO test(n1, n2, n3) VALUES "
const rowValues = "(?, ?, ?)"

var inserts []string
var vals []interface{}
for _, row := range data {
    inserts = append(inserts, rowValues)
    vals = append(vals, row["v1"], row["v2"], row["v3"])
}
sqlStr := insertQuery + strings.Join(inserts, ",")

// prepare the statement
stmt, _ := db.Prepare(sqlStr)
// close stmt after use
defer stmt.Close()
// format all vals at once
res, _ := stmt.Exec(vals...)
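Note that this builds MySQL-style ? placeholders; for Postgres you can combine it with the ReplaceSQL helper from the earlier answer to rewrite them to $1, $2, ... before preparing the statement.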