I am trying to use maps, and a slice of those maps, to store rows returned from a database query. But what I get on every iteration of rows.Next(), and in the final result, is a slice of copies of the same single row from the query. The problem seems to be that I store the columns in the same memory location every time, yet I have not been able to resolve it so far.
What am I missing here?
The source code is as follows:
package main

import (
    "database/sql"
    "fmt"
    "log"
    "reflect"

    _ "github.com/lib/pq"
)

var myMap = make(map[string]interface{})
var mySlice = make([]map[string]interface{}, 0)

func main() {
    fmt.Println("this is my go playground.")

    // DB connection
    db, err := sql.Open("postgres", "user=postgres dbname=proj2-dbcruddb-dev password=12345 sslmode=disable")
    if err != nil {
        log.Fatalln(err)
    }

    rows, err := db.Query("SELECT id, username, password FROM userstable")
    defer rows.Close()
    if err != nil {
        log.Fatal(err)
    }

    colNames, err := rows.Columns()
    if err != nil {
        log.Fatal(err)
    }

    cols := make([]interface{}, len(colNames))
    colPtrs := make([]interface{}, len(colNames))
    for i := 0; i < len(colNames); i++ {
        colPtrs[i] = &cols[i]
    }

    for rows.Next() {
        err = rows.Scan(colPtrs...)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("cols: ", cols)
        for i, col := range cols {
            myMap[colNames[i]] = col
        }
        mySlice = append(mySlice, myMap)
        // Do something with the map
        for key, val := range myMap {
            fmt.Println("Key:", key, "Value:", val, "Value Type:", reflect.TypeOf(val))
        }
        fmt.Println("myMap: ", myMap)
        fmt.Println("mySlice: ", mySlice)
    }
    fmt.Println(mySlice)
}
This is because what you are storing in the slice is a pointer to a map rather than a copy of the map.
From Go maps in action:
Map types are reference types, like pointers or slices...
Since you create the map outside the loop that updates it, you keep overwriting the data in the map with new data and are appending a pointer to the same map to the slice each time. Thus you get multiple copies of the same thing in your slice (being the last record read from your table).
To fix this, move var myMap = make(map[string]interface{}) into the for rows.Next() loop so that a new map is created on each iteration and then appended to the slice.
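For example, a minimal sketch of the fixed loop, reusing the variable names from the question:

for rows.Next() {
    err = rows.Scan(colPtrs...)
    if err != nil {
        log.Fatal(err)
    }
    // A fresh map per iteration, so every appended element
    // refers to its own data instead of one shared map.
    myMap := make(map[string]interface{})
    for i, col := range cols {
        myMap[colNames[i]] = col
    }
    mySlice = append(mySlice, myMap)
}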
Related
I'm a newbie in Golang. I am trying to compare two YAML files and update the 2nd file's value if the 1st YAML has a new value for that particular key.
The files have the following format. (These are sample YAML files; the real YAML files have much more nested, complicated maps with different data types for each key.)
1st yaml:
name: john
city: washington
2nd yaml:
name: peter
city: washington
Final result for 2nd yaml file should be:
name: john
city: washington
I tried creating a map[string]interface{} for both YAML files using Unmarshal, but I am having trouble comparing both maps. I was trying to loop over each key of the first map and search for that key in the 2nd YAML map; if the key exists, update the value in the 2nd YAML map. But I am not able to implement that. Any suggestions/better ideas?
Edit: Updated code
package main

import (
    "fmt"
    "io/ioutil"
    "log"

    "github.com/imdario/mergo"
    "gopkg.in/yaml.v3"
)

func main() {
    yfile, err := ioutil.ReadFile("C:/Users/212764682/lifecycle/userconfig.yaml")
    if err != nil {
        log.Fatal(err)
    }
    data := make(map[string]interface{})
    err2 := yaml.Unmarshal(yfile, &data)
    if err2 != nil {
        log.Fatal(err2)
    } else {
        yfile1, err3 := ioutil.ReadFile("C:/Users/212764682/lifecycle/conf.yaml")
        yfile2, err4 := ioutil.ReadFile("C:/Users/212764682/lifecycle/prof.yaml")
        if err3 != nil && err4 != nil {
            log.Fatal(err3)
            log.Fatal(err4)
        } else {
            dat := make(map[string]interface{})
            dat2 := make(map[string]interface{})
            err5 := yaml.Unmarshal(yfile1, &dat)
            err6 := yaml.Unmarshal(yfile2, &dat2)
            _ = err5
            _ = err6
            for key1, element1 := range data {
                for key2, element2 := range dat {
                    if key1 == key2 {
                        if element1 == element2 {
                        } else {
                            element2 = element1
                        }
                    } else {
                        dat[key1] = data[key1]
                    }
                }
            }
        }
    }
}
So I want to compare each key of data with dat. If that key exists in dat, check the value in data; if the value is different in dat, update dat with data's value for that key. Also, if any key of data doesn't exist in dat, append that key to dat. But I am not able to implement it correctly.
You can try to compare the maps and then update the value if the key exists. Here's an example using your case.
package main

import (
    "fmt"
    "io/ioutil"
    "log"

    "gopkg.in/yaml.v3"
)

func main() {
    // read files
    yfile1, err := ioutil.ReadFile("file1.yaml")
    if err != nil {
        log.Fatal(err)
        return
    }
    yfile2, err := ioutil.ReadFile("file2.yaml")
    if err != nil {
        log.Fatal(err)
        return
    }

    // unmarshal yaml
    data1 := make(map[string]interface{})
    if err := yaml.Unmarshal(yfile1, &data1); err != nil {
        log.Fatal(err)
        return
    }
    data2 := make(map[string]interface{})
    if err := yaml.Unmarshal(yfile2, &data2); err != nil {
        log.Fatal(err)
        return
    }

    // From this we can iterate over the keys in data2, then check
    // whether each key exists in data1. If so, we update the value in data2.

    // iterate over keys in data2
    for key2 := range data2 {
        // check whether key2 exists in data1
        if val1, ok := data1[key2]; ok {
            // update the value of key2 in data2
            data2[key2] = val1
        }
    }
    fmt.Printf("data2: %v", data2)
    // output:
    // data2: map[city:washington name:john]

    // you can write data2 into yaml
    newYfile2, err := yaml.Marshal(&data2)
    if err != nil {
        log.Fatal(err)
        return
    }
    // write to file
    if err = ioutil.WriteFile("new_file2.yaml", newYfile2, 0644); err != nil {
        log.Fatal(err)
        return
    }
}
new_file2.yaml will look like this:
city: washington
name: john
One thing you need to take note of is that maps in Go don't maintain order (AFAIK Go doesn't have a built-in OrderedMap type as of 7 May 2022), so the key order in the new file will be random.
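If you need a deterministic order when inspecting the merged map (for logging or diffing), one option is to sort the keys yourself; a minimal sketch, assuming the data2 map from above and a "sort" import:

keys := make([]string, 0, len(data2))
for k := range data2 {
    keys = append(keys, k)
}
sort.Strings(keys)
for _, k := range keys {
    fmt.Printf("%s: %v\n", k, data2[k])
}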
Additional note: for error handling, you'd better handle each error right away (right after you get it). Here's a good article about it: Error handling and Go.
There is a maps package, since Go 1.18 I believe (in golang.org/x/exp; it was promoted to the standard library in Go 1.21). If you don't mind new keys being added to the destination map, you can use its Copy function.
func Copy[M ~map[K]V, K comparable, V any](dst, src M)
Copy copies all key/value pairs in src adding them to dst. When a key in src is already present in dst, the value in dst will be overwritten by the value associated with the key in src.
The source code of that function is very simple:
func Copy[M ~map[K]V, K comparable, V any](dst, src M) {
    for k, v := range src {
        dst[k] = v
    }
}
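Applied to the two unmarshalled YAML maps from the question, usage would look roughly like this (a sketch assuming Go 1.21+, where the package can be imported as "maps"; on earlier versions import golang.org/x/exp/maps instead):

import "maps"

// data1 holds the 1st yaml, data2 the 2nd.
// Every key in data1 is added to data2, overwriting the values
// of keys that already exist there.
maps.Copy(data2, data1)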
You could also do it yourself. The code below is from the Helm source code. Unlike the Copy function above, it works recursively:
func mergeMaps(a, b map[string]interface{}) map[string]interface{} {
    out := make(map[string]interface{}, len(a))
    for k, v := range a {
        out[k] = v
    }
    for k, v := range b {
        if v, ok := v.(map[string]interface{}); ok {
            if bv, ok := out[k]; ok {
                if bv, ok := bv.(map[string]interface{}); ok {
                    out[k] = mergeMaps(bv, v)
                    continue
                }
            }
        }
        out[k] = v
    }
    return out
}
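Usage with the two YAML maps from the question might look like this (a sketch; values from data1 win over data2, including inside nested maps):

merged := mergeMaps(data2, data1)
out, err := yaml.Marshal(merged)
if err != nil {
    log.Fatal(err)
}
fmt.Println(string(out))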
This code produces what is, AFAIK, correct JSON output ([{},{}]), but each appended row replaces all previous rows, so the result shows only copies of the last row.
var rows *sql.Rows
rows, err = db.Query(query)
cols, _ := rows.Columns()
colnames, _ := rows.Columns()

vals := make([]interface{}, len(cols))
for i := range cols {
    vals[i] = &cols[i]
}

m := make(map[string]interface{})
for i, val := range vals {
    m[colnames[i]] = val
}

list := make([]map[string]interface{}, 0)
for rows.Next() {
    err = rows.Scan(vals...)
    list = append(list, m)
}

json, _ := json.Marshal(list)
fmt.Fprintf(w, "%s\n", json)
This is what happens behind the scenes looping through the rows:
loop 1: {"ID":"1","NAME":"John Doe"}
loop 2: {"ID":"2","NAME":"Jane Doe"}{"ID":"2","NAME":"Jane Doe"}
loop 3: {"ID":"3","NAME":"Donald Duck"}{"ID":"3","NAME":"Donald Duck"}{"ID":"3","NAME":"Donald Duck"}
rows.Scan fetches the correct values, but each append also replaces all previous values.
The final output is this:
[{"ID":"3","NAME":"Donald Duck"},{"ID":"3","NAME":"Donald Duck"},{"ID":"3","NAME":"Donald Duck"}]
But it should be this:
[{"ID":"1","NAME":"John Doe"},{"ID":"2","NAME":"Jane Doe"},{"ID":"3","NAME":"Donald Duck"}]
What am I doing wrong?
You may downvote this, but please explain why. I am still a newbie at Golang and want to learn.
I fixed it and explained with comments what you did wrong:
// 1. Query
var rows *sql.Rows
rows, err = db.Query(query)
cols, _ := rows.Columns()

// 2. Iterate
list := make([]map[string]interface{}, 0)
for rows.Next() {
    vals := make([]interface{}, len(cols))
    for i := range cols {
        // Previously you assigned vals[i] a pointer to a column name, cols[i].
        // This meant that every time you did rows.Scan(vals...),
        // rows.Scan would see pointers to cols and modify them.
        // Since cols are the same for all rows, they shouldn't be modified.
        // Here we assign a pointer to an empty string to vals[i],
        // so rows.Scan can fill it.
        var s string
        vals[i] = &s
        // This is effectively like saying:
        //   var string1, string2 string
        //   rows.Scan(&string1, &string2)
        // except the above only scans two string columns,
        // while we allow as many string columns as the query
        // returned us, i.e. len(cols).
    }
    err = rows.Scan(vals...)
    // Don't forget to check errors.
    if err != nil {
        log.Fatal(err)
    }
    // Make a new map before appending it.
    // Remember maps aren't copied by value, so if we declared
    // the map m outside of the rows.Next() loop, we would be appending
    // and modifying the same map for each row, so all rows in list would look the same.
    m := make(map[string]interface{})
    for i, val := range vals {
        m[cols[i]] = val
    }
    list = append(list, m)
}

// 3. Print.
b, _ := json.MarshalIndent(list, "", "\t")
fmt.Printf("%s\n", b)
Don't worry, this was hard for me to understand when I was a beginner as well.
Now, something fun:
var list []map[string]interface{}
rows, err := db.Queryx(query)
for rows.Next() {
    row := make(map[string]interface{})
    err = rows.MapScan(row)
    if err != nil {
        log.Fatal(err)
    }
    list = append(list, row)
}
b, _ := json.MarshalIndent(list, "", "\t")
fmt.Printf("%s\n", b)
This does the same as the code above it, but with sqlx. A bit simpler, no?
sqlx is an extension on top of database/sql with methods to scan rows directly to maps and structs, so you don't have to do that manually.
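For completeness, db in the snippets above would be an *sqlx.DB, opened roughly like this (a sketch; the driver name and DSN are placeholders for whatever database you use):

import (
    "github.com/jmoiron/sqlx"
    _ "github.com/lib/pq"
)

db, err := sqlx.Open("postgres", "user=postgres dbname=mydb sslmode=disable")
if err != nil {
    log.Fatal(err)
}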
I think your model looks nicer as a struct:
type Person struct {
    ID   int
    Name string
}

var people []Person
rows, err := db.Queryx(query)
for rows.Next() {
    var p Person
    err = rows.StructScan(&p)
    if err != nil {
        log.Fatal(err)
    }
    people = append(people, p)
}
Don't you think?
Take a simple PostgreSQL db with an integer array:
CREATE TABLE foo (
    id serial PRIMARY KEY,
    bar integer[]
);
INSERT INTO foo VALUES(DEFAULT, '{1234567, 20, 30, 40}');
Using pq, these values are for some reason being retrieved as []uint8.
The documentation says that integer types are returned as int64. Does this not apply to arrays as well?
db, err := sql.Open("postgres", "user=a_user password=your_pwd dbname=blah")
if err != nil {
    fmt.Println(err)
}
var ret []int
err = db.QueryRow("SELECT bar FROM foo WHERE id=$1", 1).Scan(&ret)
if err != nil {
    fmt.Println(err)
}
fmt.Println(ret)
Output:
sql: Scan error on column index 0: unsupported Scan, storing driver.Value type []uint8 into type *[]int64
[]
You cannot use a slice of int as a driver.Value. The arguments to Scan must be of one of the supported types, or implement the sql.Scanner interface.
The reason you're seeing []uint8 in the error message is that the raw value returned from the database is a []byte slice, for which []uint8 is a synonym.
To interpret that []byte slice appropriately as a PostgreSQL array type, you should use the corresponding array types defined in the pq package, such as Int64Array.
Try something like this:
var ret pq.Int64Array
err = db.QueryRow("SELECT bar FROM foo WHERE id=$1", 1).Scan(&ret)
if err != nil {
    fmt.Println(err)
}
fmt.Println(ret)
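If you'd rather not depend on pq's helper types, a type that implements the sql.Scanner interface works too. A minimal sketch that parses the raw {1,2,3}-style array literal (assuming a one-dimensional array with no NULL elements; requires the fmt, strings, and strconv imports):

type IntArray []int

// Scan implements sql.Scanner: src arrives as the raw bytes of the
// Postgres array literal, e.g. []byte("{1234567,20,30,40}").
func (a *IntArray) Scan(src interface{}) error {
    b, ok := src.([]byte)
    if !ok {
        return fmt.Errorf("unsupported Scan source type %T", src)
    }
    s := strings.Trim(string(b), "{}")
    if s == "" {
        *a = nil
        return nil
    }
    for _, part := range strings.Split(s, ",") {
        n, err := strconv.Atoi(strings.TrimSpace(part))
        if err != nil {
            return err
        }
        *a = append(*a, n)
    }
    return nil
}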
The problem is more involved if you fetch multiple rows. The code above works for a single row; to fetch multiple rows, use something like this:
rows, err := db.QueryContext(ctx, stmt, courseCode)
if err != nil {
    return nil, err
}
defer rows.Close()

var feedbacks []*Feedback1
for rows.Next() {
    var feedback Feedback1
    var ret pq.Int64Array
    var ret1 pq.Int64Array
    err := rows.Scan(
        &feedback.ID,
        &ret,
        &ret1,
    )
    if err != nil {
        return nil, err
    }
    // loops to convert int64 to int
    for i := 0; i < len(ret); i++ {
        feedback.UnitFeedback = append(feedback.UnitFeedback, int(ret[i]))
    }
    for i := 0; i < len(ret1); i++ {
        feedback.GeneralFeedback = append(feedback.GeneralFeedback, int(ret1[i]))
    }
    feedbacks = append(feedbacks, &feedback)
}
From what I've seen so far, converting database rows to JSON or to []map[string]interface{} is not simple. I have to create two slices and then loop through the columns and create keys every time.
...Some code
tableData := make([]map[string]interface{}, 0)
values := make([]interface{}, count)
valuePtrs := make([]interface{}, count)
for rows.Next() {
    for i := 0; i < count; i++ {
        valuePtrs[i] = &values[i]
    }
    rows.Scan(valuePtrs...)
    entry := make(map[string]interface{})
    for i, col := range columns {
        var v interface{}
        val := values[i]
        b, ok := val.([]byte)
        if ok {
            v = string(b)
        } else {
            v = val
        }
        entry[col] = v
    }
    tableData = append(tableData, entry)
}
Is there any package for this? Or am I missing some basics here?
I'm dealing with the same issue; as far as my investigation goes, it looks like there is no other way.
All the packages that I have seen use basically the same method.
A few things you should know that will hopefully save you time:
- The database/sql package converts all the data to the appropriate types.
- If you are using the mysql driver (go-sql-driver/mysql), you need to add a parameter to your db string for it to return time.Time instead of a string (use ?parseTime=true; the default is false), as in the sketch below.
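A sketch of such a connection string, assuming the go-sql-driver/mysql driver is imported (credentials, host, and database name are placeholders):

db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/mydb?parseTime=true")
if err != nil {
    log.Fatal(err)
}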
You can use tools written by the community to offload the overhead:
- sqlx, a minimalistic wrapper around database/sql that internally uses a similar approach with reflection.
- If you need more functionality, try using an "orm": gorp, gorm.
If you are interested in diving deeper, check out:
- the use of reflection in the sqlx package, sqlx.go line 560
- data type conversion in the database/sql package, convert.go line 86
One thing you could do is create a struct that models your data.
Note: I am using MS SQL Server.
So let's say you want to get a user:
type User struct {
    ID       int    `json:"id,omitempty"`
    UserName string `json:"user_name,omitempty"`
    ...
}
Then you can do this:
func GetUser(w http.ResponseWriter, req *http.Request) {
    var r User
    params := mux.Vars(req)
    db, err := sql.Open("mssql", "server=ServerName")
    if err != nil {
        log.Fatal(err)
    }
    err1 := db.QueryRow("select Id, UserName from [Your Database].dbo.Users where Id = ?", params["id"]).Scan(&r.ID, &r.UserName)
    if err1 != nil {
        log.Fatal(err1)
    }
    err = json.NewEncoder(w).Encode(&r)
    if err != nil {
        log.Fatal(err)
    }
}
Here are the imports I used:
import (
    "database/sql"
    "encoding/json"
    "log"
    "net/http"

    _ "github.com/denisenkom/go-mssqldb"
    "github.com/gorilla/mux"
)
This allowed me to get data from the database and get it into JSON.
This takes a while to code, but it works really well.
Not in the Go distribution itself, but there is the wonderful jmoiron/sqlx:
import "github.com/jmoiron/sqlx"
tableData := make([]map[string]interface{}, 0)
for rows.Next() {
entry := make(map[string]interface{})
err := rows.MapScan(entry)
if err != nil {
log.Fatal("SQL error: " + err.Error())
}
tableData = append(tableData, entry)
}
If you know the data type that you are reading, then you can read into that data type without using a generic interface.
Otherwise, there is no solution regardless of the language used, due to the nature of JSON itself.
JSON does not carry a description of composite data structures. In other words, JSON is a generic key-value structure. When a parser encounters what is supposed to be a specific structure, there is no identification of that structure in JSON itself. For example, if you have a structure User, the parser would not know how a set of key-value pairs maps to your structure User.
The problem of type recognition is usually addressed with a document schema (a.k.a. XSD in the XML world) or explicitly through a passed expected data type.
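To illustrate: the same JSON document can be decoded generically or into an explicitly declared type; it only becomes a User because the caller says so. A minimal sketch (the field names are hypothetical; assumes encoding/json and log imports):

raw := []byte(`{"id": 1, "user_name": "john"}`)

// Generic: the parser can only hand back key-value pairs.
var generic map[string]interface{}
if err := json.Unmarshal(raw, &generic); err != nil {
    log.Fatal(err)
}

// Explicit: the caller supplies the expected structure.
type User struct {
    ID       int    `json:"id"`
    UserName string `json:"user_name"`
}
var u User
if err := json.Unmarshal(raw, &u); err != nil {
    log.Fatal(err)
}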
One quick way to get an arbitrary, generic []map[string]interface{} out of these query libraries is to populate an array of interface pointers with the same size as the number of columns in the query, and then pass that as a parameter to the Scan function:
// For example, for the go-mssqldb lib:
queryResponse, err := d.pool.Query(query)
if err != nil {
    return nil, err
}
defer queryResponse.Close()

// Holds all the end results
results := []map[string]interface{}{}

// Getting details about all the fields from the query
fieldNames, err := queryResponse.Columns()
if err != nil {
    return nil, err
}

// Creating interface-type pointers within an array of the same
// size as the number of columns we have, so that we can properly
// pass this to the "Scan" function and get all the query parameters back :)
var scanResults []interface{}
for range fieldNames {
    var v interface{}
    scanResults = append(scanResults, &v)
}

// Parsing the query results into the result map
for queryResponse.Next() {
    // This variable will hold the values for all the columns, keyed by column name
    rowValues := map[string]interface{}{}

    // Cleaning up old values just in case
    for _, column := range scanResults {
        *(column.(*interface{})) = nil
    }

    // Scan into the array of pointers
    err := queryResponse.Scan(scanResults...)
    if err != nil {
        return nil, err
    }

    // Map the pointers back to their values and the associated column names
    for index, column := range scanResults {
        rowValues[fieldNames[index]] = *(column.(*interface{}))
    }

    results = append(results, rowValues)
}

return results, nil
I'm a beginner at Go (and not a good programmer), but I wanted to write a small program that dumps the list of MAC addresses and interface names from a switch using SNMP. I store the SNMP values in an array of structs using multiple loops (the code here is to show the behavior).
During the first loop, I store port VLAN IDs and MAC addresses in an array of structs (var allTableArray [30]allTable). At the end of this loop, I print the content of the array to be sure the MAC addresses are in the array.
But when the second loop begins (to register the bridge port numbers), the array seems empty (fmt.Printf("deux %x\n", allTableArray[i].macAddr) and fmt.Printf("trois %s\n", allTableArray[i].ptVlan1id)).
I don't understand why my array seems empty. Do you have any idea?
package main

import (
    "flag"
    "fmt"
    "math/big"
    "os"
    "strings"
    "time"

    "github.com/soniah/gosnmp"
)

type oidMacAddr struct {
    oid     string
    macaddr string
}

type allTable struct {
    ptVlan1id string
    macAddr   []byte
    brPortNb  *big.Int
    ifIndex   *big.Int
    ifName    string
}

var macAddrTable [30]oidMacAddr

func main() {
    flag.Parse()
    if len(flag.Args()) < 1 {
        flag.Usage()
        os.Exit(1)
    }
    target := flag.Args()[0]
    showMacAddrTable(target)
}

func printValue(pdu gosnmp.SnmpPDU) error {
    fmt.Printf("%s = ", pdu.Name)
    //fmt.Println(reflect.TypeOf(pdu.Value.([]byte)))
    switch pdu.Type {
    case gosnmp.OctetString:
        b := pdu.Value.([]byte)
        fmt.Printf("STRING: %x\n", b)
    default:
        fmt.Printf("TYPE %d: %d\n", pdu.Type, gosnmp.ToBigInt(pdu.Value))
    }
    return nil
}

func showMacAddrTable(target string) {
    var allTableArray [30]allTable
    ptVlan1Oid := ".1.3.6.1.2.1.17.4.3.1.1"
    brPortOid := ".1.3.6.1.2.1.17.4.3.1.2"
    brPortIfIndex := ".1.3.6.1.2.1.17.1.4.1.2"
    ifIndexIfName := ".1.3.6.1.2.1.31.1.1.1.1"
    community := "public"
    gosnmp.Default.Target = target
    gosnmp.Default.Community = community
    gosnmp.Default.Timeout = time.Duration(10 * time.Second) // Timeout better suited to walking
    err := gosnmp.Default.Connect()
    if err != nil {
        fmt.Printf("Connect err: %v\n", err)
        os.Exit(1)
    }

    var essai []gosnmp.SnmpPDU
    essai, err = gosnmp.Default.BulkWalkAll(ptVlan1Oid)
    if err != nil {
        fmt.Printf("Walk Error: %v\n", err)
        os.Exit(1)
    }
    for i := 0; i < len(essai); i++ {
        s := strings.TrimPrefix(essai[i].Name, ".1.3.6.1.2.1.17.4.3.1.1")
        fmt.Printf("%s = ", s)
        fmt.Printf("%x\n", essai[i].Value.([]byte))
        bytes := essai[i].Value.([]byte)
        macAddrTable[i] = oidMacAddr{s, string(bytes)}
        allTableArray[i] = allTable{ptVlan1id: s, macAddr: bytes}
        if allTableArray[i].macAddr != nil {
            fmt.Printf("%x\n", allTableArray[i].macAddr)
        }
    }

    essai, err = gosnmp.Default.BulkWalkAll(brPortOid)
    if err != nil {
        fmt.Printf("Walk Error: %v\n", err)
        os.Exit(1)
    }
    for i := 0; i < len(essai); i++ {
        s := strings.TrimPrefix(essai[i].Name, ".1.3.6.1.2.1.17.4.3.1.2")
        fmt.Printf("%s = ", s)
        fmt.Printf("%d\n", essai[i].Value)
        for j := 0; j < len(allTableArray); j++ {
            if s == allTableArray[j].ptVlan1id {
                allTableArray[j] = allTable{brPortNb: gosnmp.ToBigInt(essai[i].Value)}
            }
        }
        fmt.Printf("deux %x\n", allTableArray[i].macAddr)
        fmt.Printf("trois %s\n", allTableArray[i].ptVlan1id)
    }
    os.Exit(1)
}
Apparently this line

allTableArray[j] = allTable{brPortNb: gosnmp.ToBigInt(essai[i].Value)}

updates each member with a new allTable instance, in which every field other than brPortNb is left at its zero value (nil for the []byte and *big.Int fields, "" for the strings).
If what you were trying to do is update each member's brPortNb field, you could have done so by accessing the field and assigning the value to it, instead of assigning a new allTable to every member:

allTableArray[j].brPortNb = gosnmp.ToBigInt(essai[i].Value)
Also, try simplifying your loops like this, provided len(essai) == len(allTableArray):
for i, v := range essai {
    s := strings.TrimPrefix(v.Name, ".1.3.6.1.2.1.17.4.3.1.1")
    bytes := v.Value.([]byte)
    macAddrTable[i] = oidMacAddr{s, string(bytes)}
    allTableArray[i] = allTable{ptVlan1id: s, macAddr: bytes}
    s = strings.TrimPrefix(v.Name, ".1.3.6.1.2.1.17.4.3.1.2")
    if s == allTableArray[i].ptVlan1id {
        allTableArray[i].brPortNb = gosnmp.ToBigInt(v.Value)
    }
}
Notice that with the for i, v := range essai syntax, you have access to both the index and the value without having to write essai[i] for the value.
Now your two loops can become just one, with no nested loops, which are really hard to make sense of.
I also recommend you work with slices instead of arrays. They are more flexible; see the sketch below.
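A minimal sketch of the slice-based version, reusing the names from the code above (the slice grows with the walk results instead of being capped at 30 entries):

allTableSlice := make([]allTable, 0, len(essai))
for _, v := range essai {
    s := strings.TrimPrefix(v.Name, ptVlan1Oid)
    bytes := v.Value.([]byte)
    allTableSlice = append(allTableSlice, allTable{ptVlan1id: s, macAddr: bytes})
}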