I am currently trying to execute arbitrary queries using the following (shortened) code:
func (...) query(...) int {
    rows, err := database.Connection.Query(queryString)
    if err != nil {
        return 1
    }
    columnNames, err := rows.Columns()
    if err != nil {
        return 1
    }
    columns := make([]interface{}, len(columnNames))
    columnPointers := make([]interface{}, len(columnNames))
    for i := range columnNames {
        columnPointers[i] = &columns[i]
    }
    for rows.Next() {
        rows.Scan(columnPointers...)
    }
    log.Println(columns)
    return 0
}
However, columns ends up holding a slice of byte slices, so I have no idea how to get the desired (readable) result.
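With database/sql, scanning into interface{} typically yields []byte for text columns with many drivers, so the values need converting before they are readable. A minimal sketch of doing that after Scan (assuming that is what your driver returns):

for rows.Next() {
    if err := rows.Scan(columnPointers...); err != nil {
        return 1
    }
    readable := make([]string, len(columns))
    for i, c := range columns {
        if b, ok := c.([]byte); ok {
            readable[i] = string(b) // copy the bytes into a readable string
        } else {
            readable[i] = fmt.Sprint(c) // ints, floats, NULLs (nil), etc.
        }
    }
    log.Println(readable)
}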
For example:

func Query(myvarlist []string) {
    stmt, err := tx.Prepare("SELECT * FROM mytable WHERE myvar = $1 AND myvar2 = $2".......)
    defer stmt.Close()
    if _, err := stmt.Exec(myvarlist...); err != nil {
    }
}
Can we pass Exec a variable-length argument list?
You can do something like this:
package main

import (
    "database/sql"
    "fmt"
    "strings"

    _ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" database/sql driver
)

func Query() {
    db, err := sql.Open("pgx", `postgresql://CONNSTRING`)
    if err != nil {
        panic(err)
    }
    queryParamMap := map[string]string{
        "id":   "6",
        "name": "1",
    }
    // Build up statement and params
    cols := make([]string, len(queryParamMap))
    args := make([]any, len(queryParamMap))
    i := 0
    for k, v := range queryParamMap {
        cols[i] = fmt.Sprintf("%s = $%d", k, i+1) // WARNING - SQL injection is possible here if the column names are not sanitised
        args[i] = v
        i++
    }
    // Using Prepare because the question used it, but this is only worthwhile if you will run stmt multiple times
    stmt, err := db.Prepare(`SELECT id FROM devices WHERE ` + strings.Join(cols, " and "))
    if err != nil {
        panic(err)
    }
    defer stmt.Close()
    rows, err := stmt.Query(args...)
    if err != nil {
        panic(err)
    }
    defer rows.Close()
    for rows.Next() {
        var id int
        if err = rows.Scan(&id); err != nil {
            panic(err)
        }
        fmt.Println("value:", id)
    }
    // Should check rows.Err() etc, but this is just an example...
}
I've put the column names and values into a map because it was not clear where any extra column names would come from in your question, but hopefully this provides the info you need.
This example also uses Query rather than Exec (because it's easier to test), but the same approach will work with Exec, as sketched below.
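For instance, a minimal sketch of an Exec variant reusing the cols/args built above (the UPDATE statement and the active column are made up for illustration):

res, err := db.Exec(`UPDATE devices SET active = true WHERE `+strings.Join(cols, " and "), args...)
if err != nil {
    panic(err)
}
affected, err := res.RowsAffected()
if err != nil {
    panic(err)
}
fmt.Println("rows affected:", affected)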
Note: Take a look at squirrel for an example of how to take this a lot further....
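For a taste of what that looks like, the query above built with squirrel might be (a sketch, assuming github.com/Masterminds/squirrel imported as sq; sq.Eq generates the AND-ed equality clauses and the args for you):

query, args, err := sq.Select("id").
    From("devices").
    Where(sq.Eq{"id": "6", "name": "1"}).
    PlaceholderFormat(sq.Dollar). // $1, $2, ... for Postgres
    ToSql()
if err != nil {
    panic(err)
}
rows, err := db.Query(query, args...)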
I have some Go code that opens a spreadsheet and, for each row, uses a lanid in the row to look up some data. I would like to add this derived data as two new columns in the sheet.
Opening the sheet and looping over all the rows works fine. I just can't figure out how to add the new columns. Any suggestions welcome.
The code below throws the error
panic: runtime error: index out of range [7] with length 7
as if the columns hadn't been added.
f, err := excelize.OpenFile("apps.xlsx")
if err != nil {
    log.Fatal(err)
}
defer func() {
    if err := f.Close(); err != nil {
        fmt.Println(err)
    }
}()
f.InsertCol("apps", "H")
f.InsertCol("apps", "H")
rows, err := f.GetRows("apps")
if err != nil {
    fmt.Println(err)
    return
}
for _, row := range rows {
    lanid := row[3]
    fmt.Print(lanid, "\t")
    fmt.Println()
    node := orgtee.lookup(lanid)
    row[7] = node.title // panics: this row only has 7 cells, and writing to the slice would not update the file anyway
    row[8] = node.domain
}
You can set the values with the SetCellValue function. Here is an example:
for i := 1; i <= len(rows); i++ { // cell names are 1-based in excelize; <= so the last row is written too
    f.SetCellValue("apps", "G"+strconv.Itoa(i), "coba1")
    f.SetCellValue("apps", "H"+strconv.Itoa(i), "coba2")
    f.SetCellValue("apps", "I"+strconv.Itoa(i), "coba3")
}
rows, err = f.GetRows("apps") // call GetRows again to update rows value
if err != nil {
    fmt.Println(err)
    return
}
fmt.Println(rows)
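Applied to your loop it would look something like this (a sketch; orgtee.lookup and the node fields come from your question, H and I are the two columns you inserted, and Save is assumed to be how you persist the file):

for i, row := range rows {
    node := orgtee.lookup(row[3])
    rowNum := strconv.Itoa(i + 1) // GetRows indexes from 0, cell names from 1
    f.SetCellValue("apps", "H"+rowNum, node.title)
    f.SetCellValue("apps", "I"+rowNum, node.domain)
}
if err := f.Save(); err != nil { // write the changes back to apps.xlsx
    fmt.Println(err)
}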
I use database/sql and define a struct that maps to the DB table columns (via the field tag):
// Users ...
type Users struct {
    ID       int64  `field:"id"`
    Username string `field:"username"`
    Password string `field:"password"`
    Tel      string `field:"tel"`
}
then I query:
rows, err := db.Query(sql) // select * from users
if err != nil {
    fmt.Println(err)
}
defer rows.Close()
for rows.Next() {
    user := new(Users)
    // works but I don't think it is good code for too many columns
    err = rows.Scan(&user.ID, &user.Username, &user.Password, &user.Tel)
    // TODO: How to scan in a simple way
    if err != nil {
        fmt.Println(err)
    }
    fmt.Println("user: ", user)
    list = append(list, *user)
}
if err := rows.Err(); err != nil {
    fmt.Println(err)
}
As you can see, with rows.Scan() I have to write out every column, and I don't think that's a good approach for 20 or more columns. How can I scan in a cleaner way?
This is a good case for using reflect:
for rows.Next() {
    user := Users{}
    // build a slice of pointers to the struct's fields;
    // note this assumes the columns come back in the same order as the fields
    s := reflect.ValueOf(&user).Elem()
    numCols := s.NumField()
    columns := make([]interface{}, numCols)
    for i := 0; i < numCols; i++ {
        field := s.Field(i)
        columns[i] = field.Addr().Interface()
    }
    err := rows.Scan(columns...)
    if err != nil {
        log.Fatal(err)
    }
    log.Println(user)
}
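If the column order cannot be relied on, a variation on the same idea is to match columns to struct fields via the field tag from your struct (a sketch, assuming every returned column has a matching tag; an unknown column would silently hit field 0 here):

cols, err := rows.Columns()
if err != nil {
    log.Fatal(err)
}
user := Users{}
v := reflect.ValueOf(&user).Elem()
t := v.Type()
// map each field tag to its struct field index
fieldIdx := make(map[string]int, t.NumField())
for i := 0; i < t.NumField(); i++ {
    fieldIdx[t.Field(i).Tag.Get("field")] = i
}
ptrs := make([]interface{}, len(cols))
for i, c := range cols {
    ptrs[i] = v.Field(fieldIdx[c]).Addr().Interface()
}
if err := rows.Scan(ptrs...); err != nil {
    log.Fatal(err)
}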
You may consider using jmoiron's sqlx package. It has support for assigning to a struct.
Excerpt from the readme:
type Place struct {
    Country string
    City    sql.NullString
    TelCode int
}

places := []Place{}
err = db.Select(&places, "SELECT * FROM place ORDER BY telcode ASC")
if err != nil {
    fmt.Println(err)
    return
}
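For the Users struct from the question, the equivalent would be something along these lines (a sketch; here db is a *sqlx.DB, e.g. from sqlx.Open, and sqlx matches columns via db struct tags rather than the field tags above):

type Users struct {
    ID       int64  `db:"id"`
    Username string `db:"username"`
    Password string `db:"password"`
    Tel      string `db:"tel"`
}

var list []Users
if err := db.Select(&list, "SELECT * FROM users"); err != nil {
    fmt.Println(err)
}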
I am trying to get all of leveldb's key-value pairs into a map[string][]byte, but it is not behaving as I expect.
The code is below:
package main

import (
    "fmt"
    "strconv"

    "github.com/syndtr/goleveldb/leveldb"
)

func main() {
    db, err := leveldb.OpenFile("db", nil)
    if err != nil {
        panic(err)
    }
    defer db.Close()
    for i := 0; i < 10; i++ {
        err := db.Put([]byte("key"+strconv.Itoa(i)), []byte("value"+strconv.Itoa(i)), nil)
        if err != nil {
            panic(err)
        }
    }
    snap, err := db.GetSnapshot()
    if err != nil {
        panic(err)
    }
    if snap == nil {
        panic("snapshot is nil")
    }
    data := make(map[string][]byte)
    iter := snap.NewIterator(nil, nil)
    for iter.Next() {
        Key := iter.Key()
        Value := iter.Value()
        data[string(Key)] = Value
    }
    iter.Release()
    if iter.Error() != nil {
        panic(iter.Error())
    }
    for k, v := range data {
        fmt.Println(string(k) + ":" + string(v))
    }
}
But the result is:
key3:value9
key6:value9
key7:value9
key8:value9
key1:value9
key2:value9
key4:value9
key5:value9
key9:value9
key0:value9
rather than key0:value0 and so on.
The problem is with how the byte slices are handled when casting around types ([]byte to string, etc.). iter.Value() returns a slice whose contents are only valid until the next call to Next, so every entry of your map ends up pointing at the same reused buffer, which last held value9. Converting to string copies the bytes, and since you are only printing string values anyway, apply the following modifications:
Change the data initialization into data := make(map[string]string)
Assign values into data with data[string(Key)] = string(Value) (by the way, don't use capitalization for variables you don't intend to export)
Print data's values with fmt.Println(k + ":" + v)
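Putting those modifications together, the loop becomes (a sketch; if you need to keep []byte values instead, copy each one explicitly, e.g. with append([]byte(nil), iter.Value()...)):

data := make(map[string]string)
iter := snap.NewIterator(nil, nil)
for iter.Next() {
    key := iter.Key()     // only valid until the next call to Next
    value := iter.Value() // likewise, so the string conversion below also serves as a copy
    data[string(key)] = string(value)
}
iter.Release()
for k, v := range data {
    fmt.Println(k + ":" + v)
}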
This should produce the following result:
key0:value0
key1:value1
key7:value7
key2:value2
key3:value3
key4:value4
key5:value5
key6:value6
key8:value8
key9:value9
I am creating a utility that needs to be aware of all the datasets/tables that exist in my BigQuery project. My current code for getting this information is as follows (using the Go API):
func populateExistingTableMap(service *bigquery.Service, cloudCtx context.Context, projectId string) (map[string]map[string]bool, error) {
    tableMap := map[string]map[string]bool{}
    call := service.Datasets.List(projectId)
    //call.Fields("datasets/datasetReference")
    if err := call.Pages(cloudCtx, func(page *bigquery.DatasetList) error {
        for _, v := range page.Datasets {
            if tableMap[v.DatasetReference.DatasetId] == nil {
                tableMap[v.DatasetReference.DatasetId] = map[string]bool{}
            }
            tableCall := service.Tables.List(projectId, v.DatasetReference.DatasetId)
            //tableCall.Fields("tables/tableReference")
            if err := tableCall.Pages(cloudCtx, func(page *bigquery.TableList) error {
                for _, t := range page.Tables {
                    tableMap[v.DatasetReference.DatasetId][t.TableReference.TableId] = true
                }
                return nil
            }); err != nil {
                return errors.New("Error Parsing Table")
            }
        }
        return nil
    }); err != nil {
        return tableMap, err
    }
    return tableMap, nil
}
For a project with about 5000 datasets, each with up to 10 tables, this code takes almost 15 minutes to return. Is there a faster way to iterate through the names of all existing datasets/tables? I have tried using the Fields method to return only the fields I need (you can see those lines commented out above), but that results in only 50 (exactly 50) of my datasets being returned.
Any ideas?
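One likely explanation for the truncation (an educated guess, based on how Pages works in the Go API client): restricting the response with Fields strips nextPageToken, so Pages stops after the first page of 50 datasets. Including the token in the field mask should restore pagination:

call.Fields("datasets/datasetReference", "nextPageToken")
tableCall.Fields("tables/tableReference", "nextPageToken")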
Here is an updated version of my code, with concurrency, that reduced the processing time from about 15 minutes to 3 minutes.
func populateExistingTableMap(service *bigquery.Service, cloudCtx context.Context, projectId string) (map[string]map[string]bool, error) {
    tableMap := map[string]map[string]bool{}
    call := service.Datasets.List(projectId)
    //call.Fields("datasets/datasetReference")
    if err := call.Pages(cloudCtx, func(page *bigquery.DatasetList) error {
        var wg sync.WaitGroup
        wg.Add(len(page.Datasets))
        for _, v := range page.Datasets {
            if tableMap[v.DatasetReference.DatasetId] == nil {
                tableMap[v.DatasetReference.DatasetId] = map[string]bool{}
            }
            // Hand each goroutine its own inner map: letting the goroutines index
            // the outer map while this loop keeps writing to it would be a data race.
            go func(service *bigquery.Service, datasetID, projectId string, tables map[string]bool) {
                defer wg.Done()
                tableCall := service.Tables.List(projectId, datasetID)
                //tableCall.Fields("tables/tableReference")
                if err := tableCall.Pages(cloudCtx, func(page *bigquery.TableList) error {
                    for _, t := range page.Tables {
                        tables[t.TableReference.TableId] = true
                    }
                    return nil // NOTE: returning a non-nil error stops pagination.
                }); err != nil {
                    // TODO: Handle error.
                    fmt.Println(err)
                }
            }(service, v.DatasetReference.DatasetId, projectId, tableMap[v.DatasetReference.DatasetId])
        }
        wg.Wait()
        return nil // NOTE: returning a non-nil error stops pagination.
    }); err != nil {
        // TODO: Handle error.
        return tableMap, err
    }
    return tableMap, nil
}
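If the one-goroutine-per-dataset fan-out ever becomes a problem, the per-page block could be bounded with an errgroup instead of the WaitGroup (a sketch using golang.org/x/sync/errgroup; unlike the version above it also propagates the first listing error instead of just printing it):

// inside the Datasets Pages callback, replacing the WaitGroup logic
g, gctx := errgroup.WithContext(cloudCtx)
g.SetLimit(20) // at most 20 table listings in flight at once
for _, v := range page.Datasets {
    datasetID := v.DatasetReference.DatasetId
    if tableMap[datasetID] == nil {
        tableMap[datasetID] = map[string]bool{}
    }
    tables := tableMap[datasetID]
    g.Go(func() error {
        return service.Tables.List(projectId, datasetID).Pages(gctx, func(page *bigquery.TableList) error {
            for _, t := range page.Tables {
                tables[t.TableReference.TableId] = true
            }
            return nil
        })
    })
}
return g.Wait() // a non-nil error here also stops dataset pagination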