Golang Couchbase N1QL query: how to pass params?

I'm trying to find a way to pass parameters to a query, but I'm not quite sure how. The API documentation on the website looks a little outdated.
myQuery := gocb.NewN1qlQuery("SELECT * FROM default")
rows, err := myBucket.ExecuteN1qlQuery(myQuery)
if err != nil {
    fmt.Printf("N1QL query error: %s\n", err)
}
var row interface{}
for rows.Next(&row) {
    fmt.Printf("Row: %+v\n", row)
}
if err := rows.Close(); err != nil {
    fmt.Printf("N1QL query error: %s\n", err)
}
Because ExecuteN1qlQuery actually takes two parameters:
func (b *Bucket) ExecuteN1qlQuery(q *N1qlQuery, params interface{}) (ViewResults, error)
I am not sure how to use it. I would like to create a query with placeholders and pass the values to ExecuteN1qlQuery via params, like with SQL (prepare -> execute). For example, something like this:
myQuery := gocb.NewN1qlQuery("SELECT * FROM default where a=? and b=?")
rows, err := myBucket.ExecuteN1qlQuery(myQuery, []string{"b", "c"})
if err != nil {
    fmt.Printf("N1QL query error: %s\n", err)
}
var row interface{}
for rows.Next(&row) {
    fmt.Printf("Row: %+v\n", row)
}
if err := rows.Close(); err != nil {
    fmt.Printf("N1QL query error: %s\n", err)
}

The example you posted for how to do this is from our developer guide repo on GitHub:
https://github.com/couchbaselabs/devguide-examples/blob/master/go/query-placeholders.go.
Basically, you use $ followed by a positional parameter number beginning at 1; each placeholder references the corresponding entry in the []interface{} of parameters.
For your example it would look something like:
// Setup a new query with a placeholder
myQuery := gocb.NewN1qlQuery("SELECT * FROM default where a=$1 and b=$2")

// Setup an array for parameters
var myParams []interface{}
myParams = append(myParams, "foo")
myParams = append(myParams, "bar")

// Execute Query
rows, err := bucket.ExecuteN1qlQuery(myQuery, myParams)
if err != nil {
    fmt.Println("ERROR EXECUTING N1QL QUERY:", err)
}

// Iterate through rows and print output
var row interface{}
for rows.Next(&row) {
    fmt.Printf("Results: %+v\n", row)
}
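As in your original snippet, close the rows when you're done and check the error:

if err := rows.Close(); err != nil {
    fmt.Println("ERROR CLOSING N1QL RESULT:", err)
}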

Just found an example:
myQuery := gocb.NewN1qlQuery("SELECT airportname, city, country FROM `travel-sample` " +
    "WHERE type='airport' AND city=$1")

// Setup an array for parameters
var myParams []interface{}
myParams = append(myParams, "Reno")

// Execute Query
rows, err := bucket.ExecuteN1qlQuery(myQuery, myParams)
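For what it's worth, named placeholders should also work here: N1QL accepts $name placeholders, and gocb 1.x lets you pass the parameters as a map instead of a slice. A minimal sketch of the same travel-sample query (double-check against your gocb version's docs):

// Named placeholder $city, with its value supplied via a map
// keyed by the placeholder name (assumes gocb 1.x).
myQuery := gocb.NewN1qlQuery("SELECT airportname, city, country FROM `travel-sample` " +
    "WHERE type='airport' AND city=$city")

myParams := map[string]interface{}{"city": "Reno"}

rows, err := bucket.ExecuteN1qlQuery(myQuery, myParams)
if err != nil {
    fmt.Println("ERROR EXECUTING N1QL QUERY:", err)
}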

Related

Reading BigQuery in Golang. Not all expected results are given. What to do?

The SQL runs perfectly in the Query Editor, yet after assigning the results to a struct the data seems to have different values. Why is that?
var RunQuery = func(req *http.Request, query string) (*bigquery.RowIterator, error) {
    ctx := appengine.NewContext(req)
    ctxWithDeadline, _ := context.WithTimeout(ctx, 30*time.Minute)
    bqClient, bqErr := bigquery.NewClient(ctxWithDeadline, project, option.WithCredentialsFile(serviceAccount))
    if bqErr != nil {
        log.Errorf(ctx, "%v", bqErr)
        return nil, bqErr
    }
    q := bqClient.Query(query)
    job, err := q.Run(ctx)
    if err != nil {
        log.Errorf(ctx, "%v", err)
        return nil, err
    }
    status, err := job.Wait(ctx)
    if err != nil {
        log.Errorf(ctx, "%v", err)
        return nil, err
    }
    if err := status.Err(); err != nil {
        log.Errorf(ctx, "%v", err)
        return nil, err
    }
    it, err := job.Read(ctx)
    if err != nil {
        log.Errorf(ctx, "%v", err)
        return nil, err
    }
    log.Infof(ctx, "Total Rows: %v", it.TotalRows)
    return it, nil
}
type Customers struct {
    CustomerName string `bigquery:"customer_name"`
    CustomerAge  int    `bigquery:"customer_age"`
}

var rowsRead int

func main() {
    query := `SELECT
        name as customer_name,
        age as customer_age
        FROM customer_table
        WHERE customerStatus = '0'`
    customerInformation, customerInfoErr := RunQuery(req, query)
    if customerInfoErr != nil {
        log.Errorf(ctx, "Fetching customer information error :: %v", customerInfoErr)
        return
    }
    for {
        var row Customers
        err := customerInformation.Next(&row)
        log.Infof(ctx, "row %v", row)
        if err == iterator.Done {
            log.Infof(ctx, "ITERATION COMPLETE. Rows read %v", rowsRead)
            break
        }
        rowsRead++
    }
}
Let's say I have query results of:
customer_name | customer_age
cat           | 2
dog           | 3
horse         | 10
But after assigning them to the struct, the results were:
customer_name | customer_age
""            | 2
dog           | ""
""            | ""
Why is it like this? I even tested it in chunks where I set the limit to 1000, and got the same results. But the query results in the Query Editor are what I expect.
Solved it using the value loader bigquery.Value. Instead of mapping the query results into the expected struct, I used map[string]bigquery.Value. I still don't know why mapping the query results into the expected struct doesn't work properly. Here is my solution:
for {
    row := make(map[string]bigquery.Value)
    err := customerInformation.Next(&row)
    log.Infof(ctx, "row %v", row)
    if err == iterator.Done {
        log.Infof(ctx, "ITERATION COMPLETE. Rows read %v", rowsRead)
        break
    }
    rowsRead++
}
From the documentation:
If dst is a pointer to a struct, each column in the schema will be matched with an exported field of the struct that has the same name, ignoring the case. Unmatched schema columns and struct fields will be ignored.
cloud.google.com/go/bigquery
Here you are trying to resolve the column customer_age to a struct field named CustomerAge; even ignoring case, those names don't match because of the underscore. If you rename the field to Customer_Age (it must stay exported so the loader can set it), it should work.
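A minimal sketch of the struct following that advice (field names are derived from the column aliases in the question):

// Exported fields whose names match the schema columns, ignoring case:
// customer_name -> Customer_Name, customer_age -> Customer_Age.
type Customers struct {
    Customer_Name string
    Customer_Age  int
}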

Read BQ query result without struct

Has anybody tried storing result from a query to a map?
I want to able to read data from BQ tables without having the need to define a struct that matches the BQ table schema.
I have tried following https://kylewbanks.com/blog/query-result-to-map-in-golang, but I want to use a RowIterator instead of the approach in this link.
Here's the code I am struggling with:
// Removed error handling for brevity
ctx := context.Background()
client, _ := bigquery.NewClient(ctx, ProjectID)
query := fmt.Sprintf("SELECT * FROM `%s.%s.%s` LIMIT 5;", ProjectID, DatasetId, ResourceName)
queryResult := client.Query(query)
it, _ := queryResult.Read(ctx)
for {
    row := make(map[string]bigquery.Value)
    err := it.Next(&row)
    if err == iterator.Done {
        break
    }
    if err != nil {
        fmt.Printf("Error happened")
    }
}
I am not sure how to proceed after this; I would ideally like to convert the data into JSON.
for {
    var values []bigquery.Value
    err := it.Next(&values)
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(values)
}
Place the rows into a slice; you can store a row using anything that implements the ValueLoader interface, or with a slice or map of bigquery.Value.
ref: the cloud.google.com/go/bigquery godocs
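Since the question mentions JSON: bigquery.Value is just an empty interface, so rows read into map[string]bigquery.Value can usually be marshalled directly with encoding/json (values such as timestamps or nested records may need extra handling). A sketch that reuses the iterator it from the snippets above:

var out []map[string]bigquery.Value
for {
    row := make(map[string]bigquery.Value)
    err := it.Next(&row)
    if err == iterator.Done {
        break
    }
    if err != nil {
        // handle the error
    }
    out = append(out, row)
}

// Marshal the collected rows to JSON (assumes "encoding/json" is imported).
b, err := json.Marshal(out)
if err != nil {
    // handle the error
}
fmt.Println(string(b))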

jmoiron/sqlx, ...interface{}, and abstracting some boilerplate

I thought I'd try to be a little bit "clever" and abstract some of my boilerplate SQL code (using sqlx -- https://github.com/jmoiron/sqlx). The idea is to feed the code a function pointer that processes the result, along with the SQL string and the args that produce the rows. As it happens, the code works fine provided I strip out the "sqlArgs interface" stuff, but in the "cleverer" form it errors with the statement
sql: converting Exec argument $1 type: unsupported type []interface {}, a slice of interface
Here are two versions, the first one that errors, the second that works but without parameterization:
// GetRows (doesn't work)
func GetRows(parseRows func(*sqlx.Rows), sql string, sqlArgs ...interface{}) {
    db := sqlx.MustConnect("mysql", ConnString)
    defer db.Close()
    rows, err := db.Queryx(sql, sqlArgs)
    defer rows.Close()
    if err != nil {
        panic(err)
    }
    parseRows(rows)
}

// GetRows ... (works, but doesn't allow parameterization)
func GetRows(fp func(*sqlx.Rows), sql string) {
    db := sqlx.MustConnect("mysql", ConnString)
    defer db.Close()
    rows, err := db.Queryx(sql)
    defer rows.Close()
    if err != nil {
        panic(err)
    }
    fp(rows)
}
The idea is to call the code something like this:
func getUser(userID string) User {
    var users []User
    parseRows := func(rows *sqlx.Rows) {
        for rows.Next() {
            var u User
            err := rows.StructScan(&u)
            if err != nil {
                panic(err)
            }
            users = append(users, u)
        }
    }
    sql := "SELECT * FROM users WHERE userid = ?;"
    sqlutils.GetRows(parseRows, sql, userID)
    if len(users) == 1 {
        return users[0]
    }
    return User{}
}
I guess my code doesn't actually pass through the userID from call to call, but instead it passes an []interface{}, which the sql package can't handle. I'm not sure about that, however. In any case, is there any way to accomplish this idea? Thanks.
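That guess is right: passing sqlArgs by itself sends the whole []interface{} as a single argument, which is exactly what the error complains about. A minimal sketch of the fix is to spread the slice with ... when forwarding it (and to check the error before deferring Close):

// GetRows runs the query with the given args and hands the rows to parseRows.
func GetRows(parseRows func(*sqlx.Rows), sql string, sqlArgs ...interface{}) {
    db := sqlx.MustConnect("mysql", ConnString)
    defer db.Close()

    // Spread sqlArgs so each value is passed as its own query argument,
    // rather than as one []interface{}.
    rows, err := db.Queryx(sql, sqlArgs...)
    if err != nil {
        panic(err)
    }
    defer rows.Close()

    parseRows(rows)
}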

GO: Return map from SQL query

I am querying a MySQL database in a Go function and want to return key/value pairs in a map, but can't quite figure out how to accomplish this. So far I have this function:
func GetData(callIds []string) map[string]Records {
    //db insert
    db, err := sql.Open("mysql", mySql)
    if err != nil {
        fmt.Printf(err.Error())
    }
    defer db.Close()
    //db query
    var foo string
    err = db.QueryRow("select foo from bardata where callId = %v", 1).Scan(&foo)
    if err != nil {
        fmt.Printf(err.Error())
    }
    fmt.Println(foo)
    return nil
}
I want to return a map with the key being callId and value being foo for each row returned from the query.
First, you need to build up your query. As it is, you're not even using your function input. Since we have a variable number of arguments, we need to do a little work to construct the right number of placeholders:
query := `select callid, foo from bardata where callid in (` +
    strings.Repeat(`?,`, len(callIds)-1) + `?)`
then, execute with the values passed in:
// database/sql wants variadic ...interface{}, so copy the string IDs
// into an []interface{} before passing them.
args := make([]interface{}, len(callIds))
for i, id := range callIds {
    args[i] = id
}
rows, err := db.Query(query, args...)
if err != nil {
    // handle it
}
defer rows.Close()
then collect the results:
ret := map[string]string{}
for rows.Next() {
    var callid, foo string
    err = rows.Scan(&callid, &foo)
    if err != nil {
        // handle it
    }
    ret[callid] = foo
}
return ret
Caveats:
This will cause a placeholder mismatch error if callIds is an empty slice. If that's possible, you need to detect it and handle it separately, maybe by returning an error or an empty map (see the sketch after these caveats); querying the DB shouldn't be necessary.
This returns a map[string]string where the values are whatever "foo" is. In your question you have the function returning a map[string]Records but there's no information about what a Records might be or how to fetch one.
You might want to handle sql.ErrNoRows differently from other errors.
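Putting the pieces together, with a guard for the first caveat (and returning map[string]string, per the second caveat, since Records isn't defined in the question):

// GetData returns foo keyed by callid for the given call IDs.
func GetData(callIds []string) map[string]string {
    if len(callIds) == 0 {
        // Avoid building "in ()" with no placeholders; nothing to query.
        return map[string]string{}
    }

    db, err := sql.Open("mysql", mySql)
    if err != nil {
        // handle it
    }
    defer db.Close()

    query := `select callid, foo from bardata where callid in (` +
        strings.Repeat(`?,`, len(callIds)-1) + `?)`

    args := make([]interface{}, len(callIds))
    for i, id := range callIds {
        args[i] = id
    }

    rows, err := db.Query(query, args...)
    if err != nil {
        // handle it
    }
    defer rows.Close()

    ret := map[string]string{}
    for rows.Next() {
        var callid, foo string
        if err := rows.Scan(&callid, &foo); err != nil {
            // handle it
        }
        ret[callid] = foo
    }
    return ret
}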
package main

import (
    "fmt"

    "github.com/bobby96333/goSqlHelper"
)

func main() {
    fmt.Println("hello")
    conn, err := goSqlHelper.MysqlOpen("user:password@tcp(127.0.0.1:3306)/dbname")
    checkErr(err)
    row, err := conn.QueryRow("select * from table where col1 = ? and col2 = ?", "123", "abc")
    checkErr(err)
    if *row == nil {
        fmt.Println("no row found")
    } else {
        fmt.Printf("%+v", row)
    }
}

func checkErr(err error) {
    if err != nil {
        panic(err)
    }
}
output:
&map[col1:abc col2:123]

How to get the count value from a go-sqlite3 query?

I am using go-sqlite3 to retrieve the number of rows with a column of a certain value:
query := "select count(notebook) from pages where notebook="
result, err := db.Query(fmt.Sprint(query, id))
Where id is passed to the function running the query.
How can I retrieve the count value from result?
This should work:
// Output will be stored here.
var output string
id := "1234"
// Prepare your query
query, err := db.Prepare("select count(notebook) from pages where notebook = ?")
if err != nil {
fmt.Printf("%s", err)
}
defer query.Close()
// Execute query using 'id' and place value into 'output'
err = query.QueryRow(id).Scan(&output)
// Catch errors
switch {
case err == sql.ErrNoRows:
fmt.Printf("No notebook with that ID.")
case err != nil:
fmt.Printf("%s", err)
default:
fmt.Printf("Counted %s notebooks\n", output)
}
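Scanning the count into a string works with go-sqlite3, but if you would rather work with a number, Scan can target an int directly; a small variant of the lines above:

var count int
err = query.QueryRow(id).Scan(&count)
if err != nil {
    fmt.Printf("%s", err)
}
fmt.Printf("Counted %d notebooks\n", count)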
