Is there any way to use the SubQuery function in a Select? I have seen it used as part of a Where clause, but I need it in a Select.
I am solving this temporarily by doing this:
func GetUserProviders(userID int) ([]userprovider, error) {
    providers := []userprovider{}
    query := `SELECT (count(users_providers.user_id) > 0)
        FROM users_providers
        WHERE users_providers.user_id = '` + strconv.Itoa(userID) + `' AND users_providers.provider_id = providers.id`
    rows, err := db.DB.Table("providers").
        Select("providers.id, providers.name, (" + query + ") as checked").Rows()
    if err == nil {
        for rows.Next() {
            var provider = userprovider{}
            db.DB.ScanRows(rows, &provider)
            providers = append(providers, provider)
        }
    }
    return providers, err
}
But I would prefer, if possible, to use the ORM's own functions instead of concatenating strings.
In this case there is no danger, but for other cases it would be great if there were a function to transform
// SQL expression
type expr struct {
    expr string
    args []interface{}
}
into a sanitized string.
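(As it turns out, gorm exports a constructor for exactly this struct, gorm.Expr; a minimal sketch of its documented use, with a hypothetical price column:)

// gorm.Expr keeps the arguments bound separately from the SQL
// text, so the driver sanitizes them instead of you.
db.DB.Model(&product).Update("price", gorm.Expr("price * ? + ?", 2, 100))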
Thanks in advance.
Ok... I found the solution:
q := db.DB.Table("users_providers").
    Select("(count(users_providers.user_id) > 0)").
    Where("users_providers.user_id = ? AND users_providers.provider_id = providers.id", userID).
    SubQuery()

rows, err := db.DB.Table("providers").
    Select("providers.id, providers.name, ? as checked", q).
    Rows()
The Select function accepts two arguments: one for the query and one for the args, working the same way as Where.
Thanks anyway :)
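Putting it together, the original function rewritten with the subquery would look roughly like this (a sketch combining the snippets above):

func GetUserProviders(userID int) ([]userprovider, error) {
    providers := []userprovider{}

    // Build the subquery once; userID stays a bound argument.
    q := db.DB.Table("users_providers").
        Select("(count(users_providers.user_id) > 0)").
        Where("users_providers.user_id = ? AND users_providers.provider_id = providers.id", userID).
        SubQuery()

    rows, err := db.DB.Table("providers").
        Select("providers.id, providers.name, ? as checked", q).
        Rows()
    if err != nil {
        return providers, err
    }
    defer rows.Close()

    for rows.Next() {
        provider := userprovider{}
        db.DB.ScanRows(rows, &provider)
        providers = append(providers, provider)
    }
    return providers, rows.Err()
}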
Related
I am using apache-arrow/go to read parquet data.
I can parse the data into a table using apache-arrow:
reader, err := ipc.NewReader(buf, ipc.WithAllocator(alloc))
if err != nil {
    log.Println(err.Error())
    return nil
}
defer reader.Release()

records := make([]array.Record, 0)
for reader.Next() {
    rec := reader.Record()
    rec.Retain()
    defer rec.Release()
    records = append(records, rec)
}
table := array.NewTableFromRecords(reader.Schema(), records)
Here I can get the column info from table.Column(index), such as:
for i := range table.Schema().Fields() {
    a := table.Column(i)
    log.Println(a)
}
But the Column struct is defined as
type Column struct {
    field arrow.Field
    data  *Chunked
}
and the Println result looks like
["WARN" "WARN" "WARN" "WARN" "WARN" "WARN" "WARN" "WARN" "WARN" "WARN"]
However, this is not a string or a slice. Is there any way I can get the data of each column as a string or []interface{}?
Update:
I found that I can use a type assertion to get an element from the column:
log.Println(col.(*array.Int64).Value(0))
But I am not sure whether this is the recommended way to use it.
When working with Arrow data, there are a couple of concepts to understand:
Array: Metadata + contiguous buffers of data
Record Batch: A schema + a collection of Arrays that are all the same length.
Chunked Array: A group of Arrays of varying lengths but all the same data type. This allows you to treat multiple Arrays as one single column of data without having to copy them all into a contiguous buffer.
Column: just a Field + a Chunked Array
Table: A collection of Columns allowing you to treat multiple non-contiguous arrays as a single large table without having to copy them all into contiguous buffers.
In your case, you're reading multiple record batches (groups of contiguous Arrays) and treating them as a single large table. There are a few different ways you can work with the data:
One way is to use a TableReader:
tr := array.NewTableReader(tbl, 5)
defer tr.Release()

for tr.Next() {
    rec := tr.Record()
    for _, col := range rec.Columns() {
        // do something with the Array
    }
}
Another way would be to interact with the columns directly as you were in your example:
for i := 0; i < int(table.NumCols()); i++ {
    col := table.Column(i)
    for _, chunk := range col.Data().Chunks() {
        // do something with chunk (an arrow.Array)
    }
}
Either way, you eventually have an arrow.Array to deal with, which is an interface implemented by the typed Array types. At this point you are going to have to switch on something; you could type switch on the type of the Array itself:
switch arr := col.(type) {
case *array.Int64:
    // do stuff with arr
case *array.Int32:
    // do stuff with arr
case *array.String:
    // do stuff with arr
// ...
}
Alternatively, you could switch on the data type ID:
switch col.DataType().ID() {
case arrow.INT64:
    // type assertion needed: col.(*array.Int64)
case arrow.INT32:
    // type assertion needed: col.(*array.Int32)
// ...
}
For getting the data out of the array, primitive types which are stored contiguously tend to have a *Values method which will return a slice of the type. For example array.Int64 has Int64Values() which returns []int64. Otherwise, all of the types have .Value(int) methods which return the value at a particular index as you showed in your example.
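For example, here is a minimal sketch that flattens one chunk into the []interface{} shape asked about, covering just two types and skipping null handling (it assumes the v9 arrow.Array interface):

import (
    "github.com/apache/arrow/go/v9/arrow"
    "github.com/apache/arrow/go/v9/arrow/array"
)

// toSlice copies a single chunk into []interface{}; extend the
// switch with more cases as needed.
func toSlice(arr arrow.Array) []interface{} {
    out := make([]interface{}, arr.Len())
    switch a := arr.(type) {
    case *array.Int64:
        for i, v := range a.Int64Values() { // contiguous []int64
            out[i] = v
        }
    case *array.String:
        for i := 0; i < a.Len(); i++ {
            out[i] = a.Value(i)
        }
    }
    return out
}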
Hope this helps!
Make sure you use v9 (import "github.com/apache/arrow/go/v9/arrow"), because it has json.Marshaler implemented (from go-json).
Use "github.com/goccy/go-json" for the Marshaler (because of this).
Then you can use a TableReader to Marshal each record's columns and Unmarshal the result into []any.
In your example it might look like this:
import (
    "github.com/apache/arrow/go/v9/arrow"
    "github.com/apache/arrow/go/v9/arrow/array"
    "github.com/apache/arrow/go/v9/arrow/memory"
    "github.com/goccy/go-json"
)

...

tr := array.NewTableReader(tabel, 6)
defer tr.Release()

// keySlice keeps the columns in the same order as the data source.
keySlice := make([]string, 0, tabel.NumCols())
res := make(map[string][]any)
var key string
for tr.Next() {
    rec := tr.Record()
    for i, col := range rec.Columns() {
        key = rec.ColumnName(i)
        if res[key] == nil {
            res[key] = make([]any, 0)
            keySlice = append(keySlice, key)
        }
        var tmp []any
        b2, err := json.Marshal(col)
        if err != nil {
            panic(err)
        }
        err = json.Unmarshal(b2, &tmp)
        if err != nil {
            panic(err)
        }
        res[key] = append(res[key], tmp...)
    }
}
fmt.Println("res", res)
In the Params model I have an array of int, Cat_id.
I make a request: localhost:8080/products/?cat_id=1,2
I want to display products from both of these categories. How can I build my query correctly?
My func:
func GetAllIproducts(q *models.Products, pagination *models.Params) (*[]models.Products, error) {
    var prod []models.Products
    offset := (pagination.Page - 1) * pagination.Limit
    result := config.DB.Model(&models.Products{}).
        Where(q).
        Where("cat_id=?", pagination.Cat_id). // Problem is here
        Limit(pagination.Limit).
        Offset(offset).
        Find(&prod)
    if result.Error != nil {
        return nil, result.Error
    }
    return &prod, nil
}
When I use Debug I get this:
SELECT * FROM "products" WHERE cat_id=(1,2) AND "products"."deleted_at" IS NULL
Assuming that cat_id is an integer (let's assume int64), you could do these two things:
1. Convert the pagination.Cat_id string to an []int64 slice (let's call this variable catIDs) so you get a slice of separated int64 elements, as shown in the sketch below.
2. Change your Where clause to something like this:
result := config.DB.Model(&models.Products{}).
    Where(q).
    Where("cat_id IN (?)", catIDs).
    Limit(pagination.Limit).
    Offset(offset).
    Find(&prod)
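A sketch of that first step, assuming pagination.Cat_id arrives as the raw "1,2" string:

import (
    "strconv"
    "strings"
)

// parseCatIDs turns "1,2" into []int64{1, 2}.
func parseCatIDs(raw string) ([]int64, error) {
    parts := strings.Split(raw, ",")
    catIDs := make([]int64, 0, len(parts))
    for _, p := range parts {
        id, err := strconv.ParseInt(strings.TrimSpace(p), 10, 64)
        if err != nil {
            return nil, err
        }
        catIDs = append(catIDs, id)
    }
    return catIDs, nil
}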
I have a service which takes an SQL query and runs it on Amazon Redshift using the database/sql drivers. However, I can't convert the result to a struct, because the queries are big-data tasks over various tables not created within this service, so I have to return a 'loose' data structure. I'm parsing the returned data into JSON and storing it in S3.
However, I'm having some odd issues with the returned data types. For numeric columns, the queries return a []uint8 instead of a numeric value. I understand that this is because the database driver can't have an opinion on what to convert it to, since the conversion could be imprecise. But I can't seem to convert between []uint8 and an integer either.
Here's my code that queries the database:
// Execute executes SQL commands
func (r *Runner) Execute(query string, args ...interface{}) (types.Results, error) {
    var results types.Results
    rows, err := r.db.Query(query, args...)
    if err != nil {
        return results, err
    }
    columns, _ := rows.Columns()
    colNum := len(columns)
    values := make([]interface{}, colNum)
    for i := range values {
        var ii interface{}
        values[i] = &ii
    }
    for rows.Next() {
        rows.Scan(values...)
        result := make(types.Result)
        for i, colName := range columns {
            rawValue := *(values[i].(*interface{}))
            if reflect.TypeOf(rawValue).String() == "[]uint8" {
                byteVal := rawValue.([]byte)
                val := Intfrombytes(byteVal)
                log.Println("Converted:", val)
            }
            result[colName] = rawValue
        }
        results = append(results, result)
    }
    return results, nil
}
I created the following function to attempt to convert a []uint8 into an integer:
func Intfrombytes(bytes []uint8) uint16 {
    bits := binary.LittleEndian.Uint16(bytes)
    return bits
}
However, if I insert 200 into that table, I get back 12339. The approach feels pretty flaky in general, and I'm doubting my decision to use Go for this, since I'm dealing with undefined, loose data structures.
Is there a better approach to generic queries such as this, or is there a way I can convert my numeric results into an integer?
I think you might be interpreting a string ([]uint8 == []byte), actually. See https://play.golang.org/p/Rfpey2NPiI7
originalValue := []uint8{0x32, 0x30, 0x30} // "200"
bValue := []byte(originalValue) // byte is a uint8 anyway
fmt.Printf("Converted to uint16: %d\n", binary.LittleEndian.Uint16(bValue))
fmt.Printf("Actual value: %s", string(bValue))
This has bitten me before when dealing with pq and some crypto code.
I am trying to do a bulk insert. I use gorm (github.com/jinzhu/gorm):
import (
    dB "github.com/edwinlab/api/repositories"
)

func Update() error {
    tx := dB.GetWriteDB().Begin()
    sqlStr := "INSERT INTO city(code, name) VALUES (?, ?),(?, ?)"
    vals := []interface{}{}
    vals = append(vals, "XX1", "Jakarta")
    vals = append(vals, "XX2", "Bandung")
    tx.Exec(sqlStr, vals)
    tx.Commit()
    return nil
}
But I got an error:
Error 1136: Column count doesn't match value count at row 1
because it generates the wrong query:
INSERT INTO city(code, name) VALUES ('XX1','Jakarta','XX2','Bandung', %!v(MISSING)),(%!v(MISSING), %!v(MISSING))
If I use a manual query, it works:
tx.Exec(sqlStr, "XX1", "Jakarta", "XX2", "Bandung")
It will generate:
INSERT INTO city(code, name) VALUES ('XX1', 'Jakarta'),('XX2', 'Bandung')
The problem is how to make the slice of interface{} expand into the individual arguments "XX1", "Jakarta", ...
Thanks for the help.
If you want to pass the elements of a slice to a function with a variadic parameter, you have to use ... to tell the compiler you want to pass all elements individually rather than passing the slice value as a single argument, so simply do:
tx.Exec(sqlStr, vals...)
This is detailed in the spec: Passing arguments to ... parameters.
With gorm, Exec() has the signature:
func (s *DB) Exec(sql string, values ...interface{}) *DB
So you have to pass vals.... Also don't forget to check the returned Error, e.g.:
if err := tx.Exec(sqlStr, vals...).Error; err != nil {
    // handle error
}
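So the fixed version of the original function would look roughly like this (a sketch, keeping the gorm transaction):

func Update() error {
    tx := dB.GetWriteDB().Begin()
    sqlStr := "INSERT INTO city(code, name) VALUES (?, ?),(?, ?)"
    vals := []interface{}{"XX1", "Jakarta", "XX2", "Bandung"}

    // vals... expands the slice into individual arguments,
    // one per ? placeholder.
    if err := tx.Exec(sqlStr, vals...).Error; err != nil {
        tx.Rollback()
        return err
    }
    return tx.Commit().Error
}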
How can I write this more simply in Go?
var planningDate string
date, ok := data["planningDate"]
if !ok {
    planningDate = util.TimeStamp()
} else {
    planningDate = date
}
Thanks
I don't see any way to do this in a single line, as there is no ternary operator in Go. You cannot use | either, as the operands are not numbers. However, here is a solution in three lines (assuming date was just a temporary variable):
planningDate, ok := data["planningDate"]
if !ok {
    planningDate = util.TimeStamp()
}
You can do something like:
func T(exp bool, a, b interface{}) interface{} {
    if exp {
        return a
    }
    return b
}
and use it whenever you want, like a ternary operator (note that you must type-assert the interface{} result back to string):
planningDate = T(ok, date, util.TimeStamp()).(string)