How to append to an empty interface (that has been verified to be a *[]struct)?
func main() {
var mySlice []myStruct // myStruct can be any struct (dynamic)
decode(&mySlice, "...")
}
func decode(dest interface{}, src string) {
// assume dest has been verified to be *[]struct
var modelType reflect.Type = getStructType(dest)
rows, fields := getRows(src)
for _, row := range rows {
// create new struct of type modelType and assign all fields
model := reflect.New(modelType)
for _, field := range fields {
fieldValue := getRowValue(row, field)
model.Elem().FieldByName(field).Set(fieldValue)
}
castedModelRow := model.Elem().Interface()
// append model to dest; how to do this?
// dest = append(dest, castedModelRow)
}
}
Things I've tried:
This simply panics with reflect: call of reflect.Append on ptr Value (since we pass &mySlice instead of mySlice):
dest = reflect.Append(reflect.ValueOf(dest), reflect.ValueOf(castedModelRow))
This runs without error but doesn't set the value back into dest: in the main func, len(mySlice) remains 0 after decode is called.
func decode(dest interface{}, src string) {
...
result := reflect.MakeSlice(reflect.SliceOf(modelType), rowCount, rowCount)
for _, row := range rows {
...
result = reflect.Append(result, reflect.ValueOf(castedModelRow))
}
dest = reflect.ValueOf(result)
}
Here's how to fix the second decode function shown in the question. The statement
dest = reflect.ValueOf(result)
modifies local variable dest, not the caller's value. Use the following statement to modify the caller's slice:
reflect.ValueOf(dest).Elem().Set(result)
The code in the question appends decoded elements after the elements created in reflect.MakeSlice. The resulting slice has len(rows) zero values followed by len(rows) decoded values. Fix by changing
result = reflect.Append(result, reflect.ValueOf(castedModelRow))
to:
result.Index(i).Set(model)
Here's the updated version of the second decode function in the question:
func decode(dest interface{}, src string) {
var modelType reflect.Type = getStructType(dest)
rows, fields := getRows(src)
result := reflect.MakeSlice(reflect.SliceOf(modelType), len(rows), len(rows))
for i, row := range rows {
model := reflect.New(modelType).Elem()
for _, field := range fields {
fieldValue := getRowValue(row, field)
model.FieldByName(field).Set(fieldValue)
}
result.Index(i).Set(model)
}
reflect.ValueOf(dest).Elem().Set(result)
}
Run it on the Playground.
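As an aside, getStructType isn't shown in the question; a minimal sketch of what it could look like, assuming dest has already been verified to be a *[]SomeStruct:
func getStructType(dest interface{}) reflect.Type {
    // pointer -> slice -> element (struct) type
    return reflect.TypeOf(dest).Elem().Elem()
}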
You were very close with your original solution. You had to dereference the pointer before calling the append operation. This approach is helpful if your dest already has some existing elements that you don't want to lose by creating a new slice.
tempDest := reflect.ValueOf(dest).Elem()
tempDest = reflect.Append(tempDest, reflect.ValueOf(model.Interface()))
As the answer by I Love Reflection points out, you finally need to set the new slice back through the pointer.
reflect.ValueOf(dest).Elem().Set(tempDest)
Overall Decode:
var modelType reflect.Type = getStructType(dest)
rows, fields := getRows(src)
tempDest := reflect.ValueOf(dest).Elem()
for _, row := range rows {
model := reflect.New(modelType).Elem()
for _, field := range fields {
fieldValue := getRowValue(row, field)
model.FieldByName(field).Set(fieldValue)
}
tempDest = reflect.Append(tempDest, reflect.ValueOf(model.Interface()))
}
reflect.ValueOf(dest).Elem().Set(tempDest)
Related
I am using apache-arrow/go to read parquet data.
I can parse the data into a table using apache-arrow.
reader, err := ipc.NewReader(buf, ipc.WithAllocator(alloc))
if err != nil {
log.Println(err.Error())
return nil
}
defer reader.Release()
records := make([]array.Record, 0)
for reader.Next() {
rec := reader.Record()
rec.Retain()
defer rec.Release()
records = append(records, rec)
}
table := array.NewTableFromRecords(reader.Schema(), records)
Here, I can get the column info from table.Column(index), such as:
for i := range table.Schema().Fields() {
a := table.Column(i)
log.Println(a)
}
But the Column struct is defined as
type Column struct {
field arrow.Field
data *Chunked
}
and the println output looks like this:
["WARN" "WARN" "WARN" "WARN" "WARN" "WARN" "WARN" "WARN" "WARN" "WARN"]
However, this is not a string or a slice. Is there any way I can get the data of each column as a string or as []interface{}?
Update:
I found that I can get an element from col with a type assertion.
log.Println(col.(*array.Int64).Value(0))
But I am not sure if this is the recommended way to do it.
When working with Arrow data, there are a couple of concepts to understand:
Array: Metadata + contiguous buffers of data
Record Batch: A schema + a collection of Arrays that are all the same length.
Chunked Array: A group of Arrays of varying lengths but all the same data type. This allows you to treat multiple Arrays as one single column of data without having to copy them all into a contiguous buffer.
Column: Is just a Field + a Chunked Array
Table: A collection of Columns allowing you to treat multiple non-contiguous arrays as a single large table without having to copy them all into contiguous buffers.
In your case, you're reading multiple record batches (groups of contiguous Arrays) and treating them as a single large table. There are a few different ways you can work with the data:
One way is to use a TableReader:
tr := array.NewTableReader(tbl, 5)
defer tr.Release()
for tr.Next() {
rec := tr.Record()
for i, col := range rec.Columns() {
// do something with the Array
}
}
Another way would be to interact with the columns directly as you were in your example:
for i := 0; i < int(table.NumCols()); i++ {
col := table.Column(i)
for _, chunk := range col.Data().Chunks() {
// do something with chunk (an arrow.Array)
}
}
Either way, you eventually have an arrow.Array to deal with, which is an interface containing one of the typed Array types. At this point you are going to have to switch on something; you could type switch on the type of the Array itself:
switch arr := col.(type) {
case *array.Int64:
// do stuff with arr
case *array.Int32:
// do stuff with arr
case *array.String:
// do stuff with arr
...
}
Alternatively, you could type switch on the data type:
switch col.DataType().ID() {
case arrow.INT64:
// type assertion needed col.(*array.Int64)
case arrow.INT32:
// type assertion needed col.(*array.Int32)
...
}
For getting the data out of the array, primitive types which are stored contiguously tend to have a *Values method which will return a slice of the type. For example array.Int64 has Int64Values() which returns []int64. Otherwise, all of the types have .Value(int) methods which return the value at a particular index as you showed in your example.
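To make that concrete, here is a sketch of copying one chunk into a []interface{} (chunkValues is my own helper name; the import paths assume the v9 module layout, and older releases name the interface array.Interface rather than arrow.Array):
import (
    "github.com/apache/arrow/go/v9/arrow"
    "github.com/apache/arrow/go/v9/arrow/array"
)

// chunkValues copies the values of a single chunk into a []interface{},
// using nil for null slots.
func chunkValues(arr arrow.Array) []interface{} {
    out := make([]interface{}, 0, arr.Len())
    switch a := arr.(type) {
    case *array.Int64:
        for i, v := range a.Int64Values() { // primitives expose a *Values slice
            if a.IsNull(i) {
                out = append(out, nil)
                continue
            }
            out = append(out, v)
        }
    case *array.String:
        for i := 0; i < a.Len(); i++ { // strings are read per index with Value(i)
            if a.IsNull(i) {
                out = append(out, nil)
                continue
            }
            out = append(out, a.Value(i))
        }
    default:
        // add cases for the other types in your schema
    }
    return out
}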
Hope this helps!
Make sure you use v9 (import "github.com/apache/arrow/go/v9/arrow"), because its arrays implement json.Marshaler (from go-json).
Use "github.com/goccy/go-json" for the Marshaler (because of this).
Then you can use a TableReader to Marshal each column and Unmarshal it into a []any.
In your example it might look like this:
import (
"github.com/apache/arrow/go/v9/arrow"
"github.com/apache/arrow/go/v9/arrow/array"
"github.com/apache/arrow/go/v9/arrow/memory"
"github.com/goccy/go-json"
)
...
tr := array.NewTableReader(table, 6)
defer tr.Release()
// fmt.Printf("tbl.NumRows() = %+v\n", tbl.NumRows())
// fmt.Printf("tbl.NumColumn = %+v\n", tbl.NumCols())
// keySlice is for sorting same as data source
keySlice := make([]string, 0, table.NumCols())
res := make(map[string][]any, 0)
var key string
for tr.Next() {
rec := tr.Record()
for i, col := range rec.Columns() {
key = rec.ColumnName(i)
if res[key] == nil {
res[key] = make([]any, 0)
keySlice = append(keySlice, key)
}
var tmp []any
b2, err := json.Marshal(col)
if err != nil {
panic(err)
}
err = json.Unmarshal(b2, &tmp)
if err != nil {
panic(err)
}
// fmt.Printf("key = %s\n", key)
// fmt.Printf("tmp = %+v\n", tmp)
res[key] = append(res[key], tmp...)
}
}
fmt.Println("res", res)
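Since keySlice preserves the column order of the source, you can, for example, print the result in that order:
for _, k := range keySlice {
    fmt.Printf("%s: %v\n", k, res[k])
}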
This question already has answers here:
Access struct property by name
(5 answers)
Golang dynamic access to a struct property
(2 answers)
How to access to a struct parameter value from a variable in Golang
(1 answer)
Closed 9 months ago.
I came from a JavaScript background and just started with Golang. I am learning all the new terms, and I'm creating a new question because I cannot find the answer I need (probably due to not knowing the right terms to search for).
I created a custom type and a slice of that type, and I want to write a function that retrieves all the values of a specific key and returns them as a slice (brands in this example).
type Car struct {
brand string
units int
}
....
var cars []Car
var singleCar Car
// So I have a loop here, and inside the for-loop I create many single cars
singleCar = Car{
brand: "Mercedes",
units: 20,
}
// and I append the singleCar to cars
cars = append(cars, singleCar)
Now what I want to do is create a function with which I can retrieve all the brands, and I tried the following. I intend key to be a dynamic value, so I can search by a specific key, e.g. brand, model, capacity, etc.
func getUniqueByKey(v []Car, key string) []string {
var combined []string
for i := range v {
combined = append(combined, v[i][key])
// this line returns an error:
// invalid operation: cannot index v[i] (map index expression of type Car) [NonIndexableOperand]
}
return combined
// This is supposed to return ["Mercedes", "Honda", "Ferrari"]
}
The above function is supposed to work if I use getUniqueByKey(cars, "brand"), where brand is the key in this example. But I do not know the right syntax, so it's returning an error.
It seems like you're trying to access a struct field with an index expression, which doesn't work in Go. You'd need to write code for each field. Here's an example with the brands:
func getUniqueBrands(v []Car) []string {
var combined []string
tempMap := make(map[string]bool)
for _, c := range v {
if _, p := tempMap[c.brand]; !p {
tempMap[c.brand] = true
combined = append(combined, c.brand)
}
}
return combined
}
Also, note the for loop being used to get the value of Car here. Go's range can be used to iterate over just indices or both indices and values. The index is discarded by assigning to _.
I would recommend re-using this code with an added switch-case block to get the result you want. If you need to return multiple types, use interface{} and type assertion.
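A sketch of that idea, keeping string-valued fields and the same de-duplication (the "model"/"capacity" cases are placeholders for fields the question mentions but the Car type doesn't define):
func getUniqueByKey(v []Car, key string) []string {
    var combined []string
    seen := make(map[string]bool)
    for _, c := range v {
        var val string
        switch key {
        case "brand":
            val = c.brand
        // add a case per supported field, e.g. "model", "capacity",
        // once those exist on Car
        default:
            continue // unknown key: skip (or return an error instead)
        }
        if !seen[val] {
            seen[val] = true
            combined = append(combined, val)
        }
    }
    return combined
}
Usage would then be brands := getUniqueByKey(cars, "brand").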
Maybe you could marshal your struct into json data then convert it to a map. Example code:
package main
import (
"encoding/json"
"fmt"
)
type RandomStruct struct {
FieldA string
FieldB int
FieldC string
RandomFieldD bool
RandomFieldE interface{}
}
func main() {
fieldName := "FieldC"
randomStruct := RandomStruct{
FieldA: "a",
FieldB: 5,
FieldC: "c",
RandomFieldD: false,
RandomFieldE: map[string]string{"innerFieldA": "??"},
}
randomStructs := make([]RandomStruct, 0)
randomStructs = append(randomStructs, randomStruct, randomStruct, randomStruct)
res := FetchRandomFieldAndConcat(randomStructs, fieldName)
fmt.Println(res)
}
func FetchRandomFieldAndConcat(randomStructs []RandomStruct, fieldName string) []interface{} {
res := make([]interface{}, 0)
for _, randomStruct := range randomStructs {
jsonData, _ := json.Marshal(randomStruct)
jsonMap := make(map[string]interface{})
err := json.Unmarshal(jsonData, &jsonMap)
if err != nil {
fmt.Println(err)
// panic(err)
}
value, exists := jsonMap[fieldName]
if exists {
res = append(res, value)
}
}
return res
}
I would like to loop through a slice of structs, and populate a struct field (which is a map) by passing in each struct to a function.
I have the below struct
type thing struct {
topicThing map[string]int
}
and I have the below functions
func main() {
ths := make([]thing, 0)
for i := 0; i < 10; i++ {
var th thing
ths = append(ths, th)
}
for _, th := range ths {
dothing(&th)
}
for _, th := range ths {
fmt.Println(th.topicThing)
}
}
func dothing(th *thing) {
tc := make(map[string]int)
tc["Hello"] = 1
tc["Bye"] = 2
th.topicThing = tc
}
The main function creates a slice of things (referred to as ths) and passes each thing to the dothing() function by iterating over them.
Within dothing(), I create a new map, populate it with data, and assign it to the passed-in thing's field. However, by the time we iterate over ths in the main function to print topicThing of each thing, the map is empty.
Since make() allocates on the heap, I was hoping the map would be accessible even outside of the function scope. Can anyone tell me why this is happening?
P.S.
if I change the dothing() function like below:
func dothing(th *thing) {
th.topicThing["Hello"] = 1
th.topicThing["Bye"] = 2
}
The code works as expected, meaning the map is populated with data when accessed in the main function.
The range copies your object.
So when you do this,
for _, th := range ths {
dothing(&th)
}
you are actually calling dothing on a copy.
For example, with this main:
func main() {
ths := make([]thing, 0)
for i := 0; i < 10; i++ {
var th thing
ths = append(ths, th)
}
for _, th := range ths {
dothing(&th)
fmt.Println(th.topicThing)
}
}
it will print the right thing, since we are still working on the copy.
To avoid the copy, use the slice index:
for idx := range ths {
dothing(&ths[idx])
}
I connect to BigQuery from Go as the following API documentation demonstrates:
https://cloud.google.com/bigquery/docs/reference/libraries?hl=en_US
After that, I need to get the SQL result at a specific row and column and check whether it equals a specific string. Can I convert bigquery.Value to string, and how do I do that?
See how to use RowIterator.Next() here.
Next loads the next row into dst. Its return value is iterator.Done if there are no more results. Once Next returns iterator.Done, all subsequent calls will return iterator.Done.
dst may implement ValueLoader, or may be a *[]Value, *map[string]Value, or struct pointer.
Value is of type interface{}, so if you are sure that the value you have is a string, str := fmt.Sprintf("%v", row[i]) should work. It is often better to define a struct type whose members represent the fields of a query result row (with types mapped according to the table in the documentation linked above) and pass a pointer to it to RowIterator.Next() instead of a slice/map of bigquery.Value.
type myRow struct {
Name string
Num int
}
// ...
q := client.Query("select name, num from t1")
it, err := q.Read(ctx)
// handle err
for {
// instead of: var row []bigquery.Value
var row myRow // <-- use custom struct type here
err := it.Next(&row)
if err == iterator.Done {
break
}
// handle err != nil
someFuncThatTakesString(row.Name)
}
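If you do stick with []bigquery.Value, the check the question asks about can be done with a plain type assertion, since bigquery.Value is just interface{} (column index 0 and the string "expected" below are placeholders):
var row []bigquery.Value
for {
    err := it.Next(&row)
    if err == iterator.Done {
        break
    }
    // handle err != nil
    if s, ok := row[0].(string); ok && s == "expected" {
        // the cell at the row/column of interest equals the specific string
    }
}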
Team,
I'm new to programming.
I have data available after unmarshaling the JSON, and it has nested key values. I am able to access flat key values; how do I access nested key values?
Here is the data, shown below, after unmarshaling:
tables:[map[name:basic__snatpool_members] map[name:net__snatpool_members] map[name:optimizations__hosts] map[columnNames:[name] name:pool__hosts rows:[map[row:[ry.hj.com]]]] traffic_group:/Common/traffic-group-1
I am able to access flat key values by using the following code:
p.TrafficGroup = m["traffic_group"].(string)
Here is the complete function:
func dataToIapp(name string, d *schema.ResourceData) bigip.Iapp {
var p bigip.Iapp
var obj interface{}
jsonblob := []byte(d.Get("jsonfile").(string))
err := json.Unmarshal(jsonblob, &obj)
if err != nil {
fmt.Println("error", err)
}
m := obj.(map[string]interface{}) // Important: to access property
p.Name = m["name"].(string)
p.Partition = m["partition"].(string)
p.InheritedDevicegroup = m["inherited_devicegroup"].(string)
}
Note: This may not work with your JSON structure. I inferred what it would be based on your question, but without the actual structure I cannot guarantee this will work without modification.
If you want to access them in a map, you need to assert that the interface pulled from the first map is actually a map. So you would need to do this:
tmp := m["tables"]
tables, ok := tmp.(map[string]string)
if !ok {
//error handling here
}
p.Name = tables["name"]
But instead of accessing the unmarshaled JSON as a map[string]interface{}, why don't you create structs that match your JSON output?
type JSONRoot struct {
Name string `json:"name"`
Partition string `json:"partition"`
InheritedDevicegroup string `json:"inherited_devicegroup"`
Tables map[string]string `json:"tables"` //Ideally, this would be a map of structs
}
Then in your code:
func dataToIapp(name string, d *schema.ResourceData) bigip.Iapp {
var p bigip.Iapp
obj := &JSONRoot{}
jsonblob := []byte(d.Get("jsonfile").(string))
err := json.Unmarshal(jsonblob, obj)
if err != nil {
fmt.Println("error", err)
}
p.Name = obj.Name
p.Partition = obj.Partition
p.InheritedDevicegroup = obj.InheritedDevicegroup
p.Name = obj.Tables["name"]
}
JSON objects are unmarshaled into map[string]interface{}, JSON arrays into []interface{}, and the same applies to nested objects/arrays.
So for example if a key/index maps to a nested object you need to type assert the value to map[string]interface{} and if the key/index maps to an array of objects you first need to assert the value to []interface{} and then each element to map[string]interface{}.
e.g. (for brevity, this code does not guard against panics):
tables := obj.(map[string]interface{})["tables"]
table1 := tables.([]interface{})[0]
name := table1.(map[string]interface{})["name"]
namestr := name.(string)
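If you'd rather fail gracefully than panic when the shape is unexpected, the same lookup with the comma-ok form could look like this (tableName is a hypothetical helper name):
func tableName(obj interface{}) (string, error) {
    m, ok := obj.(map[string]interface{})
    if !ok {
        return "", fmt.Errorf("root is not a JSON object")
    }
    tables, ok := m["tables"].([]interface{})
    if !ok || len(tables) == 0 {
        return "", fmt.Errorf("tables is missing or empty")
    }
    table1, ok := tables[0].(map[string]interface{})
    if !ok {
        return "", fmt.Errorf("table entry is not a JSON object")
    }
    name, ok := table1["name"].(string)
    if !ok {
        return "", fmt.Errorf("name is missing or not a string")
    }
    return name, nil
}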
However, if it's the case that the json you are parsing is not dynamic but instead has a specific structure you should define a struct type that mirrors that structure and unmarshal the json into that.
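For example, based on the dump in the question, the mirrored types could look something like this (the field names and JSON tags are inferred from the dump, so treat it as a sketch):
type Row struct {
    Row []string `json:"row"`
}

type Table struct {
    Name        string   `json:"name"`
    ColumnNames []string `json:"columnNames,omitempty"`
    Rows        []Row    `json:"rows,omitempty"`
}

type IappJSON struct {
    Name                 string  `json:"name"`
    Partition            string  `json:"partition"`
    InheritedDevicegroup string  `json:"inherited_devicegroup"`
    TrafficGroup         string  `json:"traffic_group"`
    Tables               []Table `json:"tables"`
}
After json.Unmarshal(jsonblob, &cfg) with var cfg IappJSON, nested values are reached directly, e.g. cfg.Tables[0].Name, with no type assertions.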
All you have to do is repeatedly access the map via a type switch or type assertions:
for _, value := range m {
    switch val := value.(type) {
    case string:
        fmt.Println("value is a string")
    case float64:
        fmt.Println("value is a number") // JSON numbers decode to float64, not int
    // This is your case: "tables" is unmarshaled to a []interface{} whose elements are map[string]interface{}
    case []interface{}:
        fmt.Println("value is a slice of interface{}")
        for _, tb := range val {
            if entry, ok := tb.(map[string]interface{}); ok {
                // Now the nested keys are accessible
                fmt.Println(entry["name"])
            }
        }
    default:
        fmt.Println("unknown type")
    }
}
You might want to handle errors better than this.
To read more, check out my writing from a while ago https://medium.com/code-zen/dynamically-creating-instances-from-key-value-pair-map-and-json-in-go-feef83ab9db2.