I am getting nested data from MongoDB and I want to flatten it into a structure so I can store it in a CSV file.
The data looks like this:
{
    "_id" : "bec7bfaa-7a47-4f61-a463-5966a2b5c8ce",
    "data" : {
        "driver" : {
            "etaToStore" : 156
        },
        "createdAt" : 1532590052,
        "_id" : "07703a33-a3c3-4ad5-9e06-d05063474d8c"
    }
}
And the structure I want to eventually get should be something like this:
type EventStruct struct {
    Id         string `bson:"_id"`
    DataId     string `bson:"data._id"`
    EtaToStore string `bson:"data.driver.etaToStore"`
    CreatedAt  int    `bson:"data.createdAt"`
}
This doesn't work, so following some SO answers I broke it down into multiple structures:
// Creating a structure for the inner struct that I will receive from the query
type DriverStruct struct {
    EtaToStore int `bson:"etaToStore"`
}

type DataStruct struct {
    Id        string       `bson:"_id"`
    Driver    DriverStruct `bson:"driver"`
    CreatedAt int          `bson:"createdAt"`
}

// Flattening out the structure & getting only the fields we need
type EventStruct struct {
    Id   string     `bson:"_id"`
    Data DataStruct `bson:"data"`
}
This gets all the data from the Mongo query result but it's nested:
{
    "Id": "bec7bfaa-7a47-4f61-a463-5966a2b5c8ce",
    "Data": {
        "Id": "07703a33-a3c3-4ad5-9e06-d05063474d8c",
        "Driver": {
            "EtaToStore": 156
        },
        "CreatedAt": 1532590052
    }
}
What I want to end up with is:
{
    "Id": "bec7bfaa-7a47-4f61-a463-5966a2b5c8ce",
    "DataId": "07703a33-a3c3-4ad5-9e06-d05063474d8c",
    "EtaToStore": 156,
    "CreatedAt": 1532590052
}
I'm sure there's an easy way to do this but I can't figure it out, help!
You can implement the json.Unmarshaler interface to add a custom method for unmarshaling the JSON. In that method you can work with the nested struct format, but fill the flattened one at the end.
func (es *EventStruct) UnmarshalJSON(data []byte) error {
    // define private models for the nested data format
    type driverInner struct {
        EtaToStore int `bson:"etaToStore" json:"etaToStore"`
    }
    type dataInner struct {
        ID        string      `bson:"_id" json:"_id"`
        Driver    driverInner `bson:"driver" json:"driver"`
        CreatedAt int         `bson:"createdAt" json:"createdAt"`
    }
    type nestedEvent struct {
        ID   string    `bson:"_id" json:"_id"`
        Data dataInner `bson:"data" json:"data"`
    }

    var ne nestedEvent
    if err := json.Unmarshal(data, &ne); err != nil {
        return err
    }

    // create the struct in the desired flattened format
    tmp := &EventStruct{
        ID:         ne.ID,
        DataID:     ne.Data.ID,
        EtaToStore: ne.Data.Driver.EtaToStore,
        CreatedAt:  ne.Data.CreatedAt,
    }

    // reassign the method receiver pointer
    // to store the values in the struct
    *es = *tmp
    return nil
}
Runnable example: https://play.golang.org/p/83VHShfE5rI
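For context, here is a minimal sketch of the flattened EventStruct this method fills (the field names are assumed from the method body; the playground link has the answer's full version) together with a usage example. Combined with the UnmarshalJSON method above, it is runnable:

package main

import (
    "encoding/json"
    "fmt"
    "log"
)

// Flattened output type assumed by the UnmarshalJSON method above.
type EventStruct struct {
    ID         string `json:"Id"`
    DataID     string `json:"DataId"`
    EtaToStore int    `json:"EtaToStore"`
    CreatedAt  int    `json:"CreatedAt"`
}

// ... plus the UnmarshalJSON method from above.

func main() {
    raw := []byte(`{"_id":"bec7bfaa-7a47-4f61-a463-5966a2b5c8ce","data":{"driver":{"etaToStore":156},"createdAt":1532590052,"_id":"07703a33-a3c3-4ad5-9e06-d05063474d8c"}}`)

    var es EventStruct
    if err := json.Unmarshal(raw, &es); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%+v\n", es)
    // {ID:bec7bfaa-... DataID:07703a33-... EtaToStore:156 CreatedAt:1532590052}
}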
This question is a year and a half old, but I ran into it today while reacting to an API update that put me in the same situation, so here's my solution (which, admittedly, I haven't tested with bson, but I'm assuming the json and bson field-tag implementations handle this the same way).
Embedded (sometimes referred to as anonymous) fields can capture JSON, so you can compose several structs into a compound one that behaves like a single structure. Given the document from the question:
{
    "_id" : "bec7bfaa-7a47-4f61-a463-5966a2b5c8ce",
    "data" : {
        "driver" : {
            "etaToStore" : 156
        },
        "createdAt" : 1532590052,
        "_id" : "07703a33-a3c3-4ad5-9e06-d05063474d8c"
    }
}
type DriverStruct struct {
    EtaToStore int `bson:"etaToStore"`
}

type DataStruct struct {
    DriverStruct `bson:"driver"`
    DataId       string `bson:"_id"`
    CreatedAt    int    `bson:"createdAt"`
}

type EventStruct struct {
    DataStruct `bson:"data"`
    Id         string `bson:"_id"`
}
You can access the nested fields of an embedded struct as though the parent struct contained an equivalent field, so e.g. EventStructInstance.EtaToStore is a valid way to get at them.
Benefits:
- You don't have to implement the Marshaler or Unmarshaler interfaces, which is a little overkill for this problem
- No copying of fields between intermediate structs is required
- Both marshalling and unmarshalling are handled for free
Read more about embedded fields here.
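To illustrate, a sketch assuming the mgo bson package (gopkg.in/mgo.v2/bson): the embedded fields are promoted, so the decoded values can be read flat, and encoding/json inlines the untagged embedded structs when marshalling, so the output is already the flat shape the question asks for.

package main

import (
    "encoding/json"
    "fmt"
    "log"

    "gopkg.in/mgo.v2/bson"
)

// DriverStruct, DataStruct and EventStruct as defined above.

func main() {
    // Build a BSON document shaped like the one in the question.
    doc := bson.M{
        "_id": "bec7bfaa-7a47-4f61-a463-5966a2b5c8ce",
        "data": bson.M{
            "driver":    bson.M{"etaToStore": 156},
            "createdAt": 1532590052,
            "_id":       "07703a33-a3c3-4ad5-9e06-d05063474d8c",
        },
    }
    raw, err := bson.Marshal(doc)
    if err != nil {
        log.Fatal(err)
    }

    var ev EventStruct
    if err := bson.Unmarshal(raw, &ev); err != nil {
        log.Fatal(err)
    }
    fmt.Println(ev.EtaToStore, ev.DataId) // promoted fields, accessed flat

    // The embedded fields carry no json tags, so encoding/json inlines them.
    out, _ := json.Marshal(ev)
    fmt.Println(string(out)) // Id, DataId, EtaToStore and CreatedAt at the top level
}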
You can use basically the same logic as this, adapting it for what's in your JSON:
package utils

// FlattenIntegers flattens nested slices of integers
func FlattenIntegers(slice []interface{}) []int {
    var flat []int
    for _, element := range slice {
        switch element.(type) {
        case []interface{}:
            flat = append(flat, FlattenIntegers(element.([]interface{}))...)
        case []int:
            flat = append(flat, element.([]int)...)
        case int:
            flat = append(flat, element.(int))
        }
    }
    return flat
}
(Source: https://gist.github.com/Ullaakut/cb1305ede48f2391090d57cde355074f)
If you want it to be generic, you'll need to support all of the types the JSON can contain.
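Adapted to nested JSON objects rather than integer slices, the same recursive idea might look like this (a sketch; the dotted key naming and the FlattenMap name are just choices for illustration):

package main

import (
    "encoding/json"
    "fmt"
)

// FlattenMap recursively flattens nested JSON objects into a single map,
// joining nested keys with dots (e.g. "data.driver.etaToStore").
func FlattenMap(prefix string, in map[string]interface{}, out map[string]interface{}) {
    for k, v := range in {
        key := k
        if prefix != "" {
            key = prefix + "." + k
        }
        if nested, ok := v.(map[string]interface{}); ok {
            FlattenMap(key, nested, out)
            continue
        }
        out[key] = v
    }
}

func main() {
    raw := []byte(`{"_id":"bec7bfaa-7a47-4f61-a463-5966a2b5c8ce","data":{"driver":{"etaToStore":156},"createdAt":1532590052,"_id":"07703a33-a3c3-4ad5-9e06-d05063474d8c"}}`)

    var doc map[string]interface{}
    if err := json.Unmarshal(raw, &doc); err != nil {
        panic(err)
    }

    flat := map[string]interface{}{}
    FlattenMap("", doc, flat)
    fmt.Println(flat)
    // keys: _id, data._id, data.createdAt, data.driver.etaToStore (numbers decode as float64)
}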
Related
Let's say I have the first struct as
type Person struct {
    Name string                 `json:"person_name"`
    Age  int                    `json:"person_age"`
    Data map[string]interface{} `json:"data"`
}
and I am trying to marshal a slice of the above struct.
Things work well up to here, and a sample response I receive is:
[
    {
        "person_name": "name",
        "person_age": 12,
        "data": {}
    },
    {
        "person_name": "name2",
        "person_age": 12,
        "data": {}
    }
]
Now I need to append another struct here, and the final response should look like:
[
    {
        "person_name": "name",
        "person_age": 12,
        "data": {}
    },
    {
        "person_name": "name2",
        "person_age": 12,
        "data": {}
    },
    {
        "newData": "value"
    }
]
Can someone help me with how I can achieve this?
I tried creating an []interface{} and iterating over the persons to append each one, but the issue with that approach is that it makes Data null when it is empty.
I need it to remain an empty map.
Let me preface this by saying that this looks very much like an X-Y problem to me. I can't think of many valid use-cases where one would end up with a defined data type that has to be marshalled alongside a completely different, potentially arbitrary/freeform data structure. It's possible, though, and this is how you could do it:
So you just want to append a completely different struct to the data-set, then marshal it and return the result as JSON? You'll need to create a new slice for that:
personData := []Person{} // person 1 and 2 here
more := map[string]string{ // or some other struct
    "newdata": "value",
}

// The +1 is for more; set the cap to however many objects you need to marshal.
allData := make([]any, 0, len(personData)+1)
for _, p := range personData {
    allData = append(allData, p) // copy over to this slice, because []Person is not compatible with []any
}
allData = append(allData, more)

bJSON, err := json.Marshal(allData)
if err != nil {
    // handle
}
fmt.Println(string(bJSON))
Essentially, because you're trying to marshal a slice containing multiple different types, you have to add all objects to a slice of type any (short for interface{}) before marshalling them all in one go.
Cleaner approaches
There are much, much cleaner approaches that allow you to unmarshal the data, too, assuming the different data-types involved are known beforehand. Consider using a wrapper type like so:
type Person struct{} // the one you have

type NewData struct {
    NewData string `json:"newdata"`
}

type MixedData struct {
    *Person
    *NewData
}
In this MixedData type, both Person and NewData are embedded, so MixedData will essentially act as a merged version of all embedded types (fields with the same name should be overridden at this level). With this type, you can marshal and unmarshal the JSON accordingly:
allData := []MixedData{
    {
        Person: &person1,
    },
    {
        Person: &person2,
    },
    {
        NewData: &newData,
    },
}
Similarly, when you have a JSON []byte input, you can unmarshal it same as you would any other type:
data := []MixedData{}
if err := json.Unmarshal(in, &data); err != nil {
    // handle
}
fmt.Printf("%#v\n", data) // it'll be there
It pays to add some functions/getters to the MixedData type, though:
func (m MixedData) IsPerson() bool { return m.Person != nil }

// GetPerson rather than Person, because the promoted embedded field already uses that name.
func (m MixedData) GetPerson() *Person {
    if m.Person == nil {
        return nil
    }
    cpy := *m.Person // create a copy to avoid shared pointers
    return &cpy      // return a pointer to the copy
}
Do the same for all embedded types and this works like a charm.
As mentioned before, should your embedded types contain fields with the same name, then you should override them in the MixedData type. Say you have a Person and Address type, and both have an ID field:
type MixedData struct {
    ID string `json:"id"`
    *Person
    *Address
}
This will set the ID value on the MixedData type, and all other (non-shared) fields on the corresponding embedded struct. You can then use the getters to set the ID where needed, or use a custom unmarshaller, but I'll leave that to you to implement.
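A minimal sketch of such a custom unmarshaller, assuming Person and Address each also export a string ID field that should mirror the top-level one:

func (m *MixedData) UnmarshalJSON(in []byte) error {
    // A local alias type drops the UnmarshalJSON method, avoiding infinite recursion.
    type alias MixedData
    var tmp alias
    if err := json.Unmarshal(in, &tmp); err != nil {
        return err
    }
    *m = MixedData(tmp)

    // Copy the shared ID down to whichever embedded struct is present.
    if m.Person != nil {
        m.Person.ID = m.ID
    }
    if m.Address != nil {
        m.Address.ID = m.ID
    }
    return nil
}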
I'm trying to figure out how to create a slice containing just the ID values, which I can later iterate over to make a number of API requests. The integers are API IDs. After making a GET that retrieves a JSON array of IDs, I can successfully build a struct with custom types, but I now need to pull only the values from that JSON array and dump them into a slice without the "id" key (the array will likely change in size over time).
This is my JSON:
{
    "data": [
        {
            "id": 38926
        },
        {
            "id": 38927
        }
    ],
    "meta": {
        "pagination": {
            "total": 163795,
            "current_page": 3,
            "total_pages": 81898
        }
    }
}
And I would like this from it:
{38926, 38927}
If you want custom unmarshaling behavior, you need a custom type with its own json.Unmarshaler implementation, e.g.:
type ID int

func (i *ID) UnmarshalJSON(data []byte) error {
    id := struct {
        ID int `json:"id"`
    }{}
    err := json.Unmarshal(data, &id)
    if err != nil {
        return err
    }
    *i = ID(id.ID)
    return nil
}
To use this, reference this type in your struct e.g.
type data struct {
    IDs []ID `json:"data"`
}

var d data
working example: https://go.dev/play/p/i3MAy85nr4X
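For completeness, a minimal usage sketch with the JSON from the question (combined with the ID and data types above; the meta object is simply ignored):

payload := []byte(`{"data":[{"id":38926},{"id":38927}],"meta":{"pagination":{"total":163795,"current_page":3,"total_pages":81898}}}`)

var d data
if err := json.Unmarshal(payload, &d); err != nil {
    log.Fatal(err)
}
fmt.Println(d.IDs) // [38926 38927]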
There is the following item in DynamoDB:
{
    "config": {
        "BASE_AUTH_URL_KEY": "https://auth.blab.bob.com",
        "BASE_URL": "https://api.dummy.data.com",
        "CONN_TIME_OUT_SECONDS": "300000",
        "READ_TIME_OUT_SECONDS": "300000"
    },
    "id": "myConfig"
}
and I am getting the item with dynamodbattribute:
import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/service/dynamodb"
    "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
)

result, err := svc.GetItem(&dynamodb.GetItemInput{
    TableName: aws.String(tableName),
    Key: map[string]*dynamodb.AttributeValue{
        "id": {
            S: aws.String(configId),
        },
    },
})
This code works, but when I try to retrieve the object it is rendered like this:
map[config:{
    M: {
        BASE_AUTH_URL_KEY: {
            S: "https://auth.blab.bob.com"
        },
        CONN_TIME_OUT_SECONDS: {
            S: "300000"
        },
        READ_TIME_OUT_SECONDS: {
            S: "300000"
        },
        BASE_URL: {
            S: "https://api.dummy.data.com"
        }
    }
} id:{
    S: "myConfig"
}]
For that reason, when I try to unmarshal my object, the unmarshalled object comes back as {}:
type Config struct {
    id                 string
    baseAuthUrlKey     string
    baseUrl            string
    connectTimeOutSecs string
    readTimeOutSecs    string
}

item := Config{}
err = dynamodbattribute.UnmarshalMap(result.Item, &item)
How can I assign the value returned from GetItem, which seems to be a map, to my struct?
The root of the issue is that your Config struct is incorrectly structured.
I recommend using json-to-go when converting JSON to Go structs; this tool will help you catch issues like this in the future.
Once you get your struct constructed correctly, you'll also notice that your struct fields are not capitalized, meaning they will not be exported (i.e. able to be used by other packages), which is another reason that your UnmarshalMap code will not return the result you are expecting.
Here is a good answer on struct field visibility and its importance, briefly summarized above.
Below is a corrected version of your struct that, combined with your UnmarshalMap code, will correctly allow you to print your item and not receive a {} which is no fun.
type Item struct {
    Config struct {
        BaseAuthUrlKey     string `json:"BASE_AUTH_URL_KEY"`
        BaseUrl            string `json:"BASE_URL"`
        ConnTimeoutSeconds string `json:"CONN_TIME_OUT_SECONDS"`
        ReadTimeoutSeconds string `json:"READ_TIME_OUT_SECONDS"`
    } `json:"config"`
    ID string `json:"id"`
}
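For instance, reusing the question's result from GetItem with the corrected struct (a sketch; the printed values assume the sample item above):

item := Item{}
if err := dynamodbattribute.UnmarshalMap(result.Item, &item); err != nil {
    // handle the error
}
fmt.Printf("%+v\n", item)
// {Config:{BaseAuthUrlKey:https://auth.blab.bob.com BaseUrl:https://api.dummy.data.com ConnTimeoutSeconds:300000 ReadTimeoutSeconds:300000} ID:myConfig}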
I have this model which I use to save data to the database:
type Nos struct {
    UnitCode string `json:"unitCode" bson:"unitCode"`
    Version  string `json:"version" bson:"version"`
    Reviews  struct {
        ReviewCommentsHistory []reviewCommentsHistory `json:"reviewCommentsHistory" bson:"reviewCommentsHistory"`
    }
    ID        bson.ObjectId `bson:"_id"`
    CreatedAt time.Time     `bson:"created_at"`
    UpdatedAt time.Time     `bson:"updated_at"`
}

type reviewCommentsHistory struct {
    ReviewHistoryDate time.Time `json:"reviewHistoryDate" bson:"reviewHistoryDate,omitempty"`
}
My MongoDB data is as follows:
{
    "_id" : ObjectId("5a992d5975885e236c8dc723"),
    "unitCode" : "G&J/N3601",
    "version" : "3",
    "Reviews" : {
        "reviewCommentsHistory" : [
            {
                "reviewHistoryDate" : ISODate("2018-04-28T18:30:00.000Z")
            }
        ]
    }
}
Using the Go package mgo, I have written the following piece of code to get the document:
func (nosDal NosDal) FindNos(unitCode string, version string) ([]model.Nos, error) {
    var result []model.Nos
    var err error
    col := repository.DB.C("nos")
    err = col.Find(bson.M{"unitCode": strings.ToUpper(unitCode), "version": version}).All(&result)
    fmt.Println(result[0])
    return result, err
}
My response returns null for Reviews.reviewCommentsHistory. Is there an issue with my model? Any pointers on how to check whether the response maps to my model would be useful.
This is my output
{
    "unitCode": "G&J/N3601",
    "version": "3",
    "Reviews": {
        "reviewCommentsHistory": null
    },
    "ID": "5a992d5975885e236c8dc723",
    "CreatedAt": "2018-03-02T16:24:17.19+05:30",
    "UpdatedAt": "2018-03-05T18:04:28.478+05:30"
}
The problem is that you did not specify a bson tag for the Nos.Reviews field, so the default mapping applies and the lowercased field name "reviews" is used. Your MongoDB document stores the field with a capital letter, "Reviews", so unmarshaling will not match it to the Nos.Reviews field.
Specify the missing tag:
Reviews struct {
    ReviewCommentsHistory []reviewCommentsHistory `json:"reviewCommentsHistory" bson:"reviewCommentsHistory"`
} `json:"Reviews" bson:"Reviews"`
And it will work.
I have the following structs...
type Menu struct {
    Id          string     `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
    Name        string     `protobuf:"bytes,2,opt,name=name" json:"name,omitempty"`
    Description string     `protobuf:"bytes,3,opt,name=description" json:"description,omitempty"`
    Mixers      []*Mixer   `protobuf:"bytes,4,rep,name=mixers" json:"mixers,omitempty"`
    Sections    []*Section `protobuf:"bytes,5,rep,name=sections" json:"sections,omitempty"`
}
And...
type Menu struct {
    ID          bson.ObjectId `json:"id" bson:"_id"`
    Name        string        `json:"name" bson:"name"`
    Description string        `json:"description" bson:"description"`
    Mixers      []Mixer       `json:"mixers" bson:"mixers"`
    Sections    []Section     `json:"sections" bson:"sections"`
}
I basically need to convert between the two struct types. I've attempted to use mergo, but that can only merge structs that are assignable to one another. The only solution I have so far is iterating through each struct, converting the ID by re-assigning it between string and bson.ObjectId, then iterating through each nested field and doing the same, which feels like an inefficient solution.
So I'm attempting to use reflection to be more generic when converting between the two IDs, but I can't figure out how to effectively merge all of the other fields that do match automatically, so that I only have to worry about converting between the ID types.
Here's the code I have so far...
package main

import (
    "fmt"
    "reflect"

    "gopkg.in/mgo.v2/bson"
)

type Sub struct {
    Id bson.ObjectId
}

type PbSub struct {
    Id string
}

type PbMenu struct {
    Id   string
    Subs []PbSub
}

type Menu struct {
    Id   bson.ObjectId
    Subs []Sub
}

func main() {
    pbMenus := []*PbMenu{
        &PbMenu{"1", []PbSub{PbSub{"1"}}},
        &PbMenu{"2", []PbSub{PbSub{"1"}}},
        &PbMenu{"3", []PbSub{PbSub{"1"}}},
    }
    newMenus := Serialise(pbMenus)
    fmt.Println(newMenus)
}

type union struct {
    PbMenu
    Menu
}

func Serialise(menus []*PbMenu) []Menu {
    newMenus := []Menu{}
    for _, v := range menus {
        m := reflect.TypeOf(*v)
        fmt.Println(m)
        length := m.NumField()
        for i := 0; i < length; i++ {
            field := m.Field(i)
            fmt.Println(field.Type.Kind())
            if field.Type.Kind() == reflect.Map {
                fmt.Println("is map")
            }
            if field.Name == "Id" && field.Type.String() == "string" {
                // Convert the ID type.
                id := bson.ObjectId(v.Id)
                var dst Menu
                dst.Id = id
                // Need to merge the other matching struct fields here.
                newMenus = append(newMenus, dst)
            }
        }
    }
    return newMenus
}
I can't just manually re-assign the fields, because I'm hoping to detect maps on the struct fields and recursively perform this function on them, and the fields won't be the same on the embedded structs.
Hope this makes sense!
I think it is probably better to write your own converter, because you will always have some cases that are not covered by existing libs/tools.
My initial implementation of it would be something like this: basic impl of structs merger
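For illustration, a minimal hand-written converter over the simplified PbMenu/Menu types from the question (a sketch of the idea, not the linked implementation):

// ToMenu copies a PbMenu into a Menu field by field, converting the string
// IDs to bson.ObjectId explicitly and recursing into the nested slice.
func ToMenu(pb *PbMenu) Menu {
    subs := make([]Sub, 0, len(pb.Subs))
    for _, s := range pb.Subs {
        subs = append(subs, Sub{Id: bson.ObjectId(s.Id)})
    }
    return Menu{
        Id:   bson.ObjectId(pb.Id),
        Subs: subs,
    }
}

It is more typing than a reflection-based merge, but it stays obvious and type-safe when the two shapes drift apart.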