Go: setting values in a complex nested structure

I wanted to clarify how to set values for
type ElkBulkInsert struct {
	Index []struct {
		_Index string `json:"_index"`
		_Id    string `json:"_id"`
	} `json:"index"`
}
to make json.Marshal work. With an ordinary struct there were no problems:
package main

import (
	"encoding/json"
	"fmt"
)

type ElkBulkInsert struct {
	Index []struct {
		_Index string `json:"_index"`
		_Id    string `json:"_id"`
	} `json:"index"`
}

type ElkBulIsertUrl struct {
	Url string `json:"url"`
}

func main() {
	sf := ElkBulIsertUrl{
		Url: "http://mail.ru",
	}

	dd, _ := json.Marshal(sf)
	fmt.Println(string(dd))
}

It's really unclear what you're asking here. Are you asking why the JSON output doesn't match what you expect? Are you unsure how to initialise/set values on the Index field of type []struct{...}?
Because it's quite unclear, I'll attempt to explain why your JSON output may appear to have missing fields (or why not all fields are getting populated), how you can initialise your fields, and how you may be able to improve the types you have.
General answer
If you want to marshal/unmarshal into a struct/type you made, there's a simple rule to keep in mind:
json.Marshal and json.Unmarshal can only access exported fields. An exported field has a capitalised identifier. Your Index field in ElkBulkInsert is a slice of an anonymous struct which has no exported fields (_Index and _Id start with an underscore, so they are unexported).
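To illustrate the rule, here's a minimal sketch (my example, not from the question): a struct with only unexported fields marshals to an empty object, tags or no tags.

package main

import (
	"encoding/json"
	"fmt"
)

type hidden struct {
	_Index string `json:"_index"` // unexported: json.Marshal cannot see it
	_Id    string `json:"_id"`
}

type visible struct {
	Index string `json:"_index"` // exported: marshals as expected
	ID    string `json:"_id"`
}

func main() {
	h, _ := json.Marshal(hidden{_Index: "foo", _Id: "bar"})
	v, _ := json.Marshal(visible{Index: "foo", ID: "bar"})
	fmt.Println(string(h)) // {}
	fmt.Println(string(v)) // {"_index":"foo","_id":"bar"}
}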
Because you're using the json:"_index" tags anyway, the field names don't even have to resemble the JSON fields. It's obviously preferable that they do in most cases, but it's not required. As an aside: you have a field called Url. It's generally considered better form to follow the standard WRT initialisms and rename that field to URL:
Words in names that are initialisms or acronyms (e.g. "URL" or "NATO") have a consistent case. For example, "URL" should appear as "URL" or "url" (as in "urlPony", or "URLPony"), never as "Url". Here's an example: ServeHTTP not ServeHttp.
This rule also applies to "ID" when it is short for "identifier," so write "appID" instead of "appId".
Code generated by the protocol buffer compiler is exempt from this rule. Human-written code is held to a higher standard than machine-written code.
With that being said, simply changing the types to this will work:
type ElkBulkInsert struct {
	Index []struct {
		Index string `json:"_index"`
		ID    string `json:"_id"`
	} `json:"index"`
}

type ElkBulIsertUrl struct {
	URL string `json:"url"`
}
Of course, this implies the data for ElkBulkInsert looks something like:
{
	"index": [
		{
			"_index": "foo",
			"_id": "bar"
		},
		{
			"_index": "fizz",
			"_id": "buzz"
		}
	]
}
When you want to set values for a structure like this, I generally find it easier to shy away from using anonymous struct fields like the one you have in your Index slice, and use something like:
type ElkInsertIndex struct {
	ID    string `json:"_id"`
	Index string `json:"_index"`
}

type ElkBulkInsert struct {
	Index []ElkInsertIndex `json:"index"`
}
This makes it a lot easier to populate the slice:
bulk := ElkBulkInsert{
	Index: make([]ElkInsertIndex, 0, 10), // as an example
}

for i := 0; i < 10; i++ {
	bulk.Index = append(bulk.Index, ElkInsertIndex{
		ID:    fmt.Sprintf("%d", i),
		Index: fmt.Sprintf("Idx#%d", i), // wherever these values come from
	})
}
Even easier (for instance when writing fixtures or unit tests) is to create a pre-populated literal:
data := ElkBulkInsert{
	Index: []ElkInsertIndex{
		{
			ID:    "foo",
			Index: "bar",
		},
		{
			ID:    "fizz",
			Index: "buzz",
		},
	},
}
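Marshaling either value produces the JSON shown earlier; as a quick check (this snippet continues from the literal above):

out, err := json.Marshal(data)
if err != nil {
	// handle the error
}
fmt.Println(string(out))
// {"index":[{"_id":"foo","_index":"bar"},{"_id":"fizz","_index":"buzz"}]}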
With your current type, using the anonymous struct, you can still do the same thing, but it looks messier and requires more maintenance: you have to repeat the type:
data := ElkBulkInsert{
	Index: []struct {
		ID    string `json:"_id"`
		Index string `json:"_index"`
	}{
		{
			ID:    "foo",
			Index: "bar",
		},
		{ // you can omit the field names if you know the order, and initialise all of them
			"fizz",
			"buzz",
		},
	},
}
Omitting field names when initialising is possible in both cases, but I'd advise against it. As fields get added, renamed, or moved around over time, maintaining this mess becomes a nightmare.
That's also why I'd strongly suggest you move away from the anonymous struct here. Anonymous structs can be useful in places, but when you're representing a known data format that comes from, or is sent to, an external party (as JSON tends to be), I find it better to have all the relevant types named, labelled, and documented somewhere. At the very least, you can add comments to each type detailing what values it represents, and you can generate a nice, informative godoc from it all.

Related

UUID field within Go Pact consumer test

I'm currently looking at adding Pact testing into my Go code, and I'm getting stuck on how to deal with fields of type UUID.
I have the following struct, which I use to deserialise a response from an API:
import (
	"github.com/google/uuid"
)

type Foo struct {
	ID          uuid.UUID `json:"id"`
	Name        string    `json:"name"`
	Description string    `json:"description"`
}
Now, when I try to write my consumer test, it looks something like this:
pact.
	AddInteraction().
	Given("A result exists").
	UponReceiving("A request to get all results").
	WithRequest(dsl.Request{
		Method: "get",
		Path:   dsl.String("/v1"),
	}).
	WillRespondWith(dsl.Response{
		Status:  200,
		Headers: dsl.MapMatcher{"Content-Type": dsl.String("application/json")},
		Body:    dsl.Match(&Foo{}),
	})
The problem is that the mocked response comes through as below, where it tries to put an array of bytes in the "id" field. I'm assuming that's because an array of bytes is what the google/uuid library stores it as behind the scenes.
{
	"id": [
		1
	],
	"name": "string",
	"description": "string"
}
Has anyone encountered this? And what would be the best way forward? The only solution I can see is changing my model to use a string, and then manually converting to a UUID within my code.
You currently can't use the Match function this way, as it recurses non-primitive structures, although it should be possible to override this behaviour with struct tags. Could you please raise a feature request?
The simplest approach is to not use the Match method, and just manually express the contract details in the usual way.
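For example, something along these lines (a sketch only, not from the original answer; it assumes the pact-go dsl package's Like and Term matchers, so check the API of the version you're using):

Body: dsl.Like(map[string]interface{}{
	// Term takes an example value and a regex the actual value must match.
	"id": dsl.Term(
		"f47ac10b-58cc-0372-8567-0e02b2c3d479",
		`[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}`,
	),
	"name":        dsl.Like("string"),
	"description": dsl.Like("string"),
}),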
The internal representation of uuid.UUID is type UUID [16]byte, but the JSON representation of a UUID is a string:
var u uuid.UUID
u, _ = uuid.Parse("f47ac10b-58cc-0372-8567-0e02b2c3d479")
foo1 := Foo{u, "n", "d"}
res, _ := json.Marshal(foo1)
fmt.Println(string(res))
{
	"id": "f47ac10b-58cc-0372-8567-0e02b2c3d479",
	"name": "n",
	"description": "d"
}
and then unmarshal the marshaled []byte back into the struct:
var foo Foo
json.Unmarshal(res, &foo)
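Putting it all together, a minimal runnable sketch of the round trip (my assembly of the snippets above):

package main

import (
	"encoding/json"
	"fmt"

	"github.com/google/uuid"
)

type Foo struct {
	ID          uuid.UUID `json:"id"`
	Name        string    `json:"name"`
	Description string    `json:"description"`
}

func main() {
	u, err := uuid.Parse("f47ac10b-58cc-0372-8567-0e02b2c3d479")
	if err != nil {
		panic(err)
	}

	// uuid.UUID implements encoding.TextMarshaler, so it marshals as a string.
	foo1 := Foo{ID: u, Name: "n", Description: "d"}
	res, _ := json.Marshal(foo1)
	fmt.Println(string(res)) // {"id":"f47ac10b-58cc-0372-8567-0e02b2c3d479","name":"n","description":"d"}

	// Round-trip back into the struct.
	var foo Foo
	if err := json.Unmarshal(res, &foo); err != nil {
		panic(err)
	}
	fmt.Println(foo.ID == u) // true
}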

How to make a 2-level depth type definition in a struct?

Currently I have the following definition for my structs:
type WholeJson struct {
	Features []Temp
}

type Temp struct {
	Properties Human
}

type Human struct {
	Name string
	Age  uint
}
This works when unmarshaling a JSON string with the following structure into a variable of type WholeJson:
{
	"features": [
		{
			"properties": {
				"name": "John Doe",
				"age": 50
			}
		}
	]
}
Go Playground sample here: https://play.golang.org/p/3WTLxR0EZWP
But I don't know how to write it in a simpler way. Clearly, WholeJson and Temp aren't both necessary as far as the information they hold is concerned. The only reason I have Temp is that I simply don't know how to avoid defining it (and still have the program work).
Presumably the Features property of WholeJson would hold an array of some interface{}, but I can't nail the syntax. And I'm assuming the code for reading the unmarshalled data (from the playground sample) will stay the same.
How would I "squash" those two structs into one (the Human struct, I'm assuming, is fine staying on its own), and still have useful data in the end, where I could loop through the features key to go through all the data?
It's OK to define a type for each level of the hierarchy, and it's preferred when constructing values from Go code. The types do not need to be exported.
Use anonymous types to eliminate the defined types:
package main

import (
	"encoding/json"
	"fmt"
)

type Human struct { // kept from the question
	Name string
	Age  uint
}

func main() {
	var sourceData struct {
		Features []struct {
			Properties Human
		}
	}

	jsonString := `{
		"features": [
			{
				"properties": {
					"name": "John Doe",
					"age": 50
				}
			}
		]
	}`

	json.Unmarshal([]byte(jsonString), &sourceData)
	fmt.Println(sourceData.Features[0].Properties.Name)
}

Is there a better way to declare json variable

Declaring a variable of type map[string]map[string]... is not ideal. Is there a better way?
snaps := map[string]map[string]map[string]map[string]string{
	"distros": {
		"aws": {
			"eu-west-1": {
				"snap-0": "/dev/sdm",
			},
			"eu-west-2": {
				"snap-1": "/dev/sdm",
			},
		},
	},
}

fmt.Println(snaps["distros"]["aws"]["eu-west-1"])
The simplest way would be to use the type map[string]interface{}. The empty interface, interface{}, can hold a value of any type, so it handles the arbitrarily nested nature of JSON.
To do this you'll have to write your literal data as a string first and then parse the string into a Go map.
With that in mind here is a refactor of your example:
First, import "encoding/json". Then:
snapsStr := `{
	"distros": {
		"aws": {
			"eu-west-1": {
				"snap-0": "/dev/sdm"
			},
			"eu-west-2": {
				"snap-1": "/dev/sdm"
			}
		}
	}
}`

var snaps map[string]interface{}
json.Unmarshal([]byte(snapsStr), &snaps)
And now snaps is as desired.
This is the most generic format for JSON data in Go and is one of the ways that the Go JSON library handles types for JSON. See these docs: https://golang.org/pkg/encoding/json/#Unmarshal
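One caveat worth knowing (my note, not part of the original answer): every nested value comes back as interface{}, so reading data out again requires a type assertion at each level. A minimal sketch:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	snapsStr := `{"distros": {"aws": {"eu-west-1": {"snap-0": "/dev/sdm"}}}}`

	var snaps map[string]interface{}
	if err := json.Unmarshal([]byte(snapsStr), &snaps); err != nil {
		panic(err)
	}

	// json.Unmarshal stores each JSON object as a map[string]interface{},
	// so we assert our way down level by level.
	distros := snaps["distros"].(map[string]interface{})
	aws := distros["aws"].(map[string]interface{})
	fmt.Println(aws["eu-west-1"]) // map[snap-0:/dev/sdm]
}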

Share structure between GraphQL schemas

I have an Apollo GraphQL server talking to an API that returns responses with roughly the following structure:
{
	"pagination": {
		"page": 1,
		// more stuff
	},
	"sorting": {
		// even more stuff
	},
	"data": [ /* actual data */ ]
}
This structure is going to be shared across pretty much all responses from this API, which I'm using extensively. data is going to be an array most of the time, but it can also be an object.
How can I write this in an efficient way, so that I don't have to repeat all these pagination and sorting fields on every data type in my schemas?
Thanks a lot!
I've addressed this problem by creating a lib called graphql-s2s. It enhances your schema by adding support for type inheritance, generic types, and metadata. In your case, creating a generic type for your Paginated object could be a viable solution. Here is an example:
const { transpileSchema } = require('graphql-s2s')
const { makeExecutableSchema } = require('graphql-tools')

const schema = `
type Paged<T> {
	data: [T]
	cursor: ID
}

type Node {
	id: ID!
	creationDate: String
}

type Person inherits Node {
	firstname: String!
	middlename: String
	lastname: String!
	age: Int!
	gender: String
}

type Teacher inherits Person {
	title: String!
}

type Student inherits Person {
	nickname: String!
	questions: Paged<Question>
}

type Question inherits Node {
	name: String!
	text: String!
}

type Query {
	students: Paged<Student>
	teachers: Paged<Teacher>
}
`

const executableSchema = makeExecutableSchema({
	typeDefs: [transpileSchema(schema)],
	resolvers: resolver // your resolver map
})
I've written more details about this here (in Part II).
When you define your schema, you will end up abstracting out pagination, sorting, etc. as separate types. So the schema will look something like:
type Bar {
	pagination: Pagination
	sorting: SortingOptions
	data: BarData # I'm an object
}

type Foo {
	pagination: Pagination
	sorting: SortingOptions
	data: [FooData] # I'm an array
}

# more types similar to above

type Pagination {
	page: Int
	# more fields
}

type SortingOptions {
	# more fields
}

type BarData {
	# more fields
}
So you won't have to list each field within Pagination multiple times. Each type that uses Pagination, however, will still need to specify it as a field -- there's no escaping that requirement.
Alternatively, you could set up a single Type to use for all your objects. In this case, the data field would be an Interface (Data), with FooData, BarData, etc. each implementing it. In your resolver for Data, you would define a __resolveType function to determine which kind of Data to return. You can pass in a typename variable with your query and then use that variable in the __resolveType function to return the correct type.
You can see a good example of Interface in action in the Apollo docs.
The downside to this latter approach is that you have to return either a single Data object or an Array of them -- you can't mix and match -- so you would probably have to change the structure of the returned object to make it work.

What flag to fmt.Printf to follow pointers recursively?

I'm having trouble printing a struct when failing test cases. It's a pointer to a slice of pointers to structs, or *[]*X. The problem is that I need to know the contents of the X-structs inside the slice, but I cannot get it to print the whole chain. It only prints their addresses, since it's a pointer. I need it to follow the pointers.
This is useless, however, since the function I want to test modifies their contents, and changing the test code to not use pointers just means I'm not testing the code with pointers (so that wouldn't work).
Also, just looping through the slice won't work, since the real function uses reflect and might handle more than one layer of pointers.
Simplified example:
package main

import "fmt"

func main() {
	type X struct {
		desc string
	}

	type test struct {
		in   *[]*X
		want *[]*X
	}

	test1 := test{
		in: &[]*X{
			&X{desc: "first"},
			&X{desc: "second"},
			&X{desc: "third"},
		},
	}

	fmt.Printf("%#v", test1)
}
example output:
main.test{in:(*[]*main.X)(0x10436180), want:(*[]*main.X)(nil)}
(code is at http://play.golang.org/p/q8Its5l_lL )
I don't think fmt.Printf has the functionality you are looking for.
You can use the https://github.com/davecgh/go-spew library.
spew.Dump(test1)
Alternatively, you can use the valast library (valast.String()).
For the example, the output is:
test{in: &[]*X{
	{desc: "first"},
	{desc: "second"},
	{desc: "third"},
}}
For spew (spew.Sdump()) it looks like:
(main.test) {
	in: (*[]*main.X)(0xc000098090)((len=3 cap=3) {
		(*main.X)(0xc000088300)({
			desc: (string) (len=5) "first"
		}),
		(*main.X)(0xc000088310)({
			desc: (string) (len=6) "second"
		}),
		(*main.X)(0xc000088320)({
			desc: (string) (len=5) "third"
		})
	}),
	want: (*[]*main.X)(<nil>)
}
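For completeness, a minimal runnable sketch of the simplified example wired up with spew (using the import path github.com/davecgh/go-spew/spew):

package main

import (
	"github.com/davecgh/go-spew/spew"
)

type X struct {
	desc string
}

type test struct {
	in   *[]*X
	want *[]*X
}

func main() {
	test1 := test{
		in: &[]*X{
			{desc: "first"},
			{desc: "second"},
			{desc: "third"},
		},
	}

	// Unlike fmt.Printf's %#v verb, spew follows the pointers and prints
	// the pointed-to values, including unexported fields.
	spew.Dump(test1)
}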
