This is my request body, i.e. how the API posts data:
{
    "data": {
        "email": "string",
        "first_name": "string",
        "last_name": "string"
    }
}
and this is my postProfileRequest struct, which maybe I need to change to accommodate the data wrapper?
type postProfileRequest struct {
    Profile Profile
}
whereas this is Profile:
type Profile struct {
    ID        int    `json:"id"`
    Email     string `json:"email"`
    FirstName string `json:"first_name"`
    LastName  string `json:"last_name"`
}
I'd like to decode the request body without the data wrapper, so that the code below works. Since I can't do r.Body.data, what would be the best way to do this?
var req postProfileRequest
json.NewDecoder(r.Body).Decode(&req.Profile)
Use the following to decode to a Profile without the data part:
var req postProfileRequest
// Create a value that matches the structure of
// the JSON.
v := struct{ Data *Profile }{&req.Profile}
json.NewDecoder(r.Body).Decode(&v)
fmt.Println(req.Profile) // The data field was decoded to req.Profile
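For completeness, here is a self-contained sketch of the same approach inside an HTTP handler; the handler name, route, and error handling are illustrative additions:

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

type Profile struct {
    ID        int    `json:"id"`
    Email     string `json:"email"`
    FirstName string `json:"first_name"`
    LastName  string `json:"last_name"`
}

type postProfileRequest struct {
    Profile Profile
}

func postProfileHandler(w http.ResponseWriter, r *http.Request) {
    var req postProfileRequest
    // v mirrors the {"data": {...}} envelope and points at req.Profile,
    // so decoding fills the Profile directly.
    v := struct {
        Data *Profile `json:"data"`
    }{&req.Profile}
    if err := json.NewDecoder(r.Body).Decode(&v); err != nil {
        http.Error(w, "invalid JSON body", http.StatusBadRequest)
        return
    }
    fmt.Printf("decoded profile: %+v\n", req.Profile)
}

func main() {
    http.HandleFunc("/profile", postProfileHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}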
I'm creating a Golang app to communicate with a proprietary third-party JSON-RPC API that is sensitive to the order of elements.
Requests should be sent using POST over HTTPS. Here is a request sample:
{
    "jsonrpc": "2.0",
    "method": "ab_123",
    "params": [
        "ee0fe150-6648-11ed-bd23-c1392b9c96ae",
        "test#example.com",
        "NONE"
    ]
}
The API docs state that the first param should be the id, the second param the email, and the third param the name. There may be dozens of params, and some of them are optional (they can have the value NONE).
I'm looking for the most elegant way to create the request struct so that it is readable and easily maintainable.
Here is what I have so far:
type APIRequest struct {
    Jsonrpc string   `json:"jsonrpc"`
    Method  string   `json:"method"`
    Params  []string `json:"params"`
}
var request = APIRequest{
    Jsonrpc: "2.0",
    Method:  "ab_123",
    Params: []string{
        "ee0fe150-6648-11ed-bd23-c1392b9c96ae",
        "test#example.com",
        "NONE",
    },
}
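Since JSON arrays keep their element order, the []string above already sends the params in the order the API expects. One possible way to keep call sites readable (a sketch; the ABParams and NewABRequest names are illustrative, and it assumes the APIRequest type above) is to build the positional slice from named fields:

// ABParams gives names to the positional parameters of the ab_123 method.
type ABParams struct {
    ID    string
    Email string
    Name  string // optional; empty means "NONE"
}

// NewABRequest builds the params slice in the order the API expects.
func NewABRequest(p ABParams) APIRequest {
    name := p.Name
    if name == "" {
        name = "NONE"
    }
    return APIRequest{
        Jsonrpc: "2.0",
        Method:  "ab_123",
        Params:  []string{p.ID, p.Email, name},
    }
}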
I wanted to clarify how to set values for
type ElkBulkInsert struct {
    Index []struct {
        _Index string `json:"_index"`
        _Id    string `json:"_id"`
    } `json:"index"`
}
so that json.Marshal works.
There were no problems with the ordinary struct:
package main

import (
    "encoding/json"
    "fmt"
)

type ElkBulkInsert struct {
    Index []struct {
        _Index string `json:"_index"`
        _Id    string `json:"_id"`
    } `json:"index"`
}

type ElkBulIsertUrl struct {
    Url string `json:"url"`
}

func main() {
    sf := ElkBulIsertUrl{
        Url: "http://mail.ru",
    }
    dd, _ := json.Marshal(sf)
    fmt.Println(string(dd))
}
It's really unclear what you're asking here. Are you asking why the JSON output doesn't match what you expect? Are you unsure how to initialise/set values on the Index field of type []struct{...}?
Because it's quite unclear, I'll attempt to explain why your JSON output may appear to have missing fields (or why not all fields are getting populated), how you can initialise your fields, and how you may be able to improve the types you have.
General answer
If you want to marshal/unmarshal into a struct/type you made, there's a simple rule to keep in mind:
json.Marshal and json.Unmarshal can only access exported fields. An exported field has a capitalised identifier. Your Index field in ElkBulkInsert is a slice of an anonymous struct which has no exported fields (_Index and _Id start with an underscore).
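As a quick illustration of that rule:

package main

import (
    "encoding/json"
    "fmt"
)

type demo struct {
    _Index string `json:"_index"` // unexported: skipped even though it has a tag
    Index  string `json:"index"`  // exported: marshalled
}

func main() {
    out, _ := json.Marshal(demo{_Index: "skipped", Index: "kept"})
    fmt.Println(string(out)) // {"index":"kept"}
}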
Because you're using the json:"_index" tags anyway, the field names themselves don't even have to resemble the fields in the JSON. It's obviously preferable that they do in most cases, but it's not required. As an aside: you have a field called Url. It's generally considered better form to follow the standard with regard to initialisms, and rename that field to URL:
Words in names that are initialisms or acronyms (e.g. "URL" or "NATO") have a consistent case. For example, "URL" should appear as "URL" or "url" (as in "urlPony", or "URLPony"), never as "Url". Here's an example: ServeHTTP not ServeHttp.
This rule also applies to "ID" when it is short for "identifier," so write "appID" instead of "appId".
Code generated by the protocol buffer compiler is exempt from this rule. Human-written code is held to a higher standard than machine-written code.
With that being said, simply changing the types to this will work:
type ElkBulkInsert struct {
    Index []struct {
        Index string `json:"_index"`
        ID    string `json:"_id"`
    } `json:"index"`
}

type ElkBulIsertUrl struct {
    URL string `json:"url"`
}
Of course, this implies the data for ElkBulkInsert looks something like:
{
    "index": [
        {
            "_index": "foo",
            "_id": "bar"
        },
        {
            "_index": "fizz",
            "_id": "buzz"
        }
    ]
}
When you want to set values for a structure like this, I generally find it easier to shy away from using anonymous struct fields like the one you have in your Index slice, and use something like:
type ElkInsertIndex struct {
    ID    string `json:"_id"`
    Index string `json:"_index"`
}

type ElkBulkInsert struct {
    Index []ElkInsertIndex `json:"index"`
}
This makes it a lot easier to populate the slice:
bulk := ElkBulkInsert{
    Index: make([]ElkInsertIndex, 0, 10), // as an example
}

for i := 0; i < 10; i++ {
    bulk.Index = append(bulk.Index, ElkInsertIndex{
        ID:    fmt.Sprintf("%d", i),
        Index: fmt.Sprintf("Idx#%d", i), // wherever these values come from
    })
}
Even easier (for instance when writing fixtures or unit tests) is to create a pre-populated literal:
data := ElkBulkInsert{
    Index: []ElkInsertIndex{
        {
            ID:    "foo",
            Index: "bar",
        },
        {
            ID:    "fizz",
            Index: "buzz",
        },
    },
}
With your current type, which uses the anonymous struct, you can still do the same thing, but it looks messier and requires more maintenance: you have to repeat the type:
data := ElkBulkInsert{
    Index: []struct {
        ID    string `json:"_id"`
        Index string `json:"_index"`
    }{
        {
            ID:    "foo",
            Index: "bar",
        },
        { // you can omit the field names if you know the order, and initialise all of them
            "fizz",
            "buzz",
        },
    },
}
Omitting field names when initialising is possible in both cases, but I'd advise against it. As fields get added/renamed/moved around over time, maintaining this becomes a nightmare. That's also why I'd strongly suggest you move away from the anonymous struct here. Anonymous structs can be useful in places, but when you're representing a known data format that comes from, or is sent to, an external party (as JSON tends to be), I find it better to have all the relevant types named, labelled, and documented somewhere. At the very least, you can add comments to each type detailing what values it represents, and you can generate a nice, informative godoc from it all.
I'm currently looking at adding Pact testing to my Go code, and I'm getting stuck on how to deal with fields of type UUID.
I have the following struct, into which I deserialise an API response:
import (
    "github.com/google/uuid"
)

type Foo struct {
    ID          uuid.UUID `json:"id"`
    Name        string    `json:"name"`
    Description string    `json:"description"`
}
Now, when I try and write my consumer test, it looks something like this
pact.
    AddInteraction().
    Given("A result exists").
    UponReceiving("A request to get all results").
    WithRequest(dsl.Request{
        Method: "get",
        Path:   dsl.String("/v1"),
    }).
    WillRespondWith(dsl.Response{
        Status:  200,
        Headers: dsl.MapMatcher{"Content-Type": dsl.String("application/json")},
        Body:    dsl.Match(&Foo{}),
    })
The problem now is that the mocked response comes through as below, where it tries to put an array of bytes in the "id" field. I'm assuming that's because, behind the scenes, that is what the google/uuid library stores it as.
{
    "id": [
        1
    ],
    "name": "string",
    "description": "string"
}
Has anyone encountered this? What would be the best way forward? The only solution I can see is changing my model to use a string, and then manually converting it to a UUID within my code.
You currently can't use the Match function this way, as it recurses into non-primitive structures, although it should be possible to override this behaviour with struct tags. Could you please raise a feature request?
The simplest approach is to not use the Match method, and just manually express the contract details in the usual way.
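If you do take the route mentioned in the question (keeping the field as a string and converting manually), a minimal sketch could look like the following; fooDTO and toFoo are illustrative names, not part of Pact or the original answer:

import "github.com/google/uuid"

// fooDTO mirrors Foo but keeps the ID as a plain string for the contract test.
type fooDTO struct {
    ID          string `json:"id"`
    Name        string `json:"name"`
    Description string `json:"description"`
}

// toFoo converts the DTO into the domain type, parsing the UUID manually.
func (d fooDTO) toFoo() (Foo, error) {
    id, err := uuid.Parse(d.ID)
    if err != nil {
        return Foo{}, err
    }
    return Foo{ID: id, Name: d.Name, Description: d.Description}, nil
}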
The internal representation of uuid.UUID is type UUID [16]byte, but the JSON representation of a UUID is a string (the type implements encoding.TextMarshaler, so encoding/json emits it as a string):
var u uuid.UUID
u, _ = uuid.Parse("f47ac10b-58cc-0372-8567-0e02b2c3d479")
foo1 := Foo{u, "n", "d"}
res, _ := json.Marshal(foo1)
fmt.Println(string(res))
{
    "id": "f47ac10b-58cc-0372-8567-0e02b2c3d479",
    "name": "n",
    "description": "d"
}
and then unmarshal the marshalled []byte back into a Foo:
var foo Foo
json.Unmarshal(res, &foo)
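For reference, here is a self-contained version of that round trip, assuming the Foo type from the question:

package main

import (
    "encoding/json"
    "fmt"

    "github.com/google/uuid"
)

type Foo struct {
    ID          uuid.UUID `json:"id"`
    Name        string    `json:"name"`
    Description string    `json:"description"`
}

func main() {
    u, err := uuid.Parse("f47ac10b-58cc-0372-8567-0e02b2c3d479")
    if err != nil {
        panic(err)
    }

    // Marshal: the UUID is emitted as a JSON string, not a byte array.
    res, err := json.Marshal(Foo{ID: u, Name: "n", Description: "d"})
    if err != nil {
        panic(err)
    }
    fmt.Println(string(res))

    // Unmarshal the same bytes back into a Foo.
    var foo Foo
    if err := json.Unmarshal(res, &foo); err != nil {
        panic(err)
    }
    fmt.Println(foo.ID)
}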
Currently I have the following definition for my structs:
type WholeJson struct {
    Features []Temp
}

type Temp struct {
    Properties Human
}

type Human struct {
    Name string
    Age  uint
}
This works when unmarshaling a JSON string into a variable of type WholeJson, where the JSON has the following structure:
{
    "features": [
        {
            "properties": {
                "name": "John Doe",
                "age": 50
            }
        }
    ]
}
Go Playground sample here: https://play.golang.org/p/3WTLxR0EZWP
But I don't know how to write it in a simpler way. It is obvious that WholeJson and Temp are not both necessary as far as using the information they will hold. The only reason I have Temp is that I simply don't know how to avoid defining it (and still have the program work).
Presumably the Features property of WholeJson would hold an array of some interface{}, but I can't nail the syntax. And I'm assuming the code for reading the unmarshalled data (from the playground sample) will stay the same.
How would I "squash" those two structs into one (the Human struct, I'm assuming, is okay staying on its own), and still have useful data in the end, where I could loop through the features key to go through all the data?
It's OK to define a type for each level of the hierarchy and preferred when constructing values from Go code. The types do not need to be exported.
Use anonymous types to eliminate the defined types:
var sourceData struct {
    Features []struct {
        Properties Human
    }
}

var jsonString string = `{
    "features": [
        {
            "properties": {
                "name": "John Doe",
                "age": 50
            }
        }
    ]
}`

json.Unmarshal([]byte(jsonString), &sourceData)
fmt.Println(sourceData.Features[0].Properties.Name)
I found some issues understanding the documentation. I have a REST API in Go, and I tried to create an endpoint where I need to extract users only by their role. I tried different solutions but can't achieve what I need. This is what I have made so far, and I'm not sure how to continue so that I can fetch users only by an exact role. If someone can help, it would be appreciated, because I can't find more advanced examples in the documentation.
The JSON response from this endpoint is:
[
    {
        "ID": 1,
        "CreatedAt": "2020-12-09T14:40:55.171011+02:00",
        "UpdatedAt": "2020-12-09T14:40:55.175537+02:00",
        "DeletedAt": null,
        "email": "h#go.com",
        "password": "$2a$14$KN2wAOnfecAriBW0xeAJke.okEUlcpDHVeuk",
        "bearer": "eyJhbGciOiJIUzI1NiIs2lhdCI6MTYwNjMwNTEzNn0.J2wBp8ecA9TebP6L73qZ1OZmo02DwQy9vTySt0fil4c",
        "first_name": "H",
        "last_name": "Pro",
        "phone": "353456",
        "salesforce_id": "sfsdddfsdf",
        "webflow_id": "wfwfwfaawfw",
        "Roles": null
    },
    {
        "ID": 2,
        "CreatedAt": "2020-12-09T14:40:55.171011+02:00",
        "UpdatedAt": "2020-12-09T14:40:55.175537+02:00",
        "DeletedAt": null,
        "email": "s#go.com",
        "password": "$2wAOnfecAriBW0xeAJke.okEUlcpDHVeuk",
        "bearer": "eyJhbGiIs2lhdCI6MTYwNjMwNTEzNn0.J2wBp8ecA9TebP6L73qZ1OZmo02DwQy9vTy0fil4c",
        "first_name": "S",
        "last_name": "Test",
        "phone": "3556",
        "salesforce_id": "sfsdf",
        "webflow_id": "wfwfwfw",
        "Roles": null
    }
]
User struct:
type User struct {
    gorm.Model
    Email        string   `json:"email"`
    Password     string   `json:"password"`
    Token        string   `json:"bearer"`
    FirstName    string   `json:"first_name"`
    LastName     string   `json:"last_name"`
    Phone        string   `json:"phone"`
    SalesforceID string   `json:"salesforce_id"`
    WebflowID    string   `json:"webflow_id"`
    Roles        []*Roles `gorm:"constraint:OnUpdate:CASCADE,OnDelete:SET NULL;many2many:user_roles;"`
}
Role struct:
type Roles struct {
    gorm.Model
    Name  string  `json:"name"`
    Users []*User `gorm:"many2many:user_roles"`
}
The SQL query I wrote, which works for my case in Postgres:
SELECT * FROM users
JOIN user_roles ON users.id = user_roles.user_id
JOIN roles ON user_roles.roles_id = roles.id
WHERE user_roles.roles_id = 1
the User struct:
type User struct {
    gorm.Model
    Email        string  `json:"email"`
    Password     string  `json:"password"`
    Token        string  `json:"bearer"`
    FirstName    string  `json:"first_name"`
    LastName     string  `json:"last_name"`
    Phone        string  `json:"phone"`
    SalesforceID string  `json:"salesforce_id"`
    WebflowID    string  `json:"webflow_id"`
    Roles        []Roles `json:"roles" gorm:"constraint:OnUpdate:CASCADE,OnDelete:SET NULL;many2many:user_roles;"`
}
the Roles struct:
type Roles struct {
    gorm.Model
    Name  string  `json:"name"`
    Users []*User `gorm:"many2many:user_roles"`
}
the endpoint:
func (h *Handler) GetAllEmployees(c *gin.Context) {
    var users []models.User
    //var roles []models.Roles // commented out because I get the error "roles is defined but never used"
    var roleid int = 1
    //you need to extract roleID from your query string or request body
    tx := h.db.DB.
        Preload("Roles").
        Joins("INNER JOIN user_roles ON users.id = user_roles.user_id").
        Joins("INNER JOIN roles ON user_roles.roles_id = roles.id").
        Where("user_roles.roles_id = ?", roleid).
        Find(&users)
    if tx.Error != nil {
        //handle error
    }
    c.JSON(200, users)
}
EDIT:
Based on all the answers below, the following things need to happen here:
You need to rename the Roles struct to Role
You need to load Roles for each user
You need to filter users by a specific role ID
First, you need to rename the Roles struct to Role, and change references to it:
Role struct
type Role struct {
    gorm.Model
    Name  string `json:"name"`
    Users []User `gorm:"many2many:user_roles"`
}
User struct
type User struct {
    gorm.Model
    Email        string `json:"email"`
    Password     string `json:"password"`
    Token        string `json:"bearer"`
    FirstName    string `json:"first_name"`
    LastName     string `json:"last_name"`
    Phone        string `json:"phone"`
    SalesforceID string `json:"salesforce_id"`
    WebflowID    string `json:"webflow_id"`
    Roles        []Role `gorm:"constraint:OnUpdate:CASCADE,OnDelete:SET NULL;many2many:user_roles;"`
}
Next, modify your code a bit. There is one join that is not necessary here:
func (h *Handler) GetAllEmployeesByRoleID(c *gin.Context) {
    var users []models.User
    var roleID int = 1
    //you need to extract roleID from your query string or request body
    tx := h.db.DB.
        Preload("Roles").
        Joins("INNER JOIN user_roles ur ON ur.user_id = users.id").
        Where("ur.role_id = ?", roleID).
        Find(&users)
    if tx.Error != nil {
        //handle error
        c.JSON(500, tx.Error)
        return
    }
    c.JSON(200, users)
}
So, you add Preload("Roles") to load all the roles a user has; I assumed that, even when you filter by roleID, you still want to see every role attached to each user.
You can use Joins and Where to construct a query similar to the one you already used. A JOIN with the roles table isn't necessary since the needed info is already in the user_roles table.
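As for the "extract roleID from your query string" comment, a minimal sketch could look like the following; the role_id parameter name and the helper are illustrative:

import (
    "strconv"

    "github.com/gin-gonic/gin"
)

// roleIDFromQuery reads ?role_id=... from the request and falls back to a
// default when the parameter is missing or not a number.
func roleIDFromQuery(c *gin.Context, fallback int) int {
    id, err := strconv.Atoi(c.Query("role_id"))
    if err != nil {
        return fallback
    }
    return id
}

Inside the handler you would then replace var roleID int = 1 with roleID := roleIDFromQuery(c, 1).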