Background
I'm trying to analyze data from the Reddit API on users. I've declared a User struct like:
type User struct {
    Kind string `json:"kind"`
    Data struct {
        ...
        Subreddit struct {
            ...
        } `json:"subreddit"`
        ...
        CreatedUtc float64 `json:"created_utc"` <---
        ...
    } `json:"data"`
}
I request the data from the API and print it here:
func GetUser(url string) User {
    var response User
    resp, err := http.Get(url)
    if err != nil {
        ...
    }
    defer resp.Body.Close()
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        ...
    }
    err = json.Unmarshal(body, &response)
    if err != nil {
        ...
    }
    fmt.Print(response.Data.CreatedUtc) <---
    return response
}
Problem
When I request this endpoint it prints 0 while I can see in the browser that the created_utc timestamp is 1562538742. This seems to happen in the vast majority (but not all) cases.
Am I doing something wrong with my type conversions?
To understand why it prints zero, you must first understand that in Go, values are not automatically nilable references like in some other languages; every variable starts at its type's zero value. A variable declared as var abc int has a value of 0 by default, and a JSON field that is never decoded simply leaves that zero value in place.
When testing whether JSON is parsing correctly, you can change the fields of the type to pointers. With pointers, any field that isn't filled is nil rather than the zero value for that type.
Doing this, you can see whether the value is actually coming back in the response, or whether there is some other failure, such as an incorrect data model or a failed network call.
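For example, here is a trimmed-down sketch of that technique against this endpoint; UserCheck is a hypothetical throwaway type, and body is the response body from the snippet above:

type UserCheck struct {
    Data struct {
        CreatedUtc *float64 `json:"created_utc"`
    } `json:"data"`
}

var check UserCheck
if err := json.Unmarshal(body, &check); err != nil {
    log.Fatal(err)
}
if check.Data.CreatedUtc == nil {
    fmt.Println("created_utc was missing from the decoded JSON")
} else {
    fmt.Println("created_utc:", *check.Data.CreatedUtc)
}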
Credit to @JimB for pointing out that I wasn't checking the status code of the response. I had expected http.Get to return an error if the status was > 400, but according to the docs that is not the case.
In my case, modifying the request to contain a User-Agent header resolved the issue.
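For reference, a minimal sketch of both points; the User-Agent value is a placeholder, not the exact string I used:

req, err := http.NewRequest("GET", url, nil)
if err != nil {
    log.Fatal(err)
}
// Sending an explicit User-Agent was the fix in my case.
req.Header.Set("User-Agent", "my-reddit-analysis-bot/0.1")

resp, err := http.DefaultClient.Do(req)
if err != nil {
    log.Fatal(err)
}
defer resp.Body.Close()

// http.Get and Client.Do only return an error for transport-level failures,
// so error status codes must be checked explicitly.
if resp.StatusCode >= 400 {
    log.Fatalf("request failed: %s", resp.Status)
}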
Related
When working with DynamoDB in Golang, if a call to query has more results, it will set LastEvaluatedKey on the QueryOutput, which you can then pass in to your next call to query as ExclusiveStartKey to pick up where you left off.
This works great when the values stay in Golang. However, I am writing a paginated API endpoint, so I would like to serialize this key so I can hand it back to the client as a pagination token. Something like this, where something is the magic package that does what I want:
type GetDomainObjectsResponse struct {
    Items     []MyDomainObject `json:"items"`
    NextToken string           `json:"next_token"`
}
func GetDomainObjects(w http.ResponseWriter, req *http.Request) {
    // ... parse query params, set up dynamoIn ...
    dynamoIn.ExclusiveStartKey = something.Decode(params.NextToken)
    dynamoOut, _ := db.Query(dynamoIn)
    response := GetDomainObjectsResponse{}
    dynamodbattribute.UnmarshalListOfMaps(dynamoOut.Items, &response.Items)
    response.NextToken = something.Encode(dynamoOut.LastEvaluatedKey)
    // ... marshal and write the response ...
}
(please forgive any typos in the above, it's a toy version of the code I whipped up quickly to isolate the issue)
Because I'll need to support several endpoints with different search patterns, I would love a way to generate pagination tokens that doesn't depend on the specific search key.
The trouble is, I haven't found a clean and generic way to serialize the LastEvaluatedKey. You can marshal it directly to JSON (and then e.g. base64 encode it to get a token), but doing so is not reversible. LastEvaluatedKey is a map[string]types.AttributeValue, and types.AttributeValue is an interface, so while the json encoder can read it, it can't write it.
For example, the following code panics with panic: json: cannot unmarshal object into Go value of type types.AttributeValue.
lastEvaluatedKey := map[string]types.AttributeValue{
    "year":  &types.AttributeValueMemberN{Value: "1993"},
    "title": &types.AttributeValueMemberS{Value: "Benny & Joon"},
}

bytes, err := json.Marshal(lastEvaluatedKey)
if err != nil {
    panic(err)
}

decoded := map[string]types.AttributeValue{}
err = json.Unmarshal(bytes, &decoded)
if err != nil {
    panic(err)
}
What I would love would be a way to use the DynamoDB-flavored JSON directly, like what you get when you run aws dynamodb query on the CLI. Unfortunately the golang SDK doesn't support this.
I suppose I could write my own serializer / deserializer for the AttributeValue types, but that's more effort than this project deserves.
Has anyone found a generic way to do this?
OK, I figured something out.
type GetDomainObjectsResponse struct {
    Items     []MyDomainObject `json:"items"`
    NextToken string           `json:"next_token"`
}
func GetDomainObjects(w http.ResponseWriter, req *http.Request) {
    // ... parse query params, set up dynamoIn ...
    eskMap := map[string]string{}
    json.Unmarshal([]byte(params.NextToken), &eskMap)
    esk, _ := dynamodbattribute.MarshalMap(eskMap)
    dynamoIn.ExclusiveStartKey = esk
    dynamoOut, _ := db.Query(dynamoIn)
    response := GetDomainObjectsResponse{}
    dynamodbattribute.UnmarshalListOfMaps(dynamoOut.Items, &response.Items)
    lek := map[string]string{}
    dynamodbattribute.UnmarshalMap(dynamoOut.LastEvaluatedKey, &lek)
    nextToken, _ := json.Marshal(lek)
    response.NextToken = string(nextToken)
    // ... marshal and write the response ...
}
(again this is my real solution hastily transferred back to the toy problem, so please forgive any typos)
As @buraksurdar pointed out, attributevalue.Unmarshal takes an interface{}. It turns out that in addition to a concrete type, you can pass in a map[string]string, and it just works.
I believe this will NOT work if the AttributeValue is not flat, so this isn't a general solution [citation needed]. But my understanding is that the LastEvaluatedKey returned from a call to Query will always be flat, so it works for this use case.
Inspired by Dan, here is a solution to serialize and deserialize to/from base64:
package dynamodb_helpers

import (
    "encoding/base64"
    "encoding/json"

    "github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

func Serialize(input map[string]types.AttributeValue) (*string, error) {
    var inputMap map[string]interface{}
    err := attributevalue.UnmarshalMap(input, &inputMap)
    if err != nil {
        return nil, err
    }

    bytesJSON, err := json.Marshal(inputMap)
    if err != nil {
        return nil, err
    }

    output := base64.StdEncoding.EncodeToString(bytesJSON)
    return &output, nil
}

func Deserialize(input string) (map[string]types.AttributeValue, error) {
    bytesJSON, err := base64.StdEncoding.DecodeString(input)
    if err != nil {
        return nil, err
    }

    outputJSON := map[string]interface{}{}
    err = json.Unmarshal(bytesJSON, &outputJSON)
    if err != nil {
        return nil, err
    }

    return attributevalue.MarshalMap(outputJSON)
}
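And a hedged usage sketch showing how these helpers slot into the toy handler from earlier (dynamoIn, dynamoOut, params, and response are the same placeholder names as above; note that the handler there used the v1 dynamodbattribute package while these helpers use the v2 attributevalue package):

// On the way in: turn the client's pagination token back into an ExclusiveStartKey.
if params.NextToken != "" {
    esk, err := dynamodb_helpers.Deserialize(params.NextToken)
    if err != nil {
        // handle the invalid token
    }
    dynamoIn.ExclusiveStartKey = esk
}

// On the way out: turn LastEvaluatedKey into the next pagination token, if any.
if dynamoOut.LastEvaluatedKey != nil {
    token, err := dynamodb_helpers.Serialize(dynamoOut.LastEvaluatedKey)
    if err != nil {
        // handle the error
    }
    response.NextToken = *token
}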
I have a use case with the code below: a request comes in to the backend and I need to append data to a map. My question is, how do I convert the type below to a []byte so I can unmarshal it?
Any ideas would be appreciated.
type Example struct {
    Category string `json:"category"`
    Name     string `json:"name"`
}
The incoming Postman request JSON looks like this:
[{"Category":"TestCategory", "Name":"Sample1"}]
but after declaring jsonString with type []Example and binding it with
if err := gc.ShouldBindJSON(&jsonString)
it looks like [{TestCategory Sample1}]; how do I convert this to a []byte?
for _, req := range blob {
    var jsonString Example
    if err := json.Unmarshal([]byte(jsonString), &blob); err != nil { // this does not work
        logger.Fatal(err)
    }
    // I am checking if a key-value is present and appending it to the map
    dict := make(map[string][]Example)
    dict[req.Category] = append(dict[req.Category], req)
    fmt.Println(dict)
    if value, ok := dict["TestCategory"]; ok {
        fmt.Printf("Found %d\n", value)
    } else {
        fmt.Println("not found")
    }
}
// I was able to test the above logic by declaring the jsonString as a const and it works
There are two directions in which you can move the data:
from JSON to a Go data structure
// This is your payload coming from the request.
jsonStr := `[{"Category":"TestCategory", "Name":"Sample1"}]`

// This is the Go slice that will hold the unmarshalled data.
var examples []Example

err := json.Unmarshal([]byte(jsonStr), &examples)
if err != nil {
    log.Fatal(err)
}

fmt.Println("Examples:", examples) // prints "Examples: [{TestCategory Sample1}]"
from a Go data structure to JSON (either string or []byte)
exampleBytes, err := json.Marshal(examples)
if err != nil {
    log.Fatal(err)
}

fmt.Println("Example bytes:", string(exampleBytes)) // prints "Example bytes: [{"category":"TestCategory","name":"Sample1"}]"
You should check out "Go by Example" if you haven't already: https://gobyexample.com/json
Looking at your code:
You are looping over blob, but instead of using req you are trying to unmarshal onto the entire blob each time. I'm not sure what you are trying to achieve there, but nothing good can come of modifying the value you're looping over from inside the loop.
The request JSON you are listing is an array of JSON objects. You are trying to unmarshal that into a single Example struct. That won't work; you need a slice of those. A corrected sketch follows below.
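Here is a hedged sketch of the corrected handler body, assuming gc is the *gin.Context from your snippet and Example is the struct you defined:

// Bind the request body straight into a slice; no second json.Unmarshal is needed.
var examples []Example
if err := gc.ShouldBindJSON(&examples); err != nil {
    gc.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
    return
}

// Group the already-decoded structs by category.
dict := make(map[string][]Example)
for _, req := range examples {
    dict[req.Category] = append(dict[req.Category], req)
}

if value, ok := dict["TestCategory"]; ok {
    fmt.Printf("Found %v\n", value)
} else {
    fmt.Println("not found")
}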
I'm using the json.Unmarshal function in Go to decode some JSON responses we get from an API. How do I make it handle multiple types?
The responses we receive always contain a status code and a message, but the JSON fields have different names. Sometimes these two fields are called code and message, and sometimes they are called statuscode and description, depending on what we query.
Say we query Apple; this is simply solved by creating an Apple struct like this:
type Apple struct {
    Code        int    `json:"code"`
    Description string `json:"message"`
}
But when we query Peach, the JSON we get back no longer uses code and message; the field names become statuscode and description. So we will need the following:
type Peach struct {
    Code        int    `json:"statuscode"`
    Description string `json:"description"`
}
Would we potentially need to set up 50 more types and write near-duplicates 50 times? There MUST be a better way to do this. Unfortunately I'm new to Golang and don't know how polymorphism works in this language. Please help.
As far as I know, you should always decode into structs to benefit from Go's static types, the methods attached to the struct, and the ability to validate your responses with a package like validator. But you can always parse the JSON body into a map like this:
// JsonParse parses the JSON body of an HTTP request.
func JsonParse(r *http.Request) (map[string]interface{}, error) {
    // Read r.Body into a byte slice.
    body, err := ioutil.ReadAll(r.Body)
    if err != nil {
        return nil, err
    }

    // Make a map of string keys and interface{} values.
    b := make(map[string]interface{})

    // Unmarshal the body into the map.
    err = json.Unmarshal(body, &b)
    if err != nil {
        return nil, err
    }

    return b, nil
}
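A hedged sketch of how you could then read either pair of field names out of that map; the key names come from the question, the rest is illustrative:

b, err := JsonParse(r)
if err != nil {
    // handle the error
}

// json.Unmarshal stores JSON numbers as float64 when decoding into a map.
var code float64
if v, ok := b["code"].(float64); ok {
    code = v
} else if v, ok := b["statuscode"].(float64); ok {
    code = v
}

var description string
if v, ok := b["message"].(string); ok {
    description = v
} else if v, ok := b["description"].(string); ok {
    description = v
}

fmt.Println(int(code), description)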
I thought I had done the type assertion (as far as I've learned Go), but I keep getting this error:
cannot use readBack["SomePIN"] (type interface {}) as type string in argument to c.String: need type assertion
Here is my code (this snippet is from a request handler function; I'm using the Echo web framework and the Tiedot NoSQL database):
// To get query result document, simply read it
// [as stated in the Tiedot readme.md]
for id := range queryResult {
    readBack, err := aCollection.Read(id)
    if err != nil {
        panic(err)
    }
    if readBack["OtherID"] == otherID {
        if _, ok := readBack["SomePIN"].(string); ok {
            return c.String(http.StatusOK, readBack["SomePIN"])
        }
    }
}
You are asserting readBack["SomePIN"] as a string in the if statement. That doesn't change readBack["SomePIN"], however; it's still an interface{}. In Go, a type assertion never changes the type of the value it is applied to, it just gives you a new value of the asserted type. Here's what will work:
for id := range queryResult {
    readBack, err := aCollection.Read(id)
    if err != nil {
        panic(err)
    }
    if readBack["OtherID"] == otherID {
        if somePIN, ok := readBack["SomePIN"].(string); ok {
            return c.String(http.StatusOK, somePIN)
        }
    }
}
You were tossing the string value from your type assertion, but you want it. So keep it, as somePIN, and then use it.
Final note: using the value, ok := interfaceVal.(string) syntax is good practice. If interfaceVal turns out to be a non-string, you'll get value == "" and ok == false. If you drop the ok value from the type assertion and interfaceVal is a non-string, the program will panic.
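A tiny illustration of the difference, using a hypothetical interface value:

var v interface{} = 42 // holds an int, not a string

s, ok := v.(string) // comma-ok form: s == "", ok == false, no panic
fmt.Println(s, ok)

s = v.(string) // single-value form: panics because v does not hold a string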
It looks like you're converting to a concrete type and throwing away the result of the conversion. I think this should work:
if somePinString, ok := readBack["SomePIN"].(string); ok {
    return c.String(http.StatusOK, somePinString)
}
UPDATE
Moving from gob encoding to JSON fixed the issue. However, I would still like to know why this was failing with gob.
So my client code looks like this:
account := new(database.Account)
err := client.Call("AccountDb.FindAccount", "username", account)
if err != nil {
    logger.FATAL.Print(err.Error())
    return
}
logger.INFO.Print(account)
On the server side, AccountDb.FindAccount looks like this:
func (t *AccountDb) FindAccount(args *string, reply *Account) error {
    reply.Username = "this is a test"
    return nil
}
The Account struct looks like this:
type Account struct {
    Id           int
    Username     string
    Password     string
    Email        string
    Created      time.Time
    LastLoggedIn time.Time
    AccessLevel  int
    Banned       struct {
        reason  string
        expires time.Time
    }
}
If I attempt to perform the RPC, the request starts and the server executes the procedure. However, the program then hangs and the call never returns! If, however, I remove the Banned anonymous struct from the Account struct, it works fine! Why is this? Is there a solution to this problem?
Edit
The client and server registration code looks like this:
Client
client, err = rpc.DialHTTP("tcp", "127.0.0.1:9001")
if err != nil {
    logger.FATAL.Panic(err.Error())
}
Server
defer db.Close()

account := new(database.AccountDb)
account.Database = db

rpc.Register(account)
rpc.HandleHTTP()

l, e := net.Listen("tcp", ":9001")
if e != nil {
    logger.FATAL.Fatal("listen error:", e)
}
http.Serve(l, nil)
Evidently, the gob encoder fails to marshal the RPC response because the Banned struct has no exported fields. This playground example exhibits the error:
encode error: gob: type struct { reason string; expires time.Time } has no exported fields
If you export the reason/expires fields by capitalizing them, the gob round-trip works OK (see http://play.golang.org/p/YrYFsk6trQ).
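In other words, the fix is just to export the nested fields, along these lines:

Banned struct {
    Reason  string
    Expires time.Time
}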
The JSON encoder also requires that serialized fields are exported, but it does not return an error if a struct has none. The decode just returns default/zero values. Run http://play.golang.org/p/OBBkB4tPcZ and note that the Banned information is lost on the round-trip.
If you need to detect these kinds of errors in the rpc package, there are two options:
Edit rpc/debug.go, set the logDebug flag to true, and rebuild the rpc package, or
Implement a ServerCodec that wraps the existing implementation and logs any errors returned by ReadRequestBody/WriteResponse.
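Here is a minimal sketch of option 2, assuming you accept connections yourself and serve them with rpc.ServeCodec. The standard library's gob codec is not exported, so the wrapper is shown around a codec you can construct; the jsonrpc codec and the package name are used here purely as an example:

package rpcdebug

import (
    "log"
    "net"
    "net/rpc"
    "net/rpc/jsonrpc"
)

// loggingServerCodec wraps another rpc.ServerCodec and logs errors that the
// rpc package would otherwise swallow silently.
type loggingServerCodec struct {
    rpc.ServerCodec
}

func (c loggingServerCodec) ReadRequestBody(body interface{}) error {
    err := c.ServerCodec.ReadRequestBody(body)
    if err != nil {
        log.Printf("rpc ReadRequestBody error: %v", err)
    }
    return err
}

func (c loggingServerCodec) WriteResponse(resp *rpc.Response, body interface{}) error {
    err := c.ServerCodec.WriteResponse(resp, body)
    if err != nil {
        log.Printf("rpc WriteResponse error: %v", err)
    }
    return err
}

// Serve accepts connections and serves each one through the logging codec.
func Serve(l net.Listener) {
    for {
        conn, err := l.Accept()
        if err != nil {
            log.Fatal(err)
        }
        // Wrap the per-connection codec and serve it.
        codec := loggingServerCodec{ServerCodec: jsonrpc.NewServerCodec(conn)}
        go rpc.ServeCodec(codec)
    }
}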