Questions about attaching a piece of metadata (an "initial" request) when initiating a client-side streaming gRPC call have been asked before (here, here), but some answers suggest that it's not possible and recommend a oneof instead, where the first request to the server carries the metadata and subsequent requests carry the actual data to be processed. I'm wondering whether it's safe to encode the metadata with a binary encoding of choice and send it to the server, where it can be extracted from the Context object and deserialized back into meaningful data. I'm fairly certain this is perfectly fine for text-based encodings such as JSON, but what about protobuf? Assuming we define our service like so:
service MyService {
  rpc ChitChat (stream ChatMessage) returns (stream ChatMessage);
}
message ChatMessage {
  // ...
}
message Meta {
  // ...
}
We can include a Meta object in the request:
meta := &pb.Meta{
    // ...
}
metab, err := proto.Marshal(meta)
if err != nil {
    log.Fatalf("marshaling error: %v", err)
}
newCtx := metadata.NewOutgoingContext(context.Background(), metadata.Pairs("meta-bin", string(metab)))
// ...ChitChat(newCtx)
And access it on the server side:
func (s *server) ChitChat(stream pb.MyService_ChitChatServer) error {
    md, ok := metadata.FromIncomingContext(stream.Context())
    if !ok {
        return fmt.Errorf("no metadata received")
    }
    metaStr := md.Get("meta-bin")
    if len(metaStr) != 1 {
        return fmt.Errorf("expected 1 md; got: %v", len(metaStr))
    }
    meta := new(pb.Meta)
    if err := proto.Unmarshal([]byte(metaStr[0]), meta); err != nil {
        return fmt.Errorf("error during deserialization: %v", err)
    }
    // ...
    return nil
}
It appears to be working quite well - am I missing something? How easy is it to shoot yourself in the foot with this approach?
Yes, gRPC supports binary headers (metadata keys ending in -bin), so this approach isn't invalid. It is arguably a little less obvious that the metadata is expected than when it sits in the messages themselves, but that's true of the oneof approach too, so there's not much difference there.
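For what it's worth, the round trip could be wrapped in a pair of small helpers. This is only a sketch and the helper names (attachMeta, extractMeta) are mine, not from the question; it relies only on the grpc-go metadata package and the proto marshalling already shown above, with the "-bin" suffix marking the value as binary:
func attachMeta(ctx context.Context, key string, m proto.Message) (context.Context, error) {
    // key should end in "-bin" so grpc-go treats the value as binary metadata.
    b, err := proto.Marshal(m)
    if err != nil {
        return nil, err
    }
    return metadata.AppendToOutgoingContext(ctx, key, string(b)), nil
}

func extractMeta(ctx context.Context, key string, m proto.Message) error {
    md, ok := metadata.FromIncomingContext(ctx)
    if !ok {
        return fmt.Errorf("no metadata received")
    }
    vals := md.Get(key)
    if len(vals) != 1 {
        return fmt.Errorf("expected 1 value for %q; got %d", key, len(vals))
    }
    return proto.Unmarshal([]byte(vals[0]), m)
}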
Related
I have a field of type proto.Any passed from an upstream service, and I need to convert it to proto.Struct. I see there is an UnmarshalAny function, but it only takes a proto.Message. Can anybody help?
Ended up going with types.Any -> proto message -> jsonpb -> types.Struct.
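A minimal sketch of that path (Any -> concrete message -> JSON -> Struct), written here with the current google.golang.org/protobuf packages rather than the gogo types/jsonpb named above, and assuming the concrete message type inside the Any is linked into the binary so it can be resolved:
import (
    "google.golang.org/protobuf/encoding/protojson"
    "google.golang.org/protobuf/proto"
    "google.golang.org/protobuf/types/known/anypb"
    "google.golang.org/protobuf/types/known/structpb"
)

// anyToStruct unpacks the Any into its concrete message, renders that
// message as JSON, and then parses the JSON into a structpb.Struct.
func anyToStruct(a *anypb.Any) (*structpb.Struct, error) {
    msg, err := anypb.UnmarshalNew(a, proto.UnmarshalOptions{})
    if err != nil {
        return nil, err
    }
    j, err := protojson.Marshal(msg)
    if err != nil {
        return nil, err
    }
    s := new(structpb.Struct)
    if err := protojson.Unmarshal(j, s); err != nil {
        return nil, err
    }
    return s, nil
}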
As Jochen mentioned in the comments, you can use anypb and structpb to manage the respective Well Known Types. So you first import the following:
"google.golang.org/protobuf/types/known/anypb"
"google.golang.org/protobuf/types/known/structpb"
and then it is basically just a marshalling and unmarshalling process:
s := &structpb.Struct{
    Fields: map[string]*structpb.Value{
        "is_working": structpb.NewBoolValue(true),
    },
}
any, err := anypb.New(s) // transform `s` to Any
if err != nil {
    log.Fatalf("Error while creating Any from Struct")
}
m := new(structpb.Struct)
if err = any.UnmarshalTo(m); err != nil { // transform `any` back to Struct
    log.Fatalf("Error while creating Struct from Any")
}
log.Println(m)
Note that I don't know your proto definition, so instead of doing the anypb.New marshalling part here, you would replace that with the Any you receive from your upstream service.
Background
I'm trying to analyze data on users from the Reddit API. I've declared a User struct like:
type User struct {
    Kind string `json:"kind"`
    Data struct {
        ...
        Subreddit struct {
            ...
        } `json:"subreddit"`
        ...
        CreatedUtc float64 `json:"created_utc"` <---
        ...
    } `json:"data"`
}
I request the data from the API and print it here:
func GetUser(url string) User {
    var response User
    resp, err := http.Get(url)
    if err != nil {
        ...
    }
    defer resp.Body.Close()
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        ...
    }
    err = json.Unmarshal(body, &response)
    if err != nil {
        ...
    }
    fmt.Print(response.Data.CreatedUtc) <---
    return response
}
Problem
When I request this endpoint it prints 0, while I can see in the browser that the created_utc timestamp is 1562538742. This seems to happen in the vast majority of cases (but not all).
Am I doing something wrong with my type conversions?
To understand why it is zero, you first need to know that in Go a field that isn't set during unmarshalling simply keeps its type's zero value; fields are not nullable references as in some other languages. A variable declared as var abc int has a value of 0 by default.
When testing whether the JSON is being parsed correctly, you can change the field types to pointers. With that, any field that isn't filled in is nil rather than the zero value for its type.
Doing this, you can see whether the value really came back in the response, or whether something else failed, such as an incorrect data model or a failed network call.
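For instance, a minimal, self-contained sketch of that pointer approach for the field in question (the probe type and the sample body here are mine, not real Reddit API output):
package main

import (
    "encoding/json"
    "fmt"
)

// UserProbe mirrors the relevant part of the User struct, but with a
// pointer field: after unmarshalling, nil means "key absent", while a
// non-nil pointer to 0 would mean "key present with value 0".
type UserProbe struct {
    Kind string `json:"kind"`
    Data struct {
        CreatedUtc *float64 `json:"created_utc"`
    } `json:"data"`
}

func main() {
    body := []byte(`{"kind": "t2", "data": {}}`) // created_utc missing

    var probe UserProbe
    if err := json.Unmarshal(body, &probe); err != nil {
        fmt.Println("unmarshal error:", err)
        return
    }
    if probe.Data.CreatedUtc == nil {
        fmt.Println("created_utc was not present in the response")
        return
    }
    fmt.Println("created_utc:", *probe.Data.CreatedUtc)
}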
Credit to @JimB for pointing out that I wasn't checking the status code of the response. I had expected http.Get to return an error for statuses above 400, but according to the docs that is not the case.
In my case, modifying the request to contain a user agent header resolved the issue.
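A sketch of what that combined fix might look like on top of the original GetUser (the function name, the error-returning signature, and the User-Agent string are mine):
func getUserChecked(url string) (User, error) {
    req, err := http.NewRequest(http.MethodGet, url, nil)
    if err != nil {
        return User{}, err
    }
    // Set an explicit User-Agent; this is what resolved the issue above.
    req.Header.Set("User-Agent", "my-reddit-client/0.1 (placeholder)")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return User{}, err
    }
    defer resp.Body.Close()

    // http.Get / Client.Do only error on transport failures, never on
    // 4xx/5xx responses, so the status code has to be checked explicitly.
    if resp.StatusCode != http.StatusOK {
        return User{}, fmt.Errorf("unexpected status: %s", resp.Status)
    }

    var response User
    if err := json.NewDecoder(resp.Body).Decode(&response); err != nil {
        return User{}, err
    }
    return response, nil
}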
I'm trying to figure out a way to get the error cause when JSON-decoding an http.Response.Body:
if err := json.NewDecoder(resp.Body).Decode(&lResp); err != nil {
    // Get the cause of err
}
The type of err (and errors.Cause(err) using either github.com/pkg/errors or github.com/friendsofgo/errors) is *errors.errorString.
So what I'm able to do right now is the exact opposite of checking for the error type, namely:
if strings.HasSuffix(cause.Error(), "(Client.Timeout exceeded while reading body)") {
    //...
}
I can try to use ioutil.ReadAll() and then I'll get *http.httpError as an error when the timeout occurs.
The primary reason is that I don't want the partially read JSON structure included in the error, only the cause of the error. With the current approach, the error returned looks like this:
main.ListingResponse.DataSources: struct CustomType{ /* partially read JSON struct ... */ }.net/http: request canceled (Client.Timeout exceeded while reading body)
Ok, so I ended up reading the response body into a []byte and then unmarshalling it with json.Unmarshal():
bb, err := ioutil.ReadAll(resp.Body)
if err != nil {
    var netError net.Error
    if errors.As(err, &netError) {
        log.Printf("netError %v", netError)
        // handle net.Error...
        return nil, netError
    }
    // handle general errors...
    return nil, err
}
var lResp LResponse
if err := json.Unmarshal(bb, &lResp); err != nil {
    return nil, errors.Wrap(err, "failed to unmarshal LResponse")
}
I'm still looking for a solution that uses json.NewDecoder(resp.Body).Decode(&str), to avoid copying the whole body into memory.
If anyone knows the way to do it, please add your answer.
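One direction that might work, though I haven't verified it against the setup above (a sketch; whether errors.As finds a net.Error here depends on how the body-read error is wrapped on its way out of Decode, and a generated or custom unmarshaler may flatten it into a plain string, which would explain the *errors.errorString seen earlier):
var lResp LResponse
if err := json.NewDecoder(resp.Body).Decode(&lResp); err != nil {
    var netErr net.Error
    if errors.As(err, &netErr) && netErr.Timeout() {
        // Timed out while streaming/decoding the body.
        return nil, netErr
    }
    return nil, errors.Wrap(err, "failed to decode LResponse")
}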
I've written the following function for validating the X-Hub-Signature request header that GitHub sends as part of a webhook delivery.
func isValidSignature(r *http.Request, key string) bool {
    // Assuming a non-empty header
    gotHash := strings.SplitN(r.Header.Get("X-Hub-Signature"), "=", 2)
    if gotHash[0] != "sha1" {
        return false
    }
    defer r.Body.Close()
    b, err := ioutil.ReadAll(r.Body)
    if err != nil {
        log.Printf("Cannot read the request body: %s\n", err)
        return false
    }
    hash := hmac.New(sha1.New, []byte(key))
    if _, err := hash.Write(b); err != nil {
        log.Printf("Cannot compute the HMAC for request: %s\n", err)
        return false
    }
    expectedHash := hex.EncodeToString(hash.Sum(nil))
    log.Println("EXPECTED HASH:", expectedHash)
    return gotHash[1] == expectedHash
}
However, this doesn't seem to work as I'm not able to validate with the correct secret. Here is an example output, if that helps:
HUB SIGNATURE: sha1=026b77d2284bb95aa647736c42f32ea821d6894d
EXPECTED HASH: 86b6fa48bf7643494dc3a8459a8af70008f6881a
I've used the logic from the hmac-examples repo as a guideline when implementing this, but I am unable to understand the reason behind the discrepancy.
I would be grateful if someone can point out the trivial mistake I'm making here.
Refer: Delivery Headers
This is really embarrassing, but I would still like to share how I fixed it.
I sent in the wrong key as the input, which was causing all the confusion.
Lessons learnt:
The above code snippet is correct and can be used as a validator (one optional hardening is sketched after this list).
Everyone makes mistakes, but only the wise own up to them and fix them.
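The optional hardening, separate from the original bug: compare the digests with hmac.Equal (decoding the received hex first) so the comparison runs in constant time rather than via an ordinary string ==. A sketch of the last few lines of isValidSignature:
expectedMAC := hash.Sum(nil)
receivedMAC, err := hex.DecodeString(gotHash[1])
if err != nil {
    return false
}
// hmac.Equal compares the two MACs in constant time.
return hmac.Equal(receivedMAC, expectedMAC)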
I am new to gRPC, and here is my problem. I'm trying to expose myOwnService as a gRPC service via the following service method:
rpc HighFive (stream HighRequest) returns (stream HighReply) {}
The server-side code is as follows:
func (s *server) HighFive(stream pb.Greeter_HighFiveServer) error {
    // Oops, don't know what to do here ...
    myOwnService(stdin io.ReadCloser, stdout io.WriteCloser)
    return nil
}
func myOwnService(stdin io.ReadCloser, stdout io.WriteCloser) error {
    // read input from stdin, do something, then write the result to stdout
    ...
    return nil
}
As you can see above, I have no idea how to make the stream work with io.Reader and io.Writer in my original service, so that the caller of the HighFive gRPC service can read and write data just as if it were calling myOwnService directly.
[Update] My current messages are like this, but you can change them if necessary:
message HighRequest {
  bytes content = 1;
}
message HighReply {
  bytes content = 1;
}
Per the gRPC Basics tutorial section on Bidirectional streaming RPC, each call to your stream parameter's Recv method gives you a decoded HighRequest message, not the byte stream your myOwnService function expects.
Now, since your HighRequest message contains a field of type bytes, you can feed that field's content into myOwnService as the stdin parameter by wrapping the raw []byte value with bytes.NewReader.
I see, though, that myOwnService demands an io.ReadCloser. I don't know why myOwnService needs to close its input, but taking that requirement at face value, ioutil.NopCloser satisfies it trivially.
Sketching:
// Next tag: 2
message HighRequest {
  bytes content = 1;
}
func (s *server) HighFive(stream pb.Greeter_HighFiveServer) error {
    for {
        req, err := stream.Recv()
        if err == io.EOF {
            return nil // the client has finished sending
        }
        if err != nil {
            return err
        }
        in := ioutil.NopCloser(bytes.NewReader(req.Content))
        out := /* ... e.g. the stream-backed writer sketched below */
        if err := myOwnService(in, out); err != nil {
            return err
        }
    }
}
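To fill in the out side, one possibility (a sketch of my own, not part of the answer above; it assumes the HighReply message from the question) is a small io.WriteCloser that forwards each Write as a HighReply on the stream:
// replyWriter is a hypothetical adapter: every Write is sent to the client
// as one HighReply message on the gRPC stream.
type replyWriter struct {
    stream pb.Greeter_HighFiveServer
}

func (w *replyWriter) Write(p []byte) (int, error) {
    // Copy p so the outgoing message does not alias the caller's buffer,
    // which the caller is free to reuse after Write returns.
    buf := make([]byte, len(p))
    copy(buf, p)
    if err := w.stream.Send(&pb.HighReply{Content: buf}); err != nil {
        return 0, err
    }
    return len(p), nil
}

// Close is a no-op; the gRPC stream itself is closed when the handler returns.
func (w *replyWriter) Close() error { return nil }
Inside the loop above, out := &replyWriter{stream: stream} would then let myOwnService write its results straight back to the caller.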