Validating GitHub Webhook HMAC signature in Go

I've written the following function to validate the X-Hub-Signature request header that GitHub sends with each webhook delivery.
func isValidSignature(r *http.Request, key string) bool {
	// Assuming a non-empty header
	gotHash := strings.SplitN(r.Header.Get("X-Hub-Signature"), "=", 2)
	if len(gotHash) != 2 || gotHash[0] != "sha1" {
		return false
	}
	defer r.Body.Close()
	b, err := ioutil.ReadAll(r.Body)
	if err != nil {
		log.Printf("Cannot read the request body: %s\n", err)
		return false
	}
	hash := hmac.New(sha1.New, []byte(key))
	if _, err := hash.Write(b); err != nil {
		log.Printf("Cannot compute the HMAC for request: %s\n", err)
		return false
	}
	expectedHash := hex.EncodeToString(hash.Sum(nil))
	log.Println("EXPECTED HASH:", expectedHash)
	return gotHash[1] == expectedHash
}
However, this doesn't seem to work as I'm not able to validate with the correct secret. Here is an example output, if that helps:
HUB SIGNATURE: sha1=026b77d2284bb95aa647736c42f32ea821d6894d
EXPECTED HASH: 86b6fa48bf7643494dc3a8459a8af70008f6881a
I've used the logic from the hmac-examples repo as a guideline, but I am unable to understand the reason behind this discrepancy. I would be grateful if someone could point out the trivial mistake I'm making here.
Refer: Delivery Headers

This is really embarrassing, but I would still like to share how I fixed it: I had passed in the wrong key, which was causing all the confusion.
Lessons learnt:
The above code snippet is absolutely correct and can be used as a validator.
Everyone makes stupid mistakes, but only the wise own up to them and rectify them.
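For anyone who wants to confirm the validator against a known-good signature, here is a small self-contained sanity check that mimics both sides of the exchange (the secret and body are made up for illustration):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// sign reproduces what GitHub does server-side: HMAC-SHA1 over the raw
// request body, keyed with the webhook secret, hex-encoded and prefixed
// with "sha1=".
func sign(body []byte, secret string) string {
	mac := hmac.New(sha1.New, []byte(secret))
	mac.Write(body)
	return "sha1=" + hex.EncodeToString(mac.Sum(nil))
}

// verify compares a received X-Hub-Signature header against the expected
// value, using hmac.Equal for a constant-time comparison.
func verify(body []byte, secret, header string) bool {
	return hmac.Equal([]byte(header), []byte(sign(body, secret)))
}

func main() {
	body := []byte(`{"action":"opened"}`)
	header := sign(body, "s3cret") // what GitHub would send for this secret

	fmt.Println(verify(body, "s3cret", header)) // true
	fmt.Println(verify(body, "wr0ng", header))  // false: wrong secret, as in the question
}
```

Running the validator against a signature produced with a different secret reproduces exactly the mismatch shown in the question's log output.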

Related

Sending protobuf-encoded metadata with a stream

Questions about attaching a piece of metadata (an "initial" request) when initiating a client-side streaming gRPC call have been asked before (here, here), but some answers suggest it isn't possible and recommend a oneof instead, where the first request to the server carries the metadata and subsequent requests carry the actual data to be processed. I'm wondering whether it's safe to encode the metadata with a binary encoding of choice, send it to the server, and there extract it from the Context object and deserialize it back into meaningful data. I'm fairly certain this is fine for text-based encodings such as JSON, but what about protobuf? Assuming we define our service like so:
service MyService {
  rpc ChitChat (stream ChatMessage) returns (stream ChatMessage);
}

message ChatMessage {
  // ...
}

message Meta {
  // ...
}
We can include a Meta object in the request:
meta := &pb.Meta{
	// ...
}
metab, err := proto.Marshal(meta)
if err != nil {
	log.Fatalf("marshaling error: %v", err)
}
newCtx := metadata.NewOutgoingContext(context.Background(), metadata.Pairs("meta-bin", string(metab)))
// ...ChitChat(newCtx)
And access it on the server side:
func (s *server) ChitChat(stream pb.MyService_ChitChatServer) error {
	md, ok := metadata.FromIncomingContext(stream.Context())
	if !ok {
		return fmt.Errorf("no metadata received")
	}
	metaStr := md.Get("meta-bin")
	if len(metaStr) != 1 {
		return fmt.Errorf("expected 1 md; got: %v", len(metaStr))
	}
	meta := new(pb.Meta)
	if err := proto.Unmarshal([]byte(metaStr[0]), meta); err != nil {
		return fmt.Errorf("error during deserialization: %v", err)
	}
	// ...
	return nil
}
It appears to be working quite well - am I missing something? How easy is it to shoot yourself in the foot with this approach?
Yes, gRPC supports binary headers (metadata keys ending in -bin, whose values are base64-encoded on the wire), so this approach isn't invalid. It is a little less obvious that it's expected, but that's true of the oneof approach too, so there's not much difference there.

How to serialize `LastEvaluatedKey` from DynamoDB's Golang SDK?

When working with DynamoDB in Golang, if a call to query has more results, it will set LastEvaluatedKey on the QueryOutput, which you can then pass in to your next call to query as ExclusiveStartKey to pick up where you left off.
This works great when the values stay in Golang. However, I am writing a paginated API endpoint, so I would like to serialize this key so I can hand it back to the client as a pagination token. Something like this, where something is the magic package that does what I want:
type GetDomainObjectsResponse struct {
	Items     []MyDomainObject `json:"items"`
	NextToken string           `json:"next_token"`
}

func GetDomainObjects(w http.ResponseWriter, req *http.Request) {
	// ... parse query params, set up dynamoIn ...
	dynamoIn.ExclusiveStartKey = something.Decode(params.NextToken)
	dynamoOut, _ := db.Query(dynamoIn)
	response := GetDomainObjectsResponse{}
	dynamodbattribute.UnmarshalListOfMaps(dynamoOut.Items, &response.Items)
	response.NextToken = something.Encode(dynamoOut.LastEvaluatedKey)
	// ... marshal and write the response ...
}
(please forgive any typos in the above, it's a toy version of the code I whipped up quickly to isolate the issue)
Because I'll need to support several endpoints with different search patterns, I would love a way to generate pagination tokens that doesn't depend on the specific search key.
The trouble is, I haven't found a clean and generic way to serialize the LastEvaluatedKey. You can marshal it directly to JSON (and then e.g. base64 encode it to get a token), but doing so is not reversible. LastEvaluatedKey is a map[string]types.AttributeValue, and types.AttributeValue is an interface, so while the json encoder can read it, it can't write it.
For example, the following code panics with panic: json: cannot unmarshal object into Go value of type types.AttributeValue.
lastEvaluatedKey := map[string]types.AttributeValue{
	"year":  &types.AttributeValueMemberN{Value: "1993"},
	"title": &types.AttributeValueMemberS{Value: "Benny & Joon"},
}

bytes, err := json.Marshal(lastEvaluatedKey)
if err != nil {
	panic(err)
}

decoded := map[string]types.AttributeValue{}
err = json.Unmarshal(bytes, &decoded)
if err != nil {
	panic(err)
}
What I would love would be a way to use the DynamoDB-flavored JSON directly, like what you get when you run aws dynamodb query on the CLI. Unfortunately the golang SDK doesn't support this.
I suppose I could write my own serializer / deserializer for the AttributeValue types, but that's more effort than this project deserves.
Has anyone found a generic way to do this?
OK, I figured something out.
type GetDomainObjectsResponse struct {
	Items     []MyDomainObject `json:"items"`
	NextToken string           `json:"next_token"`
}

func GetDomainObjects(w http.ResponseWriter, req *http.Request) {
	// ... parse query params, set up dynamoIn ...
	eskMap := map[string]string{}
	json.Unmarshal([]byte(params.NextToken), &eskMap)
	esk, _ := dynamodbattribute.MarshalMap(eskMap)
	dynamoIn.ExclusiveStartKey = esk
	dynamoOut, _ := db.Query(dynamoIn)
	response := GetDomainObjectsResponse{}
	dynamodbattribute.UnmarshalListOfMaps(dynamoOut.Items, &response.Items)
	lek := map[string]string{}
	dynamodbattribute.UnmarshalMap(dynamoOut.LastEvaluatedKey, &lek)
	nextToken, _ := json.Marshal(lek)
	response.NextToken = string(nextToken)
	// ... marshal and write the response ...
}
(again this is my real solution hastily transferred back to the toy problem, so please forgive any typos)
As @buraksurdar pointed out, attributevalue.Unmarshal takes an interface{}. It turns out that in addition to a concrete type, you can pass in a map[string]string, and it just works.
I believe this will NOT work if the AttributeValue is not flat, so it isn't a fully general solution [citation needed]. But my understanding is that the LastEvaluatedKey returned from a call to Query will always be flat, so it works for this use case.
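Once the key is in a flat map[string]string, turning it into an opaque pagination token takes only the standard library. A minimal sketch (the function names are hypothetical):

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// encodeToken turns a flat string map (the shape a simple LastEvaluatedKey
// unmarshals into) into an opaque, URL-safe pagination token.
func encodeToken(key map[string]string) (string, error) {
	b, err := json.Marshal(key)
	if err != nil {
		return "", err
	}
	return base64.URLEncoding.EncodeToString(b), nil
}

// decodeToken reverses encodeToken, recovering the flat key map.
func decodeToken(token string) (map[string]string, error) {
	b, err := base64.URLEncoding.DecodeString(token)
	if err != nil {
		return nil, err
	}
	out := map[string]string{}
	err = json.Unmarshal(b, &out)
	return out, err
}

func main() {
	tok, _ := encodeToken(map[string]string{"year": "1993", "title": "Benny & Joon"})
	key, _ := decodeToken(tok)
	fmt.Println(key["year"], key["title"]) // 1993 Benny & Joon
}
```

base64.URLEncoding keeps the token safe to place in a query string without further escaping.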
Inspired by Dan's answer, here is a solution that serializes and deserializes to/from base64:
package dynamodb_helpers

import (
	"encoding/base64"
	"encoding/json"

	"github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

func Serialize(input map[string]types.AttributeValue) (*string, error) {
	var inputMap map[string]interface{}
	err := attributevalue.UnmarshalMap(input, &inputMap)
	if err != nil {
		return nil, err
	}
	bytesJSON, err := json.Marshal(inputMap)
	if err != nil {
		return nil, err
	}
	output := base64.StdEncoding.EncodeToString(bytesJSON)
	return &output, nil
}

func Deserialize(input string) (map[string]types.AttributeValue, error) {
	bytesJSON, err := base64.StdEncoding.DecodeString(input)
	if err != nil {
		return nil, err
	}
	outputJSON := map[string]interface{}{}
	err = json.Unmarshal(bytesJSON, &outputJSON)
	if err != nil {
		return nil, err
	}
	return attributevalue.MarshalMap(outputJSON)
}

Why is json.Unmarshal converting numbers to zero in some cases?

Background
I'm trying to analyze data from the Reddit api on users. I've declared a User struct like:
type User struct {
	Kind string `json:"kind"`
	Data struct {
		...
		Subreddit struct {
			...
		} `json:"subreddit"`
		...
		CreatedUtc float64 `json:"created_utc"` <---
		...
	} `json:"data"`
}
I request the data from the API and print the field here.
func GetUser(url string) User {
	var response User
	resp, err := http.Get(url)
	if err != nil {
		...
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		...
	}
	err = json.Unmarshal(body, &response)
	if err != nil {
		...
	}
	fmt.Print(response.Data.CreatedUtc) <---
	return response
}
Problem
When I request this endpoint it prints 0 while I can see in the browser that the created_utc timestamp is 1562538742. This seems to happen in the vast majority (but not all) cases.
Am I doing something wrong with my type conversions?
To understand why it is zero, you must first understand that in Go every variable has a default "zero value": var abc int, for example, is 0. When json.Unmarshal finds no matching field in the input, it leaves the struct field at its zero value, so a 0 can mean either "the API returned 0" or "the field was never set".
When testing whether JSON is parsing correctly, you can change the struct fields to pointers. Then any field that isn't filled is nil rather than the type's zero value.
Doing this, you can see whether the returned value really is zero, or whether something else failed, such as an incorrect data model or a failed network call.
Credit to @JimB for pointing out that I wasn't checking the status code of the response. I had expected that it would return an error for status codes >= 400, but according to the docs that is not the case.
In my case, modifying the request to contain a user agent header resolved the issue.
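Putting both fixes together, a status-code check plus an explicit User-Agent header, might look like the sketch below, run against a local test server instead of the real Reddit API (the UA string and JSON body are made up):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// fetch applies both fixes: it sends an explicit User-Agent (some APIs
// reject the default Go client string) and checks the status code, since
// http.Get only returns an error for transport failures, not for
// HTTP-level errors such as 429 or 500.
func fetch(url string) ([]byte, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("User-Agent", "my-reddit-analyzer/0.1") // hypothetical UA
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return io.ReadAll(resp.Body)
}

// demo runs fetch against a local server that insists on the User-Agent,
// mimicking the behavior observed with the Reddit API.
func demo() (string, error) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("User-Agent") != "my-reddit-analyzer/0.1" {
			w.WriteHeader(http.StatusTooManyRequests)
			return
		}
		fmt.Fprint(w, `{"data":{"created_utc":1562538742}}`)
	}))
	defer srv.Close()
	b, err := fetch(srv.URL)
	return string(b), err
}

func main() {
	body, err := demo()
	fmt.Println(body, err)
}
```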

How to get the error cause of json.NewDecoder(body).Decode() when a http timeout occurs?

I'm trying to figure out a way to get the error cause while JSON-decoding http.Response.Body:
if err := json.NewDecoder(resp.Body).Decode(&lResp); err != nil {
	// Get the cause of err
}
The type of err (and errors.Cause(err) using either github.com/pkg/errors or github.com/friendsofgo/errors) is *errors.errorString.
So what I'm able to do right now is the exact opposite of checking for the error type, namely:
if strings.HasSuffix(cause.Error(), "(Client.Timeout exceeded while reading body)") {
	//...
}
I can try to use ioutil.ReadAll() and then I'll get *http.httpError as an error when the timeout occurs.
The primary reason is that I don't want the partially read JSON structure in the error, only the cause. With the way it's currently done, the error I get is:
main.ListingResponse.DataSources: struct CustomType{ /* partially read JSON struct ... */ }.net/http: request canceled (Client.Timeout exceeded while reading body)
OK, so I ended up reading the response body into a []byte and then unmarshalling it with json.Unmarshal():
bb, err := ioutil.ReadAll(resp.Body)
if err != nil {
	var netError net.Error
	if errors.As(err, &netError) {
		log.Printf("netError %v", netError)
		// handle net.Error...
		return nil, netError
	}
	// handle general errors...
	return nil, err
}

var lResp LResponse
if err := json.Unmarshal(bb, &lResp); err != nil {
	return nil, errors.Wrap(err, "failed to unmarshal LResponse")
}
I'm still looking for a solution that uses json.NewDecoder(resp.Body).Decode(&lResp), to avoid copying the whole body into memory.
If anyone knows the way to do it, please add your answer.

How to `catch` specific errors

For example, I am using one Go standard library function as:
func Dial(network, address string) (*Client, error)
This function may return errors, and I only care about the ones that report "connection lost" or "connection refused", so I can run some code to recover from them.
It seems like:
client, err := rpc.Dial("tcp", ":1234")
if err == KindOf(ConnectionRefused) {
	// do something
}
What's more, how to get all the errors a specific standard library function may return?
There's no standard way to do this.
The most obvious way, which should only be used if no other method is available, is to compare the error string against what you expect:
if err.Error() == "connection lost" { ... }
Or perhaps more robust in some situations:
if strings.HasSuffix(err.Error(), ": connection lost") { ... }
But many libraries will return specific error types, which makes this much easier.
In your case, what's relevant are the various error types exported by the net package: AddrError, DNSConfigError, DNSError, Error, etc.
You probably care most about net.Error, which is used for network errors. So you could check thusly:
if _, ok := err.(net.Error); ok {
	// You know it's a net.Error instance
	if err.Error() == "connection lost" { ... }
}
What's more, how to get all the errors a specific standard library function may return?
The only fool-proof way to do this is to read the source for the library. Before going to that extreme, a first step is simply to read the godoc, as in the case of the net package, the errors are pretty well documented.
You can now use the errors.Is() function to compare some standard errors:
client, err := net.Dial("tcp", ":1234")
if errors.Is(err, net.ErrClosed) {
	fmt.Println("connection has been closed.")
}
A common file opening scenario:
file, err := os.Open("non-existing")
if err != nil {
	if errors.Is(err, fs.ErrNotExist) {
		fmt.Println("file does not exist")
	} else {
		fmt.Println(err)
	}
}
UPDATE
You can also use errors.As() if you'd like to check for a concrete error type (as opposed to errors.Is(), which compares against a sentinel value like net.ErrClosed above):

client, err := net.Dial("tcp", ":1234")
if err != nil {
	var opErr *net.OpError
	if errors.As(err, &opErr) {
		fmt.Printf("dial failed during %q: %v\n", opErr.Op, opErr.Err)
	}
	return
}
client.Close()
A more detailed explanation can be found in the Go Blog.
