gorilla websocket - chain of closeHandler - go

In gorilla/websocket, the websocket.Conn struct has a method SetCloseHandler(), which sets the close handler of the connection.
If the passed handler is nil, it uses a default handler.
I want to keep the default handler, but do something else before or after it.
In other words, a handler chain, e.g. methods like:
prependCloseHandler(h)
which adds a handler at the beginning of the handler chain.
appendCloseHandler(h)
which adds a handler at the end of the handler chain.
Then each handler in the chain would be executed in order.
Is there any way to do that, without copying the default handler as part of my new handler?
Thanks.

The package does not provide a direct mechanism for prepending or appending close handlers. Use the current handler as the starting point for your own:
    closeHandler := conn.CloseHandler()
    conn.SetCloseHandler(func(code int, text string) error {
        // Add your code here ...
        err := closeHandler(code, text)
        // ... or here.
        return err
    })
Note that the close handler is called when a close message is received from the peer, not when the connection is closed. Most applications should be good with the default handler.


Using `Context` to implement timeout

Assuming that I have a function that sends web requests to an API endpoint, I would like to add a timeout to the client so that if the call takes too long, the operation breaks, either by returning an error or by panicking the current thread.
Another assumption is that the client function (the function that sends the web requests) comes from a library and has been implemented synchronously.
Let's have a look at the client function's signature:
func Send(params map[string]string) (*http.Response, error)
I would like to write a wrapper around this function to add a timeout mechanism. To do that, I can do:
func SendWithTimeout(ctx context.Context, params map[string]string) (*http.Response, error) {
    completed := make(chan bool)
    go func() {
        res, err := Send(params)
        _ = res
        _ = err
        completed <- true
    }()
    for {
        select {
        case <-ctx.Done():
            return nil, errors.New("Cancelled")
        case <-completed:
            return nil, nil // just to test how this method works
        }
    }
}
Now when I call the new function and pass a cancellable context, I successfully get a cancellation error, but the goroutine that is running the original Send function keeps on running to the end.
Since the function makes an API call, meaning socket/TCP connections are established in the background, it is not good practice to leave a long-running call behind the scenes.
Is there any standard way to interrupt the original Send function when the context.Done() is hit?
It is a poor design choice to bolt context support onto an existing API / implementation that did not support it earlier. Context support should be added to the existing Send() implementation so that it uses / monitors the context, renaming it to SendWithTimeout(), and a new Send() function should be provided that takes no context and calls SendWithTimeout() with context.TODO() or context.Background().
For example if your Send() function makes an outgoing HTTP call, that may be achieved by using http.NewRequest() followed by Client.Do(). In the new, context-aware version use http.NewRequestWithContext().
If you have a Send() function which you cannot change, then you're "out of luck". The function itself has to support the context or cancellation. You can't abort it from the outside.
See related:
Terminating function execution if a context is cancelled
Is it possible to cancel unfinished goroutines?
Stopping running function using context timeout in Golang
cancel a blocking operation in Go

golang servehttp calls itself?

Really cannot understand ServeHTTP. I get that it's the interface method for Handler, and any object that implements ServeHTTP can behave as a Handler. My question is about this source code:
func (sh serverHandler) ServeHTTP(rw ResponseWriter, req *Request) {
    handler := sh.srv.Handler
    if handler == nil {
        handler = DefaultServeMux
    }
    handler.ServeHTTP(rw, req)
}
The line handler.ServeHTTP(rw, req) seems to be calling itself again??
handler is basically Server.Handler, so it's calling itself all over again? What is the purpose of this method here? Is this just a prototype? Can someone explain, when you don't implement your own ServeHTTP, how does this function work?
serverHandler is an adapter whose purpose is to set things up such that DefaultServeMux is used when a nil handler is passed to ServeHTTP. The net/http package is full of implementations of the http.Handler interface, because it's very useful and composable.
It's created like this:
serverHandler{c.server}.ServeHTTP(w, w.req)
And c.server (the connection's server) is what sh.srv is accessing. So it's not calling itself. Note that the Server type is:
// A Server defines parameters for running an HTTP server.
// The zero value for Server is a valid configuration.
type Server struct {
    // Addr optionally specifies the TCP address for the server to listen on,
    // in the form "host:port". If empty, ":http" (port 80) is used.
    // The service names are defined in RFC 6335 and assigned by IANA.
    // See net.Dial for details of the address format.
    Addr string

    Handler Handler // handler to invoke, http.DefaultServeMux if nil

    // ... other fields
}
The Handler field can be legitimately nil, and it's the goal of serverHandler to handle this scenario.
I think you'll be interested in reading this post which explains the flow of events in the net/http package in detail.

How to handle multiple protobuf messages in the same RabbitMQ queue?

My problem is that I'm using a single queue (as an entry point to my service) and a Go consumer to handle incoming messages.
My consumer:
    message := &pb.GetRequest{}
    err := proto.Unmarshal(msg.Body, message)
My problem is that my consumer is hard-wired to handle GetRequest messages only. If I need to handle another type of message, e.g. AddRequest, either
I need to define a new queue for each message type, or
I need to try the first unmarshal (GetRequest), and if it fails, test whether the body can be unmarshaled into AddRequest
Is there any other good way of doing this (provided #1 is not a good option)?
Use a switch on the RabbitMQ routing key.
The Channel.Consume method returns a Go channel of type <-chan amqp.Delivery, where amqp.Delivery contains the field RoutingKey.
The routing key is the identifier used to match published messages to consumer subscriptions. You should make sure that your publishers maintain a one-to-one association between routing keys and message types.
The publisher code will look like this:
msg := &pb.AddRequest{} // some protobuf generated type
body, _ := proto.Marshal(msg)
err := ch.Publish(
    "my-exchange", // exchange name
    "foo.bar.add", // routing key
    true,          // option: mandatory
    true,          // option: immediate
    amqp.Publishing{
        ContentType: "application/x-protobuf",
        Body:        body,
    },
)
In the example above, you must ensure that all and only messages of type *pb.AddRequest are published with the routing key foo.bar.add, i.e. that the routing key deterministically identifies the message type.
If you can do that, then your consumer code can switch on the routing key and unmarshal the MQ payload into a variable of the correct type:
func formatEvent(payload amqp.Delivery) (proto.Message, error) {
    var event proto.Message

    // switch on the routing key
    switch payload.RoutingKey {
    case "foo.bar.add":
        event = &pb.AddRequest{}
    case "foo.bar.get":
        event = &pb.GetRequest{}
    default:
        return nil, fmt.Errorf("unknown routing key: %s", payload.RoutingKey)
    }

    // unmarshal the body into the event variable
    if err := proto.Unmarshal(payload.Body, event); err != nil {
        return nil, err
    }
    return event, nil
}
And then you can type-switch on the proto.Message instance to handle each concrete message type. (Of course you can also directly handle the concrete message in the routing key switch; that will depend on how you want to organize your code).
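A sketch of that type switch, using two plain structs as stand-ins for the generated pb.AddRequest / pb.GetRequest types (in real code, event would be the proto.Message returned by formatEvent):

```go
package main

import "fmt"

// Stand-ins for the protobuf-generated types from the question.
type AddRequest struct{ ID string }
type GetRequest struct{ ID string }

// handleEvent dispatches on the concrete type of the decoded message.
func handleEvent(event interface{}) string {
	switch m := event.(type) {
	case *AddRequest:
		return "add:" + m.ID
	case *GetRequest:
		return "get:" + m.ID
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(handleEvent(&AddRequest{ID: "42"})) // add:42
	fmt.Println(handleEvent(&GetRequest{ID: "7"}))  // get:7
}
```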
If your consumer can only handle some of the messages routed to the queue it consumes from, and it can't be extended to handle the other types, you will have to prevent those messages from reaching the queue in the first place. This is a job for the RabbitMQ server and possibly the producer.
You don't provide enough information for us to suggest how to configure the RabbitMQ exchanges, queues and bindings. Maybe the messages carry some header information that allows the RabbitMQ server to distinguish between message types. If there is no such information, maybe the producers can be extended to add it.
Simply rejecting (NACK) a message your consumer can't handle is a bad idea: a NACK with requeue just places the message back into the same queue, and if there is no other consumer that can handle it, the message will never be consumed successfully (ACK).

How to safely add values to grpc ServerStream in interceptor

I have a logging interceptor for my grpc server and want to add a value to the metadata (I want to track the request throughout its lifetime):
func (m *middleware) loggingInterceptor(srv interface{},
    ss grpc.ServerStream,
    info *grpc.StreamServerInfo,
    handler grpc.StreamHandler) error {

    md, ok := metadata.FromIncomingContext(ss.Context())
    if !ok {
        return errors.New("could not get metadata from incoming stream context")
    }

    // add the transaction id to the metadata so that business logic can track it
    md.Append("trans-id", "some-transaction-id")

    // call the handler func
    return handler(srv, ss)
}
but the docs for FromIncomingContext state that:
// FromIncomingContext returns the incoming metadata in ctx if it exists. The
// returned MD should not be modified. Writing to it may cause races.
// Modification should be made to copies of the returned MD.
Ok, so I look at the copy function and copy the metadata:
mdCopy := md.Copy()
mdCopy.Append("trans-id", "some-transaction-id")
and think "how do I attach this metadata back to the ServerStream context?", and I check if there's some ss.SetContext(newCtx), but I don't see anything of the sort. Am I thinking about this from the wrong perspective, or am I missing something else?
You would need to use metadata.NewIncomingContext to create a copy of the stream's current context with the modified metadata attached.
Then you would have to create a wrappedStream type that overrides the Context method of ServerStream to return the modified context, and pass this wrappedStream to the handler that you received in your interceptor.
You can see an example of this here (it overrides other methods here, but the idea is the same):
https://github.com/grpc/grpc-go/blob/master/examples/features/interceptor/server/main.go#L106-L124
Hope this helps.
Easwar is right.
You can either create your own ServerStream implementation and override the Context() method to return your own context, or use the WrappedServerStream struct from the go-grpc-middleware package (github.com/grpc-ecosystem/go-grpc-middleware), to which you pass the new context and the original server stream object, and then hand it to the handler.
Example:
// This method gets the current context and creates a new one
newContext, err := interceptor.authorize(ss.Context(), info.FullMethod)
if err != nil {
    log.Printf("authorization failed: %v", err)
    return err
}

err = handler(srv, &grpc_middleware.WrappedServerStream{
    ServerStream:   ss,
    WrappedContext: newContext,
})

gin-gonic and gorilla/websocket does not propagate message

So I made a few changes to this example to make it work with gin-gonic
https://github.com/utiq/go-in-5-minutes/tree/master/episode4
The websocket handshake between many clients is successful. The problem is that when a client sends a message, the message is not propagated to the rest of the clients.
I had a look at your commit changes of episode4.
My observations are as follows:
You're creating a hub instance on every incoming request in the stream handler. The hub instance is used to keep track of connections, etc., so you're losing it on every request.
You have removed the index/home handler (maybe you wanted to convert it to a gin handler or something, I don't know).
Now, let's bring episode4 into action. Please make the following changes (and as always, improve them as you like). I have tested your episode4 with the changes below, and it works fine.
Make the /ws handler work in server.go:
    h := newHub()
    wsh := wsHandler{h: h}
    r.GET("/ws", func(c *gin.Context) {
        wsh.ServeHTTP(c.Writer, c.Request)
    })
Remove the stream handler in connection.go:
    func stream(c *gin.Context) {
        h := newHub()
        wsHandler{h: h}.ServeHTTP(c.Writer, c.Request)
    }
Add an index HTML handler in server.go (I added it to test episode4 at my end):
    r.SetHTMLTemplate(template.Must(template.ParseFiles("index.html")))
    r.GET("/", func(c *gin.Context) {
        c.HTML(200, "index.html", nil)
    })
