CloseHandler is not called for a gorilla/websocket if I am not reading messages anywhere, I simply get a write error eventually - go

I have a websocket server using gorilla/websocket.
I have a situation where I am simply writing messages to a set of websockets. My custom CloseHandler is never called when I close the websocket on the browser side.
However, adding a goroutine that calls ReadMessage indefinitely (until it returns an error) leads to the CloseHandler being invoked.
Here's the basic idea:
In one goroutine, I run something like this:
for {
    for client := range clients {
        client.stream <- data
    }
    time.Sleep(time.Second)
}
and the other code, called in a separate goroutine, one per client:
go func() {
    // If I call wsock.ReadMessage here, my CloseHandler works!
}()

for msg := range myclient.stream {
    if err := wsock.WriteMessage(websocket.TextMessage, msg); err != nil {
        break
    }
}
When I close the websocket on the browser side, I expect the CloseHandler to be called. However, it is never called; instead, I eventually get an error from the WriteMessage call.

The close handler is called when a close message is received from the peer. The application must read the connection to receive close and other control messages.
If the application does not read the connection or the peer does not send a close message, then the close handler will not be called.
If your goal is to detect closed connections, then read the connection until an error is returned, as shown in the documentation:
func readLoop(c *websocket.Conn) {
    for {
        if _, _, err := c.NextReader(); err != nil {
            c.Close()
            break
        }
    }
}
The application should only set a close handler when it must perform some action before the connection echoes the close message back to the peer.

Related

Message still in nats limit queue after ack and term sent in Go

I tried writing a subscriber for a NATS limit queue:
sub, err := js.SubscribeSync(fullSubject, nats.Context(ctx))
if err != nil {
    return err
}
for {
    msg, err := sub.NextMsgWithContext(ctx)
    if err != nil {
        if errors.Is(err, nats.ErrSlowConsumer) {
            log.Printf("Slow consumer error returned. Waiting for reset...")
            time.Sleep(50 * time.Millisecond)
            continue
        }
        return err
    }
    msg.InProgress()
    var message pnats.NatsMessage
    if err := conn.unmarshaller(msg.Data, &message); err != nil {
        msg.Term()
        return err
    }
    handler, ok := callbacks[message.Context.Category]
    if !ok {
        msg.Nak()
        continue
    }
    callback, err := handler(&message)
    if err != nil {
        msg.Nak()
        return err
    }
    msg.Ack()
    msg.Term()
    callback(ctx)
}
The goal of this code is to consume any message on a number of subjects and call the callback function associated with the subject. The code works, but the issue I'm running into is that I'd like the message to be deleted after the call to handler if that function doesn't return an error. I thought that's what msg.Term was doing, but I still see all the messages in the queue.
I had originally designed this around a work queue but I wanted it to work with multiple subscribers so I had to redesign it. Is there any way to make this work?
Based on the code provided, I assume that you are not providing stream and consumer info when creating a subscription with the JetStream library.
In the documentation for the SubscribeSync method, it says that when stream and consumer information is not provided, the library will create an ephemeral consumer and the name of the consumer is picked by the server. It also attempts to figure out which stream the subscription is for.
Here is what I believe happens in your code:
When you call the SubscribeSync method, an ephemeral consumer is created, with your provided topic.
When msg.Ack and msg.Term are called, you do acknowledge the message, but only for that current consumer.
The next time you call the SubscribeSync method, a new ephemeral consumer is created, which still contains the message that you already acknowledged on another consumer. This is how the JetStream concepts of streams, consumers, and subscriptions work by design.
Based on what you want to accomplish, here are some suggestions:
Use the plain NATS Core library to work with either pub/sub or a queue, and don't use JetStream. The NATS Core library works with subjects directly, whereas the JetStream library creates additional things (streams and consumers) under the hood if the information is not provided.
Use JetStream but create a stream and a durable consumer yourself, either through code or directly on the NATS server. This way, with a stream and a consumer already defined, you should be able to make it work as intended.
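For the second suggestion, here is a hedged sketch of what that setup might look like with the nats.go JetStream API. The stream name EVENTS, the subject space events.>, and the durable name workers are all made up for illustration, and this needs a running NATS server, so treat it as setup/configuration to adapt rather than a drop-in:

```go
// assumes nc is an already connected *nats.Conn
func setupDurable(nc *nats.Conn) (*nats.Subscription, error) {
	js, err := nc.JetStream()
	if err != nil {
		return nil, err
	}
	// Create (or ensure) the stream yourself instead of letting the
	// library infer an ephemeral one.
	if _, err := js.AddStream(&nats.StreamConfig{
		Name:     "EVENTS",             // illustrative stream name
		Subjects: []string{"events.>"}, // illustrative subject space
	}); err != nil {
		return nil, err
	}
	// Bind a named durable consumer: every subscriber that binds "workers"
	// shares one ack state, so Ack/Term is visible to all of them instead
	// of dying with an ephemeral consumer.
	return js.SubscribeSync("events.orders",
		nats.Durable("workers"),
		nats.BindStream("EVENTS"),
		nats.ManualAck(),
	)
}
```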

How to hook a channel into a websocket goroutine?

Golang guidelines say that the goroutine that creates and writes to a channel should close it, but then how do I hook that into a websocket (like gorilla/websocket) handler goroutine?
func (server *Server) websocketUpgrade(responseWriter http.ResponseWriter, request *http.Request) {
    // handle the request
    connection, err := server.websocketUpgrader.Upgrade(responseWriter, request, responseHeaders)
    if err != nil {
        return
    }
    defer connection.Close()
    // Now I need a channel that the server will write to
    // so that the server can send messages to the client,
    // but it cannot be created here, because the server has to control it
    // (write to it and close it).
    for message := range messageChannel {
        connection.WriteMessage(websocket.BinaryMessage, message)
    }
}
What are the best practices to handle situations like this where I have no direct access to the goroutine that I need to assign a message channel for?
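One common way to resolve this is to invert the ownership: a server-side hub creates, writes to, and closes each per-client channel, and the handler only receives. A runnable sketch under that assumption (Hub, Register, and Unregister are illustrative names, not part of gorilla/websocket):

```go
package main

import (
	"fmt"
	"sync"
)

// The hub owns every per-client channel, so it is the side that sends
// and closes; the handler only ranges over the channel it is given.
type Hub struct {
	mu      sync.Mutex
	clients map[string]chan []byte
}

func NewHub() *Hub { return &Hub{clients: make(map[string]chan []byte)} }

// Register creates the channel; the hub keeps the send side.
func (h *Hub) Register(id string) <-chan []byte {
	h.mu.Lock()
	defer h.mu.Unlock()
	ch := make(chan []byte, 8) // small buffer so a slow client doesn't block the hub
	h.clients[id] = ch
	return ch
}

// Unregister closes the channel, which ends the handler's range loop.
func (h *Hub) Unregister(id string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	if ch, ok := h.clients[id]; ok {
		close(ch)
		delete(h.clients, id)
	}
}

func (h *Hub) Send(id string, msg []byte) {
	h.mu.Lock()
	defer h.mu.Unlock()
	if ch, ok := h.clients[id]; ok {
		ch <- msg
	}
}

func main() {
	hub := NewHub()
	ch := hub.Register("client-1")
	hub.Send("client-1", []byte("hello"))
	hub.Unregister("client-1")

	// In the real handler this loop would call connection.WriteMessage.
	for msg := range ch {
		fmt.Printf("would write: %s\n", msg) // prints: would write: hello
	}
	fmt.Println("handler loop ended cleanly")
}
```

Because only the hub ever closes the channel, the "creator closes" guideline holds even though the range loop lives in the websocket handler.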

How to gracefully handle errors in apache kafka producer

I have an ecommerce app where I send a message to a Kafka server every time a user adds something to a cart. I can send the message and consume it from a client; however, I am curious about error handling. Once in a while, my Go server fails because of a network error or some other reason. Adding to the cart will be an essential part of the app, so I don't want the Kafka producer to break that functionality or make it dependent on Kafka. I tried to separate them by creating a separate function for the Kafka producer, and I think the kafka.Produce() function is non-blocking, so even if it fails the user should still be able to add items to a cart. Here's a sample code (I put the full code for the Kafka part, but I trimmed the implementation of adding to cart for readability). Is there a way to quit from the Kafka function if something goes wrong or if it takes longer than a couple of seconds (a timeout)? That way, the add-to-cart functionality wouldn't hang or cause the server to fail. I'm not very experienced with channels and concurrency in Go, so I can't really tell if this could become an issue with the current design.
ADD TO CART
func addToCart(c *context.Context, rw web.ResponseWriter, req *web.Request) {
    cartID := req.PathParams["id"]
    var items []map[string]interface{}
    if err := json.NewDecoder(req.Body).Decode(&items); err != nil {
        errors.Write(rw, 400, "Unable to parse request body JSON or invalid data format.")
        return
    }
    // MAKE SOME OPERATIONS AND SAVE IT TO THE DATABASE
    cart, jsonErr := saveToDB(c, cartID, items)
    if jsonErr != nil {
        jsonErr.Write(rw)
        return
    }
    webLib.Write204(rw)
    deliveryChan := make(chan kafka.Event)
    kafkaMessage("cart_topic", []byte("sample-cart-event-message"), deliveryChan, rw, req)
    return
}
KAFKA
func kafkaMessage(topic string, message []byte, deliveryChan chan kafka.Event, rw web.ResponseWriter, req *web.Request) {
    err := c.KafkaProducer.Produce(&kafka.Message{
        TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
        Value:          message,
    }, deliveryChan)
    if err != nil {
        c.Log("error: %s", err)
        return
    }
    e, ok := <-deliveryChan
    if !ok {
        c.Log("Channel is closed for kafka producer")
        return
    }
    m, ok := e.(*kafka.Message)
    if !ok {
        c.Log("There has been an error obtaining the kafka message")
        return
    }
    if m.TopicPartition.Error != nil {
        fmt.Printf("Delivery failed: %v\n", m.TopicPartition.Error)
    } else {
        c.Log("Delivered message to topic %s [%d] at offset %v\n",
            *m.TopicPartition.Topic, m.TopicPartition.Partition, m.TopicPartition.Offset)
    }
}
So the send to Kafka is async, but you are, in effect, turning it into a sync call by waiting for a "success" message.
A couple quick options.
1: You can disregard the status of the async send entirely by passing a nil channel as deliveryChan. Then you get a true "fire and forget" model, which sounds like what you are looking for. (Note that with confluent-kafka-go, delivery reports then go to the producer's Events() channel by default, so either drain that channel or disable reports with the "go.delivery.reports": false config setting.)
2: You can run kafkaMessage in a goroutine by simply changing the call to:
deliveryChan := make(chan kafka.Event)
go kafkaMessage("cart_topic", []byte("sample-cart-event-message"), deliveryChan, rw, req)
return
Then you can keep your waiting for a message, logging, etc. in that function. You can even add retries if you want! Be aware that in this case you can get a backlog of goroutines waiting on response messages / retrying / etc., since you're essentially queuing up operations as goroutines. For most applications this won't be a problem as long as you're not continually falling behind in processing, but it's still something to keep an eye on with monitoring!
There are lots of other patterns to follow here, but these are fairly low lift and give you a few options.

How to create server for persistent stream (aka pubsub) in Golang GRPC

I am building a service that needs to send events to all subscribed consumers in a Pub/Sub manner, e.g. send one event to all currently connected clients.
I am using Protobuf for that with the following proto definition:
service EventsService {
    rpc ListenForEvents (AgentProcess) returns (stream Event) {}
}
Both server & client are written in Go.
My problem is that when the client initiates the connection, the stream is not long-lived; e.g., when the server returns from the ListenForEvents method:
func (e EventsService) ListenForEvents(process *pb.AgentProcess, listener pb.EventsService_ListenForEventsServer) error {
    // persist the listener here so it can be used later when the backend needs to send messages to the client
    return nil
}
then the client almost instantly gets an EOF error, which means that the server probably closed the connection.
What do I do so that the client is subscribed for a long time to the server? The main problem is that I might not have anything to send to the client when it calls ListenForEvents method on the server, this is why I want this stream to be long lived to be able to send messages later.
The stream terminates when you return from the server function. Instead, you should receive events somehow, and send them to the client without returning from your server. There are probably many ways you can do this. Below is the sketch of one way of doing it.
This relies on the server connection running on a separate goroutine. There is a Broadcast() function that will send messages to all connected clients. It looks like this:
var allRegisteredClients map[*pb.AgentProcess]chan Message
var clientsLock sync.RWMutex

func Broadcast(msg Message) {
    clientsLock.RLock()
    defer clientsLock.RUnlock()
    for _, ch := range allRegisteredClients {
        ch <- msg
    }
}
Then, your clients have to register themselves, and process messages:
func (e EventsService) ListenForEvents(process *pb.AgentProcess, listener pb.EventsService_ListenForEventsServer) error {
    clientsLock.Lock()
    ch := make(chan Message)
    allRegisteredClients[process] = ch
    clientsLock.Unlock()
    for msg := range ch {
        // send msg with listener.Send(...)
        // deal with errors
        // deal with client terminations
        _ = msg
    }
    clientsLock.Lock()
    delete(allRegisteredClients, process)
    clientsLock.Unlock()
    return nil
}
As I said, this is only a sketch of the idea.
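For concreteness, here is a runnable toy version of that sketch, with a string in place of the protobuf message and client IDs in place of *pb.AgentProcess:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Message stands in for the protobuf event type.
type Message = string

var (
	clientsLock          sync.RWMutex
	allRegisteredClients = map[string]chan Message{}
)

// Broadcast sends one message to every registered client.
func Broadcast(msg Message) {
	clientsLock.RLock()
	defer clientsLock.RUnlock()
	for _, ch := range allRegisteredClients {
		ch <- msg
	}
}

// listen plays the role of ListenForEvents: register, consume, unregister.
func listen(id string, done *sync.WaitGroup) {
	defer done.Done()
	clientsLock.Lock()
	ch := make(chan Message, 1)
	allRegisteredClients[id] = ch
	clientsLock.Unlock()

	msg := <-ch // stands in for the listener.Send loop
	fmt.Printf("%s got %q\n", id, msg)

	clientsLock.Lock()
	delete(allRegisteredClients, id)
	clientsLock.Unlock()
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go listen("client-a", &wg)
	go listen("client-b", &wg)

	// Crude: wait until both clients have registered before broadcasting.
	for {
		clientsLock.RLock()
		n := len(allRegisteredClients)
		clientsLock.RUnlock()
		if n == 2 {
			break
		}
		time.Sleep(time.Millisecond)
	}
	Broadcast("event-1")
	wg.Wait()
}
```

Both goroutines print that they received "event-1"; in the real server, each would forward it over its own gRPC stream instead.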
I have managed to nail it down.
Basically, I never return from the ListenForEvents method.
It creates a channel, persists it in a global-like map of subscribed clients, and keeps reading from that channel indefinitely.
The whole implementation of server logic:
func (e EventsService) ListenForEvents(process *pb.AgentProcess, listener pb.EventsService_ListenForEventsServer) error {
    // Note: e.listeners needs to be guarded by a mutex if handlers run concurrently.
    chans, exists := e.listeners[process.Hostname]
    chanForThisClient := make(chan *pb.Event)
    if !exists {
        e.listeners[process.Hostname] = []chan *pb.Event{chanForThisClient}
    } else {
        e.listeners[process.Hostname] = append(chans, chanForThisClient)
    }
    for {
        select {
        case <-listener.Context().Done():
            return nil
        case res := <-chanForThisClient:
            _ = listener.Send(res)
        }
    }
}
You need to provide keepalive settings for the gRPC client and server.
See the details here: https://github.com/grpc/grpc/blob/master/doc/keepalive.md
Examples: https://github.com/grpc/grpc-go/tree/master/examples/features/keepalive

context.Err() on complete

I am making multiple RPC calls to my server where the handler looks like:
func (h *handler) GetData(ctx context.Context, request Payload) (*Data, error) {
    go func(ctx context.Context) {
        for {
            test := 0
            select {
            case <-ctx.Done():
                if ctx.Err() == context.Canceled {
                    log.Info(ctx.Err())
                    test = 1
                    break
                }
            }
            if test == 1 {
                break
            }
        }
    }(ctx)
    data := fetchData(request)
    return data, nil
}
The fetchData API takes around 5 seconds to get the data and reply back to my service. Meanwhile, if the client requests again, I abort the old request and fire a new one. The abort is not visible on the context object.
Rather, ctx.Err() shows a value of context.Canceled even when the calls are not cancelled and end gracefully with the expected data.
I am new to Go and don't understand exactly how context manages cancellation, timeouts, and completion.
Some insight on the behaviour will be helpful.
From the docs (emphasize mine):
For incoming server requests, the context is canceled when the client's connection closes, the request is canceled (with HTTP/2), or when the ServeHTTP method returns.
In other words, cancellation does not necessarily mean that the client aborted the request.
Contexts that can be canceled must be canceled eventually, and the HTTP server takes care of that:
Canceling this context releases resources associated with it, so code should call cancel as soon as the operations running in this Context complete.
What you observe works as intended.
