How to solve a pub-sub problem in Go and gRPC?

In a Go gRPC service I have a receiver event loop, and the receiver can detect that it wants the sender (the publisher) to stop. But channel principles say that channels should only be closed on the sender side, never on the receiver side. How should this be handled?
The situation is as follows. Imagine a chat. The 1st client, a subscriber, receives messages, and its streaming cannot be done without a goroutine due to gRPC limitations. The 2nd client, a publisher, sends a message to the chat, so that is another goroutine. You have to pass a message from the publisher to the subscriber's receiving goroutine ONLY if the subscriber has not closed its connection (which forces closing the channel from the receiver side).
The problem in code:
//1st client goroutine - subscriber
func (s *GRPCServer) WatchMessageServer(req *WatchMessageRequest, stream ExampleService_WatchMessageServer) error {
    ch := s.NewClientChannel()
    // blocks the goroutine, sending to its stream until a send returns an error
    for {
        msg, ok := <-ch
        if !ok {
            return nil
        }
        err := stream.Send(msg) // if this fails, we need to close ch from the receiver side to "propagate" the closing signal
        if err != nil {
            return err
        }
    }
}

//2nd client goroutine - publisher
func (s *GRPCServer) SendMessage(ctx context.Context, req *SendMessageRequest) (*emptypb.Empty, error) {
    for i := range s.clientChannels {
        s.clientChannels[i] <- req
        // no way other than panic/recover to know when to remove a channel from the list
    }
    return &emptypb.Empty{}, nil
}

I initially got a clue by searching, and the idea for the solution was provided in an answer here; thanks to that answer.
Here is the streaming solution sample code; I guess it is an implementation of the generic pub-sub problem:
//1st client goroutine - subscriber
func (s *GRPCServer) WatchMessageServer(req *WatchMessageRequest, stream ExampleService_WatchMessageServer) error {
    s.AddClientToBroadcastList(stream)
    select {
    case <-stream.Context().Done(): // stackoverflow promised that it would signal when the client closes the stream
        return stream.Context().Err() // the stream is closed immediately after return
    case <-s.ctx.Done(): // program shutdown
        return s.ctx.Err()
    }
}

//2nd client goroutine - publisher
func (s *GRPCServer) SendMessage(ctx context.Context, req *SendMessageRequest) (*emptypb.Empty, error) {
    for i := range s.clientStreams {
        err := s.clientStreams[i].Send(req)
        if err != nil {
            s.RemoveClientFromBroadcastList(s.clientStreams[i])
        }
    }
    return &emptypb.Empty{}, nil
}
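The AddClientToBroadcastList / RemoveClientFromBroadcastList helpers are not shown in the answer; here is a minimal sketch of what they might look like, assuming a mutex-guarded slice on the server (the field names are assumptions, not the original code):

// Assumed fields on GRPCServer:
//     mu            sync.Mutex
//     clientStreams []ExampleService_WatchMessageServer

func (s *GRPCServer) AddClientToBroadcastList(stream ExampleService_WatchMessageServer) {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.clientStreams = append(s.clientStreams, stream)
}

func (s *GRPCServer) RemoveClientFromBroadcastList(stream ExampleService_WatchMessageServer) {
    s.mu.Lock()
    defer s.mu.Unlock()
    for i, cs := range s.clientStreams {
        if cs == stream {
            s.clientStreams = append(s.clientStreams[:i], s.clientStreams[i+1:]...)
            return
        }
    }
}

SendMessage should take the same lock (or copy the slice under it) before iterating, so adds and removes from other goroutines don't race with the broadcast.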

Related

Hold subscribe method with custom handler nats golang

I am writing a wrapper on top of the NATS client in Go. I want to take a handler function that can be invoked from the consumer once I get a message from the NATS server.
I want to block the custom Subscribe method until it receives a message from NATS.
Publish:
func (busConfig BusConfig) Publish(service string, data []byte) error {
    pubErr := conn.Publish(service, data)
    if pubErr != nil {
        return pubErr
    }
    return nil
}
Subscribe:
func (busConfig BusConfig) Subscribe(subject string, handler func(msg []byte)) {
    fmt.Println("Subscribing on: ", subject)
    //wg := sync.WaitGroup{}
    //wg.Add(1)
    subscription, err := conn.Subscribe(subject, func(msg *nats.Msg) {
        go func() {
            handler(msg.Data)
        }()
        //wg.Done()
    })
    if err != nil {
        fmt.Println("Subscriber error: ", err)
    }
    //wg.Wait()
    defer subscription.Unsubscribe()
}
test case:
func TestLifeCycleEvent(t *testing.T) {
    busClient := GetBusClient()
    busClient.Subscribe(SUBJECT, func(input []byte) {
        fmt.Println("Life cycle event received:", string(input))
    })
    busClient.Publish(SUBJECT, []byte("complete notification"))
}
I see that the message is published but never received by the subscriber. I tried to block the Subscribe method using a WaitGroup, but I think this is not the correct solution.
You don't see the message being delivered because Subscribe is an async method that spawns a goroutine to handle the incoming messages and call the callback.
Straight after calling busClient.Publish() your application exits. It does not wait for anything to happen inside Subscribe().
When you use nats.Subscribe(), you usually have a long-running application that exits under specific conditions (like receiving a shutdown signal). A WaitGroup can work here, but it is a fit for tests rather than for real applications.
You should also call the Flush() method on the NATS connection to ensure all buffered messages have been sent before exiting the program.
If you want a synchronous method, you can use nats.SubscribeSync().
Check out examples here: https://natsbyexample.com/examples/messaging/pub-sub/go
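As a rough sketch of what the synchronous variant of the test above might look like, using the package-level conn from the wrapper (SubscribeSync, Flush, and NextMsg are standard nats.go APIs; the timeout value is arbitrary):

func TestLifeCycleEventSync(t *testing.T) {
    sub, err := conn.SubscribeSync(SUBJECT) // synchronous subscription
    if err != nil {
        t.Fatal(err)
    }
    if err := conn.Publish(SUBJECT, []byte("complete notification")); err != nil {
        t.Fatal(err)
    }
    if err := conn.Flush(); err != nil { // make sure the publish reached the server
        t.Fatal(err)
    }
    msg, err := sub.NextMsg(2 * time.Second) // blocks until a message arrives or the timeout expires
    if err != nil {
        t.Fatal(err)
    }
    fmt.Println("Life cycle event received:", string(msg.Data))
}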
To my understanding, in NATS we need to respond to the message even if we are not providing a reply address, so that the message can be acknowledged.
func (busConfig BusConfig) Subscribe(subject string, handler func(msg []byte)) {
    _, err := conn.Subscribe(subject, func(msg *nats.Msg) {
        go func() {
            handler(msg.Data)
            msg.Respond(nil)
        }()
    })
    if err != nil {
        fmt.Println("Subscriber error: ", err)
    }
}

How to communicate between a http request handler and websocket server using gin and gorilla/websocket?

I want to create a realtime API. (GitHub link of current code.)
There will be two types of users, A and B.
Users A can connect to the service using a websocket to see realtime updates.
Users B can only make HTTP requests to push data to MongoDB. I have been following this tutorial. The tutorial uses SQLite and Redis pub/sub, but I don't want to use those.
slotServer.go
...
func (server *WsServer) Run() {
    gb := *models.GetInstanceGlobal()
    for {
        select {
        case client := <-server.register:
            server.registerClient(client)
        case client := <-server.unregister:
            server.unregisterClient(client)
        case message := <-server.broadcast:
            server.broadcastToClient(message)
        case message := <-gb.Channel:
            server.createRoomAndBroadCast(message)
        }
    }
}
...
func (server *WsServer) createRoom(name string) *Room {
    room := NewRoom(name)
    go room.RunRoom()
    server.rooms[room] = true
    return room
}

func (server *WsServer) createRoomAndBroadCast(name string) {
    room := server.createRoom(name)
    var roomList []string
    for r := range server.rooms {
        roomList = append(roomList, r.GetName())
    }
    addRoomReply := models.AddRoomReply{
        RoomList:  roomList,
        AddedRoom: room.GetName(),
    }
    addReply := MessageAddRoom{
        Data:   addRoomReply,
        Action: "room-add-success",
    }
    server.broadcast <- addReply.encode()
}
I am trying to listen on the global channel. If a string is pushed to it, the function createRoomAndBroadCast will be called.
models.go
....
type GlobalChannel struct {
    Channel chan string
}

func GetInstanceGlobal() *GlobalChannel {
    return &GlobalChannel{
        Channel: make(chan string),
    }
}
I am writing to this channel in the POST message handler
room.go
...
// add room to mongo
if err := db.CreateRoom(&room); err != nil {
    ctx.JSON(http.StatusBadGateway, gin.H{
        "data": err,
    })
    return
}
// write it to the channel
gb := models.GetInstanceGlobal()
gb.Channel <- *room.Name
// send response to user
ctx.JSON(http.StatusCreated, gin.H{
    "data": "Room Created Successfully",
})
...
But my POST request gets stuck at the line gb := models.GetInstanceGlobal().
In the logs I see the following message
redirecting request 307: /api/v1/room/ --> /api/v1/room/
I don't understand whether I am doing something wrong or my logic is completely wrong.
I started reading about how channels work in Go and, thanks to this post, found that specifying a buffer size for a channel is very important.
If ch is unbuffered, then ch <- msg will block until the message is consumed by a receiver
So I changed
models.go
//Channel: make(chan string),
Channel: make(chan string, 100),
and it started transmitting the messages.
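As a minimal, self-contained illustration of that rule (not from the original post):

package main

import "fmt"

func main() {
    buffered := make(chan string, 1)
    buffered <- "hello" // does not block: the value sits in the buffer
    fmt.Println(<-buffered)

    unbuffered := make(chan string)
    go func() { unbuffered <- "hello" }() // a send on an unbuffered channel needs a ready receiver...
    fmt.Println(<-unbuffered)             // ...which this receive provides
}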
I'm only going to comment on the code you have shown here and not the overall design.
func GetInstanceGlobal() *GlobalChannel {
    return &GlobalChannel{
        Channel: make(chan string),
    }
}
This is likely a misunderstanding of channels. While you have named it GetInstanceGlobal it is actually returning a brand new channel each time you call it. So if one side is calling it and then trying to receive messages and the other is calling it and pushing messages, the two sides will be different channels and never communicate. This explains why your push is blocking forever. And it also explains why when you add 100 to make it a buffered channel that appears to unblock it. But really all you have done is let it build up to 100 queued messages on the push side and eventually block again. The receiver is still on its own channel.
Likely what the code implies is that you wanted to create the global once, return it, and share it. I'm not going to get into the design issues with globals here. But it might look like this for your case:
var globalChannel *GlobalChannel

func GetInstanceGlobal() *GlobalChannel {
    if globalChannel == nil {
        globalChannel = &GlobalChannel{
            Channel: make(chan string),
        }
    }
    return globalChannel
}
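Note that the lazy nil check above is itself racy if GetInstanceGlobal is called from multiple goroutines, which is exactly the situation here (the HTTP handler and the websocket server). A race-free sketch of the same singleton using sync.Once:

var (
    globalChannel *GlobalChannel
    globalOnce    sync.Once
)

func GetInstanceGlobal() *GlobalChannel {
    globalOnce.Do(func() { // runs the initializer exactly once, safely across goroutines
        globalChannel = &GlobalChannel{Channel: make(chan string)}
    })
    return globalChannel
}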

How to close all grpc server streams using gracefulStop?

I'm trying to stop all clients connected to a stream server from server side.
Actually I'm using GracefulStop method to handle it gracefully.
I am waiting for an os.Interrupt signal on a channel to perform a graceful stop for gRPC, but it gets stuck on server.GracefulStop() when a client is connected.
func (s *Service) Subscribe(_ *empty.Empty, srv clientapi.ClientApi_SubscribeServer) error {
    ctx := srv.Context()
    updateCh := make(chan *clientapi.Update, 100)
    stopCh := make(chan bool)
    defer func() {
        stopCh <- true
        close(updateCh)
    }()
    go func() {
        ticker := time.NewTicker(1 * time.Second)
        defer func() {
            ticker.Stop()
            close(stopCh)
        }()
        for {
            select {
            case <-stopCh:
                return
            case <-ticker.C:
                updateCh <- &clientapi.Update{Name: "notification", Payload: "sample notification every 1 second"}
            }
        }
    }()
    for {
        select {
        case <-ctx.Done():
            return ctx.Err()
        case notif := <-updateCh:
            err := srv.Send(notif)
            if err == io.EOF {
                return nil
            }
            if err != nil {
                s.logger.Named("Subscribe").Error("error", zap.Error(err))
                continue
            }
        }
    }
}
I expected the context's ctx.Done() to handle it and break the for loop.
How can I close all response streams like this one?
Create a global context for your gRPC service. Walking through the various pieces:
Each gRPC service request would use this context (along with the client context) to fulfill that request.
The os.Interrupt handler would cancel the global context, thus canceling any currently running requests.
Finally, issue server.GracefulStop(), which should wait for all the active gRPC calls to finish up (if they haven't seen the cancellation immediately).
So for example, when setting up the gRPC service:
pctx := context.Background()
globalCtx, globalCancel := context.WithCancel(pctx)
mysrv := MyService{
    gctx: globalCtx,
}
s := grpc.NewServer()
pb.RegisterMyService(s, mysrv)
os.Interrupt handler initiates and waits for shutdown:
globalCancel()
server.GracefulStop()
gRPC methods:
func (s *MyService) SomeRpcMethod(ctx context.Context, req *pb.Request) error {
    // merge client and server contexts into one `mctx`
    // (client context will cancel if client disconnects)
    // (server context will cancel if service Ctrl-C'ed)
    mctx, mcancel := mergeContext(ctx, s.gctx)
    defer mcancel() // so we don't leak, if neither client nor server context cancels

    // RPC WORK GOES HERE
    // RPC WORK GOES HERE
    // RPC WORK GOES HERE

    // pass mctx to any blocking calls:
    //  - http REST calls
    //  - SQL queries etc.
    //  - or if running a long loop, status-check the context occasionally like so:

    // Example long request (10s)
    for i := 0; i < 10*1000; i++ {
        time.Sleep(1 * time.Millisecond)
        // poll merged context
        select {
        case <-mctx.Done():
            return fmt.Errorf("request canceled: %s", mctx.Err())
        default:
        }
    }
    return nil
}
And:
func mergeContext(a, b context.Context) (context.Context, context.CancelFunc) {
    mctx, mcancel := context.WithCancel(a) // will cancel if `a` cancels
    go func() {
        select {
        case <-mctx.Done(): // don't leak go-routine on clean gRPC run
        case <-b.Done():
            mcancel() // b canceled, so cancel mctx
        }
    }()
    return mctx, mcancel
}
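As an aside, on Go 1.21+ the same merge can be written with context.AfterFunc instead of a hand-rolled goroutine (a sketch, equivalent under the same assumptions):

func mergeContext(a, b context.Context) (context.Context, context.CancelFunc) {
    mctx, mcancel := context.WithCancel(a)             // will cancel if `a` cancels
    stop := context.AfterFunc(b, func() { mcancel() }) // will cancel if `b` cancels
    return mctx, func() {
        stop()    // stop watching b
        mcancel() // cancel the merged context
    }
}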
Typically clients need to assume that RPCs can terminate at any moment (e.g. due to connection errors or server power failure). So what we do is GracefulStop, sleep for a short time period to allow in-flight RPCs an opportunity to complete naturally, then hard-Stop the server. If you do need to use this termination signal to end your RPCs, then the answer by @colminator is probably the best choice. But this situation should be unusual, and you may want to spend some time analyzing your design if you do find it is necessary to manually end streaming RPCs at server shutdown.
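A sketch of that shutdown sequence (the timeout value is illustrative, not prescriptive):

stopped := make(chan struct{})
go func() {
    server.GracefulStop() // waits for in-flight RPCs to complete
    close(stopped)
}()
select {
case <-stopped:
    // all RPCs finished naturally
case <-time.After(10 * time.Second):
    server.Stop() // hard-stop whatever is still running
}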

grpc go : how to know in server side, when client closes the connection

I am using grpc go
I have an RPC which looks roughly like this:
service MyService {
    // Operation 1
    rpc Operation1(OperationRequest) returns (OperationResponse) {
        option (google.api.http) = {
            post: "/apiver/myser/oper1"
            body: "*"
        };
    }
The client connects by using the grpc.Dial() method.
When a client connects, the server does some bookkeeping; when the client disconnects, the bookkeeping needs to be removed.
Is there any callback that can be registered which can be used to know that the client has closed the session?
Based on your code, it's a unary RPC call: the client connects to the server only once, sends a request, and gets a response. The client will wait for the response until timeout.
In server-side streaming, you can detect the client disconnect from the <-stream.Context().Done() signal.
With the above, you can implement your own channel in a goroutine to build your logic. Use a select statement like:
select {
case <-srv.Context().Done():
    return
case res := <-<YOUR OWN CHANNEL, WITH A RECEIVED REQUEST OR YOUR RESPONSE>
    ....
}
I provide some detailed code here
In client streaming, besides the above signal, you can check whether the server can still receive messages:
req, err := stream.Recv()
if err == io.EOF {
    break
} else if err != nil {
    return err
}
Assuming that the server is implemented in go, there's an API on the *grpc.ClientConn that reports state changes in the connection.
func (cc *ClientConn) WaitForStateChange(ctx context.Context, sourceState connectivity.State) bool
https://godoc.org/google.golang.org/grpc#ClientConn.WaitForStateChange
These are the docs on each of the connectivity.State
https://github.com/grpc/grpc/blob/master/doc/connectivity-semantics-and-api.md
If you need to expose a channel that you can listen to for the client closing the connection then you could do something like this:
func connectionOnState(ctx context.Context, conn *grpc.ClientConn, states ...connectivity.State) <-chan struct{} {
    done := make(chan struct{})
    go func() {
        // any return from this func will close the channel
        defer close(done)
        // continue checking for state change
        // until one of the break states is found
        for {
            change := conn.WaitForStateChange(ctx, conn.GetState())
            if !change {
                // ctx is done, return
                // something upstream is cancelling
                return
            }
            currentState := conn.GetState()
            for _, s := range states {
                if currentState == s {
                    // matches one of the states passed
                    // return, closing the done channel
                    return
                }
            }
        }
    }()
    return done
}
If you only want to consider connections that are shutting down or shutdown, then you could call it like so:
// any receives from shutdownCh will mean the state Shutdown
shutdownCh := connectionOnState(ctx, conn, connectivity.Shutdown)
As noted in the linked GitHub issue, you can do this:
err = stream.Context().Err()
if err != nil {
    break
}

Gorilla websocket error: close 1007 Illegal UTF-8 Sequence

I'm trying to implement a websocket proxy server for GlassFish. If I try to connect more than one client, I get the error:
ReadMessage Failed: websocket: close 1007 Illegal UTF-8 Sequence.
I'm sure the GlassFish server is sending the right data, because the same server works properly with another proxy server implemented in Node.js.
func GlassFishHandler(conn *websocket.Conn) {
    defer conn.Close()
    conn.SetReadDeadline(time.Now().Add(1000 * time.Second))
    conn.SetWriteDeadline(time.Now().Add(1000 * time.Second))
    fmt.Println("WS-GOLANG PROXY SERVER: Connected to GlassFish")
    for {
        messageType, reader, err := conn.NextReader()
        if err != nil {
            fmt.Println("ReadMessage Failed: ", err) // <- error here
        } else {
            message, err := ioutil.ReadAll(reader)
            if err == nil && messageType == websocket.TextMessage {
                var dat map[string]interface{}
                if err := json.Unmarshal(message, &dat); err != nil {
                    panic(err)
                }
                // get client destination id
                clientId := dat["target"].(string)
                fmt.Println("Msg from GlassFish for Client: ", dat)
                // pass through
                clients[clientId].WriteMessage(websocket.TextMessage, message)
            }
        }
    }
}
Summing up my comments as an answer:
When you are writing to the client, you are taking the clientId from the GlassFish message, fetching the client from a map, and then writing to it - basically clients[clientId].WriteMessage(...).
While your map access can be thread safe, writing is not, as this can be seen as:
// map access - can be safe if you're using a concurrent map
client := clients[clientId]
// writing to a client, not protected at all
client.WriteMessage(...)
So what's probably happening is that two separate goroutines are writing to the same client at the same time. You should protect your client from it by adding a mutex in the WriteMessage method implementation.
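A minimal sketch of that mutex-protected wrapper (the struct fields here are assumptions, not the poster's code):

type client struct {
    conn *websocket.Conn
    mu   sync.Mutex // serializes writes to the underlying connection
}

func (c *client) WriteMessage(messageType int, data []byte) error {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.conn.WriteMessage(messageType, data)
}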
Actually, instead of protecting this method with a mutex, a better, more "go-ish" approach would be to use a channel to write the message, and a goroutine per client that consumes from the channel and writes to the actual socket.
So in the client struct I'd do something like this:
type message struct {
    msgtype string
    msg     string
}

type client struct {
    ...
    msgqueue chan *message
}

func (c *client) WriteMessage(messageType, messageText string) {
    // I'm simplifying here, but you get the idea
    c.msgqueue <- &message{msgtype: messageType, msg: messageText}
}

func (c *client) writeLoop() {
    go func() {
        for msg := range c.msgqueue {
            c.actuallyWriteMessage(msg)
        }
    }()
}
and when creating a new client instance, just launch the write loop.
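For instance (a sketch; the constructor name and buffer size are assumptions):

func newClient() *client {
    c := &client{
        msgqueue: make(chan *message, 16), // small buffer so publishers rarely block
    }
    c.writeLoop() // starts the per-client consumer goroutine
    return c
}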
