What is the correct way to close a persistent connection? - go

My case is: a long-running server with a connection to Redis. This server waits for a SIGTERM signal to terminate. What is the right way to guarantee that the connection is released when my application terminates?
I know about defer, which is really great, but not for a persistent connection, because I do not want to open a connection to Redis for each operation.
Thanks!

You would still use defer if you want to ensure some block of code executes before exit. The difference is in its scope. The scope of your connection and your defer statement should be the same. I have no idea what your app is, but to provide a concrete example: you need to defer the connection close in the main of your command-line app, not in the methods that read and write.
You said "because I do not want to open connection to Redis for each operation" but that only makes defer problematic if you defer the close in the scope of some method that does a single IO operation. If you instead do the defer in the scope above a single operation (where all operations occur) then it will do waht you want;
    init connection
    defer connectionClose
    begin execution of code that does db IO
    block here if above is async
    program is exiting, my defer is called here
EDIT: As pointed out in the comments, the execution of deferred statements is not guaranteed (for example, on os.Exit or an unhandled fatal signal). I just want to make it clear that you can defer the connection close at the top level of the application.
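A minimal sketch of that shape for the Redis case, assuming the redigo client (the address is a placeholder, not from the question):

package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"

	"github.com/gomodule/redigo/redis"
)

func main() {
	// One persistent connection for the lifetime of the process.
	conn, err := redis.Dial("tcp", "localhost:6379") // address is an assumption
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close() // runs when main returns

	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGTERM, os.Interrupt)

	// ... start the goroutines that use conn ...

	<-sig // block until SIGTERM, then return normally so the defer runs
}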

Related

How to un-wedge go gRPC bidi-streaming server from the blocking Recv() call?

When serving a bidirectional stream in gRPC in golang, the canonical stream handler looks something like this:
func (s *MyServer) MyBidiRPC(stream somepb.MyServer_MyBidiServer) error {
	for {
		data, err := stream.Recv()
		if err == io.EOF {
			return nil // clean close
		}
		if err != nil {
			return err // some other error
		}
		// do things with data here
	}
}
Specifically, when the handler for the bidi RPC returns, that is the signal to consider the server side closed.
This is a synchronous programming model -- the server stays blocked inside this goroutine (created by the grpc library) while waiting for messages from the client.
Now, I would like to unblock this Recv() call (which ends up calling RecvMsg() on an underlying grpc.ServerStream) and return/close the stream, because the server process has decided that it is done with this client.
Unfortunately, I can find no obvious way to do this:
There's no Close() or CloseSend() or CloseRecv() or Shutdown()-like function on the bidi server interface generated for my service
The context inside the stream, which I can get at with stream.Context(), doesn't expose a user-accessible cancel function
I can't find a way to pass in a context on the "starting side" for a new connection accepted by the grpc.Server, where I could inject my own cancel function
I could close the entire grpc.Server by calling Stop(), but that's not what I want to do -- only this particular client connection (grpc.ServerStream) should be finished.
I could send a message to the client that makes the client in turn shut down the connection. However, this doesn't work if the client has fallen off the network; that would be solved with a timeout, but the timeout has to be pretty long to be generally robust. I want it now because I'm impatient, and, more importantly, at scale, dangling unresponsive clients can be a high cost.
I could (perhaps) dig through the grpc.ServerStream with reflection until I find the transportStream, and then dig the cancel function out of that and call it. Or dig through the stream.Context() with reflection and make my own cancel function reference to call. Neither of these seems well advised for future maintainers.
But surely these can't be the only options? Deciding that a particular client no longer needs to be connected is not magic space-alien science. How do I close this stream such that the Recv() call un-blocks, from the server process side, without involving a round-trip to the client?
Unfortunately I don't think there is a great way to do what you are asking. Depending on your goal, I think you have two options:
Run Recv in a goroutine and return from the bidi handler when you need it to return. This will close the context and unblock Recv. It is obviously suboptimal, as it requires care: you now have code executing outside the scope of the handler's execution. It is, however, the closest answer I can seem to find; see the sketch after these options.
If you are trying to mitigate the impact of misbehaving clients by instituting timeouts, you might be able to offload that work to the framework with KeepaliveEnforcementPolicy and/or KeepaliveParams. This is probably preferable if it aligns with your reason for closing the connection, but otherwise isn't of much use.
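A rough sketch of the first option, built on the question's handler. The somepb.Data message type and the s.done channel are assumptions for illustration, not part of any real API:

func (s *MyServer) MyBidiRPC(stream somepb.MyServer_MyBidiServer) error {
	ctx := stream.Context()
	msgs := make(chan *somepb.Data) // message type is an assumption
	errc := make(chan error, 1)
	go func() {
		for {
			data, err := stream.Recv()
			if err != nil {
				errc <- err
				return
			}
			select {
			case msgs <- data:
			case <-ctx.Done(): // handler already returned; stop pumping
				return
			}
		}
	}()
	for {
		select {
		case data := <-msgs:
			_ = data // do things with data here
		case err := <-errc:
			if err == io.EOF {
				return nil // clean close
			}
			return err
		case <-s.done: // hypothetical: server decided this client is finished
			// Returning cancels stream.Context(), which unblocks the pending
			// Recv in the goroutine above.
			return nil
		}
	}
}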

Getting data race condition with zerolog

I am using zerolog configured with diodes to prevent a race condition when writing to stdout. This is my log setup:
consoleWriter := zerolog.ConsoleWriter{Out: os.Stdout, NoColor: *logNoColor, TimeFormat: *logDateTimeFormat}
return diode.NewWriter(consoleWriter, 1000, 10*time.Millisecond, onMissedMessages)
I am following the example from here:
zerolog documentation
Finally, I set the global logger with the writer returned from above (f):
log.Logger = zerolog.New(f).With().Timestamp().Logger()
I have a shutdown hook (it listens for CTRL+C) which basically just writes a log message when it is called and cancels the root context. I am also writing a log message after 1 second has elapsed, using the time.After function.
When I don't press CTRL+C, the application runs as expected; however, when I press CTRL+C before the 1-second delayed execution of the method, I get a data race. When I remove the log statements the problem goes away, which leads me to believe that the diode setup from above isn't preventing the race condition.
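For reference, a stripped-down sketch of the scenario as described; the wiring and messages are reconstructed from the description above, not taken from the real code:

package main

import (
	"context"
	"os"
	"os/signal"
	"time"

	"github.com/rs/zerolog/log"
)

func main() {
	// log.Logger is assumed to already be set up with the diode writer as above.
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	sig := make(chan os.Signal, 1)
	signal.Notify(sig, os.Interrupt) // CTRL+C
	go func() {
		<-sig
		log.Info().Msg("shutting down") // the shutdown hook's log message
		cancel()
	}()

	select {
	case <-time.After(1 * time.Second):
		log.Info().Msg("one second elapsed") // the delayed log message
	case <-ctx.Done():
	}
}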
Just to make sure I'm understanding how golang works:
Canceling a context releases resources tied to that context. I.e., if you have a child context with a channel and cancel the child context, only that channel would be closed/released
os.Stdout, os.Stderr, etc. are not thread-safe and operations on them must be controlled by the developer.
I can check if a context is Done() from numerous goroutines without a synchronization problem. However, I can and should only read from a channel from a single thread

Proper way to use redis pool in this scenario

I am currently using the redigo library for my project, where I create a redis pool.
I use defer to release the redis connection every time I get one from the pool:
c := redisPool.Get()
defer c.Close()
But it will block forever in this scenario if MaxActive has been set, because function2 asks the pool for a second connection while function1 is still holding one:
func function1() {
	c := redisPool.Get()
	defer c.Close()
	function2()
	// ...
}

func function2() {
	c := redisPool.Get()
	defer c.Close()
	// ...
}
Should I use only one redis connection per goroutine?
You have a few options here.
You can Close() when you are done, returning the connection to the pool, and then call function2. Upside: works, not too complex. Downside: you have to manage returning the connection when the function has multiple exit points.
You can change function2 to take a redis.Conn argument that it uses, and just pass the connection off. Upside: defer still works for function1. Downside: you need a connection to call function2 and must do connection management at the calling site. In your example that is easy enough; see the sketch after these options.
Make sure you have at least N*2 max connections, where N is the max number of goroutines that will run concurrently. Upside: your code stays as-is, without changes. Downside: it limits the number of concurrent calls to function1 you can make.
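A sketch of the second option, reusing the question's names:

func function1() {
	c := redisPool.Get()
	defer c.Close() // the only Get/Close pair; no nested checkout
	function2(c)
	// ...
}

func function2(c redis.Conn) {
	// use c here; the caller owns the connection and will return it to the pool
}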
You can use the following approach to make sure the application won't lock up or break.
Set Wait: true in the pool configuration:
// If Wait is true and the pool is at the MaxActive limit, then Get()
// waits for a connection to be returned to the pool before returning.
Confirm that the server's maxclients limit is larger than MaxActive. The default maxclients is 10k.
Most applications can keep connection use low by avoiding long or blocking operations (other than calls to Redis) between the call to Get() and the call to Close().
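A pool configured along those lines might look like this (the address and sizes are placeholders):

redisPool := &redis.Pool{
	MaxIdle:   10, // placeholder sizes
	MaxActive: 100,
	Wait:      true, // Get() blocks for a free connection instead of failing
	Dial: func() (redis.Conn, error) {
		return redis.Dial("tcp", "localhost:6379") // address is a placeholder
	},
}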
Hope this helps.

Does Context.Done() unblock when context variable goes out of scope in golang?

Will context.Done() unblock when a context variable goes out of scope and cancel is not explicitly called?
Let's say I have the following code:
func DoStuff() {
	ctx, _ := context.WithCancel(context.Background())
	go DoWork(ctx)
	return
}
Will ctx.Done() unblock in DoWork after the return in DoStuff()?
I found this thread, https://groups.google.com/forum/#!topic/golang-nuts/BbvTlaQwhjw, where the person asking how to use Context.Done() claims that context.Done() will unblock when the context variable leaves scope, but no one validated this, and I didn't see anything in the docs.
No, the context doesn't cancel automatically when it leaves scope. Typically one calls defer cancel() (using the cancel function returned by context.WithCancel()) to make sure that the context is cancelled.
https://blog.golang.org/context provides a good overview of how to use contexts correctly (including the defer pattern above). Also, the source code https://golang.org/src/context/context.go is quite readable and you can see there's no magic that would provide automatic cancellation.
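The usual shape, applied to the question's DoStuff:

func DoStuff() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel() // guarantees the context is cancelled when DoStuff returns

	go DoWork(ctx)
	// When DoStuff returns, cancel runs, ctx.Done() is closed, and DoWork
	// can observe the cancellation and exit.
}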
"Unblocking" is not the clearest terminology. Done() returns a channel (or nil) that will receive a struct{} and/or close when the context is "cancelled". What exactly that chan is, or when it is sent on, is up to the individual implementation. It may be sent/closed at some fixed time as with WithDeadline, or manually done as with WithCancel.
The key, though, is that this is never "automatic" or guaranteed to happen. If you make a context with WithCancel and read from the Done() channel, that read will block indefinitely until the cancel function is called. If that never happens, then you have a wasted goroutine, and your application's memory will increase each time you do it.
Once the context is completely out of scope (no executing goroutine is listening to it or has a reference to the parent context), it will get garbage collected and everything will go away.
EDIT: After reading the source, though, it looks like WithCancel and friends spawn goroutines to propagate the cancellation. Therefore you must make sure cancel gets called at some point to avoid goroutine leaks.

How to ensure concurrency in Golang gorilla WebSocket package

I have studied the Godoc of the gorilla/websocket package.
In the Godoc it is clearly stated that
Concurrency
Connections support one concurrent reader and one concurrent writer.
Applications are responsible for ensuring that no more than one goroutine calls the write methods (NextWriter, SetWriteDeadline, WriteMessage, WriteJSON, EnableWriteCompression, SetCompressionLevel) concurrently and that no more than one goroutine calls the read methods (NextReader, SetReadDeadline, ReadMessage, ReadJSON, SetPongHandler, SetPingHandler) concurrently.
The Close and WriteControl methods can be called concurrently with all other methods.
However, in one of the example provided by the package
func (c *Conn) readPump() {
	defer func() {
		hub.unregister <- c
		c.ws.Close()
	}()
	c.ws.SetReadLimit(maxMessageSize)
	c.ws.SetReadDeadline(time.Now().Add(pongWait))
	c.ws.SetPongHandler(func(string) error {
		c.ws.SetReadDeadline(time.Now().Add(pongWait))
		return nil
	})
	for {
		_, message, err := c.ws.ReadMessage()
		if err != nil {
			if websocket.IsUnexpectedCloseError(err, websocket.CloseGoingAway) {
				log.Printf("error: %v", err)
			}
			break
		}
		message = bytes.TrimSpace(bytes.Replace(message, newline, space, -1))
		hub.broadcast <- message
	}
}
Source: https://github.com/gorilla/websocket/blob/a68708917c6a4f06314ab4e52493cc61359c9d42/examples/chat/conn.go#L50
This line
c.ws.SetPongHandler(func(string) error {
	c.ws.SetReadDeadline(time.Now().Add(pongWait))
	return nil
})
and this line
_, message, err := c.ws.ReadMessage()
seem to be not synchronized, because the first line is a callback function, so it should be invoked in a goroutine created in the package, while the second line executes in the goroutine that invokes serveWs.
More importantly, how should I ensure that no more than one goroutine calls the SetReadDeadline, ReadMessage, SetPongHandler, SetPingHandler concurrently?
I tried to use a Mutex and to lock it whenever I call the above functions and unlock it afterwards, but I quickly realized a problem. It is usual (also in the example) that ReadMessage is called in a for-loop. But if the Mutex is locked before ReadMessage, then no other read functions can acquire the lock and execute until the next message is received.
Is there any better way in handling this concurrency issue? Thanks in advance.
The best way to ensure that there are no concurrent calls to the read methods is to execute all of the read methods from a single goroutine.
All of the Gorilla websocket examples use this approach including the example pasted in the question. In the example, all calls to the read methods are from the readPump method. The readPump method is called once for a connection on a single goroutine. It follows that the connection read methods are not called concurrently.
The section of the documentation on control messages says that the application must read the connection to process control messages. Based on this and Gorilla's own examples, I think it's safe to assume that the ping, pong and close handlers will be called from the application's reading goroutine as it is in the current implementation. It would be nice if the documentation could be more explicit about this. Maybe file an issue?
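For illustration, one way to honor that constraint is to keep every read call inside a single goroutine and hand incoming data to the rest of the program over a channel. This sketch reuses the example's names; the incoming channel is hypothetical:

// readPump is the only goroutine that touches the read methods.
func (c *Conn) readPump(incoming chan<- []byte) {
	defer c.ws.Close()
	c.ws.SetPongHandler(func(string) error {
		// Invoked from inside ReadMessage, i.e. on this same goroutine.
		return c.ws.SetReadDeadline(time.Now().Add(pongWait))
	})
	for {
		_, message, err := c.ws.ReadMessage()
		if err != nil {
			close(incoming)
			return
		}
		incoming <- message // other goroutines consume messages from the channel
	}
}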
