I'm having an issue with disposing of a Kafka consumer at the end of program execution. Here is the code responsible for closing the consumer:
func (kc *KafkaConsumer) Dispose() {
    Sugar.Info("Disposing of consumer")
    kc.mu.Lock()
    kc.Consumer.Close()
    Sugar.Info("Disposed of consumer")
    kc.mu.Unlock()
}
As you might have noticed, I'm using a sync.Mutex because the consumer is accessed by multiple goroutines. Below is another snippet, responsible for reading messages from Kafka:
func (kc *KafkaConsumer) Consume(signalChan chan os.Signal, ctx context.Context) {
    for {
        select {
        case sig := <-signalChan:
            Sugar.Info("Caught signal %v", sig)
            break
        case <-ctx.Done():
            Sugar.Info("Got context done message. Closing consumer...")
            kc.Dispose()
            break
        default:
            for {
                message, err := kc.Consumer.ReadMessage(-1)
                if err != nil {
                    Log.Error(err.Error())
                    return
                }
                Sugar.Infof("Got a new message %v", message)
                resp := make(chan *KafkaResponseEntity)
                go router.UseMessage(*message, resp, ctx)
                // Potential deadlock
                response := <-resp
                /*
                    Explicit commit of an offset in order to ensure
                    that request has been successfully processed
                */
                kc.Consumer.Commit()
                Sugar.Info("Successfully commited an offset")
                Sugar.Infof("Just got a response %v", response)
                go producer.KP.Produce(response.PaymentId, response.Bytes, "some_random_topic")
            }
        }
    }
}
The problem is that when closing the consumer, program execution simply stalls.
Are there any issues here? Should I use sync.Cond along with the mutex? I'd be very glad if you could provide a thorough explanation of what might have gone wrong in my code.
Thanks in advance.
I suspect this is hanging because of:
kc.Consumer.ReadMessage(-1)
The documentation states that this call will block indefinitely, which is why the consumer never closes. The simplest approach would be to make that value a positive time duration (e.g. 1 * time.Second), but then you may get timeout errors if no messages are consumed within the timeout. The timeout error is generally innocuous, but it is something to account for; from the linked documentation:
Timeout is returned as (nil, err) where err is
`err.(kafka.Error).Code() == kafka.ErrTimedOut`
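For illustration, a rough, untested sketch of that polling variant (reusing the KafkaConsumer, Sugar and Log names from the question and assuming the confluent-kafka-go client):
func (kc *KafkaConsumer) Consume(ctx context.Context) {
    for {
        select {
        case <-ctx.Done():
            Sugar.Info("Got context done message. Closing consumer...")
            kc.Dispose()
            return
        default:
            // A finite timeout lets the loop come back to the select regularly,
            // so cancellation is eventually observed.
            msg, err := kc.Consumer.ReadMessage(1 * time.Second)
            if err != nil {
                // A timeout just means no message arrived in this interval.
                if kErr, ok := err.(kafka.Error); ok && kErr.Code() == kafka.ErrTimedOut {
                    continue
                }
                Log.Error(err.Error())
                return
            }
            Sugar.Infof("Got a new message %v", msg)
            // ... route the message, commit the offset, produce the response, etc.
        }
    }
}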
I'm not yet sure of a good way to utilize the indefinite blocking and allow it to be interrupted. If anyone does know please post the findings!
This one is a tricky issue that bugs me quite a bit.
Essentially, I wrote an integration microservice that provides data streams from the Binance crypto exchange using the Go client. A client sends a start message, which starts a data stream for a symbol, and at some point sends a close message to stop the stream. My implementation looks basically like this:
func (c BinanceClient) StartDataStream(clientType bn.ClientType, symbol, interval string) error {
    switch clientType {
    case bn.SPOT_LIVE:
        wsKlineHandler := c.handlers.klineHandler.SpotKlineHandler
        wsErrHandler := c.handlers.klineHandler.ErrHandler
        _, stopC, err := binance.WsKlineServe(symbol, interval, wsKlineHandler, wsErrHandler)
        if err != nil {
            fmt.Println(err)
            return err
        } else {
            c.state.clientSymChanMap[clientType][symbol] = stopC
            return nil
        }
    ...
}
The clientSymChanMap stores the stopChannel in a nested hashmap so that I can retrieve the stop channel later to stop the data feed. The stop function has been implemented accordingly:
func (c BinanceClient) StopDataStream(clientType bn.ClientType, symbol string) {
    //mtd := "StopDataStream: "
    stopC := c.state.clientSymChanMap[clientType][symbol]
    if isClosed(stopC) {
        DbgPrint(" Channel is already closed. Do nothing for: " + symbol)
    } else {
        close(stopC)
    }
    // Delete channel from the map otherwise the next StopAll throws a NPE due to closing a dead channel
    delete(c.state.clientSymChanMap[clientType], symbol)
    return
}
To prevent panics from already closed channels, I use a check function that returns true in case the channel is already closed:
func isClosed(ch <-chan struct{}) bool {
    select {
    case <-ch:
        return true
    default:
    }
    return false
}
Looks nice, but there's a catch. When I run the code and start a data feed for just one symbol, it starts and closes the feed exactly as expected.
However, when starting multiple data feeds, the above code somehow never closes the websocket and just keeps streaming data forever. Without the isClosed check, I get panics from trying to close a closed channel, but with the check in place, well, nothing gets closed.
When looking at the implementation of the above binance.WsKlineServe function, it's quite obvious that it just wraps a new websocket with each invocation and then returns the done & stop channel.
The documentation gives the following usage example:
wsKlineHandler := func(event *binance.WsKlineEvent) {
    fmt.Println(event)
}
errHandler := func(err error) {
    fmt.Println(err)
}
doneC, stopC, err := binance.WsKlineServe("LTCBTC", "1m", wsKlineHandler, errHandler)
if err != nil {
    fmt.Println(err)
    return
}
<-doneC
Because waiting on the doneC channel actually blocks, I removed it and thought that storing the stopC channel and then using it later to stop the data feed would work. However, it only does so for a single instance. When multiple streams are open, this doesn't work anymore.
Any idea why that's the case and how to fix it?
Firstly, this is dangerous:
if isClosed(stopC) {
    DbgPrint(" Channel is already closed. Do nothing for: " + symbol)
} else {
    close(stopC) // <- can't be sure channel is still open
}
There is no guarantee that, after your polling check of the channel state, the channel will still be in that same state on the next line of code. So this code could, in theory, panic if it's called concurrently.
If you want an asynchronous action to occur on the channel close - it's best to do this explicitly from its own goroutine. So you could try this:
go func() {
    stopC := c.state.clientSymChanMap[clientType][symbol]
    <-stopC
    // stopC definitely closed now
    delete(c.state.clientSymChanMap[clientType], symbol)
}()
P.S. You do need some sort of mutex on your map, since the delete is asynchronous: you need to ensure any adds to the map don't data race with it.
P.P.S. Channels are reclaimed by the GC when they go out of scope. If you are no longer reading from a channel, it does not need to be explicitly closed to be reclaimed by the GC.
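On the P.S.: a minimal sketch of guarding the map with a mutex (the type and field names here are assumptions based on your snippets):
type clientState struct {
    mu               sync.Mutex
    clientSymChanMap map[bn.ClientType]map[string]chan struct{}
}

func (s *clientState) setStopChan(ct bn.ClientType, symbol string, stopC chan struct{}) {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.clientSymChanMap[ct][symbol] = stopC
}

func (s *clientState) deleteStopChan(ct bn.ClientType, symbol string) {
    s.mu.Lock()
    defer s.mu.Unlock()
    delete(s.clientSymChanMap[ct], symbol)
}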
Using channels for stopping a goroutine or closing something is very tricky. There are lots of things you can do wrong or forget to do.
context.WithCancel abstracts that complexity away, making the code more readable and maintainable.
Some code snippets:
ctx, cancel := context.WithCancel(context.TODO())
TheThingToCancel(ctx, ...)
// Whenever you want to stop TheThingToCancel. Can be called multiple times.
cancel()
Then in a for loop you'd often have a select like this:
for {
    select {
    case <-ctx.Done():
        return
    default:
    }
    // do stuff
}
Here is some code that is closer to your specific case of an open connection:
func TheThingToCancel(ctx context.Context) (context.CancelFunc, error) {
    ctx, cancel := context.WithCancel(ctx)

    conn, err := net.Dial("tcp", ":12345")
    if err != nil {
        cancel()
        return nil, err
    }

    go func() {
        <-ctx.Done()
        _ = conn.Close()
    }()

    go func() {
        defer func() {
            _ = conn.Close()
            // make sure context is always cancelled to avoid goroutine leak
            cancel()
        }()
        var bts = make([]byte, 1024)
        for {
            n, err := conn.Read(bts)
            if err != nil {
                return
            }
            fmt.Println(bts[:n])
        }
    }()

    return cancel, nil
}
It returns the cancel function so that the connection can be closed from the outside.
Cancelling a context can be done many times over without the panic that would occur if a channel were closed multiple times. That is one advantage. You can also derive contexts from other contexts, and thereby stop many different routines by cancelling a single parent context. Carefully designed, this is very powerful for shutting down groups of routines that belong together but also need to be able to be shut down individually.
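As a small illustration of deriving contexts (the names here are made up):
// One parent context for the whole service, one child per connection/stream.
parentCtx, stopAll := context.WithCancel(context.Background())

streamCtx, stopStream := context.WithCancel(parentCtx)
go runStream(streamCtx) // hypothetical worker that returns when streamCtx is done

stopStream() // stops just this one stream
stopAll()    // stops everything derived from parentCtx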
Bit of a newb to both Go and gRPC, so bear with me.
Using go version go1.14.4 windows/amd64, proto3, and the latest grpc (1.31, I think). I'm trying to set up a bidi streaming connection that will likely be open for longer periods of time. Everything works locally, except that if I terminate the client (or one of them), it kills the server as well with the following error:
Unable to trade data rpc error: code = Canceled desc = context canceled
This error comes out of this code server side
func (s *exchangeserver) Trade(stream proto.ExchageService_TradeServer) error {
    endchan := make(chan int)
    defer close(endchan)

    go func() {
        for {
            req, err := stream.Recv()
            if err == io.EOF {
                break
            }
            if err != nil {
                log.Fatal("Unable to trade data ", err)
                break
            }
            fmt.Println("Got ", req.GetNumber())
        }
        endchan <- 1
    }()

    go func() {
        for {
            resp := &proto.WordResponse{Word: "Hello again "}
            err := stream.Send(resp)
            if err != nil {
                log.Fatal("Unable to send from server ", err)
                break
            }
            time.Sleep(time.Duration(500 * time.Millisecond))
        }
        endchan <- 1
    }()

    <-endchan
    return nil
}
And the Trade() RPC is so simple it isn't worth posting the .proto.
The error is clearly coming out of the Recv() call, but that call blocks until it sees a message, like the client disconnect, at which point I would expect it to kill the stream, not the whole process. I've tried adding a service handler with HandleConn(context, stats.ConnStats) and it does catch the disconnect before the server dies, but I can't do anything with it. I've even tried creating a global channel that the serve handler pushes a value into when HandleRPC(context, stats.RPCStats) is called and only allowing Recv() to be called when there's a value in the channel, but that can't be right, that's like blocking a blocking function for safety and it didn't work anyway.
This has to be one of those really stupid mistakes that beginners make. Of what use would gRPC be if it couldn't handle a client disconnect without dying? Yet I have read probably a trillion (ish) posts from every corner of the internet and no one else is having this issue. On the contrary, the more popular version of this question is "My client stream stays open after disconnect". I'd expect that issue, not this one.
I'm not 100% sure how this is supposed to behave, but I note that you are starting separate receive and send goroutines at the same time. This might be valid, but it is not the typical approach. Instead, you would usually receive what you want to process and then start a nested loop to handle the reply.
See an example of typical bidirectional streaming implementation from here: https://grpc.io/docs/languages/go/basics/
func (s *routeGuideServer) RouteChat(stream pb.RouteGuide_RouteChatServer) error {
    for {
        in, err := stream.Recv()
        if err == io.EOF {
            return nil
        }
        if err != nil {
            return err
        }
        key := serialize(in.Location)
        ... // look for notes to be sent to client
        for _, note := range s.routeNotes[key] {
            if err := stream.Send(note); err != nil {
                return err
            }
        }
    }
}
Sending and receiving at the same time might be valid for your use case, but if that is what you are trying to do then I believe your handling of the channels is incorrect. Either way, please read on to understand the issue, as it is a common one in Go.
You have a single channel which blocks only until it receives a single message; once it unblocks, the function ends and the channel is closed (by the defer).
You are trying to send to this channel from both your send loop and your receive loop.
When the last one to finish tries to send to the channel, it will have been closed (by the first to finish) and the server will panic. Annoyingly, you won't actually see any sign of this, as the server will exit before the goroutine can dump its panic (no clues, which is probably why you landed here).
See an example of the issue here (gRPC code stripped out):
https://play.golang.org/p/GjfgDDAWNYr
Note: comment out the last pause in the main func and the panic stops showing reliably (as in your case).
So one simple fix would probably be to create two separate channels (one for send, one for receive) and block on both. This, however, would leave the send loop running unnecessarily if you never get a chance to respond, so it's probably better to structure things like the example above unless you have good reason to pursue something different.
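If you do want concurrent send and receive, a rough, untested sketch of that two-channel idea (closing the channels instead of sending on them, and reusing the type and message names from your snippet as assumptions) could look like this:
func (s *exchangeserver) Trade(stream proto.ExchageService_TradeServer) error {
    recvDone := make(chan struct{})
    sendDone := make(chan struct{})

    go func() {
        defer close(recvDone)
        for {
            req, err := stream.Recv()
            if err != nil {
                return // io.EOF or a client disconnect ends the receive loop
            }
            fmt.Println("Got ", req.GetNumber())
        }
    }()

    go func() {
        defer close(sendDone)
        for {
            if err := stream.Send(&proto.WordResponse{Word: "Hello again "}); err != nil {
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }()

    // Closing (rather than sending on) the channels means neither goroutine can
    // panic by sending on a channel after the handler has returned.
    <-recvDone
    <-sendDone
    return nil
}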
Another possibility is some sort of server/request context mix-up, but I'm pretty sure the above will fix it. Drop an update with your server setup code if you're still having issues after the above changes.
I'm implementing a feature where I need to read files from a directory, then parse and export them to a REST service at a regular interval. As part of the same feature, I would like to gracefully handle program termination (SIGKILL, SIGQUIT, etc.).
To that end, I would like to know how to implement context-based cancellation of the process.
For executing the flow at a regular interval I'm using gocron.
cmd/scheduler.go
func scheduleTask() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()
    s := gocron.NewScheduler()
    s.Every(10).Minutes().Do(processTask, ctx)
    s.RunAll()  // run immediate
    <-s.Start() // schedule
    for {
        select {
        case <-(ctx).Done():
            fmt.Print("context done")
            s.Remove(processTask)
            s.Clear()
            cancel()
        default:
        }
    }
}

func processTask(ctx *context.Context) {
    task.Export(ctx)
}
task/export.go
func Export(ctx *context.Context) {
    pendingFiles, err := filepath.Glob("/tmp/pending/" + "*_task.json")
    // error handling

    // as there can be 100s of files, I would like to break the loop when context.Done()
    // to return asap & clean up the resources here as well
    for _, fileName := range pendingFiles {
        exportItem(fileName)
    }
}

func exportItem(fileName string) {
    data, err := ReadFile(fileName) // not shown here for brevity
    // err handling
    err = postHTTPData(string(data)) // not shown for brevity
    // err handling
}
For the process management, I think the other component is the actual handling of signals, and managing the context from those signals.
I'm not sure of the specifics of go-cron (they have an example showing some of these concepts on their github) but in general I think that the steps involved are:
Registering an OS signal handler
Waiting to receive a signal
Cancelling the top-level context in response to the signal
Example:
sigCh := make(chan os.Signal, 1)
defer close(sigCh)
signal.Notify(sigCh, syscall.SIGTERM, syscall.SIGQUIT, syscall.SIGINT)
<-sigCh
cancel()
I'm not sure how this will look in the context of go-cron, but the context that the signal handling code cancels should be a parent of the context that the task and job are given.
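Ignoring the gocron wiring, a bare-bones sketch of how those pieces can fit together (it reuses exportItem from the question; everything else here is illustrative):
func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    sigCh := make(chan os.Signal, 1)
    signal.Notify(sigCh, syscall.SIGTERM, syscall.SIGQUIT, syscall.SIGINT)
    go func() {
        <-sigCh
        cancel() // a signal cancels the parent context; everything derived from it sees Done()
    }()

    Export(ctx) // in real code this would be the scheduled job instead
}

// Export takes a context.Context value (rather than a *context.Context) and
// checks it between files so it can stop as soon as the context is cancelled.
func Export(ctx context.Context) {
    pendingFiles, _ := filepath.Glob("/tmp/pending/*_task.json")
    for _, fileName := range pendingFiles {
        select {
        case <-ctx.Done():
            return
        default:
            exportItem(fileName) // from the question
        }
    }
}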
Worked this out myself just now. I've always felt the blog post on contexts was a LOT of material to try to understand, so a simpler demonstration would be nice.
There are many scenarios you may encounter. Each one is different and will require adaptation. Here's one example:
Say you have a channel that could run for an indeterminate amount of time.
indeterminateChannel := make(chan string)

for s := range indeterminateChannel {
    fmt.Println(s)
}
Your producer might look something like:
for {
    indeterminateChannel <- "Terry"
}
We don't control the producer, so we need some way to cut out of the print loop if the producer exceeds our time limit.
indeterminateChannel := make(chan string)
// Close the channel when our code exits so OUR for loop no longer occupies
// resources and the goroutine exits.
// The producer will have a problem, but we don't care about their problems.
// In this instance.
defer close(indeterminateChannel)
// we wait for this context to time out below the goroutine.
ctx, cancel := context.WithTimeout(context.TODO(), time.Minute*1)
defer cancel()
go func() {
    for s := range indeterminateChannel {
        fmt.Println(s)
    }
}()

<-ctx.Done() // wait for the context to terminate based on a timeout.
You can also check ctx.Err to see if the context exited due to a timeout or because it was canceled.
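A small sketch of that check:
<-ctx.Done()
switch ctx.Err() {
case context.DeadlineExceeded:
    // the timeout elapsed
case context.Canceled:
    // cancel() was called explicitly
}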
You might also want to learn about how to properly check if the context failed due to a deadline: How to check if an error is "deadline exceeded" error?
Or if the context was canceled: How to check if a request was cancelled
I have a system handling about 100 req/sec, and sometimes it doesn't respond until I restart my Go program. I found out that this is because in some places I open a transaction and don't close it. That's why all connections were occupied by open transactions and I couldn't open another one. After this I added this code:
defer func() {
    if r := recover(); r != nil {
        tx.Rollback()
        return
    }
    if err == nil {
        err = tx.Commit()
    } else {
        tx.Rollback()
    }
}()
This made my program work for a month without interruption, but just now it happened again, probably because of this issue. Is there a better way to close a transaction? Or maybe a way to close a transaction if it has been open for 1 minute?
If you want to roll back the transaction after 1 minute, you can use the standard Go timeout pattern (below).
However, there may be better ways to deal with your problems. You don’t give a lot of information about it, but you are probably using standard HTTP requests with contexts. In that case, it may help you to know that a transaction started within a context is automatically rolled back when the context is cancelled. One way to make sure that a context is cancelled if something gets stuck is to give it a deadline.
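For example, with database/sql, a minimal sketch of running a transaction under a deadline (assuming db is a *sql.DB and r is the incoming *http.Request):
ctx, cancel := context.WithTimeout(r.Context(), time.Minute)
defer cancel()

tx, err := db.BeginTx(ctx, nil)
if err != nil {
    return err
}
// Rollback after a successful Commit is a no-op (it returns sql.ErrTxDone),
// so this defer is safe and guarantees the transaction never stays open.
defer tx.Rollback()

// ... run queries with tx.ExecContext(ctx, ...) / tx.QueryContext(ctx, ...)

return tx.Commit()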
Excerpted from Channels - Timeouts. The original authors were Chris Lucas, Kwarrtz and Rodolfo Carvalho. Attribution details can be found on the contributor page. The source is licenced under CC BY-SA 3.0 and may be found in the Documentation archive. Reference topic ID: 1263 and example ID: 6050.
Timeouts
Channels are often used to implement timeouts.
func main() {
    // Create a buffered channel to prevent a goroutine leak. The buffer
    // ensures that the goroutine below can eventually terminate, even if
    // the timeout is met. Without the buffer, the send on the channel
    // blocks forever, waiting for a read that will never happen, and the
    // goroutine is leaked.
    ch := make(chan struct{}, 1)
    go func() {
        time.Sleep(10 * time.Second)
        ch <- struct{}{}
    }()
    select {
    case <-ch:
        // Work completed before timeout.
    case <-time.After(1 * time.Second):
        // Work was not completed after 1 second.
    }
}
http.Serve either returns an error as soon as it is called or blocks if successfully executing.
How can I make it so that if it blocks it does so in its own goroutine? I currently have the following code:
func serveOrErr(l net.Listener, handler http.Handler) error {
    starting := make(chan struct{})
    serveErr := make(chan error)

    go func() {
        starting <- struct{}{}
        if err := http.Serve(l, handler); err != nil {
            serveErr <- err
        }
    }()

    <-starting

    select {
    case err := <-serveErr:
        return err
    default:
        return nil
    }
}
This seemed like a good start and works on my test machine, but I believe there is no guarantee that serveErr <- err would happen before case err := <-serveErr, leading to inconsistent results due to a race if http.Serve were to produce an error.
http.Serve either returns an error as soon as it is called or blocks if successfully executing
This assumption is not correct, and I believe the early-return case rarely occurs. http.Serve calls net.Listener.Accept in a loop; an error can occur at any time (socket closed, too many open file descriptors, etc.). It's http.ListenAndServe, usually used for running HTTP servers, which often fails early while binding the listening socket (no permissions, address already in use).
In my opinion, what you're trying to do is wrong, unless your net.Listener.Accept really is failing on the first call for some reason. Is it? If you want to be 100% sure your server is working, you could try to connect to it (and maybe actually transmit something), but once you have successfully bound the socket I don't see that as really necessary.
You could use a timeout on your select statement, e.g.
timeout := time.After(5 * time.Millisecond) // TODO: adjust the value
select {
case err := <-serveErr:
    return err
case <-timeout:
    return nil
}
This way your select will block until serveErr has a value or the specified timeout has elapsed. Note that the execution of your function will therefore block the calling goroutine for up to the duration of the specified timeout.
Rob Pike's excellent talk on Go concurrency patterns might be helpful.