I have a server that works with websocket connections and a database. When a user connects over a socket I increment their "online" field in the database, and when they disconnect I decrement it. To guard against the server breaking down, I also keep a local replica of who is online in a map[string]int. So I need to postpone the server shutdown until a database request completes that decrements every user's "online" according to my replica map, because in this case the socket connections don't send their usual "close" events.
I have found a package, github.com/xlab/closer, that handles some system calls and can run an action before the program finishes, but my database request doesn't work this way (code below).
func main() {
    ...
    // trying to handle the program finish event
    closer.Bind(cleanupSocketConnections(&pageHandler))
    ...
}

// cleanupSocketConnections returns the function that handles the program finish event
func cleanupSocketConnections(p *controllers.PageHandler) func() {
    return func() {
        p.PageService.ResetOnlineUsers()
    }
}
// Users is a map[string]int whose key is the userId and whose value is the count of socket connections
type PageService struct {
    Users map[string]int
}

func (p *PageService) ResetOnlineUsers() {
    for userId, count := range p.Users {
        // decrease the online count of every user recorded in the program variable
        InfoService{}.DecreaseInfoOnline(userId, count)
    }
}
Maybe I'm using it incorrectly, or is there a better way to delay the default program shutdown?
First of all, executing tasks when the server "breaks down", as you said, is quite complicated, because breaking down can mean a lot of things, and nothing can guarantee that cleanup functions are executed when something goes really wrong in your server.
From an engineering point of view (if setting users offline on breakdown is that important), the best approach would be a secondary service, on another server, that receives user connection, disconnection and ping events; if it receives no updates within a set timeout, the service considers your server down and proceeds to set every user offline.
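For illustration, here is a minimal, runnable sketch of what such a watchdog loop could look like; the Event type, the events channel and the markAllOffline callback are assumptions made up for the example:

package main

import (
    "fmt"
    "time"
)

// Event is a stand-in for whatever connect/disconnect/ping message
// the main server would send to the watchdog service.
type Event struct {
    UserID string
    Online bool
}

// watchdog marks everyone offline if no event arrives within the timeout.
func watchdog(events <-chan Event, timeout time.Duration, markAllOffline func()) {
    timer := time.NewTimer(timeout)
    defer timer.Stop()
    for {
        select {
        case _, ok := <-events:
            if !ok {
                return // event stream closed, nothing left to watch
            }
            // Any event proves the server is alive: rearm the timer.
            // Stop and drain before Reset, as the time package documentation requires.
            if !timer.Stop() {
                <-timer.C
            }
            timer.Reset(timeout)
        case <-timer.C:
            // No news within the timeout: assume the server is down.
            markAllOffline()
            return
        }
    }
}

func main() {
    events := make(chan Event)
    go watchdog(events, 2*time.Second, func() { fmt.Println("marking every user offline") })
    events <- Event{UserID: "u1", Online: true} // one ping, then silence
    time.Sleep(3 * time.Second)                 // the watchdog fires after 2s of silence
}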
Back to your question, using defer and waiting for termination signals should cover 99% of cases. I commented the code to explain the logic.
// AllUsersOffline is called when the program is terminated. It takes a *sync.Once
// to make sure this function is performed only one time, since it might be called
// from different goroutines.
func AllUsersOffline(once *sync.Once) {
    once.Do(func() {
        fmt.Print("setting all users offline...")
        // logic to set all users offline
    })
}
// CatchSigs catches termination signals and executes the f function at the end
func CatchSigs(f func()) {
    cSig := make(chan os.Signal, 1)
    // watch for these signals; these are the termination signals in GNU =>
    // https://www.gnu.org/software/libc/manual/html_node/Termination-Signals.html
    // (SIGKILL is omitted: it cannot be caught or ignored, so registering it has no effect)
    signal.Notify(cSig, syscall.SIGTERM, syscall.SIGINT, syscall.SIGQUIT, syscall.SIGHUP)
    // wait for one of them
    sig := <-cSig
    fmt.Printf("received signal: %s", sig)
    // execute f
    f()
}
func main() {
    /* code */

    // the once is used to make sure AllUsersOffline is performed ONE TIME
    usersOfflineOnce := &sync.Once{}

    // catch termination signals
    go CatchSigs(func() {
        // when a termination signal is caught, execute the AllUsersOffline function
        AllUsersOffline(usersOfflineOnce)
    })

    // deferred functions are called even in case of panic events, although their
    // execution is not to be taken for granted (OOM errors etc.)
    defer AllUsersOffline(usersOfflineOnce)

    /* code */

    // run server
    err := server.Run()
    if err != nil {
        // error logic here
    }

    // bla bla bla
}
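As an aside, since Go 1.16 the same wiring can be expressed with signal.NotifyContext, which turns signal delivery into context cancellation. A minimal sketch reusing the names from the code above (server.Run stays a placeholder):

func main() {
    usersOfflineOnce := &sync.Once{}

    // ctx is cancelled as soon as SIGTERM or SIGINT arrives (or stop is called)
    ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
    defer stop()

    // run the server in the background so we can block on the context
    go func() {
        if err := server.Run(); err != nil {
            // error logic here
        }
        stop() // the server stopped on its own: unblock the wait below
    }()

    <-ctx.Done() // wait for a termination signal (or for the server to stop)
    AllUsersOffline(usersOfflineOnce)
}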
I think that you need to look at goroutines and channels.
Here is something (maybe) useful:
https://nathanleclaire.com/blog/2014/02/15/how-to-wait-for-all-goroutines-to-finish-executing-before-continuing/
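That post centres on sync.WaitGroup; for reference, a minimal runnable sketch of the pattern it describes:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 3; i++ {
        wg.Add(1) // register one more goroutine to wait for
        go func(id int) {
            defer wg.Done() // signal completion
            fmt.Println("worker", id, "done")
        }(i)
    }
    wg.Wait() // block until every registered goroutine has called Done
    fmt.Println("all workers finished")
}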
Related
I'm currently implementing the Golang client for TypeDB and struggling with their session-based heartbeat convention. Usually you implement heartbeat per client, so that's relatively easy: just run a goroutine in the background and send a heartbeat every few seconds.
TypeDB, however, chose to implement heartbeat (they call it pulse) on a per-session basis, which means that every time a new session gets created, I have to start monitoring that session with a separate goroutine. Conversely, if the client closes a session, I have to stop the monitoring. What's particularly ugly, I also have to check for stalled sessions every once in a while. There is a GH issue to switch over to per-client heartbeat, but no ETA, so I have to make session heartbeat work to prevent server-side session termination.
So far, my solution:
Create a new session
Open that session & check for error
If no error, add session to a hashmap keyed by session ID
This seems to work for now. The code, just for context, is here:
https://github.com/marvin-hansen/typedb-client-go/blob/main/src/client/v2/manager_session.go
For monitoring each session, I am mulling over two issues:
Channel close over multiple goroutines is a bit tricky and may lead to race conditions.
I would need some kind of error group to catch heartbeat failures, i.e. in case the server shuts down or a network link fails.
With all that in mind, I believe a context.WithCancel might be a safe & sane solution.
What I came up with so far is this:
Pass the global context as a parameter to the heartbeat function
Create a new context with WithCancel for each session that calls heartbeat
Run heartbeat in a goroutine until either cancel gets called (by stopMonitoring) or an error occurs
What's not so clear to me is: how do I track all the cancel functions returned for each tracked session, so as to ensure I am closing the right goroutine matching the session to close?
Thank you for any hint to solve this.
The code:
func (s SessionManager) startMonitorSession(sessionID []byte) {
    // How do I track each goroutine per session?
}

func (s SessionManager) stopMonitorSession(sessionID []byte) {
    // How do I call the correct cancel function to stop the goroutine matching the session?
}

func (s SessionManager) runHeartbeat(ctx context.Context, sessionID []byte) context.CancelFunc {
    // Create a new context, with its cancellation function, from the original context
    ctx, cancel := context.WithCancel(ctx)
    go func() {
        select {
        case <-ctx.Done():
            fmt.Println("Stopped monitoring session: ")
        default:
            err := s.sendPulseRequest(sessionID)
            // If this operation returns an error,
            // cancel all operations using the local context created above
            if err != nil {
                cancel()
            }
            fmt.Println("done")
        }
    }()
    // return the cancel function for the call site to close at a later stage
    return cancel
}
func (s SessionManager) sendPulseRequest(sessionID []byte) error {
    mtd := "sendPulse: "
    req := requests.GetSessionPulseReq(sessionID)
    res, pulseErr := s.client.client.SessionPulse(s.client.ctx, req)
    if pulseErr != nil {
        dbgPrint(mtd, "Heartbeat error. Close session")
        return pulseErr
    }
    if !res.Alive {
        dbgPrint(mtd, "Server not alive anymore. Close session")
        closeErr := s.CloseSession(sessionID)
        if closeErr != nil {
            return closeErr
        }
    }
    // no error
    return nil
}
Update:
Thanks to the comment(s), I managed to solve the bulk of the issue by wrapping the session & its CancelFunc in a dedicated struct, called TypeDBSession.
That way, the stop function simply pulls the CancelFunc from the struct, calls it, and stops the monitoring goroutine. With some more tweaking, tests seem to pass, although this is not concurrency safe for the time being.
That said, this was a non-trivial issue to solve. Thanks again for the comments!
If anyone is open to suggesting some code improvements, especially w.r.t. making this concurrency safe, feel free to comment here or file a GH issue / PR.
SessionType:
https://github.com/marvin-hansen/typedb-client-go/blob/main/src/client/v2/manager_session_type.go
SessionMonitoring:
https://github.com/marvin-hansen/typedb-client-go/blob/main/src/client/v2/manager_session_monitor.go
Tests:
https://github.com/marvin-hansen/typedb-client-go/tree/main/test/client/session
My two cents:
You may need to run the heartbeat repeatedly: use a for loop with a time.Ticker around the select, as sketched below.
Store a map of session id -> context.CancelFunc to track all the cancellable contexts. Perhaps you should convert the id to a string.
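A rough sketch combining both suggestions; the SessionManager fields, the sendPulse helper and the 5-second interval are assumptions for illustration, not the actual typedb-client-go API:

package session

import (
    "context"
    "sync"
    "time"
)

type SessionManager struct {
    mu       sync.Mutex
    sessions map[string]context.CancelFunc // session id (as string) -> cancel
}

func (s *SessionManager) startMonitorSession(ctx context.Context, sessionID []byte) {
    ctx, cancel := context.WithCancel(ctx)
    s.mu.Lock()
    s.sessions[string(sessionID)] = cancel // remember how to stop this session's goroutine
    s.mu.Unlock()

    go func() {
        ticker := time.NewTicker(5 * time.Second) // one pulse per tick
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done(): // stopMonitorSession was called
                return
            case <-ticker.C:
                if err := s.sendPulse(sessionID); err != nil {
                    cancel() // heartbeat failed: stop monitoring this session
                    return
                }
            }
        }
    }()
}

func (s *SessionManager) stopMonitorSession(sessionID []byte) {
    s.mu.Lock()
    defer s.mu.Unlock()
    if cancel, ok := s.sessions[string(sessionID)]; ok {
        cancel()
        delete(s.sessions, string(sessionID))
    }
}

// sendPulse stands in for the real SessionPulse request.
func (s *SessionManager) sendPulse(sessionID []byte) error { return nil }

Because both the start and stop functions take the mutex before touching the map, the cancel bookkeeping itself is concurrency safe.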
I have a scenario in which I'm processing events on a channel, and one of those events is a heartbeat which needs to occur within a certain timeframe. Events which are not heartbeats leave the timer running; however, whenever the heartbeat is received I want to reset the timer. The obvious way to do this would be by using a time.NewTimer.
For example:
func main() {
    // c is assumed to be the events channel, e.g. c := make(chan Event),
    // where Event has a Msg string field
    to := time.NewTimer(3200 * time.Millisecond)
    for {
        select {
        case event, ok := <-c:
            if !ok {
                return
            } else if event.Msg == "heartbeat" {
                to.Reset(3200 * time.Millisecond)
            }
        case remediate := <-to.C:
            fmt.Println("do some stuff ...", remediate)
            return
        }
    }
}
Note that a time.Ticker won't work here as the remediation should only be triggered if the heartbeat hasn't been received, not every time.
The above solution works in the handful of low volume tests I've tried it on, however I came across a Github issue indicating that resetting a Timer which has not fired is a no-no. Additionally the documentation states:
Reset should be invoked only on stopped or expired timers with drained channels. If a program has already received a value from t.C, the timer is known to have expired and the channel drained, so t.Reset can be used directly. If a program has not yet received a value from t.C, however, the timer must be stopped and—if Stop reports that the timer expired before being stopped—the channel explicitly drained:
if !t.Stop() {
    <-t.C
}
t.Reset(d)
This gives me pause, as it seems to describe exactly what I'm attempting to do. I'm resetting the Timer whenever the heartbeat is received, prior to it having fired. I'm not experienced enough with Go yet to digest the whole post, but it certainly seems like I may be headed down a dangerous path.
One other solution I thought of is to simply replace the Timer with a new one whenever the heartbeat occurs, e.g:
else if event.Msg == "heartbeat" {
    to = time.NewTimer(3200 * time.Millisecond)
}
At first I was worried that the rebinding to = time.NewTimer(3200 * time.Millisecond) wouldn't be visible within the select:
For all the cases in the statement, the channel operands of receive operations and the channel and right-hand-side expressions of send statements are evaluated exactly once, in source order, upon entering the "select" statement. The result is a set of channels to receive from or send to, and the corresponding values to send.
But in this particular case since we are inside a loop, I would expect that upon each iteration we re-enter select and therefore the new binding should be visible. Is that a fair assumption?
I realize there are similar questions out there, and I've tried to read the relevant posts/documentation, but I am new to Go and just want to be sure I'm understanding things correctly here.
So my questions are:
Is my use of timer.Reset() unsafe, or are the cases mentioned in the Github issue highlighting other problems which are not applicable here? Is the explanation in the docs cryptic or do I just need more experience with Go?
If it is unsafe, is my second proposed solution acceptable (rebinding the timer on each iteration)?
ADDENDUM
Upon further reading, most of the pitfalls outlined in the issues are describing scenarios in which the timer has already fired (placing a result on the channel), and subsequent to that firing some other process attempts to Reset it. For this narrow case, I understand the need to test with !t.Stop() since a false return of Stop would indicate the timer has already fired, and as such must be drained prior to calling Reset.
What I still do not understand, is why it is necessary to call t.Stop() prior to t.Reset(), when the Timer has yet to fire. None of the examples go into that as far as I can tell.
What I still do not understand, is why it is necessary to call t.Stop() prior to t.Reset(), when the Timer has yet to fire.
The "when the Timer has yet to fire" bit is critical here. The timer fires within a separate go routine (part of the runtime) and this can happen at any time. You have no way of knowing whether the timer has fired at the time you call to.Reset(3200 * time.Millisecond) (it may even fire while that function is running!).
Here is an example that demonstrates this and is somewhat similar to what you are attempting (based on this):
func main() {
    eventC := make(chan struct{}, 1)
    go keepaliveLoop(eventC)
    // Reset the timer 1000 (approx) times, once every millisecond (approx).
    // This should prevent the timer from firing (because that only happens after 2 ms)
    for i := 0; i < 1000; i++ {
        time.Sleep(time.Millisecond)
        // Don't block if there is already a reset request
        select {
        case eventC <- struct{}{}:
        default:
        }
    }
}
func keepaliveLoop(eventC chan struct{}) {
    to := time.NewTimer(2 * time.Millisecond)
    for {
        select {
        case <-eventC:
            // if event.Msg == "heartbeat"...
            time.Sleep(3 * time.Millisecond) // Simulate reset work (the delay could be partly due to whatever is triggering the reset)
            to.Reset(2 * time.Millisecond)
        case <-to.C:
            panic("this should never happen")
        }
    }
}
Try it in the playground.
This may appear contrived due to the time.Sleep(3 * time.Millisecond), but that is just included to demonstrate the issue consistently. Your code may work 99.9% of the time, but there is always the possibility that both the event and timer channels will fire before the select is run (in which case a random case will run) or while the code in the case event, ok := <-c: block is running (including while Reset() is in progress). The result of this happening would be unexpected runs of the remediate code (which may not be a big issue).
Fortunately solving the issue is relatively easy (following the advice in the documentation):
time.Sleep(3 * time.Millisecond) // Simulate reset work (the delay could be partly due to whatever is triggering the reset)
if !to.Stop() {
    <-to.C
}
to.Reset(2 * time.Millisecond)
Try this in the playground.
This works because to.Stop returns "true if the call stops the timer, false if the timer has already expired or been stopped". Note that things get a bit more complicated if the timer is used in multiple goroutines ("This cannot be done concurrent to other receives from the Timer's channel or other calls to the Timer's Stop method"), but this is not the case in your use-case.
Is my use of timer.Reset() unsafe, or are the cases mentioned in the Github issue highlighting other problems which are not applicable here?
Yes, it is unsafe. However, the impact is fairly low. The event arriving and the timer triggering would need to happen almost concurrently and, in that case, running the remediate code might not be a big issue. Note that the fix is fairly simple (as per the docs).
If it is unsafe, is my second proposed solution acceptable (rebinding the timer on each iteration).
Your second proposed solution also works (but note that the garbage collector cannot free the timer until after it has fired, or been stopped, which may cause issues if you are creating timers rapidly).
Note: Re the suggestion from #JotaSantos
Another thing that could be done is to add a select when draining <-to.C (on the Stop "if") with a default clause. That would prevent the pause.
See this comment for details of why this may not be a good approach (it's also unnecessary in your situation).
I've faced a similar issue. After reading a lot of information, I came up with a solution that goes along these lines:
package main

import (
    "fmt"
    "time"
)

func main() {
    const timeout = 2 * time.Second

    // Prepare a timer that is stopped and ready to be reset.
    // Stop here always returns true, because the timer cannot have fired
    // this soon after its creation. Thus there's no need to drain timer.C.
    timer := time.NewTimer(timeout)
    timer.Stop()

    // Make sure to stop the timer when we return.
    defer timer.Stop()

    // This variable is needed because we need to track if we can safely reset the timer
    // in a loop. Calling timer.Stop() will return false on every iteration, but we can only
    // drain the timer.C once, otherwise it will deadlock.
    var timerSet bool

    c := make(chan time.Time)

    // Simulate events that come in every second,
    // where every 5th event is delayed so that the timer can fire.
    go func() {
        var i int
        ticker := time.NewTicker(1 * time.Second)
        defer ticker.Stop()
        for t := range ticker.C {
            i++
            if i%5 == 0 {
                fmt.Println("Sleeping")
                time.Sleep(3 * time.Second)
            }
            c <- t
            if i == 20 {
                break
            }
        }
        close(c)
    }()

    for {
        select {
        case t, ok := <-c:
            if !ok {
                fmt.Println("Closed channel")
                return
            }
            fmt.Println("Got event", t, timerSet)
            // We got an event while the timer was already set.
            // We need to stop the timer and drain the channel if needed,
            // so that we can safely reset it later.
            if timerSet {
                if !timer.Stop() {
                    <-timer.C
                }
                timerSet = false
            }
            // If the timer was not set, or was stopped above, it's safe to reset it.
            if !timerSet {
                timerSet = true
                timer.Reset(timeout)
            }
        case remediate := <-timer.C:
            fmt.Println("Timeout", remediate)
            // It's important to record that the timer is no longer set.
            timerSet = false
        }
    }
}
Link to playground: https://play.golang.org/p/0QlujZngEGg
In my program I have several goroutines that are essentially running endless processes. Why, you may ask; long story short, it is the purpose of my entire application, so changing that is out of the question. I would like to give users the ability to stop a single goroutine. I understand that I can use a channel to signal goroutines to stop, however there may be cases where I have, say, 10 goroutines running and I only want to stop one. The issue is that the number of goroutines I want to run is dynamic and based on user input. What is the best way to add the ability to stop a goroutine dynamically, allowing single ones to be stopped without the rest?
You need to design a map to manage the contexts.
Assuming you already know the usage of context, it might look like:
ctx, cancel := context.WithCancel(context.TODO())
go func(ctx context.Context) {
    for {
        select {
        case <-ctx.Done():
            return
        default:
            // job
        }
    }
}(ctx)

cancel()
OK, now you can convert your question into another one, which might be called "how to manage the contexts of many goroutines":
type GoroutineManager struct {
    m sync.Map
}

func (g *GoroutineManager) Add(cancel context.CancelFunc, key string) {
    g.m.Store(key, cancel)
}

func (g *GoroutineManager) KillGoroutine(key string) {
    // sync.Map.Load returns an interface{}, so the value must be
    // asserted back to a context.CancelFunc before calling it
    if cancel, exist := g.m.Load(key); exist {
        cancel.(context.CancelFunc)()
    }
}
OK, now you can manage your goroutines like:
ctx, cancel := context.WithCancel(context.TODO())
manager.Add(cancel, "routine-job-1")
go func(ctx context.Context) {
    for {
        select {
        case <-ctx.Done():
            return
        default:
            // job
        }
    }
}(ctx)

// kill it as you wish
manager.KillGoroutine("routine-job-1")
I can't figure out how to cancel a task, in the same thread of execution, via context semantics, if it takes too much time to compute.
I use this example as a reference point
https://golang.org/src/context/context_test.go
The goal here: call doWork; if doWork takes too much time to compute, GetValueWithDeadline should return 0 after a timeout, or return early if the caller called cancel (here main is the caller) to cancel the wait, or otherwise return the value within the given time window.
The same scenario can be done in a different way (a separate goroutine that sleeps, wakes up, checks the value, a condition on a mutex, and so on), but I really want to understand the correct way to use context.
I understand the channel semantics, but here I can't achieve the desired effect: the call to doWork falls under the default case and sleeps.
package main

import (
    "context"
    "fmt"
    "log"
    "math/rand"
    "sync"
    "time"
)

type Server struct {
    lock sync.Mutex
}

func NewServer() *Server {
    s := new(Server)
    return s
}

func (s *Server) doWork() int {
    s.lock.Lock()
    defer s.lock.Unlock()
    r := rand.Intn(100)
    log.Printf("Going to nap for %d", r)
    time.Sleep(time.Duration(r) * time.Millisecond)
    return r
}

// I took this example from here, and it's very unclear where doWork is executed
// https://golang.org/src/context/context_test.go
func (s *Server) GetValueWithDeadline(ctx context.Context) int {
    val := 0
    select {
    case <-time.After(150 * time.Millisecond):
        fmt.Println("overslept")
        return 0
    case <-ctx.Done():
        fmt.Println(ctx.Err())
        return 0
    default:
        val = s.doWork()
    }
    return val
}

func main() {
    rand.Seed(time.Now().UTC().UnixNano())
    s := NewServer()
    for i := 0; i < 10; i++ {
        d := time.Now().Add(50 * time.Millisecond)
        ctx, cancel := context.WithDeadline(context.Background(), d)
        log.Print(s.GetValueWithDeadline(ctx))
        cancel()
    }
}
Thank you
There are multiple problems with your approach.
What problem contexts solve
First, the primary reason contexts were invented in Go is that they unify an approach to cancelling a set of tasks.
To explain this concept using a simple example, consider a client request to some server; to simplify further, let it be an HTTP request.
The client connects to the server, sends some data telling the server what to do to fulfill the request and then waits for the server to respond.
Let's now suppose the request requires elaborate and time-consuming processing on the server: for instance, suppose it needs to perform multiple complex queries against multiple remote database engines, make multiple HTTP requests to external services, and then process the acquired results to actually produce the data the client wants.
So the client starts its request and the server goes on with all those requests.
To hide latency of individual tasks the server has to perform to fulfill the request, it runs them in separate goroutines.
Once each goroutine completes the assigned task, it communicates its result (and/or an error) back to the goroutine which handles the client's request, and so on.
Now suppose that the client fails to wait for the response to its request for whatever reason: a network outage, an explicit timeout in the client's software, the user killing the app which initiated the request, and so on; there are lots of possibilities.
As you can see, there's little sense for the server to continue spending resources to finish the tasks which were logically bound to the now-dead request: there's no one to hear back the result anyway.
So it makes sense to reap those tasks once we know the request is not going to be completed, and that's where contexts come into play: you can associate each incoming request with a single context and then either pass that context itself to any goroutine spawned to carry out one of the tasks required to fulfill the request, or derive a new context from it and pass that instead.
Then, as soon as you cancel the "root" context, that signal is propagated through the whole tree of contexts derived from the root one.
Now each goroutine which was given a context might "listen" on it to be notified when that cancellation signal is sent, and once the goroutine notices it, it can drop whatever it was busy doing and exit.
In terms of the actual context.Context type, that signal is called "done", as in "we're done doing whatever this context is associated with", and that's why a goroutine which wants to know it should stop doing its work listens on a special channel returned by the context's method called Done.
Back to your example
To make it work, you'd do something like:
func (s *Server) doWork(ctx context.Context) int {
    s.lock.Lock()
    defer s.lock.Unlock()
    r := rand.Intn(100)
    log.Printf("Going to nap for %d", r)
    select {
    case <-time.After(time.Duration(r) * time.Millisecond):
        return r
    case <-ctx.Done():
        return -1
    }
}

func (s *Server) GetValueWithTimeout(ctx context.Context, maxTime time.Duration) int {
    d := time.Now().Add(maxTime)
    ctx, cancel := context.WithDeadline(ctx, d)
    defer cancel()
    return s.doWork(ctx)
}

func main() {
    const maxTime = 50 * time.Millisecond
    rand.Seed(time.Now().UTC().UnixNano())
    s := NewServer()
    for i := 0; i < 10; i++ {
        v := s.GetValueWithTimeout(context.Background(), maxTime)
        log.Print(v)
    }
}
(Playground).
So what happens here?
The GetValueWithTimeout method accepts the maximum time the doWork method may take to produce a value, calculates the deadline, derives from the context passed to the method a new context which cancels itself once the deadline passes, and calls doWork with the new context object.
The doWork method arms its own timer to go off after a random time interval and then listens on both the context and the timer.
This is the critical point: code which performs some unit of work that is supposed to be cancellable must actively check, by itself, whether the context has become "done".
So, in our toy example, either doWork's own timer fires first or the deadline of the derived context is reached first; whichever happens first makes the select statement unblock and proceed.
Note that if your "do the work" code were more involved (actually doing something instead of sleeping), you would most probably need to check the context's status periodically, usually after performing individual bits of that work.
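For instance, a minimal sketch of that periodic check, where processChunk and the chunk count are made-up stand-ins for the real work:

// processChunk stands in for one unit of the actual work.
func processChunk(i int) {}

// doChunkedWork checks the context between individual bits of work.
func doChunkedWork(ctx context.Context, chunks int) error {
    for i := 0; i < chunks; i++ {
        // ctx.Err() becomes non-nil once the context has been cancelled
        // or its deadline has passed.
        if err := ctx.Err(); err != nil {
            return err // abandon the remaining chunks
        }
        processChunk(i)
    }
    return nil
}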
I've been working with examples trying to get my first goroutine running, and while I got it running, it won't work as prescribed by the Go documentation with the timer.Reset() function.
In my case I believe that the way I am doing it is just fine, because I don't actually care what's in the chan buffer, if anything. All this is meant to do is trigger case <-tmr.C: if anything happened on case _, ok := <-watcher.Events: and things have then gone quiet for at least one second. The reason for this is that case _, ok := <-watcher.Events: can get from one to dozens of events microseconds apart, and I only care once they are all done and things have settled down again.
However, I'm concerned that doing it the way the documentation says you "must" doesn't work. If I knew Go better, I would say the documentation is flawed, because it assumes there is something in the buffer when there may not be; but I don't know Go well enough to have confidence in making that determination, so I'm hoping some experts out there can enlighten me.
Below is the code. I haven't put this up on playground because I would have to do some cleaning up (remove calls to other parts of the program) and I'm not sure how I would make it react to filesystem changes for showing it working.
I've clearly marked in the code which alternative works and which doesn't.
func (pm *PluginManager) LoadAndWatchPlugins() error {
    // DOING OTHER STUFF HERE
    fmt.Println(`m1`)
    done := make(chan interface{})
    terminated := make(chan interface{})
    go pm.watchDir(done, terminated, nil)
    fmt.Println(`m2.pre-10`)
    time.Sleep(10 * time.Second)
    fmt.Println(`m3-post-10`)
    go pm.cancelWatchDir(done)
    fmt.Println(`m4`)
    <-terminated
    fmt.Println(`m5`)
    os.Exit(0) // Temporary for testing
    return Err
}

func (pm *PluginManager) cancelWatchDir(done chan interface{}) {
    fmt.Println(`t1`)
    time.Sleep(5 * time.Second)
    fmt.Println()
    fmt.Println(`t2`)
    close(done)
}

func (pm *PluginManager) watchDir(done <-chan interface{}, terminated chan interface{}, strings <-chan string) {
    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        Logger("watchDir::"+err.Error(), `plugins`, Error)
    }
    //err = watcher.Add(pm.pluginDir)
    err = watcher.Add(`/srv/plugins/`)
    if err != nil {
        Logger("watchDir::"+err.Error(), `plugins`, Error)
    }
    var tmr = time.NewTimer(time.Second)
    tmr.Stop()
    defer close(terminated)
    defer watcher.Close()
    defer tmr.Stop()
    for {
        select {
        case <-tmr.C:
            fmt.Println(`UPDATE FIRED`)
            tmr.Stop()
        case _, ok := <-watcher.Events:
            if !ok {
                return
            }
            fmt.Println(`Ticker: STOP`)
            /*
             * START OF ALTERNATIVES
             *
             * THIS IS BY EXAMPLE AND STATED THAT IT "MUST BE" AT:
             * https://golang.org/pkg/time/#Timer.Reset
             *
             * BUT DOESN'T WORK
             */
            if !tmr.Stop() {
                fmt.Println(`Ticker: CHAN DRAIN`)
                <-tmr.C // STOPS HERE AND GOES NO FURTHER
            }
            /*
             * BUT IF I JUST DO THIS IT WORKS
             */
            tmr.Stop()
            /*
             * END OF ALTERNATIVES
             */
            fmt.Println(`Ticker: RESET`)
            tmr.Reset(time.Second)
        case <-done:
            fmt.Println(`DONE TRIGGERED`)
            return
        }
    }
}
Besides what icza said (q.v.), note that the documentation says:
For example, assuming the program has not received from t.C already:
if !t.Stop() {
    <-t.C
}
This cannot be done concurrent to other receives from the Timer's channel.
One could argue that this is not a great example since it assumes that the timer was running at the time you called t.Stop. But it does go on to mention that this is a bad idea if there's already some existing goroutine that is or may be reading from t.C.
(The Reset documentation repeats all of this, and kind of in the wrong order because Reset sorts before Stop.)
Essentially, the whole area is a bit fraught. There's no good general answer, because there are at least three possible situations during the return from t.Stop back to your call:
No one is listening to the channel, and no timer-tick is in the channel now. This is often the case if the timer was already stopped before the call to t.Stop. If the timer was already stopped, t.Stop always returns false.
No one is listening to the channel, and a timer-tick is in the channel now. This is always the case when the timer was running but t.Stop was unable to stop it from firing. In this case, t.Stop returns false. It's also the case when the timer was running but fired before you even called t.Stop, and had therefore stopped on its own, so that t.Stop was not able to stop it and returned false.
Someone else is listening to the channel.
In the last situation, you should do nothing. In the first situation, you should do nothing. In the second situation, you probably want to receive from the channel so as to clear it out. That's what their example is for.
One could argue that:
if !t.Stop() {
    select {
    case <-t.C:
    default:
    }
}
is a better example. It does one non-blocking attempt that will consume the timer-tick if present, and does nothing if there is no timer-tick. This works whether or not the timer was not actually running when you called t.Stop. Indeed, it even works if t.Stop returns true, though in that case, t.Stop stopped the timer, so the timer never managed to put a timer-tick into the channel. (Thus, if there is a datum in the channel, it must necessarily be left over from a previous failure to clear the channel. If there are no such bugs, the attempt to receive was in turn unnecessary.)
But, if someone else—some other goroutine—is or may be reading the channel, you should not do any of this at all. There is no way to know who (you or them) will get any timer tick that might be in the channel despite the call to Stop.
Meanwhile, if you're not going to use the timer any further, it's relatively harmless just to leave a timer-tick, if there is one, in the channel. It will be garbage-collected when the channel itself is garbage-collected. Of course, whether this is sensible depends on what you are doing with the timer, but in these cases it suffices to just call t.Stop and ignore its return value.
You create a timer and you stop it immediately:
var tmr = time.NewTimer(time.Second)
tmr.Stop()
This doesn't make any sense; I assume this is just an "accident" on your part.
But going further, inside your loop:
case _, ok := <-watcher.Events:
When this happens, you claim this doesn't work:
if !tmr.Stop() {
    fmt.Println(`Ticker: CHAN DRAIN`)
    <-tmr.C // STOPS HERE AND GOES NO FURTHER
}
Timer.Stop() documents that it returns true if the call stops the timer, and false if the timer has already been stopped (or has expired). But your timer was already stopped, right after its creation, so tmr.Stop() properly returns false, and you go inside the if and try to receive from tmr.C; but since the timer was stopped long ago, nothing will ever be sent on its channel, so this is a blocking (forever) operation.
If you're the one who stopped the timer explicitly with Timer.Stop(), the recommended "pattern" to drain its channel doesn't make any sense and doesn't work for the second Timer.Stop() call.
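If you still want to guard against the race where the timer fires concurrently with an event, a non-blocking drain (the same pattern suggested in an earlier answer) works here, because this goroutine is the only receiver of tmr.C. A minimal sketch for the watcher.Events branch:

if !tmr.Stop() {
    select {
    case <-tmr.C: // a tick was pending; drain it
    default: // the timer was already stopped and drained; nothing to do
    }
}
fmt.Println(`Ticker: RESET`)
tmr.Reset(time.Second)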