I'm currently implementing the Golang client for TypeDB and am struggling with their session-based heartbeat convention. Usually, you implement heartbeat per client, so that's relatively easy: just run a goroutine in the background and send a heartbeat every few seconds.
TypeDB, however, chose to implement heartbeat (they call it pulse) on a per-session basis, which means every time a new session gets created, I have to start monitoring that session with a separate goroutine. Conversely, if the client closes a session, I have to stop the monitoring. What's particularly ugly: I also have to check for stalled sessions every once in a while. There is a GH issue to switch over to per-client heartbeat, but no ETA, so I have to make session heartbeat work to prevent server-side session termination.
So far, my solution:
Create a new session
Open that session & check for error
If no error, add session to a hashmap keyed by session ID
This seems to work for now. The code, just for context, is here:
https://github.com/marvin-hansen/typedb-client-go/blob/main/src/client/v2/manager_session.go
For monitoring each session, I am mulling over two issues:
Channel close across multiple goroutines is a bit tricky and may lead to race conditions.
I would need some kind of error group to catch heartbeat failures, e.g. in case the server shuts down or a network link fails.
With all that in mind, I believe a context.WithCancel might be a safe & sane solution.
What I came up so far is this:
Pass the global context as a parameter to the heartbeat function
Create a new context.WithCancel for each session calling heartbeat
Run heartbeat in a goroutine until either cancel gets called (by stopMonitoring) or an error occurs
What's not so clear to me is: how do I track all the cancel functions returned from each tracked session, to ensure I am closing the right goroutine matching the session to close?
Thank you for any hint to solve this.
The code:
func (s SessionManager) startMonitorSession(sessionID []byte) {
    // How do I track each goroutine per session?
}

func (s SessionManager) stopMonitorSession(sessionID []byte) {
    // How do I call the correct cancel function to stop the goroutine matching the session?
}

func (s SessionManager) runHeartbeat(ctx context.Context, sessionID []byte) context.CancelFunc {
    // Create a new context, with its cancellation function, from the original context
    ctx, cancel := context.WithCancel(ctx)
    go func() {
        select {
        case <-ctx.Done():
            fmt.Println("Stopped monitoring session: ")
        default:
            err := s.sendPulseRequest(sessionID)
            // If this operation returns an error,
            // cancel all operations using the local context created above
            if err != nil {
                cancel()
            }
            fmt.Println("done")
        }
    }()
    // return the cancel function for the call site to close at a later stage
    return cancel
}
func (s SessionManager) sendPulseRequest(sessionID []byte) error {
    mtd := "sendPulse: "
    req := requests.GetSessionPulseReq(sessionID)
    res, pulseErr := s.client.client.SessionPulse(s.client.ctx, req)
    if pulseErr != nil {
        dbgPrint(mtd, "Heartbeat error. Close session")
        return pulseErr
    }
    if !res.Alive {
        dbgPrint(mtd, "Server not alive anymore. Close session")
        closeErr := s.CloseSession(sessionID)
        if closeErr != nil {
            return closeErr
        }
    }
    // no error
    return nil
}
Update:
Thanks to the comment(s), I managed to solve the bulk of the issue by wrapping the session & CancelFunc in a dedicated struct called TypeDBSession.
That way, the stop function simply pulls the CancelFunc from the struct, calls it, and stops the monitoring goroutine. With some more tweaking, tests seem to pass, although this is not concurrency-safe for the time being.
That being said, this was a non-trivial issue to solve, but thanks to the comments it's done.
If anyone is open to suggesting code improvements, especially w.r.t. making this concurrency-safe, feel free to comment here or file a GH issue / PR.
SessionType:
https://github.com/marvin-hansen/typedb-client-go/blob/main/src/client/v2/manager_session_type.go
SessionMonitoring:
https://github.com/marvin-hansen/typedb-client-go/blob/main/src/client/v2/manager_session_monitor.go
Tests:
https://github.com/marvin-hansen/typedb-client-go/tree/main/test/client/session
My two cents:
You may need to run the heartbeat repeatedly. Use a for loop with a time.Ticker around the select.
Store a map of session ID -> func() to track all cancellable contexts, as in the sketch below. Since the session ID is a []byte, you should convert it to a string to use it as a map key.
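Putting those two suggestions together, here is a minimal sketch of what the monitoring could look like (imports: context, fmt, sync, time). It assumes the sendPulseRequest method from the question; the struct fields, the 2-second interval, and the method names are placeholders:
type SessionManager struct {
    mu      sync.Mutex
    cancels map[string]context.CancelFunc // keyed by string(sessionID)
    // ... client and other fields from the question elided
}

func (s *SessionManager) startMonitorSession(ctx context.Context, sessionID []byte) {
    ctx, cancel := context.WithCancel(ctx)
    s.mu.Lock()
    s.cancels[string(sessionID)] = cancel
    s.mu.Unlock()

    go func() {
        ticker := time.NewTicker(2 * time.Second) // pulse interval is a guess
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done():
                fmt.Println("stopped monitoring session")
                return
            case <-ticker.C:
                if err := s.sendPulseRequest(sessionID); err != nil {
                    s.stopMonitorSession(sessionID) // clean up on heartbeat failure
                    return
                }
            }
        }
    }()
}

func (s *SessionManager) stopMonitorSession(sessionID []byte) {
    s.mu.Lock()
    defer s.mu.Unlock()
    if cancel, ok := s.cancels[string(sessionID)]; ok {
        cancel()
        delete(s.cancels, string(sessionID))
    }
}
The mutex makes the map access concurrency-safe; since []byte cannot be a map key, the session ID is converted to a string.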
Related
Can't figure out how I can cancel a task if it takes too much time to compute, in the same thread of execution, via context semantics?
I used this example as a reference point:
https://golang.org/src/context/context_test.go
The goal here is to call doWork; if doWork takes too much time to compute, GetValueWithDeadline should return 0 after a timeout, or return 0 if the caller called cancel (here main is the caller), or otherwise return the value produced within the given time window.
The same scenario can be done in a different way (a separate goroutine that sleeps, wakes up, and checks the value; a condition on a mutex; etc.), but I really want to understand the correct way to use context.
I understand the channel semantics, but here I can't achieve the desired effect: the call to doWork falls under the default case and sleeps.
package main

import (
    "context"
    "fmt"
    "log"
    "math/rand"
    "sync"
    "time"
)

type Server struct {
    lock sync.Mutex
}

func NewServer() *Server {
    s := new(Server)
    return s
}

func (s *Server) doWork() int {
    s.lock.Lock()
    defer s.lock.Unlock()
    r := rand.Intn(100)
    log.Printf("Going to nap for %d", r)
    time.Sleep(time.Duration(r) * time.Millisecond)
    return r
}

// I took this example from here, and it is very unclear where doWork is executed:
// https://golang.org/src/context/context_test.go
func (s *Server) GetValueWithDeadline(ctx context.Context) int {
    val := 0
    select {
    case <-time.After(150 * time.Millisecond):
        fmt.Println("overslept")
        return 0
    case <-ctx.Done():
        fmt.Println(ctx.Err())
        return 0
    default:
        val = s.doWork()
    }
    return val
}

func main() {
    rand.Seed(time.Now().UTC().UnixNano())
    s := NewServer()
    for i := 0; i < 10; i++ {
        d := time.Now().Add(50 * time.Millisecond)
        ctx, cancel := context.WithDeadline(context.Background(), d)
        log.Print(s.GetValueWithDeadline(ctx))
        cancel()
    }
}
Thank you
There are multiple problems with your approach.
What problem contexts solve
First, the primary reason contexts were invented in Go is that they provide a unified approach to cancellation of a set of tasks.
To explain this concept using a simple example, consider a client request to some server; to simplify further, let it be an HTTP request.
The client connects to the server, sends some data telling the server what to do to fulfill the request and then waits for the server to respond.
Let's now suppose the request requires elaborate and time-consuming processing on the server — for instance, suppose it needs to perform multiple complex queries to multiple remote database engines, do multiple HTTP requests to external services and then process the acquired results to actually produce the data the client wants.
So the client starts its request and the server goes on with all those requests.
To hide latency of individual tasks the server has to perform to fulfill the request, it runs them in separate goroutines.
Once each goroutine completes the assigned task, it communicates its result (and/or an error) back to the goroutine which handles the client's request, and so on.
Now suppose that the client fails to wait for the response to its request for whatever reason: a network outage, an explicit timeout in the client's software, the user kills the app which initiated the request, etc.; there are lots of possibilities.
As you can see, there's little sense for the server to continue spending resources to finish the tasks which were logically bound to the now-dead request: there's no one to hear back the result anyway.
So it makes sense to reap those tasks once we know the request is not going to be completed, and that's where contexts come into play: you can associate each incoming request with a single context and then either pass that context itself to any goroutine spawned to carry out a single task required to fulfill the request, or derive another context from it and pass that instead.
Then, as soon as you cancel the "root" context, the signal is propagated through the whole tree of contexts derived from the root one.
Now each goroutine which was given a context might "listen" on it to be notified when that cancellation signal is sent, and once the goroutine notices that, it might drop whatever it was busy doing and exit.
In terms of the actual context.Context type, that signal is called "done" (as in "we're done doing whatever that context is associated with"), and that's why a goroutine which wants to know it should stop doing its work listens on the special channel returned by the context's method called Done.
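A minimal, self-contained sketch (not taken from the question) of that propagation: cancelling the root context makes every goroutine holding a derived context observe the Done signal.
package main

import (
    "context"
    "fmt"
    "sync"
    "time"
)

func worker(ctx context.Context, id int, wg *sync.WaitGroup) {
    defer wg.Done()
    select {
    case <-ctx.Done():
        fmt.Printf("worker %d: %v\n", id, ctx.Err()) // prints "context canceled"
    case <-time.After(10 * time.Second):
        fmt.Printf("worker %d: finished\n", id)
    }
}

func main() {
    root, cancelRoot := context.WithCancel(context.Background())
    var wg sync.WaitGroup
    for i := 0; i < 3; i++ {
        // each worker gets its own context derived from the root
        child, cancelChild := context.WithCancel(root)
        defer cancelChild() // release the child's resources when main exits
        wg.Add(1)
        go worker(child, i, &wg)
    }
    time.Sleep(100 * time.Millisecond)
    cancelRoot() // cancelling the root propagates down the whole tree
    wg.Wait()
}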
Back to your example
To make it work, you'd do something like:
func (s *Server) doWork(ctx context.Context) int {
    s.lock.Lock()
    defer s.lock.Unlock()
    r := rand.Intn(100)
    log.Printf("Going to nap for %d", r)
    select {
    case <-time.After(time.Duration(r) * time.Millisecond):
        return r
    case <-ctx.Done():
        return -1
    }
}

func (s *Server) GetValueWithTimeout(ctx context.Context, maxTime time.Duration) int {
    d := time.Now().Add(maxTime)
    ctx, cancel := context.WithDeadline(ctx, d)
    defer cancel()
    return s.doWork(ctx)
}

func main() {
    const maxTime = 50 * time.Millisecond
    rand.Seed(time.Now().UTC().UnixNano())
    s := NewServer()
    for i := 0; i < 10; i++ {
        v := s.GetValueWithTimeout(context.Background(), maxTime)
        log.Print(v)
    }
}
(Playground).
So what happens here?
The GetValueWithTimeout method accepts the maximum time it should take the doWork method to produce a value, calculates the deadline, derives from the context passed to the method a new context which cancels itself once the deadline passes, and calls doWork with the new context object.
The doWork method arms its own timer to go off after a random time interval and then listens on both the context and the timer.
This is the critical point: code which performs some unit of work that is supposed to be cancellable must actively check, by itself, whether the context has become "done".
So, in our toy example, either the doWork's own timer fires first or the deadline of the generated context gets reached first; whatever happens first, makes the select statement unblock and proceed.
Note that if your "do the work" code would be more involved (it would actually do something instead of sleeping), you would most probably need to check on the context's status periodically, usually after performing individual bits of that work.
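For example, a CPU-bound loop might poll ctx.Err() between iterations; a hedged sketch (processItem is a hypothetical stand-in for a real unit of work):
// processAll checks for cancellation before each individual work item.
func processAll(ctx context.Context, items []int) error {
    for _, item := range items {
        if err := ctx.Err(); err != nil {
            return err // context was cancelled or its deadline passed
        }
        processItem(item)
    }
    return nil
}

func processItem(item int) { _ = item * item } // stand-in for real work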
The official Go documentation on the datastore package (client library for the GCP datastore service) has the following code snippet for demonstration:
type Entity struct {
    Value string
}

func main() {
    ctx := context.Background()
    // Create a datastore client. In a typical application, you would create
    // a single client which is reused for every datastore operation.
    dsClient, err := datastore.NewClient(ctx, "my-project")
    if err != nil {
        // Handle error.
    }

    k := datastore.NameKey("Entity", "stringID", nil)
    e := new(Entity)
    if err := dsClient.Get(ctx, k, e); err != nil {
        // Handle error.
    }

    old := e.Value
    e.Value = "Hello World!"

    if _, err := dsClient.Put(ctx, k, e); err != nil {
        // Handle error.
    }

    fmt.Printf("Updated value from %q to %q\n", old, e.Value)
}
As one can see, it states that the datastore.Client should ideally be instantiated only once in an application. Now, given that the datastore.NewClient function requires a context.Context object, does it mean that it should get instantiated only once per HTTP request, or can it safely be instantiated once globally with a context.Background() object?
Each operation requires a context.Context object again (e.g. dsClient.Get(ctx, k, e)), so is that the point where the HTTP request's context should be used?
I'm new to Go and can't really find any online resources which explain something like this very well with real world examples and actual best practice patterns.
You may use any context.Context for the datastore client creation; context.Background() is completely fine. Client creation may be lengthy: it may require connecting to a remote server, authenticating, fetching configuration etc. If your use case has limited time, you may pass a context with a timeout to abort the operation. Also, if creation takes longer than the time you have, you may use a context with cancel and abort the mission at your will. These are just options which you may or may not use, but the "tools" are given via context.Context.
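For example, to bound the client creation time (a sketch; the 10-second limit is arbitrary):
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

// creation is aborted if it takes longer than 10 seconds
dsClient, err := datastore.NewClient(ctx, "my-project")
if err != nil {
    // Handle error (possibly context.DeadlineExceeded).
}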
Later, when you use the datastore.Client while serving (HTTP) client requests, using the request's context is reasonable: if a request gets cancelled, then so will its context, and so will the datastore operation you issue, rightfully, because if the client cannot see the result, there's no point completing the query. By terminating the query early you might avoid using certain resources (e.g. datastore reads), and you may lower the server's load (by aborting jobs whose results will not be sent back to the client).
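In an HTTP handler that might look like the following sketch, assuming a package-level dsClient created once at startup and the Entity type from the snippet above:
var dsClient *datastore.Client // created once, e.g. in main(), with context.Background()

func entityHandler(w http.ResponseWriter, r *http.Request) {
    // Use the request's context: if the client goes away,
    // the pending datastore operation is cancelled as well.
    k := datastore.NameKey("Entity", "stringID", nil)
    e := new(Entity)
    if err := dsClient.Get(r.Context(), k, e); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    fmt.Fprintf(w, "value: %q", e.Value)
}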
I've been working through examples trying to get my first goroutine running, and while I got it running, it won't work as prescribed by the Go documentation for the timer.Reset() function.
In my case I believe the way I am doing it is fine, because I don't actually care what's in the chan buffer, if anything. All this is meant to do is trigger case <-tmr.C: if anything happened on case _, ok := <-watcher.Events: and things then went quiet for at least one second. The reason is that case _, ok := <-watcher.Events: can receive from one to dozens of events microseconds apart, and I only care once they are all done and things have settled down again.
However, I'm concerned that doing it the way the documentation says you "must do" doesn't work. If I knew Go better, I would say the documentation is flawed because it assumes there is something in the buffer when there may not be, but I don't know Go well enough to have confidence in making that determination, so I'm hoping some experts out there can enlighten me.
Below is the code. I haven't put this up on the playground because I would have to do some cleaning up (remove calls to other parts of the program), and I'm not sure how I would make it react to filesystem changes to show it working.
I've clearly marked in the code which alternative works and which doesn't.
func (pm *PluginManager) LoadAndWatchPlugins() error {
    // DOING OTHER STUFF HERE

    fmt.Println(`m1`)
    done := make(chan interface{})
    terminated := make(chan interface{})
    go pm.watchDir(done, terminated, nil)

    fmt.Println(`m2.pre-10`)
    time.Sleep(10 * time.Second)
    fmt.Println(`m3-post-10`)

    go pm.cancelWatchDir(done)
    fmt.Println(`m4`)
    <-terminated
    fmt.Println(`m5`)

    os.Exit(0) // Temporary for testing
    return Err
}

func (pm *PluginManager) cancelWatchDir(done chan interface{}) {
    fmt.Println(`t1`)
    time.Sleep(5 * time.Second)
    fmt.Println()
    fmt.Println(`t2`)
    close(done)
}

func (pm *PluginManager) watchDir(done <-chan interface{}, terminated chan interface{}, strings <-chan string) {
    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        Logger("watchDir::"+err.Error(), `plugins`, Error)
    }

    //err = watcher.Add(pm.pluginDir)
    err = watcher.Add(`/srv/plugins/`)
    if err != nil {
        Logger("watchDir::"+err.Error(), `plugins`, Error)
    }

    var tmr = time.NewTimer(time.Second)
    tmr.Stop()

    defer close(terminated)
    defer watcher.Close()
    defer tmr.Stop()

    for {
        select {
        case <-tmr.C:
            fmt.Println(`UPDATE FIRED`)
            tmr.Stop()
        case _, ok := <-watcher.Events:
            if !ok {
                return
            }
            fmt.Println(`Ticker: STOP`)

            /*
             * START OF ALTERNATIVES
             *
             * THIS IS BY EXAMPLE AND STATED THAT IT "MUST BE" AT:
             * https://golang.org/pkg/time/#Timer.Reset
             *
             * BUT DOESN'T WORK
             */
            if !tmr.Stop() {
                fmt.Println(`Ticker: CHAN DRAIN`)
                <-tmr.C // STOPS HERE AND GOES NO FURTHER
            }

            /*
             * BUT IF I JUST DO THIS IT WORKS
             */
            tmr.Stop()
            /*
             * END OF ALTERNATIVES
             */

            fmt.Println(`Ticker: RESET`)
            tmr.Reset(time.Second)
        case <-done:
            fmt.Println(`DONE TRIGGERED`)
            return
        }
    }
}
Besides what icza said (q.v.), note that the documentation says:
For example, assuming the program has not received from t.C already:
if !t.Stop() {
    <-t.C
}
This cannot be done concurrent to other receives from the Timer's channel.
One could argue that this is not a great example since it assumes that the timer was running at the time you called t.Stop. But it does go on to mention that this is a bad idea if there's already some existing goroutine that is or may be reading from t.C.
(The Reset documentation repeats all of this, and kind of in the wrong order because Reset sorts before Stop.)
Essentially, the whole area is a bit fraught. There's no good general answer, because there are at least three possible situations during the return from t.Stop back to your call:
No one is listening to the channel, and no timer-tick is in the channel now. This is often the case if the timer was already stopped before the call to t.Stop. If the timer was already stopped, t.Stop always returns false.
No one is listening to the channel, and a timer-tick is in the channel now. This is always the case when the timer was running but t.Stop was unable to stop it from firing. In this case, t.Stop returns false. It's also the case when the timer was running but fired before you even called t.Stop, and had therefore stopped on its own, so that t.Stop was not able to stop it and returned false.
Someone else is listening to the channel.
In the last situation, you should do nothing. In the first situation, you should do nothing. In the second situation, you probably want to receive from the channel so as to clear it out. That's what their example is for.
One could argue that:
if !t.Stop() {
    select {
    case <-t.C:
    default:
    }
}
is a better example. It does one non-blocking attempt that will consume the timer-tick if present, and does nothing if there is no timer-tick. This works whether or not the timer was actually running when you called t.Stop. Indeed, it even works if t.Stop returns true, though in that case, t.Stop stopped the timer, so the timer never managed to put a timer-tick into the channel. (Thus, if there is a datum in the channel, it must necessarily be left over from a previous failure to clear the channel. If there are no such bugs, the attempt to receive was in turn unnecessary.)
But, if someone else—some other goroutine—is or may be reading the channel, you should not do any of this at all. There is no way to know who (you or them) will get any timer tick that might be in the channel despite the call to Stop.
Meanwhile, if you're not going to use the timer any further, it's relatively harmless just to leave a timer-tick, if there is one, in the channel. It will be garbage-collected when the channel itself is garbage-collected. Of course, whether this is sensible depends on what you are doing with the timer, but in these cases it suffices to just call t.Stop and ignore its return value.
You create a timer and you stop it immediately:
var tmr = time.NewTimer(time.Second)
tmr.Stop()
This doesn't make any sense; I assume this is just an "accident" on your part.
But going further, inside your loop:
case _, ok := <-watcher.Events:
When this happens, you claim this doesn't work:
if !tmr.Stop() {
    fmt.Println(`Ticker: CHAN DRAIN`)
    <-tmr.C // STOPS HERE AND GOES NO FURTHER
}
Timer.Stop() documents that it returns true if this call stops the timer, and false if the timer has already been stopped (or expired). But your timer was already stopped, right after its creation, so tmr.Stop() returns false properly, so you go inside the if and try to receive from tmr.C, but since the timer was "long" stopped, nothing will be sent on its channel, so this is a blocking (forever) operation.
If you're the one stopping the timer explicitly with timer.Stop(), the recommended "pattern" to drain its channel doesn't make any sense and doesn't work for the 2nd Timer.Stop() call.
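Combining both answers, a sketch of the event loop adapted accordingly: keep the non-blocking drain, which is safe here whether or not the timer is armed when an event arrives (the rest of watchDir stays as in the question):
for {
    select {
    case <-tmr.C:
        fmt.Println(`UPDATE FIRED`)
        // the timer fired; its channel is now empty, no drain needed
    case _, ok := <-watcher.Events:
        if !ok {
            return
        }
        // stop the timer; if it had already fired, consume the stale
        // tick without blocking so Reset starts from a clean state
        if !tmr.Stop() {
            select {
            case <-tmr.C:
            default:
            }
        }
        tmr.Reset(time.Second)
    case <-done:
        fmt.Println(`DONE TRIGGERED`)
        return
    }
}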
I have a server working with websocket connections and a database. Some users can connect by sockets, so I need to increment their "online" count in the db; at the moment of their disconnection I also decrement their "online" field in the db. But in case the server breaks down, I keep a local variable replica map[string]int of users online. So I need to postpone the server shutdown until it completes a database request that decrements all users' "online" counts in accordance with my variable replica, because in that case the socket connection doesn't send the default "close" event.
I have found a package, github.com/xlab/closer, that handles some system calls and can perform some action before the program finishes, but my database request doesn't work this way (code below).
func main() {
    ...
    // trying to handle program finish event
    closer.Bind(cleanupSocketConnections(&pageHandler))
    ...
}

// function that handles program finish event
func cleanupSocketConnections(p *controllers.PageHandler) func() {
    return func() {
        p.PageService.ResetOnlineUsers()
    }
}

// this map[string]int contains key=userId, value=count of socket connections
type PageService struct {
    Users map[string]int
}

func (p *PageService) ResetOnlineUsers() {
    for userId, count := range p.Users {
        // decrease online of every user in program variable
        InfoService{}.DecreaseInfoOnline(userId, count)
    }
}
Maybe I'm using it incorrectly, or maybe there is a better way to intercept the default program termination?
First of all, executing tasks when the server "breaks down", as you said, is quite complicated, because breaking down can mean a lot of things, and nothing can guarantee the execution of clean-up functions when something goes really wrong in your server.
From an engineering point of view (if setting users offline on breakdown is so important), the best option would be a secondary service, on another server, that receives user connection, disconnection, and ping events; if it receives no updates within a set timeout, the service considers your server down and proceeds to set every user offline.
Back to your question, using defer and waiting for termination signals should cover 99% of cases. I commented the code to explain the logic.
// AllUsersOffline is called when the program is terminated. It takes a *sync.Once to make sure this function is performed only
// one time, since it might be called from different goroutines.
func AllUsersOffline(once *sync.Once) {
    once.Do(func() {
        fmt.Print("setting all users offline...")
        // logic to set all users offline
    })
}

// CatchSigs catches termination signals and executes the f function at the end
func CatchSigs(f func()) {
    cSig := make(chan os.Signal, 1)
    // watch for these signals; note that SIGKILL cannot be caught or handled, so it is not listed.
    // These are the termination signals in GNU => https://www.gnu.org/software/libc/manual/html_node/Termination-Signals.html
    signal.Notify(cSig, syscall.SIGTERM, syscall.SIGINT, syscall.SIGQUIT, syscall.SIGHUP)
    // wait for them
    sig := <-cSig
    fmt.Printf("received signal: %s", sig)
    // execute f
    f()
}

func main() {
    /* code */

    // the once is used to make sure AllUsersOffline is performed ONE TIME.
    usersOfflineOnce := &sync.Once{}

    // catch termination signals
    go CatchSigs(func() {
        // when a termination signal is caught, execute the AllUsersOffline function
        AllUsersOffline(usersOfflineOnce)
    })

    // deferred functions are called even in case of panic events, although execution is not to be taken for granted (OOM errors etc.)
    defer AllUsersOffline(usersOfflineOnce)

    /* code */

    // run server
    err := server.Run()
    if err != nil {
        // error logic here
    }

    // bla bla bla
}
I think you need to look at goroutines and channels.
Here is something (maybe) useful:
https://nathanleclaire.com/blog/2014/02/15/how-to-wait-for-all-goroutines-to-finish-executing-before-continuing/
Now I did something like this:
func contextHandler(h http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx, cancel := context.WithCancel(r.Context())
        ctx, cancel = context.WithTimeout(ctx, config.EnvConfig.RequestTimeout)
        defer cancel()

        if cn, ok := w.(http.CloseNotifier); ok {
            go func(done <-chan struct{}, closed <-chan bool) {
                select {
                case <-done:
                case <-closed:
                    logger.Debug("message", "client connection has gone away, request will be cancelled")
                    cancel()
                }
            }(ctx.Done(), cn.CloseNotify())
        }

        h.ServeHTTP(w, r.WithContext(ctx))
    })
}
Please pay attention to these two lines:
ctx, cancel := context.WithCancel(r.Context())
ctx, cancel = context.WithTimeout(ctx, config.EnvConfig.RequestTimeout)
According to my tests (deliberately killing the client request, and deliberately making the request exceed the deadline), both work fine; I mean, I can receive the cancellation signal and the timeout signal as expected. My only concern is that the latter cancel function overrides the previous one returned by context.WithCancel(r.Context()), so:
Is it a proper way to use these two APIs together like this?
Is it even necessary to use these two APIs together?
Please help to explain.
Because the CancelFunc returned from your WithCancel call is being immediately overwritten, this causes a resource (i.e. memory) leak in your program. From the context documentation:
The WithCancel, WithDeadline, and WithTimeout functions take a Context (the parent) and return a derived Context (the child) and a CancelFunc. Calling the CancelFunc cancels the child and its children, removes the parent's reference to the child, and stops any associated timers. Failing to call the CancelFunc leaks the child and its children until the parent is canceled or the timer fires.
Removing the WithCancel context from your code will fix this problem.
Additionally, cancellation of the HTTP request is managed by the HTTP server, as described in the http.Request.Context method documentation:
For incoming server requests, the context is canceled when the client's connection closes, the request is canceled (with HTTP/2), or when the ServeHTTP method returns.
When the server cancels the request context, all child contexts will be cancelled.
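Putting that together, the middleware reduces to something like this sketch (keeping config.EnvConfig.RequestTimeout from the question):
func contextHandler(h http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // The server already cancels r.Context() when the client
        // connection goes away; we only layer the timeout on top.
        ctx, cancel := context.WithTimeout(r.Context(), config.EnvConfig.RequestTimeout)
        defer cancel()
        h.ServeHTTP(w, r.WithContext(ctx))
    })
}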
You can just use WithTimeout() instead of using both APIs, because WithTimeout() returns a context.CancelFunc just as WithCancel() does, which can be called at any time to cancel the target process/goroutine. Of course, the cancellation should happen before hitting the deadline set by WithTimeout().
So,
Is it a proper way to use these two APIs together like this?
Is it even necessary to use these two APIs together?
No, there is no need to use both; use the context.CancelFunc returned by either API in package context.