Is it possible in Go to defer a goroutine, or is there another way to achieve the desired behaviour? Some background: I am pooling connections to a database in a channel. Basically, in a handler I call
session, err := getSessionFromQueue()
// ...
// serving content to my client
// ...
go queueSession(session)
What I really would like to do is:
session, err := getSessionFromQueue()
defer go queueSession(session)
// ...
// serving content to my client
// ...
so that if my handler hangs or crashes at some point, the session is still properly returned to the queue. The reason I want to run it as a goroutine is that queueSession can block for up to 1 second (if the queue is full, I wait for one second before I close the session completely).
Update
abhink got me on the right track there. I solved the problem by moving the blocking work into a goroutine inside queueSession.
func queueSession(mongoServer *Server) {
go func(mongoServer *Server) {
select {
case mongoQueue <- mongoServer:
// mongoServer stored in queue, done.
case <- time.After(1 * time.Second):
// cannot queue for whatever reason after 1 second
// abort
mongoServer.Close()
}
}(mongoServer)
}
Now I can simply call
defer queueSession(session)
and it is run as a goroutine.
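For completeness, here is roughly how the handler looks now. This is only a sketch: the handler signature and the error handling are assumptions, not the exact code.
func handler(w http.ResponseWriter, r *http.Request) {
    session, err := getSessionFromQueue()
    if err != nil {
        http.Error(w, "no database session available", http.StatusServiceUnavailable)
        return
    }
    // Runs when the handler returns, even on an early return or a panic;
    // queueSession itself spawns the goroutine that may block for a second.
    defer queueSession(session)

    // ...
    // serving content to my client
    // ...
}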
There is no way to directly defer a goroutine. You can try something like this:
session, err := getSessionFromQueue()
defer func() {
go queueSession(session)
}()
I am not sure whether all of my spawned goroutines die after doing their assigned work.
I always have to make two HTTP calls, but based on a flag, I read the response from only one of them.
What I have done so far is:
var result error
resultChannel := make(chan error)
var wg sync.WaitGroup
wg.Add(1) // only adding 1, as I don't need to wait for the other call to complete.
go func() {
_, err := // HTTP call ONE
if flagIsTrue {
defer wg.Done()
resultChannel <- err
}
}()
go func() {
_, err := // HTTP call TWO
if !flagIsTrue {
defer wg.Done()
resultChannel <- err
}
}()
go func() {
wg.Wait()
close(resultChannel)
}()
for err := range resultChannel {
result = err
}
Hence, I wait for the corresponding call and listen to its response only. This is working well, but since the app is deployed on a server, where I guess the main goroutine won't die (and therefore won't kill the other goroutines), my main concern is whether the other, ignorable goroutine will die after it gets the response from its HTTP call (afaik, we need to tell Go that a goroutine needs to die).
My concerns:
The assumption (true, according to me) that the main goroutine does not terminate after serving one of these calls.
Will the ignorable goroutine (whose response is ignored, but which is necessary to trigger the API call) die or not?
Should I use a select statement to handle this? If yes, then how (other suggestions are welcome)?
If flagIsTrue is set before creating the goroutines, then only one of the goroutines will be able to write to the channel. The other one will not attempt to write to the channel, and will thus terminate as soon as its HTTP call returns.
You could simply move the check for the flag outside, and create one goroutine based on the flag.
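For example, something along these lines. This is only a sketch of that idea: httpCallOne and httpCallTwo are placeholders for the two HTTP calls, and the channel is buffered so the selected goroutine never blocks on the send.
resultChannel := make(chan error, 1) // buffered, so the send below never blocks

// Decide up front which call we listen to; both calls are still made.
watched, ignored := httpCallOne, httpCallTwo
if !flagIsTrue {
    watched, ignored = httpCallTwo, httpCallOne
}

go func() {
    _, err := watched()
    resultChannel <- err // the only goroutine that writes
}()
go func() {
    _, _ = ignored() // fire and forget; this goroutine exits as soon as the call returns
}()

result := <-resultChannel // wait only for the selected call
fmt.Println("selected call returned:", result)
Because the channel is buffered and only one goroutine writes to it, neither goroutine can block on a send, so both terminate as soon as their HTTP call returns.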
I am trying to create an intermediate layer between the user and TCP, with Send and Receive functions. Currently, I am trying to integrate a context, so that Send and Receive respect it. However, I don't know how to make them respect the context's cancellation.
So far, I have the following.
// c.underlying is a net.Conn
func (c *tcpConn) Receive(ctx context.Context) ([]byte, error) {
if deadline, ok := ctx.Deadline(); ok {
// Set the read deadline on the underlying connection according to the
// given context. This read deadline applies to the whole function, so
// we only set it once here. On the next read-call, it will be set
// again, or will be reset in the else block, to not keep an old
// deadline.
c.underlying.SetReadDeadline(deadline)
} else {
c.underlying.SetReadDeadline(time.Time{}) // remove the read deadline
}
// perform reads with
// c.underlying.Read(myBuffer)
return frameData, nil
}
However, as far as I understand that code, this only respects a context.WithTimeout or context.WithDeadline, and not a context.WithCancel.
If possible, I would like to pass that into the connection somehow, or actually abort the reading process.
How can I do that?
Note: If possible, I would like to avoid a helper function that reads in another goroutine and pushes a result back on a channel, because then, when cancel is called while I am reading 2 GB over the network, the read is not actually canceled and the resources are still in use. If there is no other way, however, I would like to know whether there is a better approach than a function with two channels, one for the []byte result and one for an error.
EDIT:
With the following code, I can respect a cancel, but it doesn't abort the read.
// apply deadline ...
result := make(chan interface{})
defer close(result)
go c.receiveAsync(result)
select {
case res := <-result:
if err, ok := res.(error); ok {
return nil, err
}
return res.([]byte), nil
case <-ctx.Done():
return nil, ErrTimeout
}
}
func (c *tcpConn) receiveAsync(result chan interface{}) {
// perform the reads and push either an error or the
// read bytes to the result channel
}
If the connection can be closed on cancellation, you can set up a goroutine within the Receive method that shuts down the connection when the context is canceled. If the connection must be reused again later, then there is no way to cancel a Read in progress.
recvDone := make(chan struct{})
defer close(recvDone)
// setup the cancellation to abort reads in process
go func() {
select {
case <-ctx.Done():
c.underlying.CloseRead()
// Close() can be used if this isn't necessarily a TCP connection
case <-recvDone:
}
}()
It will be a little more work if you want to communicate the cancellation error back, but CloseRead provides a clean way to stop any pending TCP Read calls.
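If you do want the caller to see the context error rather than the raw read error, a rough sketch is to check ctx.Err() after a failed Read inside Receive. This fragment continues the Receive method above; buf is an assumed read buffer, not from the original code.
    n, err := c.underlying.Read(buf)
    if err != nil {
        // If the goroutine above closed the read side because the context
        // was canceled, report the context's error instead of the
        // underlying "use of closed connection" error.
        if ctxErr := ctx.Err(); ctxErr != nil {
            return nil, ctxErr
        }
        return nil, err
    }
    // ... process the n bytes read into buf ...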
I have a Go RPC server that serves client requests. A client requests work (or task) from the server and the server assigns a task to the client. The server expects workers (or clients) to finish any task within a time limit. Therefore a timeout event callback mechanism is required on the server-side.
Here is what I tried so far.
func (l *Listener) RequestHandler(request string, reply string) error {
// some other work
// ....
_timer := time.NewTimer(time.Second * 5) // timer for 5 seconds
go func() {
// simulates a client not replying, with a timeout of 5 seconds
<-_timer.C
fmt.Println("TimeOut for client")
// revert state changes because the client failed
}()
// set reply
// update some states
return nil
}
In the above snippet, for each request from a worker (or client), the server-side handler starts a timer and a goroutine. When the timer expires, the goroutine reverts the changes the handler made before sending its reply to the client.
Is there any way of creating a "set of timers" and doing a blocking wait on that set? Then, whenever a timer expires, the blocking wait would wake up and hand us the expired timer's handle. Depending on the timer type, we could run a different expiry handler function at runtime.
I am trying to implement in Go a mechanism similar to what can be done in C++ with timerfd and epoll.
Full code for the sample implementation of timers in Go is in server.go and client.go.
I suggest you explore the context package.
It can be done like this:
func main() {
c := context.Background()
wg := &sync.WaitGroup{}
f(c, wg)
wg.Wait()
}
func f(c context.Context, wg *sync.WaitGroup) {
c, _ = context.WithTimeout(c, 3*time.Second)
wg.Add(1)
go func(c context.Context) {
defer wg.Done()
select {
case <-c.Done():
fmt.Println("f() Done:", c.Err())
return
case r := <-time.After(5 * time.Second):
fmt.Println("f():", r)
}
}(c)
}
Basically, you initiate a base context and then derive other contexts from it. When a context is terminated, either because its deadline passes or because its cancel function is called, it closes its Done channel, along with the Done channels of all contexts derived from it.
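To see that propagation in isolation, here is a small sketch (not part of the example above): canceling the parent also closes the Done channel of a context derived from it.
func main() {
    parent, cancelParent := context.WithCancel(context.Background())
    child, cancelChild := context.WithTimeout(parent, time.Hour)
    defer cancelChild()

    cancelParent() // canceling the parent...

    <-child.Done()           // ...also closes the child's Done channel
    fmt.Println(child.Err()) // prints "context canceled"
}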
I'm implementing a feature where I need to read files from a directory, then parse and export them to a REST service at a regular interval. As part of the same feature, I would like to gracefully handle program termination (SIGKILL, SIGQUIT, etc.).
Towards that end, I would like to know how to implement context-based cancellation of the process.
For executing the flow in regular interval I'm using gocron.
cmd/scheduler.go
func scheduleTask(){
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
s := gocron.NewScheduler()
s.Every(10).Minutes().Do(processTask, ctx)
s.RunAll() // run immediate
<-s.Start() // schedule
for {
select {
case <-(ctx).Done():
fmt.Print("context done")
s.Remove(processTask)
s.Clear()
cancel()
default:
}
}
}
func processTask(ctx *context.Context){
task.Export(ctx)
}
task/export.go
func Export(ctx *context.Context){
pendingFiles, err := filepath.Glob("/tmp/pending/" + "*_task.json")
//error handling
//as there can be 100s of files, I would like to break the loop when context.Done() to return asap & clean up the resources here as well
for _, fileName := range pendingFiles {
exportItem(fileName)
}
}
func exportItem(fileName string){
data, err := ReadFile(fileName) //not shown here for brevity
//err handling
err = postHTTPData(string(data)) //not shown for brevity
//err handling
}
For process management, I think the other component is the actual handling of signals, and managing the context in response to those signals.
I'm not sure of the specifics of go-cron (they have an example showing some of these concepts on their github) but in general I think that the steps involved are:
Registration of os signals handler
Waiting to receive a signal
Canceling top level context in response to a signal
Example:
sigCh := make(chan os.Signal, 1)
defer close(sigCh)
signal.Notify(sigCh, syscall.SIGTERM, syscall.SIGQUIT, syscall.SIGINT)
<-sigCh
cancel()
I'm not sure how this will look in the context of go-cron, but the context that the signal handling code cancels should be a parent of the context that the task and job are given.
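I'm not certain of the exact gocron wiring either, but roughly, the pieces could fit together as in the sketch below. Note that the context is passed by value (context.Context, not *context.Context), and Export checks ctx.Done() before each file, as the question asks for; treat the scheduler details as assumptions.
func scheduleTask() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    // Cancel the top-level context when a termination signal arrives.
    // (SIGKILL cannot be caught, so it is not listed here.)
    sigCh := make(chan os.Signal, 1)
    signal.Notify(sigCh, syscall.SIGTERM, syscall.SIGQUIT, syscall.SIGINT)

    s := gocron.NewScheduler()
    s.Every(10).Minutes().Do(processTask, ctx)
    s.RunAll() // run immediately
    s.Start()  // schedule in the background

    <-sigCh   // wait for a termination signal
    cancel()  // tell a running Export to stop
    s.Clear() // stop scheduling further runs
}

func processTask(ctx context.Context) {
    task.Export(ctx)
}

// In task/export.go: stop processing pending files as soon as the context is done.
func Export(ctx context.Context) {
    pendingFiles, err := filepath.Glob("/tmp/pending/" + "*_task.json")
    if err != nil {
        return
    }
    for _, fileName := range pendingFiles {
        select {
        case <-ctx.Done():
            return // clean up and stop exporting
        default:
            exportItem(fileName)
        }
    }
}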
I worked this out myself just now. I've always felt the blog post on contexts was a lot of material to try to understand, so a simpler demonstration would be nice.
There are many scenarios you may encounter. Each one is different and will require adaptation. Here's one example:
Say you have a channel that could run for an indeterminate amount of time.
indeterminateChannel := make(chan string)
for s := range indeterminateChannel{
fmt.Println(s)
}
Your producer might look something like:
for {
indeterminateChannel <- "Terry"
}
We don't control the producer, so we need some way to cut out of your print loop if the producer exceeds your time limit.
indeterminateChannel := make(chan string)
// Close the channel when our code exits so OUR for loop no longer occupies
// resources and the goroutine exits.
// The producer will have a problem, but we don't care about their problems.
// In this instance.
defer close(indeterminateChannel)
// we wait for this context to time out below the goroutine.
ctx, cancel := context.WithTimeout(context.TODO(), time.Minute*1)
defer cancel()
go func() {
for s := range indeterminateChannel{
fmt.Println(s)
}
}()
<-ctx.Done() // wait for the context to terminate based on a timeout.
You can also check ctx.Err() to see whether the context exited due to a timeout or because it was canceled.
You might also want to learn about how to properly check if the context failed due to a deadline: How to check if an error is "deadline exceeded" error?
Or if the context was canceled: How to check if a request was cancelled
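For reference, a short sketch of that check using errors.Is from the standard library:
<-ctx.Done()
switch {
case errors.Is(ctx.Err(), context.DeadlineExceeded):
    fmt.Println("context timed out")
case errors.Is(ctx.Err(), context.Canceled):
    fmt.Println("context was canceled")
}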
It doesn't seem possible to have two-way communication via channels with a goroutine that is performing file operations, unless you block the channel communication on the file operations. How can I work around the limits this imposes?
Another way to phrase this question...
If I have a loop similar to the following running in a goroutine, how can I tell it to close the connection and exit without blocking on the next Read?
func readLines(response *http.Response, outgoing chan string) error {
defer response.Body.Close()
reader := bufio.NewReader(response.Body)
for {
line, err := reader.ReadString('\n')
if err != nil {
return err
}
outgoing <- line
}
}
It's not possible for it to read from a channel that tells it when to close down because it's blocking on the network reads (in my case, that can take hours).
It doesn't appear to be safe to simply call Close() from outside the goroutine, since the Read/Close methods don't appear to be fully thread safe.
I could simply put a lock around the references to response.Body that are used inside and outside the goroutine, but that would cause the external code to block until a pending read completes, and I specifically want to be able to interrupt an in-progress read.
To address this scenario, several io.ReadCloser implementations in the standard library support concurrent calls to Read and Close where Close interrupts an active Read.
The response body reader created by net/http Transport is one of those implementations. It is safe to concurrently call Read and Close on the response body.
You can also interrupt an active Read on the response body by calling the Transport CancelRequest method.
Here's how to implement cancellation by closing the body:
func readLines(response *http.Response, outgoing chan string, done chan struct{}) error {
    cancel := make(chan struct{})
    go func() {
        select {
        case <-done:
            response.Body.Close()
        case <-cancel:
            return
        }
    }()
    defer response.Body.Close()
    defer close(cancel) // ensure that the goroutine above exits
    reader := bufio.NewReader(response.Body)
    for {
        line, err := reader.ReadString('\n')
        if err != nil {
            return err
        }
        outgoing <- line
    }
}
Calling close(done) from another goroutine will cancel reads on the body.
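For completeness, a rough sketch of a caller, assuming the readLines signature above (the URL and the ten-second cutoff are placeholders):
resp, err := http.Get("https://example.com/stream") // placeholder URL
if err != nil {
    log.Fatal(err)
}

outgoing := make(chan string)
done := make(chan struct{})

go func() {
    _ = readLines(resp, outgoing, done) // returns once the body is closed or errors out
    close(outgoing)                     // lets the consuming loop below finish
}()

// Cancel the in-progress reads after ten seconds by closing done.
go func() {
    time.Sleep(10 * time.Second)
    close(done)
}()

for line := range outgoing {
    fmt.Print(line)
}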