I'm trying to write a program in Go that is similar to cron, with the addition that jobs are given a max runtime, and if a function exceeds this duration, the job should exit. Here is my whole code:
package main
import (
"fmt"
"log"
"sync"
"time"
)
type Job struct {
ID string
MaxRuntime time.Duration
Frequency time.Duration
Function func()
}
func testFunc() {
log.Println("OPP11")
time.Sleep(7 * time.Second)
log.Println("OP222")
}
func New(ID, frequency, runtime string, implementation func()) Job {
r, err := time.ParseDuration(runtime)
if err != nil {
panic(err)
}
f, err := time.ParseDuration(frequency)
if err != nil {
panic(err)
}
j := Job{ID: ID, MaxRuntime: r, Frequency: f, Function: implementation}
log.Printf("Created job %#v with frequency %v and max runtime %v", ID, f, r)
return j
}
func (j Job) Run() {
for range time.Tick(j.Frequency) {
start := time.Now()
log.Printf("Job %#v executing...", j.ID)
done := make(chan int)
//quit := make(chan int)
//var wg sync.WaitGroup
//wg.Add(1)
go func() {
j.Function()
done <- 0
}()
select {
case <-done:
elapsed := time.Since(start)
log.Printf("Job %#v completed in %v \n", j.ID, elapsed)
case <-time.After(j.MaxRuntime):
log.Printf("Job %#v halted after %v", j.ID, j.MaxRuntime)
// here should exit the above goroutine
}
}
}
func main() {
// create a new job given its name, frequency, max runtime
// and the function it should run
testJob := New("my-first-job", "3s", "5s", func() {
testFunc()
})
testJob.Run()
}
What I'm trying to do is, in the second case of the select in the Run() function, exit the goroutine which is running the function. I tried to do this by wrapping the function in a for loop with a select statement that listens on a quit channel, like this:
go func() {
for {
select {
case <-quit:
fmt.Println("quiting goroutine")
return
default:
j.Function()
done <- 0
}
}
}()
And then having quit <- 1 in the Run() function, but that doesn't seem to be doing anything. Is there a better way of doing this?
As explained in the comments, the whole problem is that you want to cancel the execution of a function (j.Function) that isn't cancellable.
There's no way to "kill a goroutine". Goroutines work in a cooperative fashion. If you want to be able to "kill it", you need to ensure that the function running in that Goroutine has a mechanism for you to signal that it should stop what it's doing and return, letting the Goroutine that was running it finally terminate.
The standard way of indicating that a function is cancellable is by having it take a context.Context as its first param:
type Job struct {
// ...
Function func(context.Context)
}
Then you create the context and pass it to the j.Function. Since your cancellation logic is simply based on a timeout, there's no need to write all that select ... case <-time.After(...), as that is provided as built-in functionality with a context.Context:
func (j Job) Run() {
for range time.Tick(j.Frequency) {
go j.ExecuteOnce()
}
}
func (j Job) ExecuteOnce() {
log.Printf("Job %#v executing...", j.ID)
ctx, cancel := context.WithTimeout(context.Background(), j.MaxRuntime)
defer cancel()
j.Function(ctx)
}
Now, to finish, you have to rewrite the functions that you're going to be passing to your job scheduler so that they take context.Context and, very importantly, that they use it properly and cancel whatever they're doing when the context is cancelled.
This means that if you're writing the code for those funcs and they will somehow block, you'll be responsible for writing stuff like:
select {
case <-ctx.Done():
return ctx.Err()
case ...your blocking case...:
}
If your funcs are invoking 3rd party code, then that code needs to be aware of context and cancellation, and you'll need to pass down the ctx your funcs receive.
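For example, the testFunc from the question could be made cancellable like this; a minimal sketch, replacing the uninterruptible time.Sleep with a select on the context:
func testFunc(ctx context.Context) {
    log.Println("OPP11")
    select {
    case <-time.After(7 * time.Second):
        // The "work" finished before the deadline.
        log.Println("OP222")
    case <-ctx.Done():
        // MaxRuntime exceeded: stop early instead of leaking the goroutine.
        log.Println("halted:", ctx.Err())
    }
}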
I have written an API that makes DB calls and does some business logic. I am invoking a goroutine that must perform some operation in the background.
Since the API call should not wait for this background task to finish, I am returning 200 OK immediately after calling the goroutine (let us assume the background task will never give any error.)
I read that a goroutine is terminated once it has completed its task.
Is this fire and forget way safe from a goroutine leak?
Are goroutines terminated and cleaned up once they perform the job?
func DefaultHandler(w http.ResponseWriter, r *http.Request) {
// Some DB calls
// Some business logics
go func() {
// some Task taking 5 sec
}()
w.WriteHeader(http.StatusOK)
}
I would recommend always having your goroutines under control to avoid memory and system exhaustion.
If you are receiving a spike of requests and you start spawning goroutines without control, the system will probably go down sooner or later.
In those cases where you need to return an immediate 200 OK, the best approach is to use a message queue: the server only needs to put a job in the queue, return the OK, and forget about it. The rest is handled asynchronously by a consumer.
Producer (HTTP server) >>> Queue >>> Consumer
Normally, the queue is an external resource (RabbitMQ, AWS SQS...) but for teaching purposes, you can achieve the same effect using a channel as a message queue.
In the example you'll see how we create a channel to communicate between two processes.
Then we start the worker process that will read from the channel, and then the server with a handler that will write to it.
Try to play with the buffer size and job time while sending curl requests.
package main
import (
"fmt"
"log"
"net/http"
"time"
)
/*
$ go run .
curl "http://localhost:8080?user_id=1"
curl "http://localhost:8080?user_id=2"
curl "http://localhost:8080?user_id=3"
curl "http://localhost:8080?user_id=....."
*/
func main() {
queueSize := 10
// This is our queue, a channel to communicate processes. Queue size is the number of items that can be stored in the channel
myJobQueue := make(chan string, queueSize) // Search for 'buffered channels'
// Starts a worker that will read continuously from our queue
go myBackgroundWorker(myJobQueue)
// We start our server with a handler that is receiving the queue to write to it
if err := http.ListenAndServe("localhost:8080", myAsyncHandler(myJobQueue)); err != nil {
panic(err)
}
}
func myAsyncHandler(myJobQueue chan<- string) http.HandlerFunc {
return func(rw http.ResponseWriter, r *http.Request) {
// We check that in the query string we have a 'user_id' query param
if userID := r.URL.Query().Get("user_id"); userID != "" {
select {
case myJobQueue <- userID: // We try to put the item into the queue ...
rw.WriteHeader(http.StatusOK)
rw.Write([]byte(fmt.Sprintf("queuing user process: %s", userID)))
default: // If we cannot write to the queue, it's because it is full!
rw.WriteHeader(http.StatusInternalServerError)
rw.Write([]byte(`our internal queue is full, try it later`))
}
return
}
rw.WriteHeader(http.StatusBadRequest)
rw.Write([]byte(`missing 'user_id' in query params`))
}
}
func myBackgroundWorker(myJobQueue <-chan string) {
const (
jobDuration = 10 * time.Second // simulation of a heavy background process
)
// We continuously read from our queue and process jobs one by one.
// In this loop we could spawn more goroutines in a controlled way to parallelize work and increase the read throughput, but I don't want to overcomplicate the example.
for userID := range myJobQueue {
// rate limiter here ...
// go func(u string){
log.Printf("processing user: %s, started", userID)
time.Sleep(jobDuration)
log.Printf("processing user: %s, finisehd", userID)
// }(userID)
}
}
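If you later need more throughput, the controlled parallelism hinted at in the comment above could look roughly like this (startWorkerPool and the worker count are my own names, not part of the original example):
// Sketch: start a fixed number of workers, all reading from the same queue,
// so concurrency stays bounded no matter how many requests arrive.
func startWorkerPool(myJobQueue <-chan string, workers int) {
    for i := 0; i < workers; i++ {
        go myBackgroundWorker(myJobQueue)
    }
}
In main you would then call startWorkerPool(myJobQueue, 4) instead of go myBackgroundWorker(myJobQueue).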
There is no "goroutine cleaning" you have to handle, you just launch goroutines and they'll be cleaned when the function launched as a goroutine returns. Quoting from Spec: Go statements:
When the function terminates, its goroutine also terminates. If the function has any return values, they are discarded when the function completes.
So what you do is fine. Note however that your launched goroutine cannot use or assume anything about the request (r) and response writer (w); you may only use them before you return from the handler.
Also note that you don't have to write http.StatusOK: if you return from the handler without writing anything, that's assumed to be a success and HTTP 200 OK will be sent back automatically.
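For example, copy whatever the background task needs out of the request before the handler returns; a rough sketch (the user_id extraction is just an illustration, not part of the original handler):
func DefaultHandler(w http.ResponseWriter, r *http.Request) {
    // Illustration only: copy the data the background task needs *before* returning;
    // the goroutine must not touch r or w after the handler returns.
    userID := r.URL.Query().Get("user_id")

    go func(id string) {
        // some task taking 5 sec, using only the copied value
        time.Sleep(5 * time.Second)
        log.Printf("background task finished for user %s", id)
    }(userID)
    // Returning without writing anything sends HTTP 200 OK automatically.
}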
See related / possible duplicate: Webhook process run on another goroutine
#icza is absolutely right: there is no "goroutine cleaning". You can use a webhook or a background job library like gocraft. The only way I can think of using your solution is to use the sync package, for learning purposes.
func DefaultHandler(w http.ResponseWriter, r *http.Request) {
// Some DB calls
// Some business logics
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
// some Task taking 5 sec
}()
w.WriteHeader(http.StatusOK)
wg.Wait()
}
You can wait for a goroutine to finish using a sync.WaitGroup:
// BusyTask
func BusyTask(t interface{}) error {
var wg = &sync.WaitGroup{}
wg.Add(1)
go func() {
// busy doing stuff
time.Sleep(5 * time.Second)
wg.Done()
}()
wg.Wait() // wait for goroutine
return nil
}
// this will wait 5 seconds until the goroutine finishes
func main() {
fmt.Println("hello")
BusyTask("some task...")
fmt.Println("done")
}
Another way is to attach a context.Context to the goroutine and time it out.
//
func BusyTaskContext(ctx context.Context, t string) error {
done := make(chan struct{}, 1)
//
go func() {
// time sleep 5 second
time.Sleep(5 * time.Second)
// do tasks and signal done
done <- struct{}{}
close(done)
}()
//
select {
case <-ctx.Done():
return errors.New("timeout")
case <-done:
return nil
}
}
//
func main() {
fmt.Println("hello")
ctx, cancel := context.WithTimeout(context.TODO(), 2*time.Second)
defer cancel()
if err := BusyTaskContext(ctx, "some task..."); err != nil {
fmt.Println(err)
return
}
fmt.Println("done")
}
I have the following code in Go using the semaphore library just as an example:
package main
import (
"fmt"
"context"
"time"
"golang.org/x/sync/semaphore"
)
// This protects the lockedVar variable
var lock *semaphore.Weighted
// Only one go routine should be able to access this at once
var lockedVar string
func acquireLock() {
err := lock.Acquire(context.TODO(), 1)
if err != nil {
panic(err)
}
}
func releaseLock() {
lock.Release(1)
}
func useLockedVar() {
acquireLock()
fmt.Printf("lockedVar used: %s\n", lockedVar)
releaseLock()
}
func causeDeadLock() {
acquireLock()
// calling this from a function that's already
// locked the lockedVar should cause a deadlock.
useLockedVar()
releaseLock()
}
func main() {
lock = semaphore.NewWeighted(1)
lockedVar = "this is the locked var"
// this is only on a separate goroutine so that the standard
// go "deadlock" message doesn't print out.
go causeDeadLock()
// Keep the primary goroutine active.
for true {
time.Sleep(time.Second)
}
}
Is there a way to get the acquireLock() function call to print a message after a timeout indicating that there is a potential deadlock but without unblocking the call? I would want the deadlock to persist, but a log message to be written in the event that a timeout is reached. So a TryAcquire isn't exactly what I want.
An example of what I want in pseudocode:
afterFiveSeconds := func() {
fmt.Printf("there is a potential deadlock\n")
}
lock.Acquire(context.TODO(), 1, afterFiveSeconds)
The lock.Acquire call in this example would call the afterFiveSeconds callback if the Acquire call blocked for more than 5 seconds, but it would not unblock the caller. It would continue to block.
I think I've found a solution to my problem.
func acquireLock() {
timeoutChan := make(chan bool)
go func() {
select {
case <-time.After(time.Second * time.Duration(5)):
fmt.Printf("potential deadlock while acquiring semaphore\n")
case <-timeoutChan:
break
}
}()
err := lock.Acquire(context.TODO(), 1)
close(timeoutChan)
if err != nil {
panic(err)
}
}
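The same idea can be factored into a small helper so that any blocking call can be watched; a sketch (warnIfSlow is my own name, not part of the original code):
// warnIfSlow runs f to completion and prints msg if f is still blocked after d.
// It only reports; it never unblocks f early.
func warnIfSlow(d time.Duration, msg string, f func()) {
    done := make(chan struct{})
    go func() {
        select {
        case <-time.After(d):
            fmt.Println(msg)
        case <-done:
        }
    }()
    f()
    close(done)
}

func acquireLock() {
    warnIfSlow(5*time.Second, "potential deadlock while acquiring semaphore", func() {
        if err := lock.Acquire(context.TODO(), 1); err != nil {
            panic(err)
        }
    })
}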
I am attempting to create a poller in Go that spins up and every 24 hours executes a function.
I also want to be able to stop the polling; I'm attempting to do this by having a done channel and passing an empty struct down it to stop the for loop.
In my tests, the loop just runs forever and I can't seem to stop it. Am I using the done channel incorrectly? The ticker case works as expected.
type Poller struct {
HandlerFunc HandlerFunc
interval *time.Ticker
done chan struct{}
}
func (p *Poller) Start() error {
for {
select {
case <-p.interval.C:
err := p.HandlerFunc()
if err != nil {
return err
}
case <-p.done:
return nil
}
}
}
func (p *Poller) Stop() {
p.done <- struct{}{}
}
Here is the test that's executing the code and causing the infinite loop.
poller := poller.NewPoller(
testHandlerFunc,
time.NewTicker(1*time.Millisecond),
)
err := poller.Start()
assert.Error(t, err)
poller.Stop()
It seems like the problem is in your use case: you're calling poller.Start() in a blocking manner, so poller.Stop() is never called. It's common in Go projects to start a goroutine inside Start/Run methods, so in poller.Start() I would do something like this:
func (p *Poller) Start() <-chan error {
errc := make(chan error, 1)
go func() {
defer close(errc)
for {
select {
case <-p.interval.C:
err := p.HandlerFunc()
if err != nil {
errc <- err
return
}
case <-p.done:
return
}
}
}()
return errc
}
Also, there's no need to send an empty struct to the done channel; closing the channel with close(p.done) is more idiomatic in Go.
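With that change, Stop can be as simple as this sketch:
func (p *Poller) Stop() {
    // Closing the channel signals every receiver and never blocks.
    close(p.done)
}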
There is no explicit way in Go to broadcast an event such as cancellation to goroutines. Instead, it's idiomatic to create a channel whose closing signals that any pending work should be cancelled. Something like this is a viable pattern:
var done = make(chan struct{})
func cancelled() bool {
select {
case <-done:
return true
default:
return false
}
}
Goroutines can call cancelled to poll for cancellation.
Your main loop can then respond to such an event, but make sure you drain any channels that might cause goroutines to block.
for {
select {
case <-done:
// Drain whatever channels you need to.
for range someChannel { }
return
//.. Other cases
}
}
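A goroutine doing chunked work might poll it between units like this (a sketch; doUnitOfWork is a hypothetical stand-in for the real job):
func worker(units []string) {
    for _, u := range units {
        if cancelled() {
            return // stop early once done has been closed
        }
        doUnitOfWork(u) // hypothetical: whatever one unit of the real job is
    }
}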
I am trying to pause and resume a goroutine. I understand I can sleep the run, but what I am looking for is something like a "pause/resume" button rather than a timer.
Here is my attempt. I am using the blocking behaviour of channels to pause, and a select to switch what to execute based on the channel value. However, the output is always Running in my case.
func main() {
ctx := wctx{}
go func(ctx wctx) {
for {
time.Sleep(1 * time.Second)
select {
case <-ctx.pause:
fmt.Print("Paused")
<-ctx.pause
case <-ctx.resume:
fmt.Print("Resumed")
default:
fmt.Print("Running \n")
}
}
}(ctx)
ctx.pause <- struct{}{}
ctx.resume <- struct{}{}
}
type wctx struct {
pause chan struct{}
resume chan struct{}
}
A select with multiple ready cases chooses one pseudo-randomly. So if the goroutine is "slow" to check those channels and you send a value on both pause and resume (assuming they are buffered), receiving from both channels could be ready; resume could be chosen first, and pause only in a later iteration, when the goroutine should no longer be paused.
For this you should use a "state" variable synchronized by a mutex. Something like this:
const (
StateRunning = iota
StatePaused
)
type wctx struct {
mu sync.Mutex
state int
}
func (w *wctx) SetState(state int) {
w.mu.Lock()
defer w.mu.Unlock()
w.state = state
}
func (w *wctx) State() int {
w.mu.Lock()
defer w.mu.Unlock()
return w.state
}
Testing it:
ctx := &wctx{}
go func(ctx *wctx) {
for {
time.Sleep(1 * time.Millisecond)
switch state := ctx.State(); state {
case StatePaused:
fmt.Println("Paused")
default:
fmt.Println("Running")
}
}
}(ctx)
time.Sleep(3 * time.Millisecond)
ctx.SetState(StatePaused)
time.Sleep(3 * time.Millisecond)
ctx.SetState(StateRunning)
time.Sleep(2 * time.Millisecond)
Output (try it on the Go Playground):
Running
Running
Running
Paused
Paused
Paused
Running
Running
You need to initialize your channels; remember that reads from nil channels always block.
A select with a default case never blocks.
Here is a modified version of your program, that fixes the above mentioned issues:
package main
import (
"fmt"
"time"
)
func main() {
ctx := wctx{
pause: make(chan struct{}),
resume: make(chan struct{}),
}
go func(ctx wctx) {
for {
select {
case <-ctx.pause:
fmt.Println("Paused")
case <-ctx.resume:
fmt.Println("Resumed")
}
fmt.Println("Running")
time.Sleep(time.Second)
}
}(ctx)
ctx.pause <- struct{}{}
ctx.resume <- struct{}{}
}
type wctx struct {
pause chan struct{}
resume chan struct{}
}
I have the following problem:
I have a function that should only allow one caller to execute it at a time.
If someone calls the function while it is already busy, the second caller should immediately return with an error.
I tried the following:
1. Use a mutex
This would be pretty easy, but the problem is that you cannot check whether a mutex is locked; you can only block on it. Therefore it does not work.
2. Wait on a channel
var canExec = make(chan bool, 1)
func init() {
canExec <- true
}
func onlyOne() error {
select {
case <-canExec:
default:
return errors.New("already busy")
}
defer func() {
fmt.Println("done")
canExec <- true
}()
// do stuff
return nil
}
What I don't like here:
it looks really messy
it's easy to mistakenly block on the channel or mistakenly write to the channel
3. Mixture of mutex and shared state
var open = true
var myMutex = &sync.Mutex{}
func canExec() bool {
myMutex.Lock()
defer myMutex.Unlock()
if open {
open = false
return true
}
return false
}
func endExec() {
myMutex.Lock()
defer myMutex.Unlock()
open = true
}
func onlyOne() error {
if !canExec() {
return errors.New("busy")
}
defer endExec()
// do stuff
return nil
}
I don't like this either. Using a shared variable with a mutex is not that nice.
Any other idea?
I'll throw my preference out there - use the atomic package.
var (
locker uint32
errLocked = errors.New("Locked out buddy")
)
func OneAtATime(d time.Duration) error {
if !atomic.CompareAndSwapUint32(&locker, 0, 1) { // <-----------------------------
return errLocked // All logic in these |
} // four lines |
defer atomic.StoreUint32(&locker, 0) // <-----------------------------
// logic here, but we will sleep
time.Sleep(d)
return nil
}
The idea is pretty simple. Set the initial value to 0 (0 value of uint32). The first thing you do in the function is check if the value of locker is currently 0 and if so it changes it to 1. It does all of this atomically. If it fails simply return an error (or however else you like to handle a locked state). If successful, you immediately defer replacing the value (now 1) with 0. You don't have to use defer obviously, but failing to set the value back to 0 before returning would leave you in a state where the function could no longer be run.
After you do those 4 lines of setup, you do whatever you would normally.
https://play.golang.org/p/riryVJM4Qf
You can make things a little nicer if desired by using named values for your states.
const (
stateUnlocked uint32 = iota
stateLocked
)
var (
locker = stateUnlocked
errLocked = errors.New("Locked out buddy")
)
func OneAtATime(d time.Duration) error {
if !atomic.CompareAndSwapUint32(&locker, stateUnlocked, stateLocked) {
return errLocked
}
defer atomic.StoreUint32(&locker, stateUnlocked)
// logic here, but we will sleep
time.Sleep(d)
return nil
}
You can use a semaphore for this (go get golang.org/x/sync/semaphore)
package main
import (
"errors"
"fmt"
"sync"
"time"
"golang.org/x/sync/semaphore"
)
var sem = semaphore.NewWeighted(1)
func main() {
var wg sync.WaitGroup
for i := 0; i < 10; i++ {
wg.Add(1)
go func() {
defer wg.Done()
if err := onlyOne(); err != nil {
fmt.Println(err)
}
}()
time.Sleep(time.Second)
}
wg.Wait()
}
func onlyOne() error {
if !sem.TryAcquire(1) {
return errors.New("busy")
}
defer sem.Release(1)
fmt.Println("working")
time.Sleep(5 * time.Second)
return nil
}
You could use the standard channel approach with a select statement.
var (
ch = make(chan bool)
)
func main() {
i := 0
wg := sync.WaitGroup{}
for i < 100 {
i++
wg.Add(1)
go func() {
defer wg.Done()
err := onlyOne()
if err != nil {
fmt.Println("Error: ", err)
} else {
fmt.Println("Ok")
}
}()
go func() {
ch <- true
}()
}
wg.Wait()
}
func onlyOne() error {
select {
case <-ch:
// do stuff
return nil
default:
return errors.New("Busy")
}
}
Do you want a function to be executed exactly once, or only once at a time? In the former case take a look at https://golang.org/pkg/sync/#Once.
If you want a once-at-a-time solution:
package main
import (
"fmt"
"sync"
"time"
)
// OnceAtATime protects function from being executed simultaneously.
// Example:
// func myFunc() { time.Sleep(10*time.Second) }
// func main() {
// once := OnceAtATime{}
// once.Do(myFunc)
// once.Do(myFunc) // not executed
// }
type OnceAtATime struct {
m sync.Mutex
executed bool
}
func (o *OnceAtATime) Do(f func()) {
o.m.Lock()
if o.executed {
o.m.Unlock()
return
}
o.executed = true
o.m.Unlock()
f()
o.m.Lock()
o.executed = false
o.m.Unlock()
}
// Proof of concept
func f(m int, done chan<- struct{}) {
for i := 0; i < 10; i++ {
fmt.Printf("%d: %d\n", m, i)
time.Sleep(250 * time.Millisecond)
}
close(done)
}
func main() {
done := make(chan struct{})
once := OnceAtATime{}
go once.Do(func() { f(1, done) })
go once.Do(func() { f(2, done) })
<-done
done = make(chan struct{})
go once.Do(func() { f(3, done) })
<-done
}
https://play.golang.org/p/nZcEcWAgKp
But the problem is that you cannot check whether a mutex is locked; you can only block on it. Therefore it does not work.
With possible Go 1.18 (Q1 2022), you will be able to test if a mutex is locked... without blocking on it.
See (as mentioned by Go 101) issue 45435 from Tye McQueen:
sync: add Mutex.TryLock
This is followed by CL 319769, with the caveat:
Use of these functions is almost (but not) always a bad idea.
Very rarely they are necessary, and third-party implementations (using a mutex and an atomic word, say) cannot integrate as well with the race detector as implementations in package sync itself.
The objections (since retracted) were:
Locks are for protecting invariants.
If the lock is held by someone else, there is nothing you can say about the invariant.
TryLock encourages imprecise thinking about locks; it encourages making assumptions about the invariants that may or may not be true.
That ends up being its own source of races.
Thinking more about this, there is one important benefit to building TryLock into Mutex, compared to a wrapper:
failed TryLock calls wouldn't create spurious happens-before edges to confuse the race detector.
And:
A channel-based implementation is possible, but performs poorly in comparison.
There's a reason we have sync.Mutex rather than just using channel for locking.
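With that addition, the onlyOne function from the question can be written directly against sync.Mutex; a sketch assuming Go 1.18+:
var mu sync.Mutex

func onlyOne() error {
    if !mu.TryLock() {
        return errors.New("already busy")
    }
    defer mu.Unlock()
    // do stuff
    return nil
}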
I came up with the following generic solution for that.
It works for me; do you see any problem with it?
import (
"sync"
)
const (
ONLYONECALLER_LOCK = "onlyonecaller"
ANOTHER_LOCK = "another"
)
var locks = map[string]bool{}
var mutex = &sync.Mutex{}
func Lock(lock string) bool {
mutex.Lock()
defer mutex.Unlock()
locked, ok := locks[lock]
if !ok {
locks[lock] = true
return true
}
if locked {
return false
}
locks[lock] = true
return true
}
func IsLocked(lock string) bool {
mutex.Lock()
defer mutex.Unlock()
locked, ok := locks[lock]
if !ok {
return false
}
return locked
}
func Unlock(lock string) {
mutex.Lock()
defer mutex.Unlock()
locked, ok := locks[lock]
if !ok {
return
}
if !locked {
return
}
locks[lock] = false
}
see: https://play.golang.org/p/vUUsHcT3L-
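For reference, the onlyOne function from the question would use it roughly like this (a sketch):
func onlyOne() error {
    if !Lock(ONLYONECALLER_LOCK) {
        return errors.New("busy")
    }
    defer Unlock(ONLYONECALLER_LOCK)
    // do stuff
    return nil
}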
How about this package: https://github.com/viney-shih/go-lock. It uses channels and a semaphore (golang.org/x/sync/semaphore) to solve your problem.
go-lock implements TryLock, TryLockWithTimeout and TryLockWithContext functions in addition to Lock and Unlock. It provides flexibility to control the resources.
Examples:
package main
import (
"fmt"
"time"
"context"
lock "github.com/viney-shih/go-lock"
)
func main() {
casMut := lock.NewCASMutex()
casMut.Lock()
defer casMut.Unlock()
// TryLock without blocking
fmt.Println("Return", casMut.TryLock()) // Return false
// TryLockWithTimeout without blocking
fmt.Println("Return", casMut.TryLockWithTimeout(50*time.Millisecond)) // Return false
// TryLockWithContext without blocking
ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
defer cancel()
fmt.Println("Return", casMut.TryLockWithContext(ctx)) // Return false
// Output:
// Return false
// Return false
// Return false
}
Let's keep it simple:
package main

import (
    "errors"
    "fmt"
    "time"

    "golang.org/x/sync/semaphore"
)

var sem = semaphore.NewWeighted(1)

func doSomething() error {
    if !sem.TryAcquire(1) {
        return errors.New("I'm busy")
    }
    defer sem.Release(1)
    fmt.Println("I'm doing my work right now, then I'll take a nap")
    time.Sleep(10 * time.Second)
    return nil
}

func main() {
    go doSomething()
    // Give the background goroutine time to finish before main exits.
    time.Sleep(11 * time.Second)
}