Run multiple functions concurrently at intervals in Go

I have a list of functions and their respective intervals. I want to run each function at its interval concurrently.
In JavaScript, I wrote something like:
maps.forEach(({fn, interval}) => {
    setInterval(fn, interval)
})
How do I implement this functionality in Golang?

Use a time.Ticker to receive "events" periodically, which you may use to time the execution of a function. You may obtain a time.Ticker by calling time.NewTicker(). The returned ticker has a channel on which values are sent periodically.
Use a goroutine to continuously receive the events and call the function, e.g. with a for range loop.
Let's see 2 functions:
func oneSec() {
    log.Println("oneSec")
}
func twoSec() {
    log.Println("twoSec")
}
Here's a simple scheduler that periodically calls a given function:
func schedule(f func(), interval time.Duration) *time.Ticker {
    ticker := time.NewTicker(interval)
    go func() {
        for range ticker.C {
            f()
        }
    }()
    return ticker
}
Example using it:
func main() {
    t1 := schedule(oneSec, time.Second)
    t2 := schedule(twoSec, 2*time.Second)
    time.Sleep(5 * time.Second)
    t1.Stop()
    t2.Stop()
}
Example output (try it on the Go Playground):
2009/11/10 23:00:01 oneSec
2009/11/10 23:00:02 twoSec
2009/11/10 23:00:02 oneSec
2009/11/10 23:00:03 oneSec
2009/11/10 23:00:04 twoSec
2009/11/10 23:00:04 oneSec
Note that Ticker.Stop() does not close the ticker's channel, so a for range will not terminate; Stop() only stops sending values on the ticker's channel.
If you want to terminate the goroutines used to schedule the function calls, you can do that with an additional channel. Those goroutines can then use a select statement to "monitor" both the ticker's channel and this done channel, and return when receiving from done succeeds.
For example:
func schedule(f func(), interval time.Duration, done <-chan bool) *time.Ticker {
    ticker := time.NewTicker(interval)
    go func() {
        for {
            select {
            case <-ticker.C:
                f()
            case <-done:
                return
            }
        }
    }()
    return ticker
}
And using it:
func main() {
    done := make(chan bool)
    t1 := schedule(oneSec, time.Second, done)
    t2 := schedule(twoSec, 2*time.Second, done)
    time.Sleep(5 * time.Second)
    close(done)
    t1.Stop()
    t2.Stop()
}
Try this one on the Go Playground.
Note that stopping the tickers is not strictly necessary in this simple example, because the program exits when the main goroutine ends. In a real-life application that keeps running, however, leaving the tickers unstopped wastes resources: each one keeps a background goroutine alive and keeps trying to send values on its channel.
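For instance, here is a minimal, self-contained sketch (my addition, not part of the answer above) of the usual pattern of deferring Stop() so a ticker's lifetime is tied to the function that created it:
package main

import (
    "log"
    "time"
)

func main() {
    ticker := time.NewTicker(time.Second)
    defer ticker.Stop() // always release the ticker when main returns

    done := make(chan bool)
    go func() {
        for {
            select {
            case <-ticker.C:
                log.Println("tick")
            case <-done:
                return
            }
        }
    }()

    time.Sleep(3 * time.Second)
    close(done) // lets the scheduling goroutine return; Stop runs via the defer
}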
Last words:
If you have a slice of function-interval pairs, simply use a loop to pass each pair to this schedule() function. Something like this:
type pair struct {
    f        func()
    interval time.Duration
}
pairs := []pair{
    {oneSec, time.Second},
    {twoSec, 2 * time.Second},
}
done := make(chan bool)
ts := make([]*time.Ticker, len(pairs))
for i, p := range pairs {
    ts[i] = schedule(p.f, p.interval, done)
}
time.Sleep(5 * time.Second)
close(done)
for _, t := range ts {
    t.Stop()
}
Try this one on the Go Playground.

Related

Run function every N seconds with context timeout

I have a basic question about scheduling "cancellable" goroutines.
I want to schedule a function execution, every 3 seconds.
The function can take up to 5 seconds.
In case it takes more than 2999ms I want to stop/terminate it, to avoid overlapping w/ the next one.
I'm doing it wrong:
func main() {
    fmt.Println("startProcessing")
    go startProcessing()
    time.Sleep(time.Second * 60)
    fmt.Println("endProcessing after 60s")
}
func startProcessing() {
    ticker := time.NewTicker(3 * time.Second)
    for _ = range ticker.C {
        ctx, _ := context.WithTimeout(context.Background(), (time.Second*3)-time.Millisecond)
        fmt.Println("start doSomething")
        doSomething(ctx)
    }
}
func doSomething(ctx context.Context) {
    executionTime := time.Duration(rand.Intn(5)+1) * time.Second
    for {
        select {
        case <-ctx.Done():
            fmt.Printf("timed out after %s\n", executionTime)
            return
        default:
            time.Sleep(executionTime)
            fmt.Printf("did something in %s\n", executionTime)
            return
        }
    }
}
This is my output now:
startProcessing
start doSomething
did something in 2s
start doSomething
did something in 3s
start doSomething
did something in 3s
start doSomething
did something in 5s
start doSomething
did something in 2s
...
I want to read timed out after 5s instead of did something in 5s.
Move the time.Sleep(executionTime) out of the select and into its own goroutine; the for loop is not needed either. I think this is roughly what you want, but beware that it's not good practice, so take a look at the warning below.
func doSomething(ctx context.Context) {
    executionTime := time.Duration(rand.Intn(5)+1) * time.Second
    processed := make(chan int)
    go func() {
        time.Sleep(executionTime)
        processed <- 1
    }()
    select {
    case <-ctx.Done():
        fmt.Printf("timed out after %s\n", executionTime)
    case <-processed:
        fmt.Printf("did something in %s\n", executionTime)
    }
}
Note: I changed the original answer a bit. We cannot interrupt a goroutine in the middle of its execution; what we can do is delegate the long-running task to another goroutine and receive the result through a dedicated channel.
Warning: I wouldn't recommend this if you expect the processing time to exceed the deadline, because you will then have a leaking goroutine.
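One common way to reduce that leak (my addition, not part of the original answer) is to give the result channel a buffer of 1, so the worker goroutine can always complete its send and exit even if the result is never read:
func doSomething(ctx context.Context) {
    executionTime := time.Duration(rand.Intn(5)+1) * time.Second
    // Buffer of 1: the send below never blocks, so the goroutine
    // always exits even when we stop waiting for it.
    processed := make(chan int, 1)
    go func() {
        time.Sleep(executionTime)
        processed <- 1
    }()
    select {
    case <-ctx.Done():
        fmt.Printf("timed out after %s\n", executionTime)
    case <-processed:
        fmt.Printf("did something in %s\n", executionTime)
    }
}
The simulated work still runs to completion, since a goroutine cannot be interrupted from the outside, but it no longer blocks forever on the send.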

Go program sleeps forever even after passing short time.Duration

I'm trying to build some sort of semaphore in Go, but when the channel receives the signal it just sleeps forever.
I've tried changing the way I sleep and the sleep duration, but it still just stops forever.
Here is a representation of what I've tried:
func main() {
    backOffChan := make(chan struct{})
    go func() {
        time.Sleep(2)
        backOffChan <- struct{}{}
    }()
    for {
        select {
        case <-backOffChan:
            d := time.Duration(5 * time.Second)
            log.Println("reconnecting in %s", d)
            select {
            case <-time.After(d):
                log.Println("reconnected after %s", d)
                return
            }
        default:
        }
    }
}
I expect it to just return after printing the log message.
Thanks!
This code has a number of problems, mainly a tight loop using for/select that may not allow the other goroutine to ever get to send on the channel. Since the default case is empty and the select has only one case, the whole select is unnecessary. The following code works correctly:
backOffChan := make(chan struct{})
go func() {
    time.Sleep(1 * time.Millisecond)
    backOffChan <- struct{}{}
}()
for range backOffChan {
    d := time.Duration(10 * time.Millisecond)
    log.Printf("reconnecting in %s", d)
    select {
    case <-time.After(d):
        log.Printf("reconnected after %s", d)
        return
    }
}
This will wait until the backOffChan gets a message without burning a tight loop.
(Note that this code also fixes the use of log.Println with formatting directives; those calls were changed to log.Printf.)
See it in action here: https://play.golang.org/p/ksAzOq5ekrm
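For illustration (my addition, separate from the Playground link above), the difference is that log.Println does not interpret format verbs; it just prints its arguments separated by spaces:
d := 10 * time.Millisecond
log.Println("reconnecting in %s", d) // prints: reconnecting in %s 10ms
log.Printf("reconnecting in %s", d)  // prints: reconnecting in 10ms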

golang design pattern for cancelling routines inflight

I am a golang newbie who is trying to understand the correct design pattern for this problem. My current solution seems very verbose, and I'm not sure what the better approach would be.
I am trying to design a system that:
executes N goroutines
returns the result of each goroutine as soon as it is available
cancels the other goroutines if one of them returns a particular value.
The goal: I want to kick off a number of goroutines, but I want to cancel the routines if one routine returns a particular result.
I'm trying to understand if my code is super "smelly" or if this is the prescribed way of doing things. I still don't have a great feeling for go, so any help would be appreciated.
Here is what I've written:
package main
import (
    "context"
    "fmt"
    "time"
)
func main() {
    ctx := context.Background()
    ctx, cancel := context.WithCancel(ctx)
    fooCheck := make(chan bool)
    barCheck := make(chan bool)
    go foo(ctx, 3000, fooCheck)
    go bar(ctx, 5000, barCheck)
    for fooCheck != nil ||
        barCheck != nil {
        select {
        case res, ok := <-fooCheck:
            if !ok {
                fooCheck = nil
                continue
            }
            if res == false {
                cancel()
            }
            fmt.Printf("result of foocheck: %t\n", res)
        case res, ok := <-barCheck:
            if !ok {
                barCheck = nil
                continue
            }
            fmt.Printf("result of barcheck: %t\n", res)
        }
    }
    fmt.Printf("here we are at the end of the loop, ready to do some more processing...")
}
func foo(ctx context.Context, pretendWorkTime int, in chan<- bool) {
    fmt.Printf("simulate doing foo work and pass ctx down to cancel down the calltree\n")
    time.Sleep(time.Millisecond * time.Duration(pretendWorkTime))
    select {
    case <-ctx.Done():
        fmt.Printf("\n\nWe cancelled this operation!\n\n")
        break
    default:
        fmt.Printf("we have done some foo work!\n")
        in <- false
    }
    close(in)
}
func bar(ctx context.Context, pretendWorkTime int, in chan<- bool) {
    fmt.Printf("simulate doing bar work and pass ctx down to cancel down the calltree\n")
    time.Sleep(time.Millisecond * time.Duration(pretendWorkTime))
    select {
    case <-ctx.Done():
        fmt.Printf("\n\nWe cancelled the bar operation!\n\n")
        break
    default:
        fmt.Printf("we have done some bar work!\n")
        in <- true
    }
    close(in)
}
(play with the code here: https://play.golang.org/p/HAA-LIxWNt0)
The output works as expected, but I'm afraid I'm making some decision which will blow off my foot later.
I would use a single channel to communicate results, so it's much easier to gather the results and it "scales" automatically by its nature. If you need to identify the source of a result, simply use a wrapper which includes the source. Something like this:
type Result struct {
    ID     string
    Result bool
}
To simulate "real" work, the workers should use a loop doing their work in an iterative manner, and in each iteration they should check the cancellation signal. Something like this:
func foo(ctx context.Context, pretendWorkMs int, resch chan<- Result) {
    log.Printf("foo started...")
    for i := 0; i < pretendWorkMs; i++ {
        time.Sleep(time.Millisecond)
        select {
        case <-ctx.Done():
            log.Printf("foo terminated.")
            return
        default:
        }
    }
    log.Printf("foo finished")
    resch <- Result{ID: "foo", Result: false}
}
In our example bar() is the same; just replace every foo with bar.
And now, executing the jobs and terminating the rest early if one does not meet our expectation looks like this:
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
resch := make(chan Result, 2)
log.Println("Kicking off workers...")
go foo(ctx, 3000, resch)
go bar(ctx, 5000, resch)
for i := 0; i < cap(resch); i++ {
    result := <-resch
    log.Printf("Result of %s: %v", result.ID, result.Result)
    if !result.Result {
        cancel()
        break
    }
}
log.Println("Done.")
log.Println("Done.")
Running this app will output (try it on the Go Playground):
2009/11/10 23:00:00 Kicking off workers...
2009/11/10 23:00:00 bar started...
2009/11/10 23:00:00 foo started...
2009/11/10 23:00:03 foo finished
2009/11/10 23:00:03 Result of foo: false
2009/11/10 23:00:03 Done.
Some things to note. If we terminate early due to an unexpected result, the cancel() function is called and we break out from the loop. The rest of the workers may also complete their work concurrently and send their result; that is not a problem, because we used a buffered channel, so their sends will not block and they will end properly. And if they don't complete concurrently, they check ctx.Done() in their loop and terminate early, so the goroutines are cleaned up nicely.
Also note that the output of the above code does not print bar terminated. This is because the main() function terminates right after the loop, and once the main() function ends, it does not wait for other non-main goroutines to complete. For details, see No output from goroutine in Go. If the app did not terminate immediately, we would see that line printed too. If we add a time.Sleep() at the end of main():
log.Println("Done.")
time.Sleep(3 * time.Millisecond)
Output will be (try it on the Go Playground):
2009/11/10 23:00:00 Kicking off workers...
2009/11/10 23:00:00 bar started...
2009/11/10 23:00:00 foo started...
2009/11/10 23:00:03 foo finished
2009/11/10 23:00:03 Result of foo: false
2009/11/10 23:00:03 Done.
2009/11/10 23:00:03 bar terminated.
Now if you must wait for all workers to end either "normally" or "early" before moving on, you can achieve that in many ways.
One way is to use a sync.WaitGroup. For an example, see Prevent the main() function from terminating before goroutines finish in Golang. Another way would be to have each worker send a Result no matter how it ends, and Result could contain the termination condition, e.g. normal or aborted. The main() goroutine could then continue the receive loop until it receives n values from resch. If this solution is chosen, you must ensure each worker sends a value even if a panic occurs (e.g. by using defer), so main() is not blocked in such cases.
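Here is a minimal sketch of that second approach (my own illustration; the Aborted field and the deferred send are assumptions, not part of the answer above):
type Result struct {
    ID      string
    Result  bool
    Aborted bool // true if the worker stopped because the context was cancelled
}

func foo(ctx context.Context, pretendWorkMs int, resch chan<- Result) {
    res := Result{ID: "foo", Aborted: true}
    // The deferred send runs on every return path, so main() can always
    // count on receiving exactly one value per worker.
    defer func() { resch <- res }()
    for i := 0; i < pretendWorkMs; i++ {
        time.Sleep(time.Millisecond)
        select {
        case <-ctx.Done():
            return // res still reports Aborted: true
        default:
        }
    }
    res = Result{ID: "foo", Result: false, Aborted: false}
}
With the buffered resch from the earlier snippet, the deferred send never blocks, and main() simply keeps receiving until it has cap(resch) values.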
I'm going to share the most simplistic pattern for what you're talking about. You can extend it for more complicated scenarios.
func doStuff() {
    // This can be a chan of anything.
    msgCh := make(chan string)
    // This is how you tell your go-routine(s) to stop, by closing this chan.
    quitCh := make(chan struct{})
    defer close(quitCh)
    // Start all go routines.
    for whileStart() {
        go func() {
            // Do w/e you need inside of your go-routine.
            // Write back the result.
            select {
            case msgCh <- "my message":
                // If we got here then the chan is open.
            case <-quitCh:
                // If we got here then the quit chan was closed.
            }
        }()
    }
    // Wait for all go routines.
    for whileWait() {
        // Block until a msg comes back.
        msg := <-msgCh
        // If you found what you want.
        if msg == stopMe {
            // It's safe to return because of the defer earlier.
            return
        }
    }
}

How to implement a timeout when using sync.WaitGroup.wait? [duplicate]

I have come across a situation where I want several goroutines to sync on a specific point, for example when all the URLs have been fetched. Then we can collect them all and show them in a specific order.
I think this is where a barrier comes in; in Go that is sync.WaitGroup. However, in a real situation we cannot be sure that all the fetch operations will succeed within a short time, so I want to introduce a timeout while waiting for them.
I am a newbie to Golang, so can someone give me some advice?
What i am looking for is like this:
wg := &sync.WaitGroup{}
select {
case <-wg.Wait():
    // All done!
case <-time.After(500 * time.Millisecond):
    // Hit timeout.
}
I know Wait does not return a channel.
If all you want is your neat select, you can easily convert a blocking function into a channel by spawning a goroutine which calls the function and closes (or sends on) a channel once it is done.
done := make(chan struct{})
go func() {
    wg.Wait()
    close(done)
}()
select {
case <-done:
    // All done!
case <-time.After(500 * time.Millisecond):
    // Hit timeout.
}
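If you need this in several places, the same pattern can be wrapped in a small helper; this is my own sketch (waitTimeout is not part of the standard library):
// waitTimeout waits on the WaitGroup and reports whether it finished
// before the timeout elapsed.
func waitTimeout(wg *sync.WaitGroup, timeout time.Duration) bool {
    done := make(chan struct{})
    go func() {
        wg.Wait()
        close(done)
    }()
    select {
    case <-done:
        return true
    case <-time.After(timeout):
        return false
    }
}
On timeout the helper returns while the goroutine calling wg.Wait() is still blocked; that goroutine exits once the WaitGroup is eventually released.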
Send your results to a buffered channel large enough to take all results without blocking, and read them in a for-select loop in the main goroutine:
func work(msg string, d time.Duration, ret chan<- string) {
    time.Sleep(d) // Work emulation.
    select {
    case ret <- msg:
    default:
    }
}
// ...
const N = 2
ch := make(chan string, N)
go work("printed", 100*time.Millisecond, ch)
go work("not printed", 1000*time.Millisecond, ch)
timeout := time.After(500 * time.Millisecond)
loop:
for received := 0; received < N; received++ {
    select {
    case msg := <-ch:
        fmt.Println(msg)
    case <-timeout:
        fmt.Println("timeout!")
        break loop
    }
}
Playground: http://play.golang.org/p/PxeEEJo2dz.
See also: Go Concurrency Patterns: Timing out, moving on.
Another way to do it would be to monitor it internally. Your question is limited, but I'm going to assume you're starting your goroutines through a loop (even if you're not, you can refactor this to work for you). You could do one of these two examples: the first one times out each request individually, and the second one times out the entire batch of requests and moves on if too much time has passed.
var wg sync.WaitGroup
wg.Add(1)
go func() {
    success := make(chan struct{}, 1)
    go func() {
        // send your request and wait for a response
        // pretend response was received
        time.Sleep(5 * time.Second)
        success <- struct{}{}
        // goroutine will close gracefully after return
        fmt.Println("Returned Gracefully")
    }()
    select {
    case <-success:
        break
    case <-time.After(1 * time.Second):
        break
    }
    wg.Done()
    // everything should be garbage collected and no longer take up space
}()
wg.Wait()
// do whatever with what you got
fmt.Println("Done")
time.Sleep(10 * time.Second)
fmt.Println("Checking to make sure nothing throws errors after limbo goroutine is done")
Or if you just want a general easy way to timeout ALL requests you could do something like
var wg sync.WaitGroup
waiter := make(chan int)
wg.Add(1)
go func() {
    success := make(chan struct{}, 1)
    go func() {
        // send your request and wait for a response
        // pretend response was received
        time.Sleep(5 * time.Second)
        success <- struct{}{}
        // goroutine will close gracefully after return
        fmt.Println("Returned Gracefully")
    }()
    select {
    case <-success:
        break
    case <-time.After(1 * time.Second):
        // control the timeouts for each request individually to make sure that wg.Done gets called and will let the goroutine holding the .Wait close
        break
    }
    wg.Done()
    // everything should be garbage collected and no longer take up space
}()
completed := false
go func(completed *bool) {
    // Unblock with either wait
    wg.Wait()
    if !*completed {
        waiter <- 1
        *completed = true
    }
    fmt.Println("Returned Two")
}(&completed)
go func(completed *bool) {
    // wait however long
    time.Sleep(time.Second * 5)
    if !*completed {
        waiter <- 1
        *completed = true
    }
    fmt.Println("Returned One")
}(&completed)
// block until it either times out or .Wait stops blocking
<-waiter
// do whatever with what you got
fmt.Println("Done")
time.Sleep(10 * time.Second)
fmt.Println("Checking to make sure nothing throws errors after limbo goroutine is done")
This way your WaitGroup will stay in sync and you won't have any goroutines left in limbo
http://play.golang.org/p/g0J_qJ1BUT try it here you can change the variables around to see it work differently
If you would like to avoid mixing concurrency logic with business logic, I wrote this library https://github.com/shomali11/parallelizer to help you with that. It encapsulates the concurrency logic so you do not have to worry about it.
So in your example:
package main
import (
    "fmt"
    "time"

    "github.com/shomali11/parallelizer"
)
func main() {
    urls := []string{ ... }
    results := make([]*HttpResponse, len(urls))
    options := &parallelizer.Options{Timeout: time.Second}
    group := parallelizer.NewGroup(options)
    for index, url := range urls {
        group.Add(func(index int, url string, results *[]*HttpResponse) func() {
            return func() {
                ...
                (*results)[index] = &HttpResponse{url, response, err}
            }
        }(index, url, &results))
    }
    err := group.Run()
    fmt.Println("Done")
    fmt.Println(fmt.Sprintf("Results: %v", results))
    fmt.Printf("Error: %v", err) // nil if it completed, err if timed out
}

Do go channels preserve order when blocked?

I have a slice of channels that all receive the same message:
func broadcast(c <-chan string, chans []chan<- string) {
    for msg := range c {
        for _, ch := range chans {
            ch <- msg
        }
    }
}
However, since each of the channels in chans is potentially being read at a different rate, I don't want to block the other channels when I get a slow consumer. I've solved this with goroutines:
func broadcast(c <-chan string, chans []chan<- string) {
    for msg := range c {
        for _, ch := range chans {
            go func() { ch <- msg }()
        }
    }
}
However, the order of the messages that get passed to each channel is important. I looked to the spec to see if channels preserve order when blocked, and all I found was this:
If the capacity is greater than zero, the channel is asynchronous: communication operations succeed without blocking if the buffer is not full (sends) or not empty (receives), and elements are received in the order they are sent.
To me, if a write is blocked, then it is not "sent", but waiting to be sent. With that assumption, the above says nothing about order of sending when multiple goroutines are blocked on writing.
Are there any guarantees about the order of sends after a channel becomes unblocked?
No, there are no guarantees.
Even when the channel is not full, if two goroutines are started at about the same time to send to it, I don't think there is any guarantee that the goroutine that was started first would actually execute first. So you can't count on the messages arriving in order.
You can drop the message if the channel is full (and then set a flag to pause the client and send them a message that they're dropping messages or whatever).
Something along the lines of (untested):
type Client struct {
    Name string
    ch   chan<- string
}
func broadcast(c <-chan string, chans []*Client) {
    for msg := range c {
        for _, ch := range chans {
            select {
            case ch.ch <- msg:
                // all okay
            default:
                log.Printf("Channel was full sending '%s' to client %s", msg, ch.Name)
            }
        }
    }
}
In this code, no guarantees.
The main problem with the given sample code lies not in the channel behavior, but rather in the numerous goroutines it creates. All the goroutines are "fired" inside the same nested loop without further synchronization, so even before they start to send messages, we simply don't know which one will execute first.
However, this raises a legitimate question in general: if we somehow guarantee the order of several blocking send instructions, are we guaranteed to receive them in the same order?
A "happens-before" relationship between the sends is difficult to establish. I fear it is impossible, because:
Anything can happen before the send instruction: for example, other goroutines performing (or not performing) their own sends
A goroutine blocked on a send cannot simultaneously take part in any other kind of synchronization
For example, if I have 10 goroutines numbered 1 to 10, I have no way of letting them send their own number to the channel, concurrently, in the right order. All I can do is use various kinds of sequential tricks, like doing the sorting in a single goroutine.
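For illustration only (my sketch, not from this answer), such a sequential trick amounts to letting the workers send in any order and having one goroutine restore the order before forwarding:
// Workers send (index, message) pairs in any order; a single goroutine
// buffers them and forwards the messages strictly in index order.
type numbered struct {
    idx int
    msg string
}

func reorder(in <-chan numbered, out chan<- string, n int) {
    buf := make([]string, n)
    for i := 0; i < n; i++ {
        v := <-in
        buf[v.idx] = v.msg
    }
    for _, msg := range buf {
        out <- msg
    }
    close(out)
}
Order is recovered, but only because one goroutine serializes the output, which is exactly the loss of concurrency described above.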
This is an addition to the already posted answers.
As practically everyone has stated, the problem is the order of execution of the goroutines;
you can easily coordinate goroutine execution using channels by passing around the number of the
goroutine you want to run:
func coordinated(coord chan int, num, max int, work func()) {
    for {
        n := <-coord
        if n == num {
            work()
            coord <- (n + 1) % max
        } else {
            coord <- n
        }
    }
}
coord := make(chan int)
go coordinated(coord, 0, 3, func() { println("0"); time.Sleep(1 * time.Second) })
go coordinated(coord, 1, 3, func() { println("1"); time.Sleep(1 * time.Second) })
go coordinated(coord, 2, 3, func() { println("2"); time.Sleep(1 * time.Second) })
coord <- 0
or by using a central goroutine which executes the workers in a ordered manner:
func executor(funs chan func()) {
    for {
        worker := <-funs
        worker()
        funs <- worker
    }
}
funs := make(chan func(), 3)
funs <- func() { println("0"); time.Sleep(1 * time.Second) }
funs <- func() { println("1"); time.Sleep(1 * time.Second) }
funs <- func() { println("2"); time.Sleep(1 * time.Second) }
go executor(funs)
These methods will, of course, remove all parallelism due to synchronization. However,
the concurrent aspect of your program remains.
