How does Go invoke Ticker by interval?

Since I am not an experienced Go developer, I don't fully understand how to work with a Ticker. I have the following scenario:
A Go web service running on port 8080 receives data from other applications and processes it. So far so good, but I also have a sendData function in this web service which loops through some files and sends them to an external service. I am trying to call sendData() every minute. Here is how the main function looks without any Ticker:
func main() {
    http.HandleFunc("/data", headers)          // line 1
    log.Printf("Ready for data ...%s\n", 8080) // line 2
    http.ListenAndServe(":8080", nil)          // line 3
}
If I add the Ticker after line 2, it keeps looping infinitely.
If I add it after line 3, the program never invokes the Ticker.
Any idea how to handle this?
The Ticker part
ticker := schedule(sendData, time.Second, done)
time.Sleep(60 * time.Second)
close(done)
ticker.Stop()
and the schedule function:
func schedule(f func(), interval time.Duration, done <-chan bool) *time.Ticker {
    ticker := time.NewTicker(interval)
    go func() {
        for {
            select {
            case <-ticker.C:
                f()
            case <-done:
                return
            }
        }
    }()
    return ticker
}
So basically I want to call sendData every minute or hour, etc. Could someone explain how the Ticker works internally?

http.ListenAndServe(":8080", nil) runs an infinite for loop listening for inbound connections, that's why the ticker is not invoked if you call it afterwards.
And then here
ticker := schedule(sendData, time.Second, done)
time.Sleep(60 * time.Second)
close(done)
ticker.Stop()
you're exiting the loop inside schedule() after 60 seconds, so with the one-minute interval you actually want, the ticker would fire only once or not at all (depending on whether the done channel is closed before or after the ticker ticks; since they run concurrently, the order is not guaranteed).
So what you want is the following
func main() {
    http.HandleFunc("/data", headers)
    ticker := time.NewTicker(time.Minute)
    go schedule(ticker)
    log.Printf("Ready for data ...%d\n", 8080)
    http.ListenAndServe(":8080", nil)
}
func schedule(ticker *time.Ticker) {
    for {
        // This blocks until a value is received; the ticker
        // sends a value every minute (or whatever interval was specified).
        <-ticker.C
        fmt.Println("Tick")
    }
}
As you may have noticed, once ListenAndServe returns (e.g. the server is interrupted) the program terminates, so there's no point in having a done channel to exit the loop.

You are on the right track - you just need to wrap the ticker declaration in a self-executing function and run it as a goroutine. ListenAndServe and schedule are both blocking calls, so they need to run on separate goroutines. Luckily Go makes this really simple to achieve.
Note - this sample code is meant to stay as close to your example as possible. I would recommend separating the declaration of the ticker from the schedule func.
func main() {
    http.HandleFunc("/data", func(w http.ResponseWriter, req *http.Request) {}) //line 1
    done := make(chan bool) // must be a real channel; closing a nil channel panics
    go func() {
        ticker := schedule(func() { fmt.Println("Tick") }, time.Second, done)
        time.Sleep(60 * time.Second)
        close(done)
        ticker.Stop()
    }()
    fmt.Printf("Ready for data ...%v\n", 8080) //line 2
    http.ListenAndServe(":8080", nil)          //line 3
}

Related

Run function every N seconds with context timeout

I have a basic question about scheduling "cancellable" goroutines.
I want to schedule a function execution every 3 seconds.
The function can take up to 5 seconds.
In case it takes more than 2999 ms I want to stop/terminate it, to avoid overlapping with the next one.
I'm doing it wrong:
func main() {
    fmt.Println("startProcessing")
    go startProcessing()
    time.Sleep(time.Second * 60)
    fmt.Println("endProcessing after 60s")
}

func startProcessing() {
    ticker := time.NewTicker(3 * time.Second)
    for range ticker.C {
        ctx, _ := context.WithTimeout(context.Background(), (time.Second*3)-time.Millisecond)
        fmt.Println("start doSomething")
        doSomething(ctx)
    }
}

func doSomething(ctx context.Context) {
    executionTime := time.Duration(rand.Intn(5)+1) * time.Second
    for {
        select {
        case <-ctx.Done():
            fmt.Printf("timed out after %s\n", executionTime)
            return
        default:
            time.Sleep(executionTime)
            fmt.Printf("did something in %s\n", executionTime)
            return
        }
    }
}
This is my output now:
startProcessing
start doSomething
did something in 2s
start doSomething
did something in 3s
start doSomething
did something in 3s
start doSomething
did something in 5s
start doSomething
did something in 2s
...
I want to read timed out after 5s instead of did something in 5s.
You need to move the time.Sleep(executionTime) out of the select and into its own goroutine, and there is no need for the for loop. I think this is roughly what you want, but beware that it's not good practice - see the warning below.
func doSomething(ctx context.Context) {
    executionTime := time.Duration(rand.Intn(5)+1) * time.Second
    processed := make(chan int)
    go func() {
        time.Sleep(executionTime)
        processed <- 1
    }()
    select {
    case <-ctx.Done():
        fmt.Printf("timed out after %s\n", executionTime)
    case <-processed:
        fmt.Printf("did something in %s\n", executionTime)
    }
}
Note: I changed the original answer a bit. We cannot interrupt a goroutine in the middle of its execution, but we can delegate the long-running task to another goroutine and receive the result through a dedicated channel.
Warning: I wouldn't recommend this if you expect the processing time to exceed the deadline, because on timeout nothing ever receives from the unbuffered processed channel and the worker goroutine stays blocked forever - a goroutine leak.
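One way to avoid that leak (my sketch, not part of the original answer): give processed a capacity of 1, so the worker can always complete its send and exit even after the caller has timed out and returned:
func doSomething(ctx context.Context) {
    executionTime := time.Duration(rand.Intn(5)+1) * time.Second
    processed := make(chan int, 1) // buffered: the send below never blocks
    go func() {
        time.Sleep(executionTime) // stands in for the real work
        processed <- 1
    }()
    select {
    case <-ctx.Done():
        fmt.Printf("timed out after %s\n", executionTime)
    case <-processed:
        fmt.Printf("did something in %s\n", executionTime)
    }
}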

First wait for time.AfterFunc then start time.NewTicker

I am trying to set up a service routine that runs every hour, on the hour. It seems to me that each of those two requirements on its own is easy. To run my routine on the hour I can use time.AfterFunc(), first calculating the remaining time until the top of the hour. And to run my routine every hour I can use time.NewTicker().
However, I'm struggling to figure out how to start the NewTicker only after the function passed to AfterFunc() has fired.
My main() function looks something like this:
func main() {
    fmt.Println("starting up")

    // Here I'm setting up all kinds of HTTP listeners and gRPC listeners, none
    // of which is important, save to mention that the app has more happening
    // than just this service routine.

    // Calculate duration until next hour and call time.AfterFunc()
    // For the purposes of this exercise I'm just using 5 seconds so as not to
    // have to wait increments of hours to see the results
    time.AfterFunc(time.Second*5, func() {
        fmt.Println("AfterFunc")
    })

    // Set up a ticker to run every hour. Again, for the purposes of this
    // exercise I'm ticking every 2 seconds just to see some results
    t := time.NewTicker(time.Second * 2)
    defer t.Stop()
    go func() {
        for range t.C {
            fmt.Println("Ticker")
        }
    }()

    // Block until termination signal is received
    osSignals := make(chan os.Signal, 1)
    signal.Notify(osSignals, syscall.SIGINT, syscall.SIGTERM, os.Interrupt, os.Kill)
    <-osSignals
    fmt.Println("exiting gracefully")
}
Of course time.AfterFunc() is non-blocking, and the payload of my Ticker is deliberately put in a goroutine so it also won't block. This is so that my HTTP and gRPC listeners can continue to listen, but also to allow the block of code at the end of main() to exit gracefully upon a termination signal from the OS. But the obvious downside now is that the Ticker kicks off pretty much immediately and fires twice (at 2-second intervals) before the function passed to AfterFunc() fires. The output looks like this:
Ticker
Ticker
AfterFunc
Ticker
Ticker
Ticker
etc.
What I wanted of course is:
AfterFunc
Ticker
Ticker
Ticker
Ticker
Ticker
etc.
The following also doesn't work and I'm not exactly sure why. It prints AfterFunc but the Ticker never fires.
time.AfterFunc(time.Second*5, func() {
    fmt.Println("AfterFunc")
    t := time.NewTicker(time.Second * 2)
    defer t.Stop()
    go func() {
        for range t.C {
            fmt.Println("Ticker")
        }
    }()
})
The Go Programming Language Specification
Program execution
Program execution begins by initializing the main package and then
invoking the function main. When that function invocation returns, the
program exits. It does not wait for other (non-main) goroutines to
complete.
time.AfterFunc(time.Second*5, func() {
    fmt.Println("AfterFunc")
    t := time.NewTicker(time.Second * 2)
    defer t.Stop()
    go func() {
        for range t.C {
            fmt.Println("Ticker")
        }
    }()
})
defer t.Stop() stops the ticker as soon as the function passed to AfterFunc() returns.
You are not waiting for the goroutine to run.
Gosh, I figured it out not long after posting, though I still think my solution lacks some elegance.
As @peterSO pointed out, the function passed to AfterFunc() executes and then stops the Ticker moments after creating it, via defer t.Stop().
The solution for me was to define the t variable before the call to AfterFunc() so that it has scope outside of the AfterFunc() payload function, and then stop it at the end of my main() func. Here is the new main() func:
func main() {
    fmt.Println("starting up")

    var t *time.Ticker
    time.AfterFunc(time.Second*5, func() {
        fmt.Println("AfterFunc")
        t = time.NewTicker(time.Second * 2)
        go func() {
            for range t.C {
                fmt.Println("Ticker")
            }
        }()
    })
    //defer t.Stop()

    // Block until termination signal is received
    osSignals := make(chan os.Signal, 1)
    signal.Notify(osSignals, syscall.SIGINT, syscall.SIGTERM, os.Interrupt, os.Kill)
    <-osSignals
    t.Stop()
    fmt.Println("exiting gracefully")
}
Strangely though, that commented-out defer t.Stop() causes a panic (invalid memory address or nil pointer dereference) when the application closes upon a termination signal. If I stop it with the uncommented t.Stop() at the very end of the code, it works as expected. Not sure why that is.
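The reason is a subtlety of defer: when a defer statement executes, the function value - here the method value t.Stop - is evaluated immediately, which saves the current (still nil) receiver. Reassigning t inside the AfterFunc callback later does not change what the deferred call will use, so at exit Stop runs on a nil *time.Ticker and panics. A minimal sketch of that behaviour:
func main() {
    var t *time.Ticker
    defer t.Stop() // the receiver t is evaluated NOW, while it is still nil

    t = time.NewTicker(time.Second) // does not affect the already-saved nil receiver
    t.Stop()                        // this explicit call uses the new ticker and is fine

    // when main returns, the deferred call invokes Stop on the nil receiver -> panic
}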
time.AfterFunc(time.Second*5, func() {
    fmt.Println("AfterFunc")
    t := time.NewTicker(time.Second * 2)
    defer t.Stop()
    for range t.C {
        fmt.Println("Ticker")
    }
})
It produces the output you need:
starting up
AfterFunc
Ticker
Ticker
Ticker
Ticker
Ticker

How to send updates from long running goroutine?

I have a goroutine for a long-running job. When the job is done, it pushes the results to a channel. In the meantime, while the job is running, I want to keep updating an API with the status RUNNING.
So far, I have the following code:
func getProgressTimeout() <-chan time.Time {
    return time.After(5 * time.Minute)
}

func runCommand(arg *Request) error {
    chanResult := make(chan Results)
    go func(args *Request, c chan Results) {
        resp, err := execCommand(args)
        c <- Results{
            resp: resp,
            err:  err,
        }
    }(arg, chanResult)

    var err error
progressLoop:
    for {
        select {
        case <-getProgressTimeout():
            updateProgress() // this method will send status=RUNNING to a REST API
        case out := <-chanResult:
            err = jobCompleted(arg, out)
            break progressLoop
        }
    }
    return err
}
I am new to Go, and I reached the above code after a lot of trial and error and googling. It's working now, but it still doesn't feel intuitive to me when I look at it (this may very well be because I am still learning the Go way of doing things). So my question is: can I refactor this into better shape? Is there an existing pattern that applies to this kind of scenario, or a totally different approach to keep sending periodic updates while the job is running?
Any suggestions to improve my Go concurrency are also appreciated. :)
Thanks in advance!
Consider using time.NewTicker, which sends a periodic value to a channel. Here's an example from the documentation:
package main

import (
    "fmt"
    "time"
)

func main() {
    ticker := time.NewTicker(time.Second)
    defer ticker.Stop()
    done := make(chan bool)
    go func() {
        time.Sleep(10 * time.Second)
        done <- true
    }()
    for {
        select {
        case <-done:
            fmt.Println("Done!")
            return
        case t := <-ticker.C:
            fmt.Println("Current time: ", t)
        }
    }
}
Note that the anonymous goroutine emulates a long-running task by sleeping for 10 seconds, while the caller uses select to wait for the result and also receives periodic events from the ticker - that case is where you can do the API progress update.
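Applied to the question's runCommand, a rough sketch of the same pattern (assuming the Request and Results types and the execCommand, updateProgress and jobCompleted helpers from the question):
func runCommand(arg *Request) error {
    chanResult := make(chan Results, 1)
    go func() {
        resp, err := execCommand(arg)
        chanResult <- Results{resp: resp, err: err}
    }()

    // One ticker for the whole wait, instead of a fresh timer per iteration.
    ticker := time.NewTicker(5 * time.Minute)
    defer ticker.Stop()

    for {
        select {
        case <-ticker.C:
            updateProgress() // status = RUNNING
        case out := <-chanResult:
            return jobCompleted(arg, out)
        }
    }
}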

Go routines and depending functions

I am having some fun with Go and am just very curious about something I am trying to achieve. I have a package here that just gets a feed from Reddit, nothing special. When I receive the parent JSON document I would then like to retrieve the child data. In the code below I launch a series of goroutines and then block, waiting for them to finish using the sync package. What I would like is that once the first series of goroutines finishes, a second series of goroutines runs using the previous results. There are a few ways I was thinking of, such as a for loop or a switch statement, but what is the best and most efficient way to do this?
func (m redditMatcher) retrieve(dataPoint *collect.DataPoint) (*redditCommentsDocument, error) {
    if dataPoint.URI == "" {
        return nil, errors.New("No datapoint uri provided")
    }

    // Get options data -> returns empty struct
    // if no options are present
    options := m.options(dataPoint.Options)
    if len(options.subreddit) <= 0 {
        return nil, fmt.Errorf("Matcher fail: Reddit - Subreddit option mandatory\n")
    }

    // Create a buffered channel to receive match results to display.
    results := make(chan *redditCommentsDocument, len(options.subreddit))

    // Generate requests for each subreddit using the
    // goroutine concurrency model
    for _, s := range options.subreddit {
        // Set the number of goroutines we need to wait for while
        // they process the individual subreddit.
        waitGroup.Add(1)
        go retrieveComment(s.(string), dataPoint.URI, results)
    }

    // Wait until all the work is done.
    waitGroup.Wait()

    // HERE I WOULD LIKE TO CALL ANOTHER SERIES OF GOROUTINES
    for commentFeed := range results {
        // HERE I WOULD LIKE TO CALL GOROUTINES USING THE RESULTS
        // PROVIDED FROM THE PREVIOUS FUNCTIONS
        waitGroup.Add(1)
        log.Printf("%s\n\n", commentFeed.Kind)
    }
    waitGroup.Wait()
    close(results)

    return nil, nil
}
If you want to wait for all of the first series to complete, then you can just pass in a pointer to your waitgroup, wait after calling all the first series functions (which will call Done() on the waitgroup), and then start the second series. Here's a runnable annotated code example that does that:
package main

import (
    "fmt"
    "sync"
    "time"
)

func first(wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Println("Starting a first")
    // do some stuff... here's a sleep to make some time pass
    time.Sleep(250 * time.Millisecond)
    fmt.Println("Done with a first")
}

func second(wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Println("Starting a second")
    // do some followup stuff
    time.Sleep(50 * time.Millisecond)
    fmt.Println("Done with a second")
}

func main() {
    wg := new(sync.WaitGroup) // you'll need a pointer to avoid a copy when passing as parameter to goroutine function

    // let's start 5 firsts and then wait for them to finish
    wg.Add(5)
    go first(wg)
    go first(wg)
    go first(wg)
    go first(wg)
    go first(wg)
    wg.Wait()

    // now that we're done with all the firsts, let's do the seconds
    // how about two of these
    wg.Add(2)
    go second(wg)
    go second(wg)
    wg.Wait()

    fmt.Println("All done")
}
It outputs:
Starting a first
Starting a first
Starting a first
Starting a first
Starting a first
Done with a first
Done with a first
Done with a first
Done with a first
Done with a first
Starting a second
Starting a second
Done with a second
Done with a second
All done
But if you want a "second" to start as soon as a "first" has finished, just have the seconds executing blocking receive operators on the channel while the firsts are running:
package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

func first(res chan int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Println("Starting a first")
    // do some stuff... here's a sleep to make some time pass
    time.Sleep(250 * time.Millisecond)
    fmt.Println("Done with a first")
    res <- rand.Int() // this will block until a second is ready
}

func second(res chan int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Println("Wait for a value from first")
    val := <-res // this will block until a first is ready
    fmt.Printf("Starting a second with val %d\n", val)
    // do some followup stuff
    time.Sleep(50 * time.Millisecond)
    fmt.Println("Done with a second")
}

func main() {
    wg := new(sync.WaitGroup) // you'll need a pointer to avoid a copy when passing as parameter to goroutine function
    ch := make(chan int)

    // let's run first twice, and second once for each first result, for a total of four workers:
    wg.Add(4)
    go first(ch, wg)
    go first(ch, wg)
    // don't wait before starting the seconds
    go second(ch, wg)
    go second(ch, wg)
    wg.Wait()

    fmt.Println("All done")
}
Which outputs:
Wait for a value from first
Starting a first
Starting a first
Wait for a value from first
Done with a first
Starting a second with val 5577006791947779410
Done with a first
Starting a second with val 8674665223082153551
Done with a second
Done with a second
All done
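Applied back to the retrieve method from the question, a rough sketch of this second approach (assuming the types, the package-level waitGroup and retrieveComment from the question, where retrieveComment calls waitGroup.Done() when it finishes). The key point is to close the results channel once the first wave is done; otherwise the range over it blocks forever:
// First wave: fetch every subreddit concurrently.
for _, s := range options.subreddit {
    waitGroup.Add(1)
    go retrieveComment(s.(string), dataPoint.URI, results)
}

// Close results once the first wave is done so the range below terminates.
go func() {
    waitGroup.Wait()
    close(results)
}()

// Second wave: start a goroutine per result as it arrives.
var second sync.WaitGroup
for commentFeed := range results {
    second.Add(1)
    go func(doc *redditCommentsDocument) {
        defer second.Done()
        // process the child data here
        log.Printf("%s\n\n", doc.Kind)
    }(commentFeed)
}
second.Wait()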

How to implement a timeout when using sync.WaitGroup.wait? [duplicate]

This question already has answers here:
Timeout for WaitGroup.Wait()
(10 answers)
Closed 7 months ago.
I have come across a situation where I want several goroutines to sync on a specific point, for example when all the URLs are fetched. Then we can collect them all and show them in a specific order.
I think this is where a barrier comes in, which in Go is sync.WaitGroup. However, in a real situation we cannot be sure that all the fetch operations will succeed in a short time, so I want to introduce a timeout while waiting for the fetch operations.
I am a newbie to Go, so can someone give me some advice?
What I am looking for is something like this:
wg := &sync.WaitGroup{}
select {
case <-wg.Wait():
    // All done!
case <-time.After(500 * time.Millisecond):
    // Hit timeout.
}
I know Wait does not return a channel.
If all you want is your neat select, you can easily convert a blocking function into a channel by spawning a goroutine that calls the function and closes (or sends on) a channel once it is done.
done := make(chan struct{})
go func() {
    wg.Wait()
    close(done)
}()

select {
case <-done:
    // All done!
case <-time.After(500 * time.Millisecond):
    // Hit timeout.
}
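If you need this in more than one place, the same idea wraps neatly into a small helper - a sketch (the name waitTimeout is mine, not from any standard package):
// waitTimeout waits for the WaitGroup and reports whether it finished
// before the timeout elapsed. The spawned goroutine exits once Wait returns.
func waitTimeout(wg *sync.WaitGroup, timeout time.Duration) bool {
    done := make(chan struct{})
    go func() {
        wg.Wait()
        close(done)
    }()
    select {
    case <-done:
        return true
    case <-time.After(timeout):
        return false
    }
}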
Send your results to a buffered channel large enough to take all the results without blocking, and read them in a for-select loop in the main goroutine:
func work(msg string, d time.Duration, ret chan<- string) {
    time.Sleep(d) // Work emulation.
    select {
    case ret <- msg:
    default:
    }
}

// ...

const N = 2
ch := make(chan string, N)

go work("printed", 100*time.Millisecond, ch)
go work("not printed", 1000*time.Millisecond, ch)

timeout := time.After(500 * time.Millisecond)
loop:
for received := 0; received < N; received++ {
    select {
    case msg := <-ch:
        fmt.Println(msg)
    case <-timeout:
        fmt.Println("timeout!")
        break loop
    }
}
Playground: http://play.golang.org/p/PxeEEJo2dz.
See also: Go Concurrency Patterns: Timing out, moving on.
Another way to do it would be to monitor it internally. Your question is limited, but I'm going to assume you're starting your goroutines through a loop (even if you're not, you can refactor this to work for you). You could do one of these two examples: the first one times out each request individually, and the second one times out the entire batch of requests and moves on if too much time has passed.
var wg sync.WaitGroup
wg.Add(1)
go func() {
    success := make(chan struct{}, 1)
    go func() {
        // send your request and wait for a response
        // pretend response was received
        time.Sleep(5 * time.Second)
        success <- struct{}{}
        // goroutine will close gracefully after return
        fmt.Println("Returned Gracefully")
    }()

    select {
    case <-success:
        break
    case <-time.After(1 * time.Second):
        break
    }
    wg.Done()
    // everything should be garbage collected and no longer take up space
}()

wg.Wait()
// do whatever with what you got
fmt.Println("Done")
time.Sleep(10 * time.Second)
fmt.Println("Checking to make sure nothing throws errors after limbo goroutine is done")
Or if you just want a general, easy way to time out ALL requests, you could do something like:
var wg sync.WaitGroup
waiter := make(chan int)
wg.Add(1)
go func() {
    success := make(chan struct{}, 1)
    go func() {
        // send your request and wait for a response
        // pretend response was received
        time.Sleep(5 * time.Second)
        success <- struct{}{}
        // goroutine will close gracefully after return
        fmt.Println("Returned Gracefully")
    }()

    select {
    case <-success:
        break
    case <-time.After(1 * time.Second):
        // control the timeouts for each request individually to make sure that
        // wg.Done gets called and will let the goroutine holding the .Wait close
        break
    }
    wg.Done()
    // everything should be garbage collected and no longer take up space
}()

completed := false
go func(completed *bool) {
    // Unblock with either wait
    wg.Wait()
    if !*completed {
        waiter <- 1
        *completed = true
    }
    fmt.Println("Returned Two")
}(&completed)

go func(completed *bool) {
    // wait however long
    time.Sleep(time.Second * 5)
    if !*completed {
        waiter <- 1
        *completed = true
    }
    fmt.Println("Returned One")
}(&completed)

// block until it either times out or .Wait stops blocking
<-waiter
// do whatever with what you got
fmt.Println("Done")
time.Sleep(10 * time.Second)
fmt.Println("Checking to make sure nothing throws errors after limbo goroutine is done")
This way your WaitGroup will stay in sync and you won't have any goroutines left in limbo.
Try it here: http://play.golang.org/p/g0J_qJ1BUT - you can change the variables around to see it work differently.
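One caveat not raised in the answer above: the completed flag is read and written by two goroutines without synchronization, which is a data race. A sketch of the same idea made race-free with sync.Once, so only whichever event happens first signals:
var wg sync.WaitGroup
waiter := make(chan struct{})
var once sync.Once
fire := func() { once.Do(func() { close(waiter) }) }

wg.Add(1)
go func() {
    defer wg.Done()
    time.Sleep(2 * time.Second) // pretend request/response work
}()

go func() { wg.Wait(); fire() }()                   // normal completion path
go func() { time.Sleep(5 * time.Second); fire() }() // overall timeout path

// block until either completion or timeout
<-waiter
fmt.Println("Done (completed or timed out)")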
If you would like to avoid mixing concurrency logic with business logic, I wrote this library https://github.com/shomali11/parallelizer to help you with that. It encapsulates the concurrency logic so you do not have to worry about it.
So in your example:
package main

import (
    "fmt"
    "time"

    "github.com/shomali11/parallelizer"
)

func main() {
    urls := []string{ ... }
    results := make([]*HttpResponse, len(urls))

    options := &parallelizer.Options{Timeout: time.Second}
    group := parallelizer.NewGroup(options)
    for index, url := range urls {
        group.Add(func(index int, url string, results []*HttpResponse) func() {
            return func() {
                ...
                results[index] = &HttpResponse{url, response, err}
            }
        }(index, url, results))
    }

    err := group.Run()

    fmt.Println("Done")
    fmt.Println(fmt.Sprintf("Results: %v", results))
    fmt.Printf("Error: %v", err) // nil if it completed, err if it timed out
}
