Introduction
zerolog fields
I'm using github.com/rs/zerolog in my Go project.
I know that I can add fields to the output by using something like this:
package main
import (
"os"
"github.com/rs/zerolog"
)
func main() {
logger := zerolog.New(os.Stderr).With().Timestamp().Logger()
logger = logger.With().Int("myIntField", 42).Logger()
logger.Info().Msg("a regular log output") // this log entry will also contain the integer field `myIntField`
}
But what I would like is for the value of a field like myIntField to be evaluated at the moment the line logger.Info().Msg("a regular log output") actually runs.
The setting
I have a producer/consumer setup (for an example see https://goplay.tools/snippet/hkoMAwqKcwj) with goroutines, and two integers that atomically count down the number of consumer and producer goroutines still in business. Upon tear-down of each consumer and producer I want to log these numbers as they are at that moment.
Here's the code when using log instead of zerolog:
package main
import (
"fmt"
"log"
"os"
"sync"
"sync/atomic"
)
func main() {
numProducers := int32(3)
numConsumers := int32(3)
producersRunning := numProducers
consumersRunning := numConsumers
var wg sync.WaitGroup
l := log.New(os.Stderr, "", 0)
// producers
for i := int32(0); i < numProducers; i++ {
idx := i
wg.Add(1)
go (func() {
// producer tear down
defer func() {
// use the value returned by AddInt32 to avoid a racy re-read
remaining := atomic.AddInt32(&producersRunning, -1)
l.Printf("producer-%3d . producersRunning: %3d\n", idx, remaining)
wg.Done()
}()
// this is where the actual producer work happens
})()
}
// consumers
for i := int32(0); i < numConsumers; i++ {
idx := i
wg.Add(1)
go (func() {
// consumer tear down
defer func() {
// use the value returned by AddInt32 to avoid a racy re-read
remaining := atomic.AddInt32(&consumersRunning, -1)
l.Printf("consumer-%3d . consumersRunning: %3d\n", idx, remaining)
wg.Done()
}()
// this is where the actual consumer work happens
})()
}
fmt.Println("waiting")
wg.Wait()
}
It outputs something like this:
waiting
producer- 1 . producersRunning: 2
producer- 0 . producersRunning: 1
consumer- 1 . consumersRunning: 2
producer- 2 . producersRunning: 0
consumer- 2 . consumersRunning: 1
consumer- 0 . consumersRunning: 0
A logger per consumer / producer
With zerolog you can create derived loggers and pass them to each goroutine:
logger := zerolog.New(os.Stderr)
go myConsumer(logger.With().Str("is", "consumer").Logger())
go myProducer(logger.With().Str("is", "producer").Logger())
Then you can easily find out in the logs if a message came from a consumer or a producer just by looking at the is field in each log line.
But what if I want to always print the number of currently active consumers/producers in each log line? You might be tempted to do something like this:
go myConsumer(logger.With().Str("is", "consumer").Int32("consumersRunning", consumersRunning).Logger())
go myProducer(logger.With().Str("is", "producer").Int32("producersRunning", producersRunning).Logger())
But of course, this will only print the momentary value of consumersRunning and producersRunning at the time of creating the go-routine. Instead I would like the log output to reflect the values at the time of the log output.
Summary
I hope my question is clear. I'm not sure if it is against the concept of zero-ness but a function like
func (e *Event) DeferredInt(key string, i func() int) *Event
would probably work, if only it existed.
Is there another way to achieve the same effect?
Potential workaround
One way could be to replace the logger variable with a function call like this:
logFunc := func() zerolog.Logger {
return logger.With().Int("runningConsumers", runningConsumers).Logger()
}
And then a log entry can be created with logFunc().Msg("hello"). This defers the evaluation of runningConsumers, but it also creates a new logger for every log entry, which feels like overkill.
By now I hope I haven't confused you.
You can add a hook; a hook is evaluated for each logging event:
https://go.dev/play/p/Q7doafJGaeE
package main
import (
"os"
"github.com/rs/zerolog"
)
type IntHook struct {
Count int
}
func (h *IntHook) Run(e *zerolog.Event, l zerolog.Level, msg string) {
e.Int("count", h.Count)
h.Count++
}
func main() {
var intHook IntHook
log := zerolog.New(os.Stdout).Hook(&intHook)
log.Info().Msg("hello world")
log.Info().Msg("hello world one more time")
}
Output is
{"level":"info","count":0,"message":"hello world"}
{"level":"info","count":1,"message":"hello world one more time"}
The pointer receiver is required so that Count is preserved between calls to Hook.Run.
Maybe a HookFunc is better for you: it is a stateless function that is called for each event. Here is an example of a hook function that calls the PRNG for each message: https://go.dev/play/p/xu6aXpUmE0v
package main
import (
"math/rand"
"os"
"github.com/rs/zerolog"
)
func RandomHook(e *zerolog.Event, l zerolog.Level, msg string) {
e.Int("random", rand.Intn(100))
}
func main() {
var randomHook zerolog.HookFunc = RandomHook
log := zerolog.New(os.Stdout).Hook(randomHook)
log.Info().Msg("hello world")
log.Info().Msg("hello world one more time")
}
Output
{"level":"info","random":81,"message":"hello world"}
{"level":"info","random":87,"message":"hello world one more time"}
You can use a zerolog Hook to achieve this. Hooks are interfaces with a Run method which is called before the event data is written to the given io.Writer (in your case os.Stderr).
Here is some example code:
type counter struct {
name string
value int32
}
func (c *counter) inc() { atomic.AddInt32(&c.value, 1) }
func (c *counter) dec() { atomic.AddInt32(&c.value, -1) }
func (c *counter) get() int32 { return atomic.LoadInt32(&c.value) }
func (c *counter) Run(e *zerolog.Event, _ zerolog.Level, _ string) {
e.Int32(c.name, c.get())
}
func main() {
numConsumers, numProducers := 3, 3
consumersRunning := &counter{
name: "consumersRunning",
value: int32(numConsumers),
}
producersRunning := &counter{
name: "producersRunning",
value: int32(numProducers),
}
logger := zerolog.New(os.Stderr)
consumerLogger := logger.With().Str("is", "consumer").Logger().Hook(consumersRunning)
producerLogger := logger.With().Str("is", "producer").Logger().Hook(producersRunning)
// your other code
}
You will use the inc and dec methods of the counters to modify the numbers of consumers/producers running.
Related
How can I get a goroutine's runtime ID?
I'm getting interleaved logs from an imported package - one approach would be to add a unique identifier to the logs of each goroutine.
I've found some references to runtime.GoID:
func worker() {
id := runtime.GoID()
log.Println("Goroutine ID:", id)
}
But it looks like this is now outdated/has been removed – there is no such function in https://pkg.go.dev/runtime.
Go deliberately chooses not to provide an ID since it would encourage worse software and hurt the overall ecosystem: https://go.dev/doc/faq#no_goroutine_id
Generally, the desire to de-anonymize goroutines is a design flaw and is strongly not recommended. There is almost always going to be a much better way to solve the issue at hand. Eg, if you need a unique identifier, it should be passed into the function or potentially via context.Context.
However, internally the runtime needs IDs for the implementation. For educational purposes you can find them with something like:
package main
import (
"bytes"
"errors"
"fmt"
"runtime"
"strconv"
)
func main() {
fmt.Println(goid())
done := make(chan struct{})
go func() {
fmt.Println(goid())
done <- struct{}{}
}()
go func() {
fmt.Println(goid())
done <- struct{}{}
}()
<-done
<-done
}
var (
goroutinePrefix = []byte("goroutine ")
errBadStack = errors.New("invalid runtime.Stack output")
)
// This is terrible, slow, and should never be used.
func goid() (int, error) {
buf := make([]byte, 32)
n := runtime.Stack(buf, false)
buf = buf[:n]
// goroutine 1 [running]: ...
buf, ok := bytes.CutPrefix(buf, goroutinePrefix)
if !ok {
return 0, errBadStack
}
i := bytes.IndexByte(buf, ' ')
if i < 0 {
return 0, errBadStack
}
return strconv.Atoi(string(buf[:i]))
}
Example output:
1 <nil>
19 <nil>
18 <nil>
They can also be found (less portably) via assembly by accessing the goid field in the g struct. This is how packages like github.com/petermattis/goid typically do it.
I am calling a REST API which expects a nonce header. The nonce must be a unique timestamp, and every consecutive call must carry a timestamp greater than the previous one. My goal is to launch 10 goroutines and make a call to the web API from each. Since we have no control over goroutine execution order, we might end up making a call with a nonce smaller than the previous one. I have no control over the API implementation.
I have stripped down my code to something very simple which illustrate the problem:
package main
import (
"fmt"
"time"
)
func main() {
count := 10
results := make(chan string, count)
for i := 0; i < count; i++ {
go someWork(results)
// Enabling the following line would give the
// expected outcome but does look like a hack to me.
// time.Sleep(time.Millisecond)
}
for i := 0; i < count; i++ {
fmt.Println(<-results)
}
}
func someWork(done chan string) {
// prepare http request, do http request, send to done chan the result
done <- time.Now().Format("15:04:05.00000")
}
From the output you can see that the timestamps are not chronologically ordered:
13:18:26.98549
13:18:26.98560
13:18:26.98561
13:18:26.98553
13:18:26.98556
13:18:26.98556
13:18:26.98557
13:18:26.98558
13:18:26.98559
13:18:26.98555
What would be the idiomatic way to achieve the expected outcome without adding the sleep line?
Thanks!
As I understand it, you only need to synchronize (serialize) the goroutines up to the request-send part, which is where the timestamp and nonce need to be sequential. Response processing can run in parallel.
You can use a mutex for this case, as in the code below:
package main
import (
"fmt"
"sync"
"time"
)
func main() {
count := 10
results := make(chan string, count)
var mutex sync.Mutex
for i := 0; i < count; i++ {
go someWork(&mutex, results)
}
for i := 0; i < count; i++ {
fmt.Println(<-results)
}
}
func someWork(mut *sync.Mutex, done chan string) {
// Lock the mutex, go routine getting lock here,
// is guaranteed to create the timestamp and
// perform the request before any other
mut.Lock()
// Get the timestamp
myTimeStamp := time.Now().Format("15:04:05.00000")
// prepare http request, do http request
// Unlock the mutex
mut.Unlock()
// Process response
// send to done chan the result
done <- myTimeStamp
}
There are still some duplicate timestamps; you may need a more fine-grained timestamp, but that is up to the use case.
I think you can use a WaitGroup, for example:
package main
import (
"fmt"
"sync"
"time"
)
var wg sync.WaitGroup
func hello() {
fmt.Printf("Hello Go %v\n", time.Now().Format("15:04:05.00000"))
time.Sleep(10 * time.Second)
// when you are done, call Done:
wg.Done()
}
func main() {
for i := 0; i < 10; i++ {
wg.Add(1)
go hello()
wg.Wait()
}
}
In chapter 8 of The Go Programming Language, there is a description to the concurrency echo server as below:
The arguments to the function started by go are evaluated when the go statement itself is executed; thus input.Text() is evaluated in the main goroutine.
I don't understand this. Why is input.Text() evaluated in the main goroutine? Shouldn't it be evaluated in the go echo() goroutine?
// Copyright © 2016 Alan A. A. Donovan & Brian W. Kernighan.
// License: https://creativecommons.org/licenses/by-nc-sa/4.0/
// See page 224.
// Reverb2 is a TCP server that simulates an echo.
package main
import (
"bufio"
"fmt"
"log"
"net"
"strings"
"time"
)
func echo(c net.Conn, shout string, delay time.Duration) {
fmt.Fprintln(c, "\t", strings.ToUpper(shout))
time.Sleep(delay)
fmt.Fprintln(c, "\t", shout)
time.Sleep(delay)
fmt.Fprintln(c, "\t", strings.ToLower(shout))
}
//!+
func handleConn(c net.Conn) {
input := bufio.NewScanner(c)
for input.Scan() {
go echo(c, input.Text(), 1*time.Second)
}
// NOTE: ignoring potential errors from input.Err()
c.Close()
}
//!-
func main() {
l, err := net.Listen("tcp", "localhost:8000")
if err != nil {
log.Fatal(err)
}
for {
conn, err := l.Accept()
if err != nil {
log.Print(err) // e.g., connection aborted
continue
}
go handleConn(conn)
}
}
code is here: https://github.com/adonovan/gopl.io/blob/master/ch8/reverb2/reverb.go
For how the go keyword works in Go, see the spec section on go statements:
The function value and parameters are evaluated as usual in the calling goroutine, but unlike with a regular call, program execution does not wait for the invoked function to complete. Instead, the function begins executing independently in a new goroutine. When the function terminates, its goroutine also terminates. If the function has any return values, they are discarded when the function completes.
The function value and parameters are evaluated in place at the go statement (the same holds for the defer keyword).
To understand the evaluation order, let's try this:
go have()(fun("with Go."))
Let's run this and read the code comments for the evaluation order:
package main
import (
"fmt"
"sync"
)
func main() {
go have()(fun("with Go."))
fmt.Print("some ") // evaluation order: ~ 3
wg.Wait()
}
func have() func(string) {
fmt.Print("Go ") // evaluation order: 1
return funWithGo
}
func fun(msg string) string {
fmt.Print("have ") // evaluation order: 2
return msg
}
func funWithGo(msg string) {
fmt.Println("fun", msg) // evaluation order: 4
wg.Done()
}
func init() {
wg.Add(1)
}
var wg sync.WaitGroup
Output:
Go have some fun with Go.
Explanation go have()(fun("with Go.")):
First in place evaluation takes place here:
go have()(...) first have() part runs and the result is fmt.Print("Go ") and return funWithGo, then fun("with Go.") runs, and the result is fmt.Print("have ") and return "with Go."; now we have go funWithGo("with Go.").
So the final goroutine call is go funWithGo("with Go.")
This is a call to start a new goroutine so really we don't know when it will run. So there is a chance for the next line to run: fmt.Print("some "), then we wait here wg.Wait(). Now the goroutine runs this funWithGo("with Go.") and the result is fmt.Println("fun", "with Go.") then wg.Done(); that is all.
Let's rewrite the above code, replacing the named functions with anonymous ones; the code stays the same as above. For example, take:
func have() func(string) {
fmt.Print("Go ") // evaluation order: 1
return funWithGo
}
and inline its body in place of have in go have(). Doing the same for the other functions yields this, which is even more beautiful, with the same result:
package main
import (
"fmt"
"sync"
)
func main() {
var wg sync.WaitGroup
wg.Add(1)
go func() func(string) {
fmt.Print("Go ") // evaluation order: 1
return func(msg string) {
fmt.Println("fun", msg) // evaluation order: 4
wg.Done()
}
}()(func(msg string) string {
fmt.Print("have ") // evaluation order: 2
return msg
}("with Go."))
fmt.Print("some ") // evaluation order: ~ 3
wg.Wait()
}
Let me explain it with a simple example:
1. Consider this simple code:
i := 1
go fmt.Println(i) // 1
This is clear enough: the output is 1.
But if the Go designers had decided to evaluate the function arguments at function run time, nobody would know the value of i; you might change i later in your code (see the next example).
Now let's do this closure:
i := 1
go func() {
time.Sleep(1 * time.Second)
fmt.Println(i) // ?
}()
The output is really unknown, and if the main goroutine exits sooner, the closure won't even get a chance to run. When it does wake up and print i, it prints whatever value i has at that specific moment.
Now let's solve it like so:
i := 1
go func(i int) {
fmt.Printf("Step 3 i is: %d\n", i) // i = 1
}(i)
This anonymous function's argument is of type int, a value type. The value of i is known at the call site, and the compiler-generated code copies the value 1 (i) for the call, so the function will use the value 1 when its time comes (some time in the future).
All (The Go Playground):
package main
import (
"fmt"
"sync"
"time"
)
func main() {
i := 1
go fmt.Println(i) // 1 (when = unknown)
go fmt.Println(2) // 2 (when = unknown)
go func() { // closure
time.Sleep(1 * time.Second)
fmt.Println(" This won't have a chance to run", i) // i = unknown (when = unknown)
}()
i = 3
wg := new(sync.WaitGroup)
wg.Add(1)
go func(i int) {
defer wg.Done()
fmt.Printf("Step 3 i is: %d\n", i) // i = 3 (when = unknown)
}(i)
i = 4
go func(step int) { // closure
fmt.Println(step, i) // i=? (when = unknown)
}(5)
i = 5
fmt.Println(i) // i=5
wg.Wait()
}
Output:
5
5 5
2
1
Step 3 i is: 3
The Go Playground output:
5
5 5
1
2
Step 3 i is: 3
As you may have noticed, the order of 1 and 2 is random, and your output may differ (see the code comments).
Maybe I'm just not reading the spec right, or my mindset is still stuck with older synchronization methods, but what is the right way in Go to send one type and receive something else as a response?
One way I came up with was:
package main
import "fmt"
type request struct {
out chan string
argument int
}
var input = make(chan *request)
var cache = map[int]string{}
func processor() {
for {
select {
case in := <- input:
if result, exists := cache[in.argument]; exists {
in.out <- result
continue // reply with the cached value and skip recomputation
}
result := fmt.Sprintf("%d", in.argument)
cache[in.argument] = result
in.out <- result
}
}
}
func main() {
go processor()
responseCh := make(chan string)
input <- &request{
responseCh,
1,
}
result := <- responseCh
fmt.Println(result)
}
The cache is not really necessary for this example, but if it were accessed from multiple goroutines directly it would cause a data race.
Is this what I'm supposed to do?
There are plenty of possibilities; it depends on the best approach for your problem. When you receive something from a channel, there is no default way of responding – you need to build the flow yourself (and you definitely did, in the example in your question). Sending a response channel with every request gives you great flexibility, as with every request you can choose where to route the response, but quite often that is not necessary.
Here are some other examples:
1. Sending and receiving from the same channel
You can use an unbuffered channel for both sending the requests and receiving the responses. This nicely illustrates that unbuffered channels are in fact synchronisation points in your program. The limitation is of course that the request and the response must be of exactly the same type:
package main
import (
"fmt"
)
func pow2() (c chan int) {
c = make(chan int)
go func() {
for x := range c {
c <- x*x
}
}()
return c
}
func main() {
c := pow2()
c <- 2
fmt.Println(<-c) // = 4
c <- 4
fmt.Println(<-c) // = 16
}
2. Sending to one channel, receiving from another
You can separate the input and output channels, and you could use buffered versions if you wish. This fits a request/response scenario where one goroutine is responsible for sending the requests, another for processing them, and yet another for receiving the responses. Example:
package main
import (
"fmt"
)
func pow2() (in chan int, out chan int) {
in = make(chan int)
out = make(chan int)
go func() {
for x := range in {
out <- x*x
}
}()
return
}
func main() {
in, out := pow2()
go func() {
in <- 2
in <- 4
}()
fmt.Println(<-out) // = 4
fmt.Println(<-out) // = 16
}
3. Sending response channel with every request
This is what you've presented in the question. Gives you a flexibility of specifying the response route. This is useful if you want the response to hit the specific processing routine, for example you have many clients with some tasks to do and you want the response to be received by the same client.
package main
import (
"fmt"
"sync"
)
type Task struct {
x int
c chan int
}
func pow2(in chan Task) {
for t := range in {
t.c <- t.x*t.x
}
}
func main() {
var wg sync.WaitGroup
in := make(chan Task)
// Two processors
go pow2(in)
go pow2(in)
// Five clients with some tasks
for n := 1; n <= 5; n++ {
wg.Add(1)
go func(x int) {
defer wg.Done()
c := make(chan int)
in <- Task{x, c}
fmt.Printf("%d**2 = %d\n", x, <-c)
}(n)
}
wg.Wait()
}
Worth saying, this scenario doesn't necessarily need to be implemented with a per-task return channel. If the result carries some sort of client context (for example a client id), a single multiplexer could receive all the responses and process them according to that context.
Sometimes it doesn't make sense to involve channels to achieve a simple request-response pattern. When designing Go programs, I've caught myself trying to inject too many channels into the system (just because I think they're really great). Good old function calls are sometimes all we need:
package main
import (
"fmt"
)
func pow2(x int) int {
return x*x
}
func main() {
fmt.Println(pow2(2))
fmt.Println(pow2(4))
}
(And this might be a good solution if anyone encounters a similar problem to the one in your example. Echoing the comments you've received under your question: when you have to protect a single structure, like the cache, it might be better to create a struct and expose some methods which protect concurrent use with a mutex.)
This code selects all XML files in the same folder as the invoked executable and asynchronously applies processing to each result in the callback (in the example below, just the name of the file is printed).
How do I avoid using the sleep call to keep the main function from exiting? I have trouble wrapping my head around channels (I assume that's what it takes to synchronize the results), so any help is appreciated!
package main
import (
"fmt"
"io/ioutil"
"path"
"path/filepath"
"os"
"runtime"
"time"
)
func eachFile(extension string, callback func(file string)) {
exeDir := filepath.Dir(os.Args[0])
files, _ := ioutil.ReadDir(exeDir)
for _, f := range files {
fileName := f.Name()
if extension == path.Ext(fileName) {
go callback(fileName)
}
}
}
func main() {
maxProcs := runtime.NumCPU()
runtime.GOMAXPROCS(maxProcs)
eachFile(".xml", func(fileName string) {
// Custom logic goes in here
fmt.Println(fileName)
})
// This is what I want to get rid of
time.Sleep(100 * time.Millisecond)
}
You can use sync.WaitGroup. Quoting the linked example:
package main
import (
"net/http"
"sync"
)
func main() {
var wg sync.WaitGroup
var urls = []string{
"http://www.golang.org/",
"http://www.google.com/",
"http://www.somestupidname.com/",
}
for _, url := range urls {
// Increment the WaitGroup counter.
wg.Add(1)
// Launch a goroutine to fetch the URL.
go func(url string) {
// Decrement the counter when the goroutine completes.
defer wg.Done()
// Fetch the URL.
http.Get(url)
}(url)
}
// Wait for all HTTP fetches to complete.
wg.Wait()
}
WaitGroups are definitely the canonical way to do this. Just for the sake of completeness, though, here's the solution that was commonly used before WaitGroups were introduced. The basic idea is to use a channel to say "I'm done," and have the main goroutine wait until each spawned routine has reported its completion.
func main() {
c := make(chan struct{}) // We don't need any data to be passed, so use an empty struct
for i := 0; i < 100; i++ {
go func() {
doSomething()
c <- struct{}{} // signal that the routine has completed
}()
}
// Since we spawned 100 routines, receive 100 messages.
for i := 0; i < 100; i++ {
<-c
}
}
sync.WaitGroup can help you here.
package main
import (
"fmt"
"sync"
"time"
)
func wait(seconds int, wg *sync.WaitGroup) {
defer wg.Done()
time.Sleep(time.Duration(seconds) * time.Second)
fmt.Println("Slept ", seconds, " seconds ..")
}
func main() {
var wg sync.WaitGroup
for i := 0; i <= 5; i++ {
wg.Add(1)
go wait(i, &wg)
}
wg.Wait()
}
Although sync.WaitGroup is the canonical way forward, it does require you to make at least some of your wg.Add calls before you wg.Wait for all of them to complete. This may not be feasible for things like a web crawler, where you don't know the number of recursive calls beforehand and it takes a while to retrieve the data that drives the wg.Add calls. After all, you need to load and parse the first page before you know the size of the first batch of child pages.
I wrote a solution using channels, avoiding WaitGroup, in my solution to the Tour of Go web crawler exercise. Each time one or more goroutines are started, you send the number to the children channel. Each time a goroutine is about to complete, you send a 1 to the done channel. When the sum of children equals the sum of done, we are done.
My only remaining concern is the hard-coded size of the results channel, but that is a (current) Go limitation.
// recursionController is a data structure with three channels to control our Crawl recursion.
// Tried to use sync.waitGroup in a previous version, but I was unhappy with the mandatory sleep.
// The idea is to have three channels, counting the outstanding calls (children), completed calls
// (done) and results (results). Once outstanding calls == completed calls we are done (if you are
// sufficiently careful to signal any new children before closing your current one, as you may be the last one).
//
type recursionController struct {
results chan string
children chan int
done chan int
}
// instead of instantiating one instance, as we did above, use a more idiomatic Go solution
func NewRecursionController() recursionController {
// we buffer results to 1000, so we cannot crawl more pages than that.
return recursionController{make(chan string, 1000), make(chan int), make(chan int)}
}
// recursionController.Add: convenience function to add children to controller (similar to waitGroup)
func (rc recursionController) Add(children int) {
rc.children <- children
}
// recursionController.Done: convenience function to remove a child from controller (similar to waitGroup)
func (rc recursionController) Done() {
rc.done <- 1
}
// recursionController.Wait will wait until all children are done
func (rc recursionController) Wait() {
fmt.Println("Controller waiting...")
var children, done int
for {
select {
case childrenDelta := <-rc.children:
children += childrenDelta
// fmt.Printf("children found %v total %v\n", childrenDelta, children)
case <-rc.done:
done += 1
// fmt.Println("done found", done)
default:
if done > 0 && children == done {
fmt.Printf("Controller exiting, done = %v, children = %v\n", done, children)
close(rc.results)
return
}
}
}
}
Full source code for the solution
Here is a solution that employs WaitGroup.
First, define 2 utility methods:
package util
import (
"sync"
)
var allNodesWaitGroup sync.WaitGroup
func GoNode(f func()) {
allNodesWaitGroup.Add(1)
go func() {
defer allNodesWaitGroup.Done()
f()
}()
}
func WaitForAllNodes() {
allNodesWaitGroup.Wait()
}
Then, replace the invocation of callback:
go callback(fileName)
With a call to your utility function:
util.GoNode(func() { callback(fileName) })
Last step, add this line at the end of your main, instead of your sleep. This will make sure the main thread is waiting for all routines to finish before the program can stop.
func main() {
// ...
util.WaitForAllNodes()
}