WaitGroups and synchronization not working in Go

I've been experimenting with goroutines and channels, and I wanted to test the WaitGroup feature. Here I'm trying to execute an HTTP flood job, where the parent goroutine spawns a lot of goroutines that make requests in an infinite loop until they receive a stop message:
func (hf *HTTPFlood) Run() {
    childrenStop := make(chan int, hf.ConcurrentCalls)
    stop := false
    totalRequests := 0
    requestsChan := make(chan int)
    totalErrors := 0
    errorsChan := make(chan int)
    var wg sync.WaitGroup

    for i := 0; i < hf.ConcurrentCalls; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for {
                select {
                case <-childrenStop:
                    fmt.Printf("stop child\n")
                    return
                default:
                    _, err := Request(hf.Victim.String())
                    requestsChan <- 1
                    if err != nil {
                        errorsChan <- 1
                    }
                }
            }
        }()
    }

    timeout := time.NewTimer(time.Duration(MaximumJobTime) * time.Second)
    for !stop {
        select {
        case req := <-requestsChan:
            totalRequests += req
        case err := <-errorsChan:
            totalErrors += err
        case <-timeout.C:
            fmt.Printf("%s timed up\n", hf.Victim.String())
            for i := 0; i < hf.ConcurrentCalls; i++ {
                childrenStop <- 1
            }
            close(childrenStop)
            stop = true
            break
        }
    }

    fmt.Printf("waiting\n")
    wg.Wait()
    fmt.Printf("after wait\n")
    close(requestsChan)
    close(errorsChan)
    fmt.Printf("end\n")
}
Once the timeout fires, the parent goroutine successfully exits the loop and reaches the Wait call, but even though the childrenStop channel is filled, the child goroutines never seem to receive messages on it.
What am I missing?
EDIT:
So the issue obviously was how the channels and their sends/receives were managed.
First of all, the childrenStop channel was closed before all children had received the message. The channel should be closed after the Wait.
On the other hand, since no reads were done on either requestsChan or errorsChan once the parent goroutine sent the stop signal, most of the children stayed blocked sending on these two channels. I tried to keep reading in the parent goroutine, outside the loop just before the Wait, but that didn't work, so I switched the implementation to atomic counters, which seem to be a more suitable way to manage this specific use case.
func (hf *HTTPFlood) Run() {
    childrenStop := make(chan int, hf.ConcurrentCalls)
    var totalReq uint64
    var totalErrors uint64
    var wg sync.WaitGroup

    for i := 0; i < hf.ConcurrentCalls; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for {
                select {
                case <-childrenStop:
                    fmt.Printf("stop child\n")
                    return
                default:
                    _, err := Request(hf.Victim.String())
                    atomic.AddUint64(&totalReq, 1)
                    if err != nil {
                        atomic.AddUint64(&totalErrors, 1)
                    }
                }
            }
        }()
    }

    timeout := time.NewTimer(time.Duration(MaximumJobTime) * time.Second)
    <-timeout.C
    fmt.Printf("%s timed up\n", hf.Victim.String())
    for i := 0; i < hf.ConcurrentCalls; i++ {
        childrenStop <- 1
    }

    fmt.Printf("waiting\n")
    wg.Wait()
    fmt.Printf("after wait\n")
    close(childrenStop)
    fmt.Printf("end\n")
}

Your goroutines can be blocked at requestsChan <- 1.
case <-timeout.C:
    fmt.Printf("%s timed up\n", hf.Victim.String())
    for i := 0; i < hf.ConcurrentCalls; i++ {
        childrenStop <- 1
    }
    close(childrenStop)
    stop = true
    break
Here you are sending a number to childrenStop and expect the goroutines to receive it. But while you are sending on childrenStop, your goroutines could have sent something on requestsChan. And since you break out of the loop after sending the stop signals, no one is left listening on requestsChan to receive.
You can confirm this by printing something just before and after requestsChan <- 1 to observe the behaviour.
A send on a channel blocks while no one is receiving on the other end.
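To see that blocking rule in isolation, here is a minimal standalone sketch (my own, not the asker's code) that deadlocks on an unbuffered send:
package main

func main() {
    ch := make(chan int) // unbuffered: every send must pair with a receive
    ch <- 1              // blocks forever: no goroutine ever receives
    // the runtime reports: fatal error: all goroutines are asleep - deadlock!
}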
Here is a possible modification.
package main

import (
    "fmt"
    "time"
)

func main() {
    requestsChan := make(chan int)
    done := make(chan chan bool)
    for i := 0; i < 5; i++ {
        go func(it int) {
            for {
                select {
                case c := <-done:
                    c <- true
                    return
                default:
                    requestsChan <- it
                }
            }
        }(i)
    }

    max := time.NewTimer(1 * time.Millisecond)
    allChildrenDone := make(chan bool)
    childrenDone := 0
    childDone := make(chan bool)
    go func() {
        for {
            select {
            case i := <-requestsChan:
                fmt.Printf("received %d;", i)
            case <-max.C:
                fmt.Println("\nTimeup")
                for i := 0; i < 5; i++ {
                    go func() {
                        done <- childDone
                        fmt.Println("sent done")
                    }()
                }
            case <-childDone:
                childrenDone++
                fmt.Println("child done ", childrenDone)
                if childrenDone == 5 {
                    allChildrenDone <- true
                    return
                }
            }
        }
    }()

    fmt.Println("Waiting")
    <-allChildrenDone
}
The thing to note here is that I'm sending the close signal in goroutines, so the loop can continue while I wait for all the children to exit cleanly.
Please watch this talk by Rob Pike which covers these details clearly.
[Edit]: The previous code would have left a goroutine still running after exiting.
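As an aside, a close on a channel is observed by every receiver, so an alternative to sending one done value per child is to broadcast the stop with a single close. A minimal sketch (my own, with a placeholder sleep standing in for real work):
package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    done := make(chan struct{})
    var wg sync.WaitGroup
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for {
                select {
                case <-done: // a closed channel is always ready to receive
                    fmt.Println("worker", id, "stopping")
                    return
                default:
                    time.Sleep(10 * time.Millisecond) // stand-in for real work
                }
            }
        }(i)
    }
    time.Sleep(100 * time.Millisecond)
    close(done) // one close stops every worker; no per-child sends needed
    wg.Wait()
}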

Related

Why is there a deadlock although I tried closing the channel after sending all values?

Why is there a deadlock after printing all values?
What I understand: on the receiving side, the channel keeps waiting for more values, which blocks the main goroutine. I also tried a WaitGroup, but that doesn't work either.
package main

import (
    "fmt"
    //"sync"
)

//output from 10 20 30 ... - 100
func main() {
    //wg := sync.WaitGroup{}
    done := make(chan int)
    for i := 1; i <= 10; i++ {
        //wg.Add(1)
        go func(i int) {
            done <- i * 10
        }(i)
        // close(done)
    }
    // close(done)
    //wg.Wait()
    // for item := range done{
    //     fmt.Println(item)}
    for {
        if value, ok := <-done; ok {
            fmt.Println("received is ", value)
        } else {
            return
            //os.Exit(1)
        }
    }
}
As per the answer from @aureliar, value, ok := <-done will block until a value is received on the channel or the channel is closed (and once your goroutines complete, neither of these happens). From your question and the comments in your code it looks like you were close to working out how to solve this by waiting for the goroutines to complete and then closing the channel.
Because you know the number of goroutines in advance (and each goroutine always sends one value on the channel) there is a simple solution (playground):
func main() {
    noOfGoRoutines := 10
    done := make(chan int)
    for i := 1; i <= noOfGoRoutines; i++ {
        go func(i int) {
            done <- i * 10
        }(i)
    }
    for noOfGoRoutines > 0 {
        value := <-done
        fmt.Println("received is ", value)
        noOfGoRoutines--
    }
}
Things get a bit more complicated when you don't know how many values will be received in advance. In that case closing the channel is a good way of letting the receiver know you have finished. In your case this means closing the channel after the goroutines have completed (this is important because sending to a closed channel leads to a panic).
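To see why that ordering matters, here is a minimal standalone sketch (not the question's code) of the failure mode:
package main

func main() {
    done := make(chan int)
    close(done)
    done <- 1 // panic: send on closed channel
}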
To do this using a WaitGroup you will need three functions:
wg.Add(delta int) to set the counter. The call you have commented out is fine but an alternative is to call wg.Add(10) before entering the loop.
wg.Done() "decrements the WaitGroup counter by one" - you need to call this before each goroutine ends (its common to defer the call). This was missing from your code.
wg.Wait() "blocks until the WaitGroup counter is zero".
With the above in place you can safely call close(done) knowing that nothing further will be sent to the channel. However there is a complication - if you just add this code after your first loop you will hit another deadlock because your goroutines will all block at done <- i * 10. This happens because main will be blocked at wg.Wait() meaning nothing is receiving from the channel and as per the spec:
If the capacity is zero or absent, the channel is unbuffered and communication succeeds only when both a sender and receiver are ready.
This can be solved by waiting/closing within another goroutine.
You can try this in the playground.
package main

import (
    "fmt"
    "sync"
)

//output from 10 20 30 ... - 100
func main() {
    wg := sync.WaitGroup{}
    done := make(chan int)
    for i := 1; i <= 10; i++ {
        wg.Add(1)
        go func(i int) {
            done <- i * 10
            wg.Done()
        }(i)
    }
    go func() {
        wg.Wait()
        close(done)
    }()
    for value := range done {
        fmt.Println("received is ", value)
    }
    fmt.Println("channel closed")
    /* I have simplified your loop but this would also work
    for {
        if value, ok := <-done; ok {
            fmt.Println("received is ", value)
        } else {
            fmt.Println("channel closed")
            return
        }
    }
    */
}
Because you never close the channel. So the value, ok := <-done part is always waiting for the 11th value that will never come.
Replacing this part should do the trick:
for i := 1; i <= 10; i++ {
    //wg.Add(1)
    go func(i int) {
        done <- i * 10
    }(i)
    // close(done)
}
new:
go func() {
    for i := 1; i <= 10; i++ {
        done <- i * 10
    }
    close(done)
}()

Deadlock using channels as queues

I'm learning Go and I am trying to implement a job queue.
What I'm trying to do is:
Have the main goroutine feed lines through a channel to multiple parser workers (each parsing a line into a struct), and have each parser send the struct to a channel of structs that other workers (goroutines) will process (send to database, etc.).
The code looks like this:
lineParseQ := make(chan string, 5)
jobProcessQ := make(chan myStruct, 5)
doneQ := make(chan myStruct, 5)

fileName := "myfile.csv"
file, err := os.Open(fileName)
if err != nil {
    log.Fatal(err)
}
defer file.Close()
reader := bufio.NewReader(file)

// Start line parsing workers and send to jobProcessQ
for i := 1; i <= 2; i++ {
    go lineToStructWorker(i, lineParseQ, jobProcessQ)
}

// Process myStruct from jobProcessQ
for i := 1; i <= 5; i++ {
    go WorkerProcessStruct(i, jobProcessQ, doneQ)
}

lineCount := 0
countSend := 0
for {
    line, err := reader.ReadString('\n')
    if err != nil && err != io.EOF {
        log.Fatal(err)
    }
    if err == io.EOF {
        break
    }
    lineCount++
    if lineCount > 1 {
        countSend++
        lineParseQ <- line[:len(line)-1] // Avoid last char '\n'
    }
}

for i := 0; i < countSend; i++ {
    fmt.Printf("Received %+v.\n", <-doneQ)
}
close(doneQ)
close(jobProcessQ)
close(lineParseQ)
Here's a simplified playground: https://play.golang.org/p/yz84g6CJraa
The workers look like this:
func lineToStructWorker(workerID int, lineQ <-chan string, strQ chan<- myStruct) {
    for j := range lineQ {
        strQ <- lineToStruct(j) // just parses the csv to a struct...
    }
}

func WorkerProcessStruct(workerID int, strQ <-chan myStruct, done chan<- myStruct) {
    for a := range strQ {
        time.Sleep(time.Millisecond * 500) // fake long operation...
        done <- a
    }
}
I know the problem is related to the "done" channel because if I don't use it, there's no error, but I can't figure out how to fix it.
You don't start reading from doneQ until you've finished sending all the lines to lineParseQ, which is more lines than there is buffer space. So once the doneQ buffer is full, that send blocks, which starts filling the lineParseQ buffer, and once that's full, it deadlocks. Move either the loop sending to lineParseQ, the loop reading from doneQ, or both, to separate goroutine(s), e.g.:
go func() {
    for _, line := range lines {
        countSend++
        lineParseQ <- line
    }
    close(lineParseQ)
}()
This will still deadlock at the end, because you have a range over a channel and the close after it in the same goroutine; since range continues until the channel is closed, and the close comes after the range finishes, you still have a deadlock. You need to put the closes in appropriate places: either in the sending routine, or blocked on a WaitGroup monitoring the sending routines if there are multiple senders for a given channel.
// Start line parsing workers and send to jobProcessQ
wg := new(sync.WaitGroup)
for i := 1; i <= 2; i++ {
    wg.Add(1)
    go lineToStructWorker(i, lineParseQ, jobProcessQ, wg)
}

// Process myStruct from jobProcessQ
for i := 1; i <= 5; i++ {
    go WorkerProcessStruct(i, jobProcessQ, doneQ)
}

countSend := 0
go func() {
    for _, line := range lines {
        countSend++
        lineParseQ <- line
    }
    close(lineParseQ)
}()

go func() {
    wg.Wait()
    close(jobProcessQ)
}()

for a := range doneQ {
    fmt.Printf("Received %v.\n", a)
}

// ...

func lineToStructWorker(workerID int, lineQ <-chan string, strQ chan<- myStruct, wg *sync.WaitGroup) {
    for j := range lineQ {
        strQ <- lineToStruct(j) // just parses the csv to a struct...
    }
    wg.Done()
}

func WorkerProcessStruct(workerID int, strQ <-chan myStruct, done chan<- myStruct) {
    for a := range strQ {
        time.Sleep(time.Millisecond * 500) // fake long operation...
        done <- a
    }
    // note: this close is only safe with a single WorkerProcessStruct goroutine;
    // with several, guard the close with its own WaitGroup as described above
    close(done)
}
Full working example here: https://play.golang.org/p/XsnewSZeb2X
Coordinate the pipeline with sync.WaitGroup, breaking each piece into stages. When you know one piece of the pipeline is complete (and no one is writing to a particular channel), close the channel to instruct all "workers" to exit, e.g.
var wg sync.WaitGroup
for i := 1; i <= 5; i++ {
    i := i
    wg.Add(1)
    go func() {
        Worker(i)
        wg.Done()
    }()
}
// wg.Wait() signals the above have completed
Buffered channels are handy for handling burst workloads, but sometimes they are used to avoid deadlocks in poor designs. If you want to avoid running certain parts of your pipeline in a goroutine, you can buffer some channels (typically matching the number of workers) to avoid a blockage in your main goroutine.
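A minimal sketch of that sizing idea (hypothetical names, my own; each worker sends exactly one result):
package main

import (
    "fmt"
    "sync"
)

func main() {
    const numWorkers = 3
    // one buffer slot per worker: every final send can complete even
    // before the main goroutine starts draining the channel
    results := make(chan int, numWorkers)

    var wg sync.WaitGroup
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            results <- id * 10 // never blocks: the buffer matches the worker count
        }(i)
    }
    wg.Wait() // safe only because the buffer absorbs every send
    close(results)

    for r := range results {
        fmt.Println("result:", r)
    }
}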
If you have dependent pieces that read and write and want to avoid deadlock, ensure they are in separate goroutines. Giving every part of the pipeline its own goroutine will even remove the need for buffered channels:
// putting all channel work into separate goroutines
// removes the need for buffered channels
lineParseQ := make(chan string)
jobProcessQ := make(chan myStruct)
doneQ := make(chan myStruct)
It's a tradeoff of course: a goroutine costs about 2K in resources, versus a buffered channel slot which costs much less. As with most designs, it depends on how it is used.
Also, don't get caught by the notorious Go for-loop gotcha; rebind the loop variable in a per-iteration copy to avoid it:
for i := 1; i <= 5; i++ {
    i := i // new i (not the i above)
    go func() {
        myfunc(i) // otherwise all goroutines would most likely see the final value of i
    }()
}
Finally, ensure you wait for all results to be processed before exiting.
It's a common mistake to return from a channel-based function and believe all results have been processed. In a service this will eventually be true, but in a standalone executable the processing loop may still be working on results.
go func() {
    wgW.Wait()   // waiting on worker goroutines to finish
    close(doneQ) // safe to close results channel now
}()

// ensure we don't return until all results have been processed
for a := range doneQ {
    fmt.Printf("Received %v.\n", a)
}
By processing the results in the main goroutine, we ensure we don't return prematurely without having processed everything.
Pulling it all together:
https://play.golang.org/p/MjLpQ5xglP3

Channels and Graceful shutdown deadlock

Run the program below and press CTRL+C: the handle goroutine gets blocked because it is trying to send on a channel whose process goroutine has already shut down. What is a better concurrency design to solve this?
Edited the program to describe the problem, applying the rules suggested here: https://stackoverflow.com/a/66708290/4106031
package main

import (
    "context"
    "fmt"
    "os"
    "os/signal"
    "sync"
    "syscall"
    "time"
)

func process(ctx context.Context, c chan string) {
    fmt.Println("process: processing (select)")
    for {
        select {
        case <-ctx.Done():
            fmt.Printf("process: ctx done bye\n")
            return
        case i := <-c:
            fmt.Printf("process: received i: %v\n", i)
        }
    }
}

func handle(ctx context.Context, readChan <-chan string) {
    c := make(chan string, 1)
    wg := &sync.WaitGroup{}
    wg.Add(1)
    go func() {
        process(ctx, c)
        wg.Done()
    }()
    defer wg.Wait()

    for i := 0; ; i++ {
        select {
        case <-ctx.Done():
            fmt.Printf("handle: ctx done bye\n")
            return
        case i := <-readChan:
            fmt.Printf("handle: received: %v\n", i)
            fmt.Printf("handle: sending for processing: %v\n", i)
            // suppose huge time passes here
            // to cause the issue we want to happen
            // we want the process() to exit due to ctx
            // cancellation before send to it happens, this creates deadlock
            time.Sleep(5 * time.Second)
            // deadlock
            c <- i
        }
    }
}

func main() {
    wg := &sync.WaitGroup{}
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    readChan := make(chan string, 10)
    wg.Add(1)
    go func() {
        defer wg.Done()
        for i := 0; ; i++ {
            select {
            case <-ctx.Done():
                fmt.Printf("read: ctx done bye\n")
                return
            case readChan <- fmt.Sprintf("%d", i):
                fmt.Printf("read: sent msg: %v\n", i)
            }
        }
    }()

    wg.Add(1)
    go func() {
        handle(ctx, readChan)
        wg.Done()
    }()

    go func() {
        sigterm := make(chan os.Signal, 1)
        signal.Notify(sigterm, syscall.SIGINT, syscall.SIGTERM)
        select {
        case <-sigterm:
            fmt.Printf("SIGTERM signal received\n")
            cancel()
        }
    }()

    wg.Wait()
}
Output
$ go run chan-shared.go
read: sent msg: 0
read: sent msg: 1
read: sent msg: 2
read: sent msg: 3
process: processing (select)
read: sent msg: 4
read: sent msg: 5
read: sent msg: 6
handle: received: 0
handle: sending for processing: 0
read: sent msg: 7
read: sent msg: 8
read: sent msg: 9
read: sent msg: 10
handle: received: 1
handle: sending for processing: 1
read: sent msg: 11
process: received i: 0
process: received i: 1
read: sent msg: 12
handle: received: 2
handle: sending for processing: 2
^CSIGTERM signal received
process: ctx done bye
read: ctx done bye
handle: received: 3
handle: sending for processing: 3
Killed: 9
The step-by-step review:
Always cancel the context, no matter what:
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
Don't wg.Add after starting a goroutine; do it before the go statement:
wg.Add(1)
go handle(ctx, wg)
Don't scatter waitgroup bookkeeping; keep the Done next to the call it guards rather than passing the WaitGroup around:
wg.Add(1)
go func() {
    handle(ctx)
    wg.Done()
}()
Don't for-loop over a signal channel with a default case. Just read from it and let the read block:
<-sigterm
fmt.Printf("SIGTERM signal received\n")
main should never block on signals; it blocks on the processing goroutines. Signal handling should just do signaling, i.e. cancel the context:
go func() {
    sigterm := make(chan os.Signal, 1)
    signal.Notify(sigterm, syscall.SIGINT, syscall.SIGTERM)
    <-sigterm
    fmt.Printf("SIGTERM signal received\n")
    cancel()
}()
It is possible to check for context cancellation on channel writes.
select {
case <-ctx.Done():
    fmt.Printf("process: ctx done bye\n")
    return
case c <- fmt.Sprintf("%d", i):
    fmt.Printf("handled: sent to channel: %v\n", i)
}
Don't time.Sleep; you can't check for context cancellation while sleeping. Use a select with time.After instead:
select {
case <-ctx.Done():
    fmt.Printf("process: ctx done bye\n")
    return
case <-time.After(time.Second * 5):
}
A complete revised version of the code, with those various rules applied:
package main

import (
    "context"
    "fmt"
    "os"
    "os/signal"
    "sync"
    "syscall"
    "time"
)

func process(ctx context.Context, c chan string) {
    fmt.Println("process: processing (select)")
    for {
        select {
        case <-ctx.Done():
            fmt.Printf("process: ctx done bye\n")
            return
        case msg := <-c:
            fmt.Printf("process: got msg: %v\n", msg)
        }
    }
}

func handle(ctx context.Context) {
    c := make(chan string, 3)
    wg := &sync.WaitGroup{}
    wg.Add(1)
    go func() {
        process(ctx, c)
        wg.Done()
    }()
    defer wg.Wait()

    for i := 0; ; i++ {
        select {
        case <-ctx.Done():
            fmt.Printf("process: ctx done bye\n")
            return
        case <-time.After(time.Second * 5):
        }
        select {
        case <-ctx.Done():
            fmt.Printf("process: ctx done bye\n")
            return
        case c <- fmt.Sprintf("%d", i):
            fmt.Printf("handled: sent to channel: %v\n", i)
        }
    }
}

func main() {
    wg := &sync.WaitGroup{}
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    wg.Add(1)
    go func() {
        handle(ctx)
        wg.Done()
    }()

    go func() {
        sigterm := make(chan os.Signal, 1)
        signal.Notify(sigterm, syscall.SIGINT, syscall.SIGTERM)
        <-sigterm
        fmt.Printf("SIGTERM signal received\n")
        cancel()
    }()

    wg.Wait()
}
There is more to tell about exit conditions, but this is dependent on the requirements.
As mentioned in https://stackoverflow.com/a/66708290/4106031, this change fixed the issue for me. Thanks mh-cbon for the rules too!

Why is there such a result?

I wrote a Go script to scan for open ports, using sync.WaitGroup to control the number of goroutines.
When the number of goroutines is too large, such as 2000, the result is different from 1000.
It looks as if the program exits early. The code is shown below:
func worker(wg *sync.WaitGroup) {
    for job := range jobs {
        _, err := net.DialTimeout("tcp", fmt.Sprintf("%s:%d", job.host, job.port), time.Millisecond*1500)
        if err != nil {
            results <- Result{job, false}
        } else {
            results <- Result{job, true}
        }
    }
    wg.Done()
}

func main() {
    go func() {
        for i := 1; i < 65535; i++ {
            jobs <- Job{host, i}
        }
        close(jobs)
    }()

    go func() {
        for result := range results {
            if result.status {
                fmt.Println(result.job, "open")
            }
        }
    }()

    wg := sync.WaitGroup{}
    for i := 1; i < 1000; i++ {
        wg.Add(1)
        go worker(&wg)
    }
    wg.Wait()
}
With 1000 goroutines:
{127.0.0.1 80} open
{127.0.0.1 631} open
{127.0.0.1 3306} open
{127.0.0.1 6379} open
{127.0.0.1 33060} open
With 2000 goroutines:
{127.0.0.1 80} open
{127.0.0.1 631} open
I want the 2000-goroutine run to output all open ports, like the 1000-goroutine run does.
You do not wait for the two "non-worker" goroutines in main, so as soon as wg.Wait() returns there, the process shuts down, tearing down any outstanding goroutines.
Since one of them is processing the results, this appears to you as if not all the tasks were processed (and this is true).
Close the results channel when workers are done. Process the results in the main goroutine.
wg := sync.WaitGroup{}
for i := 1; i < 1000; i++ {
    wg.Add(1)
    go worker(&wg)
}

go func() {
    for i := 1; i < 65535; i++ {
        jobs <- Job{host, i}
    }
    // No more jobs, exit from worker loops.
    close(jobs)
    // Wait for workers to write all results and exit.
    wg.Wait()
    // No more results, exit from main loop.
    close(results)
}()

for result := range results {
    if result.status {
        fmt.Println(result.job, "open")
    }
}
View the complete program on the Go Playground.

Program hangs with channel

I want to use goroutines to batch requests from different customers with different dates.
I mean 50 consumer goroutines to consume all customers from the db, and 2 date consumer goroutines to consume the date slice.
The main code is below, but it hangs and doesn't exit as expected.
Why doesn't it exit as expected?
func Run() {
    var syncWg sync.WaitGroup
    syncWg.Add(1)
    go SyncCustomerMetricsHistory(&syncWg)
    syncWg.Wait()
}

func SyncCustomerMetricsHistory(wg *sync.WaitGroup) {
    defer wg.Done()
    odb := orm.NewOrm()
    start := time.Now()
    logs.Info("start sync customer metrics, time:[%v]", start)

    qs := odb.QueryTable("gg_customer")
    var customers []*db.GgCustomer
    if num, err := qs.All(&customers); err != nil || num == 0 {
        logs.Error("Get customer error, rows:[%v], err:[%v]", num, err)
    }

    customersChan := make(chan *db.GgCustomer, 50)
    var wgC sync.WaitGroup
    wgC.Add(50)
    for i := 0; i < 50; i++ {
        go syncCustomerMetricsHistory(customersChan, &wgC)
    }

    go func() {
        for _, customer := range customers {
            customersChan <- customer
        }
        close(customersChan)
    }()

    wgC.Wait()
}

func syncCustomerMetricsHistory(customerChan <-chan *db.GgCustomer, wg *sync.WaitGroup) {
    defer wg.Done()
    for customer := range customerChan {
        dateChan := make(chan string, 2)
        var wgD sync.WaitGroup
        wgD.Add(2)
        for i := 1; i < 2; i++ {
            go test(dateChan, customer, &wgD)
        }
        go func() {
            for _, date := range GetAllYearDate() {
                dateChan <- date
            }
            close(dateChan)
        }()
        wgD.Wait()
    }
}

func test(dateChan <-chan string, customer *db.GgCustomer, wg *sync.WaitGroup) {
    defer wg.Done()
    for date := range dateChan {
        fmt.Println(date, customer)
    }
}

func GetAllYearDate() []string {
    return []string{"2019-10-01", "2019-10-02"}
}
I have not tried to run this (as it requires additional code) but believe your issue is:
wgD.Add(2)
for i := 1; i < 2; i++ {
    go test(dateChan, customer, &wgD)
}
That for loop will only iterate once, but you called wgD.Add(2) (I think you probably meant the loop to iterate twice; try i <= 2).
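A minimal standalone sketch (not the question's code) of that failure mode, where the Done calls never match the Add count:
package main

import "sync"

func main() {
    var wg sync.WaitGroup
    wg.Add(2)    // counter says two workers...
    go wg.Done() // ...but only one Done ever runs
    wg.Wait()    // blocks forever: the counter never reaches zero
    // the runtime reports: fatal error: all goroutines are asleep - deadlock!
}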
One other bit of feedback: the way you are using waitgroups will work but is hard to follow (perhaps contributing to you not spotting the issue); how about something like:
func Run() {
    SyncCustomerMetricsHistory() // No wait group needed as this will not return before done
}

func SyncCustomerMetricsHistory() {
    odb := orm.NewOrm()
    start := time.Now()
    logs.Info("start sync customer metrics, time:[%v]", start)

    qs := odb.QueryTable("gg_customer")
    var customers []*db.GgCustomer
    if num, err := qs.All(&customers); err != nil || num == 0 {
        logs.Error("Get customer error, rows:[%v], err:[%v]", num, err)
    }

    customersChan := make(chan *db.GgCustomer, 50)
    var wgC sync.WaitGroup
    wgC.Add(50)
    for i := 0; i < 50; i++ {
        go func() {
            syncCustomerMetricsHistory(customersChan)
            wgC.Done()
        }()
    }

    go func() {
        for _, customer := range customers {
            customersChan <- customer
        }
        close(customersChan)
    }()

    wgC.Wait()
}

func syncCustomerMetricsHistory(customerChan <-chan *db.GgCustomer) {
    for customer := range customerChan {
        dateChan := make(chan string, 2)
        var wgD sync.WaitGroup
        wgD.Add(2)
        for i := 1; i < 2; i++ {
            go func() {
                test(dateChan, customer)
                wgD.Done()
            }()
        }
        go func() {
            for _, date := range GetAllYearDate() {
                dateChan <- date
            }
            close(dateChan)
        }()
        wgD.Wait()
    }
}
I think this is easier to follow because you can see where wg.Done() is being called. It's also really easy to stick some fmt.Println commands on either side which makes it simpler to debug this kind of issue.
