Simplest concurrent loop with bounded concurrency - go

I'm looking for the simplest code for looping over a dataset in parallel. The requirements are that the number of goroutines is fixed and that they can return an error. The following is a quick attempt that doesn't work: it deadlocks, because both goroutines block sending on the error channel, which main only drains after it has finished sending all the values.
package main

import (
	"fmt"
	"sync"
)

func worker(wg *sync.WaitGroup, intChan chan int, errChan chan error) {
	defer wg.Done()
	for i := range intChan {
		fmt.Printf("Got %d\n", i)
		errChan <- nil
	}
}

func main() {
	ints := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
	intChan := make(chan int)
	errChan := make(chan error)
	wg := new(sync.WaitGroup)
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go worker(wg, intChan, errChan)
	}
	for i := range ints {
		intChan <- i
	}
	for range ints {
		err := <-errChan
		fmt.Printf("Error: %v\n", err)
	}
	close(intChan)
	wg.Wait()
}
What is the simplest pattern for doing this?

Listen for errors in a goroutine:
go func() {
	for err := range errChan {
		// Deal with err
	}
}()
for i := 0; i < 2; i++ {
	wg.Add(1)
	go worker(wg, intChan, errChan)
}
for i := range ints {
	intChan <- i
}
close(intChan) // Workers exit their range loop
wg.Wait()      // Wait until the workers have sent their last errors
close(errChan) // Error listener goroutine terminates after this
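
If pulling in golang.org/x/sync is an option, an errgroup with SetLimit is arguably the simplest version of this pattern today. A minimal sketch, assuming a recent errgroup version (SetLimit was added in v0.1.0); process is a hypothetical stand-in for the per-item work:

package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

// process is a hypothetical stand-in for the real per-item work.
func process(i int) error {
	fmt.Printf("Got %d\n", i)
	return nil
}

func main() {
	ints := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}

	var g errgroup.Group
	g.SetLimit(2) // at most two of the closures run concurrently
	for _, i := range ints {
		i := i // capture loop variable (needed before Go 1.22)
		g.Go(func() error {
			return process(i)
		})
	}
	// Wait blocks until every closure has returned and yields the first non-nil error.
	if err := g.Wait(); err != nil {
		fmt.Printf("Error: %v\n", err)
	}
}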

Related

Synchronize Buffered channel and Waitgroup

I am having an issue using a WaitGroup with a buffered channel. The problem is that the WaitGroup completes before the channel has been fully read, so the channel is left half read and the loop breaks partway through.
func main() {
	var wg sync.WaitGroup
	var err error
	start := time.Now()
	students := make([]studentDetails, 0)
	studentCh := make(chan studentDetail, 10000)
	errorCh := make(chan error, 1)

	wg.Add(1)
	go s.getDetailStudents(rCtx, studentCh, errorCh, &wg, s.Link, false)

	go func(ch chan studentDetail, e chan error) {
	LOOP:
		for {
			select {
			case p, ok := <-ch:
				if ok {
					L.Printf("Links %s: [%s]\n", p.title, p.link)
					students = append(students, p)
				} else {
					L.Print("Closed channel")
					break LOOP
				}
			case err = <-e:
				if err != nil {
					break
				}
			}
		}
	}(studentCh, errorCh)

	wg.Wait()
	close(studentCh)
	close(errorCh)
	L.Warnln("closed: all wait-groups completed!")
	L.Warnf("total items fetched: %d", len(students))
	elapsed := time.Since(start)
	L.Warnf("operation took %s", elapsed)
}
The problem is that this function is recursive: it makes an HTTP call to fetch students and then makes more calls depending on a condition.
func (s Student) getDetailStudents(rCtx context.Context, content chan<- studentDetail, errorCh chan<- error, wg *sync.WaitGroup, url string, subSection bool) {
	util.MustNotNil(rCtx)
	L := logger.GetLogger(rCtx)
	defer func() {
		L.Println("Closing all waitgroup!")
		wg.Done()
	}()

	wc := getWC()
	httpClient := wc.Registry.MustHTTPClient()
	res, err := httpClient.Get(url)
	if err != nil {
		L.Fatal(err)
	}
	defer res.Body.Close()

	if res.StatusCode != 200 {
		L.Errorf("status code error: %d %s", res.StatusCode, res.Status)
		errorCh <- errors.New("service_status_code")
		return
	}

	// parse response and return error if found some through errorCh as done above.
	// decide page subSection based on response if it is more.
	if !subSection {
		wg.Add(1)
		go s.getDetailStudents(rCtx, content, errorCh, wg, link, true)
		// L.Warnf("total pages found %d", pageSub.Length()+1)
	}

	// Find students from response list and parse each Student
	students := s.parseStudentItemList(rCtx, item)
	for _, student := range students {
		content <- student
	}
	L.Warnf("Calling HTTP Service for %q with total %d record", url, elementsSub.Length())
}
Variable names have been changed so as not to expose the original code base.
The problem is that students are only partially read once the WaitGroup completes. I am expecting execution to hold until all students are read; in case of an error it should break as soon as the error is encountered.
You need to know when the receiving goroutine completes. The WaitGroup does that for the generating goroutine, so you can use two WaitGroups:
wg.Add(1)
go s.getDetailStudents(rCtx, studentCh, errorCh, &wg, s.Link, false)

wgReader.Add(1)
go func(ch chan studentDetail, e chan error) {
	defer wgReader.Done()
	...
}(studentCh, errorCh)

wg.Wait()
close(studentCh)
close(errorCh)
wgReader.Wait() // Wait for the reader to complete
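
For illustration, a minimal self-contained sketch of this two-WaitGroup pattern, with a plain int channel standing in for the student details:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg, wgReader sync.WaitGroup
	ch := make(chan int, 100)

	// Producer side, tracked by wg.
	wg.Add(1)
	go func() {
		defer wg.Done()
		for i := 0; i < 10; i++ {
			ch <- i
		}
	}()

	// Reader side, tracked by wgReader.
	wgReader.Add(1)
	go func() {
		defer wgReader.Done()
		for v := range ch { // exits when ch is closed and drained
			fmt.Println("got", v)
		}
	}()

	wg.Wait()       // all producers are done, no more sends
	close(ch)       // safe to close now
	wgReader.Wait() // wait until the reader has drained the channel
}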
Since you are using buffered channels, you can retrieve the remaining values after closing the channel. You will also need a mechanism to prevent your main function from exiting too early while the reader is still doing work, as @Burak Serdar has advised.
I restructured the code to give a working example, but it should get the point across.
package main

import (
	"context"
	"log"
	"sync"
	"time"
)

type studentDetails struct {
	title string
	link  string
}

func main() {
	var wg sync.WaitGroup
	var err error
	students := make([]studentDetails, 0)
	studentCh := make(chan studentDetails, 10000)
	errorCh := make(chan error, 1)

	start := time.Now()

	wg.Add(1)
	go getDetailStudents(context.TODO(), studentCh, errorCh, &wg, "http://example.com", false)

	producersDone := wrapWait(&wg) // call wrapWait once, outside the loop

LOOP:
	for {
		select {
		case p, ok := <-studentCh:
			if ok {
				log.Printf("Links %s: [%s]\n", p.title, p.link)
				students = append(students, p)
			} else {
				log.Println("Draining student channel")
				for p := range studentCh {
					log.Printf("Links %s: [%s]\n", p.title, p.link)
					students = append(students, p)
				}
				break LOOP
			}
		case err = <-errorCh:
			if err != nil {
				break LOOP
			}
		case <-producersDone:
			close(studentCh)
			producersDone = nil // disable this case so studentCh is closed only once
		}
	}
	close(errorCh)

	elapsed := time.Since(start)
	log.Printf("operation took %s", elapsed)
}
func getDetailStudents(rCtx context.Context, content chan<- studentDetails, errorCh chan<- error, wg *sync.WaitGroup, url string, subSection bool) {
	defer func() {
		log.Println("Closing")
		wg.Done()
	}()
	if !subSection {
		wg.Add(1)
		go getDetailStudents(rCtx, content, errorCh, wg, url, true)
		// L.Warnf("total pages found %d", pageSub.Length()+1)
	}
	content <- studentDetails{
		title: "title",
		link:  "link",
	}
}

// helper function to allow using a WaitGroup in a select
func wrapWait(wg *sync.WaitGroup) <-chan struct{} {
	out := make(chan struct{})
	go func() {
		wg.Wait()
		out <- struct{}{}
	}()
	return out
}
wg.Add(1)
go func() {
	defer wg.Done()
	// I do not think that you need a recursive function;
	// it makes this function overcomplicated.
	s.getDetailStudents(rCtx, studentCh, errorCh, &wg, s.Link, false)
}()

wg.Add(1)
go func(ch chan studentDetail, e chan error) {
	defer wg.Done()
	...
}(studentCh, errorCh)

wg.Wait()
close(studentCh)
close(errorCh)
This should solve the problem. The s.getDetailStudents function should be simplified; making it recursive does not have any benefit.

Golang: Cannot send error to channel in recover()

I am trying to send an error to the channel on recovery.
Why is this error not sent to the channel?
package main

import (
	"errors"
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	wg.Add(1)
	batchErrChan := make(chan error)
	go func(errchan chan error) {
		defer func() {
			if r := recover(); r != nil {
				errchan <- errors.New("recover err")
			}
			close(batchErrChan)
			wg.Done()
		}()
		panic("ddd")
	}(batchErrChan)
	go func() {
		for range batchErrChan {
			fmt.Println("err in range")
		}
	}()
	wg.Wait()
}
https://play.golang.org/p/0ytunuYDWZU
I expect "err in range" to be printed, but it is not. Why?
Your program can end before the goroutine gets a chance to print the message. Try waiting for it:
...
	done := make(chan struct{})
	go func() {
		for range batchErrChan {
			fmt.Println("err in range")
		}
		close(done)
	}()
	wg.Wait()
	<-done
}
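
An equivalent sketch (not from the original answer), in case you would rather reuse the existing WaitGroup than add a done channel, is to register the receiving goroutine with it as well:

wg.Add(1)
go func() {
	defer wg.Done()
	for range batchErrChan { // exits once the sender closes batchErrChan
		fmt.Println("err in range")
	}
}()
wg.Wait() // now also waits for the receiver to finish printing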

Managing Producer Consumer deadlock in case of failure

Say I have a reader, a manipulator, and a consumer running in different goroutines:
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"

	"github.com/pkg/errors"
)

func Reader(ctx context.Context, chanFromReader chan int) error {
	defer close(chanFromReader)
	for i := 0; i < 100; i++ {
		select {
		case <-ctx.Done():
			return nil
		case chanFromReader <- i:
		}
	}
	return nil
}

func Manipulate(ctx context.Context, chanFromReader chan int, chanToWriter chan int) error {
	defer close(chanToWriter)
	for {
		select {
		case <-ctx.Done():
			return nil
		case x, ok := <-chanFromReader:
			if !ok {
				return nil
			}
			chanToWriter <- 2 * x
		}
	}
}

func Writer(ctx context.Context, chanToWriter chan int) error {
	for {
		select {
		case <-ctx.Done():
			return nil
		case x, ok := <-chanToWriter:
			if !ok {
				return nil
			}
			fmt.Println("Writer: ", x)
			if x == 10 {
				return errors.New("Generate some error in writer")
			}
		}
	}
}

func main() {
	g, ctx := errgroup.WithContext(context.Background())
	chanFromReader := make(chan int)
	chanToWriter := make(chan int)

	func(ctx context.Context, chanToWriter chan int) {
		g.Go(func() error {
			return Writer(ctx, chanToWriter)
		})
	}(ctx, chanToWriter)

	func(ctx context.Context, chanFromReader chan int, chanToWriter chan int) {
		g.Go(func() error {
			return Manipulate(ctx, chanFromReader, chanToWriter)
		})
	}(ctx, chanFromReader, chanToWriter)

	func(ctx context.Context, chanFromReader chan int) {
		g.Go(func() error {
			return Reader(ctx, chanFromReader)
		})
	}(ctx, chanFromReader)

	g.Wait()
	fmt.Println("Main wait done")
}
https://play.golang.org/p/whslVE3rzel
If the writer fails for some reason, I'm having trouble aborting the rest of the goroutines.
In the example above, for instance, even though they listen on ctx for cancellation, they still deadlock when the writer fails. Is there a workaround for this?
I thought of adding this:
func Manipulate(ctx context.Context, chanFromReader chan int, chanToWriter chan int) error {
	defer close(chanToWriter)
	for {
		select {
		case <-ctx.Done():
			return nil
		case x, ok := <-chanFromReader:
			if !ok {
				return nil
			}
			select {
			case <-ctx.Done():
				return nil
			case chanToWriter <- 2 * x:
			}
		}
	}
}
which solves it, but it looks so unclean...
I would propose a solution where each channel gets closed only by the code that creates it. This can be enforced by returning a receive-only channel from the function that creates the channel and is responsible for closing it:
(kudos to mh-cbon for further refining this:)
https://play.golang.org/p/Tq4OVW5sSP4
package main

import (
	"context"
	"fmt"
	"log"
	"sync"
)

func read(ctx context.Context) (<-chan int, <-chan error) {
	ch := make(chan int)
	e := make(chan error)
	go func() {
		defer close(e)
		defer close(ch)
		for i := 0; i < 12; i++ {
			select {
			case <-ctx.Done():
				return
			case ch <- i:
			}
		}
	}()
	return ch, e
}

func manipulate(in <-chan int) (<-chan int, <-chan error) {
	ch := make(chan int)
	e := make(chan error)
	go func() {
		defer close(e)
		defer close(ch)
		for n := range in {
			ch <- 2 * n
		}
	}()
	return ch, e
}

func write(in <-chan int) <-chan error {
	e := make(chan error)
	go func() {
		defer close(e)
		for n := range in {
			fmt.Println("written: ", n)
			if n == 10 {
				e <- fmt.Errorf("output error during write")
			}
		}
	}()
	return e
}

func collectErrors(errs ...<-chan error) {
	var wg sync.WaitGroup
	for i := 0; i < len(errs); i++ {
		wg.Add(1)
		go func(errs <-chan error) {
			defer wg.Done()
			for err := range errs {
				log.Printf("%v", err)
			}
		}(errs[i])
	}
	wg.Wait()
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	ch1, err1 := read(ctx)
	ch2, err2 := manipulate(ch1)
	err3 := write(ch2)
	collectErrors(err1, err2, err3)
	fmt.Println("main wait complete")
}
This way, each channel gets closed reliably by the goroutine that sends on it. Note that in the code above an error from write does not by itself stop the pipeline; if you want it to, cancel the context as soon as an error is collected, which stops read and lets the downstream goroutines drain and exit.
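
One minimal way to wire that up (a sketch reusing the collectErrors helper above, not part of the original answer) is to pass the cancel function in and call it when an error arrives:

// collectErrors logs every error it sees and cancels the pipeline on the first one.
func collectErrors(cancel context.CancelFunc, errs ...<-chan error) {
	var wg sync.WaitGroup
	for _, ec := range errs {
		wg.Add(1)
		go func(ec <-chan error) {
			defer wg.Done()
			for err := range ec {
				log.Printf("%v", err)
				cancel() // stops read; downstream stages exit once their input channels close
			}
		}(ec)
	}
	wg.Wait()
}

main would then call collectErrors(cancel, err1, err2, err3); once cancel fires, read stops producing, and manipulate and write finish as their input channels close.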

Panic while trying to stop creating more goroutines

I'm trying to parallelize calls to an API to speed things up, but I'm facing a problem where I need to stop spinning up goroutines to call the API if I receive an error from one of the goroutine calls. Since I am closing the channel twice (once in the error-handling path and once when execution is done), I'm getting a panic: close of closed channel error. Is there an elegant way to handle this without the program panicking? Any help would be appreciated!
The following is the pseudo-code snippet.
for i := 0; i < someNumber; i++ {
	go func(num int, q chan<- bool) {
		value, err := callAnAPI()
		if err != nil {
			close(q) // exit from the for-loop
		}
		// process the value here
		wg.Done()
	}(i, quit)
}
close(quit)
To mock my scenario, I have written the following program. Is there any way to exit the for-loop gracefully once the condition (commented out) is satisfied?
package main

import (
	"fmt"
	"sync"
)

func receive(q <-chan bool) {
	for {
		select {
		case <-q:
			return
		}
	}
}

func main() {
	quit := make(chan bool)
	var result []int
	wg := &sync.WaitGroup{}
	wg.Add(10)
	for i := 0; i < 10; i++ {
		go func(num int, q chan<- bool) {
			//if num == 5 {
			//	close(q)
			//}
			result = append(result, num)
			wg.Done()
		}(i, quit)
	}
	close(quit)
	receive(quit)
	wg.Wait()
	fmt.Printf("Result: %v", result)
}
You can use the context package, which defines the Context type; a Context carries deadlines, cancellation signals, and other request-scoped values across API boundaries and between processes.
package main

import (
	"context"
	"fmt"
	"sync"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel() // cancel when we are finished, even without error

	wg := &sync.WaitGroup{}
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(num int) {
			defer wg.Done()
			select {
			case <-ctx.Done():
				return // Error occurred somewhere, terminate
			default: // avoid blocking
			}
			// your code here
			// res, err := callAnAPI()
			// if err != nil {
			//	cancel()
			//	return
			// }
			if num == 5 {
				cancel()
				return
			}
			fmt.Println(num)
		}(i)
	}
	wg.Wait()
	fmt.Println(ctx.Err())
}
Try on: Go Playground
You can also take a look at this answer for a more detailed explanation.
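
If a dependency is acceptable, the same stop-on-first-error behaviour can also be expressed with golang.org/x/sync/errgroup, whose derived context is cancelled as soon as any task returns an error. A minimal sketch; callAnAPI is reduced here to a stand-in that fails for one input:

package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// callAnAPI is a stand-in for the real API call.
func callAnAPI(num int) error {
	if num == 5 {
		return fmt.Errorf("API failed for %d", num)
	}
	fmt.Println(num)
	return nil
}

func main() {
	g, ctx := errgroup.WithContext(context.Background())
	for i := 0; i < 10; i++ {
		i := i // capture loop variable (needed before Go 1.22)
		g.Go(func() error {
			select {
			case <-ctx.Done():
				return ctx.Err() // another call already failed, skip the work
			default:
			}
			return callAnAPI(i)
		})
	}
	// Wait returns the first non-nil error; the derived context is cancelled at that point.
	fmt.Println(g.Wait())
}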

Which is the better code for goroutine overhead?

I want to make Reader.Read concurrent using channel communication, so I came up with two ways to run it.
1:
type ReturnRead struct {
	n   int
	err error
}

type ReadGoSt struct {
	Returnc <-chan ReturnRead
	Nextc   chan struct{}
}

func (st *ReadGoSt) Close() {
	defer func() {
		recover()
	}()
	close(st.Nextc)
}

func ReadGo(r io.Reader, b []byte) *ReadGoSt {
	returnc := make(chan ReturnRead)
	nextc := make(chan struct{})
	go func() {
		for range nextc {
			n, err := r.Read(b)
			returnc <- ReturnRead{n, err}
			if err != nil {
				return
			}
		}
	}()
	return &ReadGoSt{returnc, nextc}
}
2:
func ReadGo(r io.Reader, b []byte) <-chan ReturnRead {
	returnc := make(chan ReturnRead)
	go func() {
		n, err := r.Read(b)
		returnc <- ReturnRead{n, err}
	}()
	return returnc
}
I think code 2 creates too much overhead.
Which is the better code, 1 or 2?
Code 1 is better and probably faster; code 2 will only read once per call. But I don't think either solution is the best: you should loop over the read and send back only the bytes that were read.
Something like: http://play.golang.org/p/zRPXOtdgWD
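
The playground code isn't reproduced here, but a rough sketch of that approach (my own illustration, not the linked code): a goroutine owns the buffer, loops over Read, and sends a copy of only the bytes that were actually read each time:

package main

import (
	"fmt"
	"io"
	"strings"
)

// readLoop reads from r in a loop and sends a copy of each chunk it read.
// The output channel is closed when the reader is exhausted or fails.
func readLoop(r io.Reader) <-chan []byte {
	out := make(chan []byte)
	go func() {
		defer close(out)
		buf := make([]byte, 8) // small buffer to show chunking
		for {
			n, err := r.Read(buf)
			if n > 0 {
				chunk := make([]byte, n)
				copy(chunk, buf[:n]) // copy so the buffer can be reused
				out <- chunk
			}
			if err != nil { // io.EOF or a real error ends the loop
				return
			}
		}
	}()
	return out
}

func main() {
	for chunk := range readLoop(strings.NewReader("hello, concurrent reader")) {
		fmt.Printf("%q\n", chunk)
	}
}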
