I have two sets of code that read a file with random lines of text and load each line into a channel. I am unable to understand why one returns an error but the other doesn't. Case #1 returns "fatal error: all goroutines are asleep - deadlock!" but Case #2 works.
Case #1
func main() {
    file, err := os.Open("/Users/sample/Downloads/wordlist")
    if err != nil {
        log.Fatal(err)
    }
    lines := make(chan string)
    scanner := bufio.NewScanner(file)
    for scanner.Scan() {
        lines <- scanner.Text()
    }
    close(lines)
    for line := range lines {
        fmt.Println(line)
    }
}
Case #2
func main() {
    file, err := os.Open("/Users/sample/Downloads/wordlist")
    if err != nil {
        log.Fatal(err)
    }
    lines := make(chan string)
    go func() {
        scanner := bufio.NewScanner(file)
        for scanner.Scan() {
            lines <- scanner.Text()
        }
        close(lines)
    }()
    for line := range lines {
        fmt.Println(line)
    }
}
With an unbuffered channel, the sender blocks until a receiver has received the value. In other words, a write to an unbuffered channel cannot complete until some goroutine is waiting to receive from it.
For example:
queue := make(chan string)
queue <- "one" /* Here the main function, which is also a goroutine, is blocked,
                  because there is no goroutine to receive the channel value,
                  hence the deadlock */
close(queue)
for elem := range queue {
    fmt.Println(elem)
}
This leads to a deadlock, and it is exactly what happens in your Case #1 code.
Now consider the case where we have another goroutine:
queue := make(chan string)
go func() {
    queue <- "one"
    close(queue)
}()
for elem := range queue {
    fmt.Println(elem)
}
This works because the main goroutine is already waiting to receive from the channel when the other goroutine writes to it, so the write can proceed.
This is exactly what happens in your Case #2.
So in short, a write to an unbuffered channel succeeds only when some goroutine is waiting to read from the channel; otherwise the write blocks forever and leads to deadlock.
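As a side note, a buffered channel would also unblock the single-send example above, because a send completes as soon as there is free space in the buffer; no receiver needs to be ready at that moment. A minimal sketch:

queue := make(chan string, 1) // capacity 1: the send below completes without a receiver
queue <- "one"                // value sits in the buffer
close(queue)
for elem := range queue {     // drains the buffer, then observes the close
    fmt.Println(elem)
}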
Related
I am trying to parallelize a recursive problem in Go, and I am unsure what the best way to do this is.
I have a recursive function, which works like this:
func recFunc(input string) (result []string) {
    for subInput := range getSubInputs(input) {
        subOutput := recFunc(subInput)
        result = append(result, subOutput...)
    }
    result = append(result, getOutput(input)...)
    return
}

func main() {
    output := recFunc("some_input")
    ...
}
So the function calls itself N times (where N is 0 at some depth), generates its own output, and returns everything in a list.
Now I want to make this function run in parallel, but I am unsure what the cleanest way to do this is. My idea:
Have a "result" channel, to which all function calls send their result.
Collect the results in the main function.
Have a wait group, which determines when all results are collected.
The Problem: I need to wait for the wait group and collect all results in parallel. I can start a separate go function for this, but how do I ever quit this separate go function?
func recFunc(input string, outputChannel chan []string, waitGroup *sync.WaitGroup) {
    defer waitGroup.Done()
    waitGroup.Add(len(getSubInputs(input)))
    for subInput := range getSubInputs(input) {
        go recFunc(subInput, outputChannel, waitGroup)
    }
    outputChannel <- getOutput(input)
}

func main() {
    outputChannel := make(chan []string)
    waitGroup := sync.WaitGroup{}
    waitGroup.Add(1)
    go recFunc("some_input", outputChannel, &waitGroup)

    result := []string{}
    go func() {
        nextResult := <-outputChannel
        result = append(result, nextResult...)
    }()

    waitGroup.Wait()
}
Maybe there is a better way to do this? Or how can I ensure the anonymous go function that collects the results quits when it is done?
tl;dr:
recursive algorithms should have bounded limits on expensive resources (network connections, goroutines, stack space, etc.)
cancelation should be supported - to ensure expensive operations can be cleaned up quickly if a result is no longer needed
branch traversal should support error reporting; this allows errors to bubble up the stack & partial results to be returned without the entire traversal failing
For asynchronous results - whether using recursion or not - the use of channels is recommended. Also, for long-running jobs with many goroutines, provide a means of cancelation (context.Context) to aid with clean-up.
Since recursion can lead to exponential consumption of resources it's important to put limits in place (see bounded parallelism).
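As a quick illustration (separate from the pattern below), one common way to bound parallelism is a semaphore channel; jobs, j, and process here are hypothetical placeholders:

sem := make(chan struct{}, 10) // at most 10 goroutines do work at once
var wg sync.WaitGroup
for _, job := range jobs { // jobs is a placeholder slice of work items
    wg.Add(1)
    go func(j string) {
        defer wg.Done()
        sem <- struct{}{}        // acquire a slot (blocks when 10 are busy)
        defer func() { <-sem }() // release the slot
        process(j)               // process is a placeholder for the real work
    }(job)
}
wg.Wait()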
Below is a design pattern I use a lot for asynchronous tasks:
always support taking a context.Context for cancelation
number of workers needed for the task
return a chan of results & a chan error (will only return one error or nil)
var (
    workers = 10
    ctx     = context.TODO() // use request context here - otherwise context.Background()
    input   = "abc"
)

resultC, errC := recJob(ctx, workers, input) // returns results & `error` channels

// asynchronous results - so read that channel first in the event of partial results ...
for r := range resultC {
    fmt.Println(r)
}

// ... then check for any errors
if err := <-errC; err != nil {
    log.Fatal(err)
}
Recursion:
Since recursion fans out quickly, one needs a consistent way to fill the finite pool of workers with work, and also to ensure that workers which free up quickly pick up work from other (over-worked) workers.
Rather than create a manager layer, employ a cooperative peer system of workers:
each worker shares a single inputs channel
before recursing on inputs (subInputs) check if any other workers are idle
if so, delegate to that worker
if not, current worker continues recursing that branch
With this algorithm, the finite pool of workers quickly becomes saturated with work. Any worker which finishes its branch early will quickly be delegated a sub-branch from another worker. Eventually all workers will run out of sub-branches, at which point all workers will be idle (blocked) and the recursion task can finish up.
Some careful coordination is needed to achieve this. Allowing the workers to write to the input channel helps with this peer coordination via delegation. A "recursion depth" WaitGroup is used to track when all branches have been exhausted across all workers.
(To include context support and error chaining - I updated your getSubInputs function to take a ctx and return an optional error):
func recFunc(ctx context.Context, input string, in chan string, out chan<- string, rwg *sync.WaitGroup) error {
    defer rwg.Done() // decrement recursion count when a depth of recursion has completed

    subInputs, err := getSubInputs(ctx, input)
    if err != nil {
        return err
    }
    for subInput := range subInputs {
        rwg.Add(1) // about to recurse (or delegate recursion)

        select {
        case in <- subInput:
            // delegated - to another goroutine

        case <-ctx.Done():
            // context canceled...
            // but first we need to undo the earlier `rwg.Add(1)`
            // as this work item was never delegated or handled by this worker
            rwg.Done()
            return ctx.Err()

        default:
            // no one available to delegate to - so this worker will need to recurse this item itself
            err = recFunc(ctx, subInput, in, out, rwg)
            if err != nil {
                return err
            }
        }

        select {
        case <-ctx.Done():
            // always check context when doing anything potentially blocking (in this case writing to `out`)
            // context canceled
            return ctx.Err()
        case out <- subInput:
        }
    }
    return nil
}
Connecting the Pieces:
recJob creates:
input & output channels - shared by all workers
"recursion" WaitGroup detects when all workers are idle
"output" channel can then safely be closed
error channel for all workers
kicks off the recursion workload by writing the initial input to the input channel
func recJob(ctx context.Context, workers int, input string) (resultsC <-chan string, errC <-chan error) {
    // RW channels
    out := make(chan string)
    eC := make(chan error, 1)

    // R-only channels returned to caller
    resultsC, errC = out, eC

    // create workers + waitgroup logic
    go func() {
        var err error // error that will be returned to the caller via the error channel

        defer func() {
            close(out)
            eC <- err
            close(eC)
        }()

        var wg sync.WaitGroup
        wg.Add(1)
        in := make(chan string) // input channel: shared by all workers (to read from and also to write to when they need to delegate)
        workerErrC := createWorkers(ctx, workers, in, out, &wg)

        // get the ball rolling, pass input job to one of the workers
        // Note: must be done *after* workers are created - otherwise deadlock
        in <- input

        errCount := 0

        // wait for all worker error codes to return
        for err2 := range workerErrC {
            if err2 != nil {
                log.Println("worker error:", err2)
                errCount++
            }
        }

        // all workers have completed
        if errCount > 0 {
            err = fmt.Errorf("PARTIAL RESULT: %d of %d workers encountered errors", errCount, workers)
            return
        }

        log.Printf("All %d workers have FINISHED\n", workers)
    }()

    return
}
Finally, create the workers:
func createWorkers(ctx context.Context, workers int, in chan string, out chan<- string, rwg *sync.WaitGroup) (errC <-chan error) {
    eC := make(chan error) // RW-version
    errC = eC              // RO-version (returned to caller)

    // track the completeness of the workers - so we know when to wrap up
    var wg sync.WaitGroup
    wg.Add(workers)

    for i := 0; i < workers; i++ {
        i := i
        go func() {
            defer wg.Done()

            var err error

            // ensure the current worker's return code gets returned
            // via the common workers' error-channel
            defer func() {
                if err != nil {
                    log.Printf("worker #%3d ERRORED: %s\n", i+1, err)
                } else {
                    log.Printf("worker #%3d FINISHED.\n", i+1)
                }
                eC <- err
            }()

            log.Printf("worker #%3d STARTED successfully\n", i+1)

            // worker scans for input
            for input := range in {
                err = recFunc(ctx, input, in, out, rwg)
                if err != nil {
                    log.Printf("worker #%3d recurseManagers ERROR: %s\n", i+1, err)
                    return
                }
            }
        }()
    }

    go func() {
        rwg.Wait() // wait for all recursion to finish
        close(in)  // safe to close input channel as all workers are blocked (i.e. no new inputs)
        wg.Wait()  // now wait for all workers to return
        close(eC)  // finally, signal to caller we're truly done by closing workers' error-channel
    }()

    return
}
I can start a separate go function for this, but how do I ever quit this separate go function?
You can range over the output channel in the separate goroutine. The goroutine, in that case, will exit safely when the channel is closed:
go func() {
    for nextResult := range outputChannel {
        result = append(result, nextResult...)
    }
}()
So now the thing we need to take care of is that the channel is closed after all the goroutines spawned as part of the recursive function calls have exited.
For that, you can use a shared WaitGroup across all the goroutines and wait on it in your main function, as you are already doing. Once the wait is over, close the outputChannel so that the other goroutine also exits safely:
func recFunc(input string, outputChannel chan []string, wg *sync.WaitGroup) {
    defer wg.Done()
    for subInput := range getSubInputs(input) {
        wg.Add(1)
        go recFunc(subInput, outputChannel, wg)
    }
    outputChannel <- getOutput(input)
}

func main() {
    outputChannel := make(chan []string)
    waitGroup := sync.WaitGroup{}
    waitGroup.Add(1)
    go recFunc("some_input", outputChannel, &waitGroup)

    result := []string{}
    go func() {
        for nextResult := range outputChannel {
            result = append(result, nextResult...)
        }
    }()

    waitGroup.Wait()
    close(outputChannel)
}
PS: If you want to have bounded parallelism to limit the exponential growth, check this out
I have the following code:
func (q *Queue) GetStreams(qi *QueueInfo) {
    channel := make(chan error, len(qi.AudioChunks))

    for _, audiInfo := range qi.AudioChunks {
        go audiInfo.GetStream(q.APIChunkURL, q.Sender, channel)
    }

    i := 0
    for err := range channel {
        fmt.Println(i)
        i++
        if err != nil {
            fmt.Println("Error getting audio", err)
        }
    }
    fmt.Println("Passed")
}

func (au *AudioChunkPayload) GetStream(path string, rs *RequestSender, channel chan error) {
    completeURL := fmt.Sprintf(path, au.AudioChunkID)

    err := GetEncodedAudio(rs, completeURL)
    if err != nil {
        channel <- err
    } else {
        channel <- nil
    }
}
where the GetStream func is downloading some data. Most of the time, the length of qi.AudioChunks is 8, but it is not a rule. My problem is that the program successfully prints the numbers 0-7 (for 8 chunks), but it never proceeds to the next print (fmt.Println("Passed")). From what I have read about buffered channels, I thought it should proceed, but obviously I am wrong. How can I make my GetStreams func proceed and finish?
range loops over a channel until the channel is explicitly closed.
Buffering affects sending to a channel, not receiving from it.
I'm still trying to find the official docs, but from A Tour of Go:
The loop for i := range c receives values from the channel repeatedly
until it is closed.
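Applying that to the code above, one possible fix (a sketch, not the only option) is to receive exactly len(qi.AudioChunks) values instead of ranging, since each GetStream call sends exactly one value:

i := 0
for range qi.AudioChunks { // one receive per goroutine started
    err := <-channel
    fmt.Println(i)
    i++
    if err != nil {
        fmt.Println("Error getting audio", err)
    }
}
fmt.Println("Passed") // now reachable: the loop has a definite end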
I am trying to understand non-buffered channels, so I have written a small application that iterates through an array of user input, does some work, places info on a non-buffered channel, and then reads from it. However, I'm not able to read from the channels.
This is my code
toProcess := os.Args[1:]

var wg sync.WaitGroup
results := make(chan string)
errs := make(chan error)

for _, t := range toProcess {
    wg.Add(1)
    go Worker(t, "text", results, errs, &wg)
}

go func() {
    for err := range errs {
        if err != nil {
            fmt.Println(err)
        }
    }
}()

go func() {
    for res := range results {
        fmt.Println(res)
    }
}()
What am I not understanding about non-buffered channels? I thought I should be placing information on the channel and have another goroutine reading from it.
EDIT: using two goroutines solves the issues, but it still gives me the following when there are errors:
open /Users/roosingh/go/src/github.com/nonbuff/files/22.txt: no such file or directory
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [semacquire]:
sync.runtime_Semacquire(0xc42001416c)
/usr/local/Cellar/go/1.10.2/libexec/src/runtime/sema.go:56 +0x39
sync.(*WaitGroup).Wait(0xc420014160)
/usr/local/Cellar/go/1.10.2/libexec/src/sync/waitgroup.go:129 +0x72
main.main()
/Users/roosingh/go/src/github.com/nonbuff/main.go:39 +0x207
goroutine 6 [chan receive]:
main.main.func1(0xc4200780c0)
/Users/roosingh/go/src/github.com/nonbuff/main.go:25 +0x41
created by main.main
/Users/roosingh/go/src/github.com/nonbuff/main.go:24 +0x1d4
goroutine 7 [chan receive]:
main.main.func2(0xc420078060)
/Users/roosingh/go/src/github.com/nonbuff/main.go:34 +0xb2
created by main.main
/Users/roosingh/go/src/github.com/nonbuff/main.go:33 +0x1f6
So it is able to print out the error message.
My worker code is as follows:
func Worker(fn string, text string, results chan string, errs chan error, wg *sync.WaitGroup) {
    file, err := os.Open(fn)
    if err != nil {
        errs <- err
        return
    }
    defer func() {
        file.Close()
        wg.Done()
    }()
    reader := bufio.NewReader(file)
    for {
        var buffer bytes.Buffer
        var l []byte
        var isPrefix bool
        for {
            l, isPrefix, err = reader.ReadLine()
            buffer.Write(l)
            if !isPrefix {
                break
            }
            if err != nil {
                errs <- err
                return
            }
        }
        if err == io.EOF {
            return
        }
        line := buffer.String()
        results <- fmt.Sprintf("%s, %s", line, text)
    }
    if err != io.EOF {
        errs <- err
        return
    }
    return
}
As for unbuffered channels, you seem to understand the concept: they are used to pass messages between goroutines but cannot hold any values. Therefore, a write to an unbuffered channel blocks until another goroutine reads from the channel, and a read from a channel blocks until another goroutine writes to it.
In your case, you seem to want to read from 2 channels simultaneously in the same goroutine. Because of the way channels work, you cannot range over an unclosed channel and then, further down in the same goroutine, range over another channel. Unless the first channel gets closed, you will never reach the second range.
But, it doesn't mean it's impossible! This is where the select statement comes in.
The select statement allows you to selectively read from multiple channels, meaning that it will read the first one that has something available to be read.
With that in mind, you can use the for combined with the select and rewrite your routine this way:
go func() {
    for {
        select {
        case err := <-errs: // you got an error
            fmt.Println(err)
        case res := <-results: // you got a result
            fmt.Println(res)
        }
    }
}()
Also, you don't need a waitgroup here: because you know how many workers you are starting, you can just count how many errors and results you get and stop when you reach the number of workers.
Example:
go func() {
    var i int
    for {
        select {
        case err := <-errs: // you got an error
            fmt.Println(err)
            i++
        case res := <-results: // you got a result
            fmt.Println(res)
            i++
        }
        // all our workers are done
        if i == len(toProcess) {
            return
        }
    }
}()
I'm using two concurrent goroutines to copy stdin/stdout from my terminal to a net.Conn target. For some reason, I can't manage to completely stop the two go routines without getting a panic error (for trying to close a closed connection). This is my code:
func interact(c net.Conn, sessionMap map[int]net.Conn) {
    quit := make(chan bool) // the channel to quit
    copy := func(r io.ReadCloser, w io.WriteCloser) {
        defer func() {
            r.Close()
            w.Close()
            close(quit) // this is how I'm trying to close it
        }()
        _, err := io.Copy(w, r)
        if err != nil {
            //
        }
    }

    go func() {
        for {
            select {
            case <-quit:
                return
            default:
                copy(c, os.Stdout)
            }
        }
    }()

    go func() {
        for {
            select {
            case <-quit:
                return
            default:
                copy(os.Stdin, c)
            }
        }
    }()
}
This panics with panic: close of closed channel.
I want to terminate the two goroutines and then proceed normally to another function. What am I doing wrong?
You can't call close on a channel more than once; there's no reason to call copy in a for loop, since it can only operate once; and you're copying in the wrong direction, writing to stdin and reading from stdout.
Quitting 2 goroutines is simple in itself, but that's not the only thing you need to do here. Since io.Copy blocks, you don't need the extra synchronization to determine when the call is complete. This lets you simplify the code significantly, which will make it a lot easier to reason about.
func interact(c net.Conn) {
    go func() {
        // You want to close this outside the goroutine if you
        // expect to send data back over a half-closed connection
        defer c.Close()

        // Optionally close stdout here if you need to signal the
        // end of the stream in a pipeline.
        defer os.Stdout.Close()

        _, err := io.Copy(os.Stdout, c)
        if err != nil {
            //
        }
    }()

    _, err := io.Copy(c, os.Stdin)
    if err != nil {
        //
    }
}
Also note that you may not be able to break out of the io.Copy from stdin, so you can't expect the interact function to return. Manually doing the io.Copy in the function body and checking for a half-closed connection on every loop may be a good idea, then you can break out sooner and ensure that you fully close the net.Conn.
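For example, a rough sketch of such a manual copy loop (the buffer size and break conditions are assumptions, not a drop-in replacement):

buf := make([]byte, 32*1024)
for {
    n, rerr := os.Stdin.Read(buf)
    if n > 0 {
        if _, werr := c.Write(buf[:n]); werr != nil {
            break // write failed: the connection is likely (half-)closed
        }
    }
    if rerr != nil {
        break // EOF or read error on stdin
    }
}
c.Close() // fully close the net.Conn before returning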
It could also be done like this:
func scanReader(quit chan int, r io.Reader) chan string {
    line := make(chan string)
    go func(quit chan int) {
        defer close(line)
        scan := bufio.NewScanner(r)
        for scan.Scan() {
            select {
            case <-quit:
                return
            default:
                s := scan.Text()
                line <- s
            }
        }
    }(quit)
    return line
}

stdIn := scanReader(quit, os.Stdin)
conIn := scanReader(quit, c)
for {
    select {
    case <-quit:
        return
    case l := <-stdIn:
        _, e := fmt.Fprintln(c, l)
        if e != nil {
            quit <- 1
            return
        }
    case l := <-conIn:
        fmt.Println(l)
    }
}
This is in reference to the following code in The Go Programming Language - Chapter 8, p. 238, copied below from this link
// makeThumbnails6 makes thumbnails for each file received from the channel.
// It returns the number of bytes occupied by the files it creates.
func makeThumbnails6(filenames <-chan string) int64 {
    sizes := make(chan int64)
    var wg sync.WaitGroup // number of working goroutines
    for f := range filenames {
        wg.Add(1)
        // worker
        go func(f string) {
            defer wg.Done()
            thumb, err := thumbnail.ImageFile(f)
            if err != nil {
                log.Println(err)
                return
            }
            info, _ := os.Stat(thumb) // OK to ignore error
            fmt.Println(info.Size())
            sizes <- info.Size()
        }(f)
    }

    // closer
    go func() {
        wg.Wait()
        close(sizes)
    }()

    var total int64
    for size := range sizes {
        total += size
    }
    return total
}
Why do we need to put the closer in a goroutine? Why can't the below work?
// closer
// go func() {
fmt.Println("waiting for reset")
wg.Wait()
fmt.Println("closing sizes")
close(sizes)
// }()
If I try running the above code, it gives:
waiting for reset
3547
2793
fatal error: all goroutines are asleep - deadlock!
Why is there a deadlock in the above? FYI, in the method that calls makeThumbnails6, I do close the filenames channel.
Your channel is unbuffered (you didn't specify any buffer size when make()ing the channel). This means that a write to the channel blocks until the value written is read. And you read from the channel after your call to wg.Wait(), so nothing ever gets read and all your goroutines get stuck on the blocking write.
That said, you do not need a WaitGroup here. WaitGroups are good when you don't know when your goroutine will be done, but since you are sending results back, you do know. Here is some sample code that does a similar thing to what you are trying to do (with a fake worker payload).
package main

import (
    "fmt"
    "time"
)

func main() {
    var procs int = 0
    filenames := []string{"file1", "file2", "file3", "file4"}
    mychan := make(chan string)
    for _, f := range filenames {
        procs += 1
        // worker
        go func(f string) {
            fmt.Printf("Worker processing %v\n", f)
            time.Sleep(time.Second)
            mychan <- f
        }(f)
    }

    for i := 0; i < procs; i++ {
        select {
        case msg := <-mychan:
            fmt.Printf("got %v from worker channel\n", msg)
        }
    }
}
Test it in the playground here https://play.golang.org/p/RtMkYbAqtGO
Although it's been a while since the question was raised, I encountered the same issue. Initially my main looked like the following:
func main() {
    filenames := make(chan string, len(os.Args))
    for _, f := range os.Args[1:] {
        filenames <- f
    }
    sizes := makeThumbnails6(filenames)
    close(filenames)
    log.Println("Total size: ", sizes)
}
This version deadlocks: the call to makeThumbnails6 is synchronous, and its range filenames loop never terminates, so close(filenames) in main is never reached. The sizes channel in makeThumbnails6 is unbuffered, so the worker goroutines also block when trying to send back the sizes.
The solution was to move close(filenames) before making the function call in main.
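For clarity, a sketch of the corrected main (the channel is buffered, so all the sends complete before the close):

func main() {
    filenames := make(chan string, len(os.Args))
    for _, f := range os.Args[1:] {
        filenames <- f
    }
    close(filenames) // moved up: lets the range inside makeThumbnails6 terminate
    sizes := makeThumbnails6(filenames)
    log.Println("Total size: ", sizes)
}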
The code is wrong. In short, the channel sizes is unbuffered. To fix it, we need to create sizes as a buffered channel with enough capacity. A one-liner fix is enough, as shown. Here I just make the simple assumption that 1024 is big enough.
func makeThumbnails6(filenames chan string) int64 {
    sizes := make(chan int64, 1024) // CHANGE
    var wg sync.WaitGroup           // number of working goroutines
    for f := range filenames {
        wg.Add(1)
        // worker
        go func(f string) {
            defer wg.Done()
            thumb, err := thumbnail.ImageFile(f)
            if err != nil {
                log.Println(err)
                return
            }
            info, _ := os.Stat(thumb) // OK to ignore error
            fmt.Println(info.Size())
            sizes <- info.Size()
        }(f)
    }

    // closer
    go func() {
        wg.Wait()
        close(sizes)
    }()

    var total int64
    for size := range sizes {
        total += size
    }
    return total
}