Why doesn't this buffered channel block in my code? - go

I'm learning the Go language. Can someone please explain the output here?
package main

import "fmt"

var c = make(chan int, 1)

func f() {
    c <- 1
    fmt.Println("In f()")
}

func main() {
    go f()
    c <- 2
    fmt.Println(<-c)
    fmt.Println(<-c)
}
Output:
In f()
2
1
Process finished with exit code 0
Why did "In f()" occur before "2"? If "In f()" is printed before "2", the buffered channel should block. But this didn't happen, why?
The other outputs are reasonable.

The order of events that causes this to run through cleanly is:
Trigger goroutine.
Write 2 to the channel. Capacity of the channel is now exhausted.
Read from the channel and output result.
Write 1 to the channel.
Read from the channel and output result.
The order of events that causes this to deadlock is:
Trigger goroutine.
Write 1 to the channel. Capacity of the channel is now exhausted.
Write 2 to the channel. Since the channel's buffer is full, this blocks.
The output you provide seems to indicate that the goroutine finishes first and the program doesn't deadlock, which neither of the two scenarios above explains. Here's what actually happens:
Trigger goroutine.
Write 2 to the channel.
Read 2 from the channel.
Write 1 to the channel.
Output In f().
Output the 2 received from the channel.
Read 1 from the channel.
Output the 1 received from the channel.
Keep in mind that you have no guarantees about the scheduling of goroutines unless you enforce them programmatically. When you start a goroutine, it is undefined when its first statement actually executes and how far the code that started it has progressed by then. Note that since your code relies on a certain order of events, it is broken by that definition, just to say that explicitly.
Also, at any point in your program, the scheduler can decide to switch between different goroutines. Even the single line fmt.Println(<-c) consists of multiple steps, and between each of those steps a switch can occur.
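If you actually need f() to finish before main() proceeds, you have to synchronize explicitly. A minimal sketch, assuming that is the goal; the ready channel is an addition for illustration (not part of the original code), and main receives before it sends so the one-slot buffer never overflows:
package main

import "fmt"

var c = make(chan int, 1)

// ready only signals that f has finished; it carries no data.
var ready = make(chan struct{})

func f() {
    c <- 1 // the buffer now holds the 1
    fmt.Println("In f()")
    close(ready) // signal main that f is done
}

func main() {
    go f()
    <-ready          // wait for f to finish
    fmt.Println(<-c) // always prints 1
    c <- 2
    fmt.Println(<-c) // always prints 2
}
With the ordering enforced, the output is deterministic: In f(), then 1, then 2.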

To reproduce the blocking you have to run the code many times; the easiest way to do that is with a test. That your run didn't block was just luck, and it is not guaranteed:
package main

import (
    "fmt"
    "testing"
)

var c = make(chan int, 1)

func f() {
    c <- 1
    fmt.Println("In f()")
}

func TestF(t *testing.T) {
    go f()
    c <- 2
    fmt.Println(<-c)
    fmt.Println(<-c)
}
and run it with this command:
go test -race -count=1000 -run=TestF -timeout=4s
where -count is the number of times to run the test. This reproduces the blocking for me.
Provide a -timeout so you don't wait for the default 10 minutes.

Related

Go Concurrency Circular Logic

I'm just getting into concurrency in Go and trying to create a dispatch go routine that will send jobs to a worker pool listening on the jobchan channel. If a message comes into my dispatch function via the dispatchchan channel and my other go routines are busy, the message is appended onto the stack slice in the dispatcher and the dispatcher will try to send again later when a worker becomes available, and/or no more messages are received on the dispatchchan. This is because the dispatchchan and the jobchan are unbuffered, and the go routine the workers are running will append other messages to the dispatcher up to a certain point and I don't want the workers blocked waiting on the dispatcher and creating a deadlock. Here's the dispatcher code I've come up with so far:
func dispatch() {
    var stack []string
    acount := 0
    for {
        select {
        case d := <-dispatchchan:
            stack = append(stack, d)
        case c := <-mw:
            acount = acount + c
        case jobchan <- stack[0]:
            if len(stack) > 1 {
                stack[0] = stack[len(stack)-1]
                stack = stack[:len(stack)-1]
            } else {
                stack = nil
            }
        default:
            if acount == 0 && len(stack) == 0 {
                close(jobchan)
                close(dispatchchan)
                close(mw)
                wg.Done()
                return
            }
        }
    }
}
Complete example at https://play.golang.wiki/p/X6kXVNUn5N7
The mw channel is a buffered channel the same length as the number of worker go routines. It acts as a semaphore for the worker pool. If the worker routine is doing [m]eaningful [w]ork it throws int 1 on the mw channel, and when it finishes its work and goes back into the for loop listening to the jobchan it throws int -1 on the mw. This way the dispatcher knows if there's any work being done by the worker pool, or if the pool is idle. If the pool is idle and there are no more messages on the stack, then the dispatcher closes the channels and returns control to the main func.
This is all good but the issue I have is that the stack itself could be zero length so the case where I attempt to send stack[0] to the jobchan, if the stack is empty, I get an out of bounds error. What I'm trying to figure out is how to ensure that when I hit that case, either stack[0] has a value in it or not. I don't want that case to send an empty string to the jobchan.
Any help is greatly appreciated. If there's a more idiomatic concurrency pattern I should consider, I'd love to hear about it. I'm not 100% sold on this solution but this is the farthest I've gotten so far.
This is all good but the issue I have is that the stack itself could be zero length so the case where I attempt to send stack[0] to the jobchan, if the stack is empty, I get an out of bounds error.
I can't reproduce it with your playground link, but it's believable, because at least one gofunc worker might have been ready to receive on that channel.
My output has been Msgcnt: 0, which is also easily explained, because gofunc might not have been ready to receive on jobchan when dispatch() runs its select. The order of these operations is not defined.
trying to create a dispatch go routine that will send jobs to a worker pool listening on the jobchan channel
A channel needs no dispatcher. A channel is the dispatcher.
If a message comes into my dispatch function via the dispatchchan channel and my other go routines are busy, the message is [...] will [...] send again later when a worker becomes available, [...] or no more messages are received on the dispatchchan.
With a few creative edits, it was easy to turn that into something close to the definition of a buffered channel. It can be read from immediately, or it can take up to some "limit" of messages that can't be immediately dispatched. You do define limit, though it's not used elsewhere within your code.
In any function, defining a variable you don't read will result in a compile-time error like limit declared but not used. This stricture improves code quality and helps identify typos. But at package scope, you've gotten away with defining the unused limit as a "global" and thus avoided a useful error - you haven't limited anything.
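For illustration, a minimal example of that difference (general Go behavior, deliberately shown as code that fails to build):
package main

var limit = 100 // unused at package scope: the compiler does not complain

func main() {
    count := 100 // unused inside a function: build fails with
                 // "count declared but not used"
}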
Don't use globals. Use passed parameters to define scope, because well-defined scope is what makes concurrency with the go keyword manageable. Pass the relevant channels defined in local scope to functions defined at package scope so that you can easily track their relationships, and use directional channels to enforce the producer/consumer relationship between your functions. More on this later.
Going back to "limit", it makes sense to limit the quantity of jobs you're queueing because all resources are limited, and accepting more messages than you have any expectation of processing requires more durable storage than process memory provides. If you don't feel obligated to fulfill those requests no matter what, don't accept "too many" of them in the first place.
So then, what purpose do dispatchchan and dispatch() serve? To store a limited number of pending requests, if any, before they can be processed, and then to send them to the next available worker? That's exactly what a buffered channel is for.
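A minimal sketch of that idea, assuming string jobs as in your code (the channel itself is the dispatcher; no dispatch() goroutine is needed):
package main

import (
    "fmt"
    "sync"
)

const limit = 8 // bound on how many jobs may queue up unstarted

func main() {
    // The buffered channel is the dispatcher: it holds up to limit
    // pending jobs and hands each one to whichever worker is free.
    jobchan := make(chan string, limit)

    var wg sync.WaitGroup
    for i := 0; i < 2; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for job := range jobchan {
                fmt.Println("worker", id, "got", job)
            }
        }(i)
    }

    for _, j := range []string{"a", "b", "c"} {
        jobchan <- j // blocks only when limit jobs are already queued
    }
    close(jobchan) // the producer knows when the input is done
    wg.Wait()
}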
Circular Logic
Who "knows" when your program is done? main() provides the initial input, but you close all 3 channels in `dispatch():
close(jobchan)
close(dispatchchan)
close(mw)
Your workers write to their own job queue so only when the workers are done writing to it can the incoming job queue be closed. However, individual workers also don't know when to close the jobs queue because other workers are writing to it. Nobody knows when your algorithm is done. There's your circular logic.
The mw channel is a buffered channel the same length as the number of worker go routines. It acts as a semaphore for the worker pool.
There's a race condition here. Consider the case where all n workers have just received the last n jobs. They've each read from jobchan and they're checking the value of ok. dispatcher proceeds to run its select. Nobody is writing to dispatchchan or reading from jobchan right now so the default case is immediately matched. len(stack) is 0 and there's no current job so dispatcher closes all channels including mw. At some point thereafter, a worker tries to write to a closed channel and panics.
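A minimal illustration of that failure mode (standard Go behavior, not code from the question - sending on a closed channel always panics):
package main

func main() {
    mw := make(chan int, 1)
    close(mw)
    mw <- 1 // panic: send on closed channel
}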
So finally I'm ready to provide some code, but I have one more problem: I don't have a clear problem statement to write code around.
I'm just getting into concurrency in Go and trying to create a dispatch go routine that will send jobs to a worker pool listening on the jobchan channel.
Channels between goroutines are like the teeth that synchronize gears. But to what end do the gears turn? You're not trying to keep time, nor construct a wind-up toy. Your gears could be made to turn, but what would success look like? Their turning?
Let's try to define a more specific use case for channels: given an arbitrarily long set of durations coming in as strings on standard input*, sleep that many seconds in one of n workers. So that we actually have a result to return, we'll say each worker will return the start and end time the duration was run for.
So that it can run in the playground, I'll simulate standard input with a hard-coded byte buffer.
package main

import (
    "bufio"
    "bytes"
    "fmt"
    "os"
    "strings"
    "sync"
    "time"
)

type SleepResult struct {
    worker_id int
    duration  time.Duration
    start     time.Time
    end       time.Time
}

func main() {
    var num_workers = 2

    workchan := make(chan time.Duration)
    resultschan := make(chan SleepResult)

    var wg sync.WaitGroup
    var resultswg sync.WaitGroup
    resultswg.Add(1)
    go results(&resultswg, resultschan)

    for i := 0; i < num_workers; i++ {
        wg.Add(1)
        go worker(i, &wg, workchan, resultschan)
    }

    // playground doesn't have stdin
    var input = bytes.NewBufferString(
        strings.Join([]string{
            "3ms",
            "1 seconds",
            "3600ms",
            "300 ms",
            "5s",
            "0.05min"}, "\n") + "\n")

    var scanner = bufio.NewScanner(input)
    for scanner.Scan() {
        text := scanner.Text()
        if dur, err := time.ParseDuration(text); err != nil {
            fmt.Fprintln(os.Stderr, "Invalid duration", text)
        } else {
            workchan <- dur
        }
    }

    close(workchan) // we know when our inputs are done
    wg.Wait()       // and when our jobs are done
    close(resultschan)
    resultswg.Wait()
}

func results(wg *sync.WaitGroup, resultschan <-chan SleepResult) {
    for res := range resultschan {
        fmt.Printf("Worker %d: %s : %s => %s\n",
            res.worker_id, res.duration,
            res.start.Format(time.RFC3339Nano), res.end.Format(time.RFC3339Nano))
    }
    wg.Done()
}

func worker(id int, wg *sync.WaitGroup, jobchan <-chan time.Duration, resultschan chan<- SleepResult) {
    var res = SleepResult{worker_id: id}
    for dur := range jobchan {
        res.duration = dur
        res.start = time.Now()
        time.Sleep(res.duration)
        res.end = time.Now()
        resultschan <- res
    }
    wg.Done()
}
Here I use 2 wait groups, one for the workers, one for the results. This makes sure I'm done writing all the results before main() ends. I keep my functions simple by having each function do exactly one thing at a time: main reads inputs, parses durations from them, and sends them off to the next free worker. The results function collects results and prints them to standard output. The worker does the sleeping, reading from jobchan and writing to resultschan.
workchan can be buffered (or not, as in this case); it doesn't matter because the input will be read at the rate it can be processed. We can buffer as much input as we want, but we can't buffer an infinite amount. I've set channel sizes as big as 1e6 - but a million is a lot less than infinite. For my use case, I don't need to do any buffering at all.
main knows when the input is done and can close workchan. main also knows when jobs are done (wg.Wait()) and can close the results channel. Closing these channels is an important signal to the worker and results goroutines - they can distinguish between a channel that is empty and a channel that is guaranteed not to have any new additions.
for job := range jobchan {...} is shorthand for your more verbose:
for {
    job, ok := <-jobchan
    if !ok {
        wg.Done()
        return
    }
    ...
}
Note that this code creates 2 workers, but it could create 20 or 2000, or even 1. The program functions regardless of how many workers are in the pool. It can handle any volume of input (though interminable input of course leads to an interminable program). It does not create a cyclic loop of output to input. If your use case requires jobs to create more jobs, that's a more challenging scenario that can typically be avoided with careful planning.
I hope this gives you some good ideas about how you can better use concurrency in your Go applications.
https://play.golang.wiki/p/cZuI9YXypxI

Don't goroutines and channels work in the order they are called?

Don't goroutines and channels work in the order they were called?
And do goroutines share values through the variables in the surrounding scope?
main.go
package main

import (
    "fmt"
    "time"
)

var dataSendChannel = make(chan int)

func main() {
    a(dataSendChannel)
    time.Sleep(time.Second * 10)
}

func a(c chan<- int) {
    for i := 0; i < 1000; i++ {
        go b(dataSendChannel)
        c <- i
    }
}

func b(c <-chan int) {
    val := <-c
    fmt.Println(val)
}
output
> go run main.go
0
1
54
3
61
5
6
7
8
9
Channels are ordered. Goroutines are not. Goroutines may run, or stall, more or less at random, whenever they are logically allowed to run. They must stop and wait whenever you force them to do so, e.g., by attempting to write on a full channel, or using a mutex.Lock() call on an already-locked mutex, or any of those sorts of things.
Your dataSendChannel is unbuffered, so an attempt to write to it will pause until some goroutine is actively attempting to read from it. Function a spins off one goroutine that will attempt one read (go b(...)), then writes and therefore waits for at least one reader to be reading. Function b immediately begins reading, waiting for data. This unblocks function a, which can now write some integer value. Function a can now spin off another instance of b, which begins reading; this may happen before, during, or after the b that got a value begins calling fmt.Println. This second instance of b must now wait for someone—which in this case is always function a, running the loop—to send another value, but a does that as quickly as it can. The second instance of b can now begin calling fmt.Println, but it might, mostly-randomly, not get a chance to do that yet. The first instance of b might already be in fmt.Println, or maybe it isn't yet, and the second one might run first—or maybe both wait around for a while and a third instance of b spins up, reads some value from the channel, and so on.
There's no guarantee which instance of b actually gets into fmt.Println when, so the values you see printed will come out in some semi-random order. If you want the various b instances to sequence themselves, they will need to do that somehow.
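If the goal is simply to print the values in the order they were sent, one sketch is to keep a single printing goroutine instead of spinning up a new b per value; one channel plus one reader is already a sequence, and the WaitGroup replaces the arbitrary time.Sleep (this restructures the original code, it is not the only way to do it):
package main

import (
    "fmt"
    "sync"
)

var dataSendChannel = make(chan int)

func main() {
    var wg sync.WaitGroup
    wg.Add(1)
    // A single consumer prints values in exactly the order they were sent.
    go func() {
        defer wg.Done()
        for val := range dataSendChannel {
            fmt.Println(val)
        }
    }()

    for i := 0; i < 1000; i++ {
        dataSendChannel <- i
    }
    close(dataSendChannel) // no more values are coming
    wg.Wait()              // wait for the printer instead of sleeping
}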

Go goroutines leaking

Following this post about goroutine leaks, https://www.ardanlabs.com/blog/2018/11/goroutine-leaks-the-forgotten-sender.html, I tried to fix my leaking code, but adding a buffer to the channel did not do the trick.
My code
package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    fmt.Println(runtime.NumGoroutine())
    leaking()
    time.Sleep(5)
    fmt.Println(runtime.NumGoroutine())
}

func leaking() {
    errChang := make(chan int, 1)
    go func() {
        xx := return666()
        errChang <- xx
    }()
    fmt.Println("hola")
    return
    fmt.Println(<-errChang)
}

func return666() int {
    time.Sleep(time.Second * 1)
    return 6
}
My initial code did not use a buffer, which caused the goroutine started in the leaking function to leak. Following the post, I expected that adding a buffer to the channel would avoid the leak.
Here, in the Go Playground, is your original code with a few slight modifications:
the delays are reduced, except for time.Sleep(5) which becomes time.Sleep(time.Second);
a return is removed because it becomes unnecessary;
a fmt.Println is commented out, because with both the return and the uncommented fmt.Println, go vet complains about the unreachable fmt.Println;
the channel stored in errChang is changed to unbuffered.
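A sketch of that modified version, reconstructed from the list above (the playground link has the authoritative code; the reduced delay in return666 is an assumed value):
package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    fmt.Println(runtime.NumGoroutine())
    leaking()
    time.Sleep(time.Second) // was time.Sleep(5)
    fmt.Println(runtime.NumGoroutine())
}

func leaking() {
    errChang := make(chan int) // now unbuffered
    go func() {
        xx := return666()
        errChang <- xx
    }()
    fmt.Println("hola")
    // fmt.Println(<-errChang) // commented out; the return above it is removed
}

func return666() int {
    time.Sleep(time.Millisecond * 100) // reduced delay (assumed value)
    return 6
}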
When run, its output is:
1
hola
2
(with a small delay before the 2), showing that indeed, the anonymous goroutine you started in function leaking is still running.
If we uncomment the commented-out fmt.Println, the output is:
1
hola
6
1
(with the same slight delay before the final 1) because we now wait for (and then print) the value computed in return666 and sent over channel errChang.
If we keep the commented-out fmt.Println commented out and make the channel buffered, the output becomes:
1
hola
1
as the anonymous goroutine is now able to push its value (6) into the channel.
The channel itself would be garbage collected, along with the single value stored inside it, as there are no remaining references to the channel at this point. Note, however, that simply making the channel buffered is not always sufficient. If we send two values down the channel, the program returns to printing:
1
hola
2
as the anonymous goroutine succeeds in putting 6 into the channel but then blocks trying to put 42 in as well.
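For concreteness, a sketch of that last variant of leaking() (the 42 comes from the description above; the goroutine once again leaks):
func leaking() {
    errChang := make(chan int, 1) // buffer of one
    go func() {
        errChang <- return666() // fits in the buffer
        errChang <- 42          // blocks forever: nobody ever receives
    }()
    fmt.Println("hola")
}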

Is there a resource leak here?

func First(query string, replicas ...Search) Result {
    c := make(chan Result)
    searchReplica := func(i int) {
        c <- replicas[i](query)
    }
    for i := range replicas {
        go searchReplica(i)
    }
    return <-c
}
This function is from Rob Pike's 2012 slides on Go concurrency patterns. I think there is a resource leak in this function. Since the function returns after the first send/receive pair completes on channel c, the other goroutines remain blocked trying to send on channel c. So there is a resource leak here. Can anyone who knows Go well confirm this? And how can I detect this leak, with what kind of Go tooling?
Yes, you are right (for reference, here's the link to the slide). In the above code only one launched goroutine will terminate, the rest will hang on attempting to send on channel c.
Detailing:
c is an unbuffered channel
there is only a single receive operation, in the return statement
A new goroutine is launched for each element of replicas
each launched goroutine sends a value on channel c
since there is only 1 receive from it, one goroutine will be able to send a value on it, the rest will block forever
Note that depending on the number of elements of replicas (which is len(replicas)):
if it's 0: First() would block forever (no one sends anything on c)
if it's 1: would work as expected
if it's > 1: then it leaks resources
The following modified version will not leak goroutines, by using a non-blocking send (with the help of select with default branch):
searchReplica := func(i int) {
    select {
    case c <- replicas[i](query):
    default:
    }
}
The first goroutine ready with the result will send it on channel c which will be received by the goroutine running First(), in the return statement. All other goroutines when they have the result will attempt to send on the channel, and "seeing" that it's not ready (send would block because nobody is ready to receive from it), the default branch will be chosen, and thus the goroutine will end normally.
Another way to fix it would be to use a buffered channel:
c := make(chan Result, len(replicas))
And this way the send operations would not block. And of course only one (the first sent) value will be received from the channel and returned.
Note that the solution with any of the above fixes would still block if len(replicas) is 0. To avoid that, First() should check this explicitly, e.g.:
func First(query string, replicas ...Search) Result {
    if len(replicas) == 0 {
        return Result{}
    }
    // ...rest of the code...
}
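Putting the pieces together, a sketch of First() with the buffered-channel fix and the empty-replicas guard combined (using the same Search and Result types as the slides):
func First(query string, replicas ...Search) Result {
    if len(replicas) == 0 {
        return Result{} // avoid blocking forever when there is nothing to query
    }
    // Buffer big enough for every replica, so no sender ever blocks.
    c := make(chan Result, len(replicas))
    searchReplica := func(i int) {
        c <- replicas[i](query)
    }
    for i := range replicas {
        go searchReplica(i)
    }
    return <-c // only the first value is used; the rest stay in the buffer
}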
Some tools / resources to detect leaks:
https://github.com/fortytw2/leaktest
https://github.com/zimmski/go-leak
https://medium.com/golangspec/goroutine-leak-400063aef468
https://blog.minio.io/debugging-go-routine-leaks-a1220142d32c
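As a rough, standard-library-only check, you can also compare goroutine counts around a call in a test; a sketch, assuming the Search/Result definitions from the slides and the runtime, testing, and time imports (the fake replica closures are stand-ins invented for the example):
func TestFirstLeaks(t *testing.T) {
    before := runtime.NumGoroutine()

    replica := func(query string) Result { return Result{} } // stand-in replica
    _ = First("query", replica, replica, replica)

    time.Sleep(100 * time.Millisecond) // give finished goroutines time to exit
    if after := runtime.NumGoroutine(); after > before {
        t.Errorf("possible leak: %d goroutines before, %d after", before, after)
    }
}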

Why is this code Undefined Behavior?

var x int
done := false
go func() { x = f(...); done = true }
while done == false { }
This is a piece of Go code. My friend told me this is UB code. Why?
As explained in "Why does this program terminate on my system but not on playground?"
The Go Memory Model does not guarantee that the value written to x in the goroutine will ever be observed by the main program.
A similarly erroneous program is given as an example in the section on go routine destruction.
The Go Memory Model also specifically calls out busy waiting without synchronization as an incorrect idiom in this section.
(in your case, there is no guarantee that the value written to done in the goroutine will ever be observed by the main program)
Here, you need to do some kind of synchronization in the goroutine in order to guarantee that done = true happens before one of the iterations of the for loop in main.
The "while" (non-existent in Go) should be replaced by, for instance, a channel you block on (waiting for a communication)
for {
    <-c // 2
}
Based on a channel (c := make(chan bool)) created in main, and closed (close(c)) in the goroutine.
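A sketch of the corrected program along those lines (f is replaced by a placeholder because the original f(...) isn't shown):
package main

import "fmt"

func f() int { return 42 } // placeholder for the original f(...)

func main() {
    var x int
    c := make(chan bool)
    go func() {
        x = f()
        close(c) // this close happens before the receive below completes
    }()
    <-c            // blocks until the goroutine closes c
    fmt.Println(x) // guaranteed to observe the write to x
}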
The sync package provides other means to wait for a gorountine to end before exiting main.
See for instance Golang Example: Wait until all the background goroutines finish:
var w sync.WaitGroup
w.Add(1)
go func() {
    // do something
    w.Done()
}()
w.Wait()

Resources