Confused about channel argument for Goroutines - go

I am learning about channels and concurrency in Go and I am confused about how the code below works.
package main

import (
    "fmt"
    "sync/atomic"
    "time"
)

var workerID int64
var publisherID int64

func main() {
    input := make(chan string)
    go workerProcess(input)
    go workerProcess(input)
    go workerProcess(input)
    go publisher(input)
    go publisher(input)
    go publisher(input)
    go publisher(input)
    time.Sleep(1 * time.Millisecond)
}

// publisher pushes data into a channel
func publisher(out chan string) {
    atomic.AddInt64(&publisherID, 1)
    thisID := atomic.LoadInt64(&publisherID)
    dataID := 0
    for {
        dataID++
        fmt.Printf("publisher %d is pushing data\n", thisID)
        data := fmt.Sprintf("Data from publisher %d. Data %d", thisID, dataID)
        out <- data
    }
}

func workerProcess(in <-chan string) {
    atomic.AddInt64(&workerID, 1)
    thisID := atomic.LoadInt64(&workerID)
    for {
        fmt.Printf("%d: waiting for input...\n", thisID)
        input := <-in
        fmt.Printf("%d: input is: %s\n", thisID, input)
    }
}
This is what I understand, please correct me if I'm wrong:
workerProcess Goroutine:
The input channel is taken as the argument and is referred to as in.
In the for loop, the first printf is executed.
The value from the in channel is assigned to the variable input.
The last printf is executed to show the value of thisID and input. (This doesn't really execute till after other Goroutines run.)
publisher Goroutine:
The input channel is taken as the argument and is referred to as out.
In the for loop, dataID is incremented and then the first printf is executed.
The string is assigned to data.
The value of data is passed to out.
Questions:
In the publisher Goroutine, where does the value sent on out go? It seems to exist only in the scope of the for loop; wouldn't this cause a deadlock, since this is an unbuffered channel?
I don't get how workerProcess gets the data from publisher if all the Goroutines in main take the same channel, input, as their argument.
I'm used to having code written like this if the output of one function is used in another:
foo = fcn1(input)
fcn2(foo)
I suspect it has something to do with how Goroutines run in the background, but I'm not too sure; I would appreciate an explanation.
Why isn't the last printf statement in workerProcess executed? My guess is the channel is empty, so it's waiting for a value.
Part of the output:
1: waiting for input...
publisher 1 is pushing data
publisher 1 is pushing data
1: input is: Data from publisher 1. Data 1
1: waiting for input...
1: input is: Data from publisher 1. Data 2
1: waiting for input...
publisher 1 is pushing data
publisher 1 is pushing data
publisher 2 is pushing data
2: waiting for input...

You have one channel; it is made with make in main.
This single channel is named in inside workerProcess and out inside publisher. out is a function parameter of publisher, which answers question 1.
Question 2: That's the whole purpose of a channel. What you stuff into a channel on its input side comes out on its output side. If a function has a reference to (one end of) such a channel, it may communicate with someone else holding a reference to the same channel (its other end). What publisher sends to the channel is received by workerProcess. This sending and receiving is done through the special operator <- in Go, a fact you got slightly wrong in your explanation.
out <- data takes data and sends it through the channel named out, and <-in receives it from the channel (remember, in and out name the same channel made in main). That's how workerProcess and publisher communicate.
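To make this concrete, here is a minimal sketch of the "one channel, two names" idea (the helper names send and recv are made up for illustration and are not part of your code):
package main

import "fmt"

// send's parameter is named out and recv's is named in,
// but both names refer to the single channel made in main.
func send(out chan<- string) { out <- "hello" }

func recv(in <-chan string) { fmt.Println(<-in) }

func main() {
    ch := make(chan string) // one channel
    go send(ch)             // blocks on the unbuffered send...
    recv(ch)                // ...until this receive takes the value and prints "hello"
}
send blocks on the unbuffered channel until recv takes the value - just like publisher blocks on out <- data until one of the workerProcess goroutines executes <-in.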
Question 3 is a duplicate: your whole program terminates once main is done (in your case, after 1 millisecond). Nothing happens after the program terminates. Give the program more time to execute. (The non-printing is unrelated to the channel.)

Related

Go Concurrency Circular Logic

I'm just getting into concurrency in Go and trying to create a dispatch goroutine that will send jobs to a worker pool listening on the jobchan channel. If a message comes into my dispatch function via the dispatchchan channel and my other goroutines are busy, the message is appended onto the stack slice in the dispatcher, and the dispatcher will try to send it again later when a worker becomes available and/or no more messages are received on the dispatchchan. This is because the dispatchchan and the jobchan are unbuffered, and the goroutines the workers are running will append other messages to the dispatcher up to a certain point; I don't want the workers blocked waiting on the dispatcher and creating a deadlock. Here's the dispatcher code I've come up with so far:
func dispatch() {
    var stack []string
    acount := 0
    for {
        select {
        case d := <-dispatchchan:
            stack = append(stack, d)
        case c := <-mw:
            acount = acount + c
        case jobchan <- stack[0]:
            if len(stack) > 1 {
                stack[0] = stack[len(stack)-1]
                stack = stack[:len(stack)-1]
            } else {
                stack = nil
            }
        default:
            if acount == 0 && len(stack) == 0 {
                close(jobchan)
                close(dispatchchan)
                close(mw)
                wg.Done()
                return
            }
        }
    }
}
Complete example at https://play.golang.wiki/p/X6kXVNUn5N7
The mw channel is a buffered channel the same length as the number of worker goroutines. It acts as a semaphore for the worker pool. If a worker routine is doing [m]eaningful [w]ork it throws int 1 on the mw channel, and when it finishes its work and goes back into the for loop listening on the jobchan it throws int -1 on mw. This way the dispatcher knows whether there's any work being done by the worker pool, or whether the pool is idle. If the pool is idle and there are no more messages on the stack, then the dispatcher closes the channels and returns control to the main func.
This is all good, but the issue I have is that the stack itself could be zero length, so in the case where I attempt to send stack[0] to the jobchan, if the stack is empty, I get an out-of-bounds error. What I'm trying to figure out is how to make sure I only hit that case when stack[0] actually has a value in it. I don't want that case to send an empty string to the jobchan.
Any help is greatly appreciated. If there's a more idiomatic concurrency pattern I should consider, I'd love to hear about it. I'm not 100% sold on this solution, but this is the farthest I've gotten so far.
This is all good, but the issue I have is that the stack itself could be zero length, so in the case where I attempt to send stack[0] to the jobchan, if the stack is empty, I get an out-of-bounds error.
I can't reproduce it with your playground link, but it's believable, because at least one worker gofunc might have been ready to receive on that channel.
My output has been Msgcnt: 0, which is also easily explained, because the gofunc might not have been ready to receive on jobchan when dispatch() runs its select. The order of these operations is not defined.
trying to create a dispatch go routine that will send jobs to a worker pool listening on the jobchan channel
A channel needs no dispatcher. A channel is the dispatcher.
If a message comes into my dispatch function via the dispatchchan channel and my other go routines are busy, the message is [...] will [...] send again later when a worker becomes available, [...] or no more messages are received on the dispatchchan.
With a few creative edits, it was easy to turn that into something close to the definition of a buffered channel. It can be read from immediately, or it can take up to some "limit" of messages that can't be immediately dispatched. You do define limit, though it's not used elsewhere within your code.
In any function, defining a variable you don't read will result in a compile-time error like "limit declared but not used". This stricture improves code quality and helps identify typos. But at package scope, you've gotten away with defining the unused limit as a "global" and thus avoided a useful error - you haven't limited anything.
Don't use globals. Use passed parameters to define scope, because the definition of scope is tantamount to functional concurrency as expressed with the go keyword. Pass the relevant channels defined in local scope to functions defined at package scope so that you can easily track their relationships. And use directional channels to enforce the producer/consumer relationship between your functions. More on this later.
Going back to "limit", it makes sense to limit the quantity of jobs you're queueing because all resources are limited, and accepting more messages than you have any expectation of processing requires more durable storage than process memory provides. If you don't feel obligated to fulfill those requests no matter what, don't accept "too many" of them in the first place.
So then, what function has dispatchchan and dispatch()? To store a limited number of pending requests, if any, before they can be processed, and then to send them to the next available worker? That's exactly what a buffered channel is for.
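As a minimal sketch of that idea (with an assumed queue size and a plain string job type, not the types from your program), the buffered channel itself queues pending work and the workers receive straight from it:
package main

import (
    "fmt"
    "sync"
)

func main() {
    const limit = 8                     // assumed queue size, for illustration only
    jobchan := make(chan string, limit) // the buffered channel itself plays the dispatcher role

    var wg sync.WaitGroup
    for i := 0; i < 3; i++ { // a small worker pool
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for job := range jobchan { // receive until jobchan is closed and drained
                fmt.Printf("worker %d got %s\n", id, job)
            }
        }(i)
    }

    for i := 0; i < 10; i++ {
        jobchan <- fmt.Sprintf("job-%d", i) // up to `limit` jobs queue up without blocking
    }
    close(jobchan) // signal: no more jobs
    wg.Wait()
}
With the buffer in place there is no separate dispatch goroutine to get wrong; close(jobchan) is the only completion signal needed.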
Circular Logic
Who "knows" when your program is done? main() provides the initial input, but you close all 3 channels in `dispatch():
close(jobchan)
close(dispatchchan)
close(mw)
Your workers write to their own job queue so only when the workers are done writing to it can the incoming job queue be closed. However, individual workers also don't know when to close the jobs queue because other workers are writing to it. Nobody knows when your algorithm is done. There's your circular logic.
The mw channel is a buffered channel the same length as the number of worker go routines. It acts as a semaphore for the worker pool.
There's a race condition here. Consider the case where all n workers have just received the last n jobs. They've each read from jobchan and they're checking the value of ok. The dispatcher proceeds to run its select. Nobody is writing to dispatchchan or reading from jobchan right now, so the default case is immediately matched. len(stack) is 0 and there's no current job, so the dispatcher closes all channels including mw. At some point thereafter, a worker tries to write to a closed channel and panics.
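The failure mode described above is a send on a closed channel, which always panics; a minimal sketch (the channel name mw is reused here only for illustration):
package main

func main() {
    mw := make(chan int, 1)
    close(mw)
    mw <- 1 // panic: send on closed channel
}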
So finally I'm ready to provide some code, but I have one more problem: I don't have a clear problem statement to write code around.
I'm just getting into concurrency in Go and trying to create a dispatch go routine that will send jobs to a worker pool listening on the jobchan channel.
Channels between goroutines are like the teeth that synchronize gears. But to what end do the gears turn? You're not trying to keep time, nor construct a wind-up toy. Your gears could be made to turn, but what would success look like? Their turning?
Let's try to define a more specific use case for channels: given an arbitrarily long set of durations coming in as strings on standard input*, sleep that many seconds in one of n workers. So that we actually have a result to return, we'll say each worker will return the start and end time the duration was run for.
So that it can run in the playground, I'll simulate standard input with a hard-coded byte buffer.
package main

import (
    "bufio"
    "bytes"
    "fmt"
    "os"
    "strings"
    "sync"
    "time"
)

type SleepResult struct {
    worker_id int
    duration  time.Duration
    start     time.Time
    end       time.Time
}

func main() {
    var num_workers = 2

    workchan := make(chan time.Duration)
    resultschan := make(chan SleepResult)

    var wg sync.WaitGroup
    var resultswg sync.WaitGroup

    resultswg.Add(1)
    go results(&resultswg, resultschan)

    for i := 0; i < num_workers; i++ {
        wg.Add(1)
        go worker(i, &wg, workchan, resultschan)
    }

    // playground doesn't have stdin
    var input = bytes.NewBufferString(
        strings.Join([]string{
            "3ms",
            "1 seconds",
            "3600ms",
            "300 ms",
            "5s",
            "0.05min"}, "\n") + "\n")

    var scanner = bufio.NewScanner(input)
    for scanner.Scan() {
        text := scanner.Text()
        if dur, err := time.ParseDuration(text); err != nil {
            fmt.Fprintln(os.Stderr, "Invalid duration", text)
        } else {
            workchan <- dur
        }
    }

    close(workchan) // we know when our inputs are done
    wg.Wait()       // and when our jobs are done
    close(resultschan)
    resultswg.Wait()
}

func results(wg *sync.WaitGroup, resultschan <-chan SleepResult) {
    for res := range resultschan {
        fmt.Printf("Worker %d: %s : %s => %s\n",
            res.worker_id, res.duration,
            res.start.Format(time.RFC3339Nano), res.end.Format(time.RFC3339Nano))
    }
    wg.Done()
}

func worker(id int, wg *sync.WaitGroup, jobchan <-chan time.Duration, resultschan chan<- SleepResult) {
    var res = SleepResult{worker_id: id}
    for dur := range jobchan {
        res.duration = dur
        res.start = time.Now()
        time.Sleep(res.duration)
        res.end = time.Now()
        resultschan <- res
    }
    wg.Done()
}
Here I use 2 wait groups, one for the workers and one for the results. This makes sure I'm done writing all the results before main() ends. I keep my functions simple by having each function do exactly one thing at a time: main reads inputs, parses durations from them, and sends them off to the next worker. The results function collects results and prints them to standard output. The worker does the sleeping, reading from jobchan and writing to resultschan.
workchan can be buffered (or not, as in this case); it doesn't matter because the input will be read at the rate it can be processed. We can buffer as much input as we want, but we can't buffer an infinite amount. I've set channel sizes as big as 1e6 - but a million is a lot less than infinite. For my use case, I don't need to do any buffering at all.
main knows when the input is done and can close workchan. main also knows when the jobs are done (wg.Wait()) and can close the results channel. Closing these channels is an important signal to the worker and results goroutines - they can distinguish between a channel that is empty and a channel that is guaranteed not to have any new additions.
for job := range jobchan {...} is shorthand for your more verbose:
for {
    job, ok := <-jobchan
    if !ok {
        wg.Done()
        return
    }
    ...
}
Note that this code creates 2 workers, but it could create 20 or 2000, or even 1. The program functions regardless of how many workers are in the pool. It can handle any volume of input (though interminable input of course leads to an interminable program). It does not create a cyclic loop of output to input. If your use case requires jobs to create more jobs, that's a more challenging scenario that can typically be avoided with careful planning.
I hope this gives you some good ideas about how you can better use concurrency in your Go applications.
https://play.golang.wiki/p/cZuI9YXypxI

How to create channels in loop?

I am learning concurrency in go and how it works.
What I am trying to do ?
Loop through slice of data
Create struct for required/needed data
Create channel for that struct
Call the worker func using a goroutine and pass that channel to that goroutine
Using data from channel do some processing
Set the processed output back into channel
Wait in main thread to get output from all the channels which we kicked off
Code Which I tried
package main

import (
    "fmt"
    "time"

    "github.com/pkg/errors"
)

type subject struct {
    Name      string
    Class     string
    StartDate time.Time
    EndDate   time.Time
}

type workerData struct {
    Subject string
    Class   string
    Result  string
    Error   error
}

func main() {
    // Creating test data
    var subjects []subject
    st, _ := time.Parse("01/02/2016", "01/01/2015")
    et, _ := time.Parse("01/02/2016", "01/01/2016")
    s1 := subject{Name: "Math", Class: "3", StartDate: st, EndDate: et}
    s2 := subject{Name: "Geo", Class: "3", StartDate: st, EndDate: et}
    s3 := subject{Name: "Bio", Class: "3", StartDate: st, EndDate: et}
    s4 := subject{Name: "Phy", Class: "3", StartDate: st, EndDate: et}
    s5 := subject{Name: "Art", Class: "3", StartDate: st, EndDate: et}
    subjects = append(subjects, s1)
    subjects = append(subjects, s2)
    subjects = append(subjects, s3)
    subjects = append(subjects, s4)
    subjects = append(subjects, s5)

    c := make(chan workerData) // I am sure this is not how I should be creating channel
    for i := 0; i < len(subjects); i++ {
        go worker(c)
    }

    for _, v := range subjects {
        // Setting required data in channel
        data := workerData{Subject: v.Name, Class: v.Class}
        // set the data and start the routine
        c <- data // I think this will update data for all the routines? So how should I create a separate channel for each routine?
    }

    // I want to wait till all the routines set the data in channel and return the data from workers.
    for {
        select {
        case data := <-c:
            fmt.Println(data)
        }
    }
}

func worker(c chan workerData) {
    data := <-c
    // This can be any processing
    time.Sleep(100 * time.Millisecond)
    if data.Subject != "Math" {
        data.Result = "Pass"
    } else {
        data.Error = errors.New("Subject not found")
    }
    fmt.Println(data.Subject)
    // returning processed data and error to channel
    c <- data
    // Rightfully this closes the channel and hereafter I get "send on closed channel" errors.
    close(c)
}
Playground Link - https://play.golang.org/p/hs1-B1UR98r
Issue I am Facing
I am not sure how to create a different channel for each data item. The way I am currently doing it will update the channel data for all routines. I want to know: is there a way to create a different channel for each data item in the loop and pass that to the goroutine? And then wait in the main goroutine to get the results back from all the channels.
Any pointers / help would be great. If there is any confusion, feel free to comment.
"// I think this will update data for all the routines ?"
A channel (to simplify) is not a data structure to store data.
It is a structure to send and receive data over different goroutines.
As such, notice that your worker function does both a send and a receive on the same channel within each goroutine instance. If you had only one instance of such a worker, this would deadlock (https://golang.org/doc/articles/race_detector.html).
In the version of the code you posted this might seem to work, for a beginner, because you have many workers exchanging work with each other. But it is not a correct program.
As a consequence, since a worker cannot read and write the same channel, it must send its results to some other goroutine over a separate, writable channel.
// I want to wait till all the routines set the data in channel and return the data from workers.
This is part of the synchronization mechanisms required to ensure that a pusher waits until all its workers have finished their jobs before proceeding further. (This blog post talks about it: https://medium.com/golangspec/synchronized-goroutines-part-i-4fbcdd64a4ec)
// Rightfully this closes the channel and hereafter I get "send on closed channel" errors.
Note that you have n worker goroutines executing concurrently. The first of these workers to reach the end of its function will close the channel, making it unwritable for the other workers and falsely signaling its end to main.
Normally one uses the close statement on the writer side to indicate that there is no more data coming into the channel, i.e. that it has ended. This signal is consumed by readers to stop waiting for reads on the channel.
As an example, let's review this loop:
for {
    select {
    case data := <-c:
        fmt.Println(data)
    }
}
it is bad, really bad.
It is an infinite loop with no exit statement
The select is superfluous and does not contain an exit statement; remember that a read on a channel is a blocking operation.
It is a bad rewrite of a standard pattern provided by the language, the range loop over a channel
The range loop over a channel is very simply written
for data := range c {
    fmt.Println(data)
}
This pattern has one great advantage: it automatically detects a closed channel and exits the loop, letting you loop over only the relevant data to process. It is also much more succinct.
Also, your worker is awkward in that it reads and writes only one element before quitting.
Spawning goroutines is cheap, but not free. You should always evaluate the trade-off between the cost of async processing and its actual workload.
Overall, your code should be closer to what is demonstrated here
https://gobyexample.com/worker-pools
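Applied to your subjects example, a sketch of that worker-pool shape could look like the following (separate jobs and results channels, an arbitrarily chosen pool size of 3, and the standard errors package; this is an illustration, not a drop-in replacement for your exact code):
package main

import (
    "errors"
    "fmt"
    "sync"
)

type workerData struct {
    Subject string
    Class   string
    Result  string
    Error   error
}

// worker reads jobs from one channel and writes results to another,
// so it never reads and writes the same channel.
func worker(jobs <-chan workerData, results chan<- workerData, wg *sync.WaitGroup) {
    defer wg.Done()
    for d := range jobs {
        if d.Subject != "Math" {
            d.Result = "Pass"
        } else {
            d.Error = errors.New("Subject not found")
        }
        results <- d
    }
}

func main() {
    subjects := []string{"Math", "Geo", "Bio", "Phy", "Art"}

    jobs := make(chan workerData)
    results := make(chan workerData)

    var wg sync.WaitGroup
    for i := 0; i < 3; i++ { // pool size chosen arbitrarily
        wg.Add(1)
        go worker(jobs, results, &wg)
    }

    go func() { // feed the jobs, then signal "no more"
        for _, s := range subjects {
            jobs <- workerData{Subject: s, Class: "3"}
        }
        close(jobs)
    }()

    go func() { // close results once every worker has finished
        wg.Wait()
        close(results)
    }()

    for r := range results { // main blocks here until all results are in
        fmt.Println(r)
    }
}
Each subject is processed exactly once, main waits for every result, and no worker ever closes a channel it writes to.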

Is there a resource leak here?

func First(query string, replicas ...Search) Result {
    c := make(chan Result)
    searchReplica := func(i int) {
        c <- replicas[i](query)
    }
    for i := range replicas {
        go searchReplica(i)
    }
    return <-c
}
This function is from Rob Pike's 2012 slides on Go concurrency patterns. I think there is a resource leak in this function. Since the function returns after the first send/receive pair happens on channel c, the other goroutines are stuck trying to send on channel c. So there is a resource leak here. Can anyone who knows Go well confirm this? And how can I detect this leak, with what kind of Go tooling?
Yes, you are right (for reference, here's the link to the slide). In the above code only one launched goroutine will terminate, the rest will hang on attempting to send on channel c.
Detailing:
c is an unbuffered channel
there is only a single receive operation, in the return statement
A new goroutine is launched for each element of replicas
each launched goroutine sends a value on channel c
since there is only 1 receive from it, one goroutine will be able to send a value on it, the rest will block forever
Note that depending on the number of elements of replicas (which is len(replicas)):
if it's 0: First() would block forever (no one sends anything on c)
if it's 1: would work as expected
if it's > 1: then it leaks resources
The following modified version will not leak goroutines, by using a non-blocking send (with the help of select with default branch):
searchReplica := func(i int) {
    select {
    case c <- replicas[i](query):
    default:
    }
}
The first goroutine ready with the result will send it on channel c which will be received by the goroutine running First(), in the return statement. All other goroutines when they have the result will attempt to send on the channel, and "seeing" that it's not ready (send would block because nobody is ready to receive from it), the default branch will be chosen, and thus the goroutine will end normally.
Another way to fix it would be to use a buffered channel:
c := make(chan Result, len(replicas))
And this way the send operations would not block. And of course only one (the first sent) value will be received from the channel and returned.
Note that the solution with any of the above fixes would still block if len(replicas) is 0. To avoid that, First() should check this explicitly, e.g.:
func First(query string, replicas ...Search) Result {
    if len(replicas) == 0 {
        return Result{}
    }
    // ...rest of the code...
}
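Putting the two fixes together, a leak-free version might look like the sketch below; the Result and Search types and the fakeSearch helper are assumptions made only so the example is runnable, not the definitions from the slides:
package main

import (
    "fmt"
    "math/rand"
    "time"
)

type Result struct{ Title string }
type Search func(query string) Result

// fakeSearch stands in for a real search backend, purely for illustration.
func fakeSearch(kind string) Search {
    return func(query string) Result {
        time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
        return Result{Title: fmt.Sprintf("%s result for %q", kind, query)}
    }
}

// First returns as soon as the fastest replica answers. The buffered channel
// lets the slower goroutines complete their sends and exit, and the early
// return handles the zero-replica case.
func First(query string, replicas ...Search) Result {
    if len(replicas) == 0 {
        return Result{}
    }
    c := make(chan Result, len(replicas)) // room for every reply
    for i := range replicas {
        go func(i int) { c <- replicas[i](query) }(i)
    }
    return <-c
}

func main() {
    fmt.Println(First("golang", fakeSearch("web"), fakeSearch("image"), fakeSearch("video")))
}
The early return avoids the blocking case, and the buffer means the goroutines that lose the race simply finish their sends and are garbage collected.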
Some tools / resources to detect leaks:
https://github.com/fortytw2/leaktest
https://github.com/zimmski/go-leak
https://medium.com/golangspec/goroutine-leak-400063aef468
https://blog.minio.io/debugging-go-routine-leaks-a1220142d32c

Multiple receivers on a single channel. Who gets the data?

Unbuffered channels block receivers until data is available on the channel. It's not clear to me how this blocking behaves with multiple receivers on the same channel (say when using goroutines). I am sure they would all block as long as there is no data sent on that channel.
But what happens once I send a single value to that channel? Which receiver/goroutine will get the data and therefore unblock? All of them? The first in line? Random?
A single random (non-deterministic) one will receive it.
See the language spec:
Execution of a "select" statement proceeds in several steps:
1. For all the cases in the statement, the channel operands of receive operations and the channel and right-hand-side expressions of send statements are evaluated exactly once, in source order, upon entering the "select" statement. The result is a set of channels to receive from or send to, and the corresponding values to send. Any side effects in that evaluation will occur irrespective of which (if any) communication operation is selected to proceed. Expressions on the left-hand side of a RecvStmt with a short variable declaration or assignment are not yet evaluated.
2. If one or more of the communications can proceed, a single one that can proceed is chosen via a uniform pseudo-random selection. Otherwise, if there is a default case, that case is chosen. If there is no default case, the "select" statement blocks until at least one of the communications can proceed.
3. Unless the selected case is the default case, the respective communication operation is executed.
4. If the selected case is a RecvStmt with a short variable declaration or an assignment, the left-hand side expressions are evaluated and the received value (or values) are assigned.
5. The statement list of the selected case is executed.
By default the goroutine communication is synchronous and unbuffered: sends do not complete until there is a receiver to accept the value. There must be a receiver ready to receive data from the channel and then the sender can hand it over directly to the receiver.
So channel send/receive operations block until the other side is ready:
1. A send operation on a channel blocks until a receiver is available for the same channel: if there’s no recipient for the value on ch, no other value can be put in the channel. And the other way around: no new value can be sent in ch when the channel is not empty! So the send operation will wait until ch becomes available again.
2. A receive operation for a channel blocks until a sender is available for the same channel: if there is no value in the channel, the receiver blocks.
This is illustrated in the below example:
package main

import "fmt"

func main() {
    ch1 := make(chan int)
    go pump(ch1)       // pump hangs
    fmt.Println(<-ch1) // prints only 0
}

func pump(ch chan int) {
    for i := 0; ; i++ {
        ch <- i
    }
}
Because there is no further receiver, the pump goroutine hangs and only the first number is printed.
To work around this we need to define a new goroutine which reads from the channel in an infinite loop.
func receive(ch chan int) {
    for {
        fmt.Println(<-ch)
    }
}
Then in main():
func main() {
    ch := make(chan int)
    go pump(ch)
    receive(ch)
}
If the program allows multiple goroutines to receive on a single channel, the sender is effectively addressing a pool of receivers: each receiver should be equally able to process the data. So it does not matter what mechanism the Go runtime uses to decide which of the many waiting receivers will run (cf. https://github.com/golang/go/issues/247). But only ONE will run for each sent item if the channel is unbuffered.
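A quick demonstration of that last point (a sketch, not part of the original answer): three receivers share one unbuffered channel, and each sent value is delivered to exactly one of them:
package main

import (
    "fmt"
    "sync"
)

func main() {
    ch := make(chan int)
    var wg sync.WaitGroup

    for id := 1; id <= 3; id++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for v := range ch { // each receiver competes for values
                fmt.Printf("receiver %d got %d\n", id, v)
            }
        }(id)
    }

    for i := 0; i < 6; i++ {
        ch <- i // each send unblocks exactly one receiver
    }
    close(ch)
    wg.Wait()
}
Run it a few times and the distribution of values across receivers changes, but each value always appears exactly once.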
There has been some discussion about this.
But what is established in the Go Memory Model is that it will be at most one of them.
Each send on a particular channel is matched to a corresponding receive from that channel, usually in a different goroutine.
That isn't as clear-cut as I would like, but further down they give this example of a semaphore implementation:
var limit = make(chan int, 3)

func main() {
    for _, w := range work {
        go func(w func()) {
            limit <- 1
            w()
            // if it were possible for more than one goroutine to receive
            // from a single send, it would be possible for this to release
            // more than one "lock", making it an invalid semaphore
            // implementation
            <-limit
        }(w)
    }
    select {}
}

some questions about channel in go

1 - What is the condition that makes the range over a chan break?
deliveries <-chan amqp.Delivery
for d := range deliveries {
    // ...
}
If there is no more data on the deliveries chan for a few minutes, will it break?
Is the code above the same as the code below?
deliveries <-chan amqp.Delivery
for {
    d, ok := <-deliveries
    if !ok {
        break
    }
    // code
}
2 - Why does a receive from a chan return not only data but also a status? And what does the "ok" mean?
3 - How does the chan realize this? "ok" is a status about the sender side, so why can the channel return the "ok"?
I will answer questions 2 and 3 first because the answer provides context for my answer to question 1.
2, 3) The builtin function close(c) records that no more values will be sent to the channel c.
The second result in a receive expression is a bool indicating if the operation was successful. The second result is true if a sent value was received or false if the zero value was received because the channel was closed.
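A small sketch of that behavior (an assumed example, not from the original answer):
package main

import "fmt"

func main() {
    c := make(chan int, 1)
    c <- 42
    close(c)

    v, ok := <-c
    fmt.Println(v, ok) // 42 true: a sent value was received

    v, ok = <-c
    fmt.Println(v, ok) // 0 false: zero value, because the channel is closed and drained
}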
1) Range over a channel receives values sent on the channel until the channel is closed.
The following loops are very similar. They both receive values until the channel is closed.
for v := range c {
    // code
}

for {
    v, ok := <-c
    if !ok {
        break
    }
    // code
}
The main difference between these loops is the scope of the variable v. In the first loop, v is declared once by the range clause; in the second, v is declared inside the loop body on every iteration. This distinction is important if you use a closure and goroutine in the loop.
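To sketch why that matters (note that since Go 1.22 the range-clause variable is also per-iteration, so the difference mainly shows up on older toolchains): with the explicit receive, v is a fresh variable in every iteration, so goroutines launched inside the loop each capture their own value.
package main

import (
    "fmt"
    "sync"
)

func main() {
    c := make(chan int, 3)
    for i := 1; i <= 3; i++ {
        c <- i
    }
    close(c)

    var wg sync.WaitGroup
    for {
        v, ok := <-c // v is declared inside the loop body, fresh each iteration
        if !ok {
            break
        }
        wg.Add(1)
        go func() {
            defer wg.Done()
            // each goroutine prints the value received in its own iteration,
            // even on Go versions before 1.22, where `for v := range c`
            // would have shared a single v across iterations
            fmt.Println(v)
        }()
    }
    wg.Wait()
}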
1) Code 1 and 2 differ: The second also fetches ok which indicates whether the channel was closed by the sender. This makes your code more robust.
2) Channels can only transfer one type of message. If you need a status code you have to put it inside the message.
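One common way to do that (a sketch, not from the original answer) is to bundle the payload and its status or error into a single struct and send that over the channel:
package main

import (
    "errors"
    "fmt"
)

// msg bundles the payload with its own status/error.
type msg struct {
    body string
    err  error
}

func main() {
    c := make(chan msg, 2)
    c <- msg{body: "ok payload"}
    c <- msg{err: errors.New("something went wrong")}
    close(c)

    for m := range c {
        if m.err != nil {
            fmt.Println("error:", m.err)
            continue
        }
        fmt.Println("got:", m.body)
    }
}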
