What is the simplest way of terminating (poisoning) a producer process in occam? - occam-pi

My occam-pi application has a long running producer process defined as follows:
PROC producer (VAL INT start, step, CHAN OF INT c!)
  INT count:
  SEQ
    count := start
    WHILE TRUE
      SEQ
        c ! count
        count := count + step
:
It sends values on the channel c, starting at start and increasing by step. A full example is available here.
This works great, and I'm led to believe that infinite loops are idiomatic in CSP. The problem arises when my consuming algorithm is finished. In this example, a deadlock occurs once the consumer finishes.
The TAGGED.INT protocol described here attempts to solve the problem of shutting down a network of processes. However, from my current understanding, there is no simple method for terminating a producer whose primary job is sending on a channel. It feels like the only way to halt a producer is to use some sort of control channel and black-hole the output:
PROTOCOL CONTROL
  CASE
    poison
:

PROTOCOL TAGGED.INT
  CASE
    normal; INT
    poison
:
PROC producer (VAL INT start, step, CHAN OF TAGGED.INT c!, CHAN OF CONTROL control?)
  INT count:
  INITIAL BOOL running IS TRUE:
  SEQ
    count := start
    WHILE running
      SEQ
        PRI ALT
          control ? poison
            SEQ
              running := FALSE
              c ! poison -- necessary, only to kill the black hole process
          SKIP
            SEQ
              c ! normal; count
              count := count + step
:
A full working example is available here. The problem with this is that the code is much less readable - subjective, I know, but important for software engineering - and the original intent is now obscured by shutdown logic. It seems contrary to Occam's razor!
With JCSP, C++CSP2 and python-csp, a channel can be explicitly poisoned in order to shut down a network of processes. For some reason, wrangling occam into doing the same pollutes the code with shutdown logic and seems illogical.
So the question is, is there a method of terminating a producer process without the use of an explicit control channel as in the example?
EDIT:
There is potentially more information on this topic contained within this mailing list archive (Poison), though it is quite old (> 10 years). So the question still stands: has anything changed since then, or is this the best way of achieving 'process termination' in occam-pi?

So the question is, is there a method of terminating a producer process without the use of an explicit control channel as in the example?
As long as the decision to terminate originates from outside the producer process, there is no other way than to use a (control) channel. This is because, in the distributed-memory model, the information has to be communicated via a message.
That said, the poisoning method you refer to is a general method, and it can be made to work in this case too. The reason it pollutes the solution is that the original (non-terminating) producer process only sends messages, but does not receive any. For the poisoning method to work, the producer has to be prepared to accept messages, and - what is even more inconvenient - the consumer has to be prepared to deal with a sluggish producer.
I would consider using a different technique to solve the problem: after each message sent, the producer gets a signal telling it whether the consumer wants it to continue or not. This results in more traffic, but the structure of the solution is clearer this way.
Occam 2.1 code:
PROC producer( VAL INT start, step, CHAN INT data, CHAN BOOL control)
  BOOL running:
  INT count:
  SEQ
    count, running := start, TRUE
    WHILE running
      SEQ
        data ! count
        control ? running
        count := count + step
: -- producer

PROC main( CHAN BYTE inp, out, err)
  CHAN INT data:
  CHAN BOOL control:
  VAL INT amount IS 10:
  INT val:
  PAR
    producer( 0, 4, data, control)
    SEQ n = 1 FOR amount
      SEQ
        data ? val
        control ! n < amount
        out.int( val, 0, out)
        out.string( "*n", 0, out)
: -- main
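For comparison, the same acknowledge-per-message handshake can be sketched in Go (a minimal, illustrative translation; the names and the fixed count of ten messages are assumptions, not part of the occam answer):

package main

import "fmt"

// producer sends a value, then waits for the consumer to say
// whether it should continue; this mirrors the occam handshake.
func producer(start, step int, data chan<- int, control <-chan bool) {
    count, running := start, true
    for running {
        data <- count
        running = <-control // consumer's acknowledgement/continue signal
        count += step
    }
}

func main() {
    const amount = 10
    data := make(chan int)
    control := make(chan bool)
    go producer(0, 4, data, control)
    for n := 1; n <= amount; n++ {
        fmt.Println(<-data)
        control <- n < amount // false on the last round terminates the producer
    }
}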

Related

Golang concurrency access to slice

My use case:
append items(small struct) to a slice in the main process
every 100 items I want to process items in a processor go routine (then pop them from slice)
items come in very fast, continuously
I read that if at least one of two or more goroutines using a variable (a slice in my case) performs a write, one must handle the concurrency (with a mutex or similar).
My questions:
If I do not protect the r/w on the slice with a mutex, do I risk problems? (i.e. item 101 arrives while the processor is working on items 1-100)
What is the best concurrency technique to keep the incoming item flow "fluent"?
Disclaimer:
I do not want any event queueing, I need to process items in a given "bundle" size
Actually, you don't need a lock here. Here is working code:
package main

import (
    "fmt"
    "sync"
)

type myStruct struct {
    Cpt int
}

func main() {
    buf := make([]myStruct, 0, 100)
    wg := sync.WaitGroup{}
    // Main process: appending ten million times.
    // No lock needed: only this goroutine ever mutates buf.
    for i := 0; i < 10e6; i++ {
        // Appending
        buf = append(buf, myStruct{Cpt: i})
        // Did we reach 100 items?
        if len(buf) >= 100 {
            // Yes we did. Creating a slice from the buffer
            processSlice := make([]myStruct, 100)
            copy(processSlice, buf[0:100])
            // Emptying buffer
            buf = buf[:0]
            // Running processor in parallel
            // Adding one element to waitgroup
            wg.Add(1)
            go processor(&wg, processSlice)
        }
    }
    // Waiting for all processors to finish
    wg.Wait()
}

func processor(wg *sync.WaitGroup, processSlice []myStruct) {
    // Removing one element from waitgroup when done
    defer wg.Done()
    // Doing some processing
    fmt.Printf("Processing items from %d to %d\n", processSlice[0].Cpt, processSlice[99].Cpt)
}
A few notes about your problem and this solution:
If you want a minimal stop time in your feeding process (e.g., to respond as fast as possible to an HTTP call), then the minimal thing to do is just the copy part, and then run the processor function in a goroutine. To do so, you have to create a new process slice dynamically and copy the content of your buffer into it.
The sync.WaitGroup object is needed to ensure that the last processor function has ended before exiting the program.
Note that this is not a perfect solution: if you run this pattern for a long time, and the input data comes in more than 100 times faster than the processor can handle the slices, then there are going to be:
More and more processSlice instances in RAM -> risk of filling the RAM and hitting the swap
More and more parallel processor goroutines -> the same risk for the RAM, plus more work in flight at once, making each call slower, so the problem feeds on itself.
This will end with the system crashing at some point.
The solution for this is to have a limited number of workers, which guarantees there is no crash. However, when all of those workers are busy, the feeding process will have to wait, which is not what you want. Still, this is a good way to absorb a load whose intensity changes over time, as sketched below.
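Here is a minimal sketch of that bounded-worker variant (the worker count, bundle size and totals are illustrative assumptions): a fixed pool drains a buffered channel of bundles, so memory stays capped, and the feeder blocks only once every worker is busy and the buffer is full.

package main

import (
    "fmt"
    "sync"
)

func main() {
    const numWorkers = 4
    bundles := make(chan []int, numWorkers) // bounded queue of pending bundles

    var wg sync.WaitGroup
    for w := 0; w < numWorkers; w++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for b := range bundles {
                fmt.Printf("worker %d: processing %d..%d\n", id, b[0], b[len(b)-1])
            }
        }(w)
    }

    buf := make([]int, 0, 100)
    for i := 0; i < 1000; i++ {
        buf = append(buf, i)
        if len(buf) == 100 {
            bundle := make([]int, 100)
            copy(bundle, buf)
            buf = buf[:0]
            bundles <- bundle // blocks when the buffer is full and workers are busy
        }
    }
    close(bundles)
    wg.Wait()
}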
In general, just remember that if you feed in more data than you can process in the same amount of time, your program will reach a limit at some point where it can't keep up, so it has to slow down input acquisition or crash. This is mathematical!

Go Concurrency Circular Logic

I'm just getting into concurrency in Go and trying to create a dispatch goroutine that will send jobs to a worker pool listening on the jobchan channel. If a message comes into my dispatch function via the dispatchchan channel while my other goroutines are busy, the message is appended onto the stack slice in the dispatcher, and the dispatcher will try to send it again later, when a worker becomes available and/or no more messages are received on the dispatchchan. This is because the dispatchchan and the jobchan are unbuffered, the goroutines the workers are running will append further messages to the dispatcher up to a certain point, and I don't want the workers blocked waiting on the dispatcher, creating a deadlock. Here's the dispatcher code I've come up with so far:
func dispatch() {
    var stack []string
    acount := 0
    for {
        select {
        case d := <-dispatchchan:
            stack = append(stack, d)
        case c := <-mw:
            acount = acount + c
        case jobchan <- stack[0]:
            if len(stack) > 1 {
                stack[0] = stack[len(stack)-1]
                stack = stack[:len(stack)-1]
            } else {
                stack = nil
            }
        default:
            if acount == 0 && len(stack) == 0 {
                close(jobchan)
                close(dispatchchan)
                close(mw)
                wg.Done()
                return
            }
        }
    }
}
Complete example at https://play.golang.wiki/p/X6kXVNUn5N7
The mw channel is a buffered channel with the same length as the number of worker goroutines. It acts as a semaphore for the worker pool. If a worker routine is doing [m]eaningful [w]ork, it throws int 1 on the mw channel, and when it finishes its work and goes back into the for loop listening to the jobchan, it throws int -1 on mw. This way the dispatcher knows whether there's any work being done by the worker pool, or whether the pool is idle. If the pool is idle and there are no more messages on the stack, then the dispatcher closes the channels and returns control to the main func.
This is all good, but the issue I have is that the stack itself could be zero length, so in the case where I attempt to send stack[0] to the jobchan, if the stack is empty, I get an out-of-bounds error. What I'm trying to figure out is how to ensure that when I hit that case, stack[0] has a value in it. I don't want that case to send an empty string to the jobchan.
Any help is greatly appreciated. If there's a more idiomatic concurrency pattern I should consider, I'd love to hear about it. I'm not 100% sold on this solution, but it's the farthest I've gotten so far.
This is all good but the issue I have is that the stack itself could be zero length so the case where I attempt to send stack[0] to the jobchan, if the stack is empty, I get an out of bounds error.
I can't reproduce it with your playground link, but it's believable, because at least one worker gofunc might have been ready to receive on that channel.
My output was Msgcnt: 0, which is also easily explained: the worker gofuncs might not have been ready to receive on jobchan when dispatch() ran its select. The order of these operations is not defined.
trying to create a dispatch go routine that will send jobs to a worker pool listening on the jobchan channel
A channel needs no dispatcher. A channel is the dispatcher.
If a message comes into my dispatch function via the dispatchchan channel and my other go routines are busy, the message is [...] will [...] send again later when a worker becomes available, [...] or no more messages are received on the dispatchchan.
With a few creative edits, it was easy to turn that into something close to the definition of a buffered channel. It can be read from immediately, or it can take up to some "limit" of messages that can't be immediately dispatched. You do define limit, though it's not used elsewhere within your code.
In any function, defining a variable you don't read will result in a compile-time error like limit declared but not used. This stricture improves code quality and helps identify typos. But at package scope, you've gotten away with defining the unused limit as a "global" and thus avoided a useful error - you haven't limited anything.
Don't use globals. Use passed parameters to define scope, because the definition of scope is tantamount to functional concurrency as expressed with the go keyword. Pass the relevant channels defined in local scope to functions defined at package scope so that you can easily track their relationships. And use directional channels to enforce the producer/consumer relationship between your functions. More on this later.
Going back to "limit", it makes sense to limit the quantity of jobs you're queueing because all resources are limited, and accepting more messages than you have any expectation of processing requires more durable storage than process memory provides. If you don't feel obligated to fulfill those requests no matter what, don't accept "too many" of them in the first place.
So then, what purpose do dispatchchan and dispatch() serve? To store a limited number of pending requests, if any, before they can be processed, and then to send them to the next available worker? That's exactly what a buffered channel is for, as sketched below.
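A minimal sketch of that replacement, reusing the question's names (the worker count and sample jobs are illustrative): the buffered channel itself holds up to limit pending jobs, so dispatch() and dispatchchan disappear entirely.

package main

import (
    "fmt"
    "sync"
)

func main() {
    const limit = 10
    jobchan := make(chan string, limit) // the "dispatcher": buffers pending jobs

    var wg sync.WaitGroup
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for job := range jobchan {
                fmt.Printf("worker %d got %s\n", id, job)
            }
        }(i)
    }

    for _, job := range []string{"a", "b", "c", "d"} {
        jobchan <- job // blocks only once limit jobs are already pending
    }
    close(jobchan) // the sender knows when there is no more work
    wg.Wait()
}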
Circular Logic
Who "knows" when your program is done? main() provides the initial input, but you close all 3 channels in dispatch():
close(jobchan)
close(dispatchchan)
close(mw)
Your workers write to their own job queue so only when the workers are done writing to it can the incoming job queue be closed. However, individual workers also don't know when to close the jobs queue because other workers are writing to it. Nobody knows when your algorithm is done. There's your circular logic.
The mw channel is a buffered channel the same length as the number of worker go routines. It acts as a semaphore for the worker pool.
There's a race condition here. Consider the case where all n workers have just received the last n jobs. They've each read from jobchan and they're checking the value of ok. dispatcher proceeds to run its select. Nobody is writing to dispatchchan or reading from jobchan right now, so the default case is immediately matched. len(stack) is 0 and there's no current job, so dispatcher closes all channels, including mw. At some point thereafter, a worker tries to write to a closed channel and panics.
So finally I'm ready to provide some code, but I have one more problem: I don't have a clear problem statement to write code around.
I'm just getting into concurrency in Go and trying to create a dispatch go routine that will send jobs to a worker pool listening on the jobchan channel.
Channels between goroutines are like the teeth that synchronize gears. But to what end do the gears turn? You're not trying to keep time, nor construct a wind-up toy. Your gears could be made to turn, but what would success look like? Their turning?
Let's try to define a more specific use case for channels: given an arbitrarily long set of durations coming in as strings on standard input*, sleep that many seconds in one of n workers. So that we actually have a result to return, we'll say each worker will return the start and end time the duration was run for.
So that it can run in the playground, I'll simulate standard input with a hard-coded byte buffer.
package main

import (
    "bufio"
    "bytes"
    "fmt"
    "os"
    "strings"
    "sync"
    "time"
)

type SleepResult struct {
    worker_id int
    duration  time.Duration
    start     time.Time
    end       time.Time
}

func main() {
    var num_workers = 2
    workchan := make(chan time.Duration)
    resultschan := make(chan SleepResult)
    var wg sync.WaitGroup
    var resultswg sync.WaitGroup
    resultswg.Add(1)
    go results(&resultswg, resultschan)
    for i := 0; i < num_workers; i++ {
        wg.Add(1)
        go worker(i, &wg, workchan, resultschan)
    }
    // playground doesn't have stdin
    var input = bytes.NewBufferString(
        strings.Join([]string{
            "3ms",
            "1 seconds",
            "3600ms",
            "300 ms",
            "5s",
            "0.05min"}, "\n") + "\n")
    var scanner = bufio.NewScanner(input)
    for scanner.Scan() {
        text := scanner.Text()
        if dur, err := time.ParseDuration(text); err != nil {
            fmt.Fprintln(os.Stderr, "Invalid duration", text)
        } else {
            workchan <- dur
        }
    }
    close(workchan) // we know when our inputs are done
    wg.Wait()       // and when our jobs are done
    close(resultschan)
    resultswg.Wait()
}

func results(wg *sync.WaitGroup, resultschan <-chan SleepResult) {
    for res := range resultschan {
        fmt.Printf("Worker %d: %s : %s => %s\n",
            res.worker_id, res.duration,
            res.start.Format(time.RFC3339Nano), res.end.Format(time.RFC3339Nano))
    }
    wg.Done()
}

func worker(id int, wg *sync.WaitGroup, jobchan <-chan time.Duration, resultschan chan<- SleepResult) {
    var res = SleepResult{worker_id: id}
    for dur := range jobchan {
        res.duration = dur
        res.start = time.Now()
        time.Sleep(res.duration)
        res.end = time.Now()
        resultschan <- res
    }
    wg.Done()
}
Here I use 2 wait groups: one for the workers, one for the results. This makes sure I'm done writing all the results before main() ends. I keep my functions simple by having each function do exactly one thing at a time: main reads inputs, parses durations from them, and sends them off to the next worker. The results function collects results and prints them to standard output. The worker does the sleeping, reading from jobchan and writing to resultschan.
workchan can be buffered (or not, as in this case); it doesn't matter because the input will be read at the rate it can be processed. We can buffer as much input as we want, but we can't buffer an infinite amount. I've set channel sizes as big as 1e6 - but a million is a lot less than infinite. For my use case, I don't need to do any buffering at all.
main knows when the input is done and can close workchan. main also knows when the jobs are done (wg.Wait()) and can then close the results channel. Closing these channels is an important signal to the worker and results goroutines - they can distinguish between a channel that is empty and a channel that is guaranteed not to have any new additions.
for job := range jobchan {...} is shorthand for your more verbose:
for {
    job, ok := <-jobchan
    if !ok {
        wg.Done()
        return
    }
    ...
}
Note that this code creates 2 workers, but it could create 20 or 2000, or even 1. The program functions regardless of how many workers are in the pool. It can handle any volume of input (though interminable input of course leads to an interminable program). It does not create a cyclic loop of output to input. If your use case requires jobs to create more jobs, that's a more challenging scenario that can typically be avoided with careful planning.
I hope this gives you some good ideas about how you can better use concurrency in your Go applications.
https://play.golang.wiki/p/cZuI9YXypxI

Distribute the same keyword to multiple goroutines

I have something like this mock (code below), which distributes the same keyword to multiple goroutines. The goroutines all take different amounts of time doing things with the keyword, but they can operate independently of each other, so they don't need any synchronization. The solution given below for distribution clearly synchronizes the goroutines.
I just want to toss this idea out there to see how other people would deal with this type of distribution, as I assume it is fairly common and someone else has thought about it before.
Here are some other solutions I have thought up and why they seem kinda meh to me:
One goroutine for each keyword
Each time a new keyword comes in spawn a goroutine to handle the distribution
Give the keyword a bitmask or something for each goroutine to update
This way once all of the workers have touched the keyword it can be deleted and we can move on
Give each worker its own stack to work off of
This seems kinda appealing - just give each worker a stack to add each keyword to - but we would eventually run into the problem of a ton of memory being taken up, since this is planned to run for so long
The problem with all of these is that my code is supposed to run for a long time, unwatched, and that would lead to either a huge build-up of keywords or of goroutines, due to the lazy worker taking longer than the others. It almost seems like it'd be nice to give each worker its own Amazon SQS queue, or to implement something similar to that myself.
EDIT:
Store the keyword outside the program
I just thought of doing it this way instead: I could perhaps just store the keyword outside the program until all the workers have grabbed it, and then delete it once it has been used up. This actually sits OK with me; I don't have a problem with using up disk space
Anyway here is an example of the approach that waits for all to finish:
package main

import (
    "flag"
    "fmt"
    "math/rand"
    "os"
    "os/signal"
    "strconv"
    "time"
)

var (
    shutdown chan struct{}
    count    = flag.Int("count", 5, "number to run")
)

type sleepingWorker struct {
    name  string
    sleep time.Duration
    ch    chan int
}

func NewQuicky(n string) sleepingWorker {
    var rq sleepingWorker
    rq.name = n
    rq.ch = make(chan int)
    rq.sleep = time.Duration(rand.Intn(5)) * time.Second
    return rq
}

func (r sleepingWorker) Work() {
    for {
        fmt.Println(r.name, "is about to sleep, number:", <-r.ch)
        time.Sleep(r.sleep)
    }
}

func NewLazy() sleepingWorker {
    var rq sleepingWorker
    rq.name = "Lazy slow worker"
    rq.ch = make(chan int)
    rq.sleep = 20 * time.Second
    return rq
}

func distribute(gen chan int, workers ...sleepingWorker) {
    for kw := range gen {
        for _, w := range workers {
            fmt.Println("sending keyword to:", w.name)
            select {
            case <-shutdown:
                return
            case w.ch <- kw:
                fmt.Println("keyword sent to:", w.name)
            }
        }
    }
}

func main() {
    flag.Parse()
    shutdown = make(chan struct{})
    go func() {
        c := make(chan os.Signal, 1)
        signal.Notify(c, os.Interrupt)
        <-c
        close(shutdown)
    }()
    x := make([]sleepingWorker, *count)
    for i := 0; i < (*count)-1; i++ {
        x[i] = NewQuicky(strconv.Itoa(i))
        go x[i].Work()
    }
    x[(*count)-1] = NewLazy()
    go x[(*count)-1].Work()
    gen := make(chan int)
    go distribute(gen, x...)
    go func() {
        i := 0
        for {
            i++
            select {
            case <-shutdown:
                return
            case gen <- i:
            }
        }
    }()
    <-shutdown
    os.Exit(0)
}
Let's assume I understand the problem correctly:
There's not too much you can do about it, I'm afraid. You have limited resources (assuming all resources are limited), so if data is written to your input faster than you process it, some synchronisation will be needed. In the end, the whole process will run only as quickly as the slowest worker anyway.
If you really need data from the workers available as soon as possible, the best you can do is add some kind of buffering. But the buffer must be limited in size (even if you run in the cloud, it would be limited by your wallet), so assuming a never-ending torrent of input, it will only postpone the choke until some point in the future when you will start seeing "synchronisation" again.
All the ideas you presented in your question are based on buffering the data. Even if you run a routine for every keyword-worker pair, this will buffer one element in every routine, and, unless you implement a limit on the total number of routines, you'll run out of memory. And even if you always left some room for the quickest worker to spawn a new routine, the input queue wouldn't be able to deliver new items, as it would be choked on the slowest worker.
Buffering would solve your problem if, on average, your input is slower than the processing time but you have occasional spikes. If your buffer is big enough, you can then accommodate the increase in throughput, and maybe your quickest worker won't notice a thing.
Solution?
As Go comes with buffered channels, this is the easiest thing to implement (it was also suggested by icza in a comment). Just give each worker a buffer. If you know which worker is the slowest, you can give it a bigger buffer. In this scenario you're limited by the memory of your machine; a sketch follows.
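A minimal standalone sketch of the per-worker-buffer idea (the buffer sizes, rates and worker roles are illustrative assumptions): the sender only blocks on the slow worker once that worker's buffer fills.

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    fast := make(chan int, 10)
    slow := make(chan int, 100) // the known-slow worker gets the bigger buffer

    var wg sync.WaitGroup
    wg.Add(2)
    go func() {
        defer wg.Done()
        for kw := range fast {
            fmt.Println("fast worker got", kw)
        }
    }()
    go func() {
        defer wg.Done()
        for kw := range slow {
            fmt.Println("slow worker got", kw)
            time.Sleep(10 * time.Millisecond) // simulate the lazy worker
        }
    }()

    for i := 0; i < 20; i++ {
        fast <- i
        slow <- i // blocks only if the 100-slot buffer is full
    }
    close(fast)
    close(slow)
    wg.Wait()
}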
If you're not happy with the single-machine memory limit, then yes, per one of your ideas, you can "simply" store the buffer (queue) for each worker on the hard drive. But this is also limited, and it just postpones the blocking scenario until later. This is essentially the same as your Amazon SQS proposal (you could keep the buffer in the cloud, but you need to either limit it reasonably or prepare for the bill).
One final note: depending on the system you're building, it might not be a good idea to buffer items on such a massive scale, allowing a backlog to build up for the slower workers - it's often not desirable to have a worker hours, days, or weeks behind the input flow, and this is what would happen with an infinite buffer. The real answer then would be: improve your slowest worker to process things faster. (And add some buffering to improve the experience.)

Behavior of sleep and select in go

I'm trying to understand a bit more about what happens under the surface during various blocking/waiting types of operations in Go. Take the following example:
otherChan = make(chan int)
t = time.NewTicker(time.Second)

for {
    doThings()

    // OPTION A: Sleep
    time.Sleep(time.Second)

    // OPTION B: Blocking ticker
    <-t.C

    // OPTION C: Select multiple
    select {
    case <-otherChan:
    case <-t.C:
    }
}
From a low level view (system calls, cpu scheduling) what is the difference between these while waiting?
My understanding is that time.Sleep leaves the CPU free to perform other tasks until the specified time has elapsed. Does the blocking ticker <- t.C do the same? Is the processor polling the channel or is there an interrupt involved? Does having multiple channels in a select change anything?
In other words, assuming that otherChan never had anything put into it, would these three options execute in an identical way, or would one be less resource intensive than the others?
That's a very interesting question, so I did a cd into my Go source tree to start looking.
time.Sleep
time.Sleep is defined like this:
// src/time/sleep.go
// Sleep pauses the current goroutine for at least the duration d.
// A negative or zero duration causes Sleep to return immediately.
func Sleep(d Duration)
No body, and no definition in an OS-specific time_unix.go!?! A little searching, and the answer is that time.Sleep is actually defined in the runtime:
// src/runtime/time.go
// timeSleep puts the current goroutine to sleep for at least ns nanoseconds.
//go:linkname timeSleep time.Sleep
func timeSleep(ns int64) {
// ...
}
Which in retrospect makes a lot of sense, as it has to interact with the goroutine scheduler. It ends up calling goparkunlock, which "puts the goroutine into a waiting state". time.Sleep creates a runtime.timer with a callback function that is called when the timer expires - that callback function wakes up the goroutine by calling goready. See next section for more details on the runtime.timer.
time.NewTicker
time.NewTicker creates a *Ticker (time.Tick is a helper function that does the same thing but directly returns *Ticker.C, the ticker's receive channel, instead of the *Ticker, so you could've written your code with it instead), which has similar hooks into the runtime: a ticker is a struct that holds a runtimeTimer and a channel on which to signal the ticks.
runtimeTimer is defined in the time package but it must be kept in sync with timer in src/runtime/time.go, so it is effectively a runtime.timer. Remember that in time.Sleep, the timer had a callback function to wake up the sleeping goroutine? In the case of *Ticker, the timer's callback function sends the current time on the ticker's channel.
Then the real waiting/scheduling happens on the receive from the channel, which is essentially the same as your select statement (unless otherChan sends something before the tick), so let's look at what happens on a blocking receive.
<- chan
Channels are implemented (now in Go!) in src/runtime/chan.go, by the hchan struct. Channel operations have matching functions, and a receive is implemented by chanrecv:
// chanrecv receives on channel c and writes the received data to ep.
// ep may be nil, in which case received data is ignored.
// If block == false and no elements are available, returns (false, false).
// Otherwise, if c is closed, zeros *ep and returns (true, false).
// Otherwise, fills in *ep with an element and returns (true, true).
func chanrecv(t *chantype, c *hchan, ep unsafe.Pointer, block bool) (selected, received bool) {
// ...
}
This part has a lot of different cases, but in your example it is a blocking receive from an asynchronous channel (time.NewTicker creates a channel with a buffer of 1), and it ends up calling... goparkunlock again, to allow other goroutines to proceed while this one is stuck waiting.
So...
In all cases, the goroutine ends up being parked (which is not really shocking - it can't make progress, so it has to leave its thread free for a different goroutine, if there is one). A glance at the code suggests that the channel has a bit more overhead than a straight-up time.Sleep. However, it allows far more powerful patterns, such as the last one in your example: the goroutine can be woken up by another channel, whichever comes first.
To answer your other question regarding polling: the timers are managed by a goroutine that sleeps until the next timer in its queue, so it does work only when it knows a timer has to be triggered. When the next timer expires, it wakes up the goroutine that called time.Sleep (or sends the value on the ticker's channel - it does whatever the callback function does).
There's no polling in channels either; the receive is unblocked when a send is made on the channel, in chansend in the chan.go file:
// wake up a waiting receiver
sg := c.recvq.dequeue()
if sg != nil {
    recvg := sg.g
    unlock(&c.lock)
    if sg.releasetime != 0 {
        sg.releasetime = cputicks()
    }
    goready(recvg, 3)
} else {
    unlock(&c.lock)
}
That was an interesting dive into Go's source code, very interesting question! Hope I answered at least part of it!

How do I find out if a goroutine is done, without blocking?

All the examples I've seen so far involve blocking to get the result (via the <-chan operator).
My current approach involves passing a pointer to a struct:
type goresult struct {
    result   resultType
    finished bool
}
which the goroutine writes upon completion. Then it's a simple matter of checking finished whenever convenient. Do you have better alternatives?
What I'm really aiming for is a Qt-style signal-slot system. I have a hunch the solution will look almost trivial (chans have lots of unexplored potential), but I'm not yet familiar enough with the language to figure it out.
You can use the "comma, ok" pattern to detect a closed channel (see the "Effective Go" document):
foo := <-ch      // This blocks.
foo, ok := <-ch  // This blocks too; ok becomes false once ch is closed and drained.
To check without blocking, use a select with a default case, as below.
Select statements allow you to check multiple channels at once, taking a random branch (of the ones where communication is ready):
func main() {
    for {
        select {
        case w := <-workchan:
            go do_work(w)
        case <-signalchan:
            return
        // default runs if no communication is ready
        default:
            // do idle work
        }
    }
}
For all the send and receive expressions in the "select" statement, the channel expressions are evaluated, along with any expressions that appear on the right hand side of send expressions, in top-to-bottom order. If any of the resulting operations can proceed, one is chosen and the corresponding communication and statements are evaluated. Otherwise, if there is a default case, that executes; if not, the statement blocks until one of the communications can complete.
You can also peek at the channel buffer to see if it contains anything by using len:
if len(channel) > 0 {
    // has data to receive
}
This won't touch the channel buffer, unlike foo, gotValue := <-ch, which removes a value from the buffer when gotValue == true.
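Putting it together for the original question, here is a minimal non-blocking "is it done yet?" check (the channel name, result value and timings are illustrative assumptions):

package main

import (
    "fmt"
    "time"
)

func main() {
    done := make(chan int, 1)
    go func() {
        time.Sleep(50 * time.Millisecond) // simulate work
        done <- 42
    }()

    for {
        select {
        case res := <-done:
            fmt.Println("finished with result", res)
            return
        default:
            // goroutine not finished yet; do idle work instead of blocking
            fmt.Println("not done yet, doing other work")
            time.Sleep(20 * time.Millisecond)
        }
    }
}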
