Hi I'm having a problem with a control channel (of sorts).
The essence of my program:
I do not know how many goroutines I will be running at runtime.
I will need to restart these goroutines at set times; however, they could also potentially error out (and then be restarted), so their timing will not be predictable.
These goroutines will be putting messages onto a single channel.
So what I've done is create a simple random message generator to put messages onto a channel.
When the timer is up (a random duration for testing) I put a message onto a control channel, which carries a struct payload, so I know there was a close signal and which goroutine it came from; in reality I'd then do some other work before starting the goroutines again.
My problem is:
I receive the control message within my reflect.Select loop.
I do not (or am unable to) receive it in my randmsgs() loop.
Therefore I cannot stop my randmsgs() goroutine.
I believe I'm right in understanding that multiple goroutines can read from a single channel, so I think I'm misunderstanding how reflect.SelectCase fits into all of this.
My code:
package main
import (
"fmt"
"math/rand"
"reflect"
"time"
)
type testing struct {
control bool
market string
}
func main() {
rand.Seed(time.Now().UnixNano())
// explicitly define chanids for tests.
var chanids []string = []string{"GR I", "GR II", "GR III", "GR IV"}
stream := make(chan string)
control := make([]chan testing, len(chanids))
reflectCases := make([]reflect.SelectCase, len(chanids)+1)
// MAKE REFLECT SELECTS FOR 4 CONTROL CHANS AND 1 DATA CHANNEL
for i := range chanids {
control[i] = make(chan testing)
reflectCases[i] = reflect.SelectCase{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(control[i])}
}
reflectCases[4] = reflect.SelectCase{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(stream)}
// START GO ROUTINES
for i, val := range chanids {
runningFunc(control[i], val, stream, 1+rand.Intn(30-1))
}
// READ DATA
for {
o, received, ok := reflect.Select(reflectCases)
if !ok {
fmt.Println("You really buggered this one up...")
}
ty, isControl := received.Interface().(testing)
if isControl {
fmt.Printf("Read from: %v, and received close signal from: %s\n", o, ty.market)
// close control & stream here.
} else {
ty := received.Interface().(string)
fmt.Printf("Read from: %v, and received value from: %s\n", o, ty)
}
}
}
// THE GO ROUTINES - TIMER AND RANDMSGS
func runningFunc(q chan testing, chanid string, stream chan string, dur int) {
go timer(q, dur, chanid)
go randmsgs(q, chanid, stream)
}
func timer(q chan testing, t int, message string) {
for t > 0 {
time.Sleep(time.Second)
t--
}
q <- testing{true, message}
}
func randmsgs(q chan testing, chanid string, stream chan string) {
for {
select {
case <-q:
fmt.Println("Just sitting by the mailbox. :(")
return
default:
secondsToWait := 1 + rand.Intn(5-1)
time.Sleep(time.Second * time.Duration(secondsToWait))
stream <- fmt.Sprintf("%s: %d", chanid, secondsToWait)
}
}
}
I apologise for the wall of text, but I'm all out of ideas :(!
K/Regards,
C.
The channels q in the second half of your program are the same as control[0...3] in the first.
The reflect.Select loop you are running also reads from all of these channels, with no delay.
The problem, I think, comes down to your reflect.Select simply running too fast and "stealing" all the channel output right away. This is why randmsgs is never able to read the messages.
You'll notice that if you remove the default case from randmsgs, the function is able to (potentially) read some of the messages from q.
select {
case <-q:
fmt.Println("Just sitting by the mailbox. :(")
return
}
This is because now that it is running without delay, it is always waiting for a message on q and thus has the chance to beat the reflect.Select in the race.
If you read from the same channel in multiple goroutines, then the data passed will simply go to whatever goroutine reads it first.
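As a minimal, self-contained sketch of that race (hypothetical names, not part of your program): each value sent on ch is delivered to exactly one of the two competing readers, and which one wins is not deterministic.
package main

import (
    "fmt"
    "time"
)

func main() {
    ch := make(chan int)

    // Two competing readers on the same channel.
    for i := 1; i <= 2; i++ {
        go func(id int) {
            for v := range ch {
                fmt.Printf("reader %d got %d\n", id, v)
            }
        }(i)
    }

    for v := 0; v < 5; v++ {
        ch <- v
    }
    close(ch)
    time.Sleep(time.Second) // crude wait so both readers can finish printing
}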
This program appears to just be an experiment / learning experience, but I'll offer some criticism that may help.
Again, generally you don't have multiple goroutines reading from the same channel if both goroutines are doing different tasks. You're creating a mostly non-deterministic race as to which goroutine fetches the data first.
Second, this is a common beginner's anti-pattern with select that you should avoid:
for {
    select {
    case v := <-myChan:
        doSomething(v)
    default:
        // Oh no, there wasn't anything! Guess we have to wait and try again.
        time.Sleep(time.Second)
    }
}
This code is redundant because select already behaves in such a way that if no case is initially ready, it will wait until any case is ready and then proceed with that one. This default: sleep is effectively making your select loop slower and yet spending less time actually waiting on the channel (because 99.999...% of the time is spent on time.Sleep).
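For comparison, a minimal sketch of the same loop with the default case removed (keeping the hypothetical myChan and doSomething names from the snippet above); the select now blocks until a value arrives, and with a single channel it collapses to a plain range:
for {
    select {
    case v := <-myChan:
        doSomething(v)
    }
}

// or, equivalently, for a single channel:
for v := range myChan {
    doSomething(v)
}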
I was trying to understand the following piece of code that reads from a channel of channels. I am having some difficulties wrapping my head around the idea.
bridge := func(done <-chan interface{}, chanStream <-chan <-chan interface{}) <-chan interface{} {
outStream := make(chan interface{})
go func() {
defer close(outStream)
for {
var stream <-chan interface{}
select {
case <-done:
return
case maybeStream, ok := <-chanStream:
if ok == false {
return
}
stream = maybeStream
}
for c := range orDone(done, stream) {
select {
case outStream <- c:
case <-done: // Why are we selecting on the done channel here?
}
}
}
}()
return outStream
}
The orDone function:
orDone := func(done <-chan interface{}, inStream <-chan interface{}) <-chan interface{} {
outStream := make(chan interface{})
go func() {
defer close(outStream)
for {
select {
case <-done:
return
case v, ok := <-inStream:
if ok == false {
return
}
select {
case outStream <- v:
case <-done: // Again, why are we only reading from this channel? Shouldn't we return from here?
// Why are we not returning from here?
}
}
}
}()
return outStream
}
As mentioned in the comments, I need some help understanding why we are selecting inside the for c := range orDone(done, stream) loop. Can anyone explain what is going on here?
Thanks in advance.
Edit
I took the code from the book Concurrency in Go. The full code can be found here: https://github.com/kat-co/concurrency-in-go-src/blob/master/concurrency-patterns-in-go/the-bridge-channel/fig-bridge-channel.go
In both cases, the select is done to avoid blocking — if the reader isn't reading from our output channel, the write might block (maybe even forever), but we want the goroutine to terminate when the done channel is closed, without waiting for anything else. By using the select, it will wait until either thing happens, and then continue, instead of waiting indefinitely for the write to complete before checking done.
As for the other question, "why are we not returning here?": well, we could. But we don't have to, because a closed channel remains readable forever (producing an unlimited number of zero values) once it's been closed. So it's okay to do nothing in those "bottom" selects; if done was in fact closed we will go back up to the top of the loop and hit the case <-done: return there. I suppose it's a matter of style. I probably would have written the return myself, but the author of this sample may have wanted to avoid handling the same condition in two places. As long as it's just return it doesn't really matter, but if you wanted to do some additional action on done, that behavior would have to be updated in two places if the bottom select returns, but only in one place if it doesn't.
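For illustration, the bottom select in orDone with the explicit return would look like this (behaviourally equivalent, it just exits one iteration sooner):
select {
case outStream <- v:
case <-done:
    return // exit immediately instead of looping back to the top select
}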
First I will explain the usage of the done channel. It's a common pattern in many concurrency-related package implementations in Go. The done channel's purpose is to denote the end of computation, more like a stop signal. Usually the done channel will be listened on by many goroutines or in multiple places in the code flow. One such example is the Done channel in Go's built-in "context" package. Since Go doesn't have anything like broadcast, i.e. a way to signal all listeners of a channel (and expecting this feature is not a good idea either), people just close the channel and all listeners will receive the zero value. In your case, since the second select statement is at the end of the for block, the code owner might have decided to just continue the loop so that on the next iteration, the first select statement which listens on the done channel will return from the function.
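As a small illustration of that close-as-broadcast idea (hypothetical names, not taken from the question's code): closing done unblocks every goroutine waiting on it at once.
package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    done := make(chan struct{})
    var wg sync.WaitGroup

    // Several listeners all wait on the same done channel.
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            <-done // unblocks in every listener once done is closed
            fmt.Printf("listener %d stopping\n", id)
        }(i)
    }

    time.Sleep(100 * time.Millisecond)
    close(done) // a single close acts as a broadcast to all listeners
    wg.Wait()
}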
I'm trying to create a message hub in Go. Messages come in through different channels that are kept in a map[uint32]chan []float64. I do an endless loop over the map and check whether a channel has a message. If it has, I write it to the common client's write channel together with the incoming channel's id. It works fine, but it uses all the CPU, and other processes are throttled.
UPD: Items in the map are added and removed dynamically by another function.
I'm thinking of limiting the CPU for this app through Docker, but maybe there is a more elegant path?
My code :
func (c *Client) accumHandler() {
for !c.stop {
c.channels.Range(func(key, value interface{}) bool {
select {
case message := <-value.(chan []float64):
mess := map[uint32]interface{}{key.(uint32): message}
select {
case c.send <- mess:
}
default:
}
return !c.stop
})
}
}
If I'm reading the cards correctly, it seems like you are trying to pass along an array of floats to a common channel along with a channel identifier. I assume that you are doing this to pass multiple channels out to different publishers, but so that you only have to track a single channel for your consumer.
It turns out that you don't need to loop over channels to see when it's outputting a value. You can chain channels together inside of goroutines. For this reason, no busy wait is necessary. Something like this will suit your purposes (again, if I'm reading the cards correctly). Look for the all caps comment for the way around your busy loop. Link to playground.
var num_publishes = 3
func main() {
num_publishers := 10
single_consumer := make(chan []float64)
for i:=0;i<num_publishers;i+=1 {
c := make(chan []float64)
// connect channel to your single consumer channel
go func() { for { single_consumer <- <-c } }() // THIS IS PROBABLY WHAT YOU DIDN'T KNOW ABOUT
// send the channel to the publisher
go publisher(c, i*100)
}
// dumb consumer example
for i:=0;i<num_publishers*num_publishes;i+=1 {
fmt.Println(<-single_consumer)
}
}
func publisher(c chan []float64, publisher_id int) {
dummy := []float64{
float64(publisher_id+1),
float64(publisher_id+2),
float64(publisher_id+3),
}
for i:=0;i<num_publishes;i+=1 {
time.Sleep(time.Duration(rand.Intn(10000)) * time.Millisecond)
c <- dummy
}
}
It is eating all of your CPU because you are continuously cycling around the map checking for messages, so even when there are no messages to process, the CPU (or at least a thread or core) is running flat out. You need blocking sends and receives on the channels!
I assume you are doing this because you don't know how many channels there will be and therefore can't just select on all of the input channels. A better pattern is to start a separate goroutine for each input channel you are currently storing in the map. Each goroutine should have a loop in which it blocks waiting on its input channel and, on receiving a message, does a blocking send on a channel to the client which is shared by all.
The question isn't complete so I can't give exact code, but you're going to have goroutines that look something like this:
type Message struct {
    id      uint32
    message []float64
}
func receiverGoroutine(id uint32, input chan []float64, output chan Message) {
for {
message := <- input
output <- Message{id: id, message: message}
}
}
func clientGoroutine(c *Client, input chan Message) {
for {
message := <- input
// do stuff
}
}
(You'll need to add some "done" channels as well though)
Elsewhere you will start them with code like this:
clientChan := make(chan Message)
go clientGoroutine(client, clientChan)
for i := 0; i < max; i++ {
    go receiverGoroutine(uint32(i), make(chan []float64), clientChan)
}
Or you can just start the client routine and then add the others as they are needed rather than in a loop up front - depends on your use case.
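For example, a hypothetical addReceiver helper (my name, not from the question), reusing the Message and receiverGoroutine definitions above, could wire up a new input channel on demand:
// addReceiver creates a fresh input channel, starts its receiver goroutine,
// and returns the channel so it can be stored in the map / handed to a publisher.
func addReceiver(id uint32, clientChan chan Message) chan []float64 {
    input := make(chan []float64)
    go receiverGoroutine(id, input, clientChan)
    return input
}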
Short story:
I'm having an issue where a map that previously had data but should now be empty is reporting a len() of > 0 even though it appears to be empty, and I have no idea why.
Longer story:
I need to process a number of devices at a time. Each device can have a number of messages. The concurrency of Go seemed like an obvious place to begin, so I wrote up some code to handle it and it seems to be going mostly very well. However...
I started a single goroutine for each device. In the main() function I have a map that contains each of the devices. When a message comes in I check to see whether the device already exists and if not I create it, store it in the map, and then pass the message into the device's receiving buffered channel.
This works great, and each device is being processed nicely. However, I need the device (and its goroutine) to terminate when it doesn't receive any messages for a preset amount of time. I've done this by checking in the goroutine itself how much time has passed since the last message was received, and if the goroutine is considered stale then the receiving channel is closed. But how to remove from the map?
So I passed in a pointer to the map, and I have the goroutine delete the device from the map and close the receiving channel before returning. The problem though is that at the end I'm finding that the len() function returns a value > 0, but when I output the map itself I see that it's empty.
I've written up a toy example to try to replicate the fault, and indeed I'm seeing that len() is reporting > 0 when the map is apparently empty. The last time I tried it I saw 10. The time before that 14. The time before that one, 53.
So I can replicate the fault, but I'm not sure whether the fault is with me or with Go. How is len() reporting > 0 when there are apparently no items in it?
Here's an example of how I've been able to replicate. I'm using Go v1.5.1 windows/amd64
There are two things here, as far as I'm concerned:
Am I managing the goroutines properly (probably not) and
Why does len(m) report > 0 when there are no items in it?
Thanks all
Example Code:
package main
import (
"log"
"os"
"time"
)
const (
chBuffSize = 100 // How large the thing's channel buffer should be
thingIdleLifetime = time.Second * 5 // How long things can live for when idle
thingsToMake = 1000 // How many things and associated goroutines to make
thingMessageCount = 10 // How many messages to send to the thing
)
// The thing that we'll be passing into a goroutine to process -----------------
type thing struct {
id string
ch chan bool
}
// Go go gadget map test -------------------------------------------------------
func main() {
// Make all of the things!
things := make(map[string]thing)
for i := 0; i < thingsToMake; i++ {
t := thing{
id: string(i),
ch: make(chan bool, chBuffSize),
}
things[t.id] = t
// Pass the thing into its own goroutine
go doSomething(t, &things)
// Send (thingMessageCount) messages to the thing
go func(t thing) {
for x := 0; x < thingMessageCount; x++ {
t.ch <- true
}
}(t)
}
// Check the map of things to see whether we're empty or not
size := 0
for {
if size == len(things) && size != thingsToMake {
log.Println("Same number of items in map as last time")
log.Println(things)
os.Exit(1)
}
size = len(things)
log.Printf("Map size: %d\n", size)
time.Sleep(time.Second)
}
}
// Func for each goroutine to run ----------------------------------------------
//
// Takes two arguments:
// 1) the thing that it is working with
// 2) a pointer to the map of things
//
// When this goroutine is ready to terminate, it should remove the associated
// thing from the map of things to clean up after itself
func doSomething(t thing, things *map[string]thing) {
lastAccessed := time.Now()
for {
select {
case <-t.ch:
// We received a message, so extend the lastAccessed time
lastAccessed = time.Now()
default:
// We haven't received a message, so check if we're allowed to continue
n := time.Now()
d := n.Sub(lastAccessed)
if d > thingIdleLifetime {
// We've run for >thingIdleLifetime, so close the channel, delete the
// associated thing from the map and return, terminating the goroutine
close(t.ch)
delete(*things, string(t.id))
return
}
}
// Just sleep for a second in each loop to prevent the CPU being eaten up
time.Sleep(time.Second)
}
}
Just to add: in my original code this loops forever. The program is designed to listen for TCP connections and receive and process the data, so the function that is checking the map count runs in its own goroutine. However, this example has exactly the same symptom even though the map len() check is in the main() function and it is designed to handle an initial burst of data and then break out of the loop.
UPDATE 2015/11/23 15:56 UTC
I've refactored my example below. I'm not sure if I've misunderstood #RobNapier or not but this works much better. However, if I change thingsToMake to a larger number, say 100000, then I get lots of errors like this:
goroutine 199734 [select]:
main.doSomething(0xc0d62e7680, 0x4, 0xc0d64efba0, 0xc082016240)
C:/Users/anttheknee/go/src/maptest/maptest.go:83 +0x144
created by main.main
C:/Users/anttheknee/go/src/maptest/maptest.go:46 +0x463
I'm not sure if the problem is that I'm asking Go to do too much, or if I've made a hash of understanding the solution. Any thoughts?
package main
import (
"log"
"os"
"time"
)
const (
chBuffSize = 100 // How large the thing's channel buffer should be
thingIdleLifetime = time.Second * 5 // How long things can live for when idle
thingsToMake = 10000 // How many things and associated goroutines to make
thingMessageCount = 10 // How many messages to send to the thing
)
// The thing that we'll be passing into a goroutine to process -----------------
type thing struct {
id string
ch chan bool
done chan string
}
// Go go gadget map test -------------------------------------------------------
func main() {
// Make all of the things!
things := make(map[string]thing)
// Make a channel to receive completion notification on
doneCh := make(chan string, chBuffSize)
log.Printf("Making %d things\n", thingsToMake)
for i := 0; i < thingsToMake; i++ {
t := thing{
id: string(i),
ch: make(chan bool, chBuffSize),
done: doneCh,
}
things[t.id] = t
// Pass the thing into its own goroutine
go doSomething(t)
// Send (thingMessageCount) messages to the thing
go func(t thing) {
for x := 0; x < thingMessageCount; x++ {
t.ch <- true
time.Sleep(time.Millisecond * 10)
}
}(t)
}
log.Printf("All %d things made\n", thingsToMake)
// Receive on doneCh when the goroutine is complete and clean the map up
for {
id := <-doneCh
close(things[id].ch)
delete(things, id)
if len(things) == 0 {
log.Printf("Map: %v", things)
log.Println("All done. Exiting")
os.Exit(0)
}
}
}
// Func for each goroutine to run ----------------------------------------------
//
// Takes two arguments:
// 1) the thing that it is working with
// 2) the channel to report that we're done through
//
// When this goroutine is ready to terminate, it should remove the associated
// thing from the map of things to clean up after itself
func doSomething(t thing) {
timer := time.NewTimer(thingIdleLifetime)
for {
select {
case <-t.ch:
// We received a message, so extend the timer
timer.Reset(thingIdleLifetime)
case <-timer.C:
// Timer returned so we need to exit now
t.done <- t.id
return
}
}
}
UPDATE 2015/11/23 16:41 UTC
The completed code that appears to be working properly. Do feel free to let me know if there are any improvements that could be made, but this works (sleeps are deliberate to see progress as it's otherwise too fast!)
package main
import (
"log"
"os"
"strconv"
"time"
)
const (
chBuffSize = 100 // How large the thing's channel buffer should be
thingIdleLifetime = time.Second * 5 // How long things can live for when idle
thingsToMake = 100000 // How many things and associated goroutines to make
thingMessageCount = 10 // How many messages to send to the thing
)
// The thing that we'll be passing into a goroutine to process -----------------
type thing struct {
id string
receiver chan bool
done chan string
}
// Go go gadget map test -------------------------------------------------------
func main() {
// Make all of the things!
things := make(map[string]thing)
// Make a channel to receive completion notification on
doneCh := make(chan string, chBuffSize)
log.Printf("Making %d things\n", thingsToMake)
for i := 0; i < thingsToMake; i++ {
t := thing{
id: strconv.Itoa(i),
receiver: make(chan bool, chBuffSize),
done: doneCh,
}
things[t.id] = t
// Pass the thing into its own goroutine
go doSomething(t)
// Send (thingMessageCount) messages to the thing
go func(t thing) {
for x := 0; x < thingMessageCount; x++ {
t.receiver <- true
time.Sleep(time.Millisecond * 100)
}
}(t)
}
log.Printf("All %d things made\n", thingsToMake)
// Check the `len()` of things every second and exit when empty
go func() {
for {
time.Sleep(time.Second)
m := things
log.Printf("Map length: %v", len(m))
if len(m) == 0 {
log.Printf("Confirming empty map: %v", things)
log.Println("All done. Exiting")
os.Exit(0)
}
}
}()
// Receive on doneCh when the goroutine is complete and clean the map up
for {
id := <-doneCh
close(things[id].receiver)
delete(things, id)
}
}
// Func for each goroutine to run ----------------------------------------------
//
// When this goroutine is ready to terminate it should respond through t.done to
// notify the caller that it has finished and can be cleaned up. It will wait
// for `thingIdleLifetime` until it times out and terminates on its own
func doSomething(t thing) {
timer := time.NewTimer(thingIdleLifetime)
for {
select {
case <-t.receiver:
// We received a message, so extend the timer
timer.Reset(thingIdleLifetime)
case <-timer.C:
// Timer expired so we need to exit now
t.done <- t.id
return
}
}
}
map is not thread-safe. You cannot access a map on multiple goroutines safely. You can corrupt the map, as you're seeing in this case.
Rather than allow the goroutines to modify the map, each goroutine should write its identifier to a channel before returning. The main loop should watch that channel, and when an identifier comes back, should remove that element from the map.
You'll probably want to read up on Go concurrency patterns. In particular, you may want to look at Fan-out/Fan-in. Look at the links at the bottom. The Go blog has a lot of information on concurrency.
Note that your goroutine is busy waiting to check for the timeout. There's no reason for that. The fact that you sleep(1 second) should be a clue that there's a mistake. Instead, look at time.Timer, which will give you a chan that receives a value after some time, and which you can reset.
Your problem is how you're converting numbers to strings:
id: string(i),
That converts i to a string interpreting it as a rune (int32). For example, string(65) is "A". Some unequal rune values resolve to equal strings, so you get a collision and close the same channel twice. See http://play.golang.org/p/__KpnfQc1V
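As a small sketch of such a collision (surrogate code points, which show up once i is large enough, all convert to the replacement character):
package main

import "fmt"

func main() {
    a := string(rune(55296)) // 0xD800, a surrogate half
    b := string(rune(55297)) // 0xD801, a different surrogate half
    fmt.Println(a == b)      // true: two different ints collapse to one map key
}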
You meant this:
id: strconv.Itoa(i),
I'm having difficulty using time.Tick. I expect this code to print "hi" 10 times then quit after 1 second, but instead it hangs:
ticker := time.NewTicker(100 * time.Millisecond)
time.AfterFunc(time.Second, func () {
ticker.Stop()
})
for _ = range ticker.C {
go fmt.Println("hi")
}
https://play.golang.org/p/1p6-ViSvma
Looking at the source, I see that the channel isn't closed when Stop() is called. In that case, what is the idiomatic way to iterate over the ticker channel?
You're right, the ticker's channel is not closed on Stop; that's stated in the documentation:
Stop turns off a ticker. After Stop, no more ticks will be sent. Stop does not close the channel, to prevent a read from the channel succeeding incorrectly.
I believe the ticker is more about fire and forget, and even if you want to stop it, you could leave the goroutine hanging forever (depends on your application, of course).
If you really need a finite ticker, you can do tricks and provide a separate channel (per ThunderCat's answer), but what I would do is provide my own implementation of the ticker. This should be relatively easy and will give you flexibility over its behaviour, such as what to pass on the channel or what to do with missed ticks (i.e. when the reader is falling behind).
My example:
func finiteTicker(n int, d time.Duration) <-chan time.Time {
ch := make(chan time.Time, 1)
go func() {
for i := 0; i < n; i++ {
time.Sleep(d)
ch <- time.Now()
}
close(ch)
}()
return ch
}
func main() {
for range finiteTicker(10, 100*time.Millisecond) {
fmt.Println("hi")
}
}
http://play.golang.org/p/ZOwJlM8rDm
I asked on IRC as well, and got some useful insight from #Tv`.
Despite time.Ticker looking like it should be part of a Go pipeline, it does not actually play well with the pipeline idioms:
Here are the guidelines for pipeline construction:
stages close their outbound channels when all the send operations are done.
stages keep receiving values from inbound channels until those channels are closed or the senders are unblocked.
Pipelines unblock senders either by ensuring there's enough buffer for all the values that are sent or by explicitly signalling senders when the receiver may abandon the channel.
The reason for this inconsistency appears to be primarily to support the following idiom:
for {
select {
case <-ticker.C:
// do something
case <-done:
return
}
}
I don't know why this is the case, and why the pipelining idiom wasn't used:
for {
select {
case _, ok := <-ticker.C:
if ok {
// do something
} else {
return
}
}
}
(or more cleanly)
for _ = range ticker.C {
// do something
}
But this is the way go is :(
This is a good example of workers & controller mode in Go written by #Jimt, in answer to
"Is there some elegant way to pause & resume any other goroutine in golang?"
package main
import (
"fmt"
"runtime"
"sync"
"time"
)
// Possible worker states.
const (
Stopped = 0
Paused = 1
Running = 2
)
// Maximum number of workers.
const WorkerCount = 1000
func main() {
// Launch workers.
var wg sync.WaitGroup
wg.Add(WorkerCount + 1)
workers := make([]chan int, WorkerCount)
for i := range workers {
workers[i] = make(chan int)
go func(i int) {
worker(i, workers[i])
wg.Done()
}(i)
}
// Launch controller routine.
go func() {
controller(workers)
wg.Done()
}()
// Wait for all goroutines to finish.
wg.Wait()
}
func worker(id int, ws <-chan int) {
state := Paused // Begin in the paused state.
for {
select {
case state = <-ws:
switch state {
case Stopped:
fmt.Printf("Worker %d: Stopped\n", id)
return
case Running:
fmt.Printf("Worker %d: Running\n", id)
case Paused:
fmt.Printf("Worker %d: Paused\n", id)
}
default:
// We use runtime.Gosched() to prevent a deadlock in this case.
// It will not be needed if work is performed here which yields
// to the scheduler.
runtime.Gosched()
if state == Paused {
break
}
// Do actual work here.
}
}
}
// controller handles the current state of all workers. They can be
// instructed to be either running, paused or stopped entirely.
func controller(workers []chan int) {
// Start workers
for i := range workers {
workers[i] <- Running
}
// Pause workers.
<-time.After(1e9)
for i := range workers {
workers[i] <- Paused
}
// Unpause workers.
<-time.After(1e9)
for i := range workers {
workers[i] <- Running
}
// Shutdown workers.
<-time.After(1e9)
for i := range workers {
close(workers[i])
}
}
But this code also has an issue: if you want to remove a worker channel from workers when worker() exits, a deadlock happens.
If you close(workers[i]), the next time the controller writes into it, it will cause a panic since Go can't write into a closed channel. If you use some mutex to protect it, then it will be stuck on workers[i] <- Running since the worker is not reading anything from the channel and the write will block, and the mutex will cause a deadlock. You could also give the channel a bigger buffer as a workaround, but it's not good enough.
So I think the best way to solve this is for worker() to close the channel when it exits; if the controller finds a channel closed, it will skip over it and do nothing. But I can't find a way to check whether a channel is already closed in this situation. If I try to read the channel in the controller, the controller might be blocked. So I'm very confused for now.
PS: Recovering the raised panic is what I have tried, but it will close the goroutine which raised the panic. In this case that would be the controller, so it's no use.
Still, I think it would be useful for the Go team to implement this function in a future version of Go.
There's no way to write a safe application where you need to know whether a channel is open without interacting with it.
The best way to do what you're wanting to do is with two channels -- one for the work and one to indicate a desire to change state (as well as the completion of that state change if that's important).
Channels are cheap. Complex design overloading semantics isn't.
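A rough sketch of that two-channel approach (hypothetical names, not the worker code from the question): one channel carries the work, the other only signals the request to stop, so the controller never has to write to a channel that might be closed.
package main

import (
    "fmt"
    "time"
)

// work carries jobs; quit only signals the request to stop.
func worker(work <-chan int, quit <-chan struct{}) {
    for {
        select {
        case job := <-work:
            fmt.Println("processing", job)
        case <-quit:
            fmt.Println("worker stopping")
            return
        }
    }
}

func main() {
    work := make(chan int)
    quit := make(chan struct{})
    go worker(work, quit)

    work <- 1
    work <- 2
    close(quit) // signal the worker to stop; the work channel is never closed
    time.Sleep(100 * time.Millisecond)
}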
[also]
<-time.After(1e9)
is a really confusing and non-obvious way to write
time.Sleep(time.Second)
Keep things simple and everyone (including you) can understand them.
In a hacky way it can be done for channels which you attempt to write to, by recovering the raised panic. But you cannot check whether a receive channel is closed without reading from it.
Either you will
eventually read the "true" value from it (v := <-c)
read the "true" value and the 'not closed' indicator (v, ok := <-c)
read a zero value and the 'closed' indicator (v, ok := <-c) (example)
or block in the channel read forever (v := <-c)
Only the last one technically doesn't read from the channel, but that's of little use.
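For reference, the third case (a zero value plus the 'closed' indicator) looks like this:
package main

import "fmt"

func main() {
    ch := make(chan int)
    close(ch)
    v, ok := <-ch
    fmt.Println(v, ok) // prints "0 false": the zero value and ok == false
}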
I know this answer is late. I wrote this solution by hacking the Go runtime; it's not safe and it may crash:
import (
"unsafe"
"reflect"
)
func isChanClosed(ch interface{}) bool {
if reflect.TypeOf(ch).Kind() != reflect.Chan {
panic("only channels!")
}
// get interface value pointer, from cgo_export
// typedef struct { void *t; void *v; } GoInterface;
// then get channel real pointer
cptr := *(*uintptr)(unsafe.Pointer(
unsafe.Pointer(uintptr(unsafe.Pointer(&ch)) + unsafe.Sizeof(uint(0))),
))
// this function will return true if chan.closed > 0
// see hchan on https://github.com/golang/go/blob/master/src/runtime/chan.go
// type hchan struct {
// qcount uint // total data in the queue
// dataqsiz uint // size of the circular queue
// buf unsafe.Pointer // points to an array of dataqsiz elements
// elemsize uint16
// closed uint32
// **
cptr += unsafe.Sizeof(uint(0))*2
cptr += unsafe.Sizeof(unsafe.Pointer(uintptr(0)))
cptr += unsafe.Sizeof(uint16(0))
return *(*uint32)(unsafe.Pointer(cptr)) > 0
}
Well, you can use the default branch to detect it, since a closed channel will always be selected. For example, the following code will select default, channel, channel; the first select does not block.
func main() {
ch := make(chan int)
go func() {
select {
case <-ch:
log.Printf("1.channel")
default:
log.Printf("1.default")
}
select {
case <-ch:
log.Printf("2.channel")
}
close(ch)
select {
case <-ch:
log.Printf("3.channel")
default:
log.Printf("3.default")
}
}()
time.Sleep(time.Second)
ch <- 1
time.Sleep(time.Second)
}
Prints
2018/05/24 08:00:00 1.default
2018/05/24 08:00:01 2.channel
2018/05/24 08:00:01 3.channel
Note, refer to comment by #Angad under this answer:
It doesn't work if you're using a Buffered Channel and it contains
unread data
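A small sketch of that caveat: a closed buffered channel still hands out its remaining values with ok == true, so the trick above only reports 'closed' once the buffer is drained.
package main

import "fmt"

func main() {
    ch := make(chan int, 2)
    ch <- 1
    close(ch)

    v, ok := <-ch
    fmt.Println(v, ok) // prints "1 true" even though the channel is already closed

    v, ok = <-ch
    fmt.Println(v, ok) // prints "0 false" only once the buffer is empty
}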
I have had this problem frequently with multiple concurrent goroutines.
It may or may not be a good pattern, but I define a struct for my workers with a quit channel and a field for the worker state:
type Worker struct {
    data    chan struct{}
    quit    chan bool
    stopped bool
}
Then you can have a controller call a stop function for the worker:
func (w *Worker) Stop() {
w.quit <- true
w.stopped = true
}
func (w *Worker) eventloop() {
for {
if w.stopped {
return
}
select {
case d := <-w.data:
//DO something
if w.stopped {
return
}
case <-w.quit:
return
}
}
}
This gives you a pretty good way to get a clean stop on your workers without anything hanging or generating errors, which is especially good when running in a container.
You could set your channel to nil in addition to closing it. That way you can check if it is nil.
example in the playground:
https://play.golang.org/p/v0f3d4DisCz
edit:
This is actually a bad solution as demonstrated in the next example,
because setting the channel to nil in a function would break it:
https://play.golang.org/p/YVE2-LV9TOp
func main() {
ch1 := make(chan int)
ch2 := make(chan int)
go func(){
for i:=0; i<10; i++{
ch1 <- i
}
close(ch1)
}()
go func(){
for i:=10; i<15; i++{
ch2 <- i
}
close(ch2)
}()
ok1, ok2 := false, false
v := 0
for{
ok1, ok2 = true, true
select{
case v,ok1 = <-ch1:
if ok1 {fmt.Println(v)}
default:
}
select{
case v,ok2 = <-ch2:
if ok2 {fmt.Println(v)}
default:
}
if !ok1 && !ok2{return}
}
}
From the documentation:
A channel may be closed with the built-in function close. The multi-valued assignment form of the receive operator reports whether a received value was sent before the channel was closed.
https://golang.org/ref/spec#Receive_operator
Example by Golang in Action shows this case:
// This sample program demonstrates how to use an unbuffered
// channel to simulate a game of tennis between two goroutines.
package main
import (
"fmt"
"math/rand"
"sync"
"time"
)
// wg is used to wait for the program to finish.
var wg sync.WaitGroup
func init() {
rand.Seed(time.Now().UnixNano())
}
// main is the entry point for all Go programs.
func main() {
// Create an unbuffered channel.
court := make(chan int)
// Add a count of two, one for each goroutine.
wg.Add(2)
// Launch two players.
go player("Nadal", court)
go player("Djokovic", court)
// Start the set.
court <- 1
// Wait for the game to finish.
wg.Wait()
}
// player simulates a person playing the game of tennis.
func player(name string, court chan int) {
// Schedule the call to Done to tell main we are done.
defer wg.Done()
for {
// Wait for the ball to be hit back to us.
ball, ok := <-court
fmt.Printf("ok %t\n", ok)
if !ok {
// If the channel was closed we won.
fmt.Printf("Player %s Won\n", name)
return
}
// Pick a random number and see if we miss the ball.
n := rand.Intn(100)
if n%13 == 0 {
fmt.Printf("Player %s Missed\n", name)
// Close the channel to signal we lost.
close(court)
return
}
// Display and then increment the hit count by one.
fmt.Printf("Player %s Hit %d\n", name, ball)
ball++
// Hit the ball back to the opposing player.
court <- ball
}
}
It's easier to check first whether the channel has elements; that would ensure the channel is alive.
func isChanClosed(ch chan interface{}) bool {
if len(ch) == 0 {
select {
case _, ok := <-ch:
return !ok
}
}
return false
}
If you listen on this channel you can always find out that the channel was closed.
case state, opened := <-ws:
if !opened {
// channel was closed
// return or made some final work
}
switch state {
case Stopped:
But remember, you cannot close a channel twice. That will raise a panic.
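For example, this panics at runtime:
ch := make(chan int)
close(ch)
close(ch) // panic: close of closed channel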