TryRecv returning channel closed despite being open - go

I am trying to write a function in Go which monitors a channel and logs what is sent through it.
func monitorChannel(inChannel, outChannel reflect.Value, fid int64, cond *sync.Cond) {
    for {
        cond.L.Lock()
        var toLog reflect.Value
        var ok bool
        for toLog, ok = inChannel.TryRecv(); !toLog.IsValid(); { // while no value received
            if !ok {
                cond.L.Unlock()
                return
            }
            cond.Wait()
        }
        outChannel.Send(toLog)
        logMessage("a", "b", inChannel.Interface(), toLog.Interface(), fid)
        cond.L.Unlock()
    }
}
This function is supposed to receive from inChannel, log the message sent and send it through outChannel. Since I want to be able to log bi-directional channels, I call this function twice for each channel I want to log, swapping inChannel and outChannel. The lock is to keep the two goroutines from passing messages between each other. "fid" is just the id of the log file.
But when I run the following test code, I get a deadlock:
errsIn := make(chan int64)
errsOut := make(chan int64)
cond := sync.NewCond(&sync.Mutex{})
go monitorChannel(reflect.ValueOf(errsIn), reflect.ValueOf(errsOut), fid, cond)
go monitorChannel(reflect.ValueOf(errsOut), reflect.ValueOf(errsIn), fid, cond)
errsIn <- 1
if <-errsOut != 1 {
    t.Fatal("lost value through channel send")
}
errsOut <- 1
if <-errsIn != 1 {
    t.Fatal("lost value through channel send")
}
It seems as if TryRecv is returning false on its second return value even though I haven't closed the channel. Why is this? What should I do about it?
I am running go 1.0.3 on Windows 8 64 bit.
EDIT
I later discovered that TryRecv has a somewhat confusing behaviour and managed to make a generalized version of the function using the reflect package and two sync.Lockers. I still think that jnml's solution is more elegant, but if anyone has experienced similar problems with TryRecv, take a look at the comment in the middle of the function.
func passOnAndLog(in, out reflect.Value, l1, l2 sync.Locker) {
    for {
        l1.Lock()
        val, ok := in.TryRecv()
        for !val.IsValid() { // while nothing received
            l1.Unlock()
            time.Sleep(time.Nanosecond) // pausing the current goroutine
            l1.Lock()
            val, ok = in.TryRecv()
        }
        // if val.IsValid() == true and ok == false, the channel is closed
        // if val.IsValid() == false and ok == false, the channel is open but we received nothing
        // if val.IsValid() == true and ok == true, we received an actual value from the open channel
        // if val.IsValid() == false and ok == true, we have no idea what happened
        if !ok {
            return
        }
        l1.Unlock()
        l2.Lock() // don't want the other goroutine to receive while I am sending
        out.Send(val)
        LogValue(val) // logging
        l2.Unlock()
    }
}

The reflection-based solution is too convoluted for me, being lazy, to figure out whether it is correct and/or feasible at all. (I suspect it is not, but only by intuition.)
I would approach the task in a simpler, although non-generic way. Let's have a channel which will be used by some producer(s) to write to it and will be used by some consumer(s) to read from it.
c := make(chan T, N)
It's possible to monitor this channel using a small helper function, like for example:
func monitored(c chan T) chan T {
    m := make(chan T, M)
    go func() {
        for v := range c {
            m <- v
            logMessage(v)
        }
        close(m)
    }()
    return m
}
Now it is enough to:
mc := monitored(c)
and
Pass c to producer(s), but mc to consumer(s).
Close c when done to not leak goroutines.
Warning: Above code was not tested at all.
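To make the wiring concrete, here is an equally untested sketch in the same spirit, with T taken as int and a fmt.Println standing in for logMessage:

package main

import "fmt"

func logMessage(v int) { fmt.Println("logged:", v) }

func monitored(c chan int) chan int {
    m := make(chan int, 1)
    go func() {
        for v := range c {
            m <- v
            logMessage(v)
        }
        close(m)
    }()
    return m
}

func main() {
    c := make(chan int)
    mc := monitored(c)
    // Producer writes to c.
    go func() {
        for i := 0; i < 3; i++ {
            c <- i
        }
        close(c) // closing c lets the monitor goroutine finish
    }()
    // Consumer reads from mc.
    for v := range mc {
        fmt.Println("consumed:", v)
    }
}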

Related

Is it possible to cancel unfinished goroutines?

Consider a group of check works, each of which has independent logic, so they seem well suited to running concurrently, like:
type Work struct {
    // ...
}

// This Check could be quite time-consuming
func (w *Work) Check() bool {
    // return succeed or not
    // ...
}

func CheckAll(works []*Work) {
    num := len(works)
    results := make(chan bool, num)
    for _, w := range works {
        go func(w *Work) {
            results <- w.Check()
        }(w)
    }
    for i := 0; i < num; i++ {
        if r := <-results; !r {
            ReportFailed()
            break
        }
    }
}

func ReportFailed() {
    // ...
}
Concerning the results: if the logic is that once any one work fails we treat the whole batch as failed, then the remaining values in the channel are useless. Letting the remaining unfinished goroutines continue to run and send results to the channel is meaningless and wasteful, especially when w.Check() is quite time-consuming. The ideal effect is similar to:
for _, w := range works {
    if !w.Check() {
        ReportFailed()
        break
    }
}
This runs only the necessary check works and then breaks, but it is a sequential, non-concurrent scenario.
So, is it possible to cancel these unfinished goroutines, or their pending sends to the channel?
Cancelling a (blocking) send
Your original question asked how to cancel a send operation. A send on a channel is basically "instant". A send on a channel blocks if the channel's buffer is full and there is no ready receiver.
You can "cancel" this send by using a select statement and a cancel channel which you close, e.g.:
cancel := make(chan struct{})

select {
case ch <- value:
case <-cancel:
}
Closing the cancel channel with close(cancel) on another goroutine will make the above select abandon the send on ch (if it's blocking).
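Here is a self-contained sketch of that pattern; the channel names and the sleep are only for demonstration:

package main

import (
    "fmt"
    "time"
)

func main() {
    ch := make(chan int) // no receiver, so a send would block forever
    cancel := make(chan struct{})
    done := make(chan struct{})
    go func() {
        defer close(done)
        select {
        case ch <- 1:
            fmt.Println("sent")
        case <-cancel:
            fmt.Println("send abandoned")
        }
    }()
    time.Sleep(100 * time.Millisecond) // let the send block
    close(cancel)                      // cancel the blocked send
    <-done
}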
But as said, the send is "instant" on a "ready" channel, and the send first evaluates the value to be sent:
results <- w.Check()
This first has to run w.Check(), and once it's done, its return value will be sent on results.
Cancelling a function call
So what you really need is to cancel the w.Check() method call. For that, the idiomatic way is to pass a context.Context value which you can cancel, and w.Check() itself must monitor and "obey" this cancellation request.
See Terminating function execution if a context is cancelled
Note that your function must support this explicitly. There is no implicit termination of function calls or goroutines, see cancel a blocking operation in Go.
So your Check() should look something like this:
// This Check could be quite time-consuming
func (w *Work) Check(ctx context.Context, workDuration time.Duration) bool {
    // Do your thing and monitor the context!
    select {
    case <-ctx.Done():
        return false
    case <-time.After(workDuration): // Simulate work
        return true
    case <-time.After(2500 * time.Millisecond): // Simulate failure after 2.5 sec
        return false
    }
}
And CheckAll() may look like this:
func CheckAll(works []*Work) {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    num := len(works)
    results := make(chan bool, num)
    wg := &sync.WaitGroup{}

    for i, w := range works {
        workDuration := time.Second * time.Duration(i)
        wg.Add(1)
        go func(w *Work) {
            defer wg.Done()
            result := w.Check(ctx, workDuration)
            // You may check and return if context is cancelled
            // so result is surely not sent, I omitted it here.
            select {
            case results <- result:
            case <-ctx.Done():
                return
            }
        }(w)
    }

    go func() {
        wg.Wait()
        close(results) // This allows the for range over results to terminate
    }()

    for result := range results {
        fmt.Println("Result:", result)
        if !result {
            cancel()
            break
        }
    }
}
Testing it:
CheckAll(make([]*Work, 10))
Output (try it on the Go Playground):
Result: true
Result: true
Result: true
Result: false
We get true printed 3 times (works that complete under 2.5 seconds), then the failure simulation kicks in, returns false, and terminates all other jobs.
Note that the sync.WaitGroup in the above example is not strictly needed as results has a buffer capable of holding all results, but in general it's still good practice (should you use a smaller buffer in the future).
See related: Close multiple goroutine if an error occurs in one in go
The short answer is: No.
You cannot cancel or close any goroutine unless the goroutine itself returns or reaches the end of its stack.
If you want to cancel something, the best approach is to pass a context.Context to the goroutines and listen to this context.Done() inside the routine. Whenever the context is cancelled, you should return, and the goroutine will automatically die after executing its defers (if any).
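A minimal sketch of that pattern (doOneUnit and cleanup are placeholders, not functions from the question):

// worker does small units of work until its context is cancelled.
func worker(ctx context.Context) {
    defer cleanup() // defers still run on the way out
    for {
        select {
        case <-ctx.Done():
            return // cancelled: return, and the goroutine dies
        default:
            doOneUnit() // keep units small so cancellation is prompt
        }
    }
}

The full example below takes a simpler route and just receives each worker's result directly.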
package main

import "fmt"

type Work struct {
    // ...
    Name      string
    IsSuccess chan bool
}

// This Check could be quite time-consuming
func (w *Work) Check() {
    // return succeed or not
    // ...
    if len(w.Name) > 0 {
        w.IsSuccess <- true
    } else {
        w.IsSuccess <- false
    }
}
func main() {
    works := make([]*Work, 3)
    works[0] = &Work{
        Name:      "",
        IsSuccess: make(chan bool),
    }
    works[1] = &Work{
        Name:      "111",
        IsSuccess: make(chan bool),
    }
    works[2] = &Work{
        Name:      "",
        IsSuccess: make(chan bool),
    }
    for _, w := range works {
        go w.Check()
    }
    for i, w := range works {
        checkResult := <-w.IsSuccess
        fmt.Printf("index %d checkresult %t\n", i, checkResult)
    }
}

Multiple Go routines reading from the same channel

Hi I'm having a problem with a control channel (of sorts).
The essence of my program:
I do not know how many go routines I will be running at runtime
I will need to restart these go routines at set times; however, they could also potentially error out (and then be restarted), so their timing will not be predictable.
These go routines will be putting messages onto a single channel.
So what I've done is create a simple random message generator to put messages onto a channel.
When the timer is up (random duration for testing) I put a message onto a control channel which is a struct payload, so I know there was a close signal and which go routine it was; in reality I'd then do some other stuff I'd need to do before starting the go routines again.
My problem is:
I receive the control message within my reflect.Select loop
I do not (or am unable to) receive it in my randmsgs() loop.
Therefore I cannot stop my randmsgs() goroutine.
I believe I'm right in understanding that multiple go routines can read from a single channel, therefore I think I'm misunderstanding how reflect.SelectCases fit into all of this.
My code:
package main

import (
    "fmt"
    "math/rand"
    "reflect"
    "time"
)

type testing struct {
    control bool
    market  string
}

func main() {
    rand.Seed(time.Now().UnixNano())
    // explicitly define chanids for tests.
    var chanids []string = []string{"GR I", "GR II", "GR III", "GR IV"}
    stream := make(chan string)
    control := make([]chan testing, len(chanids))
    reflectCases := make([]reflect.SelectCase, len(chanids)+1)
    // MAKE REFLECT SELECTS FOR 4 CONTROL CHANS AND 1 DATA CHANNEL
    for i := range chanids {
        control[i] = make(chan testing)
        reflectCases[i] = reflect.SelectCase{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(control[i])}
    }
    reflectCases[4] = reflect.SelectCase{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(stream)}
    // START GO ROUTINES
    for i, val := range chanids {
        runningFunc(control[i], val, stream, 1+rand.Intn(30-1))
    }
    // READ DATA
    for {
        o, received, ok := reflect.Select(reflectCases)
        if !ok {
            fmt.Println("You really buggered this one up...")
        }
        ty, err := received.Interface().(testing)
        if err == true {
            fmt.Printf("Read from: %v, and received close signal from: %s\n", o, ty.market)
            // close control & stream here.
        } else {
            ty := received.Interface().(string)
            fmt.Printf("Read from: %v, and received value from: %s\n", o, ty)
        }
    }
}

// THE GO ROUTINES - TIMER AND RANDMSGS
func runningFunc(q chan testing, chanid string, stream chan string, dur int) {
    go timer(q, dur, chanid)
    go randmsgs(q, chanid, stream)
}

func timer(q chan testing, t int, message string) {
    for t > 0 {
        time.Sleep(time.Second)
        t--
    }
    q <- testing{true, message}
}

func randmsgs(q chan testing, chanid string, stream chan string) {
    for {
        select {
        case <-q:
            fmt.Println("Just sitting by the mailbox. :(")
            return
        default:
            secondsToWait := 1 + rand.Intn(5-1)
            time.Sleep(time.Second * time.Duration(secondsToWait))
            stream <- fmt.Sprintf("%s: %d", chanid, secondsToWait)
        }
    }
}
I apologise for the wall of text, but I'm all out of ideas :(!
K/Regards,
C.
Your channels q in the second half are the same as control[0...3] in the first.
Your reflect.Select that you are running also reads from all of these channels, with no delay.
The problem, I think, comes down to your reflect.Select simply running too fast and "stealing" all the channel output right away. This is why randmsgs is never able to read the messages.
You'll notice that if you remove the default case from randmsgs, the function is able to (potentially) read some of the messages from q.
select {
case <-q:
    fmt.Println("Just sitting by the mailbox. :(")
    return
}
This is because now that it is running without delay, it is always waiting for a message on q and thus has the chance to beat the reflect.Select in the race.
If you read from the same channel in multiple goroutines, then the data passed will simply go to whatever goroutine reads it first.
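A tiny demonstration of that behaviour; which reader gets which value varies from run to run:

package main

import (
    "fmt"
    "sync"
)

func main() {
    ch := make(chan int)
    var wg sync.WaitGroup
    for id := 1; id <= 2; id++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for v := range ch { // whichever goroutine receives first wins
                fmt.Printf("reader %d got %d\n", id, v)
            }
        }(id)
    }
    for i := 0; i < 6; i++ {
        ch <- i
    }
    close(ch)
    wg.Wait()
}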
This program appears to just be an experiment / learning experience, but I'll offer some criticism that may help.
Again, generally you don't have multiple goroutines reading from the same channel if both goroutines are doing different tasks. You're creating a mostly non-deterministic race as to which goroutine fetches the data first.
Second, this is a common beginner's anti-pattern with select that you should avoid:
for {
    select {
    case v := <-myChan:
        doSomething(v)
    default:
        // Oh no, there wasn't anything! Guess we have to wait and try again.
        time.Sleep(time.Second)
    }
}
This code is redundant because select already behaves in such a way that if no case is initially ready, it will wait until any case is ready and then proceed with that one. This default: sleep is effectively making your select loop slower and yet spending less time actually waiting on the channel (because 99.999...% of the time is spent on time.Sleep).
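The straightforward version simply blocks on the receive, e.g.:

for v := range myChan { // blocks until a value arrives or myChan is closed
    doSomething(v)
}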

Reliable way to write to a channel when buffer available

I've been working on a sort-of pub-sub mechanism in the application we're building. The business logic basically generates a whack-ton of events, which in turn can be used to feed data to the client using APIs, or persisted to storage if the application is running with that option enabled.
What we had, and observed:
Long story short, it turns out we were dropping data we really ought not to have been dropping. The "subscriber" had a channel with a large buffer, and essentially only read data from this channel, checked a few things, and appended it to a slice. The capacity of the slice is such that memory allocations were kept to a minimum. Simulating a scenario where the subscriber channel had a buffer of, say, 1000 data-sets, we noticed data could be dropped after only 10 sets were sent. The very first event was never dropped.
The code we had at this point looks something like this:
type broker struct {
    ctx  context.Context
    subs []*sub
}

type sub struct {
    ctx context.Context
    mu  *sync.Mutex
    ch  chan []interface{}
    buf []interface{}
}

func (b *broker) Send(evts ...interface{}) {
    rm := make([]int, 0, len(b.subs))
    defer func() {
        for i := len(rm) - 1; i >= 0; i-- {
            // last element
            if i == len(b.subs)-1 {
                b.subs = b.subs[:i]
                continue
            }
            b.subs = append(b.subs[:i], b.subs[i+1:]...)
        }
    }()
    for i, s := range b.subs {
        select {
        case <-b.ctx.Done(): // is the app still running
            return
        case <-s.Stopped(): // is this sub still valid
            rm = append(rm, i)
        case s.C() <- evts: // can we write to the channel
            continue
        default: // app is running, sub is valid, but channel is presumably full, skip
            fmt.Println("skipped some events")
        }
    }
}

func NewSub(ctx context.Context, buf int) *sub {
    s := &sub{
        ctx: ctx,
        mu:  &sync.Mutex{},
        ch:  make(chan []interface{}, buf),
        buf: make([]interface{}, 0, buf),
    }
    go s.loop(ctx) // start routine to consume events
    return s
}

func (s *sub) C() chan<- []interface{} {
    return s.ch
}

func (s *sub) Stopped() <-chan struct{} {
    return s.ctx.Done()
}

func (s *sub) loop(ctx context.Context) {
    defer func() {
        close(s.ch)
    }()
    for {
        select {
        case <-ctx.Done():
            return
        case data := <-s.ch:
            // do some processing
            s.mu.Lock()
            s.buf = append(s.buf, data...)
            s.mu.Unlock()
        }
    }
}

func (s *sub) GetProcessedData(amt int) []*wrappedT {
    s.mu.Lock()
    data := s.buf
    if len(data) == amt {
        s.buf = s.buf[:0]
    } else if len(data) > amt {
        data = data[:amt]
        s.buf = s.buf[amt:]
    } else {
        s.buf = make([]interface{}, 0, cap(s.buf))
    }
    s.mu.Unlock()
    ret := make([]*wrappedT, 0, len(data))
    for _, v := range data {
        // some processing
        ret = append(ret, &wrappedT{v})
    }
    return ret
}
Now obviously, the buffers are there to ensure that events can still be consumed when we're calling things like GetProcessedData. That type of call is usually the result of an API request, or some internal flush/persist to storage mechanism. Because of the mutex lock, we might not be reading from the internal channel. Weirdly, the channel buffers never got backed up all the way through, but not all data made its way through to the subscribers. As mentioned, the first event always did, which made us even more suspicious.
What we eventually tried (to fix):
After a fair bit of debugging, hair pulling, looking at language specs, and fruitless googling I began to suspect the select statement to be the problem. Instead of sending to the channels directly, I changed it to the rather hacky:
func (b *broker) send(s *sub, evts []interface{}) {
    ctx, cfunc := context.WithTimeout(b.ctx, 100*time.Millisecond)
    defer cfunc()
    select {
    case <-ctx.Done(): // receive from ctx.Done(), not from ctx itself
        return
    case s.C() <- evts:
        return
    case <-s.Stopped():
        return
    }
}

func (b *broker) Send(evts ...interface{}) {
    for _, s := range b.subs {
        go b.send(s, evts)
    }
}
Instantly, all events were correctly propagated through the system. Calling Send on the broker wasn't blocking the part of the system that actually performs the heavy lifting (that was the reason for the use of channels after all), and things are performing reasonably well.
The actual question(s):
There's a couple of things still bugging me:
The way I read the specs, the default statement ought to be the last resort, solely as a way out to prevent blocking channel operations in a select statement. Elsewhere, I read that the runtime may not consider a case ready for communication if there is no routine consuming what you're about to write to the channel, irrespective of channel buffers. Is this indeed how it works?
For the time being, the context with timeout fixes the bigger issue of data not propagating properly. However, I do feel like there should be a better way.
Has anyone ever encountered something similar, and worked out exactly what's going on?
I'm happy to provide more details where needed. I've kept the code as minimal as possible, omitting a lot of complexities WRT the broker system we're using (different event types, different types of subscribers, etc...). We don't use the interface{} type anywhere in case anyone is worried about that :P
For now, though, I think this is plenty of text for a Friday.

GO - Code stops executing after function return

So, I'm trying to construct a websocket server in Go, and I ran into this interesting bug which I can't for the life of me figure out.
NOTE: The comments in the code snippets are there only for this post. Read them.
I've got this function:
func Join(ws *websocket.Conn) {
    Log.Connection(ws)
    enc := json.NewEncoder(ws)
    dec := json.NewDecoder(ws)
    var dJ g.DiscussionJoin
    var disc g.Discussion
    Log.Err(dec.Decode(&dJ), "dec.Decode")
    ssD := g.FindDiscussionByID(dJ.DiscussionID)
    ssDJ := dJ.Convert(ws)
    g.DiscHandle <- &ssDJ
    disc = ssD.Convert()
    Log.Err(enc.Encode(disc), "enc.Encode")
    Log.Activity("Discussion", "Joined", disc.DiscussionID.Subject)
    fmt.Println("Listening") // This gets called
    g.Listen(dec)
    fmt.Println("Stoped Listening") // This DOESN'T get called [IT SHOULD]
    ssDJ.SSDiscussion.Leave(ssDJ.SSUserID)
    Log.Disconnection(ws)
}
The function that's causing this is (in my opinion) g.Listen(...):
func Listen(dec *json.Decoder) {
    timeLastSent := time.Now().Second()
    in := Message{}
    for ((timeLastSent + ConnTimeout) % 60) != time.Now().Second() {
        if err := dec.Decode(&in); err != nil {
            continue
        } else if in == Ping {
            timeLastSent = time.Now().Second()
            continue
        }
        timeLastSent = time.Now().Second()
        Messages <- in
        in = Message{}
    }
    fmt.Println("Client timed out!") // This gets called
    return
}
I've tried both with and without the return on the last line of Listen.
In response to @SimoEndre, I've left the main function out of the code example, but since you mentioned it, this is the function that takes g.Message{} out of the Messages channel.
NOTE: MessageHandler() runs in its own goroutine.
func MessageHandler() {
    for msg := range Messages {
        for _, disc := range LivingDiscussions {
            if disc.DiscussionID.UDID == msg.UDID {
                go disc.Push(msg)
                break
            }
        }
    }
}
Looking at the Listen function, you will note that it sends the Message{} struct on the Messages channel, but in the main goroutine nothing takes it out. Remember that channels are two-way communication pipes: if a value is sent into a channel, something must receive it, or the sender blocks.
So you need to create a channel with the same element type as Message{}:
message := make(chan Message)
Then in the Join function you have to pop out the value pushed to channel:
func Join(ws *websocket.Conn) {
    ...
    <-message
}
Update after new inputs:
It's not enough to iterate over the values coming from a channel; you need to do this inside a go func().
Getting the values out of different concurrently executing goroutines can be accomplished with the select keyword, which closely resembles the switch control statement and is sometimes called the communications switch.
go func() {
    for msg := range Messages {
        for _, disc := range LivingDiscussions {
            if disc.DiscussionID.UDID == msg.UDID {
                select {
                case disc.Push <- msg: // push the channel value to the stack
                default:
                    // default action
                }
            }
        }
    }
}()
I don't know how your disc.Push method is implemented, but if the idea is to push the received channel values onto a stack, you have to modify your code to send the value read from the channel on to that stack. In the code snippet above I just wanted to emphasize that it's important that values pushed into the channel get taken back out.

How to check a channel is closed or not without reading it?

This is a good example of the workers & controller mode in Go, written by @Jimt in answer to
"Is there some elegant way to pause & resume any other goroutine in golang?"
package main

import (
    "fmt"
    "runtime"
    "sync"
    "time"
)

// Possible worker states.
const (
    Stopped = 0
    Paused  = 1
    Running = 2
)

// Maximum number of workers.
const WorkerCount = 1000

func main() {
    // Launch workers.
    var wg sync.WaitGroup
    wg.Add(WorkerCount + 1)
    workers := make([]chan int, WorkerCount)
    for i := range workers {
        workers[i] = make(chan int)
        go func(i int) {
            worker(i, workers[i])
            wg.Done()
        }(i)
    }
    // Launch controller routine.
    go func() {
        controller(workers)
        wg.Done()
    }()
    // Wait for all goroutines to finish.
    wg.Wait()
}
func worker(id int, ws <-chan int) {
    state := Paused // Begin in the paused state.
    for {
        select {
        case state = <-ws:
            switch state {
            case Stopped:
                fmt.Printf("Worker %d: Stopped\n", id)
                return
            case Running:
                fmt.Printf("Worker %d: Running\n", id)
            case Paused:
                fmt.Printf("Worker %d: Paused\n", id)
            }
        default:
            // We use runtime.Gosched() to prevent a deadlock in this case.
            // It will not be needed if work is performed here which yields
            // to the scheduler.
            runtime.Gosched()
            if state == Paused {
                break
            }
            // Do actual work here.
        }
    }
}
// controller handles the current state of all workers. They can be
// instructed to be either running, paused or stopped entirely.
func controller(workers []chan int) {
    // Start workers.
    for i := range workers {
        workers[i] <- Running
    }
    // Pause workers.
    <-time.After(1e9)
    for i := range workers {
        workers[i] <- Paused
    }
    // Unpause workers.
    <-time.After(1e9)
    for i := range workers {
        workers[i] <- Running
    }
    // Shutdown workers.
    <-time.After(1e9)
    for i := range workers {
        close(workers[i])
    }
}
But this code also has an issue: if you want to remove a worker channel from workers when worker() exits, a deadlock happens.
If you close(workers[i]), the next time the controller writes into it a panic is raised, since Go can't write into a closed channel. If you use some mutex to protect it, then it will be stuck on workers[i] <- Running, since the worker is not reading anything from the channel and the write will block, and the mutex will cause a deadlock. You could also give the channel a bigger buffer as a workaround, but it's not good enough.
So I think the best way to solve this is for worker() to close the channel when it exits; if the controller finds a channel closed, it will jump over it and do nothing. But I can't find a way to check whether a channel is already closed in this situation. If I try to read the channel in the controller, the controller might be blocked. So I'm very confused for now.
PS: Recovering the raised panic is what I have tried, but it will shut down the goroutine which raised the panic. In this case it would be the controller, so it's no use.
Still, I think it would be useful for the Go team to implement such a function in the next version of Go.
There's no way to write a safe application where you need to know whether a channel is open without interacting with it.
The best way to do what you're wanting to do is with two channels -- one for the work and one to indicate a desire to change state (as well as the completion of that state change if that's important).
Channels are cheap. Complex design overloading semantics isn't.
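A sketch of that two-channel shape, reusing the state constants from the question's code (the channel and function names here are mine):

// worker consumes work and, separately, listens for state changes.
func worker(work <-chan int, state <-chan int, done chan<- struct{}) {
    st := Running
    for {
        select {
        case st = <-state:
            if st == Stopped {
                close(done) // signal completion of the state change
                return
            }
        case w := <-work:
            if st != Running {
                continue // paused: skip (or buffer) the item
            }
            _ = w // do actual work here
        }
    }
}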
[also]
<-time.After(1e9)
is a really confusing and non-obvious way to write
time.Sleep(time.Second)
Keep things simple and everyone (including you) can understand them.
In a hacky way it can be done for channels which one attempts to write to by recovering the raised panic. But you cannot check if a read channel is closed without reading from it.
Either you will
eventually read the "true" value from it (v := <-c)
read the "true" value and the 'not closed' indicator (v, ok := <-c)
read a zero value and the 'closed' indicator (v, ok := <-c) (example)
will block in the channel read forever (v := <-c)
Only the last one technically doesn't read from the channel, but that's of little use.
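For reference, here is the multi-valued receive on a closed (and here, buffered) channel; note that ok stays true while buffered values drain:

c := make(chan int, 1)
c <- 42
close(c)

v, ok := <-c // buffered value still delivered: v == 42, ok == true
fmt.Println(v, ok)

v, ok = <-c // drained and closed: v == 0, ok == false
fmt.Println(v, ok)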
I know this answer is late. I wrote this solution by hacking the Go runtime. It is not safe and it may crash:
import (
    "reflect"
    "unsafe"
)

func isChanClosed(ch interface{}) bool {
    if reflect.TypeOf(ch).Kind() != reflect.Chan {
        panic("only channels!")
    }
    // get interface value pointer, from cgo_export
    // typedef struct { void *t; void *v; } GoInterface;
    // then get channel real pointer
    cptr := *(*uintptr)(unsafe.Pointer(
        unsafe.Pointer(uintptr(unsafe.Pointer(&ch)) + unsafe.Sizeof(uint(0))),
    ))
    // this function will return true if chan.closed > 0
    // see hchan on https://github.com/golang/go/blob/master/src/runtime/chan.go
    // type hchan struct {
    //     qcount   uint           // total data in the queue
    //     dataqsiz uint           // size of the circular queue
    //     buf      unsafe.Pointer // points to an array of dataqsiz elements
    //     elemsize uint16
    //     closed   uint32
    //     ...
    cptr += unsafe.Sizeof(uint(0)) * 2
    cptr += unsafe.Sizeof(unsafe.Pointer(uintptr(0)))
    cptr += unsafe.Sizeof(uint16(0))
    return *(*uint32)(unsafe.Pointer(cptr)) > 0
}
Well, you can use the default branch to detect it, since a closed channel will always be selected. For example, the following code selects default, channel, channel; the first select does not block.
func main() {
    ch := make(chan int)
    go func() {
        select {
        case <-ch:
            log.Printf("1.channel")
        default:
            log.Printf("1.default")
        }
        select {
        case <-ch:
            log.Printf("2.channel")
        }
        close(ch)
        select {
        case <-ch:
            log.Printf("3.channel")
        default:
            log.Printf("3.default")
        }
    }()
    time.Sleep(time.Second)
    ch <- 1
    time.Sleep(time.Second)
}
Prints
2018/05/24 08:00:00 1.default
2018/05/24 08:00:01 2.channel
2018/05/24 08:00:01 3.channel
Note: refer to the comment by @Angad under this answer: "It doesn't work if you're using a buffered channel and it contains unread data."
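A quick illustration of that caveat, where the receive case is ready even though the channel is still open:

ch := make(chan int, 1)
ch <- 1 // buffered, open, with unread data

select {
case v := <-ch:
    fmt.Println("received", v) // fires although ch is not closed
default:
    fmt.Println("would block")
}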
I have had this problem frequently with multiple concurrent goroutines.
It may or may not be a good pattern, but I define a struct for my workers with a quit channel and a field for the worker state:
type Worker struct {
    data    chan struct{}
    quit    chan bool
    stopped bool
}
Then you can have a controller call a stop function for the worker:
func (w *Worker) Stop() {
    w.quit <- true
    w.stopped = true
}

func (w *Worker) eventloop() {
    for {
        if w.stopped {
            return
        }
        select {
        case d := <-w.data:
            // do something with d
            _ = d
            if w.stopped {
                return
            }
        case <-w.quit:
            return
        }
    }
}
This gives you a pretty good way to get a clean stop on your workers without anything hanging or generating errors, which is especially good when running in a container.
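Usage would look something like this sketch:

w := &Worker{
    data: make(chan struct{}),
    quit: make(chan bool),
}
go w.eventloop()

// ... later, from the controller:
w.Stop() // eventloop sees the quit signal (or the stopped flag) and returns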
You could set your channel to nil in addition to closing it. That way you can check if it is nil.
example in the playground:
https://play.golang.org/p/v0f3d4DisCz
edit:
This is actually a bad solution as demonstrated in the next example,
because setting the channel to nil in a function would break it:
https://play.golang.org/p/YVE2-LV9TOp
func main() {
    ch1 := make(chan int)
    ch2 := make(chan int)
    go func() {
        for i := 0; i < 10; i++ {
            ch1 <- i
        }
        close(ch1)
    }()
    go func() {
        for i := 10; i < 15; i++ {
            ch2 <- i
        }
        close(ch2)
    }()
    ok1, ok2 := false, false
    v := 0
    for {
        ok1, ok2 = true, true
        select {
        case v, ok1 = <-ch1:
            if ok1 {
                fmt.Println(v)
            }
        default:
        }
        select {
        case v, ok2 = <-ch2:
            if ok2 {
                fmt.Println(v)
            }
        default:
        }
        if !ok1 && !ok2 {
            return
        }
    }
}
From the documentation:
A channel may be closed with the built-in function close. The multi-valued assignment form of the receive operator reports whether a received value was sent before the channel was closed.
https://golang.org/ref/spec#Receive_operator
An example from the book Go in Action shows this case:
// This sample program demonstrates how to use an unbuffered
// channel to simulate a game of tennis between two goroutines.
package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

// wg is used to wait for the program to finish.
var wg sync.WaitGroup

func init() {
    rand.Seed(time.Now().UnixNano())
}

// main is the entry point for all Go programs.
func main() {
    // Create an unbuffered channel.
    court := make(chan int)

    // Add a count of two, one for each goroutine.
    wg.Add(2)

    // Launch two players.
    go player("Nadal", court)
    go player("Djokovic", court)

    // Start the set.
    court <- 1

    // Wait for the game to finish.
    wg.Wait()
}

// player simulates a person playing the game of tennis.
func player(name string, court chan int) {
    // Schedule the call to Done to tell main we are done.
    defer wg.Done()

    for {
        // Wait for the ball to be hit back to us.
        ball, ok := <-court
        fmt.Printf("ok %t\n", ok)
        if !ok {
            // If the channel was closed we won.
            fmt.Printf("Player %s Won\n", name)
            return
        }

        // Pick a random number and see if we miss the ball.
        n := rand.Intn(100)
        if n%13 == 0 {
            fmt.Printf("Player %s Missed\n", name)
            // Close the channel to signal we lost.
            close(court)
            return
        }

        // Display and then increment the hit count by one.
        fmt.Printf("Player %s Hit %d\n", name, ball)
        ball++

        // Hit the ball back to the opposing player.
        court <- ball
    }
}
It's easier to check first whether the channel has elements; that would ensure the channel is alive.
func isChanClosed(ch chan interface{}) bool {
    if len(ch) == 0 {
        select {
        case _, ok := <-ch:
            // note: this consumes a value if one arrives concurrently
            return !ok
        default:
            // nothing buffered and no sender ready; without this default
            // the single-case select would block
        }
    }
    return false
}
If you listen on this channel, you can always find out when the channel has been closed:
case state, opened := <-ws:
    if !opened {
        // channel was closed
        // return or do some final work
    }
    switch state {
    case Stopped:
But remember, you cannot close a channel twice. That will raise a panic.
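For example:

ch := make(chan int)
close(ch)
close(ch) // panic: close of closed channel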
