One piece of data in a channel received by two goroutines - Go

Hello, I am learning about goroutines and channels.
I did an experiment with a channel: I send a value over the channel and try to catch it in 2 functions, but my second function does not run.
Here is my code:
package main

import (
    "fmt"
    "os"
    "time"
)

func timeout(duration int, ch chan<- bool) {
    time.AfterFunc(time.Duration(duration)*time.Second, func() {
        ch <- true
    })
}

func watcher(duration int, ch <-chan bool) {
    <-ch
    fmt.Println("\nTimeout! no Answer after", duration, "seconds")
    os.Exit(0)
}

func watcher2(duration int, ch <-chan bool) {
    <-ch
    fmt.Println("This is watcher 2 as a second receiver")
}

func main() {
    var data = make(chan bool)
    var duration = 5
    go timeout(duration, data)
    go watcher(duration, data)
    go watcher2(duration, data)
    var input string
    fmt.Print("What is 725/25 ? ")
    fmt.Scan(&input)
    if input == "29" {
        fmt.Println("Correct")
    } else {
        fmt.Println("Wrong!")
    }
}
Can you give me some explanation of what is happening here?
Thank you

As @Andy Schweig mentioned, a value sent on a Go channel can be received only once. If you still want the message to be handled by two receivers, you can use the Observer design pattern:
import "fmt"
type Observer interface {
Notify(message string)
}
type Watcher struct {
name string
}
func (w Watcher) Notify(message string) {
fmt.Printf("Watcher %s got message %s\n", w.name, message)
}
var watchers = [...]Watcher {{name: "Watcher 1"}, {name: "Watcher 2"}}
var c = make(chan string)
func notifier() {
var message string
for {
// Messaged pulled only once
message = <- c
// But all watchers still receive it
for _, w := range watchers {
w.Notify(message)
}
}
}
func main() {
go notifier()
c <- "hello"
c <- "how are you?"
}

A value sent on the channel you declared is delivered to only one receiver. By default channels are unbuffered, meaning that a send completes only when a corresponding receiver is ready to take the value, whereas a buffered channel accepts a limited number of values without a corresponding receiver. If you are looking to inject multiple inputs and receive each of them, you need to declare your channel as a buffered channel.
ch := make(chan bool, n) // n being the number of items to buffer
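As a minimal illustration of that difference (my own sketch, separate from the timeout program in the question): a buffered channel accepts sends up to its capacity even when no receiver is waiting, but each buffered value is still delivered to exactly one receiver.
package main

import "fmt"

func main() {
    // With an unbuffered channel, each of these sends would block until a
    // receiver is ready; with capacity 2 they complete immediately.
    ch := make(chan bool, 2)
    ch <- true // accepted into the buffer
    ch <- true // buffer is now full; a third send would block

    // Each buffered value still goes to exactly one receiver.
    fmt.Println(<-ch, <-ch)
}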

Related

Go PubSub without mutexes?

I will be implementing a notification system into a website backend where each page visit subscribes the user to some data that is displayed on the page, and when there are changes in the system, he will be notified about them. For example, someone is viewing a page with news articles, and when a new article is posted, I want to notify the user so he can then fetch the new articles via JS or by reloading the page, either manually or automatically.
To make this happen I will be using channels in a pub/sub manner. So, for example, there will be a "news" channel. When a new article is created, this channel will receive the id of the article. When a user opens a page and subscribes to the "news" channel (probably via websocket), he will have to be added to a list of subscribers for this news channel so he can be notified.
Something like:
type Channel struct {
ingres <-chan int // news article id
subs [] chan<- int
mx sync.Mutex
}
There will be a goroutine for each of these channels that distributes whatever comes in on ingres to the channels in the subs list.
Now the problem I am looking at, probably premature optimization, is that there will be a lot of channels and a lot of subscribers coming and going, which means a lot of stop-the-world events with mutexes. For example, if there are 10,000 users online and subscribed to the news channel, the goroutine will have to send 10k notifications WHILE the subs slice is locked, so new subscribers will have to wait for the mutex to unlock. Now multiply this by 100 channels and I think we have a problem.
So I am looking for a way to add and remove subscribers without blocking other subscribers from being added or removed, or at least a way for everyone to receive the notification in acceptable time across the board.
That "waiting for all subs to receive" problem can be solved with a goroutine for each sub, with a timeout, so after the id is received, 10k goroutines are created and the mutex can be unlocked right away. But still, it can add up with multiple channels.
Based on the linked comments I have come up with this code:
package notif
import (
"context"
"sync"
"time"
"unsafe"
)
type Client struct {
recv chan interface{}
ch *Channel
o sync.Once
ctx context.Context
cancel context.CancelFunc
}
// will be nil if this client is write-only
func (c *Client) Listen() <-chan interface{} {
return c.recv
}
func (c *Client) Close() {
select {
case <-c.ctx.Done():
case c.ch.unsubscribe <- c:
}
}
func (c *Client) Done() <-chan struct{} {
return c.ctx.Done()
}
func (c *Client) doClose() {
c.o.Do(func() {
c.cancel()
if c.recv != nil {
close(c.recv)
}
})
}
func (c *Client) send(msg interface{}) {
// write-only clients will not handle any messages
if c.recv == nil {
return
}
t := time.NewTimer(c.ch.sc)
select {
case <-c.ctx.Done():
case c.recv <- msg:
case <-t.C:
// time out/slow consumer, close the connection
c.Close()
}
}
func (c *Client) Broadcast(payload interface{}) bool {
select {
case <-c.ctx.Done():
return false
default:
c.ch.Broadcast() <- &envelope{Message: payload, Sender: uintptr(unsafe.Pointer(c))}
return true
}
}
type envelope struct {
Message interface{}
Sender uintptr
}
// leech is channel-blocking so goroutine should be called internally to make it non-blocking
// this is to ensure proper order of leeched messages.
func NewChannel(ctx context.Context, name string, slowConsumer time.Duration, emptyCh chan string, leech func(interface{})) *Channel {
return &Channel{
name: name,
ingres: make(chan interface{}, 1000),
subscribe: make(chan *Client, 1000),
unsubscribe: make(chan *Client, 1000),
aud: make(map[*Client]struct{}, 1000),
ctx: ctx,
sc: slowConsumer,
empty: emptyCh,
leech: leech,
}
}
type Channel struct {
name string
ingres chan interface{}
subscribe chan *Client
unsubscribe chan *Client
aud map[*Client]struct{}
ctx context.Context
sc time.Duration
empty chan string
leech func(interface{})
}
func (ch *Channel) Id() string {
return ch.name
}
// subscription is read-write by default. by providing "writeOnly=true", it can be switched into write-only mode
// in which case the client will not be disconnected for being slow reader.
func (ch *Channel) Subscribe(writeOnly ...bool) *Client {
c := &Client{
ch: ch,
}
if len(writeOnly) == 0 || writeOnly[0] == false {
c.recv = make(chan interface{})
}
c.ctx, c.cancel = context.WithCancel(ch.ctx)
ch.subscribe <- c
return c
}
func (ch *Channel) Broadcast() chan<- interface{} {
return ch.ingres
}
// returns once context is cancelled
func (ch *Channel) Start() {
for {
select {
case <-ch.ctx.Done():
for cl := range ch.aud {
delete(ch.aud, cl)
cl.doClose()
}
return
case cl := <-ch.subscribe:
ch.aud[cl] = struct{}{}
case cl := <-ch.unsubscribe:
delete(ch.aud, cl)
cl.doClose()
if len(ch.aud) == 0 {
ch.signalEmpty()
}
case msg := <-ch.ingres:
e, ok := msg.(*envelope)
if ok {
msg = e.Message
}
for cl := range ch.aud {
if ok == false || uintptr(unsafe.Pointer(cl)) != e.Sender {
// send msg, not e.Message: e is nil when the payload was not an *envelope
go cl.send(msg)
}
}
if ch.leech != nil {
ch.leech(msg)
}
}
}
}
func (ch *Channel) signalEmpty() {
if ch.empty == nil {
return
}
select {
case ch.empty <- ch.name:
default:
}
}
type subscribeRequest struct {
name string
recv chan *Client
wo bool
}
type broadcastRequest struct {
name string
recv chan *Channel
}
type brokeredChannel struct {
ch *Channel
cancel context.CancelFunc
}
type brokerLeech interface {
Match(string) func(interface{})
}
func NewBroker(ctx context.Context, slowConsumer time.Duration, leech brokerLeech) *Broker {
return &Broker{
chans: make(map[string]*brokeredChannel, 100),
subscribe: make(chan *subscribeRequest, 10),
broadcast: make(chan *broadcastRequest, 10),
ctx: ctx,
sc: slowConsumer,
empty: make(chan string, 10),
leech: leech,
}
}
type Broker struct {
chans map[string]*brokeredChannel
subscribe chan *subscribeRequest
broadcast chan *broadcastRequest
ctx context.Context
sc time.Duration
empty chan string
leech brokerLeech
}
// returns once context is cancelled
func (b *Broker) Start() {
for {
select {
case <-b.ctx.Done():
return
case req := <-b.subscribe:
ch, ok := b.chans[req.name]
if ok == false {
ctx, cancel := context.WithCancel(b.ctx)
var l func(interface{})
if b.leech != nil {
l = b.leech.Match(req.name)
}
ch = &brokeredChannel{
ch: NewChannel(ctx, req.name, b.sc, b.empty, l),
cancel: cancel,
}
b.chans[req.name] = ch
go ch.ch.Start()
}
req.recv <- ch.ch.Subscribe(req.wo)
case req := <-b.broadcast:
ch, ok := b.chans[req.name]
if ok == false {
ctx, cancel := context.WithCancel(b.ctx)
var l func(interface{})
if b.leech != nil {
l = b.leech.Match(req.name)
}
ch = &brokeredChannel{
ch: NewChannel(ctx, req.name, b.sc, b.empty, l),
cancel: cancel,
}
b.chans[req.name] = ch
go ch.ch.Start()
}
req.recv <- ch.ch
case name := <-b.empty:
if ch, ok := b.chans[name]; ok {
ch.cancel()
delete(b.chans, name)
}
}
}
}
// subscription is read-write by default. by providing "writeOnly=true", it can be switched into write-only mode
// in which case the client will not be disconnected for being slow reader.
func (b *Broker) Subscribe(name string, writeOnly ...bool) *Client {
req := &subscribeRequest{
name: name,
recv: make(chan *Client),
wo: len(writeOnly) > 0 && writeOnly[0] == true,
}
b.subscribe <- req
c := <-req.recv
close(req.recv)
return c
}
func (b *Broker) Broadcast(name string) chan<- interface{} {
req := &broadcastRequest{name: name, recv: make(chan *Channel)}
b.broadcast <- req
ch := <-req.recv
close(req.recv)
return ch.ingres
}

How to handle multiple goroutines that share the same channel

I've been searching a lot but could not find an answer for my problem yet.
I need to make multiple concurrent calls to an external API, each with different parameters.
Then, for each call, I need to init a struct for each dataset and process the data I receive from the API call. Bear in mind that I read each line of the incoming stream and immediately start sending it to the channel.
The first problem I encountered, which was not obvious at the beginning due to the large quantity of data I'm receiving, is that each goroutine does not receive all the data that goes through the channel (each value sent on the channel is delivered to only one of the goroutines, which I learned through research). So what I need is a way of requeuing/redirecting that data to the correct goroutine.
The function that sends the streamed response from a single dataset.
(I've cut useless parts of code that are out of context)
func (api *API) RequestData(ctx context.Context, c chan DWeatherResponse, dataset string, wg *sync.WaitGroup) error {
for {
line, err := reader.ReadBytes('\n')
s := string(line)
if err != nil {
log.Println("End of %s", dataset)
return err
}
data, err := extractDataFromStreamLine(s, dataset)
if err != nil {
continue
}
c <- *data
}
}
The function that will process the incoming data
func (s *StrikeStruct) Process(ch, requeue chan dweather.DWeatherResponse) {
for {
data, more := <-ch
if !more {
break
}
// data contains {dataset string, value float64, date time.Time}
// The s.Parameter needs to match the dataset
// IMPORTANT PART: checks if the received data is part of this struct's dataset.
// If not, I want to send it to another goroutine until it gets to the correct
// one. There will be a max of 4 datasets, but still this may not be the best approach.
if !api.GetDataset(s.Parameter, data.Dataset) {
requeue <- data
continue
}
// Do stuff with the data from this point
}
}
Now on my own API endpoint I have the following:
ch := make(chan dweather.DWeatherResponse, 2)
requeue := make(chan dweather.DWeatherResponse)
final := make(chan strike.StrikePerYearResponse)
var wg sync.WaitGroup
for _, s := range args.Parameters.Strikes {
strike := strike.StrikePerYear{
Parameter: strike.Parameter(s.Dataset),
StrikeValue: s.Value,
}
// I receive and process the data in here
go strike.ProcessStrikePerYear(ch, requeue, final, string(s.Dataset))
}
go func() {
for {
data, _ := <-requeue
ch <- data
}
}()
// Creates a goroutine for each dataset
for _, dataset := range api.Params.Dataset {
wg.Add(1)
go api.RequestData(ctx, ch, dataset, &wg)
}
wg.Wait()
close(ch)
//Once the data is all processed it is all appended
var strikes []strike.StrikePerYearResponse
for range args.Fetch.Datasets {
strikes = append(strikes, <-final)
}
return strikes
The issue with this code is that as soon as I start receiving data from more than one endpoint the requeue will block and nothing more happens. If I remove that requeue logic data will be lost if it does not land on the correct goroutine.
My two questions are:
Why is the requeue blocking if it has a goroutine always ready to receive?
Should I take a different approach on how I'm processing the incoming data?
This is not a good way to solve your problem; you should change your approach. I suggest an implementation like the one below:
package main

import (
"fmt"
"sync"
)
// answer for https://stackoverflow.com/questions/68454226/how-to-handle-multiple-goroutines-that-share-the-same-channel
var (
finalResult = make(chan string)
)
// IData is used by the message dispatcher; every data struct must implement its methods
type IData interface {
IsThisForMe() bool
Process(*sync.WaitGroup)
}
//MainData can be your main struct like StrikePerYear
type MainData struct {
// add any props
Id int
Name string
}
type DataTyp1 struct {
MainData *MainData
}
func (d DataTyp1) IsThisForMe() bool {
// you can check your condition for the incoming data here
if d.MainData.Id == 2 {
return true
}
return false
}
func (d DataTyp1) Process(wg *sync.WaitGroup) {
d.MainData.Name = "processed by DataTyp1"
// send result to final channel, you can change it as you want
finalResult <- d.MainData.Name
wg.Done()
}
type DataTyp2 struct {
MainData *MainData
}
func (d DataTyp2) IsThisForMe() bool {
// you can check your condition for the incoming data here
if d.MainData.Id == 3 {
return true
}
return false
}
func (d DataTyp2) Process(wg *sync.WaitGroup) {
d.MainData.Name = "processed by DataTyp2"
// send result to final channel, you can change it as you want
finalResult <- d.MainData.Name
wg.Done()
}
// dispatcher will run a new goroutine for each request.
// you can implement a worker pool to prevent running too many goroutines
// (a rough sketch of that follows the note at the end of this answer).
func dispatcher(incomingData *MainData, wg *sync.WaitGroup) {
// based on your requirements you can remove this goroutine or not
go func() {
var p IData
p = DataTyp1{incomingData}
if p.IsThisForMe() {
go p.Process(wg)
return
}
p = DataTyp2{incomingData}
if p.IsThisForMe() {
go p.Process(wg)
return
}
}()
}
func main() {
dummyDataArray := []MainData{
MainData{Id: 2, Name: "this data #2"},
MainData{Id: 3, Name: "this data #3"},
}
wg := sync.WaitGroup{}
for i := range dummyDataArray {
wg.Add(1)
dispatcher(&dummyDataArray[i], &wg)
}
result := make([]string, 0)
done := make(chan struct{})
// data collector
go func() {
loop:
for {
select {
case <-done:
break loop
case r := <-finalResult:
result = append(result, r)
}
}
}()
wg.Wait()
done <- struct{}{}
for _, s := range result {
fmt.Println(s)
}
}
Note: this is just meant to open your mind to finding a better solution; it is certainly not production-ready code.
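As a follow-up to the worker-pool hint in the dispatcher comment, here is a rough, standalone sketch (my own; the pool size and the job payload are arbitrary) of a fixed pool draining a jobs channel instead of spawning one goroutine per request:
package main

import (
    "fmt"
    "sync"
)

func main() {
    const poolSize = 4 // assumed size
    jobs := make(chan int)
    results := make(chan string)

    var wg sync.WaitGroup
    for w := 0; w < poolSize; w++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for j := range jobs {
                results <- fmt.Sprintf("worker %d processed job %d", id, j)
            }
        }(w)
    }

    go func() {
        for i := 0; i < 10; i++ {
            jobs <- i
        }
        close(jobs) // no more work: lets the workers finish
        wg.Wait()   // wait for all workers to exit
        close(results)
    }()

    for r := range results {
        fmt.Println(r)
    }
}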

Check if someone has read from a Go channel

How can we set something like a listener on Go channels so that when someone reads something from the channel, we are notified?
Imagine we have a sequence number for channel entries, and we want to decrement it when someone reads a value from our channel somewhere outside our package.
Unbuffered channels hand off data synchronously, so you already know when the data is read. Buffered channels work similarly when the buffer is full, but otherwise they don't block in the same way, so this approach wouldn't tell you quite the same thing. Depending on what your needs really are, consider also using tools like sync.WaitGroup.
ch = make(chan Data)
⋮
for {
⋮
// make data available
ch <- data
// now you know it was read
sequenceNumber--
⋮
}
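Here is a self-contained sketch of that pattern (my own completion of the snippet above; the Data type and the initial counter value are placeholders): with an unbuffered channel the send only returns once a receiver has taken the value, so the counter can be decremented right after it.
package main

import "fmt"

type Data struct{ Value int }

func main() {
    ch := make(chan Data) // unbuffered: each send waits for a receiver
    done := make(chan struct{})
    sequenceNumber := 3

    go func() {
        for d := range ch {
            fmt.Println("consumer got", d.Value)
        }
        close(done)
    }()

    for i := 0; i < 3; i++ {
        ch <- Data{Value: i} // blocks until the consumer has received the value
        sequenceNumber--     // so at this point we know it was read
        fmt.Println("remaining:", sequenceNumber)
    }
    close(ch)
    <-done // let the consumer drain before exiting
}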
You could create a channel relay mechanism to capture read events in real time.
So for example:
func relayer(in <-chan MyStruct) <-chan MyStruct {
out := make(chan MyStruct) // non-buffered chan (see below)
go func() {
defer close(out)
readCountLimit := 10
for item := range in {
out <- item
// ^^^^ so this will block until some worker has read from 'out'
readCountLimit--
}
}()
return out
}
Usage:
type MyStruct struct {
// put your data fields here
}
ch := make(chan MyStruct) // <- original channel - used by producer to write to
rch := relayer(ch) // <- relay channel - used to read from
// consumers
go worker("worker 1", rch)
go worker("worker 2", rch)
// producer
for { ch <- MyStruct{} }
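The worker used in that snippet is not shown; a minimal assumed version, plus a bounded producer so the example terminates (this reuses MyStruct and relayer from above and needs "fmt" and "time" imports), could look like:
func worker(name string, in <-chan MyStruct) {
    for item := range in {
        fmt.Println(name, "processed", item)
    }
}

func main() {
    ch := make(chan MyStruct)
    rch := relayer(ch)

    // consumers
    go worker("worker 1", rch)
    go worker("worker 2", rch)

    // producer, bounded so the program ends
    for i := 0; i < 5; i++ {
        ch <- MyStruct{}
    }
    close(ch)                          // lets relayer close 'out', so the workers exit
    time.Sleep(100 * time.Millisecond) // crude wait for the workers to finish printing
}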
You can do it manually: implement some sort of ACK marker on the message.
Something like this:
package main

import "time"

type Msg struct {
    Data int
    ack  bool
}

func (m *Msg) Ack() {
    m.ack = true
}

func (m *Msg) Acked() bool {
    return m.ack
}

func main() {
    ch := make(chan *Msg)
    msg := &Msg{Data: 1}
    go func() {
        for {
            if msg.Acked() {
                // do smth
            }
            // note: ack is read here and written in main without synchronization;
            // real code should guard it with a mutex or sync/atomic
            time.Sleep(10 * time.Second)
        }
    }()
    // send from a separate goroutine and close the channel afterwards;
    // sending from this goroutine before the range loop would deadlock
    go func() {
        ch <- msg
        close(ch)
    }()
    for msg := range ch {
        msg.Ack()
    }
}
Code not tested.
You can also add some additional information to Ack() method, say meta information about package and func, from where Ack() was called, this answer may be related: https://stackoverflow.com/a/35213181/3782382
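For that last point, a hypothetical sketch (my own; it assumes the Msg type above gains an ackedBy string field and that "fmt" and "runtime" are imported) of recording where the acknowledgement came from:
// AckFrom marks the message as acknowledged and records the caller,
// in the spirit of the linked answer about runtime.Caller.
func (m *Msg) AckFrom() {
    if pc, file, line, ok := runtime.Caller(1); ok {
        if fn := runtime.FuncForPC(pc); fn != nil {
            m.ackedBy = fmt.Sprintf("%s (%s:%d)", fn.Name(), file, line)
        }
    }
    m.ack = true
}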

Broadcast a channel through multiple channels in Go

I would like to broadcast data received from a channel to a list of channels. The list of channels is dynamic and can be modified at runtime.
As a new developer in Go, I wrote this code. I find it quite heavy for what I want. Is there a better way to do this?
package utils
import "sync"
// StringChannelBroadcaster broadcasts string data from a channel to multiple channels
type StringChannelBroadcaster struct {
Source chan string
Subscribers map[string]*StringChannelSubscriber
stopChannel chan bool
mutex sync.Mutex
capacity uint64
}
// NewStringChannelBroadcaster creates a StringChannelBroadcaster
func NewStringChannelBroadcaster(capacity uint64) (b *StringChannelBroadcaster) {
return &StringChannelBroadcaster{
Source: make(chan string, capacity),
Subscribers: make(map[string]*StringChannelSubscriber),
capacity: capacity,
}
}
// Dispatch starts dispatching message
func (b *StringChannelBroadcaster) Dispatch() {
b.stopChannel = make(chan bool)
for {
select {
case val, ok := <-b.Source:
if ok {
b.mutex.Lock()
for _, value := range b.Subscribers {
value.Channel <- val
}
b.mutex.Unlock()
}
case <-b.stopChannel:
return
}
}
}
// Stop stops the Broadcaster
func (b *StringChannelBroadcaster) Stop() {
close(b.stopChannel)
}
// StringChannelSubscriber defines a subscriber to a StringChannelBroadcaster
type StringChannelSubscriber struct {
Key string
Channel chan string
}
// NewSubscriber returns a new subscriber to the StringChannelBroadcaster
func (b *StringChannelBroadcaster) NewSubscriber() *StringChannelSubscriber {
key := RandString(20)
newSubscriber := StringChannelSubscriber{
Key: key,
Channel: make(chan string, b.capacity),
}
b.mutex.Lock()
b.Subscribers[key] = &newSubscriber
b.mutex.Unlock()
return &newSubscriber
}
// RemoveSubscriber removes a subscriber from the StringChannelBroadcaster
func (b *StringChannelBroadcaster) RemoveSubscriber(subscriber *StringChannelSubscriber) {
b.mutex.Lock()
delete(b.Subscribers, subscriber.Key)
b.mutex.Unlock()
}
Thank you,
Julien
I think you can simplify it a bit: get rid of stopChannel and the Stop method. You can just close Source instead of calling Stop, and detect that in Dispatch (ok will be false) to quit (you can just range over the source channel actually).
You can get rid of Dispatch, and just start a goroutine in NewStringChannelBroadcaster with the for cycle, so external code doesn't have to start the dispatch cycle separately.
You can use a channel type as the map key, so your map can become map[chan string]struct{} (empty struct because you don't need the map value). So your NewSubscriber can take a channel type parameter (or create a new channel and return it), and insert that into the map, you don't need the random string or the StringChannelSubscriber type.
I also made some improvements, like closing the subscriber channels:
package main
import "sync"
import (
"fmt"
"time"
)
// StringChannelBroadcaster broadcasts string data from a channel to multiple channels
type StringChannelBroadcaster struct {
Source chan string
Subscribers map[chan string]struct{}
mutex sync.Mutex
capacity uint64
}
// NewStringChannelBroadcaster creates a StringChannelBroadcaster
func NewStringChannelBroadcaster(capacity uint64) *StringChannelBroadcaster {
b := &StringChannelBroadcaster{
Source: make(chan string, capacity),
Subscribers: make(map[chan string]struct{}),
capacity: capacity,
}
go b.dispatch()
return b
}
// dispatch delivers each message from Source to all subscribers
func (b *StringChannelBroadcaster) dispatch() {
// for iterates until the channel is closed
for val := range b.Source {
b.mutex.Lock()
for ch := range b.Subscribers {
ch <- val
}
b.mutex.Unlock()
}
b.mutex.Lock()
for ch := range b.Subscribers {
close(ch)
// you shouldn't be calling RemoveSubscriber after closing b.Source
// but it's better to be safe than sorry
delete(b.Subscribers, ch)
}
b.Subscribers = nil
b.mutex.Unlock()
}
func (b *StringChannelBroadcaster) NewSubscriber() chan string {
ch := make(chan string, b.capacity)
b.mutex.Lock()
if b.Subscribers == nil {
panic(fmt.Errorf("NewSubscriber called on closed broadcaster"))
}
b.Subscribers[ch] = struct{}{}
b.mutex.Unlock()
return ch
}
// RemoveSubscriber removes a subscriber from the StringChannelBroadcaster
func (b *StringChannelBroadcaster) RemoveSubscriber(ch chan string) {
b.mutex.Lock()
if _, ok := b.Subscribers[ch]; ok {
close(ch) // this line does have to be inside the if to prevent close of closed channel, in case RemoveSubscriber is called twice on the same channel
delete(b.Subscribers, ch) // this line doesn't need to be inside the if
}
b.mutex.Unlock()
}
func main() {
b := NewStringChannelBroadcaster(0)
var toberemoved chan string
for i := 0; i < 3; i++ {
i := i
ch := b.NewSubscriber()
if i == 1 {
toberemoved = ch
}
go func() {
for v := range ch {
fmt.Printf("receive %v: %v\n", i, v)
}
fmt.Printf("Exit %v\n", i)
}()
}
b.Source <- "Test 1"
b.Source <- "Test 2"
// This is a race condition: the second reader may or may not receive the first two messages.
b.RemoveSubscriber(toberemoved)
b.Source <- "Test 3"
// let the reader goroutines receive the last message
time.Sleep(2 * time.Second)
close(b.Source)
// let the reader goroutines write close message
time.Sleep(1 * time.Second)
}
https://play.golang.org/p/X-NcikvbDM
Edit: I've added your edit to fix the panic when calling RemoveSubscriber after closing Source, but you shouldn't be doing that; you should let the struct and everything in it be garbage collected after the channel is closed.
I've also added a panic to NewSubscriber if it's called after closing Source. Previously you could do that and it'd leak the created channel and presumably the goroutine that will block forever on that channel.
If you can call NewSubscriber (or RemoveSubscriber) on an already closed broadcaster, that probably means there's an error in your code somewhere, since you're holding on to a broadcaster that you shouldn't be.

Buffered channel worker panics

I wrote a little worker queue using buffered channels.
I want to have the ability to "restart" this worker.
But when I do so I get a panic saying "panic: close of closed channel".
Actually, I don't understand why it's a closed channel, because it shouldn't be closed anymore after the make.
Here is the example code (http://play.golang.org/p/nLvNiMaOoA):
package main
import (
"fmt"
"time"
)
type T struct {
ch chan int
}
func (s T) reset() {
close(s.ch)
s.ch = make(chan int, 2)
}
func (s T) wrk() {
for i := range s.ch {
fmt.Println(i)
}
fmt.Println("close")
}
func main() {
t := T{make(chan int, 2)}
for {
go t.wrk()
time.Sleep(time.Second)
t.reset()
}
}
Can you tell me what I'm doing wrong there?
The problem is that you have a value receiver in your reset function, which means that s will be copied and you won't see the effects on your t variable in the loop.
To fix that, make it a pointer receiver:
func (s *T) reset() {
close(s.ch)
s.ch = make(chan int, 2)
}
For more info on this topic see Effective Go.
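For completeness, a corrected version of the whole example under that fix (my own assembly: I bounded the loop and added a send so something actually gets printed, which the original did not do):
package main

import (
    "fmt"
    "time"
)

type T struct {
    ch chan int
}

// pointer receiver: the new channel is stored back into the caller's T
func (s *T) reset() {
    close(s.ch)
    s.ch = make(chan int, 2)
}

func (s T) wrk() {
    for i := range s.ch {
        fmt.Println(i)
    }
    fmt.Println("close")
}

func main() {
    t := T{make(chan int, 2)}
    for n := 0; n < 3; n++ { // bounded here so the sketch terminates
        go t.wrk() // the goroutine gets a copy holding the current channel
        t.ch <- n
        time.Sleep(time.Second)
        t.reset() // closes the old channel (worker prints "close") and installs a new one
    }
}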
