Golang unexpected behaviour while implementing concurrency using channel and select statement

Below is my implementation of the Pub/Sub model in Go using concurrency. The code sometimes executes as expected, but sometimes it gives output like this:
Output
message received on channel 2: Hello World
message received on channel 3: Hello World
message received on channel 1: Hello World
message received on channel 1:
subscriber 1's context cancelled
message received on channel 3: Only channels 2 and 3 should print this
message received on channel 2: Only channels 2 and 3 should print this
The key thing to note here is that subscriber 1 printed "message received on channel 1:" even after the main function called RemoveSubscriber to remove it. Subscriber 1 should only receive one message. What seems to be happening is that the goroutine receives a message from subscriber.out before dctx is cancelled. How can I fix this? The subscriber's goroutine is launched in the AddSubscriber method.
Expected execution:
Channel 1 should print only once; then its context should be cancelled and its goroutine should exit.
main.go
package main
import (
"fmt"
"net/http"
)
func main() {
var err error
publisher := NewPublisher()
publisher.AddSubscriber()
publisher.AddSubscriber()
publisher.AddSubscriber()
publisher.Start()
err = publisher.Publish("Hello World")
if err != nil {
fmt.Printf("could not publish: %v\n", err)
}
err = publisher.RemoveSubscriber(1)
if err != nil {
fmt.Printf("could not remove subscriber: %v\n", err)
}
err = publisher.Publish("Only channels 2 and 3 should print this")
if err != nil {
fmt.Printf("could not publish: %v\n", err)
}
// this is merely to keep the server running
http.ListenAndServe(":8080", nil)
}
publisher.go
package main
import (
"context"
"errors"
"fmt"
"sync"
)
// Publisher sends messages received in its in channel
// to all the Subscribers listening to it.
type Publisher struct {
sequence uint
in chan string
subscribers map[uint]*Subscriber
sync.RWMutex
ctx context.Context
cancel *context.CancelFunc
}
// NewPublisher returns a new Publisher with zero subscribers.
// A Publisher needs to be started with the Publisher.Start()
// method before it can start publishing incoming messages.
func NewPublisher() *Publisher {
ctx, cancel := context.WithCancel(context.Background())
return &Publisher{subscribers: map[uint]*Subscriber{}, ctx: ctx, cancel: &cancel}
}
// AddSubscriber creates a new Subscriber that starts
// listening to the messages sent by this Publisher.
func (p *Publisher) AddSubscriber() {
dctx, cancel := context.WithCancel(p.ctx)
p.Lock()
nextId := p.sequence + 1
subscriber := NewSubscriber(nextId, dctx, &cancel)
p.subscribers[nextId] = subscriber
p.sequence = p.sequence + 1
p.Unlock()
go func() {
for {
select {
case <-p.ctx.Done():
fmt.Printf("parent context cancelled\n")
(*subscriber.cancel)()
return
case <-dctx.Done():
fmt.Printf("subscriber %d's context cancelled\n", subscriber.id)
return
case msg := <-subscriber.out:
fmt.Printf("message received on channel %d: %s\n", subscriber.id, msg)
}
}
}()
}
// Publish sends the msg to all the subscribers subscribed to it.
func (p *Publisher) Publish(msg string) error {
// publish only if Publisher has been started
if p.in == nil {
return errors.New("publisher not started yet")
}
// publish only if subscriber count > 0
p.RLock()
if len(p.subscribers) == 0 {
p.RUnlock() // release before returning so the lock is not held forever
return errors.New("no subscribers to receive the message")
}
p.RUnlock()
// send message to in channel
// establish lock to ensure no modifications take place
// while the message is being sent
p.Lock()
p.in <- msg
p.Unlock()
return nil
}
// Start initializes the publisher's in channel
// and makes it ready to start publishing incoming messages.
func (p *Publisher) Start() {
in := make(chan string)
p.in = in
go func() {
for {
select {
case <-p.ctx.Done():
// using break here only breaks out of select statement
// instead of breaking out of the loop
fmt.Printf("done called on publisher\n")
return
case msg := <-p.in:
p.RLock()
for _, subscriber := range p.subscribers {
subscriber.out <- msg
}
p.RUnlock()
}
}
}()
}
// Stop prevents the Publisher from listening to any
// incoming messages by closing the in channel.
func (p *Publisher) Stop() {
// should I also remove all subscribers to prevent it from panicking?
(*p.cancel)()
}
// RemoveSubscriber removes the subscriber specified by the given id.
// It returns an error "could not find subscriber"
// if Subscriber with the given id is not subscribed to the Publisher.
func (p *Publisher) RemoveSubscriber(id uint) error {
p.Lock()
defer p.Unlock()
subscriber, ok := p.subscribers[id]
if !ok {
return errors.New("could not find subscriber")
}
(*subscriber.cancel)()
delete(p.subscribers, id)
close(subscriber.out)
return nil
}
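A minimal change that stops the stray print (a sketch, not a tested fix): RemoveSubscriber already closes subscriber.out, and a receive from a closed channel yields the zero value with ok == false, so the goroutine can use the two-value receive form to tell a real message from the close signal:

case msg, ok := <-subscriber.out:
	if !ok {
		// out was closed by RemoveSubscriber; exit instead of printing the zero value
		fmt.Printf("subscriber %d's channel closed\n", subscriber.id)
		return
	}
	fmt.Printf("message received on channel %d: %s\n", subscriber.id, msg)

Since select picks randomly among ready cases, a message that was already delivered to subscriber.out before the cancel may still print; if even that is unacceptable, the goroutine would also need to re-check dctx.Err() before printing.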

Related

Golang Server Sent Events Per User

I've been working with Go for some time but have never done SSE before. I'm having an issue; can someone please provide a working example of server-sent events that will only send to a specific user (connection)?
I'm using gorilla/sessions to authenticate, and I would like to use the UserID to separate connections.
Or should I use 5-second polling via Ajax?
Many thanks
Here is what I found and tried:
https://gist.github.com/ismasan/3fb75381cd2deb6bfa9c: it doesn't send to an individual user, and the go func won't stop if the connection is closed
https://github.com/striversity/gotr/blob/master/010-server-sent-event-part-2/main.go: this is kind of what I need, but it doesn't track once the connection is removed. So once you close and reopen the browser in a private window, it's not working at all. Also, as above, the goroutine keeps going.
Create a "broker" to distribute messages to connected users:
type Broker struct {
// users is a map where the key is the user id
// and the value is a slice of channels to connections
// for that user id
users map[string][]chan []byte
// actions is a channel of functions to call
// in the broker's goroutine. The broker executes
// everything in that single goroutine to avoid
// data races.
actions chan func()
}
// run executes in a goroutine. It simply gets and
// calls functions.
func (b *Broker) run() {
for a := range b.actions {
a()
}
}
func newBroker() *Broker {
b := &Broker{
users: make(map[string][]chan []byte),
actions: make(chan func()),
}
go b.run()
return b
}
// addUserChan adds a channel for user with given id.
func (b *Broker) addUserChan(id string, ch chan []byte) {
b.actions <- func() {
b.users[id] = append(b.users[id], ch)
}
}
// removeUserChan removes a channel for the user with the given id.
func (b *Broker) removeUserChan(id string, ch chan []byte) {
// The broker may be trying to send to
// ch, but nothing is receiving. Pump ch
// to prevent broker from getting stuck.
go func() { for range ch {} }()
b.actions <- func() {
chs := b.users[id]
i := 0
for _, c := range chs {
if c != ch {
chs[i] = c
i = i + 1
}
}
if i == 0 {
delete(b.users, id)
} else {
b.users[id] = chs[:i]
}
// Close channel to break loop at beginning
// of removeUserChan.
// This must be done in broker goroutine
// to ensure that broker does not send to
// closed channel.
close(ch)
}
}
// sendToUser sends a message to all channels for the given user id.
func (b *Broker) sendToUser(id string, data []byte) {
b.actions <- func() {
for _, ch := range b.users[id] {
ch <- data
}
}
}
Declare a variable with the broker at package-level:
var broker = newBroker()
Write the SSE endpoint using the broker:
func sseEndpoint(w http.ResponseWriter, r *http.Request) {
// I assume that the user id is in the query string for this example;
// you should use your authentication code to get the id.
id := r.FormValue("id")
// Do the usual SSE setup.
flusher := w.(http.Flusher)
w.Header().Set("Content-Type", "text/event-stream")
w.Header().Set("Cache-Control", "no-cache")
w.Header().Set("Connection", "keep-alive")
// Create channel to receive messages for this connection.
// Register that channel with the broker.
// On return from the function, remove the channel
// from the broker.
ch := make(chan []byte)
broker.addUserChan(id, ch)
defer broker.removeUserChan(id, ch)
for {
select {
case <-r.Context().Done():
// User closed the connection. We are out of here.
return
case m := <-ch:
// We got a message. Do the usual SSE stuff.
fmt.Fprintf(w, "data: %s\n\n", m)
flusher.Flush()
}
}
}
Add code to your application to call Broker.sendToUser.
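For example, a minimal publish endpoint could look like the sketch below; the /send route and form fields are assumptions for illustration, not part of the answer:

// sendEndpoint publishes a message to every connection of one user.
func sendEndpoint(w http.ResponseWriter, r *http.Request) {
	broker.sendToUser(r.FormValue("id"), []byte(r.FormValue("msg")))
	w.WriteHeader(http.StatusNoContent)
}

// wired up with: http.HandleFunc("/send", sendEndpoint)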

How can I ensure that a spawned goroutine finishes processing an array on program termination

I am processing records from a Kafka topic. The endpoint I need to send these records to supports sending an array of up to 100 records. The Kafka records also contain information for performing the REST call (currently only 1 to 2 variations, but this will increase as more record types are processed). I currently load a struct array of the unique configs as they are found, and each config has its own queue array. For each config, I spawn a new goroutine that processes any records in its queue on a timer (for example, every 100ms). This works just fine currently. The issue I am having is with program shutdown: I do not want to leave any unsent records in the queue, and I want to finish processing them before the app shuts down. The code below handles the interrupt and starts checking the queue depths, but once the interrupt happens, the queue count never decreases, so the program never terminates. Any thoughts would be appreciated.
package main
import (
"context"
"encoding/json"
"os"
"os/signal"
"strconv"
"syscall"
"time"
_ "time/tzdata"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
)
type ChannelDetails struct {
ChannelDetails MsgChannel
LastUsed time.Time
Active bool
Queue []OutputMessage
}
type OutputMessage struct {
Config MsgConfig `json:"config"`
Message string `json:"message"`
}
type MsgConfig struct {
Channel MsgChannel `json:"channel"`
}
type MsgChannel struct {
Id int `json:"id"`
MntDate string `json:"mntDate"`
Otype string `json:"oType"`
}
var channels []ChannelDetails
func checkQueueDepths() int {
var depth int = 0
for _, c := range channels {
depth += len(c.Queue)
}
return depth
}
func TimeIn(t time.Time, name string) (time.Time, error) {
loc, err := time.LoadLocation(name)
if err == nil {
t = t.In(loc)
}
return t, err
}
func find(channel *MsgChannel) int {
for i, c := range channels {
if c.ChannelDetails.Id == channel.Id &&
c.ChannelDetails.MntDate == channel.MntDate {
return i
}
}
return len(channels)
}
func splice(queue []OutputMessage, count int) (ret []OutputMessage, deleted []OutputMessage) {
ret = make([]OutputMessage, len(queue)-count)
deleted = make([]OutputMessage, count)
copy(deleted, queue[0:count])
copy(ret, queue[:0])
copy(ret[0:], queue[0+count:])
return
}
func load(msg OutputMessage, logger *zap.Logger) {
i := find(&msg.Config.Channel)
if i == len(channels) {
channels = append(channels, ChannelDetails{
ChannelDetails: msg.Config.Channel,
LastUsed: time.Now(),
Active: false,
Queue: make([]OutputMessage, 0, 200),
})
}
channels[i].LastUsed = time.Now()
channels[i].Queue = append(channels[i].Queue, msg)
if !channels[i].Active {
channels[i].Active = true
go process(&channels[i], logger)
}
}
func process(data *ChannelDetails, logger *zap.Logger) {
for {
// if Queue is empty and not used for 5 minutes, flag as inActive and shut down go routine
if len(data.Queue) == 0 &&
time.Now().After(data.LastUsed.Add(time.Second*10)) { //reduced for example
data.Active = false
logger.Info("deactivating routine as queue is empty")
break
}
// if Queue has records, process
if len(data.Queue) != 0 {
drainStart, _ := TimeIn(time.Now(), "America/New_York")
spliceCnt := len(data.Queue)
if spliceCnt > 100 {
spliceCnt = 100 // rest api endpoint can only accept array up to 100 items
}
items := []OutputMessage{}
data.Queue, items = splice(data.Queue, spliceCnt)
//process items ... will send array of items to a rest endpoint in another go routine
drainEnd, _ := TimeIn(time.Now(), "America/New_York")
logger.Info("processing records",
zap.Int("numitems", len(items)),
zap.String("start", drainStart.Format("2006-01-02T15:04:05.000-07:00")),
zap.String("end", drainEnd.Format("2006-01-02T15:04:05.000-07:00")),
)
}
time.Sleep(time.Millisecond * time.Duration(500))
}
}
func initZapLog() *zap.Logger {
config := zap.NewProductionConfig()
config.EncoderConfig.TimeKey = "timestamp"
config.EncoderConfig.EncodeTime = zapcore.ISO8601TimeEncoder
logger, _ := config.Build()
zap.ReplaceGlobals(logger)
return logger
}
func main() {
ctx, cancel := context.WithCancel(context.Background())
logger := initZapLog()
defer logger.Sync()
test1 := `{
"config": {
"channel": {
"id": 1,
"mntDate": "2021-12-01",
"oType": "test1"
}
},
"message": "test message1"
}`
test2 := `{
"config": {
"channel": {
"id": 2,
"mntDate": "2021-12-01",
"oType": "test2"
}
},
"message": "test message2"
}`
var testMsg1 OutputMessage
err := json.Unmarshal([]byte(test1), &testMsg1)
if err != nil {
logger.Panic("unable to unmarshall test1 data " + err.Error())
}
var testMsg2 OutputMessage
err = json.Unmarshal([]byte(test2), &testMsg2)
if err != nil {
logger.Panic("unable to unmarshall test2 data " + err.Error())
}
exitCh := make(chan struct{})
go func(ctx context.Context) {
for {
//original data is streamed from kafka
load(testMsg1, logger)
load(testMsg2, logger)
time.Sleep(time.Millisecond * time.Duration(5))
select {
case <-ctx.Done():
logger.Info("received done")
var depthChk int
for {
depthChk = checkQueueDepths()
if depthChk == 0 {
break
} else {
logger.Info("Still processing queues. Msgs left: " + strconv.Itoa(depthChk))
}
time.Sleep(100 * time.Millisecond)
}
exitCh <- struct{}{}
return
default:
}
}
}(ctx)
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, os.Interrupt, syscall.SIGINT, syscall.SIGTERM)
go func() {
<-sigs
depths := checkQueueDepths()
logger.Info("You pressed ctrl + C. Queue depth is: " + strconv.Itoa(depths))
cancel()
}()
<-exitCh
}
example logs:
{"level":"info","timestamp":"2021-12-28T15:26:06.136-0500","caller":"testgo/main.go:116","msg":"processing records","numitems":91,"start":"2021-12-28T15:26:06.136-05:00","end":"2021-12-28T15:26:06.136-05:00"}
{"level":"info","timestamp":"2021-12-28T15:26:06.636-0500","caller":"testgo/main.go:116","msg":"processing records","numitems":92,"start":"2021-12-28T15:26:06.636-05:00","end":"2021-12-28T15:26:06.636-05:00"}
^C{"level":"info","timestamp":"2021-12-28T15:26:06.780-0500","caller":"testgo/main.go:205","msg":"You pressed ctrl + C. Queue depth is: 2442"}
{"level":"info","timestamp":"2021-12-28T15:26:06.783-0500","caller":"testgo/main.go:182","msg":"received done"}
{"level":"info","timestamp":"2021-12-28T15:26:06.783-0500","caller":"testgo/main.go:189","msg":"Still processing queues. Msgs left: 2442"} --line repeats forever
The sync package (https://pkg.go.dev/sync) has the WaitGroup type, which allows you to wait for a group of goroutines to complete before the main goroutine returns.
The best usage example is in this blog post:
https://go.dev/blog/pipelines
To wait from inside the main goroutine for all spawned goroutines to finish, there are two ways to do this. The simplest would be to add a
runtime.Goexit()
to the end of your main goroutine, after <-exitCh
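A minimal sketch of that placement, reusing the question's exitCh (runtime must be imported):

func main() {
	// ... start the workers and the signal handler as before ...
	<-exitCh
	runtime.Goexit() // ends main's goroutine without returning from func main
}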
Simply, it does this:
"Calling Goexit from the main goroutine terminates that goroutine without func main returning. Since func main has not returned, the program continues execution of other goroutines. If all other goroutines exit, the program crashes."
The other way would be to use a WaitGroup. Think of a WaitGroup as a counter, with a method that makes the program wait on the line where it is called until the counter hits zero:
var wg sync.WaitGroup // declare the waitgroup
Then inside each goroutine that you are to wait on, you add/increment the waitgroup:
wg.Add(1) // you typically call this once for each spawned goroutine
Then, when the goroutine has finished its work, you call
wg.Done() // call this when you consider the spawned goroutine to be done
which decrements the counter.
Then, where you want the code to wait until the counter is zero, you add the line:
wg.Wait() // wait here till the counter hits zero
The code will block until the number of goroutines incremented with Add() and decremented with Done() hits zero.
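Applied to the question's channels slice, the pattern might look like this sketch; drainQueue is a hypothetical stand-in for the existing queue-draining logic in process:

var wg sync.WaitGroup
for i := range channels {
	wg.Add(1) // one count per spawned goroutine
	go func(c *ChannelDetails) {
		defer wg.Done() // decrement once this queue is fully drained
		drainQueue(c)   // hypothetical: sends c.Queue to the REST endpoint in batches
	}(&channels[i])
}
wg.Wait() // blocks until every Add(1) has a matching Done()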

How to solve a pub-sub problem in Go and gRPC?

In a Go gRPC service I have a receiver (publisher) event loop, and the publisher can detect that it wants the sender to stop. But channel principles say that we should not close channels on the receiver side, only on the sender side. How should this be handled?
The situation is like the following. Imagine a chat. The 1st client, the subscriber, receives messages, and its streaming cannot be done without a goroutine due to gRPC limitations. The 2nd client, the publisher, sends a message to the chat, so it's another goroutine. You have to pass a message from the publisher to the subscriber's receiving client ONLY if the subscriber has not closed its connection (otherwise you are forced to close a channel from the receiver side).
The problem in code:
//1st client goroutine - subscriber
func (s *GRPCServer) WatchMessageServer(req *WatchMessageRequest, stream ExampleService_WatchMessageServer) error {
ch := s.NewClientChannel()
// natively blocks goroutine with send to its stream, until send gets an error
for {
msg, ok := <-ch
if !ok {
return nil
}
err := stream.Send(msg) // if this fails - we need to close ch from receiver side to "propagate" closing signal
if err != nil {
return err
}
}
}
//2nd client goroutine - publisher
func (s *GRPCServer) SendMessage(ctx context.Context, req *SendMessageRequest) (*emptypb.Empty, error) {
for i := range s.clientChannels {
s.clientChannels[i] <- req
// no way other than panic, to know when to remove channel from list. or need to make a wrapper with recover..
}
return nil
}
I initially got a clue by searching, and the solution idea was provided in an answer here; thanks to that answer.
Here is sample code for the streaming solution; I guess it's an implementation of the generic pub-sub pattern:
//1st client goroutine - subscriber
func (s *GRPCServer) WatchMessageServer(req *WatchMessageRequest, stream ExampleService_WatchMessageServer) error {
s.AddClientToBroadcastList(stream)
select {
case <-stream.Context().Done(): // stackoverflow promised that it would signal when client closes stream
return stream.Context().Err() // stream will be closed immediately after return
case <-s.ctx.Done(): // program shutdown
return s.ctx.Err()
}
}
//2nd client goroutine - publisher
func (s *GRPCServer) SendMessage(ctx context.Context, req *SendMessageRequest) (*emptypb.Empty, error) {
for i := range s.clientStreams {
err := s.clientStreams[i].Send(req)
if err != nil {
s.RemoveClientFromBroadcastList(s.clientStreams[i])
}
}
return nil
}
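The sample assumes AddClientToBroadcastList and RemoveClientFromBroadcastList exist; a minimal sketch of those helpers (my assumption, including the s.mu mutex and s.clientStreams slice fields) could be:

func (s *GRPCServer) AddClientToBroadcastList(stream ExampleService_WatchMessageServer) {
	s.mu.Lock() // assumed sync.Mutex field guarding s.clientStreams
	defer s.mu.Unlock()
	s.clientStreams = append(s.clientStreams, stream)
}

func (s *GRPCServer) RemoveClientFromBroadcastList(stream ExampleService_WatchMessageServer) {
	s.mu.Lock()
	defer s.mu.Unlock()
	for i, cs := range s.clientStreams {
		if cs == stream {
			s.clientStreams = append(s.clientStreams[:i], s.clientStreams[i+1:]...)
			return
		}
	}
}

SendMessage would need to take the same lock (or iterate over a copy of the slice), since it runs concurrently with these helpers.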

Why is data being pushed into the channel but never read from the receiver goroutine?

I am building a daemon and I have two services that will be sending data to and from each other. Service A is what produces the data, and Service B is a Data Buffer service, like a queue. From the main.go file, Service B is instantiated and started. The Start() method runs the buffer() function as a goroutine, because this function waits for data to be passed onto a channel and I don't want the main process to halt waiting for buffer to complete. Then Service A is instantiated and started. It is then also "registered" with Service B.
I created a method called RegisterWithBufferService for Service A that creates two new channels. It stores those channels as its own attributes and also provides them to Service B.
func (s *ServiceA) RegisterWithBufferService(bufService *data.DataBuffer) error {
newIncomingChan := make(chan *data.DataFrame, 1)
newOutgoingChan := make(chan []byte, 1)
s.IncomingBuffChan = newIncomingChan
s.OutgoingDataChannels = append(s.OutgoingDataChannels, newOutgoingChan)
bufService.DataProviders[s.ServiceName()] = data.DataProviderInfo{
IncomingChan: newOutgoingChan, //our outGoing channel is their incoming
OutgoingChan: newIncomingChan, // our incoming channel is their outgoing
}
s.DataBufferService = bufService
bufService.NewProvider <- s.ServiceName() //The DataBuffer service listens for new services and creates a new goroutine for buffering
s.Logger.Info().Msg("Registration completed.")
return nil
}
buffer essentially listens for incoming data from Service A, decodes it using Decode(), and then appends it to a slice called buf. If the slice grows longer than bufferPeriod, it sends the first item in the slice on the outgoing channel back to Service A.
func (b *DataBuffer) buffer(bufferPeriod int) {
for {
select {
case newProvider := <- b.NewProvider:
b.wg.Add(1)
/*
newProvider is a string
DataProviders is a map the value it returns is a struct containing the Incoming and
Outgoing channels for this service
*/
p := b.DataProviders[newProvider]
go func(prov string, in chan []byte, out chan *DataFrame) {
defer b.wg.Done()
var buf []*DataFrame
for {
select {
case rawData := <-in:
tmp := Decode(rawData) //custom decoding function. Returns a *DataFrame
buf = append(buf, tmp)
if len(buf) > bufferPeriod {
b.Logger.Info().Msg("Sending decoded data out.")
out <- buf[0]
buf = buf[1:] //pop
}
case <- b.Quit:
return
}
}
}(newProvider, p.IncomingChan, p.OutgoingChan)
}
case <- b.Quit:
return
}
}
Now Service A has a method called record that will periodically push data to all the channels in its OutgoingDataChannels attribute.
func (s *ServiceA) record() error {
...
if atomic.LoadInt32(&s.Listeners) != 0 {
s.Logger.Info().Msg("Sending raw data to data buffer")
for _, outChan := range s.OutgoingDataChannels {
outChan <- dataBytes // the receiver (Service B) is already listening and this doesn't hang
}
s.Logger.Info().Msg("Raw data sent and received") // The logger will output this so I know it's not hanging
}
}
The problem is that Service A seems to push the data successfully using record, but Service B never reaches the rawData := <-in case in the buffer sub-goroutine. Is this because I have nested goroutines? In case it's not clear: when Service B is started, it calls buffer, but because it would hang otherwise, I made the call to buffer a goroutine. So when Service A calls RegisterWithBufferService, the buffer goroutine creates a goroutine that listens for new data from Service A and pushes it back once the buffer is filled. I hope I explained it clearly.
EDIT 1
I've made a minimal, reproducible example.
package main
import (
"fmt"
"sync"
"sync/atomic"
"time"
)
var (
defaultBufferingPeriod int = 3
DefaultPollingInterval int64 = 10
)
type DataObject struct{
Data string
}
type DataProvider interface {
RegisterWithBufferService(*DataBuffer) error
ServiceName() string
}
type DataProviderInfo struct{
IncomingChan chan *DataObject
OutgoingChan chan *DataObject
}
type DataBuffer struct{
Running int32 //used atomically
DataProviders map[string]DataProviderInfo
Quit chan struct{}
NewProvider chan string
wg sync.WaitGroup
}
func NewDataBuffer() *DataBuffer{
var (
wg sync.WaitGroup
)
return &DataBuffer{
DataProviders: make(map[string]DataProviderInfo),
Quit: make(chan struct{}),
NewProvider: make(chan string),
wg: wg,
}
}
func (b *DataBuffer) Start() error {
if ok := atomic.CompareAndSwapInt32(&b.Running, 0, 1); !ok {
return fmt.Errorf("Could not start Data Buffer Service.")
}
go b.buffer(defaultBufferingPeriod)
return nil
}
func (b *DataBuffer) Stop() error {
if ok := atomic.CompareAndSwapInt32(&b.Running, 1, 0); !ok {
return fmt.Errorf("Could not stop Data Buffer Service.")
}
for _, p := range b.DataProviders {
close(p.IncomingChan)
close(p.OutgoingChan)
}
close(b.Quit)
b.wg.Wait()
return nil
}
// buffer creates goroutines for each incoming, outgoing data pair and decodes the incoming bytes into outgoing DataFrames
func (b *DataBuffer) buffer(bufferPeriod int) {
for {
select {
case newProvider := <- b.NewProvider:
fmt.Println("Received new Data provider.")
if _, ok := b.DataProviders[newProvider]; ok {
b.wg.Add(1)
p := b.DataProviders[newProvider]
go func(prov string, in chan *DataObject, out chan *DataObject) {
defer b.wg.Done()
var (
buf []*DataObject
)
fmt.Printf("Waiting for data from: %s\n", prov)
for {
select {
case inData := <-in:
fmt.Printf("Received data from: %s\n", prov)
buf = append(buf, inData)
if len(buf) > bufferPeriod {
fmt.Printf("Queue is filled, sending data back to %s\n", prov)
out <- buf[0]
fmt.Println("Data Sent")
buf = buf[1:] //pop
}
case <- b.Quit:
return
}
}
}(newProvider, p.IncomingChan, p.OutgoingChan)
}
case <- b.Quit:
return
}
}
}
type ServiceA struct{
Active int32 // atomic
Stopping int32 // atomic
Recording int32 // atomic
Listeners int32 // atomic
name string
QuitChan chan struct{}
IncomingBuffChan chan *DataObject
OutgoingBuffChans []chan *DataObject
DataBufferService *DataBuffer
}
// A compile time check to ensure ServiceA fully implements the DataProvider interface
var _ DataProvider = (*ServiceA)(nil)
func NewServiceA() (*ServiceA, error) {
var newSliceOutChans []chan *DataObject
return &ServiceA{
QuitChan: make(chan struct{}),
OutgoingBuffChans: newSliceOutChans,
name: "SERVICEA",
}, nil
}
// Start starts the service. Returns an error if any issues occur
func (s *ServiceA) Start() error {
atomic.StoreInt32(&s.Active, 1)
return nil
}
// Stop stops the service. Returns an error if any issues occur
func (s *ServiceA) Stop() error {
atomic.StoreInt32(&s.Stopping, 1)
close(s.QuitChan)
return nil
}
func (s *ServiceA) StartRecording(pol_int int64) error {
if ok := atomic.CompareAndSwapInt32(&s.Recording, 0, 1); !ok {
return fmt.Errorf("Could not start recording. Data recording already started")
}
ticker := time.NewTicker(time.Duration(pol_int) * time.Second)
go func() {
for {
select {
case <-ticker.C:
fmt.Println("Time to record...")
err := s.record()
if err != nil {
return
}
case <-s.QuitChan:
ticker.Stop()
return
}
}
}()
return nil
}
func (s *ServiceA) record() error {
current_time := time.Now()
ct := fmt.Sprintf("%02d-%02d-%d", current_time.Day(), current_time.Month(), current_time.Year())
dataObject := &DataObject{
Data: ct,
}
if atomic.LoadInt32(&s.Listeners) != 0 {
fmt.Println("Sending data to Data buffer...")
for _, outChan := range s.OutgoingBuffChans {
outChan <- dataObject // the receivers should already be listening
}
fmt.Println("Data sent.")
}
return nil
}
// RegisterWithBufferService satisfies the DataProvider interface. It provides the bufService with new incoming and outgoing channels along with a polling interval
func (s ServiceA) RegisterWithBufferService(bufService *DataBuffer) error {
if _, ok := bufService.DataProviders[s.ServiceName()]; ok {
return fmt.Errorf("%v data provider already registered with Data Buffer.", s.ServiceName())
}
newIncomingChan := make(chan *DataObject, 1)
newOutgoingChan := make(chan *DataObject, 1)
s.IncomingBuffChan = newIncomingChan
s.OutgoingBuffChans = append(s.OutgoingBuffChans, newOutgoingChan)
bufService.DataProviders[s.ServiceName()] = DataProviderInfo{
IncomingChan: newOutgoingChan, //our outGoing channel is their incoming
OutgoingChan: newIncomingChan, // our incoming channel is their outgoing
}
s.DataBufferService = bufService
bufService.NewProvider <- s.ServiceName() //The DataBuffer service listens for new services and creates a new goroutine for buffering
return nil
}
// ServiceName satisfies the DataProvider interface. It returns the name of the service.
func (s ServiceA) ServiceName() string {
return s.name
}
func main() {
var BufferedServices []DataProvider
fmt.Println("Instantiating and Starting Data Buffer Service...")
bufService := NewDataBuffer()
err := bufService.Start()
if err != nil {
panic(fmt.Sprintf("%v", err))
}
defer bufService.Stop()
fmt.Println("Data Buffer Service successfully started.")
fmt.Println("Instantiating and Starting Service A...")
serviceA, err := NewServiceA()
if err != nil {
panic(fmt.Sprintf("%v", err))
}
BufferedServices = append(BufferedServices, *serviceA)
err = serviceA.Start()
if err != nil {
panic(fmt.Sprintf("%v", err))
}
defer serviceA.Stop()
fmt.Println("Service A successfully started.")
fmt.Println("Registering services with Data Buffer...")
for _, s := range BufferedServices {
_ = s.RegisterWithBufferService(bufService) // ignoring error msgs for base case
}
fmt.Println("Registration complete.")
fmt.Println("Beginning recording...")
_ = atomic.AddInt32(&serviceA.Listeners, 1)
err = serviceA.StartRecording(DefaultPollingInterval)
if err != nil {
panic(fmt.Sprintf("%v", err))
}
for {
select {
case RTD := <-serviceA.IncomingBuffChan:
fmt.Println(RTD)
case <-serviceA.QuitChan:
atomic.StoreInt32(&serviceA.Listeners, 0)
bufService.Quit<-struct{}{}
}
}
}
Running on Go 1.17. When running the example, it should print the following every 10 seconds:
Time to record...
Sending data to Data buffer...
Data sent.
But then Data buffer never goes into the inData := <-in case.
To diagnose this I changed fmt.Println("Sending data to Data buffer...") to fmt.Println("Sending data to Data buffer...", s.OutgoingBuffChans) and the output was:
Time to record...
Sending data to Data buffer... []
So you are not actually sending the data to any channels. The reason for this is:
func (s ServiceA) RegisterWithBufferService(bufService *DataBuffer) error {
Because the receiver is not a pointer, when you do s.OutgoingBuffChans = append(s.OutgoingBuffChans, newOutgoingChan) you are changing s.OutgoingBuffChans in a copy of the ServiceA, which is discarded when the function exits. To fix this, change:
func (s ServiceA) RegisterWithBufferService(bufService *DataBuffer) error {
to
func (s *ServiceA) RegisterWithBufferService(bufService *DataBuffer) error {
and
BufferedServices = append(BufferedServices, *serviceA)
to
BufferedServices = append(BufferedServices, serviceA)
The amended version outputs:
Time to record...
Sending data to Data buffer... [0xc0000d8060]
Data sent.
Received data from: SERVICEA
Time to record...
Sending data to Data buffer... [0xc0000d8060]
Data sent.
Received data from: SERVICEA
So this resolves the reported issue (I would not be surprised if there are other issues, but hopefully this points you in the right direction). I did notice that the code you originally posted does use a pointer receiver, so that might have suffered from another issue (but it's difficult to comment on code fragments in a case like this).
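The receiver pitfall is easy to reproduce in isolation; a self-contained sketch:

package main

import "fmt"

type S struct{ xs []int }

// AppendValue has a value receiver: it appends to a copy of S
// that is discarded when the method returns.
func (s S) AppendValue(v int) { s.xs = append(s.xs, v) }

// AppendPointer has a pointer receiver: it appends to the caller's S.
func (s *S) AppendPointer(v int) { s.xs = append(s.xs, v) }

func main() {
	var s S
	s.AppendValue(1)
	fmt.Println(len(s.xs)) // 0: the append happened on a discarded copy
	s.AppendPointer(1)
	fmt.Println(len(s.xs)) // 1: the append was visible to the caller
}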

Pulling 0-sized golang chan

My use-case is the following: I need to send POST requests to 0...N subscribers, which are represented by a targetUrl. I want to limit the max number of goroutines to, let's say, 100. My code (simplified) is the following:
package main
import (
"fmt"
"log"
"net/http"
"errors"
)
const MAX_CONCURRENT_NOTIFICATIONS = 100
type Subscription struct {
TargetUrl string
}
func notifySubscribers(subs []Subscription) {
log.Println("notifySubscribers")
var buffer = make(chan Subscription, len(subs))
defer close(buffer)
for i := 0; i < MAX_CONCURRENT_NOTIFICATIONS; i++ {
go notifySubscriber(buffer)
}
for i := range subs {
buffer <- subs[i]
}
}
func notifySubscriber(buffer chan Subscription) {
log.Println("notifySubscriber")
for {
select {
case sub := <-buffer:
log.Println("sending notification to " + sub.TargetUrl)
resp, err := failPost()
if err != nil {
log.Println(fmt.Sprintf("failed to notify %s. error: %s", sub.TargetUrl, err.Error()))
} else {
resp.Body.Close()
if resp.StatusCode != http.StatusOK {
log.Println(fmt.Sprintf("%s responded with %d", sub.TargetUrl, resp.StatusCode))
}
}
}
log.Println(fmt.Sprintf("buffer size: %d", len(buffer)))
}
}
func failPost() (*http.Response, error) {
return &http.Response{
StatusCode: http.StatusBadRequest,
}, errors.New("some bad error")
}
func main() {
log.Println("main")
var subs []Subscription
subs = append(subs, Subscription{TargetUrl: "http://foo.bar"})
subs = append(subs, Subscription{TargetUrl: "http://fizz.buzz"})
notifySubscribers(subs)
select {}
}
The output is the following:
2018/01/24 10:52:48 failed to notify . error: some bad error
2018/01/24 10:52:48 buffer size: 1
2018/01/24 10:52:48 sending notification to
2018/01/24 10:52:48 failed to notify . error: some bad error
2018/01/24 10:52:48 buffer size: 0
2018/01/24 10:52:48 sending notification to
2018/01/24 10:52:48 failed to notify . error: some bad error
... and so on till I SIGINT the program
So basically it means that I've successfully sent the notifications to the right people, but I still continue to send to an empty targetUrl because I read from an empty chan.
What is wrong?
[EDIT] Workaround, but I don't like it
for {
select {
case sub, more := <-buffer:
if !more {
return
}
}
}
It's because you are closing the buffer but your notifySubscriber is still listening on the buffer. A closed channel always returns the zero value for its type (in this case an empty Subscription with an empty TargetUrl). Hence, you are getting an empty string.
Scenarios:
If you want to keep the goroutines running, then don't close the buffer.
Stop the goroutines once the work is done and then close the buffer.
From the spec:
For a channel c, the built-in function close(c) records that no more values will be sent on the channel. It is an error if c is a receive-only channel. Sending to or closing a closed channel causes a run-time panic. Closing the nil channel also causes a run-time panic. After calling close, and after any previously sent values have been received, receive operations will return the zero value for the channel's type without blocking. The multi-valued receive operation returns a received value along with an indication of whether the channel is closed.
The last sentence means that with sub, more := <-buffer, more will be false if buffer is closed.
However, in your case, the code can use some improvement.
First, it makes no sense to use a select statement with only one case; it would act the same without the select.
Second, when the receiving channel is guaranteed to be closed eventually, ranging over the channel can be used. So your code can be changed to:
func notifySubscriber(buffer chan Subscription) {
log.Println("notifySubscriber")
for sub:= range buffer {
//Code here...
}
}
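Putting the pieces together, a version of notifySubscribers along those lines might look like this sketch (sendNotification is a hypothetical stand-in for the POST logic, and sync must be imported):

func notifySubscribers(subs []Subscription) {
	buffer := make(chan Subscription, len(subs))
	var wg sync.WaitGroup
	for i := 0; i < MAX_CONCURRENT_NOTIFICATIONS; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for sub := range buffer { // exits cleanly once buffer is closed and drained
				sendNotification(sub)
			}
		}()
	}
	for _, sub := range subs {
		buffer <- sub
	}
	close(buffer) // no more values will be sent; lets every range loop finish
	wg.Wait()     // wait for in-flight notifications before returning
}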
