I'm running a script that forwards webhooks to a WebSocket.
The part that forwards the webhook to the WebSocket checks for inactive connections and tries to remove them. Forwarding a webhook sometimes fails with this error:
http: panic serving 10.244.38.169:40958: runtime error: slice bounds out of range
(The IP/port is always different, this is just an example.)
Relevant code:
// Map holding all Websocket clients and the endpoints they are subscribed to
var clients = make(map[string][]*websocket.Conn)
var upgrader = websocket.Upgrader{}
// function to execute when a new client connects to the websocket
func handleClient(w http.ResponseWriter, r *http.Request, endpoint string) {
conn, err := upgrader.Upgrade(w, r, nil)
// ...
// Add client to endpoint slice
clients[endpoint] = append(clients[endpoint], conn)
}
// function to send a webhook to a websocket endpoint
func handleHook(w http.ResponseWriter, r *http.Request, endpoint string) {
msg := Message{}
// ...
// Get all clients listening to the current endpoint
conns := clients[endpoint]
if conns != nil {
for i, conn := range conns {
if conn.WriteJSON(msg) != nil {
// Remove client and close connection if sending failed
conns = append(conns[:i], conns[i+1:]...) // this is the line that sometimes triggers the panic
conn.Close()
}
}
}
clients[endpoint] = conns
}
I cannot figure out why iterating over the connections and appending them sometimes triggers the panic.
A few points I'd like to make:
Make sure that your program has no race conditions (e.g. clients is globally accessible and should be protected if reads/writes or writes/writes happen concurrently).
When ranging over a slice with for [...] range [...] you do not need to check whether the slice is non-nil, as range handles that already (see the code I've shared).
It only happens sometimes because it only occurs when conn.WriteJSON fails and returns an error, and the buggy logic of deleting elements while ranging over the slice is what makes your program panic (see the code I've shared).
package main
import "fmt"
func main() {
var conns []string = nil
// "if conns != nil" check is not required as "for [...] range [...]"
// can handle that. It is safe to use for "range" directly.
for i, conn := range conns {
fmt.Println(i, conn)
}
conns = []string{"1", "2", "3"}
// Will panic
for i := range conns {
fmt.Printf("access: %d, length: %d\n", i, len(conns))
conns = append(conns[:i], conns[i+1:]...)
}
}
In the example, you can see that the index you are trying to access is greater than or equal to the length of the slice, which is what triggers the panic. I think this should help you correct your logic. You could also use a map to store the connections, but that comes with its own caveats, such as no ordering guarantee, i.e. the order in which it is iterated.
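To tie both points together, here is a minimal sketch (my own illustration, not your code) that assumes the same gorilla/websocket setup and a stand-in Message type. It guards the clients map with a sync.Mutex and removes dead connections by rebuilding the slice instead of mutating the one being ranged over:
package main

import (
    "net/http"
    "sync"

    "github.com/gorilla/websocket"
)

// Message is a stand-in for the question's Message type.
type Message struct {
    Body string
}

var (
    clientsMu sync.Mutex
    clients   = make(map[string][]*websocket.Conn)
    upgrader  = websocket.Upgrader{}
)

func handleClient(w http.ResponseWriter, r *http.Request, endpoint string) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        return
    }
    clientsMu.Lock()
    clients[endpoint] = append(clients[endpoint], conn)
    clientsMu.Unlock()
}

func handleHook(w http.ResponseWriter, r *http.Request, endpoint string) {
    msg := Message{Body: "hello"}

    clientsMu.Lock()
    defer clientsMu.Unlock()

    // Keep the connections that can still be written to; drop the rest.
    alive := clients[endpoint][:0]
    for _, conn := range clients[endpoint] {
        if err := conn.WriteJSON(msg); err != nil {
            conn.Close()
            continue
        }
        alive = append(alive, conn)
    }
    clients[endpoint] = alive
}

func main() {
    http.HandleFunc("/ws/demo", func(w http.ResponseWriter, r *http.Request) { handleClient(w, r, "demo") })
    http.HandleFunc("/hook/demo", func(w http.ResponseWriter, r *http.Request) { handleHook(w, r, "demo") })
    http.ListenAndServe(":8080", nil)
}
Because every access goes through the same mutex and the loop never re-slices the slice it is ranging over, the out-of-range panic cannot occur.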
You could also collect the indexes to remove first and then remove them from the slice afterwards:
package main
import "fmt"
func main() {
var conns []string = nil
// "if conns != nil" check is not required as "for [...] range [...]"
// can handle that. It is safe to use for "range" directly.
for i, conn := range conns {
fmt.Println(i, conn)
}
conns = []string{"1", "2", "3"}
// Collect the indexes of the elements to remove first
indexes := []int{}
for i := range conns {
if i < 2 {
indexes = append(indexes, i)
}
}
// Remove by index, shifting each index by the number of elements already removed
for i, val := range indexes {
index := val - i
fmt.Printf("access: %d, length: %d\n", index, len(conns))
conns = append(conns[:index], conns[index+1:]...)
}
fmt.Println(conns)
}
I am building a daemon and I have two services that will be sending data to and from each other. Service A is what produces the data and Service B is a Data Buffer service, essentially a queue. From the main.go file, Service B is instantiated and started. The Start() method runs the buffer() function as a goroutine, because that function waits for data to be passed onto a channel and I don't want the main process to halt waiting for buffer to complete. Then Service A is instantiated and started. It is then also "registered" with Service B.
I created a method called RegisterWithBufferService for Service A that creates two new channels. It will store those channels as its own attributes and also provide them to Service B.
func (s *ServiceA) RegisterWithBufferService(bufService *data.DataBuffer) error {
newIncomingChan := make(chan *data.DataFrame, 1)
newOutgoingChan := make(chan []byte, 1)
s.IncomingBuffChan = newIncomingChan
s.OutgoingDataChannels = append(s.OutgoingDataChannels, newOutgoingChan)
bufService.DataProviders[s.ServiceName()] = data.DataProviderInfo{
IncomingChan: newOutgoingChan, //our outGoing channel is their incoming
OutgoingChan: newIncomingChan, // our incoming channel is their outgoing
}
s.DataBufferService = bufService
bufService.NewProvider <- s.ServiceName() //The DataBuffer service listens for new services and creates a new goroutine for buffering
s.Logger.Info().Msg("Registeration completed.")
return nil
}
Buffer essentially listens for incoming data from Service A, decodes it using Decode() and then adds it to a slice called buf. If the slice is greater in length than bufferPeriod then it will send the first item in the slice in the Outgoing channel back to Service A.
func (b *DataBuffer) buffer(bufferPeriod int) {
for {
select {
case newProvider := <- b.NewProvider:
b.wg.Add(1)
/*
newProvider is a string
DataProviders is a map the value it returns is a struct containing the Incoming and
Outgoing channels for this service
*/
p := b.DataProviders[newProvider]
go func(prov string, in chan []byte, out chan *DataFrame) {
defer b.wg.Done()
var buf []*DataFrame
for {
select {
case rawData := <-in:
tmp := Decode(rawData) //custom decoding function. Returns a *DataFrame
buf = append(buf, tmp)
if len(buf) < bufferPeriod {
b.Logger.Info().Msg("Sending decoded data out.")
out <- buf[0]
buf = buf[1:] //pop
}
case <- b.Quit:
return
}
}
}(newProvider, p.IncomingChan, p.OutgoingChan)
case <- b.Quit:
return
}
}
}
Now Service A has a method called record that will periodically push data to all the channels in its OutgoingDataChannels attribute.
func (s *ServiceA) record() error {
...
if atomic.LoadInt32(&s.Listeners) != 0 {
s.Logger.Info().Msg("Sending raw data to data buffer")
for _, outChan := range s.OutgoingDataChannels {
outChan <- dataBytes // the receiver (Service B) is already listening and this doesn't hang
}
s.Logger.Info().Msg("Raw data sent and received") // The logger will output this so I know it's not hanging
}
}
The problem is that Service A seems to push the data successfully using record, but Service B never reaches the case rawData := <-in: case in the buffer sub-goroutine. Is this because I have nested goroutines? In case it's not clear: when Service B is started, it calls buffer, but because it would hang otherwise, I made the call to buffer a goroutine. So when Service A calls RegisterWithBufferService, the buffer goroutine creates a goroutine to listen for new data from Service A and push it back once the buffer is filled. I hope I explained it clearly.
EDIT 1
I've made a minimal, reproducible example.
package main
import (
"fmt"
"sync"
"sync/atomic"
"time"
)
var (
defaultBufferingPeriod int = 3
DefaultPollingInterval int64 = 10
)
type DataObject struct{
Data string
}
type DataProvider interface {
RegisterWithBufferService(*DataBuffer) error
ServiceName() string
}
type DataProviderInfo struct{
IncomingChan chan *DataObject
OutgoingChan chan *DataObject
}
type DataBuffer struct{
Running int32 //used atomically
DataProviders map[string]DataProviderInfo
Quit chan struct{}
NewProvider chan string
wg sync.WaitGroup
}
func NewDataBuffer() *DataBuffer{
var (
wg sync.WaitGroup
)
return &DataBuffer{
DataProviders: make(map[string]DataProviderInfo),
Quit: make(chan struct{}),
NewProvider: make(chan string),
wg: wg,
}
}
func (b *DataBuffer) Start() error {
if ok := atomic.CompareAndSwapInt32(&b.Running, 0, 1); !ok {
return fmt.Errorf("Could not start Data Buffer Service.")
}
go b.buffer(defaultBufferingPeriod)
return nil
}
func (b *DataBuffer) Stop() error {
if ok := atomic.CompareAndSwapInt32(&b.Running, 1, 0); !ok {
return fmt.Errorf("Could not stop Data Buffer Service.")
}
for _, p := range b.DataProviders {
close(p.IncomingChan)
close(p.OutgoingChan)
}
close(b.Quit)
b.wg.Wait()
return nil
}
// buffer creates goroutines for each incoming, outgoing data pair and decodes the incoming bytes into outgoing DataFrames
func (b *DataBuffer) buffer(bufferPeriod int) {
for {
select {
case newProvider := <- b.NewProvider:
fmt.Println("Received new Data provider.")
if _, ok := b.DataProviders[newProvider]; ok {
b.wg.Add(1)
p := b.DataProviders[newProvider]
go func(prov string, in chan *DataObject, out chan *DataObject) {
defer b.wg.Done()
var (
buf []*DataObject
)
fmt.Printf("Waiting for data from: %s\n", prov)
for {
select {
case inData := <-in:
fmt.Printf("Received data from: %s\n", prov)
buf = append(buf, inData)
if len(buf) > bufferPeriod {
fmt.Printf("Queue is filled, sending data back to %s\n", prov)
out <- buf[0]
fmt.Println("Data Sent")
buf = buf[1:] //pop
}
case <- b.Quit:
return
}
}
}(newProvider, p.IncomingChan, p.OutgoingChan)
}
case <- b.Quit:
return
}
}
}
type ServiceA struct{
Active int32 // atomic
Stopping int32 // atomic
Recording int32 // atomic
Listeners int32 // atomic
name string
QuitChan chan struct{}
IncomingBuffChan chan *DataObject
OutgoingBuffChans []chan *DataObject
DataBufferService *DataBuffer
}
// A compile time check to ensure ServiceA fully implements the DataProvider interface
var _ DataProvider = (*ServiceA)(nil)
func NewServiceA() (*ServiceA, error) {
var newSliceOutChans []chan *DataObject
return &ServiceA{
QuitChan: make(chan struct{}),
OutgoingBuffChans: newSliceOutChans,
name: "SERVICEA",
}, nil
}
// Start starts the service. Returns an error if any issues occur
func (s *ServiceA) Start() error {
atomic.StoreInt32(&s.Active, 1)
return nil
}
// Stop stops the service. Returns an error if any issues occur
func (s *ServiceA) Stop() error {
atomic.StoreInt32(&s.Stopping, 1)
close(s.QuitChan)
return nil
}
func (s *ServiceA) StartRecording(pol_int int64) error {
if ok := atomic.CompareAndSwapInt32(&s.Recording, 0, 1); !ok {
return fmt.Errorf("Could not start recording. Data recording already started")
}
ticker := time.NewTicker(time.Duration(pol_int) * time.Second)
go func() {
for {
select {
case <-ticker.C:
fmt.Println("Time to record...")
err := s.record()
if err != nil {
return
}
case <-s.QuitChan:
ticker.Stop()
return
}
}
}()
return nil
}
func (s *ServiceA) record() error {
current_time := time.Now()
ct := fmt.Sprintf("%02d-%02d-%d", current_time.Day(), current_time.Month(), current_time.Year())
dataObject := &DataObject{
Data: ct,
}
if atomic.LoadInt32(&s.Listeners) != 0 {
fmt.Println("Sending data to Data buffer...")
for _, outChan := range s.OutgoingBuffChans {
outChan <- dataObject // the receivers should already be listening
}
fmt.Println("Data sent.")
}
return nil
}
// RegisterWithBufferService satisfies the DataProvider interface. It provides the bufService with new incoming and outgoing channels along with a polling interval
func (s ServiceA) RegisterWithBufferService(bufService *DataBuffer) error {
if _, ok := bufService.DataProviders[s.ServiceName()]; ok {
return fmt.Errorf("%v data provider already registered with Data Buffer.", s.ServiceName())
}
newIncomingChan := make(chan *DataObject, 1)
newOutgoingChan := make(chan *DataObject, 1)
s.IncomingBuffChan = newIncomingChan
s.OutgoingBuffChans = append(s.OutgoingBuffChans, newOutgoingChan)
bufService.DataProviders[s.ServiceName()] = DataProviderInfo{
IncomingChan: newOutgoingChan, //our outGoing channel is their incoming
OutgoingChan: newIncomingChan, // our incoming channel is their outgoing
}
s.DataBufferService = bufService
bufService.NewProvider <- s.ServiceName() //The DataBuffer service listens for new services and creates a new goroutine for buffering
return nil
}
// ServiceName satisfies the DataProvider interface. It returns the name of the service.
func (s ServiceA) ServiceName() string {
return s.name
}
func main() {
var BufferedServices []DataProvider
fmt.Println("Instantiating and Starting Data Buffer Service...")
bufService := NewDataBuffer()
err := bufService.Start()
if err != nil {
panic(fmt.Sprintf("%v", err))
}
defer bufService.Stop()
fmt.Println("Data Buffer Service successfully started.")
fmt.Println("Instantiating and Starting Service A...")
serviceA, err := NewServiceA()
if err != nil {
panic(fmt.Sprintf("%v", err))
}
BufferedServices = append(BufferedServices, *serviceA)
err = serviceA.Start()
if err != nil {
panic(fmt.Sprintf("%v", err))
}
defer serviceA.Stop()
fmt.Println("Service A successfully started.")
fmt.Println("Registering services with Data Buffer...")
for _, s := range BufferedServices {
_ = s.RegisterWithBufferService(bufService) // ignoring error msgs for base case
}
fmt.Println("Registration complete.")
fmt.Println("Beginning recording...")
_ = atomic.AddInt32(&serviceA.Listeners, 1)
err = serviceA.StartRecording(DefaultPollingInterval)
if err != nil {
panic(fmt.Sprintf("%v", err))
}
for {
select {
case RTD := <-serviceA.IncomingBuffChan:
fmt.Println(RTD)
case <-serviceA.QuitChan:
atomic.StoreInt32(&serviceA.Listeners, 0)
bufService.Quit<-struct{}{}
}
}
}
Running on Go 1.17. When running the example, it should print the following every 10 seconds:
Time to record...
Sending data to Data buffer...
Data sent.
But then Data buffer never goes into the inData := <-in case.
To diagnose this I changed fmt.Println("Sending data to Data buffer...") to fmt.Println("Sending data to Data buffer...", s.OutgoingBuffChans) and the output was:
Time to record...
Sending data to Data buffer... []
So you are not actually sending the data to any channels. The reason for this is:
func (s ServiceA) RegisterWithBufferService(bufService *DataBuffer) error {
Because the receiver is not a pointer, when you do s.OutgoingBuffChans = append(s.OutgoingBuffChans, newOutgoingChan) you are changing s.OutgoingBuffChans in a copy of the ServiceA, which is discarded when the function exits. To fix this, change:
func (s ServiceA) RegisterWithBufferService(bufService *DataBuffer) error {
to
func (s *ServiceA) RegisterWithBufferService(bufService *DataBuffer) error {
and
BufferedServices = append(BufferedServices, *serviceA)
to
BufferedServices = append(BufferedServices, serviceA)
The amended version outputs:
Time to record...
Sending data to Data buffer... [0xc0000d8060]
Data sent.
Received data from: SERVICEA
Time to record...
Sending data to Data buffer... [0xc0000d8060]
Data sent.
Received data from: SERVICEA
So this resolves the reported issue (I would not be surprised if there are other issues, but hopefully this points you in the right direction). I did notice that the code you originally posted does use a pointer receiver, so that might have suffered from another issue (but it's difficult to comment on code fragments in a case like this).
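As a small self-contained illustration of why an append inside a value-receiver method is lost (a minimal sketch, not the question's code):
package main

import "fmt"

type box struct {
    items []string
}

// addValue appends to a copy of the box; the caller's slice header is unchanged.
func (b box) addValue(s string) { b.items = append(b.items, s) }

// addPointer appends through a pointer receiver, so the caller sees the change.
func (b *box) addPointer(s string) { b.items = append(b.items, s) }

func main() {
    var b box
    b.addValue("lost")   // modifies a copy, discarded on return
    b.addPointer("kept") // modifies b itself
    fmt.Println(b.items) // prints [kept]
}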
I've a go application that gets run periodically by a batch. Each run, it should read some prometheus metrics from a file, run its logic, update a success/fail counter, and write metrics back out to a file.
From looking at How to parse Prometheus data as well as the godocs for prometheus, I'm able to read in the file, but I don't know how to update app_processed_total with the value returned by expfmt.ExtractSamples().
This is what I've done so far. Could someone please tell me how I should proceed from here? How can I typecast the Vector I got into a CounterVec?
package main
import (
"fmt"
"net/http"
"strings"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
dto "github.com/prometheus/client_model/go"
"github.com/prometheus/common/expfmt"
"github.com/prometheus/common/model"
)
var (
fileOnDisk = prometheus.NewRegistry()
processedTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
Name: "app_processed_total",
Help: "Number of times ran",
}, []string{"status"})
)
func doInit() {
prometheus.MustRegister(processedTotal)
}
func recordMetrics() {
go func() {
for {
processedTotal.With(prometheus.Labels{"status": "ok"}).Inc()
time.Sleep(5 * time.Second)
}
}()
}
func readExistingMetrics() {
var parser expfmt.TextParser
text := `
# HELP app_processed_total Number of times ran
# TYPE app_processed_total counter
app_processed_total{status="ok"} 300
`
parseText := func() ([]*dto.MetricFamily, error) {
parsed, err := parser.TextToMetricFamilies(strings.NewReader(text))
if err != nil {
return nil, err
}
var result []*dto.MetricFamily
for _, mf := range parsed {
result = append(result, mf)
}
return result, nil
}
gatherers := prometheus.Gatherers{
fileOnDisk,
prometheus.GathererFunc(parseText),
}
gathering, err := gatherers.Gather()
if err != nil {
fmt.Println(err)
}
fmt.Println("gathering: ", gathering)
for _, g := range gathering {
vector, err := expfmt.ExtractSamples(&expfmt.DecodeOptions{
Timestamp: model.Now(),
}, g)
fmt.Println("vector: ", vector)
if err != nil {
fmt.Println(err)
}
// How can I update processedTotal with this new value?
}
}
func main() {
doInit()
readExistingMetrics()
recordMetrics()
http.Handle("/metrics", promhttp.Handler())
http.ListenAndServe("localhost:2112", nil)
}
I believe you would need to use processedTotal.WithLabelValues("ok").Inc() or something similar to that.
The more complete example is here
func ExampleCounterVec() {
httpReqs := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "http_requests_total",
Help: "How many HTTP requests processed, partitioned by status code and HTTP method.",
},
[]string{"code", "method"},
)
prometheus.MustRegister(httpReqs)
httpReqs.WithLabelValues("404", "POST").Add(42)
// If you have to access the same set of labels very frequently, it
// might be good to retrieve the metric only once and keep a handle to
// it. But beware of deletion of that metric, see below!
m := httpReqs.WithLabelValues("200", "GET")
for i := 0; i < 1000000; i++ {
m.Inc()
}
// Delete a metric from the vector. If you have previously kept a handle
// to that metric (as above), future updates via that handle will go
// unseen (even if you re-create a metric with the same label set
// later).
httpReqs.DeleteLabelValues("200", "GET")
// Same thing with the more verbose Labels syntax.
httpReqs.Delete(prometheus.Labels{"method": "GET", "code": "200"})
}
This is taken from the Prometheus examples on GitHub.
To use the value of vector you can do the following:
vectorFloat, err := strconv.ParseFloat(vector[0].Value.String(), 64)
if err != nil {
panic(err)
}
processedTotal.WithLabelValues("ok").Add(vectorFloat)
This assumes you will only ever get a single sample in the vector. The sample's value is rendered as a string by String(), and you can convert it back to a float with the strconv.ParseFloat method (model.SampleValue is a float64 underneath, so a direct conversion such as float64(vector[0].Value) would also work).
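If the file can contain several samples, a rough sketch of restoring each one by its labels could go inside the question's loop over gathering, after the error check. This assumes, purely as an illustration, that only app_processed_total series with a status label are present:
for _, sample := range vector {
    // Only restore the counter we own; skip anything else found in the file.
    if sample.Metric[model.MetricNameLabel] != "app_processed_total" {
        continue
    }
    status := string(sample.Metric["status"])
    processedTotal.WithLabelValues(status).Add(float64(sample.Value))
}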
I have a function that reads data from a source and sends it to a destination. Source and destination could be anything; let's say for this example the source is a database (MySQL, PostgreSQL...) and the destination is a distributed queue (ActiveMQ, Kafka...). Messages are stored as bytes.
This is the main function. The idea is that it will spin up a new goroutine and wait for messages to be returned for further processing.
type Message []byte
func (p *ProcessorService) Continue(dictId int) {
level.Info(p.logger).Log("process", "message", "dictId", dictId)
retrieved := make(chan Message)
go func() {
err := p.src.Read(retrieved, strconv.Itoa(p.dictId))
if err != nil {
level.Error(p.logger).Log("process", "read", "message", "err", err)
}
}()
for r := range retrieved {
go func(message Message) {
level.Info(p.logger).Log("message", message)
if len(message) > 0 {
if err := p.dst.sendToQ(message); err != nil {
level.Error(p.logger).Log("failed", "during", "persist", "err", err)
}
} else {
level.Error(p.logger).Log("failed")
}
}(r)
}
}
and this is read function itself
func (s *Storage) Read(out chan<- Message, opt ...string) error {
// I just skip some basic database read operations here
// but idea is simple, read data from the table / file row by row and
//
for _, value := range dataFromDB {
message, err := value.row
if err == nil {
out <- message
} else {
errorf("Unable to get data %v", err)
out <- make([]byte, 0)
}
}
close(out)
return nil
}
As you can see, communication is done via the out chan<- Message channel.
My concern is in the Continue function, specifically here:
for r := range retrieved {
go func(message Message) {
// basically here message and r are pointing to the same underlying array
}
}
When data is received, var r is a byte slice. It is then passed to go func(message Message). Everything is passed by value in Go, so var r is passed as a copy to the anonymous func; however, the copy still points to the same underlying slice data. I am curious whether this could be a problem if, while p.dst.sendToQ(message) is executing, the Read function sends something else to the out channel, causing the slice's underlying data to be overwritten with new information. Should I copy the byte slice r into a new byte slice before passing it to the anonymous function, so the underlying arrays are different? I tested it, but couldn't actually reproduce this behavior. Not sure if I'm being paranoid or really have to worry about it.
The message in p.dst.sendToQ(message) is the same slice as value.row when you get data from the db. So, as long as each value.row has a different underlying array, you should be good. I suggest you check the source and make sure it does not reuse a common byte array that it keeps rewriting into.
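If you cannot verify that, a defensive copy in Continue is cheap insurance. A minimal stand-alone sketch, with a stand-in producer in place of the real Read:
package main

import (
    "fmt"
    "sync"
)

type Message []byte

func main() {
    retrieved := make(chan Message)
    var wg sync.WaitGroup

    // Stand-in producer; the real code would be p.src.Read.
    go func() {
        retrieved <- Message("hello")
        close(retrieved)
    }()

    for r := range retrieved {
        // Copy the bytes so each goroutine owns its own backing array,
        // even if the producer later reuses the buffer behind r.
        msgCopy := make(Message, len(r))
        copy(msgCopy, r)

        wg.Add(1)
        go func(message Message) {
            defer wg.Done()
            fmt.Println(string(message)) // stand-in for p.dst.sendToQ(message)
        }(msgCopy)
    }
    wg.Wait()
}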
I'm facing a dilemma here trying to keep certain websockets in sync for a given user. Here's the basic setup:
type msg struct {
Key string
Value string
}
type connStruct struct {
//...
ConnRoutineChans []*chan string
LoggedIn bool
Login string
//...
Sockets []*websocket.Conn
}
var (
//...
/* LIST OF CONNECTED USERS AN THEIR IP ADDRESSES */
guestMap sync.Map
)
func main() {
post("Started...")
rand.Seed(time.Now().UTC().UnixNano())
http.HandleFunc("/wss", wsHandler)
panic(http.ListenAndServeTLS("...", "...", "...", nil))
}
func wsHandler(w http.ResponseWriter, r *http.Request) {
if r.Header.Get("Origin")+":8080" != "https://...:8080" {
http.Error(w, "Origin not allowed", 403)
fmt.Println("Client origin not allowed! (https://"+r.Host+")")
fmt.Println("r.Header Origin: "+r.Header.Get("Origin"))
return
}
///
conn, err := websocket.Upgrade(w, r, w.Header(), 1024, 1024)
if err != nil {
http.Error(w, "Could not open websocket connection", http.StatusBadRequest)
fmt.Println("Could not open websocket connection with client!")
}
//ADD CONNECTION TO guestMap IF CONNECTION IS nil
var authString string = /*gets device identity*/;
var authChan chan string = make(chan string);
authValue, authOK := guestMap.Load(authString);
if !authOK {
// NO SESSION, CREATE A NEW ONE
newSession = getSession();
//defer newSession.Close();
guestMap.Store(authString, connStruct{ LoggedIn: false,
ConnRoutineChans: []*chan string{&authChan},
Login: "",
Sockets: []*websocket.Conn{conn}
/* .... */ });
}else{
//SESSION STARTED, ADD NEW SOCKET TO Sockets
var tempConn connStruct = authValue.(connStruct);
tempConn.Sockets = append(tempConn.Sockets, conn);
tempConn.ConnRoutineChans = append(tempConn.ConnRoutineChans, &authChan)
guestMap.Store(authString, tempConn);
}
//
go echo(conn, authString, &authChan);
}
func echo(conn *websocket.Conn, authString string, authChan *chan string) {
var message msg;
//TEST CHANNEL
authValue, _ := guestMap.Load(authString);
go sendToChans(authValue.(connStruct).ConnRoutineChans, "sup dude?")
fmt.Println("got past send...");
for true {
select {
case val := <-*authChan:
// use value of channel
fmt.Println("AuthChan for user #"+strconv.Itoa(myConnNumb)+" spat out: ", val)
default:
// if channels are empty, this is executed
}
readError := conn.ReadJSON(&message)
fmt.Println("got past readJson...");
if readError != nil || message.Key == "" {
//DISCONNECT USER
//.....
return
}
//
_key, _value := chief(message.Key, message.Value, &*conn, browserAndOS, authString)
if writeError := conn.WriteJSON(_key + "|" + _value); writeError != nil {
//...
return
}
fmt.Println("got past writeJson...");
}
}
func sendToChans(chans []*chan string, message string){
for i := 0; i < len(chans); i++ {
*chans[i] <- message
}
}
I know, a big block of code eh? And I commented out most of it...
Anyway, if you've ever used a websocket most of it should be quite familiar:
1) func wsHandler() fires every time a user connects. It makes an entry in guestMap (for each unique device that connects) which holds a connStruct which holds a list of channels: ConnRoutineChans []*chan string. This all gets passed to:
2) echo(), which is a goroutine that constantly runs for each websocket connection. Here I'm just testing out sending a message to other running goroutines, but it seems my for loop isn't actually constantly firing. It only fires when the websocket receives a message from the open tab/window it's connected to. (If anyone can clarify this mechanic, I'd love to know why it's not looping constantly?)
3) For each window or tab that the user has open on a given device there is a websocket and a channel, stored in arrays. I want to be able to send a message to all the channels in the array (essentially the other goroutines for open tabs/windows on that device) and receive the message in the other goroutines, to change some variables set in the constantly running goroutine.
What I have right now works only for the very first connection on a device, and (of course) it sends "sup dude?" to itself since it's the only channel in the array at the time. Then if you open a new tab (or even many), the message doesn't get sent to anyone at all! Strange?... Then when I close all the tabs (and my commented-out logic removes the device item from guestMap) and start up a new device session, still only the first connection gets its own message.
I already have a method for sending a message to all the other websockets on a device, but sending to a goroutine seems to be a little more tricky than I thought.
To answer my own question:
First, I've switched from a sync.Map to a normal map. Second, so that nobody reads and writes it at the same time, I've made a channel that you send to in order to do any read/write operation on the map. I've been trying my best to keep my data access and manipulation quick to execute so the channel doesn't get crowded too easily. Here's a small example of that:
package main
import (
"fmt"
)
var (
guestMap map[string]*guestStruct = make(map[string]*guestStruct);
guestMapActionChan = make (chan actionStruct);
)
type actionStruct struct {
Action func([]interface{})[]interface{}
Params []interface{}
ReturnChan chan []interface{}
}
type guestStruct struct {
Name string
Numb int
}
func main(){
//make chan listener
go guestMapActionChanListener(guestMapActionChan)
//some guest logs in...
newGuest := guestStruct{Name: "Larry Josher", Numb: 1337}
//add to the map
addRetChan := make(chan []interface{})
guestMapActionChan <- actionStruct{Action: guestMapAdd,
Params: []interface{}{&newGuest},
ReturnChan: addRetChan}
addReturned := <-addRetChan
fmt.Println(addReturned)
fmt.Println("Also, numb was changed by listener to:", newGuest.Numb)
// Same kind of thing for removing, except (of course) there's
// a lot more logic to a real-life application.
}
func guestMapActionChanListener (c chan actionStruct){
for{
value := <-c;
//
returned := value.Action(value.Params);
value.ReturnChan <- returned;
close(value.ReturnChan)
}
}
func guestMapAdd(params []interface{}) []interface{} {
//.. do some parameter verification checks
theStruct := params[0].(*guestStruct)
name := theStruct.Name
theStruct.Numb = 75
guestMap[name] = &*theStruct
return []interface{}{"Added '"+name+"' to the guestMap"}
}
For communication between connections, I just have each socket loop hold onto their guestStruct, and have more guestMapActionChan functions that take care of distributing data to other guests' guestStructs
Now, I'm not going to mark this as the correct answer unless I get some better suggestions as how to do something like this the right way. But for now this is working and should guarantee no races for reading/writing to the map.
Edit: The correct approach should really have been to just use a sync.Mutex like I do in the (mostly) finished project GopherGameServer
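For reference, a minimal sketch of that mutex-guarded alternative (hypothetical names, not the GopherGameServer code) could look like this:
package main

import (
    "fmt"
    "sync"
)

type guestStruct struct {
    Name string
    Numb int
}

var (
    guestMapMu sync.Mutex
    guestMap   = make(map[string]*guestStruct)
)

func addGuest(g *guestStruct) {
    guestMapMu.Lock()
    defer guestMapMu.Unlock()
    guestMap[g.Name] = g
}

func getGuest(name string) (*guestStruct, bool) {
    guestMapMu.Lock()
    defer guestMapMu.Unlock()
    g, ok := guestMap[name]
    return g, ok
}

func main() {
    addGuest(&guestStruct{Name: "Larry Josher", Numb: 1337})
    if g, ok := getGuest("Larry Josher"); ok {
        fmt.Println(g.Name, g.Numb)
    }
}
Note that the mutex only protects the map itself; if several goroutines mutate the fields of a shared *guestStruct, those fields need their own synchronization.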
Using the following protocol buffer code:
syntax = "proto3";
package pb;
message SimpleRequest {
int64 number = 1;
}
message SimpleResponse {
int64 doubled = 1;
}
// All the calls in this service perform the action of doubling a number.
// The streams will continuously send the next double, eg. 1, 2, 4, 8, 16.
service Test {
// This RPC streams from the server only.
rpc Downstream(SimpleRequest) returns (stream SimpleResponse);
}
I'm able to successfully open a stream, and continuously get the next doubled number from the server.
My Go code for running this looks like:
ctxDownstream, cancel := context.WithCancel(ctx)
downstream, err := testClient.Downstream(ctxDownstream, &pb.SimpleRequest{Number: 1})
for {
responseDownstream, err := downstream.Recv()
if err != io.EOF {
println(fmt.Sprintf("downstream response: %d, error: %v", responseDownstream.Doubled, err))
if responseDownstream.Doubled >= 32 {
break
}
}
}
cancel() // !!This is not a graceful shutdown
println(fmt.Sprintf("%v", downstream.Trailer()))
The problem I'm having is that using a context cancellation means my downstream.Trailer() response is empty. Is there a way to gracefully close this connection from the client side and still receive downstream.Trailer()?
Note: if I close the downstream connection from the server side, my trailers are populated. But I have no way of instructing my server side to close this particular stream. So there must be a way to gracefully close a stream client side.
Thanks.
As requested, some server code:
func (b *binding) Downstream(req *pb.SimpleRequest, stream pb.Test_DownstreamServer) error {
request := req
r := make(chan *pb.SimpleResponse)
e := make(chan error)
ticker := time.NewTicker(200 * time.Millisecond)
defer func() { ticker.Stop(); close(r); close(e) }()
go func() {
defer func() { recover() }()
for {
select {
case <-ticker.C:
response, err := b.Endpoint(stream.Context(), request)
if err != nil {
e <- err
}
r <- response
}
}
}()
for {
select {
case err := <-e:
return err
case response := <-r:
if err := stream.Send(response); err != nil {
return err
}
request.Number = response.Doubled
case <-stream.Context().Done():
return nil
}
}
}
You will still need to populate the trailer with some information. I use a grpc.StreamServerInterceptor to do this.
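A minimal sketch of what such an interceptor might look like (the trailer key and value here are made up, and the service registration is only indicated in a comment):
package main

import (
    "net"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/metadata"
)

// trailerInterceptor attaches trailer metadata to every server stream,
// whether the handler returns normally or with an error.
func trailerInterceptor(srv interface{}, ss grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
    start := time.Now()
    err := handler(srv, ss)
    ss.SetTrailer(metadata.Pairs("handler-duration", time.Since(start).String()))
    return err
}

func main() {
    s := grpc.NewServer(grpc.StreamInterceptor(trailerInterceptor))
    // pb.RegisterTestServer(s, &binding{}) would go here.
    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        panic(err)
    }
    _ = s.Serve(lis)
}
As the answer below notes, the trailer still only reaches the client when the stream ends from the server side; it is not delivered when the client cancels the context.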
According to the gRPC Go documentation:
Trailer returns the trailer metadata from the server, if there is any.
It must only be called after stream.CloseAndRecv has returned, or
stream.Recv has returned a non-nil error (including io.EOF).
So if you want to read the trailer in the client, try something like this:
ctxDownstream, cancel := context.WithCancel(ctx)
defer cancel()
for {
...
// on error or EOF
break;
}
println(fmt.Sprintf("%v", downstream.Trailer()))
Break from the infinite loop when there is an error and print the trailer. cancel will be called at the end of the function as it is deferred.
I can't find a reference that explains it clearly, but this doesn't appear to be possible.
On the wire, grpc-status is followed by the trailer metadata when the call completes normally (i.e. the server exits the call).
When the client cancels the call, neither of these are sent.
Seems that gRPC treats call cancellation as a quick abort of the rpc, not much different than the socket being dropped.
Adding a "cancel message" via request streaming works; the server can pick this up and cancel the stream from its end and trailers will still get sent:
message SimpleRequest {
oneof RequestType {
int64 number = 1;
bool cancel = 2;
}
}
....
rpc Downstream(stream SimpleRequest) returns (stream SimpleResponse);
Although this does add a bit of complication to the code.
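A rough sketch of the server-side handler after switching to a streaming request. This assumes the regenerated pb.Test_DownstreamServer now has Recv() in addition to Send(), with the oneof accessors GetNumber/GetCancel that protoc-gen-go produces; the doubling logic is simplified compared with the question's b.Endpoint call:
func (b *binding) Downstream(stream pb.Test_DownstreamServer) error {
    cancelled := make(chan struct{})

    // Read requests in the background; the first cancel=true message
    // asks us to finish the stream from the server side.
    go func() {
        for {
            req, err := stream.Recv()
            if err != nil || req.GetCancel() {
                close(cancelled)
                return
            }
            // req.GetNumber() could update state here if needed.
        }
    }()

    ticker := time.NewTicker(200 * time.Millisecond)
    defer ticker.Stop()
    number := int64(1)
    for {
        select {
        case <-ticker.C:
            number *= 2
            if err := stream.Send(&pb.SimpleResponse{Doubled: number}); err != nil {
                return err
            }
        case <-cancelled:
            // Returning ends the RPC from the server side, so grpc-status
            // and any trailer metadata are delivered to the client.
            return nil
        case <-stream.Context().Done():
            return nil
        }
    }
}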