Reading Data from an SSH Session in Golang

I posted a similar question here for reading from a telnet session.
I am trying to read data from an SSH session in golang. I wrote the following functions to try to accomplish this.
I was running into an issue where I was trying to read from stdout, but it was empty, which caused my program to lock up. To try to work around this I wrote BufferSocketData: it checks the channel ReadDataFromSocket is supposed to send on and, if it has data, adds it to the buffer. If after 1 second it still hasn't received any data it stops the read.
This isn't working correctly though, and I'm unsure why. Only the first read gets new data; subsequent reads return an empty string even if there is data in the buffer.
In my previous question I was able to use SetReadDeadline to limit the amount of time spent reading from the socket. Is there something similar I can use with an SSH session, or do I need to use a different strategy altogether?
/*
ReadDataFromSocket - Attempts to read any data in the socket.
*/
func ReadDataFromSocket(sock io.Reader, c chan string) {
var recvData = make([]byte, 1024)
var numBytes, _ = sock.Read(recvData)
c <- string(recvData[:numBytes])
}
/*
BufferSocketData - Read information from the socket and store it in the buffer.
*/
func (s *SSHLib) BufferSocketData(inp chan string, out chan string) {
var data string
var timeout int64 = 1000 // 1 second timeout.
var start = utils.GetTimestamp()
for utils.GetTimestamp()-start < timeout {
select {
case data = <-inp:
default:
}
if data != "" {
break
}
}
out <- data
}
/*
GetData - Start goroutines to get and buffer data.
*/
func (s *SSHLib) GetData() {
var sockCh = make(chan string)
var buffCh = make(chan string)
go ReadDataFromSocket(s.Stdout, sockCh)
go s.BufferSocketData(sockCh, buffCh)
var data = <-buffCh
if data != "" {
s.Buffer += data
}
}
Please let me know if you need any other information.

Start a single reader goroutine for the session. This goroutine reads from the session and sends data to a channel.
In the main goroutine, select in a loop with cases for data received and timeout. Process each case as appropriate.
type SSHLib struct {
Stdout io.Reader
Buffer string
Data chan string // <-- Add this member
}
// New creates a new SSHLib. This example shows the code
// relevant to reading stdout only.
func New() *SSHLib {
s := &SSHLib{Data: make(chan string)}
go s.Reader()
return s
}
// Reader reads data from stdout in a loop.
func (s *SSHLib) Reader() {
var data = make([]byte, 1024)
for {
n, err := s.Stdout.Read(data)
if err != nil {
// Handle error
return
}
s.Data <- string(data[:n])
}
}
// GetData receives data until regexp match or timeout.
func (s *SSHLib) GetData() {
t := time.NewTimer(time.Second)
defer t.Stop()
for {
select {
case d := <-s.Data:
s.Buffer += d
// Check for regexp match in s.Buffer
case <-t.C:
// Handle timeout
return
}
}
}
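For completeness, here is a rough sketch of how the SSHLib above might be wired to a real session with golang.org/x/crypto/ssh. The host and credentials are placeholders, the Reader and GetData methods are the ones defined above, and the constructor is inlined so Stdout actually gets set:
package main

import (
    "log"

    "golang.org/x/crypto/ssh"
)

func main() {
    // Placeholder host and credentials.
    config := &ssh.ClientConfig{
        User:            "user",
        Auth:            []ssh.AuthMethod{ssh.Password("password")},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a quick test, not for production
    }
    client, err := ssh.Dial("tcp", "example.com:22", config)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    session, err := client.NewSession()
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()

    // StdoutPipe gives the io.Reader that SSHLib.Reader loops over.
    stdout, err := session.StdoutPipe()
    if err != nil {
        log.Fatal(err)
    }
    if err := session.Shell(); err != nil {
        log.Fatal(err)
    }

    s := &SSHLib{Stdout: stdout, Data: make(chan string)}
    go s.Reader()
    s.GetData()
    log.Println(s.Buffer)
}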

Related

Why is data being pushed into the channel but never read from the receiver goroutine?

I am building a daemon and I have two services that will be sending data to and from each other. Service A is what produces the data, and Service B is a Data Buffer service, or something like a queue. So from the main.go file, Service B is instantiated and started. The Start() method will run the buffer() function as a goroutine, because this function waits for data to be passed onto a channel and I don't want the main process to halt waiting for buffer to complete. Then Service A is instantiated and started. It is then also "registered" with Service B.
I created a method called RegisterWithBufferService for Service A that creates two new channels. It will store those channels as its own attributes and also provide them to Service B.
func (s *ServiceA) RegisterWithBufferService(bufService *data.DataBuffer) error {
newIncomingChan := make(chan *data.DataFrame, 1)
newOutgoingChan := make(chan []byte, 1)
s.IncomingBuffChan = newIncomingChan
s.OutgoingDataChannels = append(s.OutgoingDataChannels, newOutgoingChan)
bufService.DataProviders[s.ServiceName()] = data.DataProviderInfo{
IncomingChan: newOutgoingChan, //our outGoing channel is their incoming
OutgoingChan: newIncomingChan, // our incoming channel is their outgoing
}
s.DataBufferService = bufService
bufService.NewProvider <- s.ServiceName() //The DataBuffer service listens for new services and creates a new goroutine for buffering
s.Logger.Info().Msg("Registration completed.")
return nil
}
Buffer essentially listens for incoming data from Service A, decodes it using Decode() and then adds it to a slice called buf. If the slice is greater in length than bufferPeriod then it will send the first item in the slice in the Outgoing channel back to Service A.
func (b* DataBuffer) buffer(bufferPeriod int) {
for {
select {
case newProvider := <- b.NewProvider:
b.wg.Add(1)
/*
newProvider is a string
DataProviders is a map the value it returns is a struct containing the Incoming and
Outgoing channels for this service
*/
p := b.DataProviders[newProvider]
go func(prov string, in chan []byte, out chan *DataFrame) {
defer b.wg.Done()
var buf []*DataFrame
for {
select {
case rawData := <-in:
tmp := Decode(rawData) //custom decoding function. Returns a *DataFrame
buf = append(buf, tmp)
if len(buf) > bufferPeriod {
b.Logger.Info().Msg("Sending decoded data out.")
out <- buf[0]
buf = buf[1:] //pop
}
case <- b.Quit:
return
}
}
}(newProvider, p.IncomingChan, p.OutgoingChan)
case <- b.Quit:
return
}
}
}
Now Service A has a method called record that will periodically push data to all the channels in its OutgoingDataChannels attribute.
func (s *ServiceA) record() error {
...
if atomic.LoadInt32(&s.Listeners) != 0 {
s.Logger.Info().Msg("Sending raw data to data buffer")
for _, outChan := range s.OutgoingDataChannels {
outChan <- dataBytes // the receiver (Service B) is already listening and this doesn't hang
}
s.Logger.Info().Msg("Raw data sent and received") // The logger will output this so I know it's not hanging
}
}
The problem is that Service A seems to push the data successfully using record, but Service B never enters the case rawData := <-in: branch in the buffer sub-goroutine. Is this because I have nested goroutines? In case it's not clear: when Service B is started, it calls buffer, but because it would hang otherwise, I made the call to buffer a goroutine. So then when Service A calls RegisterWithBufferService, the buffer goroutine creates a goroutine to listen for new data from Service A and push it back to Service A once the buffer is filled. I hope I explained it clearly.
EDIT 1
I've made a minimal, reproducible example.
package main
import (
"fmt"
"sync"
"sync/atomic"
"time"
)
var (
defaultBufferingPeriod int = 3
DefaultPollingInterval int64 = 10
)
type DataObject struct{
Data string
}
type DataProvider interface {
RegisterWithBufferService(*DataBuffer) error
ServiceName() string
}
type DataProviderInfo struct{
IncomingChan chan *DataObject
OutgoingChan chan *DataObject
}
type DataBuffer struct{
Running int32 //used atomically
DataProviders map[string]DataProviderInfo
Quit chan struct{}
NewProvider chan string
wg sync.WaitGroup
}
func NewDataBuffer() *DataBuffer{
var (
wg sync.WaitGroup
)
return &DataBuffer{
DataProviders: make(map[string]DataProviderInfo),
Quit: make(chan struct{}),
NewProvider: make(chan string),
wg: wg,
}
}
func (b *DataBuffer) Start() error {
if ok := atomic.CompareAndSwapInt32(&b.Running, 0, 1); !ok {
return fmt.Errorf("Could not start Data Buffer Service.")
}
go b.buffer(defaultBufferingPeriod)
return nil
}
func (b *DataBuffer) Stop() error {
if ok := atomic.CompareAndSwapInt32(&b.Running, 1, 0); !ok {
return fmt.Errorf("Could not stop Data Buffer Service.")
}
for _, p := range b.DataProviders {
close(p.IncomingChan)
close(p.OutgoingChan)
}
close(b.Quit)
b.wg.Wait()
return nil
}
// buffer creates goroutines for each incoming, outgoing data pair and decodes the incoming bytes into outgoing DataFrames
func (b *DataBuffer) buffer(bufferPeriod int) {
for {
select {
case newProvider := <- b.NewProvider:
fmt.Println("Received new Data provider.")
if _, ok := b.DataProviders[newProvider]; ok {
b.wg.Add(1)
p := b.DataProviders[newProvider]
go func(prov string, in chan *DataObject, out chan *DataObject) {
defer b.wg.Done()
var (
buf []*DataObject
)
fmt.Printf("Waiting for data from: %s\n", prov)
for {
select {
case inData := <-in:
fmt.Printf("Received data from: %s\n", prov)
buf = append(buf, inData)
if len(buf) > bufferPeriod {
fmt.Printf("Queue is filled, sending data back to %s\n", prov)
out <- buf[0]
fmt.Println("Data Sent")
buf = buf[1:] //pop
}
case <- b.Quit:
return
}
}
}(newProvider, p.IncomingChan, p.OutgoingChan)
}
case <- b.Quit:
return
}
}
}
type ServiceA struct{
Active int32 // atomic
Stopping int32 // atomic
Recording int32 // atomic
Listeners int32 // atomic
name string
QuitChan chan struct{}
IncomingBuffChan chan *DataObject
OutgoingBuffChans []chan *DataObject
DataBufferService *DataBuffer
}
// A compile time check to ensure ServiceA fully implements the DataProvider interface
var _ DataProvider = (*ServiceA)(nil)
func NewServiceA() (*ServiceA, error) {
var newSliceOutChans []chan *DataObject
return &ServiceA{
QuitChan: make(chan struct{}),
OutgoingBuffChans: newSliceOutChans,
name: "SERVICEA",
}, nil
}
// Start starts the service. Returns an error if any issues occur
func (s *ServiceA) Start() error {
atomic.StoreInt32(&s.Active, 1)
return nil
}
// Stop stops the service. Returns an error if any issues occur
func (s *ServiceA) Stop() error {
atomic.StoreInt32(&s.Stopping, 1)
close(s.QuitChan)
return nil
}
func (s *ServiceA) StartRecording(pol_int int64) error {
if ok := atomic.CompareAndSwapInt32(&s.Recording, 0, 1); !ok {
return fmt.Errorf("Could not start recording. Data recording already started")
}
ticker := time.NewTicker(time.Duration(pol_int) * time.Second)
go func() {
for {
select {
case <-ticker.C:
fmt.Println("Time to record...")
err := s.record()
if err != nil {
return
}
case <-s.QuitChan:
ticker.Stop()
return
}
}
}()
return nil
}
func (s *ServiceA) record() error {
current_time := time.Now()
ct := fmt.Sprintf("%02d-%02d-%d", current_time.Day(), current_time.Month(), current_time.Year())
dataObject := &DataObject{
Data: ct,
}
if atomic.LoadInt32(&s.Listeners) != 0 {
fmt.Println("Sending data to Data buffer...")
for _, outChan := range s.OutgoingBuffChans {
outChan <- dataObject // the receivers should already be listening
}
fmt.Println("Data sent.")
}
return nil
}
// RegisterWithBufferService satisfies the DataProvider interface. It provides the bufService with new incoming and outgoing channels along with a polling interval
func (s ServiceA) RegisterWithBufferService(bufService *DataBuffer) error {
if _, ok := bufService.DataProviders[s.ServiceName()]; ok {
return fmt.Errorf("%v data provider already registered with Data Buffer.", s.ServiceName())
}
newIncomingChan := make(chan *DataObject, 1)
newOutgoingChan := make(chan *DataObject, 1)
s.IncomingBuffChan = newIncomingChan
s.OutgoingBuffChans = append(s.OutgoingBuffChans, newOutgoingChan)
bufService.DataProviders[s.ServiceName()] = DataProviderInfo{
IncomingChan: newOutgoingChan, //our outGoing channel is their incoming
OutgoingChan: newIncomingChan, // our incoming channel is their outgoing
}
s.DataBufferService = bufService
bufService.NewProvider <- s.ServiceName() //The DataBuffer service listens for new services and creates a new goroutine for buffering
return nil
}
// ServiceName satisfies the DataProvider interface. It returns the name of the service.
func (s ServiceA) ServiceName() string {
return s.name
}
func main() {
var BufferedServices []DataProvider
fmt.Println("Instantiating and Starting Data Buffer Service...")
bufService := NewDataBuffer()
err := bufService.Start()
if err != nil {
panic(fmt.Sprintf("%v", err))
}
defer bufService.Stop()
fmt.Println("Data Buffer Service successfully started.")
fmt.Println("Instantiating and Starting Service A...")
serviceA, err := NewServiceA()
if err != nil {
panic(fmt.Sprintf("%v", err))
}
BufferedServices = append(BufferedServices, *serviceA)
err = serviceA.Start()
if err != nil {
panic(fmt.Sprintf("%v", err))
}
defer serviceA.Stop()
fmt.Println("Service A successfully started.")
fmt.Println("Registering services with Data Buffer...")
for _, s := range BufferedServices {
_ = s.RegisterWithBufferService(bufService) // ignoring error msgs for base case
}
fmt.Println("Registration complete.")
fmt.Println("Beginning recording...")
_ = atomic.AddInt32(&serviceA.Listeners, 1)
err = serviceA.StartRecording(DefaultPollingInterval)
if err != nil {
panic(fmt.Sprintf("%v", err))
}
for {
select {
case RTD := <-serviceA.IncomingBuffChan:
fmt.Println(RTD)
case <-serviceA.QuitChan:
atomic.StoreInt32(&serviceA.Listeners, 0)
bufService.Quit<-struct{}{}
}
}
}
Running on Go 1.17. When running the example, it should print the following every 10 seconds:
Time to record...
Sending data to Data buffer...
Data sent.
But then Data buffer never goes into the inData := <-in case.
To diagnose this I changed fmt.Println("Sending data to Data buffer...") to fmt.Println("Sending data to Data buffer...", s.OutgoingBuffChans) and the output was:
Time to record...
Sending data to Data buffer... []
So you are not actually sending the data to any channels. The reason for this is:
func (s ServiceA) RegisterWithBufferService(bufService *DataBuffer) error {
As the receiver is not a pointer, when you do s.OutgoingBuffChans = append(s.OutgoingBuffChans, newOutgoingChan) you are changing s.OutgoingBuffChans in a copy of the ServiceA, which is discarded when the function exits. To fix this, change:
func (s ServiceA) RegisterWithBufferService(bufService *DataBuffer) error {
to
func (s *ServiceA) RegisterWithBufferService(bufService *DataBuffer) error {
and
BufferedServices = append(BufferedServices, *serviceA)
to
BufferedServices = append(BufferedServices, serviceA)
The amended version outputs:
Time to record...
Sending data to Data buffer... [0xc0000d8060]
Data sent.
Received data from: SERVICEA
Time to record...
Sending data to Data buffer... [0xc0000d8060]
Data sent.
Received data from: SERVICEA
So this resolves the reported issue (I would not be surprised if there are other issues, but hopefully this points you in the right direction). I did notice that the code you originally posted does use a pointer receiver, so that might have suffered from another issue (but it's difficult to comment on code fragments in a case like this).
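As a side note, here is a tiny self-contained illustration of why a value receiver loses the append while a pointer receiver keeps it:
package main

import "fmt"

type box struct{ items []int }

// Value receiver: the append happens on a copy, so the caller's slice is unchanged.
func (b box) addValue(n int) { b.items = append(b.items, n) }

// Pointer receiver: the append modifies the caller's struct.
func (b *box) addPointer(n int) { b.items = append(b.items, n) }

func main() {
    b := box{}
    b.addValue(1)
    fmt.Println(len(b.items)) // 0 -- the append was lost
    b.addPointer(1)
    fmt.Println(len(b.items)) // 1
}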

How to handle multiple goroutines that share the same channel

I've been searching a lot but could not find an answer for my problem yet.
I need to make multiple calls to an external API, but with different parameters concurrently.
And then for each call I need to init a struct for each dataset and process the data I receive from the API call. Bear in mind that I read each line of the incoming request and immediately start sending it to the channel.
The first problem I encountered, which was not obvious at the beginning due to the large quantity of data I'm receiving, is that each goroutine does not receive all the data that goes through the channel (which I learned from the research I've done). So what I need is a way of requeuing/redirecting that data to the correct goroutine.
The function that sends the streamed response from a single dataset.
(I've cut useless parts of code that are out of context)
func (api *API) RequestData(ctx context.Context, c chan DWeatherResponse, dataset string, wg *sync.WaitGroup) error {
for {
line, err := reader.ReadBytes('\n')
s := string(line)
if err != nil {
log.Println("End of %s", dataset)
return err
}
data, err := extractDataFromStreamLine(s, dataset)
if err != nil {
continue
}
c <- *data
}
}
The function that will process the incoming data
func (s *StrikeStruct) Process(ch, requeue chan dweather.DWeatherResponse) {
for {
data, more := <-ch
if !more {
break
}
// data contains {dataset string, value float64, date time.Time}
// The s.Parameter needs to match the dataset
// IMPORTANT PART, checks if the received data is part of this struct dataset
// If not I want to send it to another go routine until it gets to the correct one.
// There will be a max of 4 datasets but still this could not be the best approach to have
if !api.GetDataset(s.Parameter, data.Dataset) {
requeue <- data
continue
}
// Do stuff with the data from this point
}
}
Now on my own API endpoint I have the following:
ch := make(chan dweather.DWeatherResponse, 2)
requeue := make(chan dweather.DWeatherResponse)
final := make(chan strike.StrikePerYearResponse)
var wg sync.WaitGroup
for _, s := range args.Parameters.Strikes {
strike := strike.StrikePerYear{
Parameter: strike.Parameter(s.Dataset),
StrikeValue: s.Value,
}
// I receive and process the data in here
go strike.ProcessStrikePerYear(ch, requeue, final, string(s.Dataset))
}
go func() {
for {
data, _ := <-requeue
ch <- data
}
}()
// Creates a goroutine for each dataset
for _, dataset := range api.Params.Dataset {
wg.Add(1)
go api.RequestData(ctx, ch, dataset, &wg)
}
wg.Wait()
close(ch)
//Once the data is all processed it is all appended
var strikes []strike.StrikePerYearResponse
for range args.Fetch.Datasets {
strikes = append(strikes, <-final)
}
return strikes
The issue with this code is that as soon as I start receiving data from more than one endpoint, the requeue will block and nothing more happens. If I remove that requeue logic, data will be lost if it does not land on the correct goroutine.
My two questions are:
Why is the requeue blocking if it has a goroutine always ready to receive?
Should I take a different approach on how I'm processing the incoming data?
This is not a good way of solving your problem; you should change your approach. I suggest an implementation like the one below:
import (
"fmt"
"sync"
)
// answer for https://stackoverflow.com/questions/68454226/how-to-handle-multiple-goroutines-that-share-the-same-channel
var (
finalResult = make(chan string)
)
// IData use for message dispatcher that all struct must implement its method
type IData interface {
IsThisForMe() bool
Process(*sync.WaitGroup)
}
//MainData can be your main struct like StrikePerYear
type MainData struct {
// add any props
Id int
Name string
}
type DataTyp1 struct {
MainData *MainData
}
func (d DataTyp1) IsThisForMe() bool {
// you can check your condition here to checking incoming data
if d.MainData.Id == 2 {
return true
}
return false
}
func (d DataTyp1) Process(wg *sync.WaitGroup) {
d.MainData.Name = "processed by DataTyp1"
// send result to final channel, you can change it as you want
finalResult <- d.MainData.Name
wg.Done()
}
type DataTyp2 struct {
MainData *MainData
}
func (d DataTyp2) IsThisForMe() bool {
// you can check your condition here to checking incoming data
if d.MainData.Id == 3 {
return true
}
return false
}
func (d DataTyp2) Process(wg *sync.WaitGroup) {
d.MainData.Name = "processed by DataTyp2"
// send result to final channel, you can change it as you want
finalResult <- d.MainData.Name
wg.Done()
}
//dispatcher will run new go routine for each request.
//you can implement a worker pool to preventing running too many go routines.
func dispatcher(incomingData *MainData, wg *sync.WaitGroup) {
// based on your requirements you can remove this go routing or not
go func() {
var p IData
p = DataTyp1{incomingData}
if p.IsThisForMe() {
go p.Process(wg)
return
}
p = DataTyp2{incomingData}
if p.IsThisForMe() {
go p.Process(wg)
return
}
}()
}
func main() {
dummyDataArray := []MainData{
MainData{Id: 2, Name: "this data #2"},
MainData{Id: 3, Name: "this data #3"},
}
wg := sync.WaitGroup{}
for i := range dummyDataArray {
wg.Add(1)
dispatcher(&dummyDataArray[i], &wg)
}
result := make([]string, 0)
done := make(chan struct{})
// data collector
go func() {
loop:
for {
select {
case <-done:
break loop
case r := <-finalResult:
result = append(result, r)
}
}
}()
wg.Wait()
done<- struct{}{}
for _, s := range result {
fmt.Println(s)
}
}
Note: this is just meant to open your mind to finding a better solution; it is certainly not production-ready code.
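As the comment in dispatcher suggests, a worker pool can cap the number of goroutines instead of spawning one per request. A minimal, self-contained sketch of that idea (it uses plain func() jobs rather than the IData types above):
package main

import (
    "fmt"
    "sync"
)

func main() {
    const workers = 4
    jobs := make(chan func(), 16)
    var wg sync.WaitGroup

    // Start a fixed number of workers; each drains the jobs channel.
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for job := range jobs {
                job()
            }
        }()
    }

    // Enqueue work instead of launching a goroutine per item.
    for i := 0; i < 10; i++ {
        i := i
        jobs <- func() { fmt.Println("processed item", i) }
    }
    close(jobs)
    wg.Wait()
}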

How to use goroutines in a loop to fill a struct?

There are the following models:
type ChromeBasedDirections struct {
CurrencyFromName string
CurrencyToName string
URL string
ParseResponse ParseResponse
}
type ParseResponse struct {
CurrentPrice float64
Change24H float64
Err error
}
type ParseResponseChan struct {
URL string
CurrentPrice float64
Change24H float64
Err error
}
The function is like this:
func ParseBNCByURL(u string, chanResponse chan models.ParseResponseChan) {
var parseResponse models.ParseResponseChan
//..
// code that fill up parseResponse
//..
if err != nil {
parseResponse.Err = err
chanResponse <- parseResponse
return
}
parseResponse.CurrentPrice = price
parseResponse.Change24H = change24H
chanResponse <- parseResponse
return
}
And this is how the function call goes:
func initParserMultiTread() {
var urls = []string{"url_0","url_1","url_2","url_n"} // on production it will be taken from the json file
var chromeBasedDirections []models.ChromeBasedDirections
for _, url := range urls {
chromeBasedDirections = append(chromeBasedDirections, models.ChromeBasedDirections{
CurrencyFromName: "",
CurrencyToName: "",
URL: url,
ParseResponse: models.ParseResponse{},
})
}
var parseResponseChan = make(chan models.ParseResponseChan) // here's how to do without hardcode?
for _, dir := range chromeBasedDirections {
go controllers.ParseBNCByURL(dir.URL, parseResponseChan)
}
}
If initParserMultiTread is executed, the function will complete before ParseBNCByURL is executed. This is because the channel has not been read. If hardcoding were acceptable, it would be possible to create a separate chan models.ParseResponseChan for each url_n and then, in a loop, compare ChromeBasedDirections.URL and ParseResponseChan.URL, and if there is a match, fill in ChromeBasedDirections.ParseResponse.
But I need to avoid hardcoding and do everything in a loop. In general, this is far from my first attempt at running the ParseBNCByURL function concurrently.
That is, there is a ChromeBasedDirections model with the CurrencyFrom/ToName and URL fields filled in, and I need to execute ParseBNCByURL concurrently so that it fills in ParseResponse.
I tried it again, but the function still doesn't run concurrently:
var urls = []string{"url_0","url_1","url_2","url_n"}
var chromeResults []models.ChromeBasedDirections
for _, url := range urls {
var chromeResult = make(chan models.ChromeBasedDirections)
go controllers.ParseBNCByURL(url, chromeResult)
chromeResults = append(chromeResults, <-chromeResult)
}
The problem with your code is that you are not reading the parseResponseChan anywhere. All your goroutines get blocked because of that, and that is why the initParserMultiTread function finishes before your goroutines.
You need to add code for reading the parseResponseChan channel, as well as code for closing the parseResponseChan channel once all goroutines are done with execution.
One solution is synchronization with sync.WaitGroup.
First a change in your ParseBNCByURL function. This function needs to send a signal when it's done.
func ParseBNCByURL(wg *sync.WaitGroup, u string, chanResponse chan models.ParseResponseChan) {
defer wg.Done() //this signals when the goroutine is done
//the rest of the function's body
In the initParserMultiTread function, you need a way to wait for signals from all goroutines to close the parseResponseChan channel and to read the response from that channel.
func initParserMultiTread() {
var urls = []string{"url_0","url_1","url_2","url_n"} // on production it will be taken from the json file
var chromeBasedDirections []models.ChromeBasedDirections
var wg sync.WaitGroup
wg.Add(len(urls)) // add how many goroutines will be initialized
for _, url := range urls {
chromeBasedDirections = append(chromeBasedDirections, models.ChromeBasedDirections{
CurrencyFromName: "",
CurrencyToName: "",
URL: url,
ParseResponse: models.ParseResponse{},
})
}
var parseResponseChan = make(chan models.ParseResponseChan)
//init a goroutine for closing the parseResponseChan once all goroutines are done
go func(wg *sync.WaitGroup, ch chan models.ParseResponseChan) {
wg.Wait() //wait for all goroutines to send the signal
close(ch) //close the channel
}(&wg, parseResponseChan)
for _, dir := range chromeBasedDirections {
go controllers.ParseBNCByURL(&wg, dir.URL, parseResponseChan)
}
//read the responses from the channel
for response := range parseResponseChan {
//some code to handle the responses
}
}
Once all the goroutines are initialized, you start reading from the parseResponseChan channel. This will block the initParserMultiTread function from returning until all goroutines are done with execution and the parseResponseChan channel closes.
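To address the original goal of filling ChromeBasedDirections.ParseResponse without hardcoding per-URL channels, the responses can be matched back by URL. A rough sketch, assuming the models types from the question; it would replace the handling loop above:
// Index the directions by URL so each response can be matched back to its entry.
byURL := make(map[string]int, len(chromeBasedDirections))
for i, dir := range chromeBasedDirections {
    byURL[dir.URL] = i
}
for response := range parseResponseChan {
    if i, ok := byURL[response.URL]; ok {
        chromeBasedDirections[i].ParseResponse = models.ParseResponse{
            CurrentPrice: response.CurrentPrice,
            Change24H:    response.Change24H,
            Err:          response.Err,
        }
    }
}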

Syncing websocket loops with channels in Golang

I'm facing a dilemma here trying to keep certain websockets in sync for a given user. Here's the basic setup:
type msg struct {
Key string
Value string
}
type connStruct struct {
//...
ConnRoutineChans []*chan string
LoggedIn bool
Login string
//...
Sockets []*websocket.Conn
}
var (
//...
/* LIST OF CONNECTED USERS AN THEIR IP ADDRESSES */
guestMap sync.Map
)
func main() {
post("Started...")
rand.Seed(time.Now().UTC().UnixNano())
http.HandleFunc("/wss", wsHandler)
panic(http.ListenAndServeTLS("...", "...", "...", nil))
}
func wsHandler(w http.ResponseWriter, r *http.Request) {
if r.Header.Get("Origin")+":8080" != "https://...:8080" {
http.Error(w, "Origin not allowed", 403)
fmt.Println("Client origin not allowed! (https://"+r.Host+")")
fmt.Println("r.Header Origin: "+r.Header.Get("Origin"))
return
}
///
conn, err := websocket.Upgrade(w, r, w.Header(), 1024, 1024)
if err != nil {
http.Error(w, "Could not open websocket connection", http.StatusBadRequest)
fmt.Println("Could not open websocket connection with client!")
}
//ADD CONNECTION TO guestMap IF CONNECTION IS nil
var authString string = /*gets device identity*/;
var authChan chan string = make(chan string);
authValue, authOK := guestMap.Load(authString);
if !authOK {
// NO SESSION, CREATE A NEW ONE
newSession = getSession();
//defer newSession.Close();
guestMap.Store(authString, connStruct{ LoggedIn: false,
ConnRoutineChans: []*chan string{&authChan},
Login: "",
Sockets: []*websocket.Conn{conn}
/* .... */ });
}else{
//SESSION STARTED, ADD NEW SOCKET TO Sockets
var tempConn connStruct = authValue.(connStruct);
tempConn.Sockets = append(tempConn.Sockets, conn);
tempConn.ConnRoutineChans = append(tempConn.ConnRoutineChans, &authChan)
guestMap.Store(authString, tempConn);
}
//
go echo(conn, authString, &authChan);
}
func echo(conn *websocket.Conn, authString string, authChan *chan string) {
var message msg;
//TEST CHANNEL
authValue, _ := guestMap.Load(authString);
go sendToChans(authValue.(connStruct).ConnRoutineChans, "sup dude?")
fmt.Println("got past send...");
for true {
select {
case val := <-*authChan:
// use value of channel
fmt.Println("AuthChan for user #"+strconv.Itoa(myConnNumb)+" spat out: ", val)
default:
// if channels are empty, this is executed
}
readError := conn.ReadJSON(&message)
fmt.Println("got past readJson...");
if readError != nil || message.Key == "" {
//DISCONNECT USER
//.....
return
}
//
_key, _value := chief(message.Key, message.Value, &*conn, browserAndOS, authString)
if writeError := conn.WriteJSON(_key + "|" + _value); writeError != nil {
//...
return
}
fmt.Println("got past writeJson...");
}
}
func sendToChans(chans []*chan string, message string){
for i := 0; i < len(chans); i++ {
*chans[i] <- message
}
}
I know, a big block of code eh? And I commented out most of it...
Anyway, if you've ever used a websocket most of it should be quite familiar:
1) func wsHandler() fires every time a user connects. It makes an entry in guestMap (for each unique device that connects) which holds a connStruct which holds a list of channels: ConnRoutineChans []*chan string. This all gets passed to:
2) echo(), which is a goroutine that constantly runs for each websocket connection. Here I'm just testing out sending a message to other running goroutines, but it seems my for loop isn't actually constantly firing. It only fires when the websocket receives a message from the open tab/window it's connected to. (If anyone can clarify this mechanic, I'd love to know why it's not looping constantly?)
3) For each window or tab that the user has open on a given device there is a websocket and a channel stored in arrays. I want to be able to send a message to all the channels in the array (essentially the other goroutines for open tabs/windows on that device) and receive the message in the other goroutines to change some variables set in the constantly running goroutine.
What I have right now works only for the very first connection on a device, and (of course) it sends "sup dude?" to itself since it's the only channel in the array at the time. Then if you open a new tab (or even many), the message doesn't get sent to anyone at all! Strange?... Then when I close all the tabs (and my commented-out logic removes the device item from guestMap) and start up a new device session, still only the first connection gets its own message.
I already have a method for sending a message to all the other websockets on a device, but sending to a goroutine seems to be a little more tricky than I thought.
To answer my own question:
First, I've switched from a sync.Map to a normal map. Secondly, so that nobody reads or writes it at the same time, I've made a channel that you call to do any read/write operation on the map. I've been trying my best to keep my data access and manipulation quick to execute so the channel doesn't get crowded so easily. Here's a small example of that:
package main
import (
"fmt"
)
var (
guestMap map[string]*guestStruct = make(map[string]*guestStruct);
guestMapActionChan = make (chan actionStruct);
)
type actionStruct struct {
Action func([]interface{})[]interface{}
Params []interface{}
ReturnChan chan []interface{}
}
type guestStruct struct {
Name string
Numb int
}
func main(){
//make chan listener
go guestMapActionChanListener(guestMapActionChan)
//some guest logs in...
newGuest := guestStruct{Name: "Larry Josher", Numb: 1337}
//add to the map
addRetChan := make(chan []interface{})
guestMapActionChan <- actionStruct{Action: guestMapAdd,
Params: []interface{}{&newGuest},
ReturnChan: addRetChan}
addReturned := <-addRetChan
fmt.Println(addReturned)
fmt.Println("Also, numb was changed by listener to:", newGuest.Numb)
// Same kind of thing for removing, except (of course) there's
// a lot more logic to a real-life application.
}
func guestMapActionChanListener (c chan actionStruct){
for{
value := <-c;
//
returned := value.Action(value.Params);
value.ReturnChan <- returned;
close(value.ReturnChan)
}
}
func guestMapAdd(params []interface{}) []interface{} {
//.. do some parameter verification checks
theStruct := params[0].(*guestStruct)
name := theStruct.Name
theStruct.Numb = 75
guestMap[name] = &*theStruct
return []interface{}{"Added '"+name+"' to the guestMap"}
}
For communication between connections, I just have each socket loop hold onto its guestStruct, and have more guestMapActionChan functions that take care of distributing data to other guests' guestStructs.
Now, I'm not going to mark this as the correct answer unless I get some better suggestions as to how to do something like this the right way. But for now this is working and should guarantee no races for reading/writing to the map.
Edit: The correct approach should really have been to just use a sync.Mutex, like I do in the (mostly) finished project GopherGameServer
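For illustration, a minimal sketch of that sync.Mutex approach, guarding a map of the same guestStruct directly instead of funnelling every operation through a channel (the type is redeclared here so the snippet stands on its own):
package main

import (
    "fmt"
    "sync"
)

type guestStruct struct {
    Name string
    Numb int
}

var (
    guestMu  sync.Mutex
    guestMap = make(map[string]*guestStruct)
)

// addGuest stores a guest under its name; the mutex serializes access to the map.
func addGuest(g *guestStruct) {
    guestMu.Lock()
    defer guestMu.Unlock()
    guestMap[g.Name] = g
}

// getGuest looks a guest up by name under the same lock.
func getGuest(name string) (*guestStruct, bool) {
    guestMu.Lock()
    defer guestMu.Unlock()
    g, ok := guestMap[name]
    return g, ok
}

func main() {
    addGuest(&guestStruct{Name: "Larry Josher", Numb: 1337})
    if g, ok := getGuest("Larry Josher"); ok {
        fmt.Println(g.Name, g.Numb)
    }
}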

Go: Passing functions through channels

I'm trying to rate limit the functions that I call by placing them through a queue to be accessed later. Below I have a slice of requests that I have created, and the requestHandler function processes each request at a certain rate.
I want it to accept all kinds of functions with different types of parameters, hence the interface{} type.
How would I be able to pass the functions through a channel and successfully call them?
type request struct {
function interface{}
channel chan interface{}
}
var requestQueue []request
func pushQueue(f interface{}, ch chan interface{}) {
req := request{
f,
ch,
}
//push
requestQueue = append(requestQueue, req)
}
func requestHandler() {
for {
if len(requestQueue) > 0 {
//pop
req := requestQueue[len(requestQueue)-1]
requestQueue = requestQueue[:len(requestQueue)-1]
req.channel <- req.function
}
<-time.After(1200 * time.Millisecond)
}
}
Here is an example of what I'm trying to achieve (GetLeagueEntries(string, string) and GetSummonerName(int, int) are functions):
ch := make(chan interface{})
pushQueue(l.GetLeagueEntries, ch)
pushQueue(l.GetSummonerName, ch)
leagues, _ := <-ch(string1, string2)
summoners, _ := <-ch(int1, int2)
Alright, here is the codez: https://play.golang.org/p/XZvb_4BaJF
Notice that it's not perfect. You have a queue that is executed every second. If the queue is empty and a new item is added, the new item can wait for almost a second before being executed.
But this should get you very close to what you need anyway :)
This code can be split into 3 section:
The rate limited queue executor, which I call the server (I'm horrible at naming things) - The server doesn't know anything about the functions. All it does is start a never-ending goroutine that pops the oldest function in the queue, once every second, and calls it. The issue that I talked about above is in this section of the code BTW and I could help you fix it if you want.
The Button Click functionality - This shows you how each button click could call 3 diff functions (you could obviously make more/fewer function calls) using the server and make sure that they are each 1 second apart from each other. You can even add a timeout to any of the functions (to fake latency) and they would still get called 1 second apart. This is the only place that you need channels, because you want to make all the function calls as fast as possible (if the first function takes 5 seconds, you only want to wait 1 second to call the second function) and then wait for them to finish, so you need to know when they are all done.
The Button Click simulation (the main func) - this just shows that 3 button clicks would work as expected. You can also put them in a goroutine to simulate 3 users clicking the button at the same time and it would still work.
package main
import (
"fmt"
"sync"
"time"
)
const (
requestFreq = time.Second
)
type (
// A single request
request func()
// The server that will hold a queue of requests and make them once a requestFreq
server struct {
// This will tick once per requestFreq
ticker *time.Ticker
requests []request
// Mutex for working with the request slice
sync.RWMutex
}
)
var (
createServerOnce sync.Once
s *server
)
func main() {
// Multiple button clicks:
ButtonClick()
ButtonClick()
ButtonClick()
fmt.Println("Done!")
}
// BUTTON LOGIC:
// Calls 3 functions and returns 3 diff values.
// Each function is called at least 1 second apart.
func ButtonClick() (val1 int, val2 string, val3 bool) {
iCh := make(chan int)
sCh := make(chan string)
bCh := make(chan bool)
go func(){
Server().AppendRequest(func() {
t := time.Now()
fmt.Println("Calling func1 (time: " + t.Format("15:04:05") + ")")
// do some stuff
iCh <- 1
})
}()
go func(){
Server().AppendRequest(func() {
t := time.Now()
fmt.Println("Calling func2 (time: " + t.Format("15:04:05") + ")")
// do some stuff
sCh <- "Yo"
})
}()
go func(){
Server().AppendRequest(func() {
t := time.Now()
fmt.Println("Calling func3 (time: " + t.Format("15:04:05") + ")")
// do some stuff
bCh <- true
})
}()
// Wait for all 3 calls to come back
for count := 0; count < 3; count++ {
select {
case val1 = <-iCh:
case val2 = <-sCh:
case val3 = <-bCh:
}
}
return
}
// SERVER LOGIC
// Factory function that will only create a single server
func Server() *server {
// Only one server for the entire application
createServerOnce.Do(func() {
s = &server{ticker: time.NewTicker(requestFreq), requests: []request{}}
// Start a thread to make requests.
go s.makeRequests()
})
return s
}
func (s *server) makeRequests() {
if s == nil || s.ticker == nil {
return
}
// This will keep going once per each requestFreq
for _ = range s.ticker.C {
var r request
// You can't just access s.requests because you are in a goroutine
// here while someone could be adding new requests outside of the
// goroutine so you have to use locks.
s.Lock()
if len(s.requests) > 0 {
// We have a lock here, which blocks all other operations
// so just shift the first request out, save it and give
// the lock back before doing any work.
r = s.requests[0]
s.requests = s.requests[1:]
}
s.Unlock()
if r != nil {
// make the request!
r()
}
}
}
func (s *server) AppendRequest(r request) {
if s == nil {
return
}
s.Lock()
s.requests = append(s.requests, r)
s.Unlock()
}
First, I would write it as:
leagues := server.GetLeagueEntries()
summoners := server.GetSummoners()
And put the rate limiting into the server, using one of the rate-limiting libraries.
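For example, a minimal sketch using golang.org/x/time/rate; the two closures are stand-ins for the real API calls, and the 1200ms interval matches the original requestHandler:
package main

import (
    "context"
    "fmt"
    "time"

    "golang.org/x/time/rate"
)

func main() {
    // Allow one call every 1200ms, the same pacing as the original queue.
    limiter := rate.NewLimiter(rate.Every(1200*time.Millisecond), 1)
    ctx := context.Background()

    getLeagueEntries := func() { fmt.Println("GetLeagueEntries at", time.Now().Format("15:04:05.000")) }
    getSummonerName := func() { fmt.Println("GetSummonerName at", time.Now().Format("15:04:05.000")) }

    for _, call := range []func(){getLeagueEntries, getSummonerName} {
        if err := limiter.Wait(ctx); err != nil { // blocks until the next token is available
            return
        }
        call()
    }
}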
However, it is possible to use an interface to unify the requests, and use a func type to allow closures (as in http.HandleFunc):
type Command interface {
Execute(server *Server)
}
type CommandFunc func(server *Server)
func (fn CommandFunc) Execute(server *Server) { fn(server) }
type GetLeagueEntries struct { Leagues []League }
func (entries *GetLeagueEntries) Execute(server *Server) {
// ...
}
func GetSummonerName(id int, result *string) CommandFunc {
return CommandFunc(func(server *Server){
*result = "hello"
})
}
get := GetLeagueEntries{}
requests <- &get
requests <- CommandFunc(func(server *Server){
// ... handle struff here
})
Of course, this needs some synchronization.
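A rough sketch of the consuming side that the fragment above leaves out; requests, srv, and the 1200ms pacing are assumptions about the surrounding program, not part of the original answer:
// requests is assumed to be a chan Command and srv the *Server the commands operate on.
go func() {
    ticker := time.NewTicker(1200 * time.Millisecond)
    defer ticker.Stop()
    for cmd := range requests {
        <-ticker.C // at most one command per tick
        cmd.Execute(srv)
    }
}()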
I would have thought it easier to use some sort of semaphore or worker pool. That way you have a limited number of workers who can do anything. It would be possible to have multiple worker pools too.
Do you need any of these calls to be concurrent/asynchronous? If not, and they can be called in order, you could have a configurable sleep (a nasty hack, mind).
Try out a worker pool or semaphore rather than a chan of functions.
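For example, a minimal sketch of the semaphore idea using a buffered channel to cap how many calls are in flight at once (the sleep is a stand-in for the real API call):
package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    sem := make(chan struct{}, 3) // at most 3 calls in flight
    var wg sync.WaitGroup

    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            sem <- struct{}{}        // acquire a slot
            defer func() { <-sem }() // release it when done
            fmt.Println("calling API for request", id)
            time.Sleep(200 * time.Millisecond) // stand-in for the real call
        }(i)
    }
    wg.Wait()
}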
