I am trying to implement a tunnel between two Go channels (note: I am very new to Go). My goroutine looks like this:
consumer := ...
producer := ...
go func() {
    for jsn := range producer {
        msg, err := ops.FromJSON(jsn)
        if err != nil {
            log.Print(err)
            continue
        }
        consumer <- msg
    }
}()
This seems to have some issues, though. How can I check whether the consumer channel is already closed? How do I handle the race between receiving a message from the producer and sending it to the consumer? ... and maybe others.
Can someone provide a good example of a tunnel between two channels?
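For reference, one common shape for such a bridge is sketched below. It assumes the producer channel is closed by whoever owns it, and that a hypothetical done channel signals shutdown from the consumer side; the bridging goroutine owns the consumer channel and is the only one to close it.

go func() {
    // This goroutine is the only sender on consumer, so it may close it
    // once the producer is drained, signalling "no more messages" downstream.
    defer close(consumer)
    for jsn := range producer {
        msg, err := ops.FromJSON(jsn)
        if err != nil {
            log.Print(err)
            continue
        }
        select {
        case consumer <- msg: // normal hand-off
        case <-done: // the consumer side gave up; stop without panicking
            return
        }
    }
}()

The rule of thumb: you never check whether a channel is closed before sending. Instead, the receiving side closes done when it stops reading, and the sending side selects on both.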
Related
I am connecting to a websocket that streams live stock trades.
I have to read the prices, perform calculations on the fly and based on these calculations make another API call e.g. buy or sell.
I want to ensure my calculations/processing doesn't slow down my ability to stream in all the live data.
What is a good design pattern to follow for this type of problem?
Is there a way to log/warn in my system to know if I am falling behind?
Falling behind means the websocket is sending price data faster than I can process it, so my processing lags the live stream. Two likely sources of delay:
Deserializing each message with c.ReadJSON before passing it to my channel may add delay.
Processing on the consumer side (calculating formulas and sending another API request to buy or sell) adds further delay.
How can I prevent lags/delays and also monitor if indeed there is a delay?
package main

import (
    "github.com/gorilla/websocket"
    "github.com/sirupsen/logrus"
)

func main() {
    c, _, err := websocket.DefaultDialer.Dial("wss://socket.example.com/stocks", nil)
    if err != nil {
        panic(err)
    }
    defer c.Close()

    // Buffered channel to account for bursts or spikes in data:
    chanMessages := make(chan interface{}, 10000)

    // Read messages off the buffered queue:
    go func() {
        for msgBytes := range chanMessages {
            logrus.Info("Message Bytes: ", msgBytes)
        }
    }()

    // As little logic as possible in the reader loop:
    for {
        var msg interface{}
        err := c.ReadJSON(&msg)
        if err != nil {
            panic(err)
        }
        chanMessages <- msg
    }
}
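On the monitoring question: one cheap way to know you are falling behind is to make the send non-blocking and warn when the buffer is full, or to sample the channel's backlog periodically. A sketch against the chanMessages channel above (the 8000 threshold and 5-second interval are arbitrary assumptions):

// Inside the reader loop, instead of a plain send:
select {
case chanMessages <- msg:
default:
    logrus.Warn("consumer lagging: buffer full, dropping message")
}

// Or, in a separate goroutine, sample the backlog:
go func() {
    for range time.Tick(5 * time.Second) {
        if n := len(chanMessages); n > 8000 {
            logrus.Warnf("backlog at %d of %d buffered messages", n, cap(chanMessages))
        }
    }
}()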
You can read bytes, pass them to the channel, and use other goroutines to do conversion.
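For example, a sketch of that split, assuming gorilla/websocket's ReadMessage plus a hypothetical Trade type and process function:

rawMsgs := make(chan []byte, 10000)

// A few workers deserialize and process in parallel so the read loop stays hot.
for i := 0; i < 4; i++ {
    go func() {
        for raw := range rawMsgs {
            var t Trade // hypothetical message type
            if err := json.Unmarshal(raw, &t); err != nil {
                logrus.Error("decode: ", err)
                continue
            }
            process(t) // hypothetical handler: calculations, buy/sell calls
        }
    }()
}

// The reader loop does no parsing at all:
for {
    _, raw, err := c.ReadMessage()
    if err != nil {
        panic(err)
    }
    rawMsgs <- raw
}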
I worked on a similar crypto market bot. Instead of creating a large buffered channel, I created a buffered channel with a capacity of 1 and used a select statement for sending socket data to the channel.
Here is the example:
var wg sync.WaitGroup
msg := make(chan []byte, 1)

wg.Add(1)
go func() {
    defer wg.Done()
    for data := range msg {
        // decode and process data
    }
}()

for {
    _, data, err := c.ReadMessage()
    if err != nil {
        log.Println("read error: ", err)
        return
    }
    select {
    case msg <- data: // the channel is free: hand the data off
    default: // the channel is still full: drop this message; the next iteration reads fresher data
    }
}
This ensures that you get the latest data whenever you are ready to process. The trade-off is that intermediate messages are dropped, which is usually acceptable for a price feed where only the most recent quote matters.
This one is a tricky issue that bugs me quite a bit.
Essentially, I wrote an integration microservice that provides data streams from the Binance crypto exchange using the Go client. A client sends a start message, which starts a data stream for a symbol, and at some point sends a close message to stop the stream. My implementation looks basically like this:
func (c BinanceClient) StartDataStream(clientType bn.ClientType, symbol, interval string) error {
    switch clientType {
    case bn.SPOT_LIVE:
        wsKlineHandler := c.handlers.klineHandler.SpotKlineHandler
        wsErrHandler := c.handlers.klineHandler.ErrHandler
        _, stopC, err := binance.WsKlineServe(symbol, interval, wsKlineHandler, wsErrHandler)
        if err != nil {
            fmt.Println(err)
            return err
        } else {
            c.state.clientSymChanMap[clientType][symbol] = stopC
            return nil
        }
    ...
}
The clientSymChanMap stores the stopChannel in a nested hashmap so that I can retrieve the stop channel later to stop the data feed. The stop function has been implemented accordingly:
func (c BinanceClient) StopDataStream(clientType bn.ClientType, symbol string) {
    //mtd := "StopDataStream: "
    stopC := c.state.clientSymChanMap[clientType][symbol]
    if isClosed(stopC) {
        DbgPrint(" Channel is already closed. Do nothing for: " + symbol)
    } else {
        close(stopC)
    }
    // Delete the channel from the map; otherwise the next StopAll panics by closing a dead channel
    delete(c.state.clientSymChanMap[clientType], symbol)
    return
}
To prevent panics from closing an already closed channel, I use a check function that returns true in case the channel is already closed.
func isClosed(ch <-chan struct{}) bool {
    select {
    case <-ch:
        return true
    default:
    }
    return false
}
Looks nice, but there's a catch. When I start a data stream for just one symbol, it starts and closes the data feed exactly as expected.
However, when starting multiple data feeds, the above code somehow never closes the websocket and just keeps streaming data forever. Without the isClosed check, I get panics from trying to close an already closed channel, but with the check in place, nothing gets closed at all.
Looking at the implementation of the binance.WsKlineServe function, it's quite obvious that it just wraps a new websocket with each invocation and then returns the done and stop channels.
The documentation gives the following usage example:
wsKlineHandler := func(event *binance.WsKlineEvent) {
    fmt.Println(event)
}
errHandler := func(err error) {
    fmt.Println(err)
}
doneC, stopC, err := binance.WsKlineServe("LTCBTC", "1m", wsKlineHandler, errHandler)
if err != nil {
    fmt.Println(err)
    return
}
<-doneC
Because receiving from the doneC channel blocks, I removed it and thought that storing the stopC channel and using it later to stop the data feed would work. However, it only does so for one single instance; when multiple streams are open, this doesn't work anymore.
Any idea why that's the case and how to fix it?
Firstly, this is dangerous:
if isClosed(stopC) {
    DbgPrint(" Channel is already closed. Do nothing for: " + symbol)
} else {
    close(stopC) // <- can't be sure the channel is still open
}
There is no guarantee that after your polling check of the channel state, the channel will still be in that same state on the next line of code. So this code could in theory panic if it's called concurrently.
If you want an asynchronous action to occur on the channel close, it's best to do this explicitly from its own goroutine. So you could try this:
go func() {
    stopC := c.state.clientSymChanMap[clientType][symbol]
    <-stopC
    // stopC is definitely closed now
    delete(c.state.clientSymChanMap[clientType], symbol)
}()
P.S. You do need some sort of mutex on your map, since the delete is asynchronous: you need to ensure any adds to the map don't race with it.
P.P.S. Channels are reclaimed by the GC when they go out of scope. If you are no longer reading from a channel, it does not need to be explicitly closed to be reclaimed.
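A minimal sketch of such a guard, reusing the names from the question (the wrapper type and its mu field are assumptions):

type clientState struct {
    mu               sync.Mutex
    clientSymChanMap map[bn.ClientType]map[string]chan struct{}
}

func (s *clientState) putStopC(ct bn.ClientType, symbol string, stopC chan struct{}) {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.clientSymChanMap[ct][symbol] = stopC
}

func (s *clientState) removeStopC(ct bn.ClientType, symbol string) {
    s.mu.Lock()
    defer s.mu.Unlock()
    delete(s.clientSymChanMap[ct], symbol)
}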
Using channels for stopping a goroutine or closing something is very tricky. There are lots of things you can do wrong or forget to do.
context.WithCancel abstracts that complexity away, making the code more readable and maintainable.
Some code snippets:
ctx, cancel := context.WithCancel(context.TODO())

TheThingToCancel(ctx, ...)

// Whenever you want to stop TheThingToCancel. Can be called multiple times.
cancel()
Then in a for loop you'd often have a select like this:
for {
    select {
    case <-ctx.Done():
        return
    default:
    }
    // do stuff
}
Here is some code that is closer to your specific case of an open connection:
func TheThingToCancel(ctx context.Context) (context.CancelFunc, error) {
    ctx, cancel := context.WithCancel(ctx)

    conn, err := net.Dial("tcp", ":12345")
    if err != nil {
        cancel()
        return nil, err
    }

    // close the connection as soon as the context is cancelled
    go func() {
        <-ctx.Done()
        _ = conn.Close()
    }()

    go func() {
        defer func() {
            _ = conn.Close()
            // make sure the context is always cancelled to avoid a goroutine leak
            cancel()
        }()
        var bts = make([]byte, 1024)
        for {
            n, err := conn.Read(bts)
            if err != nil {
                return
            }
            fmt.Println(bts[:n])
        }
    }()

    return cancel, nil
}
It returns the cancel function to be able to close it from the outside.
Cancelling a context can be done many times over without a panic, unlike closing a channel multiple times; that is one advantage. You can also derive contexts from other contexts, and thereby stop many different routines by cancelling a single parent context. Carefully designed, this is very powerful for shutting down groups of routines that belong together but also need to be stoppable individually.
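A minimal runnable sketch of that parent/child relationship:

package main

import (
    "context"
    "fmt"
)

func main() {
    parent, cancelAll := context.WithCancel(context.Background())
    feedA, cancelA := context.WithCancel(parent)
    feedB, cancelB := context.WithCancel(parent)
    defer cancelA()
    defer cancelB()

    cancelAll() // cancelling the parent cancels every derived context

    <-feedA.Done()
    <-feedB.Done()
    fmt.Println("both feeds stopped")
}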
I'm working on a Go project that requires calling an initiation function (initFunction) in a separate goroutine (to ensure this function does not interfere with the rest of the project). initFunction must not take more than 30 seconds, so I thought I would use context.WithTimeout. Lastly, initFunction must be able to report errors to the caller, so I thought of making an error channel and calling initFunction from an anonymous function, to receive and report the error.
func RunInitGoRoutine(initFunction func(config string) error) error {
    initErr := make(chan error)
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)

    go func() {
        <-ctx.Done() // Line 7
        err := initFunction(config)
        initErr <- err
    }()

    select {
    case res := <-initErr:
        return res
    case <-ctx.Done():
        err := errors.New("Deadline")
        return err
    }
}
I'm quite new to Go, so I'm asking for feedback about the above code.
I have some doubt about Line 7. I used it to ensure the anonymous function is "included" under ctx and is therefore killed and freed once the timeout expires, but I'm not sure I have done the right thing.
Second, I know I should be calling cancel() somewhere, but I can't put my finger on where.
Lastly, really any feedback is welcome, be it about efficiency, style, correctness or anything else.
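For comparison, here is a minimal sketch of the same function with those doubts addressed. The receive on Line 7 makes the goroutine wait for the timeout before even calling initFunction, so it should go; cancel is deferred so it runs on every return path; the channel is buffered so the goroutine can always send and exit; and config is passed in explicitly here as an assumption, since it isn't defined in the original snippet.

func RunInitGoRoutine(initFunction func(config string) error, config string) error {
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel() // releases the timer's resources on every return path

    initErr := make(chan error, 1) // buffered: the goroutine can send and exit even after a timeout
    go func() {
        initErr <- initFunction(config)
    }()

    select {
    case err := <-initErr:
        return err
    case <-ctx.Done():
        return errors.New("Deadline")
    }
}

One caveat: cancelling the context does not kill the goroutine, since Go has no way to kill a goroutine from the outside. After a timeout, initFunction keeps running until it returns, and its result is simply discarded.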
In Go the practice is to communicate via channels, so the best thing is probably to share a channel that others can consume from.
As you state you are new to Go: I wrote a whole bunch of beginner-level articles on Go at https://marcofranssen.nl/categories/golang.
Read from old to new to get familiar with the language.
Regarding the channel specifics you should have a look at this article.
https://marcofranssen.nl/concurrency-in-go
A practical example of a webserver listening for ctrl+c and then gracefully shutting down the server using channels is described in this blog post.
https://marcofranssen.nl/improved-graceful-shutdown-webserver
In essence we run the server in a background routine
go func() {
    if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
        srv.l.Fatal("Could not listen on", zap.String("addr", srv.Addr), zap.Error(err))
    }
}()
and then we have some code that is blocking the main routine by listening on a channel for the shutdown signal.
quit := make(chan os.Signal, 1)
signal.Notify(quit, os.Interrupt)

sig := <-quit
srv.l.Info("Server is shutting down", zap.String("reason", sig.String()))

ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

srv.SetKeepAlivesEnabled(false)
if err := srv.Shutdown(ctx); err != nil {
    srv.l.Fatal("Could not gracefully shutdown the server", zap.Error(err))
}
srv.l.Info("Server stopped")
This is very similar to your use case: run your init in a background routine and then consume the channel, waiting for the result of the init routine.
package main

import (
    "fmt"
    "time"
)

type InitResult struct {
    Message string
}

func main() {
    initResult := make(chan InitResult)

    go func(c chan<- InitResult) {
        time.Sleep(5 * time.Second)
        // here we are publishing the result on the channel
        c <- InitResult{Message: "Initialization succeeded"}
    }(initResult)

    fmt.Println("Started initializing")

    // here we have a blocking operation consuming the channel
    res := <-initResult
    fmt.Printf("Init result: %s", res.Message)
}
https://play.golang.org/p/_YGIrdNVZx6
You could also add an error field to the struct so you can do your usual error checking.
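For example (a sketch building on the snippet above):

type InitResult struct {
    Message string
    Err     error
}

// on the consuming side:
res := <-initResult
if res.Err != nil {
    log.Fatal(res.Err)
}
fmt.Println(res.Message)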
I've implemented a demo TCP chat server in Go. It works fine, but every time a user disconnects and I try to write a message to the broadcast channel to let the other users know a user has disconnected, it blocks and will not process any new messages from other clients, because it is an unbuffered channel.
I've commented my code and explained it; you can go through it. I don't know why the code blocks. I log messages in this order:
I'm about to write to a channel
I've written to the channel
I've read from the channel
and the messages appear in perfect order, yet my msg channel still blocks.
PS: With a buffered channel the code does not block, but I want to know where my code is getting stuck.
I also tried running my code with the -race flag, but it didn't help.
package main

import (
    "fmt"
    "io"
    "net"
    "sync"
)

func main() {
    msg := make(chan string)          // broadcast channel (making it buffered makes the problem go away)
    allConn := make(map[net.Conn]int) // collection of incoming connections for broadcasting the message
    disConn := make(chan net.Conn)    // client disconnect channel
    newConn := make(chan net.Conn)    // new client connection channel
    mutext := new(sync.RWMutex)       // mutex to assign a unique id to incoming connections
    i := 0

    listener, err := net.Listen("tcp", "127.0.0.1:8081")
    checkErr(err)
    fmt.Println("Tcp server started at 127.0.0.1:8081")

    // Accept incoming connections and store them in the global connection store allConn
    go func() {
        for {
            conn, err := listener.Accept()
            checkErr(err)
            mutext.Lock()
            allConn[conn] = i
            i++
            mutext.Unlock()
            newConn <- conn
        }
    }()

    for {
        select {
        // Wait for a new client message to arrive and broadcast the message
        case umsg := <-msg:
            fmt.Println("Broadcast Channel: Already Read")
            bmsg := []byte(umsg)
            for conn1 := range allConn {
                _, err := conn1.Write(bmsg)
                checkErr(err)
            }
        // Handle client disconnection [disConn]
        case conn := <-disConn:
            mutext.RLock()
            fmt.Println("user disconneting", allConn[conn])
            mutext.RUnlock()
            delete(allConn, conn)
            fmt.Println("Disconnect: About to Write")
            // this send results in deadlock even when the channel is empty; a buffered channel resolves the issue
            // need to know why
            msg <- fmt.Sprintf("Disconneting", allConn[conn])
            fmt.Println("Disconnect: Already Written")
        // Read each client's incoming messages onto the broadcast channel; on disconnect put the conn on the disConn channel
        case conn := <-newConn:
            go func(conn net.Conn) {
                for {
                    buf := make([]byte, 64)
                    n, err := conn.Read(buf)
                    if err != nil {
                        if err == io.EOF {
                            disConn <- conn
                            break
                        }
                    }
                    fmt.Println("Client: About to Write")
                    msg <- string(buf[0:n])
                    fmt.Println("Client: Already Written")
                }
            }(conn)
            mutext.RLock()
            fmt.Println("User Connected", allConn[conn])
            mutext.RUnlock()
        }
    }
}

func checkErr(err error) {
    if err != nil {
        panic(err)
    }
}
In Go, an unbuffered channel is a "synchronisation point". That is, if you have a channel c and do c <- value, the goroutine blocks until someone is ready to do v = <-c (and the converse holds: receiving from an unbuffered channel blocks until a value is available, but this is possibly less surprising). Specifically, on an unbuffered channel the receive completes before the send completes.
Since you only have a single goroutine running the select loop, it cannot loop back to reading from the channel, so the write blocks until something can read.
You could, in theory, get around this by doing something like go func() { msg <- fmt.Sprintf("Disconneting", allConn[conn]) }(), essentially spawning a short-lived goroutine to do the write.
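Applied to the disconnect case in your code, that could look like this sketch. Note the id is read before the delete, under the read lock, so the spawned goroutine never touches the map, and a format verb is added so the id actually appears in the message:

case conn := <-disConn:
    mutext.RLock()
    id := allConn[conn]
    mutext.RUnlock()
    delete(allConn, conn)
    go func() {
        msg <- fmt.Sprintf("Disconnecting %d", id) // short-lived goroutine; the main loop stays free to receive
    }()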
So I am trying to use Kafka for my application, which has a producer logging actions into the Kafka MQ and a consumer which reads them off the MQ. Since my application is in Go, I am using Shopify's Sarama to make this possible.
Right now, I'm able to read off the MQ and print the message contents using fmt.Printf.
However, I would really like the error handling to be better than console printing, and I am willing to go the extra mile.
Code right now for consumer connection:
mqCfg := sarama.NewConfig()

master, err := sarama.NewConsumer([]string{brokerConnect}, mqCfg)
if err != nil {
    panic(err) // Don't want to panic when an error occurs; handle it instead
}
And the processing of messages:
go func() {
    defer wg.Done()
    for message := range consumer.Messages() {
        var msgContent Message
        _ = json.Unmarshal(message.Value, &msgContent)
        fmt.Printf("Reading message of type %s with id : %d\n", msgContent.Type, msgContent.ContentId) // Don't want to print it
    }
}()
My questions (I am new to testing Kafka and to Kafka in general):
Where could errors occur in the above program, so that I can handle them? Any sample code would be great for me to start with. The error conditions I can think of are when msgContent doesn't actually contain the Type or ContentId fields in the JSON.
In Kafka, are there situations when the consumer is trying to read at the current offset but for some reason is unable to (even when the JSON is well formed)? Is it possible for my consumer to backtrack, say, x steps before the failed offset and re-process the offsets? Or is there a better way to do this? Again, what could these situations be?
I'm open to reading and trying things.
Regarding 1) Check where I log error messages below. This is more or less what I would do.
Regarding 2) I don't know about trying to walk backwards in a topic. It's very much possible by just creating a consumer over and over, with its starting offset decremented by one each time, but I wouldn't advise it, as you'll most likely end up replaying the same message over and over. I do advise saving your offset often so you can recover if things go south.
Below is code that I believe addresses most of your questions. I haven't tried compiling it, and the Sarama API has been changing lately, so the API may currently differ a bit.
func StartKafkaReader(wg *sync.WaitGroup, lastgoodoff int64, out chan<- *Message) error {
    wg.Add(1)
    go func() {
        defer wg.Done()

        // to track the last known good offset we processed, which is
        // updated after each successfully processed event.
        saveprogress := func(off int64) {
            // Save the offset somewhere... a file...
            // I've also used kafka to store progress,
            // using a special topic as a WAL
        }
        // deferred via a closure so the latest value is saved, not the starting one
        defer func() { saveprogress(lastgoodoff) }()

        client, err := sarama.NewClient("clientId", brokers, sarama.NewClientConfig())
        if err != nil {
            log.Error(err)
            return
        }
        defer client.Close()

        consumerConfig := sarama.NewConsumerConfig()
        consumerConfig.OffsetMethod = sarama.OffsetMethodManual
        consumerConfig.OffsetValue = lastgoodoff

        consumer, err := sarama.NewConsumer(client, topic, partition, "consumerId", consumerConfig)
        if err != nil {
            log.Error(err)
            return
        }
        defer consumer.Close()

        for {
            select {
            case event := <-consumer.Events():
                if event.Err != nil {
                    log.Error(event.Err)
                    return
                }
                msgContent := &Message{}
                err = json.Unmarshal(event.Value, msgContent)
                if err != nil {
                    log.Error(err)
                    continue // skip this message (or return to stop without updating the offset)
                }
                // Send the message on to be processed.
                out <- msgContent
                lastgoodoff = event.Offset
            }
        }
    }()
    return nil
}
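Hypothetical usage of the sketch above (brokers, topic, partition, and the Message type are assumed to be defined elsewhere, and errors inside the reader are only logged):

var wg sync.WaitGroup
out := make(chan *Message, 100)

if err := StartKafkaReader(&wg, 0, out); err != nil {
    panic(err)
}

// Consume decoded messages; decoding errors were already handled by the reader.
for m := range out {
    fmt.Printf("type=%s id=%d\n", m.Type, m.ContentId)
}
wg.Wait()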