Gorilla websockets, multiple messages in one event - go

I am using the chat app example from gorilla/websocket, but I have an issue: sometimes, when the backend needs to send two different messages to the client, they are sent in a single Message event. This is bad for me because JSON.parse fails when handed two JSON documents in one string.
I could split the message on newlines and parse each JSON separately, but I would prefer not to.
If I put a timeout on the backend, everything works.
Can I do something to prevent this? If not, can you please explain why?
This is the chat example:
https://github.com/gorilla/websocket/tree/master/examples/chat
This is my code where I broadcast 2 messages:
if err == nil {
    c.SendMessageWithOrders(DB)
    data := models.EventSuccess{
        Event: utils.EventOrdersCreateSuccess,
    }
    toReturnBytes, err := json.Marshal(data)
    if err == nil {
        toReturn := BroadcastOne{
            ID:      c.ID,
            Message: toReturnBytes,
        }
        NewHub.broadcastOne <- &toReturn
    }
}
c.SendMessageWithOrders(DB) also does NewHub.broadcastOne <- &toReturn, with different data.

The following code in client.go reduces the amount of data sent over the network by sending queued chat messages as a single WebSocket message:
n := len(c.send)
for i := 0; i < n; i++ {
    w.Write(newline)
    w.Write(<-c.send)
}
Fix the problem by deleting the code from the example. The optimization is not required.
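For illustration, a minimal sketch of the send branch of the example's writePump with the batching removed (untested; it reuses the example's writeWait, c.send and c.conn, and writes with conn.WriteMessage instead of NextWriter):

// Each queued message is written as its own WebSocket frame, so the
// browser gets one Message event per JSON payload.
case message, ok := <-c.send:
    c.conn.SetWriteDeadline(time.Now().Add(writeWait))
    if !ok {
        // The hub closed the channel.
        c.conn.WriteMessage(websocket.CloseMessage, []byte{})
        return
    }
    if err := c.conn.WriteMessage(websocket.TextMessage, message); err != nil {
        return
    }

With this, each send on NewHub.broadcastOne arrives as its own Message event on the client, so JSON.parse sees exactly one document at a time.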

Related

why does my "newPendingTransactions" geth subscription not get any events?

I'm facing the following challenge: I am trying to subscribe to "newPendingTransactions" via websocket. I can successfully connect to the websocket. Once connected, I would expect a stream of new incoming pending transactions, which I can read and propagate to a channel (chan json.RawMessage).
...
c, httpResponse, err := websocket.DefaultDialer.Dial(u.String(), req.Header)
...
for {
    messageType, message, err := c.ReadMessage()
    if err != nil {
        log.Println("ERROR:", err)
        os.Exit(1)
        return
    }
    _, _ = message, messageType
    // s.Out is the outgoing chan json.RawMessage
    s.Out <- message
}
...
Sadly, I don't receive any messages (pending txs), only one when closing the whole construct. When I check my node directly with "txpool.status" in the console, I can see that new pending txs come in all the time; they just don't get propagated to my websocket connection. Can anyone help me out here? Maybe I am missing a parameter when starting the geth node itself?
Here is how I start my geth node:
geth --http.api eth,web3,debug,txpool,net,shh,db,admin,debug --http --ws --ws.api eth,web3,debug,txpool,net,shh,db,admin,debug --ws.origins localhost --gcmode full --http.port=8545 --maxpeers 120
Here is my "admin.nodeInfo":
Geth/v1.10.16-stable-20356e57/linux-amd64/go1.17.5
I found out what I did wrong:
It was not a forgotten geth parameter; the "newPendingTransactions" were in fact propagated correctly.
I tested the websocket connection with another tool called "wscat" and sent the necessary RPC via the console (resulting in a stream of tx hashes!):
{"id": 1, "method": "eth_subscribe", "params": ["newPendingTransactions"]}
That showed me that the error had to be in the Go code itself.
The last line inside the for loop
s.Out <- message
was sending messages into an unbuffered channel. For this to work, some other goroutine must consume the channel's messages on the other side.
It also helps to use a buffered channel like this:
s.Out = make(chan Message, 1000)
... so at least 1,000 messages can be buffered before sends start to block.
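For completeness, a minimal sketch of the consuming side, assuming s.Out is the channel the read loop writes to (json.RawMessage, as in the question); a buffer alone only delays the problem if nothing drains the channel:

// Drain s.Out in a separate goroutine so the websocket read loop never
// blocks on an un-consumed channel.
s.Out = make(chan json.RawMessage, 1000)

go func() {
    for msg := range s.Out {
        // handle each pending-tx notification here
        fmt.Println(string(msg))
    }
}()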

How can I receive data forever from a TCP server

I am trying to create a TCP client that receives data from a TCP server, but after the server sends data I only receive it once, even if the server sends many messages. I want to receive data forever, and I don't know what my problem is.
Client:
func main() {
    tcpAddr := "localhost:3333"
    conn, err := net.DialTimeout("tcp", tcpAddr, time.Second*7)
    if err != nil {
        log.Println(err)
    }
    defer conn.Close()
    // conn.Write([]byte("Hello World"))
    connBuf := bufio.NewReader(conn)
    for {
        bytes, err := connBuf.ReadBytes('\n')
        if err != nil {
            log.Println("Recv Error:", err)
        }
        if len(bytes) > 0 {
            fmt.Println(string(bytes))
        }
        time.Sleep(time.Second * 2)
    }
}
I'm following this example to create a TCP test server.
Server:
// Handles incoming requests.
func handleRequest(conn net.Conn) {
    // Make a buffer to hold incoming data.
    buf := make([]byte, 1024)
    // Read the incoming connection into the buffer.
    _, err := conn.Read(buf)
    if err != nil {
        fmt.Println("Error reading:", err.Error())
    }
    fmt.Println(buf)
    // Send a response back to the person contacting us.
    var msg string
    fmt.Scanln(&msg)
    conn.Write([]byte(msg))
    // Close the connection when you're done with it.
    conn.Close()
}
Read requires a Write on the other side of the connection
want to receive data forever
Then you have to send data forever. There's a for loop on the receiving end, but no looping on the sending end. The server writes its message once and closes the connection.
Server expects to get msg from client but client doesn't send it
// conn.Write([]byte("Hello World"))
That's supposed to provide the msg value to the server
_, err := conn.Read(buf)
So those two lines don't match.
Client expects a newline but server isn't sending one
fmt.Scanln expects to put each whitespace separated value into the corresponding argument. It does not capture the whitespace. So:
Only up to the first whitespace of what you type into server's stdin will be stored in msg
Newline will not be stored in msg.
But your client is doing
bytes, err := connBuf.ReadBytes('\n')
The \n never comes. The client never gets done reading that first msg.
bufio.NewScanner would be a better way to collect data from stdin, since you're likely to want to capture whitespace as well. Don't forget to append the newline to each line of text you send, because the client expects it!
Working code
I put these changes together into a working example on the playground. To get it working in that context, I had to make a few other changes too.
Ran the server and clients in the same process
Hard-coded 3 clients so the program ends in a limited amount of time
Hard-coded 10 receives in the client so the program can end
Hard-coded 3 server connections handled so the program can end
Removed fmt.Scanln and had the server just return the original message sent (because the playground provides no stdin mechanism)
Should be enough to get you started.
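To make those fixes concrete, here is a rough sketch of a revised handler that keeps the connection open and sends newline-terminated lines (assuming the client above stays as-is, the server still reads replies from stdin, and bufio, net, os and log are imported at package level):

func handleRequest(conn net.Conn) {
    defer conn.Close()
    stdin := bufio.NewScanner(os.Stdin)
    for stdin.Scan() {
        // bufio.Scanner keeps whitespace inside the line but strips the
        // trailing newline, so append it back: the client's ReadBytes('\n')
        // is waiting for it.
        if _, err := conn.Write([]byte(stdin.Text() + "\n")); err != nil {
            log.Println("write error:", err)
            return
        }
    }
}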

Go GRPC client disconnect terminates Go server

Bit of a newb to both Go and GRPC, so bear with me.
Using go version go1.14.4 windows/amd64, proto3, and the latest grpc (1.31, I think). I'm trying to set up a bidi streaming connection that will likely be open for longer periods of time. Everything works locally, except that if I terminate the client (or one of them), it kills the server as well with the following error:
Unable to trade data rpc error: code = Canceled desc = context canceled
This error comes out of this server-side code:
func (s *exchangeserver) Trade(stream proto.ExchageService_TradeServer) error {
    endchan := make(chan int)
    defer close(endchan)
    go func() {
        for {
            req, err := stream.Recv()
            if err == io.EOF {
                break
            }
            if err != nil {
                log.Fatal("Unable to trade data ", err)
                break
            }
            fmt.Println("Got ", req.GetNumber())
        }
        endchan <- 1
    }()
    go func() {
        for {
            resp := &proto.WordResponse{Word: "Hello again "}
            err := stream.Send(resp)
            if err != nil {
                log.Fatal("Unable to send from server ", err)
                break
            }
            time.Sleep(time.Duration(500 * time.Millisecond))
        }
        endchan <- 1
    }()
    <-endchan
    return nil
}
And the Trade() RPC is so simple it isn't worth posting the .proto.
The error is clearly coming out of the Recv() call, but that call blocks until it sees a message, like the client disconnect, at which point I would expect it to kill the stream, not the whole process. I've tried adding a stats handler with HandleConn(context, stats.ConnStats), and it does catch the disconnect before the server dies, but I can't do anything with it. I've even tried creating a global channel that the stats handler pushes a value into when HandleRPC(context, stats.RPCStats) is called, and only allowing Recv() to be called when there's a value in the channel, but that can't be right; that's like blocking a blocking function for safety, and it didn't work anyway.
This has to be one of those really stupid mistakes that beginners make. Of what use would gRPC be if it couldn't handle a client disconnect without dying? Yet I have read probably a trillion (ish) posts from every corner of the internet and no one else is having this issue. On the contrary, the more popular version of this question is "My client stream stays open after disconnect". I'd expect that issue. Not this one.
I'm not 100% sure how this is supposed to behave, but I note that you are starting separate receive and send goroutines at the same time. This might be valid but is not the typical approach. Instead, you would usually receive what you want to process and then start a nested loop to handle the reply.
See an example of a typical bidirectional streaming implementation here: https://grpc.io/docs/languages/go/basics/
func (s *routeGuideServer) RouteChat(stream pb.RouteGuide_RouteChatServer) error {
    for {
        in, err := stream.Recv()
        if err == io.EOF {
            return nil
        }
        if err != nil {
            return err
        }
        key := serialize(in.Location)
        ... // look for notes to be sent to client
        for _, note := range s.routeNotes[key] {
            if err := stream.Send(note); err != nil {
                return err
            }
        }
    }
}
Sending and receiving at the same time might be valid for your use case, but if that is what you are trying to do then I believe your handling of the channels is incorrect. Either way, please read on to understand the issue, as it is a common one in Go.
You have a single channel which only blocks until it receives a single message; once it unblocks, the function ends and the channel is closed (by the defer).
You are trying to send to this channel from both your send and receive loops.
When the last one to finish tries to send to the channel, it will already have been closed (by the first to finish) and the server will panic. Annoyingly, you won't actually see any sign of this, as the server will exit before the goroutine can dump its panic (no clues - probably why you landed here).
See an example of the issue here (gRPC code stripped out):
https://play.golang.org/p/GjfgDDAWNYr
Note: comment out the last pause in the main func and the panic stops showing up reliably (as in your case).
So one simple fix would probably be to create two separate channels (one for send, one for receive) and block on both. This, however, would necessarily leave the send loop open if you don't get a chance to respond, so it is probably better to structure things like the example above unless you have good reason to pursue something different.
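For illustration, a rough, untested sketch of that two-channel variant using the question's own types (proto.ExchageService_TradeServer, proto.WordResponse); it returns errors instead of calling log.Fatal, in line with the RouteChat example above:

func (s *exchangeserver) Trade(stream proto.ExchageService_TradeServer) error {
    // Each goroutine sends on its own channel exactly once, so nothing
    // ever sends on a closed channel.
    recvDone := make(chan error, 1)
    sendDone := make(chan error, 1)

    go func() {
        for {
            req, err := stream.Recv()
            if err == io.EOF {
                recvDone <- nil
                return
            }
            if err != nil {
                recvDone <- err
                return
            }
            fmt.Println("Got ", req.GetNumber())
        }
    }()

    go func() {
        for {
            resp := &proto.WordResponse{Word: "Hello again "}
            if err := stream.Send(resp); err != nil {
                sendDone <- err
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }()

    // Block on both, as described above: when the client disconnects,
    // Recv and Send both return errors, so both goroutines finish.
    recvErr := <-recvDone
    sendErr := <-sendDone
    if recvErr != nil {
        return recvErr
    }
    return sendErr
}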
Another possibility is some sort of server/request context mix-up, but I'm pretty sure the above will fix it - drop an update with your server setup code if you're still having issues after the above changes.

Updating receive and send message size for grpc in golang

I have a gRPC server written in Go and I am trying to update the receive and send message size to 20MB instead of the default 4MB with the following code:
var s *grpc.Server
s = grpc.NewServer(grpc.MaxRecvMsgSize(1024*1024*20), grpc.MaxSendMsgSize(1024*1024*20))
pb.RegisterProductServer(s,mysrv)
But the above doesn't seem to be working, as I still get an error when I try to call from a client: "received message larger than max (5807570 vs. 4194304)".
Not sure what's overriding the size.
I haven't had the chance to test this yet, but have you tried adding the corresponding options on the client stub? They can be attached as dial options:
maxMsgSize := 1024 * 1024 * 20
conn, err := grpc.Dial(address, grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(maxMsgSize), grpc.MaxCallSendMsgSize(maxMsgSize)))
if err != nil {
    // ...
}
defer conn.Close()
client := pb.NewProductClient(conn)
// ...
I don't know anything about your use case, but if your response data can be delivered piecewise then the streaming APIs might be useful.
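If the larger limit is only needed for specific RPCs, the same call options can also be passed per call rather than as dial-time defaults; a minimal sketch (GetProduct, ctx and req are placeholders, not names from the question):

// Per-call variant: attach the size options to an individual RPC.
resp, err := client.GetProduct(ctx, req,
    grpc.MaxCallRecvMsgSize(maxMsgSize),
    grpc.MaxCallSendMsgSize(maxMsgSize),
)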

Kafka Error Handling using Shopify Sarama

So I am trying to use Kafka for my application, which has a producer logging actions into the Kafka MQ and a consumer which reads them off the MQ. Since my application is in Go, I am using Shopify's Sarama to make this possible.
Right now, I'm able to read off the MQ and print the message contents using a
fmt.Printf
However, I would really like the error handling to be better than console printing, and I am willing to go the extra mile.
Code right now for consumer connection:
mqCfg := sarama.NewConfig()
master, err := sarama.NewConsumer([]string{brokerConnect}, mqCfg)
if err != nil {
    panic(err) // Don't want to panic when error occurs, instead handle it
}
And the processing of messages:
go func() {
    defer wg.Done()
    for message := range consumer.Messages() {
        var msgContent Message
        _ = json.Unmarshal(message.Value, &msgContent)
        fmt.Printf("Reading message of type %s with id : %d\n", msgContent.Type, msgContent.ContentId) // Don't want to print it
    }
}()
My questions (I am new to testing Kafka and new to Kafka in general):
Where could errors occur in the above program, so that I can handle them? Any sample code would be great for me to start with. The error conditions I can think of are when msgContent doesn't really contain any Type or ContentId fields in the JSON.
In Kafka, are there situations when the consumer is trying to read at the current offset but for some reason is not able to (even when the JSON is well formed)? Is it possible for my consumer to backtrack, say, x steps before the failed offset and re-process those offsets? Or is there a better way to do this? Again, what could these situations be?
I'm open to reading and trying things.
Regarding 1) Check where I log error messages below. This is more or less what I would do.
Regarding 2) I don't know about trying to walk backwards in a topic. It's very much possible by just creating a consumer over and over, with its starting offset minus one each time, but I wouldn't advise it, as most likely you'll end up replaying the same message over and over. I do advise saving your offset often so you can recover if things go south.
Below is code that I believe addresses most of your questions. I haven't tried compiling it, and the Sarama API has been changing lately, so it may currently differ a bit.
func StartKafkaReader(wg *sync.WaitGroup, lastgoodoff int64, out chan<- *Message) error {
    wg.Add(1)
    go func() {
        defer wg.Done()
        // To track the last known good offset we processed, which is
        // updated after each successfully processed event.
        saveprogress := func(off int64) {
            // Save the offset somewhere... a file...
            // I've also used kafka to store progress,
            // using a special topic as a WAL.
        }
        defer saveprogress(lastgoodoff)
        client, err := sarama.NewClient("clientId", brokers, sarama.NewClientConfig())
        if err != nil {
            log.Error(err)
            return
        }
        defer client.Close()
        consumerConfig := sarama.NewConsumerConfig()
        consumerConfig.OffsetMethod = sarama.OffsetMethodManual
        consumerConfig.OffsetValue = int64(lastgoodoff)
        consumer, err := sarama.NewConsumer(client, topic, partition, "consumerId", consumerConfig)
        if err != nil {
            log.Error(err)
            return
        }
        defer consumer.Close()
        for {
            select {
            case event := <-consumer.Events():
                if event.Err != nil {
                    log.Error(event.Err)
                    return
                }
                msgContent := &Message{}
                err = json.Unmarshal(event.Value, msgContent)
                if err != nil {
                    log.Error(err)
                    continue // continue to skip this message, or return to stop without updating the offset.
                }
                // Send the message on to be processed.
                out <- msgContent
                lastgoodoff = event.Offset
            }
        }
    }()
    return nil
}
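As an aside, since the snippet above targets an older Sarama API, here is a rough, untested sketch of the same error handling against the newer partition-consumer API (brokerConnect, topic, partition, lastgoodoff, Message and out are reused from the snippets above; log.Println stands in for whatever logger you use):

mqCfg := sarama.NewConfig()
mqCfg.Consumer.Return.Errors = true // surface consumer errors on a channel instead of only logging them

master, err := sarama.NewConsumer([]string{brokerConnect}, mqCfg)
if err != nil {
    return err // or retry with backoff instead of panicking
}
defer master.Close()

pc, err := master.ConsumePartition(topic, partition, lastgoodoff)
if err != nil {
    return err
}
defer pc.Close()

for {
    select {
    case msg := <-pc.Messages():
        var msgContent Message
        if err := json.Unmarshal(msg.Value, &msgContent); err != nil {
            log.Println("bad payload, skipping:", err) // e.g. missing Type/ContentId
            continue
        }
        out <- &msgContent
        lastgoodoff = msg.Offset // persist this somewhere so you can recover
    case kerr := <-pc.Errors():
        // broker/partition-level errors arrive here when Return.Errors is set
        log.Println("kafka error:", kerr)
    }
}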
