Bit of a newb to both Go and gRPC, so bear with me.
Using go version go1.14.4 windows/amd64, proto3, and the latest grpc (1.31, I think). I'm trying to set up a bidi streaming connection that will likely be open for longer periods of time. Everything works locally, except that if I terminate the client (or one of them), it kills the server as well with the following error:
Unable to trade data rpc error: code = Canceled desc = context canceled
This error comes out of this code server side
func (s *exchangeserver) Trade(stream proto.ExchageService_TradeServer) error {
	endchan := make(chan int)
	defer close(endchan)

	go func() {
		for {
			req, err := stream.Recv()
			if err == io.EOF {
				break
			}
			if err != nil {
				log.Fatal("Unable to trade data ", err)
				break
			}
			fmt.Println("Got ", req.GetNumber())
		}
		endchan <- 1
	}()

	go func() {
		for {
			resp := &proto.WordResponse{Word: "Hello again "}
			err := stream.Send(resp)
			if err != nil {
				log.Fatal("Unable to send from server ", err)
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
		endchan <- 1
	}()

	<-endchan
	return nil
}
And the Trade() RPC is so simple it isn't worth posting the .proto.
The error is clearly coming out of the Recv() call, but that call blocks until it sees a message, like the client disconnect, at which point I would expect it to kill the stream, not the whole process. I've tried adding a stats handler with HandleConn(context, stats.ConnStats), and it does catch the disconnect before the server dies, but I can't do anything with it. I've even tried creating a global channel that the stats handler pushes a value into when HandleRPC(context, stats.RPCStats) is called, and only allowing Recv() to be called when there's a value in the channel, but that can't be right; that's like blocking a blocking function for safety, and it didn't work anyway.
This has to be one of those real stupid mistakes that beginners make. Of what use would gRPC be if it couldn't handle a client disconnect without dying? Yet I have read probably a trillion (ish) posts from every corner of the internet and no one else is having this issue. On the contrary, the more popular version of this question is "My client stream stays open after disconnect". I'd expect that issue. Not this one.
I'm not 100% sure how this is supposed to behave, but I note that you are starting separate receive and send goroutines at the same time. This might be valid, but it is not the typical approach. Instead you would usually receive what you want to process and then start a nested loop to handle the reply.
See an example of typical bidirectional streaming implementation from here: https://grpc.io/docs/languages/go/basics/
func (s *routeGuideServer) RouteChat(stream pb.RouteGuide_RouteChatServer) error {
	for {
		in, err := stream.Recv()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		key := serialize(in.Location)
		... // look for notes to be sent to client
		for _, note := range s.routeNotes[key] {
			if err := stream.Send(note); err != nil {
				return err
			}
		}
	}
}
Sending and receiving at the same time might be valid for your use case, but if that is what you are trying to do then I believe your handling of the channels is incorrect. Either way, please read on to understand the issue, as it is a common one in Go.
You have a single channel which only blocks until it receives a single message; once it unblocks, the function ends and the channel is closed (by the defer).
You are trying to send to this channel from both your send and receive loops.
When the last one to finish tries to send on the channel, it will already have been closed (by the first to finish) and the goroutine will panic. Annoyingly, you won't actually see any sign of this, as the server exits before the goroutine can dump its panic (no clues, which is probably why you landed here).
See an example of the issue here (gRPC code stripped out):
https://play.golang.org/p/GjfgDDAWNYr
Note: comment out the last pause in the main func to stop the panic showing reliably (as in your case).
So one simple fix would probably be to create two separate channels (one for send, one for receive) and block on both. This, however, could leave the send loop running if the receive side finishes first, so it is probably better to structure it like the example above unless you have good reason to pursue something different.
Another possibility is some sort of server/request context mix-up, but I'm pretty sure the above will fix it. Drop an update with your server setup code if you're still having issues after the above changes.
Am I correct to assume that with the Go language, these two formulations are always equivalent?
func f() {
	// Do stuff
}
go f()
and
func f() {
	go func() {
		// do stuff
	}()
}
The question was basically answered in the comments, but although in the simple case both examples do the same thing, one may be preferred over the other depending on what the actual goal is.
One that the comments mention is allowing the user of your code to decide on concurrency vs you (the writer) deciding. I think this rule of thumb is generally preferred especially for people writing packages for others to use (even if perhaps the others are in your own team). I've also seen this rule of thumb espoused elsewhere on "the internet", and I think arose because in the early days of Go, people were using (and abusing) concurrency features just because they were available. For example, returning a channel from which you'd receive a value instead of just returning the value.
Another difference is that in the top example, f() cannot close over variables that you might want accessible when run as a goroutine; you'd have to pass everything into f() as parameters. In the second example, the anonymous function in go func() {...} can close over variables local to f().
One example where I prefer the second style is starting servers. For example:
func (app *Application) start() {
	if app.HttpsServer != nil {
		go func() {
			err := app.HttpsServer.ListenAndServeTLS(
				app.Config.TLSCertificateFile,
				app.Config.TLSKeyFile)
			if err != nil && err != http.ErrServerClosed {
				// unexpected error
				log.Printf(log.Critical, "error with https server: %s", err)
			}
		}()
	}
	go func() {
		err := app.HttpServer.ListenAndServe()
		if err != nil && err != http.ErrServerClosed {
			// unexpected error
			log.Printf(log.Critical, "error with http server: %s", err)
		}
	}()
}
Here the intention is that Application is configured and controlled in main(), the servers (one on https, one on http) are started and program flow returns to main(). In my specific case, main() waits for a signal from the OS then shuts down the servers and exits. Both goroutines close over app and have access to the data it contains. Is this "good" or "bad"...who knows, but it works well for me.
So essentially... "It depends".
I have a websocket client which spins a goroutine, using Conn.ReadJSON to read incoming JSON messages.
I would like the goroutine to be able to nicely react to ctx.Done(), hence I wrote the following code:
for {
	msg := new(message)
	select {
	case <-ctx.Done():
		fmt.Println("Halting gracefully")
		return
	default:
		err := Conn.ReadJSON(msg) // msg is already a pointer
		if err != nil {
			fmt.Println(err)
			break
		}
		c.inbound <- *msg
	}
}
Obviously, in the current state, ReadJSON is a blocking call: once it is reached, execution stops somewhere within it (not sure where) while it waits for a message to be received.
This means the loop is almost never blocked at the select statement itself, and hence it will never handle ctx.Done() appropriately.
Gorilla's documentation doesn't show any case <- IsThereAnyNewMessage() style channel being available.
How can it be handled elegantly?
I was following some tutorials for creating a bidirectional gRPC client and server. The client sends values, and whenever the maximum value seen so far changes on the server, the server responds to the client with the current max. Finally, I'd like to write some test cases, but I have no experience with testing scenarios, so I'm not sure if I'm doing the correct thing or not.
func TestClientConnection(t *testing.T) {
	creds, _ := credentials.NewClientTLSFromFile("../server-cert.pem", "")
	conn, err := grpc.Dial(address, grpc.WithTransportCredentials(creds))
	if err != nil {
		t.Error("Had problem with connection, NOT PASSED")
	}
	defer conn.Close()

	c := proto.NewHerdiusServerClient(conn)
	stream, err := c.CheckMax(context.Background())
	if err != nil {
		t.Error("Had problem with stream, NOT PASSED")
		return
	}

	err = stream.Send(&proto.MaxRequest{Val: int32(10)})
	err = stream.Send(&proto.MaxRequest{Val: int32(12)})
	err = stream.Send(&proto.MaxRequest{Val: int32(13)})
	err = stream.Send(&proto.MaxRequest{Val: int32(9)})
	if err != nil {
		t.Error("Had problem with stream, NOT PASSED")
		return
	}
	return
}
Right now this scenario passes when I run it with go test, but I also want to test whether something is received from the server side.
My second question: if I want to split this test into different scenarios, for example to check whether the server is connected, whether the stream is connected, or whether a response was received from the server side, how can I do that? Should I create another helper to set up the connection and stream and use it in the test functions?
Create a context with a timeout (context.WithTimeout), and after sending your data call Recv on the stream. Check whether you receive anything within the timeout.
The specifics depend on the protocol here; you may need a goroutine to Recv if you have to send the server data at the same time.
As for your second question, the Go philosophy is to have clear, explicit, readable tests for each scenario. It's OK if some code is duplicated. It's much more important that each test in isolation is readable and understandable. In cases where the tests are very repetitive one should use table driven tests, but in the cases you describe that sounds like separate tests to me.
It's useful to have tests that "build up" functionality. One test to test connection, the other connection and sending, yet another connection and sending and receiving. This way when tests fail, by rerunning them individually you can very quickly isolate the problem even before you look at the tests' code.
How can I check whether the conn is available for Read or Write in a loop? If the conn is closed or not available, we should stop the loop.
For example:
package main

import (
	"log"
	"net"
)

func main() {
	conn, err := net.Dial("tcp", "127.0.0.1:1111")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	for {
		buf := make([]byte, 1)
		_, err := conn.Read(buf)
		if err != nil {
			// currently we can only stop the loop
			// when any error occurs
			log.Fatal(err)
		}
	}
}
You can get a number of errors, depending on how the connection was closed. The only error that you can count on receiving from a Read is io.EOF, which is the value used to indicate that a connection was closed normally.
Other errors can be checked against the net.Error interface for its Timeout and Temporary methods. These are usually of the type net.OpError. Any non-temporary error returned from a Write is fatal, as it indicates the write couldn't succeed, but note that due to the underlying network API, writes returning no error still aren't guaranteed to have succeeded.
In general you can just follow the io.Reader api.
When Read encounters an error or end-of-file condition after successfully reading n > 0 bytes, it returns the number of bytes read. It may return the (non-nil) error from the same call or return the error (and n == 0) from a subsequent call. An instance of this general case is that a Reader returning a non-zero number of bytes at the end of the input stream may return either err == EOF or err == nil. The next Read should return 0, EOF.
If there was data read, you handle that first. After you handle the data, you can break from the loop on any error. If it was io.EOF, the connection is closed normally, and any other errors you can handle as you see fit.
If you are using Go 1.16 or newer and only working with the standard libraries (not some arbitrary networking stack), you can use a function like this to check the closed errors and handle them differently than other errors:
func isNetConnClosedErr(err error) bool {
	switch {
	case errors.Is(err, net.ErrClosed),
		errors.Is(err, io.EOF),
		errors.Is(err, syscall.EPIPE):
		return true
	default:
		return false
	}
}
Note that there is an os.ErrClosed error (an alias for fs.ErrClosed) that you could add if you were dealing with files, but you don't need it when only using the net package. While your code only shows a client, there is probably a server side doing listener.Accept() that gets closed by a different goroutine, and you don't want to log nasty errors when that happens, so you want net.ErrClosed in the list above. As for syscall.EPIPE, this comes in handy when writing to a remote end that has already closed the connection. Remember that with network connections, you may not get the EPIPE on the first write, as the OS may need to send some data to discover that the remote end was closed.
http.Serve either returns an error as soon as it is called or blocks if successfully executing.
How can I make it so that if it blocks it does so in its own goroutine? I currently have the following code:
func serveOrErr(l net.Listener, handler http.Handler) error {
	starting := make(chan struct{})
	serveErr := make(chan error)
	go func() {
		starting <- struct{}{}
		if err := http.Serve(l, handler); err != nil {
			serveErr <- err
		}
	}()
	<-starting
	select {
	case err := <-serveErr:
		return err
	default:
		return nil
	}
}
This seemed like a good start and works on my test machine, but I believe there is no guarantee that serveErr <- err runs before case err := <-serveErr, leading to inconsistent results due to a data race if http.Serve were to produce an error.
http.Serve either returns an error as soon as it is called or blocks if successfully executing
This assumption is not correct. And I believe it rarely occurs. http.Serve calls net.Listener.Accept in the loop – an error can occur any time (socket closed, too many open file descriptors etc.). It's http.ListenAndServe, usually being used for running http servers, which often fails early while binding listening socket (no permissions, address already in use).
In my opinion what you're trying to do is wrong, unless really your net.Listener.Accept is failing on the first call for some reason. Is it? If you want to be 100% sure your server is working, you could try to connect to it (and maybe actually transmit something), but once you successfully bound the socket I don't see it really necessary.
You could use a timeout on your select statement, e.g.
timeout := time.After(5 * time.Millisecond) // TODO: adjust the value
select {
case err := <-serveErr:
	return err
case <-timeout:
	return nil
}
This way your select will block until serveErr has a value or the specified timeout has elapsed. Note that the execution of your function will therefore block the calling goroutine for up to the duration of the specified timeout.
Rob Pike's excellent talk on go concurrency patterns might be helpful.