While reading this open-source code, I ran into two questions about the following two functions:
func listenTCP() {
    for {
        conn, err := tcpListener.Accept()
        if err != nil {
            if netErr, ok := err.(net.Error); ok && netErr.Temporary() {
                log.Printf("Temporary error while accepting connection: %s", netErr)
            }
            log.Fatalf("Unrecoverable error while accepting connection: %s", err)
            return
        }
        go handleTCPConn(conn) // check below
    }
}
func handleTCPConn(conn net.Conn) {
    log.Printf("Accepting TCP connection from %s with destination of %s", conn.RemoteAddr().String(), conn.LocalAddr().String())
    defer conn.Close()

    remoteConn, err := conn.(*tproxy.Conn).DialOriginalDestination(false)
    if err != nil {
        log.Printf("Failed to connect to original destination [%s]: %s", conn.LocalAddr().String(), err)
        return
    }
    defer remoteConn.Close()

    var streamWait sync.WaitGroup
    streamWait.Add(2)

    streamConn := func(dst io.Writer, src io.Reader) {
        io.Copy(dst, src)
        streamWait.Done()
    }

    go streamConn(remoteConn, conn)
    go streamConn(conn, remoteConn)

    streamWait.Wait()
}
Based on my understanding, I drew this diagram:
As you can see, handleTCPConn creates two goroutines to transmit traffic in the two directions (left -> right and right -> left).
My questions are:
The code uses sync.WaitGroup. If traffic only ever flows left -> right and there is no traffic in the opposite direction, handleTCPConn will never return, right? If so, the listenTCP for loop will keep piling up these handleTCPConn calls; is there nothing wrong with this program?
Every time handleTCPConn is called, it creates a TCP connection to the remote server:
remoteConn, err := conn.(*tproxy.Conn).DialOriginalDestination(false)
My question follows on from question 1: you can see that handleTCPConn copies the traffic once in each direction and then ends. Is the TCP connection closed when handleTCPConn ends?
And if only part of a file's data has been transmitted (from the application layer's point of view), is it closed then too? (I mean the case A->B->C: partial data, then C->B->A: ACK.)
Per the Go docs (https://pkg.go.dev/io#Copy):
Copy copies from src to dst until either EOF is reached on src or an error occurs. It returns the number of bytes copied and the first error encountered while copying, if any.
So when you start this program up, it will sit there and wait for you to hit the 'proxy', and it will send your bytes from the source to the destination... when the destination responds, it will copy all those bytes back. If the destination doesn't write any bytes and doesn't close the connection, I believe it'll sit there forever, waiting for the far side to either close the socket or respond.
The same is true if you make this connection and the remote side starts sending data (without a request first). If the "local" side never sends any bytes and doesn't close the connection, this code would wait forever as well.
As long as the remote side closes the connection gracefully, this code should exit with "0" bytes received and no error. If the remote side sends a reset, you should get an error of some kind.
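One common way to keep a handleTCPConn goroutine from blocking forever when only one direction ever finishes is to half-close the destination once its copy completes, so the peer sees EOF and the io.Copy running the other way can return too. A minimal sketch of that variant (not part of the original program; it assumes the concrete connection types expose CloseWrite, as *net.TCPConn does):

// copyThenCloseWrite copies src to dst, then half-closes dst's write side so
// the peer's read returns io.EOF and the copy in the opposite direction can
// also finish.
func copyThenCloseWrite(dst, src net.Conn, wg *sync.WaitGroup) {
    defer wg.Done()
    io.Copy(dst, src)
    type closeWriter interface{ CloseWrite() error }
    if cw, ok := dst.(closeWriter); ok {
        cw.CloseWrite() // sends a FIN; supported by *net.TCPConn and types embedding it
    }
}

handleTCPConn would then call go copyThenCloseWrite(remoteConn, conn, &streamWait) and go copyThenCloseWrite(conn, remoteConn, &streamWait) instead of the anonymous streamConn closure.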
Related
I am trying to create a TCP client that receives data from a TCP server, but even though the server sends many messages, I only receive data once. I want the client to keep receiving data forever, and I don't know what my problem is.
Client:
func main() {
    tcpAddr := "localhost:3333"
    conn, err := net.DialTimeout("tcp", tcpAddr, time.Second*7)
    if err != nil {
        log.Println(err)
    }
    defer conn.Close()

    // conn.Write([]byte("Hello World"))

    connBuf := bufio.NewReader(conn)
    for {
        bytes, err := connBuf.ReadBytes('\n')
        if err != nil {
            log.Println("Recv Error:", err)
        }
        if len(bytes) > 0 {
            fmt.Println(string(bytes))
        }
        time.Sleep(time.Second * 2)
    }
}
I'm following this example to create a TCP test server.
Server:
// Handles incoming requests.
func handleRequest(conn net.Conn) {
    // Make a buffer to hold incoming data.
    buf := make([]byte, 1024)
    // Read the incoming connection into the buffer.
    _, err := conn.Read(buf)
    if err != nil {
        fmt.Println("Error reading:", err.Error())
    }
    fmt.Println(buf)
    // Send a response back to person contacting us.
    var msg string
    fmt.Scanln(&msg)
    conn.Write([]byte(msg))
    // Close the connection when you're done with it.
    conn.Close()
}
Read requires a Write on the other side of the connection
want to receive data forever
Then you have to send data forever. There's a for loop on the receiving end, but no looping on the sending end. The server writes its message once and closes the connection.
Server expects to get msg from client but client doesn't send it
// conn.Write([]byte("Hello World"))
That's supposed to provide the msg value to the server
_, err := conn.Read(buf)
So those two lines don't match.
Client expects a newline but server isn't sending one
fmt.Scanln expects to put each whitespace-separated value into the corresponding argument. It does not capture the whitespace. So:
Only up to the first whitespace of what you type into the server's stdin will be stored in msg.
The newline will not be stored in msg.
But your client is doing
bytes, err := connBuf.ReadBytes('\n')
The \n never comes. The client never gets done reading that first msg.
bufio.NewScanner would be a better way to collect data from stdin, since you're likely to want to capture whitespace as well. Don't forget to append the newline to each line of text you send, because the client expects it!
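For illustration, a rough sketch of the server's send side with those fixes applied (loop over stdin with bufio.NewScanner and append the newline the client waits for); it assumes the bufio and os imports and reuses the conn variable from handleRequest:

// Read whole lines from stdin and send each one, newline included, so the
// client's ReadBytes('\n') can complete on every message.
scanner := bufio.NewScanner(os.Stdin)
for scanner.Scan() {
    msg := scanner.Text() // captures spaces, drops the trailing newline
    if _, err := conn.Write([]byte(msg + "\n")); err != nil {
        fmt.Println("Error writing:", err.Error())
        break
    }
}
conn.Close()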
Working code
I put these changes together into a working example on the playground. To get it working in that context, I had to make a few other changes too.
Running server and client in the same process
Hard-coded 3 clients so the program ends in a limited amount of time
Hard-coded 10 receives in the client so the program can end
Hard-coded 3 server connections handled so the program can end
Removed fmt.Scanln and have the server just return the original message sent (because the playground provides no stdin mechanism)
Should be enough to get you started.
Bit of a newb to both Go and gRPC, so bear with me.
Using go version go1.14.4 windows/amd64, proto3, and the latest gRPC (1.31, I think). I'm trying to set up a bidi streaming connection that will likely be open for longer periods of time. Everything works locally, except that if I terminate the client (or one of them) it kills the server as well with the following error:
Unable to trade data rpc error: code = Canceled desc = context canceled
This error comes out of this code on the server side:
func (s *exchangeserver) Trade(stream proto.ExchageService_TradeServer) error {
    endchan := make(chan int)
    defer close(endchan)

    go func() {
        for {
            req, err := stream.Recv()
            if err == io.EOF {
                break
            }
            if err != nil {
                log.Fatal("Unable to trade data ", err)
                break
            }
            fmt.Println("Got ", req.GetNumber())
        }
        endchan <- 1
    }()

    go func() {
        for {
            resp := &proto.WordResponse{Word: "Hello again "}
            err := stream.Send(resp)
            if err != nil {
                log.Fatal("Unable to send from server ", err)
                break
            }
            time.Sleep(time.Duration(500 * time.Millisecond))
        }
        endchan <- 1
    }()

    <-endchan
    return nil
}
And the Trade() RPC is so simple it isn't worth posting the .proto.
The error is clearly coming out of the Recv() call, but that call blocks until it sees a message, like the client disconnect, at which point I would expect it to kill the stream, not the whole process. I've tried adding a service handler with HandleConn(context, stats.ConnStats) and it does catch the disconnect before the server dies, but I can't do anything with it. I've even tried creating a global channel that the service handler pushes a value into when HandleRPC(context, stats.RPCStats) is called and only allowing Recv() to be called when there's a value in the channel, but that can't be right - that's like blocking a blocking function for safety - and it didn't work anyway.
This has to be one of those really stupid mistakes that beginners make. Of what use would gRPC be if it couldn't handle a client disconnect without dying? Yet I have read probably a trillion(ish) posts from every corner of the internet and no one else is having this issue. On the contrary, the more popular version of this question is "My client stream stays open after disconnect". I'd expect that issue. Not this one.
I'm not 100% sure how this is supposed to behave, but I note that you are starting separate receive and send goroutines at the same time. This might be valid, but it is not the typical approach. Instead, you would usually receive what you want to process and then start a nested loop to handle the reply.
See an example of a typical bidirectional streaming implementation here: https://grpc.io/docs/languages/go/basics/
func (s *routeGuideServer) RouteChat(stream pb.RouteGuide_RouteChatServer) error {
    for {
        in, err := stream.Recv()
        if err == io.EOF {
            return nil
        }
        if err != nil {
            return err
        }
        key := serialize(in.Location)
        ... // look for notes to be sent to client
        for _, note := range s.routeNotes[key] {
            if err := stream.Send(note); err != nil {
                return err
            }
        }
    }
}
Sending and receiving at the same time might be valid for your use case, but if that is what you are trying to do then I believe your handling of the channels is incorrect. Either way, please read on to understand the issue, as it is a common one in Go.
You have a single channel which only blocks until it receives a single message; once it unblocks, the function ends and the channel is closed (by the defer).
You are trying to send to this channel from both your send and receive loops.
When the last one to finish tries to send to the channel, it will already have been closed (by the first to finish) and the server will panic. Annoyingly, you won't actually see any sign of this, as the server will exit before the goroutine can dump its panic (no clues - probably why you landed here).
See an example of the issue here (gRPC code stripped out):
https://play.golang.org/p/GjfgDDAWNYr
Note: comment out the last pause in the main func and the panic stops showing reliably (as in your case).
So one simple fix would probably be to create two separate channels (one for send, one for receive) and block on both - this, however, would leave the send loop running if you never get a chance to respond, so it is probably better to structure things like the example above unless you have good reason to pursue something different.
Another possibility is some sort of server/request context mix-up, but I'm pretty sure the above will fix it - drop an update with your server setup code if you're still having issues after the above changes.
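For what it's worth, here is a rough sketch (reusing the types from your snippet, untested against your .proto) of the restructuring described above: receive in the handler goroutine itself, let the send goroutine stop when the stream's context is canceled, and return errors rather than calling log.Fatal, which exits the whole server process:

func (s *exchangeserver) Trade(stream proto.ExchageService_TradeServer) error {
    // Send loop: stops when the stream's context is canceled (client gone or
    // handler returned) or when Send fails.
    go func() {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            select {
            case <-stream.Context().Done():
                return
            case <-ticker.C:
                resp := &proto.WordResponse{Word: "Hello again "}
                if err := stream.Send(resp); err != nil {
                    return // don't log.Fatal here: that terminates the whole process
                }
            }
        }
    }()

    // Receive loop: runs in the handler goroutine itself.
    for {
        req, err := stream.Recv()
        if err == io.EOF {
            return nil
        }
        if err != nil {
            return err // e.g. "context canceled" when this client disconnects
        }
        fmt.Println("Got ", req.GetNumber())
    }
}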
When I close a browser, I want the websocket to disconnect within 3 seconds instead of 1 minute. The following just keeps writing into a void without error until what I guess is the TCP/IP timeout, not the SetWriteDeadline.
f := func(ws *websocket.Conn) {
    for {
        select {
        case msg := <-out:
            ws.SetWriteDeadline(time.Now().Add(3 * time.Second))
            if _, err := ws.Write([]byte(msg)); err != nil {
                fmt.Println(err)
                return
            }
        case <-time.After(3 * time.Second):
            fmt.Println("timeout 3")
            return
        }
    }
}
return websocket.Handler(f)
I need to wait for this error:
write tcp [::1]:8080->[::1]:65459: write: broken pipe
before the connection finally closes, which takes about a minute or more.
You are using WriteDeadline correctly. The deadline specifies the time for writing data to the TCP stack's buffers, not the time by which the peer receives the data (if it receives it at all).
To reliably detect closed connections, the application should send PINGs to the peer and wait for the expected PONGs. The package you are using does not support this functionality, but the Gorilla package does. The Gorilla chat application shows how to use PING and PONG to detect closed connections.
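A rough sketch of that ping/pong pattern, modeled on the Gorilla chat example (this uses the gorilla/websocket API, not the x/net/websocket package from the question, and the timeout constants are illustrative):

const (
    pongWait   = 3 * time.Second   // how long to wait for the next pong
    pingPeriod = pongWait * 9 / 10 // ping a little more often than pongWait
    writeWait  = time.Second
)

// readLoop extends the read deadline whenever a pong arrives, so a vanished
// peer makes ReadMessage fail within roughly pongWait.
func readLoop(conn *websocket.Conn) {
    conn.SetReadDeadline(time.Now().Add(pongWait))
    conn.SetPongHandler(func(string) error {
        conn.SetReadDeadline(time.Now().Add(pongWait))
        return nil
    })
    for {
        if _, _, err := conn.ReadMessage(); err != nil {
            return
        }
    }
}

// pingLoop sends a ping on every tick; a write error means the peer is gone.
func pingLoop(conn *websocket.Conn) {
    ticker := time.NewTicker(pingPeriod)
    defer ticker.Stop()
    for range ticker.C {
        conn.SetWriteDeadline(time.Now().Add(writeWait))
        if err := conn.WriteMessage(websocket.PingMessage, nil); err != nil {
            return
        }
    }
}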
Snippet from WebSocket RFC:
To Start the WebSocket Closing Handshake with a status code (Section 7.4) /code/ and an optional close reason (Section 7.1.6) /reason/, an endpoint MUST send a Close control frame, as described in Section 5.5.1, whose status code is set to /code/ and whose close reason is set to /reason/. Once an endpoint has both sent and received a Close control frame, that endpoint SHOULD Close the WebSocket Connection as defined in Section 7.1.1.
I am trying to do the close handshake using the Gorilla WebSocket package with the following code:
Server:
// Create upgrader function
conn, err := upgrader.Upgrade(w, r, nil)
// If there is an error stop everything.
if err != nil {
    fmt.Println(err)
    return
}
for {
    // Read Messages
    _, _, err := conn.ReadMessage()
    // Client is programmed to send a close frame immediately...
    // When reading close frame resend close frame with same
    // reason and code
    conn.WriteMessage(websocket.CloseMessage, websocket.FormatCloseMessage(1000, "woops"))
    fmt.Println(err)
    break
}
Client:
d := &websocket.Dialer{}
conn, _, err := d.Dial("ws://localhost:8080", nil)
if err != nil {
    fmt.Println(err)
    return
}
go func() {
    for {
        // Read Messages
        _, _, err := conn.ReadMessage()
        if c, k := err.(*websocket.CloseError); k {
            if c.Code == 1000 {
                // Never entering since c.Code == 1005
                fmt.Println(err)
                break
            }
        }
    }
}()
conn.WriteMessage(websocket.CloseMessage, websocket.FormatCloseMessage(1000, "woops"))
for {}
The server reads the close frame as expected, outputting the following:
websocket: close 1000 (normal): woops
However, the client seems to stop reading once it sends its close message. ReadMessage keeps returning error 1005. What am I doing wrong?
The server responds to a close frame with the code:
c.WriteControl(CloseMessage, []byte{}, time.Now().Add(writeWait))
This is translated to close code 1005 (no status received) by the client.
The 1000 "woops" close frame written by the server application is not seen by the client application because the websocket connection stops reading from the network after receiving the first close frame.
The client application should exit the loop when an error is returned from ReadMessage. There's no need to check for specific close codes.
for {
    // Read Messages
    _, _, err := conn.ReadMessage()
    if err != nil {
        break
    }
}
Unrelated to the issue in the question, the server application should close the websocket connection after sending the close frame.
Also unrelated to the issue in the question, use select {} instead of for {} to block the main goroutine. The former simply blocks the goroutine. The latter spins using CPU time.
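Sketches of those two unrelated fixes, for completeness (using the names from the question's snippets):

// Server: after echoing the close frame, close the underlying connection.
conn.WriteMessage(websocket.CloseMessage, websocket.FormatCloseMessage(1000, "woops"))
conn.Close()

// Client: block the main goroutine without spinning.
select {}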
I'm not sure how to formulate the question, or whether it really relates only to the Go language, but what I am trying to do is have a TCP server and client exchange data: the client will stream a large amount of data in smaller chunks to the server, the server will wait to read every chunk and then reply with a status code, which the client reads and uses to decide what to do next.
I use the function below as a test to read the data on both the client and the server (please note, I am aware that it is not perfect, but it's just for testing):
func createBufferFromConn(conn net.Conn) *bytes.Buffer {
    buffer := &bytes.Buffer{}
    doBreak := false

    for {
        incoming := make([]byte, BUFFER_SIZE)

        conn.SetReadDeadline(time.Now().Add(time.Second * 2))
        bytesRead, err := conn.Read(incoming)
        conn.SetReadDeadline(time.Time{})

        if err != nil {
            if err == io.EOF {
                fmt.Println(err)
            } else if neterr, ok := err.(net.Error); ok && neterr.Timeout() {
                fmt.Println(err)
            }
            doBreak = true
        }

        if doBreak == false && bytesRead == 0 {
            continue
        }

        if bytesRead > 0 {
            buffer.Write(incoming[:bytesRead])
            if bytes.HasSuffix(buffer.Bytes(), []byte("|")) {
                bb := bytes.Trim(buffer.Bytes(), "|")
                buffer.Reset()
                buffer.Write(bb)
                doBreak = true
            }
        }

        if doBreak {
            break
        }
    }

    return buffer
}
Now, in my case, if I connect via telnet (the Go code also includes a client() that connects to the server()) and type something like test 12345|, everything works just fine and the buffer contains all the bytes written from telnet (except the pipe, which is removed by the Trim() call).
If I remove the if bytes.HasSuffix(buffer.Bytes(), []byte("|")) block from the code, then I get a timeout after 2 seconds, again as expected, because no data is received in that time and the server closes the connection; and if I don't set a read deadline on the connection, it will wait forever to read data and will never know when to stop.
I guess my question is, if I send multiple chunks of data, do I have to specify a delimiter of my own so that I know when to stop reading from the connection, and avoid waiting forever or waiting for the server to time out the connection?
I guess my question is, if I send multiple chunks of data, do I have to specify a delimiter of my own so that I know when to stop reading from the connection and avoid waiting forever or waiting for the server to time out the connection
Yes. TCP is a stream protocol, and there's no way to determine where messages within the protocol start and stop without framing them in some way.
A more common framing method is to send a size prefix, so that the receiver knows how much to read without having to buffer the results and scan for a delimiter. This can be as simple as message_length:data.... (see also netstring and type-length-value encoding).
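For illustration, a minimal sketch of a binary length prefix (a 4-byte big-endian size header rather than the textual message_length:data form); the function names are just placeholders and the encoding/binary and io imports are assumed:

// writeFrame sends one message as <4-byte big-endian length><payload>.
func writeFrame(w io.Writer, payload []byte) error {
    if err := binary.Write(w, binary.BigEndian, uint32(len(payload))); err != nil {
        return err
    }
    _, err := w.Write(payload)
    return err
}

// readFrame reads the length header, then exactly that many payload bytes,
// so the reader never scans for a delimiter or guesses when to stop.
func readFrame(r io.Reader) ([]byte, error) {
    var n uint32
    if err := binary.Read(r, binary.BigEndian, &n); err != nil {
        return nil, err
    }
    payload := make([]byte, n)
    if _, err := io.ReadFull(r, payload); err != nil {
        return nil, err
    }
    return payload, nil
}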