HTTP reuse connection condition - go

From the official documentation https://golang.org/pkg/net/http/#Client.Do it seems that the RoundTripper may not be able to re-use the TCP connection for the next "keep-alive" request if the Body is not closed and not fully read. What exactly does this "may" mean?
From what I can see, Close does not necessarily need to be called when the whole Body is read. So what is the necessary requirement for connection re-use?
Code snippet (note the commented-out defer resp.Body.Close()) which issues multiple requests in a loop; analysing it with netstat suggests the same TCP connection is used for all of them:
for nextPage != "" {
	req, err := http.NewRequest("GET", nextPage, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", *token))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	// defer resp.Body.Close()
	result := []*User{}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		panic(err)
	}
	nextPage = getNextPage(resp.Header.Get("Link"))
}

The documentation doesn't say "don't call Close() for keep-alive". It says that if you want to re-use the connection, you must fully read the body and call Close().

You can only count on what's in the documentation. In some circumstances (go version, OS, architecture, response content length, etc.) it may reuse the connection without fully reading it, or it may not. If you want to ensure the connection will be reused, you must fully read the body and close it.
I generally write a quick helper:
func cleanUpResponse(resp *http.Response) {
	// Drain and close the body so the underlying connection can be re-used.
	if resp != nil && resp.Body != nil {
		io.Copy(ioutil.Discard, resp.Body)
		resp.Body.Close()
	}
}
This is safe to defer even before error checking.
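Applied to the loop from the question (User, getNextPage and token come from that snippet), a rough sketch looks like this; the helper is called at the end of each iteration rather than deferred, since deferred calls inside a loop only run when the surrounding function returns:
for nextPage != "" {
	req, err := http.NewRequest("GET", nextPage, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", *token))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}

	result := []*User{}
	err = json.NewDecoder(resp.Body).Decode(&result)
	// Drain whatever the decoder left behind and close, so the
	// keep-alive connection can be re-used by the next iteration.
	cleanUpResponse(resp)
	if err != nil {
		panic(err)
	}
	nextPage = getNextPage(resp.Header.Get("Link"))
}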

Related

Sending more than one request to a TCP server fails

I'm trying to send more than one request to a TCP server in Go but for some reason the second request is never sent, even if it is identical to the first one.
This is the server:
func StartServer(host string) {
	l, err := net.Listen("tcp", host)
	log.Println("Starting server on:", host)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	defer l.Close()
	log.Println("Server is running...")
	for {
		// Listen for an incoming connection.
		conn, err := l.Accept()
		if err != nil {
			log.Fatal("Error accepting: ", err.Error())
		}
		// Handle connections in a new goroutine.
		fmt.Println("got a request")
		go handleRequest(conn)
	}
}
And this is the function in the client that sends the requests to the server:
func (u *User) ConnectToServer(host string, partner string) {
	conn, _ := net.Dial("tcp", host)
	fmt.Fprintf(conn, "message1\n")
	fmt.Fprintf(conn, "message2\n")
}
EDIT: In the handleRequest function I read the input as follows:
// Handles incoming requests.
func handleRequest(conn net.Conn) {
	rec, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		log.Println("Error reading:", err.Error())
	}
	log.Println("Got message: ", rec)
	// Send a response back to person contacting us.
	conn.Write([]byte("Message received."))
	// conn.Close()
}
According to the documentation, ReadString only returns the data up to the first line break, so I believe the second message is ignored because of this. How can I read both messages? Should I change the delimiter in the client maybe?
The server should read multiple lines given that the client sends multiple lines. Use bufio.Scanner to read lines:
func handleRequest(conn net.Conn) {
	defer conn.Close()
	scanner := bufio.NewScanner(conn)
	for scanner.Scan() {
		fmt.Printf("Got message: %s\n", scanner.Bytes())
		conn.Write([]byte("Message received."))
	}
	if err := scanner.Err(); err != nil {
		fmt.Printf("error reading connection: %v\n", err)
	}
}
Some notes about the code:
To prevent a resource leak, the function closes the connection on return.
The scanner loop breaks on error reading the connection. If the error is not io.EOF, then the function logs the error.
bufio.Reader can also be used to read lines, but bufio.Scanner is easier to use.
In handleRequest(), you call ReadString() on the bufio Reader. Let's look at the docs:
ReadString reads until the first occurrence of delim in the input,
returning a string containing the data up to and including the
delimiter. If ReadString encounters an error before finding a
delimiter, it returns the data read before the error and the error
itself (often io.EOF). ReadString returns err != nil if and only if
the returned data does not end in delim. For simple uses, a Scanner
may be more convenient.
Considering the messages you're sending are terminated by a \n, you must call ReadString() twice on the same reader to get both of them. You probably want to call ReadString() in a loop until it returns an error, making sure to distinguish io.EOF from real errors.
Here's a playground with some inspiration. Note: it appears the playground does not allow TCP sockets, so you will have to run this elsewhere.
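A minimal sketch of such a ReadString loop, assuming the same newline-delimited messages as in the question:
func handleRequest(conn net.Conn) {
	defer conn.Close()
	reader := bufio.NewReader(conn)
	for {
		// Read up to and including the next '\n'.
		msg, err := reader.ReadString('\n')
		if len(msg) > 0 {
			log.Println("Got message:", msg)
			conn.Write([]byte("Message received."))
		}
		if err != nil {
			if err != io.EOF {
				log.Println("Error reading:", err)
			}
			return // io.EOF just means the client closed its side
		}
	}
}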

Unusually High Amount of TCP Connection Timeout Errors

I am using a Go TCP Client to connect to our Go TCP Server.
I am able to connect to the Server and run commands properly, but every so often there is an unusually high number of consecutive TCP connection errors reported by my TCP Client when trying either to connect to our TCP Server or to send a message once connected:
dial tcp kubernetes_node_ip:exposed_kubernetes_port:
connectex: A connection attempt failed because the connected party did not properly
respond after a period of time, or established connection failed because connected
host has failed to respond.
read tcp unfamiliar_ip:unfamiliar_port->kubernetes_node_ip:exposed_kubernetes_port
wsarecv: A connection attempt failed because the connected party did not properly
respond after a period of time, or established connection failed because connected
host has failed to respond.
I say "unusually high" because I assume that the number of times these errors occur should be very minimal (about 5 or less within the hour). Note that I am not dismissing the possibility of this being caused by connection instabilities, as I have also noticed that it is possible to run several commands in rapid succession without any errors.
However, I am still going to post my code in case I am doing something wrong.
Below is the code that my TCP Client uses to connect to our server:
serverAddress, err := net.ResolveTCPAddr("tcp", kubernetes_ip+":"+kubernetes_port)
if err != nil {
	fmt.Println(err)
	return
}
// Never stop asking for commands from the user.
for {
	// Connect to the server.
	serverConnection, err := net.DialTCP("tcp", nil, serverAddress)
	if err != nil {
		fmt.Println(err)
		continue
	}
	defer serverConnection.Close()
	// Added to prevent connection timeout errors, but doesn't seem to be helping
	// because said errors happen within just 1 or 2 minutes.
	err = serverConnection.SetDeadline(time.Now().Add(10 * time.Minute))
	if err != nil {
		fmt.Println(err)
		continue
	}
	// Ask for a command from the user and convert to JSON bytes...
	// Send message to server.
	_, err = serverConnection.Write(clientMsgBytes)
	if err != nil {
		err = merry.Wrap(err)
		fmt.Println(merry.Details(err))
		continue
	}
	err = serverConnection.CloseWrite()
	if err != nil {
		err = merry.Wrap(err)
		fmt.Println(merry.Details(err))
		continue
	}
	// Wait for a response from the server and print...
}
Below is the code that our TCP Server uses to accept client requests:
// We only supply the port so the IP can be dynamically assigned:
serverAddress, err := net.ResolveTCPAddr("tcp", ":"+server_port)
if err != nil {
	return err
}
tcpListener, err := net.ListenTCP("tcp", serverAddress)
if err != nil {
	return err
}
defer tcpListener.Close()
// Never stop listening for client requests.
for {
	clientConnection, err := tcpListener.AcceptTCP()
	if err != nil {
		fmt.Println(err)
		continue
	}
	go func() {
		// Add client connection to Job Queue.
		// Note that `clientConnections` is a buffered channel with a size of 1500.
		// Since I am the only user connecting to our server right now, I do not think
		// this is a channel blocking issue.
		clientConnections <- clientConnection
	}()
}
Below is the code that our TCP Server uses to process client requests:
defer clientConnection.Close()
// Added to prevent connection timeout errors, but doesn't seem to be helping
// because said errors happen within just 1 or 2 minutes.
err := clientConnection.SetDeadline(time.Now().Add(10 * time.Minute))
if err != nil {
	return err
}
// Read the full TCP message.
// Does not stop until the client's CloseWrite() produces an EOF.
clientMsgBytes, err := ioutil.ReadAll(clientConnection)
if err != nil {
	err = merry.Wrap(err)
	return nil, err
}
// Process the message bytes...
My questions are:
Am I doing something wrong in the above code, or is the above decent enough for basic TCP Client-Server operations?
Is it okay that both the TCP Client and TCP Server have code that defers closing their one connection?
I seem to recall that calling defer inside a loop does nothing. How do I properly close Client connections before starting new ones?
Some extra information:
Said errors are not logged by the TCP Server, so aside from
connection instabilities, this might also be a
Kubernetes/Docker-related issue.
It seems this piece of code does not act as you think it does. The defer statement on the connection close only runs when the function returns, not when an iteration ends. So as far as I can see, you are creating a lot of connections on the client side without closing them, which could be the problem.
serverAddress, err := net.ResolveTCPAddr("tcp", kubernetes_ip+":"+kubernetes_port)
if err != nil {
	fmt.Println(err)
	return
}
// Never stop asking for commands from the user.
for {
	// Connect to the server.
	serverConnection, err := net.DialTCP("tcp", nil, serverAddress)
	if err != nil {
		fmt.Println(err)
		continue
	}
	defer serverConnection.Close()
	// Added to prevent connection timeout errors, but doesn't seem to be helping
	// because said errors happen within just 1 or 2 minutes.
	err = serverConnection.SetDeadline(time.Now().Add(10 * time.Minute))
	if err != nil {
		fmt.Println(err)
		continue
	}
	// Ask for a command from the user and send to the server...
	// Wait for a response from the server and print...
}
I suggest writing it this way:
func start() {
	serverAddress, err := net.ResolveTCPAddr("tcp", kubernetes_ip+":"+kubernetes_port)
	if err != nil {
		fmt.Println(err)
		return
	}
	for {
		if err := listen(serverAddress); err != nil {
			fmt.Println(err)
		}
	}
}

func listen(serverAddress *net.TCPAddr) error {
	// Connect to the server.
	serverConnection, err := net.DialTCP("tcp", nil, serverAddress)
	if err != nil {
		return err
	}
	defer serverConnection.Close()
	// Never stop asking for commands from the user.
	for {
		// Added to prevent connection timeout errors, but doesn't seem to be helping
		// because said errors happen within just 1 or 2 minutes.
		err = serverConnection.SetDeadline(time.Now().Add(10 * time.Minute))
		if err != nil {
			return err
		}
		// Ask for a command from the user and send to the server...
		// Wait for a response from the server and print...
	}
}
Also, you should keep a single connection open, or a pool of connections, instead of opening and closing a connection for every message. When you send a message, you take a connection from the pool (or use the single connection), write the message, wait for the response, and then release the connection back to the pool.
Something like that:
res, err := c.Send([]byte(`my message`))
if err != nil {
	// handle err
}

// the implementation of Send
func (c *Client) Send(msg []byte) ([]byte, error) {
	conn, err := c.pool.Get() // returns a connection from the pool or starts a new one
	if err != nil {
		return nil, err
	}
	// send your message and wait for the response
	// ...
	return response, nil
}

I want to use hijack in golang, while get invalid response on client

I want to use Hijack in golang, but the client receives an invalid response.
func hijack(w http.ResponseWriter, r *http.Request) {
	fmt.Println("start")
	hj, ok := w.(http.Hijacker)
	fmt.Println(ok)
	c, buf, err := hj.Hijack()
	if err != nil {
		panic(err)
	}
	defer c.Close()
	n, err := buf.Write([]byte("hello"))
	if err != nil {
		panic(err)
	}
	fmt.Println("n ==", n)
	err = buf.Flush()
	if err != nil {
		panic(err)
	}
	fmt.Println("end")
}
follow printed on server:
start
true
n == 5
end
but I got following error on the client
localhost sent an invalid response. ERR_INVALID_HTTP_RESPONSE
As the Hijacker's documentation says
Hijack lets the caller take over the connection. After a
call to Hijack the HTTP server library will not do
anything else with the connection.
It becomes the caller's responsibility to manage and close
the connection.
The returned net.Conn may have read or write deadlines already set,
depending on the configuration of the Server. It is the
caller's responsibility to set or clear those deadlines
as needed.
The returned bufio.Reader may contain unprocessed buffered data from
the client.
After a call to Hijack, the original Request.Body must not be used. The
original Request's Context remains valid and is not canceled until
the Request's ServeHTTP method returns.
You need to write to c rather than buf. You also need to send a response status line and a Content-Length header so the client sees a valid HTTP response; in the code below they are set through the ResponseWriter before hijacking, and Hijack flushes them to the connection.
http.HandleFunc("/", func(writer http.ResponseWriter, request *http.Request) {
	fmt.Println("start")
	writer.Header().Add("Content-Length", "5")
	writer.WriteHeader(200)
	hj, ok := writer.(http.Hijacker)
	fmt.Println(ok)
	c, _, err := hj.Hijack()
	if err != nil {
		panic(err)
	}
	n, err := c.Write([]byte("hello"))
	if err != nil {
		panic(err)
	}
	fmt.Println("n ==", n)
	err = c.Close()
	if err != nil {
		panic(err)
	}
	fmt.Println("end")
})
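Alternatively (not from the original answer), the whole HTTP response can be written by hand over the hijacked connection; a minimal sketch:
func hijack(w http.ResponseWriter, r *http.Request) {
	hj, ok := w.(http.Hijacker)
	if !ok {
		http.Error(w, "hijacking not supported", http.StatusInternalServerError)
		return
	}
	conn, _, err := hj.Hijack()
	if err != nil {
		panic(err)
	}
	// After Hijack the server writes nothing further; the status line,
	// headers, blank line and body are all our responsibility.
	defer conn.Close()
	body := "hello"
	fmt.Fprintf(conn, "HTTP/1.1 200 OK\r\nContent-Length: %d\r\nContent-Type: text/plain\r\n\r\n%s", len(body), body)
}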

Should I error-check Close() on a response body?

The docs for net/http have the following example:
resp, err := http.Get("http://example.com/")
if err != nil {
	panic(err)
}
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
fmt.Printf("%s", body)
Close returns an error, but it is not checked. Is there something I'm missing here? The importance of checking every error is frequently emphasized in go, but I see this defer resp.Body.Close() pattern a lot with no error checks.
There are two things to consider: What would you do with it if you checked it and there was an error? And, what would the side-effects be if there was an error?
In most cases, for closing a response body, the answer to both questions is... absolutely nothing. If there's nothing you'd do if there was an error and the error has no appreciable impact, there's no reason to check it.
Also note that Close() returns an error in order to fulfil the io.Closer interface; a given implementation doesn't necessarily ever produce one. You'd need to check the source to know for sure whether it has an error case.
This is a downside of using defer
As a responsible developer, you should check every error-prone point and handle errors as gracefully as you can.
Here are some of the options you can choose in handling this situation:
Option #1
Do not use defer, instead manually call close once you're done with the response's body and simply check for errors then.
Option #2
Create an anonymous function that wraps the closing and error checking code. Something like this:
defer func() {
	err := resp.Body.Close()
	if err != nil {
		log.Fatal(err)
	}
}()
Avoid using panics in your programs. Try to handle the errors gracefully by doing something or at least logging the error.
Additional information can be found here.
To add to #Mihailo option #2, call it option #2.1
Define function dclose() like so:
func dclose(c io.Closer) {
	if err := c.Close(); err != nil {
		log.Fatal(err)
	}
}
use like so:
defer dclose(resp.Body)
Also, the err != nil checks in your code can be moved into a helper:
func errcheck(err error) {
	if err != nil {
		log.Fatal(err)
	}
}
then use:
errcheck(err)
then your code becomes:
resp, err := http.Get("http://example.com/")
errcheck(err)
defer dclose(resp.Body)
body, err := ioutil.ReadAll(resp.Body)
errcheck(err)
fmt.Printf("%s", string(body))
IMHO a little cleaner, perhaps? But I'll wait for the Go aficionados to correct me and highlight the drawbacks.
EDIT
Thanks! #RayfenWindspear
Do replace log.Fatal(err) with log.Println(err) to avoid unnecessarily terminating the program.
EDIT2
Renamed the helper to dclose() to avoid confusion with Go's built-in close().
Have fun!
From what I could gather, the Close() of a net/http response body can't actually err. However, I would rather assume that Close() can fail for any implementation of io.Closer than rely on having studied the internals.
Below is an example using a named return, where the deferred close only sets the returned error if it would otherwise be nil:
func printResponse(url string) (retErr error) {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer func() {
		err := resp.Body.Close()
		if err != nil && retErr == nil {
			retErr = err
		}
	}()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	fmt.Printf("%s", body)
	return nil
}
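On Go 1.20 and later, errors.Join can express the same idea without the explicit nil check; this is just a sketch along those lines, not part of the original answer:
func printResponse(url string) (retErr error) {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	// errors.Join drops nil values, so this keeps whichever of the two errors is non-nil.
	defer func() {
		retErr = errors.Join(retErr, resp.Body.Close())
	}()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	fmt.Printf("%s", body)
	return nil
}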

golang request to Orientdb http interface error

I am playing with golang and orientdb to test them. I have written a tiny web app which, upon a request, fetches a single document from a local orientdb instance and returns it. When I bench this app with apache bench and the concurrency is above 1, I get the following error:
2015/04/08 19:24:07 http: panic serving [::1]:57346: Get http://localhost:2480/document/t1/9:1441: EOF
When I bench orientdb itself, it runs perfectly OK with any concurrency factor.
Also, when I change the URL to fetch anything other than this document (another program written in golang, some internet site, etc.), the app runs OK.
Here is the code:
func main() {
	fmt.Println("starting ....")
	var aa interface{}
	router := gin.New()
	router.GET("/", func(c *gin.Context) {
		ans := getdoc("http://localhost:2480/document/t1/9:1441")
		json.Unmarshal(ans, &aa)
		c.JSON(http.StatusOK, aa)
	})
	router.Run(":3000")
}

func getdoc(addr string) []byte {
	client := new(http.Client)
	req, err := http.NewRequest("GET", addr, nil)
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("admin", "admin")
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("oops", resp, err)
		panic(err)
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	return body
}
thanks in advance
The keepalive connections are getting closed on you for some reason. You might be overwhelming the server, or going past the max number of connections the database can handle.
Also, the current http.Transport connection pool doesn't work well with synthetic benchmarks that make connections as fast as possible, and can quickly exhaust available file descriptors or ports (issue/6785).
To test this, I would set Request.Close = true to prevent the Transport from using the keepalive pool. If that works, one way to handle this while keeping keepalive is to specifically check for an io.EOF and retry that request, possibly with some backoff delay.
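For instance, in the getdoc function above the request could be marked not to re-use its connection (just a sketch of the suggested test; the retry-on-EOF part is left out):
req, err := http.NewRequest("GET", addr, nil)
if err != nil {
	panic(err)
}
req.SetBasicAuth("admin", "admin")
// Ask the Transport to close the connection after this request
// instead of returning it to the keep-alive pool.
req.Close = true
resp, err := client.Do(req)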
