Golang: HTTP proxy server to multiple endpoints based on some conditions

I need to implement the following logic: a proxy server listening on a port receives a request. It is always a POST, and the payload is always XML. I need to look inside it and, based on some conditions (XML tag values), send the original request to either the first or the second backend.
I have something like this using the standard net/http package, and it works, sometimes:
func main() {
    [...]
    server := &http.Server{
        Handler:     h,
        ReadTimeout: time.Duration(*globalTimeout) * time.Millisecond,
    }
    log.Printf("Proxy engine ready and listen at %s, global timeout: %d ms", *listenAddress, *globalTimeout)
    log.Fatalln(server.Serve(listener))
    [...]
}

func (h handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    var requestBody []byte
    if r.Body != nil {
        requestBody, _ = io.ReadAll(r.Body)
    }
    // rewind request body
    r.Body = io.NopCloser(bytes.NewBuffer(requestBody))
    relevantValues := getRelevantValues(requestBody)
    if checkCondition(relevantValues) {
        log.Println("Proxying request to endpoint 1")
        r.URL = h.Target1
    } else {
        log.Println("Proxying request to endpoint 2")
        r.URL = h.Target2
    }
    timeout := time.Duration(*globalTimeout) * time.Millisecond
    resp := handleRequest(r, timeout, r.URL.Scheme)
    if resp != nil {
        defer resp.Body.Close()
        log.Printf("%s %s %v, %s", r.Method, r.URL.String(), r.Proto, resp.Status)
        // Forward response headers
        for k, v := range resp.Header {
            w.Header()[k] = v
        }
        w.WriteHeader(resp.StatusCode)
        // Forward response body
        io.Copy(w, resp.Body)
    } else {
        log.Printf("Response is nil :(")
    }
}

func handleRequest(request *http.Request, timeout time.Duration, scheme string) *http.Response {
    var transport *http.Transport
    if scheme == "https" {
        transport = &http.Transport{
            DialContext:           (&net.Dialer{Timeout: timeout, KeepAlive: 10 * timeout}).DialContext,
            DisableKeepAlives:     *closeConnections,
            TLSHandshakeTimeout:   timeout,
            ResponseHeaderTimeout: timeout,
            TLSClientConfig:       &tls.Config{InsecureSkipVerify: true},
        }
    } else {
        transport = &http.Transport{
            DialContext:           (&net.Dialer{Timeout: timeout, KeepAlive: 10 * timeout}).DialContext,
            DisableKeepAlives:     *closeConnections,
            TLSHandshakeTimeout:   timeout,
            ResponseHeaderTimeout: timeout,
        }
    }
    response, err := transport.RoundTrip(request)
    if err != nil {
        log.Println("Request failed:", err)
    }
    return response
}
Sometimes it works, sometimes it doesn't. The incoming request is always the same (read from a file on disk and posted with cURL):
2023/02/03 16:32:48 Proxy engine ready and listen at :8080, global timeout: 5000 ms
2023/02/03 16:32:53 Getting relevant values from request body
2023/02/03 16:32:53 Group: TS1
2023/02/03 16:32:53 Order number: 500000639557
2023/02/03 16:32:53 Check if order can be processed in endpoint 1
2023/02/03 16:32:53 Order can be processed in endpoint 1
2023/02/03 16:32:53 Proxying request to endpoint 1
2023/02/03 16:32:53 POST http://endpoint1:40002/service HTTP/1.1, 200 OK
2023/02/03 16:32:54 Getting relevant values from request body
2023/02/03 16:32:54 Group: TS1
2023/02/03 16:32:54 Order number: 500000639557
2023/02/03 16:32:54 Check if order can be processed in endpoint 1
2023/02/03 16:32:54 Order can be processed in endpoint 1
2023/02/03 16:32:54 Proxying request to endpoint 1
2023/02/03 16:32:54 POST http://endpoint1:40002/service HTTP/1.1, 200 OK
2023/02/03 16:32:55 Getting relevant values from request body
2023/02/03 16:32:55 Group: TS1
2023/02/03 16:32:55 Order number: 500000639557
2023/02/03 16:32:55 Check if order can be processed in endpoint 1
2023/02/03 16:32:55 Order can be processed in endpoint 1
2023/02/03 16:32:55 Proxying request to endpoint 1
2023/02/03 16:32:55 Request failed: EOF // error returned from the handleRequest function
2023/02/03 16:32:55 Response is nil :(
Does anyone know what is going on and can help? Thanks!
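For reference, the same conditional routing can be sketched on top of net/http/httputil's ReverseProxy, which does header/body forwarding and connection pooling itself. This is only a sketch of an alternative shape (it assumes Target1/Target2 are *url.URL and reuses the getRelevantValues/checkCondition helpers from above), not a confirmed fix for the EOF:

// handler routes each request to one of two backends based on its XML payload.
// Both reverse proxies use http.DefaultTransport by default, so outgoing
// connections are pooled and reused instead of building a new Transport per request.
type handler struct {
    Target1, Target2 *url.URL
    proxy1, proxy2   *httputil.ReverseProxy
}

func newHandler(t1, t2 *url.URL) handler {
    return handler{
        Target1: t1,
        Target2: t2,
        proxy1:  httputil.NewSingleHostReverseProxy(t1),
        proxy2:  httputil.NewSingleHostReverseProxy(t2),
    }
}

func (h handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    body, err := io.ReadAll(r.Body)
    if err != nil {
        http.Error(w, "cannot read request body", http.StatusBadRequest)
        return
    }
    // Rewind the body so the chosen proxy can forward it unchanged.
    r.Body = io.NopCloser(bytes.NewReader(body))

    if checkCondition(getRelevantValues(body)) {
        log.Println("Proxying request to endpoint 1")
        h.proxy1.ServeHTTP(w, r)
    } else {
        log.Println("Proxying request to endpoint 2")
        h.proxy2.ServeHTTP(w, r)
    }
}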

I've modified handleRequest; it now runs as a goroutine:
func (h handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    // [...]
    ch := make(chan *http.Response)
    var wg sync.WaitGroup
    wg.Add(1)
    timeout := time.Duration(*globalTimeout) * time.Millisecond
    go handleRequest(r, timeout, ch, &wg)
    go func() {
        wg.Wait()
        close(ch)
    }()
    var resp *http.Response = <-ch
    if resp != nil {
        defer resp.Body.Close()
        log.Printf("%s %s %v, %s", r.Method, r.URL.String(), r.Proto, resp.Status)
        // [...]
    }
}

func handleRequest(request *http.Request, timeout time.Duration, ch chan *http.Response, wg *sync.WaitGroup) {
    defer wg.Done()
    transport := &http.Transport{
        DialContext:           (&net.Dialer{Timeout: timeout, KeepAlive: 10 * timeout}).DialContext,
        DisableKeepAlives:     *closeConnections,
        TLSHandshakeTimeout:   timeout,
        ResponseHeaderTimeout: timeout,
        TLSClientConfig:       &tls.Config{InsecureSkipVerify: true},
    }
    response, err := transport.RoundTrip(request)
    if err != nil {
        log.Println("Request failed:", err)
    }
    ch <- response
}
Now each outgoing request is handled in its own goroutine, and my parallel stress tests pass 100% of the time.
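For comparison, the same per-request deadline can also be expressed without the extra goroutine and WaitGroup by giving the outgoing client an overall Timeout. A sketch reusing the flags from the question (build the client once at startup and reuse it, so connections can actually be pooled):

// buildClient returns a client whose Timeout covers the whole exchange
// (dial, TLS handshake, response headers and body).
func buildClient(timeout time.Duration, closeConnections bool) *http.Client {
    return &http.Client{
        Timeout: timeout,
        Transport: &http.Transport{
            DialContext:           (&net.Dialer{Timeout: timeout}).DialContext,
            DisableKeepAlives:     closeConnections,
            TLSHandshakeTimeout:   timeout,
            ResponseHeaderTimeout: timeout,
            TLSClientConfig:       &tls.Config{InsecureSkipVerify: true},
        },
    }
}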

Related

Golang http client failing with: dial tcp <some ip>: connect: operation timed out

I have a program in Go which takes around 10k URLs (same base URL, just different resources) and requests responses for them from 10k goroutines.
At some point, I start receiving:
Get "https://.....": dial tcp <some_ip>: connect: operation timed out
I can't tell whether that's due to some Go limitation, a limitation of my local machine (a MacBook), or of the server(s), and how it can be solved.
My code is simple:
var transport = &http.Transport{
    TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    Dial: (&net.Dialer{
        Timeout:   0,
        KeepAlive: 0,
    }).Dial,
    TLSHandshakeTimeout: 10 * time.Second,
}

var httpClient = &http.Client{Transport: transport}

func main() {
    for id := 1; id <= total; id++ {
        myWaitGroup.Add(1)
        go checkUri(myBaseUrl, id, some_chan)
    }
    myWaitGroup.Wait()
}

func checkUri(baseUrl string, id int, myChan chan string) {
    defer myWaitGroup.Done()
    url := fmt.Sprintf(`%s/%d.json`, baseUrl, id)
    req, _ := http.NewRequest(http.MethodGet, url, nil)
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Connection", "close")
    response, err := httpClient.Do(req)
    if err != nil {
        fmt.Println("ERROR: http remote call, err:", err)
        return
    }
    defer response.Body.Close()
    if response.StatusCode != 200 {
        fmt.Printf("ERROR: remote call status code %d [%s]\n", response.StatusCode, url)
        return
    }
    b, _ := io.ReadAll(response.Body)
    myChan <- string(b)
    // ...
}
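Opening that many connections at once commonly exhausts local ephemeral ports or file descriptors, which then surfaces as dial timeouts; a usual mitigation is to bound how many requests are in flight. A minimal, self-contained sketch using a buffered channel as a semaphore (the URL and the limit of 100 are placeholders):

package main

import (
    "fmt"
    "net/http"
    "sync"
)

func main() {
    const total = 10000
    const maxInFlight = 100 // placeholder cap; tune for your machine and the server

    sem := make(chan struct{}, maxInFlight) // counting semaphore
    var wg sync.WaitGroup

    for id := 1; id <= total; id++ {
        wg.Add(1)
        sem <- struct{}{} // blocks once maxInFlight requests are running
        go func(id int) {
            defer wg.Done()
            defer func() { <-sem }()

            resp, err := http.Get(fmt.Sprintf("https://example.com/%d.json", id))
            if err != nil {
                fmt.Println("ERROR: http remote call, err:", err)
                return
            }
            defer resp.Body.Close()
            // read and handle resp.Body here
        }(id)
    }
    wg.Wait()
}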

Go net/http leaks memory in high load

I am developing an API that calls client URLs using the net/http package. For each incoming request (a POST call), between 1 and 8 URLs are called concurrently in goroutines, based on the user's country/OS. The app works fine at a low rate of around 1000-1500 requests per second, but when scaling to 3k requests there is a sudden increase in memory even if only one client URL is called, and the app stops responding after a few minutes (response time well above 50 sec). I am using the native net/http package along with the gorilla/mux router. Other questions on this issue say to close the response body, which I have done using:
req, err := http.NewRequest("POST", "client_post_url", bytes.NewBuffer(requestBody))
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Connection", "Keep-Alive")
response, err := common.Client.Do(req)
status := 0
if err != nil {
    // handle and return
}
defer response.Body.Close() // used with/without io.Copy
status = response.StatusCode
body, _ := ioutil.ReadAll(response.Body)
_, err = io.Copy(ioutil.Discard, response.Body)
I need to reuse connections, hence I have made the HTTP client and transport global variables, initialized in the init method like this:
common.Transport = &http.Transport{
    TLSClientConfig: &tls.Config{
        InsecureSkipVerify: true,
    },
    DialContext: (&net.Dialer{
        //Timeout: time.Duration(300) * time.Millisecond,
        KeepAlive: 30 * time.Second,
    }).DialContext,
    //ForceAttemptHTTP2: true,
    DisableKeepAlives: false,
    //MaxIdleConns: 0,
    //IdleConnTimeout: 0,
    //TLSHandshakeTimeout: time.Duration(300) * time.Millisecond,
    //ExpectContinueTimeout: 1 * time.Second,
}
common.Client = &http.Client{
    Timeout:   time.Duration(300) * time.Millisecond,
    Transport: common.Transport,
}
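One transport knob worth noting, hedged since the full configuration is not shown here: net/http keeps only DefaultMaxIdleConnsPerHost (2) idle connections per host unless MaxIdleConnsPerHost is raised, so at thousands of requests per second against a handful of client URLs most connections cannot be reused and are constantly re-dialed. A sketch with an enlarged pool (the numbers are purely illustrative):

common.Transport = &http.Transport{
    TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    DialContext: (&net.Dialer{
        KeepAlive: 30 * time.Second,
    }).DialContext,
    MaxIdleConns:        1000,             // illustrative: total idle connections kept
    MaxIdleConnsPerHost: 200,              // illustrative: the default is only 2 per host
    IdleConnTimeout:     90 * time.Second, // illustrative
}
common.Client = &http.Client{
    Timeout:   300 * time.Millisecond,
    Transport: common.Transport,
}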
I have read that using keep-alive can cause memory to build up, and I have tried a few combinations of disabling keep-alive and setting the close flag on the request, but nothing seems to work. Also, if I don't make any HTTP call and instead use time.Sleep(300 * time.Millisecond) in the goroutine calling each URL concurrently, the app works without any leak.
So I am sure it has something to do with the http client/transport: under high load, connections are not released or not reused properly.
What should my approach be to achieve this?
Would creating a custom server and a custom handler type to accept and route requests work, as mentioned in the C10K approach in several articles?
I can share the full sample code with all details if required. Above I added just the part where I feel the issue lies.
This is representative code:
main.go
package main
import (
"./common"
"bytes"
"crypto/tls"
"fmt"
"github.com/gorilla/mux"
"io"
"io/ioutil"
"log"
"math/rand"
"net"
"net/http"
"net/http/pprof"
"os"
"runtime"
"strconv"
"sync"
"time"
)
func init() {
//Get Any command line argument passed
args := os.Args[1:]
numCPU := runtime.NumCPU()
if len(args) > 1 {
numCPU, _ = strconv.Atoi(args[0])
}
common.Transport = &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: true,
},
DialContext: (&net.Dialer{
//Timeout: time.Duration() * time.Millisecond,
KeepAlive: 30 * time.Second,
}).DialContext,
//ForceAttemptHTTP2: true,
DisableKeepAlives: false,
//MaxIdleConns: 0,
//IdleConnTimeout: 0,
//TLSHandshakeTimeout: time.Duration(300) * time.Millisecond,
//ExpectContinueTimeout: 1 * time.Second,
}
common.Client = &http.Client{
Timeout: time.Duration(300) * time.Millisecond,
Transport: common.Transport,
}
runtime.GOMAXPROCS(numCPU)
rand.Seed(time.Now().UTC().UnixNano())
}
func main() {
router := mux.NewRouter().StrictSlash(true)
router.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
_, _ = fmt.Fprintf(w, "Hello!!!")
})
router.HandleFunc("/{name}", func(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
prepareRequest(w, r, vars["name"])
}).Methods("POST")
// Register pprof handlers
router.HandleFunc("/debug/pprof/", pprof.Index)
router.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
router.HandleFunc("/debug/pprof/profile", pprof.Profile)
router.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
router.HandleFunc("/debug/pprof/trace", pprof.Trace)
routerMiddleWare := http.TimeoutHandler(router, 500*time.Millisecond, "Timeout")
srv := &http.Server{
Addr: "0.0.0.0:" + "80",
/*ReadTimeout: 500 * time.Millisecond,
WriteTimeout: 500 * time.Millisecond,
IdleTimeout: 10 * time.Second,*/
Handler: routerMiddleWare,
}
log.Fatal(srv.ListenAndServe())
}
func prepareRequest(w http.ResponseWriter, r *http.Request, name string) {
//other part of the code and call to goroutine
var urls []string
results, s, c := callUrls(urls)
finalCall(w, results, s, c)
}
type Response struct {
Status int
Url string
Body string
}
func callUrls(urls []string) ([]*Response, []string, []string) {
var wg sync.WaitGroup
wg.Add(len(urls))
ch := make(chan func() (*Response, string, string), len(urls))
for _, url := range urls {
go func(url string) {
//decide if request is valid for client to make http call using country/os
isValid := true //assuming url to be called
if isValid {
//make post call
//request body has many more parameters, just a sample is included.
//if, instead of creating a new request, we just time.Sleep for 300ms, there is no memory leak.
req, err := http.NewRequest("POST", url, bytes.NewBuffer([]byte(`{"body":"param"}`)))
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Connection", "Keep-Alive")
//req.Close = true
response, err := common.Client.Do(req)
if err != nil {
wg.Done()
ch <- func() (*Response, string, string) {
return &Response{Status: 500, Url: url, Body: ""}, "error", "500"
}
return
}
defer response.Body.Close()
body, _ := ioutil.ReadAll(response.Body)
_, err = io.Copy(ioutil.Discard, response.Body)
//Close the body, forced this
//Also tried without defer, and with only the following line
response.Body.Close()
//do something with response body replace a few string etc.
//and return
wg.Done()
ch <- func() (*Response, string, string) {
return &Response{Status: 200, Url: url, Body: string(body)}, "success", "200"
}
} else {
wg.Done()
ch <- func() (*Response, string, string) {
return &Response{Status: 500, Url: url, Body: ""}, "invalid", "500"
}
}
}(url)
}
wg.Wait()
var (
results []*Response
msg []string
status []string
)
for {
r, x, y := (<-ch)()
if r != nil {
results = append(results, r)
msg = append(msg, x)
status = append(status, y)
}
if len(results) == len(urls) {
return results, msg, status
}
}
}
func finalCall(w http.ResponseWriter, results []*Response, msg []string, status []string){
fmt.Println("response", "response body", results, msg, status)
}
vars.go
package common
import (
"net/http"
)
var (
//http client
Client *http.Client
//http Transport
Transport *http.Transport
)
pprof: profiled the app with 4 client URLs at an average of around 2500 qps (pprof and top screenshots, taken after 2 minutes, omitted).
Without calling the client URL, i.e. keeping isValid = false and using time.Sleep(300 * time.Millisecond), no leak happens.
This code is not leaking.
To demonstrate, let's update it slightly** so the post is reproducible.
main.go
package main
import (
"bytes"
"crypto/tls"
_ "expvar"
"fmt"
"io"
"io/ioutil"
"log"
"math/rand"
"net"
"net/http"
_ "net/http/pprof"
"os"
"runtime"
"strconv"
"sync"
"time"
"github.com/gorilla/mux"
)
var (
//http client
Client *http.Client
//http Transport
Transport *http.Transport
)
func init() {
go http.ListenAndServe("localhost:6060", nil)
//Get Any command line argument passed
args := os.Args[1:]
numCPU := runtime.NumCPU()
if len(args) > 1 {
numCPU, _ = strconv.Atoi(args[0])
}
Transport = &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: true,
},
DialContext: (&net.Dialer{
//Timeout: time.Duration() * time.Millisecond,
KeepAlive: 30 * time.Second,
}).DialContext,
//ForceAttemptHTTP2: true,
DisableKeepAlives: false,
//MaxIdleConns: 0,
//IdleConnTimeout: 0,
//TLSHandshakeTimeout: time.Duration(300) * time.Millisecond,
//ExpectContinueTimeout: 1 * time.Second,
}
Client = &http.Client{
// Timeout: time.Duration(300) * time.Millisecond,
Transport: Transport,
}
runtime.GOMAXPROCS(numCPU)
rand.Seed(time.Now().UTC().UnixNano())
}
func main() {
router := mux.NewRouter().StrictSlash(true)
router.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
_, _ = fmt.Fprintf(w, "Hello!!!")
})
router.HandleFunc("/{name}", func(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
prepareRequest(w, r, vars["name"])
}).Methods("POST", "GET")
// Register pprof handlers
// router.HandleFunc("/debug/pprof/", pprof.Index)
// router.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
// router.HandleFunc("/debug/pprof/profile", pprof.Profile)
// router.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
// router.HandleFunc("/debug/pprof/trace", pprof.Trace)
routerMiddleWare := http.TimeoutHandler(router, 500*time.Millisecond, "Timeout")
srv := &http.Server{
Addr: "localhost:8080",
/*ReadTimeout: 500 * time.Millisecond,
WriteTimeout: 500 * time.Millisecond,
IdleTimeout: 10 * time.Second,*/
Handler: routerMiddleWare,
}
log.Fatal(srv.ListenAndServe())
}
func prepareRequest(w http.ResponseWriter, r *http.Request, name string) {
// go func() {
// make(chan []byte) <- make([]byte, 10024)
// }()
//other part of the code and call to goroutine
var urls []string
urls = append(urls,
"http://localhost:7000/",
"http://localhost:7000/",
)
results, s, c := callUrls(urls)
finalCall(w, results, s, c)
}
type Response struct {
Status int
Url string
Body string
}
func callUrls(urls []string) ([]*Response, []string, []string) {
var wg sync.WaitGroup
wg.Add(len(urls))
ch := make(chan func() (*Response, string, string), len(urls))
for _, url := range urls {
go func(url string) {
//decide if request is valid for client to make http call using country/os
isValid := true //assuming url to be called
if isValid {
//make post call
//request body has many more parameters, just a sample is included.
//if, instead of creating a new request, we just time.Sleep for 300ms, there is no memory leak.
req, err := http.NewRequest("POST", url, bytes.NewBuffer([]byte(`{"body":"param"}`)))
if err != nil {
wg.Done()
ch <- func() (*Response, string, string) {
return &Response{Status: 500, Url: url, Body: ""}, err.Error(), "500"
}
return
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Connection", "Keep-Alive")
//req.Close = true
response, err := Client.Do(req)
if err != nil {
wg.Done()
ch <- func() (*Response, string, string) {
return &Response{Status: 500, Url: url, Body: ""}, err.Error(), "500"
}
return
}
defer response.Body.Close()
body, _ := ioutil.ReadAll(response.Body)
io.Copy(ioutil.Discard, response.Body)
//Close the body, forced this
//Also tried without defer, and with only the following line
response.Body.Close()
//do something with response body replace a few string etc.
//and return
wg.Done()
ch <- func() (*Response, string, string) {
return &Response{Status: 200, Url: url, Body: string(body)}, "success", "200"
}
} else {
wg.Done()
ch <- func() (*Response, string, string) {
return &Response{Status: 500, Url: url, Body: ""}, "invalid", "500"
}
}
}(url)
}
wg.Wait()
var (
results []*Response
msg []string
status []string
)
for {
r, x, y := (<-ch)()
if r != nil {
results = append(results, r)
msg = append(msg, x)
status = append(status, y)
}
if len(results) == len(urls) {
return results, msg, status
}
}
}
func finalCall(w http.ResponseWriter, results []*Response, msg []string, status []string) {
fmt.Println("response", "response body", results, msg, status)
}
k/main.go
package main
import "net/http"
func main() {
y := make([]byte, 100)
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
w.Write(y)
})
http.ListenAndServe(":7000", nil)
}
Install an additional visualization tool and use ab to simulate some load; that will do for an intuitive demonstration:
go get -u github.com/divan/expvarmon
go run main.go &
go run k/main.go &
ab -n 50000 -c 2500 http://localhost:8080/y
# in a different window, for live preview
expvarmon -ports=6060 -i 500ms
At that point, read the output of expvarmon; if it were live you would see something like the screenshot (omitted here): the values wave up and down as the GC actively works.
While the app is loaded, memory is consumed; wait for the server to release its connections and for the GC to clean them up.
You can then see that memstats.Alloc, memstats.HeapAlloc and memstats.HeapInuse are reduced, as expected when the GC does its job and no leak exists.
If you check go tool pprof -inuse_space -web http://localhost:6060/debug/pprof/heap right after ab has run, it shows that the app is using 177 MB of memory.
Most of it, 102 MB, is being used by net/http.Transport.getConn.
Your handler accounts for 1 MB; the rest is various things required by the runtime.
If you were to take the same screenshot after the server has released its connections and the GC has run, you would see an even smaller graph (not demonstrated here).
Now let us generate a leak and see it using both tools again.
In the code, uncomment this in prepareRequest:
func prepareRequest(w http.ResponseWriter, r *http.Request, name string) {
    go func() {
        make(chan []byte) <- make([]byte, 10024)
    }()
    //...
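Why this particular line leaks: the send blocks forever because nothing ever receives from that channel, so every request pins one goroutine plus its 10 KB slice. For contrast (illustrative only):

// Leaks: the send on an unbuffered channel blocks forever because no
// goroutine ever receives from it, so neither the goroutine nor the
// 10 KB slice can ever be reclaimed.
go func() {
    make(chan []byte) <- make([]byte, 10024)
}()

// Does not leak: with a buffer of 1 the send completes immediately,
// the goroutine exits, and the slice becomes garbage-collectable.
go func() {
    ch := make(chan []byte, 1)
    ch <- make([]byte, 10024)
}()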
Restart the apps (press q in expvarmon, although it is not required):
go get -u github.com/divan/expvarmon
go run main.go &
go run k/main.go &
ab -n 50000 -c 2500 http://localhost:8080/y
# in a different window, for live preview
expvarmon -ports=6060 -i 500ms
It shows (screenshots omitted):
In expvarmon you can see the same behavior, only the numbers have changed; at rest, after the GC has run, the app still holds a lot of memory, much more than an empty Go HTTP server used as a comparison point.
Again, screenshotting the heap shows that your handler is now consuming most of the memory, ~450 MB. Notice the arrows: there are 452 MB of 10 KB allocations and 4.50 MB of 96 B allocations, corresponding respectively to the []byte slices and to the chan []byte created in the injected goroutine.
Finally, you can check your stack traces to look for stuck goroutines, and thus leaked memory: open http://localhost:6060/debug/pprof/goroutine?debug=1
goroutine profile: total 50012
50000 # 0x43098f 0x4077fa 0x4077d0 0x4074bb 0x76b85d 0x45d281
# 0x76b85c main.prepareRequest.func1+0x4c /home/mh-cbon/gow/src/test/oom/main.go:101
4 # 0x43098f 0x42c09a 0x42b686 0x4c3a3b 0x4c484b 0x4c482c 0x57d94f 0x590d79 0x6b4c67 0x5397cf 0x53a51d 0x53a754 0x6419ef 0x6af18d 0x6af17f 0x6b5f33 0x6ba4fd 0x45d281
# 0x42b685 internal/poll.runtime_pollWait+0x55 /home/mh-cbon/.gvm/gos/go1.12.7/src/runtime/netpoll.go:182
# 0x4c3a3a internal/poll.(*pollDesc).wait+0x9a /home/mh-cbon/.gvm/gos/go1.12.7/src/internal/poll/fd_poll_runtime.go:87
// more...
It tells us that the program is hosting 50,012 goroutines, then lists them grouped by file position, where the first number is the count of instances running, 50,000 in the first group of this example. It is followed by the stack trace that led to the goroutine being created.
You can see there is a bunch of system-related entries; in your case, you should not worry much about those.
You have to look for the ones that you believe should not be live if your program were working as you think it should.
However, overall your code is not satisfying; it could, and probably should, be improved with a thorough review of its allocations and overall design.
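As one illustration of that kind of simplification (a sketch only, keeping the Response type and the global Client from above, and dropping the parallel msg/status slices): sending plain values over a buffered channel instead of closures lets the channel do the synchronization and removes the WaitGroup entirely:

func callUrls(urls []string) []*Response {
    ch := make(chan *Response, len(urls)) // buffered: every goroutine can send and exit
    for _, url := range urls {
        go func(url string) {
            req, err := http.NewRequest("POST", url, bytes.NewBufferString(`{"body":"param"}`))
            if err != nil {
                ch <- &Response{Status: 500, Url: url}
                return
            }
            req.Header.Set("Content-Type", "application/json")
            resp, err := Client.Do(req)
            if err != nil {
                ch <- &Response{Status: 500, Url: url}
                return
            }
            body, _ := ioutil.ReadAll(resp.Body)
            resp.Body.Close()
            ch <- &Response{Status: resp.StatusCode, Url: url, Body: string(body)}
        }(url)
    }
    results := make([]*Response, 0, len(urls))
    for range urls {
        results = append(results, <-ch)
    }
    return results
}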
** This is a summary of the changes applied to the original source code:
It adds a new program k/main.go to act as a backend server.
It adds the _ "expvar" import statement.
It starts, during the init phase, the standard-library HTTP server that pprof registers its handlers onto, with go http.ListenAndServe("localhost:6060", nil).
The client timeout (Timeout: time.Duration(300) * time.Millisecond) is disabled, otherwise the load test does not return 200s.
The server address is set to Addr: "localhost:8080".
The urls value created within prepareRequest is set to a static list of length 2.
It adds error checking for req, err := http.NewRequest("POST", url, bytes.NewBuffer([]byte(`{"body":"param"}`))).
It drops the error check on io.Copy(ioutil.Discard, response.Body).
I have solved it by replacing the net/http package with fasthttp. Earlier I hadn't used it because I was not able to find a timeout method on the fasthttp client, but I see that there is indeed a DoTimeout method on the fasthttp client which times the request out after the specified duration.
Here is the updated code:
In vars.go: ClientFastHttp *fasthttp.Client
main.go
package main
import (
"./common"
"crypto/tls"
"fmt"
"github.com/gorilla/mux"
"github.com/valyala/fasthttp"
"log"
"math/rand"
"net"
"net/http"
"net/http/pprof"
"os"
"runtime"
"strconv"
"sync"
"time"
)
func init() {
//Get Any command line argument passed
args := os.Args[1:]
numCPU := runtime.NumCPU()
if len(args) > 1 {
numCPU, _ = strconv.Atoi(args[0])
}
common.Transport = &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: true,
},
DialContext: (&net.Dialer{
//Timeout: time.Duration() * time.Millisecond,
KeepAlive: 30 * time.Second,
}).DialContext,
//ForceAttemptHTTP2: true,
DisableKeepAlives: false,
//MaxIdleConns: 0,
//IdleConnTimeout: 0,
//TLSHandshakeTimeout: time.Duration(300) * time.Millisecond,
//ExpectContinueTimeout: 1 * time.Second,
}
common.Client = &http.Client{
Timeout: time.Duration(300) * time.Millisecond,
Transport: common.Transport,
}
runtime.GOMAXPROCS(numCPU)
rand.Seed(time.Now().UTC().UnixNano())
}
func main() {
router := mux.NewRouter().StrictSlash(true)
router.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
_, _ = fmt.Fprintf(w, "Hello!!!")
})
router.HandleFunc("/{name}", func(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
prepareRequest(w, r, vars["name"])
}).Methods("POST")
// Register pprof handlers
router.HandleFunc("/debug/pprof/", pprof.Index)
router.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
router.HandleFunc("/debug/pprof/profile", pprof.Profile)
router.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
router.HandleFunc("/debug/pprof/trace", pprof.Trace)
routerMiddleWare := http.TimeoutHandler(router, 500*time.Millisecond, "Timeout")
srv := &http.Server{
Addr: "0.0.0.0:" + "80",
/*ReadTimeout: 500 * time.Millisecond,
WriteTimeout: 500 * time.Millisecond,
IdleTimeout: 10 * time.Second,*/
Handler: routerMiddleWare,
}
log.Fatal(srv.ListenAndServe())
}
func prepareRequest(w http.ResponseWriter, r *http.Request, name string) {
//other part of the code and call to goroutine
var urls []string
results, s, c := callUrls(urls)
finalCall(w, results, s, c)
}
type Response struct {
Status int
Url string
Body string
}
func callUrls(urls []string) ([]*Response, []string, []string) {
var wg sync.WaitGroup
wg.Add(len(urls))
ch := make(chan func() (*Response, string, string), len(urls))
for _, url := range urls {
go func(url string) {
//decide if request is valid for client to make http call using country/os
isValid := true //assuming url to be called
if isValid {
//make post call
//request body has many more parameters, just a sample is included.
//if, instead of creating a new request, we just time.Sleep for 300ms, there is no memory leak.
req := fasthttp.AcquireRequest()
req.SetRequestURI(url)
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Connection", "Keep-Alive")
req.Header.SetMethod("POST")
req.SetBody([]byte(`{"body":"param"}`))
resp := fasthttp.AcquireResponse()
defer fasthttp.ReleaseRequest(req) // <- do not forget to release
defer fasthttp.ReleaseResponse(resp) // <- do not forget to release
//err := clientFastHttp.Do(req, response)
//endregion
t := time.Duration(300)
err := common.ClientFastHttp.DoTimeout(req, resp, t*time.Millisecond)
body := resp.Body()
if err != nil {
wg.Done()
ch <- func() (*Response, string, string) {
return &Response{Status: 500, Url: url, Body: ""}, "error", "500"
}
return
}
/*defer response.Body.Close()
body, _ := ioutil.ReadAll(response.Body)
_, err = io.Copy(ioutil.Discard, response.Body)
//Close the body, forced this
//Also tried without defer, and with only the following line
response.Body.Close()*/
//do something with response body replace a few string etc.
//and return
wg.Done()
ch <- func() (*Response, string, string) {
return &Response{Status: 200, Url: url, Body: string(body)}, "success", "200"
}
} else {
wg.Done()
ch <- func() (*Response, string, string) {
return &Response{Status: 500, Url: url, Body: ""}, "invalid", "500"
}
}
}(url)
}
wg.Wait()
var (
results []*Response
msg []string
status []string
)
for {
r, x, y := (<-ch)()
if r != nil {
results = append(results, r)
msg = append(msg, x)
status = append(status, y)
}
if len(results) == len(urls) {
return results, msg, status
}
}
}
func finalCall(w http.ResponseWriter, results []*Response, msg []string, status []string) {
fmt.Println("response", "response body", results, msg, status)
}
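The snippet above never shows how common.ClientFastHttp is initialized; a minimal initialization sketch (all field values here are assumptions, not taken from the original code) would be something along these lines:

common.ClientFastHttp = &fasthttp.Client{
    TLSConfig:       &tls.Config{InsecureSkipVerify: true},
    MaxConnsPerHost: 512,                    // assumed pool size
    ReadTimeout:     300 * time.Millisecond, // assumed, to match the DoTimeout above
    WriteTimeout:    300 * time.Millisecond, // assumed
}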

Error: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

After about 3-4 minutes, some errors show up in my log:
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I tried to find out where the time goes using httptrace:
httptrace.GetConn
httptrace.GotConn
I think it runs out of time before httptrace.GotConn fires, so the error happens:
request canceled while waiting for connection
My machine is OK, and this is my netstat:
LAST_ACK 2
CLOSE_WAIT 7
ESTABLISHED 108
SYN_SENT 3
TIME_WAIT 43
package main
import (
"crypto/md5"
"encoding/hex"
"fmt"
"io/ioutil"
"math/rand"
"net"
"net/http"
"net/http/httptrace"
"os"
"sync"
"time"
)
var Client *http.Client = &http.Client{
Transport: &http.Transport{
DisableKeepAlives:true,
Proxy: http.ProxyFromEnvironment,
DialContext: (&net.Dialer{
Timeout: 3 * time.Second, // connection timeout
KeepAlive: 10 * time.Second,
DualStack: true,
}).DialContext,
IdleConnTimeout: 120 * time.Second,
ResponseHeaderTimeout: 60 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
},
Timeout: 500 * time.Millisecond,
}
func GenLogId() string {
h2 := md5.New()
rand.Seed(time.Now().Unix())
str := fmt.Sprintf("%d%d%d", os.Getpid(), time.Now().UnixNano(), rand.Int())
h2.Write([]byte(str))
uniqid := hex.EncodeToString(h2.Sum(nil))
return uniqid
}
func main() {
var (
wg sync.WaitGroup
maxParallel int = 50
parallelChan chan bool = make(chan bool, maxParallel)
)
for {
parallelChan <- true
wg.Add(1)
go func() {
defer func() {
wg.Done()
<-parallelChan
}()
testHttp2()
}()
}
wg.Wait()
}
func testHttp2() {
url := "http://10.33.108.39:11222/index.php"
req, _ := http.NewRequest("GET", url, nil)
uniqId := GenLogId()
trace := &httptrace.ClientTrace{
GetConn: func(hostPort string) {
fmt.Println("GetConn id:", uniqId, time.Now().UnixNano(), hostPort)
},
GotConn: func(connInfo httptrace.GotConnInfo) {
fmt.Println("GotConn id:", uniqId, time.Now().UnixNano(), connInfo.Conn.LocalAddr())
},
ConnectStart: func(network, addr string) {
fmt.Println("ConnectStart id:", uniqId, time.Now().UnixNano(), network, addr)
},
ConnectDone: func(network, addr string, err error) {
fmt.Println("ConnectDone id:", uniqId, time.Now().UnixNano(), network, addr, err)
},
}
req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
resp, err := Client.Do(req)
if err != nil {
fmt.Println("err: id", uniqId, time.Now().UnixNano(), err)
return
}
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
fmt.Println("error", string(body))
}
return
}
You can reproduce it using my code. I am quite confused by this bug...
Thank you.
You need to increase the client Timeout value for your test.
net/http: request canceled (Client.Timeout exceeded while awaiting headers)
This means your Client.Timeout value is less than your server's response time, which can have many reasons (e.g. the server is busy, CPU overload, the many requests per second generated here, ...).
Here is a simple way to explain and reproduce it:
Run this server, which waits for 2 * time.Second and then sends back the response:
package main

import (
    "io"
    "log"
    "net/http"
    "time"
)

func main() {
    http.HandleFunc(`/`, func(w http.ResponseWriter, r *http.Request) {
        log.Println("wait a couple of seconds ...")
        time.Sleep(2 * time.Second)
        io.WriteString(w, `Hi`)
        log.Println("Done.")
    })
    log.Println(http.ListenAndServe(":8080", nil))
}
Then run this client which times out in 1 * time.Second:
package main

import (
    "io/ioutil"
    "log"
    "net/http"
    "time"
)

func main() {
    log.Println("HTTP GET")
    client := &http.Client{
        Timeout: 1 * time.Second,
    }
    r, err := client.Get(`http://127.0.0.1:8080/`)
    if err != nil {
        log.Fatal(err)
    }
    defer r.Body.Close()
    bs, err := ioutil.ReadAll(r.Body)
    if err != nil {
        log.Fatal(err)
    }
    log.Println("HTTP Done.")
    log.Println(string(bs))
}
The output is (Client.Timeout exceeded while awaiting headers):
2019/10/30 11:05:08 HTTP GET
2019/10/30 11:05:09 Get http://127.0.0.1:8080/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
exit status 1
Note:
You need to change these two settings accordingly (http.Transport.ResponseHeaderTimeout and http.Client.Timeout).
You have set ResponseHeaderTimeout: 60 * time.Second, while Client.Timeout is only half a second.
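For example, a configuration where the per-phase transport timeouts fit inside the overall client timeout (all values are illustrative) looks like this:

client := &http.Client{
    Timeout: 5 * time.Second, // overall cap: dial + TLS + headers + body
    Transport: &http.Transport{
        DialContext: (&net.Dialer{
            Timeout: 1 * time.Second, // connection timeout
        }).DialContext,
        TLSHandshakeTimeout:   1 * time.Second,
        ResponseHeaderTimeout: 3 * time.Second, // must stay below Client.Timeout to matter
    },
}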
If anyone wants to detect these errors in code, use:
os.IsTimeout(err) -> returns true when a context deadline / client timeout was exceeded.
For catching a dial i/o timeout:
netErr, ok := err.(net.Error); ok && netErr.Timeout() -> returns true for dial i/o timeouts.
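Putting the two checks together (a sketch; client and req stand for whatever produced the error):

resp, err := client.Do(req)
if err != nil {
    if os.IsTimeout(err) {
        // context deadline / Client.Timeout exceeded
    }
    if netErr, ok := err.(net.Error); ok && netErr.Timeout() {
        // dial i/o timeout and other network-level timeouts
    }
    return
}
defer resp.Body.Close()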

How to use SingleFlight to share downloaded large size file?

I'm proxying a bunch of HTTP GET calls through singleflight, but the returned response is only seen by the first request.
I also noticed a problem in my test: if the first request times out, the response will be lost.
Let's say r1, r2, r3 are requests that come in that order and are all grouped under one groupKey. If r1 times out, r2 and r3 will wait until the shared HTTP call returns or until their own timeouts.
proxy code (credits to here)
// add auth to the request and proxy to the target host
var serveReverseProxy = func(target string, res http.ResponseWriter, req *http.Request) {
    log.Println("new request!")
    requestURL, _ := url.Parse(target)
    proxy := httputil.NewSingleHostReverseProxy(requestURL)
    req1, _ := http.NewRequest(req.Method, req.RequestURI, req.Body)
    for k, v := range req.Header {
        for _, vv := range v {
            req1.Header.Add(k, vv)
        }
    }
    req1.Header.Set("Authorization", "Bearer "+"some token")
    req1.Host = requestURL.Host
    proxy.ServeHTTP(res, req1)
}

var requestGroup singleflight.Group

mockBackend := httptest.NewServer(http.HandlerFunc(func(res http.ResponseWriter, req *http.Request) {
    groupKey := req.Host + req.RequestURI
    name := req.Header.Get("From")
    ch := requestGroup.DoChan(groupKey, func() (interface{}, error) {
        // increase key retention to 20s to make sure r1, r2, r3 are all in one group
        go func() {
            time.Sleep(20 * time.Second)
            requestGroup.Forget(groupKey)
            log.Println("Key deleted :", groupKey)
        }()
        // proxy to some host and expect the result to be written to res
        serveReverseProxy("https://somehost.com", res, req)
        return nil, nil
    })
    timeout := time.After(15 * time.Second)
    var result singleflight.Result
    select {
    case <-timeout: // Timeout elapsed, send a timeout message (504)
        log.Println(name, " timed out")
        http.Error(res, "request timed out", http.StatusGatewayTimeout)
        return
    case result = <-ch: // Received result from channel
    }
    if result.Err != nil {
        http.Error(res, result.Err.Error(), http.StatusInternalServerError)
        return
    }
    if result.Shared {
        log.Println(name, " is shared")
    } else {
        log.Println(name, " not shared")
    }
}))
I'd expect r2, r3 to either:
at least see the result via their own ResponseWriter, or
time out along with r1.
https://github.com/golang/net/blob/master/http2/h2demo/h2demo.go#L181-L219
This works. It turns out I need to return the handler from singleflight.Group.Do instead of the response, though I don't know why.
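One explanation: serveReverseProxy writes the upstream response straight into the first caller's ResponseWriter; the duplicate callers only share the callback's return value, which is nil here. The usual shape is therefore to return the shared data from Do/DoChan and let every caller write it to its own ResponseWriter. A minimal sketch (fetchUpstream is a hypothetical helper that performs the proxied GET and returns the body bytes):

var requestGroup singleflight.Group

func handle(res http.ResponseWriter, req *http.Request) {
    groupKey := req.Host + req.RequestURI

    // The callback returns the shared payload instead of writing to one specific writer.
    v, err, shared := requestGroup.Do(groupKey, func() (interface{}, error) {
        return fetchUpstream(req) // hypothetical: returns ([]byte, error)
    })
    if err != nil {
        http.Error(res, err.Error(), http.StatusInternalServerError)
        return
    }
    log.Println("shared:", shared)
    res.Write(v.([]byte)) // every caller writes the same bytes to its own writer
}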

Reusing Keep-Alive Connection in case of Timeout in Golang

/* Keep-Alive client */
HttpClient{
    Client: &http.Client{
        Transport: &http.Transport{
            Dial: (&net.Dialer{
                Timeout:   dialTimeout,
                KeepAlive: dialTimeout * 60,
            }).Dial,
            DisableKeepAlives:   false,
            MaxIdleConnsPerHost: idleConnectionsPerHost,
        },
    },
    Timeout: 5 * time.Second,
}
/* Execute request */
timeoutContext, cancelFunction := context.WithTimeout(context.Background(), self.Timeout)
defer cancelFunction()
if response, err = self.Client.Do(request.WithContext(timeoutContext)); err == nil {
    defer response.Body.Close()
    /* Check if the request was successful */
    statusCode = response.StatusCode
    if response.StatusCode == http.StatusOK {
        /* Read the body & decode if a response came & an unmarshal entity is supplied */
        if responseBytes, err = ioutil.ReadAll(response.Body); err == nil && unmarshalledResponse != nil {
            // Process response
        }
    } else {
        err = errors.New(fmt.Sprintf("Non 200 Response. Status Code: %v", response.StatusCode))
    }
}
In Go, whenever a request times out on a keep-alive connection, that connection is dropped and reset. For the above code, in case of a timeout, inspecting the packets in Wireshark reveals that an RST is sent by the client, hence the connection is no longer reused.
I even tried using the HTTP client's Timeout rather than context.WithTimeout, but had similar findings: the connection gets reset.
Is there any way to retain the established keep-alive connection even when a request times out?
The net/http client closes the connection on timeout because the connection cannot be reused.
Consider what would happen if the connection is reused. If the client receives none of the response or a partial response from the server before timeout, then the next request will read some amount of the previous response.
To keep the connection alive on timeout, implement the timeout in application code. Continue to use a longer timeout in the net/http client to handle cases where the connection is truly stuck or dead.
// Buffered so the worker can always complete its send, even if the select below times out.
result := make(chan []byte, 1)
go func() {
    defer close(result)
    resp, err := client.Do(request)
    if err != nil {
        return
    }
    defer resp.Body.Close()
    if resp.StatusCode == http.StatusOK {
        p, err := ioutil.ReadAll(resp.Body)
        if err != nil {
            return
        }
        result <- p
    }
}()

select {
case p, ok := <-result:
    if ok {
        // p is the response body; use it here
    }
case <-time.After(timeout):
    // application-level timeout; the goroutine keeps draining the body
}
