I have a client that connects to a server to read some response. The server takes around 5 minutes to respond to a particular request when I use Postman to execute the request.
I am writing this client in Go and using the following code to set a timeout of 10 minutes:
_client := &http.Client{
    Timeout: 10 * time.Minute,
}
resp, err := _client.Post(c.Url, "application/json", r)
However, the request terminates after 2 minutes with an error. The error just says EOF.
I tried setting the timeout to 15 seconds to check if the config works and the request terminates in 15 seconds as expected.
How can I make sure that the timeout is 10 minutes?
I've had the same problem, but in my case the code was running on GAE, which cancelled the request after 2 minutes no matter what. So if you have full control over where your code runs, you should be able to specify the timeout via client.Timeout.
Here’s a simple way to set timeouts for an HTTP request in Go:
client := &http.Client{Timeout: 10*time.Second}
resp, err := client.Get("https://freshman.tech")
The above snippet sets a timeout of 10 seconds on the HTTP client, so any request made with that client will time out if it does not complete within 10 seconds.
Before using the response from the request, you can check for a timeout error using the os.IsTimeout() function. It returns a boolean indicating whether the error is known to report a timeout that occurred.
client := &http.Client{Timeout: 10 * time.Second}
resp, err := client.Get(url)
if err != nil {
    if os.IsTimeout(err) {
        // A timeout error occurred
        return
    }
    // This was an error, but not a timeout
}
// use the response
Another method involves using the net.Error interface along with a type assertion, but I prefer os.IsTimeout() due to its simplicity.
client := &http.Client{Timeout: 10 * time.Second}
resp, err := client.Get(url)
if err, ok := err.(net.Error); ok && err.Timeout() {
    // A timeout error occurred
} else if err != nil {
    // This was an error, but not a timeout
}
The real problem here is that you are using HTTP in a way its implementers do not intend. The short answer is that you are better off responding quickly, telling the client the job is running, and returning a URL where the result can be fetched when it is done.
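A minimal sketch of that pattern, purely for illustration — the handler names (startJob, jobStatus), routes, and in-memory job map are assumptions, not part of the original setup:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "sync"
    "time"
)

var (
    mu   sync.Mutex
    jobs = map[string]string{} // job ID -> "running" or "done"
)

// startJob kicks off the slow work in the background and answers immediately
// with 202 Accepted plus a URL the client can poll for the result.
func startJob(w http.ResponseWriter, r *http.Request) {
    id := fmt.Sprintf("%d", time.Now().UnixNano())
    mu.Lock()
    jobs[id] = "running"
    mu.Unlock()

    go func() {
        time.Sleep(5 * time.Minute) // stand-in for the real long-running work
        mu.Lock()
        jobs[id] = "done"
        mu.Unlock()
    }()

    w.Header().Set("Location", "/status?id="+id)
    w.WriteHeader(http.StatusAccepted)
    json.NewEncoder(w).Encode(map[string]string{"id": id, "status": "running"})
}

// jobStatus reports whether the job identified by ?id= has finished yet.
func jobStatus(w http.ResponseWriter, r *http.Request) {
    mu.Lock()
    status, ok := jobs[r.URL.Query().Get("id")]
    mu.Unlock()
    if !ok {
        http.NotFound(w, r)
        return
    }
    json.NewEncoder(w).Encode(map[string]string{"status": status})
}

func main() {
    http.HandleFunc("/jobs", startJob)
    http.HandleFunc("/status", jobStatus)
    http.ListenAndServe(":8080", nil)
}

The client then polls the status URL (or receives a callback) instead of holding a single connection open for five minutes.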
I have a gRPC server written in Go and I am trying to raise the receive and send message size to 20MB instead of the default 4MB with the following code:
var s *grpc.Server
s = grpc.NewServer(grpc.MaxRecvMsgSize(1024*1024*20), grpc.MaxSendMsgSize(1024*1024*20))
pb.RegisterProductServer(s,mysrv)
But the above doesn't seem to be working, as I still get an error when I call it from a client: "received message larger than max (5807570 vs. 4194304)".
Not sure what's overriding the size.
I haven't had the chance to test this yet but have you tried adding the same options from the client stub? The same options can be attached as dial options:
maxMsgSize := 1024 * 1024 * 20
conn, err := grpc.Dial(address, grpc.WithDefaultCallOptions(grpc.MaxRecvMsgSize(maxMsgSize), grpc.MaxSendMsgSize(maxMsgSize)))
if err != nil {
    // ...
}
defer conn.Close()
client := pb.NewProductClient(conn)
// ...
I don't know anything about your use case, but if your response data can be delivered piece by piece, then the streaming APIs might be useful; a rough sketch follows.
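For illustration only, continuing from the client created above, here is a sketch of consuming a hypothetical server-streaming RPC; the ListProducts method and its messages are assumptions, not taken from the question's .proto (it also needs the context, io, and log imports):

// Assumes the .proto declares something like:
//   rpc ListProducts(ProductRequest) returns (stream Product);
stream, err := client.ListProducts(context.Background(), &pb.ProductRequest{})
if err != nil {
    log.Fatal(err)
}
for {
    p, err := stream.Recv()
    if err == io.EOF {
        break // the server has finished sending
    }
    if err != nil {
        log.Fatal(err)
    }
    // handle each product as it arrives instead of one oversized response
    _ = p
}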
Hi, I developed a little Go server that does (at the moment) nothing but forward requests to a local service on the machine it is running on.
So it is nearly the same as nginx acting as a reverse proxy.
But I observed really bad performance that even uses up all the resources of the server and runs into timeouts on further requests.
I know that this cannot be as performant as nginx, but I don't think it should be that slow.
Here is the server I use to forward the requests:
package main

import (
    "bytes"
    "io/ioutil"
    "net/http"

    "github.com/gorilla/mux"
    "github.com/sirupsen/logrus"
)

func main() {
    router := mux.NewRouter()
    router.HandleFunc("/", forwarder).Methods("POST")

    server := http.Server{
        Handler: router,
        Addr:    ":8443",
    }

    logrus.Fatal(server.ListenAndServeTLS("cert.pem", "key.pem"))
}

var client = &http.Client{}

func forwarder(w http.ResponseWriter, r *http.Request) {
    // read request
    body, err := ioutil.ReadAll(r.Body)
    if err != nil {
        logrus.Error(err.Error())
        ServerError(w, nil)
        return
    }

    // create forwarding request
    req, err := http.NewRequest("POST", "http://localhost:8000", bytes.NewReader(body))
    if err != nil {
        logrus.Error(err.Error())
        ServerError(w, nil)
        return
    }

    resp, err := client.Do(req)
    if err != nil {
        logrus.Error(err.Error())
        ServerError(w, nil)
        return
    }

    // read response
    respBody, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        logrus.Error(err.Error())
        ServerError(w, nil)
        return
    }
    resp.Body.Close()

    // return response
    w.Header().Set("Content-Type", "application/json; charset=utf-8")
    w.WriteHeader(resp.StatusCode)
    w.Write(respBody)
}
From the client side I just measure the round-trip time. When I fire 100 requests per second the response time goes up quite fast.
It starts with a response time of about 50ms. After 10 seconds the response time is at 500ms. After 10 more seconds the response time is at 8000ms, and so on, until I get timeouts.
When I use nginx instead of my server there is no problem running 100 requests per second. Using nginx it stays at about 40ms per request.
Some observations:
using nginx: lsof -i | grep nginx
has no more than 2 connections open.
using my server: the number of connections increases up to 500, then connections in state SYN_SENT pile up, and then the requests run into timeouts.
Another finding: I measured the delay of this code line:
resp, err := client.Do(req)
That is where most of the time is spent, but that could also just be because the goroutines are starving!?
What I also tried:
r.Close = true (or KeepAlive = false)
I modified timeouts on the server side
I modified all this stuff on the http client used by my forward server (keepalive false, request.Close = true) etc.
I don't know why I get such bad performance.
My guess is that Go runs into problems because of the huge number of goroutines. Maybe most of the time is used up scheduling these goroutines, and so the latency goes up?
I also tried using the included httputil.NewSingleHostReverseProxy(). Performance is a little bit better, but still the same problem.
UPDATE:
Now I tried fasthttp:
package main

import (
    "github.com/sirupsen/logrus"
    "github.com/valyala/fasthttp"
)

func StartNodeManager() {
    fasthttp.ListenAndServeTLS(":8443", "cert.pem", "key.pem", forwarder)
}

var client = fasthttp.Client{}

func forwarder(ctx *fasthttp.RequestCtx) {
    resp := fasthttp.AcquireResponse()
    req := fasthttp.AcquireRequest()

    req.Header.SetMethod("POST")
    req.SetRequestURI("http://127.0.0.1:8000")
    req.SetBody(ctx.Request.Body())

    err := client.Do(req, resp)
    if err != nil {
        logrus.Error(err.Error())
        ctx.Response.SetStatusCode(500)
        return
    }

    ctx.Response.SetBody(resp.Body())

    fasthttp.ReleaseRequest(req)
    fasthttp.ReleaseResponse(resp)
}
A little bit better, but after 30 seconds the first timeouts arrive and the response time goes up to 5 seconds.
The root cause of the problem is that the Go http package is not managing connections to the upstream well here: response time keeps increasing because lots of connections get opened and then go into TIME_WAIT state.
So as the number of connections grows, performance drops.
You just have to set
// 1000 is what I am using
http.DefaultTransport.(*http.Transport).MaxIdleConns = 1000
http.DefaultTransport.(*http.Transport).MaxIdleConnsPerHost = 1000
in your forwarder and this will solve your problem.
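Equivalently, instead of mutating the package-level default transport, the forwarder's http.Client can be given its own Transport (a sketch using the same limits):

var client = &http.Client{
    Transport: &http.Transport{
        MaxIdleConns:        1000,
        MaxIdleConnsPerHost: 1000,
    },
}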
By the way, use the Go standard library reverse proxy (httputil.ReverseProxy); it will take away a lot of the headache.
But even with the reverse proxy you still need to set MaxIdleConns and MaxIdleConnsPerHost on its transport, as in the sketch below.
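A minimal sketch of that, reusing the question's local upstream on port 8000 and its TLS files (the specific values are assumptions for illustration):

package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
)

func main() {
    upstream, err := url.Parse("http://localhost:8000")
    if err != nil {
        log.Fatal(err)
    }

    proxy := httputil.NewSingleHostReverseProxy(upstream)

    // Give the proxy a transport that keeps plenty of idle connections
    // to the upstream instead of opening a new one per request.
    proxy.Transport = &http.Transport{
        MaxIdleConns:        1000,
        MaxIdleConnsPerHost: 1000,
    }

    server := &http.Server{
        Addr:    ":8443",
        Handler: proxy,
    }
    log.Fatal(server.ListenAndServeTLS("cert.pem", "key.pem"))
}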
First of all, you should profile your app and find out where the bottleneck is.
Second, I would look for ways to write code with less memory allocated on the heap and more on the stack.
A few ideas:
Do you need to read the request body for every request?
Do you always need to read the response body?
Can you pass the body of the client request straight through as the body of the upstream request? func NewRequest(method, url string, body io.Reader) (*Request, error)
Use sync.Pool (see the sketch after this list)
Consider using fasthttp, as it puts less pressure on the garbage collector
Check whether your server uses the same optimisations as nginx, e.g. Keep-Alive, caching, etc.
Again, profile and compare against nginx.
It seems there is a lot of room for optimization.
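A sketch of the sync.Pool idea applied to the forwarder from the question (it needs the bytes, io, net/http and sync imports and reuses the question's package-level client; the 64 KB buffer size and http.Error responses are assumptions, and it streams the upstream response instead of buffering it):

var bufPool = sync.Pool{
    New: func() interface{} {
        // 64 KB starting capacity; grows as needed and is reused afterwards.
        return bytes.NewBuffer(make([]byte, 0, 64*1024))
    },
}

func forwarder(w http.ResponseWriter, r *http.Request) {
    buf := bufPool.Get().(*bytes.Buffer)
    buf.Reset()
    defer bufPool.Put(buf)

    // Copy the request body into the pooled buffer instead of ioutil.ReadAll,
    // which allocates a fresh slice on every request.
    if _, err := io.Copy(buf, r.Body); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    req, err := http.NewRequest("POST", "http://localhost:8000", bytes.NewReader(buf.Bytes()))
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    resp, err := client.Do(req)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    defer resp.Body.Close()

    w.Header().Set("Content-Type", "application/json; charset=utf-8")
    w.WriteHeader(resp.StatusCode)
    io.Copy(w, resp.Body) // stream the response instead of buffering it
}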
I am trying to handle a context timeout for every request. We have the following server structures:
Flow overview:
Go Server: basically acts as a reverse proxy.
Auth Server: Check for requests Authentication.
Application Server: Core request processing logic.
Now, if the Authorization server isn't able to process a request in the stipulated time, I want to stop the corresponding goroutine and release its memory.
Here is what I tried:
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

req, _ := http.NewRequest("GET", authorizationServer, nil)
req.Header = r.Header
req.WithContext(ctx)

res, error := client.Do(req)

select {
case <-time.After(10 * time.Second):
    fmt.Println("overslept")
case <-ctx.Done():
    fmt.Println(ctx.Err()) // prints "context deadline exceeded"
}
Here the context returns "deadline exceeded" if the request is not processed in the stipulated time, but the request keeps being processed and a response is returned after more than the specified time. So how can I stop the request flow (the goroutine) when the time is exceeded?
I've also specified that the complete request needs to be processed within 60 seconds, with this code:
var netTransport = &http.Transport{
    Dial: (&net.Dialer{
        Timeout: 60 * time.Second,
    }).Dial,
    TLSHandshakeTimeout: 60 * time.Second,
}

client := &http.Client{
    Timeout:   time.Second * 60,
    Transport: netTransport,
    CheckRedirect: func(req *http.Request, via []*http.Request) error {
        return http.ErrUseLastResponse
    },
}
So do I need any separate context implementations as well?
Note 1: It would be awesome if we could manage the timeout of every request (goroutine) created by the HTTP server using context.
What happens in your code is correct and behaves as expected.
You create a context with a 5-second timeout. You pass it to the request and make that request. Let's say the request returns in 2 seconds. You then do a select and either wait 10 seconds or wait for the context to finish. The context will always finish 5 seconds after it was created and will give that error every time it reaches its end.
The context is independent of the request and will reach its deadline unless cancelled earlier; you cancel it when the function finishes, via the deferred cancel.
In your code the request does take your timeout into consideration, but ctx.Err() will return "deadline exceeded" whenever the timeout has been reached, since that is what has happened inside the context; calling ctx.Err() multiple times will return the same error.
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

go func() {
    select {
    case <-time.After(10 * time.Second):
        fmt.Println("overslept")
    case <-ctx.Done():
        fmt.Println(ctx.Err()) // prints "context deadline exceeded"
    }
}()

req, _ := http.NewRequest("GET", authorizationServer, nil)
req.Header = r.Header
// WithContext returns a copy of the request; assign it back, otherwise
// the context is never actually attached to the request.
req = req.WithContext(ctx)

res, error := client.Do(req)
From the context documentation:
// Err returns a non-nil error value after Done is closed. Err returns
// Canceled if the context was canceled or DeadlineExceeded if the
// context's deadline passed. No other values for Err are defined.
// After Done is closed, successive calls to Err return the same value.
In your code the timeout is always reached rather than cancelled, and that is why you receive DeadlineExceeded. Your code is correct except for the select part, which blocks until either 10 seconds pass or the context timeout is reached; in your case the context timeout is always reached first.
You should check the error returned by the client.Do call and not worry about the context error here; you are the one controlling the context. If the request times out (a case you should test, of course), a proper error will be returned for you to verify.
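A minimal sketch of that check inside the proxy handler, assuming Go 1.13+ (for errors.Is) and that w is the handler's ResponseWriter; the status codes chosen are just an example:

res, err := client.Do(req)
if err != nil {
    // On a context timeout the transport wraps the context error, so this detects it.
    if errors.Is(err, context.DeadlineExceeded) {
        // the Authorization server did not answer within the 5-second budget
        http.Error(w, "authorization timed out", http.StatusGatewayTimeout)
        return
    }
    // some other transport-level failure
    http.Error(w, err.Error(), http.StatusBadGateway)
    return
}
defer res.Body.Close()
// continue with res as usual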
I was doing some load testing earlier today and found something peculiar: sometimes an HTTP request does not die and keeps on firing. How can I correct that in my Go code? For instance, see the image below. I am load testing with 1,000 HTTP requests, but if you notice, the 1,000th request below takes 392,999 milliseconds (392 seconds), while the rest of the requests take 2.2 seconds on average. I have done the test multiple times and sometimes it hangs. This is my code:
func Home_streams(w http.ResponseWriter, r *http.Request) {
    var result string
    r.ParseForm()

    wg := sync.WaitGroup{}
    wg.Add(1)
    go func() {
        defer wg.Done()
        db.QueryRow("select json_build_object('Locations', array_to_json(array_agg(t))) from (SELECT latitudes,county,longitudes,"+
            "statelong,thirtylatmin,thirtylatmax,thirtylonmin,thirtylonmax,city"+
            " FROM zips where city='Orlando' ORDER BY city limit 5) t").Scan(&result)
    }()
    wg.Wait()

    fmt.Fprintf(w, result)
}
and I connect to the database with this code
func init() {
    var err error
    db, err = sql.Open("postgres", "Postgres Connection String")
    if err != nil {
        log.Fatal("Invalid DB config:", err)
    }
    if err = db.Ping(); err != nil {
        log.Fatal("DB unreachable:", err)
    }
}
I would say that about 10% of the time I load test, this issue happens, and the only way it stops is if I stop the requests manually; otherwise it keeps on going indefinitely. I wonder if maybe this issue is addressed here: https://medium.com/@nate510/don-t-use-go-s-default-http-client-4804cb19f779#.83uzpsp24 I am still learning my way around Go.
and the only way it stops is if I stop the requests manually otherwise
it keeps on going indefinitely
You're not showing your full code, but it seems like you're using the http.ListenAndServe convenience function, which doesn't set a default timeout. So what I assume is happening is that you're overloading your database server, and since your HTTP server isn't set to time out, it just keeps waiting for the database to respond.
Assuming all of this is correct, try doing something like this instead:
srv := &http.Server{
    // Addr and Handler are left at their zero values here (":http" and
    // http.DefaultServeMux); set them to match your existing setup.
    ReadTimeout:  5 * time.Second,
    WriteTimeout: 10 * time.Second,
}
srv.ListenAndServe()
I have a function that makes a call to an external API using a Go http.Client, parses the result, and uses the result in the template executed afterwards. Occasionally, the external API will respond slowly (~20s), and the template execution will fail citing "i/o timeout", or more specifically,
template: :1:0: executing "page.html" at <"\n\t\t\t\t\t\t\t\t\...>: write tcp 127.0.0.1:35107: i/o timeout
This always coincides with a slow API response, but there is always a valid response in the JSON object, so the http.Client is receiving a proper response. I am just wondering if anyone could point me towards what could be causing the i/o timeout in the ExecuteTemplate call.
I have tried ResponseHeaderTimeout and DisableKeepAlives in the client transport (both with and without those options) to no avail. I've also tried setting the request's auto-close value to true to no avail. A stripped-down version of the template generation code is below:
func viewPage(w http.ResponseWriter, r *http.Request) {
    tmpl := pageTemplate{}
    duration, _ := time.ParseDuration("120s")
    tr := &http.Transport{
        ResponseHeaderTimeout: duration,
        DisableKeepAlives:     true,
    }
    client := &http.Client{Transport: tr}

    req, _ := http.NewRequest("GET", "http://example.com/some_function", nil)
    req.Close = true
    resp, _ := client.Do(req)
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    var res api_response // some struct that matches the JSON response
    err = json.Unmarshal(body, &res)

    t, _ := template.New("page.html")
    err = t.ExecuteTemplate(w, "page.html", tmpl)
}
The timeout on this line:
err = t.ExecuteTemplate(w, "page.html", tmpl)
means that the outgoing response is timing out when being written to, so nothing you change in the locally created client should affect it. It also makes sense that a slow response from that client increases the chance of a timeout on w, since the deadline is set when the response is created, before your handler is called, so slow activity in your handler increases the chance of a timeout.
There's no write timeout on the http.Server instance used by http.ListenAndServe, so you must be setting the Server.WriteTimeout field explicitly on the created server.
As a side note, there are errors being ignored in that handler, which is a strongly discouraged practice.
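For illustration, a sketch of the kind of server configuration that produces this error; the 15-second value and the handler wiring are assumptions for the example, not taken from the question:

srv := &http.Server{
    Addr:    ":8080",
    Handler: http.HandlerFunc(viewPage),
    // The write deadline is reset when a request's headers are read, before
    // viewPage runs, so a ~20s upstream API call plus template execution can
    // exceed it, and writes to w then fail with "i/o timeout".
    WriteTimeout: 15 * time.Second,
}
log.Fatal(srv.ListenAndServe())

Raising or removing WriteTimeout on that server, rather than tuning the outgoing client, is what affects this particular error.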