How to use singleflight to share a large downloaded file? - go

I'm proxying a bunch of HTTP GET calls through singleflight, but the returned response is only seen by the first request.
I also noticed a problem in my test: if the first request times out, the response is lost.
Let's say r1, r2, r3 are requests that come in that order. They are all grouped under one groupKey. If r1 times out, r2 and r3 will wait until the shared HTTP call returns or until their own timeouts elapse.
proxy code (credits to here)
// add auth to the request and proxy to the target host
var serveReverseProxy = func(target string, res http.ResponseWriter, req *http.Request) {
    log.Println("new request!")
    requestURL, _ := url.Parse(target) // error ignored for brevity
    proxy := httputil.NewSingleHostReverseProxy(requestURL)
    req1, _ := http.NewRequest(req.Method, req.RequestURI, req.Body)
    for k, v := range req.Header {
        for _, vv := range v {
            req1.Header.Add(k, vv)
        }
    }
    req1.Header.Set("Authorization", "Bearer "+"some token")
    req1.Host = requestURL.Host
    proxy.ServeHTTP(res, req1)
}
var requestGroup singleflight.Group
mockBackend := httptest.NewServer(http.HandlerFunc(func(res http.ResponseWriter, req *http.Request) {
    groupKey := req.Host + req.RequestURI
    name := req.Header.Get("From")
    ch := requestGroup.DoChan(groupKey, func() (interface{}, error) {
        //increase key retention to 20s to make sure r1,r2,r3 are all in one group
        go func() {
            time.Sleep(20 * time.Second)
            requestGroup.Forget(groupKey)
            log.Println("Key deleted :", groupKey)
        }()
        // proxy to some host and expect the result to be written in res
        serveReverseProxy("https://somehost.com", res, req)
        return nil, nil
    })
    timeout := time.After(15 * time.Second)
    var result singleflight.Result
    select {
    case <-timeout: // Timeout elapsed, send a timeout message (504)
        log.Println(name, " timed out")
        http.Error(res, "request timed out", http.StatusGatewayTimeout)
        return
    case result = <-ch: // Received result from channel
    }
    if result.Err != nil {
        http.Error(res, result.Err.Error(), http.StatusInternalServerError)
        return
    }
    if result.Shared {
        log.Println(name, " is shared")
    } else {
        log.Println(name, " not shared")
    }
}))
I'd expect r2 and r3 to either at least see the result via their own ResponseWriter, or to time out along with r1.

https://github.com/golang/net/blob/master/http2/h2demo/h2demo.go#L181-L219
This works. It turns out I need to return a handler from singleflight.Group.Do instead of writing the response inside it. I didn't know why at first.
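The reason, for the record: the function passed to singleflight runs only once per key, so any ResponseWriter it captures belongs to the first caller, and r2/r3 never get anything written to their own writers. The shared function should return the data (or a handler that can serve it, as in the h2demo link), and each caller should then write that result to its own ResponseWriter. A minimal sketch of that pattern, assuming the upstream host from the question and that buffering the whole body in memory is acceptable:
package main

import (
    "io/ioutil"
    "log"
    "net/http"

    "golang.org/x/sync/singleflight"
)

var group singleflight.Group

func proxyHandler(w http.ResponseWriter, r *http.Request) {
    key := r.Host + r.RequestURI
    // The function runs once per key; concurrent callers share its return value.
    v, err, shared := group.Do(key, func() (interface{}, error) {
        resp, err := http.Get("https://somehost.com" + r.RequestURI) // assumed upstream
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return ioutil.ReadAll(resp.Body) // return the bytes, don't write to w here
    })
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadGateway)
        return
    }
    log.Println("shared:", shared)
    // Every caller writes the shared bytes to its *own* ResponseWriter.
    w.Write(v.([]byte))
}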

Related

Rate limiter with gorilla mux

I am trying to implement an HTTP request limiter that allows 10 requests per second per user, keyed by username.
At most 10 requests should be in flight on the server at once, including requests that are still being processed.
Below is what I have implemented with reference to rate-limit.
// declarations implied by the code below (omitted in the original snippet)
type visitor struct {
    limiter  *rate.Limiter
    lastSeen time.Time
}

var (
    visitors = make(map[string]*visitor)
    mu       sync.Mutex
)

func init() {
    go cleanupVisitors()
}

func getVisitor(username string) *rate.Limiter {
    mu.Lock()
    defer mu.Unlock()
    v, exists := visitors[username]
    if !exists {
        limiter := rate.NewLimiter(10, 3)
        visitors[username] = &visitor{limiter, time.Now()}
        return limiter
    }
    v.lastSeen = time.Now()
    return v.limiter
}

func cleanupVisitors() {
    for {
        time.Sleep(time.Minute)
        mu.Lock()
        for username, v := range visitors {
            if time.Since(v.lastSeen) > 1*time.Minute {
                delete(visitors, username)
            }
        }
        mu.Unlock()
    }
}
func limit(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        mappedArray := hotelapi.SearchResponse{}
        // note: Go's reference layout is "2006-01-02 15:04:05";
        // this layout prints year-day-month
        mappedArray.StartTime = time.Now().Format("2006-02-01 15:04:05.000000")
        mappedArray.EndTime = time.Now().Format("2006-02-01 15:04:05.000000")
        userName := r.FormValue("username")
        limiter := getVisitor(userName)
        if !limiter.Allow() {
            w.Header().Set("Content-Type", "application/json")
            w.WriteHeader(http.StatusTooManyRequests)
            mappedArray.MessageInfo = http.StatusText(http.StatusTooManyRequests)
            mappedArray.ErrorCode = strconv.Itoa(http.StatusTooManyRequests)
            json.NewEncoder(w).Encode(mappedArray)
            return
        }
        next.ServeHTTP(w, r)
    })
}
func route() {
    r := mux.NewRouter()
    r.PathPrefix("/hello").HandlerFunc(api.ProcessHello).Methods("GET")
    ws := r.PathPrefix("/index.php").HandlerFunc(api.ProcessWs).Methods("GET", "POST").Subrouter()
    r.Use(panicRecovery)
    ws.Use(limit)
    http.HandleFunc("/favicon.ico", faviconHandler)
    if config.HTTPSEnabled {
        err := http.ListenAndServeTLS(":"+config.Port, config.HTTPSCertificateFilePath, config.HTTPSKeyFilePath, handlers.CompressHandlerLevel(r, gzip.BestSpeed))
        if err != nil {
            fmt.Println(err)
            log.Println(err)
        }
    } else {
        err := http.ListenAndServe(":"+config.Port, handlers.CompressHandler(r))
        if err != nil {
            fmt.Println(err)
            log.Println(err)
        }
    }
}
I have a couple of concerns here.
I want the limiter only for /index.php and not for /hello. I implemented this with a subrouter. Is that the correct way?
The limit middleware is not limiting as I assumed: it allows 1 successful request, and all other requests are returned with a too-many-requests error.
What am I missing here?
The subrouter pattern is the solution gorilla proposes. A small organizational suggestion, though:
r := mux.NewRouter()
r.HandleFunc("/hello", api.ProcessHello).Methods("GET")
r.HandleFunc("/favicon.ico", faviconHandler)
r.Use(panicRecovery)
ws := r.PathPrefix("/index.php").Subrouter()
ws.Use(limit)
ws.HandleFunc("", api.ProcessWs).Methods("GET", "POST")
You seem to be applying your middleware not only via the Use() method but also by wrapping the handler passed to ListenAndServe. From the same gorilla example, a clearer way to approach this is:
server := &http.Server{
    Addr: "0.0.0.0:8080",
    // Good practice to set timeouts to avoid Slowloris attacks.
    WriteTimeout: time.Second * 15,
    ReadTimeout:  time.Second * 15,
    IdleTimeout:  time.Second * 60,
    Handler:      router, // Pass our instance of gorilla/mux in.
}
fmt.Println("starting server")
if err := server.ListenAndServe(); err != nil {
    fmt.Println(err)
}
Also, from your source, the pattern you are implementing rate-limits per user, but you key on usernames instead of IPs. Your question also doesn't clarify whether you want to rate-limit per user or limit how many requests can hit the endpoint overall, so some of the unexpected behavior may come from that too.
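One more likely culprit for "allows 1 successful request": rate.NewLimiter(10, 3) creates a bucket with a burst of only 3 tokens, so a batch of simultaneous requests drains it immediately even though the refill rate is 10/s. If the requirement is 10 requests per second with up to 10 in flight, the burst should be 10 as well. A minimal sketch under that assumption (the cleanup goroutine from the question stays the same and is omitted):
package main

import (
    "net/http"
    "sync"
    "time"

    "golang.org/x/time/rate"
)

type visitor struct {
    limiter  *rate.Limiter
    lastSeen time.Time
}

var (
    visitors = make(map[string]*visitor)
    mu       sync.Mutex
)

func getVisitor(username string) *rate.Limiter {
    mu.Lock()
    defer mu.Unlock()
    v, ok := visitors[username]
    if !ok {
        // 10 tokens per second, bucket size 10: up to 10 requests at once
        lim := rate.NewLimiter(rate.Limit(10), 10)
        visitors[username] = &visitor{lim, time.Now()}
        return lim
    }
    v.lastSeen = time.Now()
    return v.limiter
}

func limit(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if !getVisitor(r.FormValue("username")).Allow() {
            http.Error(w, http.StatusText(http.StatusTooManyRequests), http.StatusTooManyRequests)
            return
        }
        next.ServeHTTP(w, r)
    })
}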

Too many open files serving http

I have the following code
package main

import (
    "bytes"
    "fmt"
    "io/ioutil" // needed by the body handling below
    "log"
    "net/http"
    "strings" // needed by the string replacements below
    "time"

    httprouter "github.com/fasthttp/router"
    "github.com/gorilla/mux"
    "github.com/valyala/fasthttp"
)
func main() {
    router := mux.NewRouter().StrictSlash(true)
    /*router := NewRouter()*/
    router.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        _, _ = fmt.Fprintf(w, "Hello!!!")
    })
    router.HandleFunc("/{name}", func(w http.ResponseWriter, r *http.Request) {
        vars := mux.Vars(r)
        prepare(w, r, vars["name"])
    }).Methods("POST")
    log.Fatal(http.ListenAndServe(fmt.Sprintf(":%d", 8080), router))
}

//using fasthttp
func _() {
    router := httprouter.New()
    router.GET("/", func(w *fasthttp.RequestCtx) {
        _, _ = fmt.Fprintf(w, "Hello!!!")
    })
    router.POST("/:name", func(w *fasthttp.RequestCtx) {
        prepareRequest(w, w.UserValue("name").(string))
    })
    log.Fatal(fasthttp.ListenAndServe(fmt.Sprintf(":%d", 8080), router.Handler))
}
//func prepare(w *fasthttp.RequestCtx, name string)
func prepare(w http.ResponseWriter, r *http.Request, name string) {
    //other parts of the code and the call to the goroutines
    var urls []string
    //let's say all the urls are loaded; call the goroutine func, wait for the
    //channel to respond, and then proceed with the responses of all urls
    results := callUrls(urls) //there are at least 10 urls to call simultaneously for each request, every time
    process(w, results)
}
type Response struct {
    status int
    url    string
    body   string
}
func callUrls(urls []string) []*Response {
    ch := make(chan *Response, len(urls))
    for _, url := range urls {
        go func(url string) {
            // http POST on url; based on the status code of the call,
            // build the Response, something like:
            // (somePostData is defined elsewhere)
            req, err := http.NewRequest("POST", url, bytes.NewBuffer(somePostData))
            if err != nil {
                ch <- &Response{0, url, ""}
                return
            }
            req.Header.Set("Content-Type", "application/json")
            req.Close = true
            client := &http.Client{
                Timeout: 100 * time.Millisecond,
            }
            response, err := client.Do(req)
            //Using fasthttp client
            /*req := fasthttp.AcquireRequest()
            req.SetRequestURI(url)
            req.Header.Set("Content-Type", "application/json")
            req.Header.SetMethod("POST")
            req.SetBody(somePostData)
            response := fasthttp.AcquireResponse()
            client := &fasthttp.Client{
                ReadTimeout: 100 * time.Millisecond,
            }
            err := client.Do(req, response)*/
            if err != nil {
                // on error there is no response body to drain or close
                ch <- &Response{0, url, ""}
                return
            }
            // success: read the body once, then close it so the
            // connection can be released
            body, _ := ioutil.ReadAll(response.Body)
            _ = response.Body.Close()
            strBody := string(body)
            strBody = strings.Replace(strBody, "\r", "", -1)
            strBody = strings.Replace(strBody, "\n", "", -1)
            // return to channel accordingly
            ch <- &Response{response.StatusCode, url, strBody}
        }(url)
    }
    var results []*Response
    for {
        select {
        case r := <-ch:
            results = append(results, r)
            if len(results) == len(urls) {
                //Done
                close(ch)
                return results
            }
        }
    }
}
//func process(w *fasthttp.RequestCtx, results []*Response) {
func process(w http.ResponseWriter, results []*Response) {
    fmt.Println("response", "response body")
}
After serving a few requests on a multi-core CPU (around 4000-6000 requests coming in per second) I get a "too many open files" error, and response time and CPU go beyond limits. (Could the CPU be high because I convert bytes to string a few times to replace a few characters? Any suggestions?)
I have seen other questions referring to closing the request/response body and/or setting sysctl or ulimit to higher values. I did follow those, but I always end up with the error.
Config on the server:
/etc/sysctl.conf: net.ipv4.tcp_tw_recycle = 1
open files (-n): 65535
I need the code to respond in milliseconds, but it takes up to 50s when CPU is high.
I have tried both net/http and fasthttp, but with no improvement. My Node.js request npm module does everything perfectly on the same server. What is the best way to handle those connections, or what change is needed in the code?
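Two standard mitigations apply here regardless of library, sketched below: reuse a single http.Client (each goroutine above creates its own &http.Client{}, and req.Close = true forces a fresh socket per call, which defeats connection pooling), and bound the number of concurrent outbound requests so descriptors stay capped. The pool sizes and semaphore capacity below are example values, not tuned numbers:
package main

import (
    "net/http"
    "time"
)

// one shared client: its Transport pools and reuses connections
var client = &http.Client{
    Timeout: 100 * time.Millisecond,
    Transport: &http.Transport{
        MaxIdleConns:        200,
        MaxIdleConnsPerHost: 100, // the default of 2 is far too low for this load
    },
}

// sem bounds in-flight outbound requests; 256 is an arbitrary example cap
var sem = make(chan struct{}, 256)

func callURL(url string) (*http.Response, error) {
    sem <- struct{}{}        // acquire a slot
    defer func() { <-sem }() // release it when done
    return client.Get(url)   // the real code would POST; caller must close resp.Body
}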
You can use the following library:
Requests: A Go library to reduce the headache when making HTTP requests (20k/s req)
https://github.com/alessiosavi/Requests
It was developed to solve the "too many open files" error when dealing with parallel requests.
The idea is to allocate a list of requests, then send them with a configurable "parallel" factor that allows only N requests to run at a time.
Initialize the requests (you already have a set of urls):
// This array will contain the list of requests
var reqs []requests.Request
// N is the number of requests to run in parallel; in order to avoid
// "too many open files", N has to be lower than the ulimit threshold
var N int = 12
// Create the list of requests
for i := 0; i < 1000; i++ {
    // In this case, we init 1000 requests with the same URL, METHOD, BODY, HEADERS
    req, err := requests.InitRequest("https://127.0.0.1:5000", "GET", nil, nil, true)
    if err != nil {
        // Request is not compliant and will not be added to the list
        log.Println("Skipping request [", i, "]. Error: ", err)
    } else {
        // If no error occurs, append the request to the list we need to send
        reqs = append(reqs, *req)
    }
}
At this point, we have a list that contains the requests that have to be sent.
Let's send them in parallel!
// This array will contain the responses from the given requests
var response []datastructure.Response
// send the requests, running N of them in parallel
response = requests.ParallelRequest(reqs, N)
// Print the responses
for i := range response {
    // Dump is a method that prints every piece of information related to the response
    log.Println("Request [", i, "] -> ", response[i].Dump())
    // Or use the data present in the response
    log.Println("Headers: ", response[i].Headers)
    log.Println("Status code: ", response[i].StatusCode)
    log.Println("Time elapsed: ", response[i].Time)
    log.Println("Error: ", response[i].Error)
    log.Println("Body: ", string(response[i].Body))
}
You can find example usage in the example folder of the repository.
SPOILER:
I'm the author of this little library

Go Routine: Shared Global variable in web server

I have a Go web server running on a port, handling POST requests; internally it calls different URLs to fetch responses using goroutines and then proceeds.
I have divided the whole flow into different methods. Draft of the code:
package main

import (
    "bytes"
    "fmt"
    "log"
    "net/http"
    "time"

    "github.com/gorilla/mux"
)

var status_codes string

func main() {
    router := mux.NewRouter().StrictSlash(true)
    /*router := NewRouter()*/
    router.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        _, _ = fmt.Fprintf(w, "Hello!!!")
    })
    router.HandleFunc("/{name}", func(w http.ResponseWriter, r *http.Request) {
        vars := mux.Vars(r)
        prepare(w, r, vars["name"])
    }).Methods("POST")
    log.Fatal(http.ListenAndServe(fmt.Sprintf(":%d", 8080), router))
}
func prepare(w http.ResponseWriter, r *http.Request, name string) {
    //initializing for the current request; this variable needs to be
    //maintained for each incoming request
    status_codes = ""
    //other parts of the code and the call to the goroutines
    var urls []string
    //let's say all the urls are loaded; call the goroutine func, wait for the
    //channel to respond, and then proceed with the responses of all urls
    results := callUrls(urls)
    process(w, results)
}

type Response struct {
    status int
    url    string
    body   string
}
func callUrls(urls []string) []*Response {
    ch := make(chan *Response, len(urls))
    for _, url := range urls {
        go func(url string) {
            // http POST on url; based on the status code of the call,
            // append to status_codes, something like:
            // (somePostData is defined elsewhere)
            req, err := http.NewRequest("POST", url, bytes.NewBuffer(somePostData))
            req.Header.Set("Content-Type", "application/json")
            req.Close = true
            client := &http.Client{
                Timeout: 100 * time.Second,
            }
            response, err := client.Do(req)
            if err != nil {
                status_codes += "500,"
            } else {
                status_codes += "200,"
                _ = response //do other things with the response received
            }
            // return to channel accordingly
            ch <- &Response{200, "url", "response body"}
        }(url)
    }
    var results []*Response
    for {
        select {
        case r := <-ch:
            results = append(results, r)
            if len(results) == len(urls) {
                //Done
                close(ch)
                return results
            }
        }
    }
}
}
func process(w http.ResponseWriter, results []*Response) {
    //read the status codes received from all url calls for the given request
    fmt.Println("status", status_codes)
    //The above line keeps accumulating status codes from other requests as well.
    //For example, if I have called 5 urls then it should print
    //200,500,204,404,200,
    //but instead it is
    //200,500,204,404,200,204,404,200,204,404,200, and it keeps growing with time
}
The above code does the following:
the variable is declared globally and initialized in the prepare function;
values are appended to it in the callUrls goroutines;
the variable is read in the process function.
Now, should I pass those globally declared variables into each function call so they become local and are no longer shared? (I would hate to do this.)
Or is there any other approach to achieve the same thing without adding more arguments to the functions being called?
I will have a few other string and int values as well that are used across the program, including in the goroutine functions.
What is the correct way to make them thread-safe, so that each request coming in on the port gets exactly its own 5 status codes?
Don't use global variables, be explicit instead and use function arguments. Moreover, you have a race condition on status_codes because it is accessed by multiple goroutines without any mutex lock.
Take a look at my fix below.
func prepare(w http.ResponseWriter, r *http.Request, name string) {
    var urls []string
    //status_codes is populated by callUrls(), so let it return the slice with values
    results, status_codes := callUrls(urls)
    //process() needs status_codes in order to work, so pass the variable explicitly
    process(w, results, status_codes)
}
type Response struct {
    status int
    url    string
    body   string
}
func callUrls(urls []string) ([]*Response, []string) {
    ch := make(chan *Response, len(urls))
    //In order to avoid the race condition, let's use a channel
    statusChan := make(chan string, len(urls))
    for _, url := range urls {
        go func(url string) {
            // http POST on url; based on the status code of the call,
            // send to statusChan, something like:
            // (somePostData is defined elsewhere)
            req, err := http.NewRequest("POST", url, bytes.NewBuffer(somePostData))
            req.Header.Set("Content-Type", "application/json")
            req.Close = true
            client := &http.Client{
                Timeout: 100 * time.Second,
            }
            response, err := client.Do(req)
            if err != nil {
                statusChan <- "500"
            } else {
                statusChan <- "200"
                _ = response //do other things with the response received
            }
            // return to channel accordingly
            ch <- &Response{200, "url", "response body"}
        }(url)
    }
    var results []*Response
    var status_codes []string
    doneRes, doneStatus := false, false
    for !doneRes || !doneStatus { //continue until both slices are filled with values
        select {
        case r := <-ch:
            results = append(results, r)
            if len(results) == len(urls) {
                //Done
                close(ch)      //Not really needed here
                doneRes = true //we are done with results, set the corresponding flag
            }
        case status := <-statusChan:
            status_codes = append(status_codes, status)
            if len(status_codes) == len(urls) {
                //Done
                close(statusChan) //Not really needed here
                doneStatus = true //we are done with statusChan, set the corresponding flag
            }
        }
    }
    return results, status_codes
}
func process(w http.ResponseWriter, results []*Response, status_codes []string) {
    fmt.Println("status", status_codes)
}
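A simpler variant of the same fix, sketched under the question's placeholder assumptions: since each goroutine already sends a *Response on ch, the status can travel inside that struct, which removes the second channel and the done flags entirely:
package main

import "net/http"

type Response struct {
    status int
    url    string
    body   string
}

func callUrls(urls []string) []*Response {
    ch := make(chan *Response, len(urls))
    for _, url := range urls {
        go func(url string) {
            status := 200
            // headers and POST body omitted in this sketch
            resp, err := http.Post(url, "application/json", nil)
            if err != nil {
                status = 500
            } else {
                resp.Body.Close()
            }
            // the status code rides along with the rest of the result
            ch <- &Response{status, url, "response body"}
        }(url)
    }
    results := make([]*Response, 0, len(urls))
    for range urls {
        results = append(results, <-ch)
    }
    return results
}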

how to make a proxy by golang

I'm trying to build a proxy in Go.
The original version is written in Lua and nginx config, like this:
location / {
    keepalive_timeout 3600s;
    keepalive_requests 30000;
    rewrite_by_lua_file ./test.lua;
    proxy_pass http://www.example.com/bd/news/home;
}
and the Lua file like this:
local req_params = ngx.req.get_uri_args()
local args = {
    media = 24,
    submedia = 46,
    os = req_params.os,
    osv = req_params.osv,
    make = req_params.make,
    model = req_params.model,
    devicetype = req_params.devicetype,
    conn = req_params.conn,
    carrier = req_params.carrier,
    sw = req_params.w,
    sh = req_params.h,
}
if tonumber(req_params.os) == 1 then
    args.imei = req_params.imei
    args.adid = req_params.android_id
end
ngx.req.set_uri_args(args)
I tried to do the same thing in Go, and my code is like this:
const newsTargetURL = "http://www.example.com/bd/news/home"

func GetNews(w http.ResponseWriter, r *http.Request) {
    if r.Method != http.MethodGet {
        http.Error(w, "only get allowed", http.StatusMethodNotAllowed)
        return
    }
    // deal with params
    rq := r.URL.Query()
    os := rq.Get("os")
    osv := rq.Get("osv")
    imei := rq.Get("imei")
    androidID := rq.Get("android_id")
    deviceMake := rq.Get("make")
    model := rq.Get("model")
    deviceType := rq.Get("devicetype")
    sw := rq.Get("w")
    sh := rq.Get("h")
    conn := rq.Get("conn")
    carrier := rq.Get("carrier")
    ip := r.RemoteAddr // ip and ua were undefined in the original snippet
    ua := r.UserAgent()
    uv := make(url.Values)
    uv.Set("media", "24")
    uv.Set("submedia", "46")
    uv.Set("os", os)
    uv.Set("osv", osv)
    if os == "1" {
        uv.Set("imei", imei)
        uv.Set("adid", androidID) // "adid" matches args.adid in the Lua version
    }
    uv.Set("make", deviceMake)
    uv.Set("model", model)
    uv.Set("sw", sw)
    uv.Set("sh", sh)
    uv.Set("devicetype", deviceType)
    uv.Set("ip", ip)
    uv.Set("ua", ua)
    uv.Set("conn", conn)
    uv.Set("carrier", carrier)
    t := newsTargetURL + "?" + uv.Encode()
    // make a director
    director := func(req *http.Request) {
        u, err := url.Parse(t)
        if err != nil {
            panic(err)
        }
        req.URL = u
        req.Host = u.Host // also rewrite the Host header for the upstream
    }
    // make a proxy
    proxy := &httputil.ReverseProxy{Director: director}
    proxy.ServeHTTP(w, r)
}

func main() {
    mux := http.NewServeMux()
    mux.Handle("/", http.HandlerFunc(GetNews))
    srv := &http.Server{
        Addr:    ":2222",
        Handler: mux,
    }
    srv.ListenAndServe()
}
I deployed this Go version to the same server where the Lua version runs, but it does not work as the Lua version does. I read the httputil documentation but found nothing that helps. What do I need to do?
I put together a simple proxy for GET requests. Hope this helps.
package main

import (
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
)

const newsTargetURL = "http://www.example.com/bd/news/home"

func main() {
    mux := http.NewServeMux()
    mux.Handle("/", http.HandlerFunc(GetNews))
    srv := &http.Server{
        Addr:    ":2222",
        Handler: mux,
    }
    // output error and quit if ListenAndServe fails
    log.Fatal(srv.ListenAndServe())
}

func GetNews(w http.ResponseWriter, r *http.Request) {
    if r.Method != http.MethodGet {
        http.Error(w, "only get allowed", http.StatusMethodNotAllowed)
        return
    }
    // build proxy url
    urlstr := fmt.Sprintf("%s?%s", newsTargetURL, r.URL.RawQuery)
    // request the proxy url
    resp, err := http.Get(urlstr)
    if err != nil {
        http.Error(w, fmt.Sprintf("error creating request to %s", urlstr), http.StatusInternalServerError)
        return
    }
    // make sure body gets closed when this function exits
    defer resp.Body.Close()
    // read entire response body
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        http.Error(w, "error reading response body", http.StatusInternalServerError)
        return
    }
    // write status code and body from proxy request into the answer
    w.WriteHeader(resp.StatusCode)
    w.Write(body)
}
You can try it as-is; it will work and show the content of example.com.
It uses a single handler, GetNews, for all requests. It skips all of the request parameter parsing and building by simply combining newsTargetURL with r.URL.RawQuery to build the new url.
Then we make a request to the new url (the main part missing in your question). From the response we read resp.StatusCode and resp.Body to use in our response to the original request.
The rest is error handling.
The sample does not forward any additional information like cookies, headers, etc. That can be added as needed.
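For instance, forwarding the upstream response headers is a small addition to the end of GetNews; the copy has to happen before WriteHeader, because the headers are flushed at that point:
// copy upstream response headers before writing the status code
for k, vals := range resp.Header {
    for _, v := range vals {
        w.Header().Add(k, v)
    }
}
w.WriteHeader(resp.StatusCode)
w.Write(body)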

Go http, send incoming http.Request to another server using client.Do

Here is my use case.
We have one service, "foobar", which has two versions: legacy and version_2_of_doom (both in Go).
In order to make the transition from legacy to version_2_of_doom, we would like, at first, to have the two versions running side by side, and have the POST requests (there's only one POST API call here) received by both.
The way I see how to do it would be
modifying the code of legacy at the beginning of the handler, in order to duplicate the request to version_2_of_doom:
func(w http.ResponseWriter, req *http.Request) {
    req.URL.Host = "v2ofdoom.local:8081"
    req.Host = "v2ofdoom.local:8081"
    client := &http.Client{}
    client.Do(req)
    // legacy code
but it seems not to be as straightforward as this:
it fails with http: Request.RequestURI can't be set in client requests.
Is there a well-known method for this kind of action (i.e. transferring an http.Request untouched to another server)?
You need to copy the values you want into a new request. Since this is very similar to what a reverse proxy does, you may want to look at what "net/http/httputil" does for ReverseProxy.
Create a new request, and copy only the parts of the request you want to send to the next server. You will also need to read and buffer the request body if you intend to use it in both places:
func handler(w http.ResponseWriter, req *http.Request) {
    // we need to buffer the body if we want to read it here and send it
    // in the request.
    body, err := ioutil.ReadAll(req.Body)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    // you can reassign the body if you need to parse it as multipart
    req.Body = ioutil.NopCloser(bytes.NewReader(body))
    // create a new url from the raw RequestURI sent by the client
    // (proxyScheme and proxyHost are configuration values defined elsewhere)
    url := fmt.Sprintf("%s://%s%s", proxyScheme, proxyHost, req.RequestURI)
    proxyReq, err := http.NewRequest(req.Method, url, bytes.NewReader(body))
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    // We may want to filter some headers, otherwise we could just use a shallow copy
    // proxyReq.Header = req.Header
    proxyReq.Header = make(http.Header)
    for h, val := range req.Header {
        proxyReq.Header[h] = val
    }
    // httpClient is a shared *http.Client defined elsewhere
    resp, err := httpClient.Do(proxyReq)
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadGateway)
        return
    }
    defer resp.Body.Close()
    // legacy code
}
In my experience, the easiest way to achieve this was to simply create a new request and copy all request attributes that you need into the new request object:
func(rw http.ResponseWriter, req *http.Request) {
    // copy the URL value so the original request is not modified
    url := *req.URL
    url.Host = "v2ofdoom.local:8081"
    url.Scheme = "http" // incoming server requests have no scheme set
    proxyReq, err := http.NewRequest(req.Method, url.String(), req.Body)
    if err != nil {
        // handle error
    }
    proxyReq.Host = req.Host // the Host header is carried in Request.Host
    proxyReq.Header.Set("X-Forwarded-For", req.RemoteAddr)
    for header, values := range req.Header {
        for _, value := range values {
            proxyReq.Header.Add(header, value)
        }
    }
    client := &http.Client{}
    proxyRes, err := client.Do(proxyReq)
    // and so on...
This approach has the benefit of not modifying the original request object (maybe your handler function or any middleware functions that are living in your stack still need the original object?).
Using the original request (copy or duplicate it only if the original request is still needed):
func handler(w http.ResponseWriter, r *http.Request) {
    // Step 1: rewrite URL
    URL, _ := url.Parse("https://full_generic_url:123/x/y")
    r.URL.Scheme = URL.Scheme
    r.URL.Host = URL.Host
    r.URL.Path = singleJoiningSlash(URL.Path, r.URL.Path)
    r.RequestURI = ""
    // Step 2: adjust Header
    r.Header.Set("X-Forwarded-For", r.RemoteAddr)
    // note: client should be created outside the current handler()
    client := &http.Client{}
    // Step 3: execute request
    resp, err := client.Do(r)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    // Step 4: copy payload to response writer
    copyHeader(w.Header(), resp.Header)
    w.WriteHeader(resp.StatusCode)
    io.Copy(w, resp.Body)
    resp.Body.Close()
}
// copyHeader and singleJoiningSlash are copied from "net/http/httputil/reverseproxy.go"
func copyHeader(dst, src http.Header) {
    for k, vv := range src {
        for _, v := range vv {
            dst.Add(k, v)
        }
    }
}

func singleJoiningSlash(a, b string) string {
    aslash := strings.HasSuffix(a, "/")
    bslash := strings.HasPrefix(b, "/")
    switch {
    case aslash && bslash:
        return a + b[1:]
    case !aslash && !bslash:
        return a + "/" + b
    }
    return a + b
}
I've seen the accepted answer, but I would like to say that I don't like it. I used that code for months and it worked, but after some time you encounter requests that break (POST requests in my case). My preferred solution is the following:
r.URL.Scheme = "http" // incoming server requests have an empty scheme
r.URL.Host = "example.com"
r.RequestURI = ""
client := &http.Client{}
delete(r.Header, "Accept-Encoding")
delete(r.Header, "Content-Length")
resp, err := client.Do(r.WithContext(context.Background()))
if err != nil {
    return nil, err
}
return resp, nil
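For the literal question asked (a well-known method to transfer an http.Request to another server), the standard library already packages all of the above; a minimal sketch with net/http/httputil, using the host from the question as a placeholder target:
package main

import (
    "net/http"
    "net/http/httputil"
    "net/url"
)

func main() {
    // NewSingleHostReverseProxy rewrites scheme and host for each request,
    // and ReverseProxy adds X-Forwarded-For automatically
    target, _ := url.Parse("http://v2ofdoom.local:8081")
    proxy := httputil.NewSingleHostReverseProxy(target)
    http.ListenAndServe(":8080", proxy)
}
Note this proxies rather than duplicates: for the mirror-to-both-versions use case, the buffering approach from the accepted answer is still needed.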
