How to mock the second try of an HTTP call in Go?

As part of my first project I am creating a tiny library to send an SMS to any user. I have added logic to wait and retry if it doesn't receive a positive status on the first go. It's a basic HTTP call to an SMS-sending service. My algorithm looks like this (the comments explain the flow of the code):
for {
    //send request
    resp, err := HTTPClient.Do(req)
    checkOK, checkSuccessUrl, checkErr := CheckSuccessStatus(resp, err)
    //if successful don't continue
    if !checkOK && checkErr != nil {
        err = checkErr
        return resp, SUCCESS, int8(RetryMax-remain+1), err
    }
    remain = remain - 1
    if remain == 0 {
        break
    }
    //calculate wait time
    wait := Backoff(RetryWaitMin, RetryWaitMax, RetryMax-remain, resp)
    //wait for time calculated in backoff above
    time.Sleep(wait)
    //check the status of last call, if unsuccessful then continue the loop
    if checkSuccessUrl != "" {
        req, err := GetNotificationStatusCheckRequest(checkSuccessUrl)
        resp, err := HTTPClient.Do(req)
        checkOK, _, checkErr = CheckSuccessStatusBeforeRetry(resp, err)
        if !checkOK {
            if checkErr != nil {
                err = checkErr
            }
            return resp, SUCCESS, int8(RetryMax-remain), err
        }
    }
}
Now I want to test this logic using any HTTP mock framework available. The best I've got is https://github.com/jarcoal/httpmock
But this one does not seem to provide a way to mock the responses of the first and second calls separately. Hence I cannot test success on the second or third retry; I can either test success on the first go or failure altogether.
Is there a package out there which suits my needs for testing this particular feature? If not, how can I achieve this using current tools?

This can easily be achieved with the test server in the standard library's httptest package. With a slight modification to the example it contains, you can set up a function for each of the responses you want up front:
package main

import (
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
    "net/http/httptest"
)

func main() {
    responseCounter := 0
    responses := []func(w http.ResponseWriter, r *http.Request){
        func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "First response")
        },
        func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "Second response")
        },
    }
    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        responses[responseCounter](w, r)
        responseCounter++
    }))
    defer ts.Close()
    printBody(ts.URL)
    printBody(ts.URL)
}

func printBody(url string) {
    res, err := http.Get(url)
    if err != nil {
        log.Fatal(err)
    }
    resBody, err := ioutil.ReadAll(res.Body)
    res.Body.Close()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s", resBody)
}
Which outputs:
First response
Second response
Executable code here:
https://play.golang.org/p/YcPe5hOSxlZ

Not sure you still need an answer, but github.com/jarcoal/httpmock provides a way to do this using ResponderFromMultipleResponses.
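For example, a minimal sketch of that approach (the method, URL, and response bodies here are assumptions for illustration; httpmock, net/http, and testing would need to be imported):

func TestSendSMSSucceedsOnSecondTry(t *testing.T) {
    httpmock.Activate()
    defer httpmock.DeactivateAndReset()

    // the first call fails with a 500, the second succeeds, so the
    // retry branch can be exercised deterministically
    httpmock.RegisterResponder("POST", "https://sms.example.com/send",
        httpmock.ResponderFromMultipleResponses(
            []*http.Response{
                httpmock.NewStringResponse(500, `{"status":"FAILED"}`),
                httpmock.NewStringResponse(200, `{"status":"SUCCESS"}`),
            },
        ),
    )

    // call the retrying sender under test here and assert that it
    // reports success after the second attempt
}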

Related

Trouble figuring out data race in goroutine

I started learning Go recently and I've been chipping away at this for a while now, but figured it was time to ask for some specific help. My program requests paginated data from an API, and since there are about 160 pages of data, it seems like a good use of goroutines. Except I have race conditions and I can't figure out why. It's probably because I'm new to the language, but my impression was that parameters are passed to a function as a copy of the data in the caller unless they are pointers.
According to what I think I know, this should make copies of my data, leaving me free to change it in the main function, but I end up requesting some pages multiple times and other pages just once.
My main.go
package main

import (
    "bufio"
    "encoding/json"
    "log"
    "net/http"
    "net/url"
    "os"
    "strconv"
    "sync"

    "github.com/joho/godotenv"
)

func main() {
    err := godotenv.Load()
    if err != nil {
        log.Fatalln(err)
    }
    httpClient := &http.Client{}
    baseURL := "https://api.data.gov/ed/collegescorecard/v1/schools.json"
    filters := make(map[string]string)
    page := 0
    filters["school.degrees_awarded.predominant"] = "2,3"
    filters["fields"] = "id,school.name,school.city,2018.student.size,2017.student.size,2017.earnings.3_yrs_after_completion.overall_count_over_poverty_line,2016.repayment.3_yr_repayment.overall"
    filters["api_key"] = os.Getenv("API_KEY")
    outFile, err := os.Create("./out.txt")
    if err != nil {
        log.Fatalln(err)
    }
    writer := bufio.NewWriter(outFile)
    requestURL := getRequestURL(baseURL, filters)
    response := requestData(requestURL, httpClient)
    wg := sync.WaitGroup{}
    for (page+1)*response.Metadata.ResultsPerPage < response.Metadata.TotalResults {
        page++
        filters["page"] = strconv.Itoa(page)
        wg.Add(1)
        go func() {
            defer wg.Done()
            requestURL := getRequestURL(baseURL, filters)
            response := requestData(requestURL, httpClient)
            _, err = writer.WriteString(response.TextOutput())
            if err != nil {
                log.Fatalln(err)
            }
        }()
    }
    wg.Wait()
}

func getRequestURL(baseURL string, filters map[string]string) *url.URL {
    requestURL, err := url.Parse(baseURL)
    if err != nil {
        log.Fatalln(err)
    }
    query := requestURL.Query()
    for key, value := range filters {
        query.Set(key, value)
    }
    requestURL.RawQuery = query.Encode()
    return requestURL
}

func requestData(url *url.URL, httpClient *http.Client) CollegeScoreCardResponseDTO {
    request, _ := http.NewRequest(http.MethodGet, url.String(), nil)
    resp, err := httpClient.Do(request)
    if err != nil {
        log.Fatalln(err)
    }
    defer resp.Body.Close()
    var parsedResponse CollegeScoreCardResponseDTO
    err = json.NewDecoder(resp.Body).Decode(&parsedResponse)
    if err != nil {
        log.Fatalln(err)
    }
    return parsedResponse
}
I know another issue I will run into is writing to the output file in the correct order, but I believe using channels to tell each routine which request has finished writing could solve that. If I'm incorrect on that, I would appreciate any advice on how to approach that as well.
Thanks in advance.
Goroutines do not receive copies of data. When the compiler detects that a variable "escapes" the current function, it allocates that variable on the heap. In this case, filters is one such variable. When the goroutine starts, the filters it accesses is the same map as the main thread's. Since you keep modifying filters in the main thread without locking, there is no guarantee of what the goroutine sees.
I suggest you keep filters read-only, create a new map in the goroutine by copying all items from the filters, and add the "page" in the goroutine. You have to be careful to pass a copy of the page as well:
go func(page int) {
    flt := make(map[string]string)
    for k, v := range filters {
        flt[k] = v
    }
    flt["page"] = strconv.Itoa(page)
    ...
}(page)
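Applied to the loop in the question, the corrected version might look like the sketch below (it reuses the question's getRequestURL and requestData helpers; note that the shared bufio.Writer is not goroutine-safe either, so the writes are serialized with a mutex here, which handles safety but not ordering):

var mu sync.Mutex
for (page+1)*response.Metadata.ResultsPerPage < response.Metadata.TotalResults {
    page++
    wg.Add(1)
    go func(page int) {
        defer wg.Done()
        // per-goroutine copy of the read-only filters map
        flt := make(map[string]string, len(filters)+1)
        for k, v := range filters {
            flt[k] = v
        }
        flt["page"] = strconv.Itoa(page)
        requestURL := getRequestURL(baseURL, flt)
        response := requestData(requestURL, httpClient)
        // serialize access to the shared writer
        mu.Lock()
        defer mu.Unlock()
        if _, err := writer.WriteString(response.TextOutput()); err != nil {
            log.Fatalln(err)
        }
    }(page)
}
wg.Wait()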

Proxy gateway send HTTP response back

I am looking to make a proxy gateway in Go.
Almost done! One thing is still missing: sending the entire response my client receives back to the original server request.
I've got my own HTTP handler :
func (f HttpHandlerFunc) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    if rurl, err := getOriginurl(r.RequestURI); err == nil {
        [...]
        client := &http.Client{}
        r.URL = rurl
        r.RequestURI = ""
        resp, err := client.Do(r)
        if err == nil {
            for k, vs := range resp.Header {
                for _, v := range vs {
                    w.Header().Set(k, v)
                }
            }
            w.WriteHeader(resp.StatusCode)
            if responseData, err := ioutil.ReadAll(resp.Body); err == nil {
                w.Write(responseData)
            }
        }
    }
}

func getOriginurl(request string) (*url.URL, error) {
    {...}
    // Would return an *url.URL with: http://127.0.0.1:8080/{requestURI}
}
I am looking for a better way to copy the client's Response into the ResponseWriter.
Actually my question would be: how do I copy a Response to a ResponseWriter exhaustively?
You can use httputil.NewSingleHostReverseProxy instead of your own HTTP client logic.
httputil.NewSingleHostReverseProxy(rurl).ServeHTTP(w, r)
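A minimal sketch of a gateway built that way (the origin address here is an assumption; in practice you would derive it from your getOriginurl logic and construct the proxy once, not per request):

package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
)

func main() {
    // assumed origin for illustration
    origin, err := url.Parse("http://127.0.0.1:8080")
    if err != nil {
        log.Fatal(err)
    }
    // ReverseProxy copies the status code, headers, and body back to
    // the client, which covers the "exhaustive" copy asked about
    proxy := httputil.NewSingleHostReverseProxy(origin)
    log.Fatal(http.ListenAndServe(":9090", proxy))
}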

Unable to redirect in a golang web app. It sticks to one page

This is a code snippet from a file called upload.go.
I have tried a lot of ways to redirect to another page. I want to redirect when the statements in the POST branch have finished running.
package main

import (
    "fmt"
    "io"
    "net/http"
    "os"
    "text/template"

    // imgio and effect are not imported in the original snippet; they
    // appear to come from github.com/anthonynsimon/bild
    "github.com/anthonynsimon/bild/effect"
    "github.com/anthonynsimon/bild/imgio"
)

func upload(w http.ResponseWriter, r *http.Request) {
    if r.Method == "GET" {
        // GET
        t, _ := template.ParseFiles("upload.gtpl")
        t.Execute(w, nil)
    } else if r.Method == "POST" {
        // POST
        file, handler, err := r.FormFile("uploadfile")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer file.Close()
        fmt.Fprintf(w, "%v", handler.Header)
        f, err := os.OpenFile("./test/"+handler.Filename, os.O_WRONLY|os.O_CREATE, 0666)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()
        io.Copy(f, file)
        img, err := imgio.Open("./test/" + handler.Filename)
        if err != nil {
            panic(err)
        }
        inverted := effect.Invert(img)
        if err := imgio.Save("filename.png", inverted, imgio.PNGEncoder()); err != nil {
            panic(err)
        }
        fmt.Fprintf(w, "%v", handler.Header)
        http.Redirect(w, r, "www.google.com", http.StatusMovedPermanently)
    } else {
        fmt.Println("Unknown HTTP " + r.Method + " Method")
    }
}

func main() {
    http.HandleFunc("/upload", upload)
    http.HandleFunc("/hi", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "Hi")
        http.Redirect(w, r, "www.google.com", http.StatusMovedPermanently)
    })
    http.ListenAndServe(":9090", nil) // setting listening port
}
It stays on the upload page whatever I do. Can anyone help me debug this?
Your code is writing to the ResponseWriter before trying to send a redirect.
Upon the first write to the ResponseWriter, the status code (200 OK) and headers are sent, if they haven't already been sent, and then the data you passed to the writer.
If you intend to send an HTTP redirect, you can't write any response body to the ResponseWriter. From reading your code, it doesn't make much sense why you are writing to it in the first place. They look like debugging print statements, which you probably ought to send to os.Stderr or a logger instead of the web page response body.
If you need to redirect after posting a form, you need to set the status to http.StatusSeeOther (303).
For example:
http.Redirect(w, r, "/index", http.StatusSeeOther)
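A sketch of the fixed POST branch, keeping the question's upload logic (note also that "www.google.com" without a scheme is treated as a relative path, so an absolute redirect target needs one):

} else if r.Method == "POST" {
    file, handler, err := r.FormFile("uploadfile")
    if err != nil {
        log.Println(err) // log diagnostics instead of writing them to w
        return
    }
    defer file.Close()
    // ... save the file and invert the image as before, with no
    // fmt.Fprintf(w, ...) calls anywhere in this branch ...
    http.Redirect(w, r, "https://www.google.com", http.StatusSeeOther)
}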

Golang AWS S3manager multipartreader w/ Goroutines

I'm creating an endpoint that allows a user to upload several files at the same time and store them in S3. Currently I'm able to achieve this using MultipartReader and s3manager, but only sequentially.
I'm trying to use goroutines to speed this up and upload multiple files to S3 concurrently, but a data race error is causing trouble. I suspect *s3manager might not be goroutine-safe, even though the docs say it is.
(The code works if the go statement is replaced with a plain function call.)
Could implementing mutex locks possibly fix my error?
func uploadHandler(w http.ResponseWriter, r *http.Request) {
    counter := 0
    switch r.Method {
    // GET to display the upload form.
    case "GET":
        err := templates.Execute(w, nil)
        if err != nil {
            log.Print(err)
        }
    // POST uploads each file and sends them to S3
    case "POST":
        c := make(chan string)
        // grab the request.MultipartReader
        reader, err := r.MultipartReader()
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        // copy each part to destination.
        for {
            part, err := reader.NextPart()
            if err == io.EOF {
                break
            }
            // if part.FileName() is empty, skip this iteration.
            if part.FileName() == "" {
                continue
            }
            counter++
            go S3Upload(c, part)
        }
        for i := 0; i < counter; i++ {
            fmt.Println(<-c)
        }
        // displaying a success message.
        err = templates.Execute(w, "Upload successful.")
        if err != nil {
            log.Print(err)
        }
    default:
        w.WriteHeader(http.StatusMethodNotAllowed)
    }
}

func S3Upload(c chan string, part *multipart.Part) {
    bucket := os.Getenv("BUCKET")
    sess, err := session.NewSession(&aws.Config{
        Region: aws.String(os.Getenv("REGION"))},
    )
    if err != nil {
        c <- "error occurred creating session"
        return
    }
    uploader := s3manager.NewUploader(sess)
    _, err = uploader.Upload(&s3manager.UploadInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(part.FileName()),
        Body:   part,
    })
    if err != nil {
        c <- "Error occurred attempting to upload to S3"
        return
    }
    // successful upload
    c <- "successful upload"
}
See all the comments above. Here is a modified code example; channels are not useful here.
package main

import (
    "bytes"
    "io"
    "log"
    "net/http"
    "os"
    "strings"
    "sync"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

var (
    setupUploaderOnce sync.Once
    uploader          *s3manager.Uploader
    bucket            string
    region            string
)

// ensure the session and uploader are set up only once using a singleton pattern
func setupUploader() {
    setupUploaderOnce.Do(func() {
        bucket = os.Getenv("BUCKET")
        region = os.Getenv("REGION")
        sess, err := session.NewSession(&aws.Config{Region: aws.String(region)})
        if err != nil {
            log.Fatal(err)
        }
        // assign the package-level uploader (":=" here would shadow it)
        uploader = s3manager.NewUploader(sess)
    })
}

// normally singleton stuff is packaged out and called before starting the
// server, but to keep the example a single file, load it up here
func init() {
    setupUploader()
}
func uploadHandler(w http.ResponseWriter, r *http.Request) {
    switch r.Method {
    // GET to display the upload form.
    case "GET":
        err := templates.Execute(w, nil)
        if err != nil {
            log.Print(err)
        }
    // POST uploads each file and sends them to S3
    case "POST":
        var buf bytes.Buffer
        // "file" is defined by the form field, change it to whatever your form sets it to
        file, header, err := r.FormFile("file")
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        // close the file
        defer file.Close()
        fileName := strings.Split(header.Filename, ".")
        // load the entire file data into the buffer
        _, err = io.Copy(&buf, file)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        // upload to S3 in the background
        go S3Upload(buf, fileName[0])
        // displaying a success message.
        err = templates.Execute(w, "Upload successful.")
        if err != nil {
            log.Print(err)
        }
    default:
        w.WriteHeader(http.StatusMethodNotAllowed)
    }
}
// keeping this simple: log the error. If the uploader fails in the
// goroutine, there is potential for false positive uploads... channels
// are not really good here either; for that, bubble the error up
// and don't spin up a goroutine, which is the same as waiting for the
// channel to return.
func S3Upload(body bytes.Buffer, fileName string) {
    _, err := uploader.Upload(&s3manager.UploadInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(fileName),
        Body:   bytes.NewReader(body.Bytes()),
    })
    if err != nil {
        log.Printf("upload of %s failed: %v", fileName, err)
    }
}
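For the error-bubbling alternative the comment describes, a minimal sketch (S3UploadSync is a hypothetical name; the handler would call it without go and report any failure to the user):

func S3UploadSync(body bytes.Buffer, fileName string) error {
    _, err := uploader.Upload(&s3manager.UploadInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(fileName),
        Body:   bytes.NewReader(body.Bytes()),
    })
    return err
}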

Go http: send incoming http.Request to another server using client.Do

Here is my use case.
We have one service, "foobar", which has two versions, legacy and version_2_of_doom (both in Go).
To make the transition from legacy to version_2_of_doom, we would like, at first, to run the two versions side by side and have the POST request (there's only one POST API call here) received by both.
The way I see to do it would be modifying the code of legacy at the beginning of the handler, in order to duplicate the request to version_2_of_doom:
func(w http.ResponseWriter, req *http.Request) {
    req.URL.Host = "v2ofdoom.local:8081"
    req.Host = "v2ofdoom.local:8081"
    client := &http.Client{}
    client.Do(req)
    // legacy code
but it seems to not be as straightforward as this.
It fails with http: Request.RequestURI can't be set in client requests.
Is there a well-known method for this kind of action (i.e. transferring an http.Request untouched) to another server?
You need to copy the values you want into a new request. Since this is very similar to what a reverse proxy does, you may want to look at what "net/http/httputil" does for ReverseProxy.
Create a new request, and copy only the parts of the request you want to send to the next server. You will also need to read and buffer the request body if you intend to use it both places:
func handler(w http.ResponseWriter, req *http.Request) {
    // we need to buffer the body if we want to read it here and send it
    // in the request.
    body, err := ioutil.ReadAll(req.Body)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    // you can reassign the body if you need to parse it as multipart
    req.Body = ioutil.NopCloser(bytes.NewReader(body))
    // create a new url from the raw RequestURI sent by the client
    url := fmt.Sprintf("%s://%s%s", proxyScheme, proxyHost, req.RequestURI)
    proxyReq, err := http.NewRequest(req.Method, url, bytes.NewReader(body))
    // We may want to filter some headers, otherwise we could just use a shallow copy
    // proxyReq.Header = req.Header
    proxyReq.Header = make(http.Header)
    for h, val := range req.Header {
        proxyReq.Header[h] = val
    }
    resp, err := httpClient.Do(proxyReq)
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadGateway)
        return
    }
    defer resp.Body.Close()
    // legacy code
}
In my experience, the easiest way to achieve this was to simply create a new request and copy all request attributes that you need into the new request object:
func(rw http.ResponseWriter, req *http.Request) {
    // copy the URL by value so the original request stays untouched
    url := *req.URL
    url.Host = "v2ofdoom.local:8081"
    url.Scheme = "http" // URLs of incoming server requests have no scheme set
    proxyReq, err := http.NewRequest(req.Method, url.String(), req.Body)
    if err != nil {
        // handle error
    }
    proxyReq.Header.Set("Host", req.Host)
    proxyReq.Header.Set("X-Forwarded-For", req.RemoteAddr)
    for header, values := range req.Header {
        for _, value := range values {
            proxyReq.Header.Add(header, value)
        }
    }
    client := &http.Client{}
    proxyRes, err := client.Do(proxyReq)
    // and so on...
This approach has the benefit of not modifying the original request object (maybe your handler function or any middleware functions that are living in your stack still need the original object?).
Use the original request (copy or duplicate it only if the original request is still needed):
func handler(w http.ResponseWriter, r *http.Request) {
    // Step 1: rewrite URL
    URL, _ := url.Parse("https://full_generic_url:123/x/y")
    r.URL.Scheme = URL.Scheme
    r.URL.Host = URL.Host
    r.URL.Path = singleJoiningSlash(URL.Path, r.URL.Path)
    r.RequestURI = ""
    // Step 2: adjust Header
    r.Header.Set("X-Forwarded-For", r.RemoteAddr)
    // note: the client should be created outside the current handler()
    client := &http.Client{}
    // Step 3: execute request
    resp, err := client.Do(r)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    // Step 4: copy payload to response writer
    copyHeader(w.Header(), resp.Header)
    w.WriteHeader(resp.StatusCode)
    io.Copy(w, resp.Body)
    resp.Body.Close()
}

// copyHeader and singleJoiningSlash are copied from "net/http/httputil/reverseproxy.go"
func copyHeader(dst, src http.Header) {
    for k, vv := range src {
        for _, v := range vv {
            dst.Add(k, v)
        }
    }
}

func singleJoiningSlash(a, b string) string {
    aslash := strings.HasSuffix(a, "/")
    bslash := strings.HasPrefix(b, "/")
    switch {
    case aslash && bslash:
        return a + b[1:]
    case !aslash && !bslash:
        return a + "/" + b
    }
    return a + b
}
I've seen the accepted answer, but I would like to say that I don't like it. I used that code for months and it worked, but after some time you encounter requests that break (POST requests in my case). My preferred solution is the following:
r.URL.Host = "example.com"
r.RequestURI = ""
client := &http.Client{}
delete(r.Header, "Accept-Encoding")
delete(r.Header, "Content-Length")
resp, err := client.Do(r.WithContext(context.Background()))
if err != nil {
    return nil, err
}
return resp, nil
