I have a Go web server running on a port, handling POST requests; each request internally calls several different urls via goroutines to fetch responses, then proceeds.
I have divided the whole flow into different methods. A draft of the code:
package main
import (
"bytes"
"fmt"
"github.com/gorilla/mux"
"log"
"net/http"
"time"
)
var status_codes string
func main() {
router := mux.NewRouter().StrictSlash(true)
/*router := NewRouter()*/
router.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
_, _ = fmt.Fprintf(w, "Hello!!!")
})
router.HandleFunc("/{name}", func(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
prepare(w, r, vars["name"])
}).Methods("POST")
log.Fatal(http.ListenAndServe(fmt.Sprintf(":%d", 8080), router))
}
func prepare(w http.ResponseWriter, r *http.Request, name string) {
//initialize for the current request; this variable needs to be maintained per incoming request
status_codes = ""
//other part of the code and call to goroutine
var urls []string
//let's say all the urls are loaded; call the goroutine func, wait for the channel to respond, then proceed with the responses of all urls
results := callUrls(urls)
process(w, results)
}
type Response struct {
status int
url string
body string
}
func callUrls(urls []string) []*Response {
ch := make(chan *Response, len(urls))
for _, url := range urls {
go func(url string) {
//http post on url,
//based on the status code of the url call, append to status_codes,
//something like
req, err := http.NewRequest("POST", url, bytes.NewBuffer(somePostData))
req.Header.Set("Content-Type", "application/json")
req.Close = true
client := &http.Client{
Timeout: 100 * time.Second,
}
response, err := client.Do(req)
if err != nil {
status_codes += "500,"
} else {
//do other things with the response received
_ = response
status_codes += "200,"
}
// return to channel accordingly
ch <- &Response{200, "url", "response body"}
}(url)
}
var results []*Response
for {
select {
case r := <-ch:
results = append(results, r)
if len(results) == len(urls) {
//Done
close(ch)
return results
}
}
}
}
func process(w http.ResponseWriter, results []*Response){
//read those status code received from all urls call for the given request
fmt.Println("status", status_codes)
//Now the above line keep getting status code from other request as well
//for eg. if I have called 5 urls then it should have
//200,500,204,404,200,
//but instead it is
//200,500,204,404,200,204,404,200,204,404,200, and some more keep growing with time
}
The above code:
declares the variable globally and initializes it in the prepare function,
appends values to it inside the goroutines started from callUrls,
reads the variable in the process function.
Should I pass the globally declared variables to each function call so they become local and are no longer shared? (I would hate to do this.)
Or is there another approach that achieves the same thing without adding more arguments to the functions being called?
I will have a few other string and int values as well that are used across the program and inside the goroutine function.
What is the correct way to make them thread safe, so that each request hitting the port gets exactly the five status codes of its own url calls, even when requests arrive simultaneously?
Don't use global variables, be explicit instead and use function arguments. Moreover, you have a race condition on status_codes because it is accessed by multiple goroutines without any mutex lock.
Take a look at my fix below.
func prepare(w http.ResponseWriter, r *http.Request, name string) {
var urls []string
//status_codes is populated by callUrls(), so let it return the slice with values
results, status_codes := callUrls(urls)
//process() needs status_codes in order to work, so pass the variable explicitly
process(w, results, status_codes)
}
type Response struct {
status int
url string
body string
}
func callUrls(urls []string) ([]*Response, []string) {
ch := make(chan *Response, len(urls))
//In order to avoid race condition, let's use a channel
statusChan := make(chan string, len(urls))
for _, url := range urls {
go func(url string) {
//http post on url,
//based on the status code of the url call, send the status,
//something like
req, err := http.NewRequest("POST", url, bytes.NewBuffer(somePostData))
req.Header.Set("Content-Type", "application/json")
req.Close = true
client := &http.Client{
Timeout: 100 * time.Second,
}
response, err := client.Do(req)
if err != nil {
statusChan <- "500"
} else {
//do other things with the response received
_ = response
statusChan <- "200"
}
// return to channel accordingly
ch <- &Response{200, "url", "response body"}
}(url)
}
var results []*Response
var status_codes []string
doneRes, doneStatus := false, false
for !doneRes || !doneStatus { //continue until both slices are filled with values
select {
case r := <-ch:
results = append(results, r)
if len(results) == len(urls) {
//Done
close(ch) //Not really needed here
doneRes = true //we are done with results, set the corresponding flag
}
case status := <-statusChan:
status_codes = append(status_codes, status)
if len(status_codes) == len(urls) {
//Done
close(statusChan) //Not really needed here
doneStatus = true //we are done with statusChan, set the corresponding flag
}
}
}
return results, status_codes
}
func process(w http.ResponseWriter, results []*Response, status_codes []string) {
fmt.Println("status", status_codes)
}
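Alternatively, since each Response already carries a status field, you can skip the second channel entirely and derive the codes from the results, avoiding both the global variable and the extra bookkeeping. A minimal sketch (assuming the goroutine fills in the real HTTP status code, and with "strconv" and "strings" added to the imports):

func callUrls(urls []string) []*Response {
	ch := make(chan *Response, len(urls))
	for _, url := range urls {
		go func(url string) {
			// ... perform the POST exactly as before ...
			ch <- &Response{status: 200, url: url, body: "response body"}
		}(url)
	}
	// Collect exactly one response per url.
	results := make([]*Response, 0, len(urls))
	for range urls {
		results = append(results, <-ch)
	}
	return results
}

func process(w http.ResponseWriter, results []*Response) {
	codes := make([]string, 0, len(results))
	for _, r := range results {
		codes = append(codes, strconv.Itoa(r.status))
	}
	fmt.Println("status", strings.Join(codes, ","))
}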
Related
I am working in Go on a pipeline that pulls JSON data from an external API, processes each message, and then sends it to a SQL database.
I run the API requests concurrently; after a response returns, I want another goroutine to insert it into the DB via load().
In the code below, sometimes the log.Printf() in load() fires and other times it doesn't, which suggests I'm closing a channel too early or not setting up the communication properly.
The pattern I am attempting is something like this:
package main
import (
"encoding/json"
"io/ioutil"
"log"
"net/http"
"time"
)
type Request struct {
url string
}
type Response struct {
// Fields must be exported for encoding/json to populate them.
Status  int
Args    Args    `json:"args"`
Headers Headers `json:"headers"`
Origin  string  `json:"origin"`
URL     string  `json:"url"`
}
type Args struct {
}
type Headers struct {
Accept string `json:"Accept"`
}
func main() {
start := time.Now()
numRequests := 5
responses := make(chan Response, 5)
defer close(responses)
for i := 0; i < numRequests; i++ {
req := Request{url: "https://httpbin.org/get"}
go func(req *Request) {
resp, err := extract(req)
if err != nil {
log.Fatal("Error extracting data from API")
return
}
// Send response to channel
responses <- resp
}(&req)
// Perform go routine to load data
go load(responses)
}
log.Println("Execution time: ", time.Since(start))
}
func extract(req *Request) (r Response, err error) {
var resp Response
request, err := http.NewRequest("GET", req.url, nil)
if err != nil {
return resp, err
}
request.Header = http.Header{
"accept": {"application/json"},
}
response, err := http.DefaultClient.Do(request)
if err != nil {
log.Fatal("Error")
return resp, err
}
defer response.Body.Close()
// Read response data
body, err := ioutil.ReadAll(response.Body)
if err != nil {
log.Fatal("Error")
return resp, err
}
json.Unmarshal(body, &resp)
resp.Status = response.StatusCode
return resp, nil
}
type Record struct {
origin string
url string
}
func load(ch chan Response) {
// Read response from channel
resp := <-ch
// Process the response data
records := process(resp)
log.Printf("%+v\n", records)
// Load data to db stuff here
}
func process(resp Response) (record Record) {
// Process the response struct as needed to get a record of data to insert to DB
return record
}
The program has no protection against completion before the work is done. So sometimes the program terminates before the goroutine can finish.
To prevent that, use a WaitGroup:
var wg sync.WaitGroup
for i := 0; i < numRequests; i++ {
...
wg.Add(1)
go func() {
defer wg.Done()
load(responses)
}()
}
wg.Wait()
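For completeness, a minimal sketch of the corrected main, assuming the extract and load functions from the question and "sync" added to the imports. Both the fetch and the load goroutines are tracked by the WaitGroup, so main cannot return before the DB work has finished (note the channel is closed after wg.Wait(), not via a defer at the top):

func main() {
	start := time.Now()
	numRequests := 5
	responses := make(chan Response, numRequests)

	var wg sync.WaitGroup
	for i := 0; i < numRequests; i++ {
		req := Request{url: "https://httpbin.org/get"}
		wg.Add(1)
		go func(req *Request) {
			defer wg.Done()
			resp, err := extract(req)
			if err != nil {
				log.Fatal("Error extracting data from API") // exits the whole program
			}
			responses <- resp
		}(&req)

		wg.Add(1)
		go func() {
			defer wg.Done()
			load(responses)
		}()
	}
	wg.Wait()
	close(responses)
	log.Println("Execution time: ", time.Since(start))
}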
I want to call two endpoints at the same time (A and B). If I get a 200 response from both, I need to use the response from A; otherwise I use B's response.
If B returns first I still need to wait for A; in other words, I must use A whenever A returns 200.
Can you guys help me with the pattern?
Thank you
Wait for a result from A. If the result is not good, then wait from a result from B. Use a buffered channel for the B result so that the sender does not block when A is good.
In the following snippet, fnA() and fnB() are functions that issue requests to the endpoints, consume the response and clean up. I assume that the result is a []byte, but it could be the result of decoding JSON or something else. Here's an example for fnA:
func fnA() ([]byte, error) {
r, err := http.Get("http://example.com/a")
if err != nil {
return nil, err
}
defer r.Body.Close() // <-- Important: close the response body!
if r.StatusCode != 200 {
return nil, errors.New("bad response")
}
return ioutil.ReadAll(r.Body)
}
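fnB would presumably look the same, just pointing at the B endpoint's URL.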
Define a type to hold the result and error.
type response struct {
result []byte
err error
}
With those preliminaries done, here's how to prioritize A over B.
a := make(chan response)
go func() {
result, err := fnA()
a <- response{result, err}
}()
b := make(chan response, 1) // Size > 0 is important!
go func() {
result, err := fnB()
b <- response{result, err}
}()
resp := <-a
if resp.err != nil {
resp = <-b
if resp.err != nil {
// handle error. A and B both failed.
}
}
result := resp.result
If the application does not execute code concurrently with A and B, then there's no need to use a goroutine for A:
b := make(chan response, 1) // Size > 0 is important!
go func() {
result, err := fnB()
b <- response{result, err}
}()
result, err := fnA()
if err != nil {
resp := <-b
if resp.err != nil {
// handle error. A and B both failed.
}
result = resp.result
}
I suggest using something like the following. It is a bulkier solution, but it lets you query more than two endpoints if you need to.
func endpointPriorityTest() {
const (
sourceA = "a"
sourceB = "b"
sourceC = "c"
)
type endpointResponse struct {
source string
response *http.Response
error
}
endpointsMap := map[string]string{
sourceA: "https://jsonplaceholder.typicode.com/posts/1",
sourceB: "https://jsonplaceholder.typicode.com/posts/10",
sourceC: "https://jsonplaceholder.typicode.com/posts/100",
}
// Buffered, so the remaining senders do not leak if we stop reading
// after receiving the response from source A.
epResponseChan := make(chan *endpointResponse, len(endpointsMap))
for source, endpointURL := range endpointsMap {
source := source
endpointURL := endpointURL
go func(respChan chan<- *endpointResponse) {
// You can add a delay so that the response from A takes longer than from B
// and look to the result map
// if source == sourceA {
// time.Sleep(time.Second)
// }
resp, err := http.Get(endpointURL)
respChan <- &endpointResponse{
source: source,
response: resp,
error: err,
}
}(epResponseChan)
}
respCache := make(map[string]*http.Response)
// Reading endpointURL responses from chan
// Read at most len(endpointsMap) responses, so the loop terminates
// even if source A fails.
for i := 0; i < len(endpointsMap); i++ {
epResp := <-epResponseChan
// Skips failed requests
if epResp.error != nil {
continue
}
// Save successful response to cache map
respCache[epResp.source] = epResp.response
// Interrupt reading channel if we've got a response from source A
if epResp.source == sourceA {
break
}
}
fmt.Println("result map: ", respCache)
// Now we can use data from cache map
// resp, ok :=respCache[sourceA]
// if ok{
// ...
// }
}
Zombo's answer has the correct logic flow. Piggybacking off this, I would suggest one addition: leveraging the context package.
Basically, any potentially blocking tasks should use context.Context to allow the call-chain to perform more efficient clean-up in the event of early cancelation.
context.Context also can be leveraged, in your case, to abort the B call early if the A call succeeds:
func failoverResult(ctx context.Context) *http.Response {
// wrap the (parent) context
ctx, cancel := context.WithCancel(ctx)
// if we return early i.e. if `fnA()` completes first
// this will "cancel" `fnB()`'s request.
defer cancel()
b := make(chan *http.Response, 1)
go func() {
b <- fnB(ctx)
}()
resp := fnA(ctx)
if resp.StatusCode != 200 {
resp = <-b
}
return resp
}
fnA (and fnB) would look something like this:
func fnA(ctx context.Context) (resp *http.Response) {
req, _ := http.NewRequestWithContext(ctx, "GET", aUrl, nil)
resp, _ = http.DefaultClient.Do(req) // TODO: check errors
return
}
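In an HTTP handler you would typically pass the request-scoped context, so that a client disconnect cancels both outbound calls. A usage sketch, assuming a hypothetical handler that wraps the logic above:

func handler(w http.ResponseWriter, r *http.Request) {
	resp := failoverResult(r.Context())
	// ... check resp and write it to w ...
	_ = resp
}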
In Go, channels are normally used to communicate between goroutines.
You can orchestrate your scenario with the following sample code.
Basically, you pass a channel into callB to hold its response. You don't need to run callA in a goroutine, because you always need the result from that endpoint/service.
package main
import (
"fmt"
"time"
)
func main() {
resB := make(chan int)
go callB(resB)
res := callA()
if res == 200 {
fmt.Print("No Need for B")
} else {
res = <-resB
fmt.Printf("Response from B : %d", res)
}
}
func callA() int {
time.Sleep(1000 * time.Millisecond)
return 200
}
func callB(res chan int) {
time.Sleep(500 * time.Millisecond)
res <- 200
}
Update: As suggested in a comment, the code above leaks callB's goroutine: when callA returns 200, nothing ever reads from resB, so the send in callB blocks forever. A buffered channel fixes that:
package main
import (
"fmt"
"time"
)
func main() {
resB := make(chan int, 1)
go callB(resB)
res := callA()
if res == 200 {
fmt.Print("No Need for B")
} else {
res = <-resB
fmt.Printf("Response from B : %d", res)
}
}
func callA() int {
time.Sleep(1000 * time.Millisecond)
return 200
}
func callB(res chan int) {
time.Sleep(500 * time.Millisecond)
res <- 200
}
I have created some Go functions that make HTTP GET calls to services that are out there on the internet and parse the results.
I am now working on writing test-cases for these functions.
In my test cases, I'm using the Go package httptest to simulate calls to these external services. Below is my code. Error checking is purposely removed for brevity. Here is the go-playground.
package main
import (
"fmt"
"io"
"context"
"net/http"
"net/http/httptest"
)
func handlerResponse() http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
w.Write([]byte(`{"A":"B"}`))
})
}
func buildMyRequest(ctx context.Context, url string) *http.Request {
request, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
return request
}
func myPrint(response *http.Response) {
// read the whole body instead of looping over a fixed-size buffer
b, _ := io.ReadAll(response.Body)
fmt.Println(string(b))
}
func main() {
srv := httptest.NewServer(handlerResponse())
client := http.Client{}
myResponse1, _ := client.Do(buildMyRequest(context.Background(), srv.URL))
fmt.Println("myResponse1:")
myPrint(myResponse1)
myResponse2, _ := client.Do(buildMyRequest(context.Background(), srv.URL))
fmt.Println("myResponse2:")
myPrint(myResponse2)
}
This is the output it produces:
myResponse1:
{"A":"B"}
myResponse2:
{"A":"B"}
As you can see, I have created some dummy HTTP response data {"A":"B"} and when you send an HTTP request to srv.URL, it actually hits an ephemeral HTTP server which responds with the dummy data. Cool!
When you send the second HTTP request to srv.URL, it again responds with the same dummy data. But this is where my problem arises: I want the ephemeral HTTP server to return different data the second time it receives a request ({"C":"D"}) and the third time ({"E":"F"}).
How can I change the first line of the main() function so that the server responds with my desired data on subsequent HTTP calls?
You could use a hack like the following (playground: here):
package main
import (
"fmt"
"io"
"context"
"net/http"
"net/http/httptest"
"sync"
)
type responseWriter struct {
resp map[int]string
count int
lock *sync.Mutex
}
func NewResponseWriter() *responseWriter {
r := new(responseWriter)
r.lock = new(sync.Mutex)
r.resp = map[int]string{
0: `{"E":"F"}`,
1: `{"A":"B"}`,
2: `{"C":"D"}`,
}
r.count = 0
return r
}
func (r *responseWriter) GetResp() string {
r.lock.Lock()
defer r.lock.Unlock()
r.count++
return r.resp[r.count%3]
}
func handlerResponse(rr *responseWriter) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
w.Write([]byte(rr.GetResp()))
})
}
func buildMyRequest(ctx context.Context, url string) *http.Request {
request, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
return request
}
func myPrint(response *http.Response) {
// read the whole body instead of looping over a fixed-size buffer
b, _ := io.ReadAll(response.Body)
fmt.Println(string(b))
}
func main() {
rr := NewResponseWriter()
srv := httptest.NewServer(handlerResponse(rr))
client := http.Client{}
myResponse1, err := client.Do(buildMyRequest(context.Background(), srv.URL))
if err != nil{
fmt.Println(err)
return
}
defer myResponse1.Body.Close()
fmt.Println("myResponse1:")
myPrint(myResponse1)
myResponse2, err := client.Do(buildMyRequest(context.Background(), srv.URL))
if err != nil{
fmt.Println(err)
return
}
defer myResponse2.Body.Close()
fmt.Println("myResponse2:")
myPrint(myResponse2)
}
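A lighter-weight alternative is to keep the counter in a closure instead of a dedicated type. A sketch of the same idea (add "sync/atomic" to the imports; the atomic counter keeps it safe if the test server ever handles requests concurrently):

func handlerResponse() http.Handler {
	responses := []string{`{"A":"B"}`, `{"C":"D"}`, `{"E":"F"}`}
	var count int64
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// AddInt64 returns the incremented value, so subtract 1 to start at index 0.
		i := int(atomic.AddInt64(&count, 1)-1) % len(responses)
		w.WriteHeader(http.StatusOK)
		w.Write([]byte(responses[i]))
	})
}

main() then stays exactly as in the question, with srv := httptest.NewServer(handlerResponse()).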
I am not necessarily trying to accomplish something specific; I mostly want to understand how goroutines, channels, waitgroups, and select (on channels) play together. I am writing a simple program that loops through a slice of URLs, fetches each URL, then ends. The idea is that all of the fetches occur and return, send their data over channels, and the program ends once all fetches have completed. I am almost there; I know I am missing something in my select that will end the loop, something to say "hey, the waitgroup is empty now", but I am unsure how best to do that. Mind taking a look and clearing it up for me? Right now everything runs just fine, it just doesn't terminate, so clearly I am missing something and/or not understanding how some of these components should work together.
package main
import (
"fmt"
"io/ioutil"
"net/http"
"sync"
)
var urls = []string{
"https://www.google.com1",
"https://www.gentoo.org",
}
var wg sync.WaitGroup
// simple struct to store fetching
type urlObject struct {
url string
success bool
body string
}
func getPage(url string, channelMain chan urlObject, channelError chan error) {
// increment waitgroup, defer decrementing
wg.Add(1)
defer wg.Done()
fmt.Println("fetching " + url)
// create a urlObject
uO := urlObject{
url: url,
success: false,
}
// get URL
response, getError := http.Get(url)
// close response later on
if response != nil {
defer response.Body.Close()
}
// send error over error channel if one occurs
if getError != nil {
channelError <- getError
return
}
// convert body to []byte
body, conversionError := ioutil.ReadAll(response.Body)
// convert []byte to string
bodyString := string(body)
// if a conversion error happens send it over the error channel
if conversionError != nil {
channelError <- conversionError
} else {
// if not send a urlObject over the main channel
uO.success = true
uO.body = bodyString
channelMain <- uO
}
}
func main() {
var channelMain = make(chan urlObject)
var channelError = make(chan error)
for _, v := range urls {
go getPage(v, channelMain, channelError)
}
// wait on goroutines to finish
wg.Wait()
for {
select {
case uO := <-channelMain:
fmt.Println("completed " + uO.url)
case err := <-channelError:
fmt.Println("error: " + err.Error())
}
}
}
You need to make the following changes:
As people have mentioned, you probably want to call wg.Add(1) in the main function, before calling your goroutine. That way you KNOW it occurs before the defer wg.Done() call.
Your channel reads will block, unless you can figure out a way to either close the channels in your goroutines, or make them buffered. Probably the easiest way is to make them buffered, e.g., var channelMain = make(chan urlObject, len(urls))
The break in your select statement is going to only exit the select, not the containing for loop. You can label the for loop and break to that (see the sketch after the code below), or use some sort of conditional variable.
Playground link to working version: https://play.golang.org/p/WH1fm2MhP-L
package main
import (
"fmt"
"io/ioutil"
"net/http"
"sync"
)
var urls = []string{
"https://www.google.com1",
"https://www.gentoo.org",
}
var wg sync.WaitGroup
// simple struct to store fetching
type urlObject struct {
url string
success bool
body string
}
func getPage(url string, channelMain chan urlObject, channelError chan error) {
// defer decrementing the waitgroup (wg.Add(1) now happens in main)
defer wg.Done()
fmt.Println("fetching " + url)
// create a urlObject
uO := urlObject{
url: url,
success: false,
}
// get URL
response, getError := http.Get(url)
// close response later on
if response != nil {
defer response.Body.Close()
}
// send error over error channel if one occurs
if getError != nil {
channelError <- getError
return
}
// convert body to []byte
body, conversionError := ioutil.ReadAll(response.Body)
// convert []byte to string
bodyString := string(body)
// if a conversion error happens send it over the error channel
if conversionError != nil {
channelError <- conversionError
} else {
// if not send a urlObject over the main channel
uO.success = true
uO.body = bodyString
channelMain <- uO
}
}
func main() {
var channelMain = make(chan urlObject, len(urls))
var channelError = make(chan error, len(urls))
for _, v := range urls {
wg.Add(1)
go getPage(v, channelMain, channelError)
}
// wait on goroutines to finish
wg.Wait()
for done := false; !done; {
select {
case uO := <-channelMain:
fmt.Println("completed " + uO.url)
case err := <-channelError:
fmt.Println("error: " + err.Error())
default:
done = true
}
}
}
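As mentioned in point 3, a labeled break is the other way to end the drain loop. A sketch of that variant, using the same getPage, channels and WaitGroup as above: after wg.Wait() every sender has finished, so the channels can safely be closed; once a channel is drained, it is set to nil so that its case blocks forever and stops firing.

	wg.Wait()
	close(channelMain) // safe: all senders are done after wg.Wait()
	close(channelError)
loop:
	for {
		select {
		case uO, ok := <-channelMain:
			if !ok {
				channelMain = nil // drained; a nil channel disables this case
				if channelError == nil {
					break loop // both channels drained: exit the for loop
				}
			} else {
				fmt.Println("completed " + uO.url)
			}
		case err, ok := <-channelError:
			if !ok {
				channelError = nil
				if channelMain == nil {
					break loop
				}
			} else {
				fmt.Println("error: " + err.Error())
			}
		}
	}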
I have the following code
package main
import (
"bytes"
"fmt"
"github.com/gorilla/mux"
"log"
"net/http"
"time"
"io"
httprouter "github.com/fasthttp/router"
"github.com/valyala/fasthttp"
)
func main() {
router := mux.NewRouter().StrictSlash(true)
/*router := NewRouter()*/
router.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
_, _ = fmt.Fprintf(w, "Hello!!!")
})
router.HandleFunc("/{name}", func(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
prepare(w, r, vars["name"])
}).Methods("POST")
log.Fatal(http.ListenAndServe(fmt.Sprintf(":%d", 8080), router))
}
//using fast http
func _() {
router := httprouter.New()
router.GET("/", func(w *fasthttp.RequestCtx) {
_, _ = fmt.Fprintf(w, "Hello!!!")
})
router.POST("/:name", func(w *fasthttp.RequestCtx) {
prepareRequest(w, w.UserValue("name").(string))
})
log.Fatal(fasthttp.ListenAndServe(fmt.Sprintf(":%d", 8080), router.Handler))
}
//func prepare(w *fasthttp.RequestCtx, name string)
func prepare(w http.ResponseWriter, r *http.Request, name string) {
//other part of the code and call to goroutine
var urls []string
//let's say all the urls are loaded; call the goroutine func, wait for the channel to respond, then proceed with the responses of all urls
results := callUrls(urls) //there are at least 10 urls to call simultaneously for each request, every time
process(w, results)
}
type Response struct {
status int
url string
body string
}
func callUrls(urls []string) []*Response {
ch := make(chan *Response, len(urls))
for _, url := range urls {
go func(url string) {
//http post on url,
//based on the status code of the url call, act on the response accordingly,
//something like
req, err := http.NewRequest("POST", url, bytes.NewBuffer(somePostData))
req.Header.Set("Content-Type", "application/json")
req.Close = true
client := &http.Client{
Timeout: 100 * time.Millisecond,
}
response, err := client.Do(req)
//Using fast http client
/*req := fasthttp.AcquireRequest()
req.SetRequestURI(url)
req.Header.Set("Content-Type", "application/json")
req.Header.SetMethod("POST")
req.SetBody(somePostData)
response := fasthttp.AcquireResponse()
client := &fasthttp.Client{
ReadTimeout: time.Duration(time.Duration(100) * time.Millisecond),
}
err := client.Do(req, response)*/
if err != nil {
//request failed: response is nil here, so there is nothing to drain or close
} else {
//success response: read the body first, then close it
//(reading it fully lets the connection be reused, which keeps the open-file count down)
body, _ := ioutil.ReadAll(response.Body)
_ = response.Body.Close()
strBody := string(body)
strBody = strings.Replace(strBody, "\r", "", -1)
strBody = strings.Replace(strBody, "\n", "", -1)
_ = strBody //use the cleaned body as needed
}
// return to channel accordingly
ch <- &Response{200, "url", "response body"}
}(url)
}
var results []*Response
for {
select {
case r := <-ch:
results = append(results, r)
if len(results) == len(urls) {
//Done
close(ch)
return results
}
}
}
}
//func process(w *fasthttp.RequestCtx,results []*Response){
func process(w http.ResponseWriter, results []*Response){
fmt.Println("response", "response body")
}
After serving a few requests on a multi-core CPU (around 4000-6000 requests per second come in), I get a "too many open files" error, and response time and CPU usage go beyond acceptable limits. (Could the CPU be high because I convert bytes to string a few times to replace a few characters? Any suggestions?)
I have seen other questions about closing the request/response body and/or raising sysctl or ulimit values; I followed those, but I always end up with the error.
Config on the server:
/etc/sysctl.conf net.ipv4.tcp_tw_recycle = 1
open files (-n) 65535
I need the code to respond in milliseconds, but it takes up to 50 seconds when CPU usage is high.
I have tried both net/http and fasthttp with no improvement. A Node.js service using the request npm package handles everything perfectly on the same server. What is the best way to handle those connections, or what changes in the code are needed for improvement?
You can use the following library:
Requests: a Go library to reduce the headache when making HTTP requests (20k req/s)
https://github.com/alessiosavi/Requests
It was developed to solve the "too many open files" problem that arises when making many parallel requests.
The idea is to allocate a list of requests, then send them with a configurable "parallel" factor that allows only N requests to run at a time.
Initialize the requests (you already have a set of urls):
// This array will contains the list of request
var reqs []requests.Request
// N is the number of requests to run in parallel; to avoid "too many open files", N has to be lower than the ulimit threshold
var N int = 12
// Create the list of request
for i := 0; i < 1000; i++ {
// In this case, we init 1000 request with same URL,METHOD,BODY,HEADERS
req, err := requests.InitRequest("https://127.0.0.1:5000", "GET", nil, nil, true)
if err != nil {
// Request is not compliant, and will not be add to the list
log.Println("Skipping request [", i, "]. Error: ", err)
} else {
// If no error occurs, we can append the request created to the list of request that we need to send
reqs = append(reqs, *req)
}
}
At this point, we have a list that contains the requests that have to be sent.
Let's send them in parallel!
// This array will contains the response from the givens request
var response []datastructure.Response
// send the request using N request to send in parallel
response = requests.ParallelRequest(reqs, N)
// Print the response
for i := range response {
// Dump is a method that print every information related to the response
log.Println("Request [", i, "] -> ", response[i].Dump())
// Or use the data present in the response
log.Println("Headers: ", response[i].Headers)
log.Println("Status code: ", response[i].StatusCode)
log.Println("Time elapsed: ", response[i].Time)
log.Println("Error: ", response[i].Error)
log.Println("Body: ", string(response[i].Body))
}
You can find example usage in the example folder of the repository.
SPOILER:
I'm the author of this little library
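If you prefer to stay with the standard library, the same idea, bounding the number of in-flight requests, can be sketched with a buffered channel used as a semaphore and a single shared http.Client. The URL list, body and limit of 12 are placeholders; reusing one client enables connection keep-alive, which together with the bounded concurrency is what keeps the open-file count down:

package main

import (
	"bytes"
	"io"
	"io/ioutil"
	"log"
	"net/http"
	"sync"
	"time"
)

func main() {
	client := &http.Client{Timeout: 100 * time.Millisecond} // shared client reuses connections
	sem := make(chan struct{}, 12)                          // at most 12 requests in flight
	urls := []string{ /* ... your urls ... */ }

	var wg sync.WaitGroup
	for _, url := range urls {
		wg.Add(1)
		go func(url string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it

			req, err := http.NewRequest("POST", url, bytes.NewBufferString(`{}`))
			if err != nil {
				log.Println(err)
				return
			}
			req.Header.Set("Content-Type", "application/json")
			resp, err := client.Do(req)
			if err != nil {
				log.Println(err)
				return
			}
			// Drain and close the body so the connection can be reused.
			_, _ = io.Copy(ioutil.Discard, resp.Body)
			resp.Body.Close()
		}(url)
	}
	wg.Wait()
}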