why is fasthttp like single process? - go

requestHandler := func(ctx *fasthttp.RequestCtx) {
    time.Sleep(10 * time.Second) // simulate a slow handler
    fmt.Fprintf(ctx, "Hello, world! Requested path is %q", ctx.Path())
}
s := &fasthttp.Server{
    Handler: requestHandler,
}
if err := s.ListenAndServe("127.0.0.1:82"); err != nil {
    log.Fatalf("error in ListenAndServe: %s", err)
}
When I send multiple requests, the total time is roughly X*10s, where X is the number of requests. Is fasthttp single-process?
After two days...
I am sorry for this question; I described it poorly. My problem was caused by the browser: it issues requests for the same URL sequentially, which misled me into thinking the fasthttp web server handles requests sequentially.

I think that instead of "is fasthttp single process?", you're really asking whether fasthttp handles client requests concurrently or not.
I'm pretty sure that any server package (including fasthttp) will handle client requests concurrently. You should write a test/benchmark instead of manually accessing the server through several browsers. The following is an example of such a test:
package main_test

import (
    "io/ioutil"
    "net/http"
    "sync"
    "testing"
    "time"
)

func doRequest(uri string) error {
    resp, err := http.Get(uri)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    _, err = ioutil.ReadAll(resp.Body)
    if err != nil {
        return err
    }
    return nil
}

func TestGet(t *testing.T) {
    N := 1000
    wg := sync.WaitGroup{}
    wg.Add(N)
    start := time.Now()
    for i := 0; i < N; i++ {
        go func() {
            defer wg.Done()
            if err := doRequest("http://127.0.0.1:82"); err != nil {
                t.Error(err)
            }
        }()
    }
    wg.Wait()
    t.Logf("Total duration for %d concurrent request(s) is %v", N, time.Since(start))
}
And the result (on my machine) is:
fasthttp_test.go:42: Total duration for 1000 concurrent request(s) is 10.6066411s
You can see that the answer to your question is no: the server handles the requests concurrently, since 1000 concurrent requests finish in about 10 seconds rather than 1000*10 seconds.
UPDATE:
In case the requested URL is the same, your browser may perform the requests sequentially. See Multiple Ajax requests for same URL. This explains why your response times were X*10s: with X tabs open to the same URL, the browser queues the requests one after another.

Related

GoLang net/http memory keeps increasing on continuous requests

I have the following code in GoLang
package main

import (
    "bytes"
    "encoding/json"
    "io/ioutil"
    "log"
    "net/http"
    "time"
)

func httpClient() *http.Client {
    var transport http.RoundTripper = &http.Transport{
        DisableKeepAlives: false,
    }
    client := &http.Client{Timeout: 60 * time.Second, Transport: transport}
    return client
}

func sendRequest(client *http.Client, method string) []byte {
    endpoint := "https://httpbin.org/post"
    values := map[string]string{"foo": "baz"}
    jsonData, err := json.Marshal(values)
    req, err := http.NewRequest(method, endpoint, bytes.NewBuffer(jsonData))
    if err != nil {
        log.Fatalf("Error Occurred. %+v", err)
    }
    resp, err := client.Do(req)
    if err != nil {
        defer resp.Body.Close()
        log.Fatalf("Error sending request to API endpoint. %+v", err)
    }
    // Close the connection to reuse it
    defer resp.Body.Close()
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        log.Fatalf("Couldn't parse response body. %+v", err)
    }
    return body
}

func main() {
    // c should be re-used for further calls
    c := httpClient()
    for i := 1; i <= 60; i++ {
        response := sendRequest(c, http.MethodPost)
        log.Println("Response Body:", string(response))
        response = nil
        time.Sleep(time.Millisecond * 1000)
    }
}
When executed, memory usage keeps increasing, growing to as much as 90 MB in one hour. Is the GC not working properly? Even though I am reusing the same http.Client for every request, something still seems to be growing the memory footprint.
I advise you to use tools like pprof; they are very useful for troubleshooting precisely this kind of issue.
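As a minimal sketch (my addition, not part of the original answer), the standard net/http/pprof package can expose profiling endpoints on a side port so you can watch the heap while the loop runs:
import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers /debug/pprof/* on http.DefaultServeMux
)

func main() {
    go func() {
        // Inspect the live heap with:
        //   go tool pprof http://localhost:6060/debug/pprof/heap
        log.Println(http.ListenAndServe("localhost:6060", nil))
    }()
    // ... rest of the program unchanged
}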
You have set the DisableKeepAlives field to false, which means connections are kept open even after the requests have been made, contributing to the growing memory use. You should also make sure resp.Body is closed after ioutil.ReadAll(resp.Body) has read it; this is precisely the purpose of the defer keyword: preventing such leaks. GC does not mean absolute memory safety.
Also, outside of main, avoid using log.Fatal. Use a leveled logger like zap or zerolog instead, since log.Fatal calls os.Exit(1) immediately, which means your defer statements will never run; or call plain panic. See Should a Go package ever use log.Fatal and when?
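Putting that advice together, here is a minimal sketch of sendRequest (my rewrite, reusing the question's endpoint and payload) that returns errors instead of calling log.Fatal and only closes the body when a response actually exists:
func sendRequest(client *http.Client, method string) ([]byte, error) {
    endpoint := "https://httpbin.org/post"
    jsonData, err := json.Marshal(map[string]string{"foo": "baz"})
    if err != nil {
        return nil, err
    }
    req, err := http.NewRequest(method, endpoint, bytes.NewBuffer(jsonData))
    if err != nil {
        return nil, err
    }
    resp, err := client.Do(req)
    if err != nil {
        // resp is nil here: deferring resp.Body.Close() in this branch would panic.
        return nil, err
    }
    defer resp.Body.Close() // safe: resp is non-nil from this point on
    return ioutil.ReadAll(resp.Body)
}
Reading the body to completion and then closing it is also what lets the Transport return the connection to the keepalive pool for reuse.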

Trouble figuring out data race in goroutine

I started learning Go recently and I've been chipping away at this for a while now, but figured it was time to ask for some specific help. My program requests paginated data from an API, and since there are about 160 pages of data, it seemed like a good use for goroutines, except I have race conditions and I can't figure out why. It's probably because I'm new to the language, but my impression was that parameters are passed to a function as a copy of the data unless they're pointers.
Based on that, this code should be making copies of my data, leaving me free to change it in the main function, but I end up requesting some pages multiple times and other pages just once.
My main.go
package main

import (
    "bufio"
    "encoding/json"
    "log"
    "net/http"
    "net/url"
    "os"
    "strconv"
    "sync"

    "github.com/joho/godotenv"
)

func main() {
    err := godotenv.Load()
    if err != nil {
        log.Fatalln(err)
    }
    httpClient := &http.Client{}
    baseURL := "https://api.data.gov/ed/collegescorecard/v1/schools.json"
    filters := make(map[string]string)
    page := 0
    filters["school.degrees_awarded.predominant"] = "2,3"
    filters["fields"] = "id,school.name,school.city,2018.student.size,2017.student.size,2017.earnings.3_yrs_after_completion.overall_count_over_poverty_line,2016.repayment.3_yr_repayment.overall"
    filters["api_key"] = os.Getenv("API_KEY")
    outFile, err := os.Create("./out.txt")
    if err != nil {
        log.Fatalln(err)
    }
    writer := bufio.NewWriter(outFile)
    requestURL := getRequestURL(baseURL, filters)
    response := requestData(requestURL, httpClient)
    wg := sync.WaitGroup{}
    for (page+1)*response.Metadata.ResultsPerPage < response.Metadata.TotalResults {
        page++
        filters["page"] = strconv.Itoa(page)
        wg.Add(1)
        go func() {
            defer wg.Done()
            requestURL := getRequestURL(baseURL, filters)
            response := requestData(requestURL, httpClient)
            _, err = writer.WriteString(response.TextOutput())
            if err != nil {
                log.Fatalln(err)
            }
        }()
    }
    wg.Wait()
}

func getRequestURL(baseURL string, filters map[string]string) *url.URL {
    requestURL, err := url.Parse(baseURL)
    if err != nil {
        log.Fatalln(err)
    }
    query := requestURL.Query()
    for key, value := range filters {
        query.Set(key, value)
    }
    requestURL.RawQuery = query.Encode()
    return requestURL
}

func requestData(url *url.URL, httpClient *http.Client) CollegeScoreCardResponseDTO {
    request, _ := http.NewRequest(http.MethodGet, url.String(), nil)
    resp, err := httpClient.Do(request)
    if err != nil {
        log.Fatalln(err)
    }
    defer resp.Body.Close()
    var parsedResponse CollegeScoreCardResponseDTO
    err = json.NewDecoder(resp.Body).Decode(&parsedResponse)
    if err != nil {
        log.Fatalln(err)
    }
    return parsedResponse
}
I know another issue I will run into is writing to the output file in the correct order, but I believe using channels to tell each routine which request has finished writing could solve that. If I'm wrong about that, I would appreciate any advice on how to approach it as well.
Thanks in advance.
Goroutines do not receive copies of the data they reference. When the compiler detects that a variable "escapes" the current function, it allocates that variable on the heap. In this case, filters is one such variable. When a goroutine starts, the filters it accesses is the same map the main goroutine is using. Since you keep modifying filters in the main goroutine without locking, there is no guarantee of what each goroutine sees.
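To see the sharing in isolation, here is a minimal standalone demonstration (my example, not the asker's code) using a shared loop variable; before Go 1.22, which gave loop variables per-iteration scope, the goroutines all read whatever value the variable holds when they happen to run:
package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // i is captured by reference, not copied: with Go < 1.22,
            // duplicates and skipped values are likely here.
            fmt.Println(i)
        }()
    }
    wg.Wait()
}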
I suggest you keep filters read-only: create a new map inside the goroutine by copying all the items from filters, and set "page" on that copy. You also have to be careful to pass a copy of page to the goroutine:
go func(page int) {
    flt := make(map[string]string)
    for k, v := range filters {
        flt[k] = v
    }
    flt["page"] = strconv.Itoa(page)
    ...
}(page)
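As an aside (my addition, not the answerer's): on Go 1.21+ the copy can be written with maps.Clone from the standard library, and the race itself can be confirmed by running the program with the -race flag.
flt := maps.Clone(filters) // shallow copy of the read-only map; import "maps" (Go 1.21+)
flt["page"] = strconv.Itoa(page)
Note that the shared bufio.Writer has the same problem: WriteString is called from many goroutines at once, so the writes also need to be serialized, for example through a mutex or the single writer goroutine fed by a channel that you already suggested.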

How to mock second try of http call?

As part of my first project, I am creating a tiny library to send an SMS to any user. I have added logic to wait and retry if it doesn't receive a positive status on the first attempt. It's a basic HTTP call to an SMS sending service. My algorithm looks like this (the comments explain the flow of the code):
for {
    // send request
    resp, err := HTTPClient.Do(req)
    checkOK, checkSuccessUrl, checkErr := CheckSuccessStatus(resp, err)
    // if successful, don't continue
    if !checkOK && checkErr != nil {
        err = checkErr
        return resp, SUCCESS, int8(RetryMax-remain+1), err
    }
    remain--
    if remain == 0 {
        break
    }
    // calculate wait time
    wait := Backoff(RetryWaitMin, RetryWaitMax, RetryMax-remain, resp)
    // wait for the duration calculated by Backoff above
    time.Sleep(wait)
    // check the status of the last call; if unsuccessful, continue the loop
    if checkSuccessUrl != "" {
        req, err := GetNotificationStatusCheckRequest(checkSuccessUrl)
        resp, err := HTTPClient.Do(req)
        checkOK, _, checkErr = CheckSuccessStatusBeforeRetry(resp, err)
        if !checkOK {
            if checkErr != nil {
                err = checkErr
            }
            return resp, SUCCESS, int8(RetryMax-remain), err
        }
    }
}
Now I want to test this logic using any HTTP mocking framework available. The best I've found is https://github.com/jarcoal/httpmock
But this one does not provide functionality to mock the responses of the first and second calls separately, so I cannot test a success on the second or third retry. I can either test success on the first attempt or failure altogether.
Is there a package out there which suits my needs for testing this particular feature? If not, how can I achieve this using current tools?
This can easily be achieved using the test server from the standard library's httptest package. With a slight modification to the example contained within its documentation, you can set up a function for each of the responses you want up front:
package main

import (
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
    "net/http/httptest"
)

func main() {
    responseCounter := 0
    responses := []func(w http.ResponseWriter, r *http.Request){
        func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "First response")
        },
        func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "Second response")
        },
    }
    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        responses[responseCounter](w, r)
        responseCounter++
    }))
    defer ts.Close()
    printBody(ts.URL)
    printBody(ts.URL)
}

func printBody(url string) {
    res, err := http.Get(url)
    if err != nil {
        log.Fatal(err)
    }
    resBody, err := ioutil.ReadAll(res.Body)
    res.Body.Close()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s", resBody)
}
Which outputs:
First response
Second response
Executable code here:
https://play.golang.org/p/YcPe5hOSxlZ
Not sure you still need an answer, but github.com/jarcoal/httpmock provides a way to do this using ResponderFromMultipleResponses.
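For illustration, here is a sketch of that approach (my code, not the answerer's; the target URL and response bodies are made up, and it assumes httpmock v1):
httpmock.Activate()
defer httpmock.DeactivateAndReset()

// The first response fails the success check, the second passes,
// so a test can drive the retry branch of the loop above.
httpmock.RegisterResponder("POST", "https://sms.example.com/send",
    httpmock.ResponderFromMultipleResponses(
        []*http.Response{
            httpmock.NewStringResponse(http.StatusServiceUnavailable, `{"status":"FAILED"}`),
            httpmock.NewStringResponse(http.StatusOK, `{"status":"SENT"}`),
        },
    ))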

How to test main function in gin application?

How can I test func main? Like this:
func main() {
    Engine := GetEngine() // returns gin router with handlers attached
    Engine.Run(":8080")
}
It has only 2 lines, but I'd like to have them covered.
TestMain is reserved for test preparation; does that mean testing main was not planned by the language creators?
I could move the contents to another function, mainReal, but that seems like over-engineering.
How do I test that gin has started well? Can I launch main in a separate goroutine, check the reply, and stop it?
Thanks.
P.S. The possible duplicate is not a precise duplicate: it is dedicated not to testing func main() itself but to ideas for moving its contents elsewhere, so it covers a different issue and approach.
Solution.
You may test function main() from package main the same way, just do not name the test TestMain. I launch main() in a separate goroutine, then try to connect to it and perform a request.
I decided to hit an auxiliary handler which responds with a simple JSON {"status": "ok"}.
In my case:
package main

import (
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net/http"
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestMainExecution(t *testing.T) {
    go main()
    // NOTE: in practice the server may need a moment to start listening.
    resp, err := http.Get("http://127.0.0.1:8080/checkHealth")
    if err != nil {
        t.Fatalf("Cannot make get: %v\n", err)
    }
    bodySb, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        t.Fatalf("Error reading body: %v\n", err)
    }
    body := string(bodySb)
    fmt.Printf("Body: %v\n", body)
    var decodedResponse interface{}
    err = json.Unmarshal(bodySb, &decodedResponse)
    if err != nil {
        t.Fatalf("Cannot decode response <%s> from server. Err: %v", bodySb, err)
    }
    assert.Equal(t, map[string]interface{}{"status": "ok"}, decodedResponse,
        "Should return status:ok")
}
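A related approach (my sketch, not part of the original answer) avoids binding a real port at all: *gin.Engine implements http.Handler, so the router returned by the question's GetEngine can be exercised directly with net/http/httptest:
func TestCheckHealthHandler(t *testing.T) {
    engine := GetEngine() // the router from the question
    req := httptest.NewRequest(http.MethodGet, "/checkHealth", nil)
    rec := httptest.NewRecorder()
    engine.ServeHTTP(rec, req) // no network, no goroutine, no port to collide on
    if rec.Code != http.StatusOK {
        t.Fatalf("expected 200, got %d", rec.Code)
    }
}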

golang request to Orientdb http interface error

I am playing with golang and orientdb to test them. I have written a tiny web app which, upon a request, fetches a single document from a local orientdb instance and returns it. When I bench this app with apache bench at a concurrency above 1, I get the following error:
2015/04/08 19:24:07 http: panic serving [::1]:57346: Get http://localhost:2480/document/t1/9:1441: EOF
When I bench orientdb itself, it runs perfectly OK with any concurrency factor.
Also, when I change the URL this document is fetched from to anything else (another program written in golang, some internet site, etc.), the app runs OK.
Here is the code:
func main() {
    fmt.Println("starting ....")
    var aa interface{}
    router := gin.New()
    router.GET("/", func(c *gin.Context) {
        ans := getdoc("http://localhost:2480/document/t1/9:1441")
        json.Unmarshal(ans, &aa)
        c.JSON(http.StatusOK, aa)
    })
    router.Run(":3000")
}

func getdoc(addr string) []byte {
    client := new(http.Client)
    req, err := http.NewRequest("GET", addr, nil)
    req.SetBasicAuth("admin", "admin")
    resp, err := client.Do(req)
    if err != nil {
        fmt.Println("oops", resp, err)
        panic(err)
    }
    defer resp.Body.Close()
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    return body
}
Thanks in advance.
The keepalive connections are getting closed on you for some reason. You might be overwhelming the server, or going past the max number of connections the database can handle.
Also, the current http.Transport connection pool doesn't work well with synthetic benchmarks that make connections as fast as possible, and can quickly exhaust available file descriptors or ports (issue/6785).
To test this, I would set Request.Close = true to prevent the Transport from using the keepalive pool. If that works, one way to handle this while keeping keepalive is to specifically check for an io.EOF and retry that request, possibly with some backoff delay.
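Here is a minimal sketch of both suggestions applied to the question's getdoc (my code; the single-retry policy and the errors.Is unwrapping are my assumptions, not part of the answer; imports errors, io, io/ioutil, net/http assumed):
func getdoc(addr string) ([]byte, error) {
    req, err := http.NewRequest("GET", addr, nil)
    if err != nil {
        return nil, err
    }
    req.SetBasicAuth("admin", "admin")
    req.Close = true // option 1: bypass the keepalive pool, one fresh connection per request

    resp, err := http.DefaultClient.Do(req)
    // Option 2 (when keeping keepalive instead of setting req.Close):
    // a connection the server closed while idle surfaces as io.EOF,
    // so retry once, ideally after a short backoff.
    if err != nil && errors.Is(err, io.EOF) {
        resp, err = http.DefaultClient.Do(req)
    }
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    return ioutil.ReadAll(resp.Body)
}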
