I am trying to set a timer to count how much time is needed for my server to finish a request and I want the timer to stop after the last byte of the response is sent.
I found that the http server will only send the response after the handler function returns.
Is there any way to add a callback after the response is sent?
Or is there a better way to count the time taken from the first byte of the request coming in till the last byte of the response is sent?
The easier, but less accurate, way to do it would be to use a middleware that wraps your handler function.
func timer(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		startTime := time.Now()
		h.ServeHTTP(w, r)
		// time.Since is the idiomatic shorthand for time.Now().Sub(startTime);
		// record or log the duration so the variable is actually used
		duration := time.Since(startTime)
		log.Printf("%s took %v", r.URL.Path, duration)
	})
}
Then
http.Handle("/route", timer(yourHandler))
More accurately, this is the time taken to process the request and form the response, not the time until the last byte is written out.
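If you want to get closer to the time of the last write without touching the standard library, another option is to wrap the ResponseWriter and timestamp each Write. This is only a sketch of an alternative approach (timedWriter and timerLastWrite are names I made up), and it still measures the handler's writes rather than the moment the bytes leave the socket:
type timedWriter struct {
	http.ResponseWriter
	lastWrite time.Time
}

func (t *timedWriter) Write(b []byte) (int, error) {
	n, err := t.ResponseWriter.Write(b)
	t.lastWrite = time.Now() // remember when the handler last wrote
	return n, err
}

func timerLastWrite(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		startTime := time.Now()
		tw := &timedWriter{ResponseWriter: w}
		h.ServeHTTP(tw, r)
		log.Printf("until last handler write: %v", tw.lastWrite.Sub(startTime))
	})
}
Bear in mind that wrapping the ResponseWriter like this hides optional interfaces such as http.Flusher and http.Hijacker from the handler.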
If you absolutely need a more accurate duration, then the code you're looking to change resides in the net/http package, in the Server.Serve accept loop. The line go c.serve(ctx) is where the goroutine that serves the request is spawned.
for {
rw, e := l.Accept()
if e != nil {
if ne, ok := e.(net.Error); ok && ne.Temporary() {
if tempDelay == 0 {
tempDelay = 5 * time.Millisecond
} else {
tempDelay *= 2
}
if max := 1 * time.Second; tempDelay > max {
tempDelay = max
}
srv.logf("http: Accept error: %v; retrying in %v", e, tempDelay)
time.Sleep(tempDelay)
continue
}
return e
}
tempDelay = 0
c := srv.newConn(rw)
c.setState(c.rwc, StateNew) // before Serve can return
go func() {
	startTime := time.Now()
	c.serve(ctx)
	// c.serve returns once the connection is fully handled;
	// log the elapsed time so the variable is used
	duration := time.Since(startTime)
	srv.logf("http: connection served in %v", duration)
}()
}
Note: the request bytes actually arrive on the net.Conn somewhere inside l.Accept(), but the highlighted point is the only place where we can capture an approximate start time and end time within the same scope in the code.
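If modifying net/http is off the table, the http.Server.ConnState hook (a real field in the standard library) is another way to approximate this: it fires on connection state transitions, so StateActive to StateIdle roughly brackets one served request on a keep-alive connection. A sketch, with the handler and address made up:
package main

import (
	"log"
	"net"
	"net/http"
	"sync"
	"time"
)

func main() {
	var mu sync.Mutex
	starts := make(map[net.Conn]time.Time)

	srv := &http.Server{
		Addr: ":8080",
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("ok"))
		}),
		ConnState: func(c net.Conn, s http.ConnState) {
			switch s {
			case http.StateActive: // first bytes of a request were read
				mu.Lock()
				starts[c] = time.Now()
				mu.Unlock()
			case http.StateIdle, http.StateClosed: // response was handled
				mu.Lock()
				if t0, ok := starts[c]; ok {
					log.Printf("request served in %v", time.Since(t0))
					delete(starts, c)
				}
				mu.Unlock()
			}
		},
	}
	log.Fatal(srv.ListenAndServe())
}
Note that ConnState is per connection, not per request, so this is still an approximation.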
Related
I am trying to construct a receiver and sender pattern using two channels in Golang. I am doing a task (API call), and receiving back a Response struct. My goal is that when a response is received I'd like to send it to another channel (writeChan) for additional processing.
I'd like to continuously read/listen on that receiver channel (respChan) and process anything that comes through (such as a Response). Then I'd like to spin up a thread to go and do a further operation with that Response in another goroutine.
I'd like to understand how I can chain together this pattern to allow data to flow from the API calls and concurrently write it (each Response will be written to a separate file destination, which the Write() func handles).
Essentially my current pattern is the following:
package main
import (
	"fmt"
	"sync"
	"time"
)
func main() {
var wg sync.WaitGroup
respChan := make(chan Response) // Response is a struct that contains API response metadata
defer close(respChan)
// requests is just a slice of requests to be made to an API
// This part is working well
for _, req := range requests {
wg.Add(1)
go func(r Request) {
defer wg.Done()
resp, _ := r.Get() // Make the API call and receive back a Response struct
respChan <- resp // Put the response into our channel
}(req)
}
// Now, I want to extract the responses as they become available and send them to another function for some processing. I am unsure how to handle this properly
writeChan := make(chan string)
defer close(writeChan)
select {
case resp := <-respChan: // receive from response channel
go func(response Response) {
signal, _ := Write(response) // Separate func to write the response to a file. Not important here in this context.
writeChan <- signal // Put the signal data into the channel which is a string file path of where the file was written (will be used for a later process)
}(resp)
case <-time.After(15 * time.Second):
fmt.Println("15 seconds have passed without receiving anything...")
}
wg.Wait()
}
Let me share with you a working example that you can benefit from. First, I'm gonna present the code; then I'm gonna walk you through all the relevant sections.
package main
import (
"fmt"
"net/http"
"os"
"strings"
"time"
)
type Request struct {
Url string
DelayInSeconds time.Duration
}
type Response struct {
Url string
StatusCode int
}
func main() {
requests := []Request{
{"https://www.google.com", 0},
{"https://stackoverflow.com", 1},
{"https://www.wikipedia.com", 4},
}
respChan := make(chan Response)
defer close(respChan)
for _, req := range requests {
go func(r Request) {
fmt.Printf("%q - %v\n", r.Url, strings.Repeat("#", 30))
// simulate heavy work (DelayInSeconds holds a bare count,
// so multiplying by time.Second gives that many seconds)
time.Sleep(time.Second * r.DelayInSeconds)
resp, _ := http.Get(r.Url) // error handling omitted for brevity (see final notes)
res := Response{r.Url, resp.StatusCode}
fmt.Println(time.Now())
respChan <- res
}(req)
}
writeChan := make(chan struct{}, len(requests)) // buffered: nothing drains it in this simplified example, so senders must not block
defer close(writeChan)
for i := 0; i < len(requests); i++ {
select {
case res := <-respChan:
go func(r Response) {
f, err := os.Create(fmt.Sprintf("%v.txt", strings.Replace(r.Url, "https://", "", 1)))
if err != nil {
panic(err)
}
defer f.Close()
f.Write([]byte(fmt.Sprintf("%q OK with %d\n", r.Url, r.StatusCode)))
writeChan <- struct{}{}
}(res)
case <-time.After(time.Second * 2):
fmt.Println("Timeout")
}
}
}
Set up
First, I've defined the two structs that will be used in the example: Request and Response. In the former, I put a DelayInSeconds to mock some heavy loads and time-consuming operations. Then, I defined the requests variable that contains all the requests that have to be done.
The writing part
Here, I range over the requests variable. For each request, I'm gonna issue an HTTP request to the target URL. The time.Sleep emulates the heavy load. Then, I write the response to the respChan channel, which is unbuffered.
The reading part
Here, the major change is to wrap the select construct in a for loop. Thanks to this, we'll make sure to iterate the right number of times (based on the length of the requests variable).
Final notes
First of all, bear in mind that the code is oversimplified just to show off the relevant parts. Due to this, a lot of error handling is missing and some inline functions could be extracted into named functions. You don't need sync.WaitGroup to achieve what you need: channels are enough, as the sketch below shows.
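For instance, a minimal sketch of waiting for completion with a channel instead of a WaitGroup (reusing the requests slice from above):
done := make(chan struct{})
for _, req := range requests {
	go func(r Request) {
		// ... issue the request and send to respChan as before ...
		done <- struct{}{}
	}(req)
}
// receive exactly one signal per request before moving on
for range requests {
	<-done
}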
Feel free to play with delays and check which files are written!
Let me know if this helps you!
Edit
As requested, I'm gonna provide you with a more accurate solution based on your needs. The new reading part will be something like the following:
count := 0
for {
// this check is needed to exit the for loop and not wait indefinitely;
// it can be removed based on your needs
if count == 3 { // 3 == len(requests) in this example
fmt.Println("all responses arrived...")
return
}
res := <-respChan
count++
go func(r Response) {
f, err := os.Create(fmt.Sprintf("%v.txt", strings.Replace(r.Url, "https://", "", 1)))
if err != nil {
panic(err)
}
defer f.Close()
f.Write([]byte(fmt.Sprintf("%q OK with %d\n", r.Url, r.StatusCode)))
writeChan <- struct{}{}
}(res)
}
Here, the execution waits indefinitely within the for loop. No matter how long each request takes to complete, it will be fetched as soon as it arrives. At the top of the for loop, I put an if to exit after it has processed the requests we need. However, you can omit it and let the code run until a cancellation signal comes in (it's up to you; a sketch of that variant follows).
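If you prefer the cancellation route, a minimal sketch (assuming a context.Context named ctx is available, which is not part of the code above) could look like:
for {
	select {
	case <-ctx.Done():
		fmt.Println("shutting down:", ctx.Err())
		return
	case res := <-respChan:
		go func(r Response) {
			// ... create and write the file exactly as above ...
			writeChan <- struct{}{}
		}(res)
	}
}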
Let me know if this better meets your requirements, thanks!
To give you context,
The variable elementInput is dynamic. I do not know the exact length of it.
It can be 10, 5, etc.
The *Element type carried by the channel is a struct.
My example works, but my problem is that this implementation is still synchronous, because I wait for the channel to return before appending to my result.
Can you please help me call the GetElements() function concurrently while preserving the order defined in elementInput (based on index)?
elementInput := []string{FB_FRIENDS, BEAUTY_USERS, FITNESS_USERS, COMEDY_USERS}
wg.Add(len(elementInput))
for _, v := range elementInput {
//create channel
channel := make(chan *Element)
//concurrent call
go GetElements(ctx, page, channel)
//Preserve the order
var elementRes = *<-channel
if len(elementRes.List) > 0 {
el = append(el, elementRes)
}
}
wg.Wait()
Your implementation is not concurrent.
Reason: after every goroutine call you wait for its result, and that makes this serial.
Below is a sample implementation similar to your flow:
call the Concurrency method, which invokes the function concurrently
afterwards, loop and collect the response from every call above
the main goroutine sleeps for 2 seconds
Go Playground with running code -> Sample Application
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	Concurrency()
	// 2 seconds (time.Sleep(2000) would sleep 2000 nanoseconds); not
	// strictly needed, since Concurrency returns only after collecting
	// every response
	time.Sleep(2 * time.Second)
}
func response(greeter string, channel chan *string) {
reply := fmt.Sprintf("hello %s", greeter)
channel <- &reply
}
func Concurrency() {
events := []string{"ALICE", "BOB"}
channels := make([]chan *string, 0)
// start concurrently
for _, event := range events {
channel := make(chan *string)
go response(event, channel)
channels = append(channels, channel)
}
// collect response (reading channels[i] in index order preserves
// the order of events)
response := make([]string, len(channels))
for i := 0; i < len(channels); i++ {
response[i] = *<-channels[i]
}
// print response
log.Printf("channel response %v", response)
}
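Since the original question also asked to preserve the order defined in the input (based on index), another common approach, shown here as a sketch rather than a change to the code above, is to give each goroutine its own slot in a result slice and wait with a sync.WaitGroup; writing to distinct elements of a slice from different goroutines is safe:
package main

import (
	"fmt"
	"sync"
)

func main() {
	events := []string{"ALICE", "BOB"}
	results := make([]string, len(events)) // one slot per input preserves order

	var wg sync.WaitGroup
	for i, event := range events {
		wg.Add(1)
		go func(i int, e string) {
			defer wg.Done()
			results[i] = fmt.Sprintf("hello %s", e) // each goroutine owns slot i
		}(i, event)
	}
	wg.Wait()
	fmt.Printf("ordered response %v\n", results)
}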
I'm using a rate limiter to throttle the number of requests that are routed
The requests are sent to a channel, and I want to limit the number that are processed per second, but I'm struggling to understand if I'm setting this up correctly. I don't get an error, but I'm unsure if I'm even using the rate limiter.
This is what is being added to the channel:
type processItem struct {
itemString string
}
Here's the channel and limiter:
itemChannel := make(chan processItem, 5)
itemThrottler := rate.NewLimiter(4, 1) // 4 events per second, burst size 1
var waitGroup sync.WaitGroup
Items are added to the channel:
case "newItem":
waitGroup.Add(1)
itemToExec := new(processItem)
itemToExec.itemString = "item string"
itemChannel <- *itemToExec
Then a go routine is used to process everything that is added to the channel:
go func() {
defer waitGroup.Done()
err := itemThrottler.Wait(context.Background())
if err != nil {
fmt.Printf("Error with limiter: %s", err)
return
}
for item := range itemChannel {
execItem(item.itemString) // the processing function
}
defer func() { <-itemChannel }()
}()
waitGroup.Wait()
Can someone confirm that the following occurs:
The execItem function is run on each member of the channel 4 times a second
I don't understand what err := itemThrottler.Wait(context.Background()) is doing in the code. How is it being invoked?
... i'm unsure if i'm even using the rate limiter
Yes, you are using the rate-limiter. You are rate-limiting the case "newItem": branch of your code.
I don't understand what "err := itemThrottler.Wait(context.Background())" is doing in the code
itemThrottler.Wait(..) will just stagger requests (4/s i.e. every 0.25s) - it does not refuse requests if the rate is exceeded. So what does this mean? If you receive a glut of 1000 requests in 1 second:
4 requests will be handled immediately; but
996 requests will create a backlog of 996 goroutines that will block
The 996 will unblock at a rate of 4/s, so the backlog of pending goroutines will not clear for about another 4 minutes (or longer if more requests come in). A backlog of goroutines may or may not be what you want. If not, you may want to use Limiter.Allow: if false is returned, refuse the request (i.e. don't create a goroutine) and return a 429 error (if this is an HTTP request).
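For illustration, here is a sketch of the Allow-based refusal; the handler wiring is an assumption, not part of the original code:
func newItemHandler(w http.ResponseWriter, r *http.Request) {
	// Allow never blocks: it reports whether an event may happen now
	if !itemThrottler.Allow() {
		http.Error(w, "too many requests", http.StatusTooManyRequests) // 429
		return
	}
	itemChannel <- processItem{itemString: "item string"}
}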
Finally, if this is an HTTP request, you should use its embedded context when calling Wait, e.g.
func (a *app) myHandler(w http.ResponseWriter, r *http.Request) {
// ...
err := a.ratelimiter.Wait(r.Context())
if err != nil {
// client http request most likely canceled (i.e. caller disconnected)
}
}
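As an aside: the consumer goroutine posted in the question calls Wait once, before the for range loop, so it delays only the first item. If the intent is to throttle every item pulled off the channel, one way (an assumption about the desired behaviour) is to move the Wait inside the loop:
go func() {
	defer waitGroup.Done()
	for item := range itemChannel {
		// block until the limiter permits the next event
		// (roughly every 250ms at 4 events per second)
		if err := itemThrottler.Wait(context.Background()); err != nil {
			fmt.Printf("Error with limiter: %s", err)
			return
		}
		execItem(item.itemString)
	}
}()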
I tried to measure the bandwidth of Go's default HTTP server implementation on my local machine. The server just accepts any HTTP request, increments a counter using sync/atomic, and sends a 200 OK response. The server also collects the number of requests every second, prints it, and resets the counter to zero:
type hand struct {
cnt int32
}
func (h *hand) ServeHTTP(rsp http.ResponseWriter, req *http.Request) {
atomic.AddInt32(&h.cnt, 1)
rsp.WriteHeader(200)
// note: handlers don't need to close the request body;
// the http.Server does that automatically
if req.Body != nil {
	req.Body.Close()
}
}
func main() {
h := new(hand)
s := &http.Server{
Addr: ":8080",
Handler: h,
}
ticker := time.NewTicker(1 * time.Second)
go func() {
for tick := range ticker.C {
val := atomic.SwapInt32(&h.cnt, 0)
fmt.Printf("(%v) %d RPS\n", tick, val)
}
}()
log.Fatal(s.ListenAndServe())
}
The target client is trying to send 100000 GET requests simultaneously:
const total = 100000
func main() {
var r int32
var rsp int32
r = total
rsp = r
for r > 0 {
go func() {
p, err := http.Get("http://localhost:8080")
atomic.AddInt32(&rsp, -1)
if err != nil {
	fmt.Printf("error: %s\n", err)
	return
}
// close the body so the underlying connection can be reused
defer p.Body.Close()
if p.StatusCode != 200 {
fmt.Printf("status %d\n", p.StatusCode)
}
}()
r--
}
for {
x := atomic.LoadInt32(&rsp)
fmt.Printf("sent : %d\n", total-x)
if x == 0 {
return
}
time.Sleep(1 * time.Second)
}
}
I'm using a Linux machine with a 5.3.2-gentoo kernel. I changed the nofile ulimits (both soft and hard) to 100000. While these tests ran, all other user applications were stopped.
I'm not expecting accurate results; I just need to know the order of magnitude of this threshold, something like X000 or X0000 or X00000.
But the server can't process more than about 4000 requests per second, which looks too low:
# removed timestamps
0 RPS
0 RPS
0 RPS
3953 RPS
3302 RPS
387 RPS
37 RPS
1712 RPS
How can I raise the bandwidth of HTTP server? Or maybe there is an issue with my testing method or local configuration?
The problem was in the testing method:
It's not correct to run the client and server on the same machine; the target server should be located on a dedicated host, and the network between client and target should be fast enough
Custom scripts are not a good option for network testing: for simple cases wrk can be used, and for more complex scenarios JMeter or other frameworks
When I tested this server on a dedicated host using wrk, it showed 285900.73 RPS.
Below is a sample snippet for getting a value from Redis. I'm pipelining 3 Redis commands and reading back the values. The problem here is "missing milliseconds": the time taken by the Redis pipeline is significantly lower (less than 5ms), but the overall time taken to perform a Get operation is more than 10ms. I'm not sure which operation is taking the time; unmarshaling is not the issue, as I measured len(bytes) and the timing. Any help is much appreciated.
Requests/second = 300, running on 3 AWS large instances against a powerful 25GB Redis instance, using the default 10 connections.
func Get(params...) <-chan CacheResult {
start := time.Now()
var res CacheResult
defer func() {
resCh <- res
}()
type timers struct {
total time.Duration
pipeline time.Duration
unmarshal time.Duration
}
t := timers{}
startPipeTime := time.Now()
// pipe line commands
pipe := c.client.Pipeline()
// 3 commands pipelined (HGET, HGET, GET)
if _, res.Err = pipe.Exec(); res.Err != nil && res.Err != redis.Nil {
return resCh
}
sinceStartPipeTime := time.Since(startPipeTime)
t.pipeline = sinceStartPipeTime // store it so the log line below prints a real value
// get query values like below for HGET & GET
if val, res.Err = cachedValue.Bytes(); res.Err != nil {
return resCh
}
// Unmarshal the query value
startUnmarshalTime := time.Now()
var cv common.CacheValue
if res.Err = json.Unmarshal(val, &cv); res.Err != nil {
return resCh
}
sinceStartUnmarshalTime := time.Since(startUnmarshalTime)
t.unmarshal = sinceStartUnmarshalTime
endTime := time.Since(start)
t.total = endTime // likewise, store the total so it is logged
xlog.Infof("Timings total:%s, "+
"pipeline(redis):%s, unmarshaling(%vB):%s", t.total, t.pipeline, len(val), t.unmarshal)
return resCh
}
The time to execute a Redis command includes:
1. App server pre-processing
2. Round trip time between the app server and the Redis server
3. Redis server processing time
In normal operation, (2) takes the most significant time.
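If you want to see how much of the missing time is (2), one option is to time a bare PING and compare it with your pipeline timing. This is a sketch assuming the go-redis v8 client and a local Redis address; adapt the import path and Options to your setup:
package main

import (
	"context"
	"log"
	"time"

	"github.com/go-redis/redis/v8"
)

func main() {
	ctx := context.Background()
	client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// A bare PING approximates pre-processing plus round trip,
	// with near-zero server processing time.
	start := time.Now()
	if err := client.Ping(ctx).Err(); err != nil {
		log.Fatalf("ping failed: %v", err)
	}
	log.Printf("approximate round trip: %v", time.Since(start))
}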