I am trying to implement an HTTP request limiter that allows 10 requests per second per user, keyed by username.
At most 10 requests may hit the server at any time, including requests that are still being processed.
Below is what I have implemented, based on the rate-limit example I referenced.
func init() {
    go cleanupVisitors()
}

func getVisitor(username string) *rate.Limiter {
    mu.Lock()
    defer mu.Unlock()

    v, exists := visitors[username]
    if !exists {
        limiter := rate.NewLimiter(10, 3)
        visitors[username] = &visitor{limiter, time.Now()}
        return limiter
    }
    v.lastSeen = time.Now()
    return v.limiter
}

func cleanupVisitors() {
    for {
        time.Sleep(time.Minute)
        mu.Lock()
        for username, v := range visitors {
            if time.Since(v.lastSeen) > 1*time.Minute {
                delete(visitors, username)
            }
        }
        mu.Unlock()
    }
}

func limit(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        mappedArray := hotelapi.SearchResponse{}
        mappedArray.StartTime = time.Now().Format("2006-02-01 15:04:05.000000")
        mappedArray.EndTime = time.Now().Format("2006-02-01 15:04:05.000000")

        userName := r.FormValue("username")
        limiter := getVisitor(userName)
        if !limiter.Allow() {
            w.Header().Set("Content-Type", "application/json")
            w.WriteHeader(http.StatusTooManyRequests)
            mappedArray.MessageInfo = http.StatusText(http.StatusTooManyRequests)
            mappedArray.ErrorCode = strconv.Itoa(http.StatusTooManyRequests)
            json.NewEncoder(w).Encode(mappedArray)
            return
        }
        next.ServeHTTP(w, r)
    })
}

func route() {
    r := mux.NewRouter()
    r.PathPrefix("/hello").HandlerFunc(api.ProcessHello).Methods("GET")

    ws := r.PathPrefix("/index.php").HandlerFunc(api.ProcessWs).Methods("GET", "POST").Subrouter()

    r.Use(panicRecovery)
    ws.Use(limit)

    http.HandleFunc("/favicon.ico", faviconHandler)

    if config.HTTPSEnabled {
        err := http.ListenAndServeTLS(":"+config.Port, config.HTTPSCertificateFilePath, config.HTTPSKeyFilePath, handlers.CompressHandlerLevel(r, gzip.BestSpeed))
        if err != nil {
            fmt.Println(err)
            log.Println(err)
        }
    } else {
        err := http.ListenAndServe(":"+config.Port, handlers.CompressHandler(r))
        if err != nil {
            fmt.Println(err)
            log.Println(err)
        }
    }
}
I have a couple of concerns here.
I want the limiter only for /index.php, not for /hello. I implemented this with a subrouter. Is that the correct way?
The limit middleware is not limiting as I expected: it allows one successful request, and all other requests are rejected with a "too many requests" error.
What am I missing here?
The subrouter pattern is the solution gorilla proposes. A small organizational suggestion, though:
r := mux.NewRouter()
r.HandleFunc("/hello", api.ProcessHello).Methods("GET")
r.HandleFunc("/favicon.ico", faviconHandler)
r.Use(panicRecovery)

ws := r.PathPrefix("/index.php").Subrouter()
ws.Use(limit)
ws.Methods("GET", "POST").HandlerFunc(api.ProcessWs)
You also seem to be applying handlers not only via the Use() method but also by wrapping the router in the handler you pass to ListenAndServe. From the same gorilla example, a clearer way to start the server is:
server := &http.Server{
    Addr: "0.0.0.0:8080",
    // Good practice to set timeouts to avoid Slowloris attacks.
    WriteTimeout: time.Second * 15,
    ReadTimeout:  time.Second * 15,
    IdleTimeout:  time.Second * 60,
    Handler:      router, // Pass our instance of gorilla/mux in.
}

fmt.Println("starting server")
if err := server.ListenAndServe(); err != nil {
    fmt.Println(err)
}
Also, from your source, the pattern you are implementing is rate limiting per user, but you key the limiters by username instead of by IP. Your question also doesn't clarify whether you want to rate-limit per user or limit how many requests can be made to the endpoint overall, so some of the unexpected behavior may come from that as well.
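One more note, as an assumption about what the numbers in your question map to: in golang.org/x/time/rate, the first argument to rate.NewLimiter is the refill rate in events per second and the second is the burst size. NewLimiter(10, 3) therefore allows a burst of at most 3 requests, after which requests are only admitted as tokens refill at 10 per second, and a token bucket does not track requests that are still being processed. A minimal sketch, assuming the requirement means 10 per second with a burst of 10:

package main

import (
    "fmt"

    "golang.org/x/time/rate"
)

func main() {
    // sketch only: refill 10 tokens per second, bucket size 10
    limiter := rate.NewLimiter(rate.Limit(10), 10)

    for i := 0; i < 12; i++ {
        // the first 10 calls succeed immediately, the remaining ones are throttled
        fmt.Println(i, limiter.Allow())
    }
}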
I want to write health check endpoints for 2 different services, but the problem is that they have no HTTP server.
If I can write health check endpoints, how should I proceed? Or is it mandatory to have an HTTP server to implement health check endpoints in Go?
Yes, you can add an HTTP health check handler to your application with something like this. Then, in the service that's performing the health check, just make sure it knows which port to run the HTTP checks against.
package main

import "net/http"

func main() {
    // Start the health check endpoint and make sure not to block
    go func() {
        _ = http.ListenAndServe(":8080", http.HandlerFunc(
            func(w http.ResponseWriter, r *http.Request) {
                _, _ = w.Write([]byte("ok"))
            },
        ))
    }()

    // Start my application code
}
Alternatively, if you need to expose your health check route at a separate path, you can do something like this.
http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
_, _ = w.Write([]byte("ok"))
})
_ = http.ListenAndServe(":8080", nil)
Updated
If you want to check the health of a go-routine, you can do something like this.
package main

func main() {
    crashed := make(chan struct{})

    go func() {
        // close the channel when the go-routine exits, signalling that it
        // is no longer running
        defer close(crashed)
        // ... the go-routine's actual work goes here ...
    }()

    select {
    case <-crashed:
        // Do something now that the go-routine crashed
    }
}
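If you want to surface that signal through the HTTP health check from the earlier snippet, one way (a sketch with made-up wiring, not part of the original answer) is to have the handler report on the channel's state:

package main

import "net/http"

func main() {
    crashed := make(chan struct{})

    go func() {
        defer close(crashed)
        // ... the go-routine's actual work goes here ...
    }()

    http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
        select {
        case <-crashed:
            // the go-routine has exited, report unhealthy
            http.Error(w, "worker crashed", http.StatusServiceUnavailable)
        default:
            _, _ = w.Write([]byte("ok"))
        }
    })
    _ = http.ListenAndServe(":8080", nil)
}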
It's not mandatory to have an HTTP server.
You can ping the IP address of the machine your service runs on. For example, I use the go-ping package:
package main

import (
    "fmt"
    "time"

    "github.com/go-ping/ping"
)

func main() {
    t := time.NewTicker(5 * time.Second)
    for {
        select {
        case <-t.C:
            err := checkService("google", "216.239.38.120")
            if err != nil {
                fmt.Println("notif to email, error:", err.Error())
                time.Sleep(1 * time.Hour) // to not spam email
            }
        }
    }
}

func checkService(name string, ip string) error {
    p, err := ping.NewPinger(ip)
    if err != nil {
        return err
    }
    p.Count = 3
    p.Timeout = 5 * time.Second

    err = p.Run()
    if err != nil {
        return err
    }

    stats := p.Statistics()
    if stats.PacketLoss == 100 {
        return fmt.Errorf("service %s down", name)
    }
    fmt.Printf("stats: %#v\n", stats)
    return nil
}
I want to secure the Docker daemon REST API using a Go reverse proxy server. I found this article very relevant. I have never used Go, so I'm not sure how to add basic authentication with a static username and password. I tried every approach I could find on Google, but none worked for me.
Could someone please help me add static basic auth to the following code, so that the Docker daemon API is only reachable if the request includes the username and password:
https://github.com/ben-lab/blog-material/blob/master/golang-reverse-proxy-2/reverse-proxy.go
package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "time"

    "github.com/tv42/httpunix"
)

func handleHTTP(w http.ResponseWriter, req *http.Request) {
    fmt.Printf("Requested : %s\n", req.URL.Path)

    u := &httpunix.Transport{
        DialTimeout:           100 * time.Millisecond,
        RequestTimeout:        1 * time.Second,
        ResponseHeaderTimeout: 1 * time.Second,
    }
    u.RegisterLocation("docker-socket", "/var/run/docker.sock")

    req.URL.Scheme = "http+unix"
    req.URL.Host = "docker-socket"

    resp, err := u.RoundTrip(req)
    if err != nil {
        http.Error(w, err.Error(), http.StatusServiceUnavailable)
        return
    }
    defer resp.Body.Close()

    copyHeader(w.Header(), resp.Header)
    w.WriteHeader(resp.StatusCode)
    io.Copy(w, resp.Body)
}

func copyHeader(dst, src http.Header) {
    for k, vv := range src {
        for _, v := range vv {
            dst.Add(k, v)
        }
    }
}

func main() {
    server := &http.Server{
        Addr:    ":8888",
        Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { handleHTTP(w, r) }),
    }
    log.Fatal(server.ListenAndServe())
}
You can access the basic auth header values by calling BasicAuth() on your req *http.Request object, like:
user, pass, _ := req.BasicAuth()
Then compare user and pass with the static values you have.
https://golang.org/pkg/net/http/#Request.BasicAuth
Update:
func handleHTTP(w http.ResponseWriter, req *http.Request) {
    user, pass, _ := req.BasicAuth()
    if user != "myuser" || pass != "mysecret" {
        http.Error(w, "not authorized", http.StatusUnauthorized)
        return
    }

    fmt.Printf("Requested : %s\n", req.URL.Path)

    u := &httpunix.Transport{
        DialTimeout:           100 * time.Millisecond,
        RequestTimeout:        1 * time.Second,
        ResponseHeaderTimeout: 1 * time.Second,
    }
    u.RegisterLocation("docker-socket", "/var/run/docker.sock")

    req.URL.Scheme = "http+unix"
    req.URL.Host = "docker-socket"

    resp, err := u.RoundTrip(req)
    if err != nil {
        http.Error(w, err.Error(), http.StatusServiceUnavailable)
        return
    }
    defer resp.Body.Close()

    copyHeader(w.Header(), resp.Header)
    w.WriteHeader(resp.StatusCode)
    io.Copy(w, resp.Body)
}
Here you are, you can copy the logic from my little project:
https://github.com/alessiosavi/StreamingServer/blob/0f65dbfc77f667777d3047fa1a6b1a2cbd8aaf26/auth/authutils.go
First, you need a server to store the users (I've used Redis).
Then you need three functions for the user:
LoginUser
RegisterUser
DeleteUser
During the login/register phase, you generate a cookie by hashing the username/password and store the cookie in a Redis table.
Then you verify it every time an API is called.
Feel free to copy the code that you need.
Open an issue if something is not clear.
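A rough sketch of that flow, with an in-memory map standing in for the Redis table and hypothetical handler names (the linked project uses Redis and its own helper functions):

package main

import (
    "crypto/sha512"
    "encoding/hex"
    "net/http"
    "sync"
)

var (
    mu       sync.Mutex
    sessions = map[string]string{} // cookie value -> username
)

// derive the cookie value by hashing username/password, as described above
func makeToken(user, pass string) string {
    sum := sha512.Sum512([]byte(user + ":" + pass))
    return hex.EncodeToString(sum[:])
}

// login/register phase: generate the cookie and remember it server-side
func loginHandler(w http.ResponseWriter, r *http.Request) {
    user, pass := r.FormValue("username"), r.FormValue("password")
    // ... validate the credentials against your user store here ...
    token := makeToken(user, pass)

    mu.Lock()
    sessions[token] = user
    mu.Unlock()

    http.SetCookie(w, &http.Cookie{Name: "session", Value: token})
}

// verify the cookie every time an API is called
func requireAuth(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        c, err := r.Cookie("session")
        if err != nil {
            http.Error(w, "unauthorized", http.StatusUnauthorized)
            return
        }
        mu.Lock()
        _, ok := sessions[c.Value]
        mu.Unlock()
        if !ok {
            http.Error(w, "unauthorized", http.StatusUnauthorized)
            return
        }
        next.ServeHTTP(w, r)
    })
}

func main() {
    http.HandleFunc("/login", loginHandler)
    http.Handle("/api/", requireAuth(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        _, _ = w.Write([]byte("ok"))
    })))
    _ = http.ListenAndServe(":8080", nil)
}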
I'm proxying a bunch of HTTP GET calls through singleflight, but the returned response is only seen by the first request.
I also noticed a problem in my test: if the first request times out, the response is lost.
Let's say r1, r2, r3 are requests that arrive in that order and are all grouped under one group key. If r1 times out, r2 and r3 will wait until the shared HTTP call returns or until their own timeouts elapse.
proxy code (credits to here)
// add auth to the request and proxy to the target host
var serveReverseProxy = func(target string, res http.ResponseWriter, req *http.Request) {
    log.Println("new request!")
    requestURL, _ := url.Parse(target)
    proxy := httputil.NewSingleHostReverseProxy(requestURL)

    req1, _ := http.NewRequest(req.Method, req.RequestURI, req.Body)
    for k, v := range req.Header {
        for _, vv := range v {
            req1.Header.Add(k, vv)
        }
    }
    req1.Header.Set("Authorization", "Bearer "+"some token")
    req1.Host = requestURL.Host

    proxy.ServeHTTP(res, req1)
}
var requestGroup singleflight.Group

mockBackend := httptest.NewServer(http.HandlerFunc(func(res http.ResponseWriter, req *http.Request) {
    groupKey := req.Host + req.RequestURI
    name := req.Header.Get("From")

    ch := requestGroup.DoChan(groupKey, func() (interface{}, error) {
        // increase key retention to 20s to make sure r1, r2, r3 are all in one group
        go func() {
            time.Sleep(20 * time.Second)
            requestGroup.Forget(groupKey)
            log.Println("Key deleted :", groupKey)
        }()

        // proxy to some host and expect the result to be written into res
        serveReverseProxy("https://somehost.com", res, req)
        return nil, nil
    })

    timeout := time.After(15 * time.Second)

    var result singleflight.Result
    select {
    case <-timeout: // Timeout elapsed, send a timeout message (504)
        log.Println(name, " timed out")
        http.Error(res, "request timed out", http.StatusGatewayTimeout)
        return
    case result = <-ch: // Received result from channel
    }

    if result.Err != nil {
        http.Error(res, result.Err.Error(), http.StatusInternalServerError)
        return
    }

    if result.Shared {
        log.Println(name, " is shared")
    } else {
        log.Println(name, " not shared")
    }
}))
I'd expect r2, r3 to either:
at least see the result via their own ResponseWriter, or
time out along with r1.
https://github.com/golang/net/blob/master/http2/h2demo/h2demo.go#L181-L219
This works. It turns out I need to return the handler from singleflight.Group.Do instead of writing the response inside the shared call.
I don't know why.
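For what it's worth, a minimal sketch of that idea (the linked example returns a handler; this sketch returns the fetched status and body instead, and the upstream URL is made up). The key point is that the shared call returns data rather than writing to the first caller's ResponseWriter, so every waiting request can write the result to its own writer:

package main

import (
    "io/ioutil"
    "net/http"

    "golang.org/x/sync/singleflight"
)

// what the shared call returns to every waiting request
type proxiedResponse struct {
    status int
    body   []byte
}

var requestGroup singleflight.Group

func handle(w http.ResponseWriter, r *http.Request) {
    key := r.Host + r.RequestURI

    v, err, _ := requestGroup.Do(key, func() (interface{}, error) {
        // do NOT write to the first caller's ResponseWriter here; just
        // fetch the upstream response and return it
        resp, err := http.Get("https://somehost.com" + r.RequestURI)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        body, err := ioutil.ReadAll(resp.Body)
        if err != nil {
            return nil, err
        }
        return &proxiedResponse{status: resp.StatusCode, body: body}, nil
    })
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadGateway)
        return
    }

    pr := v.(*proxiedResponse)
    w.WriteHeader(pr.status)
    _, _ = w.Write(pr.body) // every caller, shared or not, writes the result itself
}

func main() {
    http.HandleFunc("/", handle)
    _ = http.ListenAndServe(":8080", nil)
}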
As part of my first project I am creating a tiny library to send an SMS to any user. I have added logic to wait and retry if it doesn't receive a positive status on the first attempt. It's a basic HTTP call to an SMS sending service. My algorithm looks like this (the comments explain the flow of the code):
for {
    // send request
    resp, err := HTTPClient.Do(req)
    checkOK, checkSuccessUrl, checkErr := CheckSuccessStatus(resp, err)

    // if successful don't continue
    if !checkOK && checkErr != nil {
        err = checkErr
        return resp, SUCCESS, int8(RetryMax-remain+1), err
    }

    remain := remain - 1
    if remain == 0 {
        break
    }

    // calculate wait time
    wait := Backoff(RetryWaitMin, RetryWaitMax, RetryMax-remain, resp)
    // wait for time calculated in backoff above
    time.Sleep(wait)

    // check the status of the last call; if unsuccessful then continue the loop
    if checkSuccessUrl != "" {
        req, err := GetNotificationStatusCheckRequest(checkSuccessUrl)
        resp, err := HTTPClient.Do(req)
        checkOK, _, checkErr = CheckSuccessStatusBeforeRetry(resp, err)
        if !checkOK {
            if checkErr != nil {
                err = checkErr
            }
            return resp, SUCCESS, int8(RetryMax-remain), err
        }
    }
}
Now I want to test this logic using any HTTP mock framework available. The best I've found is https://github.com/jarcoal/httpmock
But it does not seem to provide functionality to mock the responses of the first and second requests separately. Hence I cannot test success on the second or third retry; I can only test success on the first attempt or failure altogether.
Is there a package out there that suits my needs for testing this particular feature? If not, how can I achieve this using the current tools?
This can easily be achieved using the test server that comes in the standard library's httptest package. With a slight modification to the example contained within it you can set up functions for each of the responses you want up front by doing this:
package main

import (
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
    "net/http/httptest"
)

func main() {
    responseCounter := 0
    responses := []func(w http.ResponseWriter, r *http.Request){
        func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "First response")
        },
        func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "Second response")
        },
    }

    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        responses[responseCounter](w, r)
        responseCounter++
    }))
    defer ts.Close()

    printBody(ts.URL)
    printBody(ts.URL)
}

func printBody(url string) {
    res, err := http.Get(url)
    if err != nil {
        log.Fatal(err)
    }
    resBody, err := ioutil.ReadAll(res.Body)
    res.Body.Close()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s", resBody)
}
Which outputs:
First response
Second response
Executable code here:
https://play.golang.org/p/YcPe5hOSxlZ
Not sure you still need an answer, but github.com/jarcoal/httpmock provides a way to do this using ResponderFromMultipleResponses.
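A minimal sketch of how that might look in a test; the URL, response bodies, and test name are made up, and my understanding is that the responder replays the listed responses in order (and fails once they are exhausted):

package mypkg

import (
    "net/http"
    "testing"

    "github.com/jarcoal/httpmock"
)

func TestRetrySucceedsOnSecondAttempt(t *testing.T) {
    httpmock.Activate()
    defer httpmock.DeactivateAndReset()

    // first call fails, second call succeeds, so the retry path is exercised
    httpmock.RegisterResponder("POST", "https://sms.example.com/send",
        httpmock.ResponderFromMultipleResponses(
            []*http.Response{
                httpmock.NewStringResponse(http.StatusInternalServerError, `{"status":"error"}`),
                httpmock.NewStringResponse(http.StatusOK, `{"status":"sent"}`),
            },
        ))

    // ... call the code under test here and assert it reports success ...
}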
I need to write a simple web server in Go. It accepts requests, maps each request to an Avro object, and sends it to Kafka. The requirement is that it answers immediately, to keep latency low for users; mapping to the Avro object and sending to Kafka can happen asynchronously. I came up with the following design, but I wonder whether it uses Go's constructs in the intended way or whether it can be optimized, for example using channels. I'm omitting private methods and struct initialization. The problem is that the server can handle up to 10500 requests a second, and beyond that the response time goes up dramatically. So I was wondering if there is a way to optimize it.
func main() {
    runtime.GOMAXPROCS(runtime.NumCPU()) // not needed in Go 1.5.0

    server := &Server{
        Producer: newProducer(brokerList),
    }
    defer func() {
        if err := server.Close(); err != nil {
            Error.Println("Failed to close server", err)
        }
    }()

    Error.Fatal(server.Run(*addr))
}

func (s *Server) Run(addr string) error {
    httpServer := &http.Server{
        Addr:    addr,
        Handler: s.Handler(),
    }
    return httpServer.ListenAndServe()
}

func (s *Server) Handler() http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        defer r.Body.Close()

        req, err := ParseRequest(r.Body)
        if err != nil {
            Warning.Println("Failed to parse request", err.Error())
        } else {
            go handleRequest(s, req)
        }

        w.WriteHeader(204) // respond with 'no bid'
    })
}

func handleRequest(s *Server, req *openrtb.BidRequest) {
    req.Validate()

    var avroObject, err = createAvro(req)
    if err != nil {
        Warning.Printf(err.Error())
    }
    if avroObject != nil {
        sendToKafka(avroObject, s)
    }
}