Why is my webserver in golang not handling concurrent requests?

This simple HTTP server contains a call to time.Sleep() that makes
each request take five seconds. When I try quickly loading multiple
tabs in a browser, it is obvious that each request
is queued and handled sequentially. How can I make it handle concurrent requests?
package main

import (
	"fmt"
	"net/http"
	"time"
)

func serve(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello, world.")
	time.Sleep(5 * time.Second)
}

func main() {
	http.HandleFunc("/", serve)
	http.ListenAndServe(":1234", nil)
}
Actually, I just found the answer to this after writing the question, and it is very subtle. I am posting it anyway, because I couldn't find the answer on Google. Can you see what I was doing wrong?

Your program already handles the requests concurrently. You can test it with ab, a benchmark tool which is shipped with Apache 2:
ab -c 500 -n 500 http://localhost:1234/
On my system, the benchmark takes a total of 5043ms to serve all 500 concurrent requests. It's just your browser which limits the number of connections per website.
Benchmarking Go programs isn't that easy by the way, because you need to make sure that your benchmark tool isn't the bottleneck and that it is also able to handle that many concurrent connections. Therefore, it's a good idea to use a couple of dedicated computers to generate load.
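If you don't have ab handy, a quick test client written in Go shows the same thing. This is only a sketch (it assumes the server above is running on :1234): it fires ten requests in parallel and prints the total elapsed time, which should be close to five seconds rather than fifty:
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"sync"
	"time"
)

func main() {
	start := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := http.Get("http://localhost:1234/")
			if err != nil {
				fmt.Println("request failed:", err)
				return
			}
			// Read the full body so we wait until the handler returns.
			_, _ = ioutil.ReadAll(resp.Body)
			resp.Body.Close()
		}()
	}
	wg.Wait()
	// With concurrent handling this prints ~5s, not ~50s.
	fmt.Println("elapsed:", time.Since(start))
}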

In net/http's server.go, a goroutine is spawned in the Serve function for each accepted connection. Here is the relevant snippet:
// Serve accepts incoming connections on the Listener l, creating a
// new service goroutine for each. The service goroutines read requests and
// then call srv.Handler to reply to them.
func (srv *Server) Serve(l net.Listener) error {
	for {
		rw, e := l.Accept()
		if e != nil {
			// ... (error handling elided)
		}
		c, err := srv.newConn(rw)
		if err != nil {
			continue
		}
		c.setState(c.rwc, StateNew) // before Serve can return
		go c.serve()
	}
}

If you use XHR requests, make sure the XMLHttpRequest instance is a local variable.
For example, xhr = new XMLHttpRequest() creates a global variable. When you do parallel requests with the same xhr variable, you receive only one result. So you must declare xhr locally, like this: var xhr = new XMLHttpRequest().

Related

Golang Handlefunc (Runtime output display on browser) [duplicate]

I am trying to send a page response as soon as the request is received, then process something, but I found the response does not get sent out "first" even though it is first in the code sequence. In real life I have a page for uploading an Excel sheet which gets saved into the database, which takes time (50,000+ rows), and I would like to show the user progress. Here is a simplified example (depending on how much RAM you have, you may need to add a couple of zeros to the counter to see the result):
package main

import (
	"fmt"
	"net/http"
)

func writeAndCount(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte("Starting to count"))
	for i := 0; i < 1000000; i++ {
		if i%1000 == 0 {
			fmt.Println(i)
		}
	}
	w.Write([]byte("Finished counting"))
}

func main() {
	http.HandleFunc("/", writeAndCount)
	http.ListenAndServe(":8080", nil)
}
The original concept of the HTTP protocol is a simple request-response, server-client computation model. There was no support for streaming or "continuous" client updates. It was always the client who first contacted the server should it need some kind of information.
Also, since most web servers buffer the response until it is fully ready (or until a certain limit is reached, which is typically the buffer size), data you write (send) to the client won't be transmitted immediately.
Several techniques were "developed" to get around this "limitation" so that the server is able to notify the client about changes or progress, such as HTTP Long polling, HTTP Streaming, HTTP/2 Server Push or Websockets. You can read more about these in this answer: Is there a real server push over http?
So to achieve what you want, you have to step around the original "borders" of the HTTP protocol.
If you want to send data periodically, or stream data to the client, you have to tell this to the server. The easiest way is to check if the http.ResponseWriter handed to you implements the http.Flusher interface (using a type assertion), and if it does, calling its Flusher.Flush() method will send any buffered data to the client.
Using http.Flusher is only half of the solution. Since this is a non-standard usage of the HTTP protocol, usually client support is also needed to handle this properly.
First, you have to let the client know about the "streaming" nature of the response by setting the Content-Type: text/event-stream response header.
Next, to avoid clients caching the response, be sure to also set Cache-Control: no-cache.
And last, to let the client know that you might not send the response as a single unit (but rather as periodic updates or as a stream), and that it should keep the connection alive and wait for further data, set the Connection: keep-alive response header.
Once the response headers are set as the above, you may start your long work, and whenever you want to update the client about the progress, write some data and call Flusher.Flush().
Let's see a simple example that does everything "right":
package main

import (
	"fmt"
	"net/http"
	"time"
)

func longHandler(w http.ResponseWriter, r *http.Request) {
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "Server does not support Flusher!",
			http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")
	w.Header().Set("Connection", "keep-alive")
	start := time.Now()
	for rows, max := 0, 50*1000; rows < max; {
		time.Sleep(time.Second) // Simulating work...
		rows += 10 * 1000
		fmt.Fprintf(w, "Rows done: %d (%d%%), elapsed: %v\n",
			rows, rows*100/max, time.Since(start).Truncate(time.Millisecond))
		flusher.Flush()
	}
}

func main() {
	http.HandleFunc("/long", longHandler)
	panic(http.ListenAndServe("localhost:8080", nil))
}
Now if you open http://localhost:8080/long in your browser, you will see the output "growing" every second:
Rows done: 10000 (20%), elapsed: 1s
Rows done: 20000 (40%), elapsed: 2s
Rows done: 30000 (60%), elapsed: 3s
Rows done: 40000 (80%), elapsed: 4.001s
Rows done: 50000 (100%), elapsed: 5.001s
Also note that when using SSE, you should "pack" updates into SSE frames: start each frame with the "data:" prefix and end it with two newline characters: "\n\n".
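For example, the Fprintf call in the handler above could be wrapped in a small helper that emits proper SSE frames. sendEvent here is a hypothetical helper, not part of the original code:
// sendEvent writes a single SSE frame and flushes it to the client.
func sendEvent(w http.ResponseWriter, flusher http.Flusher, msg string) {
	// Each SSE frame starts with "data:" and ends with a blank line.
	fmt.Fprintf(w, "data: %s\n\n", msg)
	flusher.Flush()
}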
"Literature" and further reading / tutorials
Read more about Server-sent events on Wikipedia.
See a Golang HTML5 SSE example.
See Golang SSE server example with client codes using it.
See w3schools.com's tutorial on Server-Sent Events - One Way Messaging.
You can check if the ResponseWriter is a http.Flusher, and if so, force the flush to network:
if f, ok := w.(http.Flusher); ok {
	f.Flush()
}
However, bear in mind that this is a very unconventional HTTP handler. Streaming out progress messages to the response as if it were a terminal presents a few problems, particularly if the client is a web browser.
You might want to consider something more fitting with the nature of HTTP, such as returning a 202 Accepted response immediately, with a unique identifier the client can use to check on the status of processing using subsequent calls to your API.
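A minimal sketch of that pattern follows; the in-memory job store, the ID scheme and the URL paths are all invented for illustration:
package main

import (
	"fmt"
	"net/http"
	"sync"
)

// jobs maps a job ID to its current status; a stand-in for a real store.
var (
	mu     sync.Mutex
	nextID int
	jobs   = map[int]string{}
)

func startJob(w http.ResponseWriter, r *http.Request) {
	mu.Lock()
	nextID++
	id := nextID
	jobs[id] = "processing"
	mu.Unlock()

	go func() { // do the long work in the background
		// ... process the upload ...
		mu.Lock()
		jobs[id] = "done"
		mu.Unlock()
	}()

	w.WriteHeader(http.StatusAccepted) // 202
	fmt.Fprintf(w, "job id: %d\n", id)
}

func jobStatus(w http.ResponseWriter, r *http.Request) {
	var id int
	fmt.Sscanf(r.URL.Query().Get("id"), "%d", &id)
	mu.Lock()
	status, ok := jobs[id]
	mu.Unlock()
	if !ok {
		http.NotFound(w, r)
		return
	}
	fmt.Fprintln(w, status)
}

func main() {
	http.HandleFunc("/jobs", startJob)
	http.HandleFunc("/jobs/status", jobStatus)
	http.ListenAndServe(":8080", nil)
}
The client POSTs to /jobs, gets back an ID immediately, and polls /jobs/status?id=N until the status reads "done".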

context without channels in the same thread of execution

I can't figure out how to cancel a task if it takes too much time to compute, in the same thread of execution, via context semantics.
I use this example as a reference point
https://golang.org/src/context/context_test.go
The goal here: call doWork, and if doWork takes too much time to compute, GetValueWithDeadline should return 0 after a timeout; or, if the caller (here main is the caller) called cancel, that cancels the wait; otherwise the value is returned within the given time window.
The same scenario can be done in a different way (a separate goroutine that sleeps, wakes up and checks the value, a condition on a mutex, etc.), but I really want to understand the correct way to use context.
The channel semantics I understand, but here I can't achieve the desired effect: the call to doWork falls under the default case and sleeps.
package main

import (
	"context"
	"fmt"
	"log"
	"math/rand"
	"sync"
	"time"
)

type Server struct {
	lock sync.Mutex
}

func NewServer() *Server {
	s := new(Server)
	return s
}

func (s *Server) doWork() int {
	s.lock.Lock()
	defer s.lock.Unlock()
	r := rand.Intn(100)
	log.Printf("Going to nap for %d", r)
	time.Sleep(time.Duration(r) * time.Millisecond)
	return r
}

// I take an example from here and it is very unclear where doWork is executed
// https://golang.org/src/context/context_test.go
func (s *Server) GetValueWithDeadline(ctx context.Context) int {
	val := 0
	select {
	case <-time.After(150 * time.Millisecond):
		fmt.Println("overslept")
		return 0
	case <-ctx.Done():
		fmt.Println(ctx.Err())
		return 0
	default:
		val = s.doWork()
	}
	return val
}

func main() {
	rand.Seed(time.Now().UTC().UnixNano())
	s := NewServer()
	for i := 0; i < 10; i++ {
		d := time.Now().Add(50 * time.Millisecond)
		ctx, cancel := context.WithDeadline(context.Background(), d)
		log.Print(s.GetValueWithDeadline(ctx))
		cancel()
	}
}
Thank you
There are multiple problems with your approach.
What problem contexts solve
First, the primary reason contexts were invented in Go is that they provide a unified approach to the cancellation of a set of tasks.
To explain this concept using a simple example, consider a client request to some server; to simplify further, let it be an HTTP request.
The client connects to the server, sends some data telling the server what to do to fulfill the request and then waits for the server to respond.
Let's now suppose the request requires elaborate and time-consuming processing on the server: for instance, it needs to perform multiple complex queries against multiple remote database engines, make multiple HTTP requests to external services, and then process all the acquired results to actually produce the data the client wants.
So the client starts its request and the server goes on with all those requests.
To hide latency of individual tasks the server has to perform to fulfill the request, it runs them in separate goroutines.
Once each goroutine completes the assigned task, it communicates its result (and/or an error) back to the goroutine which handles the client's request, and so on.
Now suppose that the client fails to wait for the response to its request for whatever reason: a network outage, an explicit timeout in the client's software, the user killing the app which initiated the request, and so on; there are lots of possibilities.
As you can see, there's little sense for the server to continue spending resources to finish the tasks which were logically bound to the now-dead request: there's no one to hear back the result anyway.
So it makes sense to reap those tasks once we know the request is not going to be completed, and that's where contexts come into play: you can associate each incoming request with a single context and then pass that context (or another context derived from it) to every goroutine spawned to carry out a single task required to fulfill the request.
Then, as soon as you cancel the "root" context, that signal is propagated through the whole tree of contexts derived from the root one.
Now each goroutine which was given a context might "listen" on it to be notified when that cancellation signal is sent, and once the goroutine notices it, it can drop whatever it was busy doing and exit.
In terms of the actual context.Context type, that signal is called "done" (as in "we're done doing whatever this context is associated with"), and that's why a goroutine which wants to know when it should stop its work listens on a special channel returned by the context's method called Done.
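Here is a minimal sketch of that propagation; the worker function and its timings are invented for illustration. One cancel call on the root context stops every goroutine derived from it:
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// worker simulates one task bound to a request; it exits as soon as
// the shared context is cancelled.
func worker(ctx context.Context, id int, wg *sync.WaitGroup) {
	defer wg.Done()
	select {
	case <-time.After(time.Hour): // pretend this is the real work
		fmt.Println(id, "finished")
	case <-ctx.Done():
		fmt.Println(id, "cancelled:", ctx.Err())
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go worker(ctx, i, &wg)
	}
	time.Sleep(100 * time.Millisecond) // the "client went away" moment
	cancel()                           // one call reaps all three workers
	wg.Wait()
}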
Back to your example
To make it work, you'd do something like:
func (s *Server) doWork(ctx context.Context) int {
	s.lock.Lock()
	defer s.lock.Unlock()
	r := rand.Intn(100)
	log.Printf("Going to nap for %d", r)
	select {
	case <-time.After(time.Duration(r) * time.Millisecond):
		return r
	case <-ctx.Done():
		return -1
	}
}

func (s *Server) GetValueWithTimeout(ctx context.Context, maxTime time.Duration) int {
	d := time.Now().Add(maxTime)
	ctx, cancel := context.WithDeadline(ctx, d)
	defer cancel()
	return s.doWork(ctx)
}

func main() {
	const maxTime = 50 * time.Millisecond
	rand.Seed(time.Now().UTC().UnixNano())
	s := NewServer()
	for i := 0; i < 10; i++ {
		v := s.GetValueWithTimeout(context.Background(), maxTime)
		log.Print(v)
	}
}
(Playground).
So what happens here?
The GetValueWithTimeout method accepts the maximum time doWork should take to produce a value, calculates the deadline, derives from the passed context a new context which cancels itself once the deadline passes, and calls doWork with the new context object.
The doWork method arms its own timer to go off after a random time interval and then listens on both the context and the timer.
This is the critical point: code which performs some unit of cancellable work must actively check, by itself, whether the context has become "done".
So, in our toy example, either doWork's own timer fires first or the deadline of the derived context is reached first; whichever happens first makes the select statement unblock and proceed.
Note that if your "do the work" code were more involved (actually doing something instead of sleeping), you would most probably need to check the context's status periodically, usually after performing individual bits of that work, as sketched below.
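A minimal sketch of that periodic checking; the chunk size and the per-row work are made up for illustration:
// processRows processes work in chunks, checking for cancellation
// between chunks.
func processRows(ctx context.Context, rows []int) error {
	const chunk = 1000
	for i := 0; i < len(rows); i += chunk {
		// Give the context a chance to interrupt us between chunks.
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
		}
		end := i + chunk
		if end > len(rows) {
			end = len(rows)
		}
		for _, r := range rows[i:end] {
			_ = r // ... do the real per-row work here ...
		}
	}
	return nil
}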

Error timeout get HTTP request golang

I tried to get the HTML source of Reddit with Golang:
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func main() {
	timeout := time.Duration(5 * time.Second)
	client := http.Client{
		Timeout: timeout,
	}
	resp, err := client.Get("https://www.reddit.com/")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	bytes, _ := ioutil.ReadAll(resp.Body)
	fmt.Println("HTML:\n\n", string(bytes))
	var input string
	fmt.Scanln(&input)
}
The first attempt was good, but the second time it ran into an error:
<p>we're sorry, but you appear to be a bot and we've seen too many requests
from you lately. we enforce a hard speed limit on requests that appear to come
from bots to prevent abuse.</p>
<p>if you are not a bot but are spoofing one via your browser's user agent
string: please change your user agent string to avoid seeing this message
again.</p>
<p>please wait 6 second(s) and try again.</p>
<p>as a reminder to developers, we recommend that clients make no
more than <a href="http://github.com/reddit/reddit/wiki/API">one
request every two seconds</a> to avoid seeing this message.</p>
I tried to set a delay but it still does not work.
Sorry about my bad English.
Reddit doesn't want automatic scanners/grabbers on their site and has a bot-protection mechanism.
Here's a recommendation from them:
one request every two seconds
Just add a delay between requests.
timeout serves a different purpose: it is an upper limit for the whole request to complete. What you need is a sleep between subsequent requests:
time.Sleep(6 * time.Second)
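Putting both hints together, a sketch might look like this; the custom User-Agent string is an assumption based on the bot message Reddit returned above:
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func main() {
	client := http.Client{Timeout: 5 * time.Second}
	for i := 0; i < 3; i++ {
		req, err := http.NewRequest("GET", "https://www.reddit.com/", nil)
		if err != nil {
			fmt.Println(err)
			return
		}
		// Identify the client instead of spoofing a browser.
		req.Header.Set("User-Agent", "my-test-client/0.1")
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println(err)
			return
		}
		body, _ := ioutil.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Println("got", len(body), "bytes")
		// Respect the "one request every two seconds" recommendation.
		time.Sleep(2 * time.Second)
	}
}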

net/http server: too many open files error

I'm trying to develop a simple job queue server with some workers that query it, but I encountered a problem with my net/http server. I'm surely doing something bad, but after ~3 minutes my server starts displaying:
http: Accept error: accept tcp [::]:4200: accept4: too many open files; retrying in 1s
For information, it receives 10 requests per second in my test case.
Here are two files to reproduce this error:
// server.go
package main

import (
	"net/http"
)

func main() {
	http.HandleFunc("/get", func(rw http.ResponseWriter, r *http.Request) {
		http.Error(rw, "Try again", http.StatusInternalServerError)
	})
	http.ListenAndServe(":4200", nil)
}
// worker.go
package main

import (
	"net/http"
	"time"
)

func main() {
	for {
		res, _ := http.Get("http://localhost:4200/get")
		defer res.Body.Close()
		if res.StatusCode == http.StatusInternalServerError {
			time.Sleep(100 * time.Millisecond)
			continue
		}
		return
	}
}
I have already done some searching about this error and found some interesting responses, but none of them fixed my issue.
The first response I saw was to correctly close the Body of the http.Get response; as you can see, I did it.
The second response was to raise the file descriptor ulimit of my system, but as I will not control where my app will run, I prefer not to use this solution (for information, it's set to 1024 on my system).
Can someone explain why this problem happens and how I can fix it in my code?
Thanks a lot for your time.
EDIT 2: In a comment, Martin says that I'm not closing the Body. I tried closing it explicitly (without defer) and it fixed the issue. Thanks Martin! I thought continue would execute my defer; I was wrong.
I found a post explaining the root problem in a lot more detail.
Nathan Smith even explains how to control timeouts on the TCP level, if needed.
Below is a summary of everything I could find on this particular problem, as well as the best practices to avoid this problem in future.
Problem
When a response is received, regardless of whether the response body is required or not, the connection is kept alive until the response-body stream is closed. So, as mentioned in this thread, always close the response body, even if you do not need to use/read its content:
func Ping(url string) bool {
	// simple GET request on given URL
	res, err := http.Get(url)
	if err != nil {
		// if unable to GET given URL, then ping must fail
		return false
	}
	// always close the response-body, even if content is not required
	defer res.Body.Close()
	// is the page status okay?
	return res.StatusCode == http.StatusOK
}
Best Practice
As mentioned by Nathan Smith, never use http.DefaultClient in production systems; this includes calls like http.Get, as it uses http.DefaultClient under the hood.
Another reason to avoid http.DefaultClient is that it is a singleton (a package-level variable), meaning the garbage collector will not try to clean it up, which may leave idling streams/sockets alive.
Instead create your own instance of http.Client and remember to always specify a sane Timeout:
func Ping(url string) bool {
	// create a new instance of http client struct, with a timeout of 2sec
	client := http.Client{Timeout: time.Second * 2}
	// simple GET request on given URL
	res, err := client.Get(url)
	if err != nil {
		// if unable to GET given URL, then ping must fail
		return false
	}
	// always close the response-body, even if content is not required
	defer res.Body.Close()
	// is the page status okay?
	return res.StatusCode == http.StatusOK
}
Safety Net
The safety net is for that newbie on the team who does not know the shortfalls of http.DefaultClient usage, or for that very useful, but not so active, open-source library that is still riddled with http.DefaultClient calls.
Since http.DefaultClient is a Singleton we can easily change the Timeout setting, just to ensure that legacy code does not cause idle connections to remain open.
I find it best to set this in the package main file, in the init function:
package main

import (
	"net/http"
	"time"
)

func init() {
	/*
		Safety net for 'too many open files' issue on legacy code.
		Set a sane timeout duration for the http.DefaultClient, to ensure idle connections are terminated.
		Reference: https://stackoverflow.com/questions/37454236/net-http-server-too-many-open-files-error
	*/
	http.DefaultClient.Timeout = time.Minute * 10
}
As Martin says in a comment, I didn't really close the Body after the Get request. I used defer res.Body.Close(), but it's not executed since I stay in the for loop, so continue doesn't trigger the defer.
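A fixed version of the worker loop, as a sketch, closes the body explicitly before continuing instead of relying on defer:
func main() {
	for {
		res, err := http.Get("http://localhost:4200/get")
		if err != nil {
			time.Sleep(100 * time.Millisecond)
			continue
		}
		if res.StatusCode == http.StatusInternalServerError {
			// Close right away: defer would only run when main returns,
			// leaking one connection per loop iteration.
			res.Body.Close()
			time.Sleep(100 * time.Millisecond)
			continue
		}
		res.Body.Close()
		return
	}
}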
Please note that in some cases the setting in /etc/sysctl.conf
net.ipv4.tcp_tw_recycle = 1
could cause this error because TCP connections remain open.
A temporary solution: just increase the number of open files:
ulimit -Sn 10000

Basic web tweaks that all applications should have

Currently my web app is just a router and handlers.
What are some important things I am missing to make this production worthy?
I believe I have to set the # of procs to ensure this uses maximum goroutines?
Should I be using output buffering?
Anything else you see missing that is best practice?
var (
	templates = template.Must(template.ParseFiles("templates/home.html"))
)

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/", WelcomeHandler)
	http.ListenAndServe(":9000", r)
}

func WelcomeHandler(w http.ResponseWriter, r *http.Request) {
	homePage, err := api.LoadHomePage()
	if err != nil {
	}
	tmpl := "home"
	renderTemplate(w, tmpl, homePage)
}

func renderTemplate(w http.ResponseWriter, tmpl string, hp *HomePage) {
	err := templates.ExecuteTemplate(w, tmpl+".html", hp)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}
You don't need to set/change runtime.GOMAXPROCS() as since Go 1.5 it defaults to the number of available CPU cores.
Buffering output? From the performance point of view, you don't need to. But there may be other considerations for which you may.
For example, your renderTemplate() function may potentially panic. If executing the template starts writing to the output, that involves setting the HTTP response code and other headers prior to writing data. If a template execution error occurs after that, ExecuteTemplate() will return an error, and your code attempts to send back an error response. At this point the HTTP headers have already been written, and http.Error() will try to set headers again => panic.
One way to avoid this is to first render the template into a buffer (e.g. bytes.Buffer), and if no error is returned by the template execution, you can then write the content of the buffer to the response writer. If an error occurs, then of course you won't write the buffer's content, but send back an error response just as before.
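A minimal sketch of that buffered variant of renderTemplate (same signature as the original; bytes must be imported):
func renderTemplate(w http.ResponseWriter, tmpl string, hp *HomePage) {
	var buf bytes.Buffer
	// Render into a buffer first: nothing is sent to the client yet.
	if err := templates.ExecuteTemplate(&buf, tmpl+".html", hp); err != nil {
		// Headers are untouched, so an error response is still possible.
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	buf.WriteTo(w) // success: send the fully rendered page
}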
To sum it up, your code is production ready performance-wise (excluding the way you handle template execution errors).
WelcomeHandler should return when err != nil is true.
Log the error when one is hit to help investigation.
Place templates = template.Must(template.ParseFiles("templates/home.html")) in init(). Split it into separate lines. If template.ParseFiles returns an error, then make a Fatal log. And if you have multiple templates to initialize, initialize them in goroutines with a common WaitGroup to speed up startup, as in the sketch below.
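A sketch of that parallel initialization; the second template file is invented for illustration, and template, log and sync must be imported:
var (
	homeTmpl  *template.Template
	aboutTmpl *template.Template // hypothetical second template
)

func init() {
	var wg sync.WaitGroup
	parse := func(dst **template.Template, file string) {
		defer wg.Done()
		t, err := template.ParseFiles(file)
		if err != nil {
			log.Fatalf("parsing %s: %v", file, err) // fail fast at startup
		}
		*dst = t
	}
	wg.Add(2)
	go parse(&homeTmpl, "templates/home.html")
	go parse(&aboutTmpl, "templates/about.html")
	wg.Wait()
}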
Since you are using mux, the question "HTTP Server is too clean with its URLs" might also be good to know.
You might also want to reconsider the decision of letting users know why they got the http.StatusInternalServerError response.
Setting GOMAXPROCS > 1 if you have more than one core would definitely be a good idea, but I would keep it less than the number of cores available.
