I am trying to send a page response as soon as a request is received, and then process something, but I found that the response does not get sent out "first" even though it comes first in the code. In real life I have a page for uploading an Excel sheet which gets saved into the database, which takes time (50,000+ rows), and I would like to update the user on progress. Here is a simplified example (depending on how much RAM you have, you may need to add a couple of zeros to the counter to see the result):
package main

import (
    "fmt"
    "net/http"
)

func writeAndCount(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte("Starting to count"))
    for i := 0; i < 1000000; i++ {
        if i%1000 == 0 {
            fmt.Println(i)
        }
    }
    w.Write([]byte("Finished counting"))
}

func main() {
    http.HandleFunc("/", writeAndCount)
    http.ListenAndServe(":8080", nil)
}
The original concept of the HTTP protocol is a simple request-response client-server computation model. There was no support for streaming or "continuously" updating the client. It is (was) always the client who first contacted the server, should it need some kind of information.
Also, since most web servers buffer the response until it is fully ready (or until a certain limit is reached, typically the buffer size), data you write (send) to the client won't be transmitted immediately.
Several techniques were "developed" to get around this "limitation" so that the server is able to notify the client about changes or progress, such as HTTP Long polling, HTTP Streaming, HTTP/2 Server Push or Websockets. You can read more about these in this answer: Is there a real server push over http?
So to achieve what you want, you have to step around the original "borders" of the HTTP protocol.
If you want to send data periodically, or stream data to the client, you have to tell this to the server. The easiest way is to check if the http.ResponseWriter handed to you implements the http.Flusher interface (using a type assertion), and if it does, calling its Flusher.Flush() method will send any buffered data to the client.
Using http.Flusher is only half of the solution. Since this is a non-standard usage of the HTTP protocol, usually client support is also needed to handle this properly.
First, you have to let the client know about the "streaming" nature of the response by setting the Content-Type: text/event-stream response header.
Next, to keep clients from caching the response, be sure to also set Cache-Control: no-cache.
And last, to let the client know that you might not send the response as a single unit (but rather as periodic updates or a stream), and that it should therefore keep the connection alive and wait for further data, set the Connection: keep-alive response header.
Once the response headers are set as above, you may start your long work, and whenever you want to update the client on the progress, write some data and call Flusher.Flush().
Let's see a simple example that does everything "right":
package main

import (
    "fmt"
    "net/http"
    "time"
)

func longHandler(w http.ResponseWriter, r *http.Request) {
    flusher, ok := w.(http.Flusher)
    if !ok {
        http.Error(w, "Server does not support Flusher!",
            http.StatusInternalServerError)
        return
    }

    w.Header().Set("Content-Type", "text/event-stream")
    w.Header().Set("Cache-Control", "no-cache")
    w.Header().Set("Connection", "keep-alive")

    start := time.Now()
    for rows, max := 0, 50*1000; rows < max; {
        time.Sleep(time.Second) // Simulating work...
        rows += 10 * 1000
        fmt.Fprintf(w, "Rows done: %d (%d%%), elapsed: %v\n",
            rows, rows*100/max, time.Since(start).Truncate(time.Millisecond))
        flusher.Flush()
    }
}

func main() {
    http.HandleFunc("/long", longHandler)
    panic(http.ListenAndServe("localhost:8080", nil))
}
Now if you open http://localhost:8080/long in your browser, you will see output "growing" by one line every second:
Rows done: 10000 (20%), elapsed: 1s
Rows done: 20000 (40%), elapsed: 2s
Rows done: 30000 (60%), elapsed: 3s
Rows done: 40000 (80%), elapsed: 4.001s
Rows done: 50000 (100%), elapsed: 5.001s
Also note that when using SSE, you should "pack" updates into SSE frames: start each frame with a "data:" prefix, and end it with 2 newline characters: "\n\n".
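For example, the progress line in longHandler above would be wrapped like this (a minimal sketch; only the format string changes):

// Wrap each update in an SSE frame: a "data:" prefix, terminated by "\n\n".
fmt.Fprintf(w, "data: Rows done: %d (%d%%), elapsed: %v\n\n",
    rows, rows*100/max, time.Since(start).Truncate(time.Millisecond))
flusher.Flush() // push the completed frame to the client immediately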
"Literature" and further reading / tutorials
Read more about Server-sent events on Wikipedia.
See a Golang HTML5 SSE example.
See Golang SSE server example with client codes using it.
See w3schools.com's tutorial on Server-Sent Events - One Way Messaging.
You can check if the ResponseWriter is an http.Flusher, and if so, force a flush to the network:
if f, ok := w.(http.Flusher); ok {
    f.Flush()
}
However, bear in mind that this is a very unconventional HTTP handler. Streaming out progress messages to the response as if it were a terminal presents a few problems, particularly if the client is a web browser.
You might want to consider something more fitting with the nature of HTTP, such as returning a 202 Accepted response immediately, with a unique identifier the client can use to check on the status of processing using subsequent calls to your API.
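For illustration, here is a minimal sketch of that pattern (the in-memory job map, the ID scheme, and the handler names are assumptions, not a prescribed API):

package main

import (
    "fmt"
    "net/http"
    "sync"
    "time"
)

var (
    mu   sync.Mutex
    jobs = map[string]string{} // job ID -> status
)

func startHandler(w http.ResponseWriter, r *http.Request) {
    id := fmt.Sprint(time.Now().UnixNano()) // hypothetical ID scheme
    mu.Lock()
    jobs[id] = "processing"
    mu.Unlock()

    go func() { // do the long work in the background
        time.Sleep(5 * time.Second) // simulating the slow import
        mu.Lock()
        jobs[id] = "done"
        mu.Unlock()
    }()

    w.WriteHeader(http.StatusAccepted) // 202: accepted, but not done yet
    fmt.Fprintf(w, "job: %s\n", id)
}

func statusHandler(w http.ResponseWriter, r *http.Request) {
    mu.Lock()
    status, ok := jobs[r.URL.Query().Get("job")]
    mu.Unlock()
    if !ok {
        http.Error(w, "unknown job", http.StatusNotFound)
        return
    }
    fmt.Fprintln(w, status)
}

func main() {
    http.HandleFunc("/start", startHandler)
    http.HandleFunc("/status", statusHandler)
    http.ListenAndServe(":8080", nil)
}

The client POSTs once, gets the job ID from the 202 response immediately, and then polls /status?job=<id> until the work is done.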
Related
I am having some problems with concurrent HTTP connections in Go. Kindly read the whole question; as the actual code is quite long, I am using pseudocode.
In short, I have to create a single API which will internally call 5 other APIs, unify their responses, and send them back as a single response.
I am using goroutines to call those 5 internal APIs, along with a timeout, and channels to ensure that every goroutine has completed; then I unify their responses and return the result.
Things go fine when I do local testing: my response time is around 300ms, which is pretty good.
The problem arises when I run a Locust load test with 200 users: my response time goes as high as 7-8 seconds. I am thinking it has to do with the HTTP client waiting for resources, since we are running a high number of goroutines.
One API call spins up 5 goroutines, so if each of 200 users makes requests at, say, 5 req/sec, the total number of goroutines goes way higher. Again, this is only my assumption.
P.S. Normally the API I am building has a pretty good response time; I am using all the caching and such, and any response greater than 400ms should not happen.
So can anyone please tell me how I can tackle this problem of response time increasing as the number of concurrent users increases?
Locust test report
Pseudocode
simple route
group.POST("/test", controller.testHandler)
controller
type Worker struct {
    NumWorker int
    Data      chan structures.Placement
}

e := Worker{
    NumWorker: 5,                                  // number of worker goroutine(s)
    Data:      make(chan structures.Placement, 5), // buffer size
}

// spawn the worker goroutines
for i := 0; i < e.NumWorker; i++ {
    // Do some fake work
    wg.Add(1)
    go ad.GetResponses(params, resChan, &wg) // makes the HTTP call and sends the response on the channel
}

for v := range resChan {
    // unify all the responses, and return them as our single response
    switch v.Type {
    case A:
        finalResponse.A = v
    case B:
        finalResponse.B = v
    }
}
return finalResponse
Request HTTP client
// I am using a global HTTP client with a custom transport, so that I can use resources effectively
var client *http.Client

func init() {
    tr := &http.Transport{
        MaxIdleConnsPerHost: 1024,
        TLSHandshakeTimeout: 0 * time.Second,
    }
    tr.MaxIdleConns = 100
    tr.MaxConnsPerHost = 100
    tr.MaxIdleConnsPerHost = 100
    client = &http.Client{Transport: tr, Timeout: 10 * time.Second}
}
func GetResponses(params, resChan, wg) { // pseudocode: makes the HTTP call and sends the response on the channel
    res := client.Do(req)
    resChan <- res
}
So I have done some debugging and span monitoring, and it turns out Redis was the culprit. You can see https://stackoverflow.com/a/70902382/9928176 to get an idea of how I solved it.
Given an (infinite) stream of events, a server program must stream the events to several clients. For example:
for _, event := range events {
    for _, client := range clients {
        client.write(event) // blocking operation
    }
}
However, if a client is slow, it can throttle the other clients. Therefore, for each client, we can add a channel (and a client-specific goroutine consuming that channel):
for _, event := range events {
    for _, client := range clients {
        client.writer <- event // the writer channel is consumed by a per-client goroutine
    }
}
As long as the buffered channel is not full, this works. However, if the channel gets full, it'll block again. I can think of the following options:
Drop the event when the channel is full, close the channel, and force the client to reconnect. This requires re-negotiating the stream position, or a complete re-stream. The forced reset will potentially waste resources, making the slow client even slower.
Add a channel that goes the other way, making the client signal the server that it is ready to receive (see the sketch after this list). This requires the server to keep track of the stream position for each client (perhaps that is not too bad?). It seems to have a bit more synchronization than necessary in the nominal (not-slow) case, increasing latency.
Something else? What's the idiomatic Go way to do this?
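A rough sketch of what that second option might look like (the Event and Client types and the catch-up bookkeeping are assumptions for illustration, not a worked-out design):

type Event string

type Client struct {
    ready chan struct{} // client sends a token whenever it can accept an event
    out   chan Event    // server delivers one event per token
}

func broadcast(events <-chan Event, clients []*Client) {
    for ev := range events {
        for _, c := range clients {
            select {
            case <-c.ready:
                c.out <- ev // the client signalled readiness, so this send won't block for long
            default:
                // Client not ready: the server must record this client's
                // stream position and catch it up later.
            }
        }
    }
}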
EDIT: Here's a simple, good-looking, but broken solution: track the stream position in the client object, and send the outstanding events:
for _, event := range events {
    persisted_events = append(persisted_events, event)
    for _, client := range clients {
        for _, event := range persisted_events[client.last_event:] {
            select {
            case client.writer <- event:
                client.last_event++
            default:
                break // note: this only exits the select, not the inner loop
            }
        }
    }
}
This allows slow clients to catch up without starving the others. It does not require a disconnect. However, it is also broken: if the stream of events stops while a client is catching up, it is possible that the slow client gets stuck, as the main loop is waiting for new events. Adding a ticker that triggers the loop periodically is not efficient. Requiring the client to notify the main loop that it is now idle is complex, and potentially doubles the number of events.
The best setup I found so far:
for {
    select {
    case event, ok := <-events:
        if !ok {
            return
        }
        persisted_events = append(persisted_events, event)
    case <-ticker.C: // wake up periodically, even if there are no new events
    }
    for _, client := range clients {
        select {
        case client.writer <- persisted_events[client.num_events:]:
            client.num_events = len(persisted_events)
        default:
            // this client's writer is busy; it will catch up on a later pass
        }
    }
}
This is a slight variation of the broken example in the question. There are two key differences:
The writer channel takes a slice of events (instead of a single event)
There's a ticker that wakes the loop periodically, even if there are no new events
Combined, these achieve that even if a slow client misses the last event(s) before a long silent period, its final catch-up is delayed by only one ticker period, instead of the ticker period plus the number of events missed. Therefore, reducing the ticker period can cap the tail latency. Not ideal in terms of performance, but simple (in terms of concurrency).
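For completeness, a sketch of the per-client consumer goroutine this design assumes (the Client type, its writer channel of event batches, and the send function are hypothetical):

type Client struct {
    writer     chan []Event // receives batches of outstanding events
    num_events int          // stream position, touched only by the main loop
}

// consume drains the writer channel; a slow client blocks only here,
// never in the main loop (which uses a non-blocking send).
func (c *Client) consume(send func(Event) error) {
    for batch := range c.writer {
        for _, ev := range batch {
            if err := send(ev); err != nil {
                return // drop the client on a write error
            }
        }
    }
}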
I am currently building a simple chat server that supports posting messages through a REST API.
example:
curl -X POST -H "Content-Type: application/json" --data '{"user":"alex", "text":"this is a message"}' http://localhost:8081/message
{
  "ok": true
}
Right now, I'm just storing the messages in a slice of messages. I'm pretty sure this is an inefficient way. So is there a simple, better way to get and store the messages using goroutines and channels that will make it thread-safe?
Here is what I currently have:
type Message struct {
    Text      string
    User      string
    Timestamp time.Time
}

var Messages = []Message{}

func messagePost(c http.ResponseWriter, req *http.Request) {
    decoder := json.NewDecoder(req.Body)
    var m Message
    err := decoder.Decode(&m)
    if err != nil {
        panic(err)
    }
    if m.Timestamp == (time.Time{}) {
        m.Timestamp = time.Now()
    }
    addUser(m.User)
    Messages = append(Messages, m)
}
Thanks!
It could be made thread-safe using a mutex, as @ThunderCat suggested, but I think this does not add concurrency. If two or more requests are made simultaneously, one will have to wait for the other to complete first, slowing the server down.
Adding concurrency: You can make it faster and handle more concurrent requests by using a queue (a Go channel) and a worker that listens on that channel; it'll be a simple implementation. Every time a message comes in through a POST request, you add it to the queue (this is instantaneous, and the HTTP response can be sent immediately). In another goroutine, you detect that a message has been added to the queue, take it out, and append it to your Messages slice. While you're appending to Messages, the HTTP requests don't have to wait.
Note: You can make it even better by having multiple goroutines listen on the queue, but we can leave that for later.
This is roughly how the code will look:
type Message struct {
    Text      string
    User      string
    Timestamp time.Time
}

var Messages = []Message{}

// messageQueue is the queue that holds new messages until they are processed
var messageQueue chan Message

func init() { // need the init function to initialize the channel and the listeners
    // initialize the queue, choosing a buffer size of 8 (the number of messages the channel can hold at once)
    messageQueue = make(chan Message, 8)
    // start a goroutine that listens on the queue/channel for new messages
    go listenForMessages()
}

func listenForMessages() {
    // whenever we detect a message in the queue, append it to Messages
    for m := range messageQueue {
        Messages = append(Messages, m)
    }
}

func messagePost(c http.ResponseWriter, req *http.Request) {
    decoder := json.NewDecoder(req.Body)
    var m Message
    err := decoder.Decode(&m)
    if err != nil {
        panic(err)
    }
    if m.Timestamp == (time.Time{}) {
        m.Timestamp = time.Now()
    }
    addUser(m.User)
    // add the message to the channel; it'll only wait if the channel is full
    messageQueue <- m
}
Storing messages: As other users have suggested, storing messages in memory may not be the right choice, since the messages won't persist if the application is restarted. If you're working on a small, proof-of-concept type project and don't want to figure out the DB, you could save the Messages variable as a flat file on the server and then read from it every time the application starts (note: this should not be done on a production system; for that, you should set up a database). But yeah, a database should be the way to go.
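A minimal sketch of that flat-file approach (assuming the encoding/json and os packages are imported; the messages.json path and the helper names are assumptions):

// saveMessages writes the in-memory slice to disk as JSON.
func saveMessages() error {
    data, err := json.Marshal(Messages)
    if err != nil {
        return err
    }
    return os.WriteFile("messages.json", data, 0644)
}

// loadMessages restores the slice at startup; a missing file just means this is the first run.
func loadMessages() error {
    data, err := os.ReadFile("messages.json")
    if os.IsNotExist(err) {
        return nil
    }
    if err != nil {
        return err
    }
    return json.Unmarshal(data, &Messages)
}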
Use a mutex to make the program thread-safe.
var Messages = []Message{}
var messageMu sync.Mutex
...
messageMu.Lock()
Messages = append(Messages, m)
messageMu.Unlock()
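Note that reads of Messages need the same lock; a minimal sketch (the getMessages helper is an assumption):

// getMessages returns a copy of the slice under the same lock,
// so readers don't race with concurrent appends.
func getMessages() []Message {
    messageMu.Lock()
    defer messageMu.Unlock()
    out := make([]Message, len(Messages))
    copy(out, Messages)
    return out
}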
There's no need to use channels and goroutines to make the program thread-safe.
A database is probably a better choice for storing messages than the in-memory slice used in the question. Asking how to use a database to implement a chat program is too broad a question for SO.
I'm trying to develop a simple job queue server with some workers that query it, but I encountered a problem with my net/http server. I'm surely doing something wrong, because after ~3 minutes my server starts displaying:
http: Accept error: accept tcp [::]:4200: accept4: too many open files; retrying in 1s
For information, it receives 10 requests per second in my test case.
Here are two files to reproduce the error:
// server.go
package main

import (
    "net/http"
)

func main() {
    http.HandleFunc("/get", func(rw http.ResponseWriter, r *http.Request) {
        http.Error(rw, "Try again", http.StatusInternalServerError)
    })
    http.ListenAndServe(":4200", nil)
}
// worker.go
package main

import (
    "net/http"
    "time"
)

func main() {
    for {
        res, _ := http.Get("http://localhost:4200/get")
        defer res.Body.Close()
        if res.StatusCode == http.StatusInternalServerError {
            time.Sleep(100 * time.Millisecond)
            continue
        }
        return
    }
}
I have already done some searching about this error and found some interesting responses, but none of them fixed my issue.
The first suggestion I saw was to correctly close the Body of the http.Get response; as you can see, I did that.
The second was to change the file descriptor ulimit of my system, but as I will not control where my app will run, I prefer not to use this solution (for information, it's set to 1024 on my system).
Can someone explain to me why this problem happens and how I can fix it in my code?
Thanks a lot for your time
EDIT 2: In a comment, Martin says that I'm not closing the Body. I tried closing it directly (without the defer) and it fixed the issue. Thanks Martin! I was thinking that continue would execute my defer; I was wrong.
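For clarity, here is one way to write the fixed worker loop (closing the body explicitly before continue instead of deferring, plus the error check the original skipped):

for {
    res, err := http.Get("http://localhost:4200/get")
    if err != nil {
        time.Sleep(100 * time.Millisecond)
        continue
    }
    res.Body.Close() // close right away; a defer would only run when the function returns
    if res.StatusCode == http.StatusInternalServerError {
        time.Sleep(100 * time.Millisecond)
        continue
    }
    return
}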
I found a post explaining the root problem in a lot more detail.
Nathan Smith even explains how to control timeouts on the TCP level, if needed.
Below is a summary of everything I could find on this particular problem, as well as best practices to avoid it in the future.
Problem
When a response is received, regardless of whether the response body is required or not, the connection is kept alive until the response body stream is closed. So, as mentioned in this thread, always close the response body, even if you do not need to use/read its content:
func Ping(url string) bool {
    // simple GET request on the given URL
    res, err := http.Get(url)
    if err != nil {
        // if unable to GET the given URL, the ping must fail
        return false
    }
    // always close the response body, even if its content is not required
    defer res.Body.Close()
    // is the page status okay?
    return res.StatusCode == http.StatusOK
}
Best Practice
As mentioned by Nathan Smith, never use http.DefaultClient in production systems; this includes calls like http.Get, which uses http.DefaultClient under the hood.
Another reason to avoid http.DefaultClient is that it is a singleton (a package-level variable), meaning the garbage collector will not try to clean it up, which can leave idle streams/sockets alive.
Instead, create your own instance of http.Client, and remember to always specify a sane Timeout:
func Ping(url string) bool {
    // create a new instance of the http client struct, with a timeout of 2s
    client := http.Client{Timeout: time.Second * 2}
    // simple GET request on the given URL
    res, err := client.Get(url)
    if err != nil {
        // if unable to GET the given URL, the ping must fail
        return false
    }
    // always close the response body, even if its content is not required
    defer res.Body.Close()
    // is the page status okay?
    return res.StatusCode == http.StatusOK
}
Safety Net
The safety net is for that newbie on the team who does not know the shortcomings of http.DefaultClient usage, or for that very useful, but not so active, open-source library that is still riddled with http.DefaultClient calls.
Since http.DefaultClient is a singleton, we can easily change its Timeout setting, just to ensure that legacy code does not cause idle connections to remain open.
I find it best to set this in the package main file's init function:
package main

import (
    "net/http"
    "time"
)

func init() {
    /*
        Safety net for the 'too many open files' issue on legacy code.
        Set a sane timeout duration for the http.DefaultClient, to ensure idle connections are terminated.
        Reference: https://stackoverflow.com/questions/37454236/net-http-server-too-many-open-files-error
    */
    http.DefaultClient.Timeout = time.Minute * 10
}
As Martin said in a comment, I wasn't really closing the Body after the Get request. I used defer res.Body.Close(), but it is never executed since I stay inside the for loop; continue doesn't trigger the defer.
Please note that in some cases the following setting in /etc/sysctl.conf:
net.ipv4.tcp_tw_recycle = 1
could cause this error, because TCP connections remain open.
A temporary solution: just increase the limit on open files:
ulimit -Sn 10000
This simple HTTP server contains a call to time.Sleep() that makes
each request take five seconds. When I try quickly loading multiple
tabs in a browser, it is obvious that each request
is queued and handled sequentially. How can I make it handle concurrent requests?
package main

import (
    "fmt"
    "net/http"
    "time"
)

func serve(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "Hello, world.")
    time.Sleep(5 * time.Second)
}

func main() {
    http.HandleFunc("/", serve)
    http.ListenAndServe(":1234", nil)
}
Actually, I just found the answer to this after writing the question, and it is very subtle. I am posting it anyway, because I couldn't find the answer on Google. Can you see what I am doing wrong?
Your program already handles the requests concurrently. You can test it with ab, a benchmark tool which is shipped with Apache 2:
ab -c 500 -n 500 http://localhost:1234/
On my system, the benchmark takes a total of 5043ms to serve all 500 concurrent requests. It's just your browser which limits the number of connections per website.
Benchmarking Go programs isn't that easy by the way, because you need to make sure that your benchmark tool isn't the bottleneck and that it is also able to handle that many concurrent connections. Therefore, it's a good idea to use a couple of dedicated computers to generate load.
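If you'd rather verify this without an external tool, here's a rough sketch in Go itself (the URL matches the example above; the request count and everything else are assumptions):

package main

import (
    "fmt"
    "net/http"
    "sync"
    "time"
)

func main() {
    const n = 20 // number of concurrent requests
    start := time.Now()
    var wg sync.WaitGroup
    for i := 0; i < n; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            res, err := http.Get("http://localhost:1234/")
            if err != nil {
                return
            }
            res.Body.Close()
        }()
    }
    wg.Wait()
    // An elapsed time of ~5s (not n*5s) confirms the requests were served concurrently.
    fmt.Println("elapsed:", time.Since(start))
}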
In the standard library's server.go, a goroutine is spawned in the Serve function when a connection is accepted. Below is the snippet:
// Serve accepts incoming connections on the Listener l, creating a
// new service goroutine for each. The service goroutines read requests and
// then call srv.Handler to reply to them.
func (srv *Server) Serve(l net.Listener) error {
    for {
        rw, e := l.Accept()
        if e != nil {
            ......
        }
        c, err := srv.newConn(rw)
        if err != nil {
            continue
        }
        c.setState(c.rwc, StateNew) // before Serve can return
        go c.serve()
    }
}
If you use XHR requests, make sure that the xhr instance is a local variable.
For example, xhr = new XMLHttpRequest() creates a global variable. When you make parallel requests with the same xhr variable, you receive only one result. So, you must declare xhr locally, like this: var xhr = new XMLHttpRequest().