HTTP server read/write timeouts and Server-Sent Events - Go

I'm writing a test app with SSE, and my problem is that
ReadTimeout and WriteTimeout close the client's connection every 10 seconds, and because of this the main page loses data.
How can I manage this issue, serving SSE and web pages together, without risking goroutine leaks and with SSE still working?
Server:
server := &http.Server{
    Addr:         addr,
    ReadTimeout:  10 * time.Second,
    WriteTimeout: 10 * time.Second,
    Handler:      s.mainHandler(),
}
Handler:
func sseHandler(w http.ResponseWriter, r *http.Request) {
    f, ok := w.(http.Flusher)
    if !ok {
        http.Error(w, "Streaming not supported!", http.StatusInternalServerError)
        log.Println("Streaming not supported")
        return
    }
    messageChannel := make(chan string)
    hub.addClient <- messageChannel
    notify := w.(http.CloseNotifier).CloseNotify()
    w.Header().Set("Content-Type", "text/event-stream")
    w.Header().Set("Cache-Control", "no-cache")
    w.Header().Set("Connection", "keep-alive")
    for i := 0; i < 1440; {
        select {
        case msg := <-messageChannel:
            jsonData, _ := json.Marshal(msg)
            str := string(jsonData)
            fmt.Fprintf(w, "data: {\"str\": %s, \"time\": \"%v\"}\n\n", str, time.Now())
            f.Flush()
        case <-time.After(time.Second * 60):
            fmt.Fprintf(w, "data: {\"str\": \"No Data\"}\n\n")
            f.Flush()
            i++
        case <-notify:
            f.Flush()
            i = 1440
            hub.removeClient <- messageChannel
        }
    }
}

Both ReadTimeout and WriteTimeout define the duration within which the whole request must be read from, or the whole response written back to, the client. These timeouts are applied to the underlying connection (http://golang.org/src/pkg/net/http/server.go?s=15704:15902) before any headers are received, so you cannot set different limits for separate handlers – all connections within the server share the same timeout limits.
That said, if you need customised timeouts per request, you will need to implement them in your handler. Your code already uses timeouts for its own job, so this would be a matter of creating a time.After at the beginning of the handler, checking it regularly (in the handler itself, or by passing it around) and stopping the request when it fires. This actually gives you more precise control over the timeout, as WriteTimeout only triggers when trying to write the response (e.g. if the timeout is set to 10 seconds and it takes a minute to prepare the response before any write, you won't get any error until the job is done, so you'll have wasted resources for 50 seconds). From this perspective, WriteTimeout itself is only really useful when your response is ready quickly but you want to drop the connection when the network becomes very slow (or the client deliberately stops receiving data).
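As an illustration only, here is a minimal sketch of such a handler-level budget. It is not your original handler: the hub and message channel from the question are left out, and the 24-hour limit and the use of r.Context() instead of CloseNotifier are my assumptions (the request context is cancelled when the client disconnects on current Go versions). The imports are the same ones the question already uses (fmt, net/http, time).

func sseWithBudget(w http.ResponseWriter, r *http.Request) {
    f, ok := w.(http.Flusher)
    if !ok {
        http.Error(w, "Streaming not supported!", http.StatusInternalServerError)
        return
    }
    w.Header().Set("Content-Type", "text/event-stream")

    deadline := time.After(24 * time.Hour)        // per-handler budget, pick what fits
    keepAlive := time.NewTicker(60 * time.Second) // periodic "No Data" frame
    defer keepAlive.Stop()

    for {
        select {
        case <-deadline:
            return // overall time budget exhausted, drop the stream
        case <-r.Context().Done():
            return // client went away
        case <-keepAlive.C:
            fmt.Fprintf(w, "data: {\"str\": \"No Data\"}\n\n")
            f.Flush()
        }
    }
}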
There is a little helper in the standard library, http.TimeoutHandler, which behaves similarly to WriteTimeout but sends back a 503 error if the response takes longer than the predefined time. You could use it to set up behaviour similar to WriteTimeout per handler, for example:
package main

import (
    "log"
    "net/http"
    "time"
)

type Handler struct {
}

func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    time.Sleep(3 * time.Second)
    // This will return http.ErrHandlerTimeout
    log.Println(w.Write([]byte("body")))
}

func main() {
    h := &Handler{}
    http.Handle("/t1", http.TimeoutHandler(h, 1*time.Second, ""))
    http.Handle("/t2", http.TimeoutHandler(h, 2*time.Second, ""))
    http.ListenAndServe(":8080", nil)
}
This looks very handy, but I found one disadvantage that would affect your code: the http.ResponseWriter passed in by http.TimeoutHandler doesn't implement http.CloseNotifier. If you need that, you could dive into the implementation to see how they solved the timeout problem and come up with a similar, but enhanced, solution.
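If you do go the http.TimeoutHandler route, a cheap defensive option (sketch only, not part of the original answer) is to probe for CloseNotifier support instead of asserting it unconditionally; a nil channel never fires in a select, so the handler simply falls back to its own deadline:

var notify <-chan bool
if cn, ok := w.(http.CloseNotifier); ok {
    notify = cn.CloseNotify()
}
// later, in the handler's select loop:
// case <-notify: // never fires when notify is nil, so the handler
//                // relies on its own deadline instead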

Related

How to close all grpc server streams using gracefulStop?

I'm trying to stop all clients connected to a stream server from the server side.
I'm currently using the GracefulStop method to handle this gracefully.
I wait for an os.Interrupt signal on a channel to perform the graceful stop of the gRPC server, but it gets stuck on server.GracefulStop() while a client is connected.
func (s *Service) Subscribe(_ *empty.Empty, srv clientapi.ClientApi_SubscribeServer) error {
    ctx := srv.Context()
    updateCh := make(chan *clientapi.Update, 100)
    stopCh := make(chan bool)
    defer func() {
        stopCh <- true
        close(updateCh)
    }()
    go func() {
        ticker := time.NewTicker(1 * time.Second)
        defer func() {
            ticker.Stop()
            close(stopCh)
        }()
        for {
            select {
            case <-stopCh:
                return
            case <-ticker.C:
                updateCh <- &clientapi.Update{Name: "notification", Payload: "sample notification every 1 second"}
            }
        }
    }()
    for {
        select {
        case <-ctx.Done():
            return ctx.Err()
        case notif := <-updateCh:
            err := srv.Send(notif)
            if err == io.EOF {
                return nil
            }
            if err != nil {
                s.logger.Named("Subscribe").Error("error", zap.Error(err))
                continue
            }
        }
    }
}
I expected ctx.Done() in the method to handle it and break out of the for loop.
How do I close all response streams like this one?
Create a global context for your gRPC service. Walking through the various pieces:
Each gRPC service request would use this context (along with the client context) to fulfill that request.
The os.Interrupt handler would cancel the global context, thus canceling any currently running requests.
Finally, issue server.GracefulStop(), which should wait for all the active gRPC calls to finish up (if they haven't already observed the cancellation).
So for example, when setting up the gRPC service:
pctx := context.Background()
globalCtx, globalCancel := context.WithCancel(pctx)

mysrv := MyService{
    gctx: globalCtx,
}

s := grpc.NewServer()
pb.RegisterMyService(s, mysrv)
The os.Interrupt handler initiates shutdown and waits for it to complete:
globalCancel()
server.GracefulStop()
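As a sketch of that wiring (s is the grpc.Server created above, globalCancel comes from the same snippet, and the os and os/signal imports are assumed):

sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, os.Interrupt)

go func() {
    <-sigCh          // Ctrl-C received
    globalCancel()   // running RPCs observe the cancellation via the merged context
    s.GracefulStop() // then wait for the handlers to return
}()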
gRPC methods:
func (s *MyService) SomeRpcMethod(ctx context.Context, req *pb.Request) error {
    // merge client and server contexts into one `mctx`
    // (client context will cancel if the client disconnects)
    // (server context will cancel if the service is Ctrl-C'ed)
    mctx, mcancel := mergeContext(ctx, s.gctx)
    defer mcancel() // so we don't leak if neither the client nor the server context cancels

    // RPC WORK GOES HERE
    // RPC WORK GOES HERE
    // RPC WORK GOES HERE

    // pass mctx to any blocking calls:
    // - HTTP REST calls
    // - SQL queries etc.
    // - or, if running a long loop, status-check the context occasionally like so:

    // Example long request (10s)
    for i := 0; i < 10*1000; i++ {
        time.Sleep(1 * time.Millisecond)

        // poll the merged context
        select {
        case <-mctx.Done():
            return fmt.Errorf("request canceled: %s", mctx.Err())
        default:
        }
    }
    return nil
}
And:
func mergeContext(a, b context.Context) (context.Context, context.CancelFunc) {
    mctx, mcancel := context.WithCancel(a) // will cancel if `a` cancels
    go func() {
        select {
        case <-mctx.Done(): // don't leak the goroutine on a clean gRPC run
        case <-b.Done():
            mcancel() // b canceled, so cancel mctx
        }
    }()
    return mctx, mcancel
}
Typically clients need to assume that RPCs can terminate at any moment (e.g. due to connection errors or server power failure). So what we do is call GracefulStop, sleep for a short period to give in-flight RPCs an opportunity to complete naturally, then hard-Stop the server. If you do need to use this termination signal to end your RPCs, then the answer by #colminator is probably the best choice. But this situation should be unusual, and you may want to spend some time analyzing your design if you find it necessary to manually end streaming RPCs at server shutdown.
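A sketch of that "graceful, then hard" sequence (srv is an assumed *grpc.Server, and the grace period is arbitrary):

done := make(chan struct{})
go func() {
    srv.GracefulStop() // waits for in-flight RPCs to finish
    close(done)
}()
select {
case <-done:
    // all RPCs completed within the grace period
case <-time.After(10 * time.Second):
    srv.Stop() // force-close whatever is still streaming
}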

How to release a websocket and redis gateway server resource in golang?

I have a gateway server which can push messages to the client side using websockets. When a new client connects to my server, I generate a cid for it and subscribe to a channel named after that cid. If any message is published to this channel, my server pushes it to the client side. For now, every unit works fine, but when I run a benchmark test with thor it crashes. I found that DeliverMessage has an issue: it never exits, since it contains an endless loop. But since Redis needs to subscribe to something, I don't know how to avoid the loop.
func (h *Hub) DeliverMessage(pool *redis.Pool) {
    conn := pool.Get()
    defer conn.Close()
    var gPubSubConn *redis.PubSubConn
    gPubSubConn = &redis.PubSubConn{Conn: conn}
    defer gPubSubConn.Close()
    for {
        switch v := gPubSubConn.Receive().(type) {
        case redis.Message:
            // fmt.Printf("Channel=%q | Data=%s\n", v.Channel, string(v.Data))
            h.Push(string(v.Data))
        case redis.Subscription:
            fmt.Printf("Subscription message: %s : %s %d\n", v.Channel, v.Kind, v.Count)
        case error:
            fmt.Println("Error pub/sub, delivery has stopped", v)
            panic("Error pub/sub")
        }
    }
}
In the main function, I call the above function as:
go h.DeliverMessage(pool)
But when I test it with a huge number of connections, I get an error like:
ERR max number of clients reached
So I changed the Redis pool size by changing MaxIdle:
func newPool(addr string) *redis.Pool {
    return &redis.Pool{
        MaxIdle:     5000,
        IdleTimeout: 240 * time.Second,
        Dial:        func() (redis.Conn, error) { return redis.Dial("tcp", addr) },
    }
}
But it still doesn't work, so I'd like to know: is there any good way to kill a goroutine after the websocket client disconnects from my server, in the selection below?
case client := <-h.Unregister:
    if _, ok := h.Clients[client]; ok {
        delete(h.Clients, client)
        delete(h.Connections, client.CID)
        close(client.Send)
        if err := gPubSubConn.Unsubscribe(client.CID); err != nil {
            panic(err)
        }
        // TODO kill the subscribe goroutine when the client side disconnects ...
    }
But how do I identify this goroutine? Can I kill it the Unix way, like kill -9 <PID>?
Look at the example here
You can make your goroutine exit by having a return statement inside a switch case in DeliverMessage, once you're not going to receive anything more. I'm guessing you'd want to return from case error, or, as seen in the example, when the subscription count reaches 0; then your goroutine ends. Or, if I'm misunderstanding things and case client := <-h.Unregister: is inside DeliverMessage, just return there.
You're also closing your connection twice: defer gPubSubConn.Close() already calls conn.Close(), so you don't need defer conn.Close().
Also take a look at Pool and what all the parameters actually do. If you want to handle many connections, set MaxActive to 0: "When zero, there is no limit on the number of connections in the pool." (And do you actually want the idle timeout?)
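Putting those points together, here is a sketch (not your exact code; h.Push and the pool come from the question) of a DeliverMessage that ends on its own: it returns on error, and returns when the subscription count drops to zero after the last Unsubscribe, as in the redigo PubSubConn example.

func (h *Hub) DeliverMessage(pool *redis.Pool) {
    psc := redis.PubSubConn{Conn: pool.Get()}
    defer psc.Close() // also closes the underlying connection

    for {
        switch v := psc.Receive().(type) {
        case redis.Message:
            h.Push(string(v.Data))
        case redis.Subscription:
            if v.Count == 0 {
                return // nothing left to listen on; the goroutine exits cleanly
            }
        case error:
            fmt.Println("Error pub/sub, delivery has stopped", v)
            return
        }
    }
}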
Actually, I had the wrong design/architecture, so let me explain what I want to do.
1. A client can connect to my websocket server.
2. The server has several HTTP handlers, and the admin can post data via these handlers; the structure of the data looks like:
{
    "cid": "something",
    "body": {
    }
}
Since I have several nodes running to serve our clients, and Nginx can dispatch each request from the admin to a totally different node, but only one node holds the connection for the cid "something", I need to publish this data to Redis; whichever node receives it will send the message to the client side.
3. Look up the NodeID that I am going to publish to, given a cid.
// redis code & golang
NodeID, err := conn.Do("HGET", "NODE_MAP", cid)
4. Now I can take any message from the admin and publish it to the NodeID obtained in step 3.
// redis code & golang
NodeID, err := conn.Do("PUBLISH", NodeID, data)
5. Time to show the core code related to this question. I subscribe to a channel whose name is the NodeID, like the following:
go func() {
    for {
        switch v := gPubSubConn.Receive().(type) {
        case redis.Message:
            fmt.Println("Got a message", v.Data)
            h.Broadcast <- v.Data
            pipeline <- v.Data
        case error:
            panic(v)
        }
    }
}()
6. To manage your websockets, you also need a goroutine, along these lines:
go func() {
    for {
        select {
        case client := <-h.Register:
            h.Clients[client] = true
            cid := client.CID
            h.Connections[cid] = client
            msg := "something"
            client.Send <- msg // greeting
        case client := <-h.Unregister:
            if _, ok := h.Clients[client]; ok {
                delete(h.Clients, client)
                delete(h.Connections, client.CID)
                close(client.Send)
            }
        case message := <-h.Broadcast:
            fmt.Println("message is", message)
        }
    }
}()
The last thing is managing a Redis pool; you don't really need a large connection pool right now, since we only have two goroutines and one main process.
func newPool(addr string) *redis.Pool {
    return &redis.Pool{
        MaxIdle:     100,
        IdleTimeout: 240 * time.Second,
        Dial:        func() (redis.Conn, error) { return redis.Dial("tcp", addr) },
    }
}

var (
    pool        *redis.Pool
    redisServer = flag.String("redisServer", ":6379", "")
)

pool = newPool(*redisServer)
conn := pool.Get()
defer conn.Close()

Pattern for handling HTTP requests via channels

I am writing a web application where I have a long running goroutine.
I want to delegate all HTTP requests to this goroutine via channels.
The pattern that I have come across is:
// Internal long-running goroutine
for {
    select {
    case e := <-event: // web request
        req := e.req
        // do something
        ....
        select {
        case <-ctx.Done():
            // log
        default:
            e.replyTo <- result
        }
    }
}
// Web handler
http.HandleFunc("/bar", func(w http.ResponseWriter, r *http.Request) {
    // decode request etc.
    ...
    replyTo := make(chan interface{}, 1)
    ctx, cancel := context.WithCancel(context.Background())
    event <- Event{req: req, ctx: ctx, replyTo: replyTo}
    select {
    case <-time.After(time.Second):
        cancel()
        // return 500
    case r := <-replyTo:
        // return some response
    }
})
I do see that there is a single goroutine at the end, so parallelism is lost, but I am okay with that.
Is this pattern the right way of doing this?
What other approaches can be suggested?
Is this pattern the right way of doing this?
Assuming you are trying to manage shared state in a single goroutine, I would say no. I think it would be better to have some form of state manager that is responsible for thread safety. The handler should therefore take in something that manages the state and exposes just a few methods to it.
type State interface {
    Load() (string, error)
    Save(something string) error
}
Decoupling the code will pay off for you later on. It will also allow unit tests for both the handler and the State that can be focused and readable.
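For instance, a minimal, purely hypothetical in-memory implementation of that State interface, with a mutex providing the thread safety so handlers never touch shared globals directly (sync is assumed to be imported):

type memoryState struct {
    mu    sync.RWMutex
    value string
}

func (m *memoryState) Load() (string, error) {
    m.mu.RLock()
    defer m.mu.RUnlock()
    return m.value, nil
}

func (m *memoryState) Save(something string) error {
    m.mu.Lock()
    defer m.mu.Unlock()
    m.value = something
    return nil
}

The handler then just calls Load and Save, so whether the backing store is a long-running goroutine, a map behind a mutex, or a database stays an implementation detail.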

How to make stateless connections with gorilla mux?

My program runs fine with one connection at a time, but not with concurrent connections.
I need all connections to be rendered by one function, which holds all the data I need in my service, and that is not working well, so I've illustrated it with the simple code below:
package main

import (
    "encoding/json"
    "fmt"
    "github.com/gorilla/mux"
    "github.com/rs/cors"
    "net/http"
    "reflect"
    "time"
)

var Out struct {
    Code    int           `json:"status"`
    Message []interface{} `json:"message"`
}

func Clear(v interface{}) {
    p := reflect.ValueOf(v).Elem()
    p.Set(reflect.Zero(p.Type()))
}

func YourHandler(w http.ResponseWriter, r *http.Request) {
    Clear(&Out.Message)
    Out.Code = 0
    // w.Header().Set("Content-Type", "application/json; charset=UTF-8")
    w.Header().Set("Access-Control-Allow-Origin", "*")
    w.Header().Set("Access-Control-Allow-Headers", "Content-Type,access-control-allow-origin, access-control-allow-headers")
    w.WriteHeader(http.StatusOK)
    for i := 0; i < 10; i++ {
        Out.Code = Out.Code + 1
        Out.Message = append(Out.Message, "Running...")
        time.Sleep(1000 * time.Millisecond)
        if err := json.NewEncoder(w).Encode(Out); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
        }
    }
}

func main() {
    r := mux.NewRouter()
    r.StrictSlash(true)
    r.HandleFunc("/", YourHandler)
    handler := cors.New(cors.Options{
        AllowedOrigins:   []string{"*"},
        AllowCredentials: true,
        Debug:            true,
        AllowedHeaders:   []string{"Content-Type"},
        AllowedMethods:   []string{"GET"},
    }).Handler(r)
    fmt.Println("Working in localhost:5000")
    http.ListenAndServe(":5000", handler)
}
If you run this code, you won't see anything wrong with one connection at a time, but if you hit it from another tab/browser/etc. at the same time, then because of the delay the status codes will not run from 1 to 10 in each response; they will be shuffled across all the calls.
So I guess that means it's not stateless, and I need it to be, so that even with 300 simultaneous connections each one always returns status codes from 1 to 10.
How can I do that? (As I said, this is simplified code; in my real service the structs and the render functions are in packages separate from each other and from all the data collection.)
Handlers are called concurrently by the net/http server. The server creates a goroutine for each client connection and calls handlers on those goroutines.
The Gorilla Mux is passive with respect to concurrency. The mux calls through to the registered application handler on whatever goroutine the mux is called on.
Use a sync.Mutex to limit execution to one goroutine at a time:
var mu sync.Mutex

func YourHandler(w http.ResponseWriter, r *http.Request) {
    mu.Lock()
    defer mu.Unlock()
    Clear(&Out.Message)
    Out.Code = 0
    ...
This is not a good solution given the time.Sleep calls in the handler. The server will process at most one request every 10 seconds.
A better solution is to declare Out as a local variable inside the handler function. With this change, there's no need for the mutex or for clearing Out:
func YourHandler(w http.ResponseWriter, r *http.Request) {
    var Out struct {
        Code    int           `json:"status"`
        Message []interface{} `json:"message"`
    }
    // w.Header().Set("Content-Type", "application/json; charset=UTF-8")
    w.Header().Set("Access-Control-Allow-Origin", "*")
    ...
If it's not possible to move the declaration of Out, then copy the value to a local variable:
func YourHandler(w http.ResponseWriter, r *http.Request) {
    Out := Out // the local Out is a copy of the package-level Out
    Clear(&Out.Message)
    Out.Code = 0
    ...
Gorilla Mux uses Go's net/http server to process your HTTP requests, and Go creates a goroutine to service each incoming request. If I understand your question correctly, you expect the responses to carry your custom status codes in order from 1 to 10, since you expect the requests to arrive synchronously in that order. Goroutine parallelism doesn't guarantee order of execution, just like Java threads, if you're familiar with Java. So when goroutines are spawned for each of the requests hitting that 1-to-10 loop, the routines execute on their own, without regard for which one starts or completes first; each goroutine serves its request as it finishes. If you want these requests processed in parallel but still in a controlled order, you can use channels. Look at this link on synchronization between the goroutines handling those HTTP requests: https://gobyexample.com/channel-synchronization
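For reference, the channel-synchronization pattern from that link boils down to a toy example like this (not your handler code): the worker signals completion on a channel and the caller blocks until then.

package main

import (
    "fmt"
    "time"
)

func worker(done chan<- bool) {
    fmt.Println("working...")
    time.Sleep(time.Second)
    fmt.Println("done")
    done <- true // signal completion
}

func main() {
    done := make(chan bool, 1)
    go worker(done)
    <-done // block here until the worker signals it has finished
}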
First, I would like to thank ThunderCat and Ramil for the help; your answers pointed me toward the correct answer.
The short answer is: Go doesn't have stateless connections in that sense, so I can't do exactly what I was looking for.
That said, the reason I think (based on RFC 7230) it doesn't is:
In a traditional web server setup we have one program that handles the connections (Apache, nginx, etc.) and opens a thread for the routed application, while in Go we have both in the same application, so anything global is always shared between connections.
In languages that can work like Go (the application opens a port and listens on it), such as C++, the code is object-oriented, so even public variables live inside a class; you don't share them, since you create an instance of the class each time.
Creating a thread per request would resolve the problem, but Go doesn't expose threads; instead it has goroutines. More detail about that here:
https://translate.google.com/translate?sl=ko&tl=en&u=https%3A%2F%2Ftech.ssut.me%2F2017%2F08%2F20%2Fgoroutine-vs-threads%2F
After days on this, and with the help here, I'll fix it by changing my struct into a type and keeping it local, like this:
package main

import (
    "encoding/json"
    "fmt"
    "github.com/gorilla/mux"
    "github.com/rs/cors"
    "net/http"
    "reflect"
    "time"
)

type Out struct {
    Code    int           `json:"status"`
    Message []interface{} `json:"message"`
}

func Clear(v interface{}) {
    p := reflect.ValueOf(v).Elem()
    p.Set(reflect.Zero(p.Type()))
}

func YourHandler(w http.ResponseWriter, r *http.Request) {
    localOut := Out{0, nil}
    // w.Header().Set("Content-Type", "application/json; charset=UTF-8")
    w.Header().Set("Access-Control-Allow-Origin", "*")
    w.Header().Set("Access-Control-Allow-Headers", "Content-Type,access-control-allow-origin, access-control-allow-headers")
    w.WriteHeader(http.StatusOK)
    for i := 0; i < 10; i++ {
        localOut.Code = localOut.Code + 1
        localOut.Message = append(localOut.Message, "Running...")
        time.Sleep(1000 * time.Millisecond)
        if err := json.NewEncoder(w).Encode(localOut); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
        }
    }
}

func main() {
    r := mux.NewRouter()
    r.StrictSlash(true)
    r.HandleFunc("/", YourHandler)
    handler := cors.New(cors.Options{
        AllowedOrigins:   []string{"*"},
        AllowCredentials: true,
        Debug:            true,
        AllowedHeaders:   []string{"X-Session-Token", "Content-Type"},
        AllowedMethods:   []string{"GET", "POST", "PUT", "DELETE"},
    }).Handler(r)
    fmt.Println("Working in localhost:5000")
    http.ListenAndServe(":5000", handler)
}
Of course that change will take some weeks, so for now I've put my application behind nginx and it works as expected.

What is an idiomatic method of listening for events in Go?

A few months ago I was thinking about how to implement a closable event loop in Go for an RPC library. I managed to facilitate closing the server like so:
type Server struct {
    listener  net.Listener
    closeChan chan bool
    routines  sync.WaitGroup
}

func (s *Server) Serve() {
    s.routines.Add(1)
    defer s.routines.Done()
    defer s.listener.Close()
    for {
        select {
        case <-s.closeChan:
            // close server etc.
        default:
            s.listener.SetDeadline(time.Now().Add(2 * time.Second))
            conn, _ := s.listener.Accept()
            // handle conn routine
        }
    }
}

func (s *Server) Close() {
    s.closeChan <- true // signal to close serve routine
    s.routines.Wait()
}
The problem I've found with this implementation is that it involves a timeout, which means the minimum close time can be up to 2 seconds longer than necessary. Is there a more idiomatic way of creating an event loop?
I don't think that event loops in Go need to be loops.
It would seem simpler to handle closing and connections in separate goroutines:
go func() {
    <-s.closeChan
    // close server, release resources, etc.
    s.listener.Close()
}()

for {
    conn, err := s.listener.Accept()
    if err != nil {
        // log, return
    }
    // handle conn routine
}
Note that you could also close the listener directly in your Close function without using a channel. What I have done here is use the error return value of Listener.Accept to facilitate inter-routine communication.
If at some point the closing and connection-handling code needs to protect resources that are being released while you're still serving, you can use a Mutex, but it's generally possible to avoid that.
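For completeness, here is a sketch of that channel-free variant: Close shuts the listener directly, and Serve detects the resulting Accept error. It assumes errors, log and net are imported, errors.Is with net.ErrClosed needs Go 1.16+, and s.handle is a hypothetical per-connection handler.

func (s *Server) Serve() {
    s.routines.Add(1)
    defer s.routines.Done()
    for {
        conn, err := s.listener.Accept()
        if err != nil {
            if errors.Is(err, net.ErrClosed) {
                return // Close() was called; shut down quietly
            }
            log.Println("accept:", err)
            return
        }
        go s.handle(conn)
    }
}

func (s *Server) Close() {
    s.listener.Close() // unblocks Accept immediately, no 2-second deadline
    s.routines.Wait()  // wait for Serve to return
}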
