Gorilla mux reuse/restart server after a shutdown - go

I am setting up an HTTP server in a separate goroutine. This goroutine sets up a new server when it receives a signal. However, after a shutdown request, I cannot restart the server. Here is the code:
for {
    select {
    case <-m.start:
        fmt.Println("Start the server now")
        go func() {
            err := m.ListenAndServe()
            if err != nil {
                fmt.Println(err)
            }
        }()
    case <-m.stop:
        fmt.Println("Stop the server now")
        go func() {
            ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
            defer cancel()
            if err := m.Shutdown(ctx); err != nil {
                fmt.Println("Error is:", err)
            }
        }()
    }
}
I am controlling this goroutine through separate handlers externally. The documentation mentions that the server may not be reused after a shutdown. Is there any alternative?
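Yes: since a server that has been shut down may not be reused, the usual alternative (a minimal sketch, not from the original thread) is to construct a fresh http.Server value on every start signal. The m.start and m.stop fields come from the question; m.router and the address are hypothetical:

var srv *http.Server
for {
    select {
    case <-m.start:
        // Build a brand-new server each time; a *http.Server that
        // has been shut down may not be reused.
        srv = &http.Server{Addr: ":8080", Handler: m.router}
        go func(s *http.Server) {
            if err := s.ListenAndServe(); err != nil && err != http.ErrServerClosed {
                fmt.Println(err)
            }
        }(srv)
    case <-m.stop:
        go func(s *http.Server) {
            ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
            defer cancel()
            if err := s.Shutdown(ctx); err != nil {
                fmt.Println(err)
            }
        }(srv)
    }
}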

Running a consumer and an API on a port in golang

I have a Go API project in which I also run a RabbitMQ worker. I just discovered that my worker and my HTTP ListenAndServe do not work together: the moment I run the worker, the API's port is never reached.
Here is what my code looks like.
app.go
func (a *App) StartWorker() {
    connection, err := amqp091.Dial(os.Getenv("AMQP_URL"))
    if err != nil {
        panic(err)
    }
    defer connection.Close()
    consumer, err := events.NewConsumer(connection, database.GetDatabase(a.Database))
    if err != nil {
        panic(err)
    }
    consumer.Listen(os.Args[1:])
}
func (a *App) Run(addr string) {
    logs := log.New(os.Stdout, "my-service", log.LstdFlags)
    server := &http.Server{
        Addr:         addr,
        Handler:      a.Router,
        ErrorLog:     logs,
        IdleTimeout:  120 * time.Second, // max time for connections using TCP Keep-Alive
        ReadTimeout:  5 * time.Second,
        WriteTimeout: 10 * time.Second,
    }
    go func() {
        if err := server.ListenAndServe(); err != nil {
            logs.Fatal(err)
        }
    }()
    // trap sigterm or interrupt and gracefully shutdown the server
    c := make(chan os.Signal, 1)
    signal.Notify(c, os.Interrupt)
    signal.Notify(c, os.Kill)
    sig := <-c
    logs.Println("Received terminate, graceful shutdown", sig)
    tc, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()
    server.Shutdown(tc)
}
Here is my consumer.go:
// NewConsumer returns a new Consumer
func NewConsumer(conn *amqp.Connection, db *mongo.Database) (Consumer, error) {
    consumer := Consumer{
        conn: conn,
        db:   db,
    }
    err := consumer.setup()
    if err != nil {
        return Consumer{}, err
    }
    return consumer, nil
}
// Listen will listen for all new Queue publications
// and print them to the console.
func (consumer *Consumer) Listen(topics []string) error {
    ch, err := consumer.conn.Channel()
    if err != nil {
        return err
    }
    defer ch.Close()
    msgs, err := ch.Consume("update.package.rating", "", true, false, false, false, nil)
    if err != nil {
        return err
    }
    forever := make(chan bool)
    go func() {
        for msg := range msgs {
            switch msg.RoutingKey {
            case "update.package.rating":
                worker.RatePackage(packageRepo.NewPackagesRepository(consumer.db), msg.Body)
            }
            // acknowledge received event
            log.Printf("Received a message: %s", msg.Body)
        }
    }()
    log.Printf("[*] Waiting for message [Exchange, Queue][%s, %s]. To exit press CTRL+C", getExchangeName(), "update.package.rating")
    <-forever
    return nil
}
main.go
func main() {
    start := app.App{}
    start.StartApp()
    start.StartWorker()
    start.Run(":3006")
}
Port 3006 is never reached. I am using gin-gonic to serve my HTTP requests. Any help is welcome.
I had a similar problem while using the gin framework. I solved it by running my consumer inside a goroutine. I invoked my consumer like below:
go notificationCallback.ConsumeBankTransaction()
Both the server and the RabbitMQ consumer now run seamlessly. I am still monitoring performance to see whether it is robust and resilient enough.
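Applied to the code in the question, the same fix would look like this (a sketch: StartWorker blocks forever on the consumer's <-forever channel, so it must not run on the main goroutine before Run):

func main() {
    start := app.App{}
    start.StartApp()
    // StartWorker blocks on the consumer loop, so run it in its
    // own goroutine; otherwise Run, and with it the HTTP listener
    // on :3006, is never reached.
    go start.StartWorker()
    start.Run(":3006")
}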

Websocket graceful shutdown

I have a websocket server, and I wrote a test that checks its ability to shut down gracefully. Five connections are created and each sends five requests. After a while, shutdown starts. All 25 requests must be fulfilled. If I close the exit channel, the test does not work as it should.
time.AfterFunc(50*time.Millisecond, func() {
    close(exit)
    close(done)
})
But if I just call s.Shutdown directly, everything is OK.
time.AfterFunc(50*time.Millisecond, func() {
    require.Nil(t, s.Shutdown())
    close(done)
})
My test
func TestServer_GracefulShutdown(t *testing.T) {
    done := make(chan struct{})
    exit := make(chan struct{})
    ctx := context.Background()
    finishedRequestCount := atomic.NewInt32(0)
    ln, err := net.Listen("tcp", "localhost:")
    require.Nil(t, err)
    handler := HandlerFunc(func(conn *websocket.Conn) {
        for {
            _, _, err := conn.ReadMessage()
            if err != nil {
                return
            }
            time.Sleep(100 * time.Millisecond)
            finishedRequestCount.Inc()
        }
    })
    s, err := makeServer(ctx, handler) // server create
    require.Nil(t, err)
    time.AfterFunc(50*time.Millisecond, func() {
        close(exit)
        close(done)
    })
    go func() {
        fmt.Printf("Running...")
        require.Nil(t, s.Run(ctx, exit, ln))
    }()
    for i := 0; i < connCount; i++ {
        go func() {
            err := clientRun(ln)
            require.Nil(t, err)
        }()
    }
    <-done
    assert.Equal(t, int32(totalCount), finishedRequestCount.Load())
}
My run func
func (s *Server) Run(ctx context.Context, exit <-chan struct{}, ln net.Listener) error {
    errs := make(chan error, 1)
    go func() {
        err := s.httpServer.Run(ctx, exit, ln)
        if err != nil {
            errs <- err
        }
    }()
    select {
    case <-ctx.Done():
        return s.Close()
    case <-exit:
        return s.Shutdown()
    case err := <-errs:
        return err
    }
}
My shutdown
func (s *Server) Shutdown() error {
    err := s.httpServer.Shutdown() // we close the possibility to connect to any conn
    s.wg.Wait()
    return err
}
What happens when the following code is executed?
close(exit)
close(done)
Both channels are closed at almost the same time. The first triggers the Shutdown function, which waits for a graceful shutdown. But the second triggers the evaluation of
assert.Equal(t, int32(totalCount), finishedRequestCount.Load())
which runs while the graceful shutdown is still in progress or hasn't even started.
If you execute the Shutdown function directly, it blocks until finished, and only then does close(done) start the assertion. That is why this works:
require.Nil(t, s.Shutdown())
close(done)
You can move the close(done) to the following location to make the test work while using the exit channel to close:
go func() {
    fmt.Printf("Running...")
    require.Nil(t, s.Run(ctx, exit, ln))
    close(done)
}()
This way, done is closed only after the Shutdown function has completed.
As discussed in the comments, I strongly suggest using contexts instead of channels for closing; they hide away the complexity of close channels.
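A minimal sketch of the context-driven variant (an assumption, not code from the original answer; it presumes s.httpServer.Run tolerates a nil exit channel): cancelling the context takes the graceful Shutdown path, so callers no longer need a separate exit channel.

func (s *Server) Run(ctx context.Context, ln net.Listener) error {
    errs := make(chan error, 1)
    go func() {
        if err := s.httpServer.Run(ctx, nil, ln); err != nil {
            errs <- err
        }
    }()
    select {
    case <-ctx.Done():
        // Graceful path: Shutdown waits for in-flight requests via s.wg.
        return s.Shutdown()
    case err := <-errs:
        return err
    }
}

The test then replaces close(exit) with the cancel function from context.WithCancel and keeps close(done) after Run returns, as shown above.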

Log from within gRPC/Go handlers

I am trying to debug code by printing/logging from within a gRPC/Go function but the output does not print to the terminal.
fmt.Println() works in my main() function, but not in the server callbacks. How can I print from within these gRPC handlers? I'm open to other debugging methods if there's a different approach, too.
Here's the related code from my main() function. Logging and printing works here:
func main() {
    ...
    lis, err := net.Listen("tcp", fmt.Sprintf("0.0.0.0:%s", os.Getenv("APP_PORT")))
    if err != nil {
        logger.Error(fmt.Sprintf("Failed start on port: %s", os.Getenv("APP_PORT")), err)
        log.Fatalf("Failed to start: %v", err)
    }
    opts := []grpc.ServerOption{}
    s := grpc.NewServer(opts...)
    pb.RegisterAuthServiceServer(s, &server{})
    // Register reflection service on gRPC server.
    reflection.Register(s)
    go func() {
        fmt.Println("Starting Server...")
        if err := s.Serve(lis); err != nil {
            log.Fatalf("failed to serve: %v", err)
        }
    }()
    // Wait for Control C to exit
    ch := make(chan os.Signal, 1)
    signal.Notify(ch, os.Interrupt)
    // Run until a signal is received
    <-ch
    fmt.Println("")
    fmt.Println("Stopping the server")
    s.Stop()
    fmt.Println("Closing the listener")
    lis.Close()
}
Then here is one of my handlers, where logging or printing does not output to the terminal where I started the server.
func (*server) AuthUser(ctx context.Context, req *pb.AuthUserRequest) (*pb.AccessToken, error) {
    fmt.Println("AuthUser***")
    ...

Gorilla websocket disconnect is called two times

I'm writing a Go websocket server and I want to gracefully stop the connections when my server goes down.
I have a map of active connections stored in the following variable:
var connections = make(map[string]*websocket.Conn)
My main function looks like this:
func main() {
    // ... stuff ....
    gracefulStop := make(chan os.Signal)
    signal.Notify(gracefulStop, syscall.SIGTERM)
    signal.Notify(gracefulStop, syscall.SIGINT)
    signal.Notify(gracefulStop, syscall.SIGQUIT)
    signal.Notify(gracefulStop, syscall.SIGKILL)
    signal.Notify(gracefulStop, syscall.SIGHUP)
    go func() {
        sig := <-gracefulStop
        log.Printf("Exiting from process due to %+v", sig)
        log.Println("Closing all websocket connections")
        for id, conn := range connections {
            closeConnection(id, conn)
        }
        os.Exit(0)
    }()
    r := mux.NewRouter()
    r.HandleFunc("/{id}", wsHandler)
    err := http.ListenAndServe(fmt.Sprintf(":%d", *argPort), r)
    if err != nil {
        log.Println("Could not start http server")
        log.Println(err)
    }
}
closeConnection does four things:
1. calls conn.Close()
2. sets conn to nil
3. removes the id from the map
4. calls an AWS Lambda function
The same function is called as a defer inside wsHandler, so if a client disconnects on its own, the cleanup runs in the handler.
It all works nicely, except that when I Ctrl+C the server, my closeConnection function is called twice per client: once from the graceful-stop handler and once from the wsHandler defer.
I tried checking in closeConnection whether the connection is still present in connections, but it returns true both times.
I thought the double call happened because the two paths run in different goroutines, so I replaced the for loop above with just a time.Sleep(2 * time.Second), but then nothing happens at all (the closeConnection inside the wsHandler defer is not even called).
This is what I mean:
go func() {
    sig := <-gracefulStop
    log.Printf("Exiting from process due to %+v", sig)
    log.Println("Closing all websocket connections")
    // for chargeboxIdentity, conn := range connections {
    //     chargeboxDisconnected(chargeboxIdentity, conn)
    // }
    time.Sleep(2 * time.Second)
    os.Exit(0)
}()
EDIT: Here is the closeConnection function:
func closeConnection(id string, conn *websocket.Conn) {
    _, ok := connections[id]
    log.Println(ok)
    log.Printf("%s (%s) disconnected", id, conn.RemoteAddr())
    conn.WriteMessage(websocket.CloseMessage, websocket.FormatCloseMessage(websocket.CloseNormalClosure, ""))
    time.Sleep(300 * time.Millisecond)
    conn.Close()
    conn = nil
    delete(connections, id)
    request := LambdaPayload{ID: id}
    payload, err := json.Marshal(request)
    if err != nil {
        log.Println("Could not create payload for lambda call")
        log.Println(err)
        return
    }
    _, err = client.Invoke(&lambda.InvokeInput{FunctionName: aws.String(lambdaPrefix + "MainDisconnect"), Payload: payload})
    if err != nil {
        log.Println("Disconnect Lambda returned an error")
        log.Println(err)
    }
}
EDIT: Here's the wsHandler function:
func wsHandler(w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        log.Println("Could not upgrade websocket connection")
        log.Println(err)
        return
    }
    vars := mux.Vars(r)
    if !clientConnected(vars["id"], conn) {
        return
    }
    defer closeConnection(vars["id"], conn)
    for {
        msgType, msg, err := conn.ReadMessage()
        if err != nil {
            break
        }
        log.Printf("%s sent: %s", vars["id"], string(msg))
        // ... stuff ...
    }
}
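One way to avoid the double call (a sketch, not from the original thread) is to make the cleanup idempotent: remove the id from the map under a mutex first, and only run the rest if the entry was still present, so whichever path arrives second becomes a no-op. The mutex also makes the map safe for the concurrent access that already happens between the signal goroutine and the handlers. closeConnectionOnce is a hypothetical name:

var mu sync.Mutex

// closeConnectionOnce is an idempotent variant of closeConnection:
// only the first caller for a given id performs the cleanup.
func closeConnectionOnce(id string, conn *websocket.Conn) {
    mu.Lock()
    _, ok := connections[id]
    delete(connections, id)
    mu.Unlock()
    if !ok {
        return // the other path already closed this connection
    }
    conn.WriteMessage(websocket.CloseMessage, websocket.FormatCloseMessage(websocket.CloseNormalClosure, ""))
    conn.Close()
    // ... invoke the disconnect Lambda as before ...
}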

Synchronizing a Test Server During Tests

Summary: I'm running into a race condition during testing where my server is not reliably ready to serve requests before making client requests against it. How can I block only until the listener is ready, and still maintain composable public APIs without requiring users to BYO net.Listener?
We see the following error as the goroutine that spins up our (blocking) server in the background isn't listening before we call client.Do(req) in the TestRun test function.
--- FAIL: TestRun/Server_accepts_HTTP_requests (0.00s)
/home/matt/repos/admission-control/server_test.go:64: failed to make a request: Get https://127.0.0.1:37877: dial tcp 127.0.0.1:37877: connect: connection refused
I'm not using httptest.Server directly as I'm attempting to test the blocking & cancellation characteristics of my own server component.
I create an httptest.NewUnstartedServer, clone its *tls.Config into a new http.Server after starting it with StartTLS(), and then close it, before calling *AdmissionServer.Run(). This also has the benefit of giving me a *http.Client with the matching RootCAs configured.
Testing TLS is important here as the daemon this exposes lives in a TLS-only environment.
func newTestServer(ctx context.Context, t *testing.T) *httptest.Server {
    testHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "OK")
    })
    testSrv := httptest.NewUnstartedServer(testHandler)
    admissionServer, err := NewServer(nil, &noopLogger{})
    if err != nil {
        t.Fatalf("admission server creation failed: %s", err)
        return nil
    }
    // We start the test server, copy its config out, and close it down so we can
    // start our own server. This is because httptest.Server only generates a
    // self-signed TLS config after starting it.
    testSrv.StartTLS()
    admissionServer.srv = &http.Server{
        Addr:      testSrv.Listener.Addr().String(),
        Handler:   testHandler,
        TLSConfig: testSrv.TLS.Clone(),
    }
    testSrv.Close()
    // We need a better synchronization primitive here that doesn't block
    // but allows the underlying listener to be ready before
    // serving client requests.
    go func() {
        if err := admissionServer.Run(ctx); err != nil {
            t.Fatalf("server returned unexpectedly: %s", err)
        }
    }()
    return testSrv
}
// Test that we can start a minimal AdmissionServer and handle a request.
// Test that we can start a minimal AdmissionServer and handle a request.
func TestRun(t *testing.T) {
    testSrv := newTestServer(context.TODO(), t)
    t.Run("Server accepts HTTP requests", func(t *testing.T) {
        client := testSrv.Client()
        req, err := http.NewRequest(http.MethodGet, testSrv.URL, nil)
        if err != nil {
            t.Fatalf("request creation failed: %s", err)
        }
        resp, err := client.Do(req)
        if err != nil {
            t.Fatalf("failed to make a request: %s", err)
        }
        defer resp.Body.Close()
        // Later sub-tests will test cancellation propagation, signal handling, etc.
    })
}
For posterity, this is our composable Run function, which listens in a goroutine and then blocks on our cancellation & error channels in a for-select:
type AdmissionServer struct {
    srv         *http.Server
    logger      log.Logger
    GracePeriod time.Duration
}

func (as *AdmissionServer) Run(ctx context.Context) error {
    sigChan := make(chan os.Signal, 1)
    defer close(sigChan)
    signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM)
    // run in goroutine
    errs := make(chan error)
    defer close(errs)
    go func() {
        as.logger.Log(
            "msg", fmt.Sprintf("admission control listening on '%s'", as.srv.Addr),
        )
        if err := as.srv.ListenAndServeTLS("", ""); err != nil && err != http.ErrServerClosed {
            errs <- err
            as.logger.Log(
                "err", err.Error(),
                "msg", "the server exited",
            )
            return
        }
    }()
    // Block indefinitely until we receive an interrupt, cancellation or error
    // signal.
    for {
        select {
        case sig := <-sigChan:
            as.logger.Log(
                "msg", fmt.Sprintf("signal received: %s", sig),
            )
            return as.shutdown(ctx, as.GracePeriod)
        case err := <-errs:
            as.logger.Log(
                "msg", fmt.Sprintf("listener error: %s", err),
            )
            // We don't need to explicitly call shutdown here, as
            // *http.Server.ListenAndServe closes the listener when returning an error.
            return err
        case <-ctx.Done():
            as.logger.Log(
                "msg", fmt.Sprintf("cancellation received: %s", ctx.Err()),
            )
            return as.shutdown(ctx, as.GracePeriod)
        }
    }
}
Notes:
There is a (simple) constructor for an *AdmissionServer: I've left it out for brevity. The AdmissionServer is composable and accepts a *http.Server so that it can be plugged into existing applications easily.
The wrapped http.Server type that we create a listener from doesn't itself expose any way to tell if it's listening; at best we can try to listen again and catch the error (e.g. port already bound to another listener), which does not seem robust, as the net package doesn't expose a useful typed error for this.
You can just attempt to connect to the server before starting the test suite, as part of the initialization process.
For example, I usually have a function like this in my tests:
// waitForServer attempts to establish a TCP connection to localhost:<port>
// in a given amount of time. It returns upon a successful connection;
// otherwise it exits with an error.
func waitForServer(port string) {
    backoff := 50 * time.Millisecond
    for i := 0; i < 10; i++ {
        conn, err := net.DialTimeout("tcp", ":"+port, 1*time.Second)
        if err != nil {
            time.Sleep(backoff)
            continue
        }
        err = conn.Close()
        if err != nil {
            log.Fatal(err)
        }
        return
    }
    log.Fatalf("Server on port %s not up after 10 attempts", port)
}
Then in my TestMain() I do:
func TestMain(m *testing.M) {
    go startServer()
    waitForServer(serverPort)
    // run the suite
    os.Exit(m.Run())
}
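An alternative that avoids polling entirely (a sketch, and a deliberate trade-off: the question prefers not to expose a bring-your-own-listener API, but inside a test it sidesteps the race): bind the net.Listener synchronously before spawning the serve goroutine. net.Listen returns only once the port is bound, so clients can connect immediately; httptest.Server does the same internally.

// Sketch: bind first, serve later. The listener is ready the
// moment net.Listen returns, so no retry loop is needed.
ln, err := net.Listen("tcp", "127.0.0.1:0") // :0 picks a free port
if err != nil {
    t.Fatal(err)
}
srv := &http.Server{Handler: testHandler} // testHandler as in the question
go srv.Serve(ln)
// (for the TLS case, srv.ServeTLS(ln, "", "") with srv.TLSConfig set)
// Safe to dial ln.Addr() right away.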
