Connections stuck at CLOSE_WAIT in golang server

I am using gorilla/mux to create a Go server that exposes a simple health GET endpoint.
The endpoint responds with a status of "ok" whenever the server is up.
I see a lot of connections (over 400) in CLOSE_WAIT state on one system.
This does not happen on other systems with the same code.
Output of netstat (9003 is my server port):
tcp 164 0 ::1:9003 ::1:60702 CLOSE_WAIT -
tcp 164 0 ::1:9003 ::1:44472 CLOSE_WAIT -
tcp 164 0 ::1:9003 ::1:31504 CLOSE_WAIT -
This seems to imply that I have connections I need to close.
Most of the questions I read online suggest that lingering connections are caused by the client not calling resp.Body.Close() after a GET.
As per https://blog.cloudflare.com/the-complete-guide-to-golang-net-http-timeouts/, I could add read/write timeouts on the server side, but I would like to understand the root cause of the CLOSE_WAITs before adding those improvements.
Am I missing any close on the server side?
My code is below:
import "github.com/gorilla/mux"
...
func (server *Srvr) healthHandler(w http.ResponseWriter, r *http.Request) {
    resp := map[string]string{"status": "ok"}
    respJSON, err := json.Marshal(resp)
    if err != nil {
        w.WriteHeader(http.StatusInternalServerError)
        fmt.Fprintf(w, "Error creating JSON response %s", err)
        return
    }
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(http.StatusOK)
    w.Write(respJSON)
}
// Load initializes the servers
func Load(port string) *Srvr {
    srvrPort := ":" + port
    log.Infof("Will listen on port %s", srvrPort)
    serverMux := mux.NewRouter()
    srvr := &Srvr{Port: port, Srv: &http.Server{Addr: srvrPort, Handler: serverMux}}
    serverMux.HandleFunc("/api/v1.0/health", srvr.healthHandler).Methods("GET")
    return srvr
}
// Run starts the server
func (server *Srvr) Run() {
    log.Info("Starting the server")
    // Starting a server this way to allow for shutdown.
    // https://stackoverflow.com/questions/39320025/how-to-stop-http-listenandserve
    err := server.Srv.ListenAndServe()
    if err != http.ErrServerClosed {
        log.Fatalf("ListenAndServe(): %s", err)
    }
}
// Main resides outside the server package
func main() {
    srvr := server.Load("9003")
    // Now that all setup is done successfully, let's start the server
    go srvr.Run()
    // An unrelated forever loop executes below for different business logic
    for {
        glog.Info("Evaluation iteration begins now")
        ...
        time.Sleep(time.Duration(evalFreq) * time.Minute)
    }
}
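For reference, the read/write timeouts from the Cloudflare post linked above would look roughly like this in Load (a sketch; the durations are illustrative, and I know this does not replace finding the root cause):
srvr := &Srvr{Port: port, Srv: &http.Server{
    Addr:         srvrPort,
    Handler:      serverMux,
    ReadTimeout:  5 * time.Second,   // max time to read the request, including the body
    WriteTimeout: 10 * time.Second,  // max time to write the response
    IdleTimeout:  120 * time.Second, // close idle keep-alive connections after this
}}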

Related

golang: serving net.Conn using a gin router

I have a function that handles an incoming TCP connection:
func Handle(conn net.Conn) error {
    // ...
}
Also, I have an initialized gin router with implemented handles:
router := gin.New()
router.GET(...)
router.POST(...)
The router.Run(addr) call will start a separate HTTP server on the addr.
Is there any way to handle incoming connections inside the Handle function using this router without running a separate HTTP server?
Create a net.Listener implementation that accepts connections by receiving on a channel:
type listener struct {
    ch   chan net.Conn
    addr net.Addr
}

// newListener creates a channel listener. The addr argument
// is the listener's network address.
func newListener(addr net.Addr) *listener {
    return &listener{
        ch:   make(chan net.Conn),
        addr: addr,
    }
}

func (l *listener) Accept() (net.Conn, error) {
    c, ok := <-l.ch
    if !ok {
        return nil, errors.New("closed")
    }
    return c, nil
}

func (l *listener) Close() error { return nil }
func (l *listener) Addr() net.Addr { return l.addr }
Handle connections by sending to the channel:
func (l *listener) Handle(c net.Conn) error {
    l.ch <- c
    return nil
}
Here's how to tie it all together:
Create the listener:
s := newListener(someAddr)
Configure the Gin engine as usual.
router := gin.New()
router.GET(...)
router.POST(...)
Run the net/http server in a goroutine using the channel listener and the Gin engine:
go func() {
    err := http.Serve(s, router)
    if err != nil {
        // handle error
    }
}()
In your dispatching code, call s.Handle(c) to pass a connection to the net/http server and on to the Gin engine.
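To make the dispatching step concrete, here is a minimal sketch (the dispatch function and the raw net.Listener it reads from are illustrative, not part of the answer above):
// dispatch accepts connections from any listener you manage yourself and hands
// each one to the channel listener s, which passes it to the net/http server
// and on to the Gin engine.
func dispatch(raw net.Listener, s *listener) error {
    for {
        c, err := raw.Accept()
        if err != nil {
            return err
        }
        if err := s.Handle(c); err != nil {
            return err
        }
    }
}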
For those who have a similar task (handling TCP connections from multiple ports with a single router), here's the workaround I eventually found. Instead of running an HTTP server on a port, I run it on a UNIX socket using router.RunUnix(socketName). The full solution consists of three steps:
Run an HTTP server that listens on a UNIX socket using router.RunUnix(socketName).
Inside the Handle function, read the incoming bytes from the connection and send them to the socket. After that, read the HTTP response from the socket and write it into the connection (see the sketch below).
Close the connection.
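A minimal sketch of steps 2 and 3, assuming the router was started with router.RunUnix("/tmp/api.sock") (the socket path and error handling are illustrative; the imports needed are io and net):
func Handle(conn net.Conn) error {
    defer conn.Close() // step 3: close the connection when done

    // Dial the UNIX socket the Gin engine is listening on.
    sock, err := net.Dial("unix", "/tmp/api.sock")
    if err != nil {
        return err
    }
    defer sock.Close()

    // Step 2: forward the raw request bytes to the socket and stream the
    // HTTP response from the socket back into the original connection.
    go io.Copy(sock, conn)
    _, err = io.Copy(conn, sock)
    return err
}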

Memcached Ping() doesn't return an error on an invalid server

I use memcached for caching and the client I use is https://github.com/bradfitz/gomemcache. When I initiate a new client with a dummy/invalid server address and then ping it, no error is returned.
package main

import (
    "fmt"

    m "github.com/bradfitz/gomemcache/memcache"
)

func main() {
    o := m.New("dummy_adress")
    fmt.Println(o.Ping()) // returns no error
}
I think it is supposed to return an error since the server is invalid. What am I missing?
It looks like the New() call ignores the return value for SetServers:
func New(server ...string) *Client {
    ss := new(ServerList)
    ss.SetServers(server...)
    return NewFromSelector(ss)
}
The SetServers() function will only set the server list to valid servers (in your case: no servers), and the Ping() function will only ping servers that are set; since no servers are set, it doesn't really do anything.
This is arguably a feature: if you have 4 servers and one is down, that's not really an issue, and even with just one server memcached is generally optional.
You can duplicate the New() logic with an error check:
ss := new(memcache.ServerList)
err := ss.SetServers("example.localhost:11211")
if err != nil {
    panic(err)
}

c := memcache.NewFromSelector(ss)
err = c.Ping()
if err != nil {
    panic(err)
}
Which gives:
panic: dial tcp 127.0.0.1:11211: connect: connection refused

How to release a websocket and redis gateway server resource in golang?

I have a gateway server that pushes messages to the client side over websockets. When a new client connects to my server, I generate a cid for it and subscribe to a Redis channel named after that cid. Whenever a message is published to that channel, my server pushes it to the client side. Each unit works fine on its own, but when I run a benchmark test with thor, the server crashes. I have found that DeliverMessage is the problem: it never exits because it loops forever, and since Redis needs something to stay subscribed I don't know how to avoid the loop.
func (h *Hub) DeliverMessage(pool *redis.Pool) {
    conn := pool.Get()
    defer conn.Close()
    var gPubSubConn *redis.PubSubConn
    gPubSubConn = &redis.PubSubConn{Conn: conn}
    defer gPubSubConn.Close()

    for {
        switch v := gPubSubConn.Receive().(type) {
        case redis.Message:
            // fmt.Printf("Channel=%q | Data=%s\n", v.Channel, string(v.Data))
            h.Push(string(v.Data))
        case redis.Subscription:
            fmt.Printf("Subscription message: %s : %s %d\n", v.Channel, v.Kind, v.Count)
        case error:
            fmt.Println("Error pub/sub, delivery has stopped", v)
            panic("Error pub/sub")
        }
    }
}
In the main function, I call the above function as:
go h.DeliverMessage(pool)
But when I test it with a huge number of connections, I get an error like:
ERR max number of clients reached
So I changed the Redis pool size by increasing MaxIdle:
func newPool(addr string) *redis.Pool {
    return &redis.Pool{
        MaxIdle:     5000,
        IdleTimeout: 240 * time.Second,
        Dial:        func() (redis.Conn, error) { return redis.Dial("tcp", addr) },
    }
}
But it still doesn't work, so I would like to know: is there a good way to kill the subscribing goroutine once a websocket client has disconnected from my server, in the select branch below?
case client := <-h.Unregister:
    if _, ok := h.Clients[client]; ok {
        delete(h.Clients, client)
        delete(h.Connections, client.CID)
        close(client.Send)
        if err := gPubSubConn.Unsubscribe(client.CID); err != nil {
            panic(err)
        }
        // TODO: kill the subscribe goroutine once the client side has disconnected ...
    }
But how do I identify this goroutine? Can I kill it the Unix way, like kill -9 <PID>?
Look at the example here
You can make your goroutine exit by having a return statement inside your switch case in DeliverMessage, once you're not going to receive anything more. I'm guessing you'd want to return from case error (or, as in the example, when the subscription count drops to 0), and your goroutine will end. Or, if I'm misunderstanding things and case client := <-h.Unregister: is inside DeliverMessage, just return there.
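A minimal sketch of that suggestion, adapted from the DeliverMessage loop above (the zero-count check follows the redigo pub/sub example):
for {
    switch v := gPubSubConn.Receive().(type) {
    case redis.Message:
        h.Push(string(v.Data))
    case redis.Subscription:
        if v.Count == 0 {
            // No active subscriptions left: let the goroutine return.
            return
        }
    case error:
        fmt.Println("Error pub/sub, delivery has stopped", v)
        return
    }
}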
You're also closing your connection twice. defer gPubSubConn.Close() simply calls conn.Close() so you don't need defer conn.Close()
Also take a look at Pool and what all the parameters actually do. If you want to handle many connections, set MaxActive to 0: "When zero, there is no limit on the number of connections in the pool." (And do you actually want the idle timeout?)
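For example, a pool along those lines might look like this (a sketch; the MaxIdle value and the choice to drop IdleTimeout are illustrative):
func newPool(addr string) *redis.Pool {
    return &redis.Pool{
        MaxIdle:   80, // illustrative; tune to your workload
        MaxActive: 0,  // 0 = no limit on the number of connections in the pool
        // IdleTimeout omitted: idle connections are not reaped
        Dial: func() (redis.Conn, error) { return redis.Dial("tcp", addr) },
    }
}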
Actually, I had the wrong design architecture. Let me explain what I want to do.
1. A client can connect to my websocket server.
2. The server has several HTTP handlers, and the admin can post data via a handler. The structure of the data looks like:
{
    "cid": "something",
    "body": {
    }
}
Since I have several Nodes running to serve our clients, Nginx can dispatch each admin request to a totally different Node, but only one Node holds the connection for the cid "something". So I need to publish the data to Redis, and whichever Node holds that connection sends the message on to the client side.
3. Look up the NodeID I am going to publish to, given a cid:
// redis code & golang
NodeID, err := conn.Do("HGET", "NODE_MAP", cid)
4. Now I can publish any message from the admin to the NodeID obtained in step 3:
// redis code & golang
_, err := conn.Do("PUBLISH", NodeID, data)
5. Time for the core code related to this question. I subscribe to a channel whose name is NodeID, like the following:
go func() {
    for {
        switch v := gPubSubConn.Receive().(type) {
        case redis.Message:
            fmt.Println("Got a message", v.Data)
            h.Broadcast <- v.Data
            pipeline <- v.Data
        case error:
            panic(v)
        }
    }
}()
6. To manage the websocket connections, you also need a goroutine, like the following:
go func() {
    for {
        select {
        case client := <-h.Register:
            h.Clients[client] = true
            cid := client.CID
            h.Connections[cid] = client
            body := "something"
            client.Send <- body // greeting
        case client := <-h.Unregister:
            if _, ok := h.Clients[client]; ok {
                delete(h.Clients, client)
                delete(h.Connections, client.CID)
                close(client.Send)
            }
        case message := <-h.Broadcast:
            fmt.Println("message is", message)
        }
    }
}()
The last thing is to manage a Redis pool. You don't really need a big connection pool right now, since there are only two goroutines and the main process:
func newPool(addr string) *redis.Pool {
    return &redis.Pool{
        MaxIdle:     100,
        IdleTimeout: 240 * time.Second,
        Dial:        func() (redis.Conn, error) { return redis.Dial("tcp", addr) },
    }
}

var (
    pool        *redis.Pool
    redisServer = flag.String("redisServer", ":6379", "")
)

pool = newPool(*redisServer)
conn := pool.Get()
defer conn.Close()

Redis instance missing cache often using go-lang client redigo

I'm developing an API for blogs and online publishing websites to provide a recommendation engine for their content.
Since my API returns the same JSON for the same URL request, I decided to use Redis as a cache for high-traffic websites, passing the URL as the key and the JSON as the value. I am developing this API in Go and have been using redigo to talk to our Redis instance. The way I architected my system is to check the URL of the query sent by the client (blog) and look it up in Redis. If the URL's response is not cached, I do a 301 redirect to another API that applies the logic to generate the JSON response for that particular URL and also sets the Redis cache.
However, while testing whether Redis is working properly, I realised that it is missing the cache far more often than I would like. It is definitely caching the JSON response mapped to the URL, as confirmed by a simple GET in redis-cli, but after 3-4 hits I can see Redis missing the cache again. I'm still very new to Go and the caching world, so I'm not sure if I'm missing something in my implementation. I would also like to know under what circumstances a Redis instance can miss caches. It can't be a timeout, because the Redis docs say "By default recent versions of Redis don't close the connection with the client if the client is idle for many seconds: the connection will remain open forever." So I'm not sure what exactly is happening with my setup. The relevant part of my code is below:
package main

import (
    "flag"
    "fmt"
    "github.com/garyburd/redigo/redis"
    "log"
    "net/http"
    "time"
)

var (
    port          int
    folder        string
    pool          *redis.Pool
    redisServer   = flag.String("redisServer", "redisip:22121", "")
    redisPassword = flag.String("redisPassword", "", "")
)

func init() {
    flag.IntVar(&port, "port", 80, "HTTP Server Port")
    flag.StringVar(&folder, "folder", "www", "Serve this folder")
}

func newPool(server, password string) *redis.Pool {
    return &redis.Pool{
        MaxIdle:     3,
        MaxActive:   25000,
        IdleTimeout: 30 * time.Second,
        Dial: func() (redis.Conn, error) {
            c, err := redis.Dial("tcp", server)
            if err != nil {
                return nil, err
            }
            return c, err
        },
        TestOnBorrow: func(c redis.Conn, t time.Time) error {
            _, err := c.Do("PING")
            return err
        },
    }
}

func main() {
    flag.Parse()
    pool = newPool(*redisServer, *redisPassword)
    httpAddr := fmt.Sprintf(":%v", port)
    log.Printf("Listening to %v", httpAddr)
    http.HandleFunc("/api", api)
    http.Handle("/static/", http.StripPrefix("/static/", http.FileServer(http.Dir(folder))))
    log.Fatal(http.ListenAndServe(httpAddr, nil))
}
func api(w http.ResponseWriter, r *http.Request) {
    link := r.URL.Query().Get("url")
    fmt.Println(link)
    heading := r.URL.Query().Get("heading")
    conn := pool.Get()
    defer conn.Close()
    reply, err := redis.String(conn.Do("GET", link))
    if err != nil {
        fmt.Printf("Error for link %v: %v\n", heading, err)
        http.Redirect(w, r, "json-producing-api", 301)
        return
    }
    fmt.Fprint(w, reply)
}
I must also mention here that the Redis "instance" in the code above is actually twemproxy, a proxy built by Twitter, which fronts three Redis servers running behind it on three different ports. Everything seemed to work normally yesterday, and I ran a successful load test with 5k concurrent requests. However, when I checked the log today, some queries were being missed by Redis and redirected to my json-producing-api, and I could see the redigo: nil error. I'm totally confused as to what exactly is going wrong. Any help will be greatly appreciated.
EDIT: As per the discussion below, here is the code I use to set data in Redis:
func SetIntoRedis(key string, value string) bool {
    // returns true if successfully set, returns false in case of an error
    conn := pool.Get()
    defer conn.Close() // return the connection to the pool
    _, err := conn.Do("SET", key, value)
    if err != nil {
        log.Printf("Error Setting %v : %v", key, err)
        return false
    }
    return true
}
Configuration of my twemproxy client
leaf:
  listen: 0.0.0.0:22121
  hash: fnv1a_64
  distribution: ketama
  redis: true
  auto_eject_hosts: true
  server_retry_timeout: 3000
  server_failure_limit: 3
  servers:
   - 127.0.0.1:6379:1
   - 127.0.0.1:6380:1
   - 127.0.0.1:6381:1

RPC from both client and server in Go

Is it actually possible to make RPC calls from a server to a client with the net/rpc package in Go? If not, is there a better solution out there?
I am currently using thrift (thrift4go) for server->client and client->server RPC functionality. By default, thrift does only client->server calls just like net/rpc. As I also required server->client communication, I did some research and found bidi-thrift. Bidi-thrift explains how to connect a java server + java client to have bidirectional thrift communication.
What bidi-thrift does, and its limitations
A TCP connection has an incoming and an outgoing communication line (RX and TX). The idea of bidi-thrift is to split RX and TX and provide them to a server (processor) and a client (remote) on both the client application and the server application. I found this to be hard to do in Go. Also, this way no "response" is possible (the response line is in use). Therefore, all methods in the services must be "oneway void" (fire and forget; a call gives no result).
The solution
I changed the idea of bidi-thrift and made the client open two connections to the server, A and B. The first connection (A) is used for client -> server communication (where the client makes the calls, as usual). The second connection (B) is 'hijacked' and connected to a server (processor) on the client, while it is connected to a client (remote) on the server. I've got this working with a Go server and a Java client, and it works very well: it's fast and reliable, just like normal thrift.
Some source code. The B connection (server->client) is set up like this:
Go server
// factories
framedTransportFactory := thrift.NewTFramedTransportFactory(thrift.NewTTransportFactory())
protocolFactory := thrift.NewTBinaryProtocolFactoryDefault()

// create socket listener
addr, err := net.ResolveTCPAddr("tcp", "127.0.0.1:9091")
if err != nil {
    log.Print("Error resolving address: ", err.Error(), "\n")
    return
}
serverTransport, err := thrift.NewTServerSocketAddr(addr)
if err != nil {
    log.Print("Error creating server socket: ", err.Error(), "\n")
    return
}

// Start the server to listen for connections
log.Print("Starting the server for B communication (server->client) on ", addr, "\n")
err = serverTransport.Listen()
if err != nil {
    log.Print("Error during B server: ", err.Error(), "\n")
    return //err
}

// Accept new connections and handle those
for {
    transport, err := serverTransport.Accept()
    if err != nil {
        return //err
    }
    if transport != nil {
        // Each transport is handled in a goroutine so the server is available again.
        go func() {
            useTransport := framedTransportFactory.GetTransport(transport)
            client := worldclient.NewWorldClientClientFactory(useTransport, protocolFactory)
            // That's it!
            // Let's do something with the connection
            result, err := client.Hello()
            if err != nil {
                log.Printf("Error when calling Hello on client: %s\n", err)
            }
            log.Printf("Hello result: %v\n", result)
            // client.CallSomething()
        }()
    }
}
Java client
// preparations for B connection
TTransportFactory transportFactory = new TTransportFactory();
TProtocolFactory protocolFactory = new TBinaryProtocol.Factory();
YourService.Processor<YourServiceProcessor> processor = new YourService.Processor<YourServiceProcessor>(new YourServiceProcessor(this));

/* Create thrift connection for B calls (server -> client) */
try {
    // create the transport
    final TTransport transport = new TSocket("127.0.0.1", 9091);
    // open the transport
    transport.open();
    // add framing to the transport layer
    final TTransport framedTransport = new TFramedTransport(transportFactory.getTransport(transport));
    // connect framed transports to protocols
    final TProtocol protocol = protocolFactory.getProtocol(framedTransport);
    // let the processor handle the requests in a new Thread
    new Thread() {
        public void run() {
            try {
                while (processor.process(protocol, protocol)) {}
            } catch (TException e) {
                e.printStackTrace();
            } catch (NullPointerException e) {
                e.printStackTrace();
            }
        }
    }.start();
} catch (Exception e) {
    e.printStackTrace();
}
I came across rpc2 which implements it. An example:
Server.go
// server.go
package main

import (
    "fmt"
    "net"

    "github.com/cenkalti/rpc2"
)

type Args struct{ A, B int }
type Reply int

func main() {
    srv := rpc2.NewServer()
    srv.Handle("add", func(client *rpc2.Client, args *Args, reply *Reply) error {
        // Reversed call (server to client)
        var rep Reply
        client.Call("mult", Args{2, 3}, &rep)
        fmt.Println("mult result:", rep)

        *reply = Reply(args.A + args.B)
        return nil
    })

    lis, _ := net.Listen("tcp", "127.0.0.1:5000")
    srv.Accept(lis)
}
Client.go
// client.go
package main

import (
    "fmt"
    "net"

    "github.com/cenkalti/rpc2"
)

type Args struct{ A, B int }
type Reply int

func main() {
    conn, _ := net.Dial("tcp", "127.0.0.1:5000")

    clt := rpc2.NewClient(conn)
    clt.Handle("mult", func(client *rpc2.Client, args *Args, reply *Reply) error {
        *reply = Reply(args.A * args.B)
        return nil
    })
    go clt.Run()

    var rep Reply
    clt.Call("add", Args{5, 2}, &rep)
    fmt.Println("add result:", rep)
}
RPC is a (remote) service. Whenever a computer requests a remote service, it is acting as a client asking a server to provide that service. Within this "definition", the concept of a server making RPC calls to a client has no well-defined meaning.
