I have read through the whole Redigo documentation which can be found here.
https://godoc.org/github.com/garyburd/redigo/redis#pkg-variables
The documentation clearly states that connections do not support concurrent calls to the Send(), Flush() or Receive() methods:
Connections do not support concurrent calls to the write methods
(Send, Flush) or concurrent calls to the read method (Receive).
Connections do allow a concurrent reader and writer.
And then it states that since the Do method can be a combination of Send(), Flush() and Receive(), we can't use Do() concurrently with the other methods.
Because the Do method combines the functionality of Send, Flush and
Receive, the Do method cannot be called concurrently with the other
methods.
Does this mean that we can call Do() concurrently on its own, using a single connection stored in a global variable, as long as we don't mix it with the other methods?
For example like this:
var (
    // Redis Conn.
    redisConn redis.Conn
    // Redis PubSubConn wraps a Conn with convenience methods for subscribers.
    redisPsc redis.PubSubConn
)

func redisInit() {
    c, err := redis.Dial(config.RedisProtocol, config.RedisAddress)
    if err != nil {
        log.Fatal(err)
    }
    c.Do("AUTH", config.RedisPass)
    redisConn = c

    c, err = redis.Dial(config.RedisProtocol, config.RedisAddress)
    if err != nil {
        log.Fatal(err)
    }
    c.Do("AUTH", config.RedisPass)
    redisPsc = redis.PubSubConn{Conn: c}

    for {
        switch v := redisPsc.Receive().(type) {
        case redis.Message:
            // fmt.Printf("%s: message: %s\n", v.Channel, v.Data)
            socketHub.broadcast <- v.Data
        case redis.Subscription:
            // fmt.Printf("%s: %s %d\n", v.Channel, v.Kind, v.Count)
        case error:
            log.Println(v)
        }
    }
}
And then calling the Do() method inside some goroutine like this:
if _, err = redisConn.Do("PUBLISH", fmt.Sprintf("user:%d", fromId), message); err != nil {
    log.Println(err)
}

if _, err = redisConn.Do("PUBLISH", fmt.Sprintf("user:%d", toId), message); err != nil {
    log.Println(err)
}
And then later the document says that for full concurrent access to Redis, we need to create a pool and get connections from the pool and release them when we are done with it.
Does this mean that I can use Send(), Flush() and Receive() as I want, as long as I get a connection from the pool? In other words, every time I need to do something in a goroutine, do I have to get a new connection from the pool instead of reusing a global connection? And does this mean that I can use the Do() method together with, for example, Send(), as long as I get a new connection from the pool?
So to sum up:
1) Can I use the Do() method concurrently as long as I do not use it with the Send, Flush and Receive methods?
2) Can I use everything as I want as long as I get a new connection from the pool and release it when I'm done?
3) If (1) is true, does this affect performance? Is it better to use a global connection concurrently with only the Do() method, as in the example I provided, rather than mixing it with Send, Flush and Receive?
You can have one concurrent writer and one concurrent reader. Because Do combines read and write operations, you can have at most one concurrent call to Do. To put this another way, you cannot call Do concurrently. You cannot store a connection in a global variable and call Do on it without protecting the connection with a mutex or using some other mechanism to ensure that there is no more than one concurrent caller to Do.
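If you really did want to keep a single shared connection rather than use a pool, every call would have to be serialized by hand; a minimal sketch of that mutex approach (the wrapper type and method names are made up for illustration, assuming "sync" and the redigo "redis" package are imported):

// safeConn serializes all commands on one shared connection so that
// only one caller at a time can be inside Do.
type safeConn struct {
    mu   sync.Mutex
    conn redis.Conn
}

func (s *safeConn) do(cmd string, args ...interface{}) (interface{}, error) {
    s.mu.Lock()
    defer s.mu.Unlock()
    return s.conn.Do(cmd, args...)
}

In practice, though, the pool described next is the simpler and more scalable option.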
Pools support concurrent access. The connections returned by the pool Get method follow the concurrency rules described above. To get full concurrent access to the database, the application should do the following from within a single goroutine: get a connection from the pool, execute Redis commands on the connection, and Close the connection to return its underlying resources to the pool.
Replace redisConn redis.Conn with a pool. Initialize the pool at app startup:
var redisPool *redis.Pool

...

redisPool = &redis.Pool{
    MaxIdle:     3,                 // adjust to your needs
    IdleTimeout: 240 * time.Second, // adjust to your needs
    Dial: func() (redis.Conn, error) {
        c, err := redis.Dial(config.RedisProtocol, config.RedisAddress)
        if err != nil {
            return nil, err
        }
        if _, err := c.Do("AUTH", config.RedisPass); err != nil {
            c.Close()
            return nil, err
        }
        return c, err
    },
}
Use the pool to publish to the channels:
c := redisPool.Get()

if _, err = c.Do("PUBLISH", fmt.Sprintf("user:%d", fromId), message); err != nil {
    log.Println(err)
}
if _, err = c.Do("PUBLISH", fmt.Sprintf("user:%d", toId), message); err != nil {
    log.Println(err)
}

c.Close()
Do not initialize the pool in redisInit(). There's no guarantee that redisInit() will execute before other code in the application uses the pool.
Also add a call to Subscribe or PSubscribe.
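For example, before entering the Receive loop in redisInit() you would subscribe to the channel(s) you publish to; a minimal sketch (the channel name and the userId variable are made up for illustration):

// Subscribe before blocking in the Receive loop; without this the loop
// will never see any redis.Message values.
if err := redisPsc.Subscribe(fmt.Sprintf("user:%d", userId)); err != nil {
    log.Fatal(err)
}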
Related
TL;DR - What is the proper way to close a golang.org/x/crypto/ssh session freeing all resources?
My investigation thus far:
The golang.org/x/crypto/ssh *Session has a Close() function, which calls the *Channel Close() function, which sends a close message (I'm guessing to the remote server), but I don't see anything about closing other resources, like the pipe returned from the *Session StdoutPipe() function.
Looking at the *Session Wait() code, I see that the *Session stdinPipeWriter is closed but nothing about the stdoutPipe.
This package feels a lot like the os/exec package, which guarantees that its Wait() function will clean up all the resources. Doing some light digging there shows some similarities in the Wait() functions. Both use the following construct to report errors on the io.Copy calls to their stdout, stderr and stdin readers/writers (if I'm reading this correctly, actually only one error) - the crypto package is shown:
var copyError error
for _ = range s.copyFuncs {
    if err := <-s.errors; err != nil && copyError == nil {
        copyError = err
    }
}
But the os/exec Wait() also calls this close descriptor method
c.closeDescriptors(c.closeAfterWait)
which is just calling the close method on a slice of io.Closer interfaces:
func (c *Cmd) closeDescriptors(closers []io.Closer) {
    for _, fd := range closers {
        fd.Close()
    }
}
When os/exec creates the pipe, it tracks what needs closing:
func (c *Cmd) StdoutPipe() (io.ReadCloser, error) {
    if c.Stdout != nil {
        return nil, errors.New("exec: Stdout already set")
    }
    if c.Process != nil {
        return nil, errors.New("exec: StdoutPipe after process started")
    }
    pr, pw, err := os.Pipe()
    if err != nil {
        return nil, err
    }
    c.Stdout = pw
    c.closeAfterStart = append(c.closeAfterStart, pw)
    c.closeAfterWait = append(c.closeAfterWait, pr)
    return pr, nil
}
During this I noticed that x/crypto/ssh *Session StdoutPipe() returns an io.Reader while os/exec returns an io.ReadCloser, and that x/crypto/ssh does not track what to close. I can't find a call to os.Pipe() in the library, so maybe the implementation is different and I'm missing something and am confused by the Pipe name.
A session is closed by calling Close(). There are no file descriptors involved, nor are there any calls to os.Pipe, as the "pipe" returned from Session.StdoutPipe is only a pipe in concept and is of type ssh.Channel. Go channels don't need to be closed, because closing a channel is not a cleanup operation; rather, it's simply a type of message sent to the channel. There is only ever one network connection involved in the ssh transport.
The only resource you need to close is the network connection; there are no other system resources to be freed. Calling Close() on the ssh.Client will call ssh.Conn.Close, and in turn close the net.Conn.
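Put differently, in the typical case it is enough to defer Close on the client (and on the session, for tidiness); a minimal sketch, where the address and config are placeholders:

client, err := ssh.Dial("tcp", "example.com:22", config)
if err != nil {
    return err
}
// Closing the client closes the underlying net.Conn, which is the only
// system resource held.
defer client.Close()

session, err := client.NewSession()
if err != nil {
    return err
}
defer session.Close()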
If you need to handle the network connection yourself, you can always skip the ssh.Dial convenience function and Dial the network connection yourself:
c, err := net.DialTimeout(network, addr, timeout)
if err != nil {
    return nil, err
}

conn, chans, reqs, err := ssh.NewClientConn(c, addr, config)
if err != nil {
    return nil, err
}

// calling conn.Close will close the underlying net.Conn
client := ssh.NewClient(conn, chans, reqs)
I have a number of routes in my routes.go file, and they all call my redis database. I'm wondering how I can avoid repeating the Dial and AUTH calls in every route.
I've tried setting variables outside the functions like this:
var (
    c, err = redis.Dial("tcp", ADDRESS)
    _, err = c.Do("AUTH", "testing")
)
but then the compiler doesn't like err being used twice.
First, only use var for declaring variables. You can't run code outside of functions, so there's no use in trying to create connections inside a var statement. Use init() if you need something run at startup.
The redis connections can't be used with concurrent requests. If you want to share a redis connection across multiple routes, you need to have a safe method for concurrent use. In the case of github.com/garyburd/redigo/redis you want to use a Pool. You can do the AUTH call inside the Dial function, returning a ready connection each time.
var redisPool *redis.Pool

func init() {
    redisPool = &redis.Pool{
        MaxIdle:     3,
        IdleTimeout: 240 * time.Second,
        Dial: func() (redis.Conn, error) {
            c, err := redis.Dial("tcp", server)
            if err != nil {
                return nil, err
            }
            if _, err := c.Do("AUTH", password); err != nil {
                c.Close()
                return nil, err
            }
            return c, err
        },
    }
}
Then each time you need a connection, you get one from the pool, and return it when you're done.
conn := redisPool.Get()
// conn.Close() just returns the connection to the pool
defer conn.Close()

if err := conn.Err(); err != nil {
    // conn.Err() will have connection or Dial related errors
    return nil, err
}
What I would do is instantiate a connection pool in main.go and pass the reference to the pool to your routes. This way you are setting up your redis client once, and your routes can leverage it.
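A minimal sketch of that wiring (the handler and type names are made up for illustration, and newPool() stands in for a pool constructor like the ones shown elsewhere on this page):

// server holds shared dependencies such as the redis pool.
type server struct {
    pool *redis.Pool
}

func (s *server) handleRoute(w http.ResponseWriter, r *http.Request) {
    conn := s.pool.Get()
    defer conn.Close()
    // use conn.Do(...) here
}

func main() {
    srv := &server{pool: newPool()}
    http.HandleFunc("/route", srv.handleRoute)
    log.Fatal(http.ListenAndServe(":8080", nil))
}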
I created a decorator around redigo that makes creating a Redis Client with a Connection pool very straightforward. Plus it is type-safe.
You can check it out here: https://github.com/shomali11/xredis
Connecting to Redis with Redigo and manipulating data inside a single function is easy, but the problem comes when you have to re-use the connection, obviously for performance/practicality reasons.
Doing it inside a function like this works:
func main() {
    client, err := redis.Dial("tcp", ":6379")
    if err != nil {
        panic(err)
    }
    defer client.Close()

    client.Do("GET", "test:1")
}
But bringing it outside doesn't:
var Client = redis.Dial("tcp", ":6379")
defer Client.Close()

func main() {
    Client.Do("GET", "test:1")
}
With the following error(s) returned:
./main.go:1: multiple-value redis.Dial() in single-value context
./main.go:2: non-declaration statement outside function body
I've tried declaring the connection as a const(ant), and putting the defer inside the main function, but to my dismay neither works.
This is an even bigger concern as I have many other functions that have to communicate with Redis, and recreating the connection to Redis every time seems silly.
The Redigo documentation just shows how to Dial a connection but doesn't go further by explaining how to re-use it.
You may have gotten lost in my explanation, but I wanted to give a bit of context here, so my clear and concise question is: how do you go about re-using (not recreating every time) a Redigo connection?
The best way turned out to be using Pools, which are briefly documented here: Redigo Pools.
Simply storing the connection in a global variable won't properly reuse it, so I ended up with something like this (using Pools as noted before):
func newPool() *redis.Pool {
    return &redis.Pool{
        MaxIdle:   80,
        MaxActive: 12000, // max number of connections
        Dial: func() (redis.Conn, error) {
            c, err := redis.Dial("tcp", ":6379")
            if err != nil {
                panic(err.Error())
            }
            return c, err
        },
    }
}
var pool = newPool()

func main() {
    c := pool.Get()
    defer c.Close()

    test, _ := c.Do("HGETALL", "test:1")
    fmt.Println(test)
}
If, for example, you want to reuse the pool inside another function, you do it like this:
func test() {
    c := pool.Get()
    defer c.Close()

    test2, _ := c.Do("HGETALL", "test:2")
    fmt.Println(test2)
}
The redis.Dial() method returns a client and an error. To fix it, you should replace:
var Client = redis.Dial("tcp", ":6379")
with:
var Client, _ = redis.Dial("tcp", ":6379")
I have been playing around with golang and redis. I just stood up a simple HTTP server and wanted to increment a request count in redis. I am blowing up the connections (I think). I found that with redigo you can use connection pooling, but I'm not sure how to implement that in Go when I am serving the requests (where do you instantiate / call the pool from?).
error: can't assign requested address.
Any suggestions would be appreciated... I am sure I am making the connections incorrectly, but I'm just not sure how to change it.
EDIT: Modified per pauljz's suggestions -- Works great now
var pool redis.Pool

func qryJson(rw http.ResponseWriter, req *http.Request) {
    incrementRedis()
}

func incrementRedis() {
    t := time.Now().Format("2006-01-02 15:04:05")

    conn := pool.Get()
    defer conn.Close()

    if _, err := conn.Do("HINCRBY", "messages", t, 1); err != nil {
        log.Fatal(err)
    }
}

func main() {
    runtime.GOMAXPROCS(runtime.NumCPU())

    pool = redis.Pool{
        MaxIdle:   50,
        MaxActive: 500, // max number of connections
        Dial: func() (redis.Conn, error) {
            c, err := redis.Dial("tcp", ":6379")
            if err != nil {
                panic(err.Error())
            }
            return c, err
        },
    }

    http.HandleFunc("/testqry", qryJson)
    log.Fatal(http.ListenAndServe(":8082", nil))
}
The redigo docs have a good starter example for connection pooling: http://godoc.org/github.com/garyburd/redigo/redis#Pool
In your case you would have a var pool redis.Pool in your package (i.e. not inside of a function).
In main(), before your ListenAndServe call, you would call pool = redis.Pool{ ... } from the redigo example to initialize the pool.
In incrementRedis() you would then do something like:
func incrementRedis() {
    conn := pool.Get()
    defer conn.Close()

    if _, err := conn.Do("HINCRBY", "messages", t, 1); err != nil {
        log.Fatal(err)
    }
}
In your code, you create a connection to redis for each HTTP request. Use a global variable to store the connected redis connection and reuse it.
I have been playing with Go lately and am trying to make a server which responds to clients over a TCP connection.
My question is: how do I cleanly shut down the server and interrupt the goroutine which is currently "blocked" in the following call?
func (*TCPListener) Accept
According to the documentation of Accept
Accept implements the Accept method in the Listener interface; it waits for the next call and returns a generic Conn.
The errors are also very scarcely documented.
Simply Close() the net.Listener you get from the net.Listen(...) call and return from the executing goroutine.
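A minimal sketch of that pattern (the quit channel is made up for illustration):

ln, err := net.Listen("tcp", ":8080")
if err != nil {
    log.Fatal(err)
}

go func() {
    <-quit     // whatever signals shutdown in your program
    ln.Close() // unblocks the Accept call below
}()

for {
    conn, err := ln.Accept()
    if err != nil {
        // Accept returns an error once the listener has been closed.
        return
    }
    go handleConnection(conn)
}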
TCPListener Deadline
You don't necessarily need an extra goroutine (one that keeps accepting); simply specify a Deadline.
for example:
for {
    // Check if someone wants to interrupt accepting
    select {
    case <-someoneWantsToEndMe:
        return // runs into "defer listener.Close()"
    default:
        // nothing to do
    }

    // Accept with Deadline
    listener.SetDeadline(time.Now().Add(1 * time.Second))
    conn, err := listener.Accept()
    if err != nil {
        // TODO: Could do some err checking (to be sure it is a timeout), but for brevity
        continue
    }
    go handleConnection(conn)
}
Here is what I was looking for. Maybe it helps someone in the future.
Notice the use of select and the "c" channel to combine it with the exit channel.
ln, err := net.Listen("tcp", ":8080")
if err != nil {
    // handle error
}
defer ln.Close()

for {
    type accepted struct {
        conn net.Conn
        err  error
    }
    c := make(chan accepted, 1)

    go func() {
        conn, err := ln.Accept()
        c <- accepted{conn, err}
    }()

    select {
    case a := <-c:
        if a.err != nil {
            // handle error
            continue
        }
        go handleConnection(a.conn)
    case <-ev:
        // handle exit event
        return
    }
}