I have a number of routes in my routes.go file and they all call my redis database. I'm wondering how I can avoid repeating the Dial and AUTH calls in every route.
I've tried setting variables outside the functions like this:
var (
    c, err = redis.Dial("tcp", ADDRESS)
    _, err = c.Do("AUTH", "testing")
)
but then the compiler doesn't like err being used twice.
First, use var only for declaring variables. You can't run arbitrary statements outside of functions, so there's no use in trying to create connections inside a var block (and you can't assign to err twice in the same block anyway). Use init() if you need something to run at startup.
The redis connections can't be used with concurrent requests as-is. If you want to share redis access across multiple routes, you need a method that is safe for concurrent use. In the case of github.com/garyburd/redigo/redis you want to use a Pool. You can do the AUTH call inside the pool's Dial function, so it returns a ready, authenticated connection each time.
var redisPool *redis.Pool

func init() {
    redisPool = &redis.Pool{
        MaxIdle:     3,
        IdleTimeout: 240 * time.Second,
        Dial: func() (redis.Conn, error) {
            c, err := redis.Dial("tcp", server)
            if err != nil {
                return nil, err
            }
            if _, err := c.Do("AUTH", password); err != nil {
                c.Close()
                return nil, err
            }
            return c, err
        },
    }
}
Then each time you need a connection, you get one from the pool, and return it when you're done.
conn := redisPool.Get()
// conn.Close() just returns the connection to the pool
defer conn.Close()

if err := conn.Err(); err != nil {
    // conn.Err() will have connection or Dial related errors
    return nil, err
}
What I would do is instantiate a connection pool in main.go and pass the reference to the pool to your routes. This way you are setting up your redis client once, and your routes can leverage it.
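For instance, a minimal sketch of that wiring might look like the following (the route name, handler, address and password here are placeholders, not from the question):

package main

import (
    "net/http"
    "time"

    "github.com/garyburd/redigo/redis"
)

// newPool builds the pool once; AUTH happens inside Dial so every
// connection handed out is ready to use.
func newPool(addr, password string) *redis.Pool {
    return &redis.Pool{
        MaxIdle:     3,
        IdleTimeout: 240 * time.Second,
        Dial: func() (redis.Conn, error) {
            c, err := redis.Dial("tcp", addr)
            if err != nil {
                return nil, err
            }
            if _, err := c.Do("AUTH", password); err != nil {
                c.Close()
                return nil, err
            }
            return c, nil
        },
    }
}

// pingHandler closes over the pool; each request borrows a connection and
// returns it when the handler finishes.
func pingHandler(pool *redis.Pool) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        conn := pool.Get()
        defer conn.Close() // puts the connection back into the pool
        if _, err := conn.Do("PING"); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.Write([]byte("ok"))
    }
}

func main() {
    pool := newPool("localhost:6379", "testing")
    http.HandleFunc("/ping", pingHandler(pool))
    http.ListenAndServe(":8080", nil)
}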
I created a decorator around redigo that makes creating a Redis Client with a Connection pool very straightforward. Plus it is type-safe.
You can check it out here: https://github.com/shomali11/xredis
I am writing a microservice in golang for an MQTT module. This module will be used by different functions at the same time. I am using gRPC as the transport layer.
I have written a Connect function, which looks like this:
func Connect() { // it would be Connect(payload1 struct, topic string)
    deviceID := flag.String("device", "handler-1", "GCP Device-Id")
    bridge := struct {
        host *string
        port *string
    }{
        flag.String("mqtt_host", "", "MQTT Bridge Host"),
        flag.String("mqtt_port", "", "MQTT Bridge Port"),
    }
    projectID := flag.String("project", "", "GCP Project ID")
    registryID := flag.String("registry", "", "Cloud IoT Registry ID (short form)")
    region := flag.String("region", "", "GCP Region")
    certsCA := flag.String("ca_certs", "", "Download https://pki.google.com/roots.pem")
    privateKey := flag.String("private_key", "", "Path to private key file")

    server := fmt.Sprintf("ssl://%v:%v", *bridge.host, *bridge.port)
    topic := struct {
        config    string
        telemetry string
    }{
        config:    fmt.Sprintf("/devices/%v/config", *deviceID),
        telemetry: fmt.Sprintf("/devices/%v/events/topic", *deviceID),
    }
    qos := flag.Int("qos", 0, "The QoS to subscribe to messages at")
    clientid := fmt.Sprintf("projects/%v/locations/%v/registries/%v/devices/%v",
        *projectID,
        *region,
        *registryID,
        *deviceID,
    )

    log.Println("[main] Loading Google's roots")
    certpool := x509.NewCertPool()
    pemCerts, err := ioutil.ReadFile(*certsCA)
    if err == nil {
        certpool.AppendCertsFromPEM(pemCerts)
    }

    log.Println("[main] Creating TLS Config")
    config := &tls.Config{
        RootCAs:            certpool,
        ClientAuth:         tls.NoClientCert,
        ClientCAs:          nil,
        InsecureSkipVerify: true,
        Certificates:       []tls.Certificate{},
        MinVersion:         tls.VersionTLS12,
    }

    flag.Parse()

    connOpts := MQTT.NewClientOptions().
        AddBroker(server).
        SetClientID(clientid).
        SetAutoReconnect(true).
        SetPingTimeout(10 * time.Second).
        SetKeepAlive(10 * time.Second).
        SetDefaultPublishHandler(onMessageReceived).
        SetConnectionLostHandler(connLostHandler).
        SetReconnectingHandler(reconnHandler).
        SetTLSConfig(config)
    connOpts.SetUsername("unused")

    // JWT Generation starts from here
    token := jwt.New(jwt.SigningMethodES256)
    token.Claims = jwt.StandardClaims{
        Audience:  *projectID,
        IssuedAt:  time.Now().Unix(),
        ExpiresAt: time.Now().Add(24 * time.Hour).Unix(),
    }

    // Reading key file
    log.Println("[main] Load Private Key")
    keyBytes, err := ioutil.ReadFile(*privateKey)
    if err != nil {
        log.Fatal(err)
    }

    // Parsing key from file
    log.Println("[main] Parse Private Key")
    key, err := jwt.ParseECPrivateKeyFromPEM(keyBytes)
    if err != nil {
        log.Fatal(err)
    }

    // Signing JWT with private key
    log.Println("[main] Sign String")
    tokenString, err := token.SignedString(key)
    if err != nil {
        log.Fatal(err)
    }
    // JWT Generation ends here

    connOpts.SetPassword(tokenString)
    connOpts.OnConnect = func(c MQTT.Client) {
        if token := c.Subscribe(topic.config, byte(*qos), nil); token.Wait() && token.Error() != nil {
            log.Fatal(token.Error())
        }
    }

    client := MQTT.NewClient(connOpts)
    if token := client.Connect(); token.Wait() && token.Error() != nil {
        fmt.Printf("Not Connected..Retrying... %s\n", server)
    } else {
        fmt.Printf("Connected to %s\n", server)
    }
}
I am calling this function in a goroutine in my main.go:
func main() {
    fmt.Println("Server started at port 5005")
    lis, err := net.Listen("tcp", "0.0.0.0:5005")
    if err != nil {
        log.Fatalf("Failed to listen: %v", err)
    }

    // Creating keepAlive channel for mqtt subscribe
    keepAlive := make(chan os.Signal)
    defer close(keepAlive)

    go func() {
        // Checking for internet connection
        for !IsOnline() {
            fmt.Println("No Internet Connection..Retrying")
            // Looking for internet connection after every 8 seconds
            time.Sleep(8 * time.Second)
        }
        fmt.Println("Internet connected...connecting to mqtt broker")
        repositories.Connect()

        // Looking for interrupt (Ctrl+C)
        value := <-keepAlive
        // If Ctrl+C is pressed then exit the application
        if value == os.Interrupt {
            fmt.Printf("Exiting the application")
            os.Exit(3)
        }
    }()

    s := grpc.NewServer()
    MqttRepository := repositories.MqttRepository()
    // It creates a new gRPC server instance
    rpc.NewMqttServer(s, MqttRepository)
    if err := s.Serve(lis); err != nil {
        log.Fatalf("Failed to serve: %v", err)
    }
}
func IsOnline() bool {
    timeout := time.Duration(5000 * time.Millisecond)
    client := http.Client{
        Timeout: timeout,
    }
    // Default URL to check the connection is https://google.com
    _, err := client.Get("https://google.com")
    if err != nil {
        return false
    }
    return true
}
I am using the goroutine in main so that the connection is established automatically on every startup.
Now I want to use this MQTT Connect function to publish data from other, different functions.
E.g. function A could call it like Connect(payload1, topic1) and function B like Connect(payload2, topic2), and the function should then handle publishing the data to the cloud.
Should I just add the topic and payload to this Connect function and then call it from the other functions? Or is there any possibility of returning or exporting the client as a global and then using it in another function or goroutine? I am sorry if my question sounds very stupid; I am not a golang expert.
Now I want to use this MQTT Connect function to publish the data from other different functions.
I suspect I may be misunderstanding what you are trying to do here, but unless you have a specific reason for making multiple connections you are best off connecting once and then using that single connection to publish multiple messages. There are a few issues with establishing a connection each time you send a message, including:
Establishing the connection takes time and generates a bit of network traffic (TLS handshake etc).
There can only be one active connection for a given ClientID (if you establish a second connection the broker will close the previous connection).
The library will not automatically disconnect - you would need to call Disconnect after publishing.
Incoming messages are likely to be lost due to the connection being down (note that CleanSession defaults to true).
Should I just add the topic and payload in this Connect function and then call it from another function?
As mentioned above, the preferred approach would be to connect once and then publish multiple messages over the one connection. The Client is designed to be thread safe, so you can pass it around and call Publish from multiple goroutines. You can also make use of the AutoConnect option (which you are using) if you want the library to manage the connection (there is also a SetConnectRetry function), but bear in mind that a QOS 0 message will not be retried if the link is down when you attempt to send it.
I would suggest that your connect function return the client (i.e. func Connect() mqtt.Client) and then use that client to publish messages (you can store it somewhere or just pass it around; I'd suggest adding it to your gRPC server struct).
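For example, a rough sketch of that shape (mqttServer and PublishTelemetry are made-up names for illustration, and the option setup is trimmed down from the question's Connect function):

import (
    MQTT "github.com/eclipse/paho.mqtt.golang"
)

// Connect dials the broker once and hands the client back to the caller.
// broker, clientID and jwt stand in for the values built in the question.
func Connect(broker, clientID, jwt string) (MQTT.Client, error) {
    connOpts := MQTT.NewClientOptions().
        AddBroker(broker).
        SetClientID(clientID).
        SetUsername("unused").
        SetPassword(jwt).
        SetAutoReconnect(true)
    client := MQTT.NewClient(connOpts)
    if token := client.Connect(); token.Wait() && token.Error() != nil {
        return nil, token.Error()
    }
    return client, nil
}

// mqttServer is a hypothetical gRPC server type that keeps the shared client.
type mqttServer struct {
    client MQTT.Client
}

// PublishTelemetry reuses the single connection; Publish is safe to call
// from multiple goroutines.
func (s *mqttServer) PublishTelemetry(topic string, payload []byte) error {
    token := s.client.Publish(topic, 0, false, payload)
    token.Wait()
    return token.Error()
}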
I guess it is possible that you may need to establish multiple connections if you need to connect with a specific client ID in order to send to the desired topic (but generally you would give your server's connection access to a wide range of topics). This would require some work to ensure you don't try to establish multiple connections with the same client ID simultaneously and, depending upon your requirements, to handle receiving incoming messages.
A few additional notes:
If you use AutoConnect and SetConnectRetry you can simplify your code (see the sketch after this list) and just use IsConnectionOpen() to check whether the connection is up, removing the need for IsOnline().
The spec states that "The Server MUST allow ClientIds which are between 1 and 23 UTF-8 encoded bytes in length" - it looks like yours is longer than that (I have not used GCP and it may well support/require a longer client ID).
You should not need InsecureSkipVerify in production.
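For illustration, here is a minimal sketch of that simplification (broker, clientID and jwt stand in for the values built in the question, and it assumes the same MQTT alias plus the fmt and time imports; SetConnectRetry, SetConnectRetryInterval and IsConnectionOpen come from the paho.mqtt.golang API):

// connectWithRetry lets the library manage the connection instead of the
// hand-rolled IsOnline() loop in main().
func connectWithRetry(broker, clientID, jwt string) MQTT.Client {
    opts := MQTT.NewClientOptions().
        AddBroker(broker).
        SetClientID(clientID).
        SetUsername("unused").
        SetPassword(jwt).
        SetAutoReconnect(true).
        SetConnectRetry(true).                   // keep retrying the initial connect
        SetConnectRetryInterval(8 * time.Second) // replaces the 8-second sleep loop

    client := MQTT.NewClient(opts)
    client.Connect() // with ConnectRetry set, the library keeps trying in the background
    return client
}

// Before publishing, check the broker link rather than general internet access.
func publishIfUp(client MQTT.Client, topic string, payload []byte) error {
    if !client.IsConnectionOpen() {
        return fmt.Errorf("mqtt connection is not open")
    }
    token := client.Publish(topic, 0, false, payload)
    token.Wait()
    return token.Error()
}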
I have read through the whole Redigo documentation which can be found here.
https://godoc.org/github.com/garyburd/redigo/redis#pkg-variables
Here the documentation clearly states that connections do not support concurrent calls to Send(), Flush() or Receive() methods.
Connections do not support concurrent calls to the write methods
(Send, Flush) or concurrent calls to the read method (Receive).
Connections do allow a concurrent reader and writer.
And then it states that since the Do method can be a combination of Send(), Flush() and Receive(), we can't use Do() concurrently with the other methods.
Because the Do method combines the functionality of Send, Flush and
Receive, the Do method cannot be called concurrently with the other
methods.
Does this mean that we can use Do() concurrently alone using a single connection stored in a global variable as long as we don't mix it with the other methods?
For example like this:
var (
    // Redis Conn.
    redisConn redis.Conn
    // Redis PubSubConn wraps a Conn with convenience methods for subscribers.
    redisPsc redis.PubSubConn
)

func redisInit() {
    c, err := redis.Dial(config.RedisProtocol, config.RedisAddress)
    if err != nil {
        log.Fatal(err)
    }
    c.Do("AUTH", config.RedisPass)
    redisConn = c

    c, err = redis.Dial(config.RedisProtocol, config.RedisAddress)
    if err != nil {
        log.Fatal(err)
    }
    c.Do("AUTH", config.RedisPass)
    redisPsc = redis.PubSubConn{Conn: c}

    for {
        switch v := redisPsc.Receive().(type) {
        case redis.Message:
            // fmt.Printf("%s: message: %s\n", v.Channel, v.Data)
            socketHub.broadcast <- v.Data
        case redis.Subscription:
            // fmt.Printf("%s: %s %d\n", v.Channel, v.Kind, v.Count)
        case error:
            log.Println(v)
        }
    }
}
And then calling the Do() method inside some go routine like this:
if _, err = redisConn.Do("PUBLISH", fmt.Sprintf("user:%d", fromId), message); err != nil {
    log.Println(err)
}

if _, err = redisConn.Do("PUBLISH", fmt.Sprintf("user:%d", toId), message); err != nil {
    log.Println(err)
}
And then later the documentation says that for full concurrent access to Redis, we need to create a pool, get connections from the pool and release them when we are done with them.
Does this mean that I can use Send(), Flush() and Receive() as I want, as long as I get a connection from the pool? So in other words, every time I need to do something in a goroutine I have to get a new connection from the pool instead of reusing a global connection? And does this mean that I can use the Do() method with, for example, Send() as long as I get a new connection from the pool?
So to sum up:
1) Can I use the Do() method concurrently as long as I do not use it with the Send, Flush and Receive methods?
2) Can I use everything as I want as long as I get a new connection from the pool and release it when I'm done?
3) If (1) is true, does this affect performance? Is it better to use a global connection concurrently with only using the Do() method as in the provided example by me, and not mixing things up with Send, Flush and Receive?
You can have one concurrent writer and one concurrent reader. Because Do combines read and write operations, you can have at most one concurrent call to Do. To put this another way, you cannot call Do concurrently. You cannot store a connection in a global variable and call Do without protecting the connection with a mutex or using some other mechanism to ensure that there is no more than one concurrent caller to Do.
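For completeness, here is a minimal sketch of that mutex approach (not from the redigo docs; the pool described next is usually the better option):

import (
    "sync"

    "github.com/garyburd/redigo/redis"
)

// safeConn serializes access to a single shared connection so that only one
// goroutine is inside Do at any time.
type safeConn struct {
    mu   sync.Mutex
    conn redis.Conn
}

func (s *safeConn) Do(cmd string, args ...interface{}) (interface{}, error) {
    s.mu.Lock()
    defer s.mu.Unlock()
    return s.conn.Do(cmd, args...)
}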
Pools support concurrent access. The connections returned by the pool Get method follow the rules for concurrency as described above. To get full concurrent access to the database, the application should within a single goroutine do the following: Get a connection from the pool; execute Redis commands on the connection; Close the connection to return the underlying resources to the pool.
Replace redisConn redis.Conn with a pool. Initialize the pool at app startup:
var redisPool *redis.Pool

...

redisPool = &redis.Pool{
    MaxIdle:     3,                 // adjust to your needs
    IdleTimeout: 240 * time.Second, // adjust to your needs
    Dial: func() (redis.Conn, error) {
        c, err := redis.Dial(config.RedisProtocol, config.RedisAddress)
        if err != nil {
            return nil, err
        }
        if _, err := c.Do("AUTH", config.RedisPass); err != nil {
            c.Close()
            return nil, err
        }
        return c, err
    },
}
Use the pool to publish to the channels:
c := redisPool.Get()

if _, err = c.Do("PUBLISH", fmt.Sprintf("user:%d", fromId), message); err != nil {
    log.Println(err)
}

if _, err = c.Do("PUBLISH", fmt.Sprintf("user:%d", toId), message); err != nil {
    log.Println(err)
}

c.Close()
Do not initialize the pool in redisInit(). There's no guarantee that redisInit() will execute before other code in the application uses the pool.
Also note that the Receive loop never subscribes to anything; add a call to Subscribe or PSubscribe before reading messages.
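A sketch of what the subscriber side could look like once the pool exists (the channel name and broadcast target are placeholders based on the question's socketHub):

import (
    "log"

    "github.com/garyburd/redigo/redis"
)

// subscribe runs in its own goroutine and owns its connection for the
// lifetime of the subscription.
func subscribe(pool *redis.Pool, channel string, broadcast chan<- []byte) {
    c := pool.Get()
    defer c.Close()

    psc := redis.PubSubConn{Conn: c}
    if err := psc.Subscribe(channel); err != nil {
        log.Println(err)
        return
    }
    for {
        switch v := psc.Receive().(type) {
        case redis.Message:
            broadcast <- v.Data
        case redis.Subscription:
            // subscription confirmed; nothing to do
        case error:
            log.Println(v)
            return
        }
    }
}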
I am playing with golang and OrientDB to test them. I have written a tiny web app which, upon a request, fetches a single document from the local OrientDB instance and returns it. When I bench this app with Apache Bench and concurrency is above 1, I get the following error:
2015/04/08 19:24:07 http: panic serving [::1]:57346: Get http://localhost:2480/document/t1/9:1441: EOF
When I bench OrientDB itself, it runs perfectly OK with any concurrency factor.
Also, when I change the URL from this document to anything else (another program written in golang, some internet site, etc.), the app runs OK.
Here is the code:
func main() {
    fmt.Println("starting ....")

    var aa interface{}

    router := gin.New()
    router.GET("/", func(c *gin.Context) {
        ans := getdoc("http://localhost:2480/document/t1/9:1441")
        json.Unmarshal(ans, &aa)
        c.JSON(http.StatusOK, aa)
    })
    router.Run(":3000")
}

func getdoc(addr string) []byte {
    client := new(http.Client)

    req, err := http.NewRequest("GET", addr, nil)
    req.SetBasicAuth("admin", "admin")

    resp, err := client.Do(req)
    if err != nil {
        fmt.Println("oops", resp, err)
        panic(err)
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    return body
}
thanks in advance
The keepalive connections are getting closed on you for some reason. You might be overwhelming the server, or going past the max number of connections the database can handle.
Also, the current http.Transport connection pool doesn't work well with synthetic benchmarks that make connections as fast as possible, and can quickly exhaust available file descriptors or ports (issue/6785).
To test this, I would set Request.Close = true to prevent the Transport from using the keepalive pool. If that works, one way to handle this with keepalive is to specifically check for an io.EOF and retry that request, possibly with some backoff delay.
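For example, a sketch of the question's getdoc with that change applied (only the req.Close line and the NewRequest error check are new):

func getdoc(addr string) []byte {
    client := new(http.Client)

    req, err := http.NewRequest("GET", addr, nil)
    if err != nil {
        panic(err)
    }
    req.Close = true // bypass the keepalive pool: each request gets a fresh connection
    req.SetBasicAuth("admin", "admin")

    resp, err := client.Do(req)
    if err != nil {
        fmt.Println("oops", resp, err)
        panic(err)
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    return body
}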
Connecting to Redigo and manipulating data inside a function is easy like butter, but the problem comes when you have to re-use its connection, obviously for performance/practicality reasons.
Doing it inside a function like this works:
func main() {
    client, err := redis.Dial("tcp", ":6379")
    if err != nil {
        panic(err)
    }
    defer client.Close()

    client.Do("GET", "test:1")
}
But bringing it outside doesn't:
var Client = redis.Dial("tcp", ":6379")
defer Client.Close()

func main() {
    Client.Do("GET", "test:1")
}
With the following error(s) returned:
./main.go:1: multiple-value redis.Dial() in single-value context
./main.go:2: non-declaration statement outside function body
I've tried declaring the connection as a const(ant) and putting the defer inside the main function, but to my dismay neither works.
This is an even bigger concern as I have many other functions that have to communicate with Redis, and recreating the connection every time seems silly.
The Redigo API just shows how to create a Dial instance but doesn't go further by explaining how to re-use it.
You may have gotten lost in my talk, but I wanted to put a bit of context here, so my clear and concise question is: how do you go about re-using (not recreating every time) a Redigo connection?
The best way turned out to be using Pools, which are briefly documented here: Redigo Pools.
A plain global connection variable won't properly reuse a connection, so I ended up with something like this (using Pools as noted before):
func newPool() *redis.Pool {
    return &redis.Pool{
        MaxIdle:   80,
        MaxActive: 12000, // max number of connections
        Dial: func() (redis.Conn, error) {
            c, err := redis.Dial("tcp", ":6379")
            if err != nil {
                panic(err.Error())
            }
            return c, err
        },
    }
}
var pool = newPool()

func main() {
    c := pool.Get()
    defer c.Close()

    test, _ := c.Do("HGETALL", "test:1")
    fmt.Println(test)
}
If for example you want to reuse a pool inside another function you do it like this:
func test() {
    c := pool.Get()
    defer c.Close()

    test2, _ := c.Do("HGETALL", "test:2")
    fmt.Println(test2)
}
The redis.Dial() method returns a client and an error. To fix it, you should replace:
var Client = redis.Dial("tcp", ":6379")
with:
var Client, _ = redis.Dial("tcp", ":6379")
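If you want to keep the error check rather than discard it, one common alternative (a sketch, not part of this answer; it assumes the redigo and log imports) is to declare the variable at package level and connect in init():

var Client redis.Conn

func init() {
    var err error
    Client, err = redis.Dial("tcp", ":6379")
    if err != nil {
        log.Fatal(err)
    }
}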
I have been playing around with golang and redis. I just stood up a simple HTTP server and wanted to increment a request counter in redis. I am blowing up the connections (I think). I found that with redigo you can use connection pooling, but I'm not sure how to implement that in Go when I am serving the requests (where do you instantiate / call the pool from?).
error: can't assign requested address.
Any suggestions would be appreciated....I am sure I am incorrectly making the connections, but just not sure how to change.
EDIT: Modified per pauljz's suggestions -- Works great now
var pool redis.Pool

func qryJson(rw http.ResponseWriter, req *http.Request) {
    incrementRedis()
}

func incrementRedis() {
    t := time.Now().Format("2006-01-02 15:04:05")

    conn := pool.Get()
    defer conn.Close()

    if _, err := conn.Do("HINCRBY", "messages", t, 1); err != nil {
        log.Fatal(err)
    }
}

func main() {
    runtime.GOMAXPROCS(runtime.NumCPU())

    pool = redis.Pool{
        MaxIdle:   50,
        MaxActive: 500, // max number of connections
        Dial: func() (redis.Conn, error) {
            c, err := redis.Dial("tcp", ":6379")
            if err != nil {
                panic(err.Error())
            }
            return c, err
        },
    }

    http.HandleFunc("/testqry", qryJson)
    log.Fatal(http.ListenAndServe(":8082", nil))
}
The redigo docs have a good starter example for connection pooling: http://godoc.org/github.com/garyburd/redigo/redis#Pool
In your case you would have a var pool redis.Pool in your package (i.e. not inside of a function).
In main(), before your ListenAndServe call, you would call pool = redis.Pool{ ... } from the redigo example to initialize the pool.
In incrementRedis() you would then do something like:
func incrementRedis() {
    conn := pool.Get()
    defer conn.Close()

    if _, err := conn.Do("HINCRBY", "messages", t, 1); err != nil {
        log.Fatal(err)
    }
}
In your code, you create a connection to redis for each HTTP request. Use a global variable to store the connected redis connection and reuse it.