Keeping connection for Golang net/rpc

As I understand from reading the net/rpc package documentation (https://pkg.go.dev/net/rpc#go1.17.5), every time a client makes an RPC call to the server, a new connection is established. How can I achieve that each new client opens a new connection, keeps it alive, and invokes RPC methods over it, using only TCP, i.e. not using HTTP?

If you make a new client with any of the standard library methods:
client, err := rpc.DialHTTP("tcp", serverAddress + ":1234")
if err != nil {
    log.Fatal("dialing:", err)
}
Under the hood it will call net.Dial, resulting in a single connection that is associated with the rpc.Client:
conn, err := net.Dial(network, address)
You can see NewClient taking a single connection when it's instantiated here: https://cs.opensource.google/go/go/+/refs/tags/go1.17.5:src/net/rpc/client.go;l=193-197;drc=refs%2Ftags%2Fgo1.17.5;bpv=1;bpt=1
Any calls to Client.Call on that client will write and read to that underlying connection without spawning a new connection.
So as long as you instantiate your client once and make all of your RPC calls through that same client, you'll always use a single connection. If that connection is ever severed, the client will no longer be usable.
rpc.Client is also thread-safe, so you can safely create it once and use it all over the place without having to make new connections.
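Since the question asks for plain TCP without HTTP, note that the same pattern works with rpc.Dial, which skips the HTTP CONNECT handshake entirely. A minimal sketch, assuming a hypothetical Args type and an "Arith.Multiply" method already registered on the server:

package main

import (
    "log"
    "net/rpc"
)

// Args is a hypothetical request type; substitute whatever your service exports.
type Args struct{ A, B int }

func main() {
    // rpc.Dial speaks the plain net/rpc wire protocol over TCP, no HTTP involved.
    client, err := rpc.Dial("tcp", "localhost:1234")
    if err != nil {
        log.Fatal("dialing:", err)
    }
    defer client.Close()

    // Reuse the same client, and therefore the same TCP connection, for every call.
    var reply int
    for i := 1; i <= 3; i++ {
        if err := client.Call("Arith.Multiply", Args{A: i, B: 2}, &reply); err != nil {
            log.Fatal("call:", err)
        }
        log.Println("result:", reply)
    }
}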
Answering your comment: if you wanted to run an RPC server and keep track of connections, you could do this:
l, e := net.Listen("tcp", ":1234")
if e != nil {
    log.Fatal("listen error:", e)
}
server := rpc.NewServer()
for {
    conn, err := l.Accept()
    if err != nil {
        panic(err) // replace with log message?
    }
    // Do something with `conn`
    go func() {
        server.ServeConn(conn)
        // The server has stopped serving this connection, you can remove it.
    }()
}
And then do something with each connection as it comes in, and remove it when it's done processing.
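As a concrete illustration of "do something with conn", here is a sketch that tracks live connections in a mutex-guarded map and removes each one when ServeConn returns; the map and its locking are my own addition, not part of net/rpc:

package main

import (
    "log"
    "net"
    "net/rpc"
    "sync"
)

var (
    mu    sync.Mutex
    conns = map[net.Conn]struct{}{} // connections currently being served
)

func main() {
    l, err := net.Listen("tcp", ":1234")
    if err != nil {
        log.Fatal("listen error:", err)
    }
    server := rpc.NewServer()
    // server.Register(yourService) goes here.
    for {
        conn, err := l.Accept()
        if err != nil {
            log.Println("accept:", err)
            continue
        }
        mu.Lock()
        conns[conn] = struct{}{}
        mu.Unlock()
        go func(c net.Conn) {
            server.ServeConn(c) // blocks until the client disconnects
            mu.Lock()
            delete(conns, c)
            mu.Unlock()
        }(conn)
    }
}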

Related

Paho MQTT golang for multiple modules?

I am writing a microservice in Go for an MQTT module. This module will be used by different functions at the same time. I am using gRPC as the transport layer.
I have made a connect function, which is this:
func Connect() { //it would be Connect(payload1 struct,topic string)
    deviceID := flag.String("device", "handler-1", "GCP Device-Id")
    bridge := struct {
        host *string
        port *string
    }{
        flag.String("mqtt_host", "", "MQTT Bridge Host"),
        flag.String("mqtt_port", "", "MQTT Bridge Port"),
    }
    projectID := flag.String("project", "", "GCP Project ID")
    registryID := flag.String("registry", "", "Cloud IoT Registry ID (short form)")
    region := flag.String("region", "", "GCP Region")
    certsCA := flag.String("ca_certs", "", "Download https://pki.google.com/roots.pem")
    privateKey := flag.String("private_key", "", "Path to private key file")
    server := fmt.Sprintf("ssl://%v:%v", *bridge.host, *bridge.port)
    topic := struct {
        config    string
        telemetry string
    }{
        config:    fmt.Sprintf("/devices/%v/config", *deviceID),
        telemetry: fmt.Sprintf("/devices/%v/events/topic", *deviceID),
    }
    qos := flag.Int("qos", 0, "The QoS to subscribe to messages at")
    clientid := fmt.Sprintf("projects/%v/locations/%v/registries/%v/devices/%v",
        *projectID,
        *region,
        *registryID,
        *deviceID,
    )
    log.Println("[main] Loading Google's roots")
    certpool := x509.NewCertPool()
    pemCerts, err := ioutil.ReadFile(*certsCA)
    if err == nil {
        certpool.AppendCertsFromPEM(pemCerts)
    }
    log.Println("[main] Creating TLS Config")
    config := &tls.Config{
        RootCAs:            certpool,
        ClientAuth:         tls.NoClientCert,
        ClientCAs:          nil,
        InsecureSkipVerify: true,
        Certificates:       []tls.Certificate{},
        MinVersion:         tls.VersionTLS12,
    }
    flag.Parse()
    connOpts := MQTT.NewClientOptions().
        AddBroker(server).
        SetClientID(clientid).
        SetAutoReconnect(true).
        SetPingTimeout(10 * time.Second).
        SetKeepAlive(10 * time.Second).
        SetDefaultPublishHandler(onMessageReceived).
        SetConnectionLostHandler(connLostHandler).
        SetReconnectingHandler(reconnHandler).
        SetTLSConfig(config)
    connOpts.SetUsername("unused")
    // JWT Generation starts from here
    token := jwt.New(jwt.SigningMethodES256)
    token.Claims = jwt.StandardClaims{
        Audience:  *projectID,
        IssuedAt:  time.Now().Unix(),
        ExpiresAt: time.Now().Add(24 * time.Hour).Unix(),
    }
    // Reading key file
    log.Println("[main] Load Private Key")
    keyBytes, err := ioutil.ReadFile(*privateKey)
    if err != nil {
        log.Fatal(err)
    }
    // Parsing key from file
    log.Println("[main] Parse Private Key")
    key, err := jwt.ParseECPrivateKeyFromPEM(keyBytes)
    if err != nil {
        log.Fatal(err)
    }
    // Signing JWT with private key
    log.Println("[main] Sign String")
    tokenString, err := token.SignedString(key)
    if err != nil {
        log.Fatal(err)
    }
    // JWT Generation ends here
    connOpts.SetPassword(tokenString)
    connOpts.OnConnect = func(c MQTT.Client) {
        if token := c.Subscribe(topic.config, byte(*qos), nil); token.Wait() && token.Error() != nil {
            log.Fatal(token.Error())
        }
    }
    client := MQTT.NewClient(connOpts)
    if token := client.Connect(); token.Wait() && token.Error() != nil {
        fmt.Printf("Not Connected..Retrying... %s\n", server)
    } else {
        fmt.Printf("Connected to %s\n", server)
    }
}
I am calling this function in a goroutine in my main.go:
func main() {
    fmt.Println("Server started at port 5005")
    lis, err := net.Listen("tcp", "0.0.0.0:5005")
    if err != nil {
        log.Fatalf("Failed to listen: %v", err)
    }
    // Creating keepAlive channel for mqtt subscribe
    keepAlive := make(chan os.Signal)
    defer close(keepAlive)
    go func() {
        // Checking for internet connection
        for !IsOnline() {
            fmt.Println("No Internet Connection..Retrying")
            // Looking for internet connection after every 8 seconds
            time.Sleep(8 * time.Second)
        }
        fmt.Println("Internet connected...connecting to mqtt broker")
        repositories.Connect()
        // Looking for interrupt (Ctrl+C)
        value := <-keepAlive
        // If Ctrl+C is pressed then exit the application
        if value == os.Interrupt {
            fmt.Printf("Exiting the application")
            os.Exit(3)
        }
    }()
    s := grpc.NewServer()
    MqttRepository := repositories.MqttRepository()
    // It creates a new gRPC server instance
    rpc.NewMqttServer(s, MqttRepository)
    if err := s.Serve(lis); err != nil {
        log.Fatalf("Failed to serve: %v", err)
    }
}

func IsOnline() bool {
    timeout := time.Duration(5000 * time.Millisecond)
    client := http.Client{
        Timeout: timeout,
    }
    // Default URL to check the connection is https://google.com
    _, err := client.Get("https://google.com")
    if err != nil {
        return false
    }
    return true
}
I am using the goroutine in my main so that the connection is attempted on every startup.
Now I want to use this MQTT Connect function to publish the data from other functions.
e.g. Function A can call it like Connect(payload1, topic1) and function B can call it like Connect(payload2, topic2), and then this function should handle publishing the data to the cloud.
Should I just add the topic and payload to this Connect function and then call it from another function? Or is there any possibility that I can return or export the client as a global and then use it in another function or goroutine? I am sorry if my question sounds very stupid; I am not a Golang expert.
Now I want to use this MQTT Connect function to publish the data from other different functions.
I suspect I may be misunderstanding what you are trying to do here, but unless you have a specific reason for making multiple connections, you are best to connect once and then use that single connection to publish multiple messages. There are a few issues with establishing a connection each time you send a message, including:
Establishing the connection takes time and generates a bit of network traffic (TLS handshake etc).
There can only be one active connection for a given ClientID (if you establish a second connection the broker will close the previous connection).
The library will not automatically disconnect - you would need to call Disconnect after publishing.
Incoming messages are likely to be lost due to the connection being down (note that CleanSession defaults to true).
Should I just add the topic and payload in this Connect function and then call it from another function?
As mentioned above, the preferred approach would be to connect once and then publish multiple messages over the one connection. The Client is designed to be thread-safe, so you can pass it around and call Publish from multiple goroutines. You can also make use of the AutoConnect option (which you are using) if you want the library to manage the connection (there is also a SetConnectRetry function), but bear in mind that a QoS 0 message will not be retried if the link is down when you attempt to send it.
I would suggest that your connect function return the client (i.e. func Connect() mqtt.Client) and then use that client to publish messages (you can store it somewhere or just pass it around; I'd suggest adding it to your gRPC server struct).
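A minimal sketch of that shape, with the option setup from your Connect function assumed to be factored into a helper (newClientOptions, Publish and the MQTT import alias below are illustrative, not a fixed API):

import (
    MQTT "github.com/eclipse/paho.mqtt.golang"
)

// Connect builds the options (broker, TLS config, JWT password, handlers, ...)
// and returns a connected client the rest of the service can share.
func Connect() (MQTT.Client, error) {
    opts := newClientOptions() // assumed helper wrapping the option setup shown above
    client := MQTT.NewClient(opts)
    if token := client.Connect(); token.Wait() && token.Error() != nil {
        return nil, token.Error()
    }
    return client, nil
}

// Publish reuses the shared client; callers pass whatever topic and payload they need.
func Publish(client MQTT.Client, topic string, qos byte, payload []byte) error {
    token := client.Publish(topic, qos, false, payload)
    token.Wait()
    return token.Error()
}

The client returned by Connect could then be stored on the struct you pass to rpc.NewMqttServer, so every gRPC handler publishes through the same connection.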
I guess it is possible that you may need to establish multiple connections if you need to connect with a specific clientid in order to send to the desired topic (but generally you would give your server's connection access to a wide range of topics). This would require some work to ensure you don't try to establish multiple connections with the same client id simultaneously and, depending upon your requirements, to handle receiving incoming messages.
A few additional notes:
If you use AutoConnect and SetConnectRetry you can simplify your code (and just use IsConnectionOpen() to check whether the connection is up, removing the need for IsOnline()); see the sketch after these notes.
The spec states that "The Server MUST allow ClientIds which are between 1 and 23 UTF-8 encoded bytes in length" - it looks like yours is longer than that (I have not used GCP and it may well support/require a longer client ID).
You should not need InsecureSkipVerify in production.
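To illustrate the AutoConnect/SetConnectRetry note above, a sketch of the relevant options (server and clientid come from your existing Connect code; exact retry behaviour depends on the library version, so treat this as an outline rather than the definitive setup):

connOpts := MQTT.NewClientOptions().
    AddBroker(server).
    SetClientID(clientid).
    SetAutoReconnect(true). // re-establish the link after it drops
    SetConnectRetry(true).  // keep retrying the initial connect as well
    SetConnectRetryInterval(8 * time.Second)

client := MQTT.NewClient(connOpts)
if token := client.Connect(); token.Wait() && token.Error() != nil {
    log.Fatal(token.Error())
}

// Before sending a QoS 0 message you can check the link instead of probing google.com:
if client.IsConnectionOpen() {
    // safe to publish; otherwise the message would be dropped
}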

How to accept server connection or break out of accept() loop in Golang

I need to implement a server in Golang. I'm using the net package for this purpose but I do not understand how to break out from the accept loop gracefully.
So looking at the example from the net package:
ln, err := net.Listen("tcp", ":8080")
if err != nil {
    // handle error
}
for {
    conn, err := ln.Accept()
    if err != nil {
        // handle error
    }
    go handleConnection(conn)
}
I want to do something more along the lines of:
for {
    select {
    case <-done:
        break
    case conn, err := <-ln.Accept():
        if err != nil {
            break
        }
        ...
    }
}
In other words, I want to be able to terminate the program gracefully somehow.
The best practice, with any service, to ensure a clean shutdown is to ensure each part of the service supports cancelation. Adding context.Context support is the recommended way to achieve this.
So first, ensure your net.Listener does not hang on a rogue client connection. To add context.Context support:
var lc net.ListenConfig
ln, err := lc.Listen(ctx, "tcp", ":8080")
Canceling ctx will break any rogue client connection here that may be blocking during handshake.
EDIT:
To ensure we can break out of the Listen() call while it is blocked waiting, one can leverage the same ctx state and (as noted in a previous, now deleted, answer) close the connection when a cancelation event is detected:
go func() {
    <-ctx.Done()
    log.Println("shutting service down...")
    ln.Close()
}()
Working example: https://play.golang.org/p/LO1XS4jBQ02
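Putting both pieces together, a self-contained sketch (the signal wiring via signal.NotifyContext is my own addition; adapt the error handling to your needs):

package main

import (
    "context"
    "errors"
    "log"
    "net"
    "os"
    "os/signal"
)

func main() {
    ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
    defer stop()

    var lc net.ListenConfig
    ln, err := lc.Listen(ctx, "tcp", ":8080")
    if err != nil {
        log.Fatal(err)
    }

    // Close the listener when the context is canceled so Accept() unblocks.
    go func() {
        <-ctx.Done()
        log.Println("shutting service down...")
        ln.Close()
    }()

    for {
        conn, err := ln.Accept()
        if err != nil {
            if errors.Is(err, net.ErrClosed) {
                return // clean shutdown
            }
            log.Println("accept:", err)
            continue
        }
        go handleConnection(conn)
    }
}

func handleConnection(conn net.Conn) {
    defer conn.Close()
    // ... serve the connection ...
}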

I'm trying to build a simple proxy with net package, but the upstream is not sending any data back when I don't use go routines

I wrote this simple proxy server with net package, which I expected to proxy connections from a local server at 8001 to any incoming connections via 8000. When I go to the browser and try it, I get a refused to connect error.
package main

import (
    "io"
    "log"
    "net"
)

func main() {
    l, err := net.Listen("tcp", "localhost:8000")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := l.Accept()
        if err != nil {
            log.Fatal(err)
        }
        go proxy(conn)
    }
}

func proxy(conn net.Conn) {
    defer conn.Close()
    upstream, err := net.Dial("tcp", "localhost:8001")
    if err != nil {
        log.Print(err)
        return
    }
    defer upstream.Close()
    io.Copy(upstream, conn)
    io.Copy(conn, upstream)
}
But if I change the following lines in the proxy function
io.Copy(upstream, conn)
io.Copy(conn, upstream)
to
go io.Copy(upstream, conn)
io.Copy(conn, upstream)
then it works as expected. Shouldn't io.Copy(upstream, conn) block io.Copy(conn, upstream)? As per my understanding, conn should be written to only after upstream has responded. And how does having a goroutine for the io.Copy(upstream, conn) part solve this?
Shouldn't io.Copy block?
Yes. "Copy copies from src to dst until either EOF is reached on src or an error occurs.". Since this is a network connection, this means it returns after the client closes the connection. If and when the client closes the connection depends on the application protocol. In HTTP it may never happen, for instance.
How does having a goroutine solve this?
Because then the second Copy can execute while the client is still connected, allowing the upstream to write its response. Without the goroutine nothing is reading from the upstream, so it is likely blocked on its write call.
The client (presumably) waits for a response, the proxy waits for the client to close the connection, and the upstream waits for the proxy to start reading the response: no one can make progress and you're in a deadlock.
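One common way to structure the proxy body, then, is a goroutine per direction, returning as soon as either side finishes so the deferred Closes unblock the other copy. A sketch:

func proxy(conn net.Conn) {
    defer conn.Close()
    upstream, err := net.Dial("tcp", "localhost:8001")
    if err != nil {
        log.Print(err)
        return
    }
    defer upstream.Close()

    done := make(chan struct{}, 2)
    go func() { io.Copy(upstream, conn); done <- struct{}{} }() // client -> upstream
    go func() { io.Copy(conn, upstream); done <- struct{}{} }() // upstream -> client

    // When one direction finishes, the deferred Closes tear down both
    // connections, which in turn unblocks the other Copy.
    <-done
}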

Some questions about Redigo and concurrency

I have read through the whole Redigo documentation which can be found here.
https://godoc.org/github.com/garyburd/redigo/redis#pkg-variables
Here the documentation clearly states that connections do not support concurrent calls to Send(), Flush() or Receive() methods.
Connections do not support concurrent calls to the write methods
(Send, Flush) or concurrent calls to the read method (Receive).
Connections do allow a concurrent reader and writer.
And then it states that since the Do method can be a combination of Send(), Flush() and Receive(), we can't use Do() concurrently with the other methods.
Because the Do method combines the functionality of Send, Flush and
Receive, the Do method cannot be called concurrently with the other
methods.
Does this mean that we can use Do() concurrently alone using a single connection stored in a global variable as long as we don't mix it with the other methods?
For example like this:
var (
    // Redis Conn.
    redisConn redis.Conn
    // Redis PubSubConn wraps a Conn with convenience methods for subscribers.
    redisPsc redis.PubSubConn
)

func redisInit() {
    c, err := redis.Dial(config.RedisProtocol, config.RedisAddress)
    if err != nil {
        log.Fatal(err)
    }
    c.Do("AUTH", config.RedisPass)
    redisConn = c

    c, err = redis.Dial(config.RedisProtocol, config.RedisAddress)
    if err != nil {
        log.Fatal(err)
    }
    c.Do("AUTH", config.RedisPass)
    redisPsc = redis.PubSubConn{c}

    for {
        switch v := redisPsc.Receive().(type) {
        case redis.Message:
            // fmt.Printf("%s: message: %s\n", v.Channel, v.Data)
            socketHub.broadcast <- v.Data
        case redis.Subscription:
            // fmt.Printf("%s: %s %d\n", v.Channel, v.Kind, v.Count)
        case error:
            log.Println(v)
        }
    }
}
And then calling the Do() method inside some go routine like this:
if _, err = redisConn.Do("PUBLISH", fmt.Sprintf("user:%d", fromId), message); err != nil {
    log.Println(err)
}
if _, err = redisConn.Do("PUBLISH", fmt.Sprintf("user:%d", toId), message); err != nil {
    log.Println(err)
}
And then later the document says that for full concurrent access to Redis, we need to create a pool and get connections from the pool and release them when we are done with it.
Does this mean that I can use Send(), Flush() and Receive() as I want, as long as I get a connection from the pool? In other words, every time I need to do something in a goroutine I have to get a new connection from the pool instead of reusing a global connection? And does this mean that I can use the Do() method with, for example, Send(), as long as I get a new connection from the pool?
So to sum up:
1) Can I use the Do() method concurrently as long as I do not use it with the Send, Flush and Receive methods?
2) Can I use everything as I want as long as I get a new connection from the pool and release it when I'm done?
3) If (1) is true, does this affect performance? Is it better to use a global connection concurrently with only using the Do() method as in the provided example by me, and not mixing things up with Send, Flush and Receive?
You can have one concurrent writer and one concurrent reader. Because Do combines read and write operations, you can have at most one in-flight call to Do. To put this another way, you cannot call Do concurrently. You cannot store a connection in a global variable and call Do without protecting the connection with a mutex or using some other mechanism to ensure that there is no more than one concurrent caller to Do.
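For completeness, a sketch of the mutex approach just mentioned (the wrapper function name is my own); the pool described next is usually the better option:

var (
    redisMu   sync.Mutex
    redisConn redis.Conn
)

// do serializes access to the single shared connection so that only
// one Do call is in flight at a time.
func do(commandName string, args ...interface{}) (interface{}, error) {
    redisMu.Lock()
    defer redisMu.Unlock()
    return redisConn.Do(commandName, args...)
}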
Pools support concurrent access. The connections returned by the pool Get method follow the rules for concurrency as described above. To get full concurrent access to the database, the application should within a single goroutine do the following: Get a connection from the pool; execute Redis commands on the connection; Close the connection to return the underlying resources to the pool.
Replace redisConn redis.Conn with a pool. Initialize the pool at app startup:
var redisPool *redis.Pool

...

redisPool = &redis.Pool{
    MaxIdle:     3,                 // adjust to your needs
    IdleTimeout: 240 * time.Second, // adjust to your needs
    Dial: func() (redis.Conn, error) {
        c, err := redis.Dial(config.RedisProtocol, config.RedisAddress)
        if err != nil {
            return nil, err
        }
        if _, err := c.Do("AUTH", config.RedisPass); err != nil {
            c.Close()
            return nil, err
        }
        return c, err
    },
}
Use the pool to publish to the channels:
c := redisPool.Get()
if _, err = c.Do("PUBLISH", fmt.Sprintf("user:%d", fromId), message); err != nil {
    log.Println(err)
}
if _, err = c.Do("PUBLISH", fmt.Sprintf("user:%d", toId), message); err != nil {
    log.Println(err)
}
c.Close()
Do not initialize the pool in redisInit(). There's no guarantee that redisInit() will execute before other code in the application uses the pool.
Also add a call to Subscribe or PSubscribe.
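For the subscriber side, a sketch of what that could look like with the pool in place (userID and socketHub are placeholders carried over from the example):

func subscribeLoop(pool *redis.Pool, userID int) {
    // Dedicate one connection from the pool to the subscriber.
    psc := redis.PubSubConn{Conn: pool.Get()}
    defer psc.Close()

    // Subscribe to the channel the publishers above write to.
    if err := psc.Subscribe(fmt.Sprintf("user:%d", userID)); err != nil {
        log.Fatal(err)
    }
    for {
        switch v := psc.Receive().(type) {
        case redis.Message:
            socketHub.broadcast <- v.Data
        case redis.Subscription:
            // subscribe/unsubscribe confirmation; v.Count is the number of active subscriptions
        case error:
            log.Println(v)
            return
        }
    }
}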

Sharing Redis settings across routes

I have a number of routes in my routes.go file and they all call my redis database. I'm wondering how I can avoid calling the dial and AUTH calls in every route.
I've tried setting variables outside the functions like this:
var (
    c, err = redis.Dial("tcp", ADDRESS)
    _, err = c.Do("AUTH", "testing")
)
but then the compiler doesn't like err being used twice.
First, only use var for declaring variables. You can't run code outside of functions, so there's no use in trying to create connections inside a var statement. Use init() if you need something run at startup.
The redis connections can't be used with concurrent requests. If you want to share a redis connection across multiple routes, you need to have a safe method for concurrent use. In the case of github.com/garyburd/redigo/redis you want to use a Pool. You can do the AUTH call inside the Dial function, returning a ready connection each time.
var redisPool *redis.Pool

func init() {
    redisPool = &redis.Pool{
        MaxIdle:     3,
        IdleTimeout: 240 * time.Second,
        Dial: func() (redis.Conn, error) {
            c, err := redis.Dial("tcp", server)
            if err != nil {
                return nil, err
            }
            if _, err := c.Do("AUTH", password); err != nil {
                c.Close()
                return nil, err
            }
            return c, err
        },
    }
}
Then each time you need a connection, you get one from the pool, and return it when you're done.
conn := redisPool.Get()
// conn.Close() just returns the connection to the pool
defer conn.Close()

if err := conn.Err(); err != nil {
    // conn.Err() will have connection or Dial related errors
    return nil, err
}
What I would do is instantiate a connection pool in main.go and pass the reference to the pool to your routes. This way you are setting up your redis client once, and your routes can leverage it.
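A minimal sketch of that wiring with plain net/http routes (the server struct, newPool helper and route names are illustrative):

package main

import (
    "fmt"
    "log"
    "net/http"

    "github.com/garyburd/redigo/redis"
)

type server struct {
    pool *redis.Pool // shared by all routes
}

func (s *server) handleGet(w http.ResponseWriter, r *http.Request) {
    conn := s.pool.Get()
    defer conn.Close()

    val, err := redis.String(conn.Do("GET", "some-key"))
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    fmt.Fprintln(w, val)
}

func main() {
    s := &server{pool: newPool()} // newPool builds the *redis.Pool shown above
    http.HandleFunc("/get", s.handleGet)
    log.Fatal(http.ListenAndServe(":8080", nil))
}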
I created a decorator around redigo that makes creating a Redis Client with a Connection pool very straightforward. Plus it is type-safe.
You can check it out here: https://github.com/shomali11/xredis
