How to initiate a connection to AWS SQS in Golang

I am building a microservices application in Golang, and each service talks to another service through SQS. However, I am having difficulty initiating the SQS connection when the server starts up. How do I initiate the SQS connection and use it in my service? I am building the service using go-kit, so I have files named service.go, main.go, endpoint.go and transport.go.
Basically, I have this code for the connection:
creds := credentials.NewStaticCredentials(aws_access_key_id, aws_secret_access_key, token)
cfg := aws.NewConfig().WithRegion("region").WithCredentials(creds)
service := sqs.New(session.New(), cfg)
qURL := "q_url"
receive_params := &sqs.ReceiveMessageInput{
    QueueUrl:            aws.String(qURL),
    MaxNumberOfMessages: aws.Int64(1),
    VisibilityTimeout:   aws.Int64(30),
    WaitTimeSeconds:     aws.Int64(1),
}
receive_resp, err := service.ReceiveMessage(receive_params)
if err != nil {
    log.Println(err)
}
fmt.Printf("[Receive message] \n%v \n\n", receive_resp)
return true, nil
So how do I initiate the connection and start receiving messages in my services? Thank you all.

Your code does retrieve messages, but it is written to fetch them only once.
You'd need to put it into a loop like the one below, and probably run it on another goroutine if the app needs to do something else concurrently.
go func() {
    for {
        receive_resp, err := service.ReceiveMessage(receive_params)
        if err != nil {
            log.Println(err)
        }
        fmt.Printf("[Receive message] \n%v \n\n", receive_resp)
    }
}()
Reference on ReceiveMessage
Reference on Short vs Long polling
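The references above point at long polling; the loop itself also benefits from a shutdown switch. Below is a stdlib-only sketch of that loop shape. The `receiver` interface, `pollOnce`, `consume` and `fakeQueue` are all invented for illustration; in the real service you would wrap `*sqs.SQS` so that `Receive` calls `ReceiveMessage` with `WaitTimeSeconds` set to 20 (long polling) and deletes each message after handling it.

```go
package main

import (
	"fmt"
	"log"
)

// receiver abstracts the queue so the loop can be exercised without AWS.
// An adapter around *sqs.SQS (assumed, not part of the SDK) would satisfy it.
type receiver interface {
	Receive() ([]string, error)
}

// pollOnce handles a single batch of messages.
func pollOnce(r receiver, handle func(string)) error {
	bodies, err := r.Receive()
	if err != nil {
		return err
	}
	for _, b := range bodies {
		handle(b)
	}
	return nil
}

// consume polls until stop is closed; errors are logged, not fatal.
func consume(r receiver, stop <-chan struct{}, handle func(string)) {
	for {
		select {
		case <-stop:
			return
		default:
			if err := pollOnce(r, handle); err != nil {
				log.Println(err)
			}
		}
	}
}

// fakeQueue yields its messages on the first poll, then empty polls.
type fakeQueue struct{ msgs []string }

func (f *fakeQueue) Receive() ([]string, error) {
	out := f.msgs
	f.msgs = nil
	return out, nil
}

func main() {
	q := &fakeQueue{msgs: []string{"hello", "world"}}
	if err := pollOnce(q, func(body string) { fmt.Println(body) }); err != nil {
		log.Println(err)
	}
	// In the service: go consume(realQueue, stop, handle), then close(stop) on shutdown.
}
```

The interface boundary is what lets you start the consumer in main.go when the server comes up, while keeping the handler testable.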


Keeping connection for Golang net/rpc

As I understand from reading the net/rpc package documentation here
https://pkg.go.dev/net/rpc#go1.17.5, every time a client makes an RPC call to the server, a new connection is established. How can I achieve that each new client opens a new connection, keeps it alive, and invokes RPC methods using only TCP, i.e. not using HTTP?
If you make a new client with any of the standard library methods:
client, err := rpc.DialHTTP("tcp", serverAddress+":1234")
if err != nil {
    log.Fatal("dialing:", err)
}
Underneath the hood it will call net.Dial, resulting in a single connection that is associated with the rpc.Client:
conn, err := net.Dial(network, address)
You can see NewClient taking a single connection when it's instantiated here: https://cs.opensource.google/go/go/+/refs/tags/go1.17.5:src/net/rpc/client.go;l=193-197;drc=refs%2Ftags%2Fgo1.17.5;bpv=1;bpt=1
Any calls to Client.Call on that client will write and read to that underlying connection without spawning a new connection.
So as long as you instantiate your client one time and then make all of your RPC calls through that same client, you'll always use a single connection. If that connection is ever severed, the client will no longer be usable.
rpc.Client is also thread-safe, so you can safely create it once and use it all over the place without having to make new connections.
Answering your comment: if you wanted to run an RPC server and keep track of connections, you could do this:
l, e := net.Listen("tcp", ":1234")
if e != nil {
    log.Fatal("listen error:", e)
}
server := rpc.NewServer()
for {
    conn, err := l.Accept()
    if err != nil {
        panic(err) // replace with a log message?
    }
    // Do something with `conn`
    go func() {
        server.ServeConn(conn)
        // The server has stopped serving this connection; you can remove it.
    }()
}
And then do something with each connection as it comes in, removing it when it's done processing.

Paho MQTT golang for multiple modules?

I am writing a microservice in Golang for an MQTT module. This module will be used by different functions at the same time. I am using gRPC as the transport layer.
I have made a Connect function, which is this:
func Connect() { // it would be Connect(payload1 struct, topic string)
    deviceID := flag.String("device", "handler-1", "GCP Device-Id")
    bridge := struct {
        host *string
        port *string
    }{
        flag.String("mqtt_host", "", "MQTT Bridge Host"),
        flag.String("mqtt_port", "", "MQTT Bridge Port"),
    }
    projectID := flag.String("project", "", "GCP Project ID")
    registryID := flag.String("registry", "", "Cloud IoT Registry ID (short form)")
    region := flag.String("region", "", "GCP Region")
    certsCA := flag.String("ca_certs", "", "Download https://pki.google.com/roots.pem")
    privateKey := flag.String("private_key", "", "Path to private key file")
    qos := flag.Int("qos", 0, "The QoS to subscribe to messages at")
    // Parse must run before any flag value is dereferenced; otherwise
    // server, topic and clientid below are built from empty defaults.
    flag.Parse()
    server := fmt.Sprintf("ssl://%v:%v", *bridge.host, *bridge.port)
    topic := struct {
        config    string
        telemetry string
    }{
        config:    fmt.Sprintf("/devices/%v/config", *deviceID),
        telemetry: fmt.Sprintf("/devices/%v/events/topic", *deviceID),
    }
    clientid := fmt.Sprintf("projects/%v/locations/%v/registries/%v/devices/%v",
        *projectID,
        *region,
        *registryID,
        *deviceID,
    )
    log.Println("[main] Loading Google's roots")
    certpool := x509.NewCertPool()
    pemCerts, err := ioutil.ReadFile(*certsCA)
    if err == nil {
        certpool.AppendCertsFromPEM(pemCerts)
    }
    log.Println("[main] Creating TLS Config")
    config := &tls.Config{
        RootCAs:            certpool,
        ClientAuth:         tls.NoClientCert,
        ClientCAs:          nil,
        InsecureSkipVerify: true,
        Certificates:       []tls.Certificate{},
        MinVersion:         tls.VersionTLS12,
    }
    connOpts := MQTT.NewClientOptions().
        AddBroker(server).
        SetClientID(clientid).
        SetAutoReconnect(true).
        SetPingTimeout(10 * time.Second).
        SetKeepAlive(10 * time.Second).
        SetDefaultPublishHandler(onMessageReceived).
        SetConnectionLostHandler(connLostHandler).
        SetReconnectingHandler(reconnHandler).
        SetTLSConfig(config)
    connOpts.SetUsername("unused")
    // JWT generation starts here
    token := jwt.New(jwt.SigningMethodES256)
    token.Claims = jwt.StandardClaims{
        Audience:  *projectID,
        IssuedAt:  time.Now().Unix(),
        ExpiresAt: time.Now().Add(24 * time.Hour).Unix(),
    }
    // Read the key file
    log.Println("[main] Load Private Key")
    keyBytes, err := ioutil.ReadFile(*privateKey)
    if err != nil {
        log.Fatal(err)
    }
    // Parse the key from the file
    log.Println("[main] Parse Private Key")
    key, err := jwt.ParseECPrivateKeyFromPEM(keyBytes)
    if err != nil {
        log.Fatal(err)
    }
    // Sign the JWT with the private key
    log.Println("[main] Sign String")
    tokenString, err := token.SignedString(key)
    if err != nil {
        log.Fatal(err)
    }
    // JWT generation ends here
    connOpts.SetPassword(tokenString)
    connOpts.OnConnect = func(c MQTT.Client) {
        if token := c.Subscribe(topic.config, byte(*qos), nil); token.Wait() && token.Error() != nil {
            log.Fatal(token.Error())
        }
    }
    client := MQTT.NewClient(connOpts)
    if token := client.Connect(); token.Wait() && token.Error() != nil {
        fmt.Printf("Not connected... retrying %s\n", server)
    } else {
        fmt.Printf("Connected to %s\n", server)
    }
}
I am calling this function in a goroutine in my main.go:
func main() {
    fmt.Println("Server started at port 5005")
    lis, err := net.Listen("tcp", "0.0.0.0:5005")
    if err != nil {
        log.Fatalf("Failed to listen: %v", err)
    }
    // keepAlive channel for the mqtt subscriber
    keepAlive := make(chan os.Signal)
    defer close(keepAlive)
    go func() {
        // wait for an internet connection
        for !IsOnline() {
            fmt.Println("No internet connection... retrying")
            // check again every 8 seconds
            time.Sleep(8 * time.Second)
        }
        fmt.Println("Internet connected... connecting to mqtt broker")
        repositories.Connect()
        // wait for an interrupt (Ctrl+C)
        value := <-keepAlive
        // if Ctrl+C is pressed, exit the application
        if value == os.Interrupt {
            fmt.Printf("Exiting the application")
            os.Exit(3)
        }
    }()
    s := grpc.NewServer()
    MqttRepository := repositories.MqttRepository()
    // create a new gRPC server instance
    rpc.NewMqttServer(s, MqttRepository)
    if err := s.Serve(lis); err != nil {
        log.Fatalf("Failed to serve: %v", err)
    }
}

func IsOnline() bool {
    client := http.Client{
        Timeout: 5 * time.Second,
    }
    // default URL to check connectivity is https://google.com
    _, err := client.Get("https://google.com")
    if err != nil {
        return false
    }
    return true
}
I am using the goroutine in main so that the connection is established on every startup.
Now I want to use this MQTT Connect function to publish data from other, different functions.
For example, function A could call it like Connect(payload1, topic1) and function B like Connect(payload2, topic2), and the function should then handle publishing the data to the cloud.
Should I just add the topic and payload parameters to this Connect function and call it from other functions? Or is there a way to return or export the client as a global and then use it in another function or goroutine? I am sorry if my question sounds very stupid; I am not a Golang expert.
Now I want to use this MQTT Connect function to publish the data from other different functions.
I suspect I may be misunderstanding what you are trying to do here, but unless you have a specific reason for making multiple connections, you are best off connecting once and then using that single connection to publish multiple messages. There are a few issues with establishing a connection each time you send a message:
Establishing the connection takes time and generates a bit of network traffic (TLS handshake etc).
There can only be one active connection for a given ClientID (if you establish a second connection the broker will close the previous connection).
The library will not automatically disconnect - you would need to call Disconnect after publishing.
Incoming messages are likely to be lost due to the connection being down (note that CleanSession defaults to true).
Should I just add the topic and payload in this Connect function and then call it from another function?
As mentioned above, the preferred approach is to connect once and then publish multiple messages over the one connection. The Client is designed to be thread-safe, so you can pass it around and call Publish from multiple goroutines. You can also make use of the AutoConnect option (which you are) if you want the library to manage the connection (there is also a SetConnectRetry function), but bear in mind that a QOS 0 message will not be retried if the link is down when you attempt to send it.
I would suggest that your connect function return the client (i.e. func Connect() mqtt.Client) and then use that client to publish messages (you can store it somewhere or just pass it around; I'd suggest adding it to your gRPC server struct).
I guess it is possible that you may need to establish multiple connections if you need to connect with a specific clientid in order to send to the desired topic (but generally you would give your servers connection access to a wide range of topics). This would require some work to ensure you don't try to establish multiple connections with the same client id simultaneously and, depending upon your requirements, receiving incoming messages.
A few additional notes:
If you use AutoConnect and SetConnectRetry you can simplify your code (and just use IsConnectionOpen() to check whether the connection is up, removing the need for IsOnline()).
The spec states that "The Server MUST allow ClientIds which are between 1 and 23 UTF-8 encoded bytes in length" - it looks like yours is longer than that (I have not used GCP and it may well support/require a longer client ID).
You should not need InsecureSkipVerify in production.
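To make the "connect once, publish everywhere" advice concrete, here is a hedged sketch. The `publisher` interface, `Broker` type and `memPublisher` fake are all invented for illustration; in real code Connect() would return the paho mqtt.Client itself, and a thin adapter (calling Publish and waiting on the returned token) would satisfy the interface.

```go
package main

import (
	"fmt"
	"sync"
)

// publisher is a minimal assumed interface; a small adapter around
// mqtt.Client would satisfy it.
type publisher interface {
	Publish(topic string, payload []byte) error
}

// Broker holds the single shared connection; it is safe for concurrent use.
type Broker struct {
	mu     sync.Mutex
	client publisher
}

// Connect stores the connected client once; later calls reuse it.
func (b *Broker) Connect(c publisher) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.client = c
}

// Send publishes over the shared connection instead of reconnecting per call.
func (b *Broker) Send(topic string, payload []byte) error {
	b.mu.Lock()
	c := b.client
	b.mu.Unlock()
	if c == nil {
		return fmt.Errorf("not connected")
	}
	return c.Publish(topic, payload)
}

// memPublisher is an in-memory stand-in used for the demo.
type memPublisher struct{ last string }

func (m *memPublisher) Publish(topic string, payload []byte) error {
	m.last = topic + ":" + string(payload)
	return nil
}

func main() {
	b := &Broker{}
	b.Connect(&memPublisher{})
	// Function A and function B both call Send; neither reconnects.
	_ = b.Send("devices/handler-1/events/topic1", []byte("payload1"))
	err := b.Send("devices/handler-1/events/topic2", []byte("payload2"))
	fmt.Println(err == nil)
}
```

Storing the Broker (or the mqtt.Client directly) on your gRPC server struct gives every handler access to the one connection.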

Cloud Run downloading a file from GCS is insanely slow

I have a Go Cloud Run app, and when it starts it downloads a 512 MB file from GCS (the program needs this file). Locally, on my nothing-special home connection, this works fine and downloads in a few seconds, but when I deploy it to Cloud Run it downloads like a snail. I had to increase timeouts and log a progress counter just to make sure it was doing something (it was). It downloads at about 30 KB/s, which is not going to work.
The Cloud Run instance and the GCS regional bucket are both in us-east4. There don't seem to be any knobs I can turn to make this work, and I don't see this issue/constraint documented.
Anyone have any ideas what could be the issue?
Here is the code doing the downloading, along with copious logging because I couldn't tell whether it was doing anything at first:
func LoadFilter() error {
    fmt.Println("loading filter")
    ctx := context.Background()
    storageClient, err := storage.NewClient(ctx)
    if err != nil {
        return err
    }
    defer storageClient.Close()
    ctx, cancel := context.WithTimeout(ctx, time.Minute*60)
    defer cancel()
    obj := storageClient.Bucket("my_slow_bucket").Object("filter_export")
    rc, err := obj.NewReader(ctx)
    if err != nil {
        return err
    }
    defer rc.Close()
    attrs, err := obj.Attrs(ctx)
    if err != nil {
        return err
    }
    progressR := &ioprogress.Reader{
        Reader: rc,
        Size:   attrs.Size,
        DrawFunc: func(p int64, t int64) error {
            fmt.Printf("%.2f\n", float64(p)/float64(t)*100)
            return nil
        },
    }
    fmt.Println("reading filter...")
    data, err := ioutil.ReadAll(progressR)
    if err != nil {
        return err
    }
    fmt.Println("decoding filter...")
    filter, err := cuckoo.Decode(data)
    if err != nil {
        return err
    }
    fmt.Println("filter decoded")
    cf = filter
    fmt.Println("initialized the filter successfully!")
    return nil
}
Indeed, what @wlhee said is perfectly true: any activity that runs outside of a request's handling will not have access to the full CPU allocated to your instances. As the documentation says:
When an application running on Cloud Run finishes handling a request, the container instance's access to CPU will be disabled or severely limited. Therefore, you should not start background threads or routines that run outside the scope of the request handlers. Running background threads can result in unexpected behavior because any subsequent request to the same container instance resumes any suspended background activity.
I suggest you run this download from Cloud Storage upon a request to your service, by hitting some startup endpoint in your app: finish the download, then return a response to indicate the request has ended.
Please check this documentation for tips on Cloud Run.
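One way to follow that advice, as a sketch: do the load lazily inside a request handler, guarded by sync.Once so that concurrent requests share a single download. The /warmup endpoint name and the placeholder loadFilter are assumptions; in the real app loadFilter would be the LoadFilter function from the question, now running with request-time CPU.

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

var (
	loadOnce sync.Once
	loadErr  error
	filter   []byte // stands in for the decoded cuckoo filter
)

// loadFilter is a placeholder for the GCS download from the question.
func loadFilter() ([]byte, error) {
	return []byte("filter-bytes"), nil
}

// ensureLoaded runs the download at most once, inside a request's scope.
func ensureLoaded() error {
	loadOnce.Do(func() {
		filter, loadErr = loadFilter()
	})
	return loadErr
}

// warmupHandler is hit once after deploy to trigger the download.
func warmupHandler(w http.ResponseWriter, r *http.Request) {
	if err := ensureLoaded(); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Fprintln(w, "filter ready")
}

func main() {
	// In the real service:
	//   http.HandleFunc("/warmup", warmupHandler)
	//   http.ListenAndServe(":8080", nil)
	if err := ensureLoaded(); err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Println("filter ready")
}
```

Every other handler calls ensureLoaded() first, so whichever request arrives first pays the download cost at full CPU.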

Setting up Gmail Push notifications through GCP pub/sub using Go

I'm looking to set up my application so that it receives a notification every time a mail hits a Gmail inbox.
I'm following the guide here.
I have a Topic and subscription created.
My credentials are working: I can retrieve my emails using the credentials and a Go script as shown here.
I have enabled permissions on my topic gmail with gmail-api-push@system.gserviceaccount.com as a Pub/Sub Publisher.
I have tested the topic in pub/sub by manually sending a message through the console. The message shows up in the simple subscription I made.
main.go
func main() {
    ctx := context.Background()
    // Sets your Google Cloud Platform project ID.
    projectID := "xxx"
    // Creates a client.
    // client, err := pubsub.NewClient(ctx, projectID)
    _, err := pubsub.NewClient(ctx, projectID)
    if err != nil {
        log.Fatalf("Failed to create client: %v", err)
    }
    b, err := ioutil.ReadFile("credentials.json")
    if err != nil {
        log.Fatalf("Unable to read client secret file: %v", err)
    }
    // If modifying these scopes, delete your previously saved token.json.
    config, err := google.ConfigFromJSON(b, gmail.GmailReadonlyScope)
    if err != nil {
        log.Fatalf("Unable to parse client secret file to config: %v", err)
    }
    gmailClient := getClient(config)
    svc, err := gmail.New(gmailClient)
    if err != nil {
        log.Fatalf("Unable to retrieve Gmail client: %v", err)
    }
    var watchRequest gmail.WatchRequest
    watchRequest.TopicName = "projects/xxx/topics/gmail"
    svc.Users.Watch("xxx@gmail.com", &watchRequest)
    ...
The script runs fine, although there's no stdout confirming the Watch service is running.
However, with the above setup, I sent a mail from xxx@gmail.com to itself but the mail does not show up in my topic/subscription.
What else must I do to enable Gmail push notification through pub/sub using Go?
Make sure you are calling the Do method on your UsersWatchCall. svc.Users.Watch only returns the call structure; it doesn't perform the call immediately.
It would look something like this:
res, err := svc.Users.Watch("xxx@gmail.com", &watchRequest).Do()
if err != nil {
    // don't forget to check the error
}

golang request to Orientdb http interface error

I am playing with Golang and OrientDB to test them. I have written a tiny web app which, upon a request, fetches a single document from a local OrientDB instance and returns it. When I benchmark this app with ApacheBench at any concurrency above 1, I get the following error:
2015/04/08 19:24:07 http: panic serving [::1]:57346: Get http://localhost:2480/document/t1/9:1441: EOF
When I benchmark OrientDB itself, it runs perfectly OK with any concurrency factor.
Also, when I change the URL to fetch anything other than this document (another program written in Golang, some internet site, etc.), the app runs OK.
Here is the code:
func main() {
    fmt.Println("starting ....")
    var aa interface{}
    router := gin.New()
    router.GET("/", func(c *gin.Context) {
        ans := getdoc("http://localhost:2480/document/t1/9:1441")
        json.Unmarshal(ans, &aa)
        c.JSON(http.StatusOK, aa)
    })
    router.Run(":3000")
}

func getdoc(addr string) []byte {
    client := new(http.Client)
    req, err := http.NewRequest("GET", addr, nil)
    if err != nil {
        panic(err)
    }
    req.SetBasicAuth("admin", "admin")
    resp, err := client.Do(req)
    if err != nil {
        fmt.Println("oops", resp, err)
        panic(err)
    }
    defer resp.Body.Close()
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    return body
}
Thanks in advance.
The keepalive connections are getting closed on you for some reason. You might be overwhelming the server, or exceeding the maximum number of connections the database can handle.
Also, the current http.Transport connection pool doesn't work well with synthetic benchmarks that open connections as fast as possible, and it can quickly exhaust the available file descriptors or ports (issue/6785).
To test this, I would set Request.Close = true to prevent the Transport from using the keepalive pool. If that works, one way to handle this with keepalive is to specifically check for an io.EOF and retry that request, possibly with some backoff delay.
