How to receive notifications from an Amazon SQS queue - ruby

What is the most efficient way to receive messages from an Amazon SQS queue?
I've been using the Peddler gem to create, register, and subscribe to an Amazon SQS queue that captures Amazon Marketplace order changes. All good there; the SQS queue is receiving the messages fine. The next step I'm a bit fuzzy on, and I need some help before I go down a rabbit hole.
It seems like the SQS queue should just be like a webhook that I can subscribe to, to receive notices. But I'm not seeing that option anywhere.
But then it looks like I can use the Shoryuken gem, or maybe Amazon's own AWS SDK for Ruby, to create workers that poll the queue in order to get notified of new messages.
Is the Shoryuken gem the most efficient way to pull messages from SQS? Or is there a better way?

IMO, Shoryuken is currently the most efficient way to poll SQS messages in Ruby.
You can go ahead and use only the aws-sdk, and that would work, with certain limitations. If you go down that path, you will end up implementing a lot of things around the aws-sdk that Shoryuken already does. With the SDK you can receive messages in a loop, call a Ruby class to consume them, and so on. Shoryuken is a process for polling messages that uses multiple threads for performance. Besides that, a single process can receive messages from multiple queues.
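To make the "use only the aws-sdk" path concrete, here is a minimal long-polling sketch using the aws-sdk-sqs gem; the region, queue URL, and the process method are placeholders, not part of the original answer:

    require 'aws-sdk-sqs'

    sqs = Aws::SQS::Client.new(region: 'us-east-1')
    queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/orders'

    loop do
      # Long polling: wait up to 20 seconds for up to 10 messages.
      resp = sqs.receive_message(
        queue_url: queue_url,
        max_number_of_messages: 10,
        wait_time_seconds: 20
      )

      resp.messages.each do |msg|
        process(msg.body) # your own consumer logic goes here
        # Delete only after successful processing; otherwise the message
        # reappears once its visibility timeout expires.
        sqs.delete_message(queue_url: queue_url, receipt_handle: msg.receipt_handle)
      end
    end

This is exactly the kind of plumbing (polling, concurrency, deletion) that Shoryuken packages up for you.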
It seems like the SQS queue should just be like a webhook that I can subscribe to, to receive notices. But I'm not seeing that option anywhere.
That is not SQS; the service that works like that is AWS SNS. If Amazon Marketplace can also integrate with SNS, you can implement a pub/sub setup that calls webhooks.
PS: Shoryuken author here :)
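For comparison, a minimal Shoryuken worker sketch; the queue name and body handling are illustrative, not from the original answer:

    require 'shoryuken'

    class OrderNotificationWorker
      include Shoryuken::Worker
      # auto_delete removes the message once perform completes successfully.
      shoryuken_options queue: 'marketplace-orders', auto_delete: true

      def perform(sqs_msg, body)
        # sqs_msg is the raw SQS message; body is its payload.
        puts "Received order change: #{body}"
      end
    end

The polling process is then started with a command like: bundle exec shoryuken -q marketplace-orders -r ./order_notification_worker.rb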

Related

AWS SQS List Triggers from SDK

I'm looking for a method to programmatically identify the triggers associated with an SQS queue. Looking through the SQS SDK docs, it doesn't seem this is possible. I thought instead to try from the other end, and it appears the Lambda ListEventSourceMappings function would likely do what I want, since I'm able to provide it with the queue ARN. However, this requires the ListEventSourceMappings permission on all Lambda functions (*), which isn't really ideal - though it shouldn't really hurt, it's just not what I want. Is there another mechanism for this that I'm missing, or another approach?
Lambda polls SQS queues. It doesn't appear that way in the console, because they hide some of the details from you, but behind the scenes there is a process running within the AWS Lambda system that is polling your SQS queue and invoking your Lambda function when a message is available.
SQS doesn't push messages to Lambda (or anywhere else). SQS just holds messages and hands them out to anything that asks for them. So from an SQS perspective, there is no knowledge of who the message consumers are.
Given the above, the only way to find what you want is to use the Lambda ListEventSourceMappings API.
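As an illustration, that call looks like this with the aws-sdk-lambda gem (the region and queue ARN are placeholders):

    require 'aws-sdk-lambda'

    client = Aws::Lambda::Client.new(region: 'us-east-1')
    queue_arn = 'arn:aws:sqs:us-east-1:123456789012:my-queue'

    # Lists the Lambda event source mappings whose source is this queue.
    resp = client.list_event_source_mappings(event_source_arn: queue_arn)
    resp.event_source_mappings.each do |mapping|
      puts "#{mapping.function_arn} (state: #{mapping.state})"
    end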

How to Trigger an AWS Lambda Function from External SQS Queue Activity

I'm trying to configure a Lambda function to consume from an SQS queue that I've been given read and delete permissions to, but that I do not own or configure. Is there a way to use Lambda's SQS trigger functionality for a queue that doesn't exist inside my AWS account?
If not, what are some alternatives that don't involve checking the queue on a scheduled event?
If the owner of the SQS queue gives you the necessary permissions (see the setup docs for what those permissions are), you can do this. But, you shouldn't.
Subscribing to someone else's SQS queue is an anti-pattern. This is because a queue represents a backlog of work, and the implicit functionality is that everything that goes in eventually comes out. All the queue does is separate input flow from output flow (data can flow in both faster and slower than they flow out).
This idea of flow, however, means that when something comes out, it's no longer in the queue. (Caveat here: there are work-arounds to this, but they're usually not preferred). A consumer, however, always has the goal of processing everything in the queue. This may be done by multiple threads under the control of one consumer, but the end result is still that everything is processed. If there are multiple consumers, then they by necessity compete with one another, and none of them get to process everything in the queue.
How do we ensure there aren't multiple consumers? Simple: the consumer owns the queue. No other consumer is granted read permissions. It might well be the case that someone other than the consumer controls the filling of the queue (having been granted write permissions) - and AWS has the perfect solution for this:
SNS Topics: An SNS topic is a source of data. It is, in effect, a publisher. When someone else wants you to have access to their data, they allow you to become a subscriber to their topic. When a new message is published to the SNS topic, everyone who is subscribed to the topic gets a copy. What happens to that copy is decided by the subscriber: it may be acted upon directly, stored for later action, or acted on indirectly, e.g. by being placed in a queue. This is the Pub-Sub model. It separates the details of one entity (the publisher) creating messages and sending them out to many others, from each recipient's (subscriber's) individual decision about how to consume those messages.
TL;DR: get whoever currently owns the queue to publish to an SNS topic instead, then set up a queue (or whatever you prefer) subscribed to that topic.
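For reference, subscribing your own queue to their topic is a single call with the aws-sdk-sns gem (both ARNs here are placeholders):

    require 'aws-sdk-sns'

    sns = Aws::SNS::Client.new(region: 'us-east-1')

    sns.subscribe(
      topic_arn: 'arn:aws:sns:us-east-1:999999999999:their-topic', # publisher's topic
      protocol: 'sqs',
      endpoint: 'arn:aws:sqs:us-east-1:123456789012:my-queue'      # your queue
    )

Note that the queue's access policy must also allow sns.amazonaws.com to send messages to it from that topic ARN.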

Notification microservice API or queue

I'm new to microservices architecture and want to create a centralised notification microservice to send emails/sms to users.
My first option was to create a notification Kafka queue to which all other microservices can send notifications. The notification microservice would then listen on this queue and send messages accordingly. If the notification service were restarted or taken down, we would not lose any messages, as they would be stored on the queue.
My second option was to add a notification API to the notifications microservice. This would make things easier for the other microservices, as they would just have to call an API rather than integrate with the queue. The API would then internally put the message on the notification Kafka queue. The only issue here is that if the API is unavailable, or there is an error, we will lose messages.
Any recommendations on the best way to handle this?
Either works. Some concepts that might help you decide:
A service that fronts Kafka would be helpful to:
Hide the implementation. This gives you the flexibility to swap Kafka out later for something else. Your wrapper API would only respond with a 200 once it has put the notification request on the queue. I also see giving services direct access to "your" queue as similar to allowing services to interact directly with a database they don't own. If you allow direct access to Kafka and Kafka proves to be inadequate, a change to Kafka will require all of your clients to change their code.
Enforce the notification request contract (ensure the body of the request is well-formed). If you want to make sure that all of the items put on the queue are well-formed according to contract, an API can help enforce that. That will help prevent issues later when the "notifier" service picks notifications off the queue to send.
Adding a wrapper API would be less desirable if:
You don't want to/can't spend the time. Maybe deadlines are driving you to hurry, and the days it would take to stand up a wrapper are just too much.
You are a small team and you don't have the resources/tools/time for service-explosion.
Your first design is simple and will work. If you're looking for the advantages I outlined, then consider your second design. And, to make sure I understand it, I would see it unfold like this:
Client 1 needs to send a notification and calls Service A's POST /notifications endpoint.
Service A accepts the request, validates it, puts it on Kafka, and responds to the client with a 200.
Service B picks the notification request up from the Kafka queue.
Service A should be run as multiple instances for reliability.
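A minimal sketch of that wrapper endpoint, assuming Sinatra and the ruby-kafka gem; the broker address, topic name, and contract fields are all illustrative:

    require 'sinatra'
    require 'json'
    require 'kafka'

    kafka = Kafka.new(['localhost:9092'])

    post '/notifications' do
      notification = JSON.parse(request.body.read)

      # Enforce the notification contract before anything reaches the queue.
      unless notification['recipient'] && notification['message']
        halt 422, { error: 'recipient and message are required' }.to_json
      end

      # Respond 200 only once the message is actually on the queue.
      kafka.deliver_message(notification.to_json, topic: 'notifications')
      status 200
    end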

Microservice and RabbitMQ

I am new to Microservices and have a question with RabbitMQ / EasyNetQ.
I am sending messages from one microservice to another microservice.
Each microservice is a Web API. I am using CQRS, where my command handler would consume messages off the queue and do some business logic. In order to call the handler, it needs to make a request to the API method.
I would like to hit the message-consuming code without having to explicitly call the API endpoint. Is there an automated way of doing this?
One suggestion could be to create a separate solution, a console app, that runs the RabbitMQ listener: a while loop reads messages, then calls the Web API endpoint to handle the business logic every time a new message arrives on the queue.
My aim is to create a listener or a startup task that automatically picks messages up from the queue as they arrive and hands them to the command handler, but I'm not sure how to do the "automatic" part as I've described it. I was thinking of utilising an Azure WebJob that runs continuously and acts as the consumer.
Looking for a good architectural way of doing it.
Programming language being used is C#
Much Appreciated
The recommended way of hosting a RabbitMQ subscriber is to write a Windows service using something like the Topshelf library and subscribe to bus events inside that service on its start. We did that in multiple projects with no issues.
If you are using Azure, the best place to host RabbitMQ subscriber is in a "Worker Role".
I am using CQRS, where my command handler would consume messages off the queue and do some business logic. In order to call the handler, it needs to make a request to the API method.
Are you sure this is real CQRS? CQRS occurs when you handle queries and commands differently in your domain logic. Receiving a message via a class called CommandHandler and just reacting to it is not yet CQRS.
My aim is to create a listener or a startup task that automatically picks messages up from the queue as they arrive and hands them to the command handler, but I'm not sure how to do the "automatic" part as I've described it. I was thinking of utilising an Azure WebJob that runs continuously and acts as the consumer. Looking for a good architectural way of doing it.
The simpler you do that, the better. Don't go searching for complex solutions until you've tried all the simple ones. When I was implementing something similar, I just ran a pool of message-handler scripts using Linux cron. A handler popped a message off the queue, processed it, and terminated. Simple.
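That pop-one-and-exit style is easy to sketch in Ruby with the Bunny gem (the queue name, cron schedule, and process method are placeholders; the original scripts' language isn't stated):

    # Started periodically by cron, e.g.:
    #   * * * * * /usr/bin/ruby /opt/handlers/pop_one.rb
    require 'bunny'

    conn = Bunny.new # assumes RabbitMQ on localhost with default credentials
    conn.start
    queue = conn.create_channel.queue('commands', durable: true)

    # pop returns [nil, nil, nil] when the queue is empty.
    _delivery_info, _properties, payload = queue.pop
    process(payload) if payload # your handler logic
    conn.close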
With the CQRS pattern you will have events as well, and corresponding event handlers. Since you are using RabbitMQ for asynchronous communication between the command and query sides, any message put on a specific RabbitMQ channel can be listened for by a callback method.
Receiving messages from the queue is more complex. It works by subscribing a callback function to a queue: whenever a message arrives, the callback function is called by the client library (Pika in Python; EasyNetQ plays the same role in C#).
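In Ruby, that callback-subscription model looks like this with the Bunny gem (the queue name and handle_command are placeholders; EasyNetQ's subscribe API follows the same shape in C#):

    require 'bunny'

    conn = Bunny.new # assumes RabbitMQ on localhost with default credentials
    conn.start

    channel = conn.create_channel
    queue = channel.queue('commands', durable: true)

    # block: true keeps the process alive, making this the long-running
    # listener (console app / Windows service / WebJob) discussed above.
    queue.subscribe(block: true, manual_ack: true) do |delivery_info, _properties, body|
      handle_command(body) # your command handler, invoked directly with no HTTP hop
      channel.ack(delivery_info.delivery_tag)
    end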

Simplest Ruby code for SQS request signing?

I am working in the cool but very constrained environment of Tropo (cloud telephony) Ruby scripting. The entire app is a single JRuby file. No gems, no requires.
I need to send simple messages to a single SQS queue. I don't need to do any other SQS operations. Before I start pulling code out of existing gems to do this, I wanted to see if anyone has standalone code for sending SQS messages or code that does the HTTP request signing that SQS requires.
We ended up going with the following super-simple code to post a message to an SQS queue that already exists. No gems required.
super_simple_sqs.rb
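The original snippet is not reproduced above. Purely as an illustrative reconstruction (not the author's code): standalone signing of that era typically meant Signature Version 2, which needs only the standard library. AWS has since deprecated SigV2 in favor of SigV4, and even stdlib requires may not be allowed in the Tropo sandbox, so treat this strictly as a sketch:

    require 'net/https'
    require 'openssl'
    require 'base64'

    # Percent-encode the way AWS expects (RFC 3986 unreserved set).
    def aws_escape(value)
      value.to_s.gsub(/[^A-Za-z0-9\-_.~]/) do |char|
        char.bytes.map { |b| format('%%%02X', b) }.join
      end
    end

    def send_sqs_message(queue_url, message_body, access_key, secret_key)
      uri = URI.parse(queue_url)
      params = {
        'Action'           => 'SendMessage',
        'MessageBody'      => message_body,
        'AWSAccessKeyId'   => access_key,
        'SignatureVersion' => '2',
        'SignatureMethod'  => 'HmacSHA256',
        'Timestamp'        => Time.now.utc.strftime('%Y-%m-%dT%H:%M:%SZ'),
        'Version'          => '2012-11-05'
      }

      # SigV2 signs "GET\nhost\npath\nsorted-query-string" with HMAC-SHA256.
      query = params.sort.map { |k, v| "#{aws_escape(k)}=#{aws_escape(v)}" }.join('&')
      string_to_sign = "GET\n#{uri.host}\n#{uri.path}\n#{query}"
      signature = Base64.strict_encode64(
        OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha256'), secret_key, string_to_sign)
      )

      http = Net::HTTP.new(uri.host, uri.port)
      http.use_ssl = true
      http.get("#{uri.path}?#{query}&Signature=#{aws_escape(signature)}")
    end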
