Subscribing to messages for selected routing keys in a single subscription (RabbitMQ / Spring)

How can I subscribe to messages for selected routing keys in a single subscription? Example:
A user sends a message described by "tags" (tags = routing keys):
messagingTemplate.convertAndSend("/topic/example.tagA.tagB.tagC.tagD", sending_message);
I want to receive messages routed with tagA OR tagB, which works when I create 2 subscriptions:
socket.stomp.subscribe("/topic/example.#.tagA.#", notify());
socket.stomp.subscribe("/topic/example.#.tagB.#", notify());
Is there some overhead (e.g. network overhead) if there are not 2 subscriptions but more, e.g. 50?
If the previous solution has overhead, is it possible to join these 2 subscriptions into one? A single subscription would also look better, because it is more concise.
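AMQP topic bindings support only the '*' and '#' wildcards, with no OR-alternation, so a single binding key cannot express "tagA OR tagB"; one subscription per tag is still needed on the wire. As a sketch, the client code can at least stay concise by sharing one handler across all the patterns (subscribeTags is a made-up helper name; stomp is assumed to be an already-connected STOMP client):

```javascript
// Subscribe the same handler to one topic pattern per tag and return
// the subscription handles so they can be unsubscribed later.
function subscribeTags(stomp, tags, handler) {
  return tags.map(function (tag) {
    return stomp.subscribe("/topic/example.#." + tag + ".#", handler);
  });
}
```

For example, subscribeTags(socket.stomp, ["tagA", "tagB"], notify()) reproduces the two subscriptions above with one call.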


Sending to a queue with and without a selector creates a big queue

I have consumers in my system that consume from the same destination; some of them have a specific selector defined and one does not.
As a result, I see that a lot of messages are stuck in the queue for the selector consumers (the messages do not match their selectors).
Something like this:
consumer1: myMessageType = 'Funny'
consumer2: myMessageType = 'Sad'
consumer3: no selector defined
Message 1 : myMessageType = 'Funny'
Message 2 : myMessageType = 'Funny'
Message 3 : myMessageType = 'Sad'
Message 4 : myMessageType = 'Sad'
Message 5 : myMessageType = 'Weird'
Message 6 : myMessageType = 'Weird'
When I look at the queue (in the hawtio console), I see that consumers 1 and 2 have a lot of messages in the queue that they cannot consume, because those messages do not match their selectors.
Why is that? Am I abusing the AMQ system?
Queues can only provide messages to consumers within the maxPageSize. This is done for performance reasons, to avoid scanning the entire data store for messages. If consumers are starved of messages, it means you have a gap in your consumer selector coverage.
You either need to:
1. Add a consumer with a selector that catches all the 'rest' of the messages
2. Move to server-side filtering of messages using filtered composite destinations
3. Add a content-based router (i.e. Camel, Mule, etc.) to sort messages into individual queues for consumers, so they do not need selectors
There is a pretty good case to be made that options #2 and #3 are cleaner architecture than trying to solve it with #1, since they keep all the information about the selectors in one place, rather than scattered across different consumer configurations.
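For option #1 in this example, the catch-all consumer's selector just needs to negate the existing ones. A minimal sketch using standard JMS selector syntax and the myMessageType property from the question:

```
myMessageType NOT IN ('Funny', 'Sad')
```

A consumer created with this selector would pick up the 'Weird' messages (and anything else the other two selectors miss), closing the coverage gap.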

PostGraphile subscriptions - what does "topic" refer to?

I'm using PgPubsub and I'm trying to get my head around listen and its topic: "" argument, i.e. what to put there.
For example, let's say I have a <PostList> component that renders a list of <Post> and I want to update the list when a Post is created or deleted.
I'm not sure how to structure my subscription so I'm listening for changes to PostList. Here's a screenshot of my GraphiQL:
In pubsub (publish-subscribe), messages are published to a "topic" and you can subscribe to that topic to receive the messages that are published there.
You appear to be using the "simple subscriptions" functionality in PostGraphile, so I'll answer assuming that's the case.
With the subscription listen(topic: "whatGoesHere?") you have, you need to broadcast to the postgraphile:whatGoesHere? topic to trigger a subscription event. You can do this by issuing the SQL statement NOTIFY "postgraphile:whatGoesHere?", '{"ok": true}';, for example with psql:
$ psql your_database_here
[your_database_here] # NOTIFY "postgraphile:whatGoesHere?", '{"ok": true}';
NOTIFY
[your_database_here] #
Assuming your GraphQL subscription is running, this should cause the selection set to be evaluated and the results to be sent to GraphiQL.
You'll probably want to fire this NOTIFY statement from a function or trigger; you can read more about that in the PostGraphile Subscriptions documentation.
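As a sketch of what such a trigger could look like (assuming a hypothetical "posts" table backing the <PostList> component; the channel name passed to pg_notify must be the listen topic prefixed with "postgraphile:"):

```sql
-- Hypothetical trigger: fire one notification per statement whenever
-- a row is inserted into or deleted from the assumed "posts" table.
CREATE FUNCTION notify_post_list_changed() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('postgraphile:whatGoesHere?', json_build_object('ok', true)::text);
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER post_list_changed
  AFTER INSERT OR DELETE ON posts
  FOR EACH STATEMENT
  EXECUTE PROCEDURE notify_post_list_changed();
```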

Is it possible to receive MQTT messages from many IoT devices using only one Lambda function?

We are setting up the infrastructure in AWS to collect data from IoT devices. Once the devices are registered, they will start sending JSON messages to a few MQTT topics. In order to receive and parse the messages and save the data into a database, I plan to create a rule that triggers a Lambda function when a message is received. The Lambda function does the parsing.
Based on the AWS IoT documentation, a rule can be created under IoT to evaluate messages sent by your things, with a query like SELECT * FROM 'mymsgs/+'. It appears that the rule is not associated with any particular device. So can I assume it can listen to the topics from all devices under the same account? If that is the case, I can have just one Lambda function to process all the messages that come from different devices.
Correct, topic rules are not associated with any device. Use the FROM clause to control which messages they receive. You might want to update the SQL statement to
SELECT * as data, topic() as topic FROM 'mymsgs/+'
so that your Lambda can know which topic the message was sent on. If a device publishes { "foo": "bar", "baz": 100 } on topic mymsgs/device1, then
{
  "data": {
    "foo": "bar",
    "baz": 100
  },
  "topic": "mymsgs/device1"
}
will be sent to the Lambda function.
You can also use IoT policies attached to thing certificates to enforce that a thing is only publishing on the topics it should.
If the number of topics is small, you can also do the following:
SELECT *, topic() as topic FROM 'mylog/+' where regexp_matches(topic(), 'mylog/\b(info|error|warn)\b') = TRUE
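On the Lambda side, a single handler can then fan out on the topic. A minimal sketch (in Python; the event shape follows the "SELECT * as data, topic() as topic" projection above, and the parsing/persistence details are placeholders):

```python
# Minimal sketch of one Lambda handler serving all devices.
def lambda_handler(event, context):
    topic = event["topic"]              # e.g. "mymsgs/device1"
    device_id = topic.split("/", 1)[1]  # derive the device from the topic
    payload = event["data"]             # the device's original JSON message
    # ... parse `payload` and save it to your database here ...
    return {"device": device_id, "fields": sorted(payload.keys())}
```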

How to architect a web-socket server with client subscriptions to specific responses in Phoenix?

I'm developing a web-socket server using the Phoenix Framework that needs to send real-time messages to my clients.
The basic idea of my web-socket server is that a client can subscribe to some type of information and expect to receive only that; other clients would never receive it unless they subscribe to it too. The same information is broadcast to every (and only) client subscribed to it, in real time.
Also, this information is separated into categories and subcategories, going down to 4 levels of categories.
So, for example, let's say I have 2 types of category information, CatA and CatB. Each category can have subcategories, so CatA can have CatA.SubCatA and CatA.SubCatB subcategories; each subcategory can also have other subcategories, and so on.
This information is generated by services, one for each root category (they handle all the information for the subcategories too), so we have CatAService and CatBService. These services need to run as the server starts, always generating new information and broadcasting it to anyone that is subscribed.
Now, I have clients that will try to subscribe to this information. My solution for now is to have a channel for each information type available, so a client can join a channel to receive information of the channel's type.
For that I have something like that in the js code:
let channel = socket.channel("CatA:SubCatA:SubSubCatA", {})
channel.join()
channel.on("new_info", (payload) => { ... })
In this case, I would have a channel that all clients interested in SubSubCatA (from SubCatA, from CatA) can join, and a service for CatA that would generate and broadcast the information for all its subcategories, and so on.
I'm not sure if I was able to explain exactly what I want, but if something is not clear, please tell me what, so I can better explain it. Also, I made this (very bad) image as an example of how all the communication would happen: https://ibb.co/fANKPb
Also, note that I could have only one channel for each root category and broadcast all the subcategories' information to everyone that joined that category channel, but I'm very concerned about performance and network bandwidth, so my objective is to send the information only to the clients that requested it.
Doing some tests here, it seems that if the client joins the channel as shown in the JS code above, I can do this:
MyServerWeb.Endpoint.broadcast "CatA:SubCatA:SubSubCatA", "new_info", message
and that client (and all the other clients listening to that channel, but only then) will receive that message.
So, my question is divided into two parts. The first, more generic one, is: what are the correct ways to achieve what I described above?
The second is whether the solution I already came up with is a good way to solve this, since I'm not sure if the length of the string "CatA:SubCatA:SubSubCatA" creates overhead when the server parses it, or if there is some other limitation I'm not aware of.
Thanks!
You have to create separate channels for each class of client and, depending on the IDs you receive, broadcast the messages after checking which clients have joined the channel:
def join("groups:" <> group_slug, _params, socket) do
  %{team_id: team_id, current_user: user} = socket.assigns

  case Repo.get_by(Group, slug: group_slug, team_id: team_id) do
    nil ->
      {:error, %{message: "group not found"}}

    group ->
      case GroupAuthorization.can_view?(group.id, user.id) do
        true ->
          messages = MessageQueries.group_latest_messages(group.id, user)
          json = MessageView.render("index.json", %{messages: messages})
          send self(), :after_join
          {:ok, %{messages: json}, assign(socket, :group, group)}

        false ->
          {:error, %{message: "unauthorized"}}
      end
  end
end
This is an example of sending messages only to the users in groups which are subscribed and joined to the group. Hope this helps.

Implementing bulk-messaging from Salesforce to/from Twilio, hitting Salesforce API limits

I am building an integration between Salesforce and Twilio that sends/receives SMS using the TwilioForce REST API. The main issue is getting around the 10-callout API limit in Salesforce, as well as the prohibition on HTTP callouts from a trigger.
I am basing the design on Dan Appleman's Asynchronous Request processes, but in either Batch mode or RequestAsync(), ASync(), Sync(), repeat... I'm still hitting the limits.
I'd like to know how other developers have done this successfully; the integrations have been there for a while, but the examples are few and far between.
Are you sending unique messages for each record that has been updated? If not, then why not send one message to multiple recipients to save on your API limits?
Unfortunately, if you do actually need to send more than 10 unique messages, there is no way to send messages in bulk with the Twilio API. You could instead write a simple application that runs on Heroku or some other application platform that you can call out to, which will handle the SMS functionality for you.
I have it working now using the following structure (I apologize for the formatting - it's mostly pseudocode):
ASyncRequest object:
AsyncType (picklist: 'SMS to Twilio' is it for now),
Params (long text area: comma-separated list of Ids)
Message object:
To (phone), From (phone), Message (text), Sent (boolean), smsId (string), Error (text)
Message trigger: passes trigger details to CreateAsyncRequests() method.
CreateAsyncRequests: evaluate each new/updated Message__c; if Sent == false for any messages, we create an AsyncRequest, type=SMS to Twilio, Params += ',' + message.Id.
// Create a list to be inserted after all the Messages have been processed
List<AsyncRequest__c> requests = new List<AsyncRequest__c>();
Once we reach 5 message.Ids in a single AsyncRequest.Params list, add it to requests.
If all the messages have been processed and there's a request with < 5 Ids in Params, add it to requests as well.
If requests.size() > 0 {
insert requests;
AsyncProcessor.StartBatch();
}
AsyncProcessor implements Database.Batchable and Database.AllowsCallouts, and queries ASyncRequest__c for any requests that need to be processed, which in this case will be our Messages list.
The execute() method takes the list of ASyncRequests, splits each Params value into its component Message Ids, and then queries the Message object for those particular Messages.
StartBatch() calls execute() with 1 record at a time, so that each execute() process will still contain fewer than the maximum 10 callouts.
Each Message is processed in a try/catch block that calls SendMessage(), sets Message.smsId = Twilio.smsId and sets Message.Sent = true.
If no smsId is returned, then the message was not sent, and I set a boolean bSidIsNull = true indicating that (at least) one message was not sent.
** If any message failed, no smsIds are returned EVEN FOR MESSAGES THAT WERE SUCCESSFUL **
After each batch of messages is processed, I check bSidIsNull; if true, then I go back over the list of messages and put any that do not have an smsId into a map indexed by the Twilio number I'm trying to send them From.
Since I limited each ASyncRequest to 5 messages, I still have the use of a callout to retrieve all of the messages sent from that Twilio.From number for the current date, using
client.getAccount().getMessages('From' => fromNumber, 'DateSent' => currentDate)
Then I can update the Message.smsIds for all of the messages that were successful, and add an error message to Message.Error_on_Send__c for any that failed.
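The batching step above (capping each AsyncRequest at 5 message Ids) is what keeps each batch execution under the callout limit: five SendMessage() callouts plus the one follow-up query callout stays below 10. As a language-neutral illustration of that step (Python here, not Apex; the function name is mine):

```python
# Split message Ids into comma-separated "Params" strings of at most
# five Ids each, mirroring the AsyncRequest construction described above.
def build_async_request_params(message_ids, chunk_size=5):
    params = []
    for i in range(0, len(message_ids), chunk_size):
        params.append(",".join(message_ids[i:i + chunk_size]))
    return params
```

Each resulting string becomes one AsyncRequest__c record's Params value, processed one record per execute() call.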
