Aggregating notifications for multiple subscribers through MQTT - oneM2M

My use case is the following:
I have hundreds (if not thousands) of AEs that are subscribed to one other AE that publishes sensor data in a container. Each of these AEs has the same POA.
The issue is that for each sensor reading the same notification is sent over MQTT to one subscriber at a time: 1 new CIN triggers 1000 notifications.
Is it possible in the oneM2M spec to aggregate all those notifications into one, so that each subscribed AE listens to the same topic and receives the same notification?
It looks like groups and "notificationForwardingURI" may be what I'm looking for, but after reading TS-0001 sections 10.2.7.11 and 10.2.7.12, and table 9.6.8-2, I am not sure that they answer my issue.

It sounds to me that the <subscription> resource may not be set up to do what you are attempting.
If you want to send the notification to multiple AEs, that should be configured by specifying the notificationURIs with the AE-ID of each receiver. From what you have described, it sounds like there is only 1 AE specified in the notificationURIs.
So if you have 1000 notificationURIs then there will be 1000 notifications.
If I understand what you are attempting, you want a single notification delivered to many receivers using the MQTT message delivery mechanism. That is possible, but not defined in oneM2M. The oneM2M MQTT binding is intended to create a one-to-one message flow between the CSE and the AE, while you are trying to use MQTT in its native one-to-many mode. So, while not defined by oneM2M, this can be done in the following manner:
1) Create 1 AE (the notification receiver).
2) Create a <subscription> with the notificationURI set to that AE.
3) Externally, have all of your other listeners subscribe to the MQTT topic of that AE (see the sketch below).
4) Ensure that only 1 AE responds to the notification request.
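As an illustration of steps 3) and 4), here is a minimal Python sketch of an external listener using paho-mqtt. The broker address is a placeholder and the topic string is an assumption: the actual request topic for the receiver AE depends on your CSE and the TS-0010 topic layout.

import paho.mqtt.client as mqtt

# Hypothetical request topic for the single notification-receiver AE;
# check your CSE's TS-0010 topic layout for the real value.
NOTIFICATION_TOPIC = "/oneM2M/req/+/Cnotification-receiver/json"

def on_connect(client, userdata, flags, rc):
    client.subscribe(NOTIFICATION_TOPIC)

def on_message(client, userdata, msg):
    # Every external listener sees the same notification payload.
    print("notification on %s: %s" % (msg.topic, msg.payload))

listener = mqtt.Client()
listener.on_connect = on_connect
listener.on_message = on_message
listener.connect("mqtt-broker.example.com", 1883)  # assumed broker address
listener.loop_forever()

All listeners receive the same notification over the shared topic; only the one designated receiver AE should publish the oneM2M response, per step 4).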

Using the notificationForwardingURI attribute might be the right way to implement your scenario. You also want to check TS-0001 (Functional Architecture), section "10.2.7.10 Subscribe and Un-Subscribe of a group" and TS-0004 (Service Layer Core Protocol Specification), section "7.4.14.2.3 Assign URI for aggregation of notification" and the following sections.
Also check in TS-0010 (MQTT Protocol Binding), section "6.6 URI format" for the format of the notification URI for MQTT.

Related

ZeroMQ pub sub filtering behavior

I'm new to ZeroMQ.
Today I am trying the pub/sub pattern with NetMQ (the ZMQ library for .NET). I noticed some strange behavior (at least, to me!).
I create a subscriber socket and subscribe to topic "12345".
The publisher then publishes messages with topic "1234567890".
The subscriber can receive the messages!
That means the filter does not compare the whole topic string, but only checks whether the published topic "starts with" the subscribed topic.
To confirm that, I changed the subscribed topic to "2345". And the subscriber did not receive the messages.
If I change the publishing topic to "23456890" (while the subscribed topic is "2345"), then the messages come!
I'd like to know, is that the normal behavior of the topic filter in ZeroMQ (and NetMQ)?
Thank you very much!
" ...is that the normal behavior of the topic filter in ZeroMQ (and NetMQ)? "
Yes, this is a documented property of ZeroMQ's implementation of ultra-fast TOPIC-filtering. It has worked this way since the early versions (v2.1.1+, v3.x, v4.x) and most probably it will remain so, due to its ultimate performance and scaling envelopes.
Also be informed that a similar approach was taken by Martin Sustrik, the ZeroMQ co-father, in the foundations of nanomsg and its ports and more recent breeds (pynano, pynng et al.), so it could be called a de-facto industry best practice, could it not?
Establish a new message filter. Newly created Subscriber sockets will filter out all incoming messages. Call this method to subscribe to messages beginning with the given prefix. Multiple filters may be attached to a single socket, in which case a message shall be accepted if it matches at least one filter. Subscribing without any filters shall subscribe to all incoming messages.

const sub = new Subscriber()

// Listen to all messages beginning with 'foo'.
sub.subscribe("foo")

// Listen to all incoming messages.
sub.subscribe()

Params: prefixes – The prefixes of messages to subscribe to.

subscribe(...prefixes: Array<Buffer | string>): void;
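The same prefix matching can be demonstrated with a few lines of pyzmq; this is a minimal self-contained sketch (the port and the sleep that works around the slow-joiner problem are arbitrary):

import time
import zmq

ctx = zmq.Context()

pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5556")

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "12345")   # prefix filter, not exact match

time.sleep(0.5)   # let the subscription propagate to the PUB socket

pub.send_string("1234567890 hello")   # starts with "12345" -> delivered
pub.send_string("2345 hello")         # no matching prefix -> dropped

print(sub.recv_string())   # prints "1234567890 hello"; the second message never arrives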

Event-Driven Microservices: How to send protobuf messages of different types to one AWS SNS (stream)?

I am planning to develop an Event-Driven Microservices.
I create a protobuf project, which defines several types of messages.
EmployeeMessage
UserMessage
ProcessMessage
ApplicantMessage
Then, I compile the protobuf project to different languages, e.g. Ruby, Golang.
Then, the upstream application pushes the following types of events to SNS, and SNS fans each message out to multiple SQS queues, which are owned by the different downstream consumers.
Then, the downstream application consumes messages from SQS.
Here is a diagram to show the whole architecture.
When implementing it, I realized a problem: when protobuf messages of different types are posted to SNS, the consumer doesn't know the type of each message and is not able to decode it.
Questions
How do you implement your Event-Driven microservices? Does each type of message have its own SNS (stream)?
Is there a way to allow me to push different message types to the same SNS (stream)? Do I need to append the message type in front of the payload?
In your protobuf schema, design the message to be sent to SNS like so:
message Event {
  oneof request {
    EmployeeMessage employee_message = 1;
    UserMessage user_message = 2;
    // etc
  }
}
Then have your receivers decode Event and check for the correct message type.
I think this answers both of your questions.
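As a sketch of the receiving side in Python (the generated module name events_pb2 and the handler functions are assumptions for illustration):

from events_pb2 import Event  # hypothetical module generated by protoc

def handle_payload(payload: bytes):
    event = Event()
    event.ParseFromString(payload)
    # WhichOneof returns the name of the field that is set in the
    # 'request' oneof, or None if none is set.
    kind = event.WhichOneof("request")
    if kind == "employee_message":
        process_employee(event.employee_message)  # hypothetical handler
    elif kind == "user_message":
        process_user(event.user_message)          # hypothetical handler
    else:
        raise ValueError("unexpected event type: %r" % kind)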
A couple of options:
1) Use an SNS topic/SQS queue pair for each targeted language. In the application building the protobufs, manage the mapping in SSM/AppConfig. Pros: reduces costs, since SNS and SQS both charge per message handled. Cons: increases infrastructure wrangling, although the map generation is easy to do automatically in CFN.
Example: RubySNS/RubySQS --> RubyApp
2) Store an attribute in the protobuf, or use SNS Message Attributes, to signal the destination (see the sketch below). Message attributes may be cleaner for all the SQS workers. Pros: aligns with your current design if there are constraints forcing it. Cons: requires the downstream workers to have some knowledge of the message.
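A minimal boto3 sketch of the message-attribute option (the topic ARN and the attribute name are made up). Since the SNS Message field must be a string, the protobuf bytes are base64-encoded here:

import base64
import boto3

sns = boto3.client("sns")

def publish_event(serialized: bytes, message_type: str):
    # The messageType attribute lets downstream workers, or an SNS
    # subscription filter policy, route messages without decoding them.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:events",  # assumed ARN
        Message=base64.b64encode(serialized).decode("ascii"),
        MessageAttributes={
            "messageType": {"DataType": "String", "StringValue": message_type},
        },
    )

# e.g. publish_event(event.SerializeToString(), "EmployeeMessage")

An SNS subscription filter policy on messageType can then drop unwanted types before they ever reach a given SQS queue.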

How to use the same RabbitMQ queue in different Java microservices [duplicate]

I have implemented the example from the RabbitMQ website:
RabbitMQ Example
I have expanded it to have an application with a button to send a message.
Now I started two consumers on two different computers.
When I send messages, the first message is sent to computer1, then the second message is sent to computer2, the third to computer1, and so on.
Why is this, and how can I change the behavior to send each message to each consumer?
Why is this
As noted by Yazan, messages are consumed from a single queue in a round-robin manner. The behavior you are seeing is by design: it makes it easy to scale up the number of consumers for a given queue.
how can I change the behavior to send each message to each consumer?
To have each consumer receive the same message, you need to create a queue for each consumer and deliver the same message to each queue.
The easiest way to do this is to use a fanout exchange. This will send every message to every queue that is bound to the exchange, completely ignoring the routing key.
If you need more control over the routing, you can use a topic or direct exchange and manage the routing keys.
Whatever type of exchange you choose, though, you will need to have a queue per consumer and have each message routed to each queue.
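A minimal pika (Python) sketch of that setup; the Java client has equivalent exchangeDeclare/queueDeclare/queueBind calls, and the exchange name here is arbitrary:

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

# One fanout exchange: the producer publishes here; routing keys are ignored.
ch.exchange_declare(exchange="events", exchange_type="fanout")

# Each consumer declares its own exclusive, auto-named queue and binds it
# to the exchange, so every consumer receives a copy of every message.
result = ch.queue_declare(queue="", exclusive=True)
queue_name = result.method.queue
ch.queue_bind(exchange="events", queue=queue_name)

def on_message(channel, method, properties, body):
    print("received: %r" % body)

ch.basic_consume(queue=queue_name, on_message_callback=on_message, auto_ack=True)
ch.start_consuming()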
You can't; it's controlled by the server, check the "Round-robin dispatching" section.
It decides whose turn it is. I'm not sure if there is a set of algorithms you can pick from, but in the end the server will control this (I think the round-robin algorithm is the default), unless you want to use routing keys and exchanges.
I would see this more as a design question. Ideally, producers should create the exchanges and consumers should create the queues: each consumer creates its own queue and hooks it up to an exchange. This makes sure every consumer gets its messages on its own private queue.
What you're doing is essentially the 'work queues' model, which is used to distribute tasks among worker nodes. Since each task needs to be performed only once, the message is sent to only one node. If you want to send a message to all the nodes, you need a different model called 'pub-sub', where each message is broadcast to all the subscribers. The following link shows a simple pub-sub tutorial:
https://www.rabbitmq.com/tutorials/tutorial-three-python.html

dynamic topics with pub-sub with zmq, will that be fine?

I've read the docs; most examples are for basic use cases, where simply one process publishes event X and another subscribes to event X.
But in my application X is a variable. Let's say X stands for my user: one server publishes events on a user-ID topic. If I have 1000s of users connected to a server, will it be okay to publish and subscribe to so many dynamic topics, with another 20 servers subscribing to those 1000s of topics on this server?
Let's see an example.
I have 10 servers, each server with 1000 users connected, so 10k users in total.
I need to send data X from each user to another user.
So I did this:
Server X publishes user-ID data (one publish per connected user, 1k publishes)
Server Y subscribes to user-ID data (10k subscribe requests sent to each server)
What would be the optimal way of doing pub-sub with dynamic topics so that less bandwidth is used among the servers?
Note:
user-ID is just an example where the ID is a dynamic number, and it carries some real-time data which can't be stored anywhere.
In ZeroMQ subscription matching is implemented in the PUB socket with a prefix-matching trie. This is a very efficient data structure, and I would expect that 10K subscriptions and 10K msg/sec would be no problem at all.
The PUB socket only sends messages for matching subscriptions (so there is no "waste"). If a message doesn't match any subscription then the PUB socket will drop it. Matching messages are only sent to SUB sockets that have subscribed to them.
When you add or remove a subscription, the SUB socket sends a message to its connected PUB socket(s). Each PUB socket then updates its topic trie.
My guess is that 10k subs and 10k msgs/s are no problem, but the best thing to do would be to write some test code and try it out. One nice thing about ZeroMQ is that it's not much work to test different architectures.
As far as I know, in the pyzmq API a publisher can send messages on any topic:
socket.send_string("%d %d" % (topic, messagedata))
and subscribers set a filter for the topics of their interest with setsockopt:
topicfilter = "10001"
socket.setsockopt_string(zmq.SUBSCRIBE, topicfilter)
So I think you can fully implement your plan.
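To make the dynamic-topic idea concrete, here is a small pyzmq sketch (addresses and IDs are made up). Since matching is prefix-based, as discussed in the ZeroMQ filtering question above, a delimiter after the ID keeps "user-42" from also matching "user-421":

import zmq

ctx = zmq.Context()

# Publisher side: topics are just message prefixes, so a "user-42 " topic
# exists as soon as something is published on it - there is no per-topic setup.
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5557")
pub.send_string("user-42 some realtime payload")

# Subscriber side (another server): subscribe/unsubscribe per user at runtime.
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://publisher-host:5557")          # assumed address
sub.setsockopt_string(zmq.SUBSCRIBE, "user-42 ")  # note trailing delimiter
# ...later, when user 42 disconnects:
sub.setsockopt_string(zmq.UNSUBSCRIBE, "user-42 ")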

ZeroMQ distribution pattern

I currently have a pub/sub system running which allows clients to connect to a central message routing daemon, subscribe for a range of messages, and then start chattering away. The routing daemon tracks and maintains each subscriber's messages of interest (based on a simple tag) and delivers the appropriate messages of interest as each of the subscribers produce them. Essentially, each connection is considered a potential publisher OR subscriber AND USUALLY both, the daemon handles the routing and delivery as needed.
For example, three clients all connect and subscribe for their message tag(s) (MT) of interest:
Client 1(C1) subscribes to MT => 123
Client 2(C2) subscribes to MT => 123 & 456
Client 3(C3) subscribes to MT => 123 & 456 & 789
C1 produces MT 456: daemon delivers a copy to C2 and C3
C2 produces MT 123: daemon delivers a copy to C1 and C3 (not self)
C3 produces MT 999: daemon delivers it to none (nobody subscribed)
ZeroMQ came up in a discussion with a coworker and after tinkering with it for a few days I don't think I'm seeing the proper pattern for implementing/replacing the system that we currently have in place. Additionally, I would like to use EPGM in order to take advantage of the multicast gains and to eliminate the TCP based daemon, monkey in the middle, that I currently have.
Any suggestions?
It's possible to design a system like that using ZeroMQ. Basically speaking, you create a daemon that binds two sockets: a PULL socket to receive messages from clients and a PUB socket to publish messages. Each client connects a SUB socket and a PUSH socket to the daemon. EPGM might be used for the PUB/SUB sockets, but the PUSH/PULL sockets are still TCP.
The disadvantage of this design is that topic filtering and dropping your own messages must be done manually. For example, you might create a message of three parts:
Topic
ID of producer
Message body
Clients should read messages part by part, immediately dropping the tail of any message they are not interested in. Working with PUB/SUB message envelopes is described in detail in this section of the guide: http://zguide.zeromq.org/page:all#Pub-Sub-Message-Envelopes. Client-side filtering shouldn't affect performance, since all PGM packets must be delivered to all connected receivers anyway.
This design is very simple yet pretty effective. It doesn't cover reliability, high availability, failure recovery and other important aspects - it's all doable with ZeroMQ and covered in the guide. Probably the best feature of ZeroMQ is the ability to start with something simple and add functionality as necessary without pain and/or major rewrites.
Something very similar (plus state snapshots, reliability and many more) is described in the chapter "Reliable Pub-Sub (Clone Pattern)" of the guide: http://zguide.zeromq.org/page:all#toc119
BTW, it's also possible to design a p2p system with the central daemon used only as a name server, but it would definitely be more complex.
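A condensed pyzmq sketch of the PULL+PUB design above (endpoints and the handle callback are placeholders; with EPGM you would swap the tcp:// transport on the PUB/SUB side). The daemon just forwards envelopes:

import zmq

ctx = zmq.Context()
inbound = ctx.socket(zmq.PULL)   # clients PUSH their produced messages here
inbound.bind("tcp://*:5558")
outbound = ctx.socket(zmq.PUB)   # clients SUB here; PUB does the topic filtering
outbound.bind("tcp://*:5559")

while True:
    # Envelope: [topic, producer_id, body], forwarded unchanged.
    outbound.send_multipart(inbound.recv_multipart())

Each client (run as a separate process) pushes its productions to the daemon and drops its own messages on receipt:

import zmq

ctx = zmq.Context()
push = ctx.socket(zmq.PUSH)
push.connect("tcp://daemon-host:5558")   # assumed daemon address
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://daemon-host:5559")
for tag in (b"123", b"456"):             # e.g. C2's message tags of interest
    sub.setsockopt(zmq.SUBSCRIBE, tag)

my_id = b"C2"
push.send_multipart([b"123", my_id, b"payload"])

topic, producer, body = sub.recv_multipart()
if producer != my_id:                    # drop our own messages manually
    handle(topic, body)                  # 'handle' is a hypothetical callback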
