MassTransit - publish to all consumer instances

I am looking for a way for each consumer instance to receive a message that is published to RabbitMQ via MassTransit. The scenario: we have multiple microservices that need to invalidate a cache on notification. Traditional pub-sub won't work here, because there will be five consumers of the same type (it's the same code running in each service instance), so only one of them would receive the message.
Message observation could be an option, but that would mean the messages are never consumed and hang around on the bus forever.
Can anyone suggest a pattern to use in the context of MassTransit?
Thanks in advance.

You should create a management endpoint in each service, which could even be a temporary queue (just request a receive endpoint without a queue name and one will be dynamically generated). Then put your cache invalidation consumers on that endpoint. Each service instance will receive its own copy of the message (when Publish is called), and the queues and bindings will automatically be removed once the service exits.
This is exactly how the bus endpoint works, but in your case you're creating a receive endpoint which can have consumer message type bindings, so that published messages are received, one copy per service instance.
cfg.ReceiveEndpoint(e => { ... });
Note that the queue name is not specified, and will be automatically generated uniquely.
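As a sketch of that configuration (assuming the RabbitMQ transport; the CacheInvalidated contract, the consumer, and LocalCache are illustrative placeholders, not MassTransit APIs):

using System.Threading.Tasks;
using MassTransit;

public class CacheInvalidated
{
    public string CacheKey { get; set; }
}

public static class LocalCache
{
    // Stand-in for whatever cache your service actually uses.
    public static void Evict(string key) { }
}

public class CacheInvalidationConsumer : IConsumer<CacheInvalidated>
{
    public Task Consume(ConsumeContext<CacheInvalidated> context)
    {
        // Evict this instance's local cache entry.
        LocalCache.Evict(context.Message.CacheKey);
        return Task.CompletedTask;
    }
}

public class Program
{
    public static async Task Main()
    {
        var busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
        {
            cfg.Host("localhost");

            // No queue name: a unique queue name is generated, so each
            // running service instance gets its own queue bound to the
            // CacheInvalidated message type, and therefore its own copy
            // of every published message. The queue and its bindings
            // are removed when the service exits.
            cfg.ReceiveEndpoint(e =>
            {
                e.Consumer<CacheInvalidationConsumer>();
            });
        });

        await busControl.StartAsync();
    }
}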

Related

Create SQS queue for consuming events with spring boot

I have a spring boot application which needs to consume different types of events. The events will be published by another service to an SQS queue.
My question is: should I create one SQS queue where all the events are published and consumed, or one SQS queue per event type?
It is possible to create a queue for each message type, but I don't recommend it, as you'll have to add a listener for each queue every time you add a new message type.
The first thing you can do is create a message structure that carries the topic inside it. Once your SQS handler receives a message, it checks the topic and routes it to the right handler or handlers.
I still don't like this option, because it means you are effectively implementing a message broker yourself, and I don't think that's what you are aiming for.
The best solution would be to use SNS and publish each message type to its own topic. Then configure SNS to route each topic to the appropriate SQS queue, and let your Spring SQS message handler dispatch to the right handler by topic.
While this solution is similar to the previous one, using SNS gives you the ability to publish a message to more than one queue (and hence more than one client), and it keeps the topic outside the message, which is where it belongs.
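As a sketch of the SNS wiring, using the AWS SDK (shown in C#; the Java SDK calls are analogous, and all ARNs and names are illustrative):

using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;

var sns = new AmazonSimpleNotificationServiceClient();

// One-time setup: subscribe the queue to the topic. Fanning the same
// event out to more consumers is just additional subscriptions.
await sns.SubscribeAsync(new SubscribeRequest
{
    TopicArn = "arn:aws:sns:us-east-1:123456789012:user-registered",
    Protocol = "sqs",
    Endpoint = "arn:aws:sqs:us-east-1:123456789012:registration-events"
});

// Publishing: one topic per event type keeps the "topic" outside the
// message body.
await sns.PublishAsync(new PublishRequest
{
    TopicArn = "arn:aws:sns:us-east-1:123456789012:user-registered",
    Message = "{\"userId\":\"42\"}"
});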

Azure Service bus queue - What if receiving Web API could not process the message

As part of a data integration between two applications, we plan to send data to an Azure Service Bus queue. On the receiving end I have an API that loads the data into the target DB.
I need to understand how I can handle the following:
1. How should the API be notified whenever a message arrives in the queue? I am thinking of an Azure Function, a WebJob, or some scheduling component.
2. What if the API is down? Messages should be retained in the queue.
3. What if the API receives the message but the target DB is down? In that case, the messages should be retained in the queue for retries.
Please guide me toward the correct approach to implement this.
Thanks!
Logic Apps! Use Logic Apps with the Service Bus queue trigger so that it fires on the arrival of new messages in the Service Bus queue. To send the data to the Web API, use the Logic Apps HTTP connector.
It scales per message.
1. How should the API be notified whenever a message arrives in the queue? Logic Apps, via the Service Bus trigger.
2. What if the API is down? Azure Service Bus queues give you FIFO, batching, and so on; messages are retained until they are dequeued and deleted.
3. What if the API receives the message but the target DB is down? Check the transaction features of Azure Service Bus queues: a message that is not completed stays in the queue and is retried.
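On that third point, the key mechanism is Service Bus's peek-lock receive mode: a message is only removed from the queue once it is explicitly completed, and an abandoned message is redelivered. A minimal sketch with the Azure.Messaging.ServiceBus .NET SDK (queue name, connection string, and the DB write are illustrative placeholders):

using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

var connectionString = "<service-bus-connection-string>";
await using var client = new ServiceBusClient(connectionString);

var processor = client.CreateProcessor("integration-queue",
    new ServiceBusProcessorOptions
    {
        // Complete messages explicitly, so a failure leaves them queued.
        AutoCompleteMessages = false
    });

processor.ProcessMessageAsync += async args =>
{
    try
    {
        await WriteToTargetDbAsync(args.Message.Body.ToString());
        // Only now is the message actually removed from the queue.
        await args.CompleteMessageAsync(args.Message);
    }
    catch
    {
        // Release the lock: the message stays in the queue and is
        // redelivered, up to the queue's MaxDeliveryCount.
        await args.AbandonMessageAsync(args.Message);
    }
};

processor.ProcessErrorAsync += _ => Task.CompletedTask;

await processor.StartProcessingAsync();
Console.WriteLine("Processing; press Enter to exit.");
Console.ReadLine();
await processor.StopProcessingAsync();

// Placeholder for the API call that loads data into the target DB.
static Task WriteToTargetDbAsync(string payload) => Task.CompletedTask;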

Apache Kafka: How to check, that an event has been fully handled?

I am facing an issue when decoupling two systems with an event/message broker like Apache Kafka. The issue relates to a frontend triggering actions in a backend:
How does the producer (frontend service) know that the published event has been properly handled by all the backend services (the consumers), when the publisher knows neither the "identities" nor the number of the consuming backends?
To be precise: users can change, for example, their email address in a frontend UI. An associated service publishes that "change request" event to an appropriate topic within Kafka. The UI form is then "locked" to prevent subsequent change requests until the change event has been fully processed by every consumer. But it's unclear how to detect this state.
You can use another topic to publish handled jobs. So your front-end publishes to one topic and your back-end publishes to another once it is done.
In Kafka terms, neither the producer nor consumer are considered backend - they're both clients connecting to a broker, which is generally considered to be the backend.
A producer will know that it has produced a message successfully, by virtue of the acks setting. A consumer will read a message, and then at a later point, its offset will be updated to a point corresponding to the last message it read. However, there is generally no interaction between a producer and a consumer, and they are generally completely unaware of one another.
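A sketch of the completion-topic idea from the first answer, using the Confluent.Kafka client in C# (topic names, the correlation key, and payloads are illustrative assumptions):

using System;
using Confluent.Kafka;

// The message key carries the correlation id that ties a change
// request to its completion event.
const string RequestTopic = "email-change-requests";
const string CompletedTopic = "email-change-completed";

var correlationId = Guid.NewGuid().ToString();

// Frontend: publish the change request, then lock the form.
var producerConfig = new ProducerConfig { BootstrapServers = "localhost:9092" };
using (var producer = new ProducerBuilder<string, string>(producerConfig).Build())
{
    await producer.ProduceAsync(RequestTopic,
        new Message<string, string> { Key = correlationId, Value = "new@example.com" });
}

// Frontend: watch the completion topic; unlock the form when the
// matching correlation id comes back from the backend.
var consumerConfig = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",
    GroupId = "frontend-ui",
    AutoOffsetReset = AutoOffsetReset.Earliest
};
using var consumer = new ConsumerBuilder<string, string>(consumerConfig).Build();
consumer.Subscribe(CompletedTopic);

while (true)
{
    var result = consumer.Consume();
    if (result.Message.Key == correlationId)
    {
        // Backend confirmed: unlock the UI form here.
        break;
    }
}

Note that this only waits for a single confirmation; if several independent backends must each confirm, the frontend would also need to know how many completion events to expect per correlation id, which is exactly the difficulty raised in the question.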

MassTransit3 how to make request from consumer

I want to make a req/res request from the IConsumer.Consume() method, but I don't see any method on ConsumeContext<> that returns a reference to IRequestClient<,>. Do I need to hold a reference to IBusControl somewhere and use that, or can I use the context somehow?
In this case, it is best to create the request client outside of the consumer and pass it as a dependency to the consumer as an IRequestClient<,> interface. The request client is created with IBus, which is outside of the consumer context.
It also makes the request less likely to deadlock with the broker, because responses are received on the bus endpoint rather than on the consumer's receive endpoint (which, if you had a concurrency limit of 1, would never finish).
It's also not possible to connect consumers to a started receive endpoint, which is a requirement of the request/response handling (it happens under the covers). The bus, however, can connect consumers for messages which are sent (responses are sent to the bus address, versus being published) which is why it is used for responses to the requests.
To keep message tracking continuous, it may be nice to set the InitiatorId on the outbound request to the CorrelationId of the consumer message, as well as copying the ConversationId. This helps with tracing and keeping track of the overall message command/event flow.
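A sketch of that arrangement against the MassTransit 3 request client API (the contracts, consumer, service address, and timeout are illustrative):

using System;
using System.Threading.Tasks;
using MassTransit;

// Illustrative request/response contracts.
public class OrderSubmitted { public string OrderId { get; set; } }
public class CheckStatus { public string OrderId { get; set; } }
public class StatusResult { public string Status { get; set; } }

public class OrderConsumer : IConsumer<OrderSubmitted>
{
    readonly IRequestClient<CheckStatus, StatusResult> _statusClient;

    // The request client is created from IBus at composition time and
    // injected, rather than being resolved from the ConsumeContext.
    public OrderConsumer(IRequestClient<CheckStatus, StatusResult> statusClient)
    {
        _statusClient = statusClient;
    }

    public async Task Consume(ConsumeContext<OrderSubmitted> context)
    {
        StatusResult result = await _statusClient.Request(
            new CheckStatus { OrderId = context.Message.OrderId });
        // ... act on the result ...
    }
}

public static class Composition
{
    public static IRequestClient<CheckStatus, StatusResult> CreateClient(IBus bus)
    {
        // Address and timeout are illustrative.
        return bus.CreateRequestClient<CheckStatus, StatusResult>(
            new Uri("rabbitmq://localhost/status-service"),
            TimeSpan.FromSeconds(30));
    }
}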

Enterprise Integration Scatter-Gather across multiple app servers

I am looking for a way to aggregate JMS messages sent from multiple application servers, load-balanced via JMS. The problem is basically this:
At the end of our registration form, there is a container in the HTTP session, and the container holds two objects of the same type. Each object needs to be processed, then the container needs to be delivered. Processing an object is resource intensive, so the processing is requested (InOnly, asynchronous) and queued up in OpenMQ. Each JMS message is consumed by one of two competing consumers, which are essentially duplicate application servers that also serve the web requests.
Currently I just have a hard-coded delay before delivering the container, but with increased traffic there are plenty of delivery failures, because the objects have not finished processing yet. I am using Apache Camel 2.6 and Spring Remoting, and the Camel Aggregator would be ideal, except that each app server must have a duplicate Camel context, so they would compete for the aggregate components.
Perhaps a temporary queue and endpoint for each aggregation, but I'm not sure how to go about doing that, especially the tear-down. What would be the best way to process both objects, then deliver the container?
You could send a message to a topic when each object is finished; the message should contain the context id and the object id. Then you would have a route consuming from that topic. When a message is received, it would persist the state in a simple DB table and check whether the other confirmation has already been persisted. If it has, it would deliver the container.
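The heart of that approach is the "record one confirmation, check for the other" step. Below is a sketch of that logic in C#, using an in-memory dictionary as a stand-in for the simple DB table; in the load-balanced setup described above, the state must of course live in the shared database so that either server can perform the check:

using System.Collections.Generic;

public class CompletionTracker
{
    readonly object _sync = new object();
    // context id -> object ids confirmed so far
    readonly Dictionary<string, HashSet<string>> _confirmed =
        new Dictionary<string, HashSet<string>>();

    // Called by the topic consumer for each "object finished" message.
    // Returns true exactly once per context, when both objects are done;
    // the caller then delivers the container.
    public bool Confirm(string contextId, string objectId, int expected = 2)
    {
        lock (_sync)
        {
            if (!_confirmed.TryGetValue(contextId, out var done))
            {
                done = new HashSet<string>();
                _confirmed[contextId] = done;
            }
            done.Add(objectId);

            if (done.Count < expected)
                return false;

            _confirmed.Remove(contextId); // tear-down for this context
            return true;
        }
    }
}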
