Azure Service Bus queue - What if the receiving Web API could not process the message?

As part of a data integration between two applications, we plan to send data to an Azure Service Bus queue. On the receiving end I have an API that loads the data into the target DB.
I need to understand how to handle the following:
1. How should the API be notified whenever a message arrives in the queue? I am thinking of an Azure Function, a WebJob, or some scheduling component.
2. What if the API is down? Messages should be retained in the queue.
3. What if the API receives the message but the target DB is down? In that case, my messages should be retained in the queue for retries.
Please guide me toward the correct approach to implement this.
Thanks!

Logic Apps! Use a Logic App with the Service Bus queue trigger so that it fires on the arrival of new messages in the queue. To send the data to the Web API, use the Logic Apps HTTP connector.
It scales per individual message.
How should the API be notified whenever a message arrives in the queue? Logic Apps.
What if the API is down? Messages are retained: Azure Service Bus queues support FIFO ordering and batching, and a message is kept until it is dequeued and deleted.
What if the API receives the message but the target DB is down? Check the transaction feature of Azure Service Bus queues; with the default peek-lock receive mode, a message that is not completed becomes available again for retry.
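If you prefer the Azure Functions route the question mentions, a queue-triggered function gives the same push-style behaviour. Here is a minimal sketch, assuming the in-process Functions model; the queue name, connection setting name, and API URL are hypothetical. Because Functions receives in peek-lock mode, throwing on failure (API down, DB down) abandons the message, so Service Bus redelivers it until MaxDeliveryCount is reached, after which it is dead-lettered:

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;

    public static class QueueToApiFunction
    {
        private static readonly HttpClient Http = new HttpClient();

        [FunctionName("QueueToApi")]
        public static async Task Run(
            [ServiceBusTrigger("integration-queue", Connection = "ServiceBusConnection")] string message,
            ILogger log)
        {
            // Forward the queue message to the receiving Web API (hypothetical URL).
            var response = await Http.PostAsync(
                "https://target-api.example.com/load",
                new StringContent(message, Encoding.UTF8, "application/json"));

            // Throwing here abandons the message, so it stays in the queue for retry.
            response.EnsureSuccessStatusCode();
            log.LogInformation("Message forwarded to the API.");
        }
    }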

How does the DLQ work in an Azure Service Bus queue?

I am learning how the DLQ works in an Azure Service Bus queue, i.e., unconsumed messages end up in the DLQ. I have enabled dead-lettering on message expiration (deadLetteringOnMessageExpiration).
References:
Azure Service Bus - Subscriptions and DLQ
Azure Service Bus - *move* message from DLQ to main
https://learn.microsoft.com/en-us/azure/service-bus-messaging/enable-dead-letter
ARM template:
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-resource-manager-namespace-queue
Questions:
If deadLetteringOnMessageExpiration is enabled, would a DLQ be available for that queue (as shown in the portal)?
If yes, how can I process messages from the DLQ? (I guess I can view such messages there, but I'm not sure what happens next.)
My goal is to create a queue with a DLQ where unprocessed messages can be processed at some point; what is the best way to achieve that?
If deadLetteringOnMessageExpiration is enabled, would a DLQ be available for that queue?
The dead-letter queue always exists for queues and subscriptions, regardless of how you configure your entity.
If yes, how can I process messages from the DLQ?
Up to you. You can peek the dead-lettered messages, receive and process them, etc. It really depends on how you want to handle those messages in the context of your system.
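As one concrete option, here is a minimal sketch of receiving from the DLQ and resubmitting to the main queue, assuming the Azure.Messaging.ServiceBus SDK; the connection string and the queue name "orders" are hypothetical:

    using Azure.Messaging.ServiceBus;

    await using var client = new ServiceBusClient("<connection-string>");

    // The DLQ is addressed as a sub-queue of the main entity.
    ServiceBusReceiver dlqReceiver = client.CreateReceiver(
        "orders",
        new ServiceBusReceiverOptions { SubQueue = SubQueue.DeadLetter });
    ServiceBusSender sender = client.CreateSender("orders");

    ServiceBusReceivedMessage dead = await dlqReceiver.ReceiveMessageAsync();
    if (dead != null)
    {
        // Resubmit a copy to the main queue, then settle the DLQ message.
        await sender.SendMessageAsync(new ServiceBusMessage(dead));
        await dlqReceiver.CompleteMessageAsync(dead);
    }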

Microservice retrieving OrderId from an Azure Service Bus topic

I am trying to understand Azure Service Bus, which I intend to use in a current project. I am completely new to this.
Here is the scenario. There are two microservices.
Microservice A - writes order details to a database and also writes the OrderId to an Azure Service Bus topic.
Microservice B - should be able to pick up the OrderId from the same topic whenever one exists and use it to process some other transactions. There can be multiple OrderIds generated in a day by users.
How do I set up Microservice B to perform this duty? How does this work in reality?
How can this microservice constantly monitor the topic?
You could write a job to check for new messages every X seconds, or even continuously, but that's not a good approach. A better way is the pub-sub approach.
In pub-sub, when a message arrives at the topic, any service subscribed to the topic receives it and doesn't need to poll for it. Your Microservice B should be subscribed to the topic, and the message will be pushed to it.
For your Microservice B to subscribe to messages, you have two options. You can use an Azure Function, which will automatically process the message for you. If you don't want to write an Azure Function, you will have to use the Azure Service Bus library to get this done (a sketch follows below). The official example only covers a console app (Pub-Sub Azure Service Bus), but for a .NET Core Web API you can look here: Subscribe Azure Service Bus.
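For the library route, here is a minimal sketch using the Azure.Messaging.ServiceBus processor, which pushes messages to your callback instead of polling; the topic name "orders-topic" and subscription name "microservice-b" are hypothetical. In a .NET Core Web API you would typically host this in a BackgroundService:

    using System;
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;

    await using var client = new ServiceBusClient("<connection-string>");
    ServiceBusProcessor processor = client.CreateProcessor("orders-topic", "microservice-b");

    // Invoked for each OrderId message pushed to the subscription.
    processor.ProcessMessageAsync += async args =>
    {
        string orderId = args.Message.Body.ToString();
        // ... process the other transactions for this OrderId ...
        await args.CompleteMessageAsync(args.Message);
    };

    processor.ProcessErrorAsync += args =>
    {
        Console.WriteLine(args.Exception);
        return Task.CompletedTask;
    };

    await processor.StartProcessingAsync();
    Console.ReadLine(); // keep the process alive while messages are pushed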

Should an API Gateway communicate via a queue or directly with other μServices?

I was wondering which of my two methods is more appropriate, or is there even another one?
(1) Direct
Direct communication between GATEWAY and μSERVICE A
UI sends HTTP request to GATEWAY
GATEWAY sends HTTP request to μSERVICE A
μSERVICE A returns either SUCCESS or ERROR
Event is stored in EVENT STORE and published to QUEUE
PROJECTION DATABASE is updated
Other μSERVICES might consume event
(2) Events
Event-based communication via a message queue
UI sends HTTP request to GATEWAY
GATEWAY published event to QUEUE
μSERVICE A consumes event
Event is stored in EVENT STORE and published to QUEUE
PROJECTION DATABASE is updated
Other μSERVICES might consume event
GATEWAY consumes event and sends response (SUCCESS or ERROR) to UI
I am really sorry if I misunderstood some concepts; I am relatively new to this style of architecture.
Thanks in advance for every help! :)
The second approach is the preferred way; it is the asynchronous approach.
Direct
In the first approach, your microservices B and C wait for the event to get published. The scalability of this system is directly dependent on microservice A. What if microservice A is down or falls behind writing events to the queue? It is a single point of failure and a bottleneck; you can't scale the system easily.
Events
In microservices we keep the system async so it can scale.
The gateway should write to the queue using pub/sub, and all of these microservices can consume events at the same time. The system overall is more robust and can be scaled.
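To make the second approach concrete, here is a minimal sketch of its first two steps, assuming an ASP.NET Core minimal-API gateway with MassTransit over RabbitMQ; the OrderSubmitted event type and the /orders route are hypothetical. The gateway publishes the event and immediately acknowledges the UI; the final SUCCESS or ERROR would arrive later, for example via a correlated response event or a push channel:

    using MassTransit;

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddMassTransit(x =>
        x.UsingRabbitMq((context, cfg) => cfg.ConfigureEndpoints(context)));

    var app = builder.Build();

    // The gateway never calls microservice A directly: it publishes an event
    // to the queue and acknowledges receipt with 202 Accepted.
    app.MapPost("/orders", async (IPublishEndpoint publish, OrderSubmitted order) =>
    {
        await publish.Publish(order);
        return Results.Accepted();
    });

    app.Run();

    public record OrderSubmitted(Guid OrderId, decimal Total);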

Notification microservice API or queue

I'm new to microservices architecture and want to create a centralised notification microservice to send emails/SMS to users.
My first option was to create a notification Kafka queue that all other microservices can send notifications to. The notification microservice would then listen to this queue and send messages accordingly. If the notification service were restarted or taken down, we would not lose any messages, as they would be stored on the queue.
My second option was to add a notification message API to the notifications microservice. This would make it easier for all other microservices, as they would just call an API rather than integrate with the queue. The API would then internally put the message on the notification Kafka queue. The only issue here is that if the API is not available or there is an error, we would lose messages.
Any recommendations on the best way to handle this?
Either works. Some concepts that might help you decide:
A service that fronts "Kafka" would be helpful to:
Hide the implementation. This gives you the flexibility to change Kafka out later for something else. Your wrapper API would only respond with a 200 once it has put the notification request on the queue. I also see giving services direct access to "your" queue as similar to allowing services to directly interact with a database they don't own. If you allow direct access to Kafka and Kafka proves to be inadequate, a change to Kafka will require all of your clients to change their code.
Enforce the notification request contract (ensure the body of the request is well-formed). If you want to make sure that all of the items put on the queue are well-formed according to contract, an API can help enforce that. That will help prevent issues later when the "notifier" service picks notifications off the queue to send.
Adding a wrapper API would be less desirable if:
You don't want to/can't spend the time. Maybe deadlines are driving you to hurry, and the days it would take to stand up a wrapper are just too much.
You are a small team and you don't have the resources/tools/time for service-explosion.
Your first design is simple and will work. If you're looking for the advantages I outlined, then consider your second design. And, to make sure I understand it, I would see it unfold like this:
1. Client 1 needs to send a notification and calls POST /notifications on Service A.
2. Service A checks the request, puts it on Kafka, and responds to the client with a 200.
3. Service B picks up the notification request from the Kafka queue.
Service A should be run as multiple instances for reliability.
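If you go with the wrapper API, here is a minimal sketch of Service A, assuming ASP.NET Core minimal APIs and the Confluent.Kafka client; the topic name, broker address, and Notification record shape are hypothetical:

    using System.Text.Json;
    using Confluent.Kafka;

    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    using var producer = new ProducerBuilder<Null, string>(
        new ProducerConfig { BootstrapServers = "localhost:9092" }).Build();

    app.MapPost("/notifications", async (Notification n) =>
    {
        // Enforce the notification contract before anything reaches the queue.
        if (string.IsNullOrWhiteSpace(n.Recipient) || string.IsNullOrWhiteSpace(n.Body))
            return Results.BadRequest("recipient and body are required");

        // Respond 200 only after the broker has acknowledged the write.
        await producer.ProduceAsync("notifications",
            new Message<Null, string> { Value = JsonSerializer.Serialize(n) });
        return Results.Ok();
    });

    app.Run();

    public record Notification(string Recipient, string Channel, string Body);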

MassTransit - publish to all consumer instances

I am looking for a way for each consumer instance to receive a message that is published to RabbitMQ via MassTransit. The scenario: we have multiple microservices that need to invalidate a cache on notification. Pub-sub won't work here, as there will be five consumers of the same type (it's the same code per service instance), so only one of them would receive the message in traditional pub-sub.
Message observation could be an option, but this means the messages would never be consumed and would hang around forever on the bus.
Can anyone suggest a pattern to use in the context of MassTransit?
Thanks in advance.
You should create a management endpoint in each service, which could even be a temporary queue (just request a receive endpoint without a queue name and one will be generated dynamically). Then put your cache-invalidation consumers on that endpoint. Each service instance will receive its own copy of the message (when Publish is called), and those queues and bindings will be removed automatically once the service exits.
This is exactly how the bus endpoint works, but in your case, you're creating a receive endpoint which can have consumer message type bindings, so that published messages are received, one copy per service.
cfg.ReceiveEndpoint(e => { ... });
Note that the queue name is not specified; a unique one is generated automatically.
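Putting it together, here is a minimal sketch, assuming MassTransit v8 with RabbitMQ; the CacheInvalidated message type is hypothetical. Each running instance gets its own auto-generated, auto-delete queue bound to the published message type, so every instance receives a copy:

    using System.Threading.Tasks;
    using MassTransit;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    var host = Host.CreateDefaultBuilder(args)
        .ConfigureServices(services =>
        {
            services.AddMassTransit(x =>
            {
                x.AddConsumer<CacheInvalidatedConsumer>();
                x.UsingRabbitMq((context, cfg) =>
                {
                    // No queue name: a unique temporary queue is created per
                    // instance and removed when the service exits.
                    cfg.ReceiveEndpoint(new TemporaryEndpointDefinition(), e =>
                    {
                        e.ConfigureConsumer<CacheInvalidatedConsumer>(context);
                    });
                });
            });
        })
        .Build();

    await host.RunAsync();

    public record CacheInvalidated(string Key);

    public class CacheInvalidatedConsumer : IConsumer<CacheInvalidated>
    {
        public Task Consume(ConsumeContext<CacheInvalidated> context)
        {
            // Clear this instance's local cache entry here.
            return Task.CompletedTask;
        }
    }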
