Microservice retrieving OrderId from an Azure Service Bus topic

I am trying to understand Azure service bus which I intend
to use in a current project. I am completely new to this.
Here is the scenario. There are two microservices.
Microservice A - writes order details to a database and also writes the OrderId to an Azure Service Bus topic.
Microservice B - should pick up the OrderId from the same topic whenever one appears and use it to process some other transactions. Users can generate multiple OrderIds in a day.
How do I set up Microservice B to perform this duty? How does this work in practice?
How can this microservice constantly monitor the topic?

You could write a job that checks for new messages every X seconds, or even continuously, but that's not a good approach. A better way is the pub-sub approach.
In pub-sub, when a message arrives at the topic, any service subscribed to the topic receives it and doesn't need to poll for it. Your Microservice B should be subscribed to the topic, and the message will be pushed to it.
For your Microservice B to subscribe to messages you have two options. You can use an Azure Function, which will automatically process the message for you. If you don't want to write an Azure Function, you can use the Azure Service Bus library to get this done. The official example only covers a console app (Pub-Sub Azure Service Bus), but for a .NET Core Web API you can look here: Subscribe Azure Service Bus.
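For illustration, here is a rough sketch of the library route. The linked examples are for .NET; this sketch uses the Azure Service Bus SDK for Java (azure-messaging-servicebus) instead, and the topic, subscription, and connection-string names are placeholders:

```java
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusErrorContext;
import com.azure.messaging.servicebus.ServiceBusProcessorClient;
import com.azure.messaging.servicebus.ServiceBusReceivedMessageContext;

public class OrderIdSubscriber {
    public static void main(String[] args) throws InterruptedException {
        // Placeholder connection string, topic and subscription names.
        String connectionString = System.getenv("SERVICEBUS_CONNECTION_STRING");

        ServiceBusProcessorClient processor = new ServiceBusClientBuilder()
                .connectionString(connectionString)
                .processor()
                .topicName("orders")                 // topic Microservice A publishes to
                .subscriptionName("microservice-b")  // subscription owned by Microservice B
                .processMessage(OrderIdSubscriber::handleMessage)
                .processError(OrderIdSubscriber::handleError)
                .buildProcessorClient();

        // start() registers the message pump; messages are pushed to handleMessage
        // as they arrive, so there is no polling loop in your own code.
        processor.start();
        Thread.currentThread().join(); // keep the service alive
    }

    private static void handleMessage(ServiceBusReceivedMessageContext context) {
        String orderId = context.getMessage().getBody().toString();
        // ... process the other transactions for this order here ...
        System.out.println("Received OrderId: " + orderId);
    }

    private static void handleError(ServiceBusErrorContext context) {
        System.err.println("Error receiving from topic: " + context.getException());
    }
}
```

The processor pushes each message to your handler as it arrives, so Microservice B never has to poll the topic itself.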

Related

Why are people using a message Bus in their code - when to message vs call code

Imagine building an application before scaling to multiple microservices. You have a codebase consisting of services that are decoupled, i.e. a service no longer depends on another service, not even loosely via an interface. It receives input from another service via a message bus. It has a method receivePaymentRequest, but its caller is not the Order service; it is invoked via the message bus, perhaps in the future from another server. But imagine there's no need to run multiple servers at this point.
An order service posts a payment-request event to the message bus.
The payment service picks up this message.
Payment is completed.
The payment service sends a payment-complete event message to the message bus.
The order service picks up this message (a rough sketch of this flow follows below).
I'm not thinking about the patterns that enable this to be fault tolerant, but about when to use this approach, since it adds a lot of complexity. So please ignore what I've left out in that regard.
Is this correct? Is it unwise to implement it like this before scaling to microservices? How does this relate to SOA - is SOA the step before actual microservices?
When should a class receive/publish on the message bus, and when should it depend on a service as a class (even injected via an interface)?
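To make the flow above concrete, here is a deliberately naive in-process sketch of what I mean; the bus, class, and event names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Deliberately naive in-process bus: event name -> handlers.
class MessageBus {
    private final Map<String, List<Consumer<String>>> handlers = new ConcurrentHashMap<>();

    void subscribe(String event, Consumer<String> handler) {
        handlers.computeIfAbsent(event, e -> new ArrayList<>()).add(handler);
    }

    void publish(String event, String payload) {
        handlers.getOrDefault(event, List.of()).forEach(h -> h.accept(payload));
    }
}

class PaymentService {
    PaymentService(MessageBus bus) {
        // Invoked via the bus; the Order service is never a direct caller.
        bus.subscribe("payment-request", orderId -> {
            // ... take the payment ...
            bus.publish("payment-complete", orderId);
        });
    }
}

class OrderService {
    private final MessageBus bus;

    OrderService(MessageBus bus) {
        this.bus = bus;
        bus.subscribe("payment-complete", orderId ->
                System.out.println("Order " + orderId + " paid"));
    }

    void placeOrder(String orderId) {
        // No reference to PaymentService, not even through an interface.
        bus.publish("payment-request", orderId);
    }
}

public class Demo {
    public static void main(String[] args) {
        MessageBus bus = new MessageBus();
        new PaymentService(bus);
        new OrderService(bus).placeOrder("42");
    }
}
```

The point is that OrderService and PaymentService never reference each other, only the bus and the event names.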

Saga pattern on hardware failure and inter-service communication

I am building a Spring Boot microservice application. I am planning on adopting the Saga pattern to tackle the distributed transaction problem. Below is the list of questions and problems that I am facing.
Here is the context for ease of explanation.
Client -> Service A -> Service B
Handling of non-alive microservices due to failure
Assuming that Service B is not alive due to hardware / software failure, how should A react?
Async communication
It is recommended that we use async communication for the saga pattern. Assuming that the time for client -> A is less than the time for A -> B, how does the client receive the data that A gets from B at a later time? Does A have to return an async object back to the client, something like a CompletableFuture?
Services requesting resources from other services
Assuming that Service A has to request some resources from Service B, how should A go about doing this? All I can think of is HTTP / gRPC (ruling out communication via the message broker).
If you happen to have some experience or advice, please share :)
Any help or advice on Saga pattern is appreciated!
Saga is used for distributed transactions. It can be implemented as orchestration-based or choreography-based, and it is usually (preferably) implemented with asynchronous communication. A message broker plays an important role here.
There are several questions here; let me try to answer them.
If one service is down - You can set up a monitoring system for the saga. If any service is down, or a saga has not been processed within some threshold time, you can raise an alert.
Async communication - It is mostly used to process commands (not queries). Whenever the client calls Service A, it initiates the saga and replies with the current status. It also returns an id (you could call it a job id). There are then two ways for the client to get status updates: poll (the client asks for a status update every N seconds) and push (the server pushes changes whenever the state changes). A sketch of the poll variant follows below.
Services requesting resources from each other - Yes, the preferred way is REST or gRPC. Also, if the data is fairly constant, you can use a cache.
Suggestion - SRE (monitoring etc.) plays an important role in a microservice architecture, so if you have that set up well you can handle the other challenges of microservices much more easily.
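To illustrate the poll variant, here is a minimal Spring Boot sketch. The endpoint paths and the in-memory status map are assumptions for illustration, and the actual publish of the saga command to the broker is omitted:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OrderSagaController {

    // In-memory status store for illustration only; a real saga would persist its state.
    private final Map<String, String> sagaStatus = new ConcurrentHashMap<>();

    @PostMapping("/orders")
    public ResponseEntity<Map<String, String>> createOrder(@RequestBody Map<String, Object> order) {
        String sagaId = UUID.randomUUID().toString();
        sagaStatus.put(sagaId, "PENDING");
        // Here Service A would publish a command/event to the message broker
        // to kick off the saga with Service B; omitted for brevity.
        // Reply immediately with 202 Accepted plus an id the client can poll.
        return ResponseEntity.accepted().body(Map.of("sagaId", sagaId, "status", "PENDING"));
    }

    @GetMapping("/orders/{sagaId}/status")
    public ResponseEntity<String> status(@PathVariable String sagaId) {
        String status = sagaStatus.get(sagaId);
        return status == null ? ResponseEntity.notFound().build() : ResponseEntity.ok(status);
    }
}
```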

Spring Cloud Stream with autoscaling subscribers - will it process the same message using the Cloud Pub/Sub binder?

I have two services on Google App Engine that autoscale with increased load from 2 to 20 instances. I just added Spring Cloud Stream and the Pub/Sub binder to publish messages that a communication service subscribes to. It seems to work so far, but I have been asked what would happen when the subscriber autoscales. Is there a chance that two instances pull the same message from the queue? I saw under the properties in the docs
https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.SR2/reference/htmlsingle/#_configuration_options
It shows instance count and instance index. I am not sure how to even set these when the instances are ephemeral and read from the same external configs. If anyone has run into this issue, please let me know.
There is no such thing as a queue in Google Cloud Pub/Sub; there are topics and subscriptions. So, if all your subscriber instances use the same subscription (a consumer group in SCSt terms), exactly one instance will get any given message from the topic.
See more info in the Reference Manual: https://cloud.spring.io/spring-cloud-static/spring-cloud-gcp/1.1.0.RELEASE/multi/multi__spring_cloud_stream.html#_consumer_destination_configuration
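For example, with the Spring Cloud Stream GCP Pub/Sub binder you would give every instance of the communication service the same group, so they all share one subscription. A sketch of the binding configuration, where the binding name, destination, and group are placeholders:

```properties
# All autoscaled instances of the communication service share this configuration.
spring.cloud.stream.bindings.input.destination=notifications
# Same group on every instance -> one shared Pub/Sub subscription,
# so each message is delivered to only one instance.
spring.cloud.stream.bindings.input.group=communication-service
```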

Notification microservice API or queue

I'm new to microservices architecture and want to create a centralised notification microservice to send emails/sms to users.
My first option was to create a notification Kafka queue that all other microservices can send notifications to. The notification microservice would then listen to this queue and send messages accordingly. If the notification service were restarted or taken down, we would not lose any messages, as they would be stored on the queue.
My second option was to add a notification message API on the notification microservice. This would make it easier for all other microservices, as they would just have to call an API as opposed to integrating with the queue. The API would then internally send the message to the notification Kafka queue. The only issue here is that if the API is not available or there is an error, we will lose messages.
Any recommendations on the best way to handle this?
Either works. Some concepts that might help you decide:
A service that fronts "Kafka" would be helpful to:
Hide the implementation. This gives you the flexibility to swap Kafka out later for something else. Your wrapper API would only respond with a 200 once it has put the notification request on the queue. I also see giving services direct access to "your" queue as similar to allowing services to interact directly with a database they don't own. If you allow direct access to Kafka and Kafka proves to be inadequate, a change to Kafka will require all of your clients to change their code.
Enforce the notification request contract (ensure the body of the request is well-formed). If you want to make sure that all of the items put on the queue are well-formed according to contract, an API can help enforce that. That will help prevent issues later when the "notifier" service picks notifications off the queue to send.
Adding a wrapper API would be less desirable if:
You don't want to/can't spend the time. Maybe deadlines are driving you to hurry and the days it would take to stand up a wrapper is just too much.
You are a small team and you don't have the resources/tools/time for service-explosion.
Your first design is simple and will work. If you're looking for the advantages I outlined, then consider your second design. To make sure I understand it, I would see it unfold like this (a sketch of Service A follows the list):
Client 1 needs to put out a notification and calls Service A's POST /notifications endpoint.
Service A accepts the POST /notifications request.
Service A validates the request, puts it on Kafka, and responds to the client with 200.
Service B picks up the notification request from the Kafka queue and sends it.
Service A should run as multiple instances for reliability.
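A rough sketch of what Service A could look like with Spring Kafka; the topic name, endpoint path, and required fields are assumptions for illustration:

```java
import java.util.Map;

import org.springframework.http.ResponseEntity;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class NotificationController {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public NotificationController(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    @PostMapping("/notifications")
    public ResponseEntity<Void> publish(@RequestBody Map<String, String> request) {
        // Enforce the notification contract before anything reaches the topic.
        if (!request.containsKey("recipient") || !request.containsKey("message")) {
            return ResponseEntity.badRequest().build();
        }
        // Only respond 200 once the request has been handed to Kafka.
        kafkaTemplate.send("notifications", request.toString());
        return ResponseEntity.ok().build();
    }
}
```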

Azure Service Bus queue - what if the receiving Web API cannot process the message?

As part of a data integration between two applications, we plan to send data to an Azure Service Bus queue. On the receiving end I have an API that loads the data into the target DB.
I need to understand how I can handle the following:
1. How should the API be notified whenever a message arrives in the queue? I am thinking of an Azure Function, a WebJob, or some scheduling component.
2. What if the API is down? Messages should be retained in the queue.
3. What if the API receives the message but the target DB is down? In that case, my messages should be retained in the queue for retries.
Please help and guide me towards the correct approach to implement this.
Thanks!
Logic Apps! Use Logic Apps with the Service Bus queue trigger so that it fires on the arrival of new messages in the Service Bus queue. To send the data to the Web API, use the Logic Apps HTTP connector.
It is scalable per individual message.
How should the API be notified whenever a message arrives in the queue? - Logic Apps.
What if the API is down? - Messages are retained in the queue: Azure Service Bus queues ensure FIFO, support batching, and keep messages until they are dequeued and deleted.
What if the API receives the message but the target DB is down? - The messages are retained in the queue for retries; check the transaction feature of Azure Service Bus queues.
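Logic Apps handles the receive/complete handshake for you. For reference, if you were instead consuming the queue from your own code (an Azure Function or worker, as mentioned in the question), points 2 and 3 map to peek-lock complete/abandon calls. A rough sketch with the Azure Service Bus SDK for Java, where the queue name, connection string, and DB call are placeholders:

```java
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusProcessorClient;
import com.azure.messaging.servicebus.ServiceBusReceivedMessageContext;

public class QueueConsumer {
    public static void main(String[] args) throws InterruptedException {
        ServiceBusProcessorClient processor = new ServiceBusClientBuilder()
                .connectionString(System.getenv("SERVICEBUS_CONNECTION_STRING"))
                .processor()
                .queueName("integration-queue")   // placeholder queue name
                .disableAutoComplete()            // we decide when the message is done
                .processMessage(QueueConsumer::handle)
                .processError(ctx -> System.err.println(ctx.getException()))
                .buildProcessorClient();
        processor.start();
        Thread.currentThread().join();
    }

    private static void handle(ServiceBusReceivedMessageContext context) {
        try {
            // Call the target API / write to the DB here.
            loadIntoTargetDb(context.getMessage().getBody().toString());
            context.complete();   // success: message is removed from the queue
        } catch (Exception e) {
            context.abandon();    // API or DB down: message stays queued and is redelivered
        }
    }

    private static void loadIntoTargetDb(String payload) {
        // placeholder for the real integration
    }
}
```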
