Linking microservices and allowing for one to be unavailable

I'm new to the microservices architecture and am seeing that, under the model, it is possible to call one microservice from another via HTTP request. However, I am reading that if a service is down, all other services should still operate.
My question is, how is this generally achieved?
For example, a microservice that handles all Car record manipulation may need access to the service that handles the Vehicle data. How can the Car microservice complete its operations if that service is down or doesn't respond?

You should generally aim for almost zero synchronous communication between microservices. (If you still want synchronous communication, consider circuit breakers, which let your service keep responding, albeit with a logical error message; without circuit breaking, dependent services will go down completely as well.) You can get there by questioning the consistency requirements of the microservice.
Sometimes these things are not directly visible. For example: say there are two services, an order service and a customer service, and the order service exposes an API to place an order for a given customer id, and the business says you cannot place an order for an unknown customer.
One implementation is to call the customer service synchronously from the order service. In this case, the customer service being down will take your service down too. Now let's question whether we really need that call.
A scenario could happen where a customer just placed an order and somebody then deleted that customer from the customer service; now we have an order that doesn't belong to any customer, so consistency cannot be guaranteed anyway.
In the new solution we allow the order service to place the order without checking the customer id, and do one of the following:
Use a process manager to check the customer's validity and mark the order as invalid if the check fails; when a customer gets deleted, the process manager likewise updates the order status to invalid or performs other business logic.
Do not check at all, because merely placing an order doesn't commit to anything; when the order reaches dispatch, that service will check the customer's status anyway.
The point is to aim for more asynchronous communication between microservices; mostly you will find the solution in the consistency the business actually requires. But if your business wants the check to be 100% strict, you have to call the other service, and if that service is down, yours will return logical errors.
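The circuit-breaker idea mentioned above can be sketched in a few lines. This is a minimal illustration, not any particular library's API; the class name, thresholds, and the choice of RuntimeError for the "circuit open" logical error are all assumptions:

```python
import time

class CircuitBreaker:
    """Fails fast after repeated errors so a dead dependency
    cannot drag the calling service down with it."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Circuit is open: answer immediately with a logical error
                # instead of waiting on a dependency known to be down.
                raise RuntimeError("dependency unavailable, circuit open")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```

In the Car/Vehicle example, the Car service would wrap its HTTP call to the Vehicle service in `breaker.call(...)` and, when the circuit is open, return a degraded response instead of hanging on a timeout.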


When should the BPMN process start in a microservices architecture with Camunda orchestration

Consider an architecture like this:
API Gateway - responsible for aggregating services
Users microservice - CRUD operations on the user (users, addresses, consents, etc)
Notification microservice - sending email and SMS notifications
Security microservice - a service responsible for granting / revoking permissions to users and clients. For example, by connecting to Keycloak, it creates a user account with basic permission
Client - any application that connects to API Gateway in order to perform a given operation, e.g. user registration
Now, we would like to use Camunda for the entire process.
For example:
Client-> ApiGateway-> UsersMicroservice.Register-> SecurityMicroservice.AddDefaultPermition-> NotificationMicroservice.SendEmail
We would like to make this simplified flow with the use of e.g. Camunda.
Should the process start in UsersMicroservice.RegisterUser after receiving "POST api/users/" - that is, UsersMicroservice.RegisterUser starts the process in Camunda? And how does this endpoint know which specific process to run in Camunda?
What if the BPMN process in Camunda is designed in such a way that immediately after entering the process there is a Business Rule Task that validates the input and, if there is no "Name" for example, interrupts the registration process? How will UsersMicroservice find out that the process has been interrupted and that it should not perform any further standard operations like return this.usersService.Create (userInput);?
Should the call to Camunda be in the Controller or rather in the Service layer?
How, in the architecture above, do we change the default Client-> UsersMicroservice-> UsersService-> Database flow to use Camunda, adding e.g. input validation before calling return this.usersService.Create (someInput);?
If your intention is to let the process engine orchestrate the business process, then why not start the business process first? Either expose the start-process API or a facade, which gets called by the API gateway when the desired business request should be served. Now let the process model decide which steps need to be taken to serve the request and deliver the desired result/business value. The process may start with a service task to create a user. However, as you wrote, the process may evolve and perform additional checks before the user is created. Maybe a DMN validates data. Maybe it is followed by a gateway which leads to a rejection path, a path that calls an additional blacklist service, a path with a manual review, and the "happy path" with automated creation of the user. Whatever needs to happen, this is business logic, which you can keep flexible by giving control to the process engine first.
The process should be started by the controller via a start-process endpoint, before/not from UsersMicroservice.RegisterUser. You use a fixed process definition key to start. From here everything can be changed in the process model. You could potentially have an initial routing process ("serviceRequest") first which determines, based on a process variable ("request type"), what kind of request it is ("createUser", "disableUser", ...) and dispatches to the correct specific process for the given request ("createUser" -> "userCreationProcess").
The UsersMicroservice should be stateless (request state is managed in the process engine) and should not need to know. If the process is started first, the request may never reach UsersMicroservice. this.usersService.Create will only be called if the business logic in the process has determined that it is required - same for any subsequent service calls. If a subsequent step fails, error handling can include retries, handling of a business error (e.g. "email address already exists") via an exceptional error path in the model (BPMNError), or eventually triggering a "rollback" of operations already performed (compensation).
Controller - see above. The process will call the service if needed.
Call the process first, then let it decide what needs to happen.
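To make "start the process first with a fixed definition key" concrete, here is a sketch against Camunda 7's REST endpoint `POST /process-definition/key/{key}/start`. The base URL, business key, and variable names are assumptions for illustration; the actual HTTP call is left commented out:

```python
# Sketch of starting a process by a fixed definition key via Camunda's
# REST API (Camunda 7 style endpoint; adjust base URL/auth to your setup).
CAMUNDA_BASE = "http://camunda:8080/engine-rest"  # assumed host

def build_start_request(definition_key, business_key, variables):
    """Return (url, payload) for POST /process-definition/key/{key}/start."""
    url = f"{CAMUNDA_BASE}/process-definition/key/{definition_key}/start"
    payload = {
        "businessKey": business_key,
        "variables": {
            name: {"value": value, "type": "String"}
            for name, value in variables.items()
        },
    }
    return url, payload

# The controller starts the routing process first; the process model then
# decides whether UsersMicroservice.RegisterUser is ever called.
url, payload = build_start_request(
    "serviceRequest",              # fixed process definition key
    "user-reg-42",                 # hypothetical business key
    {"requestType": "createUser"},
)
# requests.post(url, json=payload)  # actual call omitted in this sketch
```

The point of the routing variable (`requestType`) is that the controller never hard-codes which concrete process runs; the "serviceRequest" model dispatches.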

Why are people using a message Bus in their code - when to message vs call code

When building an application before scaling to multiple microservices, you have a codebase consisting of services that are decoupled, i.e. a service no longer depends on another service, not even loosely via an interface. It receives input from a service via a message bus. It has a method receivePaymentRequest, but its caller is not the Order service: it's invoked via the message bus, perhaps in the future on another server. But imagine there's no need to run multiple servers at this point.
An order service posts a payment-request event to the message bus
the payment service picks up this message
payment is completed
the payment service sends a payment-complete event to the message bus
the order service picks up this message
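The flow above can be sketched with a minimal in-process bus; topic names and handlers are illustrative, and a real broker would replace the dict of handlers without changing the services' shape:

```python
from collections import defaultdict

class MessageBus:
    """Minimal in-process publish/subscribe bus. Swapping this for a real
    broker later should not change the services' code shape."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.handlers[topic]:
            handler(message)

bus = MessageBus()
log = []

# Payment service: knows nothing about who requested the payment.
def receive_payment_request(msg):
    log.append(("payment-completed", msg["order_id"]))
    bus.publish("payment-complete", {"order_id": msg["order_id"]})

# Order service: reacts to completion without a direct dependency.
def on_payment_complete(msg):
    log.append(("order-closed", msg["order_id"]))

bus.subscribe("payment-request", receive_payment_request)
bus.subscribe("payment-complete", on_payment_complete)

# The order service posts a payment-request event to the bus.
bus.publish("payment-request", {"order_id": 7})
```

Neither service imports the other; both depend only on the bus and on the message contracts.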
I'm not thinking about the patterns that make this fault tolerant, but about when to use this approach, since it adds a lot of complexity. So please ignore what I've left out in that regard.
Is this correct? Is it unwise to implement it like this before scaling to microservices? Is SOA the step before actual microservices?
When should a class receive/publish on the message bus, and when should it depend on a service as a class (even injected via an interface)?

need clarification on microservices

I need some clarification on microservices.
1) As I understand it, only choreography needs event sourcing, and in choreography we use the publish/subscribe pattern. We also use a broker like RabbitMQ to handle communication between the publisher and subscribers.
2) Orchestration does not use event sourcing. It uses the observer pattern and communicates directly with the observers, so it doesn't need a bus/message broker (like RabbitMQ). And to coordinate the whole process in orchestration we use the mediator pattern.
Is that correct?
In microservice orchestration, a centralized approach is followed: decisions and control are executed with the help of an orchestrator. The orchestrator has to communicate directly with each service, wait for its response, and decide based on that response, and hence it is tightly coupled. It is more of a synchronous approach, with the business logic predominantly in the orchestrator, which takes ownership of sequencing with respect to that logic. The orchestration approach typically follows a request/response pattern, with point-to-point connections between the services.
In microservice choreography, a decentralized approach is followed, with more liberty: every microservice can execute its function independently; they are self-aware and do not require instructions from a centralized entity. It is more of an asynchronous approach, with business logic spread across the microservices: every microservice listens to other services' events and makes its own decision whether to perform an action. Accordingly, the choreography approach relies on a message broker (publish/subscribe) for communication between the microservices, with each service observing the events in the system and acting on them autonomously.
TL;DR: Choreography is the one that doesn't need to persist the status of the process; orchestration needs to keep the status of the process somewhere.
I think you have this somewhat mixed up with implementation details.
Orchestration is called that because there is a central process manager (sometimes called a saga, wrongly imho) which directs (read: orchestrates) operations across other services. In this pattern, the process manager directs actions to bounded contexts, but needs to keep state on previous operations in order to undo, roll back, or take any corrective or reporting actions deemed necessary. This status can be held in an event stream, a normal-form DB, or even implicitly and in memory (as in a method executing requests one by one and undoing the previous ones on an error), if the outbound requests are made through web requests, for example. Please note that orchestrators may use synchronous, request-response communication (like making web requests). In that case the orchestrator still keeps state; it's just that this state is either implicit (the order of operations) or in memory. State still exists, though, and if you want resiliency (to be able to recover from an exception or any catastrophic failure), you would again need to persist that state on disk so that you could recover.
Choreography is called that because the pieces of business logic doing the operations observe and respond to each other. So, for example, when service A does something, it raises an event which is observed by B to do a follow-up action, and so on and so forth, instead of having a process manager ask A, then ask B, etc. Choreography may or may not need persistence; this really depends on the corrective actions the different services need to take.
As a practical example, let's say that on a purchase you want to reserve goods, take payment, then manifest a shipment with a courier service, then send an email to the recipient.
The order of the operations matters in both cases (because you want to be able to take corrective actions where possible), so we decide to take the payment after the manifestation with the courier.
With orchestration, we'd have a process manager called PM, and the process would do:
PM is called when the user attempts to make a purchase, then:
1. Call the Inventory service to reserve goods
2. Call the Courier integration service to manifest the shipment with a carrier
3. Call the Payments service to take a payment
4. Send an email to the user that they're receiving their goods.
If the PM notices an error on step 4, the only corrective action is to retry sending the email, and then report. If there was an error during payment, the PM would directly call the Courier integration service to cancel the shipment, then call Inventory to un-reserve the goods.
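A process manager of this shape can be sketched as plain sequential code with a compensation stack; the service objects and method names below are stand-ins for real HTTP/RPC clients:

```python
# Sketch of the purchase process manager. Completed steps register an undo
# action; on failure the undos run in reverse order (compensation).
def run_purchase(order, inventory, courier, payments, email):
    compensations = []
    try:
        inventory.reserve(order)
        compensations.append(lambda: inventory.unreserve(order))
        courier.manifest(order)
        compensations.append(lambda: courier.cancel(order))
        payments.take_payment(order)
        compensations.append(lambda: payments.refund(order))
    except Exception:
        # Undo completed steps in reverse order, then report the failure.
        for undo in reversed(compensations):
            undo()
        raise
    # An email failure after payment is only retried/reported, not rolled back.
    try:
        email.send_confirmation(order)
    except Exception:
        pass  # retry/report in a real implementation
```

Note that the PM holds the state (the compensation list) in memory here; as discussed above, a resilient implementation would persist it.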
With choreography, what would happen is:
An OrderMade event is raised and observed by all services that need data
Inventory handles the OrderMade event and raises an OrderReserved
CourierIntegration handles the OrderReserved event and raises ShipmentManifested
Payments service handles the ShipmentManifested and on success raises PaymentMade
The email service handles PaymentMade and sends a notification.
The rollback would be the opposite of the above process. If the Payments service raised an error, Courier Integration would handle it and raise a ShipmentCancelled event, which in turn is handled by Inventory to raise OrderUnreserved, which in turn may be handled by the email service to send a notification.
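The choreographed chain above can be sketched the same way; here each handler only knows the event it reacts to and the event it raises (the topic/handler registry stands in for a real broker, and the event names come from the example):

```python
# Sketch of the choreography above: no central sequence exists; each service
# subscribes only to the events it cares about.
handlers = {}

def subscribe(event, handler):
    handlers.setdefault(event, []).append(handler)

def raise_event(event, data):
    for handler in handlers.get(event, []):
        handler(data)

trace = []

# Inventory reserves on OrderMade, then raises OrderReserved.
subscribe("OrderMade", lambda d: (trace.append("reserved"),
                                  raise_event("OrderReserved", d)))
# CourierIntegration manifests on OrderReserved.
subscribe("OrderReserved", lambda d: (trace.append("manifested"),
                                      raise_event("ShipmentManifested", d)))
# Payments takes payment on ShipmentManifested.
subscribe("ShipmentManifested", lambda d: (trace.append("paid"),
                                           raise_event("PaymentMade", d)))
# Email service notifies on PaymentMade; it raises nothing further.
subscribe("PaymentMade", lambda d: trace.append("emailed"))

raise_event("OrderMade", {"order_id": 1})
```

The rollback path would be wired the same way, with handlers for failure events (e.g. ShipmentCancelled) instead of success events.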

Microservices architecture event collaboration pattern

Martin Fowler's description of the Event Collaboration pattern (https://martinfowler.com/eaaDev/EventCollaboration.html) appears to imply that requisite external data (data from other services) that a service needs to function should be replicated and maintained within the service.
This seems to imply that we should not resort to issuing explicit queries.
For example:
Say you have a communications service that is responsible for sending emails to clients and depends on order information (which lives in the order service) to send an order confirmation email.
With Event Collaboration, the communications service will have some internal representation of all orders that it will have built up by consuming relevant order creation/modification events.
In this example a query to retrieve order details will not be necessary to generate the confirmation email.
Are there any instances in which we would use explicit query messages rather than data replication when adopting the Event Collaboration pattern?
I think even in this case, what I would have done is create a consumer of the OrderPlaced event inside the Order microservice only. That event processor reads all the details from the order, creates a MailToBeSent event and writes it to a topic or queue, which CommunicationService listens to in order to send the email.
The Communication service should not need to understand how to create an email from an order (the core purpose of the communication service is to send emails).
Design-wise, the communication service should also not have to change every time you add a new service that wants mail-sending functionality.
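The suggested split can be sketched as two small functions; the event shape, field names, and in-memory store/queue are illustrative assumptions:

```python
# Sketch: a consumer owned by the order side turns an OrderPlaced event into
# a ready-to-send MailToBeSent message, so the communications service only
# knows how to send mail, not how orders look.
def on_order_placed(event, order_store, outbox):
    order = order_store[event["order_id"]]   # order data local to this side
    outbox.append({                          # the MailToBeSent message
        "to": order["customer_email"],
        "subject": f"Order {event['order_id']} confirmed",
        "body": f"Thanks! Your total is {order['total']}.",
    })

def communications_service(outbox, sent):
    # Knows nothing about orders: it just sends what it is told to send.
    while outbox:
        sent.append(outbox.pop(0))
```

Adding a new service that wants mail sent then means adding a new producer of MailToBeSent messages, with no change to the communications service.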

Aggregated Notification Microservice

The Problem
We are currently architecting our new Notification Microservice but are having trouble with how to handle aggregated emails. What we need to do is, instead of sending one email for every action performed (could be 20+ in a few minutes), send an email after an hour summarising all the actions that were completed.
What We Have So Far
So far we propose this type of messaging pattern, where Client Service is any service in our cluster and Messagebot is our Notification Microservice:
1. Client Service sends a notification to Messagebot that it will need to send something in the future
2. Messagebot stores the details in its database
3. Messagebot periodically checks its database for what needs to be sent
4. Messagebot gets the required data from another service (could be Client Service) via API
5. Messagebot sends an email using the data from #4 and an HTML template
The Debate
For the data that needs to be sent, we are less sure and it is what we need help with. So far we think this should be the structure of the JSON from Client Service to Notification Service (step #1):
{
  template_id: SOME_TEMPLATE_ID,
  user_id: SOME_USER_ID,
  objectid: SOME_OBJECT_ID
}
or
{
  template_id: SOME_TEMPLATE_ID,
  user_id: SOME_USER_ID,
  required_objects: { task_id: SOME_TASK_ID, document_id: SOME_DOCUMENT_ID }
}
Where task_id and document_id are just examples and it would change based on the template. It could just as easily be {product_id: SOME_PRODUCT_ID} for a different template.
Why The Debate
Our thoughts so far are that:
We only need template_id because the source of the data would be implied by the objects (like an ENV var). For example, the Task object would be at http://taskservice/:id. Otherwise we can have problems with failing APIs or switching URLs in the future.
We should use userid instead of email and name because it prevents the issue of email/name pairs not matching up across multiple messages.
For the objects, we're still sceptical because it means the client service would need knowledge of the inner workings of Messagebot, but a single objectid might not be very extensible. We can easily imagine many of our messages needing more than one object.
In Conclusion
Thank you for reading. The design of this service is important because it will be central to our entire organisation.
Which debated JSON structure is most appropriate in our situation? Also, knowing our requirements, what would be the proper setup for this type of service? (i.e. are we correct in our other assumptions?)
So your Messagebot will:
store notifications
get data from other services
compile emails from the data and
send the compiled emails
In my opinion, your Messagebot has been given too many tasks. If I were designing the system, I would keep the Messagebot simpler. The services should encapsulate the knowledge to compile their emails, e.g. manage their own templates and so on. The services push the compiled emails to a queue, from which the Messagebot picks them up and sends them. The only logic in the Messagebot is to pick up emails from the queue and send them. This way, no matter how many more services you add in the future, the Messagebot stays nice and simple.
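A minimal sketch of this design, with the function names, queue, and digest format all assumed for illustration: each service aggregates and compiles its own email; the Messagebot only drains the queue and sends.

```python
import queue

email_queue = queue.Queue()  # shared queue of already-compiled emails

def task_service_flush_digest(user_email, actions):
    """A service summarises its own actions over the hour and enqueues
    one compiled email; the template knowledge stays in the service."""
    body = "\n".join(f"- {a}" for a in actions)
    email_queue.put({
        "to": user_email,
        "subject": f"{len(actions)} updates in the last hour",
        "body": body,
    })

def messagebot_drain(send):
    """Messagebot: pop compiled emails and hand each to the transport."""
    while not email_queue.empty():
        send(email_queue.get())
```

The hourly aggregation then lives with the service that understands the actions, not in the Messagebot, which never needs to call back into other services for data.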
