Microservices architecture event collaboration pattern

Martin Fowler's description of the Event Collaboration pattern (https://martinfowler.com/eaaDev/EventCollaboration.html) appears to imply that external data a service needs in order to function (data from other services) should be replicated and maintained within the service.
This seems to imply that we should not resort to issuing explicit queries.
For example:
Say you have a communications service that is responsible for sending emails to clients and depends on order information (which lives in the order service) to send an order confirmation email.
With Event Collaboration, the communications service will have some internal representation of all orders that it will have built up by consuming relevant order creation/modification events.
In this example a query to retrieve order details will not be necessary to generate the confirmation email.
Are there any instances in which we would use explicit query messages rather than data replication when adopting the Event Collaboration pattern?

I think even in this case, what I would have done is create a consumer of the OrderPlaced event in the Order microservice only. That event processor would read all the details from the order, create a MailToBeSent event, and write it to a topic or queue, which the CommunicationService should listen to in order to send the email.
The CommunicationService should not understand how to create an email based on an order (the core purpose of the communication service is to send emails).
Design-wise, the communication service should also not have to change every time you add a new service that wants mail-sending functionality.
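A minimal sketch of that split, assuming Kafka as the transport (kafka-python); the topic names, event shapes, and helpers like render_confirmation and send_email are illustrative, not a fixed contract:

import json
from kafka import KafkaConsumer, KafkaProducer

# --- Inside the ORDER service ---
# Consume OrderPlaced, build the complete mail, publish a MailToBeSent event.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def on_order_placed(order_event):
    producer.send("mail-to-be-sent", {
        "to": order_event["customer_email"],
        "subject": "Order %s confirmed" % order_event["order_id"],
        "body": render_confirmation(order_event),  # hypothetical template helper
    })

# --- Inside the COMMUNICATION service ---
# No order knowledge at all: it just sends whatever mail arrives on the topic.
consumer = KafkaConsumer(
    "mail-to-be-sent",
    bootstrap_servers="localhost:9092",
    group_id="communication-service",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for record in consumer:
    send_email(record.value)  # hypothetical SMTP wrapper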

Related

Difficulty Understanding Event Sourcing Microservice Event Receiving/Communication

I've been aware of event sourcing, CQRS, DDD and microservices for a little while, and I'm now at the point where I want to start implementing something and give it a go.
I've been looking into the technical side of CQRS and I understand the DDD concepts in there: how the write side handles commands from the UI and publishes events, and how the read side handles those events and creates projections from them.
The difficulty I'm having is the communication & handling of events from service to service (both from a write to a read service and between microservices).
So I want to focus on Event Store (this one: https://eventstore.com/ to be less ambiguous). This is what I want to use, as I understand it is a perfect fit for event sourcing, and the simple nature of storing the events means I can use it as a message bus as well.
So my issue falls into two questions:
Between the write and the read, in order for the read side to receive/fetch the events created on the write side, am I right in thinking something like a catch-up subscription can be used to subscribe to a stream and receive any events written to it, or do I use something like polling to fetch events from a given point?
Between microservices, I am having an even harder time... When looking at CQRS tutorials/talks etc., they always seem to talk through an example of an isolated service which receives commands from the UI/API. This is fine. I understand the write side will have an API attached to it so the user can interact with it to perform commands, e.g. create a customer. However... say I have two microservices, e.g. an order microservice and a shipping microservice, how does the shipping microservice get the events published from the order microservice? Specifically, how do those events translate to commands for the shipping service?
So let's take a simple example:
- A command is created from the order API to place an order.
- An OrderPlacedEvent is published to the event store.
How does the shipping service listen and react to this if it then needs to DispatchOrder and in turn create an OrderDispatchedEvent?
Does the write side of the shipping microservice then need to poll, or also have a catch-up subscription to the order stream? If so, how does an event get translated to a command using the DDD approach?
something like a catch-up subscription can be used to subscribe to a stream and receive any events written to it
Yes, using catch-up subscriptions is the right way of doing it. You need to keep the stream position of your subscription persisted somewhere as well.
Here you can find some sample code that works. I am not posting the whole snippet since it is too long.
The projection service startup flow is:
Load the checkpoint (first time ever it would be the stream start)
Subscribe to the stream from that checkpoint
The runtime flow will then be:
The subscription will then call the function you provide when it receives an event. There's some plumbing to do there; for example, if you subscribe to $all, you need to filter out system events (this will be easier in the next version of Event Store)
Project the event
Store the new checkpoint
If you make your projections idempotent, you can store the checkpoint from time to time and save some IO.
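For illustration, a sketch of that startup and runtime flow in Python. The client calls follow the shape of the esdbclient package; treat the exact names and signatures as assumptions to verify against your client version, and the checkpoint file as a stand-in for wherever you persist the position:

import json
import os

from esdbclient import EventStoreDBClient  # assumed client package/API

CHECKPOINT_FILE = "projection.checkpoint"  # hypothetical checkpoint location

def load_checkpoint():
    # First run ever: no checkpoint yet, so start from the beginning.
    if not os.path.exists(CHECKPOINT_FILE):
        return None
    with open(CHECKPOINT_FILE) as f:
        return int(f.read())

def store_checkpoint(position):
    with open(CHECKPOINT_FILE, "w") as f:
        f.write(str(position))

def project(event):
    # Apply the event to your read model (DB upsert, cache update, ...).
    print(event.type, json.loads(event.data))

client = EventStoreDBClient(uri="esdb://localhost:2113?Tls=false")
# Subscribe from the persisted position: catch-up first, then live events.
for event in client.subscribe_to_all(commit_position=load_checkpoint()):
    if event.type.startswith("$"):  # filter out system events when reading $all
        continue
    project(event)
    store_checkpoint(event.commit_position)  # idempotent projections may batch this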
how does the shipping microservice get the events published from the order microservice
When you build a brand-new system and you have a small team working on all the components, you can take a shortcut and subscribe to domain events from another service, as you'd do with projections. Within the integration context (between the boxes), ordering should not be important, so you can use persistent subscriptions and you won't need to think about checkpoints; Event Store will do it for you.
Be aware that this introduces tight coupling on the domain event schema of the originating service. Your contexts will have a Partnership relationship, or the downstream service will be a Conformist.
When you move forward with your system, you might decide to decouple those contexts properly. So, you introduce a stable event API for the service that publishes events for others to consume. The same subscription that you used for integration can now take care of translating domain (internal) events to integration (external) events. The consuming context would then use the stable API, and the upstream service will be free to iterate on its domain model, as long as it keeps the conversion up to date.
It won't be necessary to use Event Store for the downstream context; they could just as well use a message broker. Integration events usually don't need to be persisted due to their transient nature.
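A sketch of that translation step, reusing the subscription loop from above; the event names and the publish_to_broker call are illustrative, not a fixed API:

def translate(domain_event):
    # Map internal domain events onto the stable, versioned external contract.
    if domain_event.type == "OrderPlaced":
        body = json.loads(domain_event.data)
        return {
            "type": "order.placed.v1",   # public, versioned schema
            "order_id": body["orderId"],
            "placed_at": body["timestamp"],
        }
    return None  # internal-only events are not published

for event in client.subscribe_to_all(commit_position=load_checkpoint()):
    integration_event = translate(event)
    if integration_event is not None:
        publish_to_broker("order-integration", integration_event)  # hypothetical broker call
    store_checkpoint(event.commit_position)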
We are running a webinar series about Event Sourcing at Event Store; check our web site to get on-demand access to previous webinars, and you might find it interesting to join future ones.
The difficulty I'm having is the communication & handling of events from service to service (both from a write to a read service and between microservices).
The difficulty is not your fault - the DDD literature is really weak when it comes to discussing the plumbing.
Greg Young discusses some of the issues of subscription in the latter part of his Polyglot Data talk.
Eventide Project has documentation that does a decent job of explaining the principles behind how the plumbing fits things together.
Between microservices, I am having an even harder time...
The basic idea: your message store is fundamentally a database; when the host of your microservice wakes up, it queries the message store for messages after some checkpoint, and then feeds them to your domain logic (updating its own local copy of the checkpoint as needed).
So the host pulls a document with events in it from the store, and transforms that document into a stream of handle(Event) commands that ultimately get passed to your domain component.
Put another way, you build a host that polls the database for information, parses the response, and then passes the parsed data to the domain model, and writes its own checkpoints.
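As a sketch, under the assumption of a store client exposing a fetch_messages(after, limit) query, a checkpoint repository, and a domain component exposing handle(event) (all hypothetical interfaces):

import time

def run_host(store, domain, checkpoint_repo, batch_size=100, idle_delay=1.0):
    # The host owns its checkpoint: the position of the last handled message.
    checkpoint = checkpoint_repo.load()
    while True:
        # Poll the message store for everything after the checkpoint.
        messages = store.fetch_messages(after=checkpoint, limit=batch_size)
        if not messages:
            time.sleep(idle_delay)  # nothing new: back off, then poll again
            continue
        for message in messages:
            domain.handle(message.body)       # parsed event -> domain component
            checkpoint = message.position
            checkpoint_repo.save(checkpoint)  # persist progress as we go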

need clarification on microservices

I need some clarifications on microservices.
1) As I understand it, only choreography needs event sourcing, and in choreography we use the publish/subscribe pattern. We also use a program like RabbitMQ to handle communication between the publisher and subscribers.
2) Orchestration does not use event sourcing. It uses the observer pattern and communicates directly with observers, so it doesn't need a bus/message broker (like RabbitMQ). And to coordinate the whole process in orchestration we use the mediator pattern.
Is that correct?
In microservice orchestration, a centralized approach is followed for the execution of decisions and control, with the help of an orchestrator. The orchestrator has to communicate directly with each service, wait for the response, and decide based on that response, hence it is tightly coupled. It is more of a synchronous approach, with the business logic predominantly in the orchestrator, which takes ownership of sequencing with respect to the business logic. The orchestration approach typically follows a request/response pattern, with point-to-point connections between the services.
In microservice choreography, a decentralized approach is followed, with more liberty: every microservice can execute its function independently, they are self-aware, and they do not require any instruction from a centralized entity. It is more of an asynchronous approach, with business logic spread across the microservices, where every microservice listens to other services' events and makes its own decision whether to perform an action or not. Accordingly, the choreography approach relies on a message broker (publish/subscribe) for communication between the microservices, with each service observing the events in the system and acting on them autonomously.
TL;DR: Choreography is the one which doesn't need persistence of the status of the process; orchestration needs to keep the status of the process somewhere.
I think you got this somewhat mixed up with implementation details.
Orchestration is called that because there is a central process manager (sometimes called a saga, wrongly imho) which directs (read: orchestrates) operations across other services. In this pattern, the process manager directs actions to BCs, but needs to keep state on previous operations in order to undo, roll back, or take any corrective or reporting actions deemed necessary. This state can be held in an event stream, a normal-form DB, or even implicitly and in memory (as in a method executing requests one by one and undoing the previous ones on an error), if the outbound requests are done through web requests, for example. Please note that orchestrators may use synchronous, request-response communication (like making web requests). In that case the orchestrator still keeps state, it's just that this state is either implicit (the order of operations) or in memory. The state still exists, though, and if you want to achieve resiliency (to be able to recover from an exception or any catastrophic failure), you would again need to persist that state on disk so that you could recover.
Choreography is called that because the pieces of business logic doing the operations observe and respond to each other. So, for example, when service A does something, it raises an event which is observed by B to do a follow-up action, and so on and so forth, instead of having a process manager ask A, then ask B, etc. Choreography may or may not need persistence; this really depends on the corrective actions that the different services need to take.
As a practical example, let's say that on a purchase you want to reserve goods, take payment, then manifest a shipment with a courier service, then send an email to the recipient.
The order of the operations matters in both cases (because you want to be able to take corrective actions if possible), so we decide to take the payment after the manifestation with the courier.
With orchestration, we'd have a process manager called PM, and the process would be:
PM is called when the user attempts to make a purchase, then:
1. Call the Inventory service to reserve goods
2. Call the Courier integration service to manifest the shipment with a carrier
3. Call the Payments service to take a payment
4. Send an email to the user that they're receiving their goods.
If the PM notices an error at step 4, the only corrective action is to retry sending the email, and then report. If there was an error during payment (step 3), the PM would directly call the Courier integration service to cancel the shipment, then call Inventory to un-reserve the goods.
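A sketch of that PM in code; the service interfaces and exceptions are hypothetical. Note that the state here lives implicitly in the call stack, which is exactly why a crash mid-way loses it unless you persist it somewhere:

class CourierError(Exception): pass
class PaymentError(Exception): pass

def handle_purchase(order, inventory, courier, payments, email):
    # The process manager directs each step and compensates on failure.
    inventory.reserve(order)                # step 1
    try:
        shipment = courier.manifest(order)  # step 2
    except CourierError:
        inventory.unreserve(order)          # undo step 1
        raise
    try:
        payments.take_payment(order)        # step 3
    except PaymentError:
        courier.cancel(shipment)            # undo step 2
        inventory.unreserve(order)          # undo step 1
        raise
    email.send_confirmation(order)          # step 4: on failure, retry then report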
With choreography, what would happen is:
An OrderMade event is raised and observed by all services that need data
Inventory handles the OrderMade event and raises an OrderReserved
CourierIntegration handles the OrderReserved event and raises ShipmentManifested
Payments service handles the ShipmentManifested and on success raises PaymentMade
The email service handles PaymentMade and sends a notification.
The rollback would be the opposite of the above process. If the Payments service raised an error, Courier Integration would handle it and raise a ShipmentCancelled event, which in turn would be handled by Inventory to raise OrderUnreserved, which in turn may be handled by the email service to send a notification.
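For contrast, a sketch of one choreography participant (the courier integration); bus.subscribe/bus.publish stand in for whatever broker client you use, e.g. RabbitMQ, and the event names are illustrative:

def start_courier_integration(bus, carrier):
    # This service only knows which events it reacts to and which it raises.
    def on_order_reserved(event):
        shipment = carrier.manifest(event["order_id"])
        bus.publish("ShipmentManifested", {"order_id": event["order_id"],
                                           "shipment_id": shipment.id})

    def on_payment_failed(event):
        # Compensating action, decided autonomously by this service.
        carrier.cancel(event["shipment_id"])
        bus.publish("ShipmentCancelled", {"order_id": event["order_id"]})

    bus.subscribe("OrderReserved", on_order_reserved)
    bus.subscribe("PaymentFailed", on_payment_failed)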

User credentials in messages in event driven architecture with Kafka

My first attempt to implement a microservice architecture using events with Kafka.
I'm having trouble finding out how I can check for user credentials in an event.
My application is simple:
a service that controls users with email and passwords, able to create, edit and delete them.
a service that sends emails from those users.
My idea is to create an event with a JSON payload like:
{
  "status": "sendEmail",
  "message": {
    "sender": "abc@zxy.com",
    "password": "123456",
    "recipient": "jkl@asd.com",
    "content": "this is my emails body"
  }
}
Once I create this event in the second service, how can I validate with an event that the user exists in the first service? I could easily do this with REST communication, but I would like to find out how to communicate responses between services with event messages.
Thanks.
You would need to either cache all user accounts in the second service (by consuming all user topic records), or perform an external lookup upon consuming email records. Messaging and RESTful services aren't necessarily mutually exclusive.
FWIW, at least encrypt passwords before sending over unsecured/plaintext topics
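A sketch of the first option (caching user accounts in the email service), assuming kafka-python, a compacted "users" topic, and illustrative record shapes; send_email is a hypothetical SMTP helper:

import json
from kafka import KafkaConsumer

users = {}  # email -> user record: the email service's local replica

consumer = KafkaConsumer(
    "users", "send-email-requests",           # assumed topic names
    bootstrap_servers="localhost:9092",
    group_id="email-service",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
)

for record in consumer:
    if record.topic == "users":
        user = record.value
        if user is not None:                  # ignore tombstones
            users[user["email"]] = user       # keep the local cache current
    else:
        request = record.value["message"]
        if request["sender"] not in users:    # validate against the local cache
            print("unknown sender, rejecting:", request["sender"])
            continue
        send_email(request)                   # hypothetical SMTP helper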

Hotel communication microservice

In a microservice architecture for a hotel, I want to create a communication service that will handle all the emails, SMS, etc. This service should be triggered by asynchronous events.
Should these events be called: SEND_RESERVATION_CONFIRMATION_EMAIL, making the reservation service aware of the email communication? Or should there be a more generic event RESERVATION_CONFIRMED, resulting in a confirmation email?
Should these events be called: SEND_RESERVATION_CONFIRMATION_EMAIL
No. Events should be named as sentences in the past tense.
making the reservation service aware of the email communication
I would not make that coupling. The reservation service is responsible for reservations, not for methods of notifying the customers.
Or should there be a more generic event RESERVATION_CONFIRMED, resulting in a confirmation email?
Yes, RESERVATION_CONFIRMED seems a good choice; it represents what really happened, and it does not contain any indication of what should be done next. The workflow/process of notifying the customer should be managed by another component, i.e. a Saga/Process manager. This Saga would receive the RESERVATION_CONFIRMED event and then send the SEND_RESERVATION_CONFIRMATION_EMAIL command to the responsible microservice.
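A sketch of that Saga, with a hypothetical bus API; the event and command names come from the question, and the payload fields are assumptions:

def reservation_notification_saga(bus):
    def on_reservation_confirmed(event):
        # Translate the fact into an instruction for the communication service.
        bus.send_command("communication-service", {
            "command": "SEND_RESERVATION_CONFIRMATION_EMAIL",
            "reservation_id": event["reservation_id"],
            "guest_email": event["guest_email"],
        })

    bus.subscribe("RESERVATION_CONFIRMED", on_reservation_confirmed)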

Aggregated Notification Microservice

The Problem
We are currently architecting our new Notification Microservice but are having trouble with how to handle aggregated emails. Instead of sending one email for every action performed (which could be 20+ in a few minutes), we need to send a single email after an hour summarising all the actions that were completed.
What We Have So Far
We so far propose that we have this type of messaging pattern, where Client Service is any service in our cluster and Messagebot is our Notification Microservice.
1. Client Service sends a notification to Messagebot that it will need to send something in the future
2. Messagebot stores the details in its database
3. Messagebot periodically checks its database for what needs to be sent
4. Messagebot gets the required data from another service (could be Client Service) via API
5. Messagebot sends the email using the data from #4 and an HTML template
The Debate
For the data that needs to be sent, we are less sure and it is what we need help with. So far we think this should be the structure of the JSON from Client Service to Notification Service (step #1):
{
  template_id: SOME_TEMPLATE_ID,
  user_id: SOME_USER_ID,
  objectid: SOME_OBJECT_ID
}
or
{
  template_id: SOME_TEMPLATE_ID,
  user_id: SOME_USER_ID,
  required_objects: { task_id: SOME_TASK_ID, document_id: SOME_DOCUMENT_ID }
}
Where task_id and document_id are just examples and it would change based on the template. It could just as easily be {product_id: SOME_PRODUCT_ID} for a different template.
Why The Debate
Our thoughts so far are that:
We only need template_id because the source of the data would be implied by the objects (configured like an ENV var). For example, the Task object would live at http://taskservice/:id. Otherwise, we could have problems with failing APIs or switching URLs in the future.
We should use user_id instead of email and name because it prevents the issue of email/name pairs not matching up across multiple messages.
For the objects, we're still sceptical: required_objects means that the client service would need knowledge of the inner workings of Messagebot, while a single objectid might not be very extensible. We could easily imagine many of our messages needing more than one object.
In Conclusion
Thank you for reading. The design of this service is important because it will be central to our entire organisation.
Which debated JSON structure is most appropriate in our situation? Also, knowing our requirements, what would be the proper setup for this type of service? (i.e. are we correct in our other assumptions?)
So your messagebot will
store notifications
get data from other services
compile emails from the data and
send the compiled emails
In my opinion, your messagebot has been given too many tasks. If I were designing the system, I would keep the messagebot simpler. The services should encapsulate the knowledge to compile the email, e.g. manage their own templates and so on. The services would push the compiled emails to a queue, and the messagebot would pick them up and send them. The only logic in the messagebot is to pick up the emails from the queue and send them. This way, it doesn't matter how many more services you add in the future; the messagebot will stay nice and simple.
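A sketch of that split, assuming RabbitMQ (pika) for the queue and smtplib for delivery; the queue name, payload shape, and SMTP relay are illustrative:

import json
import smtplib
from email.message import EmailMessage

import pika  # RabbitMQ client, as mentioned elsewhere in the thread

def send_compiled_email(body):
    # The email arrives fully compiled; messagebot only wraps and sends it.
    msg = EmailMessage()
    msg["From"] = body["from"]
    msg["To"] = body["to"]
    msg["Subject"] = body["subject"]
    msg.set_content(body["html"], subtype="html")
    with smtplib.SMTP("localhost") as smtp:  # hypothetical relay
        smtp.send_message(msg)

def run_messagebot():
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="compiled-emails", durable=True)

    def on_message(ch, method, properties, payload):
        send_compiled_email(json.loads(payload))
        ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after sending

    channel.basic_consume(queue="compiled-emails", on_message_callback=on_message)
    channel.start_consuming()  # messagebot's only job: pop and send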
