Do we really need Event Sourcing and CQRS in microservices?

In my understanding, when a transaction spans across microservices, we can solve this problem by using a message broker (Kafka, RabbitMQ, etc.): a service publishes events so that subscriber microservices can update their databases by listening to them.
In case of an exception, we can publish a failure event so that the subscriber services can revert their state.
Is this not sufficient? What is the problem with this approach?
Why and when do we need Event Sourcing? Do we really need it?

Not at all. You can have a very well-defined microservices-style architecture without CQRS and Event Sourcing. CQRS and Event Sourcing are solutions for intra-microservice design. You can choose to implement all, some, or none of your microservices using CQRS and Event Sourcing.
Let's see how Event Sourcing may help you. Event Sourcing is an alternative way of restoring the current state of an entity: you replay its events instead of loading a row from a SQL database through an ORM like Entity Framework or Hibernate. Suppose you have a microservice to store data about books. If you use SQL, you would have controllers and endpoints to create, update, and delete a book, and you would store those books in a SQL table. To update a book, you would first get its current state by querying the SQL table for that book (by its id); your ORM would convert that table representation into a book object (the object-relational impedance mismatch problem), and then you would apply the changes and save the changed book object back into the SQL table. As an alternative, you can store events for the book objects in a NoSQL database like MongoDB, or in a dedicated event store. Now, to update the book, you would first restore its current state by fetching all the events related to that book and replaying them. Your events become the source of truth, and you completely avoid the bottleneck of ORM mapping and SQL joins. Events are typically stored as JSON documents, and appending and reading them is usually very fast.
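As a minimal sketch of that replay step, assuming a tiny hypothetical book domain (none of these types come from a specific framework):

```java
import java.util.List;

// Hypothetical event types for the book example above.
sealed interface BookEvent permits BookCreated, BookRenamed {}
record BookCreated(String id, String title) implements BookEvent {}
record BookRenamed(String id, String newTitle) implements BookEvent {}

class Book {
    String id;
    String title;

    // Apply a single event to mutate the in-memory state.
    void apply(BookEvent event) {
        if (event instanceof BookCreated e) {
            this.id = e.id();
            this.title = e.title();
        } else if (event instanceof BookRenamed e) {
            this.title = e.newTitle();
        }
    }

    // Current state = replay of every stored event of this book, in order.
    static Book replay(List<BookEvent> history) {
        Book book = new Book();
        history.forEach(book::apply);
        return book;
    }
}
```

The event store only ever appends; the current state is derived on demand, never stored as the primary copy.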
Now, coming to CQRS: CQRS is purely a pattern for separation of concerns. You would use CQRS to bifurcate your read side from your write side. Endpoints for the write side (create, update, delete) live in one service, and endpoints for the read side live in another. The advantages you get are independent scaling, deployment, maintenance, and more. If your application is read-intensive, you can deploy multiple instances of just the read-side service.
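A minimal sketch of that bifurcation, with hypothetical service and type names (how the read side gets populated, e.g. by subscribing to events from the write side, is a separate concern):

```java
import java.util.List;

// Write side: owns create/update/delete, deployed as its own service.
interface BookCommandService {
    void createBook(String id, String title);
    void renameBook(String id, String newTitle);
    void deleteBook(String id);
}

// Read side: a separately deployed, independently scalable service that
// answers queries from a projection optimised for reading.
interface BookQueryService {
    BookView getBook(String id);
    List<BookView> findBooksByTitle(String titlePrefix);
}

// A denormalised view shaped for queries, not for writes.
record BookView(String id, String title) {}
```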
If you want to learn more, feel free to PM me. Good Luck!

I think you're confused about the microservices stuff :) They're a solution for designing a scalable application and have nothing to do with database transactions. Moreover, a database transaction (ideally) shouldn't span processes, never mind microservices.
The pub/sub approach is how different microservices communicate (it works for in-process models as well); it has nothing to do with database transactions. Event Sourcing is about representing the domain state as a collection of relevant changes. That is a very different objective from microservices.
We're using ES because we want to store domain events as the 'single source of truth', microservices or not. It's all about domain model design.
Learn more about ES, DDD, and CQRS, and leave microservices for the future. They're too buzzwordy for now; few apps need them, and few developers can actually use them properly.

You are describing a compensation pattern as a replacement for a distributed transaction. In a microservice-oriented architecture, this is a good approach for focusing on availability by utilizing eventual consistency: instead of having one centrally coordinated, distributed transaction across services, each service executes its sub-task without a transactional context. If something goes wrong, each service is informed about the failure and executes some kind of (semantic) compensation of its previous action. Thus, the transactional operation is eventually undone.
As you have already stated, communication can be done via a message bus, and there is no need for Event Sourcing or CQRS; the compensation pattern does not depend on these principles.
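As a rough sketch of such a compensation handler, with entirely hypothetical event and service names:

```java
// A failure event arriving from the message bus.
record PaymentFailed(String orderId, String reason) {}

class OrderService {
    // Forward path: the service executes its sub-task locally, without
    // any distributed transaction, and publishes an OrderReserved event.
    void reserveOrder(String orderId) {
        // local DB write + publish OrderReserved
    }

    // Compensation path: invoked when a failure event is received.
    // It semantically undoes the earlier reservation, so the overall
    // operation is eventually rolled back.
    void onPaymentFailed(PaymentFailed event) {
        cancelReservation(event.orderId());
    }

    private void cancelReservation(String orderId) {
        // local DB write + publish OrderCancelled
    }
}
```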

Related

How do I access data that my microservice does not own?

I have a microservice that needs some data it does not own: a read-only cache of data that is owned by another service. I am looking for guidance on how to implement this.
I don't want my microservice to call another microservice; too much data is involved in a join for that to be successful. In addition, I don't want my service to be dependent on another service (which may be dependent on another ...).
Currently, I am publishing an event to a queue, and my service subscribes and maintains a copy of the data. I am having problems staying in sync with the source system. Plus, our DBAs are complaining about data duplication. I don't see a lot of information on this topic.
Is there a pattern for this? What is its name?
First of all, there are a couple of ways to share data, and you mention two of them.
One service calls another service to get the data when it is required. This is good because you get up-to-date data and there is no extra management required in the consuming service. The problem is that if you call it too many times, the other service's performance may be impacted.
The other solution is to maintain a local copy of that data in the consuming service using a pub/sub mechanism.
Depending on your requirements and architecture, you can keep this copy in the actual database of the consuming service or in some type of (persisted) cache.
The downside here is consistency: in a distributed architecture you will not get strong consistency; you have to rely on eventual consistency.
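A minimal sketch of such a locally maintained copy, assuming a hypothetical versioned change event (the version comparison is one way to address the "staying in sync" problem from the question):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A hypothetical change event published by the owning service. Each
// change carries a monotonically increasing version per record.
record CustomerChanged(String customerId, long version, String name, boolean deleted) {}

class CustomerReplica {
    private final Map<String, CustomerChanged> cache = new ConcurrentHashMap<>();

    // Idempotent upsert: duplicated or out-of-order deliveries are
    // discarded by comparing versions. Deletes are kept as tombstones
    // so a late, stale event cannot resurrect a removed record.
    void onEvent(CustomerChanged event) {
        cache.merge(event.customerId(), event,
                (current, incoming) ->
                        incoming.version() > current.version() ? incoming : current);
    }

    // Read-only access for the consuming service; returns null for
    // unknown ids, and callers should check the deleted flag.
    CustomerChanged get(String customerId) {
        return cache.get(customerId);
    }
}
```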
Another option, depending on your requirements, is to extract the tables that need to be joined into a separate service. It depends on your use case.
If you still want consistency, then instead of having the first service update the data and then publish, create a mediator component that calls the two services synchronously. Things get complicated here, as you are now trying to implement a transaction over a distributed system.
One more point: when a product is built around a microservice architecture, it is not only a technical move. As an organization and as a team, you need to understand that what works in a monolith does not work the same way in microservices. The DBAs need to understand that part too: in microservices, duplication of data across schemas (as with other aspects, like code) is preferred over reuse.
Last but not least: if it is always necessary to call another service to get data, it is worth checking the service boundaries as well. It may be that some services need to merge because the business functionality is required to stay together.

Design guides for Event Sourced microservices

I am thinking about the best way to structure your microservices. In the past, the team I worked with used Axon Framework and PostgreSQL, and each microservice had its own event store in the PostgreSQL database; we then built the communication between them using REST.
I am thinking it would be smarter to have all microservices talk to the same event store, as we would be able to share events faster instead of rewriting the communication lines using REST.
The questions that follow from the backstory are:
What is the best practice for having an event store?
Would each service have its own? Would they share the same event store?
Where would I find information to inspire and gather more answers? Searching the internet for best practices on how to structure the event store feels like searching for a needle in a haystack.
Bear in mind, the question is in no way aimed at Axon Framework, but at the general idea of building scalable, good code, as the applications would each work with their own event store for the write model and read models.
Thank you for reading and I wish you all the best
-- Me
I'd add a slightly different notion to Tore's response, although the main line is identical to what I'm sharing here. So I don't aim to overrule Tore, just to provide additional insight.
If the (micro)services belong to the same Bounded Context, then they're allowed to "learn about each other's language."
This language thus includes the events these applications publish and store.
Whenever there's communication required between different Bounded Contexts, you'd separate the stores, as one context shouldn't be bothered by the specifics of another context.
Hence it is beneficial to deduce what services belong to which Bounded Context since that would dictate the required separation.
Axon aims to support this by allowing multiple contexts with the Axon Server, as you can read here.
It simply allows the registration of applications to specific contexts, within which it will completely separate all message streams (so commands, events, and queries) and the Event Store.
You can also set this up from scratch yourself, of course. Tore's recommendation of Kafka is what's used quite broadly for Event Streaming needs between applications. Honestly, any broadcast type of infrastructure suits event distribution, as that's how events are typically propagated.
You want to have one event store per service, just as you would want one relational database per service in a non-event-sourced system.
Sharing a database/event store between services creates coupling, and we have all learned the hard way that this is an anti-pattern.
If you want to use an event log to share events across services, then Kafka is a popular choice.
It is important to remember that you only do Event Sourcing within a service's bounded context.
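As a sketch of that sharing step, here is a minimal publisher using the plain Kafka producer API; the topic name and the JSON payload are assumptions, and in practice you would publish after (or as part of relaying from) the append to the service's own event store:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventPublisher {
    private final KafkaProducer<String, String> producer;

    public EventPublisher(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    // Keyed by stream id so all events of one aggregate land in the same
    // partition and keep their order for consumers.
    public void publish(String streamId, String eventJson) {
        producer.send(new ProducerRecord<>("book-events", streamId, eventJson));
    }
}
```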

Need defense against wacky challenge to Event Sourcing architecture w/CosmosDB

In the current plan, incoming commands are handled via Function Apps, resulting in events being sent to an Event Hub, and the views are then materialized from those events.
Someone is arguing that instead of storing events in something like Table Storage and materializing views based on events and snapshots, we should:
Just stream events to a log in Azure Monitor to have auditing.
Make changes to a domain object immediately in response to a command and use the change feed as our source of events for materialized views.
He doesn't see the advantage of even having a materialized view. Why not just use a query? The argument is that we don't expect a lot of traffic.
He wants to fulfill the whole audit-log requirement by saving events to the Azure Monitor log, i.e. just an application log. Commands would instead directly modify the representation of an entity in Cosmos, and we'd use the change feed from CosmosDB as our domain object events, or we would create new events off of it via subscribers to that stream.
Is this actually an advantageous approach? Can y'all think of any reasons why we wouldn't want to do that? It seems like we'd be losing something here.
He's saying we'd no longer need to be concerned with eventual consistency, as we'd have immediate consistency.
Every reference implementation I've evaluated does NOT do it the way he's suggesting. I'm not deeply versed in the advantages and disadvantages of the Event Sourcing / CQRS paradigm, so I'm at a loss at the moment. Currently researching furiously.
This is a conceptual issue, so there isn't really a code example. However, here are some references that seem to back up the approach I'm taking:
https://medium.com/@thomasweiss_io/planet-scale-event-sourcing-with-azure-cosmos-db-48a557757c8d
https://sajeetharan.com/2019/02/03/event-sourcing-with-azure-eventhub-and-cosmosdb/
https://learn.microsoft.com/en-us/azure/architecture/patterns/event-sourcing
If your goal is only to have the audit log, state-based persistence could be a good choice. Event sourcing adds some complexity to the implementation side, and unless you can identify more advantages of using it, you might not convince your team to bring this complexity into the system. There are numerous questions and answers on SO, as well as blog posts, about the pros and cons of event sourcing, so I won't get into that discussion here.
I can warn you, though, that the second article in your list is very weak and would most probably lead you into many difficulties. The role of Event Hub there is completely unclear, and it doesn't explain anything about projections and read-models (what you call "materialised views"). Only a very limited number of use cases can live with only getting one entity by id, without being able to execute a query across multiple entities. That probably also answers your question about having read-models at all: you will need them as soon as you first have to get a list of entities matching some condition (a query).
Using CosmosDb as the event store is completely feasible, as described in the first article, if you can manage the costs involved. Just remember to set the change feed TTL to -1; otherwise, you won't be able to replay your projections when you need to.
To summarise:
Keeping the audit log can be done without event-sourcing, but you need to ensure that events are published reliably, preferably in the same transaction as the entity state update (see the sketch after this list). This is often hard or impossible, but you might accept the risk if your audit requirements are not strict. You can also base your audit log on the CosmosDb change feed, just collecting document changes and logging them somewhere.
Event sourcing is a powerful technique but it has both pros and cons. The most common prejudice against using event sourcing is its implementation complexity. It might not be a big issue if you have a team that is somewhat experienced in building event-sourced systems. If you don't have such a team, you might want to build a small-scale spike to get some experience.
If you don't get full buy-in from the team to use event sourcing, you will later get all the blame if anything goes wrong. And it will go wrong at some point, especially with little experience in this area.
Spend some time reading books and trying out things yourself, before going wild in production.
Don't use Event Hub for anything it is not designed for. Event Hub is a powerful event ingestion transport with a limited TTL, and it should be used for that purpose.
Don't use Table Storage as the event store unless you only read entities by id. I used it in production for such a scenario and it worked (to some extent), but you can't project read-models from there.
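The "same transaction" option in the first point above is commonly implemented as a transactional outbox. Here is a minimal sketch with plain JDBC; the table names and columns are assumptions, and a separate relay process would read the outbox rows and publish them:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class AuditedBookRepository {
    // Update the entity state and record the audit event in one local
    // transaction: both succeed or both roll back.
    public void renameBook(Connection conn, String bookId, String newTitle)
            throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement update = conn.prepareStatement(
                     "UPDATE books SET title = ? WHERE id = ?");
             PreparedStatement outbox = conn.prepareStatement(
                     "INSERT INTO outbox (aggregate_id, event_type, payload) VALUES (?, ?, ?)")) {
            update.setString(1, newTitle);
            update.setString(2, bookId);
            update.executeUpdate();

            outbox.setString(1, bookId);
            outbox.setString(2, "BookRenamed");
            outbox.setString(3, "{\"id\":\"" + bookId + "\",\"newTitle\":\"" + newTitle + "\"}");
            outbox.executeUpdate();

            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}
```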
A simple rule of thumb is to not use products for tasks they weren't designed for.
Azure Monitor was not designed to store application domain data. It is designed to store telemetry data from your applications and services, and it provides features such as alerts and other integrations with DevOps tools for managing the operation and health of your apps.
There is a simple reason why you were able to find articles on event sourcing using Cosmos DB and why our own docs talk about it: it was designed to be used this way. It is simple to set up Cosmos DB as an append-only event store for your applications and to use the change feed to fire off messages to other apps or services or, in your case, to maintain a materialized view of the domain objects within your app.

Microservices - Is event store technology (in event sourcing solutions) shared between all microservices?

As far as my limited experience allows me to understand, one of the core concepts of a "microservice" is that it relies on its own database, which is independent from the other microservices.
Diving into how to handle distributed transactions in a microservices system, the best strategy seems to be the Event Sourcing pattern, whose core is the event store.
Is the event store shared between different microservices? Or are there multiple independent event store databases for each microservice and a single common event broker?
If the first option is the solution, can I then assume, using CQRS, that every microservice's database is intended as the query side, while the shared event store is the command side? Is that a wrong assumption?
And since we are on the topic: how many retries should I do in case of a concurrent write to a stream using optimistic locking?
A very big thanks in advance for every piece of advice you can give me!
Is the event store shared between different microservices? Or are there multiple independent event store databases for each microservice and a single common event broker?
Every microservice should write to what is, from its point of view, its own event store. This could mean separate instances or separate partitions inside the same instance. This allows the microservices to be scaled independently.
If the first option is the solution, can I then assume, using CQRS, that every microservice's database is intended as the query side, while the shared event store is the command side? Is that a wrong assumption?
Kind of. As I wrote above, each microservice should have its own event store (or a partition inside a shared instance). A microservice should not append events to another microservice's event store.
Regarding reading events, I think that it should in general be permitted. Polling the event store is the simplest (and, in my opinion, the best) solution to propagate changes to other microservices. It has the advantage that the remote microservice polls at the rate it can handle and only for the events it wants. This scales very nicely by creating event store replicas, as many as needed.
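A minimal sketch of such a polling consumer, with hypothetical EventRecord/EventStoreClient types standing in for whatever read API the emitting service exposes:

```java
import java.util.List;

// One stored event with its global position in the store.
record EventRecord(long position, String type, String payload) {}

interface EventStoreClient {
    // Returns up to `limit` events with position > afterPosition, in order.
    List<EventRecord> readAfter(long afterPosition, int limit);
}

class PollingProjector {
    private final EventStoreClient store;
    private long checkpoint; // persist this in real code to survive restarts

    PollingProjector(EventStoreClient store, long startFrom) {
        this.store = store;
        this.checkpoint = startFrom;
    }

    // Called on a schedule chosen by the consuming service: it decides
    // the polling rate and which events it cares about.
    void pollOnce() {
        for (EventRecord event : store.readAfter(checkpoint, 100)) {
            apply(event);                  // update the local read model
            checkpoint = event.position(); // advance only after a successful apply
        }
    }

    private void apply(EventRecord event) { /* projection logic */ }
}
```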
There are some cases where you would not want to publish every domain event from the event store. Some say there can be internal domain events on which other microservices should not depend. In that case you could mark each event as free (or not) for external consumption.
The cleanest solution to propagate changes from a microservice is to have live queries to which other microservices can subscribe. The advantage is that the projection logic does not leak into the other microservices, but the disadvantage is that the emitting microservice must define and implement those queries; you can do this when you notice that other microservices duplicate the projection logic. An example of such a query in an e-commerce application is the total order price: a query like WhatIsTheTotalPriceOfTheOrder could be published every time an item is added to, removed from, or updated in an Order.
And since we are on the topic: how many retries should I do in case of a concurrent write to a stream using optimistic locking?
As many as you need, i.e. until the write succeeds. You could set a limit of 99999, just to detect when something is horribly wrong with the retry mechanism. In any case, a concurrent write should only be retried when another write happened at the same time on the same stream (for one aggregate instance), not for the entire event store.
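A sketch of such a retry loop, assuming a hypothetical event store API with an expected-version check (in a real system you would also re-run the command's validation against the reloaded state before each retry):

```java
class ConcurrencyException extends RuntimeException {}

interface EventStore {
    long readCurrentVersion(String streamId);

    // Appends only if the stream is still at expectedVersion; otherwise
    // throws ConcurrencyException (optimistic locking).
    void append(String streamId, long expectedVersion, Object event)
            throws ConcurrencyException;
}

class RetryingAppender {
    // Deliberately huge: it exists only to detect a broken retry loop.
    private static final int MAX_RETRIES = 99999;

    void appendWithRetry(EventStore store, String streamId, Object event) {
        for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
            long version = store.readCurrentVersion(streamId);
            try {
                store.append(streamId, version, event);
                return; // success
            } catch (ConcurrencyException e) {
                // Another writer appended to this same stream first:
                // reload the version and try again.
            }
        }
        throw new IllegalStateException("retry mechanism is horribly wrong");
    }
}
```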
As a rule: in service architectures, which include microservices, each service tracks its state in a private database.
"Private" here primarily means that no other service is permitted to write to or read from it. This could mean that each service has a dedicated database server of its own, or services might share a single appliance but only have access permissions for their own piece.
Expressed another way: services communicate with each other by sharing information via their public APIs, not by writing messages into each other's databases.
For services using event sourcing, each service would have read and write access only to its own streams. If those streams happen to be stored on the same host, fine; but the correctness of the system should not depend on different services storing their events on the same appliance.
TL;DR: All of these patterns apply to a single bounded context (service, if you like). Don't distribute domain events outside your bounded context; publish integration events onto an ESB (enterprise service bus) or something similar as the public interface.
OK, so we have three patterns here to cover, briefly, individually and then together.
Microservices
CQRS
Event Sourcing
Microservices
https://learn.microsoft.com/en-us/azure/architecture/microservices/
Core objective: Isolate and decouple changes in a system to individual services, enabling independent deployment and testing without collateral impact.
This is achieved by encapsulating change behind a public API and limiting runtime dependencies between services.
CQRS
https://learn.microsoft.com/en-us/azure/architecture/patterns/cqrs
Core objective: Isolate and decouple write concerns from read concerns in a single service.
This can be achieved in a few ways, but the core idea is that the read model is a projection of the write model optimised for querying.
Event Sourcing
https://learn.microsoft.com/en-us/azure/architecture/patterns/event-sourcing
Core objective: Use the business domain rules as your data model.
This is achieved by modelling state as an append-only stream of immutable domain events and rebuilding the current aggregate state by replaying the stream from the start.
All Together
There is a lot of great content here https://learn.microsoft.com/en-us/previous-versions/msp-n-p/jj554200(v=pandp.10)
Each of these has its own complexity, trade-offs, and challenges, and while they are a fun exercise, you should consider whether the costs outweigh the benefits. All of them apply within a single service or bounded context. As soon as you start sharing a data store between services, you open yourself up to issues, because the shared data store cannot be changed in isolation: it is now a public interface.
Rather, try to publish integration events to a shared bus as the public interface for other services and bounded contexts to consume and use to build projections of another domain's data.
It's a good idea to publish integration events as idempotent snapshots of the current aggregate state (upsert X, delete X), especially if your bus is not persistent. This allows you to republish integration events from a domain if needed, without producing an inconsistent state between consumers.
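A small sketch of what such snapshot-style integration events might look like, with hypothetical names; because each event carries the full current state, consumers can apply it as a blind upsert, and republishing old snapshots can never leave them inconsistent:

```java
// Published to the shared bus as the bounded context's public interface.
sealed interface OrderIntegrationEvent permits OrderUpserted, OrderDeleted {}

// A full snapshot of the aggregate's current state, not a delta: the
// version lets consumers discard stale or duplicated deliveries.
record OrderUpserted(String orderId, long version, String status, double totalPrice)
        implements OrderIntegrationEvent {}

record OrderDeleted(String orderId, long version) implements OrderIntegrationEvent {}
```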

Update/Add as separate service and Get as separate service

We started to migrate our existing project to a microservice architecture. After going through a lot of videos/lectures, we came to the conclusion that a service should do one task and only one task and should be great at it. The services should be designed around nouns and verbs.
We have an entity with basically CRUD operations. The add, update, and delete operations are used the least, while GET requests are far more frequent. Typically, update/add/delete are done by the admins.
What we thought of is breaking the CRUD entity into two services:
EntityCUDService (create/update/delete)
EntityLookupService (get)
Now both these services point to the same collection in Mongo, or to the same SQL table.
But if EntityCUDService makes some changes to the collection/table, then EntityLookupService breaks.
We have heard of maintaining semantic versioning, which sounds okay, but we have also heard that microservices should not share a model/data source. So what would be the optimal solution here, where we have tons of GETs but only tens of updates/adds of the same entity?
Any help is greatly appreciated.
Typically, a microservice should manage a single entity. So in your case you can have one microservice to manage the entity (for the various operations on it). If you then want to split the service on the basis of read and write operations, you are following the CQRS pattern: you split your microservice into one command service and one query service over the same entity. I suggest going with one service to manage the entity first, and then, if required, splitting it into separate services for read and write operations. If you do go with CQRS, have a look at Event Sourcing as well, as it fits nicely with CQRS in a microservices design.
