I want to communicate between my services using events. I am going to publish (internally) all my domain events and allow any other service to subscribe to them. But such an approach couples those services together. I am no longer allowed to change my events. This is even worse than local coupling, because I don't even know my consumers any more. This limits the ability to develop/refactor to an unacceptable degree. I am thinking about versioning my events, which solves most of the issues. But how to subscribe to versioned events? Introducing a common interface that groups all of an event's versions and then downcasting the event within the listener to the accepted one does not sound like a viable solution. I am also considering publishing all supported versions of the event to the bus. By definition each subscriber will handle just one version. I don't want my domain to be involved in these matters, so I need to build a kind of infrastructure listener that will translate caught events into other versions. I can't find anything about this topic on the Internet, which automatically makes me wonder whether I am thoroughly wrong :)
UPDATE: After a lot of thought, I no longer want to publish my domain events. I think it is not desirable to expose internal service mechanics to the outer world. It could also violate some domain data access restrictions. I think the way to go is to map my domain events to some more coarse-grained integration events. But I will probably still need a way to version them :)
UPDATE 2: After some consultations an idea came up. Assuming we stick to the concept of integration events, such events may be considered as just a type and an event id. The outer listener then focuses only on the event type. If an event occurs, the listener is provided with the event id. This enables the listener to fetch the real event from the stream/bus/whatever in a given version, e.g. $eventsStore->get($eventGuid, $eventType, 'v27') (PHP syntax).
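To illustrate that last idea, here is a minimal F# sketch; IntegrationEventNotification, IEventStore and the "OrderPlaced"/"v27" values are all made up to mirror the PHP call above:

open System

// Hypothetical notification published to the bus: only a type and an id.
type IntegrationEventNotification = { EventId: Guid; EventType: string }

// Hypothetical store interface mirroring $eventsStore->get($eventGuid, $eventType, 'v27').
type IEventStore =
    abstract member Get: eventId: Guid * eventType: string * version: string -> string

// The listener subscribes by event type only; when notified, it pulls the
// payload in the single version it supports.
let handleNotification (store: IEventStore) (notification: IntegrationEventNotification) =
    if notification.EventType = "OrderPlaced" then
        let payload = store.Get(notification.EventId, notification.EventType, "v27")
        printfn "handling %s: %s" notification.EventType payload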
I am going to publish (internally) all my domain events and allow any other service to subscribe to them.
This is a common pattern in Event-Driven Architecture. I assume that you publish the events on an Event Broker, e.g. Apache Kafka, and that consumers subscribe to topics on the Event Broker.
I am no longer allowed to change my events. This is even worse than local coupling, because I don't even know my consumers any more. This limits the ability to develop/refactor to an unacceptable degree. I am thinking about versioning my events, which solves most of the issues.
Nah, published contracts should be versioned, and no backward-incompatible changes can be added to them. If you need a change that is not backward compatible, you have to introduce a new version of the published contract - and keep the old one as long as there are consumers. This is no different from REST-based interfaces - you have to fulfill your contracts.
With REST you may do this by using both /v1/orders and /v2/orders at the same time. With an Event-Driven Architecture you use two topics e.g. orders-v1 and orders-v2 - and these two contain data following a schema, e.g. Avro.
With an Event-Driven Architecture, where the services are decoupled (with a broker in between), you can actually phase out the old producer if you add a small transformer, e.g. one that consumes orders-v2, transforms the events to the old format and publishes them on orders-v1 - so both v1 and v2 are still published.
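A rough sketch of such a transformer, with made-up event shapes and broker-agnostic consume/publish functions (assuming v2 switched to integer cents while v1 carried a plain amount):

// Hypothetical shapes of the two published contracts.
type OrderPlacedV2 = { OrderId: string; TotalCents: int; Currency: string }
type OrderPlacedV1 = { OrderId: string; Total: float }

// Downgrade a v2 event to the old v1 shape.
let toV1 (e: OrderPlacedV2) : OrderPlacedV1 =
    { OrderId = e.OrderId; Total = float e.TotalCents / 100.0 }

// The transformer: consume orders-v2 and republish onto orders-v1, so old
// consumers keep working while the original v1 producer is retired.
let runTransformer (consumeV2: unit -> OrderPlacedV2 seq) (publishV1: OrderPlacedV1 -> unit) =
    consumeV2 () |> Seq.iter (toV1 >> publishV1)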
Building Event-Driven Microservices is a good book about this.
As far as my limited experience allows me to understand, one of the core concepts of microservices is that each microservice relies on its own database, independent from the other microservices.
Diving into how to handle distributed transactions in a microservices system, the best strategy seems to be the Event Sourcing pattern whose core is the Event Store.
Is the event store shared between different microservices? Or are there multiple independent event store databases for each microservice and a single common event broker?
If the first option is the solution, using CQRS I can now assume that every microservice's database is intended as the query side, while the shared event store is on the command side. Is that a wrong assumption?
And since we are on the topic: how many retries do I have to do in case of a concurrent write to a stream using optimistic locking?
A very big big thanks in advance for every piece of advice you can give me!
Is the event store shared between different microservices? Or are there multiple independent event store databases for each microservice and a single common event broker?
Every microservice should write to its own Event store, from its point of view. This could mean separate instances or separate partitions inside the same instance. This allows the microservices to be scaled independently.
If the first option is the solution, using CQRS I can now assume that every microservice's database is intended as the query side, while the shared event store is on the command side. Is that a wrong assumption?
Kinda. As I wrote above, each microservice should have its own Event store (or a partition inside a shared instance). A microservice should not append events to another microservice's Event store.
Regarding reading events, I think that reading events should in general be permitted. Polling the Event store is the simplest (and, in my opinion, the best) solution to propagate changes to other microservices. It has the advantage that the remote microservice polls at the rate it can handle and for the events it wants. This can be scaled very nicely by creating as many Event store replicas as needed.
There are some cases where you would not want to publish every domain event from the Event store. Some say that there could exist internal domain events on which the other microservices should not depend. In this case you could mark the events as free (or not) for external consumption.
The cleanest solution to propagate changes from a microservice is to have live queries to which other microservices can subscribe. It has the advantage that the projection logic does not leak into other microservices, but it also has the disadvantage that the emitting microservice must define and implement those queries; you can do this when you notice that other microservices duplicate the projection logic. An example of such a query is the total order price in an e-commerce application. You could have a query like WhatIsTheTotalPriceOfTheOrder that is published every time an item is added to, removed from, or updated in an Order.
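A tiny sketch of such a query-style event (the type and function names below are made up to match the example):

open System

// Hypothetical "live query" result, published every time the order changes,
// so other microservices subscribe to the answer instead of re-deriving it
// from the raw domain events.
type WhatIsTheTotalPriceOfTheOrder = { OrderId: Guid; TotalPrice: decimal }

// The emitting microservice recomputes and publishes on every cart change.
let publishOrderTotal (publish: WhatIsTheTotalPriceOfTheOrder -> unit)
                      (orderId: Guid)
                      (itemPrices: decimal list) =
    publish { OrderId = orderId; TotalPrice = List.sum itemPrices }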
And since we are on the topic: how many retries do I have to do in case of a concurrent write to a stream using optimistic locking?
As many as you need, i.e. until the write succeeds. You could have a limit of 99999, just to detect when something is horribly wrong with the retry mechanism. In any case, the concurrent write should be retried only when a write is done at the same time on the same stream (for one Aggregate instance), and not for the entire Event store.
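Sketched as a retry loop (the AppendResult type and tryAppend function are hypothetical; the only assumption is that the conflict is detected per aggregate stream, not store-wide):

type AppendResult =
    | Appended
    | VersionConflict   // someone else wrote to the same stream concurrently

// Retry only on a conflict for this particular stream; the generous limit is
// there purely to surface a broken retry mechanism, not as a business rule.
let rec appendWithRetry (retriesLeft: int) (tryAppend: unit -> AppendResult) =
    match tryAppend () with
    | Appended -> ()
    | VersionConflict when retriesLeft > 0 ->
        // reload the stream / rebuild the aggregate state here, then retry
        appendWithRetry (retriesLeft - 1) tryAppend
    | VersionConflict ->
        failwith "retry limit reached - the retry mechanism is probably broken"

// usage (hypothetical store call):
// appendWithRetry 99999 (fun () -> eventStore.AppendToStream(streamId, expectedVersion, newEvents))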
As a rule: in service architectures, which includes micro services, each service tracks its state in a private database.
"Private" here primarily means that no other service is permitted to write or read from it. This could mean that each service has a dedicated database server of its own, or services might share a single appliance but only have access permissions for their own piece.
Expressed another way: services communicate with each other by sharing information via the public API, not by writing messages into each other's databases.
For services using event sourcing, each service would have read and write access only to its own streams. If those streams happen to be stored on the same appliance - fine; but the correctness of the system should not depend on different services storing their events on the same appliance.
TLDR: All of these patterns apply to a single bounded context (service, if you like). Don't distribute domain events outside your bounded context; publish integration events onto an ESB (enterprise service bus) or something similar as the public interface.
Ok so we have three patterns here to briefly cover individually and then together.
Microservices
CQRS
Event Sourcing
Microservices
https://learn.microsoft.com/en-us/azure/architecture/microservices/
Core objective: Isolate and decouple changes in a system to individual services, enabling independent deployment and testing without collateral impact.
This is achieved by encapsulating change behind a public API and limiting runtime dependencies between services.
CQRS
https://learn.microsoft.com/en-us/azure/architecture/patterns/cqrs
Core objective: Isolate and decouple write concerns from read concerns in a single service.
This can be achieved in a few ways, but the core idea is that the read model is a projection of the write model optimised for querying.
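A minimal sketch of that separation, with made-up Order/OrderSummary types: the write model holds the full state, the read model is a flattened projection kept purely for querying.

// Write model: the shape the command side works with.
type Order = { Id: int; Lines: (string * decimal) list }

// Read model: a denormalised, query-optimised view of the same data.
type OrderSummary = { OrderId: int; LineCount: int; Total: decimal }

// Projection: derive the read-side shape from the write-side state.
let project (order: Order) : OrderSummary =
    { OrderId = order.Id
      LineCount = List.length order.Lines
      Total = order.Lines |> List.sumBy snd }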
Event Sourcing
https://learn.microsoft.com/en-us/azure/architecture/patterns/event-sourcing
Core objective: Use the business domain rules as your data model.
This is achieved by modelling state as an append-only stream of immutable domain events and rebuilding the current aggregate state by replaying the stream from the start.
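For example, a bank-account aggregate rebuilt from its stream (made-up event names; a minimal sketch, not a full event-sourcing framework):

// The append-only stream is made of immutable domain events.
type AccountEvent =
    | Deposited of amount: decimal
    | Withdrawn of amount: decimal

// Apply one event to the current state.
let apply (balance: decimal) (event: AccountEvent) =
    match event with
    | Deposited amount -> balance + amount
    | Withdrawn amount -> balance - amount

// Replay the whole stream from the start to rebuild the aggregate's state.
let replay (stream: AccountEvent list) = List.fold apply 0m stream

// replay [ Deposited 100m; Withdrawn 30m ]  // evaluates to 70m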
All Together
There is a lot of great content here https://learn.microsoft.com/en-us/previous-versions/msp-n-p/jj554200(v=pandp.10)
Each of these has its own complexity, trade-offs and challenges, and while a fun exercise, you should consider whether the costs outweigh the benefits. All of them apply within a single service or bounded context. As soon as you start sharing a data store between services, you open yourself up to issues, as the shared data store cannot be changed in isolation: it is now a public interface.
Rather, try to publish integration events to a shared bus as the public interface for other services and bounded contexts to consume and use to build projections of other domain contexts' data.
It's a good idea to publish integration events as idempotent snapshots of the current aggregate state (upsert X, delete X), especially if your bus is not persistent. This allows you to republish integration events from a domain if needed without producing an inconsistent state between consumers.
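A small sketch of such snapshot-style events (the Customer types are made up): consumers can apply the same event any number of times and end up in the same state.

open System
open System.Collections.Generic

// Hypothetical snapshot-style integration events: each one carries the full
// current state of the aggregate, so any event can safely be republished.
type CustomerSnapshot = { CustomerId: Guid; Name: string; Email: string }

type CustomerIntegrationEvent =
    | CustomerUpserted of CustomerSnapshot
    | CustomerDeleted of customerId: Guid

// A consumer just overwrites (or removes) its local copy; applying the same
// event twice leaves it in the same state, i.e. the handling is idempotent.
let applyToLocalCopy (localCopy: Dictionary<Guid, CustomerSnapshot>) (event: CustomerIntegrationEvent) =
    match event with
    | CustomerUpserted snapshot -> localCopy.[snapshot.CustomerId] <- snapshot
    | CustomerDeleted customerId -> localCopy.Remove(customerId) |> ignore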
Edit v1:
I have been going through some system design videos and learnt about microservice architecture using message queues and event-driven architecture.
But I don't seem to find any substantial point of difference between the two.
Both have different components/services publishing or subscribing to eventBus/messagingQueues and performing the tasks associated with the published event.
Is microservice architecture with messaging queues a subset of event-driven architecture, or is there something more to it that I need to figure out?
Original V0:
I have been going through some system design videos and learnt about microservice architecture and event-driven architecture.
But I don't seem to find any substantial point of difference between the two.
Both have different components/services publishing or subscribing to eventBus/messagingQueues and performing the tasks associated with the published event.
Is microservice architecture a subset of event-driven architecture, or is there something more to it that I need to figure out?
Event Driven Architecture is a system design concept where we use "techniques" to achieve synchronous or asynchronous communication in our system. More likely than not we want asynchronous communication.
Such techniques can be pub/sub, long polling, queueing, websockets, etc.
Microservices is an approach to designing our system where we make our services decoupled from one another, or at least we try our best to. For example, Facebook's newsfeed service is independent of other services like Profile, Photos and Messaging. One benefit of this is separation of concerns, so for example if the newsfeed goes down we can still continue to upload photos and chat with our friends. If FB were a monolith, one service going down could have taken down the whole site. Another benefit of microservices is deployability: the smaller the service, the faster it is to test and deploy.
Let's take pizza for example: deciding whether to cut it into squares or triangles, or how big or small the slices are, is thinking microservices. Which slice to eat first and which next is thinking event-driven. Do you go for the larger slices, mixed, small ones or meatier ones? Just like how our systems can decide intelligently what events to trigger next.
Just remember that these are concepts to help you understand an existing system, or help you decide how you would build your system. In the real-world when you onboard to a new company you'll find yourself asking questions like
How service-oriented is the system?
How eventful is the flow of data?
Short answer to your question... they're not necessarily related but inevitably we implement them together when scaling one or the other.
For example given this microservice architecture
[checkout service] ---> [email service]
Let's say the user waits a very long time for checkout and email to finish, and 90% of the wait comes from the email service. In reality the user should be able to continue browsing the other pages while they wait for the email.
In this example we solved the long wait time by adding a queue:
[checkout service] ---> [Queue] ---> [email service]
We've improved the user experience by making our microservices more eventful. When the user clicks the checkout button, a response is returned immediately, allowing the user to continue browsing while the "email event" is dispatched to the queue.
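As an in-process illustration of the same idea (an F# MailboxProcessor standing in for the real queue; in production this would be a durable broker):

type EmailJob = { OrderId: string; To: string }

// The "email service": drains the queue at its own pace, off the request path.
let emailWorker =
    MailboxProcessor<EmailJob>.Start(fun inbox ->
        let rec loop () = async {
            let! job = inbox.Receive()
            // the slow work (templating, SMTP call, retries) happens here
            printfn "sending confirmation for order %s to %s" job.OrderId job.To
            return! loop ()
        }
        loop ())

// The "checkout service": enqueue the job and respond to the user immediately.
let checkout (orderId: string) (email: string) =
    emailWorker.Post({ OrderId = orderId; To = email })
    "order accepted"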
Short answer: No, these are not the same and not subsets.
Both have different components/services publishing or subscribing to eventBus/messagingQueues and performing the tasks associated with the published event.
This is wrong. Microservices are not necessarily about events and publishing/subscribing.
In this case, Wikipedia tackles this very question:
From a formal perspective, what is produced, published, propagated, detected or consumed is a (typically asynchronous) message called the event notification, and not the event itself, which is the state change that triggered the message emission. Events do not travel, they just occur. However, the term event is often used metonymically to denote the notification message itself, which may lead to some confusion. This is due to Event-Driven architectures often being designed atop message-driven architectures, where such communication pattern requires one of the inputs to be text-only, the message, to differentiate how each communication should be handled.
https://en.wikipedia.org/wiki/Event-driven_architecture
I'll be honest, I treated them the same when designing and writing code. But I guess technically there is a difference, as per the paragraph quoted above.
Technically we cannot say they are the same. The following should clarify the relation between these concepts:
Event-driven Microservices rely on message queues (to store/forward messages) to send/receive events, wrapped in messages.
Event-driven architectures usually leverage messaging technology in order to transport the information that something has happened in the past from one place to another or many other places.
Message queues can also be used in a non-event driven architecture, for instance, to perform asynchronous request/response communication.
In addition, when using an event-driven approach the information that is transmitted is usually different: it just indicates what (business) event has happened, usually with less information than provided by normal messages.
For instance, you can send a message to create a new order in an online shop system, and the message could contain all the information the receiver needs to process it. The important thing is also that there is a dedicated receiver of the message.
In the event-driven approach a component would rather send an "order checkout requested" event (or similar) without knowing which other component or components (think of publish/subscribe mechanisms) will "listen" to that event and perform corresponding actions. In such an architecture it would also make sense to send other events before the actual checkout happens, such as "new shopping cart created" or "item added to cart" events.
So the event-driven approach also implies some kind of choreography between your microservices, where more complex business operations can involve lots of events published and processed by different components, without having one central component which orchestrates who gets what information in what order.
Of course, from my experience, it makes perfect sense to combine event-driven choreography and non-event driven orchestration in a Microservices architecture depending on the use cases.
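The difference in payload and addressing can be sketched like this (made-up types and in-memory send/publish functions):

// A command-style message: addressed to one dedicated receiver and carrying
// everything that receiver needs to perform the action.
type CreateOrder =
    { CustomerId: string
      Items: (string * int) list
      ShippingAddress: string }

// An event-style message: a statement that something happened, with just
// enough data to identify it; nobody in particular is addressed.
type OrderCheckoutRequested = { OrderId: string; OccurredAt: System.DateTime }

// The order service is the single, known receiver of the command...
let sendToOrderService (command: CreateOrder) =
    printfn "order service received a command for customer %s" command.CustomerId

// ...while the event goes to whichever components happen to subscribe.
let publish (subscribers: (OrderCheckoutRequested -> unit) list) (event: OrderCheckoutRequested) =
    subscribers |> List.iter (fun handle -> handle event)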
We are designing a reporting system using a microservice architecture. All the services are supposed to be subscribers to the event bus, and they communicate by raising events. We also decided to expose each of our services using a REST API. Now the question is: is it a good idea to create our services as web API [RESTful] applications which are also subscribers to the event bus? So basically there are two points of entry to each service - the API and events. I have a feeling that we should separate these two, as they are two different concerns. Any ideas?
Since microservices architecture is an un-opinionated software design approach, you may get different answers to this question.
Yes, REST and event-based are two different things, but sometimes combining both gives a design with better flexibility.
Answering your concerns: I don't see any harm if REST APIs also subscribe to a queue, as long as you can maintain both of them, i.e. changes to messages do not have any impact on the APIs, and you have proper fallback and eventual consistency mechanisms in place. There are already a few projects which tried this, such as nakadi and ponte.
So it all depends on your services' communication behaviour whether to choose REST APIs, event-based design, or both.
Based on your requirements, you can choose REST APIs where you need synchronous behaviour between services, and go with event-based design where services need asynchronous behaviour; there is no harm in combining both.
Ideally, for inter-process communication it is better to go with messaging, and for client-to-service communication REST APIs are best suited.
Check the communication styles on microservices.io.
REST based Architecture
Advantage
Request/response is easy and best suited when you need synchronous behaviour.
Simpler system, since there is no intermediate broker.
Promotes orchestration, i.e. a service can take action based on the response of another service.
Drawback
Services need to discover the locations of service instances.
One-to-one mapping between services.
REST uses HTTP, a general-purpose protocol built on top of TCP/IP, which adds an enormous amount of overhead when using it to pass messages.
Event Driven Architecture
Advantage
Event-driven architectures are appealing to API developers because they function very well in asynchronous environments.
Loose coupling, since services are decoupled: on an event from one service, multiple services can take action based on application requirements. It is easy to plug in a new consumer to a producer.
Improved availability since the message broker buffers messages until the consumer is able to process them.
Drawback
Additional complexity of message broker, which must be highly available
Debugging an event request is not that easy.
I am trying to develop a multi-agent application in F#. Here's what I'm trying to do:
Create few agents (say, 100).
Have these agents asynchronously communicate with each other, using events.
HOWEVER, a requirement is that each of these agents should have no knowledge of each other.
The essence of the above point is that, for an agent (say A1, the publisher in this case) to send an event to another agent (say A2, a subscriber), agent A2 needs to instantiate A1 to receive notifications from it. Both the Events framework in F# and the Reactive Extensions (Rx) follow this instantiation methodology.
What I'm looking for is an F#-based event-broker framework/middleware that allows an agent to subscribe to an event without instantiating the agent which publishes that event, i.e. the agents do not have knowledge of other agents in the system. They just know the list of events that exist, and subscribe to (one or more) events from that list. On receiving the subscribed event(s), the agent invokes one of its methods.
One solution I can think of for this is the Event Aggregator pattern (e.g. in Prism), but I haven't seen any F# implementation of this pattern.
Any reference/pointers would be appreciated. Thanks in advance.
You may be interested in reactive programming with the Reactive Extensions framework. It lets you create and manipulate event streams.
A basic introductory scenario would be to create a Subject to which your agents can subscribe or send events, thus acting as either producer or consumer.
I'm not sure if there are any concrete F# implementations of different eventing patterns, but the framework itself is incredibly powerful and definitely worth investigating in my opinion.
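A minimal sketch of that scenario, assuming the System.Reactive package for Subject<'T> and made-up event cases: a single Subject acts as the broker, agents publish onto it and subscribe to the cases they care about without ever referencing each other.

open System
open System.Reactive.Subjects   // NuGet package: System.Reactive

// The shared vocabulary of events; agents know this list, not each other.
type AgentEvent =
    | PriceChanged of symbol: string * price: decimal
    | OrderFilled of orderId: int

let bus = new Subject<AgentEvent>()

// A subscriber agent picks out only the events it cares about.
let subscription =
    (bus :> IObservable<AgentEvent>)
    |> Observable.choose (function PriceChanged (s, p) -> Some (s, p) | _ -> None)
    |> Observable.subscribe (fun (symbol, price) -> printfn "agent saw %s at %M" symbol price)

// A publisher agent raises events without knowing who is listening.
bus.OnNext (PriceChanged ("MSFT", 301.5M))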
I've been working for a while with Silverlight and MVVM (in its simplest form, that is to say hand-made), but I barely understand what an event aggregator is (and how to implement one).
What is hiding behind this name? Can someone explain it quickly (or post a link)?
An event aggregator is generally a broker object that you can take a reference to and tell what type of events you want to receive, without having to take a reference to, or even be aware of, the objects generating the events.
Prism's EventAggregator is the most common one. See: http://msdn.microsoft.com/en-us/library/ff649187.aspx
It describes itself as:
The EventAggregator service is primarily a container for events that allow decoupling of publishers and subscribers so they can evolve independently. This decoupling is useful in modularized applications because new modules can be added that respond to events defined by the shell or, more likely, other modules.
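To make that concrete, here is a bare-bones F# sketch of the idea (not Prism's actual API): a broker keyed by event type, so publishers and subscribers only ever reference the broker.

open System
open System.Collections.Generic

// Minimal event-aggregator sketch: handlers are registered per event type and
// the broker routes published events to them.
type EventAggregator() =
    let handlers = Dictionary<Type, ResizeArray<(obj -> unit)>>()

    // Register a handler for one event type; the subscriber never sees the publisher.
    member _.Subscribe<'T>(handler: 'T -> unit) =
        let key = typeof<'T>
        if not (handlers.ContainsKey(key)) then handlers.[key] <- ResizeArray()
        handlers.[key].Add(fun payload -> handler (unbox<'T> payload))

    // Deliver an event to every handler registered for its type.
    member _.Publish<'T>(event: 'T) =
        match handlers.TryGetValue(typeof<'T>) with
        | true, registered -> for handle in registered do handle (box event)
        | _ -> ()

// usage: a view model subscribes, some other module publishes
type ItemSelected = { Symbol: string }

let aggregator = EventAggregator()
aggregator.Subscribe<ItemSelected>(fun e -> printfn "selected %s" e.Symbol)
aggregator.Publish({ Symbol = "MSFT" })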