The pattern of pub-sub events is that the publisher should not know or care if there are any subscribers out there, nor should it care what the subscribers do if they are there. (from Brian Noyes' blog)
What are the best practices for using EventAggregator in Prism? Currently I have a few modules which are loosely coupled and work independently. These modules use EventAggregator to communicate with other modules. As the application grows, I'm confused about how to document my code. There could be many modules publishing events and many others subscribing to them, and, as Brian puts it, neither of them knows exactly what the other does. When creating a new module, how do I make sure it is subscribed to some XYZ event without breaking the loosely coupled structure?
How do I represent a module using EventAggregator visually (some kind of diagrams)?
You have a lot of questions in your post that can be answered "it depends on your application," but I'll try to answer some of them.
One thing that I see most often with EventAggregator is abuse. Many people use EventAggregator in a way that makes both the publisher and subscriber dependent on each other. This brings me to my first bit of advice:
Never assume there are any subscribers to an event.
EventAggregator is useful for publishing events other views might be interested in. For example, in our application we allow a user to change someone's name. This name might be displayed on other views already open in the application (we have a tabbed UI). Our use case was that we wanted those UIs to update when the name was changed, so we published a "UserDataChanged" event so that open views could subscribe and refresh their data appropriately; if no open views were interested in this data, no subscribers were notified.
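To make that concrete, here's a minimal sketch of what that looks like with Prism's IEventAggregator (the event, payload, and view model names are hypothetical, and this assumes Prism 6+ where events derive from PubSubEvent<T>):

    using System;
    using Prism.Events;

    // Hypothetical payload for the "name changed" scenario above.
    public class UserData
    {
        public Guid UserId { get; set; }
        public string DisplayName { get; set; }
    }

    // Prism events are empty classes deriving from PubSubEvent<TPayload>.
    public class UserDataChangedEvent : PubSubEvent<UserData> { }

    public class EditUserViewModel
    {
        private readonly IEventAggregator _eventAggregator;

        public EditUserViewModel(IEventAggregator eventAggregator)
        {
            _eventAggregator = eventAggregator;
        }

        private void OnNameSaved(Guid id, string newName)
        {
            // Publish and move on; the publisher never assumes
            // anyone is listening.
            _eventAggregator.GetEvent<UserDataChangedEvent>()
                .Publish(new UserData { UserId = id, DisplayName = newName });
        }
    }

    public class UserSummaryViewModel
    {
        public UserSummaryViewModel(IEventAggregator eventAggregator)
        {
            // Subscribe on the UI thread so the view can refresh safely.
            eventAggregator.GetEvent<UserDataChangedEvent>()
                .Subscribe(OnUserDataChanged, ThreadOption.UIThread);
        }

        private void OnUserDataChanged(UserData data)
        {
            // Refresh the displayed name if this view shows that user.
        }
    }

If no subscriber exists, the Publish call simply does nothing, which is exactly the behavior you want here.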
Favor .NET Events over EventAggregator events where appropriate
Another mistake I see frequently is a business process implemented using EventAggregator, where data is sent to a central party and that party then replies, all using EventAggregator. This leads to side-effects you'd likely want to avoid.
A variation on that I see a lot is communication from a parent view to a sub-view, or vice versa. Something like "TreeItemChecked" or "ListViewItemSelected". This is a situation where traditional .NET events should be used, but the author decided that if they have a hammer (EventAggregator), everything (events) looks like a nail.
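For the parent/sub-view case, a plain .NET event keeps the relationship explicit and cheap; here is a minimal sketch with hypothetical names:

    using System;

    // The child raises an ordinary .NET event.
    public class TreeItemViewModel
    {
        public event EventHandler Checked;

        public void SetChecked()
        {
            // Only whoever wired up a handler hears about this.
            Checked?.Invoke(this, EventArgs.Empty);
        }
    }

    // The parent already holds a reference to its child, so there
    // is nothing to decouple; EventAggregator buys you nothing here.
    public class TreeViewModel
    {
        public TreeViewModel(TreeItemViewModel item)
        {
            item.Checked += (sender, e) => { /* update parent state */ };
        }
    }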
You asked about modeling the EventAggregator, and I would say this: the EventAggregator is only special in that it allows for decoupling and doesn't hold strong references to subscribers (avoiding memory leaks, etc.). Other than that, it's really just a very slight variation of the Observer pattern. However you are modeling Observers is how you would model the EventAggregator in whatever type of diagram you are trying to create.
As to your question about making sure some module or another is subscribed to an event: you don't. If you need to ensure there are subscribers, you should not use the EventAggregator. In these cases I would recommend a service running in your application that modules can grab from your container and use, or something similar.
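As a sketch of that alternative (the service and types here are hypothetical): register a shared service with the container and have modules depend on its interface, so a missing implementation fails loudly at resolve time instead of silently at publish time.

    // A shared contract, registered with the container at startup.
    public interface IOrderService
    {
        void SubmitOrder(Order order);
    }

    public class Order { /* ... */ }

    // A module takes the service as a constructor dependency and
    // calls it directly. Unlike an EventAggregator event with no
    // subscribers, a missing registration throws immediately.
    public class CheckoutViewModel
    {
        private readonly IOrderService _orderService;

        public CheckoutViewModel(IOrderService orderService)
        {
            _orderService = orderService;
        }

        public void Checkout(Order order) => _orderService.SubmitOrder(order);
    }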
The thing to keep in mind about your modules is that you should be able to completely remove one and the rest of your application functions normally. If this is not the case, you either have a module dependency (best to be avoided, but understandable), or dependent modules should be combined into one.
I am thinking about the best way to structure microservices. In the past, the team I was working with used Axon Framework and PostgreSQL; each microservice had its own event store in the PostgreSQL database, and we built the communication between them using REST.
I am thinking that it would be smarter to have all microservices talk to the same event store, as we would be able to share events faster instead of rewriting the communication lines using REST.
The questions that follow from the backstory are:
What is the best practice for having an event store?
Would each service have its own, or would they share the same event store?
Where would I find information to inspire and gather more answers? Searching the internet for best practices on how to structure an event store seems like searching for a needle in a haystack.
Bear in mind, the question is in no way aimed at Axon Framework, but at the general idea of building scalable, good code, where the applications each work with their own event store for write models and read models.
I'd add a slightly different notion to Tore's response, although the main line is identical to what I'm sharing here. I don't aim to overrule Tore, just to provide additional insight.
If the (micro)services belong to the same Bounded Context, then they're allowed to "learn about each other's language."
This language thus includes the events these applications publish and store.
Whenever there's communication required between different Bounded Contexts, you'd separate the stores, as one context shouldn't be bothered by the specifics of another context.
Hence it is beneficial to work out which services belong to which Bounded Context, since that dictates the required separation.
Axon aims to support this by allowing multiple contexts with the Axon Server, as you can read here.
It simply allows the registration of applications to specific contexts, within which it will completely separate all message streams (so commands, events, and queries) and the Event Store.
You can also set this up from scratch yourself, of course. Tore's recommendation of Kafka is what's used quite broadly for Event Streaming needs between applications. Honestly, any broadcast type of infrastructure suits event distribution, as that's how events are typically propagated.
You want to have one event store per service, just as you would want one relational database per service in a non-event-sourced system.
Sharing a database/event store between services creates coupling, and we have all learned the hard way that this is an anti-pattern.
If you want to use an event log to share events across services, then Kafka is a popular choice.
It is important to remember that you only do event sourcing within a service's bounded context.
I want my services to communicate using events. I'm going to publish (internally) all my domain events and allow any other service to subscribe to them. But such an approach couples those services together: I am no longer allowed to change my events. This is even worse than local coupling, because I don't even know my consumers any more. This limits the ability to develop and refactor to an unacceptable degree.
I am thinking about versioning my events, which solves most of the issues. But how do I subscribe to versioned events? Introducing a common interface that groups all of an event's versions, and then downcasting the event within the listener to an accepted one, does not sound like a viable solution. I am also considering publishing all supported versions of the event to the bus; by definition, each subscriber would handle just one version. I don't want my domain to be involved in these matters, so I would need to build a kind of infrastructure listener that translates caught events into the other versions. I can't find anything about this topic on the internet, which automatically makes me wonder whether I am thoroughly wrong :)
UPDATE: After a lot of thought, I no longer want to publish my domain events. I think it is not desirable to expose a service's internal mechanics to the outside world. It could also violate some domain data access restrictions. I think the way to go is to map my domain events to some more coarse-grained integration events. But I will probably still need a way to version them :)
UPDATE 2: After some consultations, an idea came up. Assuming we stick to the concept of integration events, such an event can be considered as just a type and an event id, so the outer listener focuses only on the event type. When an event occurs, the listener is provided with the event id, which enables it to fetch the real event from the stream/bus/whatever in a given version: $eventsStore->get($eventGuid, $eventType, 'v27'), for example (PHP syntax).
I'm going to publish (internally) all my domain events and allow any other service to subscribe to them.
This is a common pattern in Event-Driven Architecture. I assume that you publish the events to an event broker, e.g. Apache Kafka, and that consumers subscribe to topics on the event broker.
I am no longer allowed to change my events. This is even worse than local coupling because I don't even know my consumers any more. This limits the ability to develop and refactor to an unacceptable degree. I am thinking about versioning my events, which solves most of the issues.
Nah, published contracts should be versioned, and no backward-incompatible changes can be added to them. If you need a change that is not backward compatible, you have to introduce a new version of the published contract, and keep the old one as long as there are consumers. This is no different from REST-based interfaces: you have to fulfill your contracts.
With REST you might do this by serving both /v1/orders and /v2/orders at the same time. With an Event-Driven Architecture you use two topics, e.g. orders-v1 and orders-v2, each containing data that follows a schema, e.g. Avro.
With an Event-Driven Architecture, where the services are decoupled (with a broker in between), you can actually phase out the old producer if you add a small transformer that, for example, consumes orders-v2, transforms the events to the old format, and publishes them on orders-v1, so that both v1 and v2 are still published.
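Here is a sketch of such a transformer, using the Confluent.Kafka client for .NET (the topic names follow the example above; Downgrade is a hypothetical mapping from the v2 schema to the v1 schema):

    using Confluent.Kafka;

    // Consumes orders-v2 and republishes each event in the old
    // format on orders-v1, so v1 consumers keep working after the
    // original producer stops publishing v1.
    var consumerConfig = new ConsumerConfig
    {
        BootstrapServers = "localhost:9092",
        GroupId = "orders-v1-transformer",
        AutoOffsetReset = AutoOffsetReset.Earliest
    };
    var producerConfig = new ProducerConfig { BootstrapServers = "localhost:9092" };

    using var consumer = new ConsumerBuilder<string, string>(consumerConfig).Build();
    using var producer = new ProducerBuilder<string, string>(producerConfig).Build();

    consumer.Subscribe("orders-v2");
    while (true)
    {
        var result = consumer.Consume();
        var v1Message = new Message<string, string>
        {
            Key = result.Message.Key,
            Value = Downgrade(result.Message.Value) // hypothetical v2 -> v1 mapping
        };
        producer.Produce("orders-v1", v1Message);
    }

    // In practice this would translate the v2 event fields into the
    // v1 layout (e.g. via your Avro schemas).
    static string Downgrade(string v2Event) => v2Event;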
Building Event-Driven Microservices is a good book about this.
In the current plan, incoming commands are handled by Function Apps, which emit events to an Event Hub, and the views are then materialized from those events.
Someone is arguing that instead of storing events in something like Table Storage, and materializing views based on events and snapshots, we should:
Just stream events to a log in Azure Monitor to get auditing.
Make changes to a domain object immediately in response to a command, and use the change feed as our source of events for materialized views.
He doesn't see the advantage of even having a materialized view. Why not just use a query? The argument is that we don't expect a lot of traffic.
He wants to fulfill the whole audit-log requirement by saving events to the Azure Monitor log, which is just an application log. Instead, commands would directly modify the representation of an entity in Cosmos DB, and we'd use the change feed from Cosmos DB as our domain object events, or we would create new events off of it via subscribers to that stream.
Is this actually an advantageous approach? Can y'all think of any reasons why we wouldn't want to do that? It seems like we'd be losing something here.
He's saying we'd no longer need to be concerned with eventual consistency, as we'd have immediate consistency.
Every reference implementation I've evaluated does NOT do it the way he's suggesting. I'm not deeply versed in the advantages and disadvantages of the event sourcing / CQRS paradigm, so I'm at a loss at the moment. Currently researching furiously.
This is a conceptual issue, so there isn't much of a code example. However, here are some references that seem to back up the approach I'm taking:
https://medium.com/@thomasweiss_io/planet-scale-event-sourcing-with-azure-cosmos-db-48a557757c8d
https://sajeetharan.com/2019/02/03/event-sourcing-with-azure-eventhub-and-cosmosdb/
https://learn.microsoft.com/en-us/azure/architecture/patterns/event-sourcing
If your goal is only to have the audit log, state-based persistence could be a good choice. Event sourcing adds some complexity on the implementation side, and unless you can identify more advantages of using it, you might not convince your team to bring this complexity into the system. There are numerous questions and answers on SO, as well as blog posts, about the pros and cons of event sourcing, so I won't get into that discussion here.
I can warn you, though, that the second article in your list is very weak and would most probably lead you into many difficulties. The role of Event Hub there is completely unclear, and it doesn't explain anything about projections and read models (what you call "materialised views"). Only a very limited number of use cases can live with just getting one entity by id, without being able to execute a query across multiple entities. That probably also answers your concern about having read models at all: you will need them very soon, the first time you have to figure out how to get a list of entities based on some condition (a query).
Using Cosmos DB as the event store is completely feasible, as described in the first article, if you can manage the costs involved. Just remember to set the change feed TTL to -1; otherwise you won't be able to replay your projections when you need to.
To summarise:
Keeping an audit log can be done without event sourcing, but you need to ensure that events are published reliably, preferably in the same transaction as the entity state update. This is often hard or impossible, but you might accept the risk if your audit requirement is not strict. You can also base your audit log on the Cosmos DB change feed, just collecting document changes and logging them somewhere.
Event sourcing is a powerful technique but it has both pros and cons. The most common prejudice against using event sourcing is its implementation complexity. It might not be a big issue if you have a team that is somewhat experienced in building event-sourced systems. If you don't have such a team, you might want to build a small-scale spike to get some experience.
If you don't get full buy-in from the team to use event sourcing, you will later get all the blame if anything goes wrong. And it will go wrong at some point, especially with little experience in this area.
Spend some time reading books and trying out things yourself, before going wild in production.
Don't use Event Hub for anything it was not designed for. Event Hub is a powerful event ingestion transport with a limited TTL, and it should be used for that purpose.
Don't use Table Storage as the event store unless you only read entities by id. I used it in production for such a scenario and it worked (to some extent), but you can't project read models from there.
A simple rule of thumb is to not use products for tasks they weren't designed for.
Azure Monitor was not designed to store application domain data. It is designed to store telemetry data from your applications and services, and it provides features such as alerts and other types of integration with DevOps tools for managing the operation and health of your apps.
There is a simple reason why you were able to find articles on event sourcing using Cosmos DB, and why our own docs talk about it: because it was designed to be used this way. It is simple to set up Cosmos DB as an append-only event store for your applications and to use the change feed to fire off messages to other apps or services or, in your case, to maintain a materialized-view state of domain objects within your app.
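For reference, here is a sketch of that change feed pattern with the Microsoft.Azure.Cosmos SDK (the database, container, and OrderEvent/projection types are hypothetical):

    using System;
    using System.Threading.Tasks;
    using Microsoft.Azure.Cosmos;

    // Hypothetical event shape appended to the "events" container.
    public record OrderEvent(string id, string OrderId, string Type);

    public static class ViewMaterializer
    {
        public static async Task RunAsync(string connectionString)
        {
            var client = new CosmosClient(connectionString);
            Container events = client.GetContainer("shop", "events");
            Container leases = client.GetContainer("shop", "leases");

            // The change feed processor calls the delegate with each
            // batch of new documents; we fold them into read models.
            ChangeFeedProcessor processor = events
                .GetChangeFeedProcessorBuilder<OrderEvent>(
                    processorName: "materialize-order-views",
                    onChangesDelegate: async (changes, cancellationToken) =>
                    {
                        foreach (OrderEvent e in changes)
                        {
                            await UpdateOrderViewAsync(e); // hypothetical projection
                        }
                    })
                .WithInstanceName("worker-1")
                .WithLeaseContainer(leases)
                .Build();

            await processor.StartAsync();
        }

        private static Task UpdateOrderViewAsync(OrderEvent e)
        {
            // Upsert the materialized view document for e.OrderId here.
            return Task.CompletedTask;
        }
    }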
I've been working for a while with Silverlight and MVVM (in its simplest form, that is to say, hand-made), but I barely understand what an event aggregator is (and how to make an implementation of one).
What is hiding behind this name?
Can someone explain it quickly (or post a link)?
An event aggregator is generally a broker object that you can take a reference to and tell what types of events you want to receive, without having to take a reference to, or even be aware of, the objects generating the events.
Prism's EventAggregator is the most common one. See: http://msdn.microsoft.com/en-us/library/ff649187.aspx
It describes itself as:
The EventAggregator service is primarily a container for events that allow decoupling of publishers and subscribers so they can evolve independently. This decoupling is useful in modularized applications because new modules can be added that respond to events defined by the shell or, more likely, other modules.
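If you want to see the mechanics, here is a deliberately minimal hand-rolled version in C#. Real implementations (Prism's, for example) add weak references and thread marshalling on top of this idea:

    using System;
    using System.Collections.Generic;

    // Subscribers register a handler per message type; publishers
    // post a message without knowing who, if anyone, is listening.
    public class EventAggregator
    {
        private readonly Dictionary<Type, List<Delegate>> _handlers = new();

        public void Subscribe<TMessage>(Action<TMessage> handler)
        {
            if (!_handlers.TryGetValue(typeof(TMessage), out var list))
                _handlers[typeof(TMessage)] = list = new List<Delegate>();
            list.Add(handler);
        }

        public void Publish<TMessage>(TMessage message)
        {
            if (_handlers.TryGetValue(typeof(TMessage), out var list))
                foreach (Action<TMessage> handler in list.ToArray())
                    handler(message);
        }
    }

Usage is then aggregator.Subscribe<SomethingHappened>(msg => ...) in one object and aggregator.Publish(new SomethingHappened()) in another; neither holds a reference to the other, only to the aggregator.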
I have an application that lends itself to an event/listener model. Several different kinds of data get published (event), then many different things may or may not need to act on that data (listeners). There's no specific order the listeners need to happen in and each listener would determine whether or not it needs to act on the event.
What tools are there for Rails apps to accomplish this task? I'm hoping not to have to do this myself (although I can; it's not THAT big a deal).
Edit: Observer pattern might be a better choice for this
Check out EventMachine. It is a very popular event-processing library for Ruby. It looks quite good, and a lot of other libraries seem to take advantage of it (Cramp).
Here is a good introduction: http://rubylearning.com/blog/2010/10/01/an-introduction-to-eventmachine-and-how-to-avoid-callback-spaghetti/
You'll probably want to hook into ActiveRecord's Observer class.
http://api.rubyonrails.org/v3.2.13/classes/ActiveRecord/Observer.html
With it, your models can execute custom logic for several lifecycle events:
http://api.rubyonrails.org/classes/ActiveRecord/Callbacks.html
If I understand your intent correctly, all you'll need to do is call the methods that represent your listeners' actions on an event from those callbacks.
You may want to use ActiveSupport::Notifications.instrument.
It is a general-purpose bridge for decoupling event sending from event reacting. It's geared towards executing all listeners during a single web request, unlike EventMachine, which is geared towards having lots of concurrent things happening.
I have created a Ruby gem that responds exactly to this use case: event_dispatcher.
This gem provides a simple observer implementation, allowing you to subscribe to and listen for events in your application in a simple and effective way.