I'm using ngrx/store in my app, and I've run into a question:
I need the same state in several components of my module.
What are the pros and cons of the following strategies?
1) Subscribe to the same state in every component that needs it.
2) Subscribe to the state in your module's shared service and inject the service into your components.
You should always pull the state from the store; otherwise it loses its power as the single source of truth in your application. The selectors are memoized, so there's no performance penalty for subscribing to the store in multiple places.
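For illustration, here's a minimal Angular/ngrx sketch (the state shape, feature key, and component are hypothetical): each component selects through the same memoized selector, and the projection only re-runs when its input slice changes.

```ts
import { Component } from "@angular/core";
import { Store, createFeatureSelector, createSelector } from "@ngrx/store";
import { Observable } from "rxjs";

// Hypothetical state shape and feature key for illustration.
interface OrdersState {
  items: string[];
}

const selectOrdersFeature = createFeatureSelector<OrdersState>("orders");

// Memoized: the projection runs only when the selected slice changes,
// however many components subscribe through it.
const selectItems = createSelector(selectOrdersFeature, (state) => state.items);

@Component({
  selector: "app-order-list",
  template: `<ul><li *ngFor="let item of items$ | async">{{ item }}</li></ul>`,
})
export class OrderListComponent {
  items$: Observable<string[]>;

  constructor(store: Store) {
    // Each component selects from the store directly; no shared service needed.
    this.items$ = store.select(selectItems);
  }
}
```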
I've been aware of event sourcing, CQRS, DDD and microservices for a little while, and I'm now at the point where I want to try to start implementing things and giving something a go.
I've been looking into the technical side of CQRS and I understand the DDD concepts in there: how the write side handles commands from the UI and publishes events, and how the read side handles those events and creates projections from them.
The difficulty I'm having is the communication and handling of events from service to service (both from a write service to a read service and between microservices).
So I want to focus on Event Store (this one: https://eventstore.com/, to be less ambiguous). This is what I want to use, as I understand it is a perfect fit for event sourcing, and the simple nature of storing the events means I can use it as a message bus as well.
So my issue falls into two questions:
Between the write side and the read side, in order for the read side to receive/fetch the events created by the write side, am I right in thinking that something like a catch-up subscription can be used to subscribe to a stream and receive any events written to it, or do I use something like polling to fetch events from a given point?
Between microservices, I am having an even harder time... CQRS tutorials/talks always seem to use the example of an isolated service which receives commands from the UI/API. This is fine. I understand the write side will have an API attached to it so the user can interact with it to perform commands, e.g. create a customer. However, say I have two microservices, e.g. an order microservice and a shipping microservice: how does the shipping microservice get the events published from the order microservice? Specifically, how do those events translate into commands for the shipping service?
So let's take a simple example:
- A command is issued from the order API to place an order.
- An OrderPlacedEvent is published to the event store.
How does the shipping service listen and react to this if it then needs to DispatchOrder and, in turn, create an OrderDispatchedEvent?
Does the write side of the shipping microservice then need to poll, or also have a catch-up subscription to the order stream? If so, how does an event get translated into a command using a DDD approach?
something like a catch-up subscription can be used to subscribe to a stream and receive any events written to it
Yes, using catch-up subscriptions is the right way of doing it. You need to keep the stream position of your subscription persisted somewhere as well.
Here you can find some sample code that works. I am not posting the whole snippet since it is too long.
The projection service startup flow is:
Load the checkpoint (first time ever it would be the stream start)
Subscribe to the stream from that checkpoint
The runtime flow will then be:
The subscription will then call the function you provide whenever it receives an event. There's some plumbing to do there; for example, if you subscribe to $all, you need to filter out system events (this will be easier in the next version of Event Store)
Project the event
Store the new checkpoint
If you make your projections idempotent, you can store the checkpoint from time to time and save some IO.
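As a concrete illustration, here's a minimal TypeScript sketch of that flow using the Node client (@eventstore/db-client); the CheckpointStore interface and the stream name are assumptions, so swap in your own persistence:

```ts
import { EventStoreDBClient, START } from "@eventstore/db-client";

// Hypothetical checkpoint persistence; back it with a table, a file, etc.
interface CheckpointStore {
  load(): Promise<bigint | null>;
  save(revision: bigint): Promise<void>;
}

async function runProjection(
  client: EventStoreDBClient,
  checkpoints: CheckpointStore,
  project: (type: string, data: unknown) => Promise<void>,
) {
  // Startup: load the checkpoint (first run ever: the stream start).
  const checkpoint = await checkpoints.load();

  // Subscribe to the stream from that checkpoint.
  const subscription = client.subscribeToStream("orders", {
    fromRevision: checkpoint ?? START,
  });

  // Runtime: project each event, then store the new checkpoint.
  for await (const resolved of subscription) {
    if (!resolved.event) continue;
    if (resolved.event.type.startsWith("$")) continue; // skip system events
    await project(resolved.event.type, resolved.event.data);
    await checkpoints.save(resolved.event.revision);
  }
}
```

With idempotent projections, the `checkpoints.save` call could be made only every N events to save IO, as noted above.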
how does the shipping microservice get the events published from the order microservice
When you build a brand-new system and you have a small team working on all the components, you can take a shortcut and subscribe to domain events from another service, as you'd do with projections. Within the integration context (between the boxes), ordering should not be important, so you can use persistent subscriptions and not have to think about checkpoints; Event Store will track them for you.
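A sketch of that shortcut with the same Node client (stream and group names are hypothetical, and method names follow the current @eventstore/db-client, so check your client version); note there is no checkpoint code, since the server tracks the subscription's position:

```ts
import {
  EventStoreDBClient,
  persistentSubscriptionToStreamSettingsFromDefaults,
} from "@eventstore/db-client";

// One-time administrative setup: create the subscription group.
async function createGroup(client: EventStoreDBClient) {
  await client.createPersistentSubscriptionToStream(
    "orders",
    "shipping-service",
    persistentSubscriptionToStreamSettingsFromDefaults(),
  );
}

// Runtime: consume events; Event Store keeps the checkpoint server-side.
async function consume(client: EventStoreDBClient) {
  const subscription = client.subscribeToPersistentSubscriptionToStream(
    "orders",
    "shipping-service",
  );

  for await (const resolved of subscription) {
    if (resolved.event) {
      // React to the upstream domain event here.
      console.log(resolved.event.type, resolved.event.data);
    }
    await subscription.ack(resolved); // acknowledge so it isn't redelivered
  }
}
```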
Be aware that this introduces tight coupling to the domain event schema of the originating service. Your contexts will have a Partnership relationship, or the downstream service will be a Conformist.
When you move forward with your system, you might decide to decouple those contexts properly. So you introduce a stable event API for the service that publishes events for others to consume. The same subscription that you used for integration can instead take care of translating domain (internal) events to integration (external) events. The consuming context then uses the stable API, and the upstream service is free to iterate on its domain model, as long as it keeps the conversion up to date.
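A sketch of that translation step (the event shapes and stream names are made up): the subscription handler maps the internal event to the stable contract and republishes it.

```ts
import { EventStoreDBClient, jsonEvent } from "@eventstore/db-client";

// Hypothetical schemas: the internal domain event carries more detail
// than the stable integration contract exposes.
interface OrderPlaced {
  orderId: string;
  customerId: string;
  lines: Array<{ sku: string; qty: number }>;
  placedAt: string;
}

interface OrderPlacedV1 {
  orderId: string;
  placedAt: string;
}

// Pure translation from the internal schema to the published contract.
function toIntegration(domain: OrderPlaced): OrderPlacedV1 {
  return { orderId: domain.orderId, placedAt: domain.placedAt };
}

// Called by the same subscription that previously drove direct integration.
async function publish(client: EventStoreDBClient, domain: OrderPlaced) {
  await client.appendToStream(
    "orders-integration", // the public stream other contexts consume
    jsonEvent({ type: "order-placed-v1", data: { ...toIntegration(domain) } }),
  );
}
```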
It won't be necessary to use Event Store for the downstream context; they could just as well use a message broker. Integration events usually don't need to be persisted due to their transient nature.
We are running a webinar series about Event Sourcing at Event Store; check our web site for on-demand access to previous webinars, and you might find it interesting to join future ones.
The difficulty I'm having is the communication and handling of events from service to service (both from a write service to a read service and between microservices).
The difficulty is not your fault - the DDD literature is really weak when it comes to discussing the plumbing.
Greg Young discusses some of the issues of subscriptions in the latter part of his Polyglot Data talk.
The Eventide Project has documentation that does a decent job of explaining the principles behind how the plumbing fits together.
Between microservices, I am having an even harder time...
The basic idea: your message store is fundamentally a database; when the host of your microservice wakes up, it queries the message store for messages after some checkpoint, and then feeds them to your domain logic (updating its own local copy of the checkpoint as needed).
So the host pulls a document with events in it from the store, and transforms that document into a stream of handle(Event) commands that ultimately get passed to your domain component.
Put another way, you build a host that polls the database for information, parses the response, passes the parsed data to the domain model, and writes its own checkpoints.
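In sketch form (all the types here are stand-ins for your own message store and domain component), the host is just a loop:

```ts
// A minimal polling-host sketch; MessageStore and Domain are hypothetical
// interfaces standing in for your own persistence and domain logic.
interface StoredMessage {
  position: number;
  type: string;
  payload: unknown;
}

interface MessageStore {
  readAfter(checkpoint: number, limit: number): Promise<StoredMessage[]>;
}

interface Domain {
  handle(event: StoredMessage): Promise<void>;
}

async function pollLoop(store: MessageStore, domain: Domain) {
  let checkpoint = 0; // load from durable storage in a real host

  while (true) {
    // 1. Query the message store for messages after the checkpoint.
    const batch = await store.readAfter(checkpoint, 100);

    // 2. Feed each message to the domain logic.
    for (const message of batch) {
      await domain.handle(message);
      checkpoint = message.position; // 3. advance the local checkpoint
    }

    // Idle briefly when the store is caught up.
    if (batch.length === 0) await new Promise((r) => setTimeout(r, 1000));
  }
}
```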
I'm working on a "microservice-like" architecture. Each microservice can fire events to RabbitMQ. The events are identified by an event code. At the moment, the code of the triggered event is a hard-coded const string declared inside the microservice that fires the event.
My problem is that each microservice that wants to subscribe to this event must duplicate this event code string. This is error-prone, especially when an event code is renamed, because all microservices that subscribe to it need to be changed accordingly... which is very bad.
I see the possible alternatives:
Declare the event code only in the microservice that fires the event, and let the consuming microservices directly access the code declared there. In this case, the event is declared once, but it creates a source-code dependency between microservices... which is bad.
Create a source file (outside all microservices) that contains all the event codes of the whole application, shared by all microservices. In this case, each event is declared once, but it creates a global dependency for all microservices, which is against the single responsibility principle... which is bad.
How do you tackle this problem?
At the moment, the code of the triggered event is a hard-coded const string declared inside the microservice that fires the event. My problem is that each microservice that wants to subscribe to this event must duplicate this event code string. This is error-prone, especially when an event code is renamed, because all microservices that subscribe to it need to be changed accordingly... which is very bad.
Events are messages. All of the constraints that we use to manage the evolution of messages applies to events as well.
In a microservices architecture, we expect to be able to deploy instances of the services independently of one another. Requiring that all of the services shut down together to coordinate a change in message schema kind of misses the point. That in turn implies that we need to design reasonable behaviors for the cases where the producer and consumer don't have matching understandings of the message.
In practice, this means something like:
We never introduce a new required field, only optional fields (with documented default values).
Unrecognized fields are ignored (but forwarded)
Consumers of optional fields know which default value to use when an expected field is missing.
When these constraints cannot be satisfied, then you are introducing a new message.
If you have the message contracts in place, then you aren't restricting yourself to microservice implementations that share the same runtime platform (because two different implementations of the same contract are equivalent).
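As a sketch of those constraints in practice (the message shape is invented), a tolerant consumer fills in documented defaults and carries unknown fields along rather than rejecting them:

```ts
// A "tolerant reader" for a hypothetical OrderPlaced message. Optional
// fields get documented defaults; unrecognized fields are ignored by
// this consumer but preserved so they can be forwarded downstream.
interface OrderPlacedMessage {
  orderId: string;          // required since v1 of the contract
  currency?: string;        // optional, added later; documented default "USD"
  [extra: string]: unknown; // unknown fields: ignored but kept
}

function readOrderPlaced(raw: unknown): OrderPlacedMessage {
  const message = raw as OrderPlacedMessage;
  if (typeof message.orderId !== "string") {
    throw new Error("orderId is required in every version of this contract");
  }
  return {
    ...message,                          // forward unknown fields untouched
    currency: message.currency ?? "USD", // documented default for the optional field
  };
}
```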
Recommended reading:
ZeroMQ RFC 42/C4, specifically section 2.6 which describes the evolution of public contracts
Versioning in an Event Sourced System, specifically "Basic Type Based Versioning"
We are working on a system that is supposed to 'run' jobs on distributed systems.
When jobs are accepted they need to go through a pipeline before they can be executed on the end system.
We've decided to go with a microservices architecture, but there is one thing that bothers me, and I'm not sure what the best practice would be.
When a job is accepted, it will first be persisted into a database; then each microservice in the pipeline will do some additional work to prepare the job for execution.
I want the persisted data to be updated at each such station in the pipeline to reflect the actual state of the job, or its status in the pipeline.
In addition, while a job is being executed on the end system - its status should also get updated.
What would be the best practice for updating the database (the job's status) at each station:
Each station (microservice) in the pipeline accesses the database directly and updates the job's status
There is another microservice that exposes the data (REST) and serves as a DAL; each microservice in the pipeline updates the job's status through this service
Other?....
Help/advice would be highly appreciated.
Thanks a lot!
To add to what was said by @Anunay and @Mohamed Abdul Jawad
I'd consider writing the state from the units of work in your pipeline to a view (a table or cache, insert-only). You can use messaging, or simply insert a row into that view, and have the readers of the state pick up the correct state based on some logic (a date, a state, or a composite key). As this view is not really owned by any domain service, it can be made available to any readers (read-only) to consume...
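A sketch of what that insert-only view could look like with a relational store (the table and column names are made up; using the node-postgres client here):

```ts
import { Client } from "pg";

// Each station appends a row to the view; readers take the latest row.
async function recordStatus(db: Client, jobId: string, status: string) {
  await db.query(
    "INSERT INTO job_status_view (job_id, status, recorded_at) VALUES ($1, $2, now())",
    [jobId, status],
  );
}

async function currentStatus(db: Client, jobId: string): Promise<string | undefined> {
  const result = await db.query(
    "SELECT status FROM job_status_view WHERE job_id = $1 ORDER BY recorded_at DESC LIMIT 1",
    [jobId],
  );
  return result.rows[0]?.status;
}
```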
Consider also the Saga pattern; a sketch follows the links below.
A Saga is a sequence of local transactions where each transaction updates data within a single service. The first transaction is initiated by an external request corresponding to the system operation, and then each subsequent step is triggered by the completion of the previous one.
http://microservices.io/patterns/data/saga.html
https://dzone.com/articles/saga-pattern-how-to-implement-business-transaction
https://medium.com/@tomasz_96685/saga-pattern-and-microservices-architecture-d4b46071afcf
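To make the idea concrete, a minimal orchestration-style saga sketch (the types and step names are invented): each step is a local transaction owned by one service, and a failure triggers compensation of the completed steps in reverse order.

```ts
type StepResult = "ok" | "failed";

// One pipeline station: a local transaction plus its compensating action.
interface SagaStep {
  name: string;
  execute(jobId: string): Promise<StepResult>;
  compensate(jobId: string): Promise<void>;
}

async function runSaga(jobId: string, steps: SagaStep[]) {
  const completed: SagaStep[] = [];
  for (const step of steps) {
    const result = await step.execute(jobId);
    if (result === "failed") {
      // Undo the already-completed steps in reverse order.
      for (const prev of completed.reverse()) await prev.compensate(jobId);
      return "rolled back";
    }
    completed.push(step);
  }
  return "completed";
}
```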
If you would like to code the workflow:
Microservice A, which accepts the job and commands to update it
Microservice B, which provides a read model for the job
Based on JobCreatedEvents, use a message queue to process and update the job through the pipeline stages, updating the JobStatus at every node in the pipeline.
I am assuming you know about queues and consumers.
I'm new to Camunda (a workflow engine) myself; it might be usable here, but I'm not completely sure.
Accessing a shared database from multiple microservices is highly discouraged, as it violates a basic rule of microservices architecture:
a microservice must be autonomous and keep its own logic and data.
Also, to achieve a good microservice design, you should loosely couple your microservices.
Multiple microservices accessing the same database is not recommended. Here you have the case where each of the services needs to be triggered, then updates the data, and then somehow calls the next service.
You really need a mechanism to orchestrate the services. A workflow engine might fit the bill.
I would, however, suggest an event-driven system. I might be overreaching here, given my limited knowledge of your data. Have one service that gives you basic CRUD on the data, and other services that hold the logic for changing it (at this point I would ask why you want different services to change the state; if it's a business requirement, that's fine). Once the data is written, just publish an event to which services can subscribe and react.
This will allow you to easily add more states to your pipeline in future.
You will need a service to manage the event queue.
As far as logging the state is concerned, it can be done easily by logging the events.
If you opt for the workflow route, you may use Amazon SWF or Camunda; really, there are quite a few options out there.
If going for the event route, you need to look into event-driven systems in microservices.
Is it possible to have one Laravel application listen for events triggered in another?
I've built a REST API to complement an existing web app. It uses the same database but I've built it as a separate application and there are certain events which clear some cached results. At the moment the events are not being shared between the two applications so I'm getting the cached results in spite of having updated the database. Is there a way for one app to pick up on events fired by the other? I haven't found anything about this in the docs.
Redis is completely agnostic about which application is listening to it. You can set your broadcast driver to redis and invoke your events in one application while listening in the other, as long as they both use the same Redis instance. Note, however, that Laravel binds listeners to a specific event class, so you would still have to make sure the class exists in the listening application in order to define a listener for it.
Who should be responsible for handling domain events? Application services, domain services or the entities themselves?
Let's use a simple example for this question.
Let's say we work on a shop application, and we have an application service dedicated to order operations. In this application, Order is an aggregate root, and following the rules, we can work with only one aggregate within a single transaction. After an Order is placed, it is persisted in a database. But there is more to be done. First of all, we need to change the number of items available in the inventory, and secondly, notify some other part of the system (probably another bounded context) that the shipping procedure for that particular order should be started. Because, as already stated, we can modify only one aggregate per transaction, I'm thinking about publishing an OrderPlacedEvent that will be handled by some components in separate transactions.
The question arises: which components should handle this type of event?
I'd handle it as follows:
1) Application layer if the event triggers modification of another Aggregate in the same bounded context.
2) Application layer if the event triggers some infrastructure service,
e.g. an email is sent to the customer. An application service is needed to load the order for the mail content and recipient, and then invoke the infrastructure service to send the mail.
3) I prefer a Domain Service personally if the event triggers some operations in another bounded context.
e.g. Shipping or Billing; an infrastructure implementation of the domain service is responsible for integrating the other bounded context.
4) Infrastructure layer if the event needs to be fanned out to multiple consumers. Each consumer then falls under 1), 2), or 3).
For me, the conclusion is: application layer if the event leads to a separate acceptance test for your bounded context.
By the way, what's your infrastructure for ensuring the durability of your events? Do you include the event publishing in the transaction?
These kinds of handlers belong to the application layer. You should probably create a supporting application service method too; this way you can start a separate transaction.
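As a sketch of what that looks like (the names and the unit-of-work abstraction are hypothetical), the application-layer handler opens its own transaction and modifies the second aggregate there:

```ts
// Hypothetical types for illustration.
interface OrderPlacedEvent {
  orderId: string;
  items: Array<{ sku: string; qty: number }>;
}

interface Stock {
  sku: string;
  decrease(qty: number): void;
}

interface InventoryRepository {
  bySku(sku: string): Promise<Stock>;
  save(stock: Stock): Promise<void>;
}

interface UnitOfWork {
  // Begins a transaction, commits on success, rolls back on error.
  run<T>(work: () => Promise<T>): Promise<T>;
}

class InventoryApplicationService {
  constructor(
    private readonly uow: UnitOfWork,
    private readonly inventory: InventoryRepository,
  ) {}

  // Registered with the event dispatcher for OrderPlacedEvent.
  async onOrderPlaced(event: OrderPlacedEvent): Promise<void> {
    // A separate transaction from the one that persisted the Order aggregate.
    await this.uow.run(async () => {
      for (const line of event.items) {
        const stock = await this.inventory.bySku(line.sku);
        stock.decrease(line.qty);
        await this.inventory.save(stock);
      }
    });
  }
}
```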
I think the most common and usual place to put the EventHandlers is in the application layer. Drawing an analogy with CQRS, EventHandlers are very similar to CommandHandlers, and I usually put them both close to each other (in the application layer).
This article from Microsoft also gives some examples of putting handlers there. Look at the image below, taken from the related article: