Domain Events vs Event notification vs Event-carried state transfer ECST in event driven architecture, implementing Domain Driven Design (DDD) - events

I would like to clarify in my mind the ways different kinds of events could be implemented in an EDA system (a system with Event-Driven Architecture) implementing DDD (Domain-Driven Design). Let's assume that we are not using event sourcing.
More specifically having read relevant articles it seems there are 3 kinds of events:
Event notification: This kind of event does not carry much detail; it just announces that the event has happened and provides a way to query for more information.
{
  "type": "paycheck-generated",
  "event-id": "537ec7c2-d1a1-2005-8654-96aee1116b72",
  "delivery-id": "05011927-a328-4860-a106-737b2929db4e",
  "timestamp": 1615726445,
  "payload": {
    "employee-id": "456123",
    "link": "/paychecks/456123/2021/01"
  }
}
Event-carried state transfer (ECST): This event seems to come in two flavours, either it has a delta of some information which was changed, or it contains all the relevant information (snapshot) for a resource.
Snapshot (all the relevant state):
{
  "type": "customer-updated",
  "event-id": "6b7ce6c6-8587-4e4f-924a-cec028000ce6",
  "customer-id": "01b18d56-b79a-4873-ac99-3d9f767dbe61",
  "timestamp": 1615728520,
  "payload": {
    "first-name": "Carolyn",
    "last-name": "Hayes",
    "phone": "555-1022",
    "status": "follow-up-set",
    "follow-up-date": "2021/05/08",
    "birthday": "1982/04/05",
    "version": 7
  }
}
Delta (only the changed fields):
{
  "type": "customer-updated",
  "event-id": "6b7ce6c6-8587-4e4f-924a-cec028000ce6",
  "customer-id": "01b18d56-b79a-4873-ac99-3d9f767dbe61",
  "timestamp": 1615728520,
  "payload": {
    "status": "follow-up-set",
    "follow-up-date": "2021/05/10",
    "version": 8
  }
}
Domain event: This kind of event lies somewhere between the other two: it carries more information than an event notification, but that information is meaningful mainly within a specific domain.
The examples above for each kind are taken from Khononov's book (Learning Domain-Driven Design: Aligning Software Architecture and Business Strategy).
Having said all that, I would like to clarify the following questions:
(1) Are Event-carried state transfer (ECST) and event notification events typically used as integration events (in a DDD EDA system) when communicating with other bounded contexts, i.e. by transforming domain events into integration events depending on the use case?
(2) Are there one or more other typical categories of events in a Domain-Driven Designed system utilising Event-Driven Architecture? For example: event notifications emitted when domain errors occur, used to notify the client of the specific error (in which case no aggregate persistence takes place). How are these kinds of errors propagated back to the client, and what name for such events is accepted by the DDD/EDA community?
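To make question (1) concrete, here is a minimal TypeScript sketch of the kind of translation it describes: an application-layer function that turns a rich domain event into either a thin event notification or an ECST snapshot before publishing across a bounded-context boundary. All type and function names here are illustrative, not from Khononov's book.

```typescript
// Illustrative shapes; real systems will have richer metadata.
interface DomainEvent {
  type: string;
  eventId: string;
  aggregateId: string;
  timestamp: number;
  payload: Record<string, unknown>;
}

interface IntegrationEvent {
  type: string;
  eventId: string;
  timestamp: number;
  payload: Record<string, unknown>;
}

// Event notification flavour: strip the rich domain payload and expose
// only an id and a link, so consumers query back for details.
function toEventNotification(e: DomainEvent, linkBase: string): IntegrationEvent {
  return {
    type: e.type,
    eventId: e.eventId,
    timestamp: e.timestamp,
    payload: { "aggregate-id": e.aggregateId, link: `${linkBase}/${e.aggregateId}` },
  };
}

// ECST snapshot flavour: carry the full relevant state so consumers
// do not need to call back into the producing context.
function toEcstEvent(e: DomainEvent, snapshot: Record<string, unknown>): IntegrationEvent {
  return { type: e.type, eventId: e.eventId, timestamp: e.timestamp, payload: snapshot };
}
```

The point of the sketch is that the same domain event can fan out into different integration-event shapes per consumer, which is one common answer to question (1).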


/actuator/health does not detect stopped binders in Spring Cloud Stream

We are using Spring Cloud Streams with multiple bindings based on Kafka Streams binders.
The output of /actuator/health correctly lists all our bindings and their state (RUNNING) - see example below.
Our expectation was that, when a binding is stopped using
curl -d '{"state":"STOPPED"}' -H "Content-Type: application/json" -X POST http://<host>:<port>/actuator/bindings/mystep1,
it would still be listed, but with threadState = NOT_RUNNING or SHUTDOWN, and that the overall health status would be DOWN.
This is not the case! After stopping a binder, it is removed from the list and the overall state of /actuator/health is still UP.
Is there a reason for this? We would like to have an alert on this execution state of our application.
Are there code examples of how we can achieve this with a customized solution based on KafkaStreamsBinderHealthIndicator?
Example output of /actuator/health with Kafka Streams:
{
  "status": "UP",
  "components": {
    "binders": {
      "status": "UP",
      "components": {
        "kstream": {
          "status": "UP",
          "details": {
            "mystep1": {
              "threadState": "RUNNING",
              ...
            },
            "mystep2": {
              "threadState": "RUNNING",
              ...
            }
          }
        }
      }
    },
    "refreshScope": {
      "status": "UP"
    }
  }
}
UPDATE on the exact situation:
We do not stop the binding manually via the bindings endpoint.
We have implemented integrated error queues for runtime errors within all processing steps based on StreamBridge.
The solution also has some kind of circuit-breaker feature: it stops a binding from within the code when a configurable limit of consecutive runtime errors is reached, because we do not want to flood our internal error queues.
Our application is monitored by Icinga via /actuator/health, therefore we would like to get an alarm when one of the bindings is stopped.
Switching in Icinga to another endpoint like /actuator/bindings cannot be done easily by our team.
Presently, the Kafka Streams binder health indicator only considers the currently active Kafka Streams processors for the health check, so what you are seeing in the output when the binding is stopped is expected. Since you used the bindings endpoint to stop the binding, you can use /actuator/bindings to get the status of the bindings; there you will see the state of all the bindings in the stopped processor as stopped. Does that satisfy your use case? If not, please add a new issue in the repository and we could consider making some changes in the binder so that the health indicator is configurable by users. At the moment, applications cannot customize the health check implementation. We could also consider adding a property with which you can force the stopped/inactive Kafka Streams processors to be included in the health check output. This is going to be tricky, though: e.g., what should the overall health status be if some processors are down?
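Until the health indicator is configurable, one workaround is an external watchdog that polls /actuator/bindings (which, unlike /actuator/health, still reports stopped bindings) and raises the Icinga alarm itself. A sketch, assuming each entry in the bindings response carries a bindingName and a state field with values like "running"/"stopped" (field names are an assumption about the endpoint's payload):

```typescript
// Assumed shape of one entry in the /actuator/bindings response.
interface BindingInfo {
  bindingName: string;
  state: string; // assumed values: "running", "stopped", ...
}

// Return the names of all bindings that are not running, so the caller
// can exit non-zero (or POST to Icinga) when the list is non-empty.
function stoppedBindings(bindings: BindingInfo[]): string[] {
  return bindings
    .filter(b => b.state.toLowerCase() !== 'running')
    .map(b => b.bindingName);
}

// Usage sketch (hypothetical host/port, run from a cron/Icinga check):
//   const res = await fetch('http://host:8080/actuator/bindings');
//   const stopped = stoppedBindings(await res.json());
//   if (stopped.length > 0) process.exit(2); // critical for Icinga
```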

nestjs microservices - have one clientProxy to publish message to any microService

Sometimes, you want to say, "I have this message, who can handle it?"
In NestJS, a client proxy is bound directly to a single microservice.
So, as an example, let's say that I have the following microservices:
CleaningService, FixingService.
Both of the above can handle the message car, but only CleaningService can handle the message glass.
So, I want to have something like:
this.generalProxy.emit('car', {id: 2});
In this case, I want 2 different microservices to handle the car: CleaningService and FixingService.
And in this case:
this.generalProxy.emit('glass', {id: 5});
I want only CleaningService to handle it.
How is that possible? How can I create a clientProxy that is not bound directly to a specific microservice?
The underlying transport layer matters because, despite the abstraction in front of the different transports, each underlying one has completely different characteristics and capabilities. The type of messaging pattern you're talking about is simple to accomplish with RabbitMQ because it has the notion of exchanges, queues, publishers, subscribers, etc., while a TCP-based microservice requires a connection from one service to another. Likewise, the Redis transport layer uses simple channels, without the underlying implementation necessary to support some messages being fanned out to multiple subscribers and some going directly to specific subscribers.
This might not be the most popular opinion but I've been using NestJS professionally for over 3 years and I can definitely say that the official microservices packages are not sufficient for most actual production applications. They work great as a proof of concept but quickly fall apart because of exactly these types of issues.
Luckily, NestJS provides great building blocks and primitives in the form of the Module and DI system to allow for much more feature rich plugins to be built. I created one specifically for RabbitMQ to be able to support the exact type of scenario you are describing.
I highly recommend, since you're using RabbitMQ already, that you check out @golevelup/nestjs-rabbitmq, which can easily support what you want to accomplish using native RMQ concepts like Exchanges and Routing Keys. (Disclaimer: I am the author.) It also allows you to manage as many exchanges and queues as you like (instead of being forced to push all things through a single queue) and has native support for multiple messaging patterns, including PubSub and RPC.
You simply decorate your methods that you want to act as microservice message handlers with the appropriate metadata and messaging will just work as expected. For example:
@Injectable()
export class CleaningService {
  @RabbitSubscribe({
    exchange: 'app',
    routingKey: 'cars',
    queue: 'cleaning-cars',
  })
  public async cleanCar(msg: {}) {
    console.log(`Received message: ${JSON.stringify(msg)}`);
  }

  @RabbitSubscribe({
    exchange: 'app',
    routingKey: 'glass',
    queue: 'cleaning-glass',
  })
  public async cleanGlass(msg: {}) {
    console.log(`Received message: ${JSON.stringify(msg)}`);
  }
}

@Injectable()
export class FixingService {
  @RabbitSubscribe({
    exchange: 'app',
    routingKey: 'cars',
    queue: 'fixing-cars',
  })
  public async fixCar(msg: {}) {
    console.log(`Received message: ${JSON.stringify(msg)}`);
  }
}
With this setup, both the cleaning service and the fixing service will receive the car message on their individual handlers (since they use the same routing key), and only the cleaning service will receive the glass message.
Publishing a message is simple. You just include the exchange and routing key, and the right handlers will receive it based on their configuration:
amqpConnection.publish('app', 'cars', { year: 2020, make: 'toyota' });

Feature Flags - Should they be exposed to client applications?

I'm considering using Feature Flags in a web based app that has both javascript/html and mobile native clients, and am trying to make an informed decision on the following:
Should feature flags be exposed to client applications?
When discussing this with others, two approaches have emerged for how clients deal with feature flags:
1) Clients know nothing about feature flags at all.
Server side endpoints that respond with data would include extra data to say if a feature was on or off.
e.g. for a fictional endpoint, /posts, data could be returned like so
enhanced ui feature enabled:
{
  "enhanced_ui": true,
  "posts": [1, 2, 3, 4, 5]
}
enhanced ui feature disabled:
{
  "enhanced_ui": false,
  "posts": [1, 2, 3, 4, 5]
}
2) Clients can access an endpoint, and ask for feature flag states.
e.g. /flagstates
{
  "enhanced_ui": true
}
Clients then use this to hide or show features as required.
Some thoughts:
Approach #1 has fewer moving parts - no client-side libraries are needed for implementing gates at all.
The question comes up though: when dynamic flags are updated, how do clients know? We could implement pub/sub to receive notifications and reload clients, so they'd automatically get the new, up-to-date data.
Approach #2 feels like it might be easier to manage listening for flag updates, since it's a single endpoint that returns features, and state changes could be pushed out easily.
This is something that interested me as well as I have a requirement to implement feature flags/switches in the product I am working on. I have been researching this area for the past week and I will share my findings and thoughts (I am not claiming they are best practice in any way). These findings and thoughts will be heavily based on ASP.Net Zero and ASP.Net Boilerplate as I found these to be the closest match for an example implementation for what I am looking for.
Should feature flags be exposed to client applications?
Yes and no. If you are building a software-as-a-service product (potentially with multitenancy), then you will most likely have to have some sort of management UI where admin users can manage (CRUD/enable/disable) features. This means that if you are building a SPA, you will obviously have to implement endpoints in your API (appropriately secured, of course) that your front end can use to retrieve details about the features and their current state for editing purposes. That could look something like this:
{
  "features": [
    {
      "parentName": "string",
      "name": "string",
      "displayName": "string",
      "description": "string",
      "defaultValue": "string",
      "inputType": {
        "name": "string",
        "attributes": {
          "additionalProp1": {},
          "additionalProp2": {},
          "additionalProp3": {}
        },
        ....
The model for features can, of course, vary based on your problem domain, but the above should give you an idea of a generic model to hold a feature definition.
Now as you can see, there is more to a feature than just a boolean flag for whether it is enabled or not - it may have attributes around it. This was not obvious at all for me to begin with, as I had only thought about my problem in the context of fairly simple (true/false) features, whereas actually there may be features that are a lot more complex.
Lastly, when your users are browsing your app, if you are rendering the UI for a tenant who has your EnhancedUI feature enabled, you will need to know that the feature is enabled. In ASP.Net Zero this is done by using something called IPermissionService, which is implemented in both the front end and the back end. In the back end, the permission service basically checks whether the user is allowed to access some resource, which in a feature-switch context means checking whether the feature is enabled for a given tenant. In the front end (Angular), the permission service retrieves these permissions (/api/services/app/Permission/GetAllPermissions):
{
  "items": [
    {
      "level": 0,
      "parentName": "string",
      "name": "string",
      "displayName": "string",
      "description": "string",
      "isGrantedByDefault": true
    }
  ]
}
This can then be used to create some sort of RouteGuard where, if something is not enabled or not allowed, you can redirect appropriately, for example to an "Upgrade your edition" page.
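A minimal sketch of what approach #2 plus such a guard can look like on the client side (the names FeatureFlags, canActivateRoute, and the /upgrade-your-edition URL are illustrative, not from ASP.Net Zero):

```typescript
// Flag states as fetched once from a hypothetical /flagstates endpoint.
type FeatureFlags = Record<string, boolean>;

// Guard logic: a route may be entered only if its required feature is
// explicitly enabled; unknown flags are treated as "off".
function canActivateRoute(flags: FeatureFlags, requiredFeature: string): boolean {
  return flags[requiredFeature] === true;
}

// Where to send the user when the guard denies access (null = allowed).
function guardRedirectUrl(flags: FeatureFlags, requiredFeature: string): string | null {
  return canActivateRoute(flags, requiredFeature) ? null : '/upgrade-your-edition';
}
```

The useful property of keeping the guard this thin is that when flag states are pushed or re-fetched, only the FeatureFlags object changes; the routing logic stays untouched.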
Hopefully this gives you some ideas to think about.

Slack messages incoming web hooks as unique message not continued

I am hitting an API to return stats for some websites, analysing the returned values, and adding some of the sites to an array.
I then construct a Slack message and add the array of sites to the fields section like this:
"attachments": [
  {
    "fallback": "",
    "color": "#E50000",
    "author_name": "title",
    "title": "metrics recorded",
    "title_link": "https://mor47992.live.dynatrace.com/#dashboard;id=cc832197-3b50-489e-b2cc-afda34ab6018;gtf=l_7_DAYS",
    "text": "more title info",
    "fields": sites,
    "ts": Date.now() / 1000 | 0
  }
]
This is all in a Lambda which is triggered every 5 minutes, and the first message comes through fine.
However, subsequent messages just append to the fields section of the original message, so it looks like I have delivered duplicate content. Is there a way to force each hit to the incoming webhook to post as a brand-new message to Slack?
Here is an example of a follow-up message; notice the duplicate content.
No. It's a "feature" of Slack that it will automatically combine multiple messages from the same user/bot without restating the user name if they are sent within a short time frame.
To separate the attachments in your case, I would suggest adding an introduction text: either via the text property of the message (on the same level as the attachments property), or by adding a pretext to each attachment.
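For example, stamping each run's attachments with a distinct pretext before posting makes consecutive webhook posts read as separate messages. A sketch (the run label and "Site metrics run" wording are illustrative):

```typescript
// Minimal view of a Slack attachment: pretext plus whatever other
// fields (fallback, color, fields, ts, ...) the payload already has.
interface SlackAttachment {
  pretext?: string;
  [key: string]: unknown;
}

// Copy the attachments, adding a per-run pretext so each 5-minute post
// is visually separated from the previous one.
function withRunPretext(attachments: SlackAttachment[], runLabel: string): SlackAttachment[] {
  return attachments.map(a => ({ ...a, pretext: `Site metrics run: ${runLabel}` }));
}

// Usage sketch: build the label from the trigger time, then POST the
// resulting payload to the incoming webhook URL as before.
// const body = { attachments: withRunPretext(attachments, new Date().toISOString()) };
```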

Detecting calendar-event mail items in Office365 REST Mail API

I am successfully retrieving user emails from our Exchange Online server, via REST requests and the ADAL library. We have been retrieving and processing calendar-event emails, and their associated calendar events, which are generated by Outlook, GMail/Google-Calendar, iPad, iPhone and Android devices.
We have been looking in the ClassName property for "meeting.request" or "meeting.cancelled", but those values were removed a week ago and have not returned. We have now been looking for a non-Null MeetingMessageType property (MeetingRequest or MeetingCancelled), but as of today, those properties have also been removed. This is incredibly valuable interop data but I don't know where to look next.
How can I associate a retrieved json Message object from a user's mailbox or a shared mailbox, with an (Exchange...) associated Calendar event? We can process Meeting creations, invitations, acceptances etc. with message items which we then purge; Whereas, querying the calendars for new and updated events is much much more intensive, since we certainly can't purge calendar events off the calendar as we process them!
Can I query the calendar for associated message Ids? I can't imagine this would be possible to do for every message.
Thanks!
Edit: @Venkat Thanks. Mail items are infinitely more process-able than emergent calendar-event standards. As an Exchange dev, I have to ask -- do you really need an example of how I can process a mail-bound event better as a mail item rather than as a calendar event item? OK, that's cool, here is one:
One thing we are doing is cc/bcc-ing mail/mtg-requests to specific mailboxes for processing (or using client and server rules to accomplish the same thing). We can then poll individual mailboxes, shared mailboxes, and/or collections of mailboxes to auto-respond or not... and to move to specific calendars or not, and to redirect to specific users or not, and to change header information during routing for further category classification or not, and to even replace recipients/attendees or not etc. etc. To do the same thing with REST calendar requests, we'd lose all server rules automation, all client rules automation, procedural auto-respond, all headers manipulation (data-insertion/extraction), etc. We're just trying to push events to a cloud app, for certain collections of users, using shared mailboxes which redirect to specific daemon accounts, which hold calendars for specific subsets of our users/clients.
Like everyone else, we are trying to integrate with cloud apps. So we need procedural parsing, data manipulation, and pushing of both mail and calendar items. So, for one thing, we have the massive advantages of server mail-processing rules, client/user mail rules, mail header modifications (easy item data modification), mail auto-respond control, and blind recipients. Calendar events don't get any of those things. For a second thing, we have a much more robust mail folders taxonomy than calendar(s) taxonomy (which is almost non-existent). For a third thing, calendar event mail items are user-specific and have less persistent value than shared calendar events. Finally, if we're processing mail items anyway -- why not at least have an eventId for events? Why take out ALL interop information? Having an eventId completely eliminates the need for a query against a calendar endpoint returning multiple items, and adds no additional queries against a mail endpoint.
Google includes an attached ics. Even if you eliminate the event item attachment from the API mail item, I don't see why you have to remove the eventId. Processing calendar events by mail is nothing new, but we have to have a data-binding between the two objects, to do it. That is all.
My Exchange Server still knows when a mail item is a calendar event. It just won't tell ~me~, any more, if I ask it over REST. So, as a brutish work-around I can set up a mail rule to add a category of "api_calendarEvent" for all incoming messages that are of type "Meeting Request". Then, after making a REST call for mail items, I can parse categories to manually repopulate a class property. But why remove the attachment, classname, MeetingMessageType, and EventId altogether from the mail item? Even if I made a server rule to re-categorize certain mail items in certain mailboxes as calendar events, and was able to know when to poll a calendar to get event details-- would I always know what calendar to poll, to find that event? All we'd need to avoid blind polling across multiple calendars, is for you to retain the EventId and/or ClassName. Then we'd also have massive automation of calendar processing again, which has currently been removed from the API.
Thanks!
Thanks for the detailed response to my comment! Your scenario is something we wish to support. As part of schema clean up, we removed event ID and meeting message type from message, as it was being included for every message. For calendar invites and responses, we plan to add back 2 properties:
1. A navigation link to the related event, so you can click to the event, and take actions on it, if you have calendar permissions.
2. A calendar response type, e.g. Meeting Accepted, Meeting Declined, etc., so you know what type of message you have received.
We are currently working on the design and we don't have the exact timeline to share. But, we will update our documentation as soon as we have this API available.
[UPDATE] We now return calendar event invites and responses as EventMessage which is a subclass of Message. This entity includes a property called MeetingMessageType and a navigation link to the corresponding Event on the user's calendar. See below for an example:
{
@odata.context: "https://outlook.office365.com/api/v1.0/$metadata#Users('<snipped>')/Messages/$entity",
@odata.type: "#Microsoft.OutlookServices.EventMessage",
@odata.id: "https://outlook.office365.com/api/v1.0/Users('<snipped>')/Messages('<snipped>')",
@odata.etag: "<snipped>",
Id: "<snipped>",
ChangeKey: "<snipped>",
Categories: [ ],
DateTimeCreated: "2015-04-08T14:37:55Z",
DateTimeLastModified: "2015-04-08T14:37:55Z",
Subject: "<snipped>",
BodyPreview: "",
Body: {
ContentType: "HTML",
Content: "<snipped>"
},
Importance: "Normal",
HasAttachments: false,
ParentFolderId: "<snipped>",
From: {
EmailAddress: {
Address: "<snipped>",
Name: "<snipped>"
}
},
Sender: {
EmailAddress: {
Address: "<snipped>",
Name: "<snipped>"
}
},
ToRecipients: [{
EmailAddress: {
Address: "<snipped>",
Name: "<snipped>"
}
}],
CcRecipients: [ ],
BccRecipients: [ ],
ReplyTo: [ ],
ConversationId: "<snipped>",
DateTimeReceived: "2015-04-08T14:37:55Z",
DateTimeSent: "2015-04-08T14:37:48Z",
IsDeliveryReceiptRequested: null,
IsReadReceiptRequested: false,
IsDraft: false,
IsRead: false,
WebLink: "<snipped>",
MeetingMessageType: "MeetingRequest",
Event@odata.navigationLink: "https://outlook.office365.com/api/v1.0/Users('<snipped>')/Events('<snipped>')"
}
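Given that shape, consumer-side detection reduces to checking the OData type of each retrieved message and, for EventMessage instances, following the navigation link to the related calendar event. A sketch against the sample response above (not an official SDK; only the fields shown in the sample are assumed):

```typescript
// The fields of a v1.0 Message relevant to detection, per the sample response.
interface RestMessage {
  '@odata.type'?: string;
  MeetingMessageType?: string; // e.g. "MeetingRequest"
  'Event@odata.navigationLink'?: string;
  [key: string]: unknown;
}

// Return the navigation link to the related calendar event, or null when
// the message is not a calendar invite/response (plain mail items carry
// neither the EventMessage type nor the navigation link).
function relatedEventLink(msg: RestMessage): string | null {
  if (msg['@odata.type'] !== '#Microsoft.OutlookServices.EventMessage') {
    return null;
  }
  return msg['Event@odata.navigationLink'] ?? null;
}
```

This restores the mailbox-driven workflow from the question: poll the shared mailbox, detect calendar-related items from the message itself, and only touch the calendar endpoint via the link, with no blind polling across calendars.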
Please let me know if our proposed changes meet your requirements, or if you have any questions or need more info.
Thanks,
Venkat
