I want to know further details about payloads in an event-driven architecture. I've used several online resources and didn't find many details. Please help me understand:
Use of the full payload.
Providing metadata and an API link with a token to access the actual payload, rather than sending the full data.
To answer your question (an API link rather than the full data), let's take an example:
At Amazon, the Order microservice sends an OrderCancelled event, and the Customer service listens to that event.
Now there could be two ways of sending the event data:
Send the complete order data in the event
Pros: Listener services do not need to query the Order service to do their work.
Cons: Lots of data is passed in the event even though perhaps only 10% of it is used; lots of I/O.
Send only the order ID, cancellation reason, customer ID, and date in the event
Pros: If the fields are chosen carefully, much less data is sent in the event.
Cons: If the fields are chosen incorrectly, listeners end up making lots of API requests back to the Order service. (A sketch of both payload shapes follows below.)
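To make the two options concrete, here is a minimal sketch of both payload shapes (the field names are illustrative, not from any real Amazon API):

```typescript
// Option 1: fat event - the full order travels with the event.
interface OrderCancelledFat {
  event: "OrderCancelled";
  order: {
    orderId: string;
    customerId: string;
    cancelReason: string;
    cancelledAt: string; // ISO-8601 date
    items: { sku: string; quantity: number; price: number }[];
    shippingAddress: string;
    // ...and every other order field, most of which a given
    // listener may never read.
  };
}

// Option 2: lean event - just enough to act on, plus enough
// to fetch the rest on demand.
interface OrderCancelledLean {
  event: "OrderCancelled";
  orderId: string;
  customerId: string;
  cancelReason: string;
  cancelledAt: string;
  // Listeners needing more would call back to the Order service,
  // e.g. GET /orders/{orderId} (hypothetical endpoint), ideally
  // authorized with a token as the question suggests.
}
```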
Related
This question is about message queueing between services in a microservice architecture. There is hardly anything to be found about this topic.
The situation:
Microservice A and microservice B. Microservice A owns the entity "something", and B needs to know about it. I keep it general to avoid discussions about service boundaries.
In our case, A sends a message that contains the event name and the related entity ID, like:
Event: somethingCreated
SomethingID: 1234
B consumes this message, and if it needs further information, it fetches it from A using the SomethingID.
The second approach would be that the message contains not only the information above but also the entity's data, like:
Event: somethingCreated
SomethingID: 1234
SomeFieldKey: someFieldValue
...
Lean message:
Pros:
* Less network usage
* Always the same structure of messages
Cons:
* If information from A is needed on demand, there must be some mechanism to handle e.g. network failures (see the fetch sketch after these lists)
Fat message:
Pro:
* Information is already there
Con:
* What if the attached information is not enough?
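To make the lean-message failure concern concrete, a minimal sketch of B fetching the rest of the entity on demand with a naive retry (the URL, message shape, and retry policy are assumptions):

```typescript
interface SomethingCreated {
  event: "somethingCreated";
  somethingId: string;
}

// Hypothetical endpoint on service A.
const SERVICE_A_URL = "http://service-a/somethings";

// B's consumer: enrich the lean message by calling A.
async function handleSomethingCreated(msg: SomethingCreated): Promise<void> {
  const details = await fetchWithRetry(`${SERVICE_A_URL}/${msg.somethingId}`, 3);
  console.log("fetched details for", msg.somethingId, details);
}

// Naive retry with backoff, covering the network-failure con above.
async function fetchWithRetry(url: string, attempts: number): Promise<unknown> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(url);
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return await res.json();
    } catch (err) {
      lastError = err;
      await new Promise((r) => setTimeout(r, 100 * 2 ** i)); // back off
    }
  }
  throw lastError; // let the message be retried or dead-lettered upstream
}
```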
So both have pros and cons, and my intention here is to get an overview: which approach are you using?
Thanks in advance for your answers.
The simple answer is: it depends.
We have services that expose all the data in their events, services that share just a reference ID, and services that fall somewhere between those two when it comes to the event payload.
Our view is that the service producing the events is mostly in control of what the payload content will be. We review the use case, monitor the usage of the events and the resulting service calls, and accordingly make the payload fatter or leaner.
We do have an upper limit on our message size, but other than that, no restrictions.
When data flows between services, network latency has not been an issue for us (at least, not because of an increase in payload size).
So you need to allow your individual services to make the call. Each service has an SLA to meet in terms of response time, and when it gets breached, you review, find the bottlenecks, and resolve them.
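As a small illustration of the size cap mentioned above, a producer-side guard (the limit and the injected publish function are assumptions, not a real broker API):

```typescript
const MAX_MESSAGE_BYTES = 256 * 1024; // assumed cap, e.g. an SQS-like 256 KB

function publishEvent(publish: (body: string) => void, payload: object): void {
  const body = JSON.stringify(payload);
  if (new TextEncoder().encode(body).length > MAX_MESSAGE_BYTES) {
    // Payload got too fat: signal the producer to lean it out
    // (e.g. fall back to a reference-id event) instead of publishing.
    throw new Error("Event payload exceeds size limit; send a reference instead");
  }
  publish(body);
}
```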
This is more of a hypothetical question, so I can't really show any code examples. Imagine a site like Twitter wanting to live-update stats on a tweet via WebSockets/Socket.io. In terms of performance, which of these would be the best approach?
1. Each action (like, retweet, reply) sends a message to the server, which then gets emitted to all clients, and each client is responsible for updating the appropriate tweet.
2. Each tweet the client loads is connected to a different room, so that it only emits and receives messages relevant to itself.
3. Other?
Or perhaps it depends on the scale of the application? Maybe option 1 is better for a Twitter clone with only a few users, whereas I would think option 2 is better in Twitter's case, because it's a matter of hundreds of "rooms" versus millions of messages per second. And if that's the case, at what point is one approach preferred over the other?
At scale, you do not want to be sending messages to clients that did not ask for them and have no use for them. Imagine a Twitter client that was receiving every single tweet being sent in real time. That could overwhelm the client, and it would mean the server would be delivering every single tweet to every single connected client. That obviously doesn't scale on either the server side or the client side.
So option 1 is out.
The appropriate solution has the server send the client only the messages that it has a particular interest in seeing. This works just fine at any scale. I can't tell whether your option 2 is that or not, since rooms are just a tool for making groups of connections that you can send the same message to - they don't really decide who gets what message; that logic must be baked into your server code.
For a Twitter-like service, it seems you're going to need a system where your server can easily tell which users have an interest in a particular new message. That could be for a number of reasons, such as: they are following the author, they are following a hashtag present in the message, they are mentioned in the message, etc. That is server-side logic, not just simple rooms.
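A minimal Socket.io sketch of option 2 plus a hook for that server-side interest logic (event names like "watchTweet" and the stats shape are assumptions, not Twitter's or Socket.io's API):

```typescript
import { Server } from "socket.io";

const io = new Server(3000);

io.on("connection", (socket) => {
  // Option 2: the client joins one room per tweet it has on screen,
  // so it only receives updates relevant to itself.
  socket.on("watchTweet", (tweetId: string) => socket.join(`tweet:${tweetId}`));
  socket.on("unwatchTweet", (tweetId: string) => socket.leave(`tweet:${tweetId}`));
});

// Called whenever a like/retweet/reply changes a tweet's stats.
// Only sockets in that tweet's room receive the message; richer
// interest logic (followers, hashtags, mentions) would live in
// server code that decides which rooms/sockets to target.
function onTweetAction(
  tweetId: string,
  stats: { likes: number; retweets: number; replies: number }
): void {
  io.to(`tweet:${tweetId}`).emit("tweetStats", { tweetId, ...stats });
}
```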
Is there a way to get the members who gave a certain response to a poll, without the need to create segments?
I am sending emails that include a poll (basically, participation in an event).
Now I would like to easily collect the respondents for an event across the various emails (announcement, invitation, reminder 1, reminder 2, ...).
Currently I need to create segments for each response, referencing the campaigns individually. So whenever I send a campaign (email), I need to update all segments, since there has to be one segment per question, which I would like to avoid.
Hope that's clear enough.
I had a similar question, and after a review of the MailChimp API docs, in particular the reports section, I realized there was no way to retrieve poll results.
After my review, I followed up with MailChimp, and they confirmed that access to poll results via the API is not available. Their detailed response is below:
MailChimp Response - Start
"To be completely honest and transparent, there currently wouldn't be a way of accessing the campaign poll result data directly through the report... With that being said, it would be possible to use the API to create segments based on poll response, then call those segments to view the number of responses for each option, as well as the specific subscribers who chose each individual option.
More info here: https://developer.mailchimp.com/documentation/mailchimp/reference/lists/segments/"
MailChimp Response - End
As you can see, although accessing poll results directly via the API is not available, there is a workaround using segments.
Good luck!
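For illustration, a minimal sketch of that segment-based workaround against the MailChimp API v3 (the data-center prefix, API key, list ID, and segment ID are placeholders; check the linked docs for the exact response shape):

```typescript
// Sketch: read poll respondents via a pre-created segment (MailChimp API v3).
const DC = "us1";               // data-center prefix from your API key
const API_KEY = "your-api-key"; // placeholder
const LIST_ID = "your-list-id"; // placeholder

// MailChimp v3 uses HTTP Basic auth with any username and the API key.
const auth = "Basic " + Buffer.from(`anystring:${API_KEY}`).toString("base64");

async function getSegmentMembers(segmentId: number): Promise<unknown[]> {
  const res = await fetch(
    `https://${DC}.api.mailchimp.com/3.0/lists/${LIST_ID}/segments/${segmentId}/members`,
    { headers: { Authorization: auth } }
  );
  if (!res.ok) throw new Error(`MailChimp API error: ${res.status}`);
  const body = await res.json();
  // body.members lists the subscribers matching the segment, i.e.
  // everyone who picked that poll option, across campaigns.
  return body.members;
}
```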
I'm developing a small CQRS+ES framework and developing applications with it. In my system, I need to log certain client actions and use them for analytics and statistics, and maybe later do something with them in the domain. For example, a client (on the web) downloads some resource(s), and I need to save the date, time, type (download, partial, ...), region or country (maybe from the IP), etc. Afterwards, in some view, the client can see the download count or some more complex report. I'm not sure how to implement this feature.
The first solution creates an analytics context and some aggregate: on each client action, send a command like IncreaseDownloadCounter(resourceId), handle the command, raise domain events, and update the view. But in this scenario the download has already occurred before I send the command, so it is not really a command, and on the other side, version conflicts increase.
The second solution is to raise an event from the client side and update the view model based on it. But with this kind of handling, the event is not stored in the event store, because it was not raised by a command and never changes any domain context. And if I do store it in the event store, there is no aggregate to handle it when it is fetched for some other use.
The third solution is to raise an event from the client side and store it in another database, maybe with a dedicated table per event type. But then I have multiple event stores with different schemas, which makes it difficult to recreate view models and to trace events when rebuilding context state. So if, in the future, I add some domain that uses this type of event, it will be difficult to use the events.
What is the best approach and solution for this scenario?
The first solution creates an analytics context and some aggregate
Unquestionably the wrong answer; the event has already happened, so it is too late for the domain model to complain.
What you have is a stream of events. Putting them in the same event store that you use for your aggregate event streams is fine. Putting them in a separate store is also fine. So you are going to need some other constraint to make a good choice.
Typically, reads vastly outnumber writes, so one concern might be that recording an event per read is going to saturate the domain store. That might push you towards storing these events separately from your data model (prior art: we typically keep the business data in our persistent book of record, but the sequence of HTTP requests received by the server is written to a log instead...).
If you are supporting an operational view, push back on the requirement that the state be recovered after a restart. You might be able to get by with building your view off an in-memory model of the event counts, and use something more practical for the representation of the events.
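A minimal sketch of that separation: client events appended to their own log, with an in-memory projection for the counts (all names and shapes here are illustrative, not part of any particular framework):

```typescript
// A client event: shaped like an ES event, but without aggregate
// name/type or version, since no aggregate produced it.
interface ClientEvent {
  type: "ResourceDownloaded";
  resourceId: string;
  occurredAt: string; // ISO-8601 timestamp
  region?: string;    // e.g. derived from the client IP
}

// A separate append-only log, distinct from the domain event store.
const analyticsLog: ClientEvent[] = [];

// In-memory read model: download counts per resource.
const downloadCounts = new Map<string, number>();

function recordClientEvent(e: ClientEvent): void {
  analyticsLog.push(e); // in production: a log file, Kafka topic, etc.
  downloadCounts.set(e.resourceId, (downloadCounts.get(e.resourceId) ?? 0) + 1);
}

// After a restart, the view is rebuilt by replaying the log,
// rather than requiring durable view state.
function rebuildCounts(): void {
  downloadCounts.clear();
  for (const e of analyticsLog) {
    downloadCounts.set(e.resourceId, (downloadCounts.get(e.resourceId) ?? 0) + 1);
  }
}
```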
Thanks for your complete answer. So I should create something like the event store schema minus some fields (aggregate name or type, version, etc.), collect client events in that repository, and have some offline process read them and update the read model, or create commands to do something in the domain space?
Something like that, yes. If the view for the client doesn't actually require any validation by your model at all, then building the read model from the externally provided events is fine.
Are you recommending saving some claim or authorization token of the user and the sending app, for validation in another process?
Maybe, maybe not. The token describes the authority of the event; our own event handler is the authority for the command(s) derived from the events. It's an interesting question that probably requires more context -- I'd suggest you open a new question on that point.
We're refactoring a large Backbone application to use Flux to help solve some tight-coupling and event/data-flow issues. However, we haven't yet figured out how to handle cases where we need to know the status of a specific AJAX request.
When a controller component requests some data from a flux store, and that data has not yet been loaded, we trigger an ajax request to fetch the data. We dispatch one action when the request is initiated, and another on success or failure.
This is sufficient to load the correct data and update the stores once the data has been loaded. But we have some cases where we need to know whether a certain AJAX request is pending or completed - sometimes just to display a spinner in one or more views, and sometimes to block other actions until the data is loaded.
Are there any patterns that people are using for this sort of behavior in Flux/React apps? Here are a few approaches I've considered:
1. Have a 'request status' store that knows whether there is a pending, completed, or failed request of any type. This works well for simple cases like "is there a pending request for workout data", but becomes complicated if we want to get more granular, e.g. "is there a pending request for workout id 123".
2. Have all of the stores track whether their relevant data requests are pending or not, and return that status as part of the store API - i.e. WorkoutStore.getWorkout would return something like { status: 'pending', data: {} }. The problem with this approach is that this sort of state shouldn't be mixed in with the domain data, as it's really a separate concern. Also, every consumer of the workout store API now needs to handle this 'response with status' instead of just the relevant domain data.
3. Ignore request status - either the data is there and the controller/view acts on it, or the data isn't there and the controller/view doesn't act on it. Simpler, but probably not sufficient for our purposes.
The solutions to this problem vary quite a bit based on the needs of the application, and I can't say that I know of a one-size-fits-all solution.
Often, #3 is fine, and your React components simply decide whether to show a spinner based on whether a prop is null.
When you need better tracking of requests, you may need this tracking at the level of the request itself, or you might instead need this at the level of the data that is being updated. These are two different needs that require similar, but slightly different approaches. Both solutions use a client-side id to track the request, like you have described in #1.
If the component that calls the action creator needs to know the state of the request, you create a requestID and hang on to that in this.state. Later, the component will examine a collection of requests passed down through props to see if the requestID is present as a key. If so, it can read the request status there, and clear the state. A RequestStore sounds like a fine place to store and manage that state.
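A bare-bones sketch of such a RequestStore keyed by a client-generated requestID (the store shape and function names are assumptions; a real Flux store would register with the Dispatcher and emit change events):

```typescript
type RequestStatus = "pending" | "success" | "failure";

// Minimal request-tracking store.
const requests = new Map<string, RequestStatus>();

// Called by the action creator; the component keeps the id in this.state.
function startRequest(): string {
  const requestID = `req_${Date.now()}_${Math.random().toString(36).slice(2)}`;
  requests.set(requestID, "pending");
  return requestID;
}

// Called from the success/failure action handlers.
function finishRequest(requestID: string, ok: boolean): void {
  requests.set(requestID, ok ? "success" : "failure");
}

// What the component checks when the requests collection arrives via props.
function getStatus(requestID: string): RequestStatus | undefined {
  return requests.get(requestID);
}
```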
However, if you need to know the status of the request at the level of a particular record, one way to manage this is to have your records in the store hold on to both a clientID and a more canonical (server-side) id. This way you can create the clientID as part of an optimistic update, and when the response comes back from the server, you can clear the clientID.
Another solution that we've been using on a few projects at Facebook is to create an action queue as an adjunct to the store. The action queue is a second storage area. All of your getters draw from both the store itself and the data in the action queue. So your optimistic updates don't actually update the store until the response comes back from the server.
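A sketch of that action-queue idea, as I read the description above (not Facebook's actual implementation): optimistic updates sit in a queue beside the store, and getters merge both.

```typescript
interface Workout { id: string; name: string; }

// Canonical, server-confirmed records.
const store = new Map<string, Workout>();

// Pending optimistic updates, keyed by client-side id.
const actionQueue = new Map<string, Workout>();

// Optimistic update: only the queue changes, never the store.
function optimisticSave(clientID: string, workout: Workout): void {
  actionQueue.set(clientID, workout);
}

// Server response: clear the queued entry and commit to the store.
function onServerResponse(clientID: string, saved: Workout): void {
  actionQueue.delete(clientID);
  store.set(saved.id, saved);
}

// Getters draw from both areas, preferring pending optimistic data.
function getWorkout(id: string): Workout | undefined {
  for (const w of actionQueue.values()) {
    if (w.id === id) return w;
  }
  return store.get(id);
}
```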