Event Triggered Workflows - events

I'm writing software that processes a set of data coming from user input and sends an answer back to the user.
The flow starts with a configured API call that kicks off a chain of API calls, passing the result of each call to the next one until it reaches the final output.
The problem is that the chain of calls is configurable by the user, so the data can be processed before it is saved to the database.
To give you a small example:
I receive data from an API containing readings from a field sensor. On arrival of this data I should do the following:
Save the data on the database
Process the Data
Based on the data and on a configuration made by the user, get information from a different API (which API depends on the content of the data)
Send the information obtained from that API to a third API, which will send it back to the sensor
Do you know of any solution capable of doing this kind of work?
The language or framework doesn't matter; since it's brand-new software, we are free to start from scratch.
Thank you

I am assuming that you have a form where the user enters the data, and that you receive the data, process it, and return an answer based on the data and your set of rules.
If you have a single form, get the data and pass it to the API (a RESTful API).
Process the data on the server side.
Respond to the client based on the set of rules applied to the user-entered data.
If you have multiple forms presented to the user one after another, do the same for each.
I hope this process works for you. If you can clarify the requirements more specifically, I can describe the process in more depth.
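To make the chaining idea concrete, here is a minimal sketch of a user-configurable pipeline in Node.js (18+, using the built-in fetch), where the result of each call feeds the next one. The URLs, step names, and the shape of the configuration are assumptions for illustration, not any particular product's API.

```js
// Hypothetical user-defined configuration: each step names an endpoint, and a
// step's URL may be chosen based on the data flowing through the pipeline.
const pipeline = [
  { name: 'save',    url: 'https://example.com/api/save' },
  { name: 'process', url: 'https://example.com/api/process' },
  { name: 'enrich',  url: (data) => data.sensorType === 'soil'
      ? 'https://example.com/api/soil-info'
      : 'https://example.com/api/weather-info' },
  { name: 'notify',  url: 'https://example.com/api/notify-sensor' },
];

async function runPipeline(steps, input) {
  let payload = input;
  for (const step of steps) {
    const url = typeof step.url === 'function' ? step.url(payload) : step.url;
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
    });
    if (!res.ok) throw new Error(`Step "${step.name}" failed: ${res.status}`);
    payload = await res.json(); // result of each call feeds the next one
  }
  return payload;               // final output sent back to the user
}

// Usage: runPipeline(pipeline, { sensorType: 'soil', reading: 42 }).then(console.log);
```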

Related

CQRS+ES: Client log as event

I'm developing a small CQRS+ES framework and building applications with it. In my system, I need to log certain client actions and use them for analytics and statistics, and maybe in the future do something with them in the domain. For example, a client (on the web) downloads some resource(s) and I need to save the date, time, type (download, partial, ...), region or country (maybe IP), etc. Later, in some view, the client can see the download count or some more complex report. I'm not sure how to implement this feature.
The first solution is to create an analytics context and some aggregate, and on each client action send a command like IncreaseDownloadCounter(resourceId), handle the command, raise domain events, and update the view. But in this scenario the download has already occurred before I send the command, so it isn't really a command, and on the other side version conflicts increase.
The second solution is to raise an event from the client side and update the view model based on it. But with this kind of handling my event is not stored in the event store, because it isn't raised by a command and never changes any domain context. If I do store it in the event store, there is no aggregate to handle it when it is fetched later for some other use.
The third solution is to raise an event from the client side and store it in another database, perhaps with a dedicated table for each event type. But handled this way I end up with multiple event stores with different schemas, which makes it difficult to recreate view models and trace events to rebuild context state, so if I later add a domain that uses this type of event, the events are hard to reuse.
What is the best approach and solution for this scenario?
The first solution is to create an analytics context and some aggregate
Unquestionably the wrong answer; the event has already happened, so it is too late for the domain model to complain.
What you have is a stream of events. Putting them in the same event store that you use for your aggregate event streams is fine. Putting them in a separate store is also fine. So you are going to need some other constraint to make a good choice.
Typically, reads vastly outnumber writes, so one concern might be that these events are going to saturate the domain store. That might push you towards storing these events separately from your data model (prior art: we typically keep the business data in our persistent book of record, but the sequence of http requests received by the server is typically written instead to a log...)
If you are supporting an operational view, push back on the requirement that the state be recovered after a restart. You might be able to get by with building your view off an in-memory model of the event counts, and use something more practical for the representations of the events.
Thanks for your complete answer. So I should create something like the event store schema minus some fields (aggregate name or type, version, etc.), collect client events in that repository, and have some offline process read them to update the read model, or create commands to do something in the domain space.
Something like that, yes. If the view for the client doesn't actually require any validation by your model at all, then building the read model from the externally provided events is fine.
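As a rough illustration of that idea (plain JavaScript, with made-up names and an in-memory array standing in for the dedicated event store), client events are appended without any aggregate id or version, and a separate projector folds them into a read model that can be rebuilt at any time by replaying the events:

```js
const clientEvents = []; // stand-in for a dedicated event table/stream

function recordClientEvent(type, payload) {
  clientEvents.push({
    type,                                  // e.g. 'ResourceDownloaded'
    occurredAt: new Date().toISOString(),
    payload,                               // resourceId, region, partial/full, ...
  });
}

// Offline/async projector: rebuildable at any time by replaying the events.
function projectDownloadCounts(events) {
  const counts = {};
  for (const e of events) {
    if (e.type !== 'ResourceDownloaded') continue;
    counts[e.payload.resourceId] = (counts[e.payload.resourceId] || 0) + 1;
  }
  return counts;                           // read model consumed by reporting views
}

recordClientEvent('ResourceDownloaded', { resourceId: 'r-42', region: 'EU' });
console.log(projectDownloadCounts(clientEvents)); // { 'r-42': 1 }
```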
Are you recommending saving some claim or authorization token of the user and the sending app, for validation in another process?
Maybe, maybe not. The token describes the authority of the event; our own event handler is the authority for the command(s) that is/are derived from the events. It's an interesting question that probably requires more context -- I'd suggest you open a new question on that point.

Tracking ajax request status in a Flux application

We're refactoring a large Backbone application to use Flux to help solve some tight coupling and event/data-flow issues. However, we haven't yet figured out how to handle cases where we need to know the status of a specific ajax request.
When a controller component requests some data from a flux store, and that data has not yet been loaded, we trigger an ajax request to fetch the data. We dispatch one action when the request is initiated, and another on success or failure.
This is sufficient to load the correct data, and update the stores once the data has been loaded. But, we have some cases where we need to know whether a certain ajax request is pending or completed - sometimes just to display a spinner in one or more views, or sometimes to block other actions until the data is loaded.
Are there any patterns that people are using for this sort of behavior in Flux/React apps? Here are a few approaches I've considered:
Have a 'request status' store that knows whether there is a pending, completed, or failed request of any type. This works well for simple cases like 'is there a pending request for workout data', but becomes complicated if we want to get more granular, e.g. 'is there a pending request for workout id 123?'
Have all of the stores track whether the relevant data requests are pending or not, and return that status as part of the store API - i.e. WorkoutStore.getWorkout would return something like { status: 'pending', data: {} }. The problem with this approach is that this sort of state shouldn't be mixed in with the domain data, as it's really a separate concern. Also, every consumer of the workout store API now needs to handle this 'response with status' instead of just the relevant domain data.
Ignore request status - either the data is there and the controller/view act on it, or the data isn't there and the controller/view don't act on it. Simpler, but probably not sufficient for our purposes
The solutions to this problem vary quite a bit based on the needs of the application, and I can't say that I know of a one-size-fits-all solution.
Often, #3 is fine, and your React components simply decide whether to show a spinner based on whether a prop is null.
When you need better tracking of requests, you may need this tracking at the level of the request itself, or you might instead need this at the level of the data that is being updated. These are two different needs that require similar, but slightly different approaches. Both solutions use a client-side id to track the request, like you have described in #1.
If the component that calls the action creator needs to know the state of the request, you create a requestID and hang on to that in this.state. Later, the component will examine a collection of requests passed down through props to see if the requestID is present as a key. If so, it can read the request status there, and clear the state. A RequestStore sounds like a fine place to store and manage that state.
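Here is a hedged sketch of that first pattern in plain JavaScript; the store shape, the action names in the comments, and the endpoint are illustrative, not a particular Flux library's API:

```js
let nextRequestId = 0;

const RequestStore = {
  requests: {},                    // requestID -> 'pending' | 'success' | 'failure'
  start(id) { this.requests[id] = 'pending'; },
  finish(id, ok) { this.requests[id] = ok ? 'success' : 'failure'; },
  getStatus(id) { return this.requests[id]; },
};

function fetchWorkout(workoutId) {
  const requestId = 'workout-' + workoutId + '-' + (nextRequestId++);
  RequestStore.start(requestId);   // in real Flux this would be a dispatched action
  fetch('/api/workouts/' + workoutId)
    .then((res) => res.json())
    .then(() => RequestStore.finish(requestId, true))    // e.g. 'REQUEST_SUCCEEDED'
    .catch(() => RequestStore.finish(requestId, false)); // e.g. 'REQUEST_FAILED'
  return requestId;                // the component keeps this in this.state
}

// In the component: RequestStore.getStatus(this.state.requestId) === 'pending'
// decides whether to render a spinner.
```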
However, if you need to know the status of the request at the level of a particular record, one way to manage this is to have your records in the store hold on to both a clientID and a more canonical (server-side) id. This way you can create the clientID as part of an optimistic update, and when the response comes back from the server, you can clear the clientID.
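A similarly hedged sketch of the per-record variant, where an optimistically added record carries a clientID until the server response supplies the canonical id (names and endpoint are again made up):

```js
const WorkoutRecords = [];   // stand-in for the records held by the store

function addWorkoutOptimistically(fields) {
  const clientID = 'c-' + Date.now();
  const record = Object.assign({ clientID, id: null }, fields);
  WorkoutRecords.push(record);            // visible immediately, marked pending
  fetch('/api/workouts', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(fields),
  })
    .then((res) => res.json())
    .then((saved) => {
      record.id = saved.id;               // canonical server-side id
      delete record.clientID;             // request complete, no longer pending
    });
  return clientID;
}

// A view can treat any record that still has a clientID as "saving...".
```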
Another solution that we've been using on a few projects at Facebook is to create an action queue as an adjunct to the store. The action queue is a second storage area. All of your getters draw from both the store itself and the data in the action queue. So your optimistic updates don't actually update the store until the response comes back from the server.
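And a very rough sketch of the action-queue idea: optimistic updates sit in a queue next to the store, getters merge the two, and the store itself is only written once the server confirms (all names are illustrative):

```js
const WorkoutStore = {
  records: { w1: { id: 'w1', name: 'Morning run' } },  // confirmed, server-backed data
  pendingQueue: [],                                    // optimistic updates not yet confirmed

  getWorkout(id) {
    const confirmed = this.records[id];
    // apply any queued optimistic changes on top of the confirmed record
    return this.pendingQueue
      .filter((action) => action.id === id)
      .reduce((rec, action) => Object.assign({}, rec, action.changes), confirmed);
  },

  queueUpdate(id, changes) {
    this.pendingQueue.push({ id, changes });           // optimistic: store untouched
  },

  confirmUpdate(id, changes) {
    this.records[id] = Object.assign({}, this.records[id], changes);
    this.pendingQueue = this.pendingQueue.filter((action) => action.id !== id);
  },
};

// queueUpdate when the optimistic action is dispatched, confirmUpdate when the
// server responds; getters always reflect both sources.
```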

AngularJs - Persistence, storage, ajax requests, data integrity

I'm evaluating whether AngularJS will work as a solution for my moderately simple web application.
The aim is to cut down the amount of AJAX server requests for data as much as possible.
My actual question is simple, yet the repercussions of that request is leading to confusion.
In a nutshell: "Can Angular modify parts of the JSON data received from a backend through user input, and maintain that state until I'm ready to return the data?"
Scenario:
Grab JSON data from the server that contains a root name & associated address details for each root name. The list is rendered to screen along with an 'Edit Address' button for each item.
The user clicks 'Edit Address', Angular displays a form with the address data for the root name.
The user edits the data, clicks submit, the client sends JSON data to the server and, for arguments sake, we get a success return. The address details are modified.
This is where the meat of my question - and lack of understanding - comes to the fore.
Do I need to get the entire list of 10 items back from the server, with the single modified address, just from editing a single list item? Or can I simply update that single item client-side and hold the state as the user returns to the list, say, to edit another item?
IOW, we get a success, but no data is actually returned aside from 'success' - our client has stored the changes.
This is where the data integrity issue rears its ugly head.
** OR **
Grab a list of root items without associated address data.
The user clicks on an 'Edit Address' button for the root item.
We fetch the address data for the root name from the server and the form is displayed, the user edits the data, submits, send data asynchronously, get a success.
User returns to the list and another server request is made to grab the list from the server again.
This is hellishly difficult to explain, but the bottom line is about persistence and data integrity.
Is it best practice to make a server requests after each user edit of data, or can modified data be stored client side - with persistence?
Obviously, validation will be done server-side as well as client-side.
What you're asking is more of a server-side question, on how to design a good RESTful API that allows changes to individual entities without sending/loading the entire list each time. So the answer to your question is that it's entirely up to you... Angular does a great job of binding UI elements to the JavaScript objects in your controllers for you, but when it comes time to save that data to the server, you can do it however you want.
In an ideal world (IMO) your server-side API would support the following:
Get a list of addresses (angular stores them in $scope.addresses)
Get a single address
PUT/PATCH to update an address (when a user makes a change to a single address and confirms it) and return 204 no content
POST to create new addresses, and return the created address with a server-provided identifier (like "id"), that you can use in angular to determine whether an address has been persisted server-side or not. After POSTing, you rewrite the angular scope object with what you got from the server to save the id field.
DELETE to remove them (returning nothing)
With this, when you have the client create an address, you should send a POST to the server to create one, take the response JSON and copy it over the object you just saved, so that now it has an "id" field (or similar). You can use angular templates to visually represent that anything with an "id" field is saved to the server. This way you don't have to re-grab the whole list every time you save.
For updating addresses, this is why PATCH is useful: you can send only the changes to individual fields to the server and ensure that only things the user has changed get sent.
Deleting addresses can work by checking if the "id" field is there, and if so, send a DELETE to the server, and if not, the object was never "saved", so just remove the address from the scope. Upon successful deletion you can just remove the address from the scope, no need to reload everything.
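Putting that together, here is a hedged AngularJS 1.x sketch; the URLs and field handling are assumptions, but the key points are copying the server-assigned id back onto the scope object after a POST and sending only the changed fields with PATCH, so the list never has to be re-fetched:

```js
angular.module('app', []).controller('AddressCtrl', function ($scope, $http) {
  $scope.addresses = [];

  $http.get('/api/addresses').then(function (res) {
    $scope.addresses = res.data;                 // initial list load
  });

  $scope.create = function (address) {
    $http.post('/api/addresses', address).then(function (res) {
      angular.copy(res.data, address);           // response carries the id: now "saved"
    });
  };

  $scope.update = function (address, changedFields) {
    // send only the fields the user actually changed
    $http.patch('/api/addresses/' + address.id, changedFields);
  };

  $scope.remove = function (address) {
    var drop = function () {
      $scope.addresses.splice($scope.addresses.indexOf(address), 1);
    };
    if (address.id) {
      $http.delete('/api/addresses/' + address.id).then(drop);
    } else {
      drop();                                    // never persisted server-side, just drop it
    }
  };
});
```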
When it comes to "data integrity", i.e. if other addresses have been created since you made the original data request, you'll have to handle this on your own... Ideally similarly to how Stack Overflow or GitHub do it, which is to hint in the UI that there have been server-side changes and the user should click to refresh. How to detect the need to refresh is up to you, but you can keep it simple by polling at intervals, or go all out with WebSockets/Server-Sent Events and actually push changes to the browser.
The best way to create UIs that persist to the server is a complicated topic and there are a lot of best practices. Angular will support whatever you want, but you need coordination on the server to do it.

Parse.com. Execute backend code before response

I need to know the relative position of an object in a list. Let's say I need to know the position of a certain wine among all wines added to the database, based on the votes received from users. The app should be able to receive the ranking position as an object property when retrieving a "wine" class object.
This should be easy to do in the backend side but I've seen Cloud Code and it seems it only is able to execute code before or after saving or deleting, not before reading and giving response.
Any way to do this task? Any workaround?
Thanks.
I think you would have to write a Cloud function to perform this calculation for a particular wine.
https://www.parse.com/docs/cloud_code_guide#functions
This would be a function you would call manually. You would have to provide the "wine" object or objectId as a parameter and then have your cloud function return the value you need. Keep in mind there are limitations on cloud functions; read the documentation about time limits. You also don't want to make too many API calls every time you run this. It sounds like your computation could be fairly heavy if your dataset is large and you aren't caching at least some of the information.
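For what it's worth, a sketch of such a function using the classic Parse Cloud Code API might look like this; the class and field names ("Wine", "votes") and the function name are assumptions taken from the example in the question:

```js
Parse.Cloud.define("wineRank", function (request, response) {
  var wineQuery = new Parse.Query("Wine");
  wineQuery.get(request.params.wineId).then(function (wine) {
    // rank = number of wines with strictly more votes, plus one
    var rankQuery = new Parse.Query("Wine");
    rankQuery.greaterThan("votes", wine.get("votes"));
    return rankQuery.count();
  }).then(function (winesAbove) {
    response.success({ rank: winesAbove + 1 });
  }, function (error) {
    response.error(error);
  });
});

// Called from the app with: Parse.Cloud.run("wineRank", { wineId: "..." })
```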

Mule - Returning data from multiple flows as soon as it's ready

Hello there Stack Overflow.
My scenario is that I have a web page where a user can enter data (search terms, such as the name of a product on sale, a category, etc). On submission, this data is sent to the Mule ESB which then uses it to query two (or more) databases. One of these databases is rather quick and returns data fast, but the other is slow and can take a minute or longer to come back with information (if it doesn't timeout).
Currently, Mule is waiting to collect results from all flows before sending any information back to the web browser which made the query.
My problem is that this creates a very bad experience for the user - especially if the product that they're looking for is not in a database. They could be waiting quite a while before receiving anything back.
My current flow is here: http://i.stack.imgur.com/fyyI0.png
I have attempted to experiment with asynchronous flows but have never got them to send back data as and when it's ready.
Is there any way in Mule to return results from multiple flows as soon as the result is available? I would like to display the results for each query/flow as and when they come in, rather than waiting for all flows to terminate before sending data back to the user's browser.
I think the best option for your use case, if I understood it correctly, would be to use asynchronous processing and return the results through the Ajax transport: http://www.mulesoft.org/documentation/display/current/AJAX+Transport+Reference
This way you can return immediately to the client and publish results when you get them in the Ajax channel.
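On the browser side, the subscription might look roughly like this (a sketch only; the channel name is an assumption, and the exact client API is documented in the AJAX Transport Reference linked above):

```js
// mule.js is served by the AJAX connector itself, e.g.:
// <script type="text/javascript" src="mule-resource/js/mule.js"></script>

// Subscribe to the channel the Mule flows publish their results to; the
// callback fires once per flow, as soon as that flow's results are ready.
mule.subscribe("/searchResults", function (message) {
  renderResults(JSON.parse(message.data));
});

function renderResults(results) {
  var list = document.getElementById("results");
  results.forEach(function (item) {
    var li = document.createElement("li");
    li.textContent = item.name + " (" + item.source + ")"; // e.g. which DB answered
    list.appendChild(li);
  });
}
```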
