How to avoid action chains - reactjs-flux

I'm trying to understand the Flux pattern.
I believe that in any good design the app should consist of relatively independent and universal (and thus reusable) components glued together by specific application logic.
In Flux there are domain-specific Stores encapsulating data and domain logic. These could possibly be reused in another application for the same domain.
I assume there should also be application-specific Store(s) holding app state and logic. This is the glue.
Now I'm trying to apply this to an imaginary "GPS Tracker" app:
...
When a user clicks the [Stop Tracking] button, the corresponding ViewController raises STOP_CLICK.
AppState.on(STOP_CLICK):
    dispatch(STOP_GEOLOCATION)
    dispatch(STOP_TRACKING)
GeolocationService.on(STOP_GEOLOCATION):
    stopGPS(); this.on = false; emit('change')
TrackStore.on(STOP_TRACKING):
    saveTrack(); calcStatistics(); this.tracking = false; emit('change')
    dispatch(START_UPLOAD)
So, I've got an event snowball.
It is said that in Flux one Action should not raise another.
But I do not understand how this could be done.
I think user actions can't go directly to domain Stores as these should be UI-agnostic.
Rather, AppState (or wherever the app logic lives) should translate user actions into domain actions.
How to redesign this the Flux way?
Where should application logic go?
Is it correct to try to keep domain Stores independent of the app logic?
Where is the place for "services"?
Thank you.

All of the application logic should live in the stores. They decide how they should respond to a particular action, if at all.
Stores have no setters. The only way into the stores is via a dispatched action, through the callback the store registered with the dispatcher.
Actions are not setters. Try not to think of them as such. Actions should simply report on something that happened in the real world: the user interacted with the UI in a certain way, the server responded in a certain way, etc.
This looks a lot like setter-thinking to me:
dispatch(STOP_GEOLOCATION)
dispatch(STOP_TRACKING)
Instead, dispatch the thing that actually happened: STOP_TRACKING_BUTTON_CLICKED (or TRACKING_STOPPED, if you want to be UI-agnostic). And then let the stores figure out what to do about it. All the stores will receive that action, and they can all respond to it, if needed. The code you have responding to two different actions should be responding to the same action.
Often, when we find that we want dispatch within a dispatch, we simply need to back up to the original thing that happened and make the entire application respond to that.
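By way of illustration, here is a minimal sketch of that single-action shape using the classic flux Dispatcher. The store names and the stopGPS/saveTrack/calcStatistics helpers are borrowed from the question's pseudocode; everything else is an assumption, not a prescribed implementation.
// Sketch: both stores respond to the one thing that happened.
var Dispatcher = require('flux').Dispatcher;
var dispatcher = new Dispatcher();

// Stand-in bodies for the question's helpers.
function stopGPS() { /* stop the device GPS */ }
function saveTrack() { /* persist the recorded track */ }
function calcStatistics() { /* compute distance, duration, etc. */ }

var GeolocationService = {
  on: true,
  dispatchToken: dispatcher.register(function (action) {
    if (action.type === 'TRACKING_STOPPED') {
      stopGPS();
      GeolocationService.on = false;
      // emit('change') would go here
    }
  })
};

var TrackStore = {
  tracking: true,
  dispatchToken: dispatcher.register(function (action) {
    if (action.type === 'TRACKING_STOPPED') {
      saveTrack();
      calcStatistics();
      TrackStore.tracking = false;
      // emit('change') would go here
    }
  })
};

// The view controller reports the one thing that happened; each store
// decides for itself how to respond.
dispatcher.dispatch({ type: 'TRACKING_STOPPED' });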

Related

CQRS+ES: Client log as event

I'm developing a small CQRS+ES framework and building applications with it. In my system, I need to log some client actions and use them for analytics and statistics, and maybe in the future do something in the domain with them. For example, a client (on the web) downloads some resource(s), and I need to save the date, time, type (download, partial, ...), region or country (maybe from the IP), etc. After that, in some view, the client can see the download count or some complex report. I'm not sure how to implement this feature.
The first solution creates an analytics context and some aggregate; on each client action I send a command like IncreaseDownloadCounter(resourced), handle the command, raise domain events, and update the view. But in this scenario the download has already occurred before I send the command, so it is not really a command, and on the other side version conflicts increase.
The second solution is to raise an event from the client side and update the view model based on it. But with this kind of handling, my event is not stored in the event store, because it is not raised by a command and never changes any domain context. And if I do store it in the event store, there is no aggregate to handle it after fetching it for some other use.
The third solution is to raise an event from the client side and store it in another database, maybe with a special table for each type of event. But handled this way, I have multiple event stores with different schemas, which makes it difficult to recreate view models and trace events for recreating context states. So in the future, if I add some domain that uses this type of event, the events are difficult to use.
What is the best approach and solution for this scenario?
"The first solution creates an analytics context and some aggregate"
Unquestionably the wrong answer; the event has already happened, so it is too late for the domain model to complain.
What you have is a stream of events. Putting them in the same event store that you use for your aggregate event streams is fine. Putting them in a separate store is also fine. So you are going to need some other constraint to make a good choice.
Typically, reads vastly outnumber writes, so one concern might be that these events are going to saturate the domain store. That might push you towards storing these events separately from your data model (prior art: we typically keep the business data in our persistent book of record, but the sequence of HTTP requests received by the server is typically written instead to a log...)
If you are supporting an operational view, push on the requirement that the state be recovered after a restart. You might be able to get by with building your view off of an in-memory model of the event counts, and use something more practical for the representations of the events.
Thanks for your complete answer. So I should create something like the ES schema without some fields (aggregate name or type, version, etc.), collect client events in that repository, and have some offline process read them and either update the read model or create a command to do something in the domain space.
Something like that, yes. If the view for the client doesn't actually require any validation by your model at all, then building the read model from the externally provided events is fine.
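Concretely, "something like that" might look like the sketch below. Every field name and the projection function are illustrative assumptions, not a prescribed schema; the point is that the record carries a type and timestamp but deliberately no aggregate id or version.
// A client event stored in a dedicated analytics stream. It already
// happened, so it is recorded as a fact; no aggregate validates it.
const downloadLogged = {
  type: 'ResourceDownloaded',   // past tense: a fact, not a command
  occurredAt: new Date().toISOString(),
  payload: {
    resourceId: 'report-42',    // hypothetical identifier
    downloadType: 'partial',    // download | partial | ...
    clientIp: '203.0.113.17'    // RFC 5737 documentation address
  }
  // deliberately absent: aggregateId, version
};

// An offline projector can fold the stream into a read model:
function projectDownloadCounts(counters, event) {
  if (event.type === 'ResourceDownloaded') {
    const id = event.payload.resourceId;
    counters[id] = (counters[id] || 0) + 1;
  }
  return counters;
}
// e.g. eventStream.reduce(projectDownloadCounts, {})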
Are you recommending saving some claim or authorization token of the user and the sending app, for validation in another process?
Maybe, maybe not. The token describes the authority of the event; our own event handler is the authority for the command(s) that is/are derived from the events. It's an interesting question that probably requires more context -- I'd suggest you open a new question on that point.

Is it a good practice to call action within another action (in flux)

I have an action as follows:
SomeActions.doAction1 = function () {
    // ...dispatch event "started"...
    // ...do some processing...
    FewActions.doAnotherAction(); // CAN WE DO THIS?
    // ...do something more...
    // ...dispatch event "completed"...
};
While the above works with no problems, I'm just wondering if it is valid according to the Flux pattern/standard, or if there is a better way.
Also, I guess calling Actions from Stores is a bad idea. Correct me if I am wrong.
Yes, calling an Action within another Action is a bad practice. Actions should be atomic; all changes in the Stores should be in response to a single action. They should describe one thing that happened in the real world: the user clicked on a button, the server responded with data, the screen refreshed, etc.
Most people get confused by Actions when they are thinking about them as imperative instructions (first do A, then do B) instead of descriptions of what happened and the starting point for reactive processes.
This is why I recommend to people that they name their Action types in the past tense: BUTTON_CLICKED. This reminds the programmer of the fundamentally externally-driven, descriptive nature of Actions.
Actions are like a newspaper that gets delivered to all the stores, describing what happened.
Calling Actions from Stores is almost always the wrong thing to do. I can only think of one exception: when the Store responds to the first Action by starting up an asynchronous process. When the async process completes, you want to fire off a second Action. This is the case with a XHR call to the server. But the better way is to put the XHR handling code into a Utils module. The store can then respond to the first Action by calling a method in the Utils module, and then the Utils module has the code to call the second Action when the server response comes back.
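As a rough sketch of that arrangement (the module, action, and endpoint names here are made up for illustration):
// WebAPIUtils-style module that owns the server-communication code.
var Dispatcher = require('flux').Dispatcher;
var dispatcher = new Dispatcher();

// Hypothetical action creators wrapping dispatcher.dispatch(...).
var ServerActions = {
  itemsReceived: function (items) {
    dispatcher.dispatch({ type: 'ITEMS_RECEIVED', items: items });
  },
  itemsRequestFailed: function (error) {
    dispatcher.dispatch({ type: 'ITEMS_REQUEST_FAILED', error: String(error) });
  }
};

var WebAPIUtils = {
  fetchItems: function (query) {
    fetch('/api/items?q=' + encodeURIComponent(query))
      .then(function (res) { return res.json(); })
      // The server response is a new thing that happened in the real
      // world, so it becomes its own action:
      .then(ServerActions.itemsReceived)
      .catch(ServerActions.itemsRequestFailed);
  }
};

// Inside the store's dispatcher callback, no nested dispatch is needed:
// case 'SEARCH_SUBMITTED':
//   WebAPIUtils.fetchItems(action.query);
//   break;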

How to handle data composition and retrieval with dependencies in Flux?

I'm trying to figure out the best way to handle a quite common situation in medium-complexity apps using the Flux architecture: how to retrieve data from the server when the models that compose the data have dependencies between them. For example:
A shop web app has the following models:
Carts (the user can have multiple carts)
Vendors
Products
For each of the models there is an associated Store (CartsStore, VendorsStore, ProductsStore).
Assuming there are too many products and vendors to keep them always loaded, my problem comes when I want to show the list of carts.
I have a hierarchy of React.js components:
CartList.jsx
Cart.jsx
CartItem.jsx
The CartList component is the one that retrieves all the data from the Stores and creates the list of Cart components, passing each of them its specific dependencies (Carts, Vendors, Products).
Now, if I knew beforehand which products and vendors I needed, I would just launch all three requests to the server and use waitFor in the Stores to sync the data if needed. The problem is that until I get the carts, I don't know which vendors or products I need to request from the server.
My current solution is to handle this in the CartList component: in getState I get the Carts, Vendors, and Products from each of the Stores, and in _onChange I run the whole flow.
This works for now, but there a few things I don't like:
1) The flow seems a bit brittle to me, especially because the component is listening to 3 stores but there is only one entry point to trigger the "something has changed in the data" event, so I'm not able to distinguish what exactly has changed and react accordingly.
2) When the component triggers some of the nested dependencies, it cannot create any action, because it is in the _onChange method, which is considered to be still handling the previous action. Flux doesn't like that and throws "Cannot dispatch in the middle of a dispatch.", which means that I cannot trigger any action until the whole process is finished.
3) Because of the single entry point, it is quite tricky to react to errors.
So, an alternative solution I'm thinking about is to put the "model composition" logic in the call to the API, having a wrapper model (CartList) that contains all 3 models needed, and storing that in a Store, which would only be notified when the whole object is assembled. The problem with that is how to react to changes in one of the sub-models coming from outside.
Has anyone figured out a nice way to handle data composition situations?
Not sure if it's possible in your application, or the right way, but I had a similar scenario and we ended up doing a pseudo-implementation of Relay/GraphQL that basically gives you the whole tree on each request. If there's lots of data it can be hard, but we just figured out the dependencies, etc., on the server side and then returned them in a nice hierarchical format, so the React components had everything they needed up to the level where the call came from.
Like I said, depending on details this might not be feasible, but we found it a lot easier to sort out these dependencies server-side with stuff like SQL/Java available rather than, like you mentioned, making lots of async calls and messing with the stores.
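For example, a single hypothetical endpoint could return the already-composed tree, so the client never has to chain requests; the shape below is purely illustrative.
// Possible response from GET /api/cart-list: the server resolves the
// cart -> product/vendor dependencies and embeds just the needed records.
const cartListResponse = {
  carts: [
    {
      id: 'cart-1',
      items: [
        {
          quantity: 2,
          product: { id: 'p-7', name: 'Espresso beans', price: 12.5 },
          vendor: { id: 'v-3', name: 'Roastery Ltd.' }
        }
      ]
    }
  ]
};

// A single action can then carry the assembled tree, and each store
// keeps its own slice (carts, vendors, products) of the same payload:
// dispatcher.dispatch({ type: 'CART_LIST_RECEIVED', data: cartListResponse });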

Model View Controller and Callbacks

I'm currently developing a multiplayer card game for Android, with libgdx as the game engine. My question is more general, though.
I'm not sure what the best practice is for handling callbacks in this architecture. My controller is a big state machine that checks inputs over and over while being called from the render() method of the game engine.
I have two main callbacks: user input from the GUI, and network callbacks from the Android Google Play Services part.
Currently these callback methods / input listeners just set member variables, which are checked by getter methods from the controller/state machine. For example, I call this from the controller over and over, check if it's != null, and proceed if it is:
@Override
public Boolean allPlayersConnected() {
    Boolean allConnected = null;
    if (startGame != null) {
        allConnected = startGame;
        startGame = null;
    }
    return allConnected;
}
The startGame "flag" is being set by callbacks from the Google Play Services API.
I don't know if this is good practice; it doesn't look like it.
I could call controller methods from the Google Play Services callbacks that set a controller member variable, and check this in each render loop, but that's just moving the variable.
I could also design the controller as an observer of those events, but then what am I going to do in the update method inside the controller that gets called when an event happens? I don't think I want to change state in there, even if I can access the current state. I'd be spreading state code all over the place with this, some in different parts of a huge update method and some in the actual state machine code. And just setting a member variable in the update method is quite similar to the thing I did above.
Another option would be to directly change controller state from the callback methods. That would be less code, fewer variables, and a little faster, but I think I'd destroy the MVC concept, because I'd take away control from the controller and let, for example, the GUI change the state of the controller.
Any input on this ?
Edit:
The more I think about it, the more I think a combination of the observer and command patterns is the way to go.
So I could indeed cut a big part of the current state machine and pack it into the observer's update() method. Instead of sending the commands through a big command enum, I could create command objects with the information available and pass them to the observer (the controller), where I check that the command is viable and call execute() with the information needed to execute it, e.g. the model interface.
First, I think whether your commands are enums or command objects is independent of the main problem here -- which is how to connect user and network input to state management.
The most common game architecture I've seen is an update loop that checks input, iterates the game simulation, and then renders a frame. In the MVC world, this structure just synchronizes those steps; you still have an encapsulated view and data model, with the controller (the game loop) serving as a bridge between those two worlds.
Input, whether from the local user or one over the net, is generally treated as a request to modify game state. That is, the controller (as the first part of its loop) reads in pending input messages and processes them, modifying state as it goes. This way, the code that changes state is in one place: that controller. You are right, spreading state-modification code throughout the app is a bad practice; basically, it's not MVC.
In other words, all of your callbacks should convert the input to commands and stick them into a queue. You don't want to synchronize the controller -- whose job it is to modify state -- with those input callbacks. You have no idea when input will occur relative to the game loop, so it's best to decouple them. Serializing input processing with game simulation should also make your logic simpler.
You have some choice in how to connect the callbacks to the controller; a shared queue (where one side writes into it and the other reads out from it) is a strong pattern and easy to make thread-safe.
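Here is one way that could look, sketched in JavaScript for consistency with the rest of this page; in Java/libgdx, a ConcurrentLinkedQueue drained at the top of render() plays the same role. All names are illustrative.
// Callbacks only enqueue commands; they never touch game state.
const inputQueue = [];

function onAllPlayersConnected() {   // Play Services callback
  inputQueue.push({ type: 'ALL_PLAYERS_CONNECTED' });
}

function onCardTapped(cardId) {      // GUI callback
  inputQueue.push({ type: 'CARD_PLAYED', cardId: cardId });
}

// The controller drains the queue at one defined point in its loop, so
// state changes happen in exactly one place, in a known order.
function update(dt) {
  let cmd;
  while ((cmd = inputQueue.shift()) !== undefined) {
    applyCommand(cmd);  // state-machine transition for this command
  }
  simulate(dt);         // advance the game simulation
  renderFrame();        // draw the current state
}

// Stand-in bodies so the sketch is self-contained.
function applyCommand(cmd) { /* validate and apply cmd to the model */ }
function simulate(dt) { /* per-frame game logic */ }
function renderFrame() { /* drawing */ }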

Best practice for combining requests with possible different return types

Background
I'm working on a web application utilizing AJAX to fetch content/data and what have you - nothing out of the ordinary.
On the server side, certain events can happen that the client-side JavaScript framework needs to be notified about, and vice versa. These events are not always related to the user's immediate actions. It is not an option to wait for the next page refresh to include them in the document, or to stick them in some hidden fields, because the user might never submit a form.
Right now it is designed in such a way that events to and from the server ride along with the user's requests. For instance, if the user clicks a 'view details' link, this fires a request to the server to fetch some HTML or JSON with details about the clicked item. Along with this request, or rather the response, a server-side (invoked) event will return with the content.
Question/issue 1:
I'm unsure how to control the queue of events going to the server. They can ride along with user-invoked events, but if those do not occur, the events will get lost. I imagine setting up a timer to send these events to the server in case the user does not perform some action. What do you think?
Question/issue 2:
With regard to the responses, some being requested as HTML and some as JSON, it is a bit tricky, as I would have to somehow wrap all this data to allow both formalized (and unrelated) events and perhaps HTML content, depending on the request, to return to the client. Any suggestions? Anything I should be aware of, for instance when returning HTML content wrapped in a JSON bundle?
Update:
Do you know of any framework that uses an approach like this that I can look at for inspiration (that is, a framework that wraps events/requests in a package along with data)?
I am tackling a similar problem to yours at the moment. On your first question, I was thinking of implementing some sort of timer on the client side that makes an asynchronous call for the content on expiry.
On your second question, I normally just return JSON representing the data I need, and then present it by manipulating the DOM. I prefer to keep things consistent.
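If you do end up mixing them, one possible envelope shape, purely as a sketch and not taken from any specific framework, is to let formalized events ride alongside either kind of payload:
// Every AJAX response carries an events array next to whatever the
// request actually asked for; the payload may be JSON data or HTML.
const response = {
  events: [
    { type: 'SESSION_EXPIRES_SOON', at: '2015-06-01T12:00:00Z' }
  ],
  payload: {
    kind: 'html',   // 'html' or 'json'
    html: '<div class="details">...</div>'
  }
};

// Client side: route events to a bus, hand the payload back to the caller.
const eventBus = { emit: function (type, e) { console.log('event:', type, e); } }; // stand-in bus

function handleResponse(res) {
  (res.events || []).forEach(function (e) { eventBus.emit(e.type, e); });
  return res.payload;
}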
As for best practices, I can't say for sure that what I am doing complies with any best practice, but it works for our present requirements.
You might also want to consider the performance impact of having multiple clients making asynchronous calls to your web server at regular intervals.
