I am new to reactive programming and trying to learn by developing an Angular + RxJS app. The idea of the app is to mimic GitHub features like managing repos, managing branches, etc. The app will have a left nav displaying all the repos associated with the logged-in user; clicking a repo will bring up all of its branches, and clicking a branch will bring up all the files under it. The user can add/edit/delete branches and add/edit/delete files in a branch (the usual use cases), so basically a lazy-loading concept.
My Data Model
Repository
-----UserDetails
-----Array[BranchArtifact]
-------------Array[Files] called currentElements
-------------Array[Files] called changedElements
-------------Array[Files] called conflictElements
Idea 1:
Have a BehaviorSubject<Array<Repository>> and a method1() that returns the connected Observable<Array<Repository>>, which can be subscribed to in the component that renders the repo-level tree navigation nodes (this component is responsible only for repo-level operations like add/delete repo).
Have a BehaviorSubject<Array<BranchArtifact>> and a method2() that returns the connected Observable<Array<BranchArtifact>>, which can be subscribed to in the component that renders the branch-level tree navigation nodes (this component is responsible only for branch-level operations like add/delete branch).
... and so on for each lazy loaded level
With this idea the nested object is flattened at the stream level, but on the UI side it is still indirectly managed in a nested fashion (each nested level has to tie back to its parent node).
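For concreteness, here is a minimal sketch of Idea 1 as an Angular service. The model stubs stand in for the data model above, and RepoApi is a hypothetical client for the lazy HTTP calls; only BehaviorSubject/Observable come from RxJS as described.

import { Injectable } from "@angular/core";
import { BehaviorSubject, Observable } from "rxjs";

// Stubs standing in for the data model above.
interface Repository { id: string; name: string; }
interface BranchArtifact { name: string; }

// Hypothetical API client performing the lazy HTTP calls (an assumption).
export abstract class RepoApi {
  abstract fetchBranches(repoId: string): Observable<BranchArtifact[]>;
}

@Injectable({ providedIn: "root" })
export class RepoStateService {
  private repos$ = new BehaviorSubject<Array<Repository>>([]);
  private branches$ = new BehaviorSubject<Array<BranchArtifact>>([]);

  constructor(private api: RepoApi) {}

  // method1(): subscribed by the repo-level tree component.
  getRepos(): Observable<Array<Repository>> {
    return this.repos$.asObservable();
  }

  // method2(): subscribed by the branch-level tree component.
  getBranches(): Observable<Array<BranchArtifact>> {
    return this.branches$.asObservable();
  }

  // Clicking a repo node lazily loads its branches into the branch-level stream.
  selectRepo(repo: Repository): void {
    this.api.fetchBranches(repo.id).subscribe(b => this.branches$.next(b));
  }
}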
Idea 2:
Have one BehaviorSubject<Array<Repository>> / Observable<Array<Repository>>, and on every click on a tree node fetch the desired sub-level data, update the subject with the returned value, and let the subscribers infer the updated data (the stream always emits the full Repository array) and manage the UI accordingly (maybe emit a change type to ease the subscribers' work). This feels more like an imperative programming style: in this design the UI logic has to navigate to the correct node for every value emitted on the stream.
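And a minimal sketch of Idea 2, with the whole tree travelling on one stream together with a change-type hint (the model stubs and names are again assumptions):

import { BehaviorSubject } from "rxjs";

interface BranchArtifact { name: string; }
interface Repository { id: string; name: string; branches?: BranchArtifact[]; }

// The whole tree is emitted on every change, tagged with what kind of change it was.
interface RepoTreeEmission {
  repos: Array<Repository>;
  changeType: "repo" | "branch" | "file";
}

const tree$ = new BehaviorSubject<RepoTreeEmission>({ repos: [], changeType: "repo" });

// Called once the lazily fetched branches for a clicked repo arrive; the subscriber
// then has to navigate to the right node, as described above.
function onBranchesLoaded(repoId: string, branches: BranchArtifact[]): void {
  const { repos } = tree$.getValue();
  const next = repos.map(r => (r.id === repoId ? { ...r, branches } : r));
  tree$.next({ repos: next, changeType: "branch" });
}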
I have gone through a lot of tutorials, videos and tech talks on this subject and couldn't find anything beyond a basic data-structure example. Any help is greatly appreciated.
Questions for the experts here:
Are these ideas any good?
Am I thinking reactively?
If I am totally off base here, please point me to something that will help me understand.
If there is something simpler, please suggest it.
Related
I have recently been building an application on top of Greg Young's EventStore as my persistence layer, and I have been pondering how big I should allow an event to get.
For example, I have a UK Address aggregate with the following fields:
UK_Address
-BuildingName
-Street
-Locality
-Town
-Postcode
Now I'm building the UI using React/Redux and was wondering: should I create a single FAT addressUpdated event containing all the above fields?
Or should I create an event for each of the different fields and batch them within the client until the Save event is fired? (buildingNameUpdated, streetUpdated, localityUpdated, ...)
I'm not sure the answer is as black and white as I have asked it; what I really would like to know is what conditions/constraints you could use to make the decision.
should I create an event for each of the different fields?
No. The representations of your events are part of the API -- so you want to use spellings that make sense at the level of the business, not at the level of the implementation.
Now I'm building the UI using React/Redux and was wondering: should I create a single FAT addressUpdated event containing all the above fields?
You don't need to constrain the data that you send to your UI to match that which is in the persistence store. The UI is just a cached representation of a read model; there's no reason that representation needs to have the same form as what is in your event store.
Consider the React model itself -- your code makes changes to the "in memory" representation of your data, and then the library computes the new DOM and replaces it, which in turn causes the browser to update its view, which in turn causes the pixels on the screen to change.
So taking a fat event from the store, and breaking it into field level events for the UI is fine. Taking multiple events from the store and aggregating them into a single message for the UI is also fine. Taking events from the event store and transforming them into a spelling that the UI will recognize is also fine.
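As a concrete illustration of that last point, here is a rough sketch of splitting a fat stored event into field-level messages for the UI; the event and message shapes are assumptions, not a prescribed format:

// Assumed shape of the fat event as persisted (illustrative only).
interface AddressUpdated {
  type: "AddressUpdated";
  buildingName: string;
  street: string;
  locality: string;
  town: string;
  postcode: string;
}

// Project the stored event into one field-level message per property for the UI.
function toFieldMessages(e: AddressUpdated): Array<{ field: string; value: string }> {
  const { type, ...fields } = e;
  return Object.entries(fields).map(([field, value]) => ({ field, value }));
}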
Do you have any comment regarding Arien's answer about keeping fields that need to be consistent together, so that regardless of when you snapshot the current state of the world, it would be in a valid state?
I don't believe that this makes sense, and I'm not sure if it is possible in general.
It doesn't make sense, because "valid state" is a write-model concern only; events are things that have happened, and it's too late to vote on whether they are valid or not. For instance, if you deploy a new model, with a new invariant, it still needs to respect the history of what happened before. So you can build a snapshot for that new model, but the snapshot may not be "valid". Too bad.
Given that, I don't think it makes sense to worry over whether each individual event in a commit leaves the snapshot in a valid state.
In particular, if a particular transaction involves multiple entities, it is very likely that the domain language will suggest an event for each entity (we "debit cash" and "credit accounts receivable"). The entities themselves, of course, are capable of changing independently of each other -- it's the aggregate that maintains the balance.
You have to bundle all the information together in one event when the data has to be mutually consistent.
Otherwise, when you update one field of an address at a time, a reader can observe an unwanted address.
This happens when the client has not yet processed all the events at read time, due to eventual consistency.
Example:
Change address (City=1, Street=1, Housenumber=1) to (City=2, Street=2, Housenumber=2)
When you do this with 3 events and you have processed only one of them at the time of reading, you could get the address (City=2, Street=1, Housenumber=1).
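A rough sketch of the two shapes being contrasted (the names are illustrative):

// One fat event: fields that must stay mutually consistent travel together.
interface AddressChanged {
  type: "AddressChanged";
  city: string;
  street: string;
  houseNumber: string;
}

// Per-field events: a reader that has processed only some of them can observe
// a mixed address such as (City=2, Street=1, Housenumber=1).
type AddressFieldEvent =
  | { type: "CityChanged"; city: string }
  | { type: "StreetChanged"; street: string }
  | { type: "HouseNumberChanged"; houseNumber: string };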
If puzzled, first try the solution that is easier to implement. I guess the "FAT" event will be easier: you will end up spending less time implementing/debugging/supporting it.
This is usually referred to as the YAGNI / KISS / Occam's Razor principle.
In theory (and I find it to be a good rule of thumb) your commands and events should reflect the intent of the user, staying true to DDD. You can find a good explanation of the pros and cons of event granularity here: https://medium.com/@hugo.oliveira.rocha/what-they-dont-tell-you-about-event-sourcing-6afc23c69e9a
In Redux, every change to the store triggers a notify on all connected components. This makes things very simple for the developer, but what if you have an application with N connected components, and N is very large?
Every change to the store, even if unrelated to a component, still runs shouldComponentUpdate with a simple === test on the reselected paths of the store. That's fast, right? Sure, maybe once. But N times, for every change? This fundamental design choice makes me question the true scalability of Redux.
As a further optimization, one can batch all notify calls using _.debounce. Even so, running N === tests for every store change, on top of other logic such as view logic, seems like a means to an end.
I'm working on a health & fitness social mobile-web hybrid application with millions of users and am transitioning from Backbone to Redux. In this application, a user is presented with a swipeable interface that allows them to navigate between different stacks of views, similar to Snapchat, except each stack has infinite depth. In the most popular type of view, an endless scroller efficiently handles the loading, rendering, attaching, and detaching of feed items, like a post. For an engaged user, it is not uncommon to scroll through hundreds or thousands of posts, then enter a user's feed, then another user's feed, etc. Even with heavy optimization, the number of connected components can get very large.
Now on the other hand, Backbone's design allows every view to listen precisely to the models that affect it, reducing N to a constant.
Am I missing something, or is Redux fundamentally flawed for a large app?
This is not a problem inherent to Redux IMHO.
By the way, instead of trying to render 100k components at the same time, you should fake it with a lib like react-infinite or something similar, and only render the visible (or nearly visible) items of your list. Even if you manage to render and update a 100k-item list, it's still not performant and takes a lot of memory. Here is some advice from LinkedIn.
This answer assumes that you still want to render 100k updatable items in your DOM, and that you don't want 100k listeners (store.subscribe()) to be called on every single change.
2 schools
When developing a UI app in a functional way, you basically have two choices:
Always render from the very top
It works well but involves more boilerplate. It's not exactly the suggested Redux way, but it is achievable, with some drawbacks. Notice that even if you manage to have a single Redux connection, you still have to call shouldComponentUpdate in many places. If you have an infinite stack of views (like a recursion), you have to render all the intermediate views as virtual DOM as well, and shouldComponentUpdate will be called on many of them, so this is not really more efficient even with a single connect.
If you don't plan to use the React lifecycle methods but only pure render functions, then you should probably consider other options that focus on exactly that job, like deku (which can be used with Redux).
In my own experience, doing so with React is not performant enough on older mobile devices (like my Nexus 4), particularly if you link text inputs to your atom state.
Connecting data to child components
This is what react-redux suggests by using connect. When the state changes and the change only concerns a deeper child, you render only that child; you don't have to render the top-level components every time (the context providers: redux/intl/custom...) nor the main app layout. You also avoid calling shouldComponentUpdate on other children because it's already baked into the listener. Calling a lot of very fast listeners is probably faster than rendering intermediate React components every time, and it also greatly reduces props-passing boilerplate, so for me it makes sense when used with React.
Also notice that identity comparison is very fast, and you can do a lot of comparisons easily on every change. Remember Angular's dirty checking: some people managed to build real apps with that, and identity comparison is much faster.
Understanding your problem
I'm not sure I understand your problem perfectly, but as I read it you have views with something like 100k items in them, and you wonder whether you should use connect with all those 100k items, because calling 100k listeners on every single change seems costly.
This problem seems inherent to doing functional programming with the UI: the list was updated, so you have to re-render the list, but unfortunately it is a very long list and that seems inefficient. With Backbone you could hack something to render only the child. Even if you render that child with React, you would trigger the rendering in an imperative way instead of just declaring "when the list changes, re-render it".
Solving your problem
Obviously connecting the 100k list items seems convenient but is not performant, because it means calling 100k react-redux listeners, even if they are fast.
Now if you connect the big list of 100k items instead of each item individually, you only call a single react-redux listener, and then have to render that list in an efficient way.
Naive solution
Iterating over the 100k items to render them leads to 99,999 items returning false in shouldComponentUpdate and a single one re-rendering:
list.map(item => this.renderItem(item))
Performant solution 1: custom connect + store enhancer
The connect method of React-Redux is just a Higher-Order Component (HOC) that injects the data into the wrapped component. To do so, it registers a store.subscribe(...) listener for every connected component.
If you want to connect 100k items of a single list, it is a critical path of your app that is worth optimizing. Instead of using the default connect you could build your own one.
Store enhancer
Expose an additional method store.subscribeItem(itemId,listener)
Wrap dispatch so that whenever an action related to an item is dispatched, you call the registered listener(s) of that item.
A good source of inspiration for this implementation can be redux-batched-subscribe.
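Here is a rough sketch of such an enhancer; it assumes that item-related actions expose their id at action.payload.itemId, which is an assumption about your action shape:

import { createStore, AnyAction, Reducer } from "redux";

// Store enhancer adding store.subscribeItem(itemId, listener): only the listeners
// registered for the dispatched item's id are notified.
const withItemSubscriptions =
  (baseCreateStore: typeof createStore) =>
  (reducer: Reducer, preloadedState?: any) => {
    const store = baseCreateStore(reducer, preloadedState);
    const listeners = new Map<string, Set<() => void>>();

    const subscribeItem = (itemId: string, listener: () => void) => {
      if (!listeners.has(itemId)) listeners.set(itemId, new Set());
      listeners.get(itemId)!.add(listener);
      return () => { listeners.get(itemId)!.delete(listener); }; // unsubscribe
    };

    const dispatch = (action: AnyAction) => {
      const result = store.dispatch(action);
      const itemId = action.payload && action.payload.itemId; // assumed action shape
      if (itemId != null) listeners.get(itemId)?.forEach(l => l());
      return result;
    };

    return { ...store, dispatch, subscribeItem };
  };

// Usage: const store = createStore(rootReducer, withItemSubscriptions);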
Custom connect
Create a Higher-Order component with an API like:
Item = connectItem(Item)
The HOC can expect an itemId property. It can use the Redux enhanced store from the React context and then register its listener: store.subscribeItem(itemId, callback). The source code of the original connect can serve as a base inspiration.
The HOC will only trigger a re-render when its item changes.
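And a matching sketch of the HOC; for brevity it takes the enhanced store as a prop rather than reading it from the React context, and it assumes state shaped like { items: { [id]: item } }:

import React from "react";

function connectItem(Wrapped: React.ComponentType<any>) {
  return class ConnectedItem extends React.Component<{ store: any; itemId: string }> {
    private unsubscribe?: () => void;

    componentDidMount() {
      const { store, itemId } = this.props;
      // Re-render only when a dispatch touches this particular item.
      this.unsubscribe = store.subscribeItem(itemId, () => this.forceUpdate());
    }

    componentWillUnmount() {
      if (this.unsubscribe) this.unsubscribe();
    }

    render() {
      const { store, itemId } = this.props;
      const item = store.getState().items[itemId]; // assumed state shape
      return <Wrapped {...this.props} item={item} />;
    }
  };
}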
Related answer: https://stackoverflow.com/a/34991164/82609
Related react-redux issue: https://github.com/rackt/react-redux/issues/269
Performant solution 2: listening for events inside child components
You can also listen to Redux actions directly in components, using redux-dispatch-subscribe or something similar, so that after the first list render you listen for updates directly in the item component and override the original data passed down by the parent list.
import React, { Component } from "react";

class MyItemComponent extends Component {
  state = {
    itemUpdated: undefined, // Will store the locally updated item once an action arrives
  };

  componentDidMount() {
    // addDispatchListener is provided by the redux-dispatch-subscribe store extension.
    this.unsubscribe = this.props.store.addDispatchListener(action => {
      const isItemUpdate =
        action.type === "MY_ITEM_UPDATED" &&
        action.payload.item.id === this.props.itemId;
      if (isItemUpdate) {
        this.setState({ itemUpdated: action.payload.item });
      }
    });
  }

  componentWillUnmount() {
    this.unsubscribe();
  }

  render() {
    // Initially use the data provided by the parent; once it's updated by some
    // event, use the updated data instead.
    const item = this.state.itemUpdated || this.props.item;
    return (
      <div>
        {/* render the item here */}
      </div>
    );
  }
}
In this case redux-dispatch-subscribe may not be very performant, as you would still create 100k subscriptions. You'd rather build your own optimized middleware, similar to redux-dispatch-subscribe, with an API like store.listenForItemChanges(itemId), storing the item listeners in a map for fast lookup of the listeners to run.
Performant solution 3: vector tries
A more performant approach would be to use a persistent data structure such as a vector trie:
If you represent your 100k-item list as a trie, each intermediate node can short-circuit the rendering sooner, which avoids a lot of shouldComponentUpdate calls in the children.
This technique can be used with ImmutableJS; you can find some experiments I did with it here: React performance: rendering big list with PureRenderMixin.
It has drawbacks, however: libs like ImmutableJS do not yet expose public/stable APIs to do this (issue), and my solution pollutes the DOM with some useless intermediate <span> nodes (issue).
Here is a JsFiddle that demonstrates how an ImmutableJS list of 100k items can be rendered efficiently. The initial rendering is quite long (but I guess you don't initialize your app with 100k items!), but afterwards you can see that each update only leads to a small number of shouldComponentUpdate calls. In my example I only update the first item every second, and even though the list has 100k items, it only requires something like 110 calls to shouldComponentUpdate, which is much more acceptable! :)
Edit: it seems ImmutableJS is not so good at preserving its immutable structure on some operations, like inserting/deleting items at a random index. Here is a JsFiddle that demonstrates the performance you can expect depending on the operation on the list. Surprisingly, if you want to append many items at the end of a large list, calling list.push(value) many times seems to preserve the tree structure much better than calling list.concat(values).
By the way, it is documented that the List is efficient when modifying the edges. I don't think the poor performance of adding/removing at a given index is related to my technique, but rather to the underlying ImmutableJS List implementation.
Lists implement Deque, with efficient addition and removal from both the end (push, pop) and beginning (unshift, shift).
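For reference, a small sketch of the two append strategies compared above, assuming the immutable package:

import { List } from "immutable";

const base = List(Array.from({ length: 100000 }, (_, i) => i));
const extra = Array.from({ length: 1000 }, (_, i) => 100000 + i);

// One push per value: in the experiments above this preserved more of the trie.
const pushed = extra.reduce((list, v) => list.push(v), base);

// A single concat of the whole batch.
const concatenated = base.concat(extra);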
This may be a more general answer than you're looking for, but broadly speaking:
The recommendation from the Redux docs is to connect React components fairly high in the component hierarchy (see this section). This keeps the number of connections manageable, and you can then just pass updated props into the child components.
Part of the power and scalability of React comes from avoiding rendering invisible components. For example, instead of setting an invisible class on a DOM element, in React we just don't render the component at all. Re-rendering components that haven't changed isn't a problem either, since the virtual DOM diffing process optimizes the low-level DOM interactions.
I have a React app where I'm using alt for the Flux architecture side of things.
I have a situation where I have two stores which are fed by ajax calls in their corresponding actions.
The alt getting-started page on data dependencies mentions dependencies between stores using waitFor (http://alt.js.org/guide/wait-for/), but I don't see a way to use that approach when one of my store actions depends on another store action (both of which are async).
If I were doing this inside a single action handler, I might return or chain some promises, but I'm not sure how to implement this across action handlers. Has anyone achieved this, or am I going about my usage of ajax in React the wrong way?
EDIT: More detail.
In my example I have a list of nodes defined in a local JSON config file; my node-store makes an ajax request to get the node detail.
Once that completes, a different component (with a different action handler and store) wants to use the node collection to make ajax queries to the different endpoints a node may expose.
The nodes are re-used across many different components, so I don't want to roll their functionality into several different stores/action handlers if possible.
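What I have in mind is roughly the following sketch; it is not alt's literal API, and the action objects and node shape are hypothetical stand-ins:

// Hypothetical stand-ins for the two action handlers and the node model.
interface NodeDetail { id: string; endpoints: string[]; }
declare const NodeActions: { received(nodes: NodeDetail[]): void };
declare const EndpointActions: { received(data: unknown): void };

// Cache the in-flight call so every dependent action chains off the same request.
let nodesPromise: Promise<NodeDetail[]> | null = null;

function loadNodes(): Promise<NodeDetail[]> {
  if (!nodesPromise) {
    nodesPromise = fetch("/api/nodes")
      .then(r => r.json())
      .then((nodes: NodeDetail[]) => {
        NodeActions.received(nodes); // feeds the node-store
        return nodes;
      });
  }
  return nodesPromise;
}

// The second store's action depends on the node call without touching waitFor.
function loadEndpointData(nodeId: string): Promise<void> {
  return loadNodes()
    .then(nodes => nodes.find(n => n.id === nodeId))
    .then(node => fetch(node!.endpoints[0]))
    .then(r => r.json())
    .then(data => EndpointActions.received(data)); // feeds the endpoint store
}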
I'm trying to figure out the best way to handle a quite common situation in medium-complexity apps using the Flux architecture: how to retrieve data from the server when the models that compose the data have dependencies between them. For example:
A shop web app has the following models:
Carts (the user can have multiple carts)
Vendors
Products
For each of these models there is an associated Store (CartsStore, VendorsStore, ProductsStore).
Assuming there are too many products and vendors to keep them always loaded, my problem comes when I want to show the list of carts.
I have a hierarchy of React.js components:
CartList.jsx
Cart.jsx
CartItem.jsx
The CartList component is the one that retrieves all the data from the Stores and creates the list of Cart components, passing each one its specific dependencies (Carts, Vendors, Products).
Now, if I knew beforehand which products and vendors I needed, I would just launch all three requests to the server and use waitFor in the Stores to sync the data if needed. The problem is that until I get the carts, I don't know which vendors or products I need to request from the server.
My current solution is to handle this in the CartList component: in getState I read the Carts, Vendors and Products from each of the Stores, and in _onChange I run the whole flow, roughly like this:
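(A hypothetical sketch; the store and action shapes are stand-ins for my CartsStore, VendorsStore and ProductsStore.)

// Hypothetical stand-ins for the stores and actions.
declare const CartsStore: { getAll(): Array<{ items: Array<{ vendorId: string; productId: string }> }> };
declare const VendorsStore: { has(id: string): boolean };
declare const ProductsStore: { has(id: string): boolean };
declare const VendorActions: { fetch(ids: string[]): void };
declare const ProductActions: { fetch(ids: string[]): void };

function onChange(updateComponentState: () => void): void {
  const carts = CartsStore.getAll();

  // The dependencies are only known once the carts have arrived.
  const vendorIds = carts.flatMap(c => c.items.map(i => i.vendorId));
  const productIds = carts.flatMap(c => c.items.map(i => i.productId));

  const missingVendors = vendorIds.filter(id => !VendorsStore.has(id));
  const missingProducts = productIds.filter(id => !ProductsStore.has(id));

  // Defer the follow-up actions to escape the current dispatch; firing them
  // synchronously here raises "Cannot dispatch in the middle of a dispatch."
  if (missingVendors.length) setTimeout(() => VendorActions.fetch(missingVendors), 0);
  if (missingProducts.length) setTimeout(() => ProductActions.fetch(missingProducts), 0);

  updateComponentState();
}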
This works for now, but there are a few things I don't like:
1) The flow seems a bit brittle to me, especially because the component is listening to 3 stores but there is only one entry point for the "something has changed in the data" event, so I'm not able to distinguish what exactly changed and react accordingly.
2) When the component triggers one of the nested dependencies, it cannot create any action, because it is inside the _onChange method, which the dispatcher considers to still be handling the previous action. Flux doesn't like that and throws "Cannot dispatch in the middle of a dispatch.", which means I cannot trigger any action until the whole process is finished.
3) Because of the single entry point, it is quite tricky to react to errors.
So, an alternative solution I'm considering is to move the "model composition" logic into the API call: have a wrapper model (CartList) that contains all 3 models, and store it in a Store that is only notified when the whole object is assembled. The problem with that is reacting to changes in one of the sub-models coming from outside.
Has anyone figured out a nice way to handle data composition situations?
Not sure if it's possible in your application, or the right way, but I had a similar scenario and we ended up doing a pseudo-implementation of Relay/GraphQL that basically gives you the whole tree on each request. If there's lots of data it can be hard, but we just figured out the dependencies etc. on the server side and returned them in a nice hierarchical format, so the React components had everything they needed up to the level where the call came from.
Like I said, depending on the details this might not be feasible, but we found it a lot easier to sort out these dependencies server-side, with stuff like SQL/Java available, rather than, as you mentioned, making lots of async calls and messing with the stores.
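For illustration, the kind of hierarchical shape such an endpoint might return (the field names are assumptions):

interface CartListResponse {
  carts: Array<{
    id: string;
    items: Array<{
      quantity: number;
      product: { id: string; name: string; price: number };
      vendor: { id: string; name: string };
    }>;
  }>;
}

With a response like this, CartList can hand each Cart its already-resolved vendors and products instead of coordinating three stores client-side.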
I'm building a fairly complex EmberJS application and tying it to a backend of APIs.
The API calls are not usually tied to any particular model, but may return objects of various types in different sections of the response; e.g. a call to the Events API would return events, but also the media assets and individuals involved in those events.
I've just started with the project, and I'd like to get some expert guidance on how best to separate concerns to have a clean maintainable code base.
The way I am approaching this is:
Models: essentially handle records with their fields, and other computed properties. However, models are not responsible for making requests.
e.g. Individual, Event, Picture, Post etc.
Stores: They are essentially caches. For example, an eventStore would store all events received from the server so far (possibly from different requests) in an array, and also in a hash of events indexed by id.
e.g. individualStore, eventStore etc.
Controllers: They tie to a set of related API calls; e.g. an eventsController would be responsible for fetching events or a particular event, or creating a new event, etc. They 'route' the response to different stores for later retrieval. They don't keep the response once it has been sent to the stores.
e.g. eventsController, userSearchController etc.
Views: They are tied to a particular view. In general, my application may have several views in different places, e.g. a latestEventsView on the Dashboard in addition to a separate events page.
Templates: are what they are.
Quite often, my templates need to be bound directly to the stores (e.g. a peopleView wants to list all the individuals in the individualStore, sorted in some order).
And sometimes they bind to a computed property:
alivePeople: function () { ... }.property('App.individualStore.content.@each'),
The various filtering and sorting options 'chosen' in the view should yield different lists from the store. You can see my recent question: what is the right emberjs way to switch between various filtering options?
Who should do this filtering, the views themselves or the stores?
Is this kind of binding across layers okay, or a code smell? Is the separation of concerns good, or am I missing something? Shouldn't controllers be doing something more here? Should my views directly bind to stores?
Any particular special case of MVC more suited to my needs?
Update 17 April 2012
My research goes on, particularly from http://vimeo.com/user7276077/videos and http://jzajpt.github.com/2012/01/17/emberjs-app-architecture.html and http://jzajpt.github.com/2012/01/24/emberjs-app-architecture-data.html
Some issues with my design that I've figured out are:
controllers making requests (stores or models or something else should do it, not controllers)
statecharts are missing -- they are important for view-controller interactions (after some time you realize your interactions are no longer simple)
This is a good example of state charts in action: https://github.com/DominikGuzei/ember-routing-statechart-example
UPDATE 9th JANUARY 2013
Yes, it's been a long time, but this question has lately been getting lots of views, which is why I'd like to edit it so that people get an up-to-date picture.
Ember's landscape has changed a lot since this question was framed, and the new guides are much improved. EmberJS has settled on conventions (like Rails), and the MVC is much better defined now.
Anybody still confused should read all the guides, and watch some videos:
Seattle Ember.js Meetup
At the moment, I'm upgrading my application to Ember.js 1.0.0-pre2.
You should think of your application in terms of states. Have a look at this
Initially, only a route and a template are required to describe something and finally display it in the browser; that's what the new API of EmberJS tries to enforce. As your requirements get more elaborate you can throw in a view, a controller or an object, each of which answers a specific need.
Consider a view if you need to handle any browser events or wrap any 3rd-party JavaScript lib you're using for animation, styling, etc.
Consider an object if you need to capture domain-specific information, most likely mimicking backend information.
A controller is merely a proxy for the domain object and may encapsulate logic that doesn't necessarily pertain to the object.
That's all there is to it. If you learn how to design your application in terms of states, the rest will fall into place, provided you're using the latest API, which enforces the rules I mentioned previously.
Since the release of Ember 1.0.0-pre4 with the new router implementation I've seen two good references describing a standardised EmberJS app structure.
Those of you familiar with Rails would find it fairly familiar.
https://github.com/trek/ember-todos-with-build-tools-tests-and-other-modern-conveniences
http://reefpoints.dockyard.com/ember/2013/01/07/building-an-ember-app-with-rails-api-part-1.html
The ember-rails project at https://github.com/emberjs/ember-rails includes a Rails generator for creating an EmberJS application directory structure that is essentially the same as the structure described in the two links above.
The EmberJS guides also now describe the new routing structure. http://emberjs.com/guides/
UPDATE 21/08/2013
If you are using Rails then the ember-rails gem is great. I've used it with a lot of success.
There are two efforts within the Ember community to provide a standardised Ember application layout. Apparently they are going to be merged, but for now check out:
https://github.com/rpflorence/ember-tools
https://github.com/stefanpenner/ember-app-kit
See also http://addyosmani.com/largescalejavascript/ -- it is not about EmberJS in particular, but it's a great article that gives you an idea of how to write large-scale JavaScript apps.