The Redux docs on performance state that "connected list components should pass item IDs to their connected child list items (allowing the list items to look up their own data by ID)".
This makes sense to me. In this fashion the child items connect to the Redux store and find the item using the itemId that was passed in via props.
Since this involves a small computation, however, it is unclear to me whether a reselect selector should be used for memoization. I suppose the root of the confusion comes from connect and how state tree changes cause recomputations.
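For reference, here is a minimal sketch of the pattern in question, with a per-item memoized selector built with reselect (the state shape state.items.byId, the item.name field, and the component names are assumptions for illustration, not taken from the docs):

import React from "react";
import { createSelector } from "reselect";
import { connect } from "react-redux";

// The child item only receives an itemId from the parent list and looks up its own data.
const ListItem = ({ item }) => <li>{item.name}</li>;

// Selector factory: each connected item instance gets its own memoized selector,
// so the small lookup is only recomputed when its inputs change.
const makeSelectItemById = () =>
  createSelector(
    state => state.items.byId,
    (state, props) => props.itemId,
    (byId, itemId) => byId[itemId]
  );

const makeMapStateToProps = () => {
  const selectItemById = makeSelectItemById();
  return (state, props) => ({ item: selectItemById(state, props) });
};

export const ConnectedListItem = connect(makeMapStateToProps)(ListItem);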
When using RTK Query, you abstract away all the state management that comes with data fetching -- you call an endpoint and the documents are loaded into a variable, ready for use. Like so:
const {data: rangesInfo = []} = useGetRangesQuery(userId);
Let's say this rangesInfo variable contains a uniquely identifying ID, uuid, as well as a number, rangeValue, which specifies its position. This number can run from 0 to 100. For the sake of this example, let's imagine these ranges describe a user's food preferences. John is a 0 for sushi and a 100 for pizza. And as the end user clicks around the website, they can load other users' preferences, and so this set of ranges is constantly updating.
This all works fine -- you can call rangesInfo.map(range => <RangeComponent key={range.uuid} rangeValue={range.rangeValue}/>), and this will render a collection of child components, which all know how to display the UI of the actual HTML input[type=range].
But when using a range slider input in React, you must choose between either a controlled or an uncontrolled input. React's preference is for the input to be controlled by the state of its parent. In this case, the state of its parent is an RTK black box, and if you want to modify the cached data you must invalidate it, typically by triggering a mutation. This is RTK Query's term for a POST or PUT request that will affect data in your backend. The thing is that in Chrome, a range input's onChange event fires dozens of times a second, and it seems ridiculous to pummel your API with 40 requests when only the last one makes a difference.
That means we have to go with an uncontrolled component. The problem then becomes updating the display of the range when the props change. Because the props do change -- RTK is working fine -- but the props no longer have any bearing on the position of the range's value. (Remember, if you control an input's value via a prop, you're no longer describing an uncontrolled component!) If we could guarantee that the child components were remounted every time their props changed, we would be in the clear, but that wasn't my experience.
Even though those RangeComponents were given unique ids, and even though the docs suggest that this is sufficient, the new data was out of sync. When I loaded the page I had User 0's info, but when I clicked on User 1 I still saw User 0. When I clicked on User 2, User 1 popped in, and so forth.
My ultimately hacky solution was to attach a ref to the input range's DOM node, and then use a side effect to dictate its value, like so:
useEffect(() => { inputRef.current.value = props.rangeValue; });
This solved my consistency problem, but introduced a ton of jank to the UX -- when I set the input range to a new state, it flickers back to its original position briefly.
Is there a way to solve this issue while staying in the RTK Query paradigm?
The no-jank, no-ref-necessary solution: Be extra sure that your child component's key is unique, because RTK is going to rerender it multiple times, and the render that "sticks" may not have the updated props at that point.
My fix was to append the value of the range to the key. So now the code looks like:
rangesInfo.map(range => <RangeComponent key={range.uuid + range.rangeValue} rangeValue={range.rangeValue}/>)
By linking the props and the keys together, you're guaranteeing that React will remount the child component.
Hope this saves somebody some time!
In Redux, every change to the store triggers a notify on all connected components. This makes things very simple for the developer, but what if you have an application with N connected components, and N is very large?
Every change to the store, even if unrelated to the component, still runs a shouldComponentUpdate with a simple === test on the reselected paths of the store. That's fast, right? Sure, maybe once. But N times, for every change? This fundamental change in design makes me question the true scalability of Redux.
As a further optimization, one can batch all notify calls using _.debounce. Even so, having N === tests for every store change and handling other logic, for example view logic, seems like a means to an end.
I'm working on a health & fitness social mobile-web hybrid application with millions of users and am transitioning from Backbone to Redux. In this application, a user is presented with a swipeable interface that allows them to navigate between different stacks of views, similar to Snapchat, except each stack has infinite depth. In the most popular type of view, an endless scroller efficiently handles the loading, rendering, attaching, and detaching of feed items, like a post. For an engaged user, it is not uncommon to scroll through hundreds or thousands of posts, then enter a user's feed, then another user's feed, etc. Even with heavy optimization, the number of connected components can get very large.
Now on the other hand, Backbone's design allows every view to listen precisely to the models that affect it, reducing N to a constant.
Am I missing something, or is Redux fundamentally flawed for a large app?
This is not a problem inherent to Redux IMHO.
By the way, instead of trying to render 100k components at the same time, you should try to fake it with a lib like react-infinite or something similar, and only render the visible (or nearly visible) items of your list. Even if you succeed in rendering and updating a 100k-item list, it's still not performant and it takes a lot of memory. Here is some advice from LinkedIn.
This answer assumes that you still want to render 100k updatable items in your DOM, and that you don't want 100k listeners (store.subscribe()) to be called on every single change.
2 schools
When developing a UI app in a functional way, you basically have 2 choices:
Always render from the very top
It works well but involves more boilerplate. It's not exactly the suggested Redux way but it is achievable, with some drawbacks. Notice that even if you manage to have a single Redux connection, you still have to call a lot of shouldComponentUpdate in many places. If you have an infinite stack of views (like a recursion), you will have to render all the intermediate views as virtual DOM as well, and shouldComponentUpdate will be called on many of them. So this is not really more efficient, even if you have a single connect.
If you don't plan to use the React lifecycle methods but only pure render functions, then you should probably consider other similar options that focus only on that job, like deku (which can be used with Redux).
In my own experience, doing so with React is not performant enough on older mobile devices (like my Nexus 4), particularly if you link text inputs to your atom state.
Connecting data to child components
This is what react-redux suggests by using connect. So when the state changes and the change only concerns a deeper child, you only render that child and do not have to render the top-level components every time, like the context providers (redux/intl/custom...) or the main app layout. You also avoid calling shouldComponentUpdate on other children because it's already baked into the listener. Calling a lot of very fast listeners is probably faster than re-rendering intermediate React components every time, and it also removes a lot of props-passing boilerplate, so for me it makes sense when used with React.
Also notice that identity comparison is very fast and you can do a lot of them easily on every change. Remember Angular's dirty checking: some people did manage to build real apps with that! And identity comparison is much faster.
Understanding your problem
I'm not sure I understand your whole problem perfectly, but I gather that you have views with something like 100k items in them, and you wonder whether you should use connect with all those 100k items, because calling 100k listeners on every single change seems costly.
This problem seems inherent to the nature of doing functional programming with the UI: the list was updated, so you have to re-render the list, but unfortunately it is a very long list and re-rendering it all seems inefficient... With Backbone you could hack something together to only render the child. Even if you render that child with React, you would trigger the rendering in an imperative way instead of just declaring "when the list changes, re-render it".
Solving your problem
Obviously connecting the 100k list items seems convenient but is not performant, because it means calling 100k react-redux listeners, even if they are fast.
Now if you connect the big list of 100k items instead of each item individually, you only call a single react-redux listener, and then have to render that list in an efficient way.
Naive solution
Iterating over the 100k items to render them, leading to 99999 items returning false in shouldComponentUpdate and a single one re-rendering:
list.map(item => this.renderItem(item))
Performant solution 1: custom connect + store enhancer
The connect method of React-Redux is just a Higher-Order Component (HOC) that injects the data into the wrapped component. To do so, it registers a store.subscribe(...) listener for every connected component.
If you want to connect 100k items of a single list, it is a critical path of your app that is worth optimizing. Instead of using the default connect you could build your own one.
Store enhancer
Expose an additional method store.subscribeItem(itemId,listener)
Wrap dispatch so that whenever an action related to an item is dispatched, you call the registered listener(s) of that item.
A good source of inspiration for this implementation can be redux-batched-subscribe.
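A minimal sketch of what such an enhancer could look like (the subscribeItem name and the action.meta.itemId convention are assumptions for illustration):

function itemSubscriptionsEnhancer(createStore) {
  return (reducer, preloadedState) => {
    const store = createStore(reducer, preloadedState);
    const itemListeners = new Map(); // itemId -> Set of listeners

    function subscribeItem(itemId, listener) {
      if (!itemListeners.has(itemId)) itemListeners.set(itemId, new Set());
      itemListeners.get(itemId).add(listener);
      return () => itemListeners.get(itemId).delete(listener); // unsubscribe
    }

    function dispatch(action) {
      const result = store.dispatch(action);
      // Assumed convention: item-related actions carry the item id in action.meta.itemId
      const itemId = action.meta && action.meta.itemId;
      if (itemId != null && itemListeners.has(itemId)) {
        itemListeners.get(itemId).forEach(listener => listener());
      }
      return result;
    }

    return { ...store, dispatch, subscribeItem };
  };
}

You would then create the store with createStore(reducer, preloadedState, itemSubscriptionsEnhancer).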
Custom connect
Create a Higher-Order component with an API like:
Item = connectItem(Item)
The HOC can expect an itemId property. It can use the Redux enhanced store from the React context and then register its listener with store.subscribeItem(itemId, callback). The source code of the original connect can serve as a base for inspiration.
The HOC will only trigger a re-rendering if the item changes
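A rough sketch of such a connectItem HOC, assuming the enhanced store above is available on the (legacy) React context and that items live under an itemsById slice of the state (both are assumptions for illustration):

import React from "react";
import PropTypes from "prop-types";

function connectItem(WrappedItem) {
  return class ConnectedItem extends React.Component {
    static contextTypes = { store: PropTypes.object };

    componentDidMount() {
      // Only this item's listener fires when this item changes.
      this.unsubscribe = this.context.store.subscribeItem(
        this.props.itemId,
        () => this.forceUpdate()
      );
    }

    componentWillUnmount() {
      this.unsubscribe();
    }

    render() {
      const item = this.context.store.getState().itemsById[this.props.itemId];
      return <WrappedItem {...this.props} item={item} />;
    }
  };
}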
Related answer: https://stackoverflow.com/a/34991164/82609
Related react-redux issue: https://github.com/rackt/react-redux/issues/269
Performant solution 2: listening for events inside child components
It is also possible to listen to Redux actions directly in components, using redux-dispatch-subscribe or something similar, so that after the first list render you listen for updates directly in the item component and override the original data provided by the parent list.
class MyItemComponent extends Component {
  state = {
    itemUpdated: undefined, // Will store the locally updated item, once an update event arrives
  };

  componentDidMount() {
    this.unsubscribe = this.props.store.addDispatchListener(action => {
      const isItemUpdate =
        action.type === "MY_ITEM_UPDATED" &&
        action.payload.item.id === this.props.itemId;
      if (isItemUpdate) {
        this.setState({ itemUpdated: action.payload.item });
      }
    });
  }

  componentWillUnmount() {
    this.unsubscribe();
  }

  render() {
    // Initially use the data provided by the parent, but once it's updated by some event, use the updated data
    const item = this.state.itemUpdated || this.props.item;
    return (
      <div>
        {...}
      </div>
    );
  }
}
In this case redux-dispatch-subscribe may not be very performant, as you would still create 100k subscriptions. You would rather build your own optimized middleware, similar to redux-dispatch-subscribe, with an API like store.listenForItemChanges(itemId), storing the item listeners in a map for fast lookup of the correct listeners to run...
Performant solution 3: vector tries
A more performant approach would consider using a persistent data structure like a vector trie:
If you represent your 100k-item list as a trie, each intermediate node gets the chance to short-circuit the rendering sooner, which avoids a lot of shouldComponentUpdate calls in children.
This technique can be used with ImmutableJS, and you can find some experiments I did with it here: React performance: rendering big list with PureRenderMixin.
It has drawbacks, however, as libraries like ImmutableJS do not yet expose public/stable APIs to do that (issue), and my solution pollutes the DOM with some useless intermediate <span> nodes (issue).
Here is a JsFiddle that demonstrates how an ImmutableJS list of 100k items can be rendered efficiently. The initial rendering is quite long (but I guess you don't initialize your app with 100k items!), but afterwards you can notice that each update only leads to a small number of shouldComponentUpdate calls. In my example I only update the first item every second, and you can see that even though the list has 100k items, each update only requires something like 110 calls to shouldComponentUpdate, which is much more acceptable! :)
Edit: it seems ImmutableJS is not so great at preserving its immutable structure for some operations, like inserting/deleting items at a random index. Here is a JsFiddle that demonstrates the performance you can expect depending on the operation on the list. Surprisingly, if you want to append many items at the end of a large list, calling list.push(value) many times seems to preserve the tree structure much better than calling list.concat(values).
By the way, it is documented that the List is efficient when modifying the edges. I don't think this poor performance on adding/removing at a given index is related to my technique, but rather to the underlying ImmutableJS List implementation.
Lists implement Deque, with efficient addition and removal from both the end (push, pop) and beginning (unshift, shift).
This may be a more general answer than you're looking for, but broadly speaking:
The recommendation from the Redux docs is to connect React components fairly high in the component hierarchy (see this section). This keeps the number of connections manageable, and you can then just pass updated props down into the child components.
Part of the power and scalability of React comes from avoiding rendering of invisible components. For example, instead of setting an invisible class on a DOM element, in React we just don't render the component at all. Re-rendering components that haven't changed isn't much of a problem either, since the virtual DOM diffing process optimizes the low-level DOM interactions.
I have a React app where I'm using alt for the flux architecture side of things.
I have a situation where I have two stores which are fed by ajax calls in their corresponding actions.
Having read the alt getting started page on data dependencies it mentions dependencies between stores using waitFor - http://alt.js.org/guide/wait-for/ but I don't see a way to use this kind of approach if one of my store actions is dependent on another store action (both of which are async).
If I was doing this inside a single action handler, I might return or chain some promises but I'm not sure how to implement this across action handlers. Has anyone achieved this? or am I going about my usage of ajax in react the wrong way?
EDIT: More detail.
In my example I have a list of nodes defined in a local JSON config file; my node-store makes an AJAX request to get the node details.
Once it's complete, a different component (with a different action handler and store) wants to use the node collection to make an AJAX query to the different endpoints a node may expose.
The nodes are re-used across many different components so I don't want to roll their functionality into several different stores/action handlers if possible.
I am trying to implement an infinite scroll over many items that I get from the server, but I cannot find a proper way to do it while keeping to the Flux architecture design rules.
The idea is: on the first load, I get a full item list from the server (only IDs), then using AJAX I fetch 20 more items each time.
The list is kept in the store, along with the loaded items. The view listens for loaded items and renders them; when it reaches the scroll bottom it calls an action which should then fetch 20 more items, and so on.
The problem is: the action should know which items to fetch, but the list of unloaded items is in the store, so it has to read it from the store directly, which is a "don't do it" in Flux. The alternative is to handle all the logic in the stores, which also seems like a bad idea.
Can anyone think of a nice solution?
UPDATE: it is OK within unidirectional flow for a component to read directly from the store (see below).
Make your action explicitly say which items to fetch: "Give me items 21-40 please".
This fires a) an (async) AJAX call to get items 21-40 and b) a dispatch to the store.
The component knows a) which items it has already rendered, and b) which items the user wants to see next, so it can pass along the above action message without talking to the store again.
The store receives the request. The store knows it does not have the items yet. The component does not know yet.
The store emits change, and your component (assuming it is listening to store changes) gets the current state from the store. If the items aren't there yet, the store provides a loading state ("loading items 21-40" or similar). The component displays the loading state. (Or, if the loaded items are already fully in the store, it simply renders items 21-40.)
As soon as items 21-40 are delivered by the AJAX response, your store updates with the full items 21-40. (If they happened to be in the store already: no problem, no update.) The store emits another change. The component hears this, and re-renders.
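A minimal sketch of that "give me items 21-40" flow in a classic Flux setup (the action type names, the endpoint URL, and the onScrollBottom helper are assumptions for illustration):

import { Dispatcher } from "flux";

const dispatcher = new Dispatcher();

// Action creator: the component says exactly which range it wants.
function fetchItems(start, end) {
  // Tell the store right away that this range was requested (it can show a loading state).
  dispatcher.dispatch({ type: "ITEMS_REQUESTED", start, end });

  // Fire the async call; the result comes back as another action through the dispatcher.
  fetch(`/api/items?start=${start}&end=${end}`)
    .then(response => response.json())
    .then(items => dispatcher.dispatch({ type: "ITEMS_RECEIVED", start, end, items }));
}

// The component only uses what it already knows: how many items it has rendered so far.
function onScrollBottom(renderedCount) {
  fetchItems(renderedCount + 1, renderedCount + 20); // e.g. "give me items 21-40"
}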
ASIDE:
Unidirectional flow is for updates:
Component -> lower components -> actions (-> webAPI -> action) -> dispatcher -> stores -> components
In unidirectional flow the rules are:
1. Components are allowed to push data updates only to lower components (by passing new props, which trigger a re-render), not to higher components.
2. Components are allowed to maintain internal state, which they can pass on as props to children (see 1).
3. Components are allowed to push data updates or update requests also to the dispatcher (in "actions"). The dispatcher then forwards the updates to the stores and/or to some server via e.g. a webAPI.
4. Components are allowed to listen to store changes and pull/read data directly from the store.
5. Stores listen to the dispatcher and update themselves if they receive news from the dispatcher.
6. Stores may also listen to other stores, and read data from other stores to update themselves.
7. Stores emit change as soon as they have updated, so that any components listening can do something (typically read new data) (see 4).
8. WebAPI results from the server are "actions". They go through the dispatcher, which informs the relevant stores to update. (See 5.)
Unidirectional flow breaks if:
Component actively fetches/pulls data from a higher component - such data should be pushed by the higher component as props (see 1)
Component actively fetches data from a child - as the parent, the component should already have this data; if it is in the child's state, then the state is designed at too low a level
Component directly updates the store - should be done with an action through the dispatcher
And also breaks if (although some disagree):
Store directly updates another store - should be a pull instead of a push (see 6)
Store pushes an update through an action - only the webAPI (see 8) and components (see 3) are allowed to issue actions
Component directly does a webAPI request and handles the result in its state - should go through the dispatcher
Hi, wise folks of SO. This is an SOS.
I'm in deep trouble. In my web application there is an object (say it is a request for something). The user submits his/her request. After this it goes to the people who can approve/disapprove that request. During the period from submission to approval/disapproval, many actions can be taken on the request. I have to present the user with an actions panel (a collection of links) with which he/she can modify the state of the request.
Now, based on which stage of processing the request is in, some actions are not allowed. Also, if some action has already been taken, it excludes the possibility of other actions.
Overall it creates a pretty complex matrix of allowed/forbidden actions that my tiny head is not able to keep track of.
I've created some static classes/methods which return arrays of allowed actions based on the state of the request. There are about 20 states that a request can be in. Based on the state, I've taken care to remove/disable links for actions that are not possible in that state.
Now the problem is this: suppose the request is in state X.
If in the past action l has been taken on the request, we may not allow l again, or, based on this, some arbitrary actions m, n, o.
After writing all the methods to get the arrays of links for the 20 states, I have to filter the arrays based on the past history of actions (which is stored in a SQL db), which is a very, very big task.
Please suggest a pattern which is easier to implement and efficient. This is getting on my nerves.
As I understand it, you have a real-world workflow scenario. In this case I would:
1. Model the entire state as a single entity if possible (a single row with a fixed number of fields). I would not model this as a set of actions.
2. Model each action as some change in the row. It is quite obvious when the user enters some data, but I would also model each acceptance as either a boolean field or a state field, depending on whether the acceptance is done by independent departments or is a cascade of acceptances within a single department.
3. There may also be a situation where an acceptance is given for some particular parameter and the parameter may change in the future, requiring a new acceptance. In this case I would model such a scenario as two fields: one for the parameter value and one for the accepted value. I would decide whether an acceptance is still needed based on the difference between these two fields. This also allows for implementing thresholds.
4. Having the state modeled as a single row, I would implement independent predicates for action allowance.
I think that point 4 is the most important one. If you are able to implement independent predicates for enabling actions, then you will be able to modify them easily in the future. A small sketch of what that could look like follows below.
With points 1-3 properly implemented, you will also be able to easily implement acceptance revoking, which may be required, and in this case may make the overall code size smaller.
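A minimal sketch of points 1-4 (the field names, states, and actions are invented for illustration):

// The whole request state is one flat record (point 1).
const request = {
  status: "SUBMITTED",      // e.g. SUBMITTED, APPROVED, REJECTED, ...
  managerApproved: false,   // an acceptance modeled as a boolean field (point 2)
  requestedAmount: 500,
  approvedAmount: null,     // accepted value kept next to the requested one (point 3)
};

// Each action gets its own independent predicate over that record (point 4).
const actionPredicates = {
  approve: r => r.status === "SUBMITTED" && !r.managerApproved,
  reject:  r => r.status === "SUBMITTED",
  edit:    r => !r.managerApproved,
  reopen:  r => r.status === "REJECTED",
};

// The actions panel is then just a filter over the predicates.
const allowedActions = Object.keys(actionPredicates)
  .filter(name => actionPredicates[name](request));

console.log(allowedActions); // ["approve", "reject", "edit"]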
Sounds like a job for a state machine workflow, or a few giant nested switches (whichever you prefer).
The first thing that came to my mind: a state machine. Each state is some kind of object. All states have some method processRequest that transitions execution into the next state.
The second thing that came to my mind: these states have to be organized like a tree or graph. The graph represents the history of requests. You start in the initial state. You get request A, you proceed to state A. After that, you get request B, you proceed to state AB. Whether state AB is equal to BA is not clear from your description.
That way, you get far more states than the 20 states you have now, but each state includes the history. I'd suggest a naming convention based on the path you took to get there (like AB above). And perhaps you can reuse states A and B within AB, to minimize coding.
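A tiny sketch of that idea (the state names and the processRequest signature are invented for illustration):

// Each state knows which actions it allows and which state they lead to.
const states = {
  Initial: { allowed: ["A", "B"], next: action => action },       // "A" -> state A
  A:       { allowed: ["B"],      next: action => "A" + action }, // "B" -> state AB
  B:       { allowed: ["A"],      next: action => "B" + action }, // "A" -> state BA
  AB:      { allowed: [],         next: () => "AB" },             // history baked into the name
  BA:      { allowed: [],         next: () => "BA" },
};

function processRequest(currentState, action) {
  const state = states[currentState];
  if (!state.allowed.includes(action)) {
    throw new Error(`Action ${action} is not allowed in state ${currentState}`);
  }
  return state.next(action);
}

console.log(processRequest("Initial", "A")); // "A"
console.log(processRequest("A", "B"));       // "AB"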