How to reduce renders using redux + normalizr - performance

I have an app using React + Redux + Normalizr, and I would like to know the best practices for reducing the number of renders when something changes in the entities.
Right now, if I change just one entity inside entities, it re-renders all the components, not only the components that need that specific entity.

There are several things you can do to minimize the number of renders in your app, and make updates faster.
Let's look at them from your Redux store down to your React components.
In your Redux store, you can use immutable data structures (for example, Immutable.js). This will make all subsequent optimizations faster, as you'll be able to compare changes by only checking for previous/next state slice equality rather than recursively comparing all props.
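For illustration, here is a rough sketch of what an Immutable.js-based entities reducer could look like (the action type and state shape are made up):

import { Map } from 'immutable';

const initialState = Map({ todos: Map() });

function entitiesReducer(state = initialState, action) {
  switch (action.type) {
    case 'TODO_UPDATED':
      // setIn returns a new Map; untouched branches keep their previous
      // references, so a changed slice can be detected with a single !== check
      return state.setIn(['todos', action.id], Map(action.todo));
    default:
      return state;
  }
}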
In your containers, that is in your top-level components where you inject redux state as props, ask only for the state slices you need, and use the pure option (I assume you're using react-redux) to make sure your container will be re-rendered only if the state slices returned by your mapStateToProps functions have changed.
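As a sketch (the TodoItem component and the state shape are made up), such a container only subscribes to the slice it needs, and react-redux's pure option, which is on by default, skips re-rendering when that slice hasn't changed:

import { connect } from 'react-redux';
import TodoItem from './TodoItem';

// Inject only the one entity this container actually renders
const mapStateToProps = (state, ownProps) => ({
  todo: state.entities.todos[ownProps.todoId]
});

export default connect(
  mapStateToProps,
  null,
  null,
  { pure: true } // the default, shown here for emphasis
)(TodoItem);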
If you need to compute derived data, that is if you inject in your containers data computed from various state slices, use a memoized function to make sure the computation is not triggered again if the input doesn't change, and to keep object equality with the value the previous call returned. Reselect is a very good library to do that.
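A minimal Reselect sketch (the selector names and state shape are illustrative):

import { createSelector } from 'reselect';

const getTodos = state => state.entities.todos;
const getVisibilityFilter = state => state.visibilityFilter;

// Recomputed only when an input selector returns a new value; otherwise the
// previously returned array is reused, preserving object equality between calls
export const getVisibleTodos = createSelector(
  [getTodos, getVisibilityFilter],
  (todos, filter) =>
    Object.keys(todos)
      .map(id => todos[id])
      .filter(todo => (filter === 'SHOW_COMPLETED' ? todo.completed : true))
);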
In your dumb components, use the shouldComponentUpdate lifecycle method to avoid a re-render when the incoming props have not changed. If you do not want to implement this manually, you can use React's PureRenderMixin to check all props for you, or, for example, the pure function from the Recompose library if you need more control. A good use case at this level is rendering a list of items: if your item component implements shouldComponentUpdate, only the modified items will be re-rendered. But this shouldn't become a fix-all-problems habit: good component separation is often preferable, because it makes props flow only to the components that actually need them.
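For example, a list item wrapped with Recompose's pure (TodoItem and its props are made up) gets a shallow-equality shouldComponentUpdate for free, so only the modified items in the list re-render:

import React from 'react';
import { pure } from 'recompose';

const TodoItem = ({ todo, onToggle }) => (
  <li onClick={() => onToggle(todo.id)}>{todo.text}</li>
);

// pure() wraps the component with a shallow-props shouldComponentUpdate
export default pure(TodoItem);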
As far as Normalizr is concerned, there is nothing more specific to be done.

If in some cases (it should be rare) you detect performance problems that are directly related to React's component rendering cycles, then you should implement the shouldComponentUpdate() method in the involved components (details can be found in React's docs).
Change detection in shouldComponentUpdate() will be particularly easy because Redux forces you to keep your state immutable:
shouldComponentUpdate(nextProps, nextState) {
  return nextProps.dataObject !== this.props.dataObject;
  // true when dataObject has become a new object,
  // which happens if (and only if) its data has changed,
  // thanks to immutability
}

Related

How to make volatile model/reference field?

I want to add a selection field to a model, which should be an array of references. If I add it to the model as selection: types.array(types.reference(Todo)), I get some undesirable side effects: the selection is saved/loaded in snapshots, and changes to the selection are recorded in the undo/redo history when using the UndoManager middleware. If I put selection in volatile properties as just a plain array, then I lose the reference-sync capabilities (i.e. if one of the selected elements is removed from the model, the selection will not be updated automatically).
Is there an approach that would give the benefits of both? Is there a way to ignore a model field in patches/snapshots without moving it to volatile?
A good approach for models is to have only the fields that belong to the entity and that you need in snapshots to send to the server or elsewhere. Otherwise models become confusing and hard to manage.
Usually in such cases I put a property like that into a separate store or sub-store that is bound to a particular page/view, for example. So in my opinion this is a structural issue more than anything else.
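A minimal sketch of that structure, assuming mobx-state-tree (the store names are made up): keep the domain data in its own store, put view state such as the selection in a separate UI sub-store, and snapshot/attach UndoManager only to the domain part.

import { types } from "mobx-state-tree"

const Todo = types.model("Todo", {
  id: types.identifier,
  title: types.string
})

// Domain data: this is what you snapshot, persist and track with UndoManager
const TodoStore = types.model("TodoStore", {
  todos: types.array(Todo)
})

// View state: selection keeps its reference-sync behaviour, but because you
// only serialize the domain store, it never ends up in the saved snapshots
const UiStore = types.model("UiStore", {
  selection: types.array(types.reference(Todo))
})

// Both live under one root so the references can resolve
const RootStore = types.model("RootStore", {
  domain: TodoStore,
  ui: UiStore
})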

Component with shouldComponentUpdate vs stateless Component. Performance?

I know that stateless components are a lot more comfortable to use (in specific scenarios), but since you cannot use shouldComponentUpdate, doesn't that mean the component will re-render on every props change? My question is whether it's better (performance-wise) to use a class component with a smart shouldComponentUpdate than to use a stateless component.
Are stateless components a better solution as far as performance goes?
Consider this silly example:
const Hello = (props) => (
  <div>
    <div>{props.hello}</div>
    <div>{props.bye}</div>
  </div>
);
vs
class Hello extends Component {
  shouldComponentUpdate(nextProps) {
    return nextProps.hello !== this.props.hello;
  }

  render() {
    const { hello, bye } = this.props;
    return (
      <div>
        <div>{hello}</div>
        <div>{bye}</div>
      </div>
    );
  }
}
Let's assume both components have two props and we only want to track one of them, updating when it changes (which is a common use case). Would it be better to use a stateless functional component or a class component?
UPDATE
After doing some research as well, I agree with the accepted answer. Although performance is better in a class component (with shouldComponentUpdate), the improvement seems negligible for simple components. So my take is this:
For complex components, use a class extending PureComponent or Component (depending on whether you are going to implement your own shouldComponentUpdate).
For simple components, use functional components even if it means the re-render runs.
Try to cut the number of updates at the closest possible class-based component in order to keep the tree as static as possible (if needed).
I think you should read Stateless functional components and shouldComponentUpdate #5677
For complex components, defining shouldComponentUpdate (eg. pure render) will generally exceed the performance benefits of stateless components. The sentences in the docs are hinting at some future optimizations that we have planned, whereby we won't allocate an internal instance for stateless functional components (we will just call the function). We also might not keep holding the props, etc. Tiny optimizations. We don't talk about the details in the docs because the optimizations aren't actually implemented yet (stateless components open the doors to these optimizations).
https://github.com/facebook/react/issues/5677#issuecomment-165125151
There are currently no special optimizations done for functions, although we might add such optimizations in the future. But for now, they perform exactly as classes.
https://github.com/facebook/react/issues/5677#issuecomment-241190513
I also recommend checking https://medium.com/missive-app/45-faster-react-functional-components-now-3509a668e69f
To measure the change, I created this benchmark, the results are quite staggering! From just converting a class-based component to a functional one, we get a little 6% speedup. But by calling it as a simple function instead of mounting, we get a ~45% total speed improvement.
To answer the question: it depends.
If you have complex/heavy components, you should probably implement shouldComponentUpdate. Otherwise, going with regular functions should be fine. I don't think that implementing shouldComponentUpdate for components like your Hello will make a big difference; it's probably not worth the time to implement.
You should also consider extending from PureComponent rather than Component.
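For reference, here is the same Hello written with PureComponent; note that unlike the handwritten shouldComponentUpdate above, it shallow-compares all props, including bye:

import React, { PureComponent } from 'react';

class Hello extends PureComponent {
  // No shouldComponentUpdate needed: PureComponent shallow-compares
  // all props and state for you
  render() {
    const { hello, bye } = this.props;
    return (
      <div>
        <div>{hello}</div>
        <div>{bye}</div>
      </div>
    );
  }
}

export default Hello;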

Should React.PureComponent be used for components that are updated frequently?

If a component needs to render several times a second because of a prop change, should this component extend React.PureComponent?
The component has no child components, however, it is itself deeply nested... so the props are travelling through several other components.
In general, what are some key things to consider when deciding whether React.PureComponent should be used? In which scenarios is it bad to use?
Yes, this sounds like a good case for PureComponent, because your component is unnecessarily being re-rendered frequently with the same props.
A child component that extends React.Component will call render every time its parent calls render. If instead the child extends PureComponent, it will only call render when the parent passes props that are not shallowEqual to the previously passed props.
It's generally safe to use PureComponent as long as:
your component and its children don't rely on context updates
your component doesn't have object or array props that are directly mutated by its parents (shallowEqual will not detect these changes)
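To illustrate the second caveat with a made-up example: mutating an array in place keeps the same reference, so a PureComponent child will not notice the change.

import React from 'react';

// Re-renders only when the `items` reference changes (shallow comparison)
class ItemList extends React.PureComponent {
  render() {
    return <ul>{this.props.items.map(item => <li key={item}>{item}</li>)}</ul>;
  }
}

class Parent extends React.Component {
  state = { items: ['a', 'b'] };

  addItemWrong = () => {
    // Same array reference: ItemList's shallow check sees "no change" and skips rendering
    this.state.items.push('c');
    this.setState({ items: this.state.items });
  };

  addItemRight = () => {
    // New array reference: the change is detected and ItemList re-renders
    this.setState({ items: [...this.state.items, 'c'] });
  };

  render() {
    return (
      <div>
        <button onClick={this.addItemWrong}>add (mutating)</button>
        <button onClick={this.addItemRight}>add (immutable)</button>
        <ItemList items={this.state.items} />
      </div>
    );
  }
}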

Caching for frequently created from source components

I am in a highly dynamic context, heavily using dynamic instantiation of components from sources. Naturally I am concerned with the overhead from having to parse those sources each and every time an object is dynamically created. When the situation allows it, I am using manual caching:
readonly property var componentCache: new Object

function create(type) {
    var comp = componentCache[type]
    if (comp === undefined) { // "cache miss"
        comp = Qt.createComponent(type)
        if (comp.status !== Component.Ready) {
            console.log("Component creation failed: " + comp.errorString())
            return null
        } else {
            componentCache[type] = comp
        }
    }
    return comp.createObject()
}
Except that this is not always applicable, for example when using a Loader with a component that needs to specify object properties via the setSource(source, properties) function. In this scenario it is not possible to use a manually cached Component, as the function only takes a URL. The docs do vaguely mention "caching", but it is not exactly clear whether this cache is QML-engine-wide for the component from that source or, more likely, just for that particular Loader.
If the active property is false at the time when this function is
called, the given source component will not be loaded but the source
and initial properties will be cached. When the loader is made active,
an instance of the source component will be created with the initial
properties set.
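For context, a minimal sketch of the pattern being discussed (the file name and property are placeholders): setSource() takes a URL, so the manually cached Component from create() above cannot be passed to it.

import QtQuick 2.0

Item {
    Loader {
        id: dynamicLoader
    }

    Component.onCompleted: {
        // Only a URL can be given here, not an already created Component,
        // so the manual componentCache above cannot be reused
        dynamicLoader.setSource("DynamicItem.qml", { "objectProperty": 42 })
    }
}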
The question is how to deal with this issue, and whether it is even necessary. Maybe Qt does component-from-source caching by default? Caching would certainly make sense in terms of avoiding redundant source loading (from disk or, worse, the network), parsing and component preparation, but its effects will only be prominent in the case of excessive dynamic instantiation, and the "typical" QML dynamic object creation scenarios usually involve a one-time object, in which case the caching would be useless memory overhead. Caching also doesn't make sense when the source may change between instantiations.
So, since I don't have the time to dig through the private implementations behind the Qt APIs, if I had to guess I'd say that component-from-source caching is unlikely, but being a mere guess, it may well be wrong.
Not an answer per se, but I tripped into the question of component caching yesterday and was surprised to discover that Qt appears to cache components. At least when creating dynamic components, the log statements related to createComponent only appear once in my test app. I've searched around and haven't seen any specific info in the docs about caching. I did come across a couple of interesting methods in the QQmlEngine class. Then I came across the release notes for Qt 5.4:
The component returned by Qt.createComponent() is no longer parented
to the engine. Be sure to hold a reference, or provide a parent.
So? If a parent is provided, it is cached (?) That appears to be the case. (and for 5.5+ ?) If you want to manage it yourself, don't provide a parent and retain the reference. (?)
QQmlEngine Class
QQmlEngine::clearComponentCache()
Clears the engine's internal component cache.
This function causes the property metadata of all components previously loaded by the engine to be destroyed. All previously loaded components and the property bindings for all extant objects created from those components will cease to function.
This function returns the engine to a state where it does not contain any loaded component data. This may be useful in order to reload a smaller subset of the previous component set, or to load a new version of a previously loaded component.
Once the component cache has been cleared, components must be loaded before any new objects can be created.
void QQmlEngine::trimComponentCache()
Trims the engine's internal component cache.
This function causes the property metadata of any loaded components
which are not currently in use to be destroyed.
A component is considered to be in use if there are any extant
instances of the component itself, any instances of other components
that use the component, or any objects instantiated by any of those
components.
After some toying with the code, it looks like caching also takes place in the "problematic case" of Loader.setSource().
Running the example code from this question, I found out that regular component creation fails to expand a tree deeper than 10 nodes, because of Qt's hard-coded limit:
// Do not create infinite recursion in object creation
static const int maxCreationDepth = 10;
This causes component instantiation to abort if there are more than 10 nested component instantiations in a regular scenario, producing an incorrect tree and generating a warning.
This doesn't happen when setSource() is used; the obvious reason would be that the component is cached, so no new component instantiation takes place.

breezejs angularjs ToDo SPA template in Visual Studio

I have a question about design. I've just been through the code of the ToDo template in Visual Studio for building a SPA with BreezeJS and AngularJS.
There is a todo.model.js file that does various initialization. One interesting thing is that it extends the TodoList entity with an additional function (addToDo).
What is the advantage of doing so, over having the addToDo function in todo.controller instead and adding it to the $scope?
You could make a good case for moving all of the TodoList level persistence operations out of the TodoList and into some other component. The controller is a potential candidate.
The primary reason that these operations are in the TodoList is ... because that's where the authors of the original ASP.NET template put them!
One of the "community template" design goals was for all of the "TodoList" apps to be as similar to each other as possible. By holding the design constant we made it easier for readers to compare the effect of the different frameworks: Knockout, Breeze, Backbone, Ember. Had any of them relocated these operations, you would not know if that change was imposed by the target framework or was simply the implementer's preference. We wanted to take our ego out of it and let you concentrate on the technologies involved.
Don't treat these templates as gospel. In some respects they are unrealistic; I can't imagine saving every time a single property of a single object changes.
Learn from them. Regard them with healthy skepticism. Keep asking questions like this one. Take what makes sense to you. Discard the rest.
I believe this is just letting the entity handle its own save/delete functions for items in the list. The controller seems to be handling only adding of new lists. I'm not sure there's any advantage other than keeping the controller clean.
