I'm wondering which technique is best to improve a React application's performance. I learned about the Component lifecycle methods componentWillReceiveProps and shouldComponentUpdate but I'm not really sure which one I should ideally implement to avoid unwanted re-rendering.
So is it best to implement both or just one of them?
Also, I don't understand why shouldComponentUpdate receives nextProps if componentWillReceiveProps already handles (or should handle) them.
Use PureComponent instead of Component when you need the behavior described in your question. https://reactjs.org/docs/react-api.html#reactpurecomponent
componentWillReceiveProps(nextProps) {
  console.log("componentWillReceiveProps");
  return false; // The return value is ignored here; it cannot prevent a re-render.
}

shouldComponentUpdate(nextProps, nextState) {
  console.log("shouldComponentUpdate");
  return false; // Returning false skips the re-render of this component.
}
Based on a condition, you can return a boolean value from shouldComponentUpdate. If you return false, React skips the re-render, and componentWillUpdate() (and render()) will not be called.
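For reference, here is a minimal sketch of the PureComponent approach (the Greeting component and its name prop are just illustrative): React.PureComponent implements shouldComponentUpdate for you with a shallow comparison of props and state.

import React from "react";

// Re-renders only when a shallow comparison of props or state detects a change.
class Greeting extends React.PureComponent {
  render() {
    return <div>Hello, {this.props.name}</div>;
  }
}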
For my application I need to fetch some data asynchronously and do some initialization for each page. Unfortunately, a constructor does not allow me to make asynchronous calls. I followed this article and put all of my code into the OnAppearing method. However, since then I have run into multiple issues, because each platform handles the event a little differently. For example, I have pages where you can take pictures; on iOS, OnAppearing is called again every time the camera is closed, while on Android it isn't. It doesn't seem like a reliable method for my needs, which is also described here:
Calls to the OnDisappearing and OnAppearing overrides cannot be treated as guaranteed indications of page navigation. For example, on iOS, the OnDisappearing override is called on the active page when the application terminates.
I am searching for a method/way to perform my own initialization. The constructor would be perfect for that, but I cannot perform anything asynchronously in there. Please do not provide me with any workarounds; I am searching for the "recommended" solution, or maybe someone with a lot of experience can tell me what they are doing. (I also don't want to use .Wait() or .Result, as that will lock up my app.)
You can use Stephen Cleary's excellent NotifyTaskCompletion class.
You can read more about how it works and what to do and not do in these cases in Microsoft's excellent Async Programming: Patterns for Asynchronous MVVM Applications: Data Binding. The highlights of the topic are:
Let’s walk through the core method NotifyTaskCompletion.WatchTaskAsync. This method takes a task representing the asynchronous operation, and (asynchronously) waits for it to complete. Note that the await does not use ConfigureAwait(false); I want to return to the UI context before raising the PropertyChanged notifications. This method violates a common coding guideline here: It has an empty general catch clause. In this case, though, that’s exactly what I want. I don’t want to propagate exceptions directly back to the main UI loop; I want to capture any exceptions and set properties so that the error handling is done via data binding. When the task completes, the type raises PropertyChanged notifications for all the appropriate properties.
A sample usage of it:
public class MainViewModel
{
    public MainViewModel()
    {
        UrlByteCount = new NotifyTaskCompletion<int>(
            MyStaticService.CountBytesInUrlAsync("http://www.example.com"));
    }

    public NotifyTaskCompletion<int> UrlByteCount { get; private set; }
}
Here, the demo binds the returned asynchronous value to some bindable property, but of course you can use it without any return value (for simple data loading).
This may be too simple to say, but you CAN run asynchronous tasks in the constructor. Just wrap it in an anonymous Task.
public MyConstructor()
{
    // Fire-and-forget: the constructor returns immediately while the task runs.
    Task.Run(async () =>
    {
        // <Your code>
    });
}
Be careful when doing this though as you can get into resource conflict issues if you accidentally open the page twice.
Another thing I like to do is use an _isInit flag that marks the first-time initialization, so it runs once and never again.
I know that stateless components are a lot more comfortable to use (in specific scenarios), but since you cannot use shouldComponentUpdate, doesn't that mean the component will re-render on every props change? My question is whether it is better (performance-wise) to use a class component with a smart shouldComponentUpdate than a stateless component.
Are stateless components a better solution as far as performance goes?
Consider this silly example:
const Hello = (props) => (
  <div>
    <div>{props.hello}</div>
    <div>{props.bye}</div>
  </div>
);
vs
class Hello extends Component {
  shouldComponentUpdate(nextProps) {
    return nextProps.hello !== this.props.hello;
  }

  render() {
    const { hello, bye } = this.props;
    return (
      <div>
        <div>{hello}</div>
        <div>{bye}</div>
      </div>
    );
  }
}
Let's assume both components have two props, and we only want to track one of them and update when it changes (which is a common use case). Would it be better to use a stateless functional component or a class component?
UPDATE
After doing some research I agree with the marked answer. Although performance is better in a class component (with shouldComponentUpdate), the improvement seems negligible for simple components. So my take is this:
For complex components, use a class extending PureComponent or Component (depending on whether you plan to implement your own shouldComponentUpdate)
For simple components, use functional components even if it means they re-render
Try to cut off updates in the closest possible class-based ancestor, so that the tree below it stays as static as possible (if needed)
I think you should read Stateless functional components and shouldComponentUpdate #5677
For complex components, defining shouldComponentUpdate (eg. pure render) will generally exceed the performance benefits of stateless components. The sentences in the docs are hinting at some future optimizations that we have planned, whereby we won't allocate an internal instance for stateless functional components (we will just call the function). We also might not keep holding the props, etc. Tiny optimizations. We don't talk about the details in the docs because the optimizations aren't actually implemented yet (stateless components open the doors to these optimizations).
https://github.com/facebook/react/issues/5677#issuecomment-165125151
There are currently no special optimizations done for functions, although we might add such optimizations in the future. But for now, they perform exactly as classes.
https://github.com/facebook/react/issues/5677#issuecomment-241190513
I also recommend checking https://medium.com/missive-app/45-faster-react-functional-components-now-3509a668e69f
To measure the change, I created this benchmark, the results are quite staggering! From just converting a class-based component to a functional one, we get a little 6% speedup. But by calling it as a simple function instead of mounting, we get a ~45% total speed improvement.
To answer the question: it depends.
If you have complex/heavy components, you probably should implement shouldComponentUpdate. Otherwise, going with regular functions should be fine. I don't think implementing shouldComponentUpdate for components like your Hello will make a big difference; it's probably not worth the time to implement.
You should also consider extending from PureComponent rather than Component.
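For illustration, here is a sketch of the same Hello written as a PureComponent. Note that the shallow comparison covers all props, so unlike the hand-written check above it will also re-render when only bye changes:

import React from "react";

// Shallow-compares all props and state; skips the render when nothing changed.
class Hello extends React.PureComponent {
  render() {
    const { hello, bye } = this.props;
    return (
      <div>
        <div>{hello}</div>
        <div>{bye}</div>
      </div>
    );
  }
}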
I have an app using React + Redux + Normalizr, I would like to know the best practices to reduce the number of renders when something changes on entities.
Right now, if I change just one entity inside entities, it re-renders all the components, not only the components that need that specific entity.
There are several things you can do to minimize the number of renders in your app, and make updates faster.
Let's look at them from your Redux store down to your React components.
In your Redux store, you can use immutable data structures (for example, Immutable.js). This will make all subsequent optimizations faster, as you'll be able to compare changes by only checking for previous/next state slice equality rather than recursively comparing all props.
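As a small sketch of why this helps (assuming Immutable.js; the state shape is invented), every update produces a new object, so a cheap reference check is enough to know whether a slice changed:

import { Map } from "immutable";

const slice = Map({ count: 0, user: "alice" });
const next = slice.set("count", 1); // returns a new Map; the original is untouched

console.log(slice.get("count")); // 0
console.log(next.get("count"));  // 1
console.log(slice !== next);     // true: this slice changed, so re-render it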
In your containers, that is in your top-level components where you inject redux state as props, ask only for the state slices you need, and use the pure option (I assume you're using react-redux) to make sure your container will be re-rendered only if the state slices returned by your mapStateToProps functions have changed.
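A minimal sketch of such a container (the TodoList component and the state shape are hypothetical), assuming react-redux's connect:

import { connect } from "react-redux";
import TodoList from "./TodoList"; // hypothetical presentational component

// Inject only the slice this container actually needs.
const mapStateToProps = (state) => ({
  todos: state.entities.todos,
});

// connect() is pure by default: the wrapped component re-renders only
// when the props returned by mapStateToProps change.
export default connect(mapStateToProps)(TodoList);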
If you need to compute derived data, that is if you inject in your containers data computed from various state slices, use a memoized function to make sure the computation is not triggered again if the input doesn't change, and to keep object equality with the value the previous call returned. Reselect is a very good library to do that.
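For example, a sketch with Reselect (selector names and state shape invented for illustration):

import { createSelector } from "reselect";

const getTodos = (state) => state.todos;             // assumed to be an array
const getFilter = (state) => state.visibilityFilter;

// Recomputed only when getTodos or getFilter return new values; otherwise the
// previously returned (referentially equal) array is reused.
export const getVisibleTodos = createSelector(
  [getTodos, getFilter],
  (todos, filter) => todos.filter((todo) => todo.status === filter)
);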
In your dumb components, use the shouldComponentUpdate lifecycle method to avoid a re-render when the incoming props haven't changed. If you do not want to implement this manually, you can use React's PureRenderMixin to check all props for you, or, for example, the pure function from the Recompose library (sketched below) if you need more control. A good use case at this level is rendering a list of items: if your item component implements shouldComponentUpdate, only the modified items are re-rendered. But this shouldn't become a fix-all-problems habit: a good separation of components is often preferable, since it lets props flow only to the components that actually need them.
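As a quick sketch of the Recompose option (the Item component is hypothetical), pure wraps a component with a shallow-props shouldComponentUpdate:

import { pure } from "recompose";
import Item from "./Item"; // hypothetical list-item component

// PureItem re-renders only when a shallow comparison of its props detects a change,
// so editing one item in a list leaves the other rows untouched.
const PureItem = pure(Item);

export default PureItem;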
As far as Normalizr is concerned, there is nothing more specific to be done.
If in some case (it should be rare) you detect performance problems that are directly related to React's rendering cycles of components, then you should implement the shouldComponentUpdate() method in the involved components (details can be found in React's docs here).
Change-detection in shouldComponentUpdate() will be particularly easy because Redux forces you to implement immutable state:
shouldComponentUpdate(nextProps, nextState) {
  return nextProps.dataObject !== this.props.dataObject;
  // true when dataObject has become a new object,
  // which happens if (and only if) its data has changed,
  // thanks to immutability
}
I'm new to Backbone.js and in my recent project I need a custom validation mechanism for models. I see two ways I could do that.
Extending the Backbone.Model.prototype
_.extend(Backbone.Model.prototype, {
...
});
Creating custom model that inherit from Backbone model
MyApp.Model = Backbone.Model.extend({ ... });
I am quite unsure which one is a good approach in this case. I'm aware that overriding the prototype is not good for native objects, but does that apply to the Backbone model prototype as well? What kind of problems will I face if I go with the first approach?
You are supposed to use the second approach, that's the whole point of Backbone.Model.extend({}).
It already does your first approach plus other neat tricks to actually set up a proper inheritance chain (_.extend only copies object properties; you can compare the code of Backbone's extend() and Underscore's _.extend to see the difference. Just extending the .prototype isn't enough for 'real' inheritance).
When I first read your question, I misunderstood and thought you were asking whether to extend from your own Model Class or directly extend from Backbone Extend. It's not your question, so I apologize for the first answer, and just to keep a summary here: you can use both approach. Most large websites I saw or worked on first extend from Backbone.Model to create a generic MyApp.Model (which is why I got confused, that's usually the name they give to it :)), which is meant to REPLACE the Backbone.Model. Then for each model (for instance User, Product, Comment, whatever..), they'll extend from this MyApp.Model and not from Backbone.Model. This way, they can modify some global Backbone behavior (for all their Models) without changing Backbone's code.
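A minimal sketch of that layering (the validation logic is just a placeholder):

var MyApp = MyApp || {};

// Generic base model: app-wide behaviour such as custom validation lives here.
MyApp.Model = Backbone.Model.extend({
  validate: function (attrs) {
    // Return anything truthy to mark the model as invalid.
    if (attrs.name === "") {
      return "name must not be empty";
    }
  }
});

// Concrete models extend the app's base model, not Backbone.Model directly.
MyApp.User = MyApp.Model.extend({
  defaults: { name: "" }
});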
_.extend(Backbone.Model.prototype, {...mystuff...}) would add your property/ies to every Backbone.Model, and objects based on it. You might have meant to do the opposite, _.extend({...mystuff...}, Backbone.Model) which won't change Backbone itself.
If you look at the annotated Backbone source you'll see lines like
_.extend(Collection.prototype, Events, { ... Collection functions ...} )
This adds the Events object contents to every Collection, along with some other collection functions. Similarly, every Model has Events:
_.extend(Model.prototype, Events, { ... Model functions ...})
This seems to be a common pattern of making "classes" in Javascript:
function MyClass(args) {
//Do stuff
}
MyClass.prototype = {....}
It's even used in the Firefox source code.
Is it possible at all to create event listeners (i.e., fired when the value changes) for a variable of type string, int, bool, etc.?
I haven't seen this in any programming language so far, except for some Collections (like ArrayCollection in Flex), which use events to detect changes in the collection.
If not possible at all, why not? What's the reason for this? Are there any best practices to achieve the same sort of functionality? And what about extending functionality with databinding?
I don't think there is anything built in by default; however, you can create a custom event and raise it in the property's setter. Something like...
C# example
public delegate void MyValueChangedEventHandler(bool oldValue, bool newValue);

public event MyValueChangedEventHandler MyValueChanged;

private bool myValue;

...

public bool MyValue
{
    get { return myValue; }
    set
    {
        if (myValue != value)
        {
            var old = myValue;
            myValue = value;
            // Only raise the event if someone has subscribed.
            MyValueChanged?.Invoke(old, myValue);
        }
    }
}
I guess this sort of functionality is not added to any framework/runtime because it would create a big overhead (think of how many times the average application modifies a variable holding a primitive type) while going unused under normal circumstances.
Anyway, in .NET at least (and I guess that in other OO environments as well), you can define properties, which are accessed as normal variables but can have associated code that reacts when its value is read or modified.
It is possible if you wrap your variables in getters and setters and fire the event when the setter is called.
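For instance, a small JavaScript sketch of that idea (all names invented):

// Wraps a single value and notifies listeners whenever the setter changes it.
class ObservableValue {
  constructor(value) {
    this._value = value;
    this._listeners = [];
  }

  onChange(listener) {
    this._listeners.push(listener);
  }

  get value() {
    return this._value;
  }

  set value(newValue) {
    if (newValue !== this._value) {
      const oldValue = this._value;
      this._value = newValue;
      this._listeners.forEach((listener) => listener(oldValue, newValue));
    }
  }
}

// Usage:
const counter = new ObservableValue(0);
counter.onChange((oldValue, newValue) => console.log("changed from " + oldValue + " to " + newValue));
counter.value = 1; // logs: changed from 0 to 1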
How about using setter methods and having them register events when changing the value of the variable?
In general, no. The reason is that primitive types are simply bits and bytes stored in some memory location: changing the data in that memory location does just that, and nothing else. Firing events would require calling some methods/functions. So the functionality can be achieved by wrapping the primitive types in some kind of wrapper objects - but of course, they're not 100 % interchangeable: for instance Java's primitive wrapper types (Integer etc.) are marked final, so it's not possible to extend them with event-firing versions to take advantage of auto(un)boxing.
Another approach is to poll the variable frequently and fire appropriate events if it has changed. This is a "dirty" approach with obvious disadvantages (performance overhead, not immediate reaction), but could regardless be useful in some situations. If you do this from another thread in Java, be sure to mark the variable volatile.
It is possible to create listeners, as some of the others have mentioned, by making a class that fires an event whenever a property changes. This is obviously a lot less efficient than just assigning a value, but there are cases where it could be useful.
Some languages (VB6 and some others) have the ability in debug mode to stop execution when the value of a variable changes. I haven't seen this in .net, but it's liable to be in there somewhere. :-)
It seems to me that using an event to signal a simple variable change could be accomplished with if statements at each assignment, unless the variable's value is being changed externally, in which case you could use a class to handle it.