React renderToString() Performance and Caching React Components

I've noticed that ReactDOM.renderToString() starts to slow down significantly when rendering a large component tree on the server.
Background
A bit of background: the system is a fully isomorphic stack. The top-level App component renders templates, pages, DOM elements, and more components. Digging into the React code, I found it renders ~1500 components (this count includes any simple DOM tag that gets treated as a component, e.g. <p>this is a react component</p>).
In development, rendering ~1500 components takes ~200-300ms. By removing some components I was able to get ~1200 components to render in ~175-225ms.
In production, renderToString on ~1500 components takes around ~50-200ms.
The time does appear to be linear: no single component is slow; rather, it is the sum of many.
Problem
This creates some problems on the server. The lengthy method call results in long server response times, so the TTFB is much higher than it should be. With API calls and business logic, the response should take ~250ms, but a 250ms renderToString doubles it! That's bad for SEO and for users. Also, being a synchronous method, renderToString() can block the Node server and back up subsequent requests (this could be mitigated by running two separate Node servers: one as a web server, and one as a service solely to render React).
Attempts
Ideally, it would take 5-50ms to renderToString in production. I've been working on some ideas, but I'm not exactly sure what the best approach would be.
Idea 1: Caching components
Any component marked as 'static' could be cached. By keeping a cache of the rendered markup, renderToString() could check the cache before rendering; on a cache hit, it grabs the string directly. Doing this for a high-level component would save mounting all of its nested child components. You would have to replace the cached markup's React rootID with the current rootID.
Idea 2: Marking components as simple/dumb
By defining a component as 'simple', React should be able to skip all the lifecycle methods when rendering. React already does this for the core DOM components (<p/>, <h1/>, etc.). It would be nice to extend the same optimization to custom components.
Idea 3: Skip components on server-side render
Components that do not need to be returned by the server (no SEO value) could simply be skipped on the server. Once the client loads, set a clientLoaded flag to true and pass it down to force a re-render.
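A minimal sketch of what Idea 3 could look like (the ClientOnly wrapper and its clientLoaded state are hypothetical names for illustration, not an existing React API):
import React from 'react';

// Hypothetical wrapper: renders nothing during the server pass, then
// re-renders its child after mounting on the client. componentDidMount
// is never called by renderToString(), so the flag stays false on the server.
class ClientOnly extends React.Component {
    constructor(props) {
        super(props);
        this.state = { clientLoaded: false };
    }
    componentDidMount() {
        this.setState({ clientLoaded: true }); // browser only: forces the re-render
    }
    render() {
        return this.state.clientLoaded ? this.props.children : null;
    }
}
Usage would be wrapping any SEO-irrelevant subtree, e.g. <ClientOnly><ExpensiveWidget /></ClientOnly>.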
Closing and other attempts
The only solution I've implemented thus far is to reduce the number of components that are rendered on the server.
Some projects we're looking at include:
React-dom-stream (still working on implementing this for a test)
Babel inline elements (seems like this is along the lines of Idea 2)
Has anybody faced similar issues? What have you been able to do?
Thanks.

Using react-router 1.0 and React 0.14, we were mistakenly serializing our flux object multiple times.
RoutingContext will call createElement for every template in your react-router routes. This lets you inject whatever props you want. We also use flux, and we send down a serialized version of a large object. In our case, we were calling flux.serialize() within createElement. The serialization method could take ~20ms, and with 4 templates that adds an extra 80ms to your renderToString() call!
Old code:
function createElement(Component, props) {
    props = _.extend(props, {
        flux: flux,
        path: path,
        serializedFlux: flux.serialize() // serializes on every createElement call!
    });
    return <Component {...props} />;
}
var start = Date.now();
markup = renderToString(<RoutingContext {...renderProps} createElement={createElement} />);
console.log(Date.now() - start);
Easily optimized to this:
var serializedFlux = flux.serialize(); // serialize one time only!
function createElement(Component, props) {
    props = _.extend(props, {
        flux: flux,
        path: path,
        serializedFlux: serializedFlux
    });
    return <Component {...props} />;
}
var start = Date.now();
markup = renderToString(<RoutingContext {...renderProps} createElement={createElement} />);
console.log(Date.now() - start);
In my case this helped reduce the renderToString() time from ~120ms to ~30ms. (You still need to add the single serialize()'s ~20ms to the total, but it now happens once, before renderToString().) It was a nice, quick improvement. -- It's important to remember to always do things correctly, even if you don't know the immediate impact!

Idea 1: Caching components
Update 1: I've added a complete working example at the bottom. It caches components in memory and updates data-reactid.
This can actually be done quite easily: monkey-patch ReactCompositeComponent and check for a cached version before mounting:
import ReactCompositeComponent from 'react/lib/ReactCompositeComponent';

const originalMountComponent = ReactCompositeComponent.Mixin.mountComponent;
ReactCompositeComponent.Mixin.mountComponent = function() {
    if (hasCachedVersion(this)) return cache; // sketch: look up and return the cached markup
    return originalMountComponent.apply(this, arguments);
}
You should do this before you require('react') anywhere in your app.
Webpack note: If you use something like new webpack.ProvidePlugin({'React': 'react'}) you should change it to new webpack.ProvidePlugin({'React': 'react-override'}) where you do your modifications in react-override.js and export react (i.e. module.exports = require('react'))
A complete example that caches in memory and updates the data-reactid attribute could look like this:
import ReactCompositeComponent from 'react/lib/ReactCompositeComponent';
import jsan from 'jsan';
import Logo from './logo.svg';

const cachable = [Logo];
const cache = {};

// Split the markup at every data-reactid value so the ids can be swapped out later
function splitMarkup(markup) {
    var markupParts = [];
    var reactIdPos = -1;
    var endPos, startPos = 0;
    while ((reactIdPos = markup.indexOf('reactid="', reactIdPos + 1)) != -1) {
        endPos = reactIdPos + 9;
        markupParts.push(markup.substring(startPos, endPos));
        startPos = markup.indexOf('"', endPos);
    }
    markupParts.push(markup.substring(startPos));
    return markupParts;
}

// Re-join the cached markup parts, assigning fresh ids from the current render's counter
function refreshMarkup(markupParts, hostContainerInfo) {
    var refreshedMarkup = '';
    var reactid;
    var reactIdSlotCount = markupParts.length - 1;
    for (var i = 0; i <= reactIdSlotCount; i++) {
        reactid = i != reactIdSlotCount ? hostContainerInfo._idCounter++ : '';
        refreshedMarkup += markupParts[i] + reactid;
    }
    return refreshedMarkup;
}

const originalMountComponent = ReactCompositeComponent.Mixin.mountComponent;
ReactCompositeComponent.Mixin.mountComponent = function (renderedElement, hostParent, hostContainerInfo, transaction, context) {
    var el = this._currentElement;
    var elType = el.type;
    var markup;
    if (cachable.indexOf(elType) > -1) {
        var publicProps = el.props;
        var id = elType.name + ':' + jsan.stringify(publicProps);
        markup = cache[id];
        if (markup) {
            return refreshMarkup(markup, hostContainerInfo);
        } else {
            markup = originalMountComponent.apply(this, arguments);
            cache[id] = splitMarkup(markup);
        }
    } else {
        markup = originalMountComponent.apply(this, arguments);
    }
    return markup;
};

module.exports = require('react');

It's not a complete solution, but I had the same issue with my isomorphic React app, and I used a couple of techniques:
Use Nginx in front of your nodejs server, and cache the rendered response for a short time.
When showing a list of items, I use only a subset of the list. For example, I render only enough items to fill the viewport, and load the rest of the list on the client side using WebSocket or XHR (see the sketch at the end of this answer).
Some of my components are empty in server-side rendering and only load from client-side code (componentDidMount). These are usually graphs or profile-related components, which usually have no benefit from an SEO point of view anyway.
About SEO: from my 6 months of experience with an isomorphic app, Googlebot can read a client-side React web page easily, so I'm not sure why we bother with server-side rendering at all.
Keep the <Head> and <Footer> as static strings or use a template engine (Reactjs-handlebars), and render only the content of the page; it should save a few rendered components. In the case of a single-page app, you can update the title and description on each navigation inside Router.run.
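As a rough sketch of the list-subset tip above (the ItemList component, the /api/items endpoint, and the item shape are all made up for illustration):
import React from 'react';

// The server renders only props.initialItems (enough to fill the viewport);
// the remainder is fetched after mount. componentDidMount never runs during
// renderToString(), so the server only pays for the first few items.
class ItemList extends React.Component {
    constructor(props) {
        super(props);
        this.state = { items: props.initialItems };
    }
    componentDidMount() {
        // Client only: XHR shown here; a WebSocket push would work the same way
        fetch('/api/items?offset=' + this.state.items.length)
            .then((res) => res.json())
            .then((rest) => this.setState({ items: this.state.items.concat(rest) }));
    }
    render() {
        return (
            <ul>
                {this.state.items.map((item) => <li key={item.id}>{item.name}</li>)}
            </ul>
        );
    }
}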

I think fast-react-render can help you. It can speed up your server rendering as much as three times.
To try it, you only need to install the package and replace ReactDOM.renderToString with FastReactRender.elementToString:
var ReactRender = require('fast-react-render');
var element = React.createElement(Component, {property: 'value'});
console.log(ReactRender.elementToString(element, {context: {}}));
You can also use fast-react-server, in which case rendering will be up to 14 times as fast as traditional React rendering. But for that, each component you want to render must be declared with it (see the fast-react-seed example for how to do it with webpack).

Related

adding logic to redux reducer to persist state

I am working on a React/Redux learning project, where I'm building components to host headless CMS content. One part of the application is a dropdown that selects content from all the available content channels in the source CMS.
This works on the first pass, but when I navigate to another page (i.e., the detail view of a single CMS content item; the first page displays multiple items in a grid), the state resets back to its initial (empty) value.
The component is below:
import { FETCH_CHANNELS } from '../actions/types';

// set the initial state first
const initialState = {
    isLoading: true,
    data: [],
    error: ""
}

// set the state depending on the dispatch coming in
const channelsReducer = (state = initialState, action) => {
    switch(action.type) {
        case FETCH_CHANNELS:
            // reduce the returned state down to an array of options
            // filter here to limit to searchable only
            const activeChannels = [];
            action.payload['channels'].forEach( el => {
                if(el.isChannelSearchable) {
                    let singleItem = {
                        key: el.channelId,
                        value: el.channelId,
                        text: el.channelName,
                        type: el.channelType
                    }
                    activeChannels.push(singleItem);
                }
            });
            return {...state, data: activeChannels, isLoading: false};
        case "ERROR":
            return {...state, error: action.msg};
        default:
            return state;
    }
}

export default channelsReducer;
My issue here (as I see it) is the initialisation of the initialState constant at the beginning: every time the component is refreshed, the state is set back to empty. Makes sense.
How can I persist the state returned in the FETCH_CHANNELS case (that action calls a back-end API that returns all channels) so that the component retains its state when it remounts?
Not sure what I should do (quite possibly none of these are correct):
1. Attempt some logic in the front-end component that calls this action, so it doesn't call the API if data already exists (see the sketch after this list)?
2. Create another piece of state in the Redux store and update that state from the front-end component once a value from the dropdown has been selected?
3. Try to handle it here, by setting a variable in the reducer with logic to return it if necessary?
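For what it's worth, option 1 is a common pattern. Here is a hedged sketch (the component, prop, and fetchChannels action names are assumptions for illustration): the connected component checks the store before dispatching the fetch.
import React, { Component } from 'react';
import { connect } from 'react-redux';
import { fetchChannels } from '../actions'; // assumed action creator

class ChannelDropdown extends Component {
    componentDidMount() {
        // Skip the API call if a previous mount already populated the store
        if (this.props.channels.length === 0) {
            this.props.fetchChannels();
        }
    }
    render() {
        return (
            <select>
                {this.props.channels.map((c) => (
                    <option key={c.key} value={c.value}>{c.text}</option>
                ))}
            </select>
        );
    }
}

const mapStateToProps = (state) => ({ channels: state.channels.data });
export default connect(mapStateToProps, { fetchChannels })(ChannelDropdown);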
Like I said, I'm building this to learn a bit about React and Redux, but I'm really not sure what the right way to handle this is...
Update: as suspected, neither of those options was correct. I was not creating the link correctly in the component generating the click event to drill into the detail content item. Implementing Link from react-router-dom was the right way to handle this.

how the NativeScript RadListView load on demand works

This might not be a single question; rather, it's the list of doubts that came up while learning NativeScript from scratch.
I have 1000 or more records stored in a data table. I want to display them in a list view, but I don't want to read all the data at once, because I also have images stored in another directory that need to be read too. For 20 to 30 records the performance is quite good, but for 1000 records it takes more than 15 minutes to read the data and the images associated with it, since I'm storing some high-quality images.
Therefore, I decided to read only 20 records with their respective images and display them in the list. Then, when the user reaches the 15th item of the list, I want to read 10 more records from the server.
When I searched for this, I came across "RadListView Load on Demand".
Then I looked at the code below:
public addMoreItemsFromSource(chunkSize: number) {
    let newItems = this._sourceDataItems.splice(0, chunkSize);
    this.dataItems.push(newItems);
}

public onLoadMoreItemsRequested(args: LoadOnDemandListViewEventData) {
    const that = new WeakRef(this);
    const listView: RadListView = args.object;
    if (this._sourceDataItems.length > 0) {
        setTimeout(function () {
            that.get().addMoreItemsFromSource(2);
            listView.notifyLoadOnDemandFinished();
        }, 1500);
        args.returnValue = true;
    } else {
        args.returnValue = false;
        listView.notifyLoadOnDemandFinished(true);
    }
}
In NativeScript, if I want to access a binding element from the XML, I must use observables in the view model or exports.com_name in the associated JS file. But this example starts with public! How do I use this in JavaScript?
What is new WeakRef(this)? Why is it needed?
How do I identify that the user has scrolled to the 15th item, since I want to load more data at that point?
After getting the data, how do I update the array backing the list and show it in the ListView?
Finally, I just want to know how to use load on demand.
I tried to create a Playground sample of what I have tried, but it gives an error: it cannot find the RadListView module.
Remember, I'm a fresher, so kindly keep this in mind when answering. Thank you, and please modify the question if you feel it is not up to standard.
You can check the updated answer here:
https://play.nativescript.org/?template=play-js&id=1Xireo
TypeScript to JavaScript
You may use any TypeScript compiler to convert the source code to JavaScript; there are even online compilers, like the official TypeScript Playground.
In my opinion, it's hard to expect ES5 examples any more. ES6-9 introduced a lot of new features that make JavaScript development much easier, and TypeScript takes JavaScript to the next level, from interpreter to compiler.
To answer your question: in ES5, you use the prototype chain to define methods on your class.
YourClass.prototype.addMoreItemsFromSource = function (chunkSize) {
    var newItems = this._sourceDataItems.splice(0, chunkSize);
    this.dataItems.push(newItems);
};

YourClass.prototype.onLoadMoreItemsRequested = function (args) {
    var that = new WeakRef(this);
    var listView = args.object;
    if (this._sourceDataItems.length > 0) {
        setTimeout(function () {
            that.get().addMoreItemsFromSource(2);
            listView.notifyLoadOnDemandFinished();
        }, 1500);
        args.returnValue = true;
    } else {
        args.returnValue = false;
        listView.notifyLoadOnDemandFinished(true);
    }
};
If you are using the fromObject syntax for your Observable, then these functions can be passed inside:
addMoreItemsFromSource: function (chunkSize) {
    ....
},
WeakRef: it helps manage your memory efficiently by keeping only a loose reference to the target, so the delayed callback doesn't prevent it from being garbage-collected; read more in the docs.
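A tiny sketch of the idea (names are illustrative): the WeakRef lets a delayed callback reach the view model without strongly retaining it.
var vm = { name: "listViewModel" };
var ref = new WeakRef(vm); // loose reference: does not keep vm alive by itself
setTimeout(function () {
    var target = ref.get(); // may be null if vm has been garbage-collected
    if (target) {
        console.log(target.name);
    }
}, 1000);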
How to load more:
If you set loadOnDemandMode to Auto, the loadMoreDataRequested event will be triggered whenever the user approaches the end of the scroll.
loadOnDemandBufferSize decides how many items before the end of the scroll the event should be triggered.
Read more in the docs.
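For the "load more around the 15th item" behaviour from the question, something along these lines should work. This is a sketch based on the documented properties, assuming the nativescript-ui-listview plugin is installed and the RadListView's loaded event is wired to onListViewLoaded:
var listViewModule = require("nativescript-ui-listview");

exports.onListViewLoaded = function (args) {
    var listView = args.object;
    // Fire loadMoreDataRequested automatically near the end of the scroll
    listView.loadOnDemandMode = listViewModule.ListViewLoadOnDemandMode.Auto;
    // With a 20-item page, a buffer of 5 triggers around the 15th item
    listView.loadOnDemandBufferSize = 5;
};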
How to update the array:
That's exactly what's showcased in the addMoreItemsFromSource function: use .push(item) on the ObservableArray that is linked to your list view.
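For completeness, a minimal sketch of the backing array (module path as in the tns-core-modules layout; the array is what you bind to the RadListView's items):
var ObservableArray = require("tns-core-modules/data/observable-array").ObservableArray;

var dataItems = new ObservableArray([]); // bound to the RadListView's items
dataItems.push({ title: "Item 1" });     // the list view updates automatically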

Prevent custom functions from executing in Google Spreadsheets Google Apps Script

When writing custom functions to be used in spreadsheet cells, the default behavior for a sheet is to recalculate on edits; i.e., adding columns or rows will cause a custom function to update.
This is a problem if the custom function calls a paid API and uses credits: the user will be consuming API credits automatically.
I couldn't figure out a way to prevent this, so I decided to use the UserCache to cache the results for an arbitrary 25 minutes and serve them back to the user should they happen to repeat the same function call. It's definitely not bulletproof, but it's better than nothing, I suppose. Apparently the cache can hold 10MB, but is this the right approach? Could I be doing something smarter?
var _ROOT = {
    cache : CacheService.getUserCache(),
    cacheDefaultTime: 1500, // in seconds, i.e. 25 minutes

    // Step 1 -- Construct a unique name for function call storage using the
    // function name and arguments passed to the function
    // example: function getPaidApi(1,2,3) becomes "getPaidApi123"
    stringifyFunctionArguments : function(functionName, argumentsPassed) {
        var argstring = '';
        for (var i = 0; i < argumentsPassed.length; i++) {
            argstring += argumentsPassed[i];
        }
        return functionName + argstring;
    },

    // Step 2 -- when a user calls a function that uses a paid api, we want to
    // cache the results for 25 minutes
    addToCache : function (encoded, returnedValues) {
        var values = {
            returnValues : returnedValues
        };
        Logger.log(encoded);
        this.cache.put(encoded, JSON.stringify(values), this.cacheDefaultTime);
    },

    // Step 3 -- if the user repeats the exact same function call with the same
    // arguments, we give them the cached result
    // this way, we don't consume API credits as easily.
    checkCache : function(encoded) {
        var cached = this.cache.get(encoded);
        try {
            cached = JSON.parse(cached);
            return cached.returnValues;
        } catch (e) {
            return false;
        }
    }
}
Google Sheets already caches the values of custom functions, and will only run them again when either a) the inputs to the function have changed or b) the spreadsheet is being opened after being closed for a long time. I'm not able to replicate the recalculation you mentioned when adding and removing columns. Here's a simple example function I used to test that:
function rng() {
    return Math.random();
}
Your approach of using an additional cache for expensive queries looks fine in general. I'd recommend using the DocumentCache instead of the UserCache, since all users of the document can and should see the same cell values.
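Switching caches is a one-line change in the handle (a sketch; the rest of the _ROOT object stays the same):
var _ROOT = {
    // DocumentCache is shared by every user of the spreadsheet, so one
    // user's paid API call can serve all collaborators
    cache : CacheService.getDocumentCache(),
    // ...rest unchanged
};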
I'd also recommend a more robust encoding of function signatures, since your current implementation isn't able to distinguish between the arguments [1, 2] and [12]. You could stringify the inputs and then base64-encode them for compactness:
function encode(functionName, argumentsPassed) {
    var data = [functionName].concat(argumentsPassed);
    var json = JSON.stringify(data);
    return Utilities.base64Encode(json);
}
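Putting it together, a paid-API custom function might look like this sketch (getPaidApiResult stands in for your real API call; it is not an existing function):
function GETPAIDAPI(a, b) {
    var key = encode('GETPAIDAPI', [a, b]);
    var cached = _ROOT.checkCache(key);
    if (cached !== false) {
        return cached; // cache hit: no API credits spent
    }
    var result = getPaidApiResult(a, b); // hypothetical call that costs credits
    _ROOT.addToCache(key, result);
    return result;
}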

Time-based cache for REST client using RxJs 5 in Angular2

I'm new to ReactiveX/RxJS and I'm wondering if my use case can be achieved smoothly with RxJS, preferably with a combination of built-in operators. Here's what I want to achieve:
I have an Angular2 application that communicates with a REST API. Different parts of the application need to access the same information at different times. To avoid hammering the servers by firing the same request over and over, I'd like to add client-side caching. The caching should happen in a service layer, where the network calls are actually made. This service layer then just hands out Observables. The caching must be transparent to the rest of the application: it should only be aware of Observables, not the caching.
So initially, a particular piece of information from the REST API should be retrieved only once per, let's say, 60 seconds, even if there's a dozen components requesting this information from the service within those 60 seconds. Each subscriber must be given the (single) last value from the Observable upon subscription.
Currently, I managed to achieve exactly that with an approach like this:
public getInformation(): Observable<Information> {
    if (!this.information) {
        this.information = this.restService.get('/information/')
            .cache(1, 60000);
    }
    return this.information;
}
In this example, restService.get(...) performs the actual network call and returns an Observable, much like Angular's http Service.
The problem with this approach is refreshing the cache: while it makes sure the network call is executed exactly once, and that the cached value will no longer be pushed to new subscribers after 60 seconds, it doesn't re-execute the initial request after the cache expires. So subscriptions that occur after the 60-second cache window will not be given any value from the Observable.
Would it be possible to re-execute the initial request if a new subscription happens after the cache timed out, and to re-cache the new value for 60sec again?
As a bonus: it would be even cooler if existing subscriptions (e.g. those who initiated the first network call) would get the refreshed value whose fetching had been initiated by the newer subscription, so that once the information is refreshed, it is immediately passed through the whole Observable-aware application.
I figured out a solution to achieve exactly what I was looking for. It might go against ReactiveX nomenclature and best practices, but technically, it does exactly what I want it to. That being said, if someone still finds a way to achieve the same with just built-in operators, I'll be happy to accept a better answer.
So basically since I need a way to re-trigger the network call upon subscription (no polling, no timer), I looked at how the ReplaySubject is implemented and even used it as my base class. I then created a callback-based class RefreshingReplaySubject (naming improvements welcome!). Here it is:
export class RefreshingReplaySubject<T> extends ReplaySubject<T> {

    private providerCallback: () => Observable<T>;
    private lastProviderTrigger: number;
    private windowTime: number;

    constructor(providerCallback: () => Observable<T>, windowTime?: number) {
        // Cache exactly 1 item forever in the ReplaySubject
        super(1);
        this.windowTime = windowTime || 60000;
        this.lastProviderTrigger = 0;
        this.providerCallback = providerCallback;
    }

    protected _subscribe(subscriber: Subscriber<T>): Subscription {
        // Hook into the subscribe method to trigger refreshing
        this._triggerProviderIfRequired();
        return super._subscribe(subscriber);
    }

    protected _triggerProviderIfRequired() {
        let now = this._getNow();
        if ((now - this.lastProviderTrigger) > this.windowTime) {
            // Data considered stale, provider triggering required...
            this.lastProviderTrigger = now;
            this.providerCallback().first().subscribe((t: T) => this.next(t));
        }
    }
}
And here is the resulting usage:
public getInformation(): Observable<Information> {
    if (!this.information) {
        this.information = new RefreshingReplaySubject(
            () => this.restService.get('/information/'),
            60000
        );
    }
    return this.information;
}
To implement this, you will need to create your own observable with custom logic on subscription:
function createTimedCache(doRequest, expireTime) {
    let lastCallTime = 0;
    let lastResult = null;
    const result$ = new Rx.Subject();
    return Rx.Observable.create(observer => {
        const time = Date.now();
        if (time - lastCallTime < expireTime) {
            return (lastResult
                // when result already received
                ? result$.startWith(lastResult)
                // still waiting for result
                : result$
            ).subscribe(observer);
        }
        const disposable = result$.subscribe(observer);
        lastCallTime = time;
        lastResult = null;
        doRequest()
            .do(result => {
                lastResult = result;
            })
            .subscribe(v => result$.next(v), e => result$.error(e));
        return disposable;
    });
}
and the resulting usage would be the following:
this.information = createTimedCache(
    () => this.restService.get('/information/'),
    60000
);
usage example: https://jsbin.com/hutikesoqa/edit?js,console

Fabric.js - Sync object:modified event to another client

Collaboration Mode:
What is the best way to propagate changes from Client #1's canvas to Client #2's canvas? Here's how I capture and send events to Socket.io:
$scope.canvas.on('object:modified', function(e) {
    Socket.whiteboardMessage({
        eventId: 'object:modified',
        event: e.target.toJSON()
    });
});
On the receiver side, this code works splendidly for adding new objects to the screen, but I could not find documentation on how to select and update an existing object in the canvas.
fabric.util.enlivenObjects([e.event], function(objects) {
    objects.forEach(function(o) {
        $scope.canvas.add(o);
    });
});
I did see that Objects have individual setters and one bulk setter, but I could not figure out how to select an existing object based on the event data.
Ideally, the flow would be:
Receive event with targeted object data.
Select the existing object in the canvas.
Perform bulk update.
Refresh canvas.
Hopefully someone with Fabric.JS experience can help me figure this out. Thanks!
UPDATED ANSWER - Thanks AJM!
AJM was correct in suggesting a unique ID for every newly created element. I was also able to assign an ID to all newly created drawing paths. Here's how it worked:
var t = new fabric.IText('Edit me...', {
    left: $scope.width/2 - 100,
    top: $scope.height/2 - 50
});
t.set('id', randomHash());
$scope.canvas.add(t);
I also captured newly created paths and added an id:
$scope.canvas.on('path:created', function(e) {
    if (e.target.id === undefined) {
        e.target.set('id', randomHash());
    }
});
However, I encountered an issue where my ID was visible in the console log but not present after executing object.toJSON(). This is because Fabric has its own serialization method, which trims the data down to a standardized list of properties. To include additional properties, I had to serialize the data for transport like so:
$scope.canvas.on('object:modified', function(e) {
    Socket.whiteboardMessage({
        object: e.target.toJSON(['id']) // includes "id" in the serialized output
    });
});
Now each object has a unique ID with which to perform updates. On the receiver's side of my code, I added AJM's object-lookup function. I placed this code in the "startup" section of my application so it would only run once (after Fabric.js is loaded, of course!)
fabric.Canvas.prototype.getObjectById = function (id) {
    var objs = this.getObjects();
    for (var i = 0, len = objs.length; i < len; i++) {
        if (objs[i].id == id) {
            return objs[i];
        }
    }
    return null;
};
Now, whenever a new socket.io message is received with whiteboard data, I am able to find it in the canvas via this line:
var obj = $scope.canvas.getObjectById(e.object.id);
Inserting and removing are easy, but for updating, this final piece of code did the trick:
obj.set(e.object); // Updates properties
$scope.canvas.renderAll(); // Redraws canvas
$scope.canvas.calcOffset(); // Updates offsets
All of this required me to handle the following events. Paths are treated as objects once they're created.
$scope.canvas.on('object:added',function(e) { });
$scope.canvas.on('object:modified',function(e) { });
$scope.canvas.on('object:moving',function(e) { });
$scope.canvas.on('object:removed',function(e) { });
$scope.canvas.on('path:created',function(e) { });
I did something similar involving a single shared canvas between multiple users and ran into this exact issue.
To solve this problem, I added unique IDs (using a JavaScript UUID generator) to each object added to the canvas. In my case, there could be many users working on a canvas at a time, so I needed to avoid collisions; in your case, something simpler could work.
Fabric objects' set method will let you add an arbitrary property, like an id: o.set('id', yourid). Before you add() a new Fabric object to your canvas (and send that across the wire), tack on an ID property. Now, you'll have a unique key by which you can pick out individual objects.
From there, you'd need a method to retrieve an object by ID. Here's what I used:
fabric.Canvas.prototype.getObjectById = function (id) {
    var objs = this.getObjects();
    for (var i = 0, len = objs.length; i < len; i++) {
        if (objs[i].id == id) {
            return objs[i];
        }
    }
    return null;
};
When you receive data from your socket, grab that object from the canvas by ID and mutate it using the appropriate set methods or copying properties wholesale (or, if getObjectById returns null, create it).
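A sketch of that receive handler, combining the lookup with the enliven-and-add path from the question (the surrounding socket plumbing is assumed to match the question's code):
function onWhiteboardMessage(e) {
    var existing = $scope.canvas.getObjectById(e.object.id);
    if (existing) {
        existing.set(e.object);        // copy the changed properties wholesale
        $scope.canvas.renderAll();
        $scope.canvas.calcOffset();
    } else {
        // Unknown ID: enliven the serialized data and add it as a new object
        fabric.util.enlivenObjects([e.object], function (objects) {
            objects.forEach(function (o) {
                $scope.canvas.add(o);
            });
        });
    }
}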
