What would be the best way to measure network latency in React Native?
E.g.
const startTime = Date.now();
let response = await fetch(url)
const endTime = Date.now();
const totalTimeInMs = endTime - startTime;
If I place start and stop timers before/after the network call as shown above, that might not give the true network latency, because the JS thread might be busy with other work and would only get to this code once there is nothing else in the event loop / callback queue / task queue.
Hence wondering if there is any better way to measure network latency?
I found this question while looking for ways to measure just the request-response time in axios, but I thought it was interesting enough to deserve an alternative answer to the core question.
If you really want to know the network latency the technique used by the Precision Time Protocol could be some inspiration.
The concept
This drawing hopefully explains what I mean by "network latency":
     API Request                          API Response
         |                                     ^
         v                                     |
UI ------+-------------------------------------+----------> Time
         A \                                 ^ B
             \                              /
               \                           /
                 \                        /
                   v                     /
Backend -----------+---------------------+----------------> Time
                   a |                 ^ b
                     |                 |
                     +--- processing --+
                            time
Where:
- A is the time when the UI sends the request
- a is the time when the backend receives the request
- b is the time when the backend sends the response
- B is the time when the UI receives the response
The time it takes from A->a is the network latency from UI->backend.
The time it takes from b->B is the network latency from backend->UI.
Each step of request/response can calculate these and add them to
the respective request/response object.
What you cannot do with this
You probably won't be able to precisely sync the clocks this way; there will be too much jitter.
You also can't really separate the inbound from the outbound latency, as you have no way to know the relationship in time of A to a, or of B to b.
What you might be able to do with this
The total time seen in the UI (B - A), less the total time seen in the backend (b - a), gives a good estimate of the round-trip network latency; halving it gives the average one-way latency,
i.e. network_latency = ((B-A) - (b-a)) / 2
Averaged over enough samples this might be good enough?
FWIW, you could just have the backend include its own "processing_time" in the response, and the UI could then store "A" in the context of the request and calculate "B-A" once a successful response comes back. The idea is the same though.
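For illustration, here is a minimal sketch of that processing_time variant (the field name and URL are made up; adapt them to your API):
// Sketch only: estimate one-way latency from the round trip and the
// backend-reported processing time. "processing_time_ms" is a hypothetical
// field your backend would have to include in its response.
async function estimateLatency(url) {
  const A = Date.now();                                   // request sent (UI clock)
  const response = await fetch(url);
  const B = Date.now();                                   // response received (UI clock)
  const { processing_time_ms } = await response.json();   // (b - a) measured on the backend
  // network_latency = ((B - A) - (b - a)) / 2, averaged over many samples in practice
  return ((B - A) - processing_time_ms) / 2;
}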
You can use react-native-debugger to get the familiar Network tab from web development for use in React Native!
https://github.com/jhen0409/react-native-debugger/blob/master/docs/network-inspect-of-chrome-devtools.md
Solution 1:
Install the Axios library:
yarn add axios
response.latency will give the total time in ms.
Full code:
import React, { Component } from "react";
import { Text, StyleSheet, View } from "react-native";
import axios from "axios";

const axiosTiming = (instance) => {
  instance.interceptors.request.use((request) => {
    request.ts = Date.now();
    return request;
  });
  instance.interceptors.response.use((response) => {
    const timeInMs = `${Number(Date.now() - response.config.ts).toFixed()}ms`;
    response.latency = timeInMs;
    return response;
  });
};

axiosTiming(axios);

export default class App extends Component {
  // componentDidMount instead of the deprecated componentWillMount
  componentDidMount() {
    axios.get("https://facebook.github.io/react-native/movies.json")
      .then(function (response) {
        console.log(response.latency); // e.g. "17ms"
      })
      .catch(function (error) {
        console.log(error);
      });
  }

  render() {
    return (
      <View>
        <Text> test </Text>
      </View>
    );
  }
}
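If you'd rather not patch the global axios instance, the same axiosTiming helper can be attached to a dedicated instance created with axios.create() (a small sketch):
// Sketch: a dedicated axios instance so the timing interceptors stay local.
const timedClient = axios.create();
axiosTiming(timedClient);

timedClient.get("https://facebook.github.io/react-native/movies.json")
  .then((response) => console.log(response.latency));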
Solution 2:
Using fetch. Note that fetch does not give us any timing attributes by default, so we time it ourselves:
const start = new Date();
return fetch('https://facebook.github.io/react-native/movies.json')
  .then((response) => response.json())
  .then((responseJson) => {
    const timeTaken = (new Date()) - start;
    console.log(timeTaken);
    return responseJson.movies;
  });
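If you want to keep the timing out of every call site, a small wrapper can return both the parsed body and the elapsed time (a sketch; note this measures the JS-side round trip, which, as the question points out, includes any event-loop delay, not pure network latency):
// Hypothetical helper: resolves with the parsed JSON plus the elapsed time in ms.
async function timedFetch(url) {
  const start = Date.now();
  const response = await fetch(url);
  const data = await response.json();
  return { data, elapsedMs: Date.now() - start };
}

// Usage:
timedFetch('https://facebook.github.io/react-native/movies.json')
  .then(({ data, elapsedMs }) => console.log(elapsedMs, data.movies));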
Related
As I understand it, in FRP (Functional Reactive Programming), we model the system as a component which receives some input signals and generates some output signals:
                ,------------.
--- input1$ --> |            | -- output1$ -->
                |   System   | -- output2$ -->
--- input2$ --> |            | -- output3$ -->
                `------------'
In this way, if we have multiple subsystems, we can plumb them together as long as we can provide operators that can pipe inputs and outputs.
Now, I'm building an app which processes video frames asynchronously. The actual processing logic is abstracted and can be provided as an argument. In a non-FRP way of thinking, I can construct the app as:
new App(async (frame) => {
  return await processFrame(frame)
})
The App is responsible for establishing communication with the underlying video pipeline: it repeatedly gets video frames and passes each frame to the given callback, and once the callback resolves, App sends back the processed frame.
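For concreteness, a minimal sketch of that non-FRP App (the Frame/ProcessedFrame/VideoPipeline types are illustrative, not the real SDK):
// Illustrative types; the real SDK types are not shown in the question.
interface Frame { data: Uint8Array }
interface ProcessedFrame { data: Uint8Array }
interface VideoPipeline {
  frames(): AsyncIterable<Frame>;
  send(frame: ProcessedFrame): void;
}

// Pull a frame, await the user-supplied callback, send the result back,
// then move on to the next frame.
class App {
  constructor(private readonly process: (frame: Frame) => Promise<ProcessedFrame>) {}

  async run(pipeline: VideoPipeline): Promise<void> {
    for await (const frame of pipeline.frames()) {
      pipeline.send(await this.process(frame));
    }
  }
}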
Now I want to model the App in an FRP way so I can flexibly design the frame processing.
const processedFrameSubject = new Subject()
const { frame$ } = createApp(processedFrameSubject)

frame$.pipe(
  map(toRGB),
  mergeMap(processRGBFrame),
  map(toYUV)
).subscribe(processedFrameSubject)
The benefit is that it enables the consumer of createApp to define the processing pipeline declaratively.
However, in createApp, given a processedFrame, I need to reason about which frame it is related to. Since frame$ and processedFrameSubject are now separated, it's really hard for createApp to reason about which frame a processedFrame is related to, which was quite easy in the non-FRP implementation because the frame and processedFrame were in the same closure.
In functional reactive programming, you avoid side effects as much as possible; this means avoiding .subscribe(...), tap(() => subject.next()), etc. With FRP, your state is declared in terms of how it works and how it's wired up, but nothing executes until someone needs it and performs the side effect.
So I think that the following API would still be considered FRP:
function createApp(
  processFrame: (frame: Frame) => Observable<ProcessedFrame>
): Observable<void>

const app$ = createApp(frame => of(frame).pipe(
  map(toRGB),
  mergeMap(processRGBFrame),
  map(toYUV)
));

// `app$` is an Observable that can be consumed by composing it to other
// observables, or by "executing the side effect" by .subscribe() on it
// possible implementation of createApp for this API
function createApp(
  processFrame: (frame: Frame) => Observable<ProcessedFrame>
) {
  return new Observable<void>(() => {
    const stopVideoHandler = registerVideoFrameHandler(
      (frame: Frame) => firstValueFrom(processFrame(frame))
    );
    return () => {
      // teardown
      stopVideoHandler()
    }
  });
}
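As a usage sketch, a consumer would run the app by subscribing and stop it by unsubscribing (which runs the teardown above):
const subscription = app$.subscribe();
// ... later, e.g. when the user leaves the screen:
subscription.unsubscribe();   // tears down the video handler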
Something worth noting is that createApp is returning a new Observable. Inside new Observable( we can escape from FRP because it's the only way we can integrate with external parties, and all the side effects we have written won't be called until someone .subscribe()s to the observable.
This API is simple and would still be FRP, but it has one limitation: the processFrame callback can only process frames independently from others.
If you need an API where processing one frame can depend on other frames, then you need to expose the frame$ stream itself; but again, this is just a projection function passed to createApp:
function createApp(
  projectFn: (frame$: Observable<Frame>) => Observable<ProcessedFrame>
): Observable<void>

const app$ = createApp(frame$ => frame$.pipe(
  map(toRGB),
  mergeMap(processRGBFrame),
  map(toYUV)
));

// possible declaration of createApp
function createApp(
  projectFn: (frame$: Observable<Frame>) => Observable<ProcessedFrame>
) {
  return new Observable<void>(() => {
    const frame$ = new Subject<Frame>();
    const processedFrame$ = connectable(frame$.pipe(projectFn));
    const processedSub = processedFrame$.connect();
    const stopVideoHandler = registerVideoFrameHandler(
      (frame: Frame) => {
        // We need to create the promise _before_ we send in the next `frame$`,
        // in case it's processed synchronously
        const resultFrame = firstValueFrom(processedFrame$);
        frame$.next(frame);
        return resultFrame;
      }
    );
    return () => {
      // teardown
      stopVideoHandler()
      processedSub.unsubscribe();
    }
  });
}
I'm guessing here that registerVideoFrameHandler will call the function one-by-one without overlap? If there's overlap, then you'd need to track the frame number in some way; if the SDK doesn't give you any option, then try something like:
// Assuming `projectFn` will emit frames in order. If not, then the API
// should change to be able to match them
const processedFrame$ = connectable(frame$.pipe(
  projectFn,
  map((result, index) => ({ result, index }))
));
const processedSub = processedFrame$.connect();

let frameIdx = 0;
const stopVideoHandler = registerVideoFrameHandler(
  (frame: Frame) => {
    const thisIdx = frameIdx;
    frameIdx++;
    const resultFrame = firstValueFrom(processedFrame$.pipe(
      filter(({ index }) => index === thisIdx),
      map(({ result }) => result)
    ));
    frame$.next(frame);
    return resultFrame;
  }
);
Let's say I have a rather typical use of Rx that does requests every time some change event comes in (I write this in the .NET style, but I'm really thinking of JavaScript):
myChanges
  .Throttle(200)
  .Select(async data => {
    await someLongRunningWriteRequest(data);
  })
If the request takes longer than 200ms, there's a chance a new request begins before the old one is done - potentially even that the new request is completed first.
How to synchronize this?
Note that this has nothing to do with multithreading, and that's the only thing I could find information about when googling for "rx synchronization" or something similar.
You could use the concatMap operator, which will start working on the next item only after the previous one has completed.
Here is an example where events$ are emitted at an interval of 200ms and then processed successively, each taking a different amount of time:
const { Observable } = Rx;

const fakeWriteRequest = data => {
  console.log('started working on: ', data);
  return Observable.of(data).delay(Math.random() * 2000);
}

const events$ = Observable.interval(200);

events$.take(10)
  .concatMap(i => fakeWriteRequest(i))
  .subscribe(e => console.log(e));
<script src="https://unpkg.com/rxjs/bundles/Rx.min.js"></script>
I'm working on a turn-based game in Angular that communicates to a backend via a socket.io implementation. In my component, I am listening for several types of communication from the server, each communication gives information on how to update my view to reflect the current state of the data in the server.
Right now, updates are immediately applied to the component's data. However I'd prefer to render each update with some delay in-between, so that the user has time to see the effect of each update.
(See my image at top for essentially what I'm trying to do)
I believe that I would achieve this via the subscribeOn operator, but I'm unsure of how to specify my 'interval' n.
const example = Rx.Observable
  .create(observer => {
    observer.next(0);
    observer.next(1);
    observer.next(2);
    setTimeout(() => {
      observer.next(3);
      observer.next(4);
      observer.complete();
    }, 2500);
  });

const source = example
  .subscribeOn(Scheduler.timeout);

source.subscribe(console.log);
Use the concatMap operator as follows:
const nInterval = 500;

const example$ = Rx.Observable.from([0, 1, 2])
  .concat(Rx.Observable.from([3, 4]).delay(2500));

const source$ = example$
  .concatMap(item =>
    Rx.Observable.of(item)
      .concat(
        Rx.Observable.of('ignored')
          .delay(nInterval)
          .ignoreElements()
      )
  );

source$.subscribe(console.log);
<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/5.5.2/Rx.min.js"></script>
I'm new to ReactiveX/RxJs and I'm wondering if my use-case can be handled cleanly with RxJs, preferably with a combination of built-in operators. Here's what I want to achieve:
I have an Angular2 application that communicates with a REST API. Different parts of the application need to access the same information at different times. To avoid hammering the servers by firing the same request over and over, I'd like to add client-side caching. The caching should happen in a service layer, where the network calls are actually made. This service layer then just hands out Observables. The caching must be transparent to the rest of the application: it should only be aware of Observables, not the caching.
So initially, a particular piece of information from the REST API should be retrieved only once per, let's say, 60 seconds, even if there's a dozen components requesting this information from the service within those 60 seconds. Each subscriber must be given the (single) last value from the Observable upon subscription.
Currently, I managed to achieve exactly that with an approach like this:
public getInformation(): Observable<Information> {
  if (!this.information) {
    this.information = this.restService.get('/information/')
      .cache(1, 60000);
  }
  return this.information;
}
In this example, restService.get(...) performs the actual network call and returns an Observable, much like Angular's http Service.
The problem with this approach is refreshing the cache: while it makes sure the network call is executed exactly once, and that the cached value will no longer be pushed to new subscribers after 60 seconds, it doesn't re-execute the initial request after the cache expires. So subscriptions that occur after the 60-second cache window will not be given any value from the Observable.
Would it be possible to re-execute the initial request if a new subscription happens after the cache timed out, and to re-cache the new value for 60sec again?
As a bonus: it would be even cooler if existing subscriptions (e.g. those who initiated the first network call) would get the refreshed value whose fetching had been initiated by the newer subscription, so that once the information is refreshed, it is immediately passed through the whole Observable-aware application.
I figured out a solution to achieve exactly what I was looking for. It might go against ReactiveX nomenclature and best practices, but technically, it does exactly what I want it to. That being said, if someone still finds a way to achieve the same with just built-in operators, I'll be happy to accept a better answer.
So basically since I need a way to re-trigger the network call upon subscription (no polling, no timer), I looked at how the ReplaySubject is implemented and even used it as my base class. I then created a callback-based class RefreshingReplaySubject (naming improvements welcome!). Here it is:
export class RefreshingReplaySubject<T> extends ReplaySubject<T> {

  private providerCallback: () => Observable<T>;
  private lastProviderTrigger: number;
  private windowTime;

  constructor(providerCallback: () => Observable<T>, windowTime?: number) {
    // Cache exactly 1 item forever in the ReplaySubject
    super(1);
    this.windowTime = windowTime || 60000;
    this.lastProviderTrigger = 0;
    this.providerCallback = providerCallback;
  }

  protected _subscribe(subscriber: Subscriber<T>): Subscription {
    // Hook into the subscribe method to trigger refreshing
    this._triggerProviderIfRequired();
    return super._subscribe(subscriber);
  }

  protected _triggerProviderIfRequired() {
    let now = this._getNow();
    if ((now - this.lastProviderTrigger) > this.windowTime) {
      // Data considered stale, provider triggering required...
      this.lastProviderTrigger = now;
      this.providerCallback().first().subscribe((t: T) => this.next(t));
    }
  }
}
And here is the resulting usage:
public getInformation(): Observable<Information> {
  if (!this.information) {
    this.information = new RefreshingReplaySubject(
      () => this.restService.get('/information/'),
      60000
    );
  }
  return this.information;
}
To implement this, you will need to create your own observable with custom logic on subscription:
function createTimedCache(doRequest, expireTime) {
  let lastCallTime = 0;
  let lastResult = null;
  const result$ = new Rx.Subject();
  return Rx.Observable.create(observer => {
    const time = Date.now();
    if (time - lastCallTime < expireTime) {
      return (lastResult
        // when result already received
        ? result$.startWith(lastResult)
        // still waiting for result
        : result$
      ).subscribe(observer);
    }
    const disposable = result$.subscribe(observer);
    lastCallTime = time;
    lastResult = null;
    doRequest()
      .do(result => {
        lastResult = result;
      })
      .subscribe(v => result$.next(v), e => result$.error(e));
    return disposable;
  });
}
and the resulting usage would be the following:
this.information = createTimedCache(
  () => this.restService.get('/information/'),
  60000
);
usage example: https://jsbin.com/hutikesoqa/edit?js,console
I've noticed that the ReactDOM.renderToString() method starts to slow down significantly when rendering a large component tree on the server.
Background
A bit of background. The system is a fully isomorphic stack. The highest-level App component renders templates, pages, DOM elements, and more components. Looking in the React code, I found it renders ~1500 components (this is inclusive of any simple DOM tag that gets treated as a simple component, e.g. <p>this is a react component</p>).
In development, rendering ~1500 components takes ~200-300ms. By removing some components I was able to get ~1200 components to render in ~175-225ms.
In production, renderToString on ~1500 components takes around ~50-200ms.
The time does appear to be linear. No one component is slow, rather it is the sum of many.
Problem
This creates some problems on the server. The lengthy method results in long server response times. The TTFB is a lot higher than it should be. With API calls and business logic the response should be 250ms, but with a 250ms renderToString it is doubled! Bad for SEO and users. Also, being a synchronous method, renderToString() can block the node server and back up subsequent requests (this could be solved by using 2 separate node servers: 1 as a web server, and 1 as a service solely to render react).
Attempts
Ideally, it would take 5-50ms to renderToString in production. I've been working on some ideas, but I'm not exactly sure what the best approach would be.
Idea 1: Caching components
Any component that is marked as 'static' could be cached. By keeping a cache of the rendered markup, renderToString() could check the cache before rendering. If it finds a component, it automatically grabs the string. Doing this at a high-level component would save mounting all the nested child components. You would have to replace the cached component markup's React rootID with the current rootID.
Idea 2: Marking components as simple/dumb
By defining a component as 'simple', React should be able to skip all the lifecycle methods when rendering. React already does this for the core React DOM components (<p/>, <h1/>, etc). It would be nice to extend custom components to use the same optimization.
Idea 3: Skip components on server-side render
Components that do not need to be returned by the server (no SEO value) could simply be skipped on the server. Once the client loads, set a clientLoaded flag to true and pass it down to enforce a re-render.
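For illustration, Idea 3 could look something like this (ExpensiveChart and the clientLoaded flag are just placeholder names for the pattern described above):
// Renders nothing during renderToString(); after mounting in the browser,
// the flag flips and the client re-renders the SEO-irrelevant content.
class ClientOnlyChart extends React.Component {
  constructor(props) {
    super(props);
    this.state = { clientLoaded: false };
  }

  componentDidMount() {
    // Only runs in the browser, never on the server.
    this.setState({ clientLoaded: true });
  }

  render() {
    if (!this.state.clientLoaded) return null;
    return <ExpensiveChart {...this.props} />;
  }
}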
Closing and other attempts
The only solution I've implemented thus far is to reduce the number of components that are rendered on the server.
Some projects we're looking at include:
- React-dom-stream (still working on implementing this for a test)
- Babel inline elements (seems like this is along the lines of Idea 2)
Has anybody faced similar issues? What have you been able to do?
Thanks.
Using react-router 1.0 and react 0.14, we were mistakenly serializing our flux object multiple times.
RoutingContext will call createElement for every template in your react-router routes. This allows you to inject whatever props you want. We also use flux. We send down a serialized version of a large object. In our case, we were doing flux.serialize() within createElement. The serialization method could take ~20ms. With 4 templates, that would add an extra 80ms to your renderToString() call!
Old code:
function createElement(Component, props) {
  props = _.extend(props, {
    flux: flux,
    path: path,
    serializedFlux: flux.serialize()
  });
  return <Component {...props} />;
}
var start = Date.now();
markup = renderToString(<RoutingContext {...renderProps} createElement={createElement} />);
console.log(Date.now() - start);
Easily optimized to this:
var serializedFlux = flux.serialize(); // serialize one time only!

function createElement(Component, props) {
  props = _.extend(props, {
    flux: flux,
    path: path,
    serializedFlux: serializedFlux
  });
  return <Component {...props} />;
}
var start = Date.now();
markup = renderToString(<RoutingContext {...renderProps} createElement={createElement} />);
console.log(Date.now() - start);
In my case this helped reduce the renderToString() time from ~120ms to ~30ms. (You still need to add the one-off serialize()'s ~20ms to the total, since it now happens before the renderToString().) It was a nice quick improvement. It's important to remember to always do things correctly, even if you don't know the immediate impact!
Idea 1: Caching components
Update 1: I've added a complete working example at the bottom. It caches components in memory and updates data-reactid.
This can actually be done easily. You should monkey-patch ReactCompositeComponent and check for a cached version:
import ReactCompositeComponent from 'react/lib/ReactCompositeComponent';

const originalMountComponent = ReactCompositeComponent.Mixin.mountComponent;

ReactCompositeComponent.Mixin.mountComponent = function() {
  if (hasCachedVersion(this)) return cache;
  return originalMountComponent.apply(this, arguments)
}
You should do this before you require('react') anywhere in your app.
Webpack note: If you use something like new webpack.ProvidePlugin({'React': 'react'}) you should change it to new webpack.ProvidePlugin({'React': 'react-override'}) where you do your modifications in react-override.js and export react (i.e. module.exports = require('react'))
A complete example that caches in memory and updates reactid attribute could be this:
import ReactCompositeComponent from 'react/lib/ReactCompositeComponent';
import jsan from 'jsan';
import Logo from './logo.svg';

const cachable = [Logo];
const cache = {};

function splitMarkup(markup) {
  var markupParts = [];
  var reactIdPos = -1;
  var endPos, startPos = 0;
  while ((reactIdPos = markup.indexOf('reactid="', reactIdPos + 1)) != -1) {
    endPos = reactIdPos + 9;
    markupParts.push(markup.substring(startPos, endPos))
    startPos = markup.indexOf('"', endPos);
  }
  markupParts.push(markup.substring(startPos))
  return markupParts;
}

function refreshMarkup(markup, hostContainerInfo) {
  var refreshedMarkup = '';
  var reactid;
  var reactIdSlotCount = markup.length - 1;
  for (var i = 0; i <= reactIdSlotCount; i++) {
    reactid = i != reactIdSlotCount ? hostContainerInfo._idCounter++ : '';
    refreshedMarkup += markup[i] + reactid
  }
  return refreshedMarkup;
}

const originalMountComponent = ReactCompositeComponent.Mixin.mountComponent;

ReactCompositeComponent.Mixin.mountComponent = function (renderedElement, hostParent, hostContainerInfo, transaction, context) {
  var el = this._currentElement;
  var elType = el.type;
  var markup;
  if (cachable.indexOf(elType) > -1) {
    var publicProps = el.props;
    var id = elType.name + ':' + jsan.stringify(publicProps);
    markup = cache[id];
    if (markup) {
      return refreshMarkup(markup, hostContainerInfo)
    } else {
      markup = originalMountComponent.apply(this, arguments);
      cache[id] = splitMarkup(markup);
    }
  } else {
    markup = originalMountComponent.apply(this, arguments)
  }
  return markup;
}

module.exports = require('react');
It's not a complete solution
I had the same issue with my React isomorphic app, and I used a couple of things:
- Use Nginx in front of your Node.js server, and cache the rendered response for a short time.
- In case of showing a list of items, I use only a subset of the list. For example, I will render only X items to fill up the viewport, and load the rest of the list on the client side using WebSocket or XHR (see the sketch after this list).
- Some of my components are empty in server-side rendering and will only load from client-side code (componentDidMount). These components are usually graphs or profile-related components, which usually don't have any benefit from an SEO point of view.
- About SEO: from my experience of 6 months with an isomorphic app, Googlebot can read a client-side React web page easily, so I'm not sure why we bother with the server-side rendering.
- Keep the <Head> and <Footer> as static strings or use a template engine (Reactjs-handlebars), and render only the content of the page (it should save a few rendered components). In the case of a single-page app, you can update the title and description on each navigation inside Router.Run.
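A sketch of the "render only a viewport's worth of items" point from the list above (the item count and fetch URL are illustrative, not from the original answer):
const ABOVE_THE_FOLD = 20;

class ItemList extends React.Component {
  constructor(props) {
    super(props);
    // The server renders only the items it was given (roughly one viewport's worth).
    this.state = { items: props.initialItems };
  }

  componentDidMount() {
    // Client side: fetch the remaining items after the first paint.
    fetch('/api/items?offset=' + ABOVE_THE_FOLD)
      .then((res) => res.json())
      .then((rest) => this.setState({ items: this.state.items.concat(rest) }));
  }

  render() {
    return <ul>{this.state.items.map((item) => <li key={item.id}>{item.name}</li>)}</ul>;
  }
}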
I think fast-react-render can help you. It makes your server rendering roughly three times faster.
To try it, you only need to install the package and replace ReactDOM.renderToString with ReactRender.elementToString:
var ReactRender = require('fast-react-render');
var element = React.createElement(Component, {property: 'value'});
console.log(ReactRender.elementToString(element, {context: {}}));
You can also use fast-react-server; in that case rendering will be 14 times as fast as traditional React rendering. But for that, each component you want to render must be declared with it (see an example in fast-react-seed of how you can do it for webpack).