How long will it take to find a DOM element in Cypress?

I am trying to improve Cypress performance.
Let's assume we have thousands of div elements on the web page.
In this case, I would like to know how long it will take to complete this command:
cy.get('div').contains('Testing sentence').click()
If we have a hundred div elements on the web page, how long would it take?
I also want to set a limit on the DOM element count. What number is considered best practice?
To avoid this problem, we can use more specific selectors such as a class, id, or data-testid.
But I just want to know how long the above command will take to find the element.
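For reference, this is what I mean by a more specific selector (a minimal sketch; the data-testid value and the #results container are made up for illustration):
// Hypothetical data-testid; scoping the query avoids scanning every div on the page
cy.get('[data-testid="testing-sentence"]').click()
// Or keep the text assertion but narrow the search to a container first
cy.get('#results').contains('Testing sentence').click()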

Related

Iterating through Google events that are in the past

I'm implementing a view for Google Calendar events in my application using the following endpoint:
https://developers.google.com/google-apps/calendar/v3/reference/events/list
The problem I have is implementing a feature that makes it possible to go to the previous page of events. For example: the user sees 20 events for the current date, and once they press the button, they see the 20 preceding events.
As I can see, Google provides only:
"nextPageToken": string
which fetches the results for the next page.
The ways I can see to solve the problem:
Fetch results in descending order and then traverse them the same way as we do with nextPageToken. The problem is that the docs state that only ascending order is available:
"startTime": Order by the start date/time (ascending). This is only
available when querying single events (i.e. the parameter singleEvents
is True)
Fetch all the events for a specific time period, traverse the pages until I get to the current date or to the end of the list, and memorize all the nextPageTokens. Use the memorized values to be able to go backwards (a rough sketch follows below). The clear drawback is that we need to go through an unpredictable number of pages to reach the current date, which can dramatically affect performance. But at least it is something the Google APIs allow. Update: I checked that approach with a 5-year time span, and sometimes it takes up to 20 seconds to get the current date's page token.
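For clarity, here is a rough sketch of that second approach as I imagine it (authentication, error handling, the calendar id, and the time window are all placeholders):
// Walk the pages forward from timeMin, remembering each page token so we can
// jump back to any earlier page later. OAuth/API-key handling is omitted.
async function collectPageTokens(calendarId, timeMin, timeMax, accessToken) {
  const base = 'https://www.googleapis.com/calendar/v3/calendars/' + encodeURIComponent(calendarId) + '/events';
  const tokens = [undefined]; // tokens[i] fetches page i (the first page has no token)
  let pageToken;
  do {
    const params = new URLSearchParams({
      singleEvents: 'true',
      orderBy: 'startTime',
      maxResults: '20',
      timeMin,
      timeMax,
    });
    if (pageToken) params.set('pageToken', pageToken);
    const res = await fetch(base + '?' + params, {
      headers: { Authorization: 'Bearer ' + accessToken },
    });
    const data = await res.json();
    pageToken = data.nextPageToken;
    if (pageToken) tokens.push(pageToken);
  } while (pageToken);
  return tokens; // reuse tokens[i] later to re-fetch page i directly
}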
Is there a more convenient way to implement the ability to go to the previous pages?

How to cache an illustrated map?

A website has an illustrated map as a central feature, with waterfall-like loading on the initial visit. The goal is that the map shouldn't have to reload each time a user visits the page within a given session. How can we prevent the map from constantly reloading each time the user navigates during a session?
Link here: http://thebambergergroup.com/map/murray-hill/nyc
Hopefully I'm understanding this question right... here goes!
The quick-and-dirty way would be to use a cookie to store a bit of state about whether or not the user has loaded the map yet, then use that to decide how the map should load (either waterfall fade-in animations, or all at once).
So, when the image load is done (if you don't have a handle on when it's finished, either cheat and use a timer, or just assume the map is loaded once the user has visited the page at all), set a cookie:
//please don't actually do it this way,
//there are many better libraries to handle cookies
//you'll also want to look at expiry time, etc
function loadImagesFinished(){
  document.cookie = 'mapLoaded=true';
}
Then, for when your page is loading and you want to decide how to load the images:
// getCookie is a placeholder; use your favourite cookie helper here
var haveMapImagesBeenLoaded = getCookie('mapLoaded');
if (haveMapImagesBeenLoaded){
  loadImagesAllAtOnce();
} else {
  loadImagesWaterfallAnimation();
}
Obviously a lot of oversimplification happening, but you get the idea.
If you'd like a more robust way, you can use server-side session storage (if your language supports it, which it likely does). The general approach is still the same, though. Check if the session contains the mapLoaded attribute; if not, set it and write out something to indicate to the browser that it needs to load the map with the waterfall animation (different HTML, a JS variable inside a <script> tag, an HTTP response header, a cookie... anything!). Otherwise, load it all at once!
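If you go the session route, here's a minimal sketch using Node/Express with express-session, purely as an illustration (the route, template name, and flag are made up; any server-side session mechanism works the same way):
// npm install express express-session
const express = require('express');
const session = require('express-session');

const app = express();
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: true }));

app.get('/map', (req, res) => {
  // First visit in this session: play the waterfall animation, then remember it
  const firstVisit = !req.session.mapLoaded;
  req.session.mapLoaded = true;
  res.render('map', { animateWaterfall: firstVisit }); // assumes a view template
});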

Can a React-Redux app really scale as well as, say, Backbone? Even with reselect. On mobile

In Redux, every change to the store triggers a notify on all connected components. This makes things very simple for the developer, but what if you have an application with N connected components, and N is very large?
Every change to the store, even if unrelated to the component, still runs a shouldComponentUpdate with a simple === test on the reselected paths of the store. That's fast, right? Sure, maybe once. But N times, for every change? This fundamental change in design makes me question the true scalability of Redux.
As a further optimization, one can batch all notify calls using _.debounce. Even so, having N === tests for every store change and handling other logic, for example view logic, seems like a means to an end.
I'm working on a health & fitness social mobile-web hybrid application with millions of users and am transitioning from Backbone to Redux. In this application, a user is presented with a swipeable interface that allows them to navigate between different stacks of views, similar to Snapchat, except each stack has infinite depth. In the most popular type of view, an endless scroller efficiently handles the loading, rendering, attaching, and detaching of feed items, like a post. For an engaged user, it is not uncommon to scroll through hundreds or thousands of posts, then enter a user's feed, then another user's feed, etc. Even with heavy optimization, the number of connected components can get very large.
Now on the other hand, Backbone's design allows every view to listen precisely to the models that affect it, reducing N to a constant.
Am I missing something, or is Redux fundamentally flawed for a large app?
This is not a problem inherent to Redux IMHO.
By the way, instead of trying to render 100k components at the same time, you should try to fake it with a lib like react-infinite or something similar, and only render the visible (or nearly visible) items of your list. Even if you succeed in rendering and updating a 100k-item list, it's still not performant and it takes a lot of memory. Here is some advice from LinkedIn.
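For illustration, a minimal hand-rolled sketch of that windowing idea (libraries like react-infinite do this for you; the fixed row height, the item shape, and passing scrollTop as a prop are assumptions):
// Render only the rows that fall inside the scroll viewport (plus a small buffer)
const ITEM_HEIGHT = 40; // assumed fixed row height in pixels

function VisibleWindow({ items, scrollTop, viewportHeight }) {
  const start = Math.max(0, Math.floor(scrollTop / ITEM_HEIGHT) - 5);
  const end = Math.min(items.length, Math.ceil((scrollTop + viewportHeight) / ITEM_HEIGHT) + 5);
  return (
    <div style={{ height: items.length * ITEM_HEIGHT, position: 'relative' }}>
      {items.slice(start, end).map((item, i) => (
        <div key={item.id} style={{ position: 'absolute', top: (start + i) * ITEM_HEIGHT }}>
          {item.text}
        </div>
      ))}
    </div>
  );
}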
This answer assumes that you still want to render 100k updatable items in your DOM, and that you don't want 100k listeners (store.subscribe()) to be called on every single change.
2 schools
When developing a UI app in a functional way, you basically have two choices:
Always render from the very top
It works well but involves more boilerplate. It's not exactly the suggested Redux way, but it is achievable, with some drawbacks. Notice that even if you manage to have a single Redux connection, you still have to call a lot of shouldComponentUpdate in many places. If you have an infinite stack of views (like a recursion), you will have to render all the intermediate views as virtual DOM as well, and shouldComponentUpdate will be called on many of them. So this is not really more efficient, even if you have a single connect.
If you don't plan to use the React lifecycle methods but only pure render functions, then you should probably consider other similar options that focus only on that job, like deku (which can be used with Redux).
In my own experience, doing so with React is not performant enough on older mobile devices (like my Nexus 4), particularly if you link text inputs to your atom state.
Connecting data to child components
This is what react-redux suggests by using connect. So when the state changes and it only relates to a deeper child, you render only that child and do not have to render the top-level components every time, such as the context providers (redux/intl/custom...) or the main app layout. You also avoid calling shouldComponentUpdate on other children because it's already baked into the listener. Calling a lot of very fast listeners is probably faster than rendering intermediate React components every time, and it also reduces a lot of props-passing boilerplate, so for me it makes sense when used with React.
Also notice that identity comparison is very fast, and you can easily do a lot of comparisons on every change. Remember Angular's dirty checking: some people did manage to build real apps with that, and identity comparison is much faster.
Understanding your problem
I'm not sure I understand your whole problem perfectly, but I understand that you have views with something like 100k items in them, and you wonder whether you should use connect with all those 100k items, because calling 100k listeners on every single change seems costly.
This problem seems inherent to the nature of doing functional programming with the UI: the list was updated, so you have to re-render the list, but unfortunately it is a very long list and that seems inefficient... With Backbone you could hack something to only render the child. Even if you render that child with React, you would trigger the rendering in an imperative way instead of just declaring "when the list changes, re-render it".
Solving your problem
Obviously, connecting the 100k list items seems convenient but is not performant, because it means calling 100k react-redux listeners, even if they are fast.
Now if you connect the big list of 100k items instead of each item individually, you only call a single react-redux listener, and then have to render that list in an efficient way.
Naive solution
Iterating over the 100k items to render them, leading to 99999 items returning false in shouldComponentUpdate and a single one re-rendering:
list.map(item => this.renderItem(item))
Performant solution 1: custom connect + store enhancer
The connect method of React-Redux is just a Higher-Order Component (HOC) that injects the data into the wrapped component. To do so, it registers a store.subscribe(...) listener for every connected component.
If you want to connect 100k items of a single list, it is a critical path of your app that is worth optimizing. Instead of using the default connect, you could build your own.
Store enhancer
Expose an additional method store.subscribeItem(itemId,listener)
Wrap dispatch so that whenever an action related to an item is dispatched, you call the registered listener(s) of that item.
A good source of inspiration for this implementation can be redux-batched-subscribe.
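A rough sketch of what such an enhancer could look like (subscribeItem and the convention that item-related actions carry action.payload.item.id are invented for this example, not Redux APIs):
// Adds store.subscribeItem(itemId, listener) and notifies only the listeners of the
// item touched by an action, instead of notifying every store.subscribe listener.
const itemSubscriptionsEnhancer = createStore => (reducer, preloadedState) => {
  const store = createStore(reducer, preloadedState);
  const itemListeners = new Map(); // itemId -> Set of listeners

  const subscribeItem = (itemId, listener) => {
    if (!itemListeners.has(itemId)) itemListeners.set(itemId, new Set());
    itemListeners.get(itemId).add(listener);
    return () => itemListeners.get(itemId).delete(listener); // unsubscribe
  };

  const dispatch = action => {
    const result = store.dispatch(action);
    // Assumed convention: item-related actions carry the item id in their payload
    const itemId = action.payload && action.payload.item && action.payload.item.id;
    if (itemId !== undefined && itemListeners.has(itemId)) {
      itemListeners.get(itemId).forEach(listener => listener());
    }
    return result;
  };

  return { ...store, dispatch, subscribeItem };
};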
Custom connect
Create a Higher-Order component with an API like:
Item = connectItem(Item)
The HOC can expect an itemId property. It can use the Redux enhanced store from the React context and then register its listener: store.subscribeItem(itemId, callback). The source code of the original connect can serve as a base for inspiration.
The HOC will only trigger a re-rendering if the item changes.
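A rough sketch of what connectItem could look like on top of that enhancer (the itemsById state shape and receiving the store as a prop are assumptions for the example):
// Wraps a component so it re-renders only when its own item changes
const connectItem = WrappedComponent =>
  class ConnectedItem extends Component {
    state = { item: this.selectItem() };

    selectItem() {
      // Assumed state shape: items indexed by id
      return this.props.store.getState().itemsById[this.props.itemId];
    }

    componentDidMount() {
      this.unsubscribe = this.props.store.subscribeItem(this.props.itemId, () => {
        this.setState({ item: this.selectItem() });
      });
    }

    componentWillUnmount() {
      this.unsubscribe();
    }

    render() {
      return <WrappedComponent {...this.props} item={this.state.item} />;
    }
  };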
Related answer: https://stackoverflow.com/a/34991164/82609
Related react-redux issue: https://github.com/rackt/react-redux/issues/269
Performant solution 2: listening for events inside child components
It is also possible to listen to Redux actions directly in components, using redux-dispatch-subscribe or something similar, so that after the first list render, you listen for updates directly in the item component and override the original data provided by the parent list.
class MyItemComponent extends Component {
  state = {
    itemUpdated: undefined, // Will store the locally updated item
  };
  componentDidMount() {
    this.unsubscribe = this.props.store.addDispatchListener(action => {
      const isItemUpdate = action.type === "MY_ITEM_UPDATED" && action.payload.item.id === this.props.itemId;
      if (isItemUpdate) {
        this.setState({itemUpdated: action.payload.item});
      }
    });
  }
  componentWillUnmount() {
    this.unsubscribe();
  }
  render() {
    // Initially use the data provided by the parent, but once it's updated by some event, use the updated data
    const item = this.state.itemUpdated || this.props.item;
    return (
      <div>
        {...}
      </div>
    );
  }
}
In this case redux-dispatch-subscribe may not be very performant, as you would still create 100k subscriptions. You'd rather build your own optimized middleware, similar to redux-dispatch-subscribe, with an API like store.listenForItemChanges(itemId), storing the item listeners in a map for fast lookup of the correct listeners to run.
Performant solution 3: vector tries
A more performant approach would consider using a persistent data structure like a vector trie:
If you represent your 100k-item list as a trie, each intermediate node has the possibility to short-circuit the rendering sooner, which makes it possible to avoid a lot of shouldComponentUpdate calls in children.
This technique can be used with ImmutableJS, and you can find some experiments I did with it here: React performance: rendering big list with PureRenderMixin
It has drawbacks, however, as libraries like ImmutableJS do not yet expose public/stable APIs to do that (issue), and my solution pollutes the DOM with some useless intermediate <span> nodes (issue).
Here is a JsFiddle that demonstrates how an ImmutableJS list of 100k items can be rendered efficiently. The initial rendering is quite long (but I guess you don't initialize your app with 100k items!), but afterwards you can notice that each update only leads to a small number of shouldComponentUpdate calls. In my example I only update the first item every second, and you will notice that even though the list has 100k items, it only requires something like 110 calls to shouldComponentUpdate, which is much more acceptable! :)
Edit: it seems ImmutableJS does not do so well at preserving its immutable structure on some operations, like inserting/deleting items at a random index. Here is a JsFiddle that demonstrates the performance you can expect depending on the operation on the list. Surprisingly, if you want to append many items at the end of a large list, calling list.push(value) many times seems to preserve the tree structure much better than calling list.concat(values).
By the way, it is documented that the List is efficient when modifying the edges. I don't think this poor performance on adding/removing at a given index is related to my technique, but rather to the underlying ImmutableJS List implementation.
Lists implement Deque, with efficient addition and removal from both the end (push, pop) and beginning (unshift, shift).
This may be a more general answer than you're looking for, but broadly speaking:
The recommendation from the Redux docs is to connect React components fairly high in the component hierarchy. See this section. This keeps the number of connections manageable, and you can then just pass updated props into the child components.
Part of the power and scalability of React comes from avoiding rendering of invisible components. For example, instead of setting an invisible class on a DOM element, in React we just don't render the component at all. Re-rendering components that haven't changed isn't a problem either, since the virtual DOM diffing process optimizes the low-level DOM interactions.

How to be certain the page load is complete in order to measure load time

I need to measure load time on a page navigation. Here is my situation:
When I navigate, the page load takes a variable amount of time as the AJAX elements load. How can I be certain that the page is fully loaded so that I can measure its load time correctly?
I cannot rely on locating a particular element (text, table, image...) to indicate that the page load is complete, since what loads depends on the data.
Please help me deal with this situation.
Thanks
Do you want to be able to test this on an "as needed" basis or do you want to instrument the pages so that you gather data from all your users?
If you just need to do it on an ad-hoc basis then http://webpagetest.org will help you - provided there's not too long a gap between the AJAX requests, it will include them.
If you want to gather data across all AJAX calls then you will need to instrument the success and failure callbacks to store the time they finish and calculate the difference between the last one and the page start. Then, once you've got this, push the value to Google Analytics or something else.
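As a rough sketch of that instrumentation, assuming jQuery is making the AJAX calls and analytics.js is loaded (the two-second quiet period is an arbitrary choice):
// Record when each AJAX call finishes; the latest finish time minus navigationStart
// approximates "fully loaded" once no new calls have completed for a while.
var lastAjaxFinished = 0;

$(document).ajaxComplete(function () {
  lastAjaxFinished = Date.now();
});

window.addEventListener('load', function () {
  var timer = setInterval(function () {
    if (lastAjaxFinished !== 0 && Date.now() - lastAjaxFinished > 2000) {
      clearInterval(timer);
      var fullyLoadedMs = lastAjaxFinished - performance.timing.navigationStart;
      ga('send', 'timing', 'Page', 'fullyLoaded', fullyLoadedMs); // or any other analytics sink
    }
  }, 500);
});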
If all your AJAX calls are designed to complete before onload fires then the existing SiteSpeed numbers in Google Analytics might be good enough for you.

GWT - Populate Grid asynchronously

We've got a GWT application with a simple search mask displaying the results as a grid.
Server-side processing time is OK, as is network latency.
Client rendering time is OK even on low-spec hardware with Internet Explorer 6, as long as the number of results is not too high (max 100 rows in the grid).
We have implemented a navigation scheme allowing the user to scroll up/down the grid. That's fast enough also.
Does anybody have an idea whether it is possible to display the first 100 results immediately and pull the rest in the background? The GWT architecture allows this. However, I'm interested in possible pitfalls, e.g. what happens if the user starts another query while the browser is still fetching previous results, etc.
Thanks!
Holger
LazyPanel and this blog post might be a good starting point for you :)
The GWT Incubator also has many interesting (albeit not always complete/perfect/stable) tables and other pagination solutions - like PagingScrollTable.
Assuming your plan is to send the first 100 and then bring the rest, you can fetch the rest of the results in bulks. Then, if a user initiates another search, you just wait for the end of the current bulk (i.e., check between bulk retrievals whether you have a pending query).
Another way you can go is to assign identifiers to the user's searches. This makes the problem of mixed results non-existent and will also help you with a results history for multiple searches.
We found that users love the live grid look & feel, which solves most of those problems, but that might not always be an option.
