useQuery hook, client side cache best practices in React Apollo - caching

Let’s say we have a component tree in React:
<Tree>
  <TreeTrunkDetails/>
  <TreeLeavesDetails/>
</Tree>
and a useTree hook that wraps a useQuery hook fetching something like:
tree {
  species
  genus
  leaves {
    shape
    size
  }
  trunk {
    bark
    diameter
  }
}
We could call useTree in <Tree> and then pass the leaves and trunk data to
<TreeLeavesDetails details={leaves}/>
and
<TreeTrunkDetails details={trunk}/>
respectively.
We could also call the same useTree hook in <Tree>, <TreeLeavesDetails/>, and <TreeTrunkDetails/>. Given that results are cached on the client (assuming the query is cacheable), performance should be OK.
Or we could create useTree, useTreeTrunk, and useTreeLeaves hooks and wire each component to its specific hook (with a query requesting only the data it needs).
What are the pros and cons of each approach? Is there a recommended pattern from the Apollo team?
Thank you

In the trivial case, it's fine to call useQuery in each component. In the extreme case, each useQuery invocation does add overhead: each instance sets up a watch on the cache, which must be re-evaluated whenever the cache updates.
If you think you're going to have a lot of Tree components on the page, it's best to issue one useQuery per tree and pass the data downwards.
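A minimal sketch of that "fetch once at the top" pattern, using the question's hypothetical useTree hook and schema (TREE_QUERY, Spinner, and ErrorView are made-up names):

import { gql, useQuery } from '@apollo/client';

const TREE_QUERY = gql`
  query GetTree {
    tree {
      species
      genus
      leaves { shape size }
      trunk { bark diameter }
    }
  }
`;

function useTree() {
  return useQuery(TREE_QUERY);
}

function Tree() {
  // A single watched query for the whole subtree; the children receive
  // plain props and add no cache watchers of their own.
  const { data, loading, error } = useTree();
  if (loading) return <Spinner />;
  if (error) return <ErrorView error={error} />;
  return (
    <>
      <TreeTrunkDetails details={data.tree.trunk} />
      <TreeLeavesDetails details={data.tree.leaves} />
    </>
  );
}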

Related

How to set up a pinia store to wait for the state

I have a (composition) store that is used in a couple of modules; they are unaware of each other and can be initialized at the same moment as well as at different times.
The store retrieves data from an API and saves it in the state. All of the modules execute the store's fetchData method and await the promise it returns.
The problem is that when the modules are initialized at the same moment, both perform fetchData, resulting in two requests being fired. The ideal situation would be to:
Allow only one request to be fired (that's pretty easy to do: just save the request state in a ref, e.g. "pending")
Expose the state as a promise, so that a module awaits the state being loaded from the API and receives its data only thereafter.
All this with the requirement that the modules are unaware of the store implementation - the less code on their side the better.
How do you handle such cases?
Is there anything wrong with this approach? If not, how would one implement it?
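A minimal sketch of the single-in-flight-promise idea the question describes, assuming a Vue 3 Pinia composition store (the store name and /api/data endpoint are made up):

import { ref } from 'vue'
import { defineStore } from 'pinia'

export const useDataStore = defineStore('data', () => {
  const data = ref(null)
  let pending = null // the single in-flight request, shared by every caller

  // Every module calls and awaits fetchData; only the first call actually
  // fires the request, later callers just receive the same promise.
  function fetchData() {
    if (data.value !== null) return Promise.resolve(data.value)
    if (!pending) {
      pending = fetch('/api/data')
        .then(r => r.json())
        .then(result => {
          data.value = result
          return result
        })
        .finally(() => { pending = null }) // allows a retry after failure
    }
    return pending
  }

  return { data, fetchData }
})

Each module then just does await useDataStore().fetchData(), without knowing anything about the store's internals.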

relay cache invalidation using pattern queries

I'm using Relay for a React Native app and need to force-invalidate, or forceFetch, a query at certain points in the app's lifecycle. I've noticed that you can create a query using Relay and then forceFetch it, which updates the cache. Ideally, I'd like to do what is done in the Mutations API: provide a fat query that intersects with my existing data, and refetch that intersection. But I can't figure out how to do this. If I attempt this:
var query = Relay.createQuery(
  Relay.QL`
    query Root {
      viewer {
        unreadNotificationCount,
        notifications
      }
    }
  `,
  {}
);
and then call environment.forceFetch({ query }), I get validation errors about notifications not including either first or last arguments. But I'd rather not have to include first or last arguments, because I don't know how many notifications have been loaded in the rest of my app.
Any ideas here?
Unfortunately, there's no trivial way for you to do that intersection, at least not just using forceFetch. The mutation API does much more than try to just fetch a pattern query - it performs the intersection against the local graph to compute the actual mutation query that is sent.
You can definitely ask an individual container to refetch its data using forceFetch, though. So if you have a container that fetches the notifications (I suspect you do), simply trigger forceFetch on it every N seconds.
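A rough sketch of that polling idea, assuming a classic Relay container (this.props.relay.forceFetch() is the classic container API; the component name and 30-second interval are arbitrary choices):

class NotificationsList extends React.Component {
  componentDidMount() {
    // Periodically re-run this container's query against the server,
    // bypassing the local cache.
    this.interval = setInterval(() => {
      this.props.relay.forceFetch();
    }, 30 * 1000);
  }

  componentWillUnmount() {
    clearInterval(this.interval);
  }

  render() {
    // render this.props.viewer.notifications as before
    return null;
  }
}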

Can a React-Redux app really scale as well as, say, Backbone? Even with reselect. On mobile

In Redux, every change to the store triggers a notify on all connected components. This makes things very simple for the developer, but what if you have an application with N connected components, and N is very large?
Every change to the store, even if unrelated to the component, still runs a shouldComponentUpdate with a simple === test on the reselected paths of the store. That's fast, right? Sure, maybe once. But N times, for every change? This fundamental change in design makes me question the true scalability of Redux.
As a further optimization, one can batch all notify calls using _.debounce. Even so, having N === tests for every store change and handling other logic, for example view logic, seems like a means to an end.
I'm working on a health & fitness social mobile-web hybrid application with millions of users and am transitioning from Backbone to Redux. In this application, a user is presented with a swipeable interface that allows them to navigate between different stacks of views, similar to Snapchat, except each stack has infinite depth. In the most popular type of view, an endless scroller efficiently handles the loading, rendering, attaching, and detaching of feed items, like a post. For an engaged user, it is not uncommon to scroll through hundreds or thousands of posts, then enter a user's feed, then another user's feed, etc. Even with heavy optimization, the number of connected components can get very large.
Now on the other hand, Backbone's design allows every view to listen precisely to the models that affect it, reducing N to a constant.
Am I missing something, or is Redux fundamentally flawed for a large app?
This is not a problem inherent to Redux IMHO.
By the way, instead of trying to render 100k components at the same time, you should try to fake it with a lib like react-infinite or something similar, and only render the visible (or nearly visible) items of your list. Even if you succeed in rendering and updating a 100k-item list, it's still not performant and it takes a lot of memory. Here is some advice from LinkedIn.
This answer assumes that you still want to render 100k updatable items in your DOM, and that you don't want 100k listeners (store.subscribe()) to be called on every single change.
2 schools
When developing a UI app in a functional way, you basically have 2 choices:
Always render from the very top
It works well but involves more boilerplate. It's not exactly the suggested Redux way, but it is achievable, with some drawbacks. Notice that even if you manage to have a single Redux connection, you still have to call a lot of shouldComponentUpdate in many places. If you have an infinite stack of views (like a recursion), you have to render all the intermediate views as virtual DOM as well, and shouldComponentUpdate will be called on many of them. So this is not really more efficient, even with a single connect.
If you don't plan to use the React lifecycle methods and only use pure render functions, then you should probably consider other options that focus on exactly that job, like deku (which can be used with Redux).
In my own experience, rendering from the top with React is not performant enough on older mobile devices (like my Nexus 4), particularly if you link text inputs to your atom state.
Connecting data to child components
This is what react-redux suggests with connect. When the state changes and the change only concerns a deeper child, you re-render only that child; you don't have to re-render the top-level components every time, like the context providers (redux/intl/custom...) or the main app layout. You also avoid calling shouldComponentUpdate on other children, because it's already baked into the listener. Calling a lot of very fast listeners is probably faster than re-rendering intermediate React components every time, and it also removes a lot of prop-passing boilerplate, so for me it makes sense when used with React.
Also notice that identity comparison is very fast, so you can easily do a lot of comparisons on every change. Remember Angular's dirty checking: some people managed to build real apps with that, and identity comparison is much faster.
Understanding your problem
I'm not sure I understand your problem perfectly, but I gather that you have views with something like 100k items in them, and you wonder whether you should use connect on all those 100k items, because calling 100k listeners on every single change seems costly.
This problem seems inherent to the nature of doing functional programming with the UI: the list was updated, so you have to re-render the list, but unfortunately it is a very long list and that seems inefficient. With Backbone you could hack something to render only the child; even if you render that child with React, you would trigger the rendering imperatively instead of just declaring "when the list changes, re-render it".
Solving your problem
Obviously, connecting all 100k list items seems convenient but is not performant, because it means calling 100k react-redux listeners, even if they are fast.
Now, if you connect the big list of 100k items instead of each item individually, you only call a single react-redux listener, and then you have to render that list in an efficient way.
Naive solution
Iterating over the 100k items to render them, leading to 99999 items returning false in shouldComponentUpdate and a single one re-rendering:
list.map(item => this.renderItem(item))
Performant solution 1: custom connect + store enhancer
The connect method of React-Redux is just a Higher-Order Component (HOC) that injects the data into the wrapped component. To do so, it registers a store.subscribe(...) listener for every connected component.
If you want to connect 100k items of a single list, it is a critical path of your app that is worth optimizing. Instead of using the default connect, you could build your own.
Store enhancer
Expose an additional method store.subscribeItem(itemId,listener)
Wrap dispatch so that whenever an action related to an item is dispatched, you call the registered listener(s) of that item.
A good source of inspiration for this implementation can be redux-batched-subscribe.
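A hedged sketch of such an enhancer; subscribeItem and the action shape (action.meta.itemId) are assumptions for illustration, not an existing API:

function itemSubscriptionsEnhancer(createStore) {
  return (reducer, preloadedState) => {
    const store = createStore(reducer, preloadedState);
    const itemListeners = new Map(); // itemId -> Set of listeners

    function subscribeItem(itemId, listener) {
      if (!itemListeners.has(itemId)) itemListeners.set(itemId, new Set());
      itemListeners.get(itemId).add(listener);
      return () => itemListeners.get(itemId).delete(listener); // unsubscribe
    }

    function dispatch(action) {
      const result = store.dispatch(action);
      // Notify only the listeners registered for the item this action targets.
      const itemId = action.meta && action.meta.itemId;
      if (itemId !== undefined && itemListeners.has(itemId)) {
        itemListeners.get(itemId).forEach(listener => listener());
      }
      return result;
    }

    return { ...store, dispatch, subscribeItem };
  };
}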
Custom connect
Create a Higher-Order component with an API like:
Item = connectItem(Item)
The HOC can expect an itemId prop. It can read the enhanced Redux store from the React context and then register its listener with store.subscribeItem(itemId, callback). The source code of the original connect can serve as a base for inspiration.
The HOC will only trigger a re-rendering if the item changes
Related answer: https://stackoverflow.com/a/34991164/82609
Related react-redux issue: https://github.com/rackt/react-redux/issues/269
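A rough sketch of such a HOC; for brevity it assumes the enhanced store arrives as a prop (as in the next example) and a hypothetical state shape where state.items is a map keyed by itemId:

function connectItem(WrappedComponent) {
  return class ConnectedItem extends React.Component {
    state = { item: this.selectItem() };

    selectItem() {
      // Hypothetical state shape: a map of items keyed by itemId.
      return this.props.store.getState().items[this.props.itemId];
    }

    componentDidMount() {
      // This listener fires only when *this* item changes,
      // not on every store update.
      this.unsubscribe = this.props.store.subscribeItem(
        this.props.itemId,
        () => this.setState({ item: this.selectItem() })
      );
    }

    componentWillUnmount() {
      this.unsubscribe();
    }

    render() {
      return <WrappedComponent {...this.props} item={this.state.item} />;
    }
  };
}

It is used exactly like the API above: Item = connectItem(Item).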
Performant solution 2: listening for events inside child components
It may also be possible to listen for Redux actions directly in components, using redux-dispatch-subscribe or something similar, so that after the first list render, you listen for updates directly in each item component and override the original data provided by the parent list.
class MyItemComponent extends Component {
  state = {
    itemUpdated: undefined, // will store the locally updated item
  };

  componentDidMount() {
    // Subscribe directly to dispatched actions (redux-dispatch-subscribe API)
    this.unsubscribe = this.props.store.addDispatchListener(action => {
      const isItemUpdate =
        action.type === "MY_ITEM_UPDATED" &&
        action.payload.item.id === this.props.itemId;
      if (isItemUpdate) {
        this.setState({ itemUpdated: action.payload.item });
      }
    });
  }

  componentWillUnmount() {
    this.unsubscribe();
  }

  render() {
    // Initially use the data provided by the parent; once it has been
    // updated by some event, use the updated data instead.
    const item = this.state.itemUpdated || this.props.item;
    return (
      <div>
        {/* render the item */}
      </div>
    );
  }
}
In this case redux-dispatch-subscribe may not be very performant, as you would still create 100k subscriptions. You'd rather build your own optimized middleware, similar to redux-dispatch-subscribe, with an API like store.listenForItemChanges(itemId), storing the item listeners in a map for fast lookup of the correct listeners to run.
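A sketch of that map-based middleware idea; listenForItemChanges and the MY_ITEM_UPDATED action shape reuse the assumptions of the example above:

const itemListeners = new Map(); // itemId -> Set of listeners

export function listenForItemChanges(itemId, listener) {
  if (!itemListeners.has(itemId)) itemListeners.set(itemId, new Set());
  itemListeners.get(itemId).add(listener);
  return () => itemListeners.get(itemId).delete(listener); // unsubscribe
}

export const itemChangesMiddleware = () => next => action => {
  const result = next(action);
  if (action.type === "MY_ITEM_UPDATED") {
    // O(1) lookup: only the listeners of the affected item run.
    const listeners = itemListeners.get(action.payload.item.id);
    if (listeners) listeners.forEach(listener => listener(action.payload.item));
  }
  return result;
};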
Performant solution 3: vector tries
A more performant approach would consider using a persistent data structure like a vector trie:
If you represent your 100k-item list as a trie, each intermediate node gets the chance to short-circuit the rendering sooner, which avoids a lot of shouldComponentUpdate calls in children.
This technique can be used with ImmutableJS; you can find some experiments I did with it here: React performance: rendering big list with PureRenderMixin
It has drawbacks, however: libs like ImmutableJS do not yet expose public/stable APIs for this (issue), and my solution pollutes the DOM with some useless intermediate <span> nodes (issue).
Here is a JsFiddle that demonstrates how an ImmutableJS list of 100k items can be rendered efficiently. The initial rendering is quite long (but presumably you don't initialize your app with 100k items!), but afterwards you can see that each update only leads to a small number of shouldComponentUpdate calls. In my example I only update the first item every second, and even though the list has 100k items, each update requires only something like 110 calls to shouldComponentUpdate, which is much more acceptable! :)
Edit: it seems ImmutableJS is not so good at preserving its immutable structure on some operations, like inserting/deleting items at a random index. Here is a JsFiddle that demonstrates the performance you can expect depending on the operation on the list. Surprisingly, if you want to append many items at the end of a large list, calling list.push(value) many times seems to preserve the tree structure much better than calling list.concat(values).
By the way, it is documented that the List is efficient when modifying the edges. I don't think the bad performance of adding/removing at a given index is related to my technique, but rather to the underlying ImmutableJS List implementation.
Lists implement Deque, with efficient addition and removal from both the end (push, pop) and beginning (unshift, shift).
This may be a more general answer than you're looking for, but broadly speaking:
The recommendation from the Redux docs is to connect React components fairly high in the component hierarchy. See this section. This keeps the number of connections manageable, and you can then just pass updated props down into the child components.
Part of the power and scalability of React comes from avoiding the rendering of invisible components. For example, instead of setting an invisible class on a DOM element, in React we just don't render the component at all. Re-rendering components that haven't changed isn't much of a problem either, since the virtual DOM diffing process optimizes the low-level DOM interactions.

Alt data dependency between actions not stores

I have a React app where I'm using alt for the Flux-architecture side of things.
I have a situation where I have two stores which are fed by ajax calls in their corresponding actions.
Having read the alt getting-started page on data dependencies, I see it covers dependencies between stores using waitFor (http://alt.js.org/guide/wait-for/), but I don't see a way to use this kind of approach when one of my store's actions depends on another store's action (both of which are async).
If I were doing this inside a single action handler, I might return or chain some promises, but I'm not sure how to implement this across action handlers. Has anyone achieved this, or am I going about my usage of Ajax in React the wrong way?
EDIT: More detail.
In my example I have a list of nodes defined in a local JSON config file; my node store makes an Ajax request to get the node details.
Once that's complete, a different component (with a different action handler and store) wants to use the node collection to make Ajax queries to the different endpoints a node may expose.
The nodes are re-used across many different components so I don't want to roll their functionality into several different stores/action handlers if possible.
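One possible sketch of chaining across action handlers with plain promise-returning action creators; all names here (nodeActions, endpointActions, api.*) are invented for illustration. The node action caches its in-flight promise so any dependent action can chain on it, without the stores knowing about each other:

// Module-level cache of the in-flight request so every caller shares it.
let nodesPromise = null;

const nodeActions = {
  fetchNodes() {
    if (!nodesPromise) {
      nodesPromise = api.getNodeDetails().then(nodes => {
        this.nodesReceived(nodes); // dispatches to the node store as before
        return nodes;
      });
    }
    return nodesPromise;
  },
  nodesReceived(nodes) { /* plain action the node store listens to */ },
};

const endpointActions = {
  // The dependent action chains on the node fetch instead of using waitFor.
  fetchEndpoints(nodeId) {
    return nodeActions.fetchNodes()
      .then(nodes => api.queryEndpoints(nodes.find(n => n.id === nodeId)))
      .then(endpoints => this.endpointsReceived(endpoints));
  },
  endpointsReceived(endpoints) { /* the endpoint store listens to this */ },
};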

Wicket and complex Ajax scenarios

When a screen has multiple interacting Ajax controls and you want to control the visibility of components in reaction to them (so that you only display what makes sense in any given situation), calling target.addComponent() manually on everything you want to update becomes cumbersome and isn't very maintainable.
Eventually the web of onClick and onUpdate callbacks reaches a point where adding a new component to the screen is much harder than it's supposed to be.
What are the commonly used strategies (or even libraries if such a thing exists) to avoid this build-up of complexity?
Update: Thank you for your answers, I found all of them very useful, but I can only accept one. Sorry.
In Wicket 1.5 there is an event bus. Each component has an onEvent(IEvent<?> event) method. With component.send() you can broadcast events, and each component can inspect the payload (e.g. a UserJoinedEvent object) and decide whether it wants to participate in the current Ajax response. See http://www.wicket-library.com/wicket-examples/events/ for a simple demo.
You could add structural components such as WebMarkupContainers; when you add one of these to the Ajax request target, everything contained in it is updated as well. This lets you update groups of components in a single line.
When I'm creating components for a page I tend to add them to component arrays:
Component[] pageComponents = {
  new TextField<String>("Field1"),
  new TextField<String>("Field2"),
  new TextField<String>("Field3")
};
As of Wicket 1.5 the add functions take array parameters [1]. Therefore elements can be added to the page or target like this:
add(pageComponents);
target.add(pageComponents);
Components can then be grouped based on which you want to refresh together.
[1] http://www.jarvana.com/jarvana/view/org/apache/wicket/wicket/1.5-M3/wicket-1.5-M3-javadoc.jar!/org/apache/wicket/ajax/AjaxRequestTarget.html
Well, how many components are we talking about here? Ten? Twenty? Hundreds?
For up to twenty or so, you can have a state controller that decides which components should be shown. The controller sets the visible field on each component's model, and you always add all controller-managed components to your requests. The components' Ajax events are simply redirected to the controller's handler method.
For really large numbers of components, where the payload becomes too heavy for good performance, you could use JavaScript libraries like jQuery to do the showing and hiding on the client.
I currently use a sort of modified observer pattern to simulate an event bus in Wicket 1.4.
My pages act as observable observers, since my components don't know each other and are reused in different combinations across multiple pages. Whenever a component receives an Ajax event that could affect other components, it calls a method on its page with an event object and the Ajax target. The page calls a similar method on all components that have registered themselves for this kind of event, and each component can decide, based on the supplied event object, whether and how it has to react, and can attach itself to the target.
The same can be achieved by using the Wicket visitor. I don't know which one is better; I think that's mainly a matter of taste.
