What is the best way to 'reorder' a connection in RelayJS?
In my user interface, I allow my user to 'swap' two items, but creating a mutation around that is a bit tricky.
What I'm doing right now is the naive way, namely using FIELDS_CHANGE to change my node itself.
It works, but the problem is I can't seem to write an optimistic update for it. I am able to just pass a list of ids to my graphql server, but that doesn't work for the optimistic update because it expects the actual data.
So I guess I have to mock out my 'connection' interface, but unfortunately it still doesn't work. I 'copied' my reordered nodes into getOptimisticResponse, but it seems to be ignored, even though the data matches the actual server response. (ids simplified)
original:
{
  item: {
    edges: [
      { cursor: 1, node: { id: 2 } },
      { cursor: 2, node: { id: 1 } }
    ]
  }
}
optimistic response (doesn't do anything):
{
  item: {
    edges: [
      { node: { id: 1 } },
      { node: { id: 2 } }
    ]
  }
}
server response:
{
  item: {
    edges: [
      { cursor: 1, node: { id: 1 } },
      { cursor: 2, node: { id: 2 } }
    ]
  }
}
What gives? It's equivalent (except for the cursor), and even if I add the cursor in, it still doesn't work.
What am I doing wrong? Also, is there an easier way to mock my ids into a connection?
Also, as an aside, is there a way to get this data piecemeal? Right now, reordering two items re-requests the whole list because of my mutation config. I suppose I could do it with RANGE_ADD and RANGE_DELETE to 'simulate a swap', but is there an easier way to do it?
Since you trigger a mutation in response to the user reordering the items, I assume you store the position or order of the items on the server side. For what you're doing, one way to create the optimistic response is to use that position or order information: on the server side, each item provides an additional position field, and on the client side, the displayed items are sorted by position.
When the user swaps two items, in the optimistic response of your client-side mutation you just need to swap the position fields of those two items. The same applies to the server-side mutation.
The optimistic response code can be like:
getOptimisticResponse() {
  return {
    item1: {
      id: this.props.item1.id,
      position: this.props.item2.position,
    },
    item2: {
      id: this.props.item2.id,
      position: this.props.item1.position,
    },
  };
}
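For completeness, here is a minimal sketch of how the surrounding mutation might look in classic Relay; the swapItems mutation name and the SwapItemsPayload fields are assumptions about your schema, not something prescribed by Relay:
class SwapItemsMutation extends Relay.Mutation {
  // Assumed server-side mutation name.
  getMutation() {
    return Relay.QL`mutation { swapItems }`;
  }
  getVariables() {
    return {
      item1Id: this.props.item1.id,
      item2Id: this.props.item2.id,
    };
  }
  // The fat query declares that both items' position fields may change.
  getFatQuery() {
    return Relay.QL`
      fragment on SwapItemsPayload {
        item1 { position }
        item2 { position }
      }
    `;
  }
  getConfigs() {
    return [{
      type: 'FIELDS_CHANGE',
      fieldIDs: {
        item1: this.props.item1.id,
        item2: this.props.item2.id,
      },
    }];
  }
  getOptimisticResponse() {
    // Swap the two positions locally so the list reorders immediately.
    return {
      item1: { id: this.props.item1.id, position: this.props.item2.position },
      item2: { id: this.props.item2.id, position: this.props.item1.position },
    };
  }
}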
I can’t seem to find a way to read an entire type without having to resort to individual fieldPolicies for every field in that type.
const cache = new InMemoryCache({
  typePolicies: {
    SomeType: {
      fields: {
        // defining individual field (read) policies would be insane (at least for my case)
        // is there at least something like a wildcard mechanism?
      },
      merge, // yeah... possible at type level
      read   // ??? not possible (WHYYYY), so... is there any other way to do this?
    }
  }
})
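The closest workaround I can think of is generating the field entries programmatically, which is what I'd rather avoid; a sketch, where fieldNames stands in for a hand-maintained list of every field on the type and sharedRead for the common read logic:
import { InMemoryCache } from '@apollo/client';

const fieldNames = ['fieldA', 'fieldB', 'fieldC']; // every field of SomeType
const sharedRead = (existing) => existing ?? null; // common read logic

const cache = new InMemoryCache({
  typePolicies: {
    SomeType: {
      // Build one identical read policy per field instead of writing them by hand.
      fields: Object.fromEntries(
        fieldNames.map(name => [name, { read: sharedRead }])
      ),
    },
  },
});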
Context
This problem is likely predicated on certain choices, some of which are changeable and some of which are not. We are using the following technologies and frameworks:
Relay / React / TypeScript
ContentStack (CMS)
Problem
I'm attempting to create a highly customizable page that can be built from multiple kinds of UI components based on the data presented to it (so that pages can be assembled in a CMS from prefab UI in an unpredictable order).
My first attempt at this was to create a set of fragments for the potential UI components that may be referenced in an array:
query CustomPageQuery {
  title
  description
  customContentConnection {
    edges {
      node {
        ...HeroFragment
        ...TweetBlockFragment
        ...EmbeddedVideoFragment
        # Further fragments are added here as we add more kinds of UI
      }
    }
  }
}
In the CMS we're using (ContentStack), the complexity of this query has grown to the point that it is rejected because it requires too many database calls in a single query. For that reason, I'm hoping there's a way to split the fragment calls out of the initial query, or some similar solution that breaks this query into multiple pieces.
I was hoping the @defer directive would solve this for me, but it's not supported by relay-compiler.
Any ideas?
Sadly, @defer is still not part of the standard, so it is not supported by most implementations (you would also need the server to support it).
I am not sure I understand the problem correctly, but you might want to look toward using @skip or @include to fetch only the fragments you need, depending on the type of the thing. It would require the frontend to know what it wants to query beforehand, though:
query CustomPageQuery($hero: Boolean!, $tweet: Boolean!, $video: Boolean!) {
  title
  description
  customContentConnection {
    edges {
      node {
        ...HeroFragment @include(if: $hero)
        ...TweetBlockFragment @include(if: $tweet)
        ...EmbeddedVideoFragment @include(if: $video)
      }
    }
  }
}
Generally you want to be able to discriminate the type without having to do a database query. So say:
type Hero {
  id: ID
  name: String
}

type Tweet {
  id: ID
  content: String
}

union Content = Hero | Tweet
{
  Content: {
    __resolveType: (parent, ctx) => {
      // That should be able to resolve the type without a DB query
    },
  },
}
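For example, if each node already carries a type discriminator (here an assumed contentTypeUid field fetched along with the entry), __resolveType can branch on it directly:
{
  Content: {
    __resolveType: (parent) => {
      // The discriminator is already on the fetched object,
      // so no extra database round trip is needed.
      switch (parent.contentTypeUid) {
        case 'hero': return 'Hero';
        case 'tweet': return 'Tweet';
        default: return null;
      }
    },
  },
}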
Once that is passed, each fragment is then resolved, making more database queries. If those are not properly batched with dataloaders, then you have an N+1 problem. I am not sure how much control (if any) you have over the backend, but there is no silver bullet for your problem.
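If you do have some control over the backend, the usual fix is the dataloader pattern; a minimal sketch, assuming a fetchTweetsByIds(ids) function that resolves all rows in one query:
import DataLoader from 'dataloader';

// The batch function receives every id requested in the same tick and
// must return results in the same order as the ids.
const tweetLoader = new DataLoader(async (ids) => {
  const rows = await fetchTweetsByIds(ids); // assumed: one DB query for all ids
  const byId = new Map(rows.map(row => [row.id, row]));
  return ids.map(id => byId.get(id) ?? null);
});

// Resolvers then call tweetLoader.load(id); N separate lookups collapse into one query.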
If you can't make optimizations on the backend, then I would suggest trying to limit the connection. They seem to be using cursor-based pagination, so you start with, say, first: 10, and once the first batch is returned, you can query the next elements by setting after to the last cursor of the previous batch:
query CustomPageQuery($after: String) {
  customContentConnection(first: 10, after: $after) {
    edges {
      cursor
      node {
        ...HeroFragment
        ...TweetBlockFragment
        ...EmbeddedVideoFragment
      }
    }
    pageInfo {
      hasNextPage
    }
  }
}
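On the client, each follow-up request just feeds the last cursor back in as after. A rough sketch, assuming a generic fetchQuery(query, variables) helper rather than any specific Relay API:
async function fetchAllContent(fetchQuery, query) {
  let after = null;
  let edges = [];
  let hasNextPage = true;
  while (hasNextPage) {
    const data = await fetchQuery(query, { after });
    const connection = data.customContentConnection;
    edges = edges.concat(connection.edges);
    // Feed the last cursor of this batch into the next request.
    const lastEdge = connection.edges[connection.edges.length - 1];
    after = lastEdge ? lastEdge.cursor : null;
    hasNextPage = connection.pageInfo.hasNextPage && after !== null;
  }
  return edges;
}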
As a last resort, you could try to first fetch all the IDs and then do subsequent queries to the CMS for each id (using aliases, I guess) or type (if you can filter on the connection field). But I feel dirty just writing it, so avoid it if you can.
{
  one: node(id: "UUID1") {
    ...HeroFragment
    ...TweetBlockFragment
    ...EmbeddedVideoFragment
  }
  two: node(id: "UUID2") {
    ...HeroFragment
    ...TweetBlockFragment
    ...EmbeddedVideoFragment
  }
}
I have a filtered list of items based on a getAllItems query, which takes a filter and an order-by option as arguments.
After creating a new item, I want to delete the cache for this query, no matter what variables were passed. I don't know how to do this.
I don't think updating the cache is an option. The methods mentioned in the Apollo Client documentation (Updating the cache after a mutation, refetchQueries and update) all seem to need a given set of variables, but since the filter is a complex object (with some free-text information), I would need to update the cache for every set of variables that was previously submitted, and I don't know how to do that. Plus, only the server knows how the new item impacts pagination and ordering.
I don't think the fetchPolicy (for instance setting it to cache-and-network) is what I'm looking for, because while hitting the network is what I want right after creating a new item, when I'm just filtering the list (typing in a search string) I want to keep the default behavior (cache-first).
client.resetStore would reset the store for all types of queries (not only the getAllItems query), so I don't think it's what I'm looking for either.
I'm pretty sure I'm missing something here.
There's no officially supported way of doing this in the current version of Apollo, but there is a workaround.
In your update function, after creating an item, you can iterate through the cache and delete every node whose key starts with the typename you are trying to remove from the cache, e.g.:
// Loop through all the data in our cache and delete any items whose key
// starts with "Item". This empties the cache of all of our items and
// forces a refetch of the data only when it is next requested.
Object.keys(cache.data.data).forEach(key =>
  key.match(/^Item/) && cache.data.delete(key)
)
This works for queries that exist a number of times in the cache with different variables, i.e. paginated queries.
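For reference, a sketch of where this loop might live; the CREATE_ITEM document and input variable are placeholders for your own mutation:
client.mutate({
  mutation: CREATE_ITEM, // assumed mutation document
  variables: { input },
  update: (cache) => {
    // Evict every cached Item so the next query refetches from the network.
    Object.keys(cache.data.data).forEach(key =>
      key.match(/^Item/) && cache.data.delete(key)
    );
  },
});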
I wrote an article on Medium that goes into much more detail on how this works, including an implementation example and an alternative solution that is more complicated but works better in a small number of use cases. Since the article expands on a concept I have already explained in this answer, I believe it is ok to share it here: https://medium.com/@martinseanhunt/how-to-invalidate-cached-data-in-apollo-and-handle-updating-paginated-queries-379e4b9e4698
This worked for me (requires Apollo 2 for the cache eviction feature); it clears queries matched by a regexp from the cache.
After the cache is cleared, the query will automatically be refetched without the need to trigger a refetch manually (if you are using Angular: gql.watch().valueChanges will perform an XHR request and emit a new value).
export const deleteQueryFromCache = (cache: any, matcher: string | RegExp): void => {
  const rootQuery = cache.data.data.ROOT_QUERY;
  Object.keys(rootQuery).forEach(key => {
    if (key.match(matcher)) {
      cache.evict({ id: "ROOT_QUERY", fieldName: key });
    }
  });
}
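Usage in a mutation's update callback might look like this; the getAllItems field name comes from the question above, and CREATE_ITEM is a placeholder:
client.mutate({
  mutation: CREATE_ITEM, // assumed mutation document
  update: cache => deleteQueryFromCache(cache, /^getAllItems/),
});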
An ngrx-like approach, using a local resolver to evict the item directly:
resolvers = {
  Mutation: {
    removeTask(
      parent,
      { id },
      { cache, getCacheKey }: { cache: InMemoryCache | any; getCacheKey: any }
    ) {
      const key = getCacheKey({ __typename: "Task", id });
      const { [key]: deleted, ...data } = cache.data.data;
      cache.data.data = { ...data };
      return id;
    },
  },
}
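A local resolver like this would then be invoked with an @client mutation; a sketch, assuming the resolvers map above is registered on the client:
import gql from "graphql-tag";

const REMOVE_TASK = gql`
  mutation RemoveTask($id: ID!) {
    removeTask(id: $id) @client
  }
`;

// Runs the local resolver above instead of hitting the server.
client.mutate({ mutation: REMOVE_TASK, variables: { id: taskId } });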
I'm dealing with a big JSON document with a lot of editable values (big meaning > 1000), entirely rendered on the same page, so my state is simply { data: bigBigJson }.
The initial rendering is quite long, but that's ok.
The problem is that when an input triggers an onChange (and a redux action), the value is updated in the state and the whole rendering happens again.
I wonder how people deal with that? Are there simple solutions (even if not necessarily best practices)?
Notes:
The JSON document is provided by an external API; I can't change it.
I could separate the state into several sub-states (it's a multi-level JSON document), but I'm hoping for a simpler/faster solution (I know that would probably be a best practice, though).
I'm using react and redux; not immutable.js, but everything is immutable (obviously).
––
Update (about DSS answer)
• (Case 1) Let's say the state is:
{
  data: {
    key1: value1,
    // ...
    key1000: value1000
  }
}
If keyN is updated, the whole state would be re-rendered anyway, right? The reducer would return something like:
{
  data: {
    ...state.data,
    keyN: newValueN
  }
}
That's one thing, but it's not really my case.
• (Case 2) The state is more like (oversimplified):
{
  data: {
    dataSet1: {
      key1: value1,
      // ...
      key10: value10
    },
    // ...
    dataSet100: {
      key1: value1,
      // ...
      key10: value10
    }
  }
}
If dataSetN.keyN is updated, I would return in the reducer:
{
  data: {
    ...state.data,
    dataSetN: {
      ...state.data.dataSetN,
      keyN: newValueN
    }
  }
}
I guess I'm doing something wrong, as it doesn't look very nice.
Would it change anything to structure it like this:
// state
{
  dataSet1: {
    key1: value1,
    // ...
    key10: value10
  },
  // ...
  dataSet100: {
    key1: value1,
    // ...
    key10: value10
  }
}

// reducer
{
  ...state,
  dataSetN: {
    ...state.dataSetN,
    keyN: newValueN
  }
}
Finally, just to be more specific about my case, here is roughly what my reducer looks like (still a bit simplified):
import get from 'lodash/fp/get'
import set from 'lodash/fp/set'

// ...
// reducer:
// path = 'values[3].values[4].values[0]'
return {
  data: set(path, {
    ...get(path, state.data),
    value: newValue
  }, state.data)
}
• In case you are wondering, I can't just use:
data: set(path + '.value', newValue, state.data)
as other properties need to be updated as well.
The reason everything gets re-rendered is that everything in your store changes. It may look the same, and all properties may have the same values, but all of the object references have changed. That is to say, even if two objects have the same properties, they still have separate identities.
Since React-Redux uses object identity to figure out whether an object has changed, you should always use the same object reference whenever an object has not changed. Since Redux state must be immutable, reusing the old object in the new state is guaranteed not to cause problems. Immutable objects can be reused the same way an integer or a string can be reused.
To solve your dilemma, you can, in your reducer, walk the incoming JSON and the sub-objects of the store state and compare them. If they are the same, reuse the store's object. By reusing the same object, React-Redux will make sure the components that represent those objects are not re-rendered. This means that if only one of those 1000 objects changes, only one component will update.
Also make sure to use the React key property correctly. Each of those 1000 items needs its own ID that stays the same from JSON to JSON.
Finally, consider making your state itself more amenable to such updates. You could transform the JSON when loading and updating the state. You could, for instance, store the items keyed by ID, which would make the update process a lot faster.
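A minimal sketch of that keyed-by-ID shape, assuming each item carries a stable id field (the items variable and UPDATE_ITEM action are made up for illustration):
// Load: index items by id instead of keeping the raw nested JSON.
const initialState = {
  byId: Object.fromEntries(items.map(item => [item.id, item])),
};

// Reducer: only the updated item gets a new object reference, so a
// memoized/connected component re-renders for that one item alone.
function reducer(state = initialState, action) {
  switch (action.type) {
    case 'UPDATE_ITEM':
      return {
        ...state,
        byId: {
          ...state.byId,
          [action.id]: { ...state.byId[action.id], value: action.value },
        },
      };
    default:
      return state;
  }
}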
The RANGE_ADD mutation config requires an edgeName so that it can insert the new edge into the client-side connection. As part of its query, it also includes the cursor.
The issue is that the server has no way of knowing which arguments the client might be applying to a connection when it's generating the edge response.
Does this mean that the cursor should be stable?
In general, cursors are not required to be the same when connections are used with different arguments. For example, if I did:
{
  namedFriends: friends(orderby: NAME, first: 5) {
    edges { cursor, node { id } }
  }
  favoriteFriends: friends(orderby: FAVORITE, first: 5) {
    edges { cursor, node { id } }
  }
}
Different backends might be used to serve those two connections, since we might have different backends for the two orderings; because of that, the cursors might be different for the same friend, as they might need to encode different information for the different backends.
This makes it tricky when performing a mutation, though:
mutation M {
  addFriend(input: $input) {
    newFriendsEdge {
      cursor
      node { id } # Which cursor is this?
    }
  }
}
In cases like this, where the mutation is going to return an edge from a connection, it's useful for the field to accept the same non-pagination arguments that the connection does. So in the above case, we would do:
mutation M {
  addFriend(input: $input) {
    newNamedFriendsEdge: newFriendsEdge(orderby: NAME) {
      cursor
      node { id } # Cursor for namedFriends
    }
    newFavoriteFriendsEdge: newFriendsEdge(orderby: FAVORITE) {
      cursor
      node { id } # Cursor for favoriteFriends
    }
  }
}
And ideally, the implementations of newFriendsEdge(orderby: FAVORITE) and favoriteFriends: friends(orderby: FAVORITE, first: 5) share common code to generate cursors.
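A sketch of what that shared code could look like; encoding the ordering together with the node id is just one assumed scheme, not a requirement:
// Shared by the connection resolver and the mutation's edge field, so that
// cursors agree for a given (orderby, node) pair.
function cursorFor(orderby, node) {
  return Buffer.from(`${orderby}:${node.id}`).toString('base64');
}

function edgeFor(orderby, node) {
  return { cursor: cursorFor(orderby, node), node };
}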
Note that while the cursors are not required to be the same, it's fine if they are, as an implementation detail of the server. Often the cursor is just the ID of the node, which is a common way for this to happen. In practice, in these situations, if an argument on the connection doesn't affect the cursor, we would omit it from the mutation's edge field; so if orderby didn't affect the cursor, then:
mutation M {
  addFriend(input: $input) {
    newFriendsEdge {
      cursor
      node { id } # orderby doesn't exist on newFriendsEdge, so this cursor must apply to both.
    }
  }
}
This is the common pattern in our mutations. Let me know if you run into any issues; we thought through the "arguments change cursors" case when developing the pattern of returning edges from mutations, to make sure there was a possible solution (which is when we came up with the idea of arguments on edge fields), but it hasn't come up much in practice, so if you run into trickiness, definitely let me know, and we can and should revisit these assumptions/requirements!