RTK Query: Linking cache for batch queries

I have an RTK Query endpoint called getUsers which fetches user data in a batch for multiple user IDs, provided as an array.
Now I am concerned about caching behaviour. Is there a way I can make RTK Query fetch user data for the missing users only?
For example,
const { data } = useGetUsersQuery([1, 2, 3])
should fetch users 1, 2 and 3.
Afterwards,
const { data } = useGetUsersQuery([2, 3, 4, 5])
should fetch only users 4 and 5, merge the result with the existing data for users 2 and 3, and return the combined data.

In short, no. RTK Query is not a normalized cache - it is a document cache - which means that every API response is cached completely independently of every other API response.
That is mentioned e.g. here in the docs: https://redux-toolkit.js.org/rtk-query/comparison#no-normalized-or-deduplicated-cache
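
To illustrate the document-cache behavior, here is a minimal sketch (the endpoint definition and URL shape are hypothetical): each distinct argument array is serialized into its own cache key, so the two hook calls from the question produce unrelated entries.

import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

const api = createApi({
  reducerPath: 'api',
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  endpoints: (build) => ({
    getUsers: build.query({
      // The whole ids array is serialized into the cache key for this query.
      query: (ids) => `users?ids=${ids.join(',')}`,
    }),
  }),
});

export const { useGetUsersQuery } = api;

// useGetUsersQuery([1, 2, 3]) and useGetUsersQuery([2, 3, 4, 5]) create two
// independent cache entries; users 2 and 3 are fetched and stored twice.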

Related

Web.Content calling API service and merging pages with List.Transform started to fail

I created a Power BI report which connects to a data source via an API service. The returned JSON contains thousands of entities. The API service is called via the Web.Contents function. The API service always returns the total record count, so we are able to calculate the number of pages that have to be called to obtain the whole dataset. This report displays data from our service desk app, which is deployed on many servers for many customers, and uses query parameters to connect to any of these servers.
The detail of the Power Query is below.
Why am I writing here: this report was working without any issue for more than 1.5 years, but on August 17th one of the servers started causing errors in the Pages step, where some random lines (pages) contain errors - see the attached picture labeled "Errors in step Pages". This is the reason that the next step in the query, Entities (List.Union), stops the refresh and generates errors with the message:
Expression.Error: We cannot apply field access to the type List. Details: Value=[List] Key=requests
What is notable:
The API service returns records in the same order, but the faulty lists are random when calling with the same parameters.
Sometimes the refresh completes without any error.
The same Power Query called against another server works correctly; the problem occurs only with one specific server.
The problem started without notice on the most important server after 1.5 years without any issue.
Here is the full text of the Power Query for this main source, which is used later in other queries to extract all necessary data. The JSON is really complicated and I extract from it a list of requests, a list of solvers, a list of solver groups, and so on; this base query and its output is the input for many referenced queries.
(picture: Errors in step Pages)
let
    BaseAPIUrl = apiurl & "apiservice?", /* apiurl is a parameter - the name of the server, e.g. https://xxxx.xxxxxx.sk/ */
    EntitiesPerPage = RecordsPerPage, /* RecordsPerPage is a parameter defining the number of records per page - 200-400 records per page proved optimal, but it also works with 4000 records per page */
    ApiToken = FnApiToken(), /* returns the apitoken value from another API service, apiurl & "api/auth/login", which takes a username and password in the body of the call */
    GetJson = (QParm) => /* general function to get data from the data source */
        let
            Options =
                [
                    Query = QParm,
                    Headers =
                        [
                            Accept = "application/json",
                            ApiKeyName = "apitoken",
                            Authorization = ApiToken
                        ]
                ],
            RawData = Web.Contents(BaseAPIUrl, Options),
            Json = Json.Document(RawData)
        in
            Json,
    GetEntityCount = () => /* called once to get the number of records via GetJson; the total is returned as part of each call */
        let
            QParm = [pp = "1", pg = "1"],
            Json = GetJson(QParm),
            Count = Json[totalRecord]
        in
            Count,
    GetPage = (Index) => /* called repeatedly to get each page of JSON via GetJson */
        let
            PageNr = Text.From(Index + 1),
            PerPage = Text.From(EntitiesPerPage),
            QParm = [pg = PageNr, pp = PerPage],
            Json = GetJson(QParm),
            Value = Json[data][requests]
        in
            Value,
    EntityCount = List.Max({ EntitiesPerPage, GetEntityCount() }), /* total number of records */
    PageCount = Number.RoundUp(EntityCount / EntitiesPerPage), /* number of pages */
    PageIndices = { 0 .. PageCount - 1 },
    Pages = List.Transform(PageIndices, each GetPage(_) /* Function.InvokeAfter(() => GetPage(_), #duration(0,0,0,1)) */), /* GetPage is called for each page to get the whole dataset - the commented-out variant adds a one-second delay between pages, but it was not necessary */
    Entities = List.Union(Pages),
    Table = Table.FromList(Entities, Splitter.SplitByNothing(), null, null, ExtraValues.Error)
in
    Table
I also tried another way of appending the pages to a list, using List.Generate. This also brings random errors into the list, but it makes it possible to transform the list into a table, in contrast with the original List.Transform way; however, the other referenced queries then fail and contain errors on the last row.
When I explore the content of a faulty page/list by extracting it via Add as New Query, all records are always there without any failure...
Source = List.Generate( /* another way to generate the list of all pages */
    () => [Page = 0, ReqPageData = GetPage(0)],
    each [Page] < PageCount,
    each [
        ReqPageData = GetPage([Page] + 1), /* [Page] refers to the previous state, so fetch the following page */
        Page = [Page] + 1
    ],
    each [ReqPageData]
),
#"Converted to Table" = Table.FromList(Source, Splitter.SplitByNothing(), null, null, ExtraValues.Error), /* here I am able to generate a table from the list */
#"Expanded Column1" = Table.ExpandListColumn(#"Converted to Table", "Column1"), /* here I can expand the list into a column */
#"Removed Errors" = Table.RemoveRowsWithErrors(#"Expanded Column1", {"Column1"}) /* here I try to exclude errors, but I don't know what happened and which records (if any) were excluded */
(picture: Extracting errored page)
And finally, I am totally clueless, unable to find the cause of this behavior on this specific server. I tested calling the pages that errored via Postman, and I discussed this issue with the author of the API service. He also tried to call this API service with all parameters, but the server returns every page OK; only Power Query is not able to List.Transform...
I will be grateful for, and appreciate, any tips or advice, or hearing from somebody who solved the same issue in the past...
Kuby
No, each errored line of the list in the List.Transform step can be extracted as a new query, and all the records from that one page are OK. Hmmm.
Finally: the problem described in this issue was caused by "corrupted" content of the returned JSON. The provider of the core system informed me that they found a bug, and after fixing it on the service desk side everything is OK again. I tried to find the problem in Power Query, but the problem was in the service desk. :(

How do Apollo's paginated "read" and "merge" functions work?

I was reading through the docs to learn pagination approaches for Apollo. This is the simple example where they explain the paginated read function:
https://www.apollographql.com/docs/react/pagination/core-api#paginated-read-functions
Here is the relevant code snippet:
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          read(existing, { args: { offset, limit }}) {
            // A read function should always return undefined if existing is
            // undefined. Returning undefined signals that the field is
            // missing from the cache, which instructs Apollo Client to
            // fetch its value from your GraphQL server.
            return existing && existing.slice(offset, offset + limit);
          },
          // The keyArgs list and merge function are the same as above.
          keyArgs: [],
          merge(existing, incoming, { args: { offset = 0 }}) {
            const merged = existing ? existing.slice(0) : [];
            for (let i = 0; i < incoming.length; ++i) {
              merged[offset + i] = incoming[i];
            }
            return merged;
          },
        },
      },
    },
  },
});
I have one major question about this snippet (and about more snippets from the docs that have the same "flaw" in my eyes), but I feel like I'm missing some piece.
Suppose I run a first query with offset=0 and limit=10. The server will return 10 results for this query and store them inside the cache after passing through the merge function.
Afterwards, I run the query with offset=5 and limit=10. Based on the approach described in the docs and the above code snippet, my understanding is that I will get only the items from 5 through 10 instead of items from 5 to 15, because Apollo will see that the existing variable is present in read (with existing holding the initial 10 items) and it will slice the available 5 items for me.
My question is - what am I missing? How will Apollo know to fetch new data from the server? How will new data arrive in the cache after the initial query? Keep in mind keyArgs is set to [], so the results will always be merged into a single item in the cache.
Apollo will not slice anything automatically. You have to define a merge function that keeps the data in the correct order in the cache. One approach is to keep an array with empty slots for data not yet fetched and place incoming data at their respective indexes. For instance, if you fetch items 30-40 out of a total of 100, your array would have 30 empty slots, then your items, then 60 empty slots. If you subsequently fetch items 70-80, those will be placed at their respective indexes, and so on.
Your read function is where the decision on whether a network request is necessary will be made. If you find all the requested data in existing, you return them and no request to the server is made. If any items are missing, you need to return undefined, which triggers a network request; your merge function then runs once the data is fetched, and finally your read function runs again, only this time the data will be in the cache and it will be able to return them.
This approach is for the cache-first fetch policy, which is the default.
The logic for returning undefined from your read function has to be implemented by you. There is no Apollo magic under the hood.
If you use the cache-and-network policy, then your read function doesn't need to return undefined when data is missing, since a request to the server is made regardless of what is in the cache.
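
A minimal sketch of that logic (assuming the same feed field and the default cache-first policy; the hole check below is one possible way to detect missing items):

import { InMemoryCache } from '@apollo/client';

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          keyArgs: [],
          read(existing, { args: { offset = 0, limit = 10 } }) {
            if (!existing) return undefined;
            // Any empty slot in the requested range means data is missing,
            // so return undefined to trigger a network request.
            for (let i = offset; i < offset + limit; ++i) {
              if (existing[i] === undefined) return undefined;
            }
            return existing.slice(offset, offset + limit);
          },
          merge(existing, incoming, { args: { offset = 0 } }) {
            // Place incoming items at their absolute indexes, leaving empty
            // slots for ranges that have not been fetched yet.
            const merged = existing ? existing.slice(0) : [];
            for (let i = 0; i < incoming.length; ++i) {
              merged[offset + i] = incoming[i];
            }
            return merged;
          },
        },
      },
    },
  },
});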

Gatsby merge graphql data with dynamic data

I have a site made with Gatsby that has some static data (a catalogue of items), but at runtime, when the user logs in, it makes a request to an API to get the list of items they are entitled to (this returns just a list of IDs).
Is it possible (and does this make sense to do) to return the GraphQL item data as a normalized list, so I can do easy lookups by item ID and then render it out into a list?
So basically
items = { 1: { title: 'itemA', price: 1 }, 2: { title: 'itemB', price: 2 }}; // from a Gql query
userItems = [1, 3, 5]; // from a dynamic fetch request
userItems.map(id => <Item {...items[id] } />) // easy lookup of item info
I know that if I didn't use GraphQL I could just fetch the JSON data for the items and either merge it with the user data when the service fetches it, or do as in my example above, since I would control the structure of the JSON data. I just wanted to use GraphQL to learn it, and so the item data could be reused on different pages that might be totally static (a list of all items, for example), where I could pick and choose which item attributes I want.
My data is currently an array of items; when I stored it as a keyed object of items and fed it to GraphQL via the JSON transformer, each of my nodes was different, so I didn't seem to be able to fetch a list of all items.
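
A minimal sketch of that normalization (assuming the page receives the items as a GraphQL query result named data.allItemsJson.nodes; both that shape and the Item component are hypothetical):

// Build an id-keyed lookup map once from the array the query returns.
const itemsById = Object.fromEntries(
  data.allItemsJson.nodes.map((item) => [item.id, item])
);

// userItems = [1, 3, 5] comes from the dynamic fetch request at runtime.
const entitled = userItems.map((id) => <Item key={id} {...itemsById[id]} />);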

Eager loading (withCount), limit on how many models you can preload?

I'm having an issue where I have to display a table rendered client-side, but the returned number of rows is about 2.5k entries. The model has a many-to-many relationship with a Sentence model through a bridging/pivot table with about 7k rows:
public function sentences()
{
    return $this->belongsToMany(Sentence::class)->withTimestamps();
}
I'm trying to preload the sentences via $entries = Entry::with('sentences')->get(); but this generates a query with 2.5k IDs in it: where entry_sentence.entry_id in (1, 2, 3, 4, 5, 7, ..., 2462, 2463, 2464).
The generated query runs in about 0.06s on my local machine, but it takes about 8 seconds to build the collection, which I think is due to hydration? Without preloading it's very fast, but then I run into N+1 issues when looping over my rows.
Am I forced into either "raw" DB::table() queries without Eloquent, or pagination (and thus server-side filtering etc.)? What is a reasonable limit on eager-loaded relationships? I can't seem to find any advice on this anywhere. I'm using DataTables in the frontend, and their rule of thumb was to use AJAX until about 5k rows, after which you should consider server-side pagination instead.
You can customize the related query:
$entries = Entry::with(['sentences' => function ($query) {
    return $query->limit(5); // however many is useful
}])->get();

Apollo query does not return cached data available using readFragment

I have 2 queries: getGroups(): [Group] and getGroup($id: ID!): Group. One component first loads all groups using getGroups(), and later a different component needs to access a specific group's data by ID.
I'd expect that Apollo's normalization would already have the Group data in the cache and would use it when the getGroup($id: ID!) query is executed, but that's not the case.
When I set the cache-only fetchPolicy, nothing is returned. I can access the data using readFragment, but that's not as flexible as just using a query.
Is there an easy way to make Apollo return the cached data from a different query as I would expect?
It's pretty common to have a query field that returns a list of nodes and another that takes an id argument and returns a single node. However, deciding which specific node or nodes are returned by a field is ultimately part of your server's domain logic.
As a silly example, imagine you had a field like getFavoriteGroup(id: ID!) -- you may have the group with that id in your cache, but that doesn't necessarily mean it should be returned by the field (it may not be favorited). Any number of factors (other arguments, execution context, etc.) might affect which node(s) are returned by a field. As a client, it's not Apollo's place to make assumptions about your domain logic.
However, you can effectively duplicate that logic by implementing query redirects.
import { InMemoryCache } from 'apollo-cache-inmemory';
import { toIdValue } from 'apollo-utilities';

const cache = new InMemoryCache({
  cacheRedirects: {
    Query: {
      group: (_, args) =>
        toIdValue(cache.config.dataIdFromObject({ __typename: 'Group', id: args.id })),
    },
  },
});
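
Note that cacheRedirects and toIdValue belong to Apollo Client 2.x; in Apollo Client 3 the equivalent redirect is expressed as a type-policy read function using toReference. A minimal sketch, assuming the same group field:

import { InMemoryCache } from '@apollo/client';

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        group: {
          // Redirect the field to the Group object already normalized in the
          // cache. If that object is missing, the cache read is incomplete and
          // Apollo falls back to the network.
          read(_, { args, toReference }) {
            return toReference({ __typename: 'Group', id: args.id });
          },
        },
      },
    },
  },
});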
