I have a site made with Gatsby that has some static data (a catalogue of items). At runtime, when the user logs in, it makes a request to an API to get the list of items they are entitled to (this returns just a list of ids).
Is it possible (and does it make sense) to return the GraphQL item data as a normalized map, so I can do easy lookups by item id and then render the result into a list?
So basically
const items = { 1: { title: 'itemA', price: 1 }, 2: { title: 'itemB', price: 2 } }; // from a GraphQL query
const userItems = [1, 3, 5]; // from a dynamic fetch request
userItems.map(id => <Item key={id} {...items[id]} />) // easy lookup of item info
I know that if I didn't use GraphQL I could just fetch the JSON data for the items and either merge it with the user data when the service fetches it, or do as in the example above, since I would control the structure of the JSON. I wanted to use GraphQL partly to learn it, and partly so the item data could be reused on other pages that might be totally static (a list of all items, for example), where I could pick and choose which item attributes I want.
My data is currently an array of items: when I stored it as a keyed object and fed it to GraphQL via the JSON transformer, each of my nodes ended up being a different type, so I couldn't seem to fetch a list of all items.
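For what it's worth, the lookup shape can be built client-side from an ordinary array query. Here is a rough sketch, assuming gatsby-transformer-json exposes the catalogue as allItemsJson, that the catalogue id lives in a field called itemId, and that a hypothetical /api/user/items endpoint returns the entitled ids (all of those names are placeholders):

import React, { useEffect, useState } from "react";
import { graphql } from "gatsby";

export default function EntitledItems({ data }) {
  // Normalize the array returned by GraphQL into a keyed object for easy lookups.
  const items = data.allItemsJson.nodes.reduce((acc, node) => {
    acc[node.itemId] = node;
    return acc;
  }, {});

  // Ids the logged-in user is entitled to, fetched at runtime.
  const [userItems, setUserItems] = useState([]);
  useEffect(() => {
    fetch("/api/user/items") // hypothetical endpoint returning e.g. [1, 3, 5]
      .then(res => res.json())
      .then(setUserItems);
  }, []);

  return (
    <ul>
      {userItems.map(id => (
        <li key={id}>{items[id] ? items[id].title : null}</li>
      ))}
    </ul>
  );
}

// Page query; older Gatsby versions use edges { node { ... } } instead of nodes.
export const query = graphql`
  {
    allItemsJson {
      nodes {
        itemId
        title
        price
      }
    }
  }
`;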
I was reading through the docs to learn pagination approaches for Apollo. This is the simple example where they explain the paginated read function:
https://www.apollographql.com/docs/react/pagination/core-api#paginated-read-functions
Here is the relevant code snippet:
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          read(existing, { args: { offset, limit }}) {
            // A read function should always return undefined if existing is
            // undefined. Returning undefined signals that the field is
            // missing from the cache, which instructs Apollo Client to
            // fetch its value from your GraphQL server.
            return existing && existing.slice(offset, offset + limit);
          },
          // The keyArgs list and merge function are the same as above.
          keyArgs: [],
          merge(existing, incoming, { args: { offset = 0 }}) {
            const merged = existing ? existing.slice(0) : [];
            for (let i = 0; i < incoming.length; ++i) {
              merged[offset + i] = incoming[i];
            }
            return merged;
          },
        },
      },
    },
  },
});
I have one major question about this snippet (and other snippets from the docs that have the same "flaw" in my eyes), but I feel like I'm missing some piece.
Suppose I run a first query with offset=0 and limit=10. The server returns 10 results, which are stored in the cache after passing through the merge function.
Afterwards, I run the query with offset=5 and limit=10. Based on the approach described in the docs and the snippet above, my understanding is that I will get only items 5 through 10 instead of items 5 through 15, because Apollo will see that existing is present in read (holding the initial 10 items) and will simply slice the 5 available items for me.
My question is: what am I missing? How will Apollo know to fetch new data from the server? How will new data arrive in the cache after the initial query? Keep in mind that keyArgs is set to [], so the results are always merged into a single cache entry.
Apollo will not slice anything automatically. You have to define a merge function that keeps the data in the correct order in the cache. One approach is to keep an array with empty slots for data not yet fetched, and place incoming data at their respective indexes. For instance, if you fetch items 30-40 out of a total of 100, your array would have 30 empty slots, then your items, then 60 empty slots. If you subsequently fetch items 70-80, those will be placed at their respective indexes, and so on.
Your read function is where the decision on whether a network request is necessary is made. If you find all the requested data in existing, you return it and no request to the server is made. If any items are missing, you return undefined, which triggers a network request; your merge function then runs once the data is fetched, and finally your read function runs again, only this time the data is in the cache and it can return it.
This approach is for the cache-first fetch policy, which is the default.
The logic for returning undefined from your read function is implemented by you; there is no Apollo magic under the hood.
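As an illustration of that logic, here is a sketch (not code from the docs) of what such a read function could look like for the feed field above:

read(existing, { args: { offset = 0, limit }}) {
  if (!existing) return undefined;
  const page = existing.slice(offset, offset + limit);
  // Slots the merge function has not filled yet come back as undefined,
  // so any gap in the requested range means the data is incomplete.
  for (let i = 0; i < limit; ++i) {
    if (page[i] === undefined) return undefined; // cache miss: fetch from the server
  }
  return page;
}

In a real app you would also need to handle the end of the list (when the server holds fewer than offset + limit items), otherwise this read keeps returning undefined and keeps requesting.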
If you use the cache-and-network policy, then your read doesn't need to return undefined when data is missing, since a network request will be made regardless.
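For reference, the fetch policy is chosen where the query is executed rather than in the type policy. A minimal sketch (FEED_QUERY is a placeholder for your paginated query document):

import { useQuery } from "@apollo/client";

function Feed() {
  const { data } = useQuery(FEED_QUERY, {
    fetchPolicy: "cache-and-network", // read from the cache, but also hit the network
    variables: { offset: 0, limit: 10 },
  });
  return data ? data.feed.length : null; // render something from data.feed here
}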
I have two kinds of documents in my Couchbase bucket, with keys like:
product.id.1.main
product.id.2.main
product.id.3.main
and
product.id.1.extended
product.id.2.extended
product.id.3.extended
I want to write a view over documents of the first kind, such that when some conditions match for a document, I can emit attributes contained in the document of the first kind as well as in the corresponding document of the second kind.
Something like:
function (doc, meta) {
  if (meta.id.match("product.id.*.main") && doc.attribute1.match("value1")) {
    var extendedDocId = replaceMainWithExtended(meta.id);
    emit(meta.id, doc.attribute1 + getExtendedDoc(extendedDocId).extendedAttribute1);
  }
}
I want to know how to implement this kind of function in Couchbase views:
getExtendedDoc(extendedDocId).extendedAttribute1
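For context: as far as I know, a Couchbase view's map function only sees the single document currently being indexed, so there is no supported equivalent of getExtendedDoc inside it. One common workaround (a hedged sketch, using a shared product key rather than a cross-document lookup) is to emit both kinds of documents under their product id and join the rows in the application after querying the view:

function (doc, meta) {
  // Group both document kinds under their shared prefix, e.g. "product.id.1".
  var match = meta.id.match(/^(product\.id\.\d+)\.(main|extended)$/);
  if (!match) return;

  var productKey = match[1];
  var kind = match[2];

  if (kind === "main" && doc.attribute1 && doc.attribute1.match("value1")) {
    emit(productKey, { kind: "main", attribute1: doc.attribute1 });
  } else if (kind === "extended") {
    // Extended docs are emitted unconditionally; the application keeps only
    // those whose "main" row also appears under the same key.
    emit(productKey, { kind: "extended", extendedAttribute1: doc.extendedAttribute1 });
  }
}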
Right now, in order for the list to render properly, I need to pass in data shaped like this:
row = {
  id: value,
  name: value,
  height: value,
  categories: [1, 2, 3, 4]
}
How can I adapt the code so that the list works with this kind of data?
row = {
  id: value,
  name: value,
  height: value,
  categories: [{id: "1"}, {id: "2"}, {id: "3"}, {id: "4"}]
}
When I try that, it seems JSON.stringify is applied to the objects, so it ends up trying to find a category with id [object Object].
I would like to avoid a per-case conversion of the data, as I do now.
It seems I cannot do anything in my restClient, since the stringify has already been applied.
I have the same issue when I fetch just one row, e.g. in Edit or Create: the categories ReferenceArrayInput is not populated when categories contains objects.
Have you tried using format?
https://marmelab.com/admin-on-rest/Inputs.html#transforming-input-value-tofrom-record
It might help transform the record value into what the input expects; then you can use parse() to change the value back into the format your API expects.
If this does not work, then you will probably have to create a custom component based on ReferenceArrayInput.
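A rough sketch of what that could look like, assuming ReferenceArrayInput honours format/parse here and that the child input and field names below match your setup (they are assumptions, not your actual code):

import React from 'react';
import { Edit, SimpleForm, ReferenceArrayInput, SelectArrayInput } from 'admin-on-rest';

const RowEdit = (props) => (
  <Edit {...props}>
    <SimpleForm>
      <ReferenceArrayInput
        source="categories"
        reference="categories"
        // record -> input: [{ id: "1" }, ...] becomes ["1", ...]
        format={v => (v ? v.map(item => item.id) : [])}
        // input -> record: ["1", ...] becomes [{ id: "1" }, ...]
        parse={v => (v ? v.map(id => ({ id })) : [])}
      >
        <SelectArrayInput optionText="name" />
      </ReferenceArrayInput>
    </SimpleForm>
  </Edit>
);

export default RowEdit;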
I'm trying to build an API to create a collection in Backbone. My model is called log and has these (shortened) properties (the format returned by getLog/<id>):
{
  'id': string,
  'duration': float,
  'distance': float,
  'startDate': string,
  'endDate': string
}
I need to create a collection because I have many logs and I want to display them in a list. The API that feeds the collection (getAllLogs) takes 30 seconds to run, which is too slow. It returns the same format as getLog/<id>, but as an array with one element for each log in the database.
To speed things up, I rebuilt the API several times and optimized it as far as I could, but I'm still stuck at 30 seconds, which is still too slow.
My question is whether it is possible to fill a collection with model instances that don't hold ALL the information, just the part needed to display the list. This would speed up loading the collection and displaying the list, while in the background I could continue loading the other properties, or load them only for the elements I really need.
In my case, the model would load only with this information:
{
  'id': string,
  'distance': float
}
and all other properties could be loaded later.
How can I do it? Is it a good idea anyway?
Thanks.
One way to do this is to use map to get the shortened model. Something like this will convert a Backbone.Collection "collection" with all properties to one with only "id" and "distance":
var shortCollection = new Backbone.Collection(collection.toJSON().map(function (x) {
  return { id: x.id, distance: x.distance };
}));
Here's a Fiddle illustration.
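If the slow part is fetching everything up front, another option is to point the collection at a lighter endpoint and lazily fetch the rest per model. A sketch, assuming you can expose a hypothetical getAllLogsSummary endpoint that returns only id and distance:

var Log = Backbone.Model.extend({
  // model.fetch() hits /getLog/<id> and merges the full attributes into the model.
  urlRoot: '/getLog'
});

var Logs = Backbone.Collection.extend({
  model: Log,
  url: '/getAllLogsSummary' // hypothetical lightweight endpoint: [{ id, distance }, ...]
});

var logs = new Logs();
logs.fetch(); // fast initial load, enough to render the list

// Later, for a single entry the user actually opens:
// logs.get(someId).fetch(); // fills in duration, startDate, endDate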
At the moment, I'm trying to scrape forms from some sites using the following query:
select * from html
where url="http://somedomain.com"
and xpath="//form[@action]"
This returns a result like so:
{
  form: {
    action: "/some/submit",
    id: "someId",
    div: {
      input: [
        ... some input elements here
      ]
    },
    fieldset: {
      div: {
        input: [
          ... some more input elements here
        ]
      }
    }
  }
}
On some sites this could go many levels deep, so I'm not sure how to begin trying to filter out the unwanted elements in the result. If I could filter them out here, then it would make my back-end code much simpler. Basically, I'd just like the form and any label, input, select (and option) and textarea descendants.
Here's an XPath query I tried, but I realised that the element hierarchy would not be maintained and this might cause a problem if there are multiple forms on the page:
//form[@action]/descendant-or-self::*[self::form or self::input or self::select or self::textarea or self::label]
However, I did notice that the elements returned by this query were no longer returned under divs and other elements beneath the form.
I don't think it will be possible in a plain query as you have tried.
However, it would not be too much work to create a new data table containing some JavaScript that does the filtering you're looking for.
Data table
A quick, little <execute> block might look something like the following.
// Run the original query, then filter each matched form down to the
// descendants selected by the "filter" XPath expression.
var elements = y.query("select * from html where url=@u and xpath=@x", {u: url, x: xpath}).results.elements();
var results = <url url={url}></url>;
for each (element in elements) {
  // Copy the form element, drop its children, then re-attach only the
  // descendants we care about.
  var result = element.copy();
  result.setChildren("");
  result.normalize();
  for each (descendant in y.xpath(element, filter)) {
    result.node += descendant;
  }
  results.node += result;
}
response.object = results;
» See the full example data table.
Example query
use "store://VNZVLxovxTLeqYRH6yQQtc" as example;
select * from example where url="http://www.yahoo.com"
» See this query in the YQL console
Example results
Hopefully the above is a step in the right direction, and doesn't look too daunting.
Links
Open Data Tables Reference
Executing JavaScript in Open Data Tables
YQL Editor
This is how I would filter specific nodes but still allow the parent tag with all attributes to show:
//form[@name]/@* | //form[@action]/descendant-or-self::node()[name()='input' or name()='select' or name()='textarea' or name()='label']
If there are multiple form tags on the page, the results should be grouped under their parent form tag rather than all wedged together and unidentifiable.
You could also reverse the union if it would help how you'd like the nodes to appear:
//form[@action]/descendant-or-self::node()[name()='input' or name()='select' or name()='textarea' or name()='label'] | //form[@name]/@*
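For example, dropping the reversed union into the original query from the question (same placeholder domain) would look like this; whether the attribute nodes come through usefully in the JSON results is worth checking in the YQL console:

select * from html
where url="http://somedomain.com"
and xpath="//form[@action]/descendant-or-self::node()[name()='input' or name()='select' or name()='textarea' or name()='label'] | //form[@name]/@*"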