I have a problem with a local Apollo cache modification like this:
updateBoth: (schema, value) =>
  client.writeQuery({
    query: queryObject,
    variables: { id },
    data: {
      Object: {
        ...obj,
        value,
        objectSchema: {
          ...obj.objectSchema,
          structure: schema,
        },
      },
    },
  }),
I am struggling greatly; can anybody help me?
It does not update value (this time; in general it misbehaves in many situations).
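For comparison, here is a minimal sketch of the same update done with cache.modify instead of writeQuery, assuming Apollo Client 3 and that obj is a normalized entity (i.e. it carries __typename and its key fields). With writeQuery, the shape of data has to match the query exactly, which is a common source of silent failures:

// A sketch, not the asker's code: update two fields of one cached entity.
client.cache.modify({
  // cache.identify derives the cache ID from __typename and the key fields of obj
  id: client.cache.identify(obj),
  fields: {
    // each updater receives the existing field value and returns the new one
    value: () => value,
    objectSchema: (existing) => ({ ...existing, structure: schema }),
  },
});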
With a schema like
schema {
  query: QueryRoot
}

scalar MyBigUint

type Order {
  id: Int!
  data: OrderCommons!
  kind: OrderType!
}

type OrderBook {
  bids(limit: Int): [Order!]!
  asks(limit: Int): [Order!]!
}

type OrderCommons {
  quantity: Int!
  price: MyBigUint! # it doesn't matter whether it's MyBigUint or a simple Int - the issue occurs anyway
}

enum OrderType {
  BUY
  SELL
}

type QueryRoot {
  orderbook: OrderBook!
}
And a query query { orderbook { bids { data { price } }, asks { data { price } } } }
In the GraphQL playground of my GraphQL API (and at the network level of my Apollo app, too) I receive a result like
{
  "data": {
    "orderbook": {
      "bids": [
        {
          "data": {
            "price": "127"
          }
        },
        {
          "data": {
            "price": "74"
          }
        },
        ...
      ],
      "asks": [
        {
          "data": {
            "price": "181"
          }
        },
        {
          "data": {
            "price": "187"
          }
        },
        ...
      ]
    }
  }
}
where, for the purpose of this question, the bids are ordered by price in descending order, like ["127", "74", "73", "72"], etc., and the asks are ordered in ascending order, accordingly.
However, in Apollo, after a query is done, I notice that one of the arrays gets seemingly random data.
For the purpose of the question, the useQuery React hook is used, but the same happens when I query imperatively from a freshly initialized ApolloClient.
const { data, subscribeToMore, ...rest } = useQuery<OrderbookResponse>(GET_ORDERBOOK_QUERY);
console.log(data?.orderbook?.bids?.map(r => r.data.price));
console.log(data?.orderbook?.asks?.map(r => r.data.price));
Here, corrupted Bids data gets printed, i.e. ['304', '306', '298', '309', '277', '153', '117', '108', '87', '76'] (notice the order being wrong, at the least), whereas the Asks data looks just fine. Inspecting the network, I find that the Bids are not only properly ordered there, but also have different (correct, straight from the DB) values!
Therefore, it seems something's getting corrupted on the way while Apollo delivers the data.
What could be the issue here, I wonder, and where would one start debugging such an issue? There seem to be no warnings from Apollo either; it just silently corrupts the data.
I'm clearly doing something wrong, but what?
The issue seems to stem from how Apollo caches data.
My Bids and Asks can have the same numeric IDs while sharing the same Order GraphQL type. By its defaults, Apollo assumes that a Bid and an Ask with the same ID are the same thing, and the resulting data gets wrecked as a consequence.
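With the default normalization, the cache key is just the __typename combined with the id field, so a bid and an ask that share an id collapse into a single cache entry; roughly:

// Sketch of the default cache IDs (illustration, not actual code):
// bid: { __typename: 'Order', id: 1, kind: 'BUY',  ... } -> cache key 'Order:1'
// ask: { __typename: 'Order', id: 1, kind: 'SELL', ... } -> cache key 'Order:1'
// Whichever object is written last overwrites the other, producing the
// seemingly random mix of values observed above.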
An easy fix is to tell Apollo that the Order type has a composite key, at cache initialization:
cache: new InMemoryCache({
  typePolicies: {
    Order: {
      keyFields: ['id', 'kind'],
    },
  },
})
This way it will understand that Order entities (an Ask and a Bid) with the same ID are indeed different pieces of data.
Note that the kind field also has to be added to the query strings accordingly.
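For instance, the query from above would need to select every key field, so that the cache can compute the composite key; with keyFields of ['id', 'kind'], both id and kind must be present in the selection:

query {
  orderbook {
    bids { id kind data { price } }
    asks { id kind data { price } }
  }
}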
I am new to Apollo Server, and I am trying really hard to understand how to make remote schemas into one huge schema. I was able to join the schemas and can now query the data; however, I cannot seem to link/resolve the types. My two microservices use the same type name for the types that are the same everywhere, with the pk being common to all of them, except that one schema has only the pk while the other has some extra fields.
My shop schema looks like this:
type UserType implements Node {
  id: ID!
  shops: [ShopType]
  pk: Int
}
What really matters in this case is the pk, because it is supposed to join with my auth schema, which looks like this:
type UserType implements Node {
  id: ID!
  username: String
  email: String
  pk: Int
}
along with many other fields. I would like to be able to join the data fields of the two in Apollo Server, since that is where I am merging my two schemas, so that whenever I query
{
  shops {
    shopOwner {
      username
      email
    }
  }
}
then, even though username and email are not in the first schema, it can resolve those fields by pk from the auth schema.
I have used something like this
const createNewSchema = async () => {
  const schemas = await createRemoteExecutableSchemas();
  return mergeSchemas({
    schemas,
  });
};
to join my schemas. So how do I even make the two work together as I desire? Thanks so much in advance.
I was able to make it work. I used stitchSchemas, and it looked like this:
const createNewSchema = async () => {
  const schemas = await createRemoteExecutableSchemas();
  return stitchSchemas({
    subschemas: [
      {
        schema: schemas['shop'],
        merge: {
          UserType: {
            entryPoints: [
              {
                fieldName: 'shopOwnerById',
                selectionSet: '{ pk }',
                args: originalObject => ({ pk: originalObject.pk }),
              },
              {
                fieldName: 'shopById',
                selectionSet: '{ pk }',
                args: originalObject => ({ pk: originalObject.pk }),
              },
            ],
          },
        },
      },
      {
        schema: schemas['auth'],
        merge: {
          UserType: {
            entryPoints: [
              {
                fieldName: 'userById',
                selectionSet: '{ pk }',
                args: originalObject => ({ pk: originalObject.pk }),
              },
            ],
          },
        },
      },
    ],
    mergeTypes: true,
  });
};
In my case, just so as not to confuse anyone: I made the schemas a dictionary, where each one is named after the API it links to. I also had to make sure that I exposed queries for all the fieldNames in my respective endpoints. Thanks for taking a look; the guide at https://www.graphql-tools.com/docs/stitch-type-merging helped a lot.
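For illustration, a sketch of what those entry-point fields might look like in the respective schemas; the field names come from the config above, while the argument shapes are assumptions:

# auth service (assumed shape)
type Query {
  userById(pk: Int!): UserType
}

# shop service (assumed shape)
type Query {
  shopOwnerById(pk: Int!): UserType
  shopById(pk: Int!): ShopType
}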
I have an array of countries received from an Apollo backend without an ID field.
export const QUERY_GET_DELIVERY_COUNTRIES = gql`
  query getDeliveryCountries {
    deliveryCountries {
      order
      name
      daysToDelivery
      zoneId
      iso
      customsInfo
    }
  }
`
Schema of these objects:
{
  customsInfo: null,
  daysToDelivery: 6,
  iso: "UA",
  name: "Ukraine",
  order: 70,
  zoneId: 8,
  __typename: "DeliveryCountry"
}
In nested components I read these objects via client.readQuery.
What I want is to save them to localStorage, read them on startup, and write the data back into the Apollo Client cache.
What I've already tried:
useEffect(() => {
  const deliveryCountries = JSON.parse(localStorage.getItem('deliveryCountries') || '[]')
  if (!deliveryCountries || !deliveryCountries.length) {
    getCountriesLazy()
  } else {
    deliveryCountries.map((c: DeliveryCountry) => {
      client.writeQuery({
        query: QUERY_GET_DELIVERY_COUNTRIES,
        data: {
          deliveryCountries: {
            __typename: "DeliveryCountry",
            order: c.order,
            name: c.name,
            daysToDelivery: c.daysToDelivery,
            zoneId: c.zoneId,
            iso: c.iso,
            customsInfo: c.customsInfo
          }
        }
      })
    })
  }
}, [])
But after executing the code above I have only one object in the countries cache. How can I write all the objects without having an explicit ID? Or maybe I'm doing something wrong?
Lol. I just had to put the array into the necessary field without iterating: writeQuery replaces all the data rather than appending anything "to the end".
client.writeQuery({
  query: QUERY_GET_DELIVERY_COUNTRIES,
  data: {
    deliveryCountries: deliveryCountries
  }
})
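Put together, a minimal sketch of the corrected effect, reusing the names from the question (getCountriesLazy, client); everything else is unchanged:

useEffect(() => {
  const deliveryCountries = JSON.parse(localStorage.getItem('deliveryCountries') || '[]')
  if (!deliveryCountries.length) {
    // nothing stored yet: fall back to the network
    getCountriesLazy()
  } else {
    // a single writeQuery with the whole array; no per-item iteration
    client.writeQuery({
      query: QUERY_GET_DELIVERY_COUNTRIES,
      data: { deliveryCountries },
    })
  }
}, [])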
When I'm making a request to my backend through a mutation like this:
mutation {
  resetPasswordByToken(token: "my-token") {
    id
  }
}
I'm getting a response in the following format:
{
  "data": {
    "resetPasswordByToken": {
      "id": 3
    }
  }
}
And that wrapper object, named the same as the mutation, seems somewhat awkward (and at the very least redundant) to me. Is there a way to get rid of that wrapper to make the returned result a bit cleaner?
This is how I define the mutation now:
export const ResetPasswordByTokenMutation = {
  type: UserType,
  description: 'Sets a new password and sends an informing email with the password generated',
  args: {
    token: { type: new GraphQLNonNull(GraphQLString) },
    captcha: { type: GraphQLString },
  },
  resolve: async (root, args, request) => {
    const ip = getRequestIp(request);
    const user = await Auth.resetPasswordByToken(ip, args);
    return user.toJSON();
  }
};
In a word: No.
resetPasswordByToken is not a "wrapper object", but simply a field you've defined in your schema that resolves to an object (in this case, a UserType). While it's common to request just one field on your mutation type at a time, it's possible to request any number of fields:
mutation {
  resetPasswordByToken(token: "my-token") {
    id
  }
  someOtherMutation {
    # some fields here
  }
  andYetAnotherMutation {
    # some other fields here
  }
}
If we were to flatten the structure of the response as you suggest, we would not be able to distinguish the data returned by one mutation from the data returned by another. We likewise need to nest all of this inside data to keep the actual data separate from any returned errors (which appear in a separate errors entry).
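The closest you can get is renaming the field with an alias, which is standard GraphQL; the alias name below is arbitrary:

mutation {
  user: resetPasswordByToken(token: "my-token") {
    id
  }
}
# The response becomes: { "data": { "user": { "id": 3 } } }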
I'm attempting to use graphql to tie together a number of rest endpoints, and I'm stuck on how to filter, sort and page the resulting data. Specifically, I need to filter and/or sort by nested values.
I cannot do the filtering on the rest endpoints in all cases because they are separate microservices with separate databases. (i.e. I could filter on title in the rest endpoint for articles, but not on author.name). Likewise with sorting. And without filtering and sorting, pagination cannot be done on the rest endpoints either.
To illustrate the problem, and as an attempt at a solution, I've come up with the following using formatResponse in apollo-server, but am wondering if there is a better way.
I've boiled down the solution to the most minimal set of files that I could think of:
data.js represents what would be returned by two fictional REST endpoints:
export const Authors = [{ id: 1, name: 'Sam' }, { id: 2, name: 'Pat' }];

export const Articles = [
  { id: 1, title: 'Aardvarks', author: 1 },
  { id: 2, title: 'Emus', author: 2 },
  { id: 3, title: 'Tapir', author: 1 },
];
The schema is defined as:
import _ from 'lodash';
import {
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLList,
  GraphQLString,
  GraphQLInt,
} from 'graphql';
import {
  Articles,
  Authors,
} from './data';

const AuthorType = new GraphQLObjectType({
  name: 'Author',
  fields: {
    id: {
      type: GraphQLInt,
    },
    name: {
      type: GraphQLString,
    },
  },
});

const ArticleType = new GraphQLObjectType({
  name: 'Article',
  fields: {
    id: {
      type: GraphQLInt,
    },
    title: {
      type: GraphQLString,
    },
    author: {
      type: AuthorType,
      resolve(article) {
        return _.find(Authors, { id: article.author });
      },
    },
  },
});

const RootType = new GraphQLObjectType({
  name: 'Root',
  fields: {
    articles: {
      type: new GraphQLList(ArticleType),
      resolve() {
        return Articles;
      },
    },
  },
});

export default new GraphQLSchema({
  query: RootType,
});
And the main index.js is:
import express from 'express';
import { apolloExpress, graphiqlExpress } from 'apollo-server';
import bodyParser from 'body-parser';
import _ from 'lodash';
import rql from 'rql/query';
import rqlJS from 'rql/js-array';
import schema from './schema';

const PORT = 8888;
const app = express();

function formatResponse(response, { variables }) {
  let data = response.data.articles;

  // Filter
  if ({}.hasOwnProperty.call(variables, 'q')) {
    // As an example, use a resource query lib like https://github.com/persvr/rql to do easy filtering;
    // in production this would have to be tightened up a lot.
    data = rqlJS.query(rql.Query(variables.q), {}, data);
  }

  // Sort
  if ({}.hasOwnProperty.call(variables, 'sort')) {
    const sortKey = _.trimStart(variables.sort, '-');
    data = _.sortBy(data, (element) => _.at(element, sortKey));
    if (variables.sort.charAt(0) === '-') _.reverse(data);
  }

  // Pagination
  if ({}.hasOwnProperty.call(variables, 'offset') && variables.offset > 0) {
    data = _.slice(data, variables.offset);
  }
  if ({}.hasOwnProperty.call(variables, 'limit') && variables.limit > 0) {
    data = _.slice(data, 0, variables.limit);
  }

  return _.assign({}, response, { data: { articles: data } });
}

app.use('/graphql', bodyParser.json(), apolloExpress((req) => {
  return {
    schema,
    formatResponse,
  };
}));

app.use('/graphiql', graphiqlExpress({
  endpointURL: '/graphql',
}));

app.listen(
  PORT,
  () => console.log(`GraphQL Server running at http://localhost:${PORT}`)
);
For ease of reference, these files are available at this gist.
With this setup, I can send this query:
{
  articles {
    id
    title
    author {
      id
      name
    }
  }
}
Along with these variables (it seems this is not the intended use for variables, but it was the only way I could get the post-processing parameters into the formatResponse function):
{ "q": "author/name=Sam", "sort": "-id", "offset": 1, "limit": 1 }
and get this response, filtered to where Sam is the author, sorted by id descending, and getting the second page where the page size is 1.
{
  "data": {
    "articles": [
      {
        "id": 1,
        "title": "Aardvarks",
        "author": {
          "id": 1,
          "name": "Sam"
        }
      }
    ]
  }
}
Or these variables:
{ "sort": "-author.name", "offset": 1 }
to get this response, sorted by author name descending and including all articles except the first:
{
  "data": {
    "articles": [
      {
        "id": 1,
        "title": "Aardvarks",
        "author": {
          "id": 1,
          "name": "Sam"
        }
      },
      {
        "id": 2,
        "title": "Emus",
        "author": {
          "id": 2,
          "name": "Pat"
        }
      }
    ]
  }
}
So, as you can see, I am using the formatResponse function as a post-processing step to do the filtering/paging/sorting.
So, my questions are:
Is this a valid use case?
Is there a more canonical way to do filtering on deeply nested properties, along with sorting and paging?
Is this a valid use case? Is there a more canonical way to do filtering on deeply nested properties, along with sorting and paging?
A major part of the original question lies in the collections being segregated across different databases in separate microservices. In essence, it is necessary to join the collections and then filter on some key, but that is impossible to do directly, since the original collection has no field to filter, sort, or paginate on.
The straightforward solution is to run full (or pre-filtered) queries against the original collections, and then join and filter the resulting dataset on the application server, e.g. with lodash, as in your solution. This is feasible for small collections, but in the general case it causes large data transfers and inefficient sorting, since there is no index structure (a real RB-tree or skip list); with quadratic complexity it is not very good.
Depending on the resources available on the application server, special cache and index tables can be built there. If the collection structure is fixed, the relations between collection entries and their fields can be reflected in a dedicated search table and updated on demand. This is like creating a search index, but on the application server rather than in the database. Of course, it will consume resources, but it will be faster than direct lodash-like sorting; a sketch of this idea follows.
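A minimal sketch of such an application-server index, using the Authors and Articles fixtures from data.js above; build it once whenever the source data changes, then serve every request from it:

// Join index: author id -> author record, so the per-article join is O(1).
const authorsById = new Map(Authors.map(author => [author.id, author]));

// Pre-joined, pre-sorted view of the articles (sorted by author name once,
// instead of re-sorting with lodash on every request).
const articlesByAuthorName = Articles
  .map(article => ({ ...article, author: authorsById.get(article.author) }))
  .sort((a, b) => a.author.name.localeCompare(b.author.name));

// Pagination then degrades to a simple slice over the pre-sorted index.
const page = (offset, limit) => articlesByAuthorName.slice(offset, offset + limit);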
The task can also be approached from the other side, if there is access to the structure of the original databases. The key here is denormalization. Contrary to the classical relational approach, collections can carry duplicated information to avoid a later join operation. E.g., the Articles collection can hold some of the information from the Authors collection, namely whatever is necessary to perform filtering, sorting, and pagination in subsequent operations, as sketched below.
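For example, a hypothetical denormalized Articles collection; the authorName field is an assumption, added purely so that the articles endpoint can filter and sort by it directly:

// Each article duplicates the author's name, so the articles service can
// filter, sort, and paginate on it without joining against Authors.
export const Articles = [
  { id: 1, title: 'Aardvarks', author: 1, authorName: 'Sam' },
  { id: 2, title: 'Emus', author: 2, authorName: 'Pat' },
  { id: 3, title: 'Tapir', author: 1, authorName: 'Sam' },
];

// "Filter by author.name" becomes a plain field filter on one collection:
const bySam = Articles.filter(article => article.authorName === 'Sam');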