Query not refetching after delete mutation - graphql

I have a simple delete mutation, which is supposed to refetch all the searches using the RecentSearchesQuery.
const onDeletePress = async (searchResultId: string) => {
  await deleteSearch({
    variables: {
      _id: searchResultId,
    },
    refetchQueries: [{ query: RecentSearchesQuery }, "getUserSearches"],
  });
};
const {
  data: recentsData,
  error: recentsError,
  loading: recentsLoading,
} = useQuery(RecentSearchesQuery, {
  fetchPolicy: "network-only",
  nextFetchPolicy: "network-only",
});
When checking my DB, I do see the search being deleted, but recentsData still contains the old data, and recentsLoading isn't set to true after the mutation. I am using the same structure in other places in my code, where it works. What am I doing wrong here?
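One thing worth trying (a sketch, not a confirmed fix): await the refetch with awaitRefetchQueries, or skip the refetch and drop the deleted item from the cached list in an update callback instead. The root field name getUserSearches and the _id field used below are assumptions taken from the question.

const onDeletePress = async (searchResultId: string) => {
  await deleteSearch({
    variables: { _id: searchResultId },
    refetchQueries: [{ query: RecentSearchesQuery }],
    awaitRefetchQueries: true, // resolve the mutation only after the refetch completes
    // Alternative: update the cache directly instead of refetching.
    update(cache) {
      cache.modify({
        fields: {
          // assumed root field name from the question
          getUserSearches(existingRefs = [], { readField }) {
            return existingRefs.filter(
              (ref) => readField("_id", ref) !== searchResultId
            );
          },
        },
      });
    },
  });
};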

Related

How to query data using GraphQL based on slug value

I am working with GatsbyJS and GraphQL for the first time and trying to generate certain pages dynamically based on slug values.
Currently I am able to generate all of the relevant pages using the createPages API, but the pages are blank; no data is being retrieved.
I'm not quite sure how to retrieve the data from within the template. When I write the query I get errors like GraphQLError: Syntax Error: Expected Name, found "$", when adding $slug: string! for example.
Any help would be appreciated on this!
gatsby-node.js
exports.createPages = async function ({ actions, graphql }) {
  const { data } = await graphql(`
    query {
      dataJson {
        work {
          slug
        }
      }
    }
  `)
  data.dataJson.work.forEach(edge => {
    const slug = edge.slug;
    actions.createPage({
      path: `work/${slug}`,
      component: require.resolve(`./src/templates/work-article.tsx`),
      context: { slug: slug },
    })
  })
}
template.js
export const query = graphql`
  query($slug: String!) {
    dataJson {
      work(slug: $slug) {
        slug
        title
      }
    }
  }
`;
gatsby-config.js
{
  resolve: 'gatsby-source-filesystem',
  options: {
    name: 'data', // Identifier. Will then be queried as `dataJson`
    path: './src/assets/data', // Source folder containing the JSON files
  },
},
Example of the data saved in data.json
{
  "work": [
    {
      "slug": "my-slug",
      "title": "My Title",
      "img": "/static/img.jpg"
    },
    ...
  ]
}
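For reference, a minimal sketch of how the page context feeds the template query. It assumes Gatsby's auto-generated elemMatch/eq filter operators are available for these JSON nodes, and the file and component names are only illustrative; note also that GraphQL scalar types are capitalized (String!, not string!). Even with the filter, the matched node still returns its whole work array, so the component picks the matching entry (alternatively, the whole item could be passed through pageContext and the query skipped entirely).

// src/templates/work-article.tsx (sketch, not the asker's actual file)
import React from 'react';
import { graphql } from 'gatsby';

export const query = graphql`
  query WorkArticle($slug: String!) {
    allDataJson(filter: { work: { elemMatch: { slug: { eq: $slug } } } }) {
      nodes {
        work {
          slug
          title
        }
      }
    }
  }
`;

export default function WorkArticle({ data, pageContext }: any) {
  // The filter selects nodes, not array entries, so pick the matching item here.
  const items = data.allDataJson.nodes.flatMap((node: any) => node.work);
  const article = items.find((item: any) => item.slug === pageContext.slug);
  return <h1>{article?.title}</h1>;
}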

Urql cache invalidate is not removing items from the cache

My application has a list of organizations and I have buttons in my UI to delete them individually.
I would like the list to update and remove the deleted organization but that is not working for me.
I have set up a cache exchange like this, where I have (redundantly) tried two cache invalidation methods from the Urql docs:
const cache = cacheExchange({
  updates: {
    Mutation: {
      delOrg(_result, args, cache, _info) {
        // Invalidate cache based on type
        cache.invalidate({ __typename: 'Org', id: args.id as number });
        // Invalidate all fields
        const key = 'Query';
        cache
          .inspectFields(key)
          .filter((field) => field.fieldName === 'allOrgs')
          .forEach((field) => {
            cache.invalidate(key, field.fieldKey);
          });
      }
    }
  }
});
The GraphQL query that returns the list of organizations looks like:
query AllOrgs {
  allOrgs {
    id
    name
    logo {
      id
      url
    }
  }
}
And the mutation to delete an organization looks like: (it returns a boolean)
mutation DelOrg($id: ID!) {
  delOrg(id: $id)
}
cache.invalidate does not appear to do anything. I have checked the cache using the debugging plugin as well as console.log. I can see the records in the cache and they don't get removed.
I am using
"#urql/exchange-graphcache": "^4.4.1",
"#urql/svelte": "^1.3.3",
I found a way around my issue. Instead of a single invalidate call, I now use cache.updateQuery to update the data directly:
updates: {
  Mutation: {
    delOrg(_result, args, cache, _info) {
      if (_result.delOrg) {
        cache.updateQuery(
          {
            query: QUERY_ALL_ORGS
          },
          (data) => {
            data = {
              ...data,
              allOrgs: data.allOrgs.filter((n) => n.id !== args.id)
            };
            return data;
          }
        );
      }
    }
  }
}
If it's just about cache invalidation, you can use the following invalidateCache function:
const cacheConfig: GraphCacheConfig = {
  schema,
  updates: {
    Mutation: {
      // create
      createContact: (_parent, _args, cache, _info) =>
        invalidateCache(cache, 'allContacts'),
      // delete
      deleteContactById: (_parent, args, cache, _info) =>
        invalidateCache(cache, 'Contact', args),
    },
  },
}
const invalidateCache = (
  cache: Cache,
  name: string,
  args?: { input: { id: any } }
) =>
  args
    ? cache.invalidate({ __typename: name, id: args.input.id })
    : cache
        .inspectFields('Query')
        .filter((field) => field.fieldName === name)
        .forEach((field) => {
          cache.invalidate('Query', field.fieldKey)
        })
For creation mutations, which take no arguments, the function inspects the root Query and invalidates every field key with a matching field name.
For deletion mutations, the function invalidates the entity whose id was passed as the mutation argument.
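For completeness, a minimal sketch of wiring a graphcache config like the cacheConfig above into the client. The imports assume the @urql/svelte 1.x and @urql/exchange-graphcache 4.x versions quoted earlier, and the URL is a placeholder.

import { createClient, dedupExchange, fetchExchange } from '@urql/svelte';
import { cacheExchange } from '@urql/exchange-graphcache';

const client = createClient({
  url: '/graphql', // placeholder endpoint
  exchanges: [
    dedupExchange,
    cacheExchange(cacheConfig), // the config with the updates shown above
    fetchExchange,
  ],
});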

Apollo-client: Add item to array in cache

Suppose I have the following GraphQL types:
type User {
  id: String!
  posts: [Post!]!
}
type Post {
  id: String!
  text: String
}
And here is a mutation that returns the updated post:
mutation addNewPost(
  $userId: String!
  $text: String!
) {
  addNewPost(userId: $userId, text: $text) {
    id
    text
  }
}
After running this mutation my cache contains a new entry of a post. How do I add it to the user's posts array? I have tried cache.writeQuery and cache.modify but I cannot figure it out.
We push the new item into the array inside the update function, which is one of the options of useMutation.
I'm writing out the whole mutation so that you can get the idea 💡; let's have a look at an example.
By the update function:
const [createPost, { data, loading, error }] = useMutation(CREATE_POST, {
  update(cache, response) {
    // Here we write the update
    // Step 1: Read the existing data from the cache 👈
    const data = cache.readQuery({
      query: FETCH_POSTS_QUERY,
    });
    // Step 2: Update the cache in an immutable way 👈
    cache.writeQuery({
      query: FETCH_POSTS_QUERY,
      data: {
        getPosts: [response.data.createPost, ...data.getPosts],
      },
    });
  },
  variables: formValues,
});
By refetchQueries:
That's a really handy shortcut 🔥 for updating the cache: pass either the DocumentNode object parsed with the gql function, or the query name as a string.
const [createPost, { data, loading, error }] = useMutation(CREATE_POST, {
  refetchQueries: [
    FETCH_POSTS_QUERY, // the parsed DocumentNode
    // or, equivalently, the query name as a string: 'fetchPosts'
  ],
  variables: formValues,
});
You're going to want to write directly to the Apollo cache in order to update the other entities that your mutation has modified.
Have a look at the docs for specifics: https://www.apollographql.com/docs/react/data/mutations/#making-all-other-cache-updates (you're going to want to use cache.modify and cache.writeFragment).
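For the original question specifically (appending the new post to one user's posts array rather than to a root-level list), a cache.modify sketch along the lines of those docs could look like this. ADD_NEW_POST stands for the mutation document shown above, and userId is assumed to be the same id passed as the mutation variable.

import { gql, useMutation } from '@apollo/client';

const [addNewPost] = useMutation(ADD_NEW_POST, {
  update(cache, { data }) {
    const newPost = data?.addNewPost;
    if (!newPost) return;
    cache.modify({
      // target the User entity whose posts array should grow
      id: cache.identify({ __typename: 'User', id: userId }),
      fields: {
        posts(existingPostRefs = [], { readField }) {
          // write the new post into the cache and get a reference to it
          const newPostRef = cache.writeFragment({
            data: newPost,
            fragment: gql`
              fragment NewPost on Post {
                id
                text
              }
            `,
          });
          // avoid duplicates if the post is already in the list
          if (existingPostRefs.some((ref) => readField('id', ref) === newPost.id)) {
            return existingPostRefs;
          }
          return [...existingPostRefs, newPostRef];
        },
      },
    });
  },
});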

When querying for a GraphQL float, what does the second argument represent?

I am making the following query in GraphQL:
{
  metal(silver_bid_usd_toz: 1) {
    silver_bid_usd_toz
  }
}
which returns
{
  "data": {
    "metal": {
      "silver_bid_usd_toz": 16.45
    }
  }
}
The JSON object returned by the API is flat:
{
  silver_bid_usd_toz: 123,
  gold_bid_usd_toz: 123,
  copper_bid_usd_toz: 123
}
I don't understand what the int 1 in my GraphQL query means in metal(silver_bid_usd_toz: 1).
It doesn't matter what I change it to (it could be 1 or 355), but it is required for the query to work. Why can't I just do
{
  metal(silver_bid_usd_toz) {
    silver_bid_usd_toz
  }
}
My schema looks like this:
module.exports = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'Query',
    description: '...',
    fields: () => ({
      metal: {
        type: MetalType,
        args: {
          gold_bid_usd_toz: { type: GraphQLFloat },
          silver_bid_usd_toz: { type: GraphQLFloat }
        },
        resolve: (root, args) => fetch(
          `api_url`
        )
          .then(response => response.json())
      }
    })
  })
});
You are passing silver_bid_usd_toz as an argument for the field, but apparently you are not using it in the resolve function, so it's being ignored.
That seems to be the reason why the result is always the same when you change the argument value.
But it is odd that you say it is required for the query to work, since it is not defined as a GraphQLNonNull type.
According to the schema you posted, it should be possible to query this field without passing any argument.
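To illustrate both points, here is a sketch of a field definition where the argument is actually read and actually required. MetalType is the same type as in the question, and the query-string parameter forwarded to the API is purely hypothetical.

const { GraphQLNonNull, GraphQLFloat } = require('graphql');

const metalField = {
  type: MetalType,
  args: {
    // Wrapping the type in GraphQLNonNull is what makes an argument required.
    silver_bid_usd_toz: { type: new GraphQLNonNull(GraphQLFloat) },
  },
  // Read the value from `args`; without this, the argument is simply ignored.
  resolve: (root, args) =>
    fetch(`api_url?silver_bid_usd_toz=${args.silver_bid_usd_toz}`) // hypothetical parameter
      .then(response => response.json()),
};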

Elegant and efficient way to resolve related data in GraphQL

What is the best way to resolve the data in GraphQL?
Here I have a SeekerType and a JobType, with JobType nested inside SeekerType.
A Seeker can apply to many Jobs. When querying for a seeker, one can query just the seeker's data, or also query the nested JobType and get the jobs data too.
But the question is: if one doesn't query the nested JobType, they won't get the jobs data, yet my Seeker resolver in the viewer type would be fetching that data anyway.
So, while providing data to the seeker query, how can I handle that the client may want only the seeker details, or the jobs details as well?
Should I use a resolver on each nested type, take the parent object, and fetch the relevant data using fields from the parent object?
The code below is just for illustration and clarification; the question is about the best way to resolve the data.
ViewerType.js
const Viewer = new GraphQLObjectType({
  name: 'Viewer',
  fields: () => ({
    Seeker: {
      type: SeekerConnection,
      args: _.assign({
        seekerId: { type: GraphQLID },
        status: { type: GraphQLString },
        shortlisted: { type: GraphQLInt },
      }, connectionArgs),
      resolve: (obj, args, auth, rootValue) => {
        const filterArgs = getFilters(args) || {};
        return connectionFromPromisedArray(getSeekers(filterArgs), args)
          .then((data) => {
            // getSeekers() provides all the data required for SeekerType
            // fields and its JobType fields
            data.args = filterArgs;
            return data;
          }).catch(err => new Error(err));
      },
    },
  }),
});
SeekerType.js
const SeekerType = new GraphQLObjectType({
  name: 'SeekerType',
  fields: () => ({
    id: globalIdField('SeekerType', obj => obj._id),
    userId: {
      type: GraphQLID,
      resolve: obj => obj._id,
    },
    email: { type: GraphQLString },
    password: { type: GraphQLString },
    firstName: { type: GraphQLString },
    lastName: { type: GraphQLString },
    imageLink: { type: GraphQLString },
    education: { type: GraphQLString },
    address: { type: GraphQLString },
    jobs: {
      type: new GraphQLList(JobType),
    },
  }),
  interfaces: [nodeInterface],
});
getSeekers() provides the complete data in the shape of the GraphQL fields, including the nested jobs field data:
const getSeekers = filterArgs => new Promise((resolve, reject) => {
  if (Object.keys(filterArgs).length === 0) {
    Seeker.find(filterArgs, { password: 0 }, (err, d) => {
      if (err) return reject(err);
      return resolve(d);
    });
  } else {
    async.parallel([
      (callback) => {
        filterArgs._id = filterArgs.seekerId;
        delete filterArgs.seekerId;
        Seeker.find(filterArgs).lean()
          .exec((err, d) => {
            if (err) return callback(err);
            if (err === null && d === null) return callback(null);
            callback(null, d);
          });
      },
      (callback) => {
        filterArgs.seekerId = filterArgs._id;
        delete filterArgs._id;
        Applicant.find(filterArgs).populate('jobId').lean()
          .exec((err, resp) => {
            if (err) return callback(err);
            callback(null, resp);
          });
      },
    ], (err, data) => {
      const cleanedData = {
        userData: data[0],
        userJobMap: data[1],
      };
      const result = _.reduce(cleanedData.userData, (p, c) => {
        if (c.isSeeker) {
          const job = _.filter(cleanedData.userJobMap,
            v => _.isEqual(v.seekerId, c._id));
          const arr = [];
          _.forEach(job, (i) => {
            arr.push(i.jobId);
          });
          const t = _.assign({}, c, { jobs: arr });
          p.push(t);
          return p;
        }
        return reject('Not a Seeker');
      }, []);
      if (err) reject(err);
      resolve(result);
      // result has both the SeekerType data and the nested JobType data too
    });
  }
});
I gather this to be a question about how to prevent overfetching related data, i.e. how to avoid requesting the jobs data when querying the seeker.
This might have several motivations, such as optimization and security.
Considerations:
If the consumer (e.g. a web app) is under your control, you can simply avoid requesting the jobs field when querying seeker. As you may know, one of the stated goals of GraphQL is to return only what is needed over the wire to the consumer, to minimize network traffic and do things in one trip. On the backend, I would imagine the GraphQL engine is smart enough not to overfetch the jobs data either if it's not requested.
If your concern is more about security, or unintentional overfetching by consumer apps out of your control, then you can solve that by creating separate queries and even limiting access to them. E.g. one query for seeker and another for seekerWithJobsData.
Another technique to consider is graphql directives which offer an include switch that can be used to conditionally serve specific fields. One advantage of using this technique in your scenario might be to allow a convenient way to conditionally display multiple fields in multiple queries depending on the value of a single boolean e.g. JobSearchFlag=false. Read here for an overview of directives: http://graphql.org/learn/queries/
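A quick sketch of that directive approach; the operation, field, and variable names are illustrative rather than taken from the question's schema.

import gql from 'graphql-tag';

// The client decides per request whether the nested jobs should be fetched.
const SEEKER_QUERY = gql`
  query Seeker($seekerId: ID!, $withJobs: Boolean!) {
    seeker(seekerId: $seekerId) {
      firstName
      lastName
      # included only when the client sends withJobs: true
      jobs @include(if: $withJobs) {
        id
        title
      }
    }
  }
`;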
I am not sure I completely understand the question, but it seems to me you're loading both the seeker and the job type info at one level. You should load each of them on demand.
At the seeker level, you only get the seeker information, plus the IDs of any records related to that seeker, for example job type ids (if a seeker has many job types).
At the job type level, when nested under one seeker, you can use those ids to fetch the actual records. This makes fetching the job type records on-demand, happening only when the query asks for them.
The ID-to-record fetching can be cached and batched with a library like DataLoader, as in the sketch below.
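To make that concrete, here is a sketch of batching the per-seeker jobs lookup with DataLoader, reusing the Applicant model from the question; the grouping logic is an assumption about its document shape.

const DataLoader = require('dataloader');

// One batched query per tick for all requested seeker ids.
// (In a real app, create the loader per request, e.g. in the GraphQL context.)
const jobsBySeekerLoader = new DataLoader(async (seekerIds) => {
  const applicants = await Applicant.find({ seekerId: { $in: seekerIds } })
    .populate('jobId')
    .lean();
  // DataLoader expects results in the same order as the input keys.
  return seekerIds.map((id) =>
    applicants
      .filter((a) => String(a.seekerId) === String(id))
      .map((a) => a.jobId)
  );
});

// SeekerType's jobs field can then resolve on demand, only when requested:
// jobs: {
//   type: new GraphQLList(JobType),
//   resolve: (seeker) => jobsBySeekerLoader.load(seeker._id),
// },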
