I'm building an app using Apollo Client to query a GraphQL endpoint. I'd like to use the 'cache-and-network' fetch policy on normal queries, but that policy only works for watchQuery. What I really want is the following:
If we can query the server, we get a response from the server.
If we can't reach the server, we load the content from the cache, if it has been cached.
This is the code I'm using to instantiate the ApolloClient.
// Imports assume Apollo Client 3 with apollo-upload-client; adjust paths for your versions.
import { ApolloClient, InMemoryCache } from '@apollo/client';
import { createUploadLink } from 'apollo-upload-client';

const cache = new InMemoryCache(); // assumed; the original snippet references an existing `cache`

const defaultOptions = {
  watchQuery: {
    fetchPolicy: 'cache-and-network',
    errorPolicy: 'ignore',
  },
  query: {
    fetchPolicy: 'network-only',
    errorPolicy: 'all',
  },
  mutate: {
    errorPolicy: 'all',
  },
};

const client = new ApolloClient({
  cache: cache,
  link: createUploadLink({
    uri: 'http://localhost:3000/graphql',
  }),
  defaultOptions,
});
So I think I have two options: catch a failed query response and load the contents from the cache instead, or use the watchQuery method to issue the queries.
I have no idea how to do either, so any help would be welcome!
I ended up ignoring the defaultOptions object in the constructor and defining the fetchPolicy in the query itself, depending on the network status.
function getZones() {
  return ApolloService.client.query({
    query: GET_ZONES_CLIENT,
    fetchPolicy: navigator.onLine ? 'network-only' : 'cache-only'
  });
}
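The other option from the question, catching a failed network query and falling back to the cache, could look roughly like this. A sketch only, reusing ApolloService.client and GET_ZONES_CLIENT from the snippet above, with deliberately simple error handling:

async function getZonesWithFallback() {
  try {
    // Prefer fresh data whenever the server is reachable.
    return await ApolloService.client.query({
      query: GET_ZONES_CLIENT,
      fetchPolicy: 'network-only',
    });
  } catch (error) {
    // Network failed: answer from whatever is already in the cache.
    return ApolloService.client.query({
      query: GET_ZONES_CLIENT,
      fetchPolicy: 'cache-only',
    });
  }
}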
I'm working on a Vue 3 project using @vue/apollo-composable and @graphql-codegen.
My index page does a search query. Each result from that query gets its own tile on the page. I expect the tile queries to be answered by the cache, but instead they always miss.
At the page level I do this query:
query getTokens($limit: Int!) {
tokens(limit: $limit) {
...tokenInfo
}
}
Inside of the tile component I execute:
query getToken($id: uuid!) {
  token(id: $id) {
    ...tokenInfo
  }
}
The fragment looks like this:
fragment tokenInfo on token {
  id
  name
}
Expectation: the cache would handle 100% of the queries inside the tile components. (I'm hoping to avoid the downsides of serializing this data into Vuex.)
Reality: I get n+1 backend calls. I've tried a bunch of permutations, including getting rid of the fragment. If I send the getToken call with fetchPolicy: 'cache-only', no data is returned.
The apollo client configuration is very basic:
const cache = new InMemoryCache();

const defaultClient = new ApolloClient({
  uri: 'http://localhost:8080/v1/graphql',
  cache: cache,
  connectToDevTools: true,
});
const app = createApp(App)
  .use(Store, StateKey)
  .use(router)
  .provide(DefaultApolloClient, defaultClient);
I'm also attaching a screenshot of my Apollo dev tools. It appears that the cache is in fact getting populated with normalized data.
Any help would be greatly appreciated! :)
I've gotten this worked out thanks to @xadm's comment as well as some feedback I received on the Vue Discord. Really, my confusion came down to being new to so many of these tools. Deciding to live on the edge as a Vue 3 early adopter (which I love in many ways) made it even easier to get confused by the varying quality of documentation right now.
That said, here is what I've got as a solution.
Problem: as configured, Apollo has no way to know that a getToken(id) query can be answered by a token object that getTokens has already normalized into the cache, even though both queries return the same type (token).
Solution: The minimum configuration I've found that resolves this is as follows:
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        token(_, { args, toReference }) {
          return toReference({
            __typename: 'token',
            id: args?.id,
          });
        },
      },
    },
  },
});
However, this feels... kinda gross to me. Ideally, I'd love a way to just point Apollo at a copy of my schema, or a schema introspection, and have it figure this out for me. If someone is aware of a better way to do that, please let me know.
Better(?) solution: in the short term, here's what I feel is a slightly more scalable approach:
import { FieldReadFunction } from '@apollo/client';

type CacheRedirects = Record<string, FieldReadFunction>;

function generateCacheRedirects(types: string[]): CacheRedirects {
  const redirects: CacheRedirects = {};
  for (const type of types) {
    redirects[type] = (_, { args, toReference }) => {
      return toReference({
        __typename: type,
        id: args?.id,
      });
    };
  }
  return redirects;
}
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        ...generateCacheRedirects(['token']),
      },
    },
  },
});
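With that type policy in place, the tile component's query should be satisfied by the cache. A rough usage sketch with @vue/apollo-composable is below; GET_TOKEN stands for the getToken document shown earlier, and the props wiring is an assumption:

import { useQuery } from '@vue/apollo-composable';

// Sketch: 'cache-first' now resolves against the normalized token:<id> entry
// written by the getTokens query, so no extra backend call is made per tile.
const { result, loading } = useQuery(
  GET_TOKEN,                      // assumed: the getToken query shown earlier
  () => ({ id: props.id }),       // assumed prop carrying the token id
  { fetchPolicy: 'cache-first' },
);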
If anyone has any improvements on these, please add a comment/solution! :)
I am using apollo-client for my GraphQL setup. I have a paginated call for which I use the fetchMore function returned by the useQuery hook.
In the fetchMore function I was using its updateQuery param to append the newly fetched data to the results in the cache.
fetchMore({
  variables: {
    request: fetchMorerequest,
  },
  updateQuery: (previousResult, { fetchMoreResult }) => {
    // `update` is presumably the helper from the immutability-helper package
    const updatedResults = update(previousResult, {
      list: {
        data: { $push: fetchMoreResult.list.data },
        hasMore: { $set: fetchMoreResult.list.hasMore },
      },
    });
    return updatedResults;
  },
});
This was working fine for me.
Now I have changed my fetchPolicy from the default cache-first to cache-and-network.
After that change, the updatedResults I return from my updateQuery function are reflected in my cache but are not returned to my component. I can see the updated items in the cache in the Apollo extension in Chrome, but I am not getting them in my component.
Any idea what the issue could be here? Am I missing something? Please help.
Set nextFetchPolicy: 'cache-first' in your defaultOptions or in the current query's options. That way the first request still uses cache-and-network, but subsequent cache writes (such as the ones from fetchMore) are read back with cache-first and re-render the component. Read more in the Apollo docs on fetch policies.
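A minimal sketch of the query-level version (the query document and variables are placeholders standing in for the ones in the question):

import { useQuery } from '@apollo/client';

const { data, fetchMore } = useQuery(LIST_QUERY /* placeholder */, {
  variables: { request },
  fetchPolicy: 'cache-and-network', // first load still hits the network
  nextFetchPolicy: 'cache-first',   // later cache writes (e.g. from fetchMore) re-render from cache
});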
I am trying to use apollo-client to pull my user's info and am stuck with this problem:
I have a Container component responsible for pulling the user's data (not authentication) once it is rendered. The user may be logged in or not; the query returns either viewer = null or viewer = {...usersProps}.
Container makes the request const { data, refetch } = useQuery<Viewer>(VIEWER);, successfully receives the response, and saves it in the data property that I use to read .viewer from and set as my current user.
Then the user can log out; once they do, I clear the Container's user property with setUser(undefined) (not shown in the code below, not important).
The problem occurs when I try to re-login: calling refetch triggers the GraphQL HTTP request, but since it returns the same data as the previous login, useQuery() ignores it and does not update data. Well, technically there could not be an update, the data is the same. So my setUser(viewer) never gets executed a second time and the user is stuck on the login page.
const { data, refetch } = useQuery<Viewer>(VIEWER);
const viewer = data && data.viewer;

useEffect(() => {
  if (viewer) {
    setUser(viewer);
  }
}, [viewer]);
Ignoring a refetch that returns the same response almost makes sense, so I tried a different approach, with callbacks:
const { refetch } = useQuery<Viewer>(VIEWER, {
  onCompleted: data => {
    if (data.viewer) {
      setUser(data.viewer);
    }
  }
});
Here I would totally expect Apollo to call the onCompleted callback, whether the data is the same or not... but it does not. So I am kinda stuck: how do I make Apollo react to my query's refetch so I can re-populate the user in my Container's state?
This is a scenario where Apollo's local cache comes in handy.
Client
import { resolvers, typeDefs } from './resolvers';

let cache = new InMemoryCache();

const client = new ApolloClient({
  cache,
  link: new HttpLink({
    uri: 'http://localhost:4000/graphql',
    headers: {
      authorization: localStorage.getItem('token'),
    },
  }),
  typeDefs,
  resolvers,
});
cache.writeData({
  data: {
    isLoggedIn: !!localStorage.getItem('token'),
    cartItems: [],
  },
});
LoginPage
const IS_LOGGED_IN = gql`
  query IsUserLoggedIn {
    isLoggedIn @client
  }
`;
function IsLoggedIn() {
  const { data } = useQuery(IS_LOGGED_IN);
  return data.isLoggedIn ? <Pages /> : <Login />;
}
onLogin
function Login() {
  const { data, refetch } = useQuery(LOGIN_QUERY);
  let viewer = data && data.viewer;
  if (viewer) {
    localStorage.setItem('token', viewer.token);
  }
  // rest of the stuff
}
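One detail the snippet above leaves implicit: for the isLoggedIn @client query to flip from <Login /> to <Pages />, the cache flag also needs to be updated after a successful login. A minimal sketch, assuming useApolloClient from the same hooks package as useQuery and the Apollo Client 2.x writeData API used elsewhere in this answer:

function Login() {
  const client = useApolloClient();
  const { data } = useQuery(LOGIN_QUERY);
  const viewer = data && data.viewer;
  if (viewer) {
    localStorage.setItem('token', viewer.token);
    // Flip the local-state flag so IsLoggedIn re-renders <Pages />.
    client.writeData({ data: { isLoggedIn: true } });
  }
  // rest of the stuff
}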
onLogout
onLogout={() => {
  client.writeData({ data: { isLoggedIn: false } });
  localStorage.clear();
}}
For more information, see the Apollo docs on local state management.
Hope this helps!
I managed to get a Nuxt.js + Nest.js setup with TypeScript and Apollo GraphQL running.
To test that GraphQL works, I used the files from this example and added a button to the Nuxt.js page (on click, load all cats via GraphQL).
Everything works, reading and writing.
The problem is that after doing a mutation via the playground, or after restarting the Nest.js server with different GraphQL data, the Nuxt.js page displays the old data on click. I have to reload the whole page in the browser to get the Apollo client to fetch the new data.
I've tried adding a 'no-cache' and a 'network-only' fetch policy in nuxt.config.ts, without success:
apollo: {
  defaultOptions: {
    $query: {
      loadingKey: 'loading',
      fetchPolicy: 'no-cache'
    }
  },
  clientConfigs: {
    default: {
      httpEndpoint: 'http://localhost:4000/graphql',
      wsEndpoint: 'ws://localhost:4000/graphql'
    }
  }
}
The function to get the cats:
private getCats() {
this.$apollo.query({ query: GET_CATS_QUERY }).then((res:any) => {
alert(JSON.stringify(res.data, null, 0));
});
}
How can I disable the cache, or is there another solution?
I had a similar problem recently and managed to fix it by creating a Nuxt plugin which overrides the default client's options:
// plugins/apollo-overrides.ts
import { Plugin } from '@nuxt/types';

const apolloOverrides: Plugin = ({ app }) => {
  // disable caching on all queries
  app.apolloProvider.defaultClient.defaultOptions = {
    query: {
      fetchPolicy: 'no-cache',
    },
  };
};

export default apolloOverrides;
Don't forget to register it in Nuxt's config:
// nuxt.config.js
export default {
...
plugins: [
'~/plugins/apollo-overrides',
],
...
};
I had a problem like this; you can fix it easily by removing the $ before query:
defaultOptions: {
  query: {
    fetchPolicy: 'no-cache',
    errorPolicy: 'all'
  }
},
Then restart your dev server.
If that doesn't work, add a fetch policy to each query:
.query({
  query: sample,
  variables: {},
  errorPolicy: "all",
  fetchPolicy: "no-cache"
})
Initially I tried to use a serverless Lambda function to handle schema stitching for my APIs, but I started to move toward an Elastic Beanstalk server to avoid fetching the initial schema on each request.
Even so, a request to my main API server takes probably ten times as long to get a result from one of the child API servers as the child server does on its own. I'm not sure what is making the request so slow, but it seems like something is blocking it from resolving quickly.
This is my code for the parent API:
import * as express from 'express';
import { introspectSchema, makeRemoteExecutableSchema, mergeSchemas } from 'graphql-tools';
import { ApolloServer } from 'apollo-server-express';
import { HttpLink } from 'apollo-link-http';
import fetch from 'node-fetch';

async function run() {
  const createRemoteSchema = async (uri: string) => {
    const link = new HttpLink({ uri, fetch });
    const schema = await introspectSchema(link);
    return makeRemoteExecutableSchema({
      schema,
      link
    });
  };

  const remoteSchema = await createRemoteSchema(process.env.REMOTE_URL);

  const schema = mergeSchemas({
    schemas: [remoteSchema]
  });

  const app = express();

  const server = new ApolloServer({
    schema,
    tracing: true,
    cacheControl: true,
    engine: false
  });

  server.applyMiddleware({ app });

  app.listen({ port: 3006 });
}

run();
Any idea why it is so slow?
UPDATE:
For anyone trying to stitch together schemas on a local environment, I got a significant speed boost by fetching 127.0.0.1 directly instead of going through localhost.
http://localhost:3002/graphql → http://127.0.0.1:3002/graphql
This turned out not to be an Apollo issue at all for me.
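Applied to the code above, the change is just the URI handed to createRemoteSchema (the port comes from the example, so treat it as a placeholder):

// Fetching the loopback IP directly instead of 'localhost' gave the speed boost described above.
const remoteSchema = await createRemoteSchema('http://127.0.0.1:3002/graphql');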
I'd recommend using Apollo Engine to observe what is really going on with each request. You can add it to your Apollo Server configuration:
engine: {
  apiKey: "service:xxxxxx-xxxx:XXXXXXXXXXX"
},
Also, I've seen better performance when defining defaultMaxAge in the cacheControl options:
cacheControl: {
  defaultMaxAge: 300, // 5 min
  calculateHttpHeaders: true,
  stripFormattedExtensions: false
},
Another thing that can help is setting a longer max cache age on stitched objects where it makes sense. You can do this by adding cache hints in the schema-stitching resolvers:
mergeSchemas({
  schemas: [avatarSchema, mediaSchema, linkSchemaDefs],
  resolvers: [
    {
      AvatarFlatFields: {
        faceImage: {
          fragment: 'fragment AvatarFlatFieldsFragment on AvatarFlatFields { faceImageId }',
          resolve(parent, args, context, info) {
            info.cacheControl.setCacheHint({ maxAge: 3600 });
            return info.mergeInfo.delegateToSchema({
              schema: mediaSchema,
              operation: 'query',
              fieldName: 'getMedia',
              args: {
                mediaId: parseInt(parent.faceImageId),
              },
              context,
              info,
            });
          },
        },
      },
    },
  ],
});
Finally, using DataLoader can make processing requests much faster when batch processing and caching are enabled (read more in the DataLoader GitHub repo). The code will be something like this:
public avatarLoader = (context): DataLoader<any, any> => {
  return new DataLoader(
    ids => this.getUsersAvatars(dataLoadersContext(context), ids)
      .then(results => new Validation().validateDataLoaderArrayResults(ids, results)),
    { batch: true, cache: true },
  );
};
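For completeness, a resolver would then request avatars through the loader rather than hitting the backend once per item. This is only a sketch; the way the loader is exposed on the context and the field names are assumptions, not part of the code above.

// Hypothetical resolver: avatar lookups within one request get batched by the DataLoader.
const resolvers = {
  AvatarFlatFields: {
    faceImage: (parent: any, _args: any, context: any) =>
      context.avatarLoader.load(parent.faceImageId),
  },
};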