I'm using apollo-client, apollo-link and react-apollo, and I want to fully disable the cache, but I don't know how to do it.
I read the source of apollo-cache-inmemory; it has a config argument in its constructor, but I can't build a dummy storeFactory to make it work.
You can set defaultOptions on your client like this:
const defaultOptions: DefaultOptions = {
  watchQuery: {
    fetchPolicy: 'no-cache',
    errorPolicy: 'ignore',
  },
  query: {
    fetchPolicy: 'no-cache',
    errorPolicy: 'all',
  },
};

const client = new ApolloClient({
  link: concat(authMiddleware, httpLink),
  cache: new InMemoryCache(),
  defaultOptions: defaultOptions,
});
Setting fetchPolicy to no-cache avoids using the cache.
See https://www.apollographql.com/docs/react/api/core/ApolloClient/#defaultoptions
Actually, setting fetchPolicy to network-only still saves the response to the cache for later use; it only bypasses reading from the cache and forces a network request.
If you really want to disable the cache for both reads and writes, use no-cache, which is "similar to network-only, except the query's result is not stored in the cache."
Take a look at the official docs: https://www.apollographql.com/docs/react/data/queries/#configuring-fetch-logic
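If you only need to bypass the cache for a single operation rather than globally, the same fetch policy can also be passed per request. A minimal sketch using client.query, assuming the client from the snippet above (the query document and variables here are illustrative):

// Assumes `client` is the ApolloClient instance created above.
client
  .query({
    query: GET_PRODUCTS,          // any gql document you already have (illustrative name)
    variables: { limit: 10 },     // illustrative variables
    fetchPolicy: 'no-cache',      // neither reads from nor writes to the cache
  })
  .then(({ data }) => console.log(data))
  .catch(error => console.error(error));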
I would suggest not disabling the built-in caching feature of Apollo Client. Instead, you can set fetchPolicy: 'network-only' for individual queries.
Something like this:
<Query
  query={GET_DOG_PHOTO}
  variables={{ breed }}
  fetchPolicy='network-only'
>
  {({ loading, error, data, refetch, networkStatus }) => {
    ...
  }}
</Query>
When fetching data with this Query, Apollo will always make a network request instead of reading from the cache first.
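If you are on the hooks API rather than the Query render-prop component, the same per-query override can be passed to useQuery. A rough equivalent of the snippet above, assuming @apollo/react-hooks (the response shape used here is illustrative):

import { useQuery } from '@apollo/react-hooks';

function DogPhoto({ breed }) {
  // 'network-only' always hits the server, but still writes the result to the cache.
  const { loading, error, data } = useQuery(GET_DOG_PHOTO, {
    variables: { breed },
    fetchPolicy: 'network-only',
  });

  if (loading) return null;
  if (error) return `Error: ${error.message}`;
  return <img src={data.dog.displayImage} alt={breed} />;
}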
I have read through a couple of other posts as well as a few GitHub issues, and I have yet to find a solution. When I log out as one user and sign in as a different user, the new user appears for a split second and is then replaced by the previous user's data.
Here is my attempt to go nuclear on the cache:
onClick={() => {
  client
    .clearStore()
    .then(() => client.resetStore())
    .then(() => client.cache.reset())
    .then(() => client.cache.gc())
    .then(() => dispatch(logoutUser))
    .then(() => history.push('/'));
}}
I've tried getting the client object from both these locations (I am using codegen):
const { data, loading, error, client } = useUserQuery();
const client = useApolloClient();
Here is my Apollo client setup:
const apolloClient = new ApolloClient({
  uri: config.apiUrl,
  headers: {
    uri: 'http://localhost:4000/graphql',
    Authorization: `Bearer ${localStorage.getItem(config.localStorage)}`,
  },
  cache: new InMemoryCache(),
});
When I log in with a new user, I writeQuery to the cache. If I log the data coming back from the login mutation, the data is perfect, exactly what I want to write:
sendLogin({
  variables: login,
  update: (store, { data }) => {
    store.writeQuery({
      query: UserDocument,
      data: { user: data?.login?.user },
    });
  },
})
UserDocument is generated from codegen:
export const UserDocument = gql`
  query user {
    user {
      ...UserFragment
    }
  }
  ${UserFragmentFragmentDoc}
`;
Following the docs, I don't understand what my options are; I have tried writeQuery, writeFragment, and cache.modify, and nothing changes. The Authentication section seems to suggest the same thing I am trying.
It seems like all I can do is force a window.location.reload() on the user, which is ridiculous; there has to be a way.
OK, part of me feels like a dumb dumb, the other part thinks there's some misleading info in the docs.
Despite what this link says:
const client = new ApolloClient({
  cache,
  uri: 'http://localhost:4000/graphql',
  headers: {
    authorization: localStorage.getItem('token') || '',
    'client-name': 'Space Explorer [web]',
    'client-version': '1.0.0',
  },
  ...
});
These options are passed into a new HttpLink instance behind the scenes, which ApolloClient is then configured to use.
This doesn't work out of the box. Essentially, my token was being locked into the Apollo provider and never updated, so although the payload that came back successfully updated my cache, the token still contained the old userId, and the query subscriptions overwrote the new data from the new user's login. This is why refreshing worked: it forced the client to re-render with the token from local storage.
The fix was pretty simple:
// headerLink :: base headers for graphql queries
const headerLink = new HttpLink({ uri: 'http://localhost:4000/graphql' });

// setAuthorizationLink :: update headers as localStorage changes
const setAuthorizationLink = setContext((request, previousContext) => {
  return {
    headers: {
      ...previousContext.headers,
      Authorization: `Bearer ${localStorage.getItem(config.localStorage)}`,
    },
  };
});

// client :: Apollo GraphQL Client settings
const client = new ApolloClient({
  uri: config.apiUrl,
  link: setAuthorizationLink.concat(headerLink),
  cache: new InMemoryCache(),
});
And in fact, I didn't even need to clear the cache on logout.
Hope this helps others who might be struggling in a similar way.
I'm hoping to hear some input from the experts here.
I'm currently working on a NextJS project, and my GraphQL layer is running on mocked data which is set up in another repo.
Now that the backend is being built by other devs, we're slowly moving away from mocked data to the real data.
They've given me an endpoint to the backend where I'm supposed to be querying data.
So the goal is to make both the mocked GraphQL data and the real backend data work side by side, at least until we fully remove the mocked data.
So far I've seen 2 ways of doing it, but I was looking for a way where I could still use hooks like useQuery and useMutation.
Way #1
require('isomorphic-fetch');

fetch('https://graphql.api....', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    query: `
      query {
        popularBrands(storefront: "bax-shop.nl", limit: 10, page: 1) {
          totalCount
          items {
            id
            logo
            name
            image
          }
        }
      }`,
  }),
})
  .then(res => res.json())
  .then(res => console.log(res.data));
Way #2
const client = new ApolloClient({
  uri: 'https://api.spacex.land/graphql/',
  cache: new InMemoryCache()
});

async function test() {
  const { data: Data } = await client.query({
    query: gql`
      query GetLaunches {
        launchesPast(limit: 10) {
          id
          mission_name
          launch_date_local
          launch_site {
            site_name_long
          }
          links {
            article_link
            video_link
            mission_patch
          }
          rocket {
            rocket_name
          }
        }
      }
    `
  });

  console.log(Data);
}
Pseudo code:
Query the real data first.
Check if it's empty; if it is, query the mock data.
If both are empty, then it's really an empty result set.
You can write a wrapper around the hooks you use that does this for you, so you don't have to repeat yourself in every component (see the sketch below). When you're ready to remove the mocked data, you just remove the check for the second data set.
This is a common technique when switching to a new database.
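A minimal sketch of such a wrapper, assuming two ApolloClient instances (realClient pointing at the backend endpoint, mockClient pointing at the mocked server) and an emptiness check you would adapt to your own response shapes; all names here are illustrative:

import { useQuery } from '@apollo/client';
// Hypothetical module exporting the two configured clients.
import { realClient, mockClient } from './apolloClients';

// Illustrative emptiness check: no data, or only null / empty-list fields.
const isEmptyResult = data =>
  !data ||
  Object.values(data).every(
    value => value == null || (Array.isArray(value) && value.length === 0)
  );

export function useQueryWithMockFallback(query, options = {}) {
  // Always ask the real backend first.
  const real = useQuery(query, { ...options, client: realClient });

  const shouldFallBack =
    !real.loading && !real.error && isEmptyResult(real.data);

  // Only hit the mocked server when the real result came back empty.
  const mock = useQuery(query, {
    ...options,
    client: mockClient,
    skip: !shouldFallBack,
  });

  return shouldFallBack ? mock : real;
}

When the mocked data is finally retired, the wrapper collapses to a plain useQuery against realClient and the fallback branch can simply be deleted.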
I'm querying my local variable isLeftSidebarOpen, and I thought it should immediately return a result without a loading state. isLeftSidebarOpen is initialized when the ApolloClient is created.
const data = {
  isLeftSidebarOpen: false,
};

const initializeCache = cache => {
  cache.writeData({ data });
};

const cache = new InMemoryCache();

const client = new ApolloClient({
  link: new HttpLink({
    uri: 'http://localhost:4000/graphql',
    credentials: 'include',
  }),
  cache,
  resolvers,
});

initializeCache(cache);
query IsLeftSidebarOpen {
  isLeftSidebarOpen @client
}
const { data, loading } = useQuery(IS_LEFTSIDEBAR_OPEN);
console.log(data);
console.log(loading);
The result is:
undefined
true
{ isLeftSidebarOpen: false }
false
Whereas I expected it to be:
{ isLeftSidebarOpen: false }
false
What is wrong with my understanding?
The local resolver interface has to be asynchronous, as Apollo Client simply cannot know whether you'll implement your local resolvers asynchronously or not. Asynchronous works in both scenarios; synchronous does not. Even if your implementation itself is synchronous, Apollo will await that function before setting loading to false.
Another "benefit" is that your query behaves the exact same way regardless of whether it is resolved locally or remotely.
However, cache.readQuery is a synchronous operation, so you can use it instead of a local resolver to return the data immediately, provided you get a cache hit.
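A minimal sketch of that approach for the sidebar example above, reading straight from the cache instance created in the question (the try/catch is there because readQuery throws on a cache miss in Apollo Client 2.x):

// Synchronous read; no loading state involved.
let sidebarState;
try {
  sidebarState = cache.readQuery({ query: IS_LEFTSIDEBAR_OPEN });
} catch (e) {
  // Cache miss: fall back to the initial value written by initializeCache.
  sidebarState = { isLeftSidebarOpen: false };
}

console.log(sidebarState.isLeftSidebarOpen); // false, available immediately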
For some reason, I had to build a client-side-only GraphQL server. My schema is built as follows:
private buildSchema(): GraphQLSchema {
  const allTypes: string = ... // my types
  const allResolvers: IResolvers[] = ... // my resolvers

  return makeExecutableSchema({
    typeDefs: allTypes,
    resolvers: allResolvers
  });
}
The client is as follows:
this.client = new ApolloClient({
  link: new SchemaLink({ schema: this.buildSchema() }),
  cache: new InMemoryCache({
    addTypename: false
  })
});
And everything works fine, except that my queries are not deferred. For instance, if I run:
const gqlQuery: string = `
  {
    user {
      name
      slowResolver @defer {
        text
      }
    }
  }
`;

const $result = this.apollo.getClient().watchQuery({
  query: gql(gqlQuery)
});
The $result is emitted only when the whole query has been resolved (instead of user first and then slowResolver, as expected).
Any idea what I missed in the workflow?
The @defer directive was actually removed from Apollo, although there's been some work done to reimplement it. Even if it were implemented, though, deferred queries would have to be handled outside of the execution context. In other words, executing the schema can return a deferred execution result, but something else (like Apollo Server itself) has to handle how that response (both the initial payload and the subsequent patches) is actually sent to the client over whatever transport.
If you're defining a schema client-side, unfortunately, it's not going to be possible to use the @defer directive.
I just learnt how to create a GraphQL server using graphql-yoga and prisma-binding, based on the HowToGraphQL tutorial.
Question: The only way to query the database so far has been to use the Prisma Playground webpage, which is started by running the command graphql playground.
Is it possible to perform the same query from a Node.js script? I came across the Apollo client, but it seems to be meant for use from a frontend layer like React, Vue, or Angular.
This is absolutely possible. In the end, the Prisma API is just plain HTTP: you put the query into the body of a POST request.
You can therefore use fetch or prisma-binding inside your Node script as well.
Check out this tutorial to learn more: https://www.prisma.io/docs/tutorials/access-prisma-from-scripts/access-prisma-from-a-node-script-using-prisma-bindings-vbadiyyee9
This might also be helpful as it explains how to use fetch to query the API: https://github.com/nikolasburk/gse/tree/master/3-Use-Prisma-GraphQL-API-from-Code
This is what using fetch looks like:
const fetch = require('node-fetch')

const endpoint = '__YOUR_PRISMA_ENDPOINT__'

const query = `
  query {
    users {
      id
      name
      posts {
        id
        title
      }
    }
  }
`

fetch(endpoint, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: query })
})
  .then(response => response.json())
  .then(result => console.log(JSON.stringify(result)))
If you want to use a lightweight wrapper around fetch that saves you from writing boilerplate, be sure to check out graphql-request.
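For comparison, the same kind of request with graphql-request looks roughly like this (same endpoint placeholder as above):

const { request } = require('graphql-request')

const endpoint = '__YOUR_PRISMA_ENDPOINT__'

const query = `
  query {
    users {
      id
      name
    }
  }
`

// request() takes care of the POST, the JSON encoding and basic error handling.
request(endpoint, query)
  .then(data => console.log(data))
  .catch(error => console.error(error))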
And here is how you use Prisma bindings:
const { Prisma } = require('prisma-binding')

const prisma = new Prisma({
  typeDefs: 'prisma.graphql',
  endpoint: '__YOUR_PRISMA_ENDPOINT__'
})

// send `users` query
prisma.query.users({}, `{ id name }`)
  .then(users => console.log(users))
  .then(() =>
    // send `createUser` mutation
    prisma.mutation.createUser(
      {
        data: { name: `Sarah` },
      },
      `{ id name }`,
    ),
  )
  .then(newUser => {
    console.log(newUser)
    return newUser
  })
  .then(newUser =>
    // send `user` query
    prisma.query.user(
      {
        where: { id: newUser.id },
      },
      `{ name }`,
    ),
  )
  .then(user => console.log(user))
Since you are using Prisma and want to query it from a NodeJS script, I think you might have overlooked the option to generate a client from your Prisma definitions.
It takes care of handling create/read/update/delete/upsert methods depending on your datamodel.
Also, you can worry less about keeping your models and queries/mutations in sync since it is generated using the Prisma CLI (prisma generate).
I find it saves a lot of coding time compared to using raw GraphQL queries, which I reserve for more complicated queries/mutations.
Check their official documentation for more details.
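For reference, a minimal sketch of what the generated client looks like in a Node script; it assumes a Prisma 1 datamodel with a User type and the default JavaScript client output path, so adjust the require path to whatever prisma generate produced for you:

// Generated by `prisma generate`; the exact path depends on your prisma.yml.
const { prisma } = require('./generated/prisma-client')

async function main() {
  // Create a user, then read all users back.
  const newUser = await prisma.createUser({ name: 'Sarah' })
  console.log(newUser)

  const allUsers = await prisma.users()
  console.log(allUsers)
}

main().catch(error => console.error(error))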
Also, note that using the Prisma client is the recommended way of using Prisma according to the prisma-binding repository, unless you explicitly want to use schema delegation, which I can't tell you much about.
I did not know of the prisma-binding package until I read your question.
EDIT:
Here is another link that puts them both in perspective