How to cache using apollo-server - graphql

The basic Apollo example at https://www.apollographql.com/docs/apollo-server/features/data-sources.html#Implementing-your-own-cache-backend states that adding a Redis cache is as simple as:
const { RedisCache } = require('apollo-server-cache-redis');

const server = new ApolloServer({
  typeDefs,
  resolvers,
  cache: new RedisCache({
    host: 'redis-server',
    // Options are passed through to the Redis client
  }),
  dataSources: () => ({
    moviesAPI: new MoviesAPI(),
  }),
});
When I look at the non-Redis examples, the docs state that a cache is simply an object with { get, set } methods. That means I should theoretically be able to do:
cache: {
  get: function() {
    console.log("GET!");
  },
  set: function() {
    console.log("SET!");
  }
}
No matter what I try, my cache functions are never called when I'm using the GraphQL Playground that apollo-server provides natively.
I have tried with cacheControl: true and with cacheControl configured as in https://medium.com/brikl-engineering/serverless-graphql-cached-in-redis-with-apollo-server-2-0-f491695cac7f. Nothing.
Is there an example of how to implement basic caching in Apollo that does not utilize the paid Apollo Engine system?

You can look at the implementation of the following package, which caches the full response, as a reference for implementing your own cache.
import { RedisCache } from "apollo-server-cache-redis";
import responseCachePlugin from "apollo-server-plugin-response-cache";

const server = new ApolloServer({
  ...
  plugins: [responseCachePlugin()],
  cache: new RedisCache({
    connectTimeout: 5000,
    reconnectOnError: function(err) {
      Logger.error("Reconnect on error", err);
      const targetError = "READONLY";
      if (err.message.slice(0, targetError.length) === targetError) {
        // Only reconnect when the error starts with "READONLY"
        return true;
      }
    },
    retryStrategy: function(times) {
      Logger.error("Redis Retry", times);
      if (times >= 3) {
        return undefined;
      }
      return Math.min(times * 50, 2000);
    },
    socket_keepalive: false,
    host: "localhost",
    port: 6379,
    password: "test"
  }),
});
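Note that, as far as I can tell, the full-response cache only stores a result when its computed cache policy has a non-zero maxAge, so you also need cache hints (or a defaultMaxAge) for anything to be written through your cache backend. A minimal sketch of a schema-level hint, using a hypothetical Movie type:
import { gql } from "apollo-server";

// Hypothetical schema: the @cacheControl hint gives queries for Movie a
// maxAge, which is what allows the response cache plugin to store results.
const typeDefs = gql`
  type Movie @cacheControl(maxAge: 300) {
    id: ID!
    title: String
  }

  type Query {
    movie(id: ID!): Movie
  }
`;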

You should be able to use the npm package apollo-server-caching and implement its cache interface yourself. See "Implementing Your Own Cache" in the docs, which provides an example.
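For illustration, here is a minimal sketch of such a cache backend, assuming the async get/set/delete shape that apollo-server-caching expects; the Map-based storage and logging are just placeholders:
// In-memory cache implementing the get/set/delete interface from
// apollo-server-caching. Pass an instance (not the class) to the server.
class SimpleCache {
  constructor() {
    this.store = new Map();
  }

  async get(key) {
    console.log("GET!", key);
    return this.store.get(key);
  }

  async set(key, value, options) {
    console.log("SET!", key, options);
    this.store.set(key, value);
  }

  async delete(key) {
    this.store.delete(key);
  }
}

// const server = new ApolloServer({ typeDefs, resolvers, cache: new SimpleCache() });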

Related

Which `apollo-server-express` Version Works Best For These Apollo Server Packages?

I’m trying to get apollo-server-lambda or apollo-server-express to work with an executable schema for v3.36.
Here are the packages we use:
apollo-server-express#3.36 or apollo-server-lambda#3+
graphql-constraint-directive#3.0.0
@graphql-tools/schema#7.1.3
I ran multiple regression tests to make it work, but it does not seem to hit GraphQL.
Here’s my Apollo server config:
const apolloServer = new ApolloServer({
  schema: initializeSchema(),
  plugins: [
    ApolloServerPluginLandingPageGraphQLPlayground(),
    {
      didEncounterErrors(errors) {
        logger.info(`didEncounterErrors:`)
        logger.info(errors)
      },
      async requestDidStart(requestContext) {
        logger.info(`Request started! ${requestContext}`);
        return {
          async parsingDidStart(requestContext) {
            logger.info(`Parsing started! ${requestContext}`);
          },
          async validationDidStart(requestContext) {
            logger.info(`Validation started! ${requestContext}`);
          }
        }
      },
    },
  ],
  context: async ({ event, context, express }) => {
    logger.info(`Loading event... ${JSON.stringify(event)}`)
    const newContext = {
      headers: event.headers,
      functionName: context.functionName,
      event,
      context,
      expressRequest: express.req,
      user: {} ?? null,
    }
    logger.info(`context ${JSON.stringify(newContext)}`)
    return newContext
  },
  dataSources: () => {
    logger.info('!initializing datasource')
    initializeDbConnection()
    return {}
  },
  ...(['staging', 'production', 'demo'].includes(process.env.stage as string)
    ? { introspection: false, playground: false }
    : {}),
})
I was able to log the executable schema inside initializeSchema, but after upgrading it does not seem to hit the GraphQL typedefs and resolvers; it just goes straight to context. So I'm stumped as to how to make HTTP requests hit the typedefs and resolvers when using makeExecutableSchema().
I just need some advice, or a compatibility table, showing which versions work best with a given apollo-server-<server_version>.
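For reference, initializeSchema is roughly a thin wrapper around makeExecutableSchema from @graphql-tools/schema. A simplified sketch (the typeDefs and resolvers below are placeholders, not the real project code):
import { makeExecutableSchema } from '@graphql-tools/schema';
import { gql } from 'apollo-server-express';

// Placeholder typeDefs/resolvers; the real ones live elsewhere in the project.
const typeDefs = gql`
  type Query {
    hello: String
  }
`;

const resolvers = {
  Query: {
    hello: () => 'world',
  },
};

export const initializeSchema = () => makeExecutableSchema({ typeDefs, resolvers });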

Apollo server express - How to enable tracing in Apollo introspective playground?

I've searched the internet for an example that implements apollo-server-express tracing, with no success.
I'm trying to enable tracing in the Apollo introspection playground. I've managed to add the timing "manually" using a custom plugin implementation, but I'm wondering whether that is best practice. The playground is also showing the wrong time for the request, and I'm not sure why!
This is my plugin. It also uses Sentry for performance tracking. Sentry works perfectly, but we need something faster for development here.
/**
 * To read more about apollo server plugins @see https://www.apollographql.com/docs/apollo-server/v2/integrations/plugins/
 * */
import {
  ApolloServerPlugin,
  GraphQLFieldResolverParams,
  GraphQLRequestContextWillSendResponse,
  GraphQLRequestListener,
} from 'apollo-server-plugin-base';
import { Context } from '../models';

const sentryPlugin: ApolloServerPlugin<Context> = {
  async requestDidStart({
    request,
    context,
  }): Promise<GraphQLRequestListener<Context>> {
    const startTime = new Date().getTime();
    if (request.operationName)
      context.sentryTransaction.setName(request.operationName!);
    return {
      async executionDidStart() {
        return {
          willResolveField(
            reqContext: GraphQLFieldResolverParams<any, Context>
          ) {
            // hook for each new resolver
            const span = reqContext.context.sentryTransaction.startChild({
              op: 'resolver',
              description: `${reqContext.info.parentType.name}.${reqContext.info.fieldName}`,
            });
            return () => {
              // this will execute once the resolver is finished
              span.finish();
            };
          },
        };
      },
      async willSendResponse(
        requestContext: GraphQLRequestContextWillSendResponse<Context>
      ) {
        const endTime = new Date().getTime();
        requestContext.response.extensions = {
          ...requestContext.response.extensions,
          tracing: {
            version: 1,
            startTime: new Date(startTime).toISOString(),
            endTime: new Date(endTime).toISOString(),
            duration: endTime - startTime, // <<== the time here is correct, but the playground shows it wrong!!
            execution: {
              resolvers: [], // <<=== This array is per field. I'm sure it should not be filled manually, so I left it empty.
            },
          },
        };
        // hook for transaction finished
        requestContext.context.sentryTransaction.finish();
      },
    };
  },
};

export default sentryPlugin;
If this is apollo-server-express#2.x (guessing from the comment above your code), I believe you just need to pass "tracing: true":
const server = new ApolloServer({
  ...otherConfig,
  tracing: true
})
I've also seen some cases of
new ApolloServer({
  plugins: [
    require('apollo-tracing').plugin()
  ]
})

Pubsub publish multiple events Apollo Server

I am using Apollo Server and I want to publish two events in a row from the same resolver. Both subscriptions work fine, but only if I dispatch a single event. If I try to dispatch both, the second subscription resolver never gets called. If I comment out the first event dispatch, the second works normally.
const publishMessageNotification = async (message, me, action) => {
  const notification = await models.Notification.create({
    ownerId: message.userId,
    messageId: message.id,
    userId: me.id,
    action,
  });

  // if I comment out this one, second pubsub.publish starts firing
  pubsub.publish(EVENTS.NOTIFICATION.CREATED, {
    notificationCreated: { notification },
  });

  const unseenNotificationsCount = await models.Notification.find({
    ownerId: notification.ownerId,
    isSeen: false,
  }).countDocuments();
  console.log('unseenNotificationsCount', unseenNotificationsCount); // logs correct value

  // this one is not working if first one is present
  pubsub.publish(EVENTS.NOTIFICATION.NOT_SEEN_UPDATED, {
    notSeenUpdated: unseenNotificationsCount,
  });
};
I am using the default pubsub implementation. There are no errors in the console.
import { PubSub } from 'apollo-server';
import * as MESSAGE_EVENTS from './message';
import * as NOTIFICATION_EVENTS from './notification';

export const EVENTS = {
  MESSAGE: MESSAGE_EVENTS,
  NOTIFICATION: NOTIFICATION_EVENTS,
};

export default new PubSub();
Make sure that you use pubsub from the Apollo Server context, for example:
Server:
const server = new ApolloServer({
  schema: schemaWithMiddleware,
  subscriptions: {
    path: PATH,
    ...subscriptionOptions,
  },
  context: http => ({
    http,
    pubsub,
    redisCache,
  }),
  engine: {
    apiKey: ENGINE_API_KEY,
    schemaTag: process.env.NODE_ENV,
  },
  playground: process.env.NODE_ENV === 'DEV',
  tracing: process.env.NODE_ENV === 'DEV',
  debug: process.env.NODE_ENV === 'DEV',
});
and an example of using it in a resolver, via the context:
...
const Mutation = {
  async createOrder(parent, { input }, context) {
    ...
    try {
      ...
      context.pubsub.publish(CHANNEL_NAME, {
        newMessage: {
          messageCount: 0,
        },
        participants,
      });
      dialog.lastMessage = `{ "orderID": ${parentID}, "text": "created" }`;
      context.pubsub.publish(NOTIFICATION_CHANNEL_NAME, {
        notification: { messageCount: 0, dialogID: dialog.id },
        participants,
      });
      ...
      }
      return result;
    } catch (err) {
      log.error(err);
      return sendError(err);
    }
  },
};
...
It has been a while since this question was asked. I also struggled with pubsub not working, and I would like to see your ApolloClient setup code. I changed my configuration with regard to the graphql version and the client-side setup.
graphql version : 14.xx.xx -> 15.3.0
const client = new ApolloClient({
  uri: 'http://localhost:8001/graphql',
  cache: cache,
  credentials: 'include',
  link: ApolloLink.from([wsLink, httpLink])
});
I want to point out the link order, especially regarding httpLink if you use it in your case: "HttpLink is a terminating link", according to the official Apollo site.
At first, I used the link order [httpLink, wsLink], and therefore pubsub.publish didn't work.
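For comparison, a common way to avoid depending on link order is to route operations explicitly with split, sending subscription operations to the WebSocket link and everything else to the HTTP link. A rough sketch, assuming @apollo/client v3 (on older setups the equivalents live in apollo-link and apollo-utilities) and reusing your existing wsLink, httpLink, and cache:
import { ApolloClient, split } from '@apollo/client';
import { getMainDefinition } from '@apollo/client/utilities';

// Send subscriptions over the WebSocket link and queries/mutations over HTTP,
// so the terminating HttpLink never swallows subscription traffic.
const splitLink = split(
  ({ query }) => {
    const definition = getMainDefinition(query);
    return (
      definition.kind === 'OperationDefinition' &&
      definition.operation === 'subscription'
    );
  },
  wsLink,
  httpLink
);

const client = new ApolloClient({
  cache,
  credentials: 'include',
  link: splitLink
});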
I hope this answer helps some GraphQL users.

Apollo client not sending token to backend until page refresh

I've been working on an app and only noticed this issue when I started clearing the cache: my app only works correctly after a refresh. When I clear all the cache, refresh, and then run through my app, my queries return my custom error "GraphQL error: Not authenticated as user".
I believe something is wrong with the way I've set up my Apollo client. It seems the context is set as soon as the client is instantiated and then never changes, even once the token exists. That would also explain why, after logging in and then refreshing, the queries work with the token until local storage/cache is cleared. So my question is: what's wrong with what I have?
import { persistCache } from "apollo-cache-persist";
import { ISLOGGEDIN_QUERY } from "./components/gql/Queries"

const cache = new InMemoryCache();
const token = localStorage.getItem('token')

persistCache({
  cache,
  storage: localStorage
})

const client = new ApolloClient({
  uri: "http://localhost:4000/graphql",
  cache,
  resolvers: {
    Mutation: {
      changeValue: (_, args, { cache }) => {
        const { isAuth } = token ? cache.readQuery({ query: ISLOGGEDIN_QUERY }) : false;
        cache.writeData({
          data: { isAuth: !isAuth }
        })
        return null;
      }
    }
  },
  request: (operation) => {
    operation.setContext({
      headers: {
        authorization: token ? token : ''
      }
    })
  },
});

// set default values
client.cache.writeData({ data: { isAuth: token ? true : false } })

export default client;
I know I'm a bit late but I was having this problem too and found these
https://www.apollographql.com/docs/react/networking/authentication/#reset-store-on-logout
https://stackoverflow.com/a/65204972/13491532
You can just call clearStore after your login mutation:
import { useApolloClient } from "@apollo/client";

const client = useApolloClient();
client.clearStore();
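For example, here is a rough sketch of wiring that into a login flow (the LOGIN_MUTATION document and the token handling below are assumptions about your app, not taken from it):
import { useMutation, useApolloClient } from "@apollo/client";

// Hypothetical login hook: store the fresh token, then clear Apollo's store
// so subsequent queries run with the new authorization header.
function useLogin() {
  const client = useApolloClient();
  const [login] = useMutation(LOGIN_MUTATION, {
    onCompleted: async (data) => {
      localStorage.setItem("token", data.login.token);
      await client.clearStore();
    },
  });
  return login;
}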

How do you make Schema Stitching in Apollo Server faster?

Initially, I tried to use a Serverless Lambda function to handle schema stitching for my APIs, but I started to move toward an Elastic Beanstalk server to keep from needing to fetch the initial schema on each request.
Even so, a request to my main API server takes probably ten times as long to return a result from one of the child API servers as the child servers themselves take. I'm not sure what is making the request so slow, but it seems like something is blocking it from resolving quickly.
This is my code for the parent API:
import * as express from 'express';
import { introspectSchema, makeRemoteExecutableSchema, mergeSchemas } from 'graphql-tools';
import { ApolloServer } from 'apollo-server-express';
import { HttpLink } from 'apollo-link-http';
import fetch from 'node-fetch';

async function run () {
  const createRemoteSchema = async (uri: string) => {
    const link = new HttpLink({ uri, fetch });
    const schema = await introspectSchema(link);
    return makeRemoteExecutableSchema({
      schema,
      link
    });
  };

  const remoteSchema = await createRemoteSchema(process.env.REMOTE_URL);

  const schema = mergeSchemas({
    schemas: [remoteSchema]
  });

  const app = express();
  const server = new ApolloServer({
    schema,
    tracing: true,
    cacheControl: true,
    engine: false
  });

  server.applyMiddleware({ app });
  app.listen({ port: 3006 });
}

run();
Any idea why it is so slow?
UPDATE:
For anyone trying to stitch together schemas on a local environment, I got a significant speed boost by fetching 127.0.0.1 directly instead of going through localhost.
http://localhost:3002/graphql > http://127.0.0.1:3002/graphql
This turned out not to be an Apollo issue at all for me.
I'd recommend using Apollo Engine to observe what is really going on with each request.
You can add it to your Apollo Server configuration:
engine: {
  apiKey: "service:xxxxxx-xxxx:XXXXXXXXXXX"
},
Also, I've experienced better performance when defining the defaultMaxAge on the cache control:
cacheControl: {
  defaultMaxAge: 300, // 5 min
  calculateHttpHeaders: true,
  stripFormattedExtensions: false
},
The other thing that can help is to set a longer max cache age on stitched objects, where that makes sense. You can do this by adding cache hints in the schema-stitching resolvers:
mergeSchemas({
  schemas: [avatarSchema, mediaSchema, linkSchemaDefs],
  resolvers: [
    {
      AvatarFlatFields: {
        faceImage: {
          fragment: 'fragment AvatarFlatFieldsFragment on AvatarFlatFields { faceImageId }',
          resolve(parent, args, context, info) {
            info.cacheControl.setCacheHint({ maxAge: 3600 });
            return info.mergeInfo.delegateToSchema({
              schema: mediaSchema,
              operation: 'query',
              fieldName: 'getMedia',
              args: {
                mediaId: parseInt(parent.faceImageId),
              },
              context,
              info,
            });
          }
        },
      }
    },
  ],
});
Finally, using DataLoaders can make processing requests much faster by enabling batch processing and caching (read more in the DataLoader GitHub repo). The code will be something like this:
public avatarLoader = (context): DataLoader<any, any> => {
  return new DataLoader(ids =>
    this.getUsersAvatars(dataLoadersContext(context), ids)
      .then(results => new Validation().validateDataLoaderArrayResults(ids, results)),
    { batch: true, cache: true });
};
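A resolver would then request avatars through the loader instead of hitting the backend once per field, so lookups within a single query get batched. A rough sketch (the loaders object on the context and the avatarId field are assumptions, not code from this answer):
const resolvers = {
  User: {
    // Each avatar lookup goes through the DataLoader, so many users resolved
    // in one query collapse into a single batched backend call.
    avatar: (user, args, context) =>
      context.loaders.avatarLoader.load(user.avatarId),
  },
};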
