apollo-server-express - How to enable tracing in the Apollo GraphQL Playground? - apollo-server

I've searched the internet for an example that implements apollo-server-express tracing, with no success.
I'm trying to enable tracing in the Apollo GraphQL Playground. I've managed to add the timing "manually" using a custom plugin implementation, but I'm wondering whether that is best practice. The Playground also shows the wrong time for the request, and I'm not sure why.
This is my plugin. It also uses Sentry for performance tracking. Sentry works perfectly, but we need something faster for development here.
/**
 * To read more about apollo server plugins @see https://www.apollographql.com/docs/apollo-server/v2/integrations/plugins/
 * */
import {
  ApolloServerPlugin,
  GraphQLFieldResolverParams,
  GraphQLRequestContextWillSendResponse,
  GraphQLRequestListener,
} from 'apollo-server-plugin-base';
import { Context } from '../models';

const sentryPlugin: ApolloServerPlugin<Context> = {
  async requestDidStart({
    request,
    context,
  }): Promise<GraphQLRequestListener<Context>> {
    const startTime = new Date().getTime();
    if (request.operationName)
      context.sentryTransaction.setName(request.operationName!);
    return {
      async executionDidStart() {
        return {
          willResolveField(
            reqContext: GraphQLFieldResolverParams<any, Context>
          ) {
            // hook for each new resolver
            const span = reqContext.context.sentryTransaction.startChild({
              op: 'resolver',
              description: `${reqContext.info.parentType.name}.${reqContext.info.fieldName}`,
            });
            return () => {
              // this will execute once the resolver is finished
              span.finish();
            };
          },
        };
      },
      async willSendResponse(
        requestContext: GraphQLRequestContextWillSendResponse<Context>
      ) {
        const endTime = new Date().getTime();
        requestContext.response.extensions = {
          ...requestContext.response.extensions,
          tracing: {
            version: 1,
            startTime: new Date(startTime).toISOString(),
            endTime: new Date(endTime).toISOString(),
            duration: endTime - startTime, // <<== the time here is correct, but the Playground shows it wrong!!
            execution: {
              resolvers: [], // <<=== This array should hold an entry per field. I'm sure it should not be implemented manually, therefore I left it empty.
            },
          },
        };
        // hook for transaction finished
        requestContext.context.sentryTransaction.finish();
      },
    };
  },
};

export default sentryPlugin;

If this is apollo-server-express@2.x (guessing from the comment above your code), I believe you just need to pass "tracing: true":
const server = new ApolloServer({
  ...otherConfig,
  tracing: true
})
I've also seen some cases of
new ApolloServer({
  plugins: [
    require('apollo-tracing').plugin()
  ]
})
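For reference, here is a minimal, self-contained apollo-server-express 2.x sketch combining the two suggestions above; the trivial hello schema and port 4000 are only assumptions to make it runnable. With tracing enabled, the server should populate the Apollo Tracing extension (response.extensions.tracing, including per-resolver timings), which is what the Playground's TRACING panel reads, so the empty resolvers: [] array from the plugin in the question shouldn't need to be filled in by hand:
import express from 'express';
import { ApolloServer, gql } from 'apollo-server-express';

// Trivial schema, only so the sketch runs on its own.
const typeDefs = gql`
  type Query {
    hello: String
  }
`;
const resolvers = { Query: { hello: () => 'world' } };

const server = new ApolloServer({
  typeDefs,
  resolvers,
  tracing: true, // adds the Apollo Tracing extension that Playground displays
});

const app = express();
server.applyMiddleware({ app });
app.listen({ port: 4000 }, () =>
  console.log('Playground at http://localhost:4000/graphql')
);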

Related

apollo-server-lambda: Unable to determine event source based on event

I am using apollo-server-lambda for my app. I require a custom authorization HTTP header: if authorization: LETMEIN is sent, the request is accepted and all data is returned; if the authorization header is missing or wrong, an error is thrown. For local development I use serverless-offline, and in the local environment it works as expected. But when I deploy my code to AWS, the API endpoint does not work, and when I test my function in the AWS console I get the same error: "Unable to determine event source based on event".
I don't understand what I am doing wrong.
Here is my code:
/* eslint-disable @typescript-eslint/no-var-requires */
import { ApolloServerPluginLandingPageGraphQLPlayground } from 'apollo-server-core';
import { ApolloServer, AuthenticationError } from 'apollo-server-lambda';
import schema from '../graphql/schema';
import resolvers from '../resolvers';
import runWarm from '../utils/run-warm';

export const authToken = (token: string) => {
  if (token === 'LETMEIN') {
    return;
  } else {
    throw new AuthenticationError('No authorization header supplied');
  }
};

const server = new ApolloServer({
  typeDefs: schema,
  resolvers,
  debug: false,
  plugins: [ApolloServerPluginLandingPageGraphQLPlayground()],
  context: ({ event }) => {
    //console.log(context);
    if (event.headers) {
      authToken(event.headers.authorization);
    }
  },
});

export default runWarm(
  server.createHandler({
    expressGetMiddlewareOptions: {
      cors: {
        origin: '*',
        credentials: true,
        allowedHeaders: ['Content-Type', 'Origin', 'Accept'],
        optionsSuccessStatus: 200,
        maxAge: 200,
      },
    },
  })
);
This is my Lambda function
/**
 * Running warm functions help prevent cold starts
 */
const runWarm =
  (lambdaFunc: AWSLambda.Handler): AWSLambda.Handler =>
  (event, context, callback) => {
    // Detect the keep-alive ping from CloudWatch and exit early. This keeps our
    // lambda function running hot.
    if (event.source === 'serverless-plugin-warmup') {
      return callback(null, 'pinged');
    }
    return lambdaFunc(event, context, callback);
  };

export default runWarm;
This is not a direct answer, but might help, and could be useful if anyone else (like me) found this thread because of the error "Unable to determine event source based on event" when using apollo-server-lambda.
That error is coming from @vendia/serverless-express, which is being used by apollo-server-lambda.
Within serverless-express, in src/event-sources/utils.js, there is a function called getEventSourceNameBasedOnEvent(), which is throwing the error. It needs to find something in the event object, and after a bit of experimentation I found that writing the lambda function like this solved the issue for me:
const getHandler = (event, context) => {
  const server = new ApolloServer({
    typeDefs,
    resolvers,
    debug: true,
  });
  const graphqlHandler = server.createHandler();
  if (!event.requestContext) {
    event.requestContext = context;
  }
  return graphqlHandler(event, context);
};

exports.handler = getHandler;
Note that the context object is added to the event object under the key "requestContext"; that's the fix.
(Also note that I have defined typeDefs and resolvers elsewhere in the code)
I can't guarantee this is the ideal thing to do, but it did work for me.

Pubsub publish multiple events Apollo Server

I am using Apollo Server and I want to publish 2 events in a row from the same resolver. Both subscriptions work fine, but only if I dispatch a single event. If I try to dispatch both, the second subscription resolver never gets called. If I comment out the first event dispatch, the second works normally.
const publishMessageNotification = async (message, me, action) => {
  const notification = await models.Notification.create({
    ownerId: message.userId,
    messageId: message.id,
    userId: me.id,
    action,
  });

  // if I comment out this one, second pubsub.publish starts firing
  pubsub.publish(EVENTS.NOTIFICATION.CREATED, {
    notificationCreated: { notification },
  });

  const unseenNotificationsCount = await models.Notification.find({
    ownerId: notification.ownerId,
    isSeen: false,
  }).countDocuments();
  console.log('unseenNotificationsCount', unseenNotificationsCount); // logs correct value

  // this one is not working if first one is present
  pubsub.publish(EVENTS.NOTIFICATION.NOT_SEEN_UPDATED, {
    notSeenUpdated: unseenNotificationsCount,
  });
};
I am using the default PubSub implementation. There are no errors in the console.
import { PubSub } from 'apollo-server';
import * as MESSAGE_EVENTS from './message';
import * as NOTIFICATION_EVENTS from './notification';

export const EVENTS = {
  MESSAGE: MESSAGE_EVENTS,
  NOTIFICATION: NOTIFICATION_EVENTS,
};

export default new PubSub();
Make sure that you use the pubsub instance from the Apollo Server context, for example:
Server:
const server = new ApolloServer({
  schema: schemaWithMiddleware,
  subscriptions: {
    path: PATH,
    ...subscriptionOptions,
  },
  context: http => ({
    http,
    pubsub,
    redisCache,
  }),
  engine: {
    apiKey: ENGINE_API_KEY,
    schemaTag: process.env.NODE_ENV,
  },
  playground: process.env.NODE_ENV === 'DEV',
  tracing: process.env.NODE_ENV === 'DEV',
  debug: process.env.NODE_ENV === 'DEV',
});
and an example of using it in a resolver, via the context:
...
const Mutation = {
  async createOrder(parent, { input }, context) {
    ...
    try {
      ...
      context.pubsub.publish(CHANNEL_NAME, {
        newMessage: {
          messageCount: 0,
        },
        participants,
      });
      dialog.lastMessage = `{ "orderID": ${parentID}, "text": "created" }`;
      context.pubsub.publish(NOTIFICATION_CHANNEL_NAME, {
        notification: { messageCount: 0, dialogID: dialog.id },
        participants,
      });
      ...
      }
      return result;
    } catch (err) {
      log.error(err);
      return sendError(err);
    }
  },
};
...
It has been a while since this was asked.
I have also struggled with pubsub not working, and I would like to see your ApolloClient setup code.
I changed my configuration with regard to the graphql version and the client-side setup.
graphql version: 14.xx.xx -> 15.3.0
const client = new ApolloClient({
  uri: 'http://localhost:8001/graphql',
  cache: cache,
  credentials: 'include',
  link: ApolloLink.from([wsLink, httpLink])
});
I want to clarify the link order, especially regarding httpLink if you use it in your case: "HttpLink is a terminating Link", according to the official Apollo site.
At first, I used the link order [httpLink, wsLink], and therefore pubsub.publish didn't work.
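As a point of comparison, the pattern the Apollo docs usually show is to route operations with split rather than relying on link order. A minimal sketch, assuming Apollo Client 3 package names (with apollo-client 2.x the same helpers live in apollo-link, apollo-link-ws and apollo-utilities) and assuming the ws:// endpoint mirrors the HTTP endpoint above:
import { ApolloClient, InMemoryCache, HttpLink, split } from '@apollo/client';
import { WebSocketLink } from '@apollo/client/link/ws';
import { getMainDefinition } from '@apollo/client/utilities';

const httpLink = new HttpLink({
  uri: 'http://localhost:8001/graphql',
  credentials: 'include',
});

const wsLink = new WebSocketLink({
  uri: 'ws://localhost:8001/graphql', // assumed subscriptions path; match your server config
  options: { reconnect: true },
});

// Subscriptions go to the WebSocket link; everything else goes to the
// terminating HttpLink, so the HTTP link never swallows subscription operations.
const link = split(
  ({ query }) => {
    const definition = getMainDefinition(query);
    return (
      definition.kind === 'OperationDefinition' &&
      definition.operation === 'subscription'
    );
  },
  wsLink,
  httpLink
);

const client = new ApolloClient({
  cache: new InMemoryCache(),
  link,
});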
I hope this answer helps some GraphQL users.

Apollo useQuery() - "refetch" is ignored if the response is the same

I am trying to use Apollo Client to pull my user's info and I am stuck on this problem:
I have a Container component responsible for pulling the user's data (not authentication) once it is rendered. The user may or may not be logged in; the query returns either viewer = null or viewer = {...usersProps}.
The Container makes the request const { data, refetch } = useQuery<Viewer>(VIEWER);, successfully receives the response and saves it in the data property, from which I read .viewer and set it as the current user.
The user can then log out; once they do, I clear the Container's user property with setUser(undefined) (not shown in the code below, not important).
The problem occurs when I try to re-login: calling refetch triggers the GraphQL HTTP request, but since it returns the same data that was returned during the previous login, useQuery() ignores it and does not update data. Technically there is nothing to update, since the data is the same. So my setUser(viewer); call does not get executed a second time and the user is stuck on the login page.
const { data, refetch } = useQuery<Viewer>(VIEWER);
const viewer = data && data.viewer;

useEffect(() => {
  if (viewer) {
    setUser(viewer);
  }
}, [ viewer ]);
Ignoring a query response that is identical to the previous one almost makes sense, so I tried a different approach, with callbacks:
const { refetch } = useQuery<Viewer>(VIEWER, {
  onCompleted: data => {
    if (data.viewer) {
      setUser(data.viewer);
    }
  }
});
Here I would totally expect Apollo to call the onCompleted callback, whether the data is the same or not... but it does not. So I am stuck with this: how do I make Apollo react to my query's refetch so I can re-populate the user in my Container's state?
This is a scenario where Apollo's local cache comes in handy.
Client
import { resolvers, typeDefs } from './resolvers';

let cache = new InMemoryCache();

const client = new ApolloClient({
  cache,
  link: new HttpLink({
    uri: 'http://localhost:4000/graphql',
    headers: {
      authorization: localStorage.getItem('token'),
    },
  }),
  typeDefs,
  resolvers,
});

cache.writeData({
  data: {
    isLoggedIn: !!localStorage.getItem('token'),
    cartItems: [],
  },
});
LoginPage
const IS_LOGGED_IN = gql`
  query IsUserLoggedIn {
    isLoggedIn @client
  }
`;

function IsLoggedIn() {
  const { data } = useQuery(IS_LOGGED_IN);
  return data.isLoggedIn ? <Pages /> : <Login />;
}
onLogin
function Login() {
  const { data, refetch } = useQuery(LOGIN_QUERY);
  let viewer = data && data.viewer;
  if (viewer) {
    localStorage.setItem('token', viewer.token);
  }
  // rest of the stuff
}
onLogout
onLogout={() => {
  client.writeData({ data: { isLoggedIn: false } });
  localStorage.clear();
}}
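Presumably the login path also needs to flip the same local flag once the token is stored, so the IsLoggedIn component above re-renders. A minimal sketch of that piece, assuming apollo-client 2.6-era hooks from @apollo/react-hooks; the LOGIN_QUERY shape shown here is hypothetical, only mirroring the viewer.token usage above:
import gql from 'graphql-tag';
import { useApolloClient, useQuery } from '@apollo/react-hooks';

// Hypothetical query shape, for illustration only.
const LOGIN_QUERY = gql`
  query Login {
    viewer {
      token
    }
  }
`;

function Login() {
  const client = useApolloClient();
  const { data } = useQuery(LOGIN_QUERY);
  const viewer = data && data.viewer;

  if (viewer) {
    localStorage.setItem('token', viewer.token);
    // Update the client-only field read by the IS_LOGGED_IN query,
    // so <IsLoggedIn /> switches from <Login /> to <Pages />.
    client.writeData({ data: { isLoggedIn: true } });
  }

  // rest of the stuff
}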
For more information regarding management of local state, see the Apollo documentation on local state management.
Hope this helps!

How do you make Schema Stitching in Apollo Server faster?

Initially, I tried to use a serverless Lambda function to handle schema stitching for my APIs, but I moved toward an Elastic Beanstalk server to avoid fetching the initial schema on each request.
Even so, a request to my main API server takes roughly ten times as long to return a result from one of the child API servers as the child servers take on their own. I'm not sure what is making the request so long, but it seems like something is blocking it from resolving quickly.
This is my code for the parent API:
import * as express from 'express';
import { introspectSchema, makeRemoteExecutableSchema, mergeSchemas } from 'graphql-tools';
import { ApolloServer } from 'apollo-server-express';
import { HttpLink } from 'apollo-link-http';
import fetch from 'node-fetch';

async function run () {
  const createRemoteSchema = async (uri: string) => {
    const link = new HttpLink({ uri, fetch });
    const schema = await introspectSchema(link);
    return makeRemoteExecutableSchema({
      schema,
      link
    });
  };

  const remoteSchema = await createRemoteSchema(process.env.REMOTE_URL);

  const schema = mergeSchemas({
    schemas: [remoteSchema]
  });

  const app = express();
  const server = new ApolloServer({
    schema,
    tracing: true,
    cacheControl: true,
    engine: false
  });
  server.applyMiddleware({ app });
  app.listen({ port: 3006 });
}

run();
Any idea why it is so slow?
UPDATE:
For anyone trying to stitch together schemas in a local environment, I got a significant speed boost by fetching 127.0.0.1 directly instead of going through localhost:
http://localhost:3002/graphql -> http://127.0.0.1:3002/graphql
This turned out not to be an Apollo issue at all for me.
I'd recommend using Apollo Engine to observe what is really going on with each request.
You can add it to your Apollo Server configuration:
engine: {
  apiKey: "service:xxxxxx-xxxx:XXXXXXXXXXX"
},
Also, I've experienced better performance when defining the defaultMaxAge on the cache control:
cacheControl: {
  defaultMaxAge: 300, // 5 min
  calculateHttpHeaders: true,
  stripFormattedExtensions: false
},
The other thing that can help is to set a longer max cache age on stitched objects where it makes sense; you can do this by adding cache hints in the schema stitching resolvers:
mergeSchemas({
  schemas: [avatarSchema, mediaSchema, linkSchemaDefs],
  resolvers: [
    {
      AvatarFlatFields: {
        faceImage: {
          fragment: 'fragment AvatarFlatFieldsFragment on AvatarFlatFields { faceImageId }',
          resolve(parent, args, context, info) {
            info.cacheControl.setCacheHint({ maxAge: 3600 });
            return info.mergeInfo.delegateToSchema({
              schema: mediaSchema,
              operation: 'query',
              fieldName: 'getMedia',
              args: {
                mediaId: parseInt(parent.faceImageId),
              },
              context,
              info,
            });
          }
        },
      }
    },
  ],
});
Finally, using DataLoaders can make requests much faster by enabling batch processing and DataLoader caching (read more at the dataloader GitHub repo). The code will be something like this:
public avatarLoader = (context): DataLoader<any, any> => {
  return new DataLoader(ids => this.getUsersAvatars(dataLoadersContext(context), ids)
    .then(results => new Validation().validateDataLoaderArrayResults(ids, results))
  , { batch: true, cache: true });
};
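To make the batching idea concrete, here is a small self-contained sketch; the Avatar type and the fetchAvatarsByIds batch query are hypothetical placeholders, not part of the code above:
import DataLoader from 'dataloader';

// Hypothetical shapes, for illustration only.
interface Avatar {
  id: number;
  url: string;
}
declare function fetchAvatarsByIds(ids: readonly number[]): Promise<Avatar[]>;

// The batch function receives every id requested during one tick of the event
// loop and must return results in the same order as the ids.
const avatarLoader = new DataLoader<number, Avatar>(
  async (ids) => {
    const rows = await fetchAvatarsByIds(ids);
    const byId = new Map(rows.map((row) => [row.id, row] as const));
    return ids.map((id) => byId.get(id) ?? new Error(`Avatar ${id} not found`));
  },
  { batch: true, cache: true }
);

// In a resolver, individual load() calls made while resolving one request are
// coalesced into a single batch query:
// const avatar = await avatarLoader.load(parent.faceImageId);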

How to cache using apollo-server

The basic Apollo example at https://www.apollographql.com/docs/apollo-server/features/data-sources.html#Implementing-your-own-cache-backend states that adding a Redis cache is as simple as:
const { RedisCache } = require('apollo-server-cache-redis');

const server = new ApolloServer({
  typeDefs,
  resolvers,
  cache: new RedisCache({
    host: 'redis-server',
    // Options are passed through to the Redis client
  }),
  dataSources: () => ({
    moviesAPI: new MoviesAPI(),
  }),
});
When I look at the non-Redis examples, they state that a cache is a simple { get, set } pair. This means I should theoretically be able to do:
cache: {
  get: function() {
    console.log("GET!");
  },
  set: function() {
    console.log("SET!");
  }
}
No matter what I try, my cache functions are never called when I'm using the GraphQL Playground that apollo-server provides natively.
I have tried with cacheControl: true and with cacheControl set as in https://medium.com/brikl-engineering/serverless-graphql-cached-in-redis-with-apollo-server-2-0-f491695cac7f . Nothing.
Is there an example of how to implement basic caching in Apollo that does not use the paid Apollo Engine service?
You can look at the implementation of apollo-server-plugin-response-cache, which caches the full response, to implement your own cache:
import { RedisCache } from "apollo-server-cache-redis";
import responseCachePlugin from "apollo-server-plugin-response-cache";

const server = new ApolloServer({
  ...
  plugins: [responseCachePlugin()],
  cache: new RedisCache({
    connectTimeout: 5000,
    reconnectOnError: function(err) {
      Logger.error("Reconnect on error", err);
      const targetError = "READONLY";
      if (err.message.slice(0, targetError.length) === targetError) {
        // Only reconnect when the error starts with "READONLY"
        return true;
      }
    },
    retryStrategy: function(times) {
      Logger.error("Redis Retry", times);
      if (times >= 3) {
        return undefined;
      }
      return Math.min(times * 50, 2000);
    },
    socket_keepalive: false,
    host: "localhost",
    port: 6379,
    password: "test"
  }),
});
You should be able to use the NPM package apollo-server-caching by implementing your own cache interface. See "Implementing your own cache backend" in the Apollo docs, which provides an example.
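For illustration, here is a minimal in-memory sketch of that interface, assuming the get/set/delete shape that apollo-server-caching's KeyValueCache declares (ttl given in seconds); a real backend such as Redis or Memcached would sit behind these three methods:
import { KeyValueCache } from 'apollo-server-caching';

export class SimpleCache implements KeyValueCache<string> {
  private store = new Map<string, { value: string; expiresAt?: number }>();

  async get(key: string): Promise<string | undefined> {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt !== undefined && entry.expiresAt < Date.now()) {
      this.store.delete(key); // lazily evict expired entries
      return undefined;
    }
    return entry.value;
  }

  async set(key: string, value: string, options?: { ttl?: number | null }): Promise<void> {
    this.store.set(key, {
      value,
      expiresAt: options && options.ttl ? Date.now() + options.ttl * 1000 : undefined,
    });
  }

  async delete(key: string): Promise<boolean> {
    return this.store.delete(key);
  }
}

// Usage sketch: new ApolloServer({ typeDefs, resolvers, cache: new SimpleCache() })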
