Is it possible to define the HTTP headers for the GraphQL Playground that comes with Apollo Server? - apollo-server

I want to define some HTTP headers for the GraphQL Playground, to be set by default and/or always. Essentially, I want to add:
"apollographql-client-name": "playground"
"apollographql-client-version": "yada-yada"
to be able to distinguish requests from the playground from any other requests on Apollo Studio. What's the best way?
By GraphQL Playground I refer to the one run by Apollo, the one documented here: https://www.apollographql.com/docs/apollo-server/testing/graphql-playground/
My current ApolloServer config looks something like this:
let apolloServerExpressConfig: ApolloServerExpressConfig = {
  schema: schema,
  playground: {
    settings: {
      "request.credentials": "include",
    },
  },
}
If I add tabs to it in an attempt to define the headers, like this:
let apolloServerExpressConfig: ApolloServerExpressConfig = {
  schema: schema,
  playground: {
    settings: {
      "request.credentials": "include",
    },
    tabs: [{
      headers: {
        "apollographql-client-name": "playground",
        "apollographql-client-version": "yada-yada",
      },
    }],
  },
}
the GraphQL Playground no longer restores all tabs with their queries when reloading the page, which is a feature I find very useful. I think there is some automatic tab management that gets disabled as soon as you define tabs. I'm happy to have default headers defined for new tab creation, and it's OK if those headers are exposed to the client.
My app already defines headers, so I can differentiate between the app and anything else that queries it, but I want to differentiate between my app, the playground, and anything else (the latter group should be empty).

Update:
https://github.com/apollographql/apollo-server/issues/1982#issuecomment-511765175
use the GraphQL Playground Express middleware directly [...] This would allow you to leverage the Express middleware req object, and set headers accordingly.
Here is a working example:
const app = require('express')()
const { ApolloServer, gql } = require('apollo-server-express')
// use this directly
const expressPlayground = require('graphql-playground-middleware-express').default
// just some boilerplate to make it runnable
const typeDefs = gql`type Book { title: String author: String } type Query { books: [Book] }`
const books = [{ title: 'Harry Potter and the Chamber of Secrets', author: 'J.K. Rowling' }, { title: 'Jurassic Park', author: 'Michael Crichton' }]
const resolvers = { Query: { books: () => books } }
const server = new ApolloServer({ typeDefs, resolvers });
//
// the key part {
//
const headers = JSON.stringify({
  "apollographql-client-name": "playground",
  "apollographql-client-version": "yada-yada",
})
app.get('/graphql', expressPlayground({
  endpoint: `/graphql?headers=${encodeURIComponent(headers)}`,
}))
server.applyMiddleware({ app })
//
// }
//
// just some boilerplate to make it runnable
app.listen({ port: 4000 }, () => console.log(`🚀 Server ready at http://localhost:4000${server.graphqlPath}`))
After a page reload, all tabs and their content are restored.
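To confirm the headers actually arrive with the playground's requests, the server construction in the example above could be extended to read them. A minimal sketch, assuming apollo-server-express as in the example; the logging is mine, purely for illustration:
// Sketch: read the client-identification headers from the incoming request
// (Express lower-cases header names) and log which client sent the query.
const server = new ApolloServer({
  typeDefs,
  resolvers,
  context: ({ req }) => {
    const clientName = req.headers['apollographql-client-name'];
    const clientVersion = req.headers['apollographql-client-version'];
    if (clientName) {
      console.log(`request from ${clientName} (${clientVersion})`);
    }
    return {};
  },
});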
Answer to the original question:
It's not totally clear what you mean by the Apollo Server GraphQL Playground, or what your use case is.
There is a desktop app and a web app; you can also include GraphQL Playground as a module in your frontend, or as middleware in your backend.
For the simplest case: switch to the "HTTP HEADERS" tab, add headers as JSON:
{
  "apollographql-client-name": "playground",
  "apollographql-client-version": "yada-yada"
}
For the case of a frontend Playground, you can pass tabs with a headers property to <Playground/>:
<Playground
  ...
  tabs={[{
    name: 'Tab 1',
    headers: {
      "apollographql-client-name": "playground",
      "apollographql-client-version": "yada-yada",
    },
    ...
  }]}
/>
For the backend, you can use headers as well:
new ApolloServer({
  ...
  playground: {
    ...
    tabs: [{
      ...
      headers: ...
    }],
  },
})
You can also distinguish requests from the playground from requests from the actual apps by going the opposite way: add extra headers to your actual apps.
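For instance, with Apollo Client 3 the app can identify itself through the name and version constructor options, which are sent as the apollographql-client-name / apollographql-client-version headers on every request. A sketch, assuming Apollo Client 3 in the app; the endpoint URL, name, and version below are placeholders:
// Sketch, assuming Apollo Client 3 in the app: `name` and `version` are sent
// as apollographql-client-name / apollographql-client-version headers.
import { ApolloClient, HttpLink, InMemoryCache } from '@apollo/client';

const client = new ApolloClient({
  link: new HttpLink({ uri: 'http://localhost:4000/graphql' }), // placeholder endpoint
  cache: new InMemoryCache(),
  name: 'my-actual-app', // hypothetical client name
  version: '1.0.0',      // hypothetical client version
});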

Related

Set Cookies in apollo server azure functions

I am using Apollo Server in an Azure Function and I want to set cookies from it, but it's not working; it doesn't throw any kind of error.
How do I set cookies in Apollo Server Azure Functions? I tried it this way but it's not working.
Here is my code:
import { ApolloServer, gql } from "apollo-server-azure-functions";
import { ApolloServerPluginLandingPageLocalDefault } from "apollo-server-core";
import { serialize, parse } from "cookie";
// Construct a schema, using GraphQL schema language
const typeDefs = gql`
  type Query {
    user: User
  }
  type User {
    id: ID!
    name: String!
    email: String!
  }
`;
// Provide resolver functions for your schema fields
const resolvers = {
  Query: {
    user: (parents, args, { request, context }, info) => {
      const cookie = serialize("token", "123", {
        expires: new Date(Date.now() + 900000),
        httpOnly: true,
      });
      context.res.setHeader("Set-Cookie", cookie);
      return {
        id: "1",
        name: "John Doe",
        email: "john@example.com",
      };
    },
  },
};
// @ts-ignore
const server = new ApolloServer({
  typeDefs,
  resolvers,
  debug: true,
  plugins: [ApolloServerPluginLandingPageLocalDefault({ embed: true })],
  context: (context) => {
    return context;
  },
});
export default server.createHandler({
  cors: {
    origin: ["*", "https://studio.apollographql.com"],
    methods: ["GET", "POST", "OPTIONS"],
    allowedHeaders: [
      "access-control-allow-header",
      "access-control-allow-credentials",
      "access-control-allow-origin",
      "content-type",
    ],
  },
});
There is no documentation available for Apollo Server Azure Functions.
Official sample repository for Apollo Server Azure Functions: https://github.com/Azure-Samples/js-e2e-azure-function-graphql-hello.git
Sharing the discussion with the internal team and posting the update here.
After looking at the issue, the infrastructure, and the announcement from Apollo for this package, I believe Apollo is the correct organization to post this issue to, because Apollo is providing the server in this sample; it just happens to be running on an Azure Function. Additionally, when I look for a solution on the Apollo side, it looks like the ApolloServer dependency needs to be swapped out for an Apollo Server for Express dependency in order to successfully set the cookie.
None of this is great news. I apologize for this.
I believe the sample in this repo works without cookies and doesn't currently include cookies in the code. Moving forward with the Apollo dependency, we will re-evaluate its use based on this feedback.
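For what it's worth, here is a minimal sketch of what swapping to apollo-server-express could look like for the cookie use case. The CORS and context wiring below are my assumptions rather than code from the sample repo; typeDefs and resolvers are the ones from the question:
// Sketch, assuming apollo-server-express (v3): the Express `res` is exposed on
// the context, so a resolver can write a Set-Cookie header on it.
import express from 'express';
import { ApolloServer } from 'apollo-server-express';
import { serialize } from 'cookie';

const server = new ApolloServer({
  typeDefs,
  resolvers,
  context: ({ req, res }) => ({ req, res }),
});

async function start() {
  await server.start();
  const app = express();
  server.applyMiddleware({
    app,
    cors: { origin: 'https://studio.apollographql.com', credentials: true },
  });
  app.listen(4000);
}
start();

// In a resolver, the cookie would then be set roughly like this:
//   context.res.setHeader('Set-Cookie', serialize('token', '123', { httpOnly: true }));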

Pubsub publish multiple events Apollo Server

I am using Apollo Server and I want to publish 2 events in a row from the same resolver. Both subscriptions work fine, but only if I dispatch a single event. If I try to dispatch both, the second subscription resolver never gets called. If I comment out the first event dispatch, the second works normally.
const publishMessageNotification = async (message, me, action) => {
  const notification = await models.Notification.create({
    ownerId: message.userId,
    messageId: message.id,
    userId: me.id,
    action,
  });
  // if I comment out this one, second pubsub.publish starts firing
  pubsub.publish(EVENTS.NOTIFICATION.CREATED, {
    notificationCreated: { notification },
  });
  const unseenNotificationsCount = await models.Notification.find({
    ownerId: notification.ownerId,
    isSeen: false,
  }).countDocuments();
  console.log('unseenNotificationsCount', unseenNotificationsCount); // logs correct value
  // this one is not working if first one is present
  pubsub.publish(EVENTS.NOTIFICATION.NOT_SEEN_UPDATED, {
    notSeenUpdated: unseenNotificationsCount,
  });
};
I am using the default PubSub implementation. There are no errors in the console.
import { PubSub } from 'apollo-server';
import * as MESSAGE_EVENTS from './message';
import * as NOTIFICATION_EVENTS from './notification';

export const EVENTS = {
  MESSAGE: MESSAGE_EVENTS,
  NOTIFICATION: NOTIFICATION_EVENTS,
};

export default new PubSub();
Make sure that you use the pubsub from the Apollo Server context, for example:
Server:
const server = new ApolloServer({
  schema: schemaWithMiddleware,
  subscriptions: {
    path: PATH,
    ...subscriptionOptions,
  },
  context: http => ({
    http,
    pubsub,
    redisCache,
  }),
  engine: {
    apiKey: ENGINE_API_KEY,
    schemaTag: process.env.NODE_ENV,
  },
  playground: process.env.NODE_ENV === 'DEV',
  tracing: process.env.NODE_ENV === 'DEV',
  debug: process.env.NODE_ENV === 'DEV',
});
and an example of using it in a resolver, via the context:
...
const Mutation = {
  async createOrder(parent, { input }, context) {
    ...
    try {
      ...
      context.pubsub.publish(CHANNEL_NAME, {
        newMessage: {
          messageCount: 0,
        },
        participants,
      });
      dialog.lastMessage = `{ "orderID": ${parentID}, "text": "created" }`;
      context.pubsub.publish(NOTIFICATION_CHANNEL_NAME, {
        notification: { messageCount: 0, dialogID: dialog.id },
        participants,
      });
      ...
      }
      return result;
    } catch (err) {
      log.error(err);
      return sendError(err);
    }
  },
};
...
It has been a while since this was asked.
I also struggled with a pubsub-not-working problem, and I would like to see your ApolloClient setup code.
I changed my configuration with regard to the graphql version and the client-side setup:
graphql version: 14.xx.xx -> 15.3.0
const client = new ApolloClient({
  uri: 'http://localhost:8001/graphql',
  cache: cache,
  credentials: 'include',
  link: ApolloLink.from([wsLink, httpLink])
});
Pay attention to the link order, especially regarding httpLink if you use it in your case: "HttpLink is a terminating link," according to the official Apollo site.
At first, I used the link order [httpLink, wsLink]; as a result, pubsub.publish didn't work.
I hope this answer will help some GraphQL users.
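For comparison, the setup I usually see routes operations explicitly with split instead of relying on the array order, so subscriptions go to wsLink and everything else to the terminating httpLink. A sketch, assuming the Apollo Client 2.x packages implied by the snippet above; wsLink, httpLink, and cache are the ones already defined there:
// Sketch: send subscription operations over the WebSocket link and all other
// operations over the terminating HTTP link, instead of chaining them in order.
import ApolloClient from 'apollo-client';
import { split } from 'apollo-link';
import { getMainDefinition } from 'apollo-utilities';

const link = split(
  ({ query }) => {
    const definition = getMainDefinition(query);
    return (
      definition.kind === 'OperationDefinition' &&
      definition.operation === 'subscription'
    );
  },
  wsLink,   // used when the predicate above returns true
  httpLink, // terminating link for queries and mutations
);

const client = new ApolloClient({ cache, link });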

nuxt.js + Apollo Client: How to disable cache?

I managed to get a nuxt.js + nest.js setup with TypeScript and Apollo GraphQL running.
To test whether GraphQL works, I used the files from this example and added a button to the nuxt.js page (on click -> load all cats via GraphQL).
Everything works, reading and writing.
The problem is that after doing a mutation via the playground, or after restarting the nest.js server with other GraphQL data, the nuxt.js page displays the old data (on click). I have to reload the whole page in the browser to get the Apollo Client to fetch the new data.
I've tried adding a 'no-cache' flag and a 'network-only' flag to nuxt.config.ts, without success:
apollo: {
  defaultOptions: {
    $query: {
      loadingKey: 'loading',
      fetchPolicy: 'no-cache'
    }
  },
  clientConfigs: {
    default: {
      httpEndpoint: 'http://localhost:4000/graphql',
      wsEndpoint: 'ws://localhost:4000/graphql'
    }
  }
}
The function to get the cats:
private getCats() {
  this.$apollo.query({ query: GET_CATS_QUERY }).then((res: any) => {
    alert(JSON.stringify(res.data, null, 0));
  });
}
How can I disable the cache, or is there another solution?
I had a similar problem recently and managed to fix it by creating a Nuxt plugin which overrides the default client's options:
// plugins/apollo-overrides.ts
import { Plugin } from '@nuxt/types';

const apolloOverrides: Plugin = ({ app }) => {
  // disable caching on all the queries
  app.apolloProvider.defaultClient.defaultOptions = {
    query: {
      fetchPolicy: 'no-cache',
    },
  };
};

export default apolloOverrides;
Don't forget to register it in Nuxt's config:
// nuxt.config.js
export default {
  ...
  plugins: [
    '~/plugins/apollo-overrides',
  ],
  ...
};
I had a problem like this; you can fix it easily by removing the $ before query:
defaultOptions: {
  query: {
    fetchPolicy: 'no-cache',
    errorPolicy: 'all'
  }
},
Then restart your dev server.
If this solution is not working, add a fetch policy to each query:
.query({
  query: sample,
  variables: {},
  errorPolicy: "all",
  fetchPolicy: "no-cache"
})

How do you make Schema Stitching in Apollo Server faster?

Initially, I tried to use a Serverless Lambda function to handle schema stitching for my APIs, but I started to move toward an Elastic Beanstalk server to keep from needing to fetch the initial schema on each request.
Even so, a request to my main API server takes probably ten times as long to return a result from one of the child API servers as the child servers themselves take. I'm not sure what is making the request take so long, but it seems like something is blocking it from resolving quickly.
This is my code for the parent API:
import * as express from 'express';
import { introspectSchema, makeRemoteExecutableSchema, mergeSchemas } from 'graphql-tools';
import { ApolloServer } from 'apollo-server-express';
import { HttpLink } from 'apollo-link-http';
import fetch from 'node-fetch';

async function run () {
  const createRemoteSchema = async (uri: string) => {
    const link = new HttpLink({ uri, fetch });
    const schema = await introspectSchema(link);
    return makeRemoteExecutableSchema({
      schema,
      link
    });
  };
  const remoteSchema = await createRemoteSchema(process.env.REMOTE_URL);
  const schema = mergeSchemas({
    schemas: [remoteSchema]
  });
  const app = express();
  const server = new ApolloServer({
    schema,
    tracing: true,
    cacheControl: true,
    engine: false
  });
  server.applyMiddleware({ app });
  app.listen({ port: 3006 });
}

run();
Any idea why it is so slow?
UPDATE:
For anyone trying to stitch schemas together in a local environment: I got a significant speed boost by fetching 127.0.0.1 directly instead of going through localhost.
http://localhost:3002/graphql -> http://127.0.0.1:3002/graphql
This turned out not to be an Apollo issue at all for me.
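In terms of the code above, that just means pointing REMOTE_URL (or the argument to createRemoteSchema) at the numeric loopback address, for example:
// e.g. (the port is the one from the URLs above)
const remoteSchema = await createRemoteSchema('http://127.0.0.1:3002/graphql');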
I'd recommend using Apollo Engine to observe what is really going on with each request.
You can add it to your Apollo Server configuration:
engine: {
  apiKey: "service:xxxxxx-xxxx:XXXXXXXXXXX"
},
Also, I've experienced better performance when defining defaultMaxAge in the cache control options:
cacheControl: {
  defaultMaxAge: 300, // 5 min
  calculateHttpHeaders: true,
  stripFormattedExtensions: false
},
The other thing that can help is adding a longer max cache age on stitched objects, if that makes sense for your data; you can do this by adding cache hints in the schema stitching resolvers:
mergeSchemas({
  schemas: [avatarSchema, mediaSchema, linkSchemaDefs],
  resolvers: [
    {
      AvatarFlatFields: {
        faceImage: {
          fragment: 'fragment AvatarFlatFieldsFragment on AvatarFlatFields { faceImageId }',
          resolve(parent, args, context, info) {
            info.cacheControl.setCacheHint({ maxAge: 3600 });
            return info.mergeInfo.delegateToSchema({
              schema: mediaSchema,
              operation: 'query',
              fieldName: 'getMedia',
              args: {
                mediaId: parseInt(parent.faceImageId),
              },
              context,
              info,
            });
          },
        },
      },
    },
  ],
});
Finally, using DataLoaders can make processing requests much faster when batch processing and DataLoader caching are enabled (read more on their GitHub page). The code will be something like this:
public avatarLoader = (context): DataLoader<any, any> => {
  return new DataLoader(
    ids => this.getUsersAvatars(dataLoadersContext(context), ids)
      .then(results => new Validation().validateDataLoaderArrayResults(ids, results)),
    { batch: true, cache: true },
  );
};
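The loader is then typically created once per request in the Apollo Server context and called from resolvers. A sketch; the context shape, the avatarService instance, and the field names are my assumptions:
// Sketch: one DataLoader instance per request, so avatar lookups within a
// single request are batched into one getUsersAvatars call and cached.
const server = new ApolloServer({
  schema,
  context: (ctx) => ({
    ...ctx,
    avatarLoader: avatarService.avatarLoader(ctx), // hypothetical instance of the class above
  }),
});

// ...and in a resolver (field and id names are illustrative):
// faceImage: (parent, args, context) => context.avatarLoader.load(parent.faceImageId)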

Relay Modern subscriptions: returning record multiple times incrementally

I am working on a very simple app which contains Posts and comments on those Posts. I have got:
1) comment_mutation.js (server side: I am publishing the saved comment using pubsub). I have removed unnecessary code:
CommentCreateMutation: mutationWithClientMutationId({
  name: 'CommentCreate',
  inputFields: {
  },
  outputFields: {
    commentEdge: {
      type: GraphQLCommentEdge,
      resolve: async ({ comment, postId }) => {
        // Publishing change to the COMMENT_ADDED subscription.
        await pubsub.publish(COMMENT_SUB.COMMENT_ADDED, { commentAdded: commentEdge, postId });
        return commentEdge;
      },
    },
  },
  mutateAndGetPayload: () => {
    // Save comment in DB
  },
}),
2) CommentSubscription.js (server side) - getting the subscription and filtering it based on postId.
commentAdded: {
  type: GraphQLCommentEdge,
  args: {
    postId: { type: new GraphQLNonNull(GraphQLID) },
    ...connectionArgs,
  },
  subscribe:
    withFilter(
      () => pubsub.asyncIterator(COMMENT_SUB.COMMENT_ADDED),
      (payload, args) => payload.postId === args.postId
    ),
},
The server side is working very well. Whenever a comment is created, it publishes the result to the subscription; the subscription catches it and displays the results.
Now the client side:
1) CommentMutation.js - client side. Whenever the user creates a comment on the client side (React Native), it updates the store just fine.
const mutation = graphql`
  mutation CommentCreateMutation($input: CommentCreateInput!) {
    commentCreate(input: $input) {
      commentEdge {
        __typename
        cursor
        node {
          id
          _id
          text
        }
      }
    }
  }
`;
const commit = (environment, user, post, text, onCompleted, onError) => commitMutation(
  environment, {
    mutation,
    variables: {
      input: { userId: user._id, userName: user.userName, postId: post._id, text },
    },
    updater: (store) => {
      // Update the store
    },
    optimisticUpdater: (store) => {
      // Update the store optimistically
    },
    onCompleted,
    onError,
  },
);
2) CommentSubscription.js (client side)
const subscription = graphql`
  subscription CommentAddedSubscription($input: ID!) {
    commentAdded(postId: $input) {
      __typename
      cursor
      node {
        id
        _id
        text
        likes
        dislikes
      }
    }
  }
`;
const commit = (environment, post, onCompleted, onError, onNext) => requestSubscription(
  environment,
  {
    subscription,
    variables: {
      input: post._id,
    },
    updater: (store) => {
      const newEdge = store.getRootField('commentAdded');
      const postProxy = store.get(post.id);
      const conn = ConnectionHandler.getConnection(
        postProxy,
        'CommentListContainer_comments',
      );
      ConnectionHandler.insertEdgeAfter(conn, newEdge);
    },
    onCompleted,
    onError,
    onNext,
  }
);

export default { commit };
The problem is on the client side. Whenever I create a comment on the server side, I can see the first comment correctly on the client side. When I send another comment on the server side, I see 2 copies of the same comment on the client side. When I send a comment a third time, I see 3 copies on the client side. Does it mean that every time comment_mutation.js (on the server side) runs, it creates a new subscription besides the existing one? That is the only logical explanation I can think of.
I commented out the updater function of CommentMutation.js (on the client side) but am still seeing this behavior. Any help will be much appreciated.
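One way to narrow it down (a sketch, not code from the question): make the subscription updater idempotent by skipping edges whose node is already in the connection. If duplicates still appear with this guard in place, the extra records are coming from repeated subscription payloads rather than from the updater itself. This would replace the updater in the client-side CommentSubscription.js above and reuses its ConnectionHandler import:
// Sketch: only insert the edge if the connection does not already contain its node.
updater: (store) => {
  const newEdge = store.getRootField('commentAdded');
  const newId = newEdge.getLinkedRecord('node').getValue('id');
  const postProxy = store.get(post.id);
  const conn = ConnectionHandler.getConnection(
    postProxy,
    'CommentListContainer_comments',
  );
  const alreadyThere = (conn.getLinkedRecords('edges') || []).some(
    (edge) => edge.getLinkedRecord('node')
      && edge.getLinkedRecord('node').getValue('id') === newId,
  );
  if (!alreadyThere) {
    ConnectionHandler.insertEdgeAfter(conn, newEdge);
  }
},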
