I have this issue only in GraphQL: I need to POST base64-encoded HTML, but I couldn't find any GraphQL configuration to override the 1 MB limit.
I have already set up middleware.js with a higher value, but it only works for the REST API.
The workaround from this GitHub issue solves it. It's temporary, but it works.
Change config/plugins.js:
module.exports = {
  graphql: {
    endpoint: "/customendpoint",
  },
};
In fact, the /graphql endpoint doesn't pass through the parser middleware, so the request fails.
I resolved the "payload too large" issue in Strapi GraphQL. For the REST APIs it worked once I configured jsonLimit and formLimit, but not for GraphQL in the new Strapi version, so I found the following solution, and it works great.
config/plugins.js
module.exports = {
  graphql: {
    config: {
      endpoint: "/graphql",
      shadowCRUD: true,
      playgroundAlways: false,
      depthLimit: 7,
      amountLimit: 2000,
      apolloServer: {
        tracing: false,
        bodyParserConfig: {
          // koa-bodyparser/node_modules/co-body/lib/json.js#36
          limit: "256mb",
          // koa-bodyparser/index.js#69
          formLimit: "256mb",
          jsonLimit: "256mb",
          textLimit: "256mb",
          xmlLimit: "256mb",
        },
      },
    },
  },
};
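For comparison, the REST body limits mentioned above live in the body middleware configuration. Below is a sketch for Strapi v4's config/middlewares.js; the strapi::body option names are my assumption for v4 and may differ in other releases, so verify against your Strapi version's docs.

```javascript
// config/middlewares.js — sketch for Strapi v4 (assumed version);
// the strapi::body option names may differ in other releases.
module.exports = [
  // ...keep the default middleware entries here...
  {
    name: "strapi::body",
    config: {
      jsonLimit: "256mb",
      formLimit: "256mb",
      textLimit: "256mb",
    },
  },
];
```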
Related
This is happening on a simple project during local development, so cloud infrastructure isn't an issue.
This is also happening in the application playground.
My module registration:
GraphQLModule.forRootAsync<ApolloDriverConfig>({
  driver: ApolloDriver,
  imports: [YeoConfigModule],
  useFactory: (configService: YeoConfigService<AppConfig>) => {
    const config: ApolloDriverConfig = {
      debug: true,
      subscriptions: {
        'graphql-ws': true,
      },
      playground: true,
      autoSchemaFile: './apps/event-service/schema.gql',
      sortSchema: true,
      context: ({ req, res }) => ({ req, res }),
    };
    const origins = configService.get('CORS_ORIGINS')();
    config.cors = { origin: origins, credentials: true };
    // config.path = '/apis/event-service/graphql';
    return config;
  },
  inject: [YeoConfigService],
}),
My app startup:
async function bootstrap(): Promise<void> {
  const app = await getApp();
  await app.listen(process.env.PORT ?? 3600);
}
bootstrap();
My versions:
"graphql-ws": "5.11.2",
"graphql-redis-subscriptions": "2.5.0"
"#apollo/gateway": "2.1.3",
"#nestjs/graphql": "10.1.3",
"graphql": "16.5.0",
Result:
{
"error": "Could not connect to websocket endpoint ws://localhost:3600/graphql. Please check if the endpoint url is correct."
}
Any ideas why this isn't working as expected? I've been reading the NestJS docs at https://docs.nestjs.com/graphql/subscriptions, but I can't find anything about extra setup being required other than adding
subscriptions: {
  'graphql-ws': true,
},
when registering the graphql module.
For anyone else stumbling upon this, I have started using Altair, which lets me specify the ws endpoint as well as the connection type, one of the options being graphql-ws.
So I went with that.
If anyone knows how to achieve this using the playground referred to in the original question, I'm happy to mark that as the accepted answer over my own.
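When pointing a client like Altair at the server, the ws endpoint is just the HTTP endpoint with the scheme swapped. A trivial helper to derive it (hypothetical, not part of NestJS or graphql-ws):

```javascript
// Derive the graphql-ws endpoint from the HTTP endpoint.
// Hypothetical helper for illustration only.
function toWsEndpoint(httpUrl) {
  const url = new URL(httpUrl);
  // http and ws are both "special" schemes, so this swap is allowed.
  url.protocol = url.protocol === "https:" ? "wss:" : "ws:";
  return url.toString();
}

console.log(toWsEndpoint("http://localhost:3600/graphql"));
// ws://localhost:3600/graphql
```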
I'm running an Apollo GraphQL client on a Node.js server locally that interacts with a GraphQL endpoint hosted in the cloud. I get a "GET query missing" error whenever I query that endpoint with the client, even though I also have an Angular application using Apollo Angular (which internally uses Apollo Client) that queries the cloud endpoint successfully. The client also works fine if I host the GraphQL endpoint locally on my machine. I'm using the cross-fetch package as my polyfill for fetch.
Here's my code for making the GraphQL client:
public rebuildClient() {
  if (this.apollo) {
    this.apollo.stop();
    this.apollo.clearStore();
  }
  this.apollo = new ApolloClient(this.configureApolloClientOptions());
}

private configureApolloClientOptions() {
  const uploadLink = createUploadLink({
    uri: this.serverConfig.httpsPrefix + this.serverConfig.backendDomain + this.serverConfig.graphqlRelativePath,
    headers: { 'Apollo-Require-Preflight': 'true' },
    fetch: fetch,
  });
  if (this.graphQLWsClient) this.graphQLWsClient.dispose();
  this.graphQLWsClient = createClient({
    url: this.serverConfig.wssPrefix + this.serverConfig.backendDomain + this.serverConfig.graphqlRelativePath,
    connectionParams: {
      authentication: `Basic-Root ${this.authConfig.backendRootUser.username}:${this.authConfig.backendRootUser.password}`,
    },
    webSocketImpl: WebSocket,
  });
  const webSocketLink = new GraphQLWsLink(this.graphQLWsClient);
  // using the ability to split links, you can send data to each link
  // depending on what kind of operation is being sent
  const splitLink = split(
    // split based on operation type
    ({ query }) => {
      const definition = getMainDefinition(query);
      return definition.kind === 'OperationDefinition' && definition.operation === 'subscription';
    },
    webSocketLink,
    uploadLink,
  );
  return {
    link: splitLink,
    cache: new InMemoryCache(),
  };
}
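The split link routes by operation type using getMainDefinition. As a rough illustration of the decision it makes, here is a naive sketch that keyword-matches the query text instead of parsing the AST (so it ignores comments and multiple definitions — real code should keep using getMainDefinition):

```javascript
// Naive sketch of the split-link predicate: true when the operation
// is a subscription. A stand-in for the AST check above, not a
// replacement for getMainDefinition from @apollo/client/utilities.
function isSubscription(queryText) {
  // First token before whitespace, "(" or "{" is the operation keyword.
  const firstToken = queryText.trim().split(/[\s({]/, 1)[0];
  return firstToken === "subscription";
}
```

Subscriptions go over the websocket link, everything else (queries, mutations, uploads) over the HTTP upload link.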
Here's a template for the serverConfig, which is a JSON file:
{
  "backendDomain": "localhost:3000",
  "cdnDomain": "localhost:3000",
  "cdnHttpsPrefix": "http://",
  "httpsPrefix": "http://",
  "wssPrefix": "ws://",
  "cdnRelativePath": "/cdn",
  "dynamicCdnRelativePath": "/cdn/dynamic",
  "graphqlRelativePath": "/graphql",
  "selfHostedPrefix": "cdn://",
  "archiveCategoryId": ""
}
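Given that template, the client above builds its endpoints by plain string concatenation. A small helper (hypothetical, just to make the concatenation explicit) reproducing the URLs it produces:

```javascript
// Reproduces how the client code above concatenates its endpoints.
// Hypothetical helper for illustration only.
function buildGraphqlUrls(cfg) {
  return {
    httpUri: cfg.httpsPrefix + cfg.backendDomain + cfg.graphqlRelativePath,
    wsUrl: cfg.wssPrefix + cfg.backendDomain + cfg.graphqlRelativePath,
  };
}

const cfg = {
  backendDomain: "localhost:3000",
  httpsPrefix: "http://",
  wssPrefix: "ws://",
  graphqlRelativePath: "/graphql",
};
console.log(buildGraphqlUrls(cfg));
// { httpUri: 'http://localhost:3000/graphql', wsUrl: 'ws://localhost:3000/graphql' }
```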
Note that everything works if backendDomain is set to a localhost domain. But when I set it to an actual domain, the Apollo Client breaks with the errors mentioned above.
This seems to be a long lasting issue:
In cypress interface, my application cannot send any graphql request or receive any response. Because it is fetch type.
here is the network status in cypress:
But in normal browser, I actually have several graphql requests, like here:
I know there are already quite a few discussions and workarounds, such as using a polyfill to solve this problem, for example:
https://gist.github.com/yagudaev/2ad1ef4a21a2d1cfe0e7d96afc7170bc
Cypress does not intercept GraphQL API calls
but unfortunately they do not work in my case.
Any kind of help is appreciated.
P.S.: I am using Cypress 8.3.0, React as the front end, and Apollo Client and Apollo Server for all the GraphQL stuff.
EDIT:
sample intercept:
cy.intercept('POST', Cypress.env('backendpiUrl') + '/graphql', req => {
  if (req.body.operationName === 'updateItem') {
    req.alias = 'updateItemMutation';
  }
});
sample cypress console:
You can see that all the requests are XHR-based; there are no fetch-type GraphQL requests.
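As a side note, the aliasing logic inside that intercept can be exercised in isolation. Below is a plain-object sketch (req is a stand-in for Cypress's intercepted request object, not the real thing):

```javascript
// Sets an alias on an intercepted request when the GraphQL operation
// name matches. Plain-object sketch of the cy.intercept callback body.
function aliasOperation(req, operationName, alias) {
  if (req.body && req.body.operationName === operationName) {
    req.alias = alias;
  }
  return req;
}

const req = { body: { operationName: 'updateItem' } };
aliasOperation(req, 'updateItem', 'updateItemMutation');
// req.alias is now 'updateItemMutation'
```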
The links are old; the unfetch polyfill is no longer necessary. Since the introduction of cy.intercept(), fetch requests can be waited on, stubbed, etc.
Here are the docs, Working with GraphQL, and an interesting article, Smart GraphQL Stubbing in Cypress (note that route2 was an early name for intercept).
More up to date, posted two days ago: bahmutov - todo-graphql-example
Key helper function from this package:
import {
  ApolloClient,
  InMemoryCache,
  HttpLink,
  ApolloLink,
  concat,
} from '@apollo/client'
// adding custom header with the GraphQL operation name
// https://www.apollographql.com/docs/react/networking/advanced-http-networking/
const operationNameLink = new ApolloLink((operation, forward) => {
  operation.setContext(({ headers }) => ({
    headers: {
      'x-gql-operation-name': operation.operationName,
      ...headers,
    },
  }))
  return forward(operation)
})
const httpLink = new HttpLink({ uri: 'http://localhost:3000' })
export const client = new ApolloClient({
  link: concat(operationNameLink, httpLink),
  fetchOptions: {
    mode: 'no-cors',
  },
  cache: new InMemoryCache(),
})
Sample test
describe('GraphQL client', () => {
  // make individual GraphQL calls using the app's own client
  it('gets all todos (id, title)', () => {
    const query = gql`
      query listTodos {
        # operation name
        allTodos {
          # fields to pick
          id
          title
        }
      }
    `
    cy.wrap(
      client.query({
        query,
      }),
    )
      .its('data.allTodos')
      .should('have.length.gte', 2)
      .its('0')
      .should('deep.equal', {
        id: todos[0].id,
        title: todos[0].title,
        __typename: 'Todo',
      })
  })
})
Please show your test and the error (or failing intercept).
I'm trying to implement Auth0 in Apollo Federation. I was able to implement it in the individual services (https://auth0.com/blog/developing-a-secure-api-with-nestjs-adding-authorization/#Set-Up-API-Authorization), but if I access the APIs through the gateway, the header/payload is not passed down to the services, so requests are always unauthorized.
If the API is accessed through an individual service, the payload is received and properly decoded from the header and works fine, but through the gateway it is not cascaded to the services that need it.
I'm currently using a code-first implementation. I've also tried mirroring the module used in the services, but it still doesn't work.
sample payload in individual service
{
  iss: 'issuer url here',
  sub: 'google-oauth2',
  aud: ['audience link'],
  iat: ,
  exp: ,
  azp: '',
  scope: '',
  permissions: [ 'sample:permission' ]
}
imports in the gateway
imports: [
  ConfigModule.forRoot(),
  GraphQLGatewayModule.forRoot({
    server: {
      cors: true,
    },
    gateway: {
      serviceHealthCheck: true,
      serviceList: [
        {
          name: 'service',
          url: `${process.env.SERVICE_URL}/graphql`,
        },
      ],
    },
  }),
]
You can customize the headers used for the internal requests with the buildService option:
server.ts
const gateway = new ApolloGateway({
  buildService: ({ url }) => new RequestHandler({ url }),
  serviceList,
})
where the RequestHandler class extends RemoteGraphQLDataSource:
import { RemoteGraphQLDataSource } from '@apollo/gateway'
import type { GraphQLRequest } from 'apollo-server-core'
import type express from 'express'

export class RequestHandler extends RemoteGraphQLDataSource {
  willSendRequest({ context, request }: { context: { req: express.Request }; request: GraphQLRequest }) {
    request.http?.headers.set('somethingFromOriginalRequestOrSomethingCustom', context.req.headers['something'])
  }
}
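The essence of willSendRequest above is copying headers from the incoming gateway request onto the outgoing subgraph request. A plain-JavaScript sketch of that copying step (hypothetical helper; a Map stands in for the Headers-like request.http.headers):

```javascript
// Copies selected headers from the original gateway request onto the
// outgoing subgraph request. Plain-object sketch of willSendRequest;
// incomingHeaders mimics Express req.headers (lowercased keys).
function forwardHeaders(incomingHeaders, outgoingHeaders, names) {
  for (const name of names) {
    const value = incomingHeaders[name.toLowerCase()];
    if (value !== undefined) outgoingHeaders.set(name, value);
  }
  return outgoingHeaders;
}

// Usage: forward the Authorization header so the Auth0 JWT reaches
// the downstream service.
const out = new Map();
forwardHeaders({ authorization: 'Bearer abc123' }, out, ['authorization']);
```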
I have implemented a service worker for my website, but I am not sure about its expiration setting.
I am currently using Next.js for page rendering, and Workbox with Apollo for data management.
My Workbox config:
// File to generate the service worker.
require("dotenv").config()
const workboxBuild = require("workbox-build")
const { COUNTRY: country, NODE_ENV } = process.env
const urlPattern = new RegExp(`/${country}\/static\/.*/`)
// https://developers.google.com/web/tools/workbox/reference-docs/latest/module-workbox-build#.generateSW
const buildSW = () => {
  return workboxBuild.generateSW({
    swDest: "public/workbox-worker.js",
    clientsClaim: true,
    mode: NODE_ENV,
    skipWaiting: true,
    sourcemap: false,
    runtimeCaching: [
      {
        urlPattern: urlPattern,
        // Apply a cache-first strategy.
        handler: "CacheFirst",
        options: {
          cacheName: "Static files caching",
          expiration: {
            maxEntries: 50,
            maxAgeSeconds: 3600,
          },
        },
      },
    ],
  })
}
buildSW()
My service worker is installed and activated and has started caching files.
My only question is shouldn't the max age here be 3600? Or am I doing something wrong?
I think you are confusing the Cache-Control HTTP header with Workbox's expiration settings.
Since the service worker can reply to a request itself, it may return a file regardless of the Cache-Control header. What you have configured is for Workbox to evict entries from its cache once there are more than 50 of them, or once they are older than 3600 seconds. The service worker has its own cache, which you can inspect in the Application tab of Chrome DevTools.
See this question about how the two interact with each other: If you are using Service Workers do you still need cache-control headers?
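To make Workbox's side of that interaction concrete, here is a toy model of the expiration behavior configured above (a sketch of the semantics only, not Workbox's real implementation): an entry is evicted once the cache exceeds maxEntries (oldest first) or once the entry is older than maxAgeSeconds, regardless of any Cache-Control header on the response.

```javascript
// Toy model of Workbox expiration: maxEntries + maxAgeSeconds,
// oldest entries evicted first. Illustration only, not Workbox code.
class ToyExpiringCache {
  constructor({ maxEntries, maxAgeSeconds }) {
    this.maxEntries = maxEntries;
    this.maxAgeSeconds = maxAgeSeconds;
    this.entries = new Map(); // url -> timestamp (insertion-ordered)
  }
  put(url, now) {
    this.entries.delete(url);
    this.entries.set(url, now);
    // Evict the oldest entries once we exceed maxEntries.
    while (this.entries.size > this.maxEntries) {
      this.entries.delete(this.entries.keys().next().value);
    }
  }
  has(url, now) {
    const ts = this.entries.get(url);
    if (ts === undefined) return false;
    if (now - ts > this.maxAgeSeconds) {
      this.entries.delete(url); // expired by maxAgeSeconds
      return false;
    }
    return true;
  }
}
```

With maxEntries: 2 and maxAgeSeconds: 3600, a third put evicts the oldest URL, and any entry stops being served 3600 seconds after it was cached.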