I'm facing an error on file upload with GraphQL Upload using the ReadStream function:
error: 17:10:32.466+02:00 [ExceptionsHandler] Maximum call stack size exceeded
error: 17:10:32.467+02:00 [graphql] Maximum call stack size exceeded RangeError: Maximum call stack size exceeded
at ReadStream.open (/Users/xxxx/Documents/Xxxx/xxxxx/xxxxx-api/node_modules/fs-capacitor/lib/index.js:80:7)
at _openReadFs (internal/fs/streams.js:117:12)
at ReadStream.<anonymous> (internal/fs/streams.js:110:3)
at ReadStream.deprecated [as open] (internal/util.js:96:15)
at ReadStream.open (/Users/xxxx/Documents/Xxxxx/xxxx/xxxxx-api/node_modules/fs-capacitor/lib/index.js:90:11)
at _openReadFs (internal/fs/streams.js:117:12)
at ReadStream.<anonymous> (internal/fs/streams.js:110:3)
at ReadStream.deprecated [as open] (internal/util.js:96:15)
at ReadStream.open (/Users/xxxx/Documents/Xxxxx/xxxxx/xxxxx-api/node_modules/fs-capacitor/lib/index.js:90:11)
at _openReadFs (internal/fs/streams.js:117:12) {"stack":"RangeError: Maximum call stack size exceeded\n at ReadStream.open (/Users/xxxx/Documents/Xxxxxx/xxxxx/xxxx-api/node_modules/fs-capacitor/lib/index.js:80:7)\n at _openReadFs (internal/fs/streams.js:117:12)\n at ReadStream.<anonymous> (internal/fs/streams.js:110:3)\n at ReadStream.deprecated [as open] (internal/util.js:96:15)\n at ReadStream.open (/Users/xxxxx/Documents/Xxxxx/xxxxx/xxxxx-api/node_modules/fs-capacitor/lib/index.js:90:11)\n at _openReadFs (internal/fs/streams.js:117:12)\n at ReadStream.<anonymous> (internal/fs/streams.js:110:3)\n at ReadStream.deprecated [as open] (internal/util.js:96:15)\n at ReadStream.open (/Users/xxxx/Documents/Xxxxxx/xxxx/xxxxx-api/node_modules/fs-capacitor/lib/index.js:90:11)\n at _openReadFs (internal/fs/streams.js:117:12)"}
(node:44569) [DEP0135] DeprecationWarning: ReadStream.prototype.open() is deprecated
(Use `node --trace-deprecation ...` to show where the warning was created)
Here is the function I'm using to upload a file:
public async cleanUpload(upload: GraphqlUpload, oldName?: string) {
  let uploadResponse: FileInfo;
  try {
    if (oldName) {
      this.safeRemove(oldName);
    }
    uploadResponse = await this.uploadFile(
      {
        fileName: upload.filename,
        stream: upload.createReadStream(),
        mimetype: upload.mimetype,
      },
      { isPublic: true, filter: imageFilterFunction },
    );
    return uploadResponse;
  } catch (e) {
    this.logger.error('unable to upload', e);
    if (uploadResponse) {
      this.safeRemove(uploadResponse.fileName);
    }
    throw e;
  }
}
The solution was to downgrade the Node version to 12.18 from 14.17.
To keep using Node 14.17, you can disable Apollo's built-in upload handling and use graphql-upload directly.
Please see this comment, which outlines the approach quoted here.
For any future readers, here is how to fix the issue once and for all.
The problem is that @nestjs/graphql's dependency, apollo-server-core, depends on an old version of graphql-upload (v8.0), which conflicts with newer versions of Node.js and various packages. Apollo Server v2.21.0 seems to have fixed this, but @nestjs/graphql is still on v2.16.1. Furthermore, Apollo Server v3 will remove the built-in graphql-upload.
The solution suggested in this comment is to disable Apollo Server's built-in handling of uploads and use your own. This can be done in 3 simple steps:
1. package.json
Remove the fs-capacitor and graphql-upload entries from the resolutions section if you added them, and install the latest version of the graphql-upload package (v11.0.0 at this time) as a regular dependency.
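After that step, the relevant part of package.json might look like this (the version is illustrative, and the resolutions section no longer pins those two packages):
"dependencies": {
  "graphql-upload": "^11.0.0"
}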
2. src/app.module.ts
Disable Apollo Server's built-in upload handling and add the graphqlUploadExpress middleware to your application.
import { graphqlUploadExpress } from "graphql-upload"
import { MiddlewareConsumer, Module, NestModule } from "@nestjs/common"
import { GraphQLModule } from "@nestjs/graphql"

@Module({
  imports: [
    GraphQLModule.forRoot({
      uploads: false, // disable built-in upload handling
    }),
  ],
})
export class AppModule implements NestModule {
  configure(consumer: MiddlewareConsumer) {
    consumer.apply(graphqlUploadExpress()).forRoutes("graphql")
  }
}
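graphqlUploadExpress also accepts the usual graphql-upload limits if you want to cap upload size and count; the values below are illustrative:
consumer
  .apply(graphqlUploadExpress({ maxFileSize: 10000000, maxFiles: 5 }))
  .forRoutes("graphql")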
3. src/blog/post.resolver.ts (example resolver)
Remove the GraphQLUpload import from apollo-server-core and import from graphql-upload instead
// import { GraphQLUpload } from "apollo-server-core" <-- remove this
import { FileUpload, GraphQLUpload } from "graphql-upload"

@Mutation(() => Post)
async postCreate(
  @Args("title") title: string,
  @Args("body") body: string,
  @Args("attachment", { type: () => GraphQLUpload }) attachment: Promise<FileUpload>,
) {
  const { filename, mimetype, encoding, createReadStream } = await attachment
  console.log("attachment:", filename, mimetype, encoding)
  const stream = createReadStream()
  stream.on("data", (chunk: Buffer) => { /* do stuff with data here */ })
}
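If you want to persist the upload rather than just inspect the chunks, here is a minimal sketch; the helper name and destination path are illustrative, it reuses the FileUpload type imported above, and it sticks to the callback pipeline so it also runs on Node 14:
import { createWriteStream } from "fs"
import { pipeline } from "stream"
import { promisify } from "util"

const pump = promisify(pipeline)

// Illustrative helper: stream the upload to disk so the request body is fully consumed.
async function saveUpload(attachment: Promise<FileUpload>, destination: string) {
  const { filename, createReadStream } = await attachment
  await pump(createReadStream(), createWriteStream(destination))
  return filename
}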
I am getting this error when calling a Cypress task that wraps "xlsx": "^0.18.5" for reading Excel files.
The error (thrown from inside the xlsx package) is:
TypeError: f.slice is not a function
at firstbyte (/local0/biologics/gitcheckouts/new-ui-e2e/node_modules/xlsx/xlsx.js:23626:38)
at readSync (/local0/biologics/gitcheckouts/new-ui-e2e/node_modules/xlsx/xlsx.js:23706:14)...
My task looks like this in cypress.config.ts:
on('task', {
  readXlsx: readXlsx.read,
});
If I add parentheses after read (readXlsx: readXlsx.read()), it already fails when starting Cypress with the same error message.
Does anybody know what to do? (Cypress version 12.5.0)
Addendum:
import * as fs from 'fs';

private validateExcelFile(countedItems: number, correction: number) {
  cy.get('#LASTDOWNLOADEDFILE').then((name) => {
    const fileName = name as unknown as string;
    cy.log(`Validating excel file: ${fileName}`);
    const buffer = fs.readFileSync(fileName);
    cy.task('readXlsx', { buffer }).then((rows) => {
      const rdw = (rows as string).length;
      expect(rdw).to.be.equal(countedItems - correction);
    });
  });
}
I cannot tell what code, if any, precedes the task. The message seems to indicate a buffer issue; here is my working code, which you can compare to your own.
Otherwise, suspect the file itself.
import { readFileSync } from "fs";
import { read } from "xlsx/xlsx.mjs";

function readWorkbook(name) {
  const buffer = readFileSync(name);
  const workbook = read(buffer);
  return workbook;
}
...
on('task', {
  readXlsx: (name) => readWorkbook(name),
...
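On the spec side you would then pass only the file name (a plain string) to the task and let the Node-side task do the fs work; a sketch with an illustrative path:
// spec file: only the path string crosses the serialization boundary, no Buffer
cy.task('readXlsx', 'cypress/downloads/report.xlsx').then((workbook) => {
  // assert on the parsed workbook here
});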
My NestJS application does not display the error message during development, only during production. I found no module where Apollo Server can be configured with cache: "bounded". The NestJS documentation itself makes no mention of it anywhere.
The complete error message says:
Persisted queries are enabled and are using an unbounded cache. Your server is vulnerable to denial of service attacks via memory exhaustion. Set cache: "bounded" or persistedQueries: false in your ApolloServer constructor, or see https://go.apollo.dev/s/cache-backends for other alternatives.
Here are some dependencies I suspect could be related to it.
"#nestjs/apollo": "10.0.19",
"#nestjs/common": "9.0.5",
"#nestjs/core": "9.0.5",
"#nestjs/graphql": "10.0.20",
A similar issue was opened on GitHub and sadly it was closed without any solution.
This is related to the Apollo module's configuration: by default the cache is "unbounded" and not safe for production (see https://www.apollographql.com/docs/apollo-server/performance/cache-backends/#ensuring-a-bounded-cache).
You can easily follow their recommendation by adding the optional "cache" option inside your GraphQL module.
GraphQLModule.forRoot<ApolloDriverConfig>({
  driver: ApolloDriver,
  cache: 'bounded', // <--- This option
  ...
}),
Or provide an external cache with the KeyvAdapter; see: https://github.com/apollographql/apollo-utils/tree/main/packages/keyvAdapter#keyvadapter-class
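A sketch of that external-cache variant, assuming keyv, @keyv/redis and @apollo/utils.keyvadapter are installed, with an illustrative Redis connection string:
import Keyv from 'keyv';
import { KeyvAdapter } from '@apollo/utils.keyvadapter';

GraphQLModule.forRoot<ApolloDriverConfig>({
  driver: ApolloDriver,
  // back Apollo's cache with Redis instead of the in-memory bounded cache
  cache: new KeyvAdapter(new Keyv('redis://localhost:6379')),
}),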
In app.module.ts, inside the imports array:
GraphQLModule.forRootAsync<ApolloDriverConfig>({
  driver: ApolloDriver,
  useClass: GqlConfigService,
}),
Then in GqlConfigService I add cache: 'bounded' to the ApolloDriverConfig options.
import { ConfigService } from '@nestjs/config';
import { ApolloDriverConfig } from '@nestjs/apollo';
import { Injectable } from '@nestjs/common';
import { GqlOptionsFactory } from '@nestjs/graphql';
import { GraphqlConfig } from './config.interface';

@Injectable()
export class GqlConfigService implements GqlOptionsFactory {
  constructor(private configService: ConfigService) {}

  createGqlOptions(): ApolloDriverConfig {
    const graphqlConfig = this.configService.get<GraphqlConfig>('graphql');
    return {
      // schema options
      cache: 'bounded', // ! <== Added here
      autoSchemaFile: graphqlConfig.schemaDestination || './src/schema.graphql',
      sortSchema: graphqlConfig.sortSchema,
      buildSchemaOptions: {
        numberScalarMode: 'integer',
      },
      // subscription
      installSubscriptionHandlers: true,
      debug: graphqlConfig.debug,
      playground: graphqlConfig.playgroundEnabled,
      context: ({ req }) => ({ req }),
    };
  }
}
I defined a lambda function using NodejsFunction of AWS CDK.
I installed the modules I want to use in my lambda function in node_modules of CDK.
When I execute sam local invoke, some modules succeed and others fail.
When an error occurs, the error message "File not found in '/var/task/...'" is displayed.
Does this mean that some modules can be used with NodejsFunction and some cannot?
lambda-stack.ts
new lambda.NodejsFunction(this, 'SampleFunction', {
  runtime: Runtime.NODEJS_14_X,
  entry: 'lambda/sample-function/index.ts',
})
lambda/sample-function/index.ts (use 'date-fns') -> succeeded!
import { format } from 'date-fns'

export const handler = async () => {
  try {
    console.log(format(new Date(), "'Today is a' eeee"))
  } catch (error) {
    console.log(error)
  }
}
lambda/sample-function/index.ts (use 'chrome-aws-lambda') -> failed
const chromium = require('chrome-aws-lambda')

export const handler = async () => {
  try {
    const browser = await chromium.puppeteer.launch()
  } catch (error) {
    // Cannot find module '/var/task/puppeteer/lib/Browser'
    console.log(error)
  }
}
lambda/sample-function/index.ts (use 'pdfkit') -> failed
const PDFDocument = require('pdfkit')

export const handler = async () => {
  try {
    const doc = new PDFDocument()
  } catch (error) {
    // no such file or directory, open '/var/task/data/Helvetica.afm'
    console.log(error)
  }
}
It seems that "pdfkit" and "chrome-aws-lambda" are packages that rely on extra binary/asset files, and you need to verify that those files actually end up in the Lambda.
When you create a Lambda using new lambda.NodejsFunction(), an esbuild process runs in the background and bundles everything into a single file, so make sure you don't see any errors related to that build during synth.
To verify that this is the problem, you could try deploying your Lambda with its node_modules included and check whether it works (see the sketch below).
Alternatively, you can:
use (or create) a Lambda layer that contains those binaries (see example).
build your Lambda with all of its dependencies as a Docker image and configure the Lambda to run that image.
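One way to keep such packages, together with their asset files, as real node_modules in the deployment package instead of letting esbuild inline them is the bundling.nodeModules option of NodejsFunction. A sketch based on the stack from the question (whether the resulting package, especially the Chromium binary, still fits Lambda's size limits is a separate concern):
new lambda.NodejsFunction(this, 'SampleFunction', {
  runtime: Runtime.NODEJS_14_X,
  entry: 'lambda/sample-function/index.ts',
  bundling: {
    // install these packages into the asset's node_modules instead of bundling them,
    // so files like pdfkit's data/Helvetica.afm ship with the function
    nodeModules: ['pdfkit', 'chrome-aws-lambda'],
  },
})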
After upgrading to NestJS v8, I had to upgrade my RxJS version as well, from 6 to 7, and then it started throwing the ERROR [ExceptionsHandler] no elements in sequence error.
This is a sample method in one of the app services:
show(): Observable<any> {
  return from(this.repository.fetch()).pipe(
    filter((data) => data.length > 0),
    map((data) => data.map((datum) => parseData(datum))),
  );
}
While I had NestJS v7 and RxJS v6, the method was working just fine; in other words, if the filter predicate did not pass, the map operator would not be called at all and the Observable simply completed.
But after upgrading to NestJS v8 and RxJS v7, if my repository does not return any data, the app starts throwing the ERROR [ExceptionsHandler] no elements in sequence error.
A workaround I came up with is as follows:
show(): Observable<any> {
  return from(this.repository.fetch()).pipe(
    filter((data) => data.length > 0),
    defaultIfEmpty([]),
    map((data) => data.map((datum) => parseData(datum))),
  );
}
This way the error is gone, but I have two more problems:
1. The map operator still runs, which I do not want.
2. The second one, which is far more important to me, is that I have to update all my services/methods that have a validation like this, which is really crazy.
My dependencies are as follows:
"dependencies": {
  "@nestjs/common": "^8.4.2",
  "@nestjs/core": "^8.4.2",
  "rxjs": "^7.5.5"
},
We determined on Discord that an interceptor could be used here to set the defaultIfEmpty like so:
import {
  CallHandler,
  ExecutionContext,
  Injectable,
  NestInterceptor,
} from '@nestjs/common'
import { defaultIfEmpty } from 'rxjs/operators'

@Injectable()
export class DefaultIfEmptyInterceptor implements NestInterceptor {
  intercept(context: ExecutionContext, next: CallHandler) {
    return next.handle().pipe(defaultIfEmpty([]))
  }
}
And then bind it globally in the providers part of the module with:
import { DefaultIfEmptyInterceptor } from '../defaultIfEmpty.interceptor'
import { APP_INTERCEPTOR } from '@nestjs/core'
// ...
{
  provide: APP_INTERCEPTOR,
  useClass: DefaultIfEmptyInterceptor,
}
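For context on why this only shows up after the upgrade: presumably NestJS v8 resolves handler observables with RxJS 7's lastValueFrom, which rejects with EmptyError ("no elements in sequence") when the stream completes without emitting, whereas .toPromise() in RxJS 6 resolved to undefined. A minimal sketch of that behavior (rxjs 7 only):
import { EMPTY, lastValueFrom } from 'rxjs'

// EMPTY completes without ever emitting, so lastValueFrom rejects with EmptyError
lastValueFrom(EMPTY).catch((err) => console.log(err.message)) // "no elements in sequence"

This is also why the interceptor addresses problem 1: defaultIfEmpty is applied after the whole service pipe, so the map in the service still never runs for an empty result; only the handler's final value is defaulted to [].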
I installed Ant Design with npm install antd, and mounted the Switch component.
My component is:
import React, { useState } from 'react';
import { Switch } from 'antd';

const FullWidthToggle = ({ isEnabled }) => {
  const [isChecked, setIsChecked] = useState(isEnabled);

  const onChange = (checked) => {
    setIsChecked(checked);
  };

  return (
    <div className='full-width-toggle'>
      <p> <strong>{isChecked ? 'enabled' : 'disabled'}</strong> </p>
      <Switch onChange={onChange}/>
    </div>
  );
};

export default FullWidthToggle;
Any time I switch, the toggle and the text change, but I get this error in the console:
Uncaught RangeError: Maximum call stack size exceeded at wrapperRaf
I get the same error with the Collapse component, and I suppose with every animation.
I suspect I need to install or configure something else; can someone tell me what?
I found the answer in the meantime here:
https://github.com/vueComponent/ant-design-vue/issues/1219
I had to add
resolve: {
  ...
  alias: {
    ...
    raf: 'node_modules/raf/',
  },
},
to my webpack.config.js file
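A slightly more robust variant of the same alias, assuming a standard layout where webpack.config.js sits in the project root, resolves the path absolutely so it does not depend on the working directory:
const path = require('path');

module.exports = {
  // ...
  resolve: {
    alias: {
      // point every import of "raf" at the single copy in the project's node_modules
      raf: path.resolve(__dirname, 'node_modules/raf'),
    },
  },
};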