Deploy NestJS to AWS Lambda

I have a REST API built using NestJS and I'm trying to deploy this to AWS Lambda.
I've created a file called serverless.ts in the src directory of my app -
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import serverlessExpress from '@vendia/serverless-express';
import { Handler, Callback, Context } from 'aws-lambda';

let server: Handler;

async function bootstrap(): Promise<Handler> {
  const app = await NestFactory.create(AppModule);
  await app.init();
  const expressApp = app.getHttpAdapter().getInstance();
  return serverlessExpress({ app: expressApp });
}

export const handler: Handler = async (
  event: any,
  context: Context,
  callback: Callback,
) => {
  // Reuse the server across warm invocations.
  server = server ?? (await bootstrap());
  return server(event, context, callback);
};
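As an aside, the caching pattern in that handler (bootstrap once, reuse across warm invocations) can be sketched in isolation. The stub below stands in for the NestJS/serverless-express bootstrap; all names here are illustrative, not part of the original code:

```typescript
// Sketch of the warm-start caching pattern used in the handler above.
// bootstrapStub stands in for the expensive NestFactory/serverlessExpress setup.
type Server = (event: unknown) => Promise<string>;

let bootstrapCalls = 0;
let cachedServer: Server | undefined;

async function bootstrapStub(): Promise<Server> {
  bootstrapCalls++; // in the real handler, app creation happens here
  return async (event) => `handled ${JSON.stringify(event)}`;
}

export const handler = async (event: unknown): Promise<string> => {
  // Only the first (cold) invocation pays the bootstrap cost;
  // later invocations in the same container reuse the cached server.
  cachedServer = cachedServer ?? (await bootstrapStub());
  return cachedServer(event);
};
```

Calling the handler repeatedly in the same process runs the bootstrap exactly once, which is the point of the `server = server ?? (await bootstrap())` line.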
Now, when I tried deploying this app/handler to AWS Lambda using the Serverless framework, it failed a couple of times because of the package size (Lambda limits unzipped deployment packages to 250 MB).
The next option I had was to use Docker and Elastic Container Registry (AWS ECR) to upload an image to Lambda.
Dockerfile -
FROM public.ecr.aws/lambda/nodejs:14
COPY package*.json ${LAMBDA_TASK_ROOT}/
RUN npm install
COPY . ${LAMBDA_TASK_ROOT}
RUN npm run build
CMD [ "dist/serverless.handler" ]
I build this image and push it to an ECR repository using the following commands -
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
docker tag e9ae3c220b23 aws_account_id.dkr.ecr.region.amazonaws.com/my-repository:tag
docker push aws_account_id.dkr.ecr.region.amazonaws.com/my-repository:tag
The push succeeds. Then I create a Lambda function from this image and add an API Gateway trigger to generate an HTTP endpoint.
When I try to access this endpoint, it says -
{ "message": "Internal server error" }
When I view the logs in CloudWatch, this is what I see -
{
  "errorType": "Runtime.ImportModuleError",
  "errorMessage": "Error: Cannot find module 'serverless'\nRequire stack:\n- /var/runtime/UserFunction.js\n- /var/runtime/Runtime.js\n- /var/runtime/index.js",
  "stack": [
    "Runtime.ImportModuleError: Error: Cannot find module 'serverless'",
    "Require stack:",
    "- /var/runtime/UserFunction.js",
    "- /var/runtime/Runtime.js",
    "- /var/runtime/index.js",
    "    at _loadUserApp (/var/runtime/UserFunction.js:221:13)",
    "    at Object.module.exports.load (/var/runtime/UserFunction.js:279:17)",
    "    at Object.<anonymous> (/var/runtime/index.js:43:34)",
    "    at Module._compile (internal/modules/cjs/loader.js:1085:14)",
    "    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)",
    "    at Module.load (internal/modules/cjs/loader.js:950:32)",
    "    at Function.Module._load (internal/modules/cjs/loader.js:790:12)",
    "    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:75:12)",
    "    at internal/main/run_main_module.js:17:47"
  ]
}
I am unable to figure out the root cause and a fix for this issue. How do I resolve this?

Related

Both AWS module and SSM/secrets layer added, but not found at run-time by lambda nodejs code?

This is the code:
/* global AWS */
AWS.config.update({ region: 'us-east-1' });
const ssm = new AWS.SSM();
console.log('Loading function...');
exports.handler = async (event, context) => {
  // console.log('Received event:', JSON.stringify(event, null, 2));
  console.log('value1 =', event.key1);
  console.log('value2 =', event.key2);
  console.log('value3 =', event.key3);
  return ssm.getParameters({
    Names: [`/my/dev/scrt`],
    WithDecryption: false,
  }).promise()
    .then(data => data.Parameters.length ? data.Parameters[0].Value : Promise.reject(new Error(`SSM Parameter was not set.`)))
    .then(plainsecret => {
      console.log(`the secret is ${plainsecret}`);
      return `${plainsecret} - ${event.key1}`; // Echo back the first key value
    });
};
The error:
Response
{
  "errorType": "ReferenceError",
  "errorMessage": "AWS is not defined",
  "trace": [
    "ReferenceError: AWS is not defined",
    "    at Object.<anonymous> (/var/task/index.js:2:1)",
    "    at Module._compile (internal/modules/cjs/loader.js:1085:14)",
    "    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)",
    "    at Module.load (internal/modules/cjs/loader.js:950:32)",
    "    at Function.Module._load (internal/modules/cjs/loader.js:790:12)",
    "    at Module.require (internal/modules/cjs/loader.js:974:19)",
    "    at require (internal/modules/cjs/helpers.js:101:18)",
    "    at _tryRequireFile (/var/runtime/UserFunction.js:72:32)",
    "    at _tryRequire (/var/runtime/UserFunction.js:160:20)",
    "    at _loadUserApp (/var/runtime/UserFunction.js:219:12)"
  ]
}
Function Logs
[AWS Parameters and Secrets Lambda Extension] 2023/02/14 17:45:26 PARAMETERS_SECRETS_EXTENSION_LOG_LEVEL is not present. Log level set to info.
I thought /* global AWS */ was how you made the AWS module available to Lambda. It seems it doesn't work? Also, I added the layer AWS-Parameters-and-Secrets-Lambda-Extension, version 4, merge order 1, so again I'm not sure why it's complaining.
Adding a /* global AWS */ comment to your code doesn't magically provide anything; it only tells a linter that AWS is a global. However, the AWS SDK for Node.js is included in the Node.js Lambda runtime. Also, you don't need the AWS-Parameters-and-Secrets-Lambda-Extension layer if you are just using the standard AWS SDK to pull a parameter value, as you appear to be doing.
Assuming you are using Node.js 16 or earlier, with the AWS SDK v2 (which is what your code looks like), you simply need to add the following line to the top of your file:
var AWS = require("aws-sdk");
If you are using Node.js 18, the runtime bundles the AWS SDK v3 instead, and you need to use its modular clients (e.g. @aws-sdk/client-ssm).
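For reference, the SDK v3 calling pattern looks like the sketch below. The two stub classes stand in for '@aws-sdk/client-ssm' so the shape of the calling code can be shown without network access or credentials; in real code you would `import { SSMClient, GetParameterCommand } from '@aws-sdk/client-ssm'` instead:

```typescript
// Stub stand-ins for '@aws-sdk/client-ssm' (illustrative only).
// The real client performs a signed HTTPS call to SSM.
class GetParameterCommand {
  constructor(public input: { Name: string; WithDecryption?: boolean }) {}
}

class SSMClient {
  async send(cmd: GetParameterCommand): Promise<{ Parameter?: { Value: string } }> {
    return { Parameter: { Value: `stub-value-for-${cmd.input.Name}` } };
  }
}

const ssm = new SSMClient();

// SDK v3 pattern: construct a command object, pass it to client.send().
export async function readParameter(name: string): Promise<string> {
  const out = await ssm.send(new GetParameterCommand({ Name: name, WithDecryption: true }));
  if (!out.Parameter) throw new Error(`SSM parameter ${name} was not set.`);
  return out.Parameter.Value;
}
```

The key difference from v2 is that there is no global `AWS` object and no `.promise()`: each service has its own package, and every operation is a command object passed to `send()`.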

Deploy layer through the cdk, "Cannot find module", but works when upload through aws console

When I upload a Lambda layer through the console, the module is found when the Lambda runs, but when I deploy the same layer through the CDK, it's not found.
lambda > Runtime.NODEJS_16_X
layer > node-fetch@2.6.7
Through the cdk:
{
  "errorType": "Runtime.ImportModuleError",
  "errorMessage": "Error: Cannot find module 'node-fetch'\nRequire stack:\n- /var/task/index.js\n- /var/runtime/index.mjs",
  "stack": [
    "Runtime.ImportModuleError: Error: Cannot find module 'node-fetch'",
    "Require stack:",
    "- /var/task/index.js",
    "- /var/runtime/index.mjs",
    "    at _loadUserApp (file:///var/runtime/index.mjs:1000:17)",
    "    at async Object.UserFunction.js.module.exports.load (file:///var/runtime/index.mjs:1035:21)",
    "    at async start (file:///var/runtime/index.mjs:1200:23)",
    "    at async file:///var/runtime/index.mjs:1206:1"
  ]
}
This is part of a bigger solution being deployed through a CDK pipeline, and it's getting blocked here.

How to run nextjs in AWS lambda with `experimental-edge` runtime

I'm trying to find a way to run Next.js (v13.0.6) with OG image generation logic (using @vercel/og) in AWS Lambda.
Everything works fine locally (in dev and prod mode), but when I try to execute the Lambda handler I get "statusCode": 500.
It only fails for APIs that involve ImageResponse (and runtime: 'experimental-edge', which is a requirement for @vercel/og).
I'm pretty sure the problem is caused by the Edge Runtime not being configured correctly.
Here is my handler code.
next build with next.config.js output: 'standalone' creates the folder .next/standalone.
Inside standalone, handler.js:
const { parse } = require('url');
const NextServer = require('next/dist/server/next-server').default;
const serverless = require('serverless-http');
const path = require('path');

process.env.NODE_ENV = 'production';
process.chdir(__dirname);

const currentPort = parseInt(process.env.PORT, 10) || 3000;

const nextServer = new NextServer({
  hostname: 'localhost',
  port: currentPort,
  dir: path.join(__dirname),
  dev: false,
  customServer: false,
  conf: {...} // copied from `server.js` in the same folder
});

const requestHandler = nextServer.getRequestHandler();

// This is an AWS Lambda handler that converts the Lambda event
// to an http request that the Next server can process.
const handler = serverless(async (req, res) => {
  // const parsedUrl = parse(req.url, true);
  try {
    await requestHandler(req, res);
  } catch (err) {
    console.error(err);
    res.statusCode = 500;
    res.end('internal server error');
  }
});

module.exports = {
  handler
};
I'm testing it locally with local-lambda, but getting similar results when testing against the AWS-deployed Lambda.
What is confusing is that server.js (in .next/standalone) has a similar setup; it only adds an http server on top of it.
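For comparison, the standalone server.js essentially does the same thing minus the Lambda adapter: it hands every request to the Next request handler from inside a plain Node http server. A rough sketch, with `requestHandler` stubbed in place of `nextServer.getRequestHandler()`:

```typescript
import * as http from 'node:http';
import type { IncomingMessage, ServerResponse } from 'node:http';

// Stub in place of nextServer.getRequestHandler(); illustrative only.
async function requestHandler(req: IncomingMessage, res: ServerResponse): Promise<void> {
  res.statusCode = 200;
  res.end(`ok: ${req.url}`);
}

// Roughly what .next/standalone/server.js does: a plain http server
// delegating every request to the Next.js request handler, with the
// same 500 fallback as the Lambda handler above.
const server = http.createServer(async (req, res) => {
  try {
    await requestHandler(req, res);
  } catch (err) {
    console.error(err);
    res.statusCode = 500;
    res.end('internal server error');
  }
});

server.listen(0, () => {
  const { port } = server.address() as { port: number };
  console.log(`listening on ${port}`);
});
```

So the difference between the two entry points is only the transport: server.js reads from a socket, while the Lambda handler has serverless-http synthesize req/res objects from the event.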
update:
AWS Lambda logs show
ERROR [Error [CompileError]: WebAssembly.compile(): Compiling function #64 failed: invalid value type 'Simd128', enable with --experimental-wasm-simd @+3457 ]
update 2:
the first error was fixed by selecting Node 16 for the AWS Lambda; now I'm getting this error:
{
  "errorType": "Error",
  "errorMessage": "write after end",
  "trace": [
    "Error [ERR_STREAM_WRITE_AFTER_END]: write after end",
    "    at new NodeError (node:internal/errors:372:5)",
    "    at ServerlessResponse.end (node:_http_outgoing:846:15)",
    "    at ServerlessResponse.end (/var/task/node_modules/next/dist/compiled/compression/index.js:22:783)",
    "    at NodeNextResponse.send (/var/task/node_modules/next/dist/server/base-http/node.js:93:19)",
    "    at NextNodeServer.handleRequest (/var/task/node_modules/next/dist/server/base-server.js:332:47)",
    "    at processTicksAndRejections (node:internal/process/task_queues:96:5)",
    "    at async /var/task/index.js:34:5"
  ]
}
At the moment of writing, Vercel's runtime: 'experimental-edge' seems to be unstable (I ran into multiple issues with it).
I ended up recreating the @vercel/og lib without the wasm and Next.js dependencies, and simply using that in AWS Lambda. It depends on @resvg/resvg-js instead of the wasm version, which uses native binaries, so there should not be much performance loss compared to wasm.

socketio-jwt error in building typescript when importing library [Cannot find namespace 'SocketIO'.]

Hello guys, I am working on a NestJS v9 app that has chat functionality, and I want to be able to authorize the tokens sent by the user while creating the websocket.
The app has a build script: "build": "npm run remove-build && tsc -p tsconfig.build.json"
When I run this script using yarn build,
it gives me the error:
yarn run build stdout:
$ npm run remove-build && tsc -p tsconfig.build.json
> remove-build
> rm -rf dist
../../node_modules/socketio-jwt/types/index.d.ts(26,14): error TS2503: Cannot find namespace 'SocketIO'.
../../node_modules/socketio-jwt/types/index.d.ts(63,38): error TS2503: Cannot find namespace 'SocketIO'.
I found that if I remove this line
// import { authorize } from 'socketio-jwt';
the error disappears, but unfortunately I need to use this authorize method.
So my question is: how can I solve this issue?
[NOTE] I have these packages inside my package.json:
"socket.io": "^4.5.2",
"socket.io-redis": "^6.1.1",
"socketio-jwt": "^4.6.2",
the client should be connecting using this code
const socket = io('http://localhost:1080', {
  transports: ['websocket'],
  query: {
    token: TOKEN,
  },
});
the server should authorize the socket using
import { authorize } from 'socketio-jwt';

public createIOServer(port: number, options?: ServerOptions): any { ...
  const server = super.createIOServer(port, options);
  server.adapter(this.redisAdapter);
  server.use(
    authorize({
      decodedPropertyName: 'token',
      handshake: true,
      secret: secret,
    }) as any,
  );
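One commonly used workaround when a third-party package ships type declarations that reference the removed SocketIO global namespace is to stop type-checking declaration files with skipLibCheck. This is a general TypeScript option, not something specific to socketio-jwt; a minimal tsconfig.build.json sketch (assuming it extends your base tsconfig.json):

```json
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "skipLibCheck": true
  }
}
```

With skipLibCheck enabled, tsc skips type-checking all .d.ts files, including node_modules/socketio-jwt/types/index.d.ts where the TS2503 errors originate, while your own .ts sources are still fully checked.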

Cannot find module 'handler' when running simple lambda deployed by serverless

I am trying to start learning serverless/Lambda, so I created a simple Lambda and deployed it with the Serverless framework, which worked.
However, when I want to test the endpoint of the Lambda, I get a 502 back. When I look in the logs, it tells me that it cannot find the module handler, which does not make any sense...
Here is the log:
{
  "errorType": "Runtime.ImportModuleError",
  "errorMessage": "Error: Cannot find module 'handler'\nRequire stack:\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js",
  "trace": [
    "Runtime.ImportModuleError: Error: Cannot find module 'handler'",
    "Require stack:",
    "- /var/runtime/UserFunction.js",
    "- /var/runtime/index.js",
    "    at _loadUserApp (/var/runtime/UserFunction.js:100:13)",
    "    at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)",
    "    at Object.<anonymous> (/var/runtime/index.js:43:30)",
    "    at Module._compile (internal/modules/cjs/loader.js:1158:30)",
    "    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1178:10)",
    "    at Module.load (internal/modules/cjs/loader.js:1002:32)",
    "    at Function.Module._load (internal/modules/cjs/loader.js:901:14)",
    "    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:74:12)",
    "    at internal/main/run_main_module.js:18:47"
  ]
}
This normally means that Lambda cannot find the function that is the starting point of execution.
For example, in your serverless.yml you can have something like this:
functions:
  getUsers:
    handler: userFile.hello
This means you are required to have a userFile in the same folder as the serverless.yml, with the function hello exported:
module.exports.hello = async event => {
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'Go Serverless v1.0! Your function executed successfully!',
        input: event,
      },
      null,
      2
    ),
  };
};
Note that the exported function does not need to be named handler; it just needs to have the same name as defined in the serverless.yml.
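The way the handler string is interpreted can be sketched as "split on the last dot: the left part is the module path, the right part is the exported function name". The helper below is illustrative only, not the actual runtime or Serverless framework source:

```typescript
// Illustrative sketch of how a handler string like "userFile.hello" is
// interpreted by the Lambda runtime: module path before the last dot,
// exported function name after it.
function parseHandlerString(handler: string): { file: string; fn: string } {
  const dot = handler.lastIndexOf('.');
  if (dot < 0) throw new Error(`Bad handler string: ${handler}`);
  return { file: handler.slice(0, dot), fn: handler.slice(dot + 1) };
}

// e.g. parseHandlerString('userFile.hello') yields
// { file: 'userFile', fn: 'hello' }: require('./userFile') must
// export a function named hello, or you get Runtime.ImportModuleError.
```

So "Cannot find module 'handler'" means the runtime was told the module itself is named handler (e.g. a handler string like "handler.hello") and could not find that file in the deployment package.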
I ran into the same error when launching a Lambda locally using AWS SAM with WebStorm.
It turned out a previous run had not correctly stopped and destroyed the Docker container running the Lambda. Stopping and removing said Docker container fixed the problem for me.
