Both AWS module and SSM/secrets layer added, but not found at run-time by lambda nodejs code? - aws-lambda

This is the code:
/* global AWS */
AWS.config.update({ region: 'us-east-1' });
const ssm = new AWS.SSM();

console.log('Loading function...');

exports.handler = async (event, context) => {
    // console.log('Received event:', JSON.stringify(event, null, 2));
    console.log('value1 =', event.key1);
    console.log('value2 =', event.key2);
    console.log('value3 =', event.key3);
    return ssm.getParameters({
        Names: ['/my/dev/scrt'],
        WithDecryption: false,
    }).promise()
        .then(data => data.Parameters.length ? data.Parameters[0].Value : Promise.reject(new Error('SSM Parameter was not set.')))
        .then(plainsecret => {
            console.log(`the secret is ${plainsecret}`);
            return `${plainsecret} - ${event.key1}`; // Echo back the first key value
        });
};
The error:
Response
{
"errorType": "ReferenceError",
"errorMessage": "AWS is not defined",
"trace": [
"ReferenceError: AWS is not defined",
" at Object.<anonymous> (/var/task/index.js:2:1)",
" at Module._compile (internal/modules/cjs/loader.js:1085:14)",
" at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)",
" at Module.load (internal/modules/cjs/loader.js:950:32)",
" at Function.Module._load (internal/modules/cjs/loader.js:790:12)",
" at Module.require (internal/modules/cjs/loader.js:974:19)",
" at require (internal/modules/cjs/helpers.js:101:18)",
" at _tryRequireFile (/var/runtime/UserFunction.js:72:32)",
" at _tryRequire (/var/runtime/UserFunction.js:160:20)",
" at _loadUserApp (/var/runtime/UserFunction.js:219:12)"
]
}
Function Logs
[AWS Parameters and Secrets Lambda Extension] 2023/02/14 17:45:26 PARAMETERS_SECRETS_EXTENSION_LOG_LEVEL is not present. Log level set to info.
I thought /* global AWS */ was how you made the aws module available to Lambda. It seems it doesn't work? Also, I added the layer AWS-Parameters-and-Secrets-Lambda-Extension, version 4, merge order 1, so again I'm not sure why it's complaining.

Adding a comment of /* global AWS */ in your code doesn't magically provide anything; it is only a directive for linters such as ESLint. However, the AWS SDK for NodeJS is included in the NodeJS Lambda runtime. Also, you don't need the AWS-Parameters-and-Secrets-Lambda-Extension if you are just using the standard AWS SDK to pull a secret value, as you appear to be doing.
Assuming you are using NodeJS 16 or earlier with the AWS SDK v2 (which is what your code looks like), then you simply need to add the following line to the top of your file:
var AWS = require("aws-sdk");
If you are using NodeJS 18, then it comes with the AWS SDK v3, and you need to follow this guide to use it.

Related

How to run nextjs in AWS lambda with `experimental-edge` runtime

I'm trying to find a way to run Next.js (v13.0.6) with OG image generation logic (using @vercel/og) in AWS Lambda.
Everything works fine locally (in dev and prod mode), but when I try to execute the lambda handler I get "statusCode": 500.
It only fails for APIs that involve ImageResponse (and runtime: 'experimental-edge', a requirement for @vercel/og).
I'm pretty sure the problem is caused by the Edge Runtime not being configured correctly.
Here is my handler code.
next build with next.config.js output: 'standalone' creates the folder .next/standalone.
Inside standalone, handler.js:
const { parse } = require('url');
const NextServer = require('next/dist/server/next-server').default;
const serverless = require('serverless-http');
const path = require('path');

process.env.NODE_ENV = 'production';
process.chdir(__dirname);

const currentPort = parseInt(process.env.PORT, 10) || 3000;

const nextServer = new NextServer({
    hostname: 'localhost',
    port: currentPort,
    dir: path.join(__dirname),
    dev: false,
    customServer: false,
    conf: {...} // copied from `server.js` in the same folder
});

const requestHandler = nextServer.getRequestHandler();

// this is an AWS Lambda handler that converts the Lambda event
// to an HTTP request that the Next server can process
const handler = serverless(async (req, res) => {
    // const parsedUrl = parse(req.url, true);
    try {
        await requestHandler(req, res);
    } catch (err) {
        console.error(err);
        res.statusCode = 500;
        res.end('internal server error');
    }
});

module.exports = { handler };
I am testing it locally with local-lambda, but getting similar results when testing against the AWS-deployed lambda.
What is confusing is that server.js (in .next/standalone) has a similar setup; it just adds an HTTP server on top of it.
update:
aws lambda logs show
ERROR [Error [CompileError]: WebAssembly.compile(): Compiling function #64 failed: invalid value type 'Simd128', enable with --experimental-wasm-simd @+3457 ]
update 2:
the first error was fixed by selecting Node 16 for AWS lambda, now getting this error
{
"errorType": "Error",
"errorMessage": "write after end",
"trace": [
"Error [ERR_STREAM_WRITE_AFTER_END]: write after end",
" at new NodeError (node:internal/errors:372:5)",
" at ServerlessResponse.end (node:_http_outgoing:846:15)",
" at ServerlessResponse.end (/var/task/node_modules/next/dist/compiled/compression/index.js:22:783)",
" at NodeNextResponse.send (/var/task/node_modules/next/dist/server/base-http/node.js:93:19)",
" at NextNodeServer.handleRequest (/var/task/node_modules/next/dist/server/base-server.js:332:47)",
" at processTicksAndRejections (node:internal/process/task_queues:96:5)",
" at async /var/task/index.js:34:5"
]
}
At the moment of writing, Vercel's runtime: 'experimental-edge' seems to be unstable (I ran into multiple issues with it).
I ended up recreating the @vercel/og lib without the wasm and next.js dependencies; it can be found here,
and I simply use it in AWS Lambda. It depends on @resvg/resvg-js instead of the wasm version, which uses binaries, so there should not be much perf loss compared to wasm.

Deploy NestJS to AWS Lambda

I have a REST API built using NestJS and I'm trying to deploy this to AWS Lambda.
I've created a file called serverless.ts in the src directory of my app -
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import serverlessExpress from '@vendia/serverless-express';
import { Handler, Callback, Context } from 'aws-lambda';

let server: Handler;

async function bootstrap() {
    const app = await NestFactory.create(AppModule);
    await app.init();
    const expressApp = app.getHttpAdapter().getInstance();
    return serverlessExpress({ app: expressApp });
}
bootstrap();

export const handler: Handler = async (
    event: any,
    context: Context,
    callback: Callback,
) => {
    server = server ?? (await bootstrap());
    return server(event, context, callback);
};
Now, when I tried deploying this app/handler to AWS Lambda using the Serverless framework, it failed a couple of times due to the package size (Lambda limits it to 250 MB).
The next option I had was to use docker and Elastic Container Registry (AWS ECR) to upload an image to Lambda.
Dockerfile -
FROM public.ecr.aws/lambda/nodejs:14
COPY package*.json ${LAMBDA_TASK_ROOT}
RUN npm install
COPY . ${LAMBDA_TASK_ROOT}
RUN npm run build
CMD [ "dist/serverless.handler" ]
I build this image and push to ECR Repository using the following commands -
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
docker tag e9ae3c220b23 aws_account_id.dkr.ecr.region.amazonaws.com/my-repository:tag
docker push aws_account_id.dkr.ecr.region.amazonaws.com/my-repository:tag
The push is successful now. Then I import this image into AWS Lambda and add an API Gateway trigger to generate an HTTP endpoint.
When I try to access this endpoint, it says -
{ message: Internal server error }
When I view logs in Cloudwatch, this is what I see -
{
"errorType": "Runtime.ImportModuleError",
"errorMessage": "Error: Cannot find module 'serverless'\nRequire stack:\n- /var/runtime/UserFunction.js\n- /var/runtime/Runtime.js\n- /var/runtime/index.js",
"stack": [
"Runtime.ImportModuleError: Error: Cannot find module 'serverless'",
"Require stack:",
"- /var/runtime/UserFunction.js",
"- /var/runtime/Runtime.js",
"- /var/runtime/index.js",
" at _loadUserApp (/var/runtime/UserFunction.js:221:13)",
" at Object.module.exports.load (/var/runtime/UserFunction.js:279:17)",
" at Object. (/var/runtime/index.js:43:34)",
" at Module._compile (internal/modules/cjs/loader.js:1085:14)",
" at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)",
" at Module.load (internal/modules/cjs/loader.js:950:32)",
" at Function.Module._load (internal/modules/cjs/loader.js:790:12)",
" at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:75:12)",
" at internal/main/run_main_module.js:17:47"
]
}
I am unable to figure out the root cause and a fix for this issue. How do I resolve this?

How to refer node_modules created in AWS Layers to a Lambda using serverless?

I want to run a lambda function and use AWS layers as a node_module repository.
This is the service in serverless that creates the layers:
service: lambda-layes-utils
frameworkVersion: '2'

provider:
  name: aws
  lambdaHashingVersion: 20201221

layers:
  nodemodules:
    path: node_modules
  tools:
    path: tools
  utils:
    path: utils

resources:
  Outputs:
    NodemodulesLayerExport:
      Value:
        Ref: NodemodulesLambdaLayer
      Export:
        Name: NodemodulesLambdaLayer
    ToolsLayerExport:
      Value:
        Ref: ToolsLambdaLayer
      Export:
        Name: ToolsLambdaLayer
    UtilsLayerExport:
      Value:
        Ref: UtilsLambdaLayer
      Export:
        Name: UtilsLambdaLayer
In this project I am creating 2 other layers, tools and utils, but they are not important for now.
The layer that will replace node_modules is nodemodules.
These are the dependencies in the package.json that create that layer:
"dependencies": {
    "@aws-sdk/client-s3": "^3.53.1",
    "@hapi/joi": "^17.1.1",
    "aws-sdk": "^2.950.0",
    "axios": "^0.21.1",
    "mongodb": "^4.4.1",
    "mysql2": "^2.2.5",
    "mysql2-promise": "^0.1.4",
    "querystring": "^0.2.1",
    "serverless-iam-roles-per-function": "^3.2.0",
    "serverless-plugin-log-retention": "^2.0.0",
    "uuid": "^8.3.2",
    "uuidv4": "^6.2.12"
},
So, I am expecting to have uuidv4 available in the Lambda that will use the layer.
In AWS I have the following lambda, Layer-S3, which I am using to test the nodemodules layer:
const AWS = require('aws-sdk');
const S3 = new AWS.S3();
const { v4: uuidv4 } = require('uuid');

exports.handler = async (event) => {
    const keyName = uuidv4() + '.json';
    const objectParams = { Bucket: 'aacertificates', Key: keyName, Body: 'tespaylod' };
    return S3.putObject(objectParams).promise();
};
When I run the test I am getting the following error:
undefined ERROR Uncaught Exception {"errorType":"Runtime.ImportModuleError","errorMessage":"Error: Cannot find module 'uuid'\nRequire stack:\n- /var/task/index.js\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js","stack":["Runtime.ImportModuleError: Error: Cannot find module 'uuid'","Require stack:","- /var/task/index.js","- /var/runtime/UserFunction.js","- /var/runtime/index.js"," at _loadUserApp (/var/runtime/UserFunction.js:202:13)"," at Object.module.exports.load (/var/runtime/UserFunction.js:242:17)"," at Object.<anonymous> (/var/runtime/index.js:43:30)"," at Module._compile (internal/modules/cjs/loader.js:1085:14)"," at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)"," at Module.load (internal/modules/cjs/loader.js:950:32)"," at Function.Module._load (internal/modules/cjs/loader.js:790:12)"," at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:76:12)"," at internal/main/run_main_module.js:17:47"]}
It seems that the Lambda can't find the module uuid.
In my lambda's layers section I am referencing the correct layer / version.
What am I missing here?
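One thing worth checking (an assumption on my part, not something confirmed in the post): for the Node.js runtimes, Lambda unpacks layers under /opt, and only /opt/nodejs/node_modules is on the module search path. That means the layer zip has to carry a nodejs/node_modules/ prefix rather than the packages at its root, roughly:

```
nodemodules-layer.zip
└── nodejs/
    └── node_modules/
        ├── uuid/
        ├── aws-sdk/
        └── ...
```

With path: node_modules in serverless.yml, the packages may end up at /opt/node_modules instead, where require() will not find them.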
Update:
I just realized that when the layer is created using Serverless, it does not set the Compatible runtimes. When I create the layer manually and set the Compatible runtimes to nodejs14.x, it works.
Using runtime: nodejs14.x in serverless.yml does not work either.
provider:
  name: aws
  lambdaHashingVersion: 20201221
  runtime: nodejs14.x
So the question seems to be how to set Compatible runtimes from Serverless.
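For what it's worth, the Serverless Framework documents a `compatibleRuntimes` key on layer definitions, which should be the declarative equivalent of setting Compatible runtimes in the console — a sketch reusing the layer name from the config above:

```yaml
layers:
  nodemodules:
    path: node_modules
    compatibleRuntimes:
      - nodejs14.x
```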

Cannot find module 'handler' when running simple lambda deployed by serverless

I am trying to start learning serverless / lambda, so I created a simple lambda and deployed it with serverless, which worked.
However, when I want to test the endpoint of the lambda I get a 502 back. When I look in the logs it tells me that it cannot find the module handler, which does not make any sense...
here is the log:
{
"errorType": "Runtime.ImportModuleError",
"errorMessage": "Error: Cannot find module 'handler'\nRequire stack:\n-
/var/runtime/UserFunction.js\n- /var/runtime/index.js",
"trace": [
"Runtime.ImportModuleError: Error: Cannot find module 'handler'",
"Require stack:",
"- /var/runtime/UserFunction.js",
"- /var/runtime/index.js",
" at _loadUserApp (/var/runtime/UserFunction.js:100:13)",
" at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)",
" at Object.<anonymous> (/var/runtime/index.js:43:30)",
" at Module._compile (internal/modules/cjs/loader.js:1158:30)",
" at Object.Module._extensions..js (internal/modules/cjs/loader.js:1178:10)",
" at Module.load (internal/modules/cjs/loader.js:1002:32)",
" at Function.Module._load (internal/modules/cjs/loader.js:901:14)",
" at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:74:12)",
" at internal/main/run_main_module.js:18:47"
]
}
This normally means that it cannot find the function that is the entry point to execute.
For example, in your serverless.yml you can have something like this:
functions:
  getUsers:
    handler: userFile.handler
This would mean that a userFile is required in the same folder as the serverless.yml, with the referenced function exported:
module.exports.handler = async event => {
    return {
        statusCode: 200,
        body: JSON.stringify(
            {
                message: 'Go Serverless v1.0! Your function executed successfully!',
                input: event,
            },
            null,
            2
        ),
    };
};
Note that the exported function does not need to be named handler; it just needs to match the name defined in serverless.yml.
I ran into the same error when launching a lambda locally using AWS SAM, with WebStorm.
It turns out a previous run had not correctly stopped and destroyed the Docker container running the lambda. Stopping and removing said Docker container fixed the problem for me.

TypeError: azure.DocumentDbClient is not a constructor

I'm building a bot to connect to Azure Cosmos DB using Node SDK with the following dependencies:
"dependencies": {
    "botbuilder": "~4.6.2",
    "botbuilder-azure": "^4.6.2"
},
This is the code that I copied from this official tutorial. The tutorial is for SDK v3; unfortunately, there is no official tutorial for v4 for this configuration.
var azure = require('botbuilder-azure');

var documentDbOptions = {
    host: <secret>,
    masterKey: <secret>,
    database: 'database',
    collection: 'collection'
};

var docDbClient = new azure.DocumentDbClient(documentDbOptions);
var cosmosStorage = new azure.AzureBotStorage({ gzipData: false }, docDbClient);
Here is the full exception stack:
evandro@mypc:~/Projects/pluralsight-bot$ npm start
> pluralsight-bot@1.0.0 start /home/evandro/Projects/pluralsight-bot
> node ./index.js
/home/evandro/Projects/pluralsight-bot/index.js:28
var docDbClient = new azure.DocumentDbClient(documentDbOptions);
^
TypeError: azure.DocumentDbClient is not a constructor
at Object.<anonymous> (/home/evandro/Projects/pluralsight-bot/index.js:28:19)
at Module._compile (internal/modules/cjs/loader.js:959:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:995:10)
at Module.load (internal/modules/cjs/loader.js:815:32)
at Function.Module._load (internal/modules/cjs/loader.js:727:14)
at Function.Module.runMain (internal/modules/cjs/loader.js:1047:10)
at internal/main/run_main_module.js:17:11
That tutorial you linked is dated 12/12/2017. In Bot Framework terms, that's very out-of-date, especially since it's for v3 and not v4. At the top, it has a link to v4, although it only takes you to v4 of the docs, and not the article. Here's more or less the same article for v4.
And here's the relevant code:
const { CosmosDbPartitionedStorage } = require("botbuilder-azure");

[...]

// Initialized to access values in .env file.
const ENV_FILE = path.join(__dirname, '.env');
require('dotenv').config({ path: ENV_FILE });

// Create local Memory Storage - commented out.
// var storage = new MemoryStorage();

// Create access to CosmosDb Storage - this replaces local Memory Storage.
var storage = new CosmosDbPartitionedStorage({
    cosmosDbEndpoint: process.env.DB_SERVICE_ENDPOINT,
    authKey: process.env.AUTH_KEY,
    databaseId: process.env.DATABASE_ID,
    containerId: process.env.CONTAINER
})
Note: If you're using an existing database that is not partitioned, you'll want to use CosmosDbStorage and not CosmosDbPartitionedStorage. Also, the example in the docs incorrectly imports CosmosDbStorage instead of CosmosDbPartitionedStorage. I've submitted a PR to fix that.