I would like to know whether it is safe to make HTTP(S) requests during the Init phase of a Node.js Lambda function. In particular, I would like to make calls to AWS SSM GetParameter using @aws-sdk/client-ssm or AWS KMS Decrypt using @aws-sdk/client-kms to load secrets that will be used within the handler.
I have found one example online of someone creating a Promise outside of the handler and then awaiting it within the handler (Async Initialisation of a Lambda Handler), but I haven’t seen this approach endorsed in the official Lambda sample applications. None of the official examples do any work outside of the handler.
According to AWS Lambda execution environment: Lambda execution environment lifecycle, "Lambda freezes the execution environment when the runtime and each extension have completed and there are no pending events." AWS Lambda Runtime API: Next invocation elaborates on the http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next endpoint: "Do not set a timeout on the GET call. Between when Lambda bootstraps the runtime and when the runtime has an event to return, the runtime process may be frozen for several seconds." I take this to mean that Lambda may signal the process with SIGSTOP while it waits for the next invocation, both between requests and, when Provisioned Concurrency is enabled, after initialization completes.
In addition, when I look at lambci's Lambda sources (docker run -it --rm lambci/lambda:build-nodejs12.x cat /var/runtime/Runtime.js), I see that scheduleIteration calls setImmediate(() => this.handleOnce()…), which calls this.client.nextInvocation, so I don't see a way to delay the nextInvocation call.
Questions:
In the Lambda Node.js runtime, is it possible to perform an HTTP request and await its response entirely within the function init phase?
If you make a request outside of the handler, will the server time out the connection, resulting in Connection Closed errors when the handler awaits the response?
Is there a better recommended way to perform one-time initialization of secrets?
While I'm not able to fully answer all of your questions, I found this blog post that describes a possible solution: https://barker.tech/aws-lambda-nodejs-async-init-88b5c24af567
So in the end, the answers would be:
Yes, it is possible - see the linked blog post (and the sketch below this list).
Yes, it will time out. At least, that is what I experienced when I tried to establish a MongoDB connection outside the function handler with Provisioned Concurrency configured.
I can't really help you with that one...
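For reference, the pattern from the linked post looks roughly like this (a minimal sketch, assuming @aws-sdk/client-ssm v3 and a hypothetical parameter name):

const { SSMClient, GetParameterCommand } = require("@aws-sdk/client-ssm");

const ssm = new SSMClient({});
// Start the request during the init phase; do not await at module scope in CommonJS.
const secretPromise = ssm.send(
  new GetParameterCommand({ Name: "/my-app/secret", WithDecryption: true })
);

exports.handler = async (event) => {
  // The first invocation waits for the in-flight request; warm invocations
  // get the already-resolved promise immediately.
  const secret = (await secretPromise).Parameter.Value;
  return { statusCode: 200, body: "ok" };
};

One caveat: if the init-phase request fails, the rejected promise is cached for the lifetime of the container, so you may want a retry wrapper around it.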
AWS recently added top-level await support for Node.js 14 and newer Lambda runtimes: https://aws.amazon.com/blogs/compute/using-node-js-es-modules-and-top-level-await-in-aws-lambda/ . With this, you can simply make the init phase wait by using top-level await, like so:
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

console.log("start init");
await sleep(1000);
console.log("end init");

export const handler = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify('Hello from Lambda!'),
  };
};
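Applied to the original question, the same idea with a real call might look like this (a sketch, assuming @aws-sdk/client-kms v3; SECRET_CIPHERTEXT_B64 is a hypothetical environment variable holding the base64-encoded ciphertext):

import { KMSClient, DecryptCommand } from "@aws-sdk/client-kms";

const kms = new KMSClient({});
// Top-level await: the init phase blocks here until KMS returns the plaintext.
const { Plaintext } = await kms.send(
  new DecryptCommand({
    CiphertextBlob: Buffer.from(process.env.SECRET_CIPHERTEXT_B64, "base64"),
  })
);
const secret = Buffer.from(Plaintext).toString("utf8");

export const handler = async (event) => ({
  statusCode: 200,
  body: JSON.stringify('Hello from Lambda!'),
});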
This works great if you are using ES modules. If for some reason you are stuck using CommonJS (e.g. because tooling like jest or ts-node doesn't yet fully support ES modules), then you can make your CommonJS module look like an ES module by having it export a Promise that waits on your initialization, rather than exporting an object. Like so:
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

const main = async () => {
  console.log("start init");
  await sleep(1000);
  console.log("end init");

  const handler = async (event) => {
    return {
      statusCode: 200,
      body: JSON.stringify('Hello from Lambda!'),
    };
  };

  return { handler };
};

// Note we aren't exporting main here, but rather the result
// of calling main(), which is a promise resolving to { handler }:
module.exports = main();
I was able to start a state machine from a lambda in Serverless v2 using this technique:
const { SFNClient, StartSyncExecutionCommand } = require("@aws-sdk/client-sfn");

const request = {
  data: someDataGoesHere
};

const params = {
  stateMachineArn: process.env.statemachine_arn,
  input: JSON.stringify(request),
  name: uniqueNameGoesHere,
};

const steps = new SFNClient({ region: "us-east-1" });
const command = new StartSyncExecutionCommand(params);

console.log("Starting State Machine", params);
const result = await steps.send(command);
console.log("Back from State Machine", result);
After upgrading the Serverless Framework to version 3, this code fails silently: the call to steps.send(command) never returns and the Lambda times out (so "Back from State Machine" is never written to the Lambda's log). No entry is created in the CloudWatch logs for the step function, so there doesn't appear to be any way to figure out what went wrong. I have verified that stateMachineArn is set correctly.
I have tried removing and re-deploying the entire stack, but still can't start the step function. Any debugging advice would be appreciated!
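Not a full answer, but one low-effort way to get more visibility (a sketch; logger is a standard AWS SDK v3 client option):

const { SFNClient } = require("@aws-sdk/client-sfn");

// A console-like logger makes the v3 client log each request and response,
// which helps distinguish "request never sent" from "response never received".
const steps = new SFNClient({ region: "us-east-1", logger: console });

A hang with no error and no downstream log entry often points at networking (e.g. a VPC-attached function that can no longer reach the Step Functions endpoint), so that is also worth ruling out.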
How can I recursively call lambda inside itself in sam local environment?
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

exports.foo = async (event, context) => {
  // .......
  await lambda.invoke({
    FunctionName: context.functionName,
    InvocationType: 'Event',
    Payload: JSON.stringify({ /* .... */ }),
  }).promise();
};
This apparently does not work.
EDIT
My use case is splitting the data to prevent timeouts.
The payload contains a page number; the Lambda fetches data from an API for that page and puts it into DynamoDB.
Returning a result to the caller is not important, so asynchronous invocation is fine.
If you need to call another function, I would recommend using LocalStack, which has better support for running Lambda functions locally and for calls between them.
sam local is fine if you are working only with the function itself, but once you integrate with S3, DynamoDB, SQS, or other Lambdas, it is better to use LocalStack.
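For example, inside a function running against LocalStack you can point the SDK's Lambda client at the local edge endpoint (a sketch; 4566 is LocalStack's default edge port, and LOCALSTACK_HOSTNAME is the variable LocalStack sets for code running inside it):

const AWS = require('aws-sdk');

// Route Lambda API calls to LocalStack instead of real AWS.
const lambda = new AWS.Lambda({
  endpoint: process.env.LOCALSTACK_HOSTNAME
    ? `http://${process.env.LOCALSTACK_HOSTNAME}:4566`
    : 'http://localhost:4566',
});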
I am trying to deploy a simple Slack Lambda API which uses the @slack/client library to remove members and pinned messages from a specific channel. The issue I am running into is that the function executes fine and removes the channel members, but my Lambda function keeps returning:
HTTP/1.1 502 Bad Gateway
...
X-Cache: Error from cloudfront
...
{
"message": "Internal server error"
}
as the response body. When I check the logs using sls logs -f api, I don't see any errors there either; I see the console.log output of my function executing successfully.
My serverless.yml is as follows:
provider:
  name: aws
  runtime: nodejs10.x
  profile: serverless

functions:
  api:
    handler: handler.api
    timeout: 30
    events:
      - http:
          method: POST
          path: clean
And my API code is as follows (I have removed the unrelated function code, as those parts are doing their work):
module.exports.api = async (event, context, callback) => {
  let channel = JSON.parse(event.body).ctf;
  let id = await findChannelId(channel);
  removeMembersFromChannel(id[0]).then(() => {
    removePinsFromChannel(id[0]).then(() => {
      callback(null, {
        statusCode: 200,
        body: JSON.stringify({
          message: `Cleaned ${channel} ${id}`,
        }, null, 2),
      });
    });
  });
};
Things I have tried:
returning the response instead of using the callback
using promises and async await
testing the function locally using sls invoke local
Most of my searching shows that this could be a permission issue, but all the references are for S3 usage, which is something I am not using.
Questions
Why am I getting this error, and how can I resolve it?
After referencing this, I am using JSON.stringify in the handler function. Using the Serverless Framework, how can I avoid using Lambda proxy integration?
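On the second question, the Serverless Framework supports opting out of the default proxy integration per http event (a sketch; note that with lambda integration you become responsible for request/response mapping templates, which are not shown here):

functions:
  api:
    handler: handler.api
    events:
      - http:
          method: POST
          path: clean
          integration: lambda # the default is lambda-proxy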
Please add console.log statements for detailed logging via CloudWatch, and use X-Ray. Some typical problems with CloudFront:
- it can take a long time to propagate to edge locations (you may need to recreate your CDN)
- logs from Lambda@Edge are located in the invoked region
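For example, X-Ray tracing can be turned on from serverless.yml (a sketch of the provider section; these flags exist in recent Serverless Framework versions):

provider:
  name: aws
  runtime: nodejs10.x
  tracing:
    apiGateway: true # trace requests through API Gateway
    lambda: true     # enable active tracing on the functions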
Here's my setup:
A Python 3.6 lambda function, which I want to keep pre-warmed at a certain concurrency level (say, 10). The lambda's initialization is painful enough that I don't want to inflict this cost on visitors at random. I call these lambdas "workers".
A Node lambda function which runs every 5 minutes to try to pre-warm 10 instances. It uses the Event invocation type for 9 of them, and RequestResponse for 1. There's only either one or zero of this lambda running at any one time. I call this a "warmer".
I followed the guidelines at https://www.jeremydaly.com/lambda-warmer-optimize-aws-lambda-function-cold-starts/, namely:
Don’t ping more often than every 5 minutes
Invoke the function directly (i.e. don’t use API Gateway to invoke it)
Pass in a test payload that can be identified as such
Create handler logic that replies accordingly without running the whole function
Here's a problem: this works great for several minutes. Then, as I watch the logs, I start to get timeouts from my worker lambda invocations. The timeouts quickly take over all the invocations that the warmer is trying to launch.
Now, no worker lambdas are pre-warmed any more. But the warmer keeps trying, on a CloudWatch Events cron schedule, suffering 100% timeouts. Finally, Lambda stops trying to launch my worker lambdas at all. It feels like some aspect of Lambda is getting its state scrambled. The only way to recover is to re-deploy the lambda, which buys me another hour of pre-warmed lambdas working.
Questions:
How do I get visibility into why my worker lambdas start timing out, and then become completely non-responsive?
What is the definition of a "Concurrent Execution"? The main Lambda dashboard shows me a chart of them, yet it seems to report more than twice as many Concurrent Executions as I'm requesting.
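For context, AWS estimates concurrent executions as invocations per second multiplied by average duration in seconds, so slower invocations inflate the count. More likely relevant here: Event (asynchronous) invocations that fail or time out are retried up to two more times by default, so each timed-out warm-up request can show up as as many as three executions, which alone could explain seeing more than twice the requested concurrency.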
Here's the warmup lambda code (Node):
// warmer
"use strict";

/** Generated by Serverless WarmUP Plugin at ${new Date().toISOString()} */
const aws = require("aws-sdk");
aws.config.region = "${this.options.region}";

const lambda = new aws.Lambda({ httpOptions: { timeout: 60000 } });
const functionNames = ${JSON.stringify(functionNames)};
const delay = ms => new Promise(res => setTimeout(res, ms));
const concurrency = 10;

module.exports.warmUp = async (event, context, callback) => {
  console.log("Warm Up Start");
  const invokes = await Promise.all(functionNames.map(async (functionName) => {
    let invocations = [];
    try {
      for (let i = 1; i <= concurrency; i++) {
        let params = {
          FunctionName: functionName,
          InvocationType: (i === concurrency) ? 'RequestResponse' : 'Event',
          LogType: 'None',
          Qualifier: process.env.SERVERLESS_ALIAS || "$LATEST",
          Payload: JSON.stringify({
            source: 'serverless-plugin-warmup',
            '__WARMER_INVOCATION__': i,
            '__WARMER_CONCURRENCY__': concurrency,
            '__WARMER_REQUESTED__': new Date().toISOString(),
          })
        };
        invocations.push(lambda.invoke(params).promise());
      }
      return await delay(75).then(() => Promise.all(invocations.map(p => p.catch(e => e)))
        .then(results => console.log('results', results))
        .catch(e => {
          console.log(e);
          return e;
        }));
    } catch (e) {
      console.log(`Warm Up Invoke Error: ${functionName}`, e);
      return false;
    }
  }));
  console.log("Warm Up Finished");
};
And here's the worker lambda (Python):
import time

def handler(event, context):
    source = event.get('source')
    if source == 'serverless-plugin-warmup':
        # Reply to warm-up pings without running the real (slow) work.
        time.sleep(0.05)
        print(event)
        return lambda_gateway_response(200, {"status": "lambda warmup"})
    # ... normal handler work continues here ...
It was the warmer (Node) lambda going haywire, even though all the logs pointed at the worker (Python) lambdas. After setting context.callbackWaitsForEmptyEventLoop = false, the problem disappeared.
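For anyone hitting the same symptom, the fix is a single line at the top of the warmer handler (a sketch):

module.exports.warmUp = async (event, context) => {
  // Return as soon as the handler's promise resolves, instead of waiting
  // for lingering sockets/timers in the Node event loop to drain.
  context.callbackWaitsForEmptyEventLoop = false;

  // ... warm-up invocations as above ...
};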
I'm trying to call one Lambda function from another one that I have. I set up my permissions, so that is not the problem.
My problem is that the function doesn't wait for the Invoke call to complete and returns null all the time.
Here is the code I'm using:
const AWS = require('aws-sdk');

exports.handler = async (event, context, callback) => {
  var lambda = new AWS.Lambda({ region: 'us-east-1', apiVersion: '2015-03-31' });
  var params = {
    FunctionName: 'testFunction',
    InvocationType: 'RequestResponse'
  };
  lambda.invoke(params, function(err, data) {
    console.log(err);
    console.log('here');
  }).promise().then(data => { callback(null, { message: 'done' }); });
};
The {message:'done'} is never shown. It was recommended that I use invokeAsync, but that function has been deprecated by AWS.
I know the problem is that the function is not waiting for lambda.invoke to finish, because if I add callback(null, {message:'done'}); outside of the lambda.invoke call, then I can see the console.logs working.
Any help?
TL;DR - Remove "async" from the handler declaration on line 3, and it should work.
Your issue seems to be caused by the async keyword here. I have recreated this and deployed it to Lambda to run on Node v8.10 (pointing it at one of my own Lambda functions, of course).
Why are you using "async" here anyway? The async keyword declares an asynchronous function, which returns an AsyncFunction object. AWS Lambda expects a function, not an AsyncFunction, and your "null" result is probably just Lambda immediately giving up because it can't find a regular function. Also, async is almost exclusively used with await (at least it was in 99% of the cases I've seen), and since your code isn't using await at all, I don't see any reason to use async either.
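For completeness, the async variant also works if the invocation is actually awaited (a sketch of the same handler with that fix):

const AWS = require('aws-sdk');

exports.handler = async (event) => {
  const lambda = new AWS.Lambda({ region: 'us-east-1', apiVersion: '2015-03-31' });
  // Awaiting keeps the handler's promise pending until the invocation
  // finishes, so Lambda no longer returns null early.
  const data = await lambda.invoke({
    FunctionName: 'testFunction',
    InvocationType: 'RequestResponse'
  }).promise();
  return { message: 'done' };
};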