Why is my Lambda taking exactly 6 seconds to respond every time? - aws-lambda

I am using a Node.js environment with the Serverless Framework.
The service is an endpoint for a contact form submission. The code looks something like this.
I have two async calls: one writes to DynamoDB and the other sends an email via SES.
module.exports.blog = async (event, context, callback) => {
  const data = JSON.parse(event.body);
  const handler = 'AB';
  const sesParams = getSesParams(handler, data);

  if (typeof data.text !== 'string') {
    callback(null, validationErrRes);
    return;
  }

  try {
    await logToDB(handler, data);
  } catch (dbErr) {
    console.error(dbErr);
    callback(null, errRes(dbErr, 'Failed to log to DB'));
    return;
  }

  try {
    await SES.sendEmail(sesParams).promise();
  } catch (emailErr) {
    console.error(emailErr);
    callback(null, errRes(emailErr, 'Failed to send mail'));
    return;
  }

  callback(null, succsessResponse);
  return;
};
The response takes exactly 6 sec even though the DB put and sendEmail together take < 300 ms.
PS: Running both async calls in parallel does not help much.

Try removing the callback parameter from your function definition and the calls to it. Since yours is already an async function, you don't need a callback; just return the successResponse. You can also just return the error responses.
module.exports.blog = async (event, context) => {
and
return {
  statusCode: 200
}
and
return validationErrRes
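Putting those suggestions together, a minimal sketch of the handler with the callback removed (reusing getSesParams, logToDB, errRes, SES, and the response objects from the question; not the poster's exact final code):
const data = JSON.parse(event.body);

module.exports.blog = async (event) => {
  const data = JSON.parse(event.body);
  const handler = 'AB';
  const sesParams = getSesParams(handler, data);

  if (typeof data.text !== 'string') {
    return validationErrRes; // no callback: just return the response object
  }

  try {
    await logToDB(handler, data);
  } catch (dbErr) {
    console.error(dbErr);
    return errRes(dbErr, 'Failed to log to DB');
  }

  try {
    await SES.sendEmail(sesParams).promise();
  } catch (emailErr) {
    console.error(emailErr);
    return errRes(emailErr, 'Failed to send mail');
  }

  return { statusCode: 200 };
};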

Related

CodePipeline Lambda action never completes

Well, my lambda function works well according to the logs, but it never gets completed in the CodePipeline stage. I have already given the role permission to notify the pipeline ("codepipeline:PutJobSuccessResult",
"codepipeline:PutJobFailureResult") and even set the maximum time to 20 sec, but it is still not working (it actually ends at 800 ms).
const axios = require('axios')
const AWS = require('aws-sdk');

const url = 'www.exampleurl.com'

exports.handler = async (event, context) => {
  const codepipeline = new AWS.CodePipeline();
  const jobId = event["CodePipeline.job"].id;
  const stage = event["CodePipeline.job"].data.actionConfiguration.configuration.UserParameters;

  const putJobSuccess = function(message) {
    var params = {
      jobId: jobId
    };
    codepipeline.putJobSuccessResult(params, function(err, data) {
      if (err) { context.fail(err); }
      else { context.succeed(message); }
    });
  };

  const putJobFailure = function(message) {
    var params = {
      jobId: jobId,
      failureDetails: {
        message: JSON.stringify(message),
        type: 'JobFailed',
        externalExecutionId: context.invokeid
      }
    };
    codepipeline.putJobFailureResult(params, function(err, data) {
      if (err) console.log(err)
      context.fail(message);
    });
  };

  try {
    await axios.post(url, { content: stage })
    putJobSuccess('all fine')
  } catch (e) {
    putJobFailure(e)
  }
};
The root issue
Because Node.js runs these SDK calls asynchronously, codepipeline.putJobSuccessResult is executed asynchronously. The issue is that the Lambda function finishes its execution before codepipeline.putJobSuccessResult has a chance to complete.
The solution
Await the promise returned by codepipeline.putJobSuccessResult so that it is forced to complete before the handler returns its response to Lambda:
const putJobSuccess = function(id) {
  //await sleep(60);
  console.log("Telling CodePipeline test passed for job: " + id)
  var params = {
    jobId: id
  };
  return codepipeline.putJobSuccessResult(params, function(err, data) {
    if (err) {
      console.error(err)
    } else {
      console.log(data)
    }
  }).promise()
};

exports.lambdaHandler = async (event, context) => {
  ...
  await putJobSuccess(jobId)
  return response
};
Whenever I see this issue, most of the time it is due to 'PutJobSuccessResult' never being invoked. The best way to check is to go to CloudTrail > 'Event History' and look for 'Event name' = 'PutJobSuccessResult' in the time range when you expect the Lambda function to have called this API. If you do not find 'PutJobSuccessResult' there, have another look at the code and at the Lambda execution logs in CloudWatch.
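If you prefer to script that check, the same lookup can be done with the SDK's CloudTrail client; a rough sketch (the region and one-hour window are example values):

const AWS = require('aws-sdk');
const cloudtrail = new AWS.CloudTrail({ region: 'us-east-1' }); // example region

// Look for PutJobSuccessResult calls in the last hour (example window)
cloudtrail.lookupEvents({
  LookupAttributes: [
    { AttributeKey: 'EventName', AttributeValue: 'PutJobSuccessResult' }
  ],
  StartTime: new Date(Date.now() - 60 * 60 * 1000),
  EndTime: new Date()
}).promise()
  .then(res => console.log('PutJobSuccessResult events found:', res.Events.length))
  .catch(console.error);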

Async / await is not working (javascript / DynamoDB)

I have a DynamoDB Put request wrapped into an async function.
async function putter(param1, param2) {
  const paramsPut = {
    TableName: MyTableName,
    Item: {
      "hashKey": param1,
      "sortKey": param2,
    }
  };
  dynamodb.put(paramsPut, function(err, data) {
    if (err) {
      console.log("Failure")
      console.log(data)
      return data
    } else {
      console.log("Success")
      console.log(data)
      return data
    }
  });
};
The return for the async function is placed in the callback, which I thought should yield a promise that resolves once the put operation has been performed (either successfully or unsuccessfully).
I then invoke this async put function from another async function:
var param1 = "50";
var param2 = "60";

async function main() {
  await putter(param1, param2)
  console.log("Feedback received")
}
When I invoke this async main function, I would expect it to log the Success statement from the put function before writing "Feedback received", since it should await the put function's response. However, my console logs "Feedback received" before logging the "Success" statement I was awaiting.
What am I missing here? Thanks for your support!
Try changing your code as follows:
try {
  const data = await dynamodb.put(paramsPut).promise()
  console.log("Success")
  console.log(data)
  return data
} catch (err) {
  console.log("Failure", err.message)
  // there is no data here, you can return undefined or similar
}
Almost every function in the AWS SDK has a promise() variant that returns the result as a Promise, which you can then simply await. Don't mix callbacks with promises (async/await): it makes the code hard to read, and it's better to stick with one technique everywhere.
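For completeness, a sketch of the whole putter rewritten this way (assuming dynamodb is a DocumentClient and MyTableName is defined, as in the question):

async function putter(param1, param2) {
  const paramsPut = {
    TableName: MyTableName,
    Item: {
      "hashKey": param1,
      "sortKey": param2,
    }
  };
  try {
    // Await the promise() variant instead of passing a callback
    const data = await dynamodb.put(paramsPut).promise();
    console.log("Success");
    return data;
  } catch (err) {
    console.log("Failure", err.message);
  }
}

With this version, await putter(param1, param2) in main resolves only after the put has completed, so "Feedback received" is logged last.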

Scheduled Lambda function not able to make 3rd party API calls

I've got 3 functions:
1. A cron job Lambda function
2. An event-driven function which detects when a new record is added to DynamoDB
3. A reusable function which is currently called by the two functions above
The Cron job function
export async function scheduledFunction() {
  const detailsHistory = await sharedFunction(param1);
}
The event driven function
export async function eventFunction(event) {
  event.Records.forEach(async record => {
    if (record.eventName === 'INSERT') {
      await sharedFunction(param1)
    }
  })
}
The function called by both the event and scheduled functions
const sharedFunction = async (param1) => {
  const apiUrl = 'xxxxxx';
  const details = await axios.get(apiUrl, {
    headers: {
      'x-api-key': xxxx
    }
  });
}
The event function works when DynamoDB has a new insert and then calls the 3rd-party API, which works as expected.
The scheduled function fires every 4 hours, works, and gets to the sharedFunction, but when it gets to the API call await axios.get it just does nothing; I'm not getting any errors in CloudWatch. I've placed console.log()s before and after the call, and it logs the one before but nothing after.
You should always put async code inside a try ... catch block. Also, forEach won't work with promises; you will need to use a for loop. Try this:
export async function eventFunction(event) {
  try {
    for (let record of event.Records) {
      if (record.eventName === 'INSERT') {
        await sharedFunction(param1)
      }
    }
  } catch (err) {
    console.log(err);
    return err;
  }
}
Shared function:
const sharedFunction = async (param1) => {
  try {
    const apiUrl = 'xxxxxx';
    return await axios.get(apiUrl, {
      headers: {
        'x-api-key': xxxx
      }
    });
  } catch (err) {
    return err;
  }
}
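If the records don't have to be processed one at a time, a Promise.all variant is another option; a sketch under the same assumptions about sharedFunction and param1:

export async function eventFunction(event) {
  try {
    // Start one call per INSERT record and wait for all of them
    const inserts = event.Records.filter(r => r.eventName === 'INSERT');
    await Promise.all(inserts.map(() => sharedFunction(param1)));
  } catch (err) {
    console.log(err);
    return err;
  }
}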

Serverless authorizer async await not working

I've been working with Node 6 in my Serverless application, and I decided to migrate to async/await since version 8.x was released.
However, I'm having an issue with the authorizer function. Since I removed the callback parameter and just return the value, it has stopped working. If I pass something to the callback parameter it keeps working fine, but that's not async/await style. It's not working even if I throw an exception.
module.exports.handler = async (event, context) => {
  if (typeof event.authorizationToken === 'undefined') {
    throw new InternalServerError('Unauthorized');
  }
  const decodedToken = getDecodedToken(event.authorizationToken);
  const isTokenValid = await validateToken(decodedToken);
  if (!isTokenValid) {
    throw new InternalServerError('Unauthorized');
  } else {
    return generatePolicy(decodedToken);
  }
};
Any suggestions of how to proceed?
Thank y'all!
I faced the same problem here. It seems authorizers don't support async/await yet. One solution would be to take your entire async/await function and call it inside the handler. Something like this:
const auth = async event => {
  if (typeof event.authorizationToken === 'undefined') {
    throw new InternalServerError('Unauthorized');
  }
  const decodedToken = getDecodedToken(event.authorizationToken);
  const isTokenValid = await validateToken(decodedToken);
  if (!isTokenValid) {
    throw new InternalServerError('Unauthorized');
  } else {
    return generatePolicy(decodedToken);
  }
}

module.exports.handler = (event, context, cb) => {
  auth(event)
    .then(result => {
      cb(null, result);
    })
    .catch(err => {
      cb(err);
    })
};
For folks arriving here in 2020 - now it works as OP described.
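For reference, a typical shape for the generatePolicy helper used above (a sketch only; the OP's implementation isn't shown, and the principalId and Resource values here are illustrative):

function generatePolicy(decodedToken) {
  return {
    principalId: decodedToken.sub, // illustrative: a user identifier from the token
    policyDocument: {
      Version: '2012-10-17',
      Statement: [{
        Action: 'execute-api:Invoke',
        Effect: 'Allow',
        Resource: '*' // illustrative: usually the method ARN from event.methodArn
      }]
    }
  };
}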

Lambda that reads from SQS queue - bottleneck?

So I have implemented an email system like the one here: https://cloudonaut.io/integrate-sqs-and-lambda-serverless-architecture-for-asynchronous-workloads/
The flow is as follows:
http request to send an email -> API Gateway -> HttpRequestLambda -> SQS <-> SQSMessageConsumerLambda (scheduled) -> MessageWorkerLambda (sends email via email service provider)
My SQSMessageConsumerLambda is scheduled to run every minute. I changed the SQS consumer to recursively call itself when the timeout is getting near, rather than just ending. Doing this means the SQS queue has a better chance of not piling up with too many messages.
This seems to work great so far, but I have a couple of questions:
1. If the function times out, the messages that were read from the queue are probably still within their visibility timeout period, so invoking the lambda recursively means they can't actually be re-read from the queue until their visibility timeout expires, which is unlikely to happen immediately after the recursive call. So would it be an idea to pass these messages into the recursive call itself, and then check for these 'passed-in messages' at the beginning of the consumer lambda and send them directly to the workers in that case?
2. SQSMessageConsumerLambda is still a bit of a bottleneck, isn't it? It takes about 40-50 ms to invoke MessageWorkerLambda for each message it wants to delegate. Or does the 'async.parallel' mitigate this?
3. Would it be better if we could somehow elastically increase the number of SQSMessageConsumerLambdas based on CloudWatch alarms, i.e. alarms that check whether there have been more than X messages on the queue for X minutes?
var AWS = require('aws-sdk');
var sqs = new AWS.SQS();
var async = require("async");
var lambda = new AWS.Lambda();

var QUEUE_URL = `https://sqs.${process.env.REGION}.amazonaws.com/${process.env.ACCOUNT_ID}/${process.env.STAGE}-emailtaskqueue`;
var EMAIL_WORKER = `${process.env.SERVICE}-${process.env.STAGE}-emailWorker`
var THIS_LAMBDA = `${process.env.SERVICE}-${process.env.STAGE}-emailTaskConsumer`

function receiveMessages(callback) {
  var numMessagesToRead = 10;
  //WaitTimeSeconds: the duration (in seconds) for which the call waits for a message to arrive in the queue before returning
  var params = {
    QueueUrl: QUEUE_URL,
    MaxNumberOfMessages: numMessagesToRead,
    WaitTimeSeconds: 20
  };
  sqs.receiveMessage(params, function(err, data) {
    if (err) {
      console.error(err, err.stack);
      callback(err);
    } else {
      if (data.Messages && data.Messages.length > 0) {
        console.log('Got ', data.Messages.length, ' messages off the queue');
      } else {
        console.log('Got no messages from queue');
      }
      callback(null, data.Messages);
    }
  });
}

function invokeWorkerLambda(task, callback) {
  console.log('Need to invoke worker for this task..', task);
  //task.Body is a JSON string
  var payload = {
    "ReceiptHandle": task.ReceiptHandle,
    "body": JSON.parse(task.Body)
  };
  console.log('payload:', payload);
  //using 'Event' means invoke asynchronously (http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html#invoke-property)
  //TODO need variable here
  var params = {
    FunctionName: EMAIL_WORKER,
    InvocationType: 'Event',
    Payload: JSON.stringify(payload)
  };
  var millis = Date.now();
  lambda.invoke(params, function(err, data) {
    millis = Date.now() - millis;
    console.log('took ', millis, ' to invoke ', EMAIL_WORKER, ' asynchronously');
    if (err) {
      console.error(err, err.stack);
      callback(err);
    } else {
      callback(null, data)
    }
  });
}

function handleSQSMessages(context, callback) {
  receiveMessages(function(err, messages) {
    if (messages && messages.length > 0) {
      var invocations = [];
      messages.forEach(function(message) {
        invocations.push(function(callback) {
          invokeWorkerLambda(message, callback)
        });
      });
      async.parallel(invocations, function(err) {
        if (err) {
          console.error(err, err.stack);
          callback(err);
        } else {
          if (context.getRemainingTimeInMillis() > 20000) {
            console.log('there is more time to read more messages for this run of the cron')
            handleSQSMessages(context, callback);
          } else {
            console.log('remaining time in millis:', context.getRemainingTimeInMillis(), ' No more time here, invoking this lambda again')
            lambda.invoke({ FunctionName: THIS_LAMBDA, InvocationType: 'Event', Payload: '{"recursiveMarker":true}' }, function(err, data) {
              if (err) {
                console.error(err, err.stack);
                callback(err);
              } else {
                console.log('data from the invocation:', data);
                callback(null, 'Lambda was just called recursively');
              }
            });
          }
        }
      });
    } else {
      callback(null, "DONE");
    }
  });
}

module.exports.emailTaskConsumer = (event, context, callback) => {
  console.log('in an emailTaskConsumer. Was this a recursive call?', event);
  handleSQSMessages(context, callback);
}
1) The visibility timeout is a great feature of SQS, allowing you to build resilient systems. I could not find a reason to try to handle failures on your own.
2) You could batch all the messages read from the queue into one payload and let the Worker Lambda process them at once (see the sketch below).
3) You could add additional CloudWatch Events rules triggering the Consumer Lambda to increase the read throughput.
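For point 2, a rough sketch of what batching could look like inside handleSQSMessages, replacing the async.parallel fan-out with a single hypothetical invocation that carries the whole batch (the worker would then need to loop over the array it receives):

// Instead of one lambda.invoke per message, send the batch in one payload.
var params = {
  FunctionName: EMAIL_WORKER,
  InvocationType: 'Event',
  Payload: JSON.stringify(messages.map(function(task) {
    return {
      ReceiptHandle: task.ReceiptHandle,
      body: JSON.parse(task.Body)
    };
  }))
};
lambda.invoke(params, function(err, data) {
  if (err) { callback(err); } else { callback(null, data); }
});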
Use SNS to trigger the Lambda. This is the correct way of working with Lambda functions: your HttpRequestLambda would fire an SNS notification, and another Lambda function is immediately triggered in response to that event. Actually, if you are not doing anything else in HttpRequestLambda, you can also replace it with an AWS API Gateway proxy. Here you can see a full tutorial about exposing the SNS API via API Gateway.
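A minimal sketch of that publish step (the topic ARN environment variable and the 202 response are assumptions for illustration):

var AWS = require('aws-sdk');
var sns = new AWS.SNS();

module.exports.httpRequest = async (event) => {
  // Publish the email task; a worker Lambda subscribed to the topic is triggered immediately.
  await sns.publish({
    TopicArn: process.env.EMAIL_TOPIC_ARN, // assumed env var
    Message: event.body
  }).promise();
  return { statusCode: 202 };
};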
