AWS Lambda SQS Trigger - visual-studio

I have added a new AWS Lambda in Visual Studio. It has generated a function that is passed an SQS message.
The template that was generated for me is similar to the one here:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html
I can't for the life of me find anywhere that tells me where to pass the queue name, so that my Lambda can trigger after a message has been put into that queue. I imagine it's a property in that JSON file? How about a schedule? Does anyone know the name of the property to add a schedule?

It looks like you are using CloudFormation to create the Lambda function.
To trigger that Lambda function from SQS, you should create an AWS::Lambda::EventSourceMapping. For example, in YAML:
Type: AWS::Lambda::EventSourceMapping
Properties:
  BatchSize: 1
  Enabled: true
  EventSourceArn: arn:aws:sqs:us-east-2:444455556666:queue1
  FunctionName: myfunc
Increase the BatchSize if you want SQS messages batched.
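For context, here is a rough sketch of how that mapping might sit alongside a queue defined in the same template (the MyQueue and MyFunction logical names are placeholders, not from the generated template):
MyQueue:
  Type: AWS::SQS::Queue

SqsToLambdaMapping:
  Type: AWS::Lambda::EventSourceMapping
  Properties:
    BatchSize: 1
    Enabled: true
    EventSourceArn: !GetAtt MyQueue.Arn   # or paste the ARN of an existing queue
    FunctionName: !Ref MyFunction         # the AWS::Lambda::Function defined elsewhere in the template
The function's execution role also needs sqs:ReceiveMessage, sqs:DeleteMessage, and sqs:GetQueueAttributes permissions on the queue for the trigger to work.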

Related

AWS Lambda - different dead letter queue based on Lambda alias

I have two SQS queues, one for the test stage and one for the prod stage, and a single Lambda function with test and prod aliases; each alias is triggered by an SQS event from the corresponding queue.
I was wondering if there's a way to specify a DLQ for each alias of the Lambda (I'm using a SAM template to define the Lambda function), or I would like to understand the best practice for handling such a requirement.
It's straightforward to add your DLQs - and the test-prod alias setup you have does not complicate things at all.
Add a RedrivePolicy to your two Queue definitions. Here's the prod half of the code:
# template.yaml
# new: define a new queue to be the prod DLQ
MyProdDLQ:
  Type: AWS::SQS::Queue
  Properties:
    QueueName: this-queue-captures-errors-sent-from-the-prod-queue

MyProdQueue:
  Type: AWS::SQS::Queue
  Properties:
    QueueName: this-queue-feeds-the-prod-alias-lambda
    VisibilityTimeout: 60
    # new: assign the DLQ
    RedrivePolicy:
      deadLetterTargetArn: !GetAtt MyProdDLQ.Arn
      maxReceiveCount: 5
Note: You might be tempted to define the DLQ on the lambda function itself. Confusingly, the DLQ on the lambda function is for asynchronous (event-driven) invocations, not SQS sources. The docs warn about this, but it is easy to miss.
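For reference, the function-level DLQ that note warns about is the DeadLetterQueue property on a SAM function (DeadLetterConfig on a raw AWS::Lambda::Function), and it only applies to failed asynchronous invocations. A sketch of what it looks like, with hypothetical names, so you can tell the two apart:
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler
    Runtime: python3.9
    CodeUri: src/
    DeadLetterQueue:                    # applies to failed async invocations only,
      Type: SQS                         # NOT to messages from an SQS event source
      TargetArn: !GetAtt MyAsyncDLQ.Arn
For SQS-triggered invocations it is the queue's RedrivePolicy, as above, that routes failed messages to the DLQ.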

How to send a CloudWatchEvent from a lambda to an EventBridge destination

I have a Lambda which is triggered by an EventBridge custom bus. I want to send another event to the custom bus at the end of the function's processing. I created a destination in the Lambda to send to the same custom bus.
I have the following code where the function handler will return a CloudWatchEvent. This is not working.
public async Task<CloudWatchEvent<object>> FunctionHandler(CloudWatchEvent<object> evnt, ILambdaContext context)
{
    return await ProcessMessageAsync(evnt, context);
}
My Lambda was being triggered by an S3 input event (which is asynchronous). I tried adding a destination on Lambda "success" pointing to an EventBridge bus, and created a rule to capture that and send it to CloudWatch Logs, but it didn't seem to work.
It turns out that, while creating the rule in EventBridge, the event pattern was set to:
{
"source": ["aws.lambda"]
}
which is what you get if you use the console and select AWS Lambda as the AWS service.
Infuriatingly, I couldn't seem to get it to work even with a simple event. On further inspection, I looked at the input event and realized that it wants lambda and not aws.lambda. It is also mentioned in the documentation: https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html
So to fix it, I changed it to
{
"source": ["lambda"]
}
and it worked for me.
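If you define the rule in CloudFormation rather than the console, the corrected pattern would look roughly like this (the bus name, log group, and logical names are assumptions, not from the question):
DestinationEventsRule:
  Type: AWS::Events::Rule
  Properties:
    EventBusName: my-custom-bus              # the custom bus the destination sends to
    EventPattern:
      source:
        - lambda                             # note: "lambda", not "aws.lambda"
    Targets:
      - Id: capture-to-logs
        Arn: !GetAtt DestinationLogGroup.Arn # an AWS::Logs::LogGroup defined elsewhere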
Have you given AWS Lambda Destinations a shot? There are 4 types of destinations supported:
SQS Queue
SNS Topic
EventBridge event bus
Lambda Function itself.
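If it helps, a destination can also be declared in CloudFormation with AWS::Lambda::EventInvokeConfig. A rough sketch (the function and bus logical names are placeholders) that sends successful results to a custom event bus:
MyFunctionDestinations:
  Type: AWS::Lambda::EventInvokeConfig
  Properties:
    FunctionName: !Ref MyFunction
    Qualifier: $LATEST                       # or an alias/version name
    DestinationConfig:
      OnSuccess:
        Destination: !GetAtt MyCustomBus.Arn # an AWS::Events::EventBus
Keep in mind destinations only fire for asynchronous invocations, which matches the EventBridge-triggered setup in the question.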

When should I use a DynamoDB trigger over calling the Lambda with another?

I currently have one AWS Lambda function that is updating a DynamoDB table, and I need another Lambda function that needs to run after the data is updated. Is there any benefit to using a DynamoDB trigger in this case instead of invoking the second Lambda using the first one?
It looks like the programmatic invocation would give me more control over when the Lambda is called (i.e. I could wait for several updates to occur before calling), and reading from a DynamoDB stream costs money while simply invoking the Lambda does not.
So, is there a benefit to using a trigger here? Or would I be better off invoking the Lambda myself?
A DynamoDB stream seems to be the better practice because:
you delegate the responsibility of invoking the post-processor function away from your writer Lambda, which keeps the writer simpler (and therefore faster);
you make it easier to connect new external writers to the same table; otherwise you would have to implement the logic to call the post-processors in all of them as well;
you guarantee that all data is post-processed (even if somebody adds a new item through the DynamoDB web interface :) );
money-wise, the execution time you spend issuing the invoke() call from the writer Lambda will likely offset the cost of a stream;
unless you use DynamoDB transactions, your data may not yet be available to the post-processor if you call it from the writer too soon. If your business logic doesn't need transactions, using them just to cover this problem means extra time and cost.
P.S. You can, of course, batch from the DynamoDB stream out of the box with a simple setting; you are not obliged to invoke the post-processor for every write operation (see the sketch below).
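A minimal sketch of that setup (logical names are placeholders): the table publishes a stream, and an event source mapping feeds batches of records to the post-processor Lambda.
MyTable:
  Type: AWS::DynamoDB::Table
  Properties:
    BillingMode: PAY_PER_REQUEST
    AttributeDefinitions:
      - AttributeName: pk
        AttributeType: S
    KeySchema:
      - AttributeName: pk
        KeyType: HASH
    StreamSpecification:
      StreamViewType: NEW_AND_OLD_IMAGES

PostProcessorMapping:
  Type: AWS::Lambda::EventSourceMapping
  Properties:
    EventSourceArn: !GetAtt MyTable.StreamArn
    FunctionName: !Ref PostProcessorFunction   # the second Lambda
    StartingPosition: LATEST
    BatchSize: 25                              # process several updates per invocation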
Alternatively, after the data is updated, you can publish an SQS message and then configure the other function to read from Amazon SQS by creating an SQS trigger in the Lambda console.
To create a trigger
Open the Lambda console Functions page.
Choose a function.
Under Designer, choose Add trigger.
Choose a trigger type.
Configure the required options and then choose Add.
Lambda supports the following options for Amazon SQS event sources.
SQS queue – The Amazon SQS queue to read records from.
Batch size – The number of items to read from the queue in each batch, up to 10. The event may contain fewer items if the batch that Lambda read from the queue had fewer items.
Enabled – Disable the event source to stop processing items.
// Publish the incoming event to an SQS queue so the second function can pick it up.
var QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/{AWS_ACCOUNT_ID}/matsuoy-lambda';
var AWS = require('aws-sdk');
var sqs = new AWS.SQS({ region: 'us-east-1' });

exports.handler = function(event, context) {
  var params = {
    MessageBody: JSON.stringify(event),
    QueueUrl: QUEUE_URL
  };
  sqs.sendMessage(params, function(err, data) {
    if (err) {
      console.log('error:', 'Fail Send Message' + err);
      context.done('error', 'ERROR Put SQS'); // ERROR with message
    } else {
      console.log('data:', data.MessageId);
      context.done(null, ''); // SUCCESS
    }
  });
};
Please don't forget to add a trigger from the other function to this SQS queue. That function will then receive the SQS message automatically and can handle it.
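For completeness, the trigger created by the console steps above is just an event source mapping under the hood; a rough template-level equivalent (logical names here are hypothetical) would be:
ReaderQueueTrigger:
  Type: AWS::Lambda::EventSourceMapping
  Properties:
    EventSourceArn: !GetAtt ReaderQueue.Arn   # the queue the first function publishes to
    FunctionName: !Ref ReaderFunction         # the second Lambda that handles the message
    BatchSize: 10                             # up to 10 per batch, as noted above
    Enabled: true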

one AWS CloudWatch event to control multiple things

I have multiple CloudWatch Events rules. Each of them triggers the same Lambda, called app, with different inputs at the same time, i.e.:
event1 triggers lambda app at a schedule using input: app_name=app1
event2 triggers lambda app at the same schedule using input: app_name=app2.
event3 triggers lambda app at the same schedule using input: app_name=app3.
As you can see, all the events have the same schedule. I really do not need so many duplicated events.
Is there any way I can use one CloudWatch event to trigger one Lambda with multiple inputs? I.e., at the scheduled time, the same event triggers the Lambda app with input app1; it also triggers the same Lambda with input app2, and again with input app3.
That would make my structure neat: one event and one Lambda (with different inputs) for multiple apps.
You can have one CloudWatch rule with a schedule event source and one Lambda function target. You will need to configure the target's input to use a Constant (JSON text) containing an array of data (see the template sketch after the handler code below).
Then in your Lambda function the event will be your constant. Example with Node.js 8.10 to start EC2 instances:
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2();

exports.handler = async (event) => {
  console.log('Starting instances: %j', event);
  const data = await ec2.startInstances({ InstanceIds: event }).promise();
  console.log(data);
};
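In template form, the rule and its constant input might look roughly like this (the schedule, the input values, and the logical names are placeholders):
AppSchedule:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: rate(1 hour)
    Targets:
      - Id: app-lambda
        Arn: !GetAtt AppFunction.Arn
        Input: '["app1", "app2", "app3"]'   # the constant JSON delivered to the handler as `event`
An AWS::Lambda::Permission granting events.amazonaws.com the right to invoke the function is also needed if you define the rule this way.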

lambda create-event-source-mapping Member must not be null Exception

I am trying to attach a Kinesis stream event to a Lambda function using a CLI command, but I am getting this exception:
An error occurred (ValidationException) when calling the CreateEventSourceMapping operation: 1 validation error detected: Value null at 'startingPosition' failed to satisfy constraint: Member must not be null.
My command is:
aws lambda create-event-source-mapping --event-source-arn arn:aws:kinesis:us-west-2:xxxxxx:stream/lambda-stream --function-name helloworld-divyanayan_lambda --batch-size 100
If Lambda is your consumer for Kinesis streams where you are continuously processing stream data, you use "LATEST" as the starting position.
TRIM_HORIZON will read the oldest untrimmed record in the shard.
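Applied to the command from the question, that would look like this (a sketch: the original command with only the missing flag added):
aws lambda create-event-source-mapping \
    --event-source-arn arn:aws:kinesis:us-west-2:xxxxxx:stream/lambda-stream \
    --function-name helloworld-divyanayan_lambda \
    --batch-size 100 \
    --starting-position LATEST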
I got this error for a DynamoDB stream mapping to a Lambda in CloudFormation. As the other answer/comment suggests, the problem is the missing starting position on the stream.
The CLI docs do indeed have a flag for --starting-position, and the CloudFormation template does as well.
So, in my case, it was fixed by adding this line to my CFN template:
Type: AWS::Lambda::EventSourceMapping
Properties:
  ...
  StartingPosition: 'LATEST'
