I have to trigger a Lambda function once a specific stack is created. I have created the CloudWatch Events rule below and associated the target with that Lambda function, but it is not triggering the Lambda.
{
  "source": [
    "aws.cloudformation"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "cloudformation.amazonaws.com"
    ],
    "eventName": [
      "CreateStack"
    ],
    "stackName": [
      "sql-automate-04-08"
    ]
  }
}
Please let me know if I am missing anything here.
This doesn't work with CloudWatch Events rules because CloudFormation's stack lifecycle events don't correspond to individual API calls; moreover, in a CloudTrail-based event the request parameters live under detail.requestParameters, so a stackName field at the top level of detail never matches.
However, you can configure CloudFormation to send stack events to an Amazon SNS topic via its NotificationARNs property. An AWS Lambda function subscribed to that topic can then filter and process the events.
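For illustration, here is a minimal sketch of such a subscriber in Python, assuming the function is subscribed to the topic listed in the stack's NotificationARNs; CloudFormation delivers the SNS Message body as newline-separated key='value' pairs, and the stack name is taken from the question:

def handler(event, context):
    for record in event['Records']:
        # CloudFormation stack notifications arrive as newline-separated
        # key='value' pairs in the SNS Message body.
        fields = {}
        for line in record['Sns']['Message'].splitlines():
            key, sep, value = line.partition('=')
            if sep:
                fields[key] = value.strip("'")
        # React only when the stack itself (not an individual resource)
        # reaches CREATE_COMPLETE.
        if (fields.get('ResourceType') == 'AWS::CloudFormation::Stack'
                and fields.get('ResourceStatus') == 'CREATE_COMPLETE'
                and fields.get('StackName') == 'sql-automate-04-08'):
            print('Stack created:', fields.get('StackId'))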
This EventBridge Rule has worked for me:
{
  "source": ["aws.cloudformation"],
  "detail-type": ["CloudFormation Stack Status Change"],
  "region": ["us-east-1"],
  "detail": {
    "status-details": {
      "status": ["CREATE_COMPLETE"]
    }
  }
}
I am using the Amazon Selling Partner API (SP-API) and am trying to set up a Pub/Sub-like system for receiving customer orders etc.
The Notifications API in SP-API sends notifications of different types in 2 different ways depending on what event you are using: some are sent directly to EventBridge, and others are sent to SQS. https://developer-docs.amazon.com/sp-api/docs/notifications-api-v1-use-case-guide#section-notification-workflows
I have correctly set up the notifications that are sent directly to EventBridge, but I am struggling to get the SQS notifications working. I want all notifications to be sent to my own endpoint.
For the SQS model, I am receiving notifications in SQS, which is set as a trigger for a Lambda function (this part works). The destination for this function is set to another EventBridge bus (this is the part that doesn't work). This gives the architecture:
SQS => Lambda => EventBridge => my endpoint
Why is the Lambda not triggering my EventBridge destination in order to send the notifications?
Execution Role Policies:
Lambda
AWSLambdaBasicExecutionRole
AmazonSQSFullAccess
AmazonEventBridgeFullAccess
AWSLambda_FullAccess
EventBridge
Amazon_EventBridge_Invoke_Api_Destination
AmazonEventBridgeFullAccess
AWSLambda_FullAccess
EventBridge Event Pattern:
{"source": ["aws.lambda"]}
Execution Role Trusted Entities:
EventBridge Role
"Service": ["events.amazonaws.com", "lambda.amazonaws.com", "sqs.amazonaws.com"]
Lambda Role
"Service": ["lambda.amazonaws.com", "events.amazonaws.com", "sqs.amazonaws.com"]
Lambda Code:
exports.handler = function(event, context, callback) {
    console.log("Received event: ", event);
    // Note: the property name is callbackWaitsForEmptyEventLoop (with an "s").
    context.callbackWaitsForEmptyEventLoop = false;
    // Return the result through the callback; a value returned after
    // invoking the callback is ignored.
    callback(null, { statusCode: 200 });
};
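One thing to verify here: Lambda function destinations fire only for asynchronous invocations, and an SQS event source mapping invokes the function synchronously, so an OnSuccess destination will not fire in this pipeline. A workaround sketch that publishes to EventBridge directly from the handler (the Source, DetailType, and bus names below are illustrative, not from the thread); a rule would then match on that custom source rather than aws.lambda:

import boto3

events = boto3.client('events')

def handler(event, context):
    # Forward each SQS message to the event bus explicitly instead of
    # relying on a function destination (destinations only fire for
    # asynchronous invocations).
    entries = [{
        'Source': 'sp-api.notifications',   # illustrative custom source
        'DetailType': 'SPAPINotification',  # illustrative detail type
        'Detail': record['body'],           # the SQS message body (JSON string)
        'EventBusName': 'default',
    } for record in event['Records']]
    response = events.put_events(Entries=entries)
    if response.get('FailedEntryCount'):
        raise RuntimeError('Failed entries: %s' % response['Entries'])
    return {'statusCode': 200}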
Would it be possible to share the access policy for your SQS queue? I have set up an SQS queue to receive notifications, but nothing is landing in it, even though the subscription and destination are set up. I suspect there is an access issue, so it would be very helpful if you could share the access policy format that is working for your SQS queue.
This is what I am using:
{
  "Version": "2012-10-17",
  "Id": "Policy1652298563852",
  "Statement": [
    {
      "Sid": "Stmt1652298557402",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::82382xxxx:root"
      },
      "Action": [
        "sqs:SendMessage",
        "sqs:SetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:us-east-1:823829xxxx:SDNotificationsQueue1"
    }
  ]
}
I'm using an SNS topic as a Dead Letter Queue to handle errors thrown by multiple Lambdas. The error messages include the following attributes:
RequestID,
ErrorCode,
ErrorMessage
However, I can't easily tell which Lambda threw the error, since nothing related to it appears in the message (e.g. ARN, function name...).
Although it's possible to look up the request ID in CloudWatch, or to create multiple topics, there should be a much easier way to find which Lambda threw the error. Below is the structure of the received message:
{
  "Records": [
    {
      "EventSource": "aws:sns",
      "EventVersion": "1.0",
      "EventSubscriptionArn": "",
      "Sns": {
        "Type": "Notification",
        "MessageId": "",
        "TopicArn": "",
        "Subject": null,
        "Message": "",
        "Timestamp": "",
        "SignatureVersion": "",
        "Signature": "",
        "SigningCertUrl": "",
        "UnsubscribeUrl": "",
        "MessageAttributes": {
          "RequestID": {
            "Type": "String",
            "Value": ""
          },
          "ErrorCode": {
            "Type": "String",
            "Value": "200"
          },
          "ErrorMessage": {
            "Type": "String",
            "Value": "test"
          }
        }
      }
    }
  ]
}
Is there any way to add information to the message, such as the ARN of the Lambda which triggered this error?
You can use AWS CloudTrail to identify which Lambda executed:
https://docs.aws.amazon.com/lambda/latest/dg/logging-using-cloudtrail.html
You should have one DLQ per Lambda function. This will let you know where the dead letter is coming from.
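For reference, attaching a dedicated DLQ to a function is a one-call configuration change; a boto3 sketch (the function and queue names are placeholders):

import boto3

lambda_client = boto3.client('lambda')

# Give this function its own dead-letter target so failures are
# attributable; an SNS topic ARN works here as well as an SQS queue.
lambda_client.update_function_configuration(
    FunctionName='my-function',  # placeholder
    DeadLetterConfig={
        'TargetArn': 'arn:aws:sqs:us-east-1:123456789012:my-function-dlq'
    },
)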
I ended up configuring:
A single SNS topic used as the DLQ for every lambda.
A lambda listening to the above topic and storing the request ID in S3.
A trail in CloudTrail logging every lambda invoke.
A lambda matching the failed request ID in S3 against the CloudTrail logs; the latter provide the name of the failed lambda.
This infrastructure might seem a bit complicated, but it works very well. It only requires adding a single onError: ... line to every lambda's configuration file.
Not sure if you figured out a solution or not, but I solved a somewhat similar problem with a little help from boto3.
To give some more context about my setup:
I had 2 lambda functions (lambdaA and lambdaB) listening to an SNS topic which gets updated by S3 object-creation events.
Both lambda functions use one SNS topic as a DLQ, which is subscribed to by a single DLQ lambda (lambdaDLQ).
In lambdaDLQ's received message, you get a Message attribute which holds the message received by the error-throwing lambda function (lambdaA or lambdaB), and which therefore contains the information about the SNS message it received.
So the steps I used to figure out the error-throwing lambda function were (a full sketch follows the snippet below):
Parse the Message attribute from lambdaDLQ's event into JSON.
Get the EventSubscriptionArn from the above parsed message. This holds the subscription details of the error-throwing lambda.
Use boto3 to get the Protocol, Endpoint, etc., which give the name of the error-throwing lambda.
import boto3

sns_client = boto3.client('sns')
arn = 'arn_goes_here'
# Endpoint in the response identifies the subscribed lambda's ARN.
response = sns_client.get_subscription_attributes(SubscriptionArn=arn)
In case your error throwing lambda functions do not listen to a SNS topic, you might need to change this a little bit.
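Putting the three steps together, a minimal sketch of the lambdaDLQ handler (assuming, as above, that the failing functions were subscribed to an SNS topic, so the original record carries an EventSubscriptionArn):

import json
import boto3

sns_client = boto3.client('sns')

def handler(event, context):
    for record in event['Records']:
        # Step 1: parse the message the failing lambda originally received.
        original = json.loads(record['Sns']['Message'])
        # Step 2: the subscription ARN identifies the failing lambda's
        # subscription on the source topic.
        sub_arn = original['Records'][0]['EventSubscriptionArn']
        # Step 3: the subscription's Endpoint is the failing lambda's ARN.
        attrs = sns_client.get_subscription_attributes(SubscriptionArn=sub_arn)
        print('Error came from:', attrs['Attributes']['Endpoint'])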
I am new to AWS and AWS Lambda. I have to create a lambda function to run a cron job every 10 minutes. I am planning to add a CloudWatch trigger to fire it every 10 minutes, but without any event. I looked it up on the internet and found that some event needs to be there to get it running.
I need to get some clarity and leads on the 2 points below:
Can I schedule a job using AWS Lambda, with CloudWatch triggering it every 10 minutes, without any events?
How does one make it interact with MySQL databases hosted on AWS?
I have my application built on Spring Boot running on multiple instances with a shared database (single source of truth). I have implemented everything stated above using Spring's built-in scheduler and proper synchronisation at the DB level using locks, but because of the distributed nature of the instances, I have been advised to do the same using lambdas.
Method 1:
You need to pass a ScheduledEvent object to the handleRequest() of the lambda:
handleRequest(ScheduledEvent event, Context context)
Configure a cron rule that runs every 10 minutes in your CloudWatch template (if using CloudFormation), e.g. cron(0/10 * * * ? *). This will make sure your lambda is triggered every 10 minutes.
Make sure to add the below-mentioned dependency to your pom:
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-events</artifactId>
    <version>2.2.5</version>
</dependency>
Method 2:
You can specify something like this in your CloudFormation template. This does not require any argument to be passed to your handler(), in case you do not need any event-related information. It will automatically trigger your lambda according to your cron expression.
"ScheduledRule": {
"Type": "AWS::Events::Rule",
"Properties": {
"Description": "ScheduledRule",
"ScheduleExpression": {
"Fn::Join": [
"",
[
"cron(",
{
"Ref": "ScheduleCronExpression"
},
")"
]
]
},
"State": "ENABLED",
"Targets": [
{
"Arn": {
"Fn::GetAtt": [
"LAMBDANAME",
"Arn"
]
},
"Id": "TargetFunctionV1"
}
]
}
},
"PermissionForEventsToInvokeLambdaFunction": {
"Type": "AWS::Lambda::Permission",
"Properties": {
"FunctionName": {
"Ref": "NAME"
},
"Action": "lambda:InvokeFunction",
"Principal": "events.amazonaws.com",
"SourceArn": {
"Fn::GetAtt": [
"ScheduledRule",
"Arn"
]
}
}
}
}
If you want to run a cron job serverlessly, a CloudWatch Events rule is the only option.
If you don't want to use CloudWatch Events, then go ahead with an EC2 instance, but EC2 will cost you more than the CloudWatch Events rule.
Note: setting up a CloudWatch Events rule is just like defining a cron job in crontab on any Linux system, nothing more. On a Linux server you define everything as a raw crontab entry, whereas here it is UI-based.
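For completeness, the same schedule can also be wired up programmatically; a boto3 sketch (the rule name, function ARN, and account ID are placeholders):

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

rule_name = 'every-10-minutes'  # placeholder
function_arn = 'arn:aws:lambda:us-east-1:123456789012:function:my-cron-function'  # placeholder

# Create (or update) a rule that fires every 10 minutes.
rule_arn = events.put_rule(
    Name=rule_name,
    ScheduleExpression='rate(10 minutes)',
    State='ENABLED',
)['RuleArn']

# Allow CloudWatch Events / EventBridge to invoke the function.
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId='events-invoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_arn,
)

# Attach the function as the rule's target.
events.put_targets(
    Rule=rule_name,
    Targets=[{'Id': 'TargetFunctionV1', 'Arn': function_arn}],
)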
I have successfully created an event trigger on storage-blob creation, on a storage account called receivingtestwesteurope, under resource group omni-test, which is received via a function called ValidateMetadata. I created this via the portal GUI. However, I now want to add dead-letter/retry policies, which can only be done via the CLI.
The working trigger is like this:
{
  "destination": {
    "endpointBaseUrl": "https://omnireceivingprocesstest.azurewebsites.net/admin/extensions/EventGridExtensionConfig",
    "endpointType": "WebHook",
    "endpointUrl": null
  },
  "filter": {
    "includedEventTypes": [
      "Microsoft.Storage.BlobCreated"
    ],
    "isSubjectCaseSensitive": null,
    "subjectBeginsWith": "/blobServices/default/containers/snapshots/blobs/",
    "subjectEndsWith": ".png"
  },
  "id": "/subscriptions/fa6409ab-1234-1234-1234-85dd2b3ceab4/resourceGroups/omni-test/providers/Microsoft.Storage/StorageAccounts/receivingtestwesteurope/providers/Microsoft.EventGrid/eventSubscriptions/png",
  "labels": [
    ""
  ],
  "name": "png",
  "provisioningState": "Succeeded",
  "resourceGroup": "omni-test",
  "topic": "/subscriptions/fa6409ab-1234-1234-1234-85dd2b3ceab4/resourceGroups/omni-test/providers/microsoft.storage/storageaccounts/receivingtestwesteurope",
  "type": "Microsoft.EventGrid/eventSubscriptions"
}
First I thought I could update the existing event subscription with a dead-letter queue:
az eventgrid event-subscription update --name png --deadletter-endpoint receivingtestwesteurope/blobServices/default/containers/eventgrid
Which returns:
az: error: unrecognized arguments: --deadletter-endpoint
receivingtestwesteurope/blobServices/default/containers/eventgrid
Then I tried via REST Patch:
https://learn.microsoft.com/en-us/rest/api/eventgrid/eventsubscriptions/update
scope: /subscriptions/fa6409ab-1234-1234-1234-85dd2b3ceab4/resourceGroups/omni-test/providers/microsoft.storage/storageaccounts/receivingtestwesteurope
eventSubscriptionName: png
api-version: 2018-05-01-preview
Body:
"deadletterdestination": {
"endpointType": "StorageBlob",
"properties": {
"blobContainerName": "eventgrid",
"resourceId": "/subscriptions/fa6409ab-1234-1234-1234-85dd2b3ceab4/resourceGroups/omni-test/providers/microsoft.storage/storageaccounts/receivingtestwesteurope"
}}
Which returns
"Model state is invalid."
===================
Final working solution:
{
  "deadletterdestination": {
    "endpointType": "StorageBlob",
    "properties": {
      "blobContainerName": "eventgrid",
      "resourceId": "/subscriptions/fa6409ab-1234-1234-1234-85dd2b3ceab4/resourceGroups/omni-test/providers/microsoft.storage/storageaccounts/receivingtestwesteurope"
    }
  }
}
Have a look at Manage Event Grid delivery settings, where turning on dead-lettering is described in detail. Note, you have to install the eventgrid extension:
az extension add --name eventgrid
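With the extension installed, an update along these lines should be accepted; note that --deadletter-endpoint expects the full resource ID of the blob container, not a relative path (a sketch; flag names can vary between CLI versions):

az eventgrid event-subscription update --name png \
  --source-resource-id "/subscriptions/fa6409ab-1234-1234-1234-85dd2b3ceab4/resourceGroups/omni-test/providers/Microsoft.Storage/storageAccounts/receivingtestwesteurope" \
  --deadletter-endpoint "/subscriptions/fa6409ab-1234-1234-1234-85dd2b3ceab4/resourceGroups/omni-test/providers/Microsoft.Storage/storageAccounts/receivingtestwesteurope/blobServices/default/containers/eventgrid"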
Also, you can use the REST API for updating your event subscription for dead-lettering.
Besides that, I have just released my tiny tool, Azure Event Grid Tester, to help with an Azure Event Grid model on the local machine.
Update:
The following is a deadletterdestination property:
"deadletterdestination": {
"endpointType": "StorageBlob",
"properties": {
"blobContainerName": "{containerName}",
"resourceId": "/subscriptions/{subscriptionId}/resourceGroups/{resgroup}/providers/Microsoft.Storage/storageAccounts/{storageAccount}"
}
}
You can use the Event Subscriptions - Update REST API (PATCH) with the above property. Note that api-version=2018-05-01-preview must be used.
I'm having trouble understanding why a CloudWatch Events rule is not firing.
I've followed this related question and did the following:
Created a CloudTrail trail which sends events to a CloudWatch Logs log group
Created the following CloudWatch Events rule:
{
  "detail-type": [
    "AWS write API Call via CloudTrail"
  ],
  "source": [
    "aws.ecr"
  ],
  "detail": {
    "eventSource": [
      "ecr.amazonaws.com"
    ],
    "eventName": [
      "PutImage"
    ]
  }
}
Created a lambda to be invoked by this rule.
I can verify that in my CloudWatch log group (set up to accept events from CloudTrail) I am seeing the PutImage event. However, the lambda never fires, and the rule metrics show that it is never triggered. At this point I am assuming the rule must be faulty (I would expect to see the rule triggered even if the lambda is faulty), but I can't see what additional logic is required. Is it necessary to link my rule to a particular log group?