How to know which Lambda sent a message to the Dead Letter Queue?

I'm using an SNS topic as a Dead Letter Queue to handle errors thrown by multiple Lambdas. The error messages contain the following attributes:
RequestID,
ErrorCode,
ErrorMessage.
However, I can't easily tell which Lambda threw the error, since nothing identifying it (e.g. its ARN or function name) appears in the message.
Although it's possible to look up the request ID in CloudWatch, or to create one topic per function, there should be a much easier way to find which Lambda threw the error. Below is the structure of the received message:
{
    "Records": [
        {
            "EventSource": "aws:sns",
            "EventVersion": "1.0",
            "EventSubscriptionArn": "",
            "Sns": {
                "Type": "Notification",
                "MessageId": "",
                "TopicArn": "",
                "Subject": null,
                "Message": "",
                "Timestamp": "",
                "SignatureVersion": "",
                "Signature": "",
                "SigningCertUrl": "",
                "UnsubscribeUrl": "",
                "MessageAttributes": {
                    "RequestID": {
                        "Type": "String",
                        "Value": ""
                    },
                    "ErrorCode": {
                        "Type": "String",
                        "Value": "200"
                    },
                    "ErrorMessage": {
                        "Type": "String",
                        "Value": "test"
                    }
                }
            }
        }
    ]
}
Is there any way to add information, such as the ARN of the Lambda that triggered this error, to the message?

You can use AWS CloudTrail to identify which Lambda function was executed:
https://docs.aws.amazon.com/lambda/latest/dg/logging-using-cloudtrail.html

You should have one DLQ per Lambda function. This will let you know where the dead letter is coming from.

I ended up configuring:
A single SNS topic used as the DLQ for every Lambda.
A Lambda listening to that topic and storing the request IDs in S3.
A trail in CloudTrail logging every Lambda invocation.
A Lambda matching the failed request IDs in S3 against the CloudTrail logs; the latter provide the name of the failed Lambda.
This infrastructure might seem a bit complicated, but it works very well. It only requires adding a single onError: ... line to every Lambda's configuration file.
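The matching step above can be sketched as a pure function. The field names (`requestID`, `requestParameters.functionName`) follow the CloudTrail record format for Lambda Invoke events, but treat them as assumptions for this sketch; the events list would come from `boto3.client('cloudtrail').lookup_events(...)`:

```python
import json

def find_failed_function(failed_request_id, cloudtrail_events):
    """Return the name of the Lambda whose Invoke event matches a failed request ID.

    `cloudtrail_events` is the 'Events' list as returned by CloudTrail's
    lookup_events; each entry carries the raw record as a JSON string
    under 'CloudTrailEvent'.
    """
    for event in cloudtrail_events:
        record = json.loads(event['CloudTrailEvent'])
        if record.get('requestID') == failed_request_id:
            return record.get('requestParameters', {}).get('functionName')
    return None
```

The lookup can be scoped with `LookupAttributes=[{'AttributeKey': 'EventName', 'AttributeValue': 'Invoke'}]` to keep the list small.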

Not sure if you figured out a solution or not, but I solved a somewhat similar problem with a little help from boto3.
To give some more context about my setup:
I had two Lambda functions (lambdaA and lambdaB) listening to an SNS topic which gets updated by S3 object-creation events.
Both Lambda functions use one SNS topic as a DLQ, which is subscribed to by a single DLQ Lambda (lambdaDLQ).
In lambdaDLQ's received message, you get a Message attribute containing the message received by the error-throwing Lambda (lambdaA or lambdaB), which in turn contains the SNS message it received.
So the steps I used to figure out the error-throwing Lambda function were:
Parse the Message attribute from lambdaDLQ's event into JSON.
Get the EventSubscriptionArn from the parsed message. This holds the subscription details of the error-throwing Lambda.
Use boto3 to get the Protocol, Endpoint, etc., which give the name of the error-throwing Lambda.
import boto3

sns_client = boto3.client('sns')
arn = 'arn_goes_here'  # the EventSubscriptionArn from the parsed message
response = sns_client.get_subscription_attributes(SubscriptionArn=arn)
# response['Attributes']['Endpoint'] is the ARN of the subscribed Lambda
In case your error-throwing Lambda functions do not listen to an SNS topic, you might need to change this a little.
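The parsing step can be sketched as follows, assuming the DLQ Lambda receives the SNS event structure shown in the question, with the original event serialized inside the Message field:

```python
import json

def subscription_arn_from_dlq_event(event):
    """Extract the EventSubscriptionArn of the Lambda that threw the error.

    The DLQ record's Message field holds the original SNS event that the
    failing Lambda received, serialized as a JSON string.
    """
    original = json.loads(event['Records'][0]['Sns']['Message'])
    return original['Records'][0]['EventSubscriptionArn']
```

The returned ARN can then be passed to get_subscription_attributes as in the snippet above; the Endpoint attribute of the response names the failing Lambda.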

Related

AWS IoT Core for LoRaWAN event missing data

I have a TEKTELIC smart room sensor connected to AWS IoT Core for LoRaWAN. The destination publishes to a topic. In the MQTT test client I get a nicely formed message:
{
    "WirelessDeviceId": "24e8d6e2-88c8-4057-a60f-66c5f3ef354e",
    "PayloadData": "A2cA4ARoaAD/ASw=",
    "WirelessMetadata": {
        "LoRaWAN": {
            "ADR": true,
            "Bandwidth": 125,
            "ClassB": false,
            "CodeRate": "4/5",
            "DataRate": "3",
            "DevAddr": "019e3fcb",
            "DevEui": "647fda00000089e2",
            "FCnt": 4676,
            "FOptLen": 0,
            "FPort": 10,
            "Frequency": "904700000",
            "Gateways": [
                {
                    "GatewayEui": "647fdafffe014abc",
                    "Rssi": -92,
                    "Snr": 5.800000190734863
                },
                {
                    "GatewayEui": "0080000000024245",
                    "Rssi": -93,
                    "Snr": 7.25
                },
                {
                    "GatewayEui": "24e124fffef464da",
                    "Rssi": -86,
                    "Snr": 4.25
                }
            ],
            "MIC": "eb050f05",
            "MType": "UnconfirmedDataUp",
            "Major": "LoRaWANR1",
            "Modulation": "LORA",
            "PolarizationInversion": false,
            "SpreadingFactor": 7,
            "Timestamp": "2022-12-07T21:46:13Z"
        }
    }
}
When I subscribe to the topic with a Lambda:
Rule query statement: SELECT *, topic() AS topic FROM 'lora/#'
I am missing most of the data:
{
    "Gateways": {
        "Timestamp": "2022-12-07T21:46:13Z",
        "SpreadingFactor": 7,
        "PolarizationInversion": false,
        "Modulation": "LORA",
        "Major": "LoRaWANR1",
        "MType": "UnconfirmedDataUp",
        "MIC": "eb050f05",
        "Snr": 4.25,
        "Rssi": -86,
        "GatewayEui": "24e124fffef464da"
    },
    "Snr": 7.25,
    "Rssi": -93,
    "GatewayEui": "0080000000024245",
    "topic": "lora/tektelic/smart_room"
}
The relevant code is:
import json

def handler(event, context):
    print(json.dumps(event))
The event contains approximately half the data, malformed and in reverse order. The original event has a Gateways array ([ ]); in the received event it is an object holding some data from the original array plus other data that was outside the array.
The info on the device that sent the message, and the payload I want to process, are missing.
I am following this solution construct pattern, the only modifications are the lambda code and select statement.
I tried increasing the memory from the default 128M to 1024M with no changes.
I am also storing the raw messages in AWS S3, following this construct pattern, and it matches the MQTT data. I made similar changes to the select statement in it.
Thoughts on where to look for issues?
Most recent insight is that the select statement:
iot_topic_rule_props=iot.CfnTopicRuleProps(
    topic_rule_payload=iot.CfnTopicRule.TopicRulePayloadProperty(
        rule_disabled=False,
        description="Processing of DTC messages from Lora Sensors.",
        sql="SELECT topic() AS topic, * FROM 'lora/#'",
        actions=[]))
Replacing the sql with:
sql="SELECT * FROM 'lora/#'",
generates a nicely formed event.
Replacing it with:
sql="SELECT topic() AS topic, * FROM 'lora/#'",
generates the same malformed event, except topic is the first tag instead of the last. I'm going to leave this open for an answer on what is going on, because it feels like a bug. This should generate an error if the engine is just unhappy with the SQL.
The key to making it work is to include the aws_iot_sql_version:
sql="SELECT *, topic() AS topic FROM 'lora/#'",
aws_iot_sql_version="2016-03-23",
According to the docs the default value is "2015-10-08"; however, the console uses "2016-03-23". I have not researched the details.
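Putting the fix together with the construct shown earlier, the working rule payload looks roughly like this (same CDK names as above; treat it as a sketch rather than a drop-in):

```python
from aws_cdk import aws_iot as iot

iot_topic_rule_props = iot.CfnTopicRuleProps(
    topic_rule_payload=iot.CfnTopicRule.TopicRulePayloadProperty(
        rule_disabled=False,
        description="Processing of DTC messages from Lora Sensors.",
        sql="SELECT *, topic() AS topic FROM 'lora/#'",
        # Without pinning the SQL version, the default (2015-10-08) mangles
        # the event when functions such as topic() are selected alongside *.
        aws_iot_sql_version="2016-03-23",
        actions=[],
    )
)
```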
I don't think you want to subscribe to a topic with a Lambda; a Lambda is a short-lived bit of code.
Have you looked into IoT Rules?
You can subscribe using a SQL statement and then trigger a Lambda to do stuff with the messages.
https://docs.aws.amazon.com/iot/latest/developerguide/iot-rules.html

How to trigger lambda once Cloudformation stack is created

I have to trigger a Lambda function once a specific stack is created.
I have created the CloudWatch Events rule below and associated its target with that Lambda function, but it is not triggering the Lambda.
{
    "source": [
        "aws.cloudformation"
    ],
    "detail-type": [
        "AWS API Call via CloudTrail"
    ],
    "detail": {
        "eventSource": [
            "cloudformation.amazonaws.com"
        ],
        "eventName": [
            "CreateStack"
        ],
        "stackName": [
            "sql-automate-04-08"
        ]
    }
}
Please let me know if I am missing anything here.
This doesn’t work using CloudWatch Event Rules because the CloudFormation stack’s lifecycle events don’t reflect individual API calls.
However, you can configure CloudFormation to send stack events to an Amazon SNS topic via its NotificationARNs property. An AWS Lambda function subscribed to that topic can then filter and process the events.
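Stack events delivered via NotificationARNs arrive as plain key='value' lines in the SNS Message body. A subscribed Lambda can parse them and react only when the stack itself completes; a minimal sketch (the line-based message format is the standard CloudFormation notification format, the function names are arbitrary):

```python
def parse_stack_notification(message):
    """Parse a CloudFormation SNS notification body into a dict.

    The body is a series of lines like: ResourceStatus='CREATE_COMPLETE'
    """
    fields = {}
    for line in message.strip().splitlines():
        key, _, value = line.partition('=')
        fields[key] = value.strip("'")
    return fields

def is_stack_create_complete(fields):
    # True only when the stack itself (not one of its resources) finishes
    return (fields.get('ResourceType') == 'AWS::CloudFormation::Stack'
            and fields.get('ResourceStatus') == 'CREATE_COMPLETE')
```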
This EventBridge Rule has worked for me:
{
    "source": ["aws.cloudformation"],
    "detail-type": ["CloudFormation Stack Status Change"],
    "region": ["us-east-1"],
    "detail": {
        "status-details": {
            "status": ["CREATE_COMPLETE"]
        }
    }
}
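A Lambda targeted by this rule receives the status-change event directly; a minimal handler sketch that pulls out the stack identifier (the 'stack-id' field name in the detail payload is an assumption here):

```python
def handler(event, context):
    """React only when a stack reaches CREATE_COMPLETE."""
    detail = event.get('detail', {})
    status = detail.get('status-details', {}).get('status')
    if status != 'CREATE_COMPLETE':
        return None
    # 'stack-id' is assumed to carry the stack ARN in the event detail
    return detail.get('stack-id')
```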

Can't subscribe a GMB business to pub/sub push notifications

I spent several hours trying to solve this.
I need to subscribe a GMB business to Pub/Sub push notifications. I was able to send/receive messages via the gcloud console and successfully created a topic and a subscription. The problem is that I need to subscribe GMB accounts, but I'm getting this error:
(I had to edit this question; the code is better than images.)
The request:
PUT https://mybusiness.googleapis.com/v4/accounts/102834134483270918765/notifications
{
    "topicName": "projects/probable-pager-194417/topics/fetchReviews",
    "notificationTypes": [
        "NEW_REVIEW", "UPDATED_REVIEW", "GOOGLE_UPDATE"
    ]
}
The Response:
{
    "error": {
        "code": 400,
        "message": "Request contains an invalid argument.",
        "status": "INVALID_ARGUMENT",
        "details": [
            {
                "@type": "type.googleapis.com/google.mybusiness.v4.ValidationError",
                "errorDetails": [
                    {
                        "code": 3,
                        "message": "Invalid topic name provided for subscription. Ensure that the topic exists and is shared with the GMB API service account.",
                        "value": "projects/probable-pager-194417/topics/fetchReviews"
                    }
                ]
            }
        ]
    }
}
Finally I found the solution. You need to grant the Pub/Sub Publisher role on your topic to this account: mybusiness-api-pubsub@system.gserviceaccount.com. I have no idea why.
(Exactly that string.)
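If you manage the topic programmatically rather than through the console, the grant can be sketched with the google-cloud-pubsub client; this is a sketch under the assumption that the client library's IAM helper methods are available in your installed version, using the project and topic names from the question:

```python
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path('probable-pager-194417', 'fetchReviews')

# Fetch the current IAM policy, add the GMB API service account as a
# publisher, and write the policy back to the topic.
policy = publisher.get_iam_policy(request={"resource": topic_path})
policy.bindings.add(
    role='roles/pubsub.publisher',
    members=['serviceAccount:mybusiness-api-pubsub@system.gserviceaccount.com'],
)
publisher.set_iam_policy(request={"resource": topic_path, "policy": policy})
```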

Error while creating Sales Order with "API_SALES_ORDER_SRV"

We want to create a Sales Order with the SAP Cloud SDK (version 1.9.2) using the Virtual Data Model (A_SalesOrder) in our Java application.
We are calling an S/4HANA on-premise system (1709).
SalesOrder so = SalesOrder.builder()
        .salesOrderType("ZKE")
        .salesOrganization("DE01")
        .distributionChannel("01")
        .organizationDivision("00")
        .build();
try {
    SalesOrder salesOrder = new SalesOrderCreateFluentHelper(so).execute(endpoint);
} ....
We are getting the following error (while executing via Postman):
"errordetails": [
    {
        "code": "CX_SADL_ENTITY_SRVICE_NOT_SUPP",
        "message": "The requested service is not supported by entity ~A_SALESORDER",
        "propertyref": "",
        "severity": "error",
        "target": ""
    },
    {
        "code": "/IWBEP/CX_MGW_MED_EXCEPTION",
        "message": "An exception was raised",
        "propertyref": "",
        "severity": "error",
        "target": ""
    }
]
Can somebody give us advice on creating a Sales Order via the API?
How can we create Sales Order items for this Sales Order in one step?
Thanks!
Additional information OData Request Data
(Response Data not provided in ERROR_LOG):
Request-Header / Request-Body:
Apparently we received this error message because we did not include any items in the request. Once the items were included in the body, it worked. Thank you.
Can you please share the OData request and response body and payload?
Open transaction /IWFND/ERROR_LOG, choose the error message, and in the lower part of the screen choose Request Data and Response Data, then provide both body and header. Make sure to omit any confidential data.

Can I get information about AWS Lambda request IDs, e.g. the trigger?

In my logs I find
START RequestId: 123a1a12-1234-1234-1234-123456789012 Version: $LATEST
for every invocation of AWS lambda. Is it possible to get more information about a request, e.g. what triggered it?
You can get the request id and other information from the context object.
When Lambda runs your function, it passes a context object to the handler. This object provides methods and properties that provide information about the invocation, function, and execution environment.
https://docs.aws.amazon.com/lambda/latest/dg/nodejs-context.html
You can get further information from the process.env variable.
I wrote a short Lambda function (Node.js) which logs this information to the console, where it ends up in AWS CloudWatch:
exports.handler = async (event, context) => {
    console.log('context:', JSON.stringify(context));
    console.log('process.env:', JSON.stringify(process.env));
    return {statusCode: 200, body: 'Hello World'};
};
Log output:
START RequestId: 6f3c103e-f0a5-4982-934c-dabedda0065e Version: $LATEST
context:
{
    "callbackWaitsForEmptyEventLoop": true,
    "functionVersion": "$LATEST",
    "functionName": "my_lambda_function_name",
    "memoryLimitInMB": "128",
    "logGroupName": "/aws/lambda/my_lambda_function_name",
    "logStreamName": "2020/10/01/[$LATEST]8adb13668eed4ac8b3e7f8d796fe4d49",
    "invokedFunctionArn": "arn:aws:lambda:us-east-1:636121343751:function:my_lambda_function_name",
    "awsRequestId": "6f3c103e-f0a5-4982-934c-dabedda0065e"
}
process.env:
{
    "AWS_LAMBDA_FUNCTION_VERSION": "$LATEST",
    "AWS_SESSION_TOKEN": "IZZ3eS6vFF...",
    "LD_LIBRARY_PATH": "/var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib:/opt/lib",
    "LAMBDA_TASK_ROOT": "/var/task",
    "AWS_LAMBDA_LOG_GROUP_NAME": "/aws/lambda/my_lambda_function_name",
    "AWS_LAMBDA_RUNTIME_API": "127.0.0.1:9001",
    "AWS_LAMBDA_LOG_STREAM_NAME": "2020/10/01/[$LATEST]8adb13668eed4ac8b3e7f8d796fe4d49",
    "AWS_EXECUTION_ENV": "AWS_Lambda_nodejs12.x",
    "AWS_LAMBDA_FUNCTION_NAME": "my_lambda_function_name",
    "AWS_XRAY_DAEMON_ADDRESS": "169.254.79.2:2000",
    "PATH": "/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin",
    "AWS_DEFAULT_REGION": "us-east-1",
    "PWD": "/var/task",
    "AWS_SECRET_ACCESS_KEY": "5nQrpKhBwF...",
    "LAMBDA_RUNTIME_DIR": "/var/runtime",
    "LANG": "en_US.UTF-8",
    "NODE_PATH": "/opt/nodejs/node12/node_modules:/opt/nodejs/node_modules:/var/runtime/node_modules:/var/runtime:/var/task",
    "AWS_REGION": "us-east-1",
    "TZ": ":UTC",
    "AWS_ACCESS_KEY_ID": "ASIAZNANWY3473BTADSB",
    "SHLVL": "0",
    "_AWS_XRAY_DAEMON_ADDRESS": "169.254.79.2",
    "_AWS_XRAY_DAEMON_PORT": "2000",
    "AWS_XRAY_CONTEXT_MISSING": "LOG_ERROR",
    "_HANDLER": "index.handler",
    "AWS_LAMBDA_FUNCTION_MEMORY_SIZE": "128",
    "_X_AMZN_TRACE_ID": "Root=1-6f75f4e5-5801599f17fd63e74ecd0833;Parent=73fb93812cdbf83f;Sampled=0"
}
END RequestId: 6f3c103e-f0a5-4982-934c-dabedda0065e
REPORT RequestId: 6f3c103e-f0a5-4982-934c-dabedda0065e
Duration: 21.71 ms Billed Duration: 100 ms
Memory Size: 128 MB Max Memory Used: 64 MB Init Duration: 132.35 ms
Not sure I understand what you mean by "what triggered it". If it's connected to API Gateway, then requests are proxied through the Gateway to your Lambda; in that case the Gateway triggered it, even though it was a proxied request.
Additionally, you can augment your Gateway to pass additional request info to your Lambda.
Also, to attach the request details to the RequestID you're seeing, you can check context.awsRequestId.
I'd assume you want to do some form of log monitoring, in which case you can bundle up the request information (headers, query, params, body) and send it to your log aggregator along with the awsRequestId.
Let me know if that helps.
A Lambda function has two parameters: event and context.
A normal invocation looks like this:
{
    "event": {
        "version": "0",
        "id": "abcdefgh-1234-5678-1234-abcdefghijkl",
        "detail-type": "Scheduled Event",
        "source": "aws.events",
        "account": "123456789012",
        "time": "2018-01-01T12:00:00Z",
        "region": "us-east-1",
        "resources": [
            "arn:aws:events:us-east-1:123456788901:rule/foo"
        ],
        "detail": {}
    },
    "context": "<__main__.LambdaContext object at 0x123456ax1234>"
}
The test invocation has the elements you get.
You need to get this information from your Lambda function's context:
RequestId = context.aws_request_id
You will get a result like the following:
RequestId: "00cfaafe-5018-4be0-9668-b98daf5a7312" Version: $LATEST
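A minimal Python handler along those lines, logging both the request ID and the invoking event (the body is a sketch, not tied to any particular trigger):

```python
import json

def handler(event, context):
    # context.aws_request_id matches the RequestId in the START log line
    print('RequestId:', context.aws_request_id)
    # The event payload identifies the trigger (e.g. "source": "aws.events")
    print('event:', json.dumps(event))
    return context.aws_request_id
```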
