SQS Queue not receiving all messages - aws-lambda

I'm trying to get an Alexa skill to call a Lambda function which sends a message to an SQS queue, basically following this guide: http://www.cyber-omelette.com/2017/01/alexa-run-script.html
I have the skill and the Lambda function working: when I execute the skill I get the proper response that's created in the Lambda function. However, sometimes the queue gets the message and other times it doesn't; it seems completely random. Is there something that may be causing messages to be dropped or ignored?

In your Lambda function, make sure you process ALL the messages received in the event, and not just the first one.
```
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

def handler(event, context):
    logger.debug(json.dumps(event))
    # An SQS-triggered invocation can deliver a batch of records, so loop over all of them.
    for record in event['Records']:
        message = json.loads(record['body'])
        # do whatever you have to do with the message
```

Related

How do I use Heartbeat with a Callback Return Step Function in my Lambda Function?

My Lambda function is required to send a token back to the step function for it to continue, as it is a task within the state machine.
Looking at my try/catch block of the lambda function, I am contemplating:
The order of SendTaskHeartbeatCommand and SendTaskSuccessCommand
The required parameters of SendTaskHeartbeatCommand
Whether I should add the SendTaskHeartbeatCommand to the catch block, and if so, in which order the commands should go.
Current code:
```
try {
    const magentoCallResponse = await axios(requestObject);
    await stepFunctionClient.send(new SendTaskHeartbeatCommand(taskToken));
    await stepFunctionClient.send(new SendTaskSuccessCommand({ output: JSON.stringify(magentoCallResponse.data), taskToken }));
    return magentoCallResponse.data;
} catch (err: any) {
    console.log("ERROR", err);
    await stepFunctionClient.send(new SendTaskFailureCommand({ error: JSON.stringify("Error Sending Data into Magento"), taskToken }));
    return false;
}
```
I have read the AWS SDK V3 documentation for SendTaskHeartbeatCommand and am confused by its required input.
The SendTaskHeartbeat and SendTaskSuccess API actions serve different purposes.
When your task completes, you call SendTaskSuccess to report this back to Step Functions and to provide the results from the Task that your workflow can then process. You do not need to call SendTaskHeartbeat before SendTaskSuccess, and the usage in the code above seems unnecessary.
SendTaskHeartbeat is optional and you use it when you've set "HeartbeatSeconds" on your Task. When you do this, you then need your worker (i.e. the Lambda function in this case) to send back regular heartbeats while it is processing work. I'd expect that to be running asynchronously while your code above was running the first line in the try block. The reason for having heartbeats is that you can set a longer TimeoutSeconds (or dynamically using TimeoutSecondsPath) than HeartbeatSeconds, therefore failing / retrying fast when the worker dies (Heartbeat timeout) while you still allow your tasks to take longer to complete.
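To make that pattern concrete, here is a minimal sketch in Python (boto3) rather than the asker's TypeScript, assuming a worker that received a task token from a .waitForTaskToken integration; do_long_running_work and the 30-second interval are placeholders, not anything from the question:
```
import json
import threading

import boto3

sfn = boto3.client("stepfunctions")

def heartbeat_loop(task_token, stop_event, interval_seconds=30):
    # Keep the task alive while the real work runs; the interval should be
    # comfortably shorter than the Task's HeartbeatSeconds.
    while not stop_event.wait(interval_seconds):
        sfn.send_task_heartbeat(taskToken=task_token)

def process_task(task_token, payload):
    stop_event = threading.Event()
    hb = threading.Thread(target=heartbeat_loop, args=(task_token, stop_event), daemon=True)
    hb.start()
    try:
        result = do_long_running_work(payload)  # hypothetical work function
        sfn.send_task_success(taskToken=task_token, output=json.dumps(result))
    except Exception as err:
        sfn.send_task_failure(taskToken=task_token, error="WorkerError", cause=str(err))
    finally:
        stop_event.set()
```
The key point is that the heartbeat runs alongside the work, not as a one-off call immediately before SendTaskSuccess.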
That said, it's not clear why you are using .waitForTaskToken with Lambda. Usually, you can just use the default Request Response integration pattern with Lambda. This uses the synchronous invoke mode for Lambda and will return the response back to you without you needing to integrate back with Step Functions in your Lambda code. Possibly you are reading these off of an SQS queue for concurrency control or something. But if not, just use Request Response.

How to send a CloudWatchEvent from a lambda to an EventBridge destination

I have a Lambda which is triggered by an EventBridge custom bus. I want to send another event to the custom bus at the end of the function processing, so I created a destination in the Lambda pointing to the same custom bus.
I have the following code, where the function handler returns a CloudWatchEvent. This is not working.
```
public async Task<CloudWatchEvent<object>> FunctionHandler(CloudWatchEvent<object> evnt, ILambdaContext context)
{
    return await ProcessMessageAsync(evnt, context);
}
```
My Lambda was being triggered by an S3 input event (which is asynchronous). I tried adding a destination on Lambda "success" pointing to an EventBridge bus, and created a rule to capture that event and send it to CloudWatch Logs, but it didn't seem to work.
Turns out, while creating the rule in EventBridge, the event pattern was set to:
```
{
    "source": ["aws.lambda"]
}
```
Which is what you get if you are using the console and selecting AWS Lambda as the AWS Service.
Infuriated, I couldn't seem to get it to work even with a simple event. On further inspection, I looked at the input event and realized that it wants "lambda" and not "aws.lambda". This is also mentioned in the documentation: https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html
So to fix it, I changed it to
```
{
    "source": ["lambda"]
}
```
and it worked for me.
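If you create the rule programmatically rather than in the console, a rough boto3 sketch using that corrected "lambda" source could look like this (the bus name, rule name, and target ARN are placeholders, not from the question):
```
import json

import boto3

events = boto3.client("events")

# Rule on the custom bus matching events emitted by Lambda destinations.
events.put_rule(
    Name="lambda-destination-rule",
    EventBusName="my-custom-bus",
    EventPattern=json.dumps({"source": ["lambda"]}),
    State="ENABLED",
)

# Send matching events to a CloudWatch Logs log group (placeholder ARN).
events.put_targets(
    Rule="lambda-destination-rule",
    EventBusName="my-custom-bus",
    Targets=[{"Id": "log-target", "Arn": "arn:aws:logs:us-east-1:123456789012:log-group:/demo/lambda-destinations"}],
)
```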
Have you given AWS Lambda Destinations a shot? There are four types of destinations supported:
an SQS queue
an SNS topic
an EventBridge event bus
another Lambda function
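A destination can also be configured outside the console. This is a rough boto3 sketch, assuming you want successful asynchronous invocations routed to an EventBridge bus and failures to an SQS queue; the function name and ARNs are placeholders:
```
import boto3

lambda_client = boto3.client("lambda")

# Route the results of asynchronous invocations to destinations.
lambda_client.put_function_event_invoke_config(
    FunctionName="my-function",
    MaximumRetryAttempts=2,
    DestinationConfig={
        "OnSuccess": {"Destination": "arn:aws:events:us-east-1:123456789012:event-bus/my-custom-bus"},
        "OnFailure": {"Destination": "arn:aws:sqs:us-east-1:123456789012:my-dlq"},
    },
)
```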

When should I use a DynamoDB trigger over calling the Lambda with another?

I currently have one AWS Lambda function that is updating a DynamoDB table, and I need another Lambda function that needs to run after the data is updated. Is there any benefit to using a DynamoDB trigger in this case instead of invoking the second Lambda using the first one?
It looks like the programmatic invocation would give me more control over when the Lambda is called (i.e. I could wait for several updates to occur before calling it), and reading from a DynamoDB stream costs money while simply invoking the Lambda does not.
So, is there a benefit to using a trigger here? Or would I be better off invoking the Lambda myself?
DynamoDB Streams seem to be the better practice because:
you delegate the responsibility of invoking the post-processor function away from your writer Lambda, which keeps the writer simpler (and therefore faster);
you make it simpler to connect new external writers to the same table; otherwise you would have to implement the logic to call the post-processor in all of them as well;
you guarantee that all data is post-processed (even if somebody adds a new item through the DynamoDB web console :) );
money-wise, the execution time you spend on the invoke() call from the writer Lambda will likely cover the cost of the stream;
unless you use DynamoDB transactions, your data may not yet be available to the post-processor if you call it from the writer too soon; and if your business logic doesn't need transactions, using them just to cover this problem adds extra time and cost.
P.S. You can, of course, batch from the DynamoDB stream out of the box with a simple setting; you are not obliged to invoke the post-processor for every write operation.
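As an illustration of the stream-based post-processor, a minimal Python handler for a DynamoDB Streams trigger might look like this (post_process is a placeholder for your business logic, and NewImage is only present if the stream view type includes new images):
```
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # Each record describes one write to the table; batch size is set on the
    # event source mapping, so several updates can arrive together.
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"].get("NewImage", {})
            post_process(new_image)  # hypothetical post-processing step

def post_process(item):
    logger.info("post-processing item: %s", item)
```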
Alternatively, after the data is updated you can publish an SQS message, then configure another function to read from Amazon SQS by creating an SQS trigger in the Lambda console.
To create a trigger
Open the Lambda console Functions page.
Choose a function.
Under Designer, choose Add trigger.
Choose a trigger type.
Configure the required options and then choose Add.
Lambda supports the following options for Amazon SQS event sources.
Event Source Options
SQS queue – The Amazon SQS queue to read records from.
Batch size – The number of items to read from the queue in each batch, up to 10. The event may contain fewer items if the batch that Lambda read from the queue had fewer items.
Enabled – Disable the event source to stop processing items.
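If you prefer to set the trigger up programmatically rather than in the console, a rough boto3 equivalent might look like this (the queue ARN and function name are placeholders):
```
import boto3

lambda_client = boto3.client("lambda")

# Create the SQS trigger (event source mapping) for the consumer function.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:matsuoy-lambda",  # placeholder queue ARN
    FunctionName="my-consumer-function",  # placeholder function name
    BatchSize=10,
    Enabled=True,
)
```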
```
var QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/{AWS_ACCOUNT_ID}/matsuoy-lambda';
var AWS = require('aws-sdk');
var sqs = new AWS.SQS({region: 'us-east-1'});

exports.handler = function(event, context) {
    var params = {
        MessageBody: JSON.stringify(event),
        QueueUrl: QUEUE_URL
    };
    sqs.sendMessage(params, function(err, data) {
        if (err) {
            console.log('error:', 'Fail Send Message' + err);
            context.done('error', 'ERROR Put SQS'); // ERROR with message
        } else {
            console.log('data:', data.MessageId);
            context.done(null, ''); // SUCCESS
        }
    });
};
```
Please don't forget to add a trigger from the other function to this SQS queue. That function will then receive the SQS messages automatically and can handle them.

How do I return a message back to SQS from lambda trigger

I have a Lambda trigger that reads messages from an SQS queue. In some conditions the message may not be ready for processing, so I'd like to put the message back in the queue for one minute and try again. Currently, I create another copy of the customer record and post this new copy to the queue. Is there a reason for, or a way of, keeping the original record in the queue as opposed to creating a new one?
```
import datetime
import json

import boto3

def postToQueue(customer):
    if 'attemptCount' in customer.keys():
        attemptCount = int(customer["attemptCount"]) + 1
    else:
        attemptCount = 2
    customer["attemptCount"] = attemptCount
    # Get the service resource
    sqs = boto3.resource('sqs')
    # Get the queue
    queue = sqs.get_queue_by_name(QueueName='testCustomerQueue')
    response = queue.send_message(MessageBody=json.dumps(customer), DelaySeconds=60)
    print('customer postback: ', customer)
    print('response from writing to the queue is: ', response)

# main function (Lambda handler); ifReadyToProcess and processCustomer are defined elsewhere
def handler(event, context):
    for record in event['Records']:
        if 'body' in record.keys():
            customer = json.loads(record['body'])
            print("attempting to process customer", customer, " at: ", datetime.datetime.now())
            if not ifReadyToProcess(customer):
                postToQueue(customer)
            else:
                processCustomer(customer)
```
This is not an ideal setup for SQS triggering Lambda functions.
My testing shows that messages sent to SQS will immediately trigger the Lambda function, even if a Delay setting is provided. Therefore, putting a message back onto the SQS queue will cause Lambda to fire again straight after.
To avoid a situation where Lambda is continually checking whether a message is ready for processing, I would recommend:
Use Amazon CloudWatch Events to trigger a Lambda function on a schedule (eg every 2 minutes)
The Lambda function should pull messages from the queue and check if they are ready to process.
If they are ready, then process them and delete them
If they are not ready, then push them back onto the queue with a Delay setting and delete the original message
Note that this is different to having SQS directly trigger Lambda. Instead, the Lambda function should call ReceiveMessage() to obtain the message(s) itself, which allows the delay to add some time between checks.
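A rough sketch of that scheduled poller, assuming the same testCustomerQueue and an ifReadyToProcess/processCustomer pair like the question's (the queue URL is a placeholder):
```
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/testCustomerQueue"  # placeholder

def handler(event, context):
    # Invoked on a schedule (e.g. every 2 minutes) rather than by an SQS trigger.
    response = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=5)
    for message in response.get("Messages", []):
        customer = json.loads(message["Body"])
        if ifReadyToProcess(customer):
            processCustomer(customer)
        else:
            # Re-queue a delayed copy, then remove the original below.
            sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(customer), DelaySeconds=60)
        # Delete the message we received in either case.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```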
Another option: Instead of re-inserting a message into the queue, you could simply take advantage of the Default Visibility Timeout setting by not deleting the message. A message that is read from the queue, but not deleted, will automatically "reappear" on the queue. You could use this as the "retry" time period. However, this means you will need to handle Dead Letter processing yourself (eg if a message fails to be processed after n tries).

How to send error to DLQ if Lambda function throws error (not timedOut)?

I have a Lambda function that I expect to return some result, so if I send the wrong parameters it fails, for example in the middle of the function.
Is there a way I can handle any error that occurs so that it is sent to my DLQ, print the error in the message, then retry, and then delete the message?
example error from CloudWatch:
TypeError: commandArray is not iterable
An AWS Lambda function has a retry mechanism for asynchronous invocations: if AWS Lambda is unable to fully process the event, it will automatically retry the invocation twice, with delays between retries.
After those retries, AWS Lambda will send the error details to the specified Amazon SQS queue or Amazon SNS topic.
https://docs.aws.amazon.com/lambda/latest/dg/retries-on-errors.html
The error message does not contain the name of the failed Lambda function, whatever the reason for the failure (exception or timeout). To add the Lambda function name to the error message, you can go with one of two ideas.
Solution - 1
The Lambda function name can be found via the S3 API; the S3 bucket details are available in the event object received in the error message.
Solution - 2
Convention: the SNS topic name contains the Lambda function name
Configure the SNS topic for the Lambda function
Add a Lambda function to the SNS topic's subscriber list
The subscribed Lambda function can get the original Lambda function name from the SNS topic name and can add any custom details to the received error message
Lambda has the facility to retry and pump failures into a Dead Letter Queue.
Any Lambda function invoked asynchronously is retried twice before the event is discarded. If the retries fail and you're unsure why, use Dead Letter Queues (DLQ) to direct unprocessed events to an Amazon SQS queue or an Amazon SNS topic to analyze the failure.
You can then have a Lambda Function on the SNS topic or SQS queue that can respond to the error and react in the way you want it to.
For more information, see: https://docs.aws.amazon.com/lambda/latest/dg/dlq.html
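For completeness, a hedged boto3 sketch of attaching an SQS dead-letter queue to an asynchronously invoked function (the function name and queue ARN are placeholders); a second function subscribed to that queue can then inspect and log the error:
```
import boto3

lambda_client = boto3.client("lambda")

# Send events that still fail after the automatic retries to an SQS queue.
# The function's execution role needs sqs:SendMessage permission on this queue.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    DeadLetterConfig={"TargetArn": "arn:aws:sqs:us-east-1:123456789012:my-function-dlq"},
)
```
Messages delivered to the DLQ should include attributes such as RequestID and ErrorMessage, which the consuming function can use when logging or re-driving the failed event.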
