How do I return a message back to SQS from a Lambda trigger - aws-lambda

I have a Lambda trigger that reads messages from an SQS queue. In some conditions the message may not be ready for processing, so I'd like to put it back on the queue for 1 minute and try again. Currently I create another copy of the customer record and post this new copy to the queue. Is there a reason/way for me to keep the original record in the queue as opposed to creating a new one?
import json
import datetime
import boto3

def postToQueue(customer):
    # Track how many times this customer has been attempted
    if 'attemptCount' in customer:
        attemptCount = int(customer["attemptCount"]) + 1
    else:
        attemptCount = 2
    customer["attemptCount"] = attemptCount
    # Get the service resource
    sqs = boto3.resource('sqs')
    # Get the queue
    queue = sqs.get_queue_by_name(QueueName='testCustomerQueue')
    # Re-post the customer with a 60-second delivery delay
    response = queue.send_message(MessageBody=json.dumps(customer), DelaySeconds=60)
    print('customer postback: ', customer)
    print('response from writing to the queue is: ', response)
# main function (Lambda handler)
def lambda_handler(event, context):
    for record in event['Records']:
        if 'body' in record:
            customer = json.loads(record['body'])
            print("attempting to process customer", customer, " at: ", datetime.datetime.now())
            if not ifReadyToProcess(customer):
                postToQueue(customer)
            else:
                processCustomer(customer)

This is not an ideal setup for SQS triggering Lambda functions.
My testing shows that messages sent to SQS will immediately trigger the Lambda function, even if a Delay setting is provided. Therefore, putting a message back onto the SQS queue will cause Lambda to fire again straight after.
To avoid a situation where Lambda is continually checking whether a message is ready for processing, I would recommend:
Use Amazon CloudWatch Events to trigger a Lambda function on a schedule (eg every 2 minutes)
The Lambda function should pull messages from the queue and check if they are ready to process.
If they are ready, then process them and delete them
If they are not ready, then push them back onto the queue with a Delay setting and delete the original message
Note that this is different from having SQS directly trigger Lambda. Instead, the Lambda function should call ReceiveMessage itself to obtain the message(s), which allows the Delay setting to add some time between checks.
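For illustration, the scheduled function could do the pull-and-requeue roughly as in the sketch below (a minimal sketch assuming boto3, with ifReadyToProcess and processCustomer standing in for the question's own helpers):
```
import json
import boto3

sqs = boto3.resource('sqs')

def scheduled_handler(event, context):
    # Triggered by a CloudWatch Events / EventBridge schedule, not by SQS
    queue = sqs.get_queue_by_name(QueueName='testCustomerQueue')
    for message in queue.receive_messages(MaxNumberOfMessages=10, WaitTimeSeconds=5):
        customer = json.loads(message.body)
        if ifReadyToProcess(customer):
            processCustomer(customer)
            message.delete()
        else:
            # Push a delayed copy back onto the queue, then remove the original
            queue.send_message(MessageBody=message.body, DelaySeconds=60)
            message.delete()
```
Because the function pulls messages itself, the DelaySeconds on the re-queued copy genuinely spaces out the retries instead of triggering another invocation straight away.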
Another option: Instead of re-inserting a message into the queue, you could simply take advantage of the Default Visibility Timeout setting by not deleting the message. A message that is read from the queue, but not deleted, will automatically "reappear" on the queue. You could use this as the "retry" time period. However, this means you will need to handle Dead Letter processing yourself (eg if a message fails to be processed after n tries).
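If you keep the SQS trigger, that last option roughly amounts to adjusting the message's visibility timeout and then failing the invocation so that Lambda does not delete the message. A hedged sketch, assuming a batch size of 1 and an illustrative queue URL:
```
import json
import boto3

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/111122223333/testCustomerQueue'  # illustrative

def lambda_handler(event, context):
    for record in event['Records']:
        customer = json.loads(record['body'])
        if ifReadyToProcess(customer):
            processCustomer(customer)  # Lambda deletes the message when the handler succeeds
        else:
            # Keep the message invisible for ~60 seconds, then fail the invocation
            # so the message is NOT deleted and SQS re-delivers it later.
            sqs.change_message_visibility(
                QueueUrl=QUEUE_URL,
                ReceiptHandle=record['receiptHandle'],
                VisibilityTimeout=60,
            )
            raise RuntimeError('Customer not ready yet, leaving message on the queue')
```
With a larger batch size, a failed invocation returns every message in the batch, so keep the batch size at 1 (or report partial batch failures) to avoid re-driving messages that were already processed.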

Related

Is there a way to implement in Laravel 8, a user specific queue containing the ability to fire an event when the queue is empty?

I currently have an import process that dispatches a series of jobs to a default queue that are initiated by user input via API.
If I add the user ID to the queue name when dispatching, the job will go to a user-specific queue, but I have no way of starting a queue worker for that specific queue. Is there any way to programmatically start a queue:work command to get around this?
Furthermore, I'd like to send a broadcast signal for the individual user once the queue has finished its jobs. My initial thoughts were to implement sending a signal in an event subscriber that monitors that user-specific queue if I can solve the initial question.
I found a partial route here: Polling Laravel Queue after All Jobs are Complete
This doesn't fully work because it will keep triggering while the queue is empty, so I'd have to find some way to unsubscribe the event subscriber after it has run once. I'd also have to find a way to register the subscriber at runtime, once the import process has started, rather than in the EventServiceProvider as described in the official Laravel documentation.
https://laravel.com/docs/8.x/events#event-subscribers
One approach could be to create a custom table that manages this, add/remove entries as imports start and finish, and have the Looping event subscriber iterate through that table: if the queue is in the table, check its size, and if it is 0, send the broadcast signal and remove the entry from the table.
Here are the events that already exist for Queues. https://laravel.com/api/8.x/Illuminate/Queue/Events/Looping.html
What's the best way to approach this?
Start to end: the user provides a file to import, I interpret the file and dispatch jobs that process the data, and once the jobs are finished a broadcast signal should be sent to that user saying the import is completed.
You might want to use the Job Batches functionality.
It lets you dispatch jobs and run a callback at the end. Here is an example from the docs:
use App\Jobs\ImportCsv;
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;
use Throwable;

$batch = Bus::batch([
    new ImportCsv(1, 100),
    new ImportCsv(101, 200),
    new ImportCsv(201, 300),
    new ImportCsv(301, 400),
    new ImportCsv(401, 500),
])->then(function (Batch $batch) {
    // All jobs completed successfully...
})->catch(function (Batch $batch, Throwable $e) {
    // First batch job failure detected...
})->finally(function (Batch $batch) {
    // The batch has finished executing...
})->dispatch();
You can send the broadcast event at the end, inside the callback.

When should I use a DynamoDB trigger over calling the Lambda with another?

I currently have one AWS Lambda function that is updating a DynamoDB table, and I need another Lambda function that needs to run after the data is updated. Is there any benefit to using a DynamoDB trigger in this case instead of invoking the second Lambda using the first one?
It looks like programmatic invocation would give me more control over when the Lambda is called (i.e. I could wait for several updates to occur before calling it), and reading from a DynamoDB stream costs money while simply invoking the Lambda does not.
So, is there a benefit to using a trigger here? Or would I be better off invoking the Lambda myself?
DynamoDB Stream seems to be the better practice because:
you delegate the responsibility of invoking the post-processor function away from your writer Lambda, which keeps the writer simpler (and faster),
you simplify connecting new external writers to the same table; otherwise you would have to implement the logic to call the post-processor in all of them as well,
you guarantee that all data is post-processed (even if somebody adds a new item through the DynamoDB web console),
money-wise, the execution time you spend issuing the invoke() call from the writer Lambda will likely offset the cost of a stream,
unless you use DynamoDB transactions, your data may not yet be available to the post-processor if you call it from the writer too soon. If your business logic doesn't need transactions, then using them just to cover this problem means extra time/cost.
P.S. You can, of course, batch records from the DynamoDB stream out of the box with a simple setting; you are not obliged to invoke the post-processor for every write operation.
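For illustration, a stream-triggered post-processor can stay quite small. A minimal sketch, assuming the stream view type includes new images and using post_process and an 'id' attribute as placeholders for your own logic and key schema:
```
def lambda_handler(event, context):
    # Each stream record carries the event name plus the old/new item images
    for record in event['Records']:
        if record['eventName'] in ('INSERT', 'MODIFY'):
            new_image = record['dynamodb'].get('NewImage', {})
            item_id = new_image.get('id', {}).get('S')  # 'id' is a placeholder attribute
            post_process(item_id, new_image)            # placeholder post-processing step
```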
Alternatively, after the data is updated you can publish an SQS message, and then configure another function to read from Amazon SQS by creating an SQS trigger in the Lambda console.
To create a trigger
Open the Lambda console Functions page.
Choose a function.
Under Designer, choose Add trigger.
Choose a trigger type.
Configure the required options and then choose Add.
Lambda supports the following options for Amazon SQS event sources.
Event Source Options
SQS queue – The Amazon SQS queue to read records from.
Batch size – The number of items to read from the queue in each batch, up to 10. The event may contain fewer items if the batch that Lambda read from the queue had fewer items.
Enabled – Disable the event source to stop processing items.
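If you would rather set these options programmatically than in the console, a hedged boto3 sketch might look like the following (the queue ARN and function name are illustrative):
```
import boto3

lambda_client = boto3.client('lambda')

# Wire the queue to the consumer function with the event source options above
lambda_client.create_event_source_mapping(
    EventSourceArn='arn:aws:sqs:us-east-1:111122223333:my-queue',  # illustrative ARN
    FunctionName='my-consumer-function',                           # illustrative name
    BatchSize=10,   # number of items to read from the queue in each batch
    Enabled=True,   # set to False to stop processing items
)
```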
The writer function can publish the SQS message like this (Node.js):
var QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/{AWS_ACCOUNT_ID}/matsuoy-lambda';
var AWS = require('aws-sdk');
var sqs = new AWS.SQS({region: 'us-east-1'});

exports.handler = function(event, context) {
    var params = {
        MessageBody: JSON.stringify(event),
        QueueUrl: QUEUE_URL
    };
    sqs.sendMessage(params, function(err, data) {
        if (err) {
            console.log('error:', 'Fail Send Message' + err);
            context.done('error', 'ERROR Put SQS'); // ERROR with message
        } else {
            console.log('data:', data.MessageId);
            context.done(null, ''); // SUCCESS
        }
    });
};
Please don't forget to add a trigger from the other function to this SQS queue. That function will then receive the SQS messages automatically and can handle them.

SQS Queue not receiving all messages

I'm trying to get an Alexa skill to call a lambda function which sends a message to an SQS Queue. Basically what this guide is doing http://www.cyber-omelette.com/2017/01/alexa-run-script.html
I have the skill and Lambda function working: when I execute the skill I get the proper response that's created in the Lambda function. However, sometimes the queue gets the message and other times it doesn't; it seems completely random. Is there something that may be causing messages to be dropped/ignored?
In your Lambda function, make sure you process ALL the messages received in the event, and not just the first one.
```
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

def handler(event, context):
    result = {}
    logger.debug(json.dumps(event))
    for record in event['Records']:
        message = json.loads(record['body'])
        # do whatever you have to do with the message
    return result
```

Trigger/Handle events between programs in different ABAP sessions

I have two programs running in separate sessions. I want to send an event from program A and catch this event in program B.
How can I do that?
Using class-based events is not really an option, as these cannot be used to communicate between user sessions.
There is a mechanism that you can use to send messages between sessions: ABAP Messaging Channels. You can send anything that is a text string or a byte string, or anything that can be serialised into one of those.
You will need to create such a message channel using the repository browser SE80 (Create > Connectivity > ABAP Messaging Channel) or with the Eclipse ADT (New > ABAP Messaging Channel Application).
In there, you will have to define:
The message type (text vs binary)
The ABAP programs that are authorised to access the message channel.
The scope of the messages (i.e. do you want to send messages between users? or just for the same user? what about between application servers?)
The message channels work through a publish-subscribe mechanism. You will have to use specialised classes to publish to the channel (inside report A) and to read from the channel (inside report B). In order to wait for a message to arrive once you have subscribed, you can use the statement WAIT FOR MESSAGING CHANNELS.
Example code:
" publishing a message
CAST if_amc_message_producer_text(
cl_amc_channel_manager=>create_message_producer(
i_application_id = 'DEMO_AMC'
i_channel_id = '/demo_text'
i_suppress_echo = abap_true )
)->send( i_message = text_message ).
" subscribing to a channel
DATA(lo_receiver) = NEW message_receiver( ).
cl_amc_channel_manager=>create_message_consumer(
i_application_id = 'DEMO_AMC'
i_channel_id = '/demo_text'
)->start_message_delivery( i_receiver = lo_receiver )
" waiting for a message
WAIT FOR MESSAGING CHANNELS
UNTIL lo_receiver->text_message IS NOT INITIAL
UP TO time SECONDS.
If you want to avoid waiting inside your subscriber report B and do something else in the meantime, you can wrap the WAIT FOR... statement inside an RFC and call this RFC using the aRFC variant. This allows you to continue doing work inside report B while waiting for an event to happen. When the event happens, the aRFC callback method that you defined inside your report when calling the RFC is executed.
Inside the RFC, you would simply have the subscription part and the WAIT statement plus an assignment of the message itself to an EXPORTING parameter. In your report, you could have something like:
CALL FUNCTION 'ZMY_AMC_WRAPPER'
  STARTING NEW TASK 'MY_TASK'
  CALLING lo_listener->my_method ON END OF TASK.

" inside your 'listener' class implementation
METHOD my_method.
  DATA lv_message TYPE my_message_type.
  RECEIVE RESULTS FROM FUNCTION 'ZMY_AMC_WRAPPER'
    IMPORTING
      ev_message = lv_message.
  " do something with the lv_message
ENDMETHOD.
You could emulate it by checking in program B whether a parameter in SAP memory has changed; program A would set this parameter to send the event (i.e. SET/GET PARAMETER ...). In effect you're polling the event in B.
There are a lot of unknowns in your description. For example, is the event a one-shot operation or can A send several events? If so, B will have to clear the parameter once it has handled the event, so that A knows it's OK to send a new one (and A will have to wait for the parameter to clear after having set it)...
Edited: removed the part about there being no messaging in ABAP, since Seban showed I was wrong.

Implementing bulk-messaging from Salesforce to/from Twilio, hitting Salesforce API limits

I am building an integration between Salesforce and Twilio that sends/receives SMS using the TwilioForce REST API. The main issue is getting around Salesforce's 10-callout API limit, as well as the prohibition on HTTP callouts from a trigger.
I am basing the design on Dan Appleman's Asynchronous Request processes, but in either Batch mode or RequestAsync(), ASync(), Sync(), repeat... I'm still hitting the limits.
I'd like to know how other developers have done this successfully; the integrations have been there for a while, but the examples are few and far between.
Are you sending unique messages for each record that has been updated? If not, then why not send one message to multiple recipients to save on your API limits?
Unfortunately, if you do actually need to send more than 10 unique messages, there is no way to send messages in bulk with the Twilio API. You could instead write a simple application that runs on Heroku or some other application platform, and call out to it to handle the SMS functionality for you.
I have it working now using the following structure (I apologize for the formatting - it's mostly pseudocode):
ASyncRequest object:
AsyncType (picklist: 'SMS to Twilio' is it for now),
Params (long text area: comma-separated list of Ids)
Message object:
To (phone), From (phone), Message (text), Sent (boolean), smsId (string), Error (text)
Message trigger: passes trigger details to CreateAsyncRequests() method.
CreateAsyncRequests: evaluate each new/updated Message__c; if Sent == false for any messages, we create an AsyncRequest, type=SMS to Twilio, Params += ',' + message.Id.
// Create a list to be inserted after all the Messages have been processed
List<ASyncRequest__c> requests = new List<ASyncRequest__c>();
Once we reach 5 message.Ids in a single AsyncRequest.Params list, add it to requests.
If all the messages have been processed and there's a request with < 5 Ids in Params, add it to requests as well.
if (requests.size() > 0) {
    insert requests;
    AsyncProcessor.StartBatch();
}
AsyncProcessor implements .Batchable and .AllowsCallouts, and queries ASyncRequest__c for any requests that need to be processed, which in this case will be our Messages list.
The execute() method takes the list of ASyncRequests, splits each Params value into its component Message Ids, and then queries the Message object for those particular Messages.
StartBatch() calls execute() with 1 record at a time, so that each execute() process will still contain fewer than the maximum 10 callouts.
Each Message is processed in a try/catch block that calls SendMessage(), sets Message.smsId = Twilio.smsId and sets Message.Sent = true.
If no smsId is returned, then the message was not sent, and I set a boolean bSidIsNull = true indicating that (at least) one message was not sent.
** If any message failed, no smsIds are returned EVEN FOR MESSAGES THAT WERE SUCCESSFUL **
After each batch of messages is processed, I check bSidIsNull; if true, then I go back over the list of messages and put any that do not have an smsId into a map indexed by the Twilio number I'm trying to send them From.
Since I limited each ASyncRequest to 5 messages, I still have the use of a callout to retrieve all of the messages sent from that Twilio.From number for the current date, using
client.getAccount().getMessages(new Map<String, String>{'From' => fromNumber, 'DateSent' => currentDate})
Then I can update the Message.smsIds for all of the messages that were successful, and add an error message to Message.Error_on_Send__c for any that failed.
