I'm trying to send multiple emails through a queue (beanstalkd). My application sends a number of emails and then I receive a timeout exception.
foreach ($emails as $e) {
    Mail::queue('emails.invite', ["username" => Auth::user()->username, "grupa" => $naziv, "id" => $id, "email" => $e], function ($message) use ($e) {
        $message->to($e)->subject("Pridruži nam se!");
    });
}
Is there a way to put all the emails into the queue, so that they are sent whenever the system is available?
EDIT: Full message for timeout exception:
{"error":{"type":"Symfony\\Component\\Debug\\Exception\\FatalErrorException","message":"Maximum execution time of 30 seconds exceeded","file":"\/home\/forge\/default\/vendor\/nikic\/php-parser\/lib\/PHPParser\/NodeAbstract.php","line":110}}
How are you doing it? If you're using beanstalkd (or any queue), you have two parts: your application (the producer, which sends jobs to the queue) and a worker process (the consumer, which reads data from the queue and sends the email).
The producer just puts the email into beanstalkd, so that part is easy.
The consumer should be a long-running process executed from the CLI, with no maximum execution time (you have to tweak the php.ini used by php-cli). In a loop, it checks whether there is something new in the queue and sends the email.
Basically, it sounds like your problem is that the consumer part has a maximum execution time set, so it can't keep consuming emails in a while loop after X seconds. Tweak that and make sure there's no limit.
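For illustration, here is a minimal sketch of such a consumer as a plain PHP CLI script (assumptions: a Pheanstalk 3.x-style client and a tube named "emails"; with Laravel you would normally just run php artisan queue:listen and raise or remove the CLI max_execution_time instead of writing this yourself):
<?php
// Rough sketch of a long-running beanstalkd consumer, run from the CLI.
// Assumes the Pheanstalk 3.x client and a tube named "emails".
require 'vendor/autoload.php';

set_time_limit(0); // no maximum execution time for this CLI process

$pheanstalk = new Pheanstalk\Pheanstalk('127.0.0.1');
$pheanstalk->watch('emails');

while (true) {
    $job  = $pheanstalk->reserve();     // blocks until a job is available
    $data = json_decode($job->getData(), true);

    // send the email here with your mailer of choice

    $pheanstalk->delete($job);          // remove the job once it has been handled
}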
An application that I'm making will allow users to set up automatic email campaigns to email their list of users (up to x per day).
I need a way of making sure this is throttled so that too many aren't sent within a short period. Right now I'm working within the confines of a free Mailtrap plan, but even in production using SendGrid I want a sensible throttle.
So say a user has set their automatic time to 9am and there are 30 users eligible to receive requests at that date and time. Every review_request gets a record in the DB, and upon model creation an event listener is triggered which then dispatches a job.
This is the handle method of the job that is dispatched:
/**
 * Execute the job.
 *
 * @return void
 */
public function handle()
{
    Redis::throttle('request-' . $this->reviewRequest->id)
        ->block(0)->allow(1)->every(5)
        ->then(function () {
            // Lock obtained...
            $message = new ReviewRequestMailer($this->location, $this->reviewRequest, $this->type);

            Mail::to($this->customer->email)
                ->send($message);
        }, function () {
            // Could not obtain lock...
            return $this->release(5);
        });
}
The above is taken from https://laravel.com/docs/8.x/queues#job-middleware:
"For example, consider the following handle method which leverages Laravel's Redis rate limiting features to allow only one job to process every five seconds:"
I am using Horizon to view the jobs. When I run my command to send emails (about 25 requests to be sent), all jobs seem to process instantly, not one every 5 seconds as I would expect.
The exception for the failed jobs is:
Swift_TransportException: Expected response code 354 but got code "550", with message "550 5.7.0 Requested action not taken: too many emails per second
Why does the above Redis throttle not process a single job every 5 seconds? And how can I achieve this?
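For reference, Redis::throttle identifies the limiter by its key, so keying it on $this->reviewRequest->id (as above) gives every job its own limiter. A minimal sketch of the same handle method using one shared key (the key name 'send-review-request' is made up) looks like this:
public function handle()
{
    // Shared key: every queued mail job competes for the same limiter,
    // so at most one of them is allowed to run every five seconds.
    Redis::throttle('send-review-request')
        ->block(0)->allow(1)->every(5)
        ->then(function () {
            Mail::to($this->customer->email)->send(
                new ReviewRequestMailer($this->location, $this->reviewRequest, $this->type)
            );
        }, function () {
            // Could not obtain the lock; try again in five seconds.
            return $this->release(5);
        });
}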
I currently have an import process that dispatches a series of jobs to a default queue that are initiated by user input via API.
If I add the user id to the queue name when dispatching, the job will go to a user-specific queue, but I have no way of starting a queue worker for that specific queue. Is there any way to programmatically start a queue:work command to get around this?
Furthermore, I'd like to send a broadcast signal to the individual user once the queue has finished its jobs. My initial thought was to send that signal from an event subscriber that monitors the user-specific queue, if I can solve the initial question.
I found a partial route here: Polling Laravel Queue after All Jobs are Complete
This doesn't fully work because it will keep triggering when the queue is empty, so I'd have to find some way to unsubscribe the event subscriber once it has run once. I'd also have to find a way to register the event subscriber at runtime, once the import process has started, rather than in the Event Subscriber Service Provider as described in the official Laravel documentation.
https://laravel.com/docs/8.x/events#event-subscribers
One approach could be to create a custom table that manages this, add/remove entries from it, and have the Looping event subscriber iterate through that table: check whether the queue is in the table, and if so check its size; if it's 0, send the broadcast signal and then remove the queue from the table.
Here are the events that already exist for queues: https://laravel.com/api/8.x/Illuminate/Queue/Events/Looping.html
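For what it's worth, a rough sketch of that table-driven idea (the imports table, its columns, and the ImportFinished event are all hypothetical names), registered for example in a service provider's boot method:
use App\Events\ImportFinished; // hypothetical broadcast event
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Queue;

Queue::looping(function () {
    // Check each user-specific queue we are currently tracking.
    foreach (DB::table('imports')->get() as $import) {
        if (Queue::size('import-' . $import->user_id) === 0) {
            // Queue drained: notify the user and stop tracking it.
            broadcast(new ImportFinished($import->user_id));
            DB::table('imports')->where('id', $import->id)->delete();
        }
    }
});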
What's the best way to approach this?
Start to End:
A user provides a file to import; I interpret the file and dispatch jobs that process the data; once the jobs are finished, a broadcast signal should be sent to that user saying the import is complete.
You might want to use the Job Batches functionality.
It will let you dispatch jobs and run a callback at the end. Here is an example from the docs:
$batch = Bus::batch([
    new ImportCsv(1, 100),
    new ImportCsv(101, 200),
    new ImportCsv(201, 300),
    new ImportCsv(301, 400),
    new ImportCsv(401, 500),
])->then(function (Batch $batch) {
    // All jobs completed successfully...
})->catch(function (Batch $batch, Throwable $e) {
    // First batch job failure detected...
})->finally(function (Batch $batch) {
    // The batch has finished executing...
})->dispatch();
You can send the broadcast event at the end, in the callback.
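For example (a sketch only: ImportCompleted is a hypothetical broadcast event, and $importJobs/$userId stand in for your own jobs and user id):
use App\Events\ImportCompleted; // hypothetical event implementing ShouldBroadcast
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;

$batch = Bus::batch($importJobs)
    ->then(function (Batch $batch) use ($userId) {
        // All jobs completed successfully: notify this user's channel.
        broadcast(new ImportCompleted($userId));
    })
    ->dispatch();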
I have a Lambda trigger that reads messages from an SQS queue. In some conditions the message may not be ready for processing, so I'd like to put the message back in the queue for 1 minute and try again. Currently, I am creating another copy of the customer record and posting this new copy to the queue. Is there a reason/way for me to keep the original record in the queue as opposed to creating a new one?
import datetime
import json

import boto3


def postToQueue(customer):
    if 'attemptCount' in customer.keys():
        attemptCount = int(customer["attemptCount"]) + 1
    else:
        attemptCount = 2
    customer["attemptCount"] = attemptCount
    # Get the service resource
    sqs = boto3.resource('sqs')
    # Get the queue
    queue = sqs.get_queue_by_name(QueueName='testCustomerQueue')
    response = queue.send_message(MessageBody=json.dumps(customer), DelaySeconds=60)
    print('customer postback: ', customer)
    print('response from writing to the queue is: ', response)
# main function
for record in event['Records']:
    if 'body' in record.keys():
        customer = json.loads(record['body'])
        print("attempting to process customer", customer, " at: ", datetime.datetime.now())
        if not ifReadyToProcess(customer):
            postToQueue(customer)
        else:
            processCustomer(customer)
This is not an ideal setup for SQS triggering Lambda functions.
My testing shows that messages sent to SQS will immediately trigger the Lambda function, even if a Delay setting is provided. Therefore, putting a message back onto the SQS queue will cause Lambda to fire again straight after.
To avoid a situation where Lambda is continually checking whether a message is ready for processing, I would recommend:
Use Amazon CloudWatch Events to trigger a Lambda function on a schedule (e.g. every 2 minutes).
The Lambda function should pull messages from the queue and check whether they are ready to process.
If they are ready, then process them and delete them.
If they are not ready, then push them back onto the queue with a Delay setting and delete the original message.
Note that this is different from having SQS directly trigger Lambda. Instead, the Lambda function should call ReceiveMessage() to obtain the message(s) itself, which allows the delay to add some time between checks.
Another option: instead of re-inserting a message into the queue, you could simply take advantage of the Default Visibility Timeout setting by not deleting the message. A message that is read from the queue but not deleted will automatically "reappear" on the queue once the visibility timeout expires. You could use this as the "retry" time period. However, this means you will need to handle Dead Letter processing yourself (e.g. if a message fails to be processed after n tries).
In my application, I have a queue (HornetQ) set up on JBoss 7 AS.
I have used Spring Batch to do some work once a message is received (save values in the database etc.), and then the consumer commits the JMS session.
Sometimes, when there is an exception while processing a message, the execution of the consumer is aborted abruptly and the message remains in the "in delivery" state. There are about 30 messages in this state on my production queue.
I have tried restarting the consumer, but the state of these messages does not change. The only way to remove these messages from the queue is to restart the queue, but before doing that I want a way to read them so that they can be corrected and sent to the queue again to be processed.
I have tried using a QueueBrowser to read them, but it does not work. I have searched a lot on Google but could not find any way to read these messages.
I am using a transacted session where, once the message is processed, I call:
session.commit();
This sends the acknowledgement.
I am implementing Spring's
org.springframework.jms.listener.SessionAwareMessageListener
to receive messages and then process them.
While processing the messages, I am using Spring Batch to insert some data into the database.
For one particular case, it tries to insert data too big for the column.
It throws an exception and the transaction is aborted.
I have now fixed my producer and consumer so that such data no longer occurs and this case should not happen again.
But my question is: what about the 30 "in delivery" messages that are in my production queue? I want to read them so that they can be corrected and sent to the queue again to be processed. Is there any way to read these messages? Once I know their content, I can restart the queue and submit them again (after correcting them).
Thanking you in anticipation,
Suvarna
It all depends on the transaction mode you are using.
For instance, if you use transactions:
// session here is a TX session
MessageConsumer consumer = session.createConsumer(someQueue);
connection.start();
Message msg = consumer.receive...
session.rollback(); // this will make the messages be redelivered
if you are using a non-TX (auto-acknowledge) session:
// session here is auto-ack
MessageConsumer consumer = session.createConsumer(someQueue);
connection.start();
// this means the message is ACKed as we receive it, doing auto-ACK
Message msg = consumer.receive...
// however the consumer here could have a buffer from the server...
// if you are not using the consumer any longer, close it
consumer.close(); // this will release messages in the client buffer
Alternatively you could also set consumerWindowSize=0 on the connectionFactory.
This is on 2.2.5, but it never changed in the following releases:
http://docs.jboss.org/hornetq/2.2.5.Final/user-manual/en/html/flow-control.html
I'm covering all the possibilities I could think of, since you're not being specific about how you are consuming. If you provide more detail, I will be able to tell you more:
You can indeed read the messages in the queue using JMX (with, for example, jconsole).
In JBoss AS7 you can do it the following way:
MBeans > jboss.as > messaging > default > myJmsQueue > Operations
listMessagesAsJson
[edit]
Since 2.3.0 you have a dedicated method for this specific case:
listDeliveringMessages
See https://issues.jboss.org/browse/HORNETQ-763
I am building an integration between Salesforce and Twilio that sends/receives SMS using the TwilioForce REST API. The main issue is getting around the 10-callout limit in Salesforce, as well as the prohibition on HTTP callouts from a trigger.
I am basing the design on Dan Appleman's Asynchronous Request processes, but whether in Batch mode or with RequestAsync(), ASync(), Sync(), repeat..., I'm still hitting the limits.
I'd like to know how other developers have done this successfully; the integrations have been there for a while, but the examples are few and far between.
Are you sending unique messages for each record that has been updated? If not, then why not send one message to multiple recipients to save on your API limits?
Unfortunately, if you do actually need to send more than 10 unique messages, there is no way to send messages in bulk with the Twilio API. You could instead write a simple application that runs on Heroku or some other application platform, which you can call out to and which will handle the SMS functionality for you.
I have it working now using the following structure (I apologize for the formatting - it's mostly pseudocode):
ASyncRequest object:
AsyncType (picklist: 'SMS to Twilio' is the only value for now),
Params (long text area: comma-separated list of Ids)
Message object:
To (phone), From (phone), Message (text), Sent (boolean), smsId (string), Error (text)
Message trigger: passes trigger details to the CreateAsyncRequests() method.
CreateAsyncRequests: evaluate each new/updated Message__c; if Sent == false for any message, we create an AsyncRequest with type = SMS to Twilio and append Params += ',' + message.Id.
// Create a list to be inserted after all the Messages have been processed
List<ASyncRequest__c> requests = new List<ASyncRequest__c>();
Once we reach 5 message.Ids in a single AsyncRequest.Params list, add it to requests.
If all the messages have been processed and there's a request with < 5 Ids in Params, add it to requests as well.
if (requests.size() > 0) {
    insert requests;
    AsyncProcessor.StartBatch();
}
AsyncProcessor implements Database.Batchable and Database.AllowsCallouts, and queries ASyncRequest__c for any requests that need to be processed, which in this case will be our Messages list.
The execute() method takes the list of ASyncRequests, splits each Params value into its component Message Ids, and then queries the Message object for those particular Messages.
StartBatch() calls execute() with 1 record at a time, so that each execute() process will still contain fewer than the maximum 10 callouts.
Each Message is processed in a try/catch block that calls SendMessage(), sets Message.smsId = Twilio.smsId and sets Message.Sent = true.
If no smsId is returned, then the message was not sent, and I set a boolean bSidIsNull = true indicating that (at least) one message was not sent.
** If any message failed, no smsIds are returned EVEN FOR MESSAGES THAT WERE SUCCESSFUL **
After each batch of messages is processed, I check bSidIsNull; if true, then I go back over the list of messages and put any that do not have an smsId into a map indexed by the Twilio number I'm trying to send them From.
Since I limited each ASyncRequest to 5 messages, I still have the use of a callout to retrieve all of the messages sent from that Twilio.From number for the current date, using
client.getAccount().getMessages(new Map<String, String>{'From' => fromNumber, 'DateSent' => currentDate})
Then I can update Message.smsId for all of the messages that were successful, and add an error message to Message.Error_on_Send__c for any that failed.