Laravel queue event executed immediately before page fully loaded - laravel

I add a job to the queue in a page-load controller, fire a Laravel event from it, and broadcast it via Socket.IO to the front end. The problem is that, since this is done on page load, the job executes before the page has fully loaded. As a result, I can see the response appended briefly while the page loads, and it disappears once the page has fully loaded. Why is that?
I suspected the connection was running sync instead of redis, but checking .env and config/queue.php confirms redis is the default.
dispatchNow works fine, but dispatch does not. Is that why the response is sent immediately, before the page has settled down?
On the front end I placed the Socket.IO connection code inside document ready to make sure it runs after the DOM is loaded, but that doesn't help; it behaves the same.
I tried another workaround where I fire an AJAX call to queue the job once a specific DOM element is visible, and that works fine.
But I want it to be triggered from the page controller itself instead of a separate AJAX call.
In controller:
$sellings = curl(...some call to external url);
SendOrder::dispatchNow($sellings, Auth::id());
return view('home');
In SendOrder job:
public function handle()
{
    // Allow only 2 of these jobs to run every 1 second
    Redis::throttle('any_key')->allow(2)->every(1)->then(function () {
        event(new DashboardEvent('job1', $this->order, $this->user));
        Log::info('job 1 done');
    }, function () {
        // Could not obtain lock; this job will be re-queued
        return $this->release(2);
    });
}
.env:
BROADCAST_DRIVER=redis
CACHE_DRIVER=redis
QUEUE_CONNECTION=redis
SESSION_DRIVER=redis
config/queue.php
'default' => env('QUEUE_CONNECTION', 'redis'),
........
.........
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 90,
    'block_for' => null,
],

You could simply delay the dispatching with something like:
SendOrder::dispatch()->delay(now()->addMinutes(1))
Also, if you want to dispatch a job after the response is sent back to the client, you could use the dispatchAfterResponse() method.
This method makes the job run after the response is sent and before the connection is closed. It simply registers a terminating callback that the application runs before it's done with the request.
However, this is only useful for dispatching a short job immediately instead of sending it to a queue system. Since sending emails isn't exactly a short job, this might not be for you.
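For completeness, here is a minimal sketch of what that would look like in the controller from the question, assuming a Laravel version that ships dispatchAfterResponse() (7.x or later):

    // $sellings is fetched exactly as in the original controller (external curl call elided).
    // The response is returned to the browser first; SendOrder then runs during the
    // terminating phase of the same request, without needing a queue worker.
    SendOrder::dispatchAfterResponse($sellings, Auth::id());
    return view('home');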

Related

Laravel email sending not working when 'to' not overridden

Email sending works fine when I have a global 'to' set in config/mail.php:
'to' => [
    'address' => 'someone@example.com',
    'name' => 'Someone',
],
However, as soon as this is removed and the email is supposedly sent to the actual email address, nothing happens. No errors, everything appears fine, except the email is never received.
I've checked spam folders, I've tried sending to different email addresses, I've tried setting the notifiable route (even though 'email' exists on the model), I've cleared the cache and config cache, and I've tried listening to the message sending event and dumping the results. I'm lost.
Solved.
When sending a Mailable object (as opposed to a MailMessage) you need to chain the to() call like so:
public function toMail($notifiable)
{
    return (new MailObject(blah blah))->to($notifiable);
}

How to set dynamic SMTP data in Laravel 5.4 for queued emails?

In my application each user can use his own SMTP server, therefore the config must be provided per user. I'm using Laravel Notifications to send the emails. If I'm not using a queue (i.e. sync), there is no problem.
I made a CustomNotifiable Trait:
config([
    'mail.host' => $setting->smtp_host,
    'mail.port' => $setting->smtp_port,
    'mail.username' => $setting->smtp_username,
    'mail.password' => $setting->smtp_password,
    'mail.encryption' => $setting->smtp_encryption,
    'mail.from.address' => $setting->smtp_from_address,
    'mail.from.name' => $setting->smtp_from_name,
]);
(new \Illuminate\Mail\MailServiceProvider(app()))->register();
After that, I restore the original config:
config([
    'mail' => $originalMailConfig
]);
(new \Illuminate\Mail\MailServiceProvider(app()))->register();
No problem until now.
But if it's queued, only the first config set after starting the queue worker is used for all further emails, even if a different SMTP config is provided. The default config from config/mail.php does get overridden, but only the first time.
In the AppServiceProvider::boot method I've added the following (the SMTP config is stored on the notification):
Queue::before(function (JobProcessing $event) {
    // Handle queued notifications before they get executed
    if (isset($event->job->payload()['data']['command']))
    {
        $payload = $event->job->payload();
        $command = unserialize($payload['data']['command']);

        // Set dynamic SMTP data if required
        if (isset($command->notification->setting))
        {
            config([
                'mail.host' => $command->notification->setting->smtp_host,
                'mail.port' => $command->notification->setting->smtp_port,
                'mail.username' => $command->notification->setting->smtp_username,
                'mail.password' => $command->notification->setting->smtp_password,
                'mail.encryption' => $command->notification->setting->smtp_encryption,
                'mail.from.address' => $command->notification->setting->smtp_from_address,
                'mail.from.name' => $command->notification->setting->smtp_from_name,
            ]);
            (new \Illuminate\Mail\MailServiceProvider(app()))->register();
        }
    }
});
Of course, the original config gets restored afterwards:
Queue::after(function (JobProcessed $event) use ($originalMailConfig) {
    $payload = $event->job->payload();
    $command = unserialize($payload['data']['command']);

    // Restore the global mail settings
    if (isset($command->notification->setting))
    {
        config([
            'mail' => $originalMailConfig
        ]);
        (new \Illuminate\Mail\MailServiceProvider(app()))->register();
    }
});
It seems as if Swift Mailer has a cache or something like that. I registered a new MailServiceProvider, which should simply replace the old one, so if I set the config with the new SMTP data, the newly registered provider should pick it up. Logging the config, even in the TransportManager, shows that the correct SMTP data was set right before sending the mail, but the mail was still sent with the first config.
I found this thread and tried the linked solution, but with the same result: How to set dynamic SMTP details laravel
So I need a way to override the services / service provider / SMTP config. Even if Supervisor restarts the queue, there is a chance that multiple emails with different configs need to be sent at the same time.
In Laravel 5.4+, the Mailer class is a singleton that holds a mail transport class responsible for the SMTP config, which is a singleton too, so I just have to override its settings using the following approach.
First, I set up a trait so I can turn this feature on for selected mails:
trait MailSenderChangeable
{
    /**
     * @param array $settings
     */
    public function changeMailSender($settings)
    {
        $mailTransport = app()->make('mailer')->getSwiftMailer()->getTransport();

        if ($mailTransport instanceof \Swift_SmtpTransport) {
            /** @var \Swift_SmtpTransport $mailTransport */
            $mailTransport->setUsername($settings['email']);
            $mailTransport->setPassword($settings['password']);
        }
    }
}
Then, in the build() method of your mail class, you can utilize the above trait and call:
$this->changeMailSender([
    'email' => $this->company->email,
    'password' => $this->company->email_password,
]);
Boom, let Laravel do the rest.
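For context, here is a minimal sketch of how the trait might sit inside a mailable; the OrderInvoice class name, the company property, and the view name are assumptions for illustration, not from the original post:

    use Illuminate\Mail\Mailable;

    class OrderInvoice extends Mailable
    {
        use MailSenderChangeable;

        public $company;

        public function __construct($company)
        {
            $this->company = $company;
        }

        public function build()
        {
            // Swap the credentials on the shared SMTP transport before the mail is built.
            $this->changeMailSender([
                'email'    => $this->company->email,
                'password' => $this->company->email_password,
            ]);

            return $this->view('emails.order-invoice');
        }
    }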
After a lot of research I stumbled upon the different queue commands. I tried queue:listen (which is not described in the Laravel 5.4 docs) instead of queue:work, and the problems are gone.
Of course, this doesn't really explain the described behavior, but fortunately that doesn't matter, because I can live with this solution/workaround.
Another strange behavior was that from time to time the queue worker threw an exception because the database was locked. No idea when or why this happened.
This post explains a little of why these things can happen: What is the difference between queue:work --daemon and queue:listen
In a nutshell, queue:listen solved my problem and another very strange db lock problem as well.

Possible ajax call to result in extreme load times after redirects?

I have the following scenario:
Page A is loaded and fires off 3 AJAX calls. They take some time because they fetch a lot of data, though nothing the server can't handle. Now the user is redirected to page B before the AJAX calls have finished. I know they keep running in the background, but could this possibly cause dramatic loading times? There is no SQL overload, and the server's processor is only at around 10% of its limit. Yet loading times vary from 1-2 seconds all the way to 50+ seconds.
Is it possible that this is caused by the previous, still-running AJAX calls, and that the browser is somehow still holding a connection for those calls and waiting for a response before it loads the next page?
Based on your answer in the comment section, I'll try to explain what is happening and provide a solution, because I have run into this issue many times. Although you write that your AJAX calls don't use the session library, check whether you start the session anyway.
PHP writes session data to a file by default. When a request starts, it opens the session and locks the session file for the duration of the request. This means that if your web page makes many requests to PHP scripts, e.g. for loading content via AJAX, each request locks the session file and prevents the other requests from completing until the lock is released.
What we did in order to prevent situations like that is the following:
in your /application/config/hooks.php
$hook['pre_controller'][] = array(
    "class"    => "AppSessionInterceptor",
    "function" => "initialize",
    "filename" => "AppSessionInterceptor.php",
    "filepath" => "hooks"
);
and after that create a hook called AppSessionInterceptor
class AppSessionInterceptor
{
    private $ci;

    public function __construct()
    {
        $this->ci = &get_instance();
    }

    public function initialize()
    {
        //some doings
        $this->closeSession();
    }

    private function closeSession()
    {
        $blnSessWriteClose = true;
        $arrStopSessionFromWriteClose = array(
            $this->ci->uri->segment(1) => array("logout", "login"),
            $this->ci->uri->segment(2) => array("logout", "login"),
            $this->ci->input->get("blnWriteSession") => array(1)
        );

        foreach ($arrStopSessionFromWriteClose as $key => $arrValue)
        {
            if (in_array($key, $arrValue))
            {
                $blnSessWriteClose = false;
                break;
            }
        }

        if ($blnSessWriteClose) session_write_close();
    }
}
This closes the session before anything else is called.
Keep in mind that once you close a session, you can still read its data but can no longer write to it.
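To illustrate the underlying principle outside of CodeIgniter, here is a plain-PHP sketch of an AJAX endpoint; fetch_report() is a hypothetical stand-in for the long-running work:

    <?php
    // ajax-endpoint.php
    session_start();                        // read whatever we need from the session
    $userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;

    // Release the session file lock so parallel AJAX requests
    // and the next page load are not blocked by this request.
    session_write_close();

    $result = fetch_report($userId);        // hypothetical long-running work

    header('Content-Type: application/json');
    echo json_encode($result);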

Laravel 4 and Iron.io multiple queue / PHP

I started looking at Iron.io as a service for my queue processing. With the easy setup in Laravel I got it working in a couple of minutes, but there is something that is not clear to me.
I subscribed a new queue called resizer using the artisan command as follows:
php artisan queue:subscribe resizer http://mywebsite.com/queue/resizer
In the settings in the queue.php file I have to put the name of the created queue, in this case resizer, under the queue key:
'iron' => array(
    'driver'  => 'iron',
    'host'    => 'mq-aws-us-east-1.iron.io',
    'token'   => 'xxxxxx',
    'project' => 'xxxx',
    'queue'   => 'resizer',
    'encrypt' => true,
),
But I will certainly have other kinds of queues. This resizer queue is responsible for resizing images, but I will have to set up another one for sending email, maybe called email.
Now let's say I want to implement the email queue and keep the resizer; well, I thought I'd just subscribe another endpoint:
php artisan queue:subscribe email http://mywebsite.com/queue/email
my routes:
Route::post('queue/resizer', function()
{
    Queue::marshal();
});

Route::post('queue/email', function()
{
    Queue::marshal();
});
Problem:
When I hit the route queue/email, Iron.io fires the resizer instead of the email process, adding one more message to that queue, because in the settings I set up resizer.
So how can I assign different tasks/queues to Iron.io, each one for different needs?
You can use the pushRaw function:
pushRaw($payload, $queue = null, array $options = array())
Example:
Queue::pushRaw("This is Hello World payload", "email");
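For regular class-based jobs the same idea applies: in Laravel 4, Queue::push() accepts the target queue name as its third argument. A sketch, where SendEmail is a hypothetical job class:

    // Push a job onto the "email" Iron.io queue instead of the default "resizer".
    Queue::push('SendEmail', array('message' => 'Hello'), 'email');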

Yii session timeout not working

Same as the title. In my config:
'session' => array(
    'class'     => 'CHttpSession',
    'timeout'   => 1200,
    'autoStart' => true,
),
in my code:
$ssKey = 'MY_SS_KEY';

if (isset(Yii::app()->session[$ssKey])) {
    $this->jobWithSession();
} else {
    $this->jobWithNotSession();
    Yii::app()->session[$ssKey] = 'ok';
}
The first time, it calls jobWithNotSession(), but after more than 1200 seconds (20 minutes) it still calls jobWithNotSession(). What's wrong? Can somebody help me?
In order to make the Yii session timeout work, you should do the following:
In protected/config/main.php:
'components' => array(
    'session' => array(
        'class'   => 'CDbHttpSession', // Set class to CDbHttpSession
        'timeout' => 1800,             // Any time, in seconds
    ),
),
1800 is the time, in seconds, during which your session will remain active.
It is also important to set the class to CDbHttpSession.
You are using the wrong functionality here. The session timeout only applies when the PHP garbage collector is called, which is far from every page view; it depends on the gc_probability setting. So that's not what you want to use. As long as garbage collection doesn't run, the session still exists (even though it has expired) and the user remains logged in.
What you do want is to "remove" the autoLogin cookie, which you can do by controlling its duration.
So basically, what you need to change is the duration parameter of the CWebUser::login() function.
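A minimal sketch of what that looks like in a login action, assuming allowAutoLogin is enabled on the user component and $identity comes from your existing LoginForm logic:

    // Keep the auto-login cookie (and thus the logged-in state) for 20 minutes only.
    $duration = 1200; // seconds
    Yii::app()->user->login($identity, $duration);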
