I have a Laravel Scheduled job which is defined in Kernel.php like so
$schedule->call('\App\Http\Controllers\ScheduleController@processQueuedMessages')
         ->everyFiveMinutes()
         ->name('process_queued_messages')
         ->withoutOverlapping();
During development, my job threw an exception due to a syntax error. I corrected the error and tried executing the job again, but for some reason it wouldn't run.
I tried artisan down and then artisan up. I also tried restarting the server instance. Nothing helped: the job simply didn't get executed (and there was no exception either).
I realized that the problem is due to ->withoutOverlapping(). Somehow the Laravel scheduler thinks the job is still running and therefore won't execute it again.
I found the solution by looking at the vendor code in Illuminate\Console\Scheduling\CallbackEvent.php, which creates a file named schedule-* in local storage:
public function withoutOverlapping()
{
    if ( ! isset($this->description))
    {
        throw new LogicException(
            "A scheduled event name is required to prevent overlapping. Use the 'name' method before 'withoutOverlapping'."
        );
    }

    return $this->skip(function()
    {
        return file_exists($this->mutexPath());
    });
}
protected function mutexPath()
{
    return storage_path().'/framework/schedule-'.md5($this->description);
}
Deleting the schedule-* file in storage/framework resolved the issue.
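If you would rather clear the stale lock from code than hunt for the file by hand, here is a minimal sketch (run it via php artisan tinker, for example); it assumes the default mutexPath() shown above and the event name process_queued_messages from the question:

$mutex = storage_path('framework/schedule-'.md5('process_queued_messages'));

if (file_exists($mutex)) {
    unlink($mutex); // remove the stale lock so the scheduler will run the job again
}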
To anyone reading this: deleting the schedule files yourself is not the right way to go. You need to specify the lock time, based on which withoutOverlapping prevents further tasks from running.
As cited in Laravel - Task Scheduling
If needed, you may specify how many minutes must pass before the "without overlapping" lock expires. By default, the lock will expire after 24 hours:
Your problem originates from the fact that withoutOverlapping applies a default lock of 24 hours, so you had to wait 24 hours before similar tasks were accepted again. Simply adjust the lock time to your needs:
$schedule->command('emails:send')->withoutOverlapping(10); // where 10 refers to minutes
This did the trick for me:
php artisan cache:clear
(In newer Laravel versions the withoutOverlapping mutex is stored in the application cache rather than in a schedule-* file, which is why clearing the cache releases the stale lock.)
I had this problem too. There isn't a proper solution for this, but a workaround will solve the issue.
Go to the storage/framework folder of your project and delete all the schedule-* files.
Then try to run the cron again. It will run even if you use the withoutOverlapping() function.
Hope this works for you; ask if you have any doubts.
Since Laravel 8.x you can use a dedicated command to clear the mutex files, for example when a task gets stuck because the server rebooted while the task was executing.
The problem with cache:clear is that it clears the cache for the entire application, whereas this command only clears the mutex files:
php artisan schedule:clear-cache
This happened to us this week and we think we figured out the cause. Our crons run off one of our production servers' site folder. Our deploy process involves a second folder where we do the deploy/build, followed by a hot folder swap at the end. Since withoutOverlapping() likely has to update the schedule-* file when the process is done, the folder might get swapped mid-job; the cron is then unable to mark the job as finished in the correct schedule-* file, so it thinks the job is still running/stuck.
It was a rare occurrence, but we're going to add a command to clear out these files after a deploy so it doesn't happen again.
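A minimal sketch of the post-deploy cleanup we have in mind: a custom artisan command that removes leftover scheduler mutex files. The command name (schedule:clear-mutex) and the file pattern are our own choices here, not a built-in Laravel feature (on Laravel 8+ you could use the built-in schedule:clear-cache instead, as noted above):

namespace App\Console\Commands;

use Illuminate\Console\Command;

class ClearScheduleMutex extends Command
{
    protected $signature = 'schedule:clear-mutex';

    protected $description = 'Delete stale schedule-* mutex files after a deploy';

    public function handle()
    {
        // Remove every file-based scheduler mutex left behind by the old folder.
        foreach (glob(storage_path('framework/schedule-*')) as $file) {
            unlink($file);
            $this->info("Removed {$file}");
        }
    }
}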
Related
I send mail as a cron job with Laravel. When I try to use the most recently added value from my resources/lang/de.json file in the mail Blade template (resources/views/mails/...blade.php), the output looks as if the value were not defined. However, if I use the same key in a Blade file I created earlier, it works without any errors. In addition, the keys I added to the same file (de.json) at the beginning work without errors in the same mail Blade file.
Thinking it was some kind of cache issue, I researched and found that restarting the queue worker might fix the problem. However, even though I ran
php artisan queue:restart
both locally and on the server over SSH, there was no improvement.
Do you have any ideas?
Since queue workers are long-lived processes, they will not notice changes to your code without being restarted. So, the simplest way to deploy an application using queue workers is to restart the workers during your deployment process. https://laravel.com/docs/9.x/queues#queue-workers-and-deployment
However, php artisan queue:restart only instructs all queue workers to gracefully exit after they finish processing their current job, so that no existing jobs are lost, and I have seen plenty of reports of this command failing to actually restart workers during a deploy.
So, the simplest way:
stop the worker manually (Ctrl+C)
start the worker again with php artisan queue:work
This might help.
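If you need to trigger the restart from code rather than from a terminal (for example in a deploy hook), here is a minimal sketch using the standard Artisan facade:

use Illuminate\Support\Facades\Artisan;

// Broadcasts the restart signal to all workers via the cache;
// each worker exits gracefully after finishing its current job.
Artisan::call('queue:restart');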
I'm having issues keeping the queue:work command running on my server. I tried nohup, but as soon as I close the terminal (which times out every 5 minutes or so no matter what I've tried) the process goes away.
I thought about running a script in cron to kick off the nohup command; however, that runs in a jailshell too, so I have no way of seeing whether a process from a previous cron is still running, and I don't want a potential 20k copies of this piling up because cron tries to kick one off every minute.
I also don't have access to install software to install Supervisord.
So, what other solutions can I use to ensure this stays running?
EDIT: I contacted my host's support, and it pretty much looks like there are no real alternatives for me. I think I'm going to have to set this project up on Linode, or rework things to not have queuing tasks.
It seems that the problem resides in the shell configuration: the ps command is rewritten to show only child processes.
The solution is to ask your hosting provider (or change it yourself, if allowed) to set this variable:
SHELL="/bin/bash"
This simple fix got the function working properly for me.
Now my Kernel.php looks as follows:
$command = "ps faux | grep queue:work";
exec($command, $task_list);
// Process are duplicate in ps and show also the command as two single lines
$running_process = (count($task_list) / 2) - 1;
if($running_process < 1)
$schedule->command('queue:work --queue=high,low --tries=3')
->everyMinute();
else if($running_process > 5)
// If too many are active, restart everything to avoid overload
$schedule->call(function(){
Artisan::call('queue:restart');
})->everyMinute();
This code makes sure that at least one worker is always running and, at the same time, forces a restart if more than 5 workers are active.
I'm really struggling with this issue, which is a little frustrating because it seems to be so simple on a Linux server. I have a Windows Azure Web App and I want to run "php artisan queue:listen" on the server continuously to take care of dispatched jobs. From what I read in the documentation, on Linux you just use Supervisor to run the command constantly and to revive it in case it dies. From what I found online, Azure has a similar feature called WebJobs, where you can hand over a script to be run and then decide whether it should run on a schedule or continuously (kinda like the Scheduler in Laravel). With this I have 2 questions.
1 - Is this the right solution? Place a script to run the command on a WebJob and have the WebJob run continuously?
2 - I'm not experienced in writing php scripts to run command lines, so all I can do is something like this:
echo shell_exec('php artisan queue:work');
The problem is that this does not give me the output of the command (I don't see anything like the "processed" lines I get when I run the command by hand in my console and a job is processed). It is important for me to be able to read the command's output, because I want to check the logs for errors in case a job fails to process. According to the documentation, shell_exec returns null when an error occurs, so I'm completely clueless about how to deal with this.
Thank you so much in advance!
Instead of using shell_exec() you can directly upload a .cmd file that includes your command php artisan queue:work; you can then find the output log on the WebJob Details page.
For how to do that, check out Ernesto's answer.
For Azure you can add a new WebJob to your web app and upload a .cmd file containing a command like this:
php %HOME%\site\wwwroot\artisan queue:work --daemon
Define it as a triggered WebJob with a 0 * * * * * frequency cron.
That approach worked for me.
For more information, please refer to Run Background tasks with WebJobs.
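If you still want to capture the command's output from PHP itself rather than reading the WebJob log, here is a minimal sketch using exec(), which, unlike shell_exec(), also exposes the exit code. The 2>&1 redirect pulls stderr into the output, and the --once flag (available in recent Laravel versions) makes the worker process a single job and exit so the call returns:

$output = [];
$exitCode = null;

// Run one job and collect every line the command prints.
exec('php artisan queue:work --once 2>&1', $output, $exitCode);

echo implode(PHP_EOL, $output);

if ($exitCode !== 0) {
    // Inspect $output here for the error details.
}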
I have two apps running on the same server.
Now it seems that, with withoutOverlapping() added to the scheduled job and the base cronjob managed via cron itself, these two apps are blocking each other's execution.
Could that be?
No; withoutOverlapping only works per application.
Laravel creates a file in the storage folder with a hash of the job; if the file exists, Laravel knows the job is still running. One application cannot possibly know whether the other one is currently running a job, because it does not have access to the other application's storage folder.
If your code looks like the following
$schedule->command('process:queue 0')->everyMinute()->withoutOverlapping();
$schedule->command('process:queue 1')->everyMinute()->withoutOverlapping();
it is because the same command with different parameters may be considered overlapping.
I.e. the hash of the job takes only the command signature into account.
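If that is what's happening, one possible workaround (a sketch; the 'queue' argument name is an assumption about the command's signature) is to schedule each variant as a named closure, so each one gets its own mutex via name(), as in the CallbackEvent code quoted earlier in this thread:

use Illuminate\Support\Facades\Artisan;

// Each closure gets a distinct name, so the two mutexes no longer collide.
$schedule->call(function () {
    Artisan::call('process:queue', ['queue' => 0]);
})->everyMinute()->name('process_queue_0')->withoutOverlapping();

$schedule->call(function () {
    Artisan::call('process:queue', ['queue' => 1]);
})->everyMinute()->name('process_queue_1')->withoutOverlapping();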
I've configured the scheduler to run once every minute and execute two commands:
$schedule->command('amazon:read-sqs')
         ->everyMinute()
         ->runInBackground()
         ->withoutOverlapping()
         ->sendOutputTo(storage_path('logs/cmd/amazon_read_sqs.log'), true)
         ->thenPing('http://beats.envoyer.io/heartbeat/SoMeRaNdOmHaSh1');
$schedule->command('jobs:dispatcher', ['--max' => 100])
         ->everyMinute()
         ->runInBackground()
         ->withoutOverlapping()
         ->sendOutputTo(storage_path('logs/cmd/jobs_dispatcher.log'), true)
         ->thenPing('http://beats.envoyer.io/heartbeat/SoMeRaNdOmHaSh2');
It ran great for the past month of development. However, right after setting up our server to run with Envoyer, the scheduler suddenly stopped executing after the first run.
In other words, if the schedule is set to every minute in Forge, it runs once and then never appends to the logs again.
I added Envoyer heartbeats to track it every 10 minutes, but it doesn't trigger the thenPing() method to notify Envoyer... even after that first run.
I can delete the cron entry and recreate it, forcing it to run that one time.
All of these run fine if they are given their own cronjob.
When I check for any /storage/framework/schedule-* lock files, I find nothing to delete that could be blocking them.
Nothing in the Laravel log files showing a problem.
Any ideas?
Solved this by changing the cronjob from php /home/forge/default/artisan schedule:run to php /home/forge/default/current/artisan schedule:run
This allowed the Laravel scheduler to run correctly. However, the thenPing and pingBefore methods still never actually did their job.
To fix that, I had to manually add this line after each command:
(new Client())->get('http://beats.envoyer.io/heartbeat/SoMeRaNdOmHaSh');
Why the built-in ping methods don't work is a mystery. Would love to know why.
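For context, a minimal sketch of how that manual line can sit in Kernel.php, assuming Client is GuzzleHttp\Client (which Laravel's ping features also require). Placed inline like this, it fires on every schedule:run tick, right after the command is registered, rather than when the job actually finishes:

// At the top of app/Console/Kernel.php:
use GuzzleHttp\Client;

// Inside schedule():
$schedule->command('amazon:read-sqs')
         ->everyMinute()
         ->runInBackground()
         ->withoutOverlapping()
         ->sendOutputTo(storage_path('logs/cmd/amazon_read_sqs.log'), true);

// The manual heartbeat, placed directly after the command registration.
(new Client())->get('http://beats.envoyer.io/heartbeat/SoMeRaNdOmHaSh1');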