Keep Laravel queue:work running while in jailshell

I'm having issues keeping the queue:work command running on my server. I tried nohup, but as soon as I close the terminal (which times out every 5 minutes or so no matter what I've tried) the process goes away.
I thought about running a script in cron to kick off the nohup command; however, that runs in jailshell too, so I have no way of seeing whether the process is still running from a previous cron, and I don't want a potential 20k copies of this running because it tries to kick off every minute.
I also don't have access to install software, so installing Supervisord is not an option.
So, what other solutions can I use to ensure this stays running?
EDIT: I contacted my host's support, and it looks like there are no real alternatives for me. I think I'm going to have to set this project up on Linode, or rework things to avoid queued tasks.

It seems that the problem lies in the shell configuration, because the ps command is restricted to show only child processes.
The solution is to ask your hosting provider (or change it yourself if allowed) to set this variable:
SHELL="/bin/bash"
This simple fix allowed the check below to work properly.
Now my Kernel.php looks as follows:
$command = "ps faux | grep queue:work";
exec($command, $task_list);

// Each process is duplicated in the ps output, and the command itself
// also shows up, hence the division and subtraction below.
$running_processes = (count($task_list) / 2) - 1;

if ($running_processes < 1) {
    $schedule->command('queue:work --queue=high,low --tries=3')
        ->everyMinute();
} elseif ($running_processes > 5) {
    // If too many workers are active, restart everything to avoid overload.
    $schedule->call(function () {
        Artisan::call('queue:restart');
    })->everyMinute();
}
This code makes sure that at least one worker is always running, and at the same time forces a restart if you have more than 5 workers active.

Related

Can I specify a timeout for a GCP ai-platform training job?

I recently submitted a training job with a command that looked like:
gcloud ai-platform jobs submit training foo --region us-west2 --master-image-uri us.gcr.io/bar:latest -- baz qux
(more on how this command works here: https://cloud.google.com/ml-engine/docs/training-jobs)
There was a bug in my code which caused the job to just keep running rather than terminate. Two weeks and $61 later, I discovered my error and cancelled the job. I want to make sure I don't make that kind of mistake again.
I'm considering using the timeout command within the training container to kill the process if it takes too long (typical runtime is about 2 or 3 hours), but rather than trust the container to kill itself, I would prefer to configure GCP to kill it externally.
Is there a way to achieve this?
As a workaround, you could write a small script that submits your job, sleeps for as long as you want to allow, and then runs a cancel-job command.
Since a timeout setting is not available in the AI Platform training service, I took the liberty of opening a Public Issue with a Feature Request to record the lack of this option. You can track the PI's progress here.
Besides the script mentioned above, you can also try:
a TimeOut Keras callback, or the timeout= Optuna parameter (depending on which library you actually use)
a cron-triggered Cloud Function (GCP's analogue of a Lambda)

Laravel non-overlapping scheduled job not executing

I have a Laravel Scheduled job which is defined in Kernel.php like so
$schedule->call('\App\Http\Controllers\ScheduleController@processQueuedMessages')
    ->everyFiveMinutes()
    ->name('process_queued_messages')
    ->withoutOverlapping();
During development, my job threw an exception due to a syntax error. I corrected the error and tried executing it again, but for some reason it won't run.
I tried artisan down and then artisan up. I also tried restarting the server instance. But nothing would help. The job just didn't get executed (there was no exception either).
I realize that the problem is due to ->withoutOverlapping(). Somehow the Laravel scheduler thinks the job is still running and hence is not executing it again.
I found the solution by looking at the vendor code.
Illuminate\Console\Scheduling\CallbackEvent.php
It creates a file in local storage with the name schedule-*.
public function withoutOverlapping()
{
    if ( ! isset($this->description))
    {
        throw new LogicException(
            "A scheduled event name is required to prevent overlapping. Use the 'name' method before 'withoutOverlapping'."
        );
    }

    return $this->skip(function()
    {
        return file_exists($this->mutexPath());
    });
}

protected function mutexPath()
{
    return storage_path().'/framework/schedule-'.md5($this->description);
}
Deleting the file schedule-* at storage/framework resolved the issue.
To anyone reading this: deleting the schedule files yourself is not the right way to go. You need to specify the lock time based on which withoutOverlapping prevents further tasks from running.
As stated in the Laravel Task Scheduling documentation:
If needed, you may specify how many minutes must pass before the "without overlapping" lock expires. By default, the lock will expire after 24 hours:
Your problem originates from the fact that withoutOverlapping applies a default lock of 24 hours, so you would have to wait 24 hours before a similar task is accepted again. Simply adjust the lock time to your needs:
$schedule->command('emails:send')->withoutOverlapping(10); // where 10 refers to minutes
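Applied to the schedule definition from the question, that would look roughly like this (a sketch; the 10-minute expiry is only illustrative, pick one that comfortably exceeds the job's normal runtime):
$schedule->call('\App\Http\Controllers\ScheduleController@processQueuedMessages')
    ->everyFiveMinutes()
    ->name('process_queued_messages')
    ->withoutOverlapping(10); // lock expires after 10 minutes instead of the 24-hour default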
This did the trick for me:
php artisan cache:clear
I had this problem too. There isn't a proper solution for this, but a workaround will resolve the issue.
Go to the storage/framework folder of your project and delete all the schedule-* files.
Then try running the cron again. It will work even if you use the withoutOverlapping() function.
Hope this works for you. Ask if you have any doubts.
Since Laravel 8.x you can use a specific command to clear the mutex files, for example if a task gets stuck because of a server reboot while it was executing.
The problem with cache:clear is that it clears the cache for the entire application, whereas this command just clears the mutex files:
php artisan schedule:clear-cache
This happened to us this week and we think we figured out the cause. Our crons run out of one of our production server's site folders. Our deploy process involves a second folder where we do the deploy/build, followed by a hot folder swap at the end. Since withoutOverlapping() likely has to update a line in the schedule-* files when the process is done, the folder might be swapped mid-job, so the cron is unable to mark the job as finished in the correct schedule-* file and thinks it's still running/stuck.
It was a rare occurrence but we're going to add a command to clear out these files after a deploy so it doesn't happen again.

Laravel scheduler command never runs a 2nd time

I've configured the scheduler to run once every minute and execute two commands:
$schedule->command('amazon:read-sqs')
    ->everyMinute()
    ->runInBackground()
    ->withoutOverlapping()
    ->sendOutputTo(storage_path('logs/cmd/amazon_read_sqs.log'), true)
    ->thenPing('http://beats.envoyer.io/heartbeat/SoMeRaNdOmHaSh1');

$schedule->command('jobs:dispatcher', ['--max' => 100])
    ->everyMinute()
    ->runInBackground()
    ->withoutOverlapping()
    ->sendOutputTo(storage_path('logs/cmd/jobs_dispatcher.log'), true)
    ->thenPing('http://beats.envoyer.io/heartbeat/SoMeRaNdOmHaSh2');
It's been running great for the past month of development. However, right after setting up our server to run with Envoyer, the scheduler suddenly never executes after the first time.
In other words, if the schedule is set to every minute in Forge, it runs once and then never appends the logs.
I added Envoyer heartbeats to track it every 10 min but it doesn't trigger the thenPing() method to notify Envoyer...even after that first run.
I can delete the cron entry and recreate it, forcing it to run that one time.
All of these run fine if they are given their own cronjob.
When I check for any /storage/framework/schedule-* lock files, I find nothing to delete that could be blocking them.
Nothing in the Laravel log files showing a problem.
Any ideas?
Solved this by changing the cronjob from php /home/forge/default/artisan schedule:run to php /home/forge/default/current/artisan schedule:run
This allowed the Laravel scheduler to run correctly. However, the methods thenPing and pingBefore still never actually do their job.
To fix that, I had to manually add this line after each command:
(new Client())->get('http://beats.envoyer.io/heartbeat/SoMeRaNdOmHaSh');
Why the built-in ping methods don't work is a mystery. Would love to know why.
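For reference, a minimal sketch of that manual ping, assuming Client here refers to GuzzleHttp\Client and using the scheduler's after() hook as one place to attach it (the answer only says "after each command", so this placement is an assumption):
use GuzzleHttp\Client;

$schedule->command('amazon:read-sqs')
    ->everyMinute()
    ->runInBackground()
    ->withoutOverlapping()
    ->after(function () {
        // Manual heartbeat ping to Envoyer, since thenPing() never fired in this setup
        (new Client())->get('http://beats.envoyer.io/heartbeat/SoMeRaNdOmHaSh1');
    });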

How to fire Laravel Queues with beanstalkd

I'm pretty new to the whole queued-jobs thing in Laravel 4. I have some process-heavy tasks I need the site to run in the background after they're fired by the user performing a particular action.
When I was doing the local development for my site I was using this:
Queue::push('JobClass', array('somedata' => $dataToBeSent));
And I was using the local "sync" driver to do it. (The jobs would just fire immediately, impacting the user experience, but I assumed that when going into production I could switch to beanstalkd and they would then run in the background.)
Which brings me to where I'm at now. I have beanstalkd set up with the dependencies installed with composer and the beanstalkd process listening for new jobs. I installed a beanstalk admin interface and can see my jobs going into the queue, but I have no idea how to actually get them to run!
Any help would be appreciated, thanks!
This is actually a really badly documented feature in Laravel.
What you actually need to do is have JobClass.php in a folder that is auto-loaded; I use app/commands, but it can also be in app/controllers or app/models if you like. The class needs a fire method that takes the $job and $data arguments, as in the sketch below.
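A minimal job class along those lines might look like this (the class name and payload key come from the question; the actual processing is left as a placeholder):
// app/commands/JobClass.php -- any auto-loaded folder works
class JobClass {

    public function fire($job, $data)
    {
        // $data is the array passed to Queue::push(), e.g. $data['somedata']
        // ... do the heavy processing here ...

        // Delete the job from the queue once it has been handled
        $job->delete();
    }

}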
To run these, simply execute php artisan queue:listen --timeout=60 in your terminal, and it will keep emptying the queue until it's empty or it has been running for longer than 60 seconds. (Small note: the timeout is the time limit for starting another job, so it may run for 69 seconds if one job takes 10 seconds.)
If you only want to run 1 job (perfect for testing), run php artisan queue:work
There are tools like Supervisord that make sure your job handlers keep running, but I recommend just making a cron task that starts every X minutes, depending on how quickly the data needs to be processed and how much data comes in.
Keep in mind you need to use the full path to artisan:
php /some/path/to/artisan queue:work

Crontab job as a service

I have a script that pulls some data from a web service and populates a mysql database. The idea is that this runs every minute, so I added a cron job to execute the script.
However, I would like the ability to occasionally suspend and re-start the job without modifying my crontab.
What is the best practice for achieving this? Or should I not really be using crontab to schedule something that I want to occasionally suspend?
I am considering an implementation where a global variable is set and checked inside the script. But I thought I would canvass for more apt solutions first. The simpler the better; I am new to both scripting and Ruby.
If I were you, my script would look at a static switch, like you said with your global variable, but test for a file's existence instead of a global variable. This seems clean to me.
Another solution is to have a service that doesn't use crontab but calls your script every minute. This service would sit alongside the other services in /etc/init.d (or /etc/rc.d, depending on your distribution) and have start, stop and restart commands like the other services.
These two solutions can be mixed:
1. the service only creates or deletes the switch file, and the crontab line is always active; or
2. your service directly edits the crontab like this, but I prefer not editing the crontab via a script, and the technique described in the article is not atomic (if you change your crontab between the script's read and write, your change is lost).
So in your place I would go for 1.
