I am using a Beanstalkd queue to deploy jobs in my Laravel 5.3 application. I use Laravel Forge to administer the server.
One of two scenarios occurs:
1) I set a max number of attempts, which causes every job pushed to the queue to be placed on the failed jobs table - even if its task is completed successfully, resulting in this exception on the jobs table:
Illuminate\Queue\MaxAttemptsExceededException: A queued job has been attempted too many times. The job may have previously timed out
And this in my error log:
Pheanstalk\Exception\ServerException: Server reported NOT_FOUND
2) If I remove the max attempts, the jobs run successfully but in an infinite loop.
I am assuming that I am not removing these jobs from the queue properly, and so in scenario #1 the job is failing because it just wants to keep running.
My controller pushes my job to the queue like this:
Queue::push('App\Jobs\UpdateOutlookContact@handle', ['userId' => $cs->user_id, 'memberId' => $member->id, 'connection' => $connection]);
Here is the handle function of my job:
public function handle($job, $data)
{
    Log::info('Outlook syncMember Job dispatched');

    $outlook = new Outlook();
    $outlook->syncMember($data['userId'], $data['memberId'], $data['connection']);

    // remove the job from the queue once the work is done
    $job->delete();
}
Here is a picture of my queue configuration from the Laravel Forge admin panel. I am currently using the default queue. If "Tries" is changed to ANY, the jobs succeed but run in an infinite loop.
How do I properly remove these jobs from the queue?
I have a Laravel queue set up to use the database driver. When I run dd(env('QUEUE_DRIVER')) I get database back. When I create a job, it is run right away. I would like the job to be queued until I run php artisan queue:work. What do I need to do to have the job not run right away? Thanks!
Edit 1:
Dispatch Code:
for ($i = 0; $i < 100; $i++) {
    $job = new UpdateJob("");
    dispatch($job);
}
Job Code:
public function handle()
{
    sleep(30);
    SlackApi::SendMessage("Job!");
}
When I run this I get a Slack message every 30 seconds, but none of these jobs are being stored in the DB.
Edit 2:
Even when I add ->delay(Carbon::now()->addMinutes(10)) to the job, it still runs right away.
The issue seems to have come from upgrading from v5.1 to v5.4. I added Illuminate\Bus\BusServiceProvider::class to my providers array in config/app.php and that fixed everything.
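For anyone hitting the same thing, the fix looks like this (a minimal sketch of the providers array; the surrounding entries are whatever your app already registers):

// config/app.php
'providers' => [
    // ...
    Illuminate\Bus\BusServiceProvider::class,
],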
Is any supervisor running? If Supervisor is configured, the queue worker will take care of the job as soon as it's added to the queue. If you want a delayed dispatch of the job, you must specify the delay. Take a look at https://laravel.com/docs/5.4/queues#delayed-dispatching for more details.
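Delayed dispatching looks roughly like this (a sketch; it assumes UpdateJob uses the standard Queueable trait, which provides delay()):

use Carbon\Carbon;

$job = (new UpdateJob(""))->delay(Carbon::now()->addMinutes(10));
dispatch($job);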
I have set up the queue system in L5 using the database connection, and after running the migration I have two tables in my DB: failed_jobs and jobs. Everything is working fine so far, but when the pushed operation fails it keeps trying to process the operation, and does not delete the job on failure or insert it into failed_jobs.
Queue::push(function($job) use ($id)
{
    Account::delete($id);
    $job->delete();
});
In the above example, how can I set the number of attempts to try if the job does not succeed, and then insert it into failed_jobs?
I know this can be done using
php artisan queue:listen --tries=3
But I want the same using Closures, as I have different cases.
You can check the number of attempts:
if ($job->attempts() > 3)
{
//
}
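Applied to the Closure from the question, that could look like this (a sketch; the threshold of 3 and simply deleting the job on giving up are illustrative choices):

Queue::push(function($job) use ($id)
{
    // give up after 3 attempts instead of retrying forever
    if ($job->attempts() > 3)
    {
        // log or flag the failure here if needed, then discard the job
        $job->delete();
        return;
    }

    Account::delete($id);
    $job->delete();
});

Note that deleting the job yourself bypasses the worker's own failure handling, so if you also need a row in failed_jobs you would have to write it at that point.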
This is clearly mentioned in the documentation here.
You can define the maximum number of attempts on the job class itself.
If the maximum number of attempts is specified on the job, it will take precedence over the value provided on the command line.
Add this to your job class:
/**
 * The number of times the job may be attempted.
 *
 * @var int
 */
public $tries = 5;
I am using Laravel 4.2.
I have a project requirement to send an analysis report email to all the users every Monday at 6 am.
Obviously it's a scheduled task, hence I've decided to use a cron job.
For this I've installed the liebig/cron package. The package is installed successfully. To test email, I've added the following code in app/start/global.php:
Event::listen('cron.collectJobs', function() {
    Cron::setEnablePreventOverlapping();
    // to test the email, I am setting the day of week to today i.e. Tuesday
    Cron::add('send analytical data', '* * * * 2', function() {
        $maildata = array('email' => 'somedomain@some.com');
        Mail::send('emails.analytics', $maildata, function($message){
            $message->to('some_email@gmail.com', 'name of user')->subject('somedomain.com analytic report');
        });
        return null;
    }, true);
    Cron::run();
});
Also in app\config\packages\liebig\cron\config.php the key preventOverlapping is set to true.
Now, if I run it like php artisan cron:run, it sends the same email twice at the same time.
I've deployed the same code on my DigitalOcean development server (Ubuntu) and set its crontab to execute this command every minute, but it is still sending the same email twice.
Also, it is not generating a lock file in the app/storage directory. According to some search results, I've come to know that it creates a lock file to prevent overlapping; the directory has full permissions granted.
Does anybody know how to solve it?
Remove Cron::run().
Here's what's happening:
1) Your Cron route or cron:run command is invoked.
2) Cron fires off the cron.collectJobs event to get a list of jobs.
3) You call Cron::run() and run all the events.
4) Cron calls Cron::run() and runs all the events again.
In the cron.collectJobs event you should only be making a list of jobs using Cron::add().
The reason you're not seeing a lock file is either that preventOverlapping is set to false (it's true by default), or that the jobs are running so fast you don't see it being created and deleted. The lock file only exists for the time the jobs run, which may only be milliseconds.
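In other words, the listener should only collect jobs (a sketch of the corrected code from the question):

Event::listen('cron.collectJobs', function() {
    Cron::setEnablePreventOverlapping();
    Cron::add('send analytical data', '* * * * 2', function() {
        // ... build and send the email exactly as before ...
        return null;
    }, true);
    // no Cron::run() here; php artisan cron:run triggers execution itself
});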
I need to have a job with multiple tasks, run on different machines one after another (not simultaneously). While the current job is running, another job of the same kind can arrive in the queue, but it should not be started until the previous one has finished. So I came up with this 'solution', which might not be the best, but it gets the job done :). I just have one problem.
I figured out I would need a JobQueue (either MongoDB or Redis) with the following structure:
{
    hostname: 'host where to execute the task',
    running: FALSE,
    task: 'current task number',
    tasks: [
        {task_id: 1, commands: 'run these commands', hostname: 'aaa'},
        {task_id: 2, commands: 'another command', hostname: 'bbb'}
    ]
}
Hosts:
search for jobs with their own hostname and running==FALSE
execute the task that is set in that job
upon finishing, the host sets running=FALSE, checks if there are any other tasks to perform, and increases the task number + sets the hostname to the machine from the next task
Because jobs can accumulate, imagine situation when jobs are queued for one host like this: A,B,A
Since I have to run all the jobs for the specified machine, how do I make sure the third A does not start while the first A is still running?
{
    _id : ObjectId("xxxx"), // unique, generated by MongoDB, indexed, sortable
    hostname: 'host where to execute the task',
    running: FALSE,
    task: 'current task number',
    tasks: [
        {task_id: 1, commands: 'run these commands', hostname: 'aaa'},
        {task_id: 2, commands: 'another command', hostname: 'bbb'}
    ]
}
The question is how would the next available "worker" know whether it's safe for it to start the next job on a particular host.
You probably need to have some sort of sortable (indexed) field to indicate the arrival order of the jobs. If you are using MongoDB, then you can let it generate _id, which will already be unique, indexed and in time-order, since its first four bytes are a timestamp.
You can now query to see if there is a job to run for a particular host like so:
// pseudo code - shell syntax, not actual code
var jobToRun = db.queue.findOne({hostname: <myHostName>}, {}, {sort: {_id: 1}});
if (jobToRun.running == FALSE) {
    myJob = db.queue.findAndModify({
        query: {_id: jobToRun._id, running: FALSE},
        update: {$set: {running: TRUE}}
    });
    if (myJob == null) print("Someone else already grabbed it");
    else {
        /* now we know that we updated this and we can run it */
    }
} else { /* sleep and try again */ }
What this does is check for the oldest/earliest job for a specific host. It then looks to see if that job is running. If yes, then do nothing (sleep and try again); otherwise try to "lock" it by doing findAndModify on _id and running:FALSE, setting running to TRUE. If that document is returned, it means this process succeeded with the update and can now start the work. Since two threads can both be trying to do this at the same time, if you get back null it means that another thread already changed this document to running, so wait and start again.
I would advise using a timestamp somewhere to indicate when a job started "running" so that if a worker dies without completing a task it can be "found" - otherwise it will be "blocking" all the jobs behind it for the same host.
What I described works for a queue where you would remove the job when it was finished rather than setting running back to FALSE - if you set running to FALSE so that other "tasks" can be done, then you will probably also be updating the tasks array to indicate what's been done.
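For what it's worth, the same claim-then-run step could look like this in PHP (a sketch assuming the mongodb/mongodb library; the database name mydb and collection name queue are made up):

// composer require mongodb/mongodb
require 'vendor/autoload.php';

$collection = (new MongoDB\Client)->mydb->queue;

// Look at the oldest job queued for this host first (_id is time-ordered).
$oldest = $collection->findOne(
    ['hostname' => gethostname()],
    ['sort' => ['_id' => 1]]
);

if ($oldest !== null && $oldest['running'] === false) {
    // Try to claim it atomically; stamp startedAt so a worker that dies
    // mid-task can be detected and recovered later.
    $myJob = $collection->findOneAndUpdate(
        ['_id' => $oldest['_id'], 'running' => false],
        ['$set' => ['running' => true, 'startedAt' => new MongoDB\BSON\UTCDateTime()]]
    );

    if ($myJob !== null) {
        // We own the job now; run its current task.
    } else {
        // Another worker grabbed it between the two calls.
    }
} else {
    // Oldest job is still running (or the queue is empty); sleep and retry.
}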
I've implemented long-running tasks in my Rails app using delayed_job along with delayed_job_web. My delayed_job configuration instructs jobs to be attempted once, and for failures to be retained:
config/initializers/delayed_job.rb:
Delayed::Worker.max_attempts = 1
Delayed::Worker.destroy_failed_jobs = false
I tried 2 test jobs that automatically raised errors, in order to see how failures behave. What I get is the following:
My expectation was that Failed jobs would have a count of 2, but that Enqueued / Working / Pending would all be 0. I can't find any documentation on what determines whether a job is Enqueued / Working / Pending, or even what the difference between Working and Pending is (the web interface describes both lists as "contains jobs currently being processed".)
Can anyone provide some clarity?
If you check https://github.com/ejschmitt/delayed_job_web/blob/master/lib/delayed_job_web/application/app.rb, you see the following (starting at line 114):
when :working
  'locked_at is not null'
when :failed
  'last_error is not null'
when :pending
  'attempts = 0'
end
Enqueued would be the total number of delayed jobs, i.e. Delayed::Job.count.
Working jobs are those that have been locked by the delayed_job process and are currently being worked.
Failed are those that have a last_error.
Pending are those jobs that have never been attempted.