I have a beanstalkd instance with two workers picking jobs from one tube.
I've noticed that occasionally one of the workers will reserve a job that has already been reserved (and is being worked on) by the other worker.
I know there aren't duplicate jobs in the queue.
Why does beanstalkd allow the same job to be reserved twice?
It sounds to me like you didn't implement the protocol properly. You need to handle DEADLINE_SOON and use TOUCH.
What does DEADLINE_SOON mean?
DEADLINE_SOON is a response to a reserve command indicating that you have a job reserved whose deadline is very soon (the current safety margin is approximately 1 second).
If you are frequently receiving DEADLINE_SOON errors on reserve, you should probably consider increasing the TTR on your jobs as it generally indicates you aren’t completing them in time. It may also be that you are failing to delete tasks when you have completed them.
See the mailing list discussion for more information.
How does TTR work?
TTR only applies to a job at the moment it becomes reserved. At that event, a timer (called “time-left” in the job stats) starts counting down from the job’s TTR.
If the timer reaches zero, the job gets put back in the ready queue.
If the job is buried, deleted, or released before the timer runs out, the timer ceases to exist.
If the job is touched before the timer reaches zero, the timer starts over, counting down from TTR.
The "touch" command
Allows a worker to request more time to work on a job.
This is useful for jobs that potentially take a long time, but you still want
the benefits of a TTR pulling a job away from an unresponsive worker. A worker
may periodically tell the server that it's still alive and processing a job
(e.g. it may do this on DEADLINE_SOON). The command postpones the auto
release of a reserved job until TTR seconds from when the command is issued.
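To make this concrete, a minimal well-behaved worker loop might look like the following, using the pda/pheanstalk PHP client (v4-style API; verify against your installed version). The tube name and the chunk helpers are assumptions for illustration. The loop touches the job between chunks of work so the TTR timer restarts, and deletes the job when done so it can never be re-reserved:

use Pheanstalk\Pheanstalk;

$pheanstalk = Pheanstalk::create('127.0.0.1');
$pheanstalk->watch('mytube'); // hypothetical tube name

while (true) {
    $job = $pheanstalk->reserve(); // blocks until a job is available

    // workChunks() and processChunk() are hypothetical helpers that split
    // the payload into pieces, each finishing well within the TTR.
    foreach (workChunks($job->getData()) as $chunk) {
        processChunk($chunk);
        $pheanstalk->touch($job); // restart the TTR countdown before it expires
    }

    $pheanstalk->delete($job); // done: remove it so no other worker can reserve it
}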
My jobs take longer to run than the TTR, so they were being returned to the ready queue and picked up by the other worker.
I now set a larger TTR on the job.
We have a job MyPrettyJob that is queued through Redis from a controller. When we run this job from the command line, it succeeds. When we run the job with little data, the queue stays online, but when we run the job with a lot of data, the queue crashes with an exit code of 12, which suggests an "Out of Memory" error.
The large job processes about 300,000 items, which mostly depend on each other. To that end, we cannot really split up this job without causing severe performance impact. In some extreme cases it could take hours instead of the few minutes it currently takes.
For the large job, the queue outputs the following:
$ php artisan queue:work --queue=myqueue
Processing: App\Jobs\MyPrettyJob
Processed: App\Jobs\MyPrettyJob
$ echo $?
12
The queue worker crashes regardless of whether something is queued behind that job. That seems to suggest that the queue crashes during cleanup of the large job, but it does not give any indication of what that cleanup is. The queue worker also crashes regardless of whether any database interactions are done, which rules out anything related to the database.
What is the queue doing in-between jobs? Can I debug in any way why it is getting out of memory after completing the job? Does the queue write something to a log maybe, or is it doing something in redis in between jobs? It seems like a really weird time for that process to crash.
Exit code 12 happens when the queue worker system determines that it has used more memory than is allowed (see https://github.com/laravel/framework/blob/5.8/src/Illuminate/Queue/Worker.php#L199-L210 for the specific section of code). If you run php artisan queue:work --memory=<digit>, where the memory value is enough to fully run your job (for example, 1024 for 1GB), you should be able to allow your job to complete and continue running after the fact.
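For example, with the queue from the question and an illustrative 1GB limit (size it to whatever your job actually needs):

$ php artisan queue:work --queue=myqueue --memory=1024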
I've built a system based on Laravel where users are able to begin a "task" which repeats a number of times, with a delay between each repetition. I've accomplished this by queueing a job with an amount argument, which then recursively queues an additional job until the count is up.
For example, I start my task with 3 repetitions:
A job is queued with an amount argument of 3. It runs, and the amount is decremented to 2. The same job is queued again with a delay of 5 seconds specified.
When the job runs again, the process repeats with an amount of 1.
The last job executes, and now that the amount has reached 0, it is not queued again and the tasks have been completed.
This is working as expected, but I need to know whether a user currently has any tasks being processed. I need to be able to do the following:
Check if a particular queue has any jobs started by a particular user.
Check the value that was set for amount on that job.
I'm using the database driver for a queue named tasks. Is there any existing method to accomplish my goals here?
Thanks!
You shouldn't be using delay to queue multiple repetitions of the same job over and over. That functionality is meant for something like retrying a failed network request. Keeping jobs in the queue for multiple hours at a time can lead to memory issues with your queues if the count gets too high.
I would suggest you use the php artisan schedule:run functionality to run a command every 1-5 minutes that checks the database to see whether it is time to run a user's job. If so, kick off that job and add a status flag to the user table (or whatever table you want to use to keep track of these things). When the job finishes, mark that same row as completed and wait for the next cron run to do it again.
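A minimal sketch of that approach, assuming a hypothetical tasks table with status and run_at columns and your existing job class (called ProcessTask here purely for illustration):

// app/Console/Kernel.php (assumes the DB facade is imported at the top of the file)
protected function schedule(Schedule $schedule)
{
    $schedule->call(function () {
        // Find all tasks that are due to run right now.
        $due = DB::table('tasks')
            ->where('status', 'pending')
            ->where('run_at', '<=', now())
            ->get();

        foreach ($due as $task) {
            // Flag the row first so the next scheduler tick skips it.
            DB::table('tasks')->where('id', $task->id)->update(['status' => 'running']);
            dispatch(new ProcessTask($task->id));
        }
    })->everyMinute();
}

A side benefit: because the remaining repetitions live in the tasks table rather than inside queued payloads, checking a user's pending work and its amount becomes an ordinary database query.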
In my Laravel 5.1 project I want to start my second job when the first has finished.
Here is my logic.
\Queue::push(new MyJob());
and when this job finishes, I want to start this job:
\Queue::push(new ClearJob());
How can I realize this?
If you want this, you should just define one queue.
A queue is just a list/line of things waiting to be handled in order,
starting from the beginning. When I say things, I mean jobs. - https://toniperic.com/2015/12/01/laravel-queues-demystified
To get the opposite of what you want (asynchronously executed jobs), you would define a new queue for every job.
Multiple Queues and Workers
You can have different queues/lists for
storing the jobs. You can name them however you want, such as “images”
for pushing image processing tasks, or “emails” for queue that holds
jobs specific to sending emails. You can also have multiple workers,
each working on a different queue if you want. You can even have
multiple workers per queue, thus having more than one job being worked
on simultaneously. Bear in mind having multiple workers comes with a
CPU and memory cost. Look it up in the official docs, it’s pretty
straightforward.
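In other words, push both jobs onto the same queue and run a single worker for it; with one worker on one queue, jobs are processed strictly in order, so ClearJob only starts after MyJob has finished. A sketch (queue and worker invocation are just an example):

// Both jobs on the same (default) queue:
\Queue::push(new MyJob());
\Queue::push(new ClearJob()); // picked up only after MyJob completes

// Run a single worker for that queue, e.g.:
// $ php artisan queue:listen

Note this ordering only holds with a single worker; as soon as you add a second worker to the queue, the two jobs can run concurrently.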
I'm building a Heroku app that relies on scheduled jobs. We were previously using Heroku Scheduler but clock processes seem more flexible and robust. So now we're using a clock process to enqueue background jobs at specific times/intervals.
Heroku's docs mention that clock dynos, as with all dynos, are restarted at least once per day--and this incurs the risk of a clock process skipping a scheduled job: "Since dynos are restarted at least once a day some logic will need to exist on startup of the clock process to ensure that a job interval wasn’t skipped during the dyno restart." (See https://devcenter.heroku.com/articles/scheduled-jobs-custom-clock-processes)
What are some recommended ways to ensure that scheduled jobs aren't skipped, and to re-enqueue any jobs that were missed?
One possible way is to create a database record whenever a job is run/enqueued, and to check for the presence of expected records at regular intervals within the clock job. The biggest downside to this is that if there's a systemic problem with the clock dyno that causes it to be down for a significant period of time, then I can't do the polling every X hours to ensure that scheduled jobs were successfully run, since that polling happens within the clock dyno.
How have you dealt with the issue of clock dyno resiliency?
Thanks!
You will need to store data about jobs somewhere. On Heroku, you don't have any information or guarantee that your code is running only once and at all times (because of dyno cycling).
You could use a project like this (though it is not widely used): https://github.com/amitree/delayed_job_recurring
Or, depending on your needs, you could create a scheduler process that schedules jobs for the next 24 hours and runs every 4 hours, in order to be sure your jobs get scheduled, and hope that the Heroku scheduler works at least once every 24 hours.
And have at least two workers processing the jobs.
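As a concrete illustration of the "store data about jobs somewhere" idea, the clock process can persist a last-run timestamp per job and backfill anything it missed while restarting. A sketch in PHP/PDO (chosen only for consistency with the other examples here; the scheduled_jobs table and enqueueJob() helper are hypothetical):

// Run once on clock-process startup, before entering the normal schedule loop.
$pdo = new PDO('pgsql:host=localhost;dbname=app', 'user', 'secret');

foreach ($pdo->query('SELECT id, interval_seconds, last_run_at FROM scheduled_jobs') as $row) {
    $nextDue = strtotime($row['last_run_at']) + (int) $row['interval_seconds'];
    if ($nextDue <= time()) {
        enqueueJob($row['id']); // hypothetical: push the job to your queue
        $pdo->prepare('UPDATE scheduled_jobs SET last_run_at = NOW() WHERE id = ?')
            ->execute([$row['id']]);
    }
}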
Though it requires human involvement, we have our scheduled jobs check in with Honeybadger via an after_perform hook in Rails:
# frozen_string_literal: true

class ScheduledJob < ApplicationJob
  # Report each successful run to Honeybadger, so a missed run raises an alert.
  after_perform do |job|
    check_in(job)
  end

  private

  def check_in(job)
    # Look up this job class's check-in token from config/check_ins.yml.
    token = Rails.application.config_for(:check_ins)[job.class.name.underscore]
    Honeybadger.check_in(token) if token.present?
  end
end
This way, when we happen to have poorly timed restarts from deploys, we at least know that should-be-scheduled work didn't actually happen.
Would be interested to know if someone has a more fully-baked, simple solution!
I am designing a cloud app and need a worker process which scours my database looking for work, and then performs it.
Most of the info I seem to find on the subject of background tasks in the cloud involves some kind of scheduler and/or queuing system.
What I have doesn't quite fit into the "run this task every 5 minutes" or "add this to the queue to be executed later" models. I think the main difference to my problem is that the workers themselves find work to do, rather than being assigned it by a periodic scheduler or an external process that generates work.
What I have is basically a giant table where each entry has three fields:
job: a small task to be performed; let's say it gets the last message from a Twitter account and stores it in the database
the interval at which to perform that job: say, every 5 minutes (N.B. the interval is arbitrary and different for each entry in the table)
the last date when the job was performed
The way I would implement this is to have a worker running an infinite loop. On each iteration it scours the database: a) looking for items where date + interval < currentTime; b) when it finds one, setting date = currentTime; and c) then executing the job. If there is no work at the moment, it sleeps for a few seconds and tries again.
I will have many parallel workers scouring the database simultaneously, which is why I do b) first and then c) in the paragraph above. Since there are parallel workers, steps a) and b) are a single atomic operation on the database to prevent work from being duplicated. If a worker crashes after a) and b), but before it manages to finish the work, it's no big deal: the workers can just do it at the next interval. The work is not time-invariant, so a backlog of failed jobs has no benefit; the tasks have to be performed at their exact intervals, and it's better to skip one interval than to have uneven intervals between executions.
My question is whether that is a reasonable implementation strategy? If so, how do I bring this process to life on the cloud (I am using Heroku, but may switch to EC2 in the future)? I still haven't written any code so I would welcome other suggestions (maybe I misunderstood the use cases/applications for queue systems).
This sounds so close to a scheduled job that you might as well tread the well-beaten path and do it the more conventional way. There's no reason why you can't schedule a job to run once every few seconds.
However, this idea of looking for work sounds dodgy. What happens if two workers find the same task to run at the same time, for instance? Also, are there no triggers in the application that can indicate work needs doing? It seems strange to have code 'looking for work'.
You can go a very long way with simple periodic background tasks, so I would exhaust all possibilities in that area before rolling your own.
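For what it's worth, the "two workers find the same task" race is exactly what the asker's atomic a)+b) step guards against, and it maps onto a single conditional UPDATE: a worker only runs the job if its UPDATE actually claimed the row. A PHP/PDO sketch, with a hypothetical jobs table and runJob() helper:

// $job is a candidate row found in step a); only one worker's claim can succeed.
function claimAndRun(PDO $pdo, array $job): void
{
    $claim = $pdo->prepare(
        'UPDATE jobs SET last_run_at = NOW()
         WHERE id = :id AND last_run_at = :last_seen'
    );
    $claim->execute(['id' => $job['id'], 'last_seen' => $job['last_run_at']]);

    if ($claim->rowCount() === 1) {
        runJob($job); // hypothetical: we won the claim, so do the work
    }
    // rowCount() === 0 means another worker claimed the row first; skip it.
}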