Laravel 5 Queue assign to multiple workers in beanstalkd - laravel-5

I am using Laravel 5 with the beanstalkd queue driver already installed in the application. Could someone please suggest what I have to do to achieve parallel processing? I want to run jobs in parallel on the same or different queues; currently only one job is processed at a time, which is very time-consuming.

You can have multiple workers, each watching a number of tubes. Which jobs run first depends on any priority set when they were put into the system, or otherwise first-come-first-served.
It's a very common pattern to start workers (one, or many) and keep them running with a tool such as Supervisord.

Check the Laravel docs:
https://laravel.com/docs/5.5/queues#supervisor-configuration
Supervisor is what you need. Supervisor will control your workers: if they die, Supervisor will restart them. For parallel processing, check Supervisor's numprocs directive.
From Laravel docs:
the numprocs directive will instruct Supervisor to run x queue:work processes and monitor all of them, automatically restarting them if they fail.
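Following the pattern in those docs, a Supervisor program that runs eight workers in parallel would look roughly like this (the file path, program name, user, log path, and beanstalkd connection name are assumptions — adapt them to your setup):

```ini
; /etc/supervisor/conf.d/laravel-worker.conf (hypothetical path)
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work beanstalkd --sleep=3 --tries=3
autostart=true
autorestart=true
user=forge
numprocs=8
redirect_stderr=true
stdout_logfile=/home/forge/app.com/worker.log
```

After saving the file, run `supervisorctl reread`, `supervisorctl update`, and `supervisorctl start laravel-worker:*` to bring the eight worker processes up.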

Related

Laravel Database Queue Frequency

Laravel's database queue runs a database query every second to see if there are jobs to be processed.
I know it's not a complex query, but we want to reduce the number of connections hitting our database and we don't have that many jobs right now to consume. We'd like to modify it to run every 15-30 seconds or even longer.
I don't see config option to do something like this in the documentation and haven't found questions that cover this type of use case.
I do see that rate limiting can be enabled when using Redis queues, but our project needs to use the database queue for the time being.
We're on Laravel 5.5 and PHP 7.0 right now and it will be a while before we upgrade to newer versions. I wanted to go with Laravel Horizon, but that requires an upgrade to PHP 7.1.
Any help is appreciated.
From the Laravel documentation:
Worker Sleep Duration
When jobs are available on the queue, the worker will keep processing jobs with no delay
in between them. However, the sleep option determines how long (in seconds) the worker
will "sleep" if there are no new jobs available. While sleeping, the worker will not
process any new jobs - the jobs will be processed after the worker wakes up again.
php artisan queue:work --sleep=30

Laravel Forge: How to stop queue workers?

I have configured a queue worker as a daemon on Forge, then used the recommended deployment script command (php artisan queue:restart).
How do I manually stop and restart the queue worker? If I stop it, Supervisor will just restart it. Do I need to kill the active worker in Forge first?
This may be required on an ad-hoc basis. For example, if I want to clear a log file that the queue has open.
I've been pretty vocal in deployment discussions, and I always tell people to stop their worker processes with the supervisorctl command.
supervisorctl stop <name of task>
Using the queue:restart command doesn't actually restart anything. It sets an entry in the cache which the worker processes check, and shutdown. As you noticed, supervisor will then restart the process.
This means that queue:restart has one huge problem, ignoring the naming and the fact that it doesn't restart anything: it causes the worker processes on every server that uses the same cache to restart. I think this is wrong; a deployment should only affect the server currently being deployed to.
If you're using a per-server cache, like the file cache driver, then this has another problem; what happens if your deployment entirely removes the website folder? The cache would change, the queues would start again, and the worker process may have a mix of old and new code. Fun things to debug...
Supervisor will signal the process when it is shutting down, and wait for it to shut down cleanly, and if it doesn't, forcefully kill it. These timeouts can be configured in the supervisor configuration file. This means that using supervisorctl to stop the queue process will not terminate any jobs "half-way through", they will all complete (assuming they run for a short enough time, or you increase the timeouts).
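The timeouts mentioned above are the stopsignal and stopwaitsecs directives; a sketch of the relevant lines (the program name is an assumption, and stopwaitsecs should be set longer than your longest-running job):

```ini
[program:laravel-worker]
; ... existing directives ...
stopsignal=TERM     ; queue:work finishes the current job on SIGTERM
stopwaitsecs=3600   ; give jobs up to an hour before a forceful kill
```

With that in place, `supervisorctl stop laravel-worker:*` before maintenance and `supervisorctl start laravel-worker:*` afterwards will drain the workers cleanly.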

What is the difference between queue:work and queue:listen

I can't understand the difference between Laravel's queue:work and queue:listen commands.
I can see that:
Queue: Listen to a given queue
Work: Process the next job on a queue
But I still don't get it, because I've tried both and both keep processing jobs as new ones arrive (the "work" option doesn't just run once).
I'm not talking about the daemon option. Just these both.
Until Laravel 5.2 you had :listen and :work.
Work would process the first job in the queue.
Listen would process all jobs as they came through.
In Laravel 5.3+ this is no longer the case. Listen still exists, but it is deprecated and slated for removal in 5.5. You should prefer :work now.
Work now processes jobs one after the other, but has a plethora of options you can configure.
Edit
The above was true at the time of the posting, but since then things have been changed a bit.
queue:work should be preferred when you want your queues to run as a daemon. This is a long-lived process that is beneficial where performance is an issue: it uses a cached version of the application and does not re-bootstrap the application every time a job is processed.
queue:listen should be used when you don't care about performance or you don't want to have to restart the queue after making changes to the code.
They'll both pop jobs off the queue 1-by-1 as received.
They both share almost the exact same options that can be passed to them.
In Laravel 5.3+ queue:work runs a daemon listener. It could in 5.2 as well if you specified the --daemon flag. A daemon work boots the framework one time and then processes jobs repeatedly. The queue:listen command runs a queue:work --once sub-process in a loop which boots the framework each iteration.
queue:work should pretty much always be used in production as it's much more efficient and uses less RAM. However, you need to restart it after each code change. queue:listen is useful for development and local environments because you don't have to restart it after code changes (the framework boots fresh for each job).
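A toy shell sketch of that difference, with placeholder functions standing in for the real artisan internals (booting the framework is the expensive step):

```shell
#!/bin/sh
# Placeholders only -- not real artisan code.
boot_framework() { echo "framework booted"; }
process_job()    { echo "job processed"; }

# queue:work (daemon mode): boot once, then handle job after job.
boot_framework
process_job
process_job

# queue:listen: runs a `queue:work --once` sub-process per job,
# so the framework boots again for every single job.
listen_once() { boot_framework; process_job; }
listen_once
listen_once
```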
from here
The queue:work Artisan command includes a --daemon option for forcing
the queue worker to continue processing jobs without ever re-booting
the framework. This results in a significant reduction of CPU usage
when compared to the queue:listen command:
As you can see, the queue:work job supports most of the same options
available to queue:listen. You may use the php artisan help queue:work
job to view all of the available options.
https://laravel.com/docs/5.1/queues#running-the-queue-listener
There are two different issues listed.
There is artisan queue:work and artisan queue:listen
queue:work will simply pop the next job off the queue and process only that one job. This is a 'one-off' command that returns to the command prompt once that single job has been processed.
queue:listen will listen to the queue, and continue to process any queue commands it receives. This will continue running indefinitely until you stop it.
In Laravel >=4.2 a --daemon flag was added. It works by keeping the worker running and processing jobs directly, rather than rebooting the entire framework after every job is processed. This optional flag significantly reduces the memory and CPU requirements of your queue workers.
The important point with the --daemon command is that when you upgrade your application, you need to specifically restart your queue with queue:restart, otherwise you could potentially get all sorts of strange errors as your queue would still have the old code in memory.
So to answer your question "Which command should I use for running my daemons?" - the answer is almost always queue:work --daemon

Ensure Background Worker is Alive and Working

Is there any way in ruby to determine if a background worker is running?
For instance, I have a server that works a queue in Delayed Job, and I would like to ensure four workers are on it and spin up a new worker process if one has stalled or quit.
From the command line, crontab -l lists your scheduled cron entries (it won't tell you whether a worker process is actually running).
From the Rails console, Delayed::Job.all will give you a list of all jobs currently in the queue.
Delayed Job also has a list of lifecycle methods which you can access:
http://www.rubydoc.info/github/collectiveidea/delayed_job/Delayed/Lifecycle
The usual way to do that is to use an external watchdog process; you can use Monit or God.
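For example, a Monit stanza for one delayed_job worker might look like this (the application path, user, and pidfile location are all assumptions; repeat the stanza with -i 1 through -i 3 to watch four workers):

```
check process delayed_job_0 with pidfile /var/www/app/tmp/pids/delayed_job.0.pid
  start program = "/bin/su - deploy -c 'cd /var/www/app && RAILS_ENV=production bin/delayed_job start -i 0'"
  stop program  = "/bin/su - deploy -c 'cd /var/www/app && RAILS_ENV=production bin/delayed_job stop -i 0'"
```

Monit polls the pidfile on each cycle and runs the start program again if the process has died.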

Can multiple sidekiq instances process the same queue

I'm not familiar with the internals of Sidekiq and am wondering if it's okay to launch several Sidekiq instances with the same configuration (processing the same queues).
Is it possible that 2 or more Sidekiq instances will process the same message from a queue?
UPDATE:
I need to know if there is a possible conflict when running Sidekiq on more than one machine.
Yes, Sidekiq can absolutely run many processes against the same queue. Redis will hand each message to exactly one of the processes.
Nope, I've run Sidekiq on different machines with no issues.
Each Sidekiq process reads from the same Redis server, and Redis is very robust in multi-threaded and distributed scenarios.
In addition, if you look at Sidekiq's web interface, it will show all the workers across all machines, because all the workers register with the same Redis server.
So no, no issues.
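Each instance just needs to point at the same Redis server and watch the same queues; a minimal configuration sketch (queue names and concurrency are assumptions — deploy the same file to every machine):

```yaml
# config/sidekiq.yml
:concurrency: 10
:queues:
  - default
  - mailers
```

Because Sidekiq fetches jobs with an atomic Redis pop, two processes can never receive the same job from a queue.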
