How to run different Beanstalkd Laravel queues on the same server?

I have two different Laravel queues on the same server. In my supervisord.d folder I have two ini files, one for each queue. The job names differ between the queues, but every time I run a job and expect the result from one queue, the other queue also interferes. Here is a sample of the ini files:
[program:queue_runner]
command = php /path_to_prod/artisan queue:work --daemon --queue=default,smsInt,smsIntLow --tries=1 --timeout=30
stdout_logfile = /path_to_prod/storage/logs/supervisor.log
redirect_stderr = true
numprocs = 5
process_name = %(program_name)s%(process_num)s
[program:queue_runner_test]
command = php /path_to_test/artisan queue:work --daemon --queue=default,smsIntTest,smsIntTestLow --tries=1 --timeout=30
stdout_logfile = /path_to_test/storage/logs/supervisor.log
redirect_stderr = true
numprocs = 50
process_name = %(program_name)s%(process_num)s
Could you please help me solve this?

I found the solution to my problem. The jobs were being dispatched from the test site onto the smsIntTest queue and from the other site onto the smsInt queue from the beginning, but they were being picked up by the wrong workers every time.
As the following post suggested: Why is Laravel or Beanstalkd skipping jobs?
I assigned 'queue' => 'smsInt' in the 'connections' array of the app/config/queue.php file for one site, and 'queue' => 'smsIntTest' for the other one. This solved the problem.
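For reference, the relevant 'connections' entry in each site's app/config/queue.php ends up looking roughly like this (a sketch; the host and ttr values are placeholders, and the test site uses 'smsIntTest' instead of 'smsInt'), so each site's jobs are pushed to and picked up from its own tube rather than a shared one:
'connections' => [
    'beanstalkd' => [
        'driver' => 'beanstalkd',
        'host'   => 'localhost', // placeholder host
        'queue'  => 'smsInt',    // 'smsIntTest' on the test site
        'ttr'    => 60,
    ],
],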

Related

How to see celery tasks in redis queue when there is no worker?

I have a container creating celery tasks, and a container running a worker.
I have removed the worker container, so I expected that tasks would accumulate in the redis list of tasks.
But I can't see any tasks in redis.
This is with Django. I need to isolate the worker and queue, hence the following settings.
A typical queue name is 'test-dear', that is, SHORT_HOSTNAME='test-dear':
CELERY_DATABASE_NUMBER = 0
CELERY_BROKER_URL = f"redis://{REDIS_HOST}:6379/{CELERY_DATABASE_NUMBER}"
CELERY_RESULT_BACKEND = f"redis://{REDIS_HOST}:6379/{CELERY_DATABASE_NUMBER}"
CELERY_BROKER_TRANSPORT_OPTIONS = {'global_keyprefix': SHORT_HOSTNAME }
CELERY_TASK_DEFAULT_QUEUE = SHORT_HOSTNAME
CELERY_TASK_ACKS_LATE = True
After starting everything and stopping the worker, I add tasks.
For example, on the producer container, after python manage.py shell:
>>> from cached_dear import tasks
>>> t1 = tasks.purge_deleted_masterdata_fast.delay()
<AsyncResult: 9c9a564a-d270-444c-bc71-ff710a42049e>
t1.get() does not return.
Then in redis:
127.0.0.1:6379> llen test-dear
(integer) 0
I was not expecting 0 entries.
What am I doing wrong or not understanding?
I did this from the redis container
redis-cli monitor | grep test-dear
and sent a task.
The list is test-deartest-dear, and
llen test-deartest-dear
shows the number of tasks which have not yet been sent to a worker.
The Redis list key is f"{global_keyprefix}{queue_name}", i.e. the global_keyprefix is prepended to the queue name.

Why is supervisor processing one job more than once?

I've been working on a Laravel (5.3) project in which I have to crawl data from multiple websites.
So, for that I set up queue jobs and configured a supervisor for them.
Everything works fine only when I configure supervisor to run a single process.
In the file
/etc/supervisor/conf.d/laravel-worker.conf
numprocs=1
When I set numprocs to more than 1, it behaves strangely: supervisor executes the jobs 2 or 3 times.
These are my versions:
Ubuntu 14.04.2 LTS
Laravel 5.3
supervisord 3.0b2
These are my configurations.
Configuration for the following file:
/etc/supervisor/supervisor.conf
; supervisor config file
[unix_http_server]
file=/var/run/supervisor.sock ; (the path to the socket file)
chmod=0700 ; socket file mode (default 0700)
[supervisord]
logfile=/var/log/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log)
pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
childlogdir=/var/log/supervisor ; ('AUTO' child log dir, default $TEMP)
; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///var/run/supervisor.sock ; use a unix:// URL for a unix socket
; The [include] section can just contain the "files" setting. This
; setting can list multiple files (separated by whitespace or
; newlines). It can also contain wildcards. The filenames are
; interpreted as relative to this file. Included files *cannot*
; include files themselves.
[include]
files = /etc/supervisor/conf.d/*.conf
Configuration for the following file:
/etc/supervisor/conf.d/laravel-worker.conf
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/myapp/artisan queue:work database --sleep=3 --tries=3
autostart=true
autorestart=true
user=hmabuzar
numprocs=25
redirect_stderr=true
stderr_events_enabled=true
stderr_logfile=/var/www/myapp/storage/logs/worker.error.log
stdout_logfile=/var/www/myapp/storage/logs/worker.log

Laravel Queue Restarts After 60 seconds

I have a job like this:
//Run very intensive script that generates files
//Notify the user that the job is done
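Fleshed out, the job class is roughly the following (a sketch, assuming Laravel 5.3+; the artisan command and the FilesGenerated notification are hypothetical placeholders for the real implementation):
<?php

namespace App\Jobs;

use App\User;
use App\Notifications\FilesGenerated; // hypothetical notification class
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Artisan;

class GenerateFiles implements ShouldQueue
{
    use InteractsWithQueue, Queueable, SerializesModels;

    public $user;

    public function __construct(User $user)
    {
        $this->user = $user;
    }

    public function handle()
    {
        // Run very intensive script that generates files (takes 4-5 minutes)
        Artisan::call('files:generate'); // hypothetical command

        // Notify the user that the job is done
        $this->user->notify(new FilesGenerated());
    }
}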
I know that the script takes 4-5 minutes to run, since that is the time needed to generate all the files. However, after exactly 60 seconds the job is removed (i.e. I no longer see it in my jobs database table) and the user gets notified. Then, every 60 seconds until the script is done, the user is notified that the job is done.
The job does not fail. It is only present in the jobs table for the first 60 seconds, and the file-generating script runs only once.
I use supervisor:
[program:queue]
process_name=%(program_name)s_%(process_num)02d
command=php artisan queue:work --timeout=600 --queue=high,low
user=forge
numprocs=8
directory=/home/forge/default
stdout_logfile=/home/forge/default/storage/logs/supervisor.log
redirect_stderr=true
Here's my database config:
'database' => [
'driver' => 'database',
'table' => 'jobs',
'queue' => 'low',
'expire' => 600,
],
The behaviour is the same if I use redis
'redis' => [
'driver' => 'redis',
'connection' => 'default',
'queue' => 'low',
'expire' => 600,
],
Your configuration is slightly off. I'm not sure where expire came from, but I believe you meant retry_after. Since your configuration does not define a retry_after key, Laravel defaults the value to 60 seconds. So your queue is killing the job after it runs for 60 seconds and re-queuing it to try again.
Additionally, the following note is from the documentation:
The --timeout value should always be at least several seconds shorter than your retry_after configuration value. This will ensure that a worker processing a given job is always killed before the job is retried. If your --timeout option is longer than your retry_after configuration value, your jobs may be processed twice.
So, if your queue work timeout is going to be 600, I'd suggest setting your retry_after to at least 610.
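In other words, keep the worker's --timeout at 600 and change the connection entry to something like this (a sketch based on the database connection from the question):
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'low',
    'retry_after' => 610, // must be longer than the worker's --timeout=600
],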

Queue doesn't start in Laravel

I try:
php artisan queue:listen
But the result is empty: the command just sits there with no output.
What should happen?
I want it to execute the code:
$job = (new SendEmail())->delay(10);
$this->dispatch($job);
That's what should happen. It means the listener is waiting for something to be pushed onto the queue.
If you carry out an action that pushes something onto the queue, through an event or job etc., then you will see something like:
-bash-4.1$ php artisan queue:listen
[2016-07-22 09:27:57] Processed: App\Listeners\Users\SendWelcomeEmail#handle
Have you definitely set up the correct queue driver (e.g. database) in your .env or config/queue.php file?
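For a quick check, here is a minimal sketch of a queueable SendEmail job (Laravel 5.x; the App\Jobs namespace and the log line are assumptions standing in for the real email-sending code):
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Log;

class SendEmail implements ShouldQueue
{
    use InteractsWithQueue, Queueable, SerializesModels;

    public function handle()
    {
        // Placeholder work: the real job would send the email here.
        Log::info('SendEmail job handled');
    }
}
With a non-sync driver such as database configured (QUEUE_DRIVER in .env) and the listener running, dispatching this job should produce a Processed: line like the one shown above.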

Multiple Sidekiq queues for a Sinatra application

We have a Ruby Sinatra application. We use Sidekiq and Redis for queue processing.
We have already implemented Sidekiq to queue up jobs that insert into the database, and it has worked fine so far.
Now I want to add another job which reads bulk data from the database and exports it to a CSV file.
I do not want both jobs in the same queue. Instead, is it possible to create a different queue for these jobs in the same application?
Please suggest a solution.
You probably need advanced queue options. Read about them here: https://github.com/mperham/sidekiq/wiki/Advanced-Options
Create the csv queue from the command line (it can be done in a config file as well):
sidekiq -q csv -q default
Then in your worker:
class CSVWorker
include Sidekiq::Worker
sidekiq_options :queue => :csv
# perform method
end
Take a look at the Sidekiq wiki: https://github.com/mperham/sidekiq/wiki/Advanced-Options
By default everything goes into the 'default' queue, but you can specify a queue in your worker:
sidekiq_options :queue => :file_queue
To tell Sidekiq to process your queue, you have to either declare it in the configuration file:
:queues:
- file_queue
- default
or pass it as an argument to the sidekiq process: sidekiq -q file_queue
