Do I have to run migrations to create the jobs and failed_jobs tables?
php artisan queue:table
php artisan queue:failed-table
php artisan migrate
The jobs table is used when your queue driver is database. (Since you're using Redis, you don't need it.)
The failed_jobs table is used when your queued jobs fail to run. It's good to have this table so that you can track the jobs that failed.
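Once the failed_jobs table exists, you can inspect and retry failures from the command line with the standard Artisan commands; the job ID below is just a placeholder:

php artisan queue:failed        # list failed jobs and their IDs
php artisan queue:retry 5       # retry the failed job with ID 5 (placeholder ID)
php artisan queue:retry all     # retry every failed job
php artisan queue:flush         # delete all failed job records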
Thank you in advance.
I have created jobs and queued them following https://laravel.com/docs/8.x/queues.
It works well locally and on AWS, but not on Google Compute Engine.
When I execute php artisan queue:work, the queued jobs start running, but on the Google Compute Engine instance nothing gets processed.
I used the config below in .env:
QUEUE_CONNECTION=database
Imagine:
1- Eating job
2- Drinking job
3- Eating Queue
4- Drinking Queue
I have an Eating job running on the Eating queue, and from within this job I want to dispatch a Drinking job onto the Drinking queue.
Is it possible?
I just found what solves my problem.
Instead of using php artisan queue:work, I used php artisan queue:listen for the job that dispatches another job onto the different queue.
It looks like queue:work only runs the jobs on its own queue and doesn't touch the other queue, but when I used queue:listen it worked fine.
A note: this helped on the Windows machine, while on the Linux server it works normally with php artisan queue:work.
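For reference, dispatching onto a different queue from inside a job is supported out of the box. The sketch below uses hypothetical EatingJob/DrinkingJob classes and assumes both queues live on the same connection; a single worker can also watch both queues:

// Inside app/Jobs/EatingJob.php (hypothetical job class)
public function handle(): void
{
    // ... do the eating work ...

    // Push the follow-up job onto the "drinking" queue
    DrinkingJob::dispatch()->onQueue('drinking');
}

# One worker can process both queues, in priority order
php artisan queue:work --queue=eating,drinking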
How to check running scheduler or cron jobs in Laravel 5.2 on a live server?
I want to know how to find the running cron jobs in my Laravel project.
You can check the jobs table for this. If you don't have this table yet, you can create it by running the following commands:
php artisan queue:table
php artisan migrate
and set your queue driver to database in config/queue.php. The created table will hold the currently active jobs along with the number of attempts.
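If you just want a quick look at what is sitting in that table, a query like the one below works; the column names are the ones Laravel's default queue:table migration creates:

// e.g. inside php artisan tinker
DB::table('jobs')->get(['id', 'queue', 'attempts', 'reserved_at', 'available_at']);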
I'm new to Laravel. I have implemented a queue with Redis and installed Supervisor to monitor it, but I can't figure out some things.
The Supervisor configuration is:
command=php <laravel path>/artisan queue:work --once
autostart=true
autorestart=true
user=www-data
numprocs=2
redirect_stderr=true
stdout_logfile=<laravel path>worker.log
Questions:
Will any error produced by a job executed by the queue be stored in worker.log, or, depending on the error, could it end up elsewhere?
How can I know the data of the job that is currently running?
How can I know the queue's content and whether the queue is working?
How can I know if Supervisor is working?
Taylor has built Laravel Horizon, available since Laravel 5.5. It is an absolute must if you have a job/queue-heavy application:
Laravel Horizon
While it takes a little bit of configuration to get up and running, once you do, you'll have all the metrics and data you need to monitor and inspect your jobs.
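The basic Horizon setup is just a few commands (Horizon requires the Redis queue driver, which you already have). For the more immediate questions, supervisorctl and redis-cli can tell you whether the worker and the queue are alive; the queues:default key name below assumes the default Redis queue and no custom Redis prefix:

composer require laravel/horizon
php artisan horizon:install
php artisan horizon                  # run the Horizon supervisor; dashboard lives at /horizon

sudo supervisorctl status            # is the Supervisor-managed worker actually running?
redis-cli llen queues:default        # how many jobs are waiting on the default Redis queue?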
I'm implementing Laravel queues via the database driver, and when I start the process that listens for jobs, another process is also started.
So basically I'm running php artisan queue:listen, and the process that is automatically started is php artisan queue:work.
So the second process here is generated automatically; the thing is that it also doesn't point to the folder where it should be.
The listener:
php artisan queue:listen
starts a long-running process that will "run" (process) new jobs as they are pushed onto the queue. See docs.
The processer:
php artisan queue:work
Will process new jobs as they are pushed onto the queue. See docs.
So basically queue:listen runs queue:work as and when new jobs are pushed.
P.S.: You need not worry about this, but it's good to know how it works. You can dig into the code if you need more information.
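In practice the difference matters mostly for deployments: queue:listen boots the framework fresh for each job (so it picks up code changes, at the cost of speed), while queue:work keeps the application in memory and has to be told to restart. A rough illustration, with typical but assumed flag values:

# Slower, but reloads your code for every job
php artisan queue:listen --timeout=60

# Faster long-running worker; tell it to restart after deploying new code
php artisan queue:work --tries=3
php artisan queue:restart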