Laravel Google Compute Engine - Queued jobs not working

Thanks in advance.
I created jobs and queued them following https://laravel.com/docs/8.x/queues.
Everything works locally and on AWS, but not on Google Compute Engine: running php artisan queue:work starts the worker, but the queued jobs are never processed.
My .env contains:
QUEUE_CONNECTION=database
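A checklist worth running on the GCE VM, sketched below under the assumption of a standard Laravel 8 deployment (the two most common culprits are a missing jobs table and a stale cached config overriding .env):

```shell
# Create and run the migration for the jobs table used by the
# database driver - without it, dispatched jobs are silently lost.
php artisan queue:table
php artisan migrate

# Make sure a cached config isn't overriding the .env value.
php artisan config:clear

# Run the worker with verbose output and watch for jobs being picked up.
php artisan queue:work --verbose
```

If jobs appear in the jobs table but the worker never picks them up, compare the QUEUE_CONNECTION the app resolves at runtime (php artisan tinker, then config('queue.default')) against what .env declares.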

Related

Laravel 7 - Stop Processing Jobs and Clear Queue

I have a production system on AWS and use Laravel Forge. There is a single default queue that is processing Jobs.
I've created a number of jobs and now wish to delete them (they take many hours to complete and I realized my input data was bad). I created a new job with good data, but it won't be processed until all the others have finished.
How can I delete all jobs?
It was previously set up with the redis queue driver. I could not figure out how to delete the jobs, so I switched the driver to database and restarted the server, thinking this would at least stop the jobs from processing. Much to my dismay, they continue to be processed.
I even deleted the worker from the Forge UI and restarted the server: the jobs still process.
Why do jobs continue to be processed?
How can I stop them?
You can use:
php artisan queue:clear redis
This clears all jobs from the default queue on the redis connection. If you put jobs on another queue, specify the queue name, for example:
php artisan queue:clear redis --queue=custom_queue_name
(Note: queue:clear only exists in newer Laravel releases; on Laravel 7 you can delete the queue's Redis keys, or the jobs table rows, directly.)
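Clearing the queue also does not explain why jobs kept running after the driver switch: queue workers are long-lived processes that keep the old configuration and code in memory. Laravel ships a command to signal all running workers to exit after their current job (Forge/Supervisor then restarts them fresh):

```shell
# Gracefully stop every running worker after its current job;
# the process manager (Supervisor/Forge) restarts them with the
# new driver and config.
php artisan queue:restart
```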

How to check running scheduler jobs in laravel 5.2 on live server?

How do I check running scheduler or cron jobs in Laravel 5.2 on a live server? I want to find the cron jobs currently running for my Laravel project.
You can check the jobs table for this. If you do not have this table yet, create it by running:
php artisan queue:table
php artisan migrate
and set your queue driver to database in config/queue.php. The created table holds the currently active jobs along with the number of attempts.

How can I use a single Horizon instance to monitor jobs from all sites on server

I have multiple (Laravel 5.8) sites on a single Laravel Forge server.
Instead of installing Horizon on all sites I would rather have a single site running just Horizon with some basic authentication.
All sites are using the same Redis instance running on the server.
I would like to monitor all jobs created on all sites via this single Horizon dashboard, however I can't seem to get Horizon to see the jobs created by the other sites.
The jobs are running as expected, but not showing in Horizon.
I have tried removing the HORIZON_PREFIX env value in the Horizon config but that doesn't work.
I have tried setting HORIZON_PREFIX on the Horizon site, and REDIS_PREFIX on the other sites to the same value but that doesn't work either.
Note: The other sites are not running Horizon at all (something I want to avoid), they are just communicating with Redis in the standard way.
What am I missing?

Elastic Beanstalk and Laravel queues

I'm implementing Laravel queues via the database driver, and when I start the process that listens for jobs, another process is started as well.
Basically, I run php artisan queue:listen, and a second process, php artisan queue:work, is started automatically.
This second process is spawned automatically, and it also doesn't point to the folder it should.
The listener:
php artisan queue:listen
starts a long-running process that will "run" (process) new jobs as they are pushed onto the queue. See docs.
The processer:
php artisan queue:work
Will process new jobs as they are pushed onto the queue. See docs.
So queue:listen spawns queue:work as and when new jobs are pushed.
P.S.: You don't need to worry about this, but it's good to know how it works. You can dig into the code if you need more information.
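The practical difference between the two is worth spelling out; a typical usage sketch (flags shown are standard queue:work options):

```shell
# Long-running worker: boots the framework once and keeps it in
# memory. Fast, but must be restarted to pick up code changes.
# The usual production setup, run under Supervisor or similar.
php artisan queue:work --tries=3 --timeout=90

# Listener: re-boots the framework for every job by spawning a
# fresh queue:work process (the "second process" seen above).
# Slower, but picks up code changes automatically - handy in dev.
php artisan queue:listen
```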

Can I trigger Laravel jobs from a controller instead of using the `php artisan queue` process

We are running our production system on Elastic Beanstalk and want to take advantage of its worker tiers with autoscaling. Unfortunately, Laravel expects all queues to be consumed by starting a PHP command-line process on your servers, and EB worker tiers don't work that way: AWS installs a listener daemon of its own that pulls off jobs and feeds them to your worker over local HTTP calls. Sounds great. Unfortunately, I can't figure out how to run a queued job from a route and controller in Laravel instead of using the built-in artisan queue listener. Any clues would be greatly appreciated.
You can use the Artisan::call method to run artisan commands from code:
$exitCode = Artisan::call('queue:work');
See the docs for more info.
Alternatively, dispatch the job directly from a controller action:
JobClassName::dispatch();
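For the EB worker-tier setup the asker describes, note that dispatch() would push the job back onto a queue, whereas the endpoint the AWS daemon posts to should execute the job in the request itself. A sketch of that pattern (route path, controller, and job class names here are all hypothetical; the request payload shape depends on how the EB daemon delivers the SQS message):

```php
<?php
// app/Http/Controllers/WorkerController.php
namespace App\Http\Controllers;

use App\Jobs\ProcessReport; // hypothetical job class
use Illuminate\Http\Request;

class WorkerController extends Controller
{
    // Endpoint the EB worker daemon POSTs dequeued messages to,
    // e.g. Route::post('/worker/queue', [WorkerController::class, 'handle']);
    public function handle(Request $request)
    {
        // dispatchSync (Laravel 8+; dispatchNow in older releases)
        // runs the job synchronously inside this HTTP request,
        // so the daemon's call completes only when the work is done.
        ProcessReport::dispatchSync($request->all());

        // 2xx tells the daemon the message was processed successfully.
        return response()->noContent();
    }
}
```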
