How to clear Laravel Queue using iron.io - laravel-4

I'm working on a TeamSpeak management system based on Laravel 4.
The problem is that when I restart the script, it adds the jobs to the queue again unless I restart the queue listener.
Is there a way to clear the old queue on script startup without having to restart queue:listen?
I'm using the Iron.io service as the queue engine.
Thanks in advance
//EDIT
Thanks to "thousandsofthem", it works with Laravel like this:
$queue_name = Config::get('queue.connections.iron.queue');
Queue::getIron()->clearQueue($queue_name);

How about calling $iron_mq->clearQueue($queue_name)?
https://github.com/iron-io/iron_mq_php/blob/master/IronMQ.class.php#L235
No idea how Laravel exposes it though
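Putting the edit above into a startup routine, a minimal sketch (assuming Laravel 4's Iron driver, and assuming clearQueue() can throw Http_Exception for HTTP errors, as the linked IronMQ class does):

```php
// Run once at script startup, before dispatching any new jobs.
try {
    $queue_name = Config::get('queue.connections.iron.queue');

    // Queue::getIron() returns the underlying IronMQ client;
    // clearQueue() deletes every pending message on the named queue.
    Queue::getIron()->clearQueue($queue_name);
} catch (Http_Exception $e) {
    // IronMQ raises an HTTP exception if, e.g., the queue does not
    // exist yet; ignore it so startup can continue.
}
```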

Related

Recently added json language file values are not updated in email blade

I send mail as a cron job with Laravel. When I try to use the most recently added value from my resources/lang/de.json file in the mail Blade template (resources/views/mails/...blade.php), the output looks as if no such value is defined. However, if I use the same key in a Blade file I created earlier, it works without any errors. In addition, the keys that I added to the same file (de.json) earlier work without errors in the same mail Blade file.
Thinking it might be some kind of caching issue, I researched and found that restarting the queue worker might fix the problem. So, both locally and on the server over SSH, I ran
php artisan queue:restart
but there was no improvement.
Do you have any ideas?
Since queue workers are long-lived processes, they will not notice changes to your code without being restarted. So, the simplest way to deploy an application using queue workers is to restart the workers during your deployment process. https://laravel.com/docs/9.x/queues#queue-workers-and-deployment
However, php artisan queue:restart only instructs all queue workers to gracefully exit after they finish processing their current job, so that no existing jobs are lost. And I have seen many reports of this command failing to actually restart the worker on deploy.
So, the simplest way:
stop the worker manually (Ctrl+C)
start the worker again with php artisan queue:work.
This might help.
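For a deployment script, a sketch of the same sequence (assuming a process manager such as Supervisor restarts the worker after it exits; without one, the final queue:work must be started by hand):

```shell
# Clear cached views/config so a fresh worker sees the new
# lang/de.json values, then tell running workers to exit after
# their current job.
php artisan view:clear
php artisan config:clear
php artisan queue:restart

# Without a process manager, start a new worker manually:
php artisan queue:work
```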

Laravel queue using redis is not stable?

I am using redis-server 5.x on my Ubuntu 20.x VPS. Sometimes a few jobs do not execute and stop immediately.
My command to create worker:
php artisan queue:work --timeout=0 --tries=2
So maybe there is something wrong with redis-server? I tried flushing Redis and even reinstalling it. Nothing changed.
P.S.: No exception was thrown in any job. Every job takes only one second to execute. There is nothing unusual in the logs of redis-server or the Laravel app.
Turns out I had another app using the same Redis server, and the workers of the second app were executing jobs from the first app. Problem solved.
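To avoid this in the future, one minimal sketch is to isolate each app on the shared Redis server by pointing the second app at a different logical database (the database numbers and env variable names here are illustrative):

```php
// config/database.php (second app): use Redis database 1 so its
// queue keys never collide with the first app, which stays on the
// default database 0.
'redis' => [
    'client' => 'phpredis',
    'default' => [
        'host'     => env('REDIS_HOST', '127.0.0.1'),
        'port'     => env('REDIS_PORT', 6379),
        'database' => env('REDIS_DB', 1),
    ],
],
```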

how to use queue when requests are parallel in laravel

My controller function is called in parallel, and I created a job to use Laravel's queue because the parallel calls were causing the problem.
I dispatch the job in my function:
$this->dispatch(new ProcessReferal($orderId));
and I run this command in the terminal:
php artisan queue:work --tries=3
But my job still runs in parallel and the processes execute simultaneously.
What's wrong?
If you are testing this on a local server, you have to set QUEUE_DRIVER=database in your .env file.
QUEUE_DRIVER=sync runs each job immediately inside the request, which is why the calls still execute in parallel.
Hi there,
With Laravel queues, you need to configure a few things in your project.
See more: https://laravel.com/docs/5.8/queues#connections-vs-queues
First:
Driver: the default is sync, so you need to change it to database, redis, etc.
You can change it in the .env file (QUEUE_DRIVER=database, ...).
Connections: very important if you set the driver to database and use multiple databases in your project.
Second:
Laravel queues have several config options, but I think three matter most: retry_after, timeout, and tries. When you work with large jobs, retry_after and timeout are very important.
Hope it can help you.
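As a sketch of the three options mentioned above (values illustrative; retry_after lives in config/queue.php, while timeout and tries are worker flags):

```php
// config/queue.php: retry_after must exceed the worker's --timeout,
// otherwise a slow job can be handed to a second worker while the
// first is still processing it.
'database' => [
    'driver'      => 'database',
    'table'       => 'jobs',
    'queue'       => 'default',
    'retry_after' => 90, // seconds before a reserved job is retried
],
```

Then start the worker with, e.g., php artisan queue:work --timeout=60 --tries=3, keeping --timeout below retry_after.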

Notifications not added to queue

I've provisioned a Laravel Forge server and configured it to use redis for queues via .env:
QUEUE_DRIVER=redis
My settings for Redis in both config/queue.php and config/database.php are the defaults found in a new laravel project.
The problem is that when a mail notification is triggered, it is never added to the queue. It never gets to the processing stage.
I've tried using forge's queue interface as well as SSH into the server and running a simple
php artisan queue:listen
without any parameters. In both cases, no results (using the artisan command confirms no job is added to the queue).
Interestingly, I tried Beanstalkd:
QUEUE_DRIVER=beanstalkd
and suffered the same problem.
As a sanity check, I set the queue driver to sync:
QUEUE_DRIVER=sync
and the notification was delivered without issue, so there isn't a problem with my code in the notification class, it's somewhere between calling the notify method and being added to the queue.
The same configuration running locally works fine. I can use
php artisan queue:listen
and the notifications go through.
After an insane amount of time trying to address this, I discovered it was because the app was in maintenance mode. To be fair, the documentation does state that queued jobs aren't fired in maintenance mode, but unless you knew maintenance mode was the culprit you probably wouldn't be looking in that section.
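If you hit the same symptom, a quick sanity check is whether the app is in maintenance mode (in Laravel 5.x this is tracked by a storage/framework/down marker file; adjust for your version):

```shell
# If this file exists, the app is in maintenance mode and queued
# jobs/notifications will not be dispatched.
ls storage/framework/down

# Bring the application back up.
php artisan up
```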

Laravel 4.2 Queue Push Syncing When Set To Redis

Using the Laravel 4.2 framework. I was on 4.1.x, but even after switching back to that version, Queue::push still fires immediately as if the queue config were set to sync, even though it is set to redis.
When running the queue closure, the command executes immediately; I confirmed this with sample output in the actual command. I can connect to the Redis DB without an issue using the settings in the config file.
Here is the syntax of my queue closure:
Queue::push(function($job) use ($placeId)
{
    Artisan::call('testcommandname', [$placeId]);
    $job->delete();
});
Not sure if I am overlooking something or what? Thanks for the help.
So, while I thought this was an error due to a framework upgrade, it turned out that I wasn't setting the correct environment configuration for queues.
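For reference, in Laravel 4.x the per-environment config directory silently overrides the main one, so a leftover file such as app/config/local/queue.php (the environment name here is illustrative) can pin the driver back to the default sync. A minimal sketch of the fix:

```php
<?php
// app/config/local/queue.php: make the environment override agree
// with the main config instead of falling back to 'sync'.
return array(
    'default' => 'redis',
);
```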
