The documentation doesn't seem to be quite up to date yet, and the answers on the net are mostly wrong, but if you want a 'dev' version of a queue worker/processor/listener, use 'queue:listen' instead of 'queue:work'.
'queue:work' will absolutely cache everything, and nothing short of killing the process and restarting it will clear that cache. That includes using the 'none' cache driver, etc.
It also includes the 'queue:restart' command, which is supposed to soft-restart the worker queue. Maybe it does that, but it doesn't clear the cache.
Use php artisan queue:listen instead of php artisan queue:work.
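As a rough sketch, the split between development and production looks like this (these are the standard artisan commands, with the behavior described above noted in comments):

```shell
# Development: queue:listen boots the framework freshly for each job,
# so code and translation changes are picked up without restarting
# the worker (slower, but convenient for dev).
php artisan queue:listen

# Production: queue:work keeps one framework instance in memory
# (fast, but any code/config change requires restarting the worker).
php artisan queue:work
```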
I want to make an uptime control application with Laravel.
New sites will be added to the database constantly.
Each site will be thoroughly checked: whether the site is online or not, when it went down, how long it took to come back online, etc. I will store such details in the database.
No problem so far; I can do these parts. I only described them to give context.
However, I am stuck on how to make a background service through Laravel.
For example, if I were doing this with plain PHP, I would create a Linux service, start it with "systemctl start foo", and make it run at boot with "systemctl enable foo". I would create a PHP file that runs forever in a "while" loop. Or I could use nohup or supervisor.
But I don't want to go this way. This is a framework and I want to move forward in that direction.
I did a lot of research but couldn't find a solution. There is the Laravel queue for some operations, but that's not exactly what I want: jobs are added to the queue with the "dispatch" method and processed with the "php artisan queue:work" command, with supervisor keeping it running continuously.
But in my case there is no "dispatch" trigger. The process constantly watches the database and runs checks.
I created a command with "php artisan make:command foo", and running it with "php artisan foo" works fine.
But how can I run it continuously? Is there a way around this, or should I go outside the framework?
If I can resolve this issue without leaving the framework, is there also a way to monitor the processes on the frontend (via websockets or something external)?
Because I plan to run other services in the background and it would be great to be able to view their current status, uptime, network usage and output.
Or, a small idea that comes to mind: I start the first job manually (dispatch it to the queue and run the queue under supervisor), and when it completes, it dispatches the next site as a job. But that doesn't seem like a very healthy approach.
I preferred to proceed using supervisor.
I used a configuration like this:
[program:your-program-name]
process_name=%(program_name)s_%(process_num)02d
command=php /home/foo/bar/baz/laravel_path/artisan yourProgramCommandCode
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=paparazi
redirect_stderr=true
stdout_logfile=/home/foo/bar/baz/laravel_path/logs/visor.log
stopwaitsecs=3600
details: https://laravel.com/docs/9.x/queues#configuring-supervisor
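After saving that config (typically under /etc/supervisor/conf.d/, though the path varies by distro), supervisor has to be told to pick it up. A minimal sequence, assuming the program name from the config above:

```shell
# re-read config files and apply any new/changed programs
sudo supervisorctl reread
sudo supervisorctl update

# start the worker process group and verify it is running
sudo supervisorctl start your-program-name:*
sudo supervisorctl status
```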
I send mail as a cron job with Laravel. When I use the most recently added key from my resources/lang/de.json file in the mail Blade template (resources/views/mails/...blade.php), the output behaves as if no such key is defined. However, the same key works without any errors in a Blade file I created earlier, and keys that I added to the same file (de.json) earlier work without errors in the same mail template.
Thinking it was some kind of cache situation, I researched and found that restarting the queue worker might fix the problem. However, both locally and on the server over SSH, running
'php artisan queue:restart'
brought no improvement.
Do you have any ideas?
Since queue workers are long-lived processes, they will not notice changes to your code without being restarted. So, the simplest way to deploy an application using queue workers is to restart the workers during your deployment process. https://laravel.com/docs/9.x/queues#queue-workers-and-deployment
But php artisan queue:restart only instructs all queue workers to gracefully exit after they finish processing their current job, so that no existing jobs are lost. And I've seen many reports of this command failing to actually restart and redeploy the worker.
So, the simplest way:
stop the worker manually (Ctrl+C),
then start it again with php artisan queue:work.
This might help.
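If the worker runs under supervisor rather than in a foreground terminal, the same restart can be scripted during deployment; a sketch (the program name is a placeholder matching your supervisor config):

```shell
# ask running workers to finish their current job and exit;
# supervisor's autorestart will then respawn them with fresh code
php artisan queue:restart

# or force the restart explicitly through supervisor
sudo supervisorctl restart your-program-name:*
```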
I want the command php artisan queue:work to stay active and not get killed for a long time.
When we have queues and run the server using only the command php artisan queue:work, it can get killed for some reason and our queues stop working. What should I do in this case?
Your question is quite ambiguous, but I'll assume you need the command to keep running while you still have access to that same command line, without closing or stopping the process.
I would recommend using screen for this, which effectively gives you virtual terminals inside the one you already have.
Give the following article a read
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-screen-on-an-ubuntu-cloud-server
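A minimal screen session for this could look like the following (the session name is arbitrary):

```shell
# start a named virtual terminal
screen -S queue-worker

# inside it, run the long-lived worker
php artisan queue:work

# detach with Ctrl+A then D; the worker keeps running in the background.
# later, reattach to check on it:
screen -r queue-worker
```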
I've provisioned a Laravel Forge server and configured it to use redis for queues via .env:
QUEUE_DRIVER=redis
My settings for Redis in both config/queue.php and config/database.php are the defaults found in a new laravel project.
The problem is that when a mail notification is triggered, it is never added to the queue. It never gets to the processing stage.
I've tried using forge's queue interface as well as SSH into the server and running a simple
php artisan queue:listen
without any parameters. In both cases, no results (using the artisan command confirms no job is added to the queue).
Interestingly, I tried Beanstalkd:
QUEUE_DRIVER=beanstalkd
and suffered the same problem.
As a sanity check, I set the queue driver to sync:
QUEUE_DRIVER=sync
and the notification was delivered without issue, so there isn't a problem with my code in the notification class, it's somewhere between calling the notify method and being added to the queue.
The same configuration running locally works fine. I can use
php artisan queue:listen
and the notifications go through.
After an insane amount of time trying to address this, I discovered it was because the app was in maintenance mode. To be fair, the documentation does state that queued jobs aren't fired in maintenance mode, but unless you knew maintenance mode was the culprit you probably wouldn't be looking in that section.
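If you suspect the same situation, maintenance mode is quick to check and toggle from artisan:

```shell
# bring the app out of maintenance mode so queued jobs fire again
php artisan up

# put it back into maintenance mode when needed
php artisan down
```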
I have been developing a web page, directorioelectronico.com, and I am having some issues now; I would be very grateful if someone could help me.
The web page loads very slowly on the first request (5,000 ms - 20,000 ms); subsequent loads are at normal speed. I tried to install the APC module, but my host is shared and the administrator cannot install it, so I raised realpath_cache_size to 2M and performance is now better (4,000 - 16,000 ms). Does anybody know how I can improve it further?
Thank you very much in advance for your help.
My issue was that my shared host didn't have the APC cache, and for Symfony2 having it is practically mandatory for good load times, so I changed my host provider. I now have a VPS where I could install APC, and the site is now very fast.
The first time a Symfony program is run with env=prod, it has to create a significant amount of cached code - parsing the routes, annotations, converting configurations files and preparing CSS and Javascript.
It will always be a lot slower the first time, so that the rest of the time it will be fast. If you can run it before the website goes live (eg, with app/console), then that work can happen offline.
After cache:clear, the next call to the application has to rebuild a number of cached files. That can be slow, so why make a site visitor trigger it?
If you're clearing the cache in production, try using the cache:warmup command to pre-build the cache. It means the next visitor won't have to wait while the heavy lifting is done.
Something like this should help:
$ php ./app/console cache:clear --env=prod
$ php ./app/console cache:warmup --env=prod
More info in the Symfony documentation.
I'd also suggest enabling the query and result caches for Doctrine (did you install/activate the APC cache for your PHP installation?). This might further reduce the loading time. Just have a look here :-)
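Assuming APC is available, enabling Doctrine's caches in a Symfony2-era project is a small config change, roughly like this (file path follows the old app/config layout):

```yaml
# app/config/config_prod.yml
doctrine:
    orm:
        # cache parsed entity metadata instead of re-reading annotations
        metadata_cache_driver: apc
        # cache the DQL-to-SQL translation of queries
        query_cache_driver: apc
        # cache query results (use with care: results can go stale)
        result_cache_driver: apc
```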
Also, use a deployment script to automatically trigger the cache clear/warmup mentioned above. This way you won't forget to run them.
Do you use Assetic for CSS/JS? Then combine those files and minify them via Assetic filters.
Good candidates for deployment scripts are ansible, capifony or just a simple shell script.
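A bare-bones shell deployment script along those lines might look like this (the project path is a placeholder, and the assetic step only applies if you use it):

```shell
#!/bin/sh
set -e  # abort on the first failing step

cd /var/www/myapp  # assumed project path
git pull

# rebuild the prod cache offline so no visitor pays the cost
php ./app/console cache:clear --env=prod --no-debug
php ./app/console cache:warmup --env=prod

# dump combined/minified assets if Assetic is in use
php ./app/console assetic:dump --env=prod
```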