I have a job like this:
//Run very intensive script that generates files
//Notify the user that the job is done
I know that the script takes 4-5 minutes to run, since that is the time needed to generate all the files. However, after exactly 60 seconds the job is removed (i.e. I no longer see it in my jobs database table) and the user gets notified. Then, every 60 seconds until the script is done, the user is notified again that the job is done.
The job does not fail. It is only present in the jobs table for the first 60 seconds, and the file-generating script runs only once.
I use Supervisor:
[program:queue]
process_name=%(program_name)s_%(process_num)02d
command=php artisan queue:work --timeout=600 --queue=high,low
user=forge
numprocs=8
directory=/home/forge/default
stdout_logfile=/home/forge/default/storage/logs/supervisor.log
redirect_stderr=true
Here's my database config:
'database' => [
'driver' => 'database',
'table' => 'jobs',
'queue' => 'low',
'expire' => 600,
],
The behaviour is the same if I use Redis:
'redis' => [
'driver' => 'redis',
'connection' => 'default',
'queue' => 'low',
'expire' => 600,
],
Your configuration is slightly off. I'm not sure where expire came from, but I believe you meant retry_after. Since your configuration does not define a retry_after key, Laravel defaults the value to 60 seconds. So your queue is killing the job after it runs for 60 seconds and re-queueing it to try again.
Additionally, the following note is from the documentation:
The --timeout value should always be at least several seconds shorter than your retry_after configuration value. This will ensure that a worker processing a given job is always killed before the job is retried. If your --timeout option is longer than your retry_after configuration value, your jobs may be processed twice.
So, if your queue worker timeout is going to be 600, I'd suggest setting your retry_after to at least 610.
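For reference, a corrected connection entry would look like this (a sketch based on the config above; 610 just follows the rule of thumb of exceeding the worker's --timeout):
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'low',
    'retry_after' => 610, // longer than the worker's --timeout of 600
],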
Sometimes when I deploy a Laravel project to AWS Elastic Beanstalk I'm faced with an annoying error saying that the log file cannot be opened:
The stream or file "/var/app/current/storage/logs/laravel-2020-10-21.log" could not be opened: failed to open stream: Permission denied
In my eb deploy.config file I have a statement which, in theory, should fix things, but doesn't:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_make_storage_writable.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      echo "Making /storage writeable..."
      chmod -R 755 /var/app/current/storage
      if [ ! -f /var/app/current/storage/logs/laravel.log ]; then
        echo "Creating /storage/logs/laravel.log..."
        touch /var/app/current/storage/logs/laravel.log
        chown webapp:webapp /var/app/current/storage/logs/laravel.log
      fi
This is because it's not referencing the daily log file.
I have an .ebignore file in place which explicitly prevents local logs from being deployed, so it isn't the presence of an existing log file that's causing problems:
/storage/logs/*
The issue is that Laravel is creating the daily log as root, so it cannot be written to by the normal user (webapp).
I just don't know why it's doing that.
The solution is to allow each process to create its own log file, so that each process has the correct permissions to write to it.
You can do this in the config/logging.php file by adding the process name (php_sapi_name()) to the file name:
'daily' => [
'driver' => 'daily',
'path' => storage_path('logs/' . php_sapi_name() . '-laravel.log'),
'level' => 'debug',
'days' => 14,
],
Now each process will be able to write to its own file (for example, fpm-fcgi for the web server and cli for Artisan commands), and there will be no permission problems.
Important note: the above example uses the daily channel, but make sure you apply the change to the right logging channel for your setup.
Try setting the storage folder permissions like this:
chmod -R gu+w storage/
chmod -R guo+w storage/
If anyone else stumbles upon this one and can't solve it despite all the great solutions: another cause of the original problem ("could not be opened: failed to open stream: Permission denied") is that the log file is written by the root user, not ec2-user or webapp. That means that no matter how correctly chmod or chown is applied, we can't touch the file.
So the workaround is to save the log file with the user name included, so each user writes to a different file.
Add .get_current_user(). to the storage_path:
config/logging.php
'single' => [
'driver' => 'single',
'path' => storage_path('logs/'.get_current_user().'laravel.log'),
'level' => env('LOG_LEVEL', 'debug'),
],
'daily' => [
'driver' => 'daily',
'path' => storage_path('logs/'.get_current_user().'laravel.log'),
'level' => env('LOG_LEVEL', 'debug'),
'days' => 14,
],
Beyond that, one can question why log files are stored on a Beanstalk instance at all, since they will be overwritten on the next deploy anyway; I would advise Horizon, S3 storage, or something similar, but that's a different topic. I guess you just want to solve the issue. I spent about a week until I found out that the root user wrote the file first...
You can check who owns the file by SSHing into the Beanstalk instance with eb ssh. Go to the folder /var/app/current/storage/logs and run ls -la (which lists permissions). You will see that the root user wrote the file first and therefore has rights to it. I tried changing predeploy and postdeploy settings, but that didn't work; writing to a separate file name worked fine.
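For illustration, the check described above looks like this (using the EB CLI and the paths from the question):
eb ssh
cd /var/app/current/storage/logs
ls -la    # lists owner and permissions; look for log files owned by root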
I have two different Laravel queues on the same server. In my supervisord.d folder I have two ini files for those queues. The job names are different in the two queues but, every time I run a job and expect the result from one queue, the other queue also interferes. Here is a sample of the ini files:
[program:queue_runner]
command = php /path_to_prod/artisan queue:work --daemon --queue=default,smsInt,smsIntLow --tries=1 --timeout=30
stdout_logfile = /path_to_prod/storage/logs/supervisor.log
redirect_stderr = true
numprocs = 5
process_name = %(program_name)s%(process_num)s
[program:queue_runner_test]
command = php /path_to_test/artisan queue:work --daemon --queue=default,smsIntTest,smsIntTestLow --tries=1 --timeout=30
stdout_logfile = /path_to_test/storage/logs/supervisor.log
redirect_stderr = true
numprocs = 50
process_name = %(program_name)s%(process_num)s
Could you please help me solve this?
I found the solution to my problem. The jobs were dispatching from the test site on the smsIntTest queue and from the other site on the smsInt queue from the beginning, but they were getting picked up by the wrong workers every time.
As suggested in the following post: Why is Laravel or Beanstalkd skipping jobs?
I've assigned 'queue' => 'smsInt' in the 'connections' array of the app/config/queue.php file for one site, and 'queue' => 'smsIntTest' for the other one. This solved the problem.
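For illustration, the relevant part of each site's queue config might look like this (a sketch assuming the Redis driver; only the queue key matters here):
// app/config/queue.php on the production site
'connections' => [
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => 'smsInt', // use 'smsIntTest' on the test site
    ],
],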
So I have
multitask :do_something => [
:task1,
:task2,
:task3,
:task4,
:task5,
:task6
]
And each task runs a script. I want task1 through task6 to run concurrently, 1000 times, without stopping. Is there a way this can be achieved?
I want to store special characters such as emoji in the database and read them back, but when I try to save them I get question marks.
In your database.php file, make sure to set the charset and collation to utf8mb4:
'mysql' => [
'driver' => 'mysql',
'host' => env('DB_HOST', 'localhost'),
'database' => env('DB_DATABASE', 'forge'),
'username' => env('DB_USERNAME', 'forge'),
'password' => env('DB_PASSWORD', ''),
'charset' => 'utf8mb4',
'collation' => 'utf8mb4_unicode_ci',
],
And your emoji column should be a string:
$table->string('emoji');
1) Ensure you're using MySQL 5.5.3 or later; only then will you be able to change the collation to utf8mb4_something.
2) Ensure the table columns that are going to receive emoji have their collation set to utf8mb4_something.
If you still have the issue, edit your database.php config file and update the following:
'charset' => 'utf8mb4',
'collation' => 'utf8mb4_unicode_ci',
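For step 2, an existing table can be converted with a raw statement; a sketch, assuming a hypothetical messages table:
// e.g. inside a migration's up() method; 'messages' is a placeholder table name
DB::statement('ALTER TABLE messages CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci');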
In addition to the answers above, make sure to clear the config cache and refresh your migrations so the MySQL tables are recreated with the new charset.
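For example, with the standard Artisan commands (note that migrate:refresh rolls back and re-runs all migrations, so it will drop your data):
php artisan config:clear
php artisan migrate:refresh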
If you are using a websocket server for chat and sending emoticons, then you have to restart the websocket server.
You can find and kill all the websocket processes like this:
sudo kill (your PID), without the round brackets.
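For example (the grep pattern is an assumption; match it to whatever your websocket process is actually called):
ps aux | grep websocket    # find the PID of the websocket process
sudo kill 12345            # replace 12345 with the PID found above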
As for the question (?) marks: that can be expected, because some browsers and MySQL editors don't support emoticons, so they display question marks instead.
One of the best and easiest ways is to change your charset to utf8mb4:
'charset' => 'utf8mb4',
Logstash uses the sincedb file to store the position it has reached in processing a file. In the event of Logstash shutting down before processing is completed, it can use sincedb to continue from where it left off.
Running on Windows, the behaviour observed is that the sincedb file is only written when Logstash closes. This means that if the machine Logstash is running on is terminated and Logstash's own shutdown routines are not called, no sincedb file will be written.
Setting sincedb_write_interval to different values does not appear to make any difference. Even with this set, sincedb is only written when Logstash terminates or is shut down.
Below is the basic structure of our logstash configuration.
Are we using sincedb_write_interval in the wrong way?
Thanks
input {
  file {
    path => "..."
    sincedb_write_interval => 10
  }
}
output {
  elasticsearch {
    host => "..."
    index => "..."
    protocol => "http"
    cluster => "..."
  }
}
You are using it correctly.
However, the default is 15 seconds, so you should not be having this issue. Could you run with some test input, wait a minute, and then post your sincedb?
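For reference, in Logstash 1.x each line of the sincedb file tracks one file as four numbers: the inode (or an equivalent identifier on Windows), the major device number, the minor device number, and the byte offset read so far. A line might look like:
262504 0 64768 1024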
sincedb_write_interval matters when Logstash decides whether to pick up from where it last read.
If you set sincedb_write_interval => NULL, Logstash will re-parse the whole file even though it has parsed it before.
I am using a very old Logstash, 1.4.2, and I'm having the same issue. The only value that works is 1: the default of 15 doesn't work, and no other value apart from 1 works either.
sincedb_write_interval => 1
Setting it to "1", updates the sincedb immediately.