Sometimes when I deploy a Laravel project to AWS Elastic Beanstalk I'm faced with an annoying error saying that the log file cannot be opened:
The stream or file "/var/app/current/storage/logs/laravel-2020-10-21.log" could not be opened: failed to open stream: Permission denied
In my .ebextensions deploy config file I have a statement which, in theory, should fix things, but doesn't:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_make_storage_writable.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      echo "Making /storage writeable..."
      chmod -R 755 /var/app/current/storage

      if [ ! -f /var/app/current/storage/logs/laravel.log ]; then
        echo "Creating /storage/logs/laravel.log..."
        touch /var/app/current/storage/logs/laravel.log
        chown webapp:webapp /var/app/current/storage/logs/laravel.log
      fi
This doesn't help because it references laravel.log, not the daily log file (laravel-2020-10-21.log) that Laravel is actually writing to.
I have an .ebignore file in place which explicitly prevents local logs from being deployed, so it isn't the presence of an existing log file that's causing problems:
/storage/logs/*
The issue is that Laravel is creating the daily log as root, so it cannot be written to by the normal user (webapp).
I just couldn't work out why it was doing that.
The solution is to allow each process to create its own log file. That way each process will have the correct permissions to write to it.
You can do this in the config/logging.php file by adding the process name (php_sapi_name()) to the file name:
'daily' => [
    'driver' => 'daily',
    'path' => storage_path('logs/' . php_sapi_name() . '-laravel.log'),
    'level' => 'debug',
    'days' => 14,
],
Now each process will be able to write to its own file and there will be no permission problems.
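For example, with the daily driver the date is still appended, so on a typical Beanstalk instance you would expect one file per PHP process type, along these lines (exact names depend on your PHP setup):

fpm-fcgi-laravel-2020-10-21.log    <- written by the PHP-FPM web workers (webapp)
cli-laravel-2020-10-21.log         <- written by artisan commands and cron jobs (possibly root)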
Important Note: The above example uses the "daily" channel, but make sure you make the change to the right logging channel for your setup.
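If you're not sure which channel is active, it's normally selected by the LOG_CHANNEL variable in your .env file (assuming a standard Laravel setup):

LOG_CHANNEL=daily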
Try setting the storage folder permissions like this:
chmod -R gu+w storage/   # give the owning user and group write access
chmod -R guo+w storage/  # or: give user, group, and others write access
If anyone else stumbles upon this one and can't solve it despite all the great solutions above: one other cause of the original "could not be opened: failed to open stream: Permission denied" problem is that the log file is being written by the root user, not ec2-user or webapp. That means no matter how correct your chmod or chown is, you still can't touch the file.
So the workaround is to make sure the log file is saved with the user name in it, so that each user writes to a different file.
Add ".get_current_user()." to the storage_path
config/logging.php
'single' => [
    'driver' => 'single',
    'path' => storage_path('logs/'.get_current_user().'laravel.log'),
    'level' => env('LOG_LEVEL', 'debug'),
],

'daily' => [
    'driver' => 'daily',
    'path' => storage_path('logs/'.get_current_user().'laravel.log'),
    'level' => env('LOG_LEVEL', 'debug'),
    'days' => 14,
],
Beyond that, one can ask why log files are stored on a Beanstalk instance at all, since they'll be wiped on the next deployment anyway; I would advise Horizon, S3 storage, or something similar, but that's a different topic. I guess you just want to solve the issue. I spent about a week before I found out the root user had written the file first...
You can check who owns the file if you SSH in to the Beanstalk instance with eb ssh. Go to the folder /var/app/current/storage/logs and run ls -la to list permissions. You'll see that the root user wrote the file first and therefore has rights to it. I tried changing predeploy and postdeploy settings, but that didn't work; writing to a separate file name worked fine.
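For reference, a quick way to verify the ownership (assuming the standard Beanstalk paths used in this post):

eb ssh                               # SSH into the instance using the EB CLI
cd /var/app/current/storage/logs     # Laravel's log directory
ls -la                               # the owner column shows who created each file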
So I have a fairly modest Logstash setup for Apache logs that I am using on RedHat 7 (production) as well as macOS High Sierra (10.13.6) for development, and something odd has happened since upgrading from Logstash version 6.3.2 to 6.4.1. I am using Homebrew on macOS to install and update Logstash, and these issues persist even if I "nuke" my installed Homebrew items and reinstall.
Straight to the point.
Simply put, static data input files are not being read and ingested on startup in 6.4.1 as they once were in 6.3.2 and earlier. With 6.4.1 I need to manually cat log lines to the target path for Logstash to "wake up" and pick up these new lines, even if I designate the new read mode.
At the end of the day, this setup doesn't need a sincedb and can be restarted and read each file from head to end, and we are all happy... at least until Logstash 6.4.1. Now nobody is happy. What can be done to force Logstash to always read data from the beginning of files, no matter what?
Details and discovery.
The Logstash setup I am using just does some filtering of Apache logs for input. The input config I am using reads as follows; note that the file path is slightly tweaked for privacy but is effectively exactly what I am using right now and have been using for the past year or so without issue:
input {
  file {
    path => "/opt/logstash/coolapp/access_log*"
    exclude => "*.gz"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    ignore_older => 0
    close_older => 3600
    stat_interval => 1
    discover_interval => 15
  }
}
The way I am using this for local development is simply getting a copy of remote Apache server logs and placing them in that /opt/logstash/coolapp/ directory.
Then I start up Logstash via the command line like this, with the -f option set so my coolapp-apache.conf is read:
logstash -f coolapp-apache.conf
Logstash starts up locally and emits its pile of startup status messages until this final message:
[2018-09-24T12:40:09,458][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
To me that indicates it's fully up and running, and checking my data collection output shows, when it is working, a flow of data pouring in... But when using Logstash 6.4.1 I see no data flowing in.
File input plugin works with tail mode.
Checking the newly updated documentation for the file input plugin (v4.1.5) shows there is a new mode option that has a read mode and a tail mode. Knowing that the default mode is tail, I tested the setup by doing the following after starting up my local Logstash debugging setup. First I copied the access_log as follows:
cp /opt/logstash/coolapp/access_log /opt/logstash/coolapp/access_log_BAK
Then I zeroed out the main access_log file using :> like this:
:> /opt/logstash/coolapp/access_log
And finally I just ran cat and appended that copied file’s data to the original file like this:
cat /opt/logstash/coolapp/access_log_BAK > /opt/logstash/coolapp/access_log
The second I did that, lo and behold, the data started to flow as expected! I guess the new file input plugin is focused on tailing a file more than reading it? Anyway, that works, but it is clearly annoying. I don't develop like this. I need Logstash to simply read the files and parse them.
File input plugin not working with read mode.
So I tried using the following setup to just read the files based on what I saw in the official Logstash file input mode documentation:
input {
  file {
    path => "/opt/logstash/coolapp/access_log"
    mode => "read"
    file_completed_action => "log"
    file_completed_log_path => "/Users/Giacomo1968/Desktop/access_log_foo"
  }
}
Of course, access_log_foo is just a proof-of-concept file name for testing, but when all is said and done, this read mode simply does not work on macOS. I have even tried changing the path to something like my desktop, and it doesn't work. And the whole "zero out and then append a file" trick I used in the tail mode test doesn't cut it here, since the file is not being tailed, I guess?
So knowing all of that:
What can be done to force Logstash 6.4.1 to always read data from the beginning of files no matter what as it once did effortlessly in Logstash version 6.3.2 and previous?
Okay, I figured this out. I am now on Logstash 6.5 and my original config was as follows:
input {
  file {
    path => "/opt/logstash/coolapp/access_log*"
    exclude => "*.gz"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    ignore_older => 0
    close_older => 3600
    stat_interval => 1
    discover_interval => 15
  }
}
When I redid it, getting rid of ignore_older and adjusting close_older and stat_interval to use string_duration values, things started working again as expected:
input {
  file {
    path => "/opt/logstash/coolapp/access_log*"
    exclude => "*.gz"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    close_older => "1 hour"
    stat_interval => "1 second"
    discover_interval => 15
  }
}
My assumption is that Logstash 6.3.2 interpreted ignore_older set to 0 as false, thus disabling ignore_older, but in version 6.4 and higher that value is interpreted as an actual time value in seconds. I haven't dug deeply into the source code, but everything I have experienced points to that being the issue.
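As an aside, if you genuinely need ignore_older, the same string_duration format appears to be accepted there too, which sidesteps the ambiguity; a hypothetical example:

ignore_older => "4 days"    # skip files last modified more than 4 days ago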
Regardless, this config now works and I am running Logstash 6.5 on macOS Mojave (10.14.1) without any issues.
I get the "Error in exception handler" error very often, mainly because of file permission issue, and sometimes because of error in code.
I want to redirect user to a custom error page every time the system encounters the 'error in exception handler' error.
How do I handle this error?
It's because Laravel can't write to the log file. If you don't want logs, you can disable them in app/start/global.php around line 55:
App::error(function(Exception $exception, $code)
{
    Log::error(...); // comment out this line
});
But honestly, that would be treating the symptom instead of the problem. You should chown app/storage recursively to the user running the server. Fastest way:
1. In public/index.php, at the very top, temporarily put die(`whoami`) just after the opening <?php tag (see the sketch below).
2. Load any page and copy whatever it prints on the site. Let's say it's www-data.
3. Fire up a terminal/console, go to your project root, and run chown www-data -R app/storage, swapping www-data with whatever you found in step two.
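A minimal sketch of step one (the line is temporary and must be removed once you've noted the user):

<?php
// public/index.php: temporary debug line, remove after use
die(`whoami`); // backtick shell-exec: prints the user PHP runs as, e.g. "www-data"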
I've spent some hours trying to figure out why logrotate won't successfully upload my logs to S3, so I'm posting my setup here. Here's the thing: logrotate uploads the log file correctly to S3 when I force it like this:
sudo logrotate -f /etc/logrotate.d/haproxy
Starting S3 Log Upload...
WARNING: Module python-magic is not available. Guessing MIME types based on file extensions.
/var/log/haproxy-2014-12-23-044414.gz -> s3://my-haproxy-access-logs/haproxy-2014-12-23-044414.gz [1 of 1]
315840 of 315840 100% in 0s 2.23 MB/s done
But it does not succeed as part of the normal logrotate process. The logs are still compressed by my postrotate script, so I know that it is being run. Here is my setup:
/etc/logrotate.d/haproxy =>
/var/log/haproxy.log {
    size 1k
    rotate 1
    missingok
    copytruncate
    sharedscripts
    su root root
    create 777 syslog adm
    postrotate
        /usr/local/admintools/upload.sh 2>&1 /var/log/upload_errors
    endscript
}
/usr/local/admintools/upload.sh =>
echo "Starting S3 Log Upload..."
BUCKET_NAME="my-haproxy-access-logs"
# Perform Rotated Log File Compression
filename=/var/log/haproxy-$(date +%F-%H%M%S).gz \
tar -czPf "$filename" /var/log/haproxy.log.1
# Upload log file to Amazon S3 bucket
/usr/bin/s3cmd put "$filename" s3://"$BUCKET_NAME"
And here is the output of a dry run of logrotate:
sudo logrotate -fd /etc/logrotate.d/haproxy
reading config file /etc/logrotate.d/haproxy
Handling 1 logs
rotating pattern: /var/log/haproxy.log forced from command line (1 rotations)
empty log files are rotated, old logs are removed
considering log /var/log/haproxy.log
log needs rotating
rotating log /var/log/haproxy.log, log->rotateCount is 1
dateext suffix '-20141223'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
renaming /var/log/haproxy.log.1 to /var/log/haproxy.log.2 (rotatecount 1, logstart 1, i 1),
renaming /var/log/haproxy.log.0 to /var/log/haproxy.log.1 (rotatecount 1, logstart 1, i 0),
copying /var/log/haproxy.log to /var/log/haproxy.log.1
truncating /var/log/haproxy.log
running postrotate script
running script with arg /var/log/haproxy.log : "
/usr/local/admintools/upload.sh 2>&1 /var/log/upload_errors
"
removing old log /var/log/haproxy.log.2
Any insight appreciated.
It turned out that my s3cmd was configured for my user, not for root.
ERROR: /root/.s3cfg: No such file or directory
ERROR: Configuration file not available.
ERROR: Consider using --configure parameter to create one.
The solution was to copy my config file over. – worker1138
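In other words, something like this (assuming the config was created for your own user with s3cmd --configure):

# make the per-user s3cmd config visible to the root-run logrotate job
sudo cp ~/.s3cfg /root/.s3cfg

It's also worth noting that the postrotate line upload.sh 2>&1 /var/log/upload_errors passes the path as a plain argument rather than redirecting to it; upload.sh > /var/log/upload_errors 2>&1 would actually capture the script's output and errors, which makes problems like this much easier to spot.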
I can't believe how difficult Puppet is being with Windows - particularly Windows permissions! I have a very simple .pp file that I'm trying to execute:
case $operatingsystem {
  ...
  'Windows': {
    file { 'c:/puppet/':
      ensure => directory,
      owner  => 'myUser',
      group  => 'Administrators',
      mode   => '0777',
    }
  }
}
This seems as simple as it could get: create a directory called "c:\puppet" and let everyone have access. IT'S NOT WORKING! It creates a directory, but nobody has ANY permissions (except special permissions). I am in the Administrators group, so I can delete it and access it, but I want to drop stuff inside and be able to install from there (since apparently an "http" source doesn't work directly on Windows...).
Is ANYONE else using Puppet for Windows, or am I just using the wrong tool for this job? I am getting very frustrated; the documentation seems reasonable, but without simple examples of what I'm trying to do, I'm getting completely stuck.
Have you checked out
https://forge.puppetlabs.com/puppetlabs/acl
From their docs you can do something like:
acl { 'c:/puppet':
  permissions => [
    { identity => 'Administrator', rights => ['full'] },
    { identity => 'myuser', rights => ['read','execute'] },
  ],
  owner => 'myuser',
}
I've come up with a solution that seems to be systems administration by blunt force trauma:
exec { 'fix_acls':
  command => 'cmd.exe /c "takeown /r /f c:\puppet && icacls c:\puppet /grant SYSTEM:(OI)(CI)F myUser:(OI)(CI)F /T"',
  path    => $::path,
}
This gives myUser and SYSTEM access. I could probably also do this with USERS in general, but it seems there must be a better way.
I have this error:
Lock file could not be created
So far I've done everything that was recommended: I removed all the files from typo3temp and gave write permissions recursively, but it still does not work. Can anybody help?
The TYPO3 back end probably shows this error because the deprecation log file has grown to more than 30 MB.
To solve this error, disable the deprecation log in your configuration file (localconf.php before version 6, LocalConfiguration.php from version 6 on).
For TYPO3 versions < 6, add this line:
$TYPO3_CONF_VARS['SYS']['enableDeprecationLog'] = '0';
For TYPO3 versions >= 6, add 'enableDeprecationLog' => '0' to the 'SYS' array of LocalConfiguration.php.
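For the version 6 and later case, the setting sits inside the array that LocalConfiguration.php returns; a minimal sketch (other entries elided):

<?php
// typo3conf/LocalConfiguration.php (TYPO3 >= 6)
return array(
    'SYS' => array(
        'enableDeprecationLog' => '0',  // stop the deprecation log from growing
        // ... other SYS settings ...
    ),
    // ... other sections ...
);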
I hope this helps!
In my case I had no typo3temp folder, so I created one:
/users/pub00/web/html # mkdir typo3temp
/users/pub00/web/html # chown vmamp:nobody typo3temp/
/users/pub00/web/html # chmod 775 typo3temp/