I am using the default Laravel folder structure and the public filesystem disk:
'public' => [
    'driver' => 'local',
    'root' => storage_path('app/public'),
    'url' => env('APP_URL') . '/storage',
    'visibility' => 'public',
],
Everything runs on Docker, and the complete Laravel folder is mounted into /var/www/html:
#... Laravel PHP-FPM service definition
volumes:
  - "./:/var/www/html"
#...
When I run php artisan storage:link and then cd /var/www/html/public and ls -la I see that the symlink exists:
lrwxr-xr-x 1 root root 32 May 5 11:19 storage -> /var/www/html/storage/app/public
If I then check whether the linked folder also exists in the container with cd /var/www/html/storage/app/public, everything is there as expected.
Checking ls -la /var/www/html/public/storage/ directly also shows the content of the linked folder, so everything is working from the symlink perspective:
# ls -la /var/www/html/public/storage/
total 512
drwxr-xr-x 4 www-data www-data 128 May 5 11:21 .
drwxr-xr-x 11 www-data www-data 352 Apr 19 14:03 ..
-rw-r--r-- 1 www-data www-data 14 Feb 16 16:58 .gitignore
-rw-r--r-- 1 www-data www-data 519648 May 5 10:58 sample.png
However, when opening the url /storage/sample.png the server returns 404.
The remaining contents of /var/www/html/public do work fine, so for example /var/www/html/public/test.png is visible under localhost/test.png.
On production, everything works fine though. Any ideas why the symbolic link is not working on the local system while the link is actually correct?
I already tried removing the storage link and setting the symlink again.
You mentioned that you are using Docker. The reason it works in production but not locally could be a configuration difference between the two deployments.
For this to work, the Nginx container must have access to the storage folder, since it is supposed to serve the assets located there. My guess is that this is currently not the case, and only the public folder is mounted.
Check whether the entire project, or at least both ./public and ./storage, are mounted into the Nginx container.
Something like this:
services:
  #...
  nginx:
    #...
    volumes:
      - "./public:/var/www/html/public:ro"
      - "./storage/app:/var/www/html/storage/app:ro"
    #...
Then add these lines to your Nginx configuration:
location ^~ /storage/ {
    # Use alias, not root: with "root" Nginx appends the full URI, so a
    # request for /storage/sample.png would be looked up as
    # /var/www/html/public/storage/storage/sample.png.
    alias /var/www/html/storage/app/public/;
}
Every request whose path starts with /storage/ will then be served directly from the shared path.
It's important to make sure your symlink is created from inside the Docker container.
If you run storage:link from outside the container, the symlink will record a path from your local machine, which may not exist inside the container.
Get into the container by running:
docker exec -it name_of_your_php_container bash
Once you are inside, you should be able to run php artisan storage:link as usual.
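As a quick illustration of why this matters (using throwaway paths under /tmp, not your real project), a symlink stores its target path verbatim, so it only resolves if that exact path exists wherever the link is read, for example inside the container:

```shell
# Create a target directory and a symlink to it (throwaway /tmp paths).
mkdir -p /tmp/demo_host/storage/app/public
ln -sfn /tmp/demo_host/storage/app/public /tmp/demo_host/public_storage

# The link records the literal target path. If that path does not exist
# in another filesystem namespace (e.g. a container with different
# mounts), the link dangles there even though it looks valid on the host.
readlink /tmp/demo_host/public_storage
```

This is why a link created on the host can appear correct yet be broken inside the container, and vice versa.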
I have a NuxtJS project that I want to run in a docker container.
Everything works as expected; the only issue is that, since I am working on Windows, the yarn command is extremely slow.
I tried to fix that by excluding the node_modules and .nuxt directories from the mounted volumes:
volumes:
  - .:/app
  - /app/node_modules
  - /app/.nuxt
Now here is the problem. The initialisation process is very fast up until it throws an error:
error An unexpected error occurred: "EACCES: permission denied, mkdir '/app/node_modules/#babel'".
I've done this in multiple projects and never had an issue.
Inside the container the folder permissions are as follows:
drwxr-xr-x 2 root root 4096 Apr 11 15:15 node_modules
Changing writing permissions inside the container obviously does not work:
chmod: changing permissions of 'node_modules/': Operation not permitted
When I changed the docker-compose file, I recreated the containers, so there is no issue there.
Any idea what could be wrong?
I have a custom Tomcat Docker image where we add the setenv.sh while building the image.
After recently updating setenv.sh and redeploying, the change is not visible in AWS, but if I docker pull the built image locally, the change is present.
Is there any reason why this could be happening?
FROM tomcat:7.0-jdk8
...
# Copy Application
COPY ./platform-war/ ./platform-war/
COPY ./scripts/ ./scripts/
...
# Tomcat additional env
RUN /bin/bash -c "ln -s $BASE_DIR/scripts/setenv.sh /usr/local/tomcat/bin/setenv.sh"
# Make scripts executable
RUN /bin/bash -c "chmod -R +x $BASE_DIR/scripts"
And inside the image in AWS, the link is working as expected:
-rwxr-xr-x 1 root root 1975 Jul 2 2021 digest.sh
-rwxr-xr-x 1 root root 3718 Jul 2 2021 setclasspath.sh
lrwxrwxrwx 1 root root 32 Feb 3 23:15 setenv.sh -> /opt/base/scripts/setenv.sh
-rwxr-xr-x 1 root root 1912 Jul 2 2021 shutdown.sh
Yet the content of setenv.sh is not the expected one; it is an old version.
The image otherwise looks updated, because recent code changes are present.
I'm trying to run a simple Laravel command line on my Elastic Beanstalk instance: php artisan queue:work
But I keep getting the following error:
In StreamHandler.php line 107:
The stream or file "/var/app/current/storage/logs/laravel.log" could
not be opened: failed to open stream: Permission denied
I've tried every solution I can find on SO (except the chmod -R 777 advice that seems to trail this question everywhere).
I've tried deleting the existing laravel.log and then using touch to create a new one, and then making sure webapp is the owner.
I've also tried:
sudo chmod -R 755 /var/app/current/storage/
sudo chown -R webapp /var/app/current/storage/
When I list the logs directory, everything looks as I think it should:
-rwxr-xr-x 1 webapp webapp 0 Apr 4 14:38 laravel.log
The storage directory also looks fine:
drwxr-xr-x 6 webapp webapp 4096 Apr 3 19:33 storage
But I'm still getting the above error! Can anyone explain why (not just give a solution)?
Thank you
So the simple answer is that I was running the command as ec2-user. As a solution, I could either:
- Change the ownership of laravel.log to ec2-user
- Run the command as the owner (e.g. sudo -u webapp php artisan queue:work)
- Switch to root with sudo su to see how it would run during deployment (i.e. as root)
Nothing was especially wrong.
When you log into an Elastic Beanstalk instance via SSH, you're logged in as ec2-user.
I don't believe ec2-user is part of the webapp group, which is what actually executes PHP and Apache/Nginx.
Try adding your ec2-user to the webapp group by creating an ebextension in the root of your Laravel project under .ebextensions/ec2user.config
users:
  ec2-user:
    groups:
      - webapp
Prove this is the problem by temporarily turning off SELinux with the command
sudo setenforce 0
This should allow writing, but you've turned off added security server-wide. That's bad, so turn SELinux back on:
sudo setenforce 1
Then finally use SELinux to allow writing to the directory with this command:
sudo chcon -R -t httpd_sys_rw_content_t storage
And you're off!
When I run my application from a terminal with sudo -u www-data ./scarga and open the browser, the template file is served fine and everything is OK. The command is executed from the /var/www/html/scarga.local/ directory.
When I run my application with sudo service scarga start, it says: open ./resources/views/index.html: no such file or directory
File with HTTP handler: https://pastebin.com/MU7YDAWV
The scarga.service file: https://pastebin.com/eBL3jJFx
Tree of project: https://pastebin.com/rFVa8A3P
Permissions of the index.html file:
-rwxr-xr-x 1 www-data www-data 3586 Jun 28 14:27 index.html
Why does this happen and how can I solve it?
Thank you!
You need to set the correct working directory in your service unit using WorkingDirectory= - presumably:
WorkingDirectory=/var/www/html/scarga.local
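A minimal sketch of how that could look in the unit's [Service] section (the path is taken from the question; the ExecStart and User values are assumptions, since the actual unit file is only linked):

```ini
[Service]
# Relative paths such as ./resources/views/index.html are resolved
# against this directory, not against wherever systemctl was invoked.
WorkingDirectory=/var/www/html/scarga.local
ExecStart=/var/www/html/scarga.local/scarga
User=www-data
```

After editing the unit file, run sudo systemctl daemon-reload and then restart the service for the change to take effect.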
Using Rails 3.1 with the paperclip gem on nginx I get the following 500 error when attempting to upload images:
Permission denied - /webapps/my_rails_app/public/system
Following the guidance offered elsewhere on this issue, I have modified my directory permissions such that
drwxr-xr-x 3 www-data www-data 4096 Mar 10 17:57 system
And, it would appear, this permission structure is maintained for all public sub-directories.
Yet, having restarted nginx, the error persists.
Do I need to modify nginx.conf or take another hack at permissioning the affected directory?
When I've run into this in the past, it's generally because the Rails application was started by a user or process that is not part of the www-data group and is not the www-data user. I'd first confirm that www-data is in fact running your app.
This can be done using ps awwwx and perhaps some grep logic to find your app in the process stack.
I might also suggest changing the directory permissions so that any member of the www-data group can write to the directory as well.
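For illustration, with a throwaway directory under /tmp standing in for public/system (adjust the path to your app, and in a real setup also chown it to www-data, which needs root and is not shown here), making the directory group-writable looks like:

```shell
# Throwaway stand-in for the Rails public/system uploads directory.
mkdir -p /tmp/demo_app/public/system

# rwxrwxr-x: owner and group members can write, others can only
# read and traverse.
chmod 775 /tmp/demo_app/public/system

# Show the resulting octal mode (GNU stat syntax).
stat -c '%a' /tmp/demo_app/public/system
```

With mode 775 and group ownership set to www-data, any process running as the www-data user or in the www-data group can create files there, which is what Paperclip needs.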