I have set my local Laravel 5 storage folder to permissions 755 and owner www-data, since I use apache2 on my machine. However, I was getting blank screens instead of stack traces on errors, so I changed the permissions to 777, which resolved the issue.
However, I feel this is a terrible band-aid: it allows any user to modify this directory with full permissions, rather than giving limited permissions to the correct user. I don't know whether this issue will affect the development or production server, but granting those permissions in such a scenario is not an option.
How do I figure out which user (or group) actually needs permissions on this directory for Laravel logging, so I can assign the directory to them and set the permissions back to 755?
I have tried
ps aux | egrep '(apache|httpd)'
but it shows that most processes are being run as www-data...
You're on the right track with ps aux | egrep '(apache|httpd)'.
Processes
Apache/httpd is started as user root, but it then spawns processes to handle incoming requests as the user defined in its configuration. That default user is usually either www-data or apache.
Server OS
On CentOS/RedHat servers, you'll likely see processes being run as user/group apache (this is the default).
On Debian/Ubuntu, the default user set for the processes handling requests is www-data.
This all assumes Apache is using mod_php. If you are using php-fpm, the user running PHP may be configured separately (although it has the same defaults as Apache in my experience).
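If you would rather confirm the configured user than guess, you can ask Apache directly and, if relevant, check the php-fpm pool configuration. This is a sketch assuming Apache 2.4 and the Debian/Ubuntu file layout; the paths and the binary name (apachectl vs. apache2ctl) may differ on your system:
apachectl -S 2>&1 | grep -Ei '^(user|group)'                           # prints the User/Group Apache runs as
grep -REi '^\s*(user|group)\s*=' /etc/php/*/fpm/pool.d/ 2>/dev/null    # php-fpm pool user, if php-fpm is used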
Permissions for storage
As it sounds like you know, the storage directory needs to be writable by the user or group (depending on permissions) running those processes.
www-data?
It sounds like the result of ps aux | egrep '(apache|httpd)' was www-data, so it's likely, though not 100% definitive, that the directory needs to be writable by the user or group www-data: either by making www-data the owner and ensuring the owner has write permission, by granting write through the group permissions, or by making the directory world-writable.
A quick test
One easy way to tell is to delete the log file and view cache files from the storage directory, and then make that directory world-writable.
Make some requests in Laravel that would regenerate those files, and then see which user/group the new files are created with.
That user/group is the one the process running PHP is using.
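A rough sketch of that test, assuming a standard Laravel 5 layout and an app reachable on localhost (adjust the paths and URL to your setup):
cd /path/to/your/app                      # hypothetical project root
rm storage/logs/laravel.log               # remove the old log so a fresh one is created
chmod -R 777 storage                      # temporarily world-writable, just for the test
curl -s http://localhost/ > /dev/null     # make a request so Laravel writes its log/cache files again
ls -l storage/logs/                       # the owner/group shown here is the user running PHP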
Are the folders in storage set to 755 too?
If not, you should change the permissions recursively with chmod -R 755 storage. Just take care when you use chmod -R, because with the wrong path you could set the entire server to 755 by mistake.
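Once you know which user PHP runs as (say it turns out to be www-data), a common way to get back to sane permissions is to hand the storage directory to that user. A sketch of the two usual variants:
sudo chown -R www-data storage    # make the PHP user the owner, so 755 is enough for it to write
sudo chmod -R 755 storage
# or keep your own user as owner and grant write access through the group instead:
sudo chgrp -R www-data storage
sudo chmod -R 775 storage
Which variant fits best depends on whether your own account also needs to write to storage during development.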
Related
I have a post-commit hook in my Subversion repository that exports a copy of the repo to a desired location for deployment. That part works fine, but the files come in as apache:apache, and I need the ownership changed to prod_user:prod_user. If I add a chown statement to my script, it fails. If I try to use sudo, it asks for a password that I can't give, because this is happening in a post-commit script. I'd like this to be as automated as possible.
My question is: How can I make this work? I need to export the contents of my repo to the production folder and convert the users/groups to match existing production users/groups.
Is there a way to pass my password as an argument to a sudo command?
Thank you for your help!
Is there a way to pass my password as an argument to a sudo command?
Don't do it, if at all possible. This will leak your password to anyone that can read the script.
But if you can't avoid it, use echo <password> | sudo -S <command>. The -S flag makes sudo read the password from stdin, so you can supply it that way.
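In a post-commit hook that might look like the line below, using the chown from the question; the password and path are placeholders, and the warning above still applies, since anyone who can read the hook can read the password:
echo 'YourSudoPassword' | sudo -S chown -R prod_user:prod_user /var/www/production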
Don't do any of sudo, chown, chgrp. It is not the responsibility of the uploader to fix permissions on the remote server.
Have the server administrator set this up properly, so that pushing production files from the repository works straight away without messing with sudo permissions on the server.
If you are one and the same person, then take the time to fix the server side, so that a remote user never has to elevate its privileges (even temporarily with sudo) just for the sake of fixing the permissions of uploaded files.
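One way such a server-side setup could look, so that no chown is needed in the hook at all, is to rely on group ownership plus the setgid bit on the deployment directory. This is only a sketch with hypothetical paths and account names, and it gives new files the right group rather than the right owning user:
sudo chown -R prod_user:prod_user /var/www/production         # one-time fix of the existing tree
sudo chmod -R g+w /var/www/production                         # let the group write
sudo find /var/www/production -type d -exec chmod g+s {} +    # new files inherit the prod_user group
sudo usermod -aG prod_user apache                             # the account running the hook writes via that group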
Use crontab -e as the root user; then you can change ownership without escalation of privileges (see the sketch below).
Or run the job as prod_user and make it check out the code; then it is already the owner of the files.
Keeping a file with the last deployment timestamp can be used to compare against the HEAD timestamp, so the job only deploys when something has changed.
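For the crontab route, the root crontab entry might look like this (hypothetical path and interval; edit it with crontab -e as root):
# m h dom mon dow   command
*/5 * * * * chown -R prod_user:prod_user /var/www/production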
I am new to Hadoop and I have a problem.
The problem is that I want to let someone use the hdfs command, but I cannot give them the root password, so everything that needs "sudo" or "su hdfs" does not work. I have my reasons for not being able to give others root permission.
I have found a suggested solution:
Create a group, give that group HDFS permissions, and add a user to it, so that the user has HDFS permissions. I tried it but it failed.
So, I want to let a user run hdfs commands without using "sudo -su hdfs" or any command that needs sudo permission. Could you tell me how to set the related settings or files in more detail, or point me to any useful reference website? Thank you all!
I believe that just setting the permissions of /usr/bin/hdfs to '-rwxr-xr-x 1 root root' should let other accounts execute the hdfs command successfully. Have you tried it?
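To check and, if needed, set that (assuming the hdfs launcher lives at /usr/bin/hdfs on your distribution):
ls -l /usr/bin/hdfs            # should show -rwxr-xr-x ... root root
sudo chmod 755 /usr/bin/hdfs   # 755 is rwxr-xr-x, i.e. executable by everyone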
Rajest is right: 'hdfs command does not need sudo'. You are probably using 'sudo -su hdfs' because the command is accessing a path where only the user 'hdfs' has permissions; you must organize the data for your users.
A workaround (if you are not using Kerberos) for using hdfs as any user is to execute this line before working:
export HADOOP_USER_NAME=<HDFS_USER>
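For example, to act as the hdfs superuser for a single session on an unsecured (non-Kerberos) cluster, and to set up a home directory for a new user, the session might look like this (user names and paths are only illustrative):
export HADOOP_USER_NAME=hdfs
hdfs dfs -mkdir /user/alice               # runs with the permissions of the hdfs user
hdfs dfs -chown alice:alice /user/alice
unset HADOOP_USER_NAME                    # drop the override when done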
I'm attempting to configure an OS X Mavericks server running Apache and Lasso. For security and convenience I only want users belonging to a specific "web" group to be able to access the web root. I have succeeded in letting both the permitted regular users and Apache (_www) access the files, but I cannot for the life of me manage to set the correct permissions for Lasso. I'm hoping someone here can point me in the right direction.
Basically, what I have done is the following:
sudo dseditgroup -o create web
sudo dseditgroup -o edit -a _www -t user web
sudo dseditgroup -o edit -a _lasso -t user web
sudo chgrp -R web webroot
sudo chmod -R 770 webroot
This apparently works for Apache, but any lasso files merely output a Lasso permission error:
An unhandled failure during a web request
Error Code: 13
Error Msg: Permission denied - While opening //Library/Server/Web/Data/Sites/...
I have also tried adding the _www and _lasso groups to the web group, as well as creating a new Lasso instance in the instance manager with the effective group set to "web".
Strangely, setting permissions for the _lasso user or group directly on the files (i.e. not through the web group) seems to work, which makes me believe there's something wrong with how I'm creating my ACLs.
A little more info:
ls -l@e example.lasso
-rwxrwx---+ 1 danielpervan web 0 Feb 19 15:20 example.lasso
0: user:_spotlight inherited allow read,execute
I've encountered problems similar to this when I have ACLs above and beyond the standard Unix permissions. From your post, it looks like there are some ACLs on the example.lasso file. I would run the following script on your web root to remove all ACLs from every folder / file:
sudo chmod -R -N /path/to/webroot/
If that doesn't work, verify that the _lasso user is part of the web group:
dscl . -read /groups/web | grep GroupMembership
I've been looking for a way to force usermod to modify the password/group/... files even though the user is in use.
What I get now is this:
!! Failed to execute 'usermod --home '...' --password '...' --shell '/bin/false' 'zabbix' 2>&1':
usermod: user zabbix is currently used by process 518
I know that to be safe I need to restart the service. But this is done within a setup script, and I am restarting all services at the end.
Is there any way to say --force? (Well, except for modifying all the necessary files myself.)
Thanks
If you can get root rights via sudo and are confident enough to change system files using vi, then I would change the files manually.
Only a few things need to be changed in:
- /etc/passwd
here you can change the UID, GID, home directory, shell, ...
- /etc/group
here you might need to change the GID for the user's group as well, if it was changed
The file /etc/shadow will be updated automatically when you use passwd to set a new password, which you can do directly as root: "passwd username".
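For orientation, each /etc/passwd line has seven colon-separated fields: name:password:UID:GID:comment:home:shell. For the zabbix user from the question, a line might look like this (the UID/GID and paths are examples only):
zabbix:x:115:119:Zabbix monitoring:/var/lib/zabbix:/bin/false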
You can run usermod in a separate user namespace (with a recent enough Linux), but you need to map the root user to root (otherwise you won't have permission to modify /etc/passwd).
I.e. something like this:
unshare --user --map-root-user usermod ...
Now usermod won't find the processes running with the uid of user you are modifying.
You probably won't be able to modify the root user itself with this.
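Applied to the zabbix example from the question, the call might look like this (run as root; the home directory here is a placeholder, and --password would additionally take an encrypted hash if you need to change it):
unshare --user --map-root-user usermod --home /var/lib/zabbix --shell /bin/false zabbix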
Is there a way to log into an EC2 Ubuntu AMI, or a way to set up an Ubuntu AMI, so that non-root users can log in? I tried creating a user and logging in with the associated password. I also tried using the private key: I copied the authorized_keys file into the .ssh directory of the non-root user's home directory and tried to log in to the box with that user account. Neither method worked.
Thanks in advance.
So, this works, but the missing high-order bit of information here has to do with setting the right ownership and permissions on the authorized_keys file in the user's home directory. So I copied /root/.ssh over to /home/user and fixed the ownership:
cp -r /root/.ssh /home/user
chown -R user /home/user/.ssh
This allowed me to use the keypair.pem file to log in.
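Besides ownership, sshd is also picky about the mode bits on these files; a typical setup for the copied directory looks like this ("user" being the example account from above):
chmod 700 /home/user/.ssh
chmod 600 /home/user/.ssh/authorized_keys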
Make sure you are sending your AWS keypair as the identity file, i.e.
ssh -i ~/.ssh/keypair.pem user@ec2-174-129-xxx-xx.compute-1.amazonaws.com
Also check that port 22 (SSH) is open in your security group.
Assuming you would like to have users log in with a password so they need not supply a key every time, all you must do is turn on the ability to SSH in with a password. This option is turned off by default in all Linux AMIs.
Open the following file with root privileges, using vi, nano, pico, etc.:
sudo vi /etc/ssh/sshd_config
Change the following setting to yes:
PasswordAuthentication yes
Finally you must restart SSH (Since you are SSHed onto a remote machine, a simple reboot is fine.)
That's it! Of course, you must still add users with the adduser command and give them passwords with the passwd command before they can log in to your AMI. Check out this link for more info on the OpenSSH SSH client configuration files.
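Put together, the whole procedure might look like this on an Ubuntu instance (the user name is just an example; on older releases use service ssh restart instead of systemctl):
sudo vi /etc/ssh/sshd_config      # set: PasswordAuthentication yes
sudo systemctl restart ssh        # pick up the new setting
sudo adduser newuser              # create the account (prompts for a password)
sudo passwd newuser               # or just (re)set the password for an existing user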