I am the owner, as shown by ls -alts, but for some reason I can't change the file's permissions the way I want. I want to make the file read-only:
chmod 400 <file-name>
However, ls -al still shows -rwxrwxrwx.
The file is on an external drive. I know that this sometimes causes issues when users want to read and write. In this case, however, I'd like to make access to my files more restrictive, not less restrictive.
I checked out this SO question but I don't see an option to make the permissions more restrictive.
Thanks.
You can't change the permissions on the file because it's on a FAT32 volume, and that volume format does not support storing file permissions (see, for example, this askubuntu question). But if all you want to do is make the file read-only, you can get that effect by locking it (and the lock attribute is supported on FAT32). You can either use the Finder's Get Info window (check the "Locked" box), or use the command chflags uchg <file-name>.
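If you later need to make the file editable again, the lock flag can be cleared the same way; a quick sketch (the file name is a placeholder):

chflags uchg <file-name>    # lock the file so it can't be modified or deleted
chflags nouchg <file-name>  # clear the lock when you want to edit it again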
Related
How can I prevent .git/index from constantly changing its permissions and ownership?
I run ls -al .git/index and see that the file is owned by root.
I change the permissions with sudo chown -R $USER:$USER and sudo chmod -R 775 .git
I even tried deleting the lock file with rm -rf .git/index.lock
The permissions update, but a few minutes later they change back to being owned by root with mode 740, which breaks the git commands I'm attempting.
I set the global git config via Ansible so I'm wondering if that messed something up? Is there a global file I need to modify?
When Git writes the index, it creates a new file called .git/index.lock (with O_EXCL), adjusts its permissions according to core.sharedRepository, and then renames it over the top of the index. Git does not offer a way to rewrite the file in place.
If this file is being created such that it's being owned by root, then root is creating the file because it's updating the index. That probably means that some process owned by root is modifying the working tree.
If that wasn't your intention, then the best thing to do is find that process and stop it from modifying the working tree. It's not a good idea for multiple users to modify the same working tree, and if your process owned by root is reading files out of the working tree and it's shared with another user, that could lead to a security vulnerability.
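One way to start hunting for that process (a hedged suggestion, not something from the original answer; the repository path is a placeholder):

sudo crontab -l                      # any root cron jobs that might touch the repo?
sudo lsof +D /path/to/repo | head    # which processes currently have files open under it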
If you're certain what you're doing is safe and you want to modify the permissions with which files in the .git directory are created, you can use core.sharedRepository to set them. For example, you could use the value 0664. Note that Git will handle the executable bit automatically, and the index should not be marked executable.
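A minimal sketch of setting that option for a single repository (the 0664 value is just the example from above):

# Run inside the repository; this only affects files Git creates under .git
git config core.sharedRepository 0664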
If you want to always use the same group for your repository, you can set the setgid bit on all the directories in the repository and then set their group to the appropriate value. Assuming you also set core.sharedRepository to a value that makes things group writable, you can then modify the repository with any user in that group, and things should work. Note that this may still have security implications if one or more of those users are untrusted or have lower privileges, so you should be careful.
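A rough sketch of that group setup, assuming a shared group named devs (the group name is an assumption, not something from the question):

# Run from the top of the repository
chgrp -R devs .                       # hand the whole tree to the shared group
find . -type d -exec chmod g+s {} +   # setgid: new files inherit each directory's group
chmod -R g+w .                        # make existing files group-writable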
I'm trying to remove all files except read-only ones, but this command removes all of them anyway:
yes n | rm *
Did I do something wrong? If not, why doesn't it work?
For rm to automatically prompt before deleting unwritable files (the behaviour the -i option makes explicit), standard input has to be a terminal (as specified in the man pages).
So, for the command to work correctly the user has to specify the -i option manually:
yes n | rm -i *
After doing so the command works as expected.
On POSIX systems, a file's read-only state does not prevent it from being removed by rm.
You haven't said what your shell is, but perhaps you have an alias for rm that asks you for confirmation when the file is read-only, and that alias behaves differently when its stdin is part of a pipe.
The problem is that you only need write permission on the folder, not on the files, to remove them:
(From here)
Any attempt to access a file's data requires read permission. Any attempt to modify a file's data requires write permission. Any attempt to execute a file (a program or a script) requires execute permission.
In *nix systems directories are also files and thus use the same permission system as for regular files. Note permissions assigned to a directory are not inherited by the files within that directory.
Because directories are not used in the same way as regular files, the permissions work slightly (but only slightly) differently. An attempt to list the files in a directory requires read permission for the directory, but not on the files within. An attempt to add a file to a directory, delete a file from a directory, or to rename a file, all require write permission for the directory, but (perhaps surprisingly) not for the files within. Execute permission doesn't apply to directories (a directory can't also be a program). But that permission bit is reused for directories for other purposes.
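A quick demonstration of that last point (throwaway paths, just to illustrate):

mkdir demo && cd demo
touch protected.txt
chmod 444 protected.txt   # the file itself is read-only
rm -f protected.txt       # still succeeds: rm only needs write permission on demo/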
To find files with specific permissions you can use
find -perm <mode>
read more
To remove files found by find you can use
find . -perm 444 -exec /bin/rm {} \;
(it may be slightly different, depending on the files you're searching for and the system you're on)
more exec examples
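For the original goal of deleting only the files that are not read-only, one possible sketch (assumes GNU find for -maxdepth; adjust the permission test to taste):

# Remove regular files in the current directory that have the owner-write bit set,
# leaving read-only files untouched
find . -maxdepth 1 -type f -perm -u+w -exec rm -f {} +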
I need your help with an access issue with neofetch on macOS.
Here's the thing: I recently installed neofetch in my terminal (oh-my-zsh). It works, but between the first line (last login) and the logo, this displays:
mkdir: /Users/'MYUSERNAME'/.config/neofetch/: Permission denied
/usr/local/bin/Neofetch: line 4476:
/Users/'MYUSERNAME'/.config/neofetch/config.conf: Permission denied
And I don't know why. Of course, I did a lot of research on Google before asking you.
Do you have an idea?
You need to change the permissions for your config directory:
sudo chmod -R 666 /Users/YOURUSERNAME/.config
666 means Read-Write for all users.
Doing the same as garritfra did, but on the full config path from the last line of the error, worked for me on a Windows 10 machine. It may work on the Mac as well?
sudo chmod -R 666 /Users/MYUSERNAME/.config/neofetch/config.conf
Replace MYUSERNAME with whatever is shown in the error.
I was having the same issue and was able to solve this in the following way:
Open up Finder
Reveal hidden folders & files by pressing Cmd+Shift+. (period)
Locate the .config folder, right-click it, and click 'Get Info'.
Under the Sharing & Permissions section, click the small plus, add the entire Administrators group, and remember to change the permissions to Read & Write for the whole group.
neofetch
Here is a bulletproof one-liner that solves the issue:
sudo chmod -R 710 $HOME/.config
Execute this command in a terminal session.
After restarting your terminal or, alternatively, sourcing your shell configuration file (assuming you have added the neofetch command to that file) with:
source ~/.zshrc
(replacing ~/.zshrc with the path to your shell configuration file if you are using a different one), the error message should disappear.
Note that this only gives 'execute' permission to the 'group' class. There is no need, as the currently accepted answer suggests, to use 666 or 777 modes, as that needlessly makes your system less secure (not to mention that even-numbered modes such as 666 don't work at all here, since they fail to grant the required 'execute' permission, which needs the odd bit set).
Modes such as 730, 750, and 770 will also work, but unless a future neofetch update demands it, they are unnecessarily generous and I wouldn't advise them.
Finally, there is absolutely no reason to give users in the 'other' class any permission on the ~/.config directory (unless you have a very compelling reason to), and hence the last octal digit of the mode should always remain 0.
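For readers who prefer symbolic modes, a hedged equivalent of the 710 command above (same effect, just spelled out):

# Owner: read/write/execute; group: execute (search) only; others: nothing
sudo chmod -R u=rwx,g=x,o= $HOME/.config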
Is there a way to chmod 777 the contents of a tarfile upon creation (or shortly thereafter) before distributing? The write permissions of the directory that's being tar'd are unknown at the time of tar'ing (but typically 555). I would like the unrolled dir to be world-writable without the users who are unrolling the tar having to remember to chmod -R 777 <untarred dir> before proceeding.
The clumsy way would be to make a copy of the directory, and then chmod -R 777 <copydir> but I was wondering if there was a better solution.
I'm on a Solaris 10 machine.
BACKGROUND:
The root directory is in our ClearCase vob with specific file permissions, recursively. A tarfile is created and distributed to multiple "customers" within our org. Most only need the read/execute permissions (and specifically DON'T want them writable), but one group in particular needs their copy to be recursively writable since they may edit these files, or even restore back to a "fresh" copy (i.e., in their original state as I gave them).
This group is somewhat technically challenged. Even though they have instructions on the "how-to's" of the tarfile, they always seem to forget (or get wrong) the setting of the files to be recursively writable once untarred. This leads to phone calls to me to diagnose a variety of problems where the root cause is that they forgot to do (or did incorrectly) the chmod'ing of the unrolled directory.
And before you ask, yes, I wrote them a script to untar/chmod (specific just for them), but... oh never mind.
So, I figured I'd create a separate, recursively-writable version of the tar to distribute just to them. As I said originally, I could always create a copy of the dir, make the copy recursively writable, and then tar up the copy, but the dir is fairly large and disk space is sometimes near full (it can vary greatly), so making a copy of the dir will not be feasible 100% of the time.
With GNU tar, use the --mode option when creating the archive, e.g.:
tar cf archive.tar --mode='a+rwX' *
But note that when the archive is extracted, the umask will be applied by default. So unless the user's umask is 000, the permissions will be modified at that point. However, the umask can be ignored by using the -p (--preserve-permissions) option, e.g.:
tar xfp archive.tar
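If you want to sanity-check what actually got stored before distributing, a quick look at the archive listing shows the permission bits:

tar tvf archive.tar | head    # the first column shows the stored modes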
You can easily change the permissions on the files prior to your tar command, although I generally recommend people NEVER use 777 for anything except /tmp on a unix system; it's more productive to use 755, or at worst 775, for directories. That way you're not letting the world write to your directories, which is generally advisable.
Most unix users don't like to set permissions recursively because it sets the execute bit on files that should not be executable (configuration files, for instance). To avoid this, chmod gained a symbolic mode some time ago. Reading the man page on chmod should provide details, but you could try this:
cd $targetdir; chmod -R u+rwX,a+rX .; tar zcvf $destTarFile .
Where your $targetdir is the directory you are tarring up and $destTarFile is the name of the tar file you're creating.
When you untar that tar file, tar attempts to retain the permissions. Certain rules govern that process, of course - the uid and gid of the owner will only be retained if root is doing the untarring; otherwise, they are set to the effective uid and gid of the current process.
I want to index some files and keep a registry as part of a utility I am writing in BASH. So, my tool would go to a massive directory and write some essential information about each file it finds there to another file that I would like to call "myregistry." The file would only be rewritten if the user asks it to - since going through a large file structure and "indexing" it this way would take considerable time.
I want this file to not show up when the user does ls in the directory where it is contained. In addition, I want the user to have no privileges with it at all - the user should not be able to open it up on vim or anything, not even to just look at it.
However, if the user executes my script again, I want the user to have the option of getting some information out of the file from there. I also want the script to have the permissions to look at the file and add or delete things from it, if the user prompts it to. But the user should not be able to do anything to it directly.
How can I do this? It would require using chmod but I have no idea how to put it together.
I'm thinking:
# Enable write permission
# Do Something - ensure that no one else is writing to this file
# Disable write permission
On Unix, you're more or less on an equal footing with other processes that run under the same user. Whatever you can do, they can do. If you can hide and unhide something, so can they. Interpreted scripts need read permission to run, so it's not like you can hide any secrets in your executable. If, however, you can distribute your software as a binary, it can run without being readable. Then you can hardcode a secret into the binary and use it to encrypt and decrypt files. Users will be able to run your binary, but only the superuser will be able to get the secret and decrypt your registry. That's real security against regular (non-root) users, especially if you manage to create and embed the secret at installation time.
Playing with dotfiles and permissions won't fool any advanced user.
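As a rough sketch of the encrypt/decrypt idea using the openssl command line (the secret, file names, and cipher choice here are illustrative assumptions; in the scheme described above the secret would live inside a non-readable binary rather than a shell variable):

SECRET='replace-with-your-embedded-secret'

# Encrypt the registry so it is useless to anyone who finds the file
openssl enc -aes-256-cbc -pbkdf2 -pass pass:"$SECRET" -in myregistry -out myregistry.enc

# Decrypt it when your tool needs to read it back
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:"$SECRET" -in myregistry.enc -out myregistry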
Would something like this work? write_index and read_index are yours to write.
cd massive_dir
TFILE=$(mktemp --tmpdir=.)
write_index >$TFILE
mv -f $TFILE .index
chmod a-rw .index
To read
chmod +r .index
read_index .index
chmod -r .index
Note that no locking is needed because of the temp file. mv is atomic.