Is it possible to set read-only for myself on unix? - bash

I've been given the path to a very large folder on a shared Unix server that I'm working on through ssh. I don't want to waste space by creating a duplicate in my home area, so I've linked the folder through ln -s. However, I don't want to risk making any changes to the data within the folder.
How would I go about setting the files to read-only for myself? Do I have to ask the owner of the folder/file? Do I need sudo access? I am not the owner of the file and I do not have root access.

Read about the chmod command to change the permission bits on the files the links point to. Note that only the owner of a file (or root) can change its permissions, so if you don't own the files you will have to ask.
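For example, if you did own the files, a one-liner sketch (/shared/data stands in for your real path):
chmod -R a-w /shared/data    # removes write permission for everyone, including the owner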
Alternatively, the shared folder could be mounted read-only, but that depends on how it is attached to your system.
UPDATE
The desired behaviour can be achieved using the mount tool (see the man page for mount).
Note that the filesystem mount options will remain the same as those on the original mount point, and cannot be changed by passing the -o option along with --bind/--rbind. The mount options can be changed by a separate remount command, for example:
mount --bind olddir newdir
mount -o remount,ro newdir
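You can confirm the remount took effect, e.g. with findmnt from util-linux:
findmnt -o TARGET,OPTIONS newdir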
Here is a similar question to yours, also solved via the mount tool. Keep in mind that mount normally requires root privileges, so this route, too, needs the administrator's help.

Related

Change the location of the .ssh directory on macOS

When generating ssh keys, a folder called .ssh is generated in my home directory.
Is there any way to move this folder to another location, such as .secure/ssh?
As an example, I can change the location of zsh history files by modifying the environmental variable $HISTFILE.
In terms of ssh, the only information I can find is that the directory is hard-coded to $HOME in which case there is nothing I can do. Is this indeed the case? If not, what is the proper environmental variable / method for macOS?
The default location of these various items is ~/.ssh. There is no option to change that directory as a whole, though options exist for the individual assets found in it. For example, -F selects a configuration file other than ~/.ssh/config, and -i selects an identity file other than ~/.ssh/id_rsa.
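As a sketch (the ~/.secure/ssh location is a hypothetical target of your own choosing), you could point each invocation at the moved files explicitly, or hide that behind an alias:
ssh -F ~/.secure/ssh/config -i ~/.secure/ssh/id_rsa user@example.com
alias ssh='ssh -F ~/.secure/ssh/config'
Since IdentityFile can be set inside the config file itself, the -F switch alone is often enough.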

how to manually empty moosefs trash folder to save space

I found that the MooseFS trash takes up too much of my disk space. According to the MooseFS documentation, deleted files are kept for a while in case the user wants them back. But how do I clean it up manually to save space?
In order to purge MooseFS' trash, you need to mount a special directory called "MooseFS Meta".
First, create a mount point for the MooseFS Meta directory:
mkdir /mnt/mfsmeta
and mount mfsmeta:
mfsmount -o mfsmeta /mnt/mfsmeta
If your Master Server Host Name differs from default mfsmaster and/or port differs from default 9421, use appropriate switch, e.g.:
mfsmount -H master.host.name -P PORT -o mfsmeta /mnt/mfsmeta
Then you can find your deleted files in the /mnt/mfsmeta/trash/SUBTRASH directories, where SUBTRASH is a directory inside /mnt/mfsmeta/trash named 000 through FFF. Subtrashes are helpful if you have many (e.g. millions of) files in trash, because you can easily operate on each one using Unix tools like find, whereas if you had all the files in one directory, such tools may fail.
If you do not have many files in trash, mount Meta with mfsflattrash parameter:
mfsmount -o mfsmeta,mfsflattrash /mnt/mfsmeta
or if you use Master Host Name or Port other than default:
mfsmount -H master.host.name -P PORT -o mfsmeta,mfsflattrash /mnt/mfsmeta
In this case your deleted files will be available directly in /mnt/mfsmeta/trash (without subtrash).
In both cases you can remove files by simply using rm file or undelete them by moving them to undel directory available in trash or subtrash (mv file undel).
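If you simply want to empty the trash wholesale, a minimal sketch using the flat-trash mount from above (the undel directory survives because find is restricted to plain files):
find /mnt/mfsmeta/trash -maxdepth 1 -type f -delete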
Remember that if you do not want certain files moved to trash at all, set their "trash time" (in seconds) to 0 prior to deletion. If you set a specific trash time for a directory, all files created in that directory inherit the trash time from the parent, e.g.:
mfssettrashtime 0 /mnt/mfs/directory
You can also set the trash time to another value, e.g. 1 hour:
mfssettrashtime 3600 /mnt/mfs/directory
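You can check what is currently set with the companion mfsgettrashtime tool:
mfsgettrashtime /mnt/mfs/directory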
For more information on specific parameters passed to mfsmount or mfssettrashtime, see man mfsmount and man mfstrashtime.
Hope it helps!
Peter

How to change the permission of a directory inside the .tar.gz file? [duplicate]

Is there a way to chmod 777 the contents of a tarfile upon creation (or shortly thereafter) before distributing? The write permissions of the directory being tar'd are unknown at the time of tarring (but typically 555). I would like the unrolled dir to be world-writable without the users who are unrolling the tar having to remember to chmod -R 777 <untarred dir> before proceeding.
The clumsy way would be to make a copy of the directory, and then chmod -R 777 <copydir> but I was wondering if there was a better solution.
I'm on a Solaris 10 machine.
BACKGROUND:
The root directory is in our ClearCase vob with specific file permissions, recursively. A tarfile is created and distributed to multiple "customers" within our org. Most only need the read/execute permissions (and specifically DON'T want them writable), but one group in particular needs their copy to be recursively writable since they may edit these files, or even restore back to a "fresh" copy (i.e., in their original state as I gave them).
This group is somewhat technically challenged. Even though they have instructions on the "how-to's" of the tarfile, they always seem to forget (or get wrong) the setting of the files to be recursively writable once untarred. This leads to phone calls to me to diagnose a variety of problems where the root cause is that they forgot to do (or did incorrectly) the chmod'ing of the unrolled directory.
And before you ask, yes, I wrote them a script to untar/chmod (specific just for them), but... oh never mind.
So, I figured I'd create a separate, recursively-writable version of the tar to distribute just to them. As I said originally, I could always create a copy of the dir, make the copy recursively writable and then tar up the copy, but the dir is fairly large, and disk space is sometimes near full (it can vary greatly), so making a copy of the dir will not be feasible 100% of the time.
With GNU tar, use the --mode option when creating the archive, e.g.:
tar cf archive.tar --mode='a+rwX' *
But note that when the archive is extracted, the umask is applied by default. So unless the user's umask is 000, the permissions will be altered at that point. However, the umask can be ignored by using the -p (--preserve-permissions) option, e.g.:
tar xfp archive.tar
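To double-check the modes actually stored in the archive, list it verbosely:
tar tvf archive.tar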
You can easily change the permissions on the files prior to your tar command, although I generally recommend people NEVER use 777 for anything except /tmp on a Unix system; it's more sensible to use 755, or at worst 775, for directories. That way you're not letting the world write to your directories, which is generally advisable.
Most Unix users don't like to set permissions recursively because it sets the execute bit on files that should not be executable (configuration files, for instance). To avoid this, chmod gained a symbolic mode some time ago: a capital X sets the execute bit only on directories and on files that are already executable. Reading the man page on chmod provides the details, but you could try this:
cd $targetdir; chmod -R u+rwX,a+rX .; tar zcvf $destTarFile .
Where your $targetdir is the directory you are tarring up and $destTarFile is the name of the tar file you're creating.
When you untar that tar file, tar attempts to retain the permissions. Certain rules govern that process, of course: the uid and gid of the owner are only retained if root is doing the untarring; otherwise they are set to the effective uid and gid of the current process.
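For example (a sketch; archive.tar as above):
tar xpf archive.tar         # as a normal user: files end up owned by your own uid/gid
sudo tar xpf archive.tar    # as root: GNU tar restores the archived uid/gid by default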

symbolic links work when shared to Windows or Linux (smb), but broken when shared to Mac (afp or smb)

On a Mac, I have a shared folder, ~/Documents. There are two subfolders, Data and Data_2011, the former containing folders of files from the last several years, and the latter containing symbolic links to the folders in Data that have been updated since Jan 1 2011. The links were created with the standard ln -s command.
When I mount the shared Documents folder on a Windows computer, the links work. When I mount on Linux using smb, the links work. When I use these links directly on the hosting Mac, they work. However, when I mount the Documents folder from a remote Mac, the soft links are broken. To be clear, I mount the Documents folder by going to Finder > Connect to Server > afp://xxx.xxx.xx.xx/ or smb://xxx.xxx.xx.xx/Documents
Any ideas for how to get these soft links to work when shared to a remote Mac?
-Sibo
Mac OS file sharing exposes symbolic links as actual symbolic links.
If I connect one Mac to another, using either AFP or SMB, I can confirm this.
Note that symbolic links are resolved by the client -- even in a non-file-sharing case this means relative paths in symbolic links can be tricky, and in this case involving network file sharing, it means the client computer needs to be able to see the target file (the target file must also be in a folder that's shared and mounted), and the path needs to be the same.
For example, if I create a text file named "foo" in my home directory, then do "ln -s foo symlink" to create a link to it named symlink, then mount that home directory from a second computer and do "ls -l" it's shown as "symlink# -> foo", and if I cat the file I can read it. But if I create the symlink as "ln -s /Users/matt/foo symlink", then on the second computer ls -l shows it as "symlink# -> /Users/matt/foo", and cat says "cat: symlink: No such file or directory". That's because on the second computer, /Users/matt is a local home directory that doesn't contain a file named foo (and if it did, anything resolving the symlink would see the local foo, not the foo shared from the first computer).
So basically: you can use "ls -l" to see where the symlink points, and note that the client computer will resolve the symlink and try to open whatever file has that name, which may or may not be what you expected.
(Probably the reason that your test worked from your Linux machine and not your Mac is that the Linux machine has more network shares mounted or with different names, such that the symlink target name was a valid filename on the Linux machine but not the Mac.)
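One practical consequence: links whose targets are written relative to the share tend to keep working on remote clients, while absolute links break. A sketch (Project2011 is a placeholder name):
cd ~/Documents/Data_2011
ln -s ../Data/Project2011 Project2011      # relative: resolves inside the mounted share
ln -s ~/Documents/Data/Project2011 broken  # absolute: points at the client's local path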

Rsync bash script and hard linking files

I am creating a bash script to backup my files with rsync.
Backups all come from a single directory.
I only want new or modified files to be backed up.
Currently, I am telling rsync to backup the dir, and to check the files compared to the last backup.
The way I am doing this is
THE_TIME=`date "+%Y-%m-%dT%H:%M:%S"`
# hard-link unchanged files against the previous backup
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup "/Backup/Backup-$THE_TIME"
# repoint the Current symlink at the backup just made
rm -f /Backup/Current
ln -s "/Backup/Backup-$THE_TIME" /Backup/Current
I am pretty sure I have the syntax correct for this. Each backup checks against the Current folder and uploads only what is necessary. The script then deletes the Current symlink and re-creates it pointing to the newest backup it just made.
I am getting an error when I run the script:
rsync: link "/Backup/Backup-2010-08-04-12:21:15/dgs1200series_manual_310.pdf"
=> /Backup/Current/dgs1200series_manual_310.pdf
failed: Operation not supported (45)
The host OS uses the HFS filesystem, which supports hard linking. I am trying to figure out whether something else is not supporting this, or whether I have a problem in my code.
Thanks for any help
Edit:
I am able to create a hard link on my local machine.
I am also able to create a hard link on the remote server (when logged in locally)
I am NOT able to create a hard link on the remote server when mounted via afp. Even if both files exist on the server.
I am guessing this is a limitation of afp.
Just in case your command line is only an example: Be sure to always specify the link-dest directory with an absolute pathname! That’s something which took me quite some time to figure out …
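For instance (placeholder paths; per the rsync man page, a relative --link-dest is resolved against the destination directory, not your working directory):
rsync -aP --link-dest=../Current /usr/home/user/backup /Backup/Backup-new        # fragile
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup /Backup/Backup-new  # robust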
Two things from the man page stand out that are worth checking:
If files aren't linking, double-check their attributes. Also check if some attributes are getting forced outside of rsync's control, such as a mount option that squishes root to a single user, or mounts a removable drive with generic ownership (such as OS X's "Ignore ownership on this volume" option).
and
Note that rsync versions prior to 2.6.1 had a bug that could prevent --link-dest from working properly for a non-super-user when -o was specified (or implied by -a). You can work around this bug by avoiding the -o option when sending to an old rsync.
Do you have the "ignore ownership" option turned on? What version of rsync do you have?
Also, have you tried manually creating a similar hardlink using ln at the command line?
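A quick manual test over the mounted volume would look something like this (placeholder paths):
ln /Volumes/Backup/some-existing-file /Volumes/Backup/test-hardlink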
I don't know if this is the same issue, but I know that rsync can't sync a file when the destination is a FAT32 partition and the filename has a ":" (colon) in it. [The source filesystem is ext3, and the destination is FAT32]
Try reconfiguring the date command so that it doesn't use a colon and see if that makes a difference.
e.g.
THE_TIME=`date "+%Y-%m-%dT%H_%M_%S"`
