How to manually empty the MooseFS trash folder to save space

I found that MooseFS trash takes up too much of my disk space. According to the MooseFS documentation, deleted files are kept for a while in case a user wants them back. How can I clean the trash up manually to save space?

In order to purge MooseFS' trash, you need to mount a special directory called "MooseFS Meta".
First, create a mount point for the MooseFS Meta directory:
mkdir /mnt/mfsmeta
and mount mfsmeta:
mfsmount -o mfsmeta /mnt/mfsmeta
If your Master Server host name differs from the default mfsmaster and/or the port differs from the default 9421, use the appropriate switches, e.g.:
mfsmount -H master.host.name -P PORT -o mfsmeta /mnt/mfsmeta
Then you can find your deleted files in the /mnt/mfsmeta/trash/SUBTRASH directories. A subtrash is a directory inside /mnt/mfsmeta/trash named 000..FFF. Subtrashes are helpful if you have many (e.g. millions of) files in trash, because you can easily operate on them using Unix tools like find, whereas if all the files were in one directory, such tools may fail.
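For example, here is a minimal sketch (paths taken from the mount point created above) that counts how many deleted files sit in each subtrash:
# count trash entries per subtrash under the meta mount created above
for d in /mnt/mfsmeta/trash/*/ ; do
    printf '%s\t%s\n' "$d" "$(find "$d" -type f | wc -l)"
done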
If you do not have many files in trash, mount Meta with the mfsflattrash parameter:
mfsmount -o mfsmeta,mfsflattrash /mnt/mfsmeta
or, if you use a Master host name or port other than the default:
mfsmount -H master.host.name -P PORT -o mfsmeta,mfsflattrash /mnt/mfsmeta
In this case your deleted files will be available directly in /mnt/mfsmeta/trash (without subtrash).
In both cases you can remove files simply with rm file, or undelete them by moving them to the undel directory available in trash or in each subtrash (mv file undel).
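If you want to purge everything in one go, here is a minimal sketch using find (assuming the mount points shown above; it works for both the subtrash and flat layouts, and the undel directories are skipped so pending undeletes are left alone):
# WARNING: this permanently deletes every file currently in MooseFS trash
find /mnt/mfsmeta/trash -type f -not -path '*/undel/*' -delete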
Remember that if you do not want certain files to be moved to trash at all, set their "trash time" (in seconds) to 0 prior to deletion. If you set a specific trash time for a directory, all files created in that directory inherit the trash time from the parent, e.g.:
mfssettrashtime 0 /mnt/mfs/directory
You can also set the trash time to another value, e.g. 1 hour:
mfssettrashtime 3600 /mnt/mfs/directory
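Note that the inheritance described above applies to files created after the setting. To change existing contents as well, the tools take a recursive switch, and mfsgettrashtime shows the current value (this is an assumption based on the standard MooseFS tool options; check man mfstrashtime for your version):
mfsgettrashtime /mnt/mfs/directory
mfssettrashtime -r 3600 /mnt/mfs/directory   # -r also applies the new trash time to existing files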
For more information on specific parameters passed to mfsmount or mfssettrashtime, see man mfsmount and man mfstrashtime.
Hope it helps!
Peter

Related

Is it possible to set read-only for myself on unix?

I have been given the path to a very large folder on a shared Unix server that I work on through ssh. I don't want to waste space by creating a duplicate in my home area, so I've linked the folder with ln -s. However, I don't want to risk making any changes to the data within the folder.
How would I go about setting the files to read-only for myself? Do I have to ask the owner of the folder/file? Do I need sudo access? I am not the owner of the file and I do not have root access.
Read about the chmod command to change the mode bits on the files the links point to. Only the owner or root can restrict access to files.
You could also mount that shared folder as read-only, but I am not sure how your folder is connected.
UPDATE
The desired behaviour can be achieved using the mount tool (see man mount).
Note that the filesystem mount options will remain the same as those on the original mount point, and cannot be changed by passing the -o option along with --bind/--rbind. The mount options can be changed by a separate remount command, for example:
mount --bind olddir newdir
mount -o remount,ro newdir
Here is a similar question to yours, also solved via the mount tool.
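Applied to your setup it could look like the sketch below (hypothetical paths, and it does require root or sudo, since ordinary users cannot create bind mounts):
# /data/shared is the big shared folder, ~/shared-ro a read-only view of it (both hypothetical)
mkdir -p ~/shared-ro
sudo mount --bind /data/shared ~/shared-ro
sudo mount -o remount,ro ~/shared-ro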

How to change the permissions of a directory inside a .tar.gz file? [duplicate]

Is there a way to chmod 777 the contents of a tarfile upon creation (or shortly thereafter) before distributing it? The write permissions of the directory that's being tar'd are unknown at the time of tar'ing (but typically 555). I would like the unrolled dir to be world-writable without the users who are unrolling the tar having to remember to chmod -R 777 <untarred dir> before proceeding.
The clumsy way would be to make a copy of the directory and then chmod -R 777 <copydir>, but I was wondering if there is a better solution.
I'm on a Solaris 10 machine.
BACKGROUND:
The root directory is in our ClearCase vob with specific file permissions, recursively. A tarfile is created and distributed to multiple "customers" within our org. Most only need the read/execute permissions (and specifically DON'T want them writable), but one group in particular needs their copy to be recursively writable since they may edit these files, or even restore back to a "fresh" copy (i.e., in their original state as I gave them).
This group is somewhat technically challenged. Even though they have instructions on the "how-to's" of the tarfile, they always seem to forget (or get wrong) the setting of the files to be recursively writable once untarred. This leads to phone calls to me to diagnose a variety of problems where the root cause is that they forgot to do (or did incorrectly) the chmod'ing of the unrolled directory.
And before you ask, yes, I wrote them a script to untar/chmod (specific just for them), but... oh never mind.
So, I figured I'd create a separate, recursively-writable version of the tar to distribute just to them. As I said originally, I could always create a copy of the dir, make the copy recursively writable and then tar up the copy dir, but the dir is fairly large and disk space is sometimes near full (it can vary greatly), so making a copy of the dir will not be feasible 100% of the time.
With GNU tar, use the --mode option when creating the archive, e.g.:
tar cf archive.tar --mode='a+rwX' *
But note that when the archive is extracted, the umask will be applied by default. So unless the user's umask is 000, the permissions will be altered at that point. However, the umask can be ignored by using the -p (--preserve-permissions) option, e.g.:
tar xfp archive.tar
You can easily change the permissions on the files prior to your tar command, although I generally recommend people NEVER use 777 for anything except /tmp on a unix system; it's more productive to use 755 or, worst case, 775 for directories. That way you're not letting the world write to your directories, which is generally advisable.
Most unix users don't like to set the permissions recursively because it sets the execute bit on files that should not be executable (configuration files, for instance). To avoid this, chmod gained a symbolic mode some time ago (the uppercase X only sets execute on directories and on files that are already executable). Reading the man page on chmod should provide details, but you could try this:
cd "$targetdir" && chmod -R u+rwX,a+rX . && tar zcvf "$destTarFile" .
Where your $targetdir is the directory you are tarring up and $destTarFile is the name of the tar file you're creating.
When you untar that tar file, tar attempts to retain the permissions. Certain rules govern that process, of course: the uid and gid of the owner will only be retained if root is doing the untarring; otherwise they are set to the effective uid and gid of the current process.
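As a quick illustration of that retention behaviour, here is a sketch reusing the archive.tar name from the first answer (assumes GNU tar):
mkdir -p /tmp/unrolled
tar xpf archive.tar -C /tmp/unrolled   # -p keeps the recorded mode bits instead of applying the umask
ls -lR /tmp/unrolled                   # extracted as a normal user, the files get that user's uid/gid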

rsync backup to external hard disk exFat fails

I tried to back up data from my MacBook to an external hard drive formatted with exFAT (because of Windows and Linux/Mac compatibility).
With Automator I want to create a little program to back up my data easily. It works fine on the local drive and from the local drive to an SD card, but it does not work from the local drive to an external hard drive. What's wrong?
SOURCE=/Users/username/Pictures/test
TARGET=/Volumes/Backup/
LOG=~/Documents/AutomatorLogs/BackupSync.log
rsync -aze "ssh" --delete --exclude=".*" "$SOURCE" "$TARGET" > "$LOG"
I got this Error:
rsync: recv_generator: mkdir "/Volumes/Backup/test" failed: Permission
denied (13)
I know this question is older, but I just ran into this and wanted to make sure this info was included. The OP's case is a little different, but I'm using a MacBook and hit the error I describe below, so I don't know how it worked for them even after changing the disk name.
rsync can't use -a when the target is an exFAT drive. It will make the directories and seem to be backing up, but no files are actually created. You need to use:
rsync -rltDv [SRC] [DESTINATION]
where:
-v, --verbose increase verbosity
-r, --recursive recurse into directories
-l, --links copy symlinks as symlinks
--devices preserve device files (super-user only)
--specials preserve special files
-D same as --devices --specials
-t, --times preserve times
The reason is that rsync doesn't handle permissions on exFAT. You will see an rsync error (in syslog, or if you control-C out):
chgrp [file] failed: Function not implemented (38)
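Put together with the variables from the question, the command might look like this (a sketch; --delete and the exclude pattern are carried over from the original script):
rsync -rltDv --delete --exclude=".*" "$SOURCE" "$TARGET" > "$LOG"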
It looks like the user that you're running the command as doesn't have permission to make a new directory in the /Volumes/Backup/ directory.
To solve this, you will probably need to change the permissions on that directory so that your script will be able to write to it and create the new directory it uses to make the backup.
Here are some links about permissions:
http://linuxcommand.org/lts0070.php
http://www.perlfect.com/articles/chmod.shtml
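A quick way to check whether the mount point itself is the problem (the path is the one from the error message):
ls -ld /Volumes/Backup                                                # owner and mode bits of the mount point
touch /Volumes/Backup/.write-test && rm /Volumes/Backup/.write-test  # fails with "Permission denied" if you can't write there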
I think I've got it: it is related to the name of the external hard disk.
With the name "Backup" for my external hard drive, it does not work.
If I change the name to anything else, it works.
(I tested some other exFAT-formatted external hard drives with other names and they worked. So I changed the name of this external drive to something else and now it works. Crazy...)

Rsync bash script and hard linking files

I am creating a bash script to backup my files with rsync.
Backups all come from a single directory.
I only want new or modified files to be backed up.
Currently, I am telling rsync to back up the dir, checking the files against the last backup.
The way I am doing this is
THE_TIME=`date "+%Y-%m-%dT%H:%M:%S"`
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup /Backup/Backup-$THE_TIME
rm -f /Backup/Current
ln -s /Backup/Backup-$THE_TIME /Backup/Current
I am pretty sure I have the syntax correct for this. Each backup will check against the "Current" folder and upload only as necessary. It will then remove the Current symlink and re-create it pointing to the newest backup it just made.
I am getting an error when I run the script:
rsync: link "/Backup/Backup-2010-08-04-12:21:15/dgs1200series_manual_310.pdf"
=> /Backup/Current/dgs1200series_manual_310.pdf
failed: Operation not supported (45)
The host OS is running HFS filesystem, which supports hard linking. I am trying to figure out if something else is not supporting this, or if I have a problem in my code.
Thanks for any help
Edit:
I am able to create a hard link on my local machine.
I am also able to create a hard link on the remote server (when logged in locally)
I am NOT able to create a hard link on the remote server when mounted via afp. Even if both files exist on the server.
I am guessing this is a limitation of afp.
Just in case your command line is only an example: Be sure to always specify the link-dest directory with an absolute pathname! That’s something which took me quite some time to figure out …
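The reason: a relative --link-dest path is resolved against the destination directory, not against the directory you run the script from, so the safe form spells it out in full (matching the script in the question):
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup "/Backup/Backup-$THE_TIME"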
Two things from the man page stand out that are worth checking:
If files aren't linking, double-check their attributes. Also check if some attributes are getting forced outside of rsync's control, such as a mount option that squishes root to a single user, or mounts a removable drive with generic ownership (such as OS X's "Ignore ownership on this volume" option).
and
Note that rsync versions prior to 2.6.1 had a bug that could prevent --link-dest from working properly for a non-super-user when -o was specified (or implied by -a). You can work-around this bug by avoiding the -o option when sending to an old rsync.
Do you have the "ignore ownership" option turned on? What version of rsync do you have?
Also, have you tried manually creating a similar hardlink using ln at the command line?
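For that manual test, something like this (using the file name from the error output above) should succeed on a filesystem and mount that allow hard links:
ln /Backup/Current/dgs1200series_manual_310.pdf /Backup/hardlink-test
ls -li /Backup/Current/dgs1200series_manual_310.pdf /Backup/hardlink-test   # matching inode numbers mean the link worked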
I don't know if this is the same issue, but I know that rsync can't sync a file when the destination is a FAT32 partition and the filename has a ":" (colon) in it. [The source filesystem is ext3, and the destination is FAT32]
Try reconfiguring the date command so that it doesn't use a colon and see if that makes a difference.
e.g.
THE_TIME=`date "+%Y-%m-%dT%H_%M_%S"`

RSync copies only folder directory structure not files

I am using RSync to copy tar balls to an external hard drive on a Windows XP machine.
My files are tar.gz files (perms 600) in a directory (perms 711).
However, when I do a dry run, only the folders are returned; the files are ignored.
I use RSync a lot, so I presume there is no issue with my installation.
I have tried changing the permissions of the files, but this makes no difference.
The owner of the files is root, which is also the user the script logs in as.
I am not using rsync's CVS option.
The command I am using is:
rsync^
-azvr^
--stats^
--progress^
-e 'ssh -p 222' root@servername:/home/directory/ ./
Is there something I am missing to get my files copied over?
I can think of only a single possibility: My experience with rsync is that it creates the directory structure before copying files in. Rsync may be terminating prematurely, but after this directory step has been completed.
Update0
You mentioned that you were running a dry run. Rsync by default only shows the directory names when the directory and all its contents are not present on the receiver.
After a lot of experimentation, I'm only able to reproduce the behaviour you describe if the directories on the source have later modification dates than on the receiver. In this instance, the modification times are adjusted on the receiver.
I had this problem too, and it turns out that when backing up to a Windows drive from Linux, rsync doesn't seem to move its temporary files into place after they are transferred.
Try adding the --inplace flag when rsyncing to Windows drives.
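Applied to the command from the question, that would be (a sketch):
rsync -azvr --inplace --stats --progress -e 'ssh -p 222' root@servername:/home/directory/ ./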
