I have the following rsync script which I created to do incremental backups:
rsync -arv --exclude-from '/usr/bin/exclude-list.txt' --delete /Volumes/DOCS/ /Volumes/BKUP1/DOCS/ \
  && rsync -arv --delete /Volumes/Webserver/ /Volumes/BKUP1/Webserver/
My exclude list is:
/Volumes/Webserver/.Spotlight-V100
/Volumes/Webserver/.Trashes
/Volumes/Webserver/.fseventsd
Every time I run this backup, it seems to go through and copy all the files again, despite the fact that rsync is supposed to be an incremental backup solution.
E.g., first run:
....
sites/website/sites/all/libraries/tinymce/jscripts/tiny_mce/plugins/style/js/.svn/prop-base/
sites/website/sites/all/libraries/tinymce/jscripts/tiny_mce/plugins/style/js/.svn/props/
sites/website/sites/all/libraries/tinymce/jscripts/tiny_mce/plugins/style/js/.svn/text-base/
....
Second run:
....
sites/website/sites/all/libraries/tinymce/jscripts/tiny_mce/plugins/style/js/.svn/prop-base/
sites/website/sites/all/libraries/tinymce/jscripts/tiny_mce/plugins/style/js/.svn/props/
sites/website/sites/all/libraries/tinymce/jscripts/tiny_mce/plugins/style/js/.svn/text-base/
....
etc...
The same files are copied across again. Also, I am constantly encountering the following "Permission denied" errors, despite the fact that these paths are ignored in my exclude-from argument:
building file list ... rsync: opendir "/Volumes/Webserver/.Spotlight-V100" failed: Permission denied (13)
rsync: opendir "/Volumes/Webserver/.Trashes" failed: Permission denied (13)
rsync: opendir "/Volumes/Webserver/.fseventsd" failed: Permission denied (13)
Any ideas? I am hoping I can tweak this script so it will only copy across modified/new files, and show me which files these are in the verbose output.
Many thanks in advance.
I ran into this myself. The best explanation I could come up with, as silly as it sounds, is that the files' timestamps aren't being preserved. Then, when you run it again, rsync thinks "Hey! These timestamps don't match - better sync 'em!" If you use the -t option, the timestamps will be sent along, and the files will then be seen as the same.
Or you can use the --size-only option, which does what it sounds like, if you're sure there are no files you've modified that are still the same size.
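As a sketch against the first command in the question, the size-only variant would be:
rsync -arv --size-only --exclude-from '/usr/bin/exclude-list.txt' --delete /Volumes/DOCS/ /Volumes/BKUP1/DOCS/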
Are you copying to a FAT32 drive from a differently formatted drive? My understanding is that FAT32 keeps a 16-bit timestamp, which only permits a resolution of about two seconds, far less precise than other filesystems. By default, rsync requires timestamps to match exactly, so virtually every file fails this test and is recopied.
To fix this, you need to tell rsync to accept files that are timestamped within +/-1 second (a 2-second total window) of the source. You do this by adding
--modify-window=1
to the rsync command.
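Applied to the first command in the question, that would look like this (a sketch):
rsync -arv --modify-window=1 --exclude-from '/usr/bin/exclude-list.txt' --delete /Volumes/DOCS/ /Volumes/BKUP1/DOCS/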
I'm trying to set up rsync to run backups from my Unraid shares to external drives mounted to the Unraid server.
I have different folders in different shares that I want backed up to different folders on different external drives.
E.g.:
/share1/folder1/ should be synced to /externalDisk1/folder1/
/share1/folder2/ should be synced to /externalDisk2/folder1/
etc.
I have eight of these jobs altogether.
The idea is to run these as a cron job every night, and so I wrote them all down in a bash script.
Running the rsync lines one by one, or even as "job1; job2..." on the command line, works great; however, calling them from "backup.sh" returns errors.
The script (with only one rsync line as an example):
#!/bin/sh
rsync -rtvh --progress /mnt/user/share1/folder1/ /mnt/disks/externalDrive1/Folder1/
returns
sending incremental file list
rsync: [Receiver] mkdir "/mnt/disks/externalDisk1/Folder1/\#015" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(784) [Receiver=3.2.3]
rsync: [sender] write error: Broken pipe (32)
rsync error: error in file IO (code 11) at io.c(823) [sender=3.2.3]
Any suggestions on what's going on and how to fix this?
Running the script through the CA User Scripts plug-in makes it run perfectly. It's a work-around, if nothing else :)
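For what it's worth, the "\#015" in the mkdir error is the octal escape for a carriage return, which suggests backup.sh was saved with Windows (CRLF) line endings. A sketch of how to check and strip them, assuming the standard od and tr tools are available:
# Inspect the script's bytes; CRLF endings show up as \r \n pairs.
od -c backup.sh | head
# Rewrite the script with Unix (LF-only) line endings.
tr -d '\r' < backup.sh > backup_fixed.sh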
I tried to back up data from my MacBook to an external hard drive formatted with exFAT (because of the Windows and Linux/Mac compatibility).
With Automator I created a little program to back up my data easily. It works fine on the local drive, and from the local drive to an SD card, but it does not work from the local drive to an external hard drive. What's wrong?
SOURCE=/Users/username/Pictures/test
TARGET=/Volumes/Backup/
LOG=~/Documents/AutomatorLogs/BackupSync.log
rsync -aze "ssh" --delete --exclude=".*" "$SOURCE" "$TARGET" > "$LOG"
I got this error:
rsync: recv_generator: mkdir "/Volumes/Backup/test" failed: Permission denied (13)
I know this is older, but I just ran into this myself and wanted to make sure this info was included. I know the OP's setup is a little different, but I'm using a MacBook and ran into the error I describe below, so I don't know how it worked for the OP even after changing the disk name.
rsync can't use -a when the target is an exFAT drive: it will make the directories and seem to be backing up, but no files are actually created. You need to use:
rsync -rltDv [SRC] [DESTINATION]
where:
-v, --verbose increase verbosity
-r, --recursive recurse into directories
-l, --links copy symlinks as symlinks
--devices preserve device files (super-user only)
--specials preserve special files
-D same as --devices --specials
-t, --times preserve times
The reason is that rsync can't handle permissions on exFAT. You will see an rsync error (in syslog, or if you Ctrl-C out):
chgrp [file] failed: Function not implemented (38)
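Applied to the script above, a minimal sketch might be (assuming the same SOURCE, TARGET, and LOG variables; the -e "ssh" option adds nothing for a purely local copy):
rsync -rltDv --delete --exclude=".*" "$SOURCE" "$TARGET" > "$LOG"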
It looks like the user that you're running the command as doesn't have permission to make a new directory in the /Volumes/Backup/ directory.
To solve this, you will probably need to change the permissions on that directory so that your script will be able to write to it and create the new directory it uses to make the backup.
Here are some links about permissions:
http://linuxcommand.org/lts0070.php
http://www.perlfect.com/articles/chmod.shtml
I think I've got it:
It is related to the name of the external hard disk.
With the name "Backup" for my external hard drive, it does not work.
If I change the name to anything else, it works.
(I tested some other exFAT-formatted external hard drives with other names and they worked. So I changed the name of this external drive to something else and now it works. Crazy...)
I take delivery of files from multiple places as part of a publishing aggregation service. I need a way to move files that have been delivered to me from one location to another without losing the directory listings for sorting purposes.
Example:
Filepath of delivery: Server/Vendor/To_Company/Customer_Name/**
Filepath of processing: ~/Desktop/MM-DD-YYYY/Returned_Files/Customer_Name/**
I know I can move all of the directories by doing something such as:
find Server/Vendor/To_Company/* -exec mv -n {} ~/Desktop/MM-DD-YYYY/Returned_Files \;
but using that I can only run the script one time per day and there are times when I might need to run it multiple times.
It seems like ideally I should be able to create a copycat directory in my daily processing folder and then move the files from one to the other.
You can use the rsync command with the --remove-source-files option. You can run it as many times as needed.
# Trial run, without making any actual transfer:
rsync --dry-run -rv --remove-source-files Server/Vendor/To_Company/ ~/Desktop/MM-DD-YYYY/Returned_Files/
# The actual transfer:
rsync -rv --remove-source-files Server/Vendor/To_Company/ ~/Desktop/MM-DD-YYYY/Returned_Files/
reference:
http://www.cyberciti.biz/faq/linux-unix-bsd-appleosx-rsync-delete-file-after-transfer/
You could use rsync to do this for you:
rsync -a --remove-source-files Server/Vendor/To_Company/Customer_Name ~/Desktop/$(date +"%m-%d-%Y")/Returned_Files/
Add -n to do a dry run to make sure it does what you want.
From the manual page:
--remove-source-files
This tells rsync to remove from the sending side the files (meaning non-directories) that are a part of the transfer and have been successfully duplicated on the receiving side.
Note that you should only use this option on source files that are quiescent. If you are using this to move files that show up in a particular directory over to another host, make sure that the finished files get renamed into the source directory, not directly written into it, so that rsync can't possibly transfer a file that is not yet fully written. If you can't first write the files into a different directory, you should use a naming idiom that lets rsync avoid transferring files that are not yet finished (e.g. name the file "foo.new" when it is written, rename it to "foo" when it is done, and then use the option --exclude='*.new' for the rsync transfer).
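A minimal sketch of that naming idiom (generate_report stands in for whatever actually produces the file):
# Writer side: write to a temporary name, then rename when complete.
generate_report > Server/Vendor/To_Company/Customer_Name/report.pdf.new
mv Server/Vendor/To_Company/Customer_Name/report.pdf.new Server/Vendor/To_Company/Customer_Name/report.pdf
# Mover side: skip anything that is still being written.
rsync -rv --remove-source-files --exclude='*.new' Server/Vendor/To_Company/ ~/Desktop/MM-DD-YYYY/Returned_Files/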
I am creating a bash script to backup my files with rsync.
Backups all come from a single directory.
I only want new or modified files to be backed up.
Currently, I am telling rsync to back up the dir, comparing the files against the last backup.
The way I am doing this is
THE_TIME=`date "+%Y-%m-%dT%H:%M:%S"`
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup /Backup/Backup-$THE_TIME
rm -f /Backup/Current
ln -s /Backup/Backup-$THE_TIME /Backup/Current
I am pretty sure I have the syntax correct for this. Each backup will check against the "Current" folder and copy only as necessary. It will then remove the Current symlink and re-create it, pointing at the newest backup it just made.
I am getting an error when I run the script:
rsync: link "/Backup/Backup-2010-08-04-12:21:15/dgs1200series_manual_310.pdf" => /Backup/Current/dgs1200series_manual_310.pdf failed: Operation not supported (45)
The host OS is running an HFS filesystem, which supports hard linking. I am trying to figure out whether something else is not supporting this, or whether I have a problem in my code.
Thanks for any help
Edit:
I am able to create a hard link on my local machine.
I am also able to create a hard link on the remote server (when logged in locally)
I am NOT able to create a hard link on the remote server when it is mounted via AFP, even if both files exist on the server.
I am guessing this is a limitation of AFP.
Just in case your command line is only an example: be sure to always specify the link-dest directory with an absolute pathname! That's something that took me quite some time to figure out.
Two things from the man page stand out as worth checking:
If files aren't linking, double-check their attributes. Also check if some attributes are getting forced outside of rsync's control, such as a mount option that squishes root to a single user, or mounts a removable drive with generic ownership (such as OS X's "Ignore ownership on this volume" option).
and
Note that rsync versions prior to 2.6.1 had a bug that could prevent --link-dest from working properly for a non-super-user when -o was specified (or implied by -a). You can work around this bug by avoiding the -o option when sending to an old rsync.
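If you ever need that workaround, -a is equivalent to -rlptgoD, so spelling out the flags without -o would look like this (a sketch against the script above):
rsync -rlptgDP --link-dest=/Backup/Current /usr/home/user/backup /Backup/Backup-$THE_TIME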
Do you have the "ignore ownership" option turned on? What version of rsync do you have?
Also, have you tried manually creating a similar hardlink using ln at the command line?
I don't know if this is the same issue, but I know that rsync can't sync a file when the destination is a FAT32 partition and the filename has a ":" (colon) in it. [The source filesystem is ext3, and the destination is FAT32]
Try reconfiguring the date command so that it doesn't use a colon and see if that makes a difference.
e.g.
THE_TIME=`date "+%Y-%m-%dT%H_%M_%S"`
I am using rsync to copy tarballs to an external hard drive on a Windows XP machine.
My files are tar.gz files (perms 600) in a directory (perms 711).
However, when I do a dry run, only the folders are returned; the files are ignored.
I use rsync a lot, so I presume there is no issue with my installation.
I have tried changing the permissions of the files, but this makes no difference.
The owner of the files is root, which is also the user the script logs in as.
I am not using rsync's CVS option.
The command I am using is:
rsync^
-azvr^
--stats^
--progress^
-e 'ssh -p 222' root@servername:/home/directory/ ./
Is there something I am missing to get my files copied over?
I can think of only a single possibility: in my experience, rsync creates the directory structure before copying files in. Rsync may be terminating prematurely, but after this directory step has been completed.
Update:
You mentioned that you were running a dry run. By default, rsync only shows the directory names when the directory and all its contents are not present on the receiver.
After a lot of experimentation, I'm only able to reproduce the behaviour you describe if the directories on the source have later modification dates than on the receiver. In this instance, the modification times are adjusted on the receiver.
I had this problem too, and it turns out that when backing up to a Windows drive from Linux, rsync doesn't seem to move its temporary files into place after they are transferred over.
Try adding the --inplace flag when rsyncing to Windows drives.
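Applied to the command above, a sketch (keeping the original caret line continuations):
rsync^
-azvr^
--inplace^
--stats^
--progress^
-e 'ssh -p 222' root@servername:/home/directory/ ./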