I use rsync to upload websites to a server. I use it something like this:
rsync -auvzhL . username@host:/home/username/foldername
I only want to update things that are newer on my computer, and not to delete things on the server that I don't have.
This all worked fine until I decided to symlink some files in the folder to other ones. Now if I change a linked-to file, it doesn't get included in the rsync unless I delete and recreate the symlink, presumably because the symlink's date is the date the link was created, not the date of the content.
Is there any way to either force rsync to always copy certain files or, better, update the modified date on a symlink without deleting and recreating it?
In the end I just delete and recreate all the symlinks before rsyncing. I'm sure there is a better way, I just can't work it out!
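One possible improvement, assuming GNU or BSD coreutils (whose touch supports -h/--no-dereference to act on the link itself rather than its target): bump each symlink's own timestamp before syncing instead of recreating it:

# Update the mtime of every symlink under the current directory
# without following it to the target
find . -type l -exec touch -h {} +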
I am totally baffled. I used
sudo cp -a /my/home/dir /backup/home/dir
to back up my home directory and found a few files (about 20) that didn't copy due to input/output errors. These files look fine: some are .jpgs, some are .gifs, one is my VirtualBox VDI file. They work fine in the original home dir, but they JUST WON'T COPY. I tried copying them manually. I tried doing it with Nautilus. I tried changing the permissions to 777 and made sure the ownership was non-root... still no dice. I get:
cp: reading `/my/home/dir/subfolder/abc_def.gif': Input/output error
I'm scratching my head, and while I could stand to lose a .gif or .jpg here and there, I don't want to lose my VDI file. Do I need to add a --force to the cp command? Is there any way to find out more info about why these particular files aren't copying? In the case of the .jpgs, they're all in one folder of images I took during a recent trip, shot on the same camera and the same CF card, and transferred the same way at the same time.
Totally baffled. Any help would be fantastic. Ideally, a way to force copy these files. They seem to be fine, usable, and I trust them, so I've no clue why they're not getting copied.
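A minimal diagnostic sketch, assuming a Linux system with smartmontools and GNU ddrescue available (input/output errors on read usually point to bad sectors on the disk, which no cp flag can force past; the paths and device name below are examples):

# Check the kernel log for read errors right after a failed copy
dmesg | tail -n 20
# Query the drive's SMART health status
sudo smartctl -H /dev/sda
# Salvage the readable parts of a damaged file, keeping a map of bad areas
sudo ddrescue /my/home/dir/my.vdi /backup/home/dir/my.vdi rescue.map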
I currently have rsync working well: it copies all my files from one directory to another. The only problem is that it physically copies the files.
I have a lot of large files and don't want duplicates of all of them. I just want to create symbolic links in the new directory so that I can serve the data on a webpage. The source directory has some scripts and files I don't want the public to see, so I'm moving the safe data to the web root (the destination).
What I would like rsync to do is create a link in the destination for any new file in the source directory, so that I'm not using up hard drive space the way I currently am. What I have works perfectly except for the symbolic-link aspect. Is there a way to have rsync track files and create symbolic links to them?
rsync -aP --exclude="file.sql" --exclude="*~" --exclude=".*" --exclude="*.sh" . ${destination}
It's not a symlink, but you might be able to work with --link-dest=DIR. It creates a hard link, i.e. a new name for the same file, which will behave similarly to a softlink (see the sketch after this list) as long as:
Both files are on the same filesystem
You don't plan to delete the original and not the copy (the symlink would break but a hard-link won't)
You don't have anything explicitly checking to see if it's a softlink
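A sketch of how that might look for the command in the question (paths are placeholders; --link-dest should be an absolute path, and source and destination must be on the same filesystem):

# Files identical to those under $src are hard-linked into $destination
# instead of copied; anything that differs is copied as usual
src=/var/data/source
rsync -aP --exclude="*.sh" --link-dest="$src" "$src/" "$destination/"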
You could use cp -aR -s (Linux or FreeBSD) or cp --archive --recursive --symbolic-link (Linux) to create symbolic links to the source files in the destination directory instead of copies. Note that -s is non-standard.
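For example (hypothetical paths; GNU cp wants an absolute source path when creating symlinks outside the current directory):

# Mirror the source tree into the web root as a tree of symlinks
cp -as /var/data/source/. /var/www/html/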
lndir might be useful to you. According to its manual, it creates a shadow directory of symbolic links to another directory tree.
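Usage would be something like this (hypothetical paths; lndir recreates the directories and symlinks the files inside them):

lndir /var/data/source /var/www/html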
I think master_delivery is probably the best tool for this. With the already-mentioned --link-dest option of rsync, files which are not the same get copied. If you don't mind a mix of copies and hard links, you can use rsync, but if you want to eliminate duplicates completely, use master_delivery.
Usage is:
gem install master_delivery
master_delivery -m <path_to_master> -d <path_to_delivery_root>
Using rsync, my source directory has a number of files and directories. My destination has already been synced, so it mirrors those files and directories.
However, I have manually created a symlink in my destination that does not exist in my source.
I need to use the --delete option in rsync. Is there a way to get rsync not to remove the symlink?
There is no option to achieve this the way you suggested, BUT the simplest solution to the described problem is just to add the filenames of the symlinks to the rsync exclude patterns,
like: --exclude="folder/symlinkname1" --exclude="folder/symlinkname2"
If there are many symlinks, you can keep a list of them in an exclude pattern file.
This file may be autogenerated by a little script or a bash one-liner, as shown below.
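A possible version of that one-liner (assuming the symlinks all live in the destination tree; paths are placeholders):

# List every symlink under the destination, relative to it, as exclude patterns
( cd /path/to/destination && find . -type l | sed 's|^\./||' ) > /tmp/rsync-excludes
rsync -a --delete --exclude-from=/tmp/rsync-excludes /path/to/source/ /path/to/destination/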
Does the existing symlink in your destination point to a directory or a file? If a directory, you may be able to use --keep-dirlinks.
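For example (hypothetical paths; -K/--keep-dirlinks makes rsync treat a receiver-side symlink to a directory as if it were the directory itself):

rsync -a --delete --keep-dirlinks /path/to/source/ /path/to/destination/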
I am creating a bash script to back up my files with rsync.
Backups all come from a single directory.
I only want new or modified files to be backed up.
Currently, I am telling rsync to back up the dir, and to check the files compared to the last backup.
The way I am doing this is
THE_TIME=`date "+%Y-%m-%dT%H:%M:%S"`
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup /Backup/Backup-$THE_TIME
rm -f /Backup/Current
ln -s /Backup/Backup-$THE_TIME /Backup/Current
I am pretty sure I have the syntax correct for this. Each backup will check against the "Current" folder and upload only as necessary. It will then delete the Current symlink and recreate it, pointing at the newest backup it just made.
I am getting an error when I run the script:
rsync: link "/Backup/Backup-2010-08-04-12:21:15/dgs1200series_manual_310.pdf"
=> /Backup/Current/dgs1200series_manual_310.pdf
failed: Operation not supported (45)
The host OS is running an HFS+ filesystem, which supports hard links. I am trying to figure out if something else is not supporting this, or if I have a problem in my code.
Thanks for any help
Edit:
I am able to create a hard link on my local machine.
I am also able to create a hard link on the remote server (when logged in locally)
I am NOT able to create a hard link on the remote server when it is mounted via AFP, even when both files exist on the server.
I am guessing this is a limitation of AFP.
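A quick test that separates the two cases (the volume path is an example):

# Run on the client with the share mounted; if ln fails here,
# the AFP mount, not rsync, is what's refusing hard links
touch /Volumes/Backup/hardlink-test
ln /Volumes/Backup/hardlink-test /Volumes/Backup/hardlink-test2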
Just in case your command line is only an example: Be sure to always specify the link-dest directory with an absolute pathname! That’s something which took me quite some time to figure out …
Two things from the man page stand out that are worth checking:
If files aren't linking, double-check their attributes. Also check if some attributes are getting forced outside of rsync's control, such as a mount option that squishes root to a single user, or mounts a removable drive with generic ownership (such as OS X's “Ignore ownership on this volume” option).
and
Note that rsync versions prior to 2.6.1 had a bug that could prevent --link-dest from working properly for a non-super-user when -o was specified (or implied by -a). You can work around this bug by avoiding the -o option when sending to an old rsync.
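If that bug applies, the workaround might look like this (a sketch based on the script above; --no-o turns the owner-preserving part of -a back off):

rsync -aP --no-o --link-dest=/Backup/Current /usr/home/user/backup /Backup/Backup-$THE_TIME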
Do you have the "ignore ownership" option turned on? What version of rsync do you have?
Also, have you tried manually creating a similar hardlink using ln at the command line?
I don't know if this is the same issue, but I know that rsync can't sync a file when the destination is a FAT32 partition and the filename has a ":" (colon) in it. [The source filesystem is ext3, and the destination is FAT32]
Try reconfiguring the date command so that it doesn't use a colon and see if that makes a difference.
e.g.
THE_TIME=`date "+%Y-%m-%dT%H_%M_%S"`
I am using rsync to copy tarballs to an external hard drive on a Windows XP machine.
My files are tar.gz files (perms 600) in a directory (perms 711).
However, when I do a dry-run, only the folders are returned, the files are ignored.
I use rsync a lot, so I presume there is no issue with my installation.
I have tried changing the permissions of the files, but this makes no difference.
The owner of the files is root, which is also the user the script logs in as.
I am not using rsync's CVS option.
The command I am using is:
rsync^
-azvr^
--stats^
--progress^
-e 'ssh -p 222' root@servername:/home/directory/ ./
Is there something I am missing to get my files copied over?
I can think of only a single possibility: My experience with rsync is that it creates the directory structure before copying files in. Rsync may be terminating prematurely, but after this directory step has been completed.
Update0
You mentioned that you were running a dry run. Rsync by default only shows the directory names when the directory and all its contents are not present on the receiver.
After a lot of experimentation, I'm only able to reproduce the behaviour you describe if the directories on the source have later modification dates than on the receiver. In this instance, the modification times are adjusted on the receiver.
I had this problem too, and it turns out that when backing up to a Windows drive from Linux, rsync doesn't seem to move its temporary files into place after they are transferred.
Try adding the --inplace flag, when rsyncing to windows drives.
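For example, adapted to a command like the one in the question:

# Write updates directly into the destination files instead of
# transferring to a temporary file and renaming it into place
rsync -azv --progress --inplace -e 'ssh -p 222' root@servername:/home/directory/ ./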