Rsync bash script and hard linking files - macOS

I am creating a bash script to back up my files with rsync.
Backups all come from a single directory.
I only want new or modified files to be backed up.
Currently, I am telling rsync to back up the dir, and to check the files against the last backup.
The way I am doing this is
THE_TIME=`date "+%Y-%m-%dT%H:%M:%S"`
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup /Backup/Backup-$THE_TIME
rm -f /Backup/Current
ln -s /Backup/Backup-$THE_TIME /Backup/Current
I am pretty sure I have the syntax correct for this. Each backup will check against the "Current" folder and copy only as necessary. It will then delete the Current symlink and re-create it, pointing at the newest backup it just made.
I am getting an error when I run the script:
rsync: link "/Backup/Backup-2010-08-04-12:21:15/dgs1200series_manual_310.pdf"
=> /Backup/Current/dgs1200series_manual_310.pdf
failed: Operation not supported (45)
The host OS is running an HFS filesystem, which supports hard linking. I am trying to figure out whether something else is not supporting this, or whether I have a problem in my code.
Thanks for any help
Edit:
I am able to create a hard link on my local machine.
I am also able to create a hard link on the remote server (when logged in locally)
I am NOT able to create a hard link on the remote server when it is mounted via AFP, even if both files exist on the server.
I am guessing this is a limitation of AFP.

Just in case your command line is only an example: Be sure to always specify the link-dest directory with an absolute pathname! That’s something which took me quite some time to figure out …

Two things from the man page stand out that are worth checking:
If files aren't linking, double-check their attributes. Also check if some attributes are getting forced outside of rsync's control, such as a mount option that squishes root to a single user, or mounts a removable drive with generic ownership (such as OS X's “Ignore ownership on this volume” option).
and
Note that rsync versions prior to 2.6.1 had a bug that could
prevent --link-dest from working properly for a non-super-user
when -o was specified (or implied by -a). You can work-around
this bug by avoiding the -o option when sending to an old rsync.
Do you have the "ignore ownership" option turned on? What version of rsync do you have?
Also, have you tried manually creating a similar hardlink using ln at the command line?
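For example, a quick manual test over the same mount, using the paths from the question (a sketch; adjust to wherever the backup volume is actually mounted):
touch /Backup/linktest
ln /Backup/linktest /Backup/linktest2
ls -li /Backup/linktest*
Matching inode numbers in the first column mean the hard link worked; an "Operation not supported" here points at the mount rather than at rsync.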

I don't know if this is the same issue, but I know that rsync can't sync a file when the destination is a FAT32 partition and the filename has a ":" (colon) in it. [The source filesystem is ext3, and the destination is FAT32]
Try reconfiguring the date command so that it doesn't use a colon and see if that makes a difference.
e.g.
THE_TIME=`date "+%Y-%m-%dT%H_%M_%S"`

Related

How to change SYMLINK to SYMLINKD in batch script

We're sharing SYMLINKD files on our git project. It almost works, except git modifies our SYMLINKD files to SYMLINK files when pulled on another machine.
To be clear, on the original machine, symlink is created using the command:
mklink /D Annotations ..\..\submodules\Annotations\Assets
On the original machine, the dir cmd displays:
25/04/2018 09:52 <SYMLINKD> Annotations [..\..\submodules\Annotations\Assets]
After cloning, on the receiving machine, we get
27/04/2018 10:52 <SYMLINK> Annotations [..\..\submodules\Annotations\Assets]
As you might guess, a file-type symlink pointing at a directory [..\..\submodules\Annotations\Assets] does not work correctly.
To fix this problem we either need to:
Prevent git from modifying our symlink types.
Fix our symlinks with batch script triggered on a githook
We're going with 2, since we do not want to require all users to use a modified version of git.
My limited knowledge of batch scripting is impeding me. So far, I have looked into simply modifying the attrib of the file, using the info here:
How to get attributes of a file using batch file and https://superuser.com/questions/653951/how-to-remove-read-only-attribute-recursively-on-windows-7.
Can anyone suggest what attrib commands I need to modify the symlink?
Alternatively, I realise I can delete and recreate the symlink, but how do I get the target directory for the existing symlink short of using the dir command and parsing the path from the output?
I think it's https://github.com/git-for-windows/git/issues/1646.
To be more clear: your question appears to be a manifestation of the XY problem: the Git instance used to clone/fetch the project appears to incorrectly treat symbolic links to directories, creating symbolic links pointing to files instead. So it appears to be a bug in GfW, and instead of digging into it you've invented a workaround and are asking how to make it work.
So, I'd rather try to help the GfW maintainer and whoever reported #1646 to fix the problem. If you need a stop-gap solution, I'd say a proper way would be to go another route and script several calls to git ls-tree to figure out what the directory symlinks are (they'd have a special set of permission bits;
you may start here).
So you would traverse all the tree objects of the HEAD commit, recursively,
figuring out what the symlinks pointing at directories are and then
fixup the matching entries in the work tree by deleting them
and recreating with mklink /D or whatever creates a correct sort of
symlink.
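As a sketch of the discovery step (runnable in Git Bash, which ships with Git for Windows; paths containing spaces are not handled here):
# symlinks are stored in the tree as blobs with mode 120000
git ls-tree -r HEAD | awk '$1 == "120000" { print $4 }' |
while read -r link; do
    # the blob content is the stored link target
    target=$(git cat-file blob "HEAD:$link")
    echo "$link -> $target"
done
Deciding which of those targets are directories, then deleting and recreating those entries with mklink /D, would be the remaining work.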
Unfortunately, I'm afraid trying to script this using the lame scripting facilities of cmd.exe would be an exercise in futility. I'd take a more "real" programming language (PowerShell as an example, and, since you're probably a Windows shop, even .NET would be OK).

rsync backup to external hard disk exFat fails

I tried to back up data from my MacBook to an external hard drive formatted with exFAT (because of the Windows and Linux/Mac compatibility).
With Automator I created a little program to back up my data easily. It works fine on the local drive and from the local drive to an SD card, but it does not work from the local drive to the external hard drive. What's wrong?
SOURCE=/Users/username/Pictures/test
TARGET=/Volumes/Backup/
LOG=~/Documents/AutomatorLogs/BackupSync.log
rsync -aze "ssh" --delete --exclude=".*" "$SOURCE" "$TARGET" > "$LOG"
I got this Error:
rsync: recv_generator: mkdir "/Volumes/Backup/test" failed: Permission
denied (13)
I know this is older, but I just ran into this and wanted to make sure this info was included. I know the OP's setup is a little different, but I'm using a MacBook and ran into the error described below, so I don't see how changing the disk name alone could have fixed it.
rsync can't use -a when the target is an exFAT drive. It will make the directories and seem to be backing up, but no files are actually created. You need to use:
rsync -rltDv [SRC] [DESTINATION]
where:
-v, --verbose increase verbosity
-r, --recursive recurse into directories
-l, --links copy symlinks as symlinks
--devices preserve device files (super-user only)
--specials preserve special files
-D same as --devices --specials
-t, --times preserve times
The reason is that rsync can't handle permissions on exFAT. You will see an rsync error (in syslog, or if you control-C out):
chgrp [file] failed: Function not implemented (38)
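Applied to the script in the question, dropping the permission and ownership parts of -a, that might look like this (a sketch, using the variables defined above):
rsync -rltDv --delete --exclude=".*" "$SOURCE" "$TARGET" > "$LOG"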
It looks like the user that you're running the command as doesn't have permission to make a new directory in the /Volumes/Backup/ directory.
To solve this, you will probably need to change the permissions on that directory so that your script will be able to write to it and create the new directory it uses to make the backup.
Here are some links about permissions:
http://linuxcommand.org/lts0070.php
http://www.perlfect.com/articles/chmod.shtml
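For example, a minimal sketch, assuming the drive mounts at /Volumes/Backup:
ls -ld /Volumes/Backup            # check the owner and mode bits first
sudo chmod u+rwx /Volumes/Backup  # grant yourself write access if needed
Note that on exFAT the ownership shown is fixed at mount time, so chown may not help there.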
I think, I've got it:
It is related to the name of the external hard disk.
With the name "Backup" for my external hard drive, it does not work.
If I changed the name to anything else, it works.
(I tested some other exFAT-formatted external hard drives with other names and they worked. So I changed the name of this external drive to something else and now it works. Crazy...)

How to keep two folders automatically synchronized?

I would like to have a synchronized copy of one folder with all its subtree.
It should work automatically in this way: whenever I create, modify, or delete stuff from the original folder those changes should be automatically applied to the sync-folder.
Which is the best approach to this task?
BTW: I'm on Ubuntu 12.04
The final goal is to have a separate real-time backup copy, without the use of symlinks or mounts.
I used Ubuntu One to synchronize data between my computers, and after a while something went wrong and all my data was lost during a synchronization.
So I thought to add a step further to keep a backup copy of my data:
I keep my data stored on a "folder A"
I need the answer to my current question to create a one-way sync of "folder A" to "folder B" (a cron job running an rsync script, could that be it?). It needs to be one-way only, from A to B; any changes to B must not be applied to A.
Then I simply keep "folder B" synchronized with Ubuntu One.
In this manner any change in A will be applied to B, which will be detected by U1 and synchronized to the cloud. If anything goes wrong and U1 deletes my data on B, I always have it on A.
Inspired by lanzz's comments, another idea could be to run rsync at startup to backup the content of a folder under Ubuntu One, and start Ubuntu One only after rsync is completed.
What do you think about that?
How to know when rsync ends?
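A minimal sketch of that startup ordering (the folder paths are assumptions, and start-ubuntu-one is a placeholder for whatever launches the Ubuntu One client):
#!/bin/sh
# rsync runs in the foreground, so the command after && only starts
# once the copy has finished successfully - that is how you "know it ends"
rsync -a --delete "$HOME/folderA/" "$HOME/folderB/" && start-ubuntu-one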
You can use inotifywait (with the modify,create,delete,move flags enabled) and rsync.
while inotifywait -r -e modify,create,delete,move /directory; do
    rsync -avz /directory /target
done
If you don't have inotifywait on your system, run sudo apt-get install inotify-tools
You need something like this:
https://github.com/axkibe/lsyncd
It is a tool which combines rsync and inotify: the former is a tool that mirrors, with the correct options set, a directory down to the last bit; the latter tells the kernel to notify a program of changes to a directory or file.
It says:
It aggregates and combines events for a few seconds and then spawns one (or more) process(es) to synchronize the changes.
But - according to Digital Ocean at https://www.digitalocean.com/community/tutorials/how-to-mirror-local-and-remote-directories-on-a-vps-with-lsyncd - it ought to be in the Ubuntu repository!
I have similar requirements, and this tool, which I have yet to try, seems suitable for the task.
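If it is in the repository, trying it out can be as simple as the following (the paths are assumptions; -rsync selects lsyncd's default rsync mode):
sudo apt-get install lsyncd
lsyncd -rsync /directory /target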
Just a simple modification of #silgon's answer:
while true; do
    inotifywait -r -e modify,create,delete /directory
    rsync -avz /directory /target
done
(#silgon's version sometimes crashes on Ubuntu 16 if you run it from cron)
Using the cross-platform fswatch and rsync:
fswatch -o /src | xargs -n1 -I{} rsync -a /src /dest
You can take advantage of fschange. It's a Linux filesystem change-notification mechanism. The source code is downloadable from the above link and you can compile it yourself. fschange can be used to keep track of file changes by reading data from a proc file (/proc/fschange). When data is written to a file, fschange reports the exact interval that has been modified, instead of just saying that the file has been changed.
If you are looking for the more advanced solution, I would suggest checking Resilio Connect.
It is cross-platform and provides extended options for use and monitoring. Since it's BitTorrent-based, it can be faster than other existing sync tools, particularly when many endpoints are involved. (Disclosure: this was written on their behalf.)
I use this free program to synchronize local files and directories: https://github.com/Fitus/Zaloha.sh. The repository contains a simple demo as well.
The good point: it is a bash shell script (one file only), not a black box like other programs, and the documentation is there as well. Also, with some technical talent, you can "bend" and "integrate" it to create the final solution you like.

Rsync create symbolic links only

I currently have rsync working well. It copies all my files from one directory to another directory. The only thing is it is physically copying the files.
I have a lot of large files and I don't want duplicates of all of them. I just want to create symbolic links in the new directory so that I can serve the data on a webpage. The source directory has some scripts and files I don't want the public to see, so I'm moving the safe data to the web root (destination).
What I would like rsync to do is create links in the destination for any new files in the source directory. That way I am not using up my hard drive space like I currently am. What I have works perfectly, except for the symbolic-link aspect. Is there a way to have rsync track and create symbolic links?
rsync -aP --exclude="file.sql" --exclude="*~" --exclude=".*" --exclude="*.sh" . ${destination}
It's not a symlink, but you might be able to work with --link-dest=DIR. It creates hard links, which are new names for the same files. This will behave similarly to a softlink as long as:
Both files are on the same filesystem
You don't plan to delete the original and not the copy (the symlink would break but a hard-link won't)
You don't have anything explicitly checking to see if it's a softlink
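Applied to the command in the question, that might look like the following (a sketch, assuming it is run from the source directory; --link-dest wants an absolute path, and source and destination must be on the same filesystem):
rsync -aP --exclude="file.sql" --exclude="*~" --exclude=".*" --exclude="*.sh" \
      --link-dest="$(pwd)" . "${destination}"
Because the link-dest directory is the source itself, every unchanged file is hard-linked into the destination instead of copied.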
You could use cp -aR -s (Linux or FreeBSD) or cp --archive --recursive --symbolic-link (Linux) to create symbolic links to the source files in the destination directory instead of copies. Note that -s is non-standard.
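For example, a sketch with assumed paths (GNU cp can make relative symlinks only in the current directory, so give it an absolute source):
cp -aRs /var/data/source/. /var/www/html/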
Could lndir be useful to you? According to its manual, it creates a shadow directory of symbolic links to another directory tree.
I think master_delivery is probably the best tool for this. With rsync's already-introduced --link-dest option, files which are not identical will still be copied. If you don't mind a situation where copies and hard links are mixed, you can use rsync, but if you want to eliminate duplicates completely, use master_delivery.
Usage is:
gem install master_delivery
master_delivery -m <path_to_master> -d <path_to_delivery_root>

RSync copies only folder directory structure not files

I am using RSync to copy tar balls to an external hard drive on a Windows XP machine.
My files are tar.gz files (perms 600) in a directory (perms 711).
However, when I do a dry run, only the folders are returned; the files are ignored.
I use RSync a lot, so I presume there is no issue with my installation.
I have tried changing permissions of the files but this makes no difference
The owner of the files is root, which is also the user which the script logs in as
I am not using Rsync's CVS option
The command I am using is:
rsync^
-azvr^
--stats^
--progress^
-e 'ssh -p 222' root@servername:/home/directory/ ./
Is there something I am missing to get my files copied over?
I can think of only a single possibility: my experience with rsync is that it creates the directory structure before copying files in. Rsync may be terminating prematurely, but after this directory step has been completed.
Update0
You mentioned that you were running dry run. Rsync by default only shows the directory names when the directory and all its contents are not present on the receiver.
After a lot of experimentation, I'm only able to reproduce the behaviour you describe if the directories on the source have later modification dates than on the receiver. In this instance, the modification times are adjusted on the receiver.
I had this problem too, and it turns out that when backing up to a Windows drive from Linux, rsync doesn't seem to move its temporary files into place after they are transferred.
Try adding the --inplace flag when rsyncing to Windows drives.
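Applied to the command in the question, that might be (an untested sketch):
rsync -azv --stats --progress --inplace -e 'ssh -p 222' root@servername:/home/directory/ ./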
