I tried to back up data from my MacBook to an external hard drive formatted with exFAT (chosen for its Windows/Linux/Mac compatibility).
With Automator I created a small program to back up my data easily. It works fine on the local drive, and from the local drive to an SD card, but it does not work from the local drive to the external hard drive. What's wrong?
SOURCE=/Users/username/Pictures/test
TARGET=/Volumes/Backup/
LOG=~/Documents/AutomatorLogs/BackupSync.log
rsync -aze "ssh" --delete --exclude=".*" "$SOURCE" "$TARGET" > "$LOG"
I got this Error:
rsync: recv_generator: mkdir "/Volumes/Backup/test" failed: Permission denied (13)
I know this is older, but I just ran into this and wanted to make sure this info was included. I know the OP's situation is a little different, but I'm using a MacBook and ran into the error I describe below, so I don't see how it worked for them just by changing the disk name.
rsync can't use -a when the target is an exFAT drive: it will create the directories and appear to be backing up, but no files are actually created. You need to use:
rsync -rltDv [SRC] [DESTINATION]
where:
-v, --verbose     increase verbosity
-r, --recursive   recurse into directories
-l, --links       copy symlinks as symlinks
-t, --times       preserve modification times
-D                same as --devices --specials
--devices         preserve device files (super-user only)
--specials        preserve special files
The reason is that rsync can't handle permissions on exFAT. You will see an rsync error (in syslog, or if you Ctrl-C out):
chgrp [file] failed: Function not implemented (38)
It looks like the user you're running the command as doesn't have permission to create a new directory inside /Volumes/Backup/.
To solve this, you will probably need to change the permissions on that directory so that your script can write to it and create the directory it uses for the backup.
Here are some links about permissions:
http://linuxcommand.org/lts0070.php
http://www.perlfect.com/articles/chmod.shtml
I think I've got it:
It is related to the name of the external hard disk.
With the drive named "Backup", it does not work.
If I change the name to anything else, it works.
(I tested some other exFAT-formatted external hard drives with other names and they worked. So I changed the name of this external drive to something else and now it works. Crazy...)
I am trying to back up in a style similar to Time Machine; many examples on the web are incomplete. I am using --relative (-R) so that non-existing directories are created, but the destination then includes a /Volumes component which is hidden, so I cannot see the backup until I manually unhide the folder. The destination is a USB flash drive containing a sparse bundle.
Source: /Volumes/Drive/Folder
Destination: /Volumes/USBBackup
I have tried putting this in my filter.txt file:
H /Volumes/*
With and without it, /Volumes is included and hidden.
rsync -avHAXNR --fileflags --force-change --numeric-ids --protect-args --stats --progress --filter="._$FILTER" --exclude-from="$EXC" --link-dest="/Volumes/USBBackup/$PREVDIR" "/Volumes/Drive/Folder" "/Volumes/USBBackup/name-timestamp/" 2> ~/Desktop/rsync-errors.txt
What gets backed up is /Volumes/USBBackup/name-timestamp/Volumes/Drive/Folder, with the Volumes component hidden.
What I want instead is:
/Volumes/USBBackup/name-timestamp/Drive/Folder
not hidden.
Have you considered changing your working directory to /Volumes while running that task?
You should be able to do something like:
(cd /Volumes && rsync -aR Drive/Folder /Volumes/USBBackup/name-timestamp/)
Or you could:
OLDPWD=$(pwd)
cd /Volumes
rsync Drive/Folder /Volumes/USBBackup....
cd "$OLDPWD"
I'm using rsync under cygwin to synchronize my music and pictures folders across two machines. I have full control over both of these folders on the Windows machine, and the permissions on the Linux machine are generally -rw-------, owned by me. When I use, for example, rsync -rvu --delete rsync://fraxtil@linuxmachine:/music/ /cygdrive/d/Music/, it creates the files and folders in D:\Music\, but I don't have permission to access them, and as a result rsync fails to recurse into newly created directories.
I've tried adding --chmod=a+rwx,g+rwx,o+rwx to the command, adding noacl to cygwin's fstab entry for /cygdrive/, and removing read only = yes from Linux's rsyncd.conf, but none of these solved the issue.
My rsyncd.conf:
log file = /var/log/rsync.log
timeout = 300
[music]
comment = Music
path = /home/fraxtil/music
#read only = yes
list = yes
uid = fraxtil
gid = fraxtil
auth users = fraxtil
secrets file = /etc/rsyncd.secrets
[pictures]
(mostly the same as above)
cygwin's /etc/fstab:
none /cygdrive cygdrive noacl,binary,posix=0,user 0 0
I've noticed that when I browse D:\ from bash under cygwin, most of the files have mode 0200 or 0000. This might be related to my problem. However, the newly created files, oddly enough, have mode 0270, which is baffling because those are the ones I can't access, yet they have more permissions.
Any ideas?
This was originally posted as an edit to my original question, but now that another answer exists I need to accept something to keep my 100% accept rate:
I just removed the --chmod bit from the rsync command and now I have the proper permissions again. I had tried --chmod with noacl and no --chmod with acl, but hadn't yet tried no --chmod with noacl. The lattermost works for me. I'll leave this open if anyone wants to explain why this happens the way it does.
Thanks for the solution.
I was stuck rsyncing between a disk and a USB disk.
rsync created strange NTFS permissions on folders too.
Adding --chmod=a+rwx,g+rwx,o+rwx allowed me to rsync locally.
The final command is:
rsync -rv --delete --exclude "Dropbox/" --chmod=a+rwx,g+rwx,o+rwx /cygdrive/d/source /cygdrive/h/destination
I'm trying to push changes to my server through ssh on windows (cygwin) using rsync.
The command I am using is:
rsync -rvz -e ssh /cygdrive/c/myfolder/ rsyncuser@192.168.1.110:/srv/www/prj112/myfolder/
/srv/www/prj112/myfolder/ is owned by rsyncuser. My problem is that even though rsync creates the subdirectories as it publishes, each directory is assigned default permissions of d---------, so rsync fails to copy any files into it.
How do I fix this?
The option to ignore NTFS permissions has changed in Cygwin version 1.7. This might be what's causing the problem.
Try adding the 'noacl' flag to your Cygwin mounts in C:\cygwin\etc\fstab, for example:
none /cygdrive cygdrive user,noacl,posix=0 0 0
You can pass custom permissions via rsync using the 'chmod' option:
rsync -rvz --chmod=ugo=rwX -e ssh source destination
Your problem stems from the fact that the Unix permissions on that directory really are 0. All of the access information is stored in separate ACLs, which rsync does not copy. Thus, it sets the permissions on the remote copy to 0, and, obviously, is unable to write to that directory afterwards.
You can run
chmod -R 775
on that directory, which should fix your rsync problem.
A look at the man page tells me that the --chmod parameter has been available in rsync since roughly version 2.6.8. But you have to use --chmod=ugo=rwX in combination with rsync -av.
You should also try this command:
rsync -av <SOURCE_DIR> rsyncuser@192.168.1.110:/srv/www/prj112/myfolder
It works on Linux, at least. Note that rsync does not need ssh to be mentioned explicitly, at least on Linux.
But if all else fails, and just to give you another option, you may take a look at the ready-packaged tool cwRsync.
If you deploy a site from Windows (for example, Octopress uses rsync), it's possible to set permissions to 775 by adding multiple --chmod options:
rsync -avz --chmod=ug=rwx --chmod=o=rx -e ssh
To rsync from Windows to Unix/Linux you should provide a command like
SET BACKUP_SERVER=my.backup.server
SET SSH_USER=theUnixUserName
SET STORAGEPATH=/home/%SSH_USER%/Backup/
SET STORAGEURI=%BACKUP_SERVER%:%STORAGEPATH%
SET SSH_ID=/cygdrive/c/Users/theWindowsUserName/Documents/keyfiles/id_dsa
SET EXCLUDEFILE=backup_excludes.txt
SET BACKUPLOGFILE=/cygdrive/c/Users/theWindowsUserName/Backuplogs/backup-%DATE%-%TIME::=-%.log
The ssh command then is
SET BACKUP=rsync -azvu --chmod=Du=rwx,Dgo=rx,Fu=rw,Fgo=r --rsh="ssh -l %SSH_USER% -i '%SSH_ID%'" --exclude-from=%EXCLUDEFILE% --delete --delete-excluded --log-file="%BACKUPLOGFILE%"
with backup_excludes.txt containing lines of ignored elements like
.git
.svn
.o
\Debug
\Release
Then you would use this in a script with
%BACKUP% /cygdrive/c/mySensibleData %STORAGEURI%
%BACKUP% /cygdrive/c/myOtherSensibleData %STORAGEURI%
%BACKUP% /cygdrive/c/myOtherSensibleData2 %STORAGEURI%
and so on. This will backup your directories mySensibleData, myOtherSensibleData and myOtherSensibleData2 with the permissions 755 for directories and 644 for files. You also get backup logs in your %BACKUPLOGFILE% for each backup.
Cygwin rsync will report permission denied when some process has the target file open. Download and run Process Explorer and find out if anything else is locking the file or simply try renaming the file and see if you get the Windows error about some other process having the file open.
Also, you can try to create a (global) environment variable CYGWIN and set its value to nontsec
I am creating a bash script to backup my files with rsync.
Backups all come from a single directory.
I only want new or modified files to be backed up.
Currently, I am telling rsync to backup the dir, and to check the files compared to the last backup.
The way I am doing this is
THE_TIME=`date "+%Y-%m-%dT%H:%M:%S"`
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup /Backup/Backup-$THE_TIME
rm -f /Backup/Current
ln -s /Backup/Backup-$THE_TIME /Backup/Current
I am pretty sure I have the syntax correct for this. Each backup checks against the "Current" folder and uploads only what is necessary. It then removes the Current symlink and re-creates it, pointing at the newest backup it just made.
I am getting an error when I run the script:
rsync: link "/Backup/Backup-2010-08-04-12:21:15/dgs1200series_manual_310.pdf" => /Backup/Current/dgs1200series_manual_310.pdf failed: Operation not supported (45)
The host OS uses the HFS filesystem, which supports hard links. I am trying to figure out whether something else is not supporting this, or whether I have a problem in my code.
Thanks for any help
Edit:
I am able to create a hard link on my local machine.
I am also able to create a hard link on the remote server (when logged in locally)
I am NOT able to create a hard link on the remote server when mounted via afp. Even if both files exist on the server.
I am guessing this is a limitation of afp.
Just in case your command line is only an example: Be sure to always specify the link-dest directory with an absolute pathname! That’s something which took me quite some time to figure out …
Two things from the man page stand out that are worth checking:
If files aren't linking, double-check their attributes. Also
check if some attributes are getting forced outside of rsync's
control, such as a mount option that squishes root to a single
user, or mounts a removable drive with generic ownership (such
as OS X's “Ignore ownership on this volume” option).
and
Note that rsync versions prior to 2.6.1 had a bug that could
prevent --link-dest from working properly for a non-super-user
when -o was specified (or implied by -a). You can work-around
this bug by avoiding the -o option when sending to an old rsync.
Do you have the "ignore ownership" option turned on? What version of rsync do you have?
Also, have you tried manually creating a similar hardlink using ln at the command line?
I don't know if this is the same issue, but I know that rsync can't sync a file when the destination is a FAT32 partition and the filename has a ":" (colon) in it. [The source filesystem is ext3, and the destination is FAT32]
Try reconfiguring the date command so that it doesn't use a colon and see if that makes a difference.
e.g.
THE_TIME=`date "+%Y-%m-%dT%H_%M_%S"`
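For instance, swapping the colons for underscores keeps the backup names FAT32-safe:

```shell
#!/bin/sh
# FAT32 forbids ":" in file names, so build the timestamp without it.
THE_TIME=$(date "+%Y-%m-%dT%H_%M_%S")
echo "$THE_TIME"   # e.g. 2010-08-04T12_21_15

case "$THE_TIME" in
  *:*) echo "unsafe: still contains a colon" ;;
  *)   echo "safe for FAT32" ;;
esac
```

Hyphens or dots would work just as well; the only requirement is avoiding characters the destination filesystem rejects.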
I am using rsync to copy tarballs to an external hard drive on a Windows XP machine.
My files are tar.gz files (perms 600) in a directory (perms 711).
However, when I do a dry run, only the folders are returned; the files are ignored.
I use rsync a lot, so I presume there is no issue with my installation.
I have tried changing the permissions of the files, but this makes no difference.
The owner of the files is root, which is also the user the script logs in as.
I am not using rsync's CVS option.
The command I am using is:
rsync^
-azvr^
--stats^
--progress^
-e 'ssh -p 222' root@servername:/home/directory/ ./
Is there something I am missing to get my files copied over?
I can think of only a single possibility: my experience with rsync is that it creates the directory structure before copying files in. rsync may be terminating prematurely, but after this directory step has completed.
Update0
You mentioned that you were running dry run. Rsync by default only shows the directory names when the directory and all its contents are not present on the receiver.
After a lot of experimentation, I'm only able to reproduce the behaviour you describe if the directories on the source have later modification dates than on the receiver. In this instance, the modification times are adjusted on the receiver.
I had this problem too, and it turns out that when backing up to a Windows drive from Linux, the temporary files do not seem to be moved into place after they are transferred.
Try adding the --inplace flag, when rsyncing to windows drives.