How do I diagnose input/output errors? - bash

I am totally baffled. I used
sudo cp -a /my/home/dir /backup/home/dir
to back up my home directory and found a few files (about 20) that didn't copy due to input/output errors. These files look fine: some are .jpgs, some are .gifs, one is my VirtualBox VDI file. They work fine in the original home dir, but they JUST WON'T COPY. I tried copying them manually. I tried copying them using Nautilus. I tried changing the permissions to 777 and made sure the ownership was non-root. Still no dice. I get:
cp: reading `/my/home/dir/subfolder/abc_def.gif': Input/output error
I'm scratching my head, and while I could lose a gif or jpg here and there, I don't want to lose my vdi file. Do I need to add a --force to the cp command? Is there any way to find out more info about why these particular files aren't copying? In the case of the .jpgs, they're all in one folder of images I took during a recent trip, shot with the same camera and the same CF card, and transferred the same way at the same time.
Totally baffled. Any help would be fantastic. Ideally, a way to force copy these files. They seem to be fine, usable, and I trust them, so I've no clue why they're not getting copied.
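Input/output errors on read usually mean the kernel is hitting bad sectors on the disk, not that anything is wrong with the files themselves, which is why permission changes and --force make no difference. A sketch of the usual first diagnostic steps, assuming the home directory lives on /dev/sda (adjust the device name; none of these commands are from the original post):

# Look for ATA/read errors logged right after a failed copy
dmesg | tail -20

# Query the drive's SMART health report (smartmontools package)
sudo smartctl -a /dev/sda

# Salvage the readable parts of a damaged file; unlike cp,
# GNU ddrescue skips unreadable blocks instead of aborting
ddrescue /my/home/dir/subfolder/abc_def.gif /backup/abc_def.gif rescue.log

The same ddrescue invocation works on the VDI file: whatever falls on a bad sector is lost, but everything else is recovered.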

Related

iTerm can't delete directories

I just started working with iTerm and I've been playing around, creating folders and files and such. Now I want to get rid of those test folders. I used rmdir, rm -r, and all of those commands, and I successfully emptied the folders, but they still show up in my main folder.
When I try to delete them, it says the folders aren't empty, even though they are.
Does anybody know how to fix this? I just want to clean up my main folder. :(
Thanks in advance.
I tried various delete commands.
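One common cause worth ruling out (not confirmed in this thread) is that the folders still contain hidden dotfiles such as .DS_Store, which Finder and a plain ls don't show, so rmdir reports them as not empty. A quick check, assuming a leftover folder named testfolder:

# List everything, including hidden dotfiles
ls -la testfolder

# Remove the folder and all of its contents, hidden files included
rm -rf testfolder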

Shell script CP cannot overwrite directory

Our users have written a shell script to copy an application into the /Applications folder on OS X. It works great the first time, but the second time they get an error. This is a new development; it apparently used to work fine before we changed the app name.
The shell script runs the following:
cp -a ApplicationName.app /Applications
open -a '/Applications/ApplicationName.app/Contents/MacOS/ApplicationName' --args -LSRC autolaunch
The first time it runs, it works fine: the application is copied over and then it launches. The second time, it comes back with the following errors:
[jrivera@chamomile] $ sudo ./InstallScript.sh /SRNM ABC1234567
cp: cannot overwrite directory /Applications/ApplicationName.app/Contents/Frameworks/Sparkle.framework/Headers with non-directory ApplicationName.app/Contents/Frameworks/Sparkle.framework/Headers
cp: cannot overwrite directory /Applications/ApplicationName.app/Contents/Frameworks/Sparkle.framework/Resources with non-directory ApplicationName.app/Contents/Frameworks/Sparkle.framework/Resources
cp: cannot overwrite directory /Applications/ApplicationName.app/Contents/Frameworks/Sparkle.framework/Versions/A/Resources/fr_CA.lproj with non-directory ApplicationName.app/Contents/Frameworks/Sparkle.framework/Versions/A/Resources/fr_CA.lproj
cp: cannot overwrite directory /Applications/ApplicationName.app/Contents/Frameworks/Sparkle.framework/Versions/A/Resources/pt.lproj with non-directory ApplicationName.app/Contents/Frameworks/Sparkle.framework/Versions/A/Resources/pt.lproj
cp: cannot overwrite directory /Applications/ApplicationName.app/Contents/Frameworks/Sparkle.framework/Versions/Current with non-directory ApplicationName.app/Contents/Frameworks/Sparkle.framework/Versions/Current
I'm not exactly sure why that's happening. It's the exact same script in the exact same location copying the exact same things 30 seconds apart. I dug into each of them, and the directories and files all appear to be the same file type. I tried adding other flags to cp to force it (-RfXv) but got the same thing. Any ideas? Maybe it's a strange thing with Sparkle?
I would suspect that the problematic files/directories have some extended attributes, and that cp is having problems overwriting the target when it has those attributes (cp's permission-preserving behavior often seems unreliable across platforms).
Given that, there are a couple of workarounds to explore:
remove the target /Applications/ApplicationName.app before re-copying it (a sketch follows after this list).
use rsync, e.g.,
rsync -vaz ApplicationName.app/ /Applications/ApplicationName.app
Removing the target first may interfere with people using it while you are updating it; rsync works incrementally (and almost always updates more rapidly than cp).
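A sketch of the remove-first workaround, reusing the names from the script above:

# Delete the previous install, then copy fresh; this avoids the
# directory-vs-non-directory conflict entirely
sudo rm -rf /Applications/ApplicationName.app
cp -a ApplicationName.app /Applications

Note the trailing slash on the rsync source above: ApplicationName.app/ means "the contents of the bundle," so they are copied into /Applications/ApplicationName.app rather than nested one level deeper.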

Rsync symlinked content

I use rsync to upload websites to a server. I use it something like this:
rsync -auvzhL . username@host:/home/username/foldername
I only want to update things that are newer on my computer, and not to delete things on the server that I don't have.
This all worked fine until I decided to symlink some files in the folder to other ones. Now if I change such a file, it doesn't get included in the rsync unless I delete and recreate the symlink, presumably because the symlink's date is the time the link was created, not the modification time of the content.
Is there any way to either force rsync to always copy certain files, or, better, update the modified date on a symlink without deleting and recreating it?
In the end I just delete and recreate all the symlinks before rsyncing. I'm sure there is a better way, I just can't work it out!
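Two things may be worth trying instead of the delete-and-recreate loop (neither is from the thread, so treat them as assumptions): touch -h, which updates the timestamp of the link itself rather than its target, or rsync's --checksum, which decides what to transfer by comparing file contents rather than times:

# Bump the symlink's own modification time without touching the target
touch -h path/to/symlink

# Or let rsync compare by checksum instead of mod-time and size
rsync -avzhLc . username@host:/home/username/foldername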

Rsync bash script and hard linking files

I am creating a bash script to backup my files with rsync.
Backups all come from a single directory.
I only want new or modified files to be backed up.
Currently, I am telling rsync to backup the dir, and to check the files compared to the last backup.
The way I am doing this is
THE_TIME=`date "+%Y-%m-%dT%H:%M:%S"`
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup /Backup/Backup-$THE_TIME
rm -f /Backup/Current
ln -s /Backup/Backup-$THE_TIME /Backup/Current
I am pretty sure I have the syntax correct for this. Each backup will check against the "Current" folder and upload only as necessary. It will then delete the Current symlink and re-create it, pointing to the newest backup it just made.
I am getting an error when I run the script:
rsync: link "/Backup/Backup-2010-08-04-12:21:15/dgs1200series_manual_310.pdf"
=> /Backup/Current/dgs1200series_manual_310.pdf
failed: Operation not supported (45)
The host OS is running an HFS filesystem, which supports hard linking. I am trying to figure out if something else is not supporting this, or if I have a problem in my code.
Thanks for any help
Edit:
I am able to create a hard link on my local machine.
I am also able to create a hard link on the remote server (when logged in locally)
I am NOT able to create a hard link on the remote server when mounted via afp. Even if both files exist on the server.
I am guessing this is a limitation of afp.
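If the afp guess is right, one workaround (an assumption, not something tested in this thread) is to bypass the afp mount and let rsync talk to the server over ssh, so the hard links are created server-side where the filesystem supports them; user@server is a placeholder:

# --link-dest is resolved on the receiving side, so the links are
# made locally on the server instead of through the afp mount
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup user@server:/Backup/Backup-$THE_TIME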
Just in case your command line is only an example: Be sure to always specify the link-dest directory with an absolute pathname! That’s something which took me quite some time to figure out …
Two things from the man page stand out that are worth checking:
If files aren't linking, double-check their attributes. Also check if some attributes are getting forced outside of rsync's control, such as a mount option that squishes root to a single user, or mounts a removable drive with generic ownership (such as OS X's "Ignore ownership on this volume" option).
and
Note that rsync versions prior to 2.6.1 had a bug that could prevent --link-dest from working properly for a non-super-user when -o was specified (or implied by -a). You can work around this bug by avoiding the -o option when sending to an old rsync.
Do you have the "ignore ownership" option turned on? What version of rsync do you have?
Also, have you tried manually creating a similar hardlink using ln at the command line?
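For that last check, a minimal test on the mounted backup volume (the /Volumes/Backup mount point is a placeholder; substitute yours):

# If this fails with "Operation not supported", the transport
# (afp here) is refusing hard links, and rsync is not at fault
ln /Volumes/Backup/some-existing-file /Volumes/Backup/test-hardlink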
I don't know if this is the same issue, but I know that rsync can't sync a file when the destination is a FAT32 partition and the filename has a ":" (colon) in it. [The source filesystem is ext3, and the destination is FAT32]
Try reconfiguring the date command so that it doesn't use a colon and see if that makes a difference.
e.g.
THE_TIME=`date "+%Y-%m-%dT%H_%M_%S"`
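With that format the directory name comes out as, e.g., Backup-2010-08-04T12_21_15, avoiding the colon, which FAT32 filenames cannot contain.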

RSync copies only folder directory structure not files

I am using RSync to copy tarballs to an external hard drive on a Windows XP machine.
My files are tar.gz files (perms 600) in a directory (perms 711).
However, when I do a dry-run, only the folders are returned, the files are ignored.
I use RSync a lot, so I presume there is no issue with my installation.
I have tried changing permissions of the files but this makes no difference
The owner of the files is root, which is also the user the script logs in as
I am not using Rsync's CVS option
The command I am using is:
rsync^
-azvr^
--stats^
--progress^
-e 'ssh -p 222' root@servername:/home/directory/ ./
Is there something I am missing to get my files copied over?
I can think of only a single possibility: My experience with rsync is that it creates the directory structure before copying files in. Rsync may be terminating prematurely, but after this directory step has been completed.
Update0
You mentioned that you were running a dry run. Rsync by default only shows the directory names when the directory and all its contents are not present on the receiver.
After a lot of experimentation, I'm only able to reproduce the behaviour you describe if the directories on the source have later modification dates than on the receiver. In this instance, the modification times are adjusted on the receiver.
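To see exactly which files a dry run would transfer, and why, it can help to add --itemize-changes alongside the dry-run flag; a sketch based on the command from the question:

# -n is --dry-run; --itemize-changes prints a per-file change summary
rsync -azvrn --itemize-changes -e 'ssh -p 222' root@servername:/home/directory/ ./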
I had this problem too, and it turns out that when backing up to a Windows drive from Linux, rsync doesn't seem to move the temporary files into place after they are transferred.
Try adding the --inplace flag when rsyncing to Windows drives.
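That is, something like the following (the question's command with the flag added):

rsync -azvr --stats --progress --inplace -e 'ssh -p 222' root@servername:/home/directory/ ./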
