Shell script cp cannot overwrite directory - macOS

Our users have written a shell script to copy an application into the /Applications folder on OS X. It works great the first time, but the second time they get an error. This is a new development; it apparently worked fine before we changed the app name.
The shell script runs the following:
cp -a ApplicationName.app /Applications
open -a '/Applications/ApplicationName.app/Contents/MacOS/ApplicationName' --args -LSRC autolaunch
The first time it runs, it works fine: the application is copied over and then launches. The second time, it comes back with the following errors:
[jrivera@chamomile] $ sudo ./InstallScript.sh /SRNM ABC1234567
cp: cannot overwrite directory /Applications/ApplicationName.app/Contents/Frameworks/Sparkle.framework/Headers with non-directory ApplicationName.app/Contents/Frameworks/Sparkle.framework/Headers
cp: cannot overwrite directory /Applications/ApplicationName.app/Contents/Frameworks/Sparkle.framework/Resources with non-directory ApplicationName.app/Contents/Frameworks/Sparkle.framework/Resources
cp: cannot overwrite directory /Applications/ApplicationName.app/Contents/Frameworks/Sparkle.framework/Versions/A/Resources/fr_CA.lproj with non-directory ApplicationName.app/Contents/Frameworks/Sparkle.framework/Versions/A/Resources/fr_CA.lproj
cp: cannot overwrite directory /Applications/ApplicationName.app/Contents/Frameworks/Sparkle.framework/Versions/A/Resources/pt.lproj with non-directory ApplicationName.app/Contents/Frameworks/Sparkle.framework/Versions/A/Resources/pt.lproj
cp: cannot overwrite directory /Applications/ApplicationName.app/Contents/Frameworks/Sparkle.framework/Versions/Current with non-directory ApplicationName.app/Contents/Frameworks/Sparkle.framework/Versions/Current
I'm not exactly sure why that's happening. It's the exact same script in the exact same location copying the exact same things 30 seconds apart. I dug into each of the directories and files, and they all appear to be the same file type. I tried adding other flags to cp to force the overwrite (-RfXv) but got the same result. Any ideas? Maybe it's a strange interaction with Sparkle?

I would suspect that the problematic files/directories have some extended attributes, and that cp is having problems overwriting the target when it carries those attributes. (cp's behavior when preserving permissions and attributes is often inconsistent across platforms.)
Given that, there are a couple of workarounds to explore:
remove the target /Applications/ApplicationName.app before re-copying it.
use rsync, e.g.,
rsync -vaz ApplicationName.app/ /Applications/ApplicationName.app
Removing the target first may interfere with anyone using the application while you are updating it; rsync works incrementally (and almost always updates faster than cp).
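A minimal sketch of both workarounds inside the install script, using the paths from the question (the --delete flag is my addition, so files removed from the bundle are also pruned from the target):

# Workaround 1: remove the stale copy, then copy fresh.
rm -rf /Applications/ApplicationName.app
cp -a ApplicationName.app /Applications

# Workaround 2: update in place with rsync instead. The trailing slash
# on the source means "the contents of", so source and target line up.
rsync -a --delete ApplicationName.app/ /Applications/ApplicationName.app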

Related

using entr to watch a directory without any matching files

I want to modify *.ica files (used to launch Citrix apps) when they are downloaded (to add a transparent key-passthrough option for remote desktop), so I settled on using entr to monitor the directory and then call another script (which invokes sed) to update all the ica files.
while true; do
ls *.ica | entr -d ~/Downloads/./transparentKeyPassthrough-CitrixIca.sh
done
However, this only works when there is already an .ica file in the directory. If the directory has no *.ica files when the loop first runs, entr fails with:
entr: No regular files to match
Putting a dummy ica file in the directory suffices; the new (real) ica file is then detected by entr and acted on.
Is there a better way to do this?
The alternative I can think of is to use entr to watch the whole directory for any change, check with ls -l *.ica whether the change produced a new ica file, and if so run the above script in turn.
It seems inelegant and complicated to nest entr that way, so I wanted to know if there is some simple option I am missing.
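One hedged sketch of that directory-watching alternative, assuming a reasonably recent entr that accepts a directory name on standard input when -d is given (script path taken from the question; modifications to existing files are deliberately ignored, since downloads always create new files):

while true; do
  # Process whatever .ica files exist right now (no-op if there are none).
  ls ~/Downloads/*.ica >/dev/null 2>&1 && ~/Downloads/transparentKeyPassthrough-CitrixIca.sh
  # Watch the directory itself; with -d, entr exits as soon as a new file
  # appears, which restarts the loop and re-runs the script above.
  echo ~/Downloads | entr -d true
done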

Answer no to all questions using the yes command

I'm trying to remove all files except read-only ones, but this command removes all of them anyway:
yes n | rm *
Did I do something wrong? If not, why doesn't it work?
For rm to prompt automatically before deleting unwritable files, standard input has to be a terminal (as specified in the man pages).
So, for rm to prompt when its input is a pipe, the user has to specify the -i option explicitly:
yes n | rm -i *
After doing so, the prompts appear as intended. Note, though, that -i asks about every file, not just the read-only ones, so answering n to all of the prompts removes nothing.
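A minimal demonstration with hypothetical file names (run in an empty scratch directory):

touch writable.txt readonly.txt
chmod a-w readonly.txt
yes n | rm ./*      # stdin is a pipe, so rm never prompts: both files are removed

touch writable.txt readonly.txt
chmod a-w readonly.txt
yes n | rm -i ./*   # -i forces a prompt per file; each "n" answer keeps the file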
On POSIX systems, the read-only state of a file does not by itself prevent it from being removed by rm.
You haven't said what your shell is, but perhaps you have an alias for rm that does ask you for confirmation when the file is read-only, and that alias behaves differently when its stdin is part of a pipe.
The problem is that you only need write permission on the folder, not on the files, to remove them:
(From here)
Any attempt to access a file's data requires read permission. Any attempt to modify a file's data requires write permission. Any attempt to execute a file (a program or a script) requires execute permission.
In *nix systems directories are also files and thus use the same permission system as for regular files. Note permissions assigned to a directory are not inherited by the files within that directory.
Because directories are not used in the same way as regular files, the permissions work slightly (but only slightly) differently. An attempt to list the files in a directory requires read permission for the directory, but not on the files within. An attempt to add a file to a directory, delete a file from a directory, or rename a file all require write permission for the directory, but (perhaps surprisingly) not for the files within. Execute permission on a directory does not mean running it as a program (a directory can't be one); instead, that bit grants search permission: the right to traverse the directory and access the entries inside it.
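A small illustration of that last point, with hypothetical paths (run as a regular user in a scratch directory):

mkdir demo && touch demo/readonly.txt
chmod 444 demo/readonly.txt   # the file itself is read-only
rm -f demo/readonly.txt       # succeeds: demo/ is writable, so the entry goes
touch demo/file.txt
chmod 555 demo                # now the directory is not writable
rm -f demo/file.txt           # fails with "Permission denied" despite -f
chmod 755 demo                # restore write access for cleanup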
To find files with specific permissions you can use
find . -perm <mode>
To remove files found by find you can use
find . -perm 444 -exec /bin/rm {} \;
(This may need slight adjustment depending on the files you are searching for and the system you are on; note that -perm 444 matches the permission bits exactly.)
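Closer to the original goal of keeping only the read-only files, a sketch using GNU find's symbolic -perm test (verify with -print before adding the rm):

find . -maxdepth 1 -type f -perm -u+w -exec rm {} +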

rm -r -f doesn't delete inner folder

I have large projects and some scripts to compile them. I can't include all the code here, so I'll try to simplify the problem: in the cleanup step, I need to clean a folder named directory, which contains another directory named innerDir. I have this rule for cleaning directory:
clean:
	rm -r -f directory
Here directory is a folder that I created with mkdir -p beforehand. When I clean, I get this error:
rm: cannot remove 'directory': Directory not empty
But when I try to enter directory, I see that it's empty. So for debugging, I modified my cleanup step to:
rm -r -f directory/*
find directory
rmdir directory
(It's supposed to do the same thing, but this way I also get the chance to see whether all the content of directory was really deleted.)
Now I get this error:
find: 'directory/innerDir': Permission denied
There are two things that are unclear to me here:
(1) innerDir was created with mkdir -p before the cleanup step, without any later change to its permissions. Why don't I have permission to delete it?
(2) If I try to clean again, the cleanup succeeds and I don't have any permission problem. If I got a permission error the first time I tried to delete it, why don't I get one the second time?
If your permissions are valid down the directory tree, rm -fr directory ought to work.
If you don't have read access on innerDir, is it possible (or likely, depending on the running processes) that something wrote into innerDir but its files were cleaned up afterwards, so that the directory ended up empty?
Can you give examples of permissions, ownership, and some scope of the operations happening between each step?
Could you rename the parent folder while working, and/or lock its permissions to prevent other users or processes from altering things?
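Along those lines, a short diagnostic sketch to run right before the failing clean (directory names from the question; lsof may not be installed everywhere):

ls -laR directory                # who owns innerDir, and what are its modes?
lsof +D directory 2>/dev/null    # is some process still holding files under it?
rm -rfv directory                # -v shows exactly which entry fails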

How to change the permission of a directory inside the .tar.gz file? [duplicate]

Is there a way to chmod 777 the contents of a tar file upon creation (or shortly thereafter) before distributing it? The write permissions of the directory being tarred are unknown at the time of tarring (but are typically 555). I would like the unpacked directory to be world-writable without the users who are unpacking the tar having to remember to run chmod -R 777 <untarred dir> before proceeding.
The clumsy way would be to make a copy of the directory and then chmod -R 777 <copydir>, but I was wondering if there is a better solution.
I'm on a Solaris 10 machine.
BACKGROUND:
The root directory is in our ClearCase vob with specific file permissions applied recursively. A tarfile is created and distributed to multiple "customers" within our org. Most only need read/execute permissions (and specifically DON'T want them writable), but one group in particular needs their copy to be recursively writable, since they may edit these files or even restore back to a "fresh" copy (i.e., in the original state I gave them).
This group is somewhat technically challenged. Even though they have instructions on the "how-to's" of the tarfile, they always seem to forget (or get wrong) the setting of the files to be recursively writable once untarred. This leads to phone calls to me to diagnose a variety of problems where the root cause is that they forgot to do (or did incorrectly) the chmod'ing of the unrolled directory.
And before you ask, yes, I wrote them a script to untar/chmod (specific just for them), but... oh never mind.
So, I figured I'd create a separate, recursively-writable version of the tar to distribute just to them. As I said originally, I could always make a copy of the dir, make the copy recursively writable, and then tar up the copy, but the dir is fairly large and disk space is sometimes near full (it can vary greatly), so making a copy of the dir will not be feasible 100% of the time.
With GNU tar, use the --mode option when creating the archive, e.g.:
tar cf archive.tar --mode='a+rwX' *
But note that when the archive is extracted, the umask is applied by default. So unless the user's umask is 000, the permissions will be narrowed at that point. However, the umask can be ignored by using the -p (--preserve-permissions) option, e.g.:
tar xfp archive.tar
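A hypothetical check that -p really bypasses the umask on extraction:

umask 022                               # a typical umask that strips group/world write
mkdir /tmp/without-p /tmp/with-p
tar xf  archive.tar -C /tmp/without-p   # modes filtered through the umask
tar xpf archive.tar -C /tmp/with-p      # stored a+rwX modes kept as-is
ls -ld /tmp/without-p/* /tmp/with-p/*   # compare the resulting permissions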
You can easily change the permissions on the files prior to your tar command, although I generally recommend never using 777 for anything except /tmp on a Unix system; it's more productive to use 755, or at worst 775, for directories. That way you're not letting the world write to your directories, which is generally advisable.
Most Unix users don't like to set permissions recursively because doing so sets the execute bit on files that should not be executable (configuration files, for instance). To avoid this, chmod has a symbolic mode, in which a capital X sets the execute bit only on directories and on files that already have execute set. Reading the man page on chmod should provide the details, but you could try this:
cd $targetdir; chmod -R u+rwX,a+rX .; tar zcvf $destTarFile .
Where your $targetdir is the directory you are tarring up and $destTarFile is the name of the tar file you're creating.
When you untar that tar file, an attempt is made to retain the permissions. Certain rules govern that process, of course: the uid and gid of the owner are only retained if root is doing the untarring; otherwise, they are set to the effective uid and gid of the current process.
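Putting those pieces together for the original Solaris 10 scenario, a sketch assuming GNU tar is available (often installed as gtar on Solaris) and with hypothetical output paths:

cd /vobs/project
# Read-only distribution: store the permissions exactly as they are (555).
gtar cf /dist/project-readonly.tar .
# Writable distribution for the one group: rewrite modes at archive time,
# so no writable copy of the directory is ever needed on disk.
gtar cf /dist/project-writable.tar --mode='a+rwX' .
# That group extracts with -p so their umask does not strip the write bits:
#   gtar xpf project-writable.tar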

rsync copies only the directory structure, not the files

I am using rsync to copy tarballs to an external hard drive on a Windows XP machine.
My files are tar.gz files (perms 600) in a directory (perms 711).
However, when I do a dry-run, only the folders are returned, the files are ignored.
I use rsync a lot, so I presume there is no issue with my installation.
I have tried changing the permissions of the files, but this makes no difference.
The owner of the files is root, which is also the user the script logs in as.
I am not using rsync's -C (--cvs-exclude) option.
The command I am using is:
rsync^
-azvr^
--stats^
--progress^
-e 'ssh -p 222' root@servername:/home/directory/ ./
Is there something I am missing to get my files copied over?
I can think of only a single possibility: my experience with rsync is that it creates the directory structure before copying files in. Rsync may be terminating prematurely, but after this directory step has completed.
Update0
You mentioned that you were running a dry run. By default, rsync shows only the directory name when the directory and all of its contents are absent on the receiver.
After a lot of experimentation, I'm only able to reproduce the behaviour you describe if the directories on the source have later modification dates than on the receiver. In this instance, the modification times are adjusted on the receiver.
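To see exactly what rsync thinks differs between the two sides, a dry run with itemized changes can help (same connection details as in the question):

rsync -ani -e 'ssh -p 222' root@servername:/home/directory/ ./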
I had this problem too, and it turns out that when backing up from Linux to a Windows drive, the temporary files rsync transfers don't seem to get moved into place afterwards.
Try adding the --inplace flag when rsyncing to Windows drives.
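Applied to the command from the question, that would look something like this (untested sketch; --inplace writes updated data directly to the destination files instead of renaming a temporary file over them):

rsync -azv --stats --progress --inplace -e 'ssh -p 222' root@servername:/home/directory/ ./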
