Deleting files from a list using shell - shell

I am a beginner with AppleScript and shell scripting, and am writing a script that at a certain point requires me to delete files that are listed within a .txt file. I have searched extensively on Stack Overflow and was able to come up with the following command, which I run from within my AppleScript:
do shell script "while read name; do
rm -r \"$name"\
done < ~Documents/Script\\ Test/filelist.txt"
It seems to recognize and read the file, but I get an error that says this and I cannot figure out why:
error "rm: ~/Documents/Script Test/filetodelete.rtf: No such file or directory" number 1
That said, I can navigate to that exact directory and verify that a file by that name with that extension does indeed exist. Can someone help shed some light on why this error might be occurring?

You have a typo. The path to the file is most probably ~/Documents, not ~Documents (which in Bash would be the home directory of a user whose account name is Documents).
If your shell is not Bash, it might not even support ~ for $HOME.
In the data file, you also cannot use ~ to refer to your home directory. You could augment the loop with a simple substitution to support this:
while read -r file; do
  case $file in '~'*) file=$HOME${file#\~};; esac
  rm -r "$file"
done < ~/"Documents/Script Test/filelist.txt"
Notice also the use of read -r to avoid some pesky problems with the legacy default behavior of read.
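For a quick illustration of the difference (a made-up input line, not from the question):
printf 'a\\ b\n' | while read name; do echo "$name"; done      # prints: a b
printf 'a\\ b\n' | while read -r name; do echo "$name"; done   # prints: a\ b
Without -r, read treats the backslash as an escape character and eats it; with -r, the line comes through verbatim.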

Create file, but fail if it exists, with bash [duplicate]

In system call open(), if I open with O_CREAT | O_EXCL, the system call ensures that the file will only be created if it does not exist. The atomicity is guaranteed by the system call. Is there a similar way to create a file in an atomic fashion from a bash script?
UPDATE:
I found two different atomic ways:
Use set -o noclobber. Then you can use the > operator atomically.
Just use mkdir; mkdir is atomic.
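A minimal sketch of the mkdir variant (the lock path is made up for illustration):
if mkdir /tmp/myscript.lock 2>/dev/null; then
  echo "lock acquired"
else
  echo "lock already held" >&2
  exit 1
fi
mkdir performs the existence check and the creation in a single system call, which is what makes it atomic.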
A 100% pure bash solution:
set -o noclobber
{ > file ; } &> /dev/null
This command creates a file named file if no file by that name exists. If the file already exists, it does nothing (but returns a non-zero exit code).
Pros of > over the touch command:
Doesn't update timestamp if file already existed
100% bash builtin
Return code as expected: fail if file already existed or if file couldn't be created; success if file didn't exist and was created.
Cons:
you need to set the noclobber option (fine in a script, as long as you're careful with other redirections, or you can unset it afterwards).
I guess this solution is really the bash counterpart of the open system call with O_CREAT | O_EXCL.
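If you'd rather not leave noclobber set for the rest of the script, a variation (my suggestion, not part of the answer above) is to scope it inside a subshell:
if ( set -o noclobber; > file ) 2>/dev/null; then
  echo "created file"
else
  echo "file already existed" >&2
fi
The option only applies inside the parentheses, so the surrounding script's redirections are unaffected.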
Here's a bash function using the mv -n trick:
function mkatomic() {
  f="$(mktemp)"
  mv -n "$f" "$1"
  if [ -e "$f" ]; then
    rm "$f"
    echo "ERROR: file exists:" "$1" >&2
    return 1
  fi
}
Examples:
$ mkatomic foo
$ wc -c foo
0 foo
$ mkatomic foo
ERROR: file exists: foo
You could create it under a randomly-generated name, then rename it into place under the desired name (mv -n random desired). The rename will fail if the file already exists.
Like this:
#!/bin/bash
touch randomFileName
mv -n randomFileName lockFile
if [ -e randomFileName ] ; then
  echo "Failed to acquire lock"
else
  echo "Acquired lock"
fi
Just to be clear, ensuring the file will only be created if it doesn't exist is not the same thing as atomicity. The operation is atomic if and only if, when two or more separate threads attempt to do the same thing at the same time, exactly one will succeed and all others will fail.
The best way I know of to create a file atomically in a shell script follows this pattern (and it's not perfect):
create a file that has an extremely high chance of not existing (using a decent random number selection or something in the file name), and place some unique content in it (something that no other thread would have - again, a random number or something)
verify that the file exists and contains the contents you expect it to
create a hard link from that file to the desired file
verify that the desired file contains the expected contents
In particular, touch is not atomic, since it will create the file if it's not there, or simply update the timestamp. You might be able to play games with different timestamps, but reading and parsing a timestamp to see if you "won" the race is harder than the above. mkdir can be atomic, but you would have to check the return code, because otherwise, you can only tell that "yes, the directory was created, but I don't know which thread won". If you're on a file system that doesn't support hard links, you might have to settle for a less ideal solution.
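Here is a rough sketch of that hard-link pattern (file names are made up, and the content-verification steps are elided, so treat it as an outline rather than a drop-in solution):
tmp="/tmp/lock.$$.$RANDOM"
echo "$$" > "$tmp"                          # unique content: our PID
if ln "$tmp" /tmp/desired.lock 2>/dev/null; then
  echo "we won the race"
else
  echo "another process got there first" >&2
fi
rm -f "$tmp"                                # the temporary name is no longer needed
ln fails if the target name already exists, and that check-and-create happens atomically in the file system.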
Another way to do this is to use umask to try to create the file and open it for writing, without creating it with write permissions, like this:
LOCK_FILE=only_one_at_a_time_please
UMASK=$(umask)
umask 777
echo "$$" > "$LOCK_FILE" || exit 1   # fails if the lock file already exists and is unwritable
umask "$UMASK"
trap "rm '$LOCK_FILE'" EXIT
If the file is missing, the script will succeed at creating and opening it for writing, despite the file being created without writing permissions. If it already exists, the script won't be able to open the file for writing. It would be possible to use exec to open the file and keep the file descriptor around.
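As a quick demonstration of the idea (assuming a non-root shell; the file name is arbitrary):
$ umask 777; echo $$ > lockfile
$ ls -l lockfile
---------- 1 user user 6 ... lockfile
$ echo $$ > lockfile
bash: lockfile: Permission denied
The first redirection creates the file with no permissions at all; the second fails because the file exists and is not writable.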
rm requires you to have write permission on the directory itself, regardless of the file's own permissions.
touch is the command you are looking for. It updates timestamps of the provided file if the file exists or creates it if it doesn't.

bash is zipping entire home

I am trying to back up all the world* folders from /home/mc/server/ and drop the zipped file in /home/mc/backup/:
#!/bin/bash
moment=$(date +"%Y%m%d%H%M")
backup="/home/mc/backup/map$moment.zip"
map="/home/mc/server/world*"
zipping="zip -r -9 $backup $map"
eval $zipping
The zipped file is created in the backup folder as expected, but when I unzip it, it contains the entire /home dir. I am running this bash script in two ways:
Manually
Using user's crontab
Finally, if I echo $zipping, it correctly prints the command I need to run. What am I missing? Thank you in advance.
There's no reason to use eval here (and no, justifying it on DRY grounds, if you want to both log a command line and subsequently execute it, does not count as a good reason IMO).
Define a function and call it with the appropriate arguments:
#!/bin/bash
moment=$(date +"%Y%m%d%H%M")
zipping () {
  output=$1
  shift
  zip -r -9 "$output" "$@"
}
zipping "/home/mc/backup/map$moment.zip" /home/mc/server/world*
(I'll admit, I don't know what is causing the behavior you report, but it would be better to confirm it is not somehow specific to the use of eval before trying to diagnose it further.)
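If the goal was simply to build the command once and then run it, a bash array does that without eval (a sketch reusing the paths from the question):
moment=$(date +"%Y%m%d%H%M")
zipping=(zip -r -9 "/home/mc/backup/map$moment.zip" /home/mc/server/world*)
echo "${zipping[@]}"    # log the exact command
"${zipping[@]}"         # run it
The glob expands once, at assignment time, and the quoting survives intact.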

Bash script to mkdir

I am trying to create a directory based on a variable entered by a user and then save files there.
I thought this would be simple enough for me but I get an error message "No such file or directory" when I try to save there.
When I hit "ls" it lists the directory with a "?" after it.
I am working with an .sh script on a Mac terminal.
Relevant code:
#get user input
echo "enter the collection number"
read COLLECTION
#create the directory
mkdir "$COLLECTION"dir
#calculate a checksum and save it to the above directory
sudo openssl md5 /dev/disk1 > "$COLLECTION"dir/md5.txt
Check your script to see if it has DOS-style line endings (\r\n). You can safely run dos2unix on the script if you aren't sure.
The ? you see in the file name may actually be the carriage return at the end of the line (since Bash doesn't treat that as whitespace).
So "$COLLECTION"dir/ doesn't exist; "$COLLECTION"dir\r/ does.
Edit:
Vi usually does a good job showing you what those special characters are.
ls | vi -
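If you don't have dos2unix handy, these will reveal (and fix) DOS line endings; script.sh stands in for your script's actual name:
file script.sh                       # reports "... with CRLF line terminators" if affected
cat -v script.sh | head              # CRs show up as ^M at the ends of lines
tr -d '\r' < script.sh > fixed.sh    # roughly what dos2unix does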
The only piece of this code likely to give you a "No such file or directory" error is the last line. Does /dev/disk1 exist on your machine?
I use mkdir -p when I get that error ;)

Can a shell script indicate that its lines be loaded into memory initially?

UPDATE: this is a repost of How to make shell scripts robust to source being changed as they run
This is a little thing that bothers me every now and then:
I write a shell script (bash) for a quick and dirty job
I run the script, and it runs for quite a while
While it's running, I edit a few lines in the script, configuring it for a different job
But the first process is still reading the same script file and gets all screwed up.
Apparently, the script is interpreted by loading each line from the file as it is needed. Is there some way that I can have the script indicate to the shell that the entire script file should be read into memory all at once? For example, Perl scripts seem to do this: editing the code file does not affect a process that's currently interpreting it (because it's initially parsed/compiled?).
I understand that there are many ways I could get around this problem. For example, I could try something like:
cat script.sh | sh
or
sh -c "`cat script.sh`"
... although those might not work correctly if the script file is large and there are limits on the size of stream buffers and command-line arguments. I could also write an auxiliary wrapper that copies a script file to a locked temporary file and then executes it, but that doesn't seem very portable.
So I was hoping for the simplest solution that would involve modifications only to the script, not the way in which it is invoked. Can I just add a line or two at the start of the script? I don't know if such a solution exists, but I'm guessing it might make use of the $0 variable...
The best answer I've found is a very slight variation on the solutions offered to How to make shell scripts robust to source being changed as they run. Thanks to camh for noting the repost!
#!/bin/sh
{
  # Your stuff goes here
  exit
}
This ensures that all of your code is parsed initially; note that the 'exit' is critical to ensuring that the file isn't accessed later to see if there are additional lines to interpret. Also, as noted on the previous post, this isn't a guarantee that other scripts called by your script will be safe.
Thanks everyone for the help!
Use an editor that doesn't modify the existing file, and instead creates a new file then replaces the old file. For example, using :set writebackup backupcopy=no in Vim.
How about a solution that changes how you edit it?
If the script is running, before editing it, do this:
mv script script-old
cp script-old script
rm script-old
Since the shell keeps the file open, everything will work okay as long as you don't change the contents of the open inode.
The above works because mv will preserve the old inode while cp will create a new one. Since a file's contents will not actually be removed if it is opened, you can remove it right away and it will be cleaned up once the shell closes the file.
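You can watch the inode behavior with ls -i (illustrative file name):
ls -i script              # note the inode number
mv script script-old      # same inode, new name
cp script-old script      # "script" now names a brand-new inode
rm script-old             # safe: the running shell still holds the old inode open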
According to the bash documentation if instead of
#!/bin/bash
body of script
you try
#!/bin/bash
script=$(cat <<'SETVAR'
body of script
SETVAR
)
eval "$script"
then I think you will be in business.
Consider creating a new bang path for your quick-and-dirty jobs. If you start your scripts with:
#!/usr/local/fastbash
or something, then you can write a fastbash wrapper that uses one of the methods you mentioned. For portability, one can just create a symlink from fastbash to bash, or have a comment in the script saying one can replace fastbash with bash.
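A hypothetical wrapper along those lines might look like this (an untested sketch; the file name and approach are only the answer's suggestion):
#!/bin/bash
# /usr/local/fastbash (sketch): slurp the whole script into memory, then run it
path=$1; shift
script=$(cat "$path") || exit 1
exec bash -c "$script" "$path" "$@"
Since the script's text lives in a shell variable from that point on, later edits to the file can't affect the running copy.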
If you use Emacs, try M-x customize-variable break-hardlink-on-save. Setting this variable will tell Emacs to write to a temp file and then rename the temp file over the original instead of editing the original file directly. This should allow the running instance to keep its unmodified version while you save the new version.
Presumably, other semi-intelligent editors would have similar options.
A self contained way to make a script resistant to this problem is to have the script copy and re-execute itself like this:
#!/bin/bash
if [[ $0 != /tmp/copy-* ]] ; then
  rm -f /tmp/copy-$$
  cp "$0" /tmp/copy-$$
  exec /tmp/copy-$$ "$@"
  echo "error copying and execing script"
  exit 1
fi
rm "$0"
# rest of script...
(This will not work if the original script's path begins with /tmp/copy-.)
(This is inspired by R Samuel Klatchko's answer)

Running bash shell in Maemo

I have attempted to run the following bash script on my internet tablet (a Nokia N810 running Maemo Linux). However, it doesn't seem to run, and I have no clue what's wrong with it (it runs on my Ubuntu system if I change the directories). It would be great to get some feedback on this or to hear of similar experiences with this issue. Thanks.
WORKING="/home/user/.gpe"
SVNPATH="/media/mmc1/gpe/"
cp calendar categories contacts todo $WORKING
What actually happens when you run your script? It helps if you include details of error messages, or of behavior that differs from what you expected and in what way.
If $WORKING contains the name of a directory, hidden or not, then the cp should copy those four files into it. Then ls -l /home/user/.gpe should show them plus whatever else is in there, regardless of whether it's "hidden".
By the way, the initial dot in a file or directory name doesn't really "hide" the entry, it's just that ls and echo * and similar commands don't show them, while these do:
ls -la
ls -d .*
ls -d {.*,*}
echo .*
echo {.*,*}
The cp command can copy multiple sources to a single destination, if that destination is a directory.
Does the directory /home/user/.gpe exist?
Bear in mind that the leading dot in the name can make it hidden unless you use ls -a
I tried your commands in Cygwin, except that I used .gpe instead of /home/user/.gpe and did a touch calendar categories contacts todo to create the files. It worked fine.
If that's the entirety of your script, it's missing two, possibly three, things (a corrected sketch follows this list):
A shebang line, such as #!/bin/sh at the start
Use of $SVNPATH. You probably want to cd "$SVNPATH" before the cp command; your script should not assume the current working directory is correct.
Possibly execute permission on the script: chmod a+x script
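Putting those together, the script might look like this (paths are taken from the question; that SVNPATH is where those four files live is my assumption):
#!/bin/sh
WORKING="/home/user/.gpe"
SVNPATH="/media/mmc1/gpe/"
cd "$SVNPATH" || exit 1
cp calendar categories contacts todo "$WORKING"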
Do you already have the /home/user/.gpe directory present? And also, try adding a -R parameter so that the directories are copied recursively.
