Error handling in shell script

I have the following shell script, which throws the error below if no files exist in the folder. How do we handle this so that the script doesn't stop executing?
Error:
mv: cannot stat `*': No such file or directory
Script:
for file in *
do
    fl=$(basename "$file")
    flname="${fl%%.*}"
    gunzip "$file"
    mv "$flname" "$flname-NxDWeb2"
    tar -zcf "$flname-NxDWeb2".tar.gz "$flname-NxDWeb2"
    rm "$flname-NxDWeb2"
done

If the shell is bash, you can allow * to expand to nothing when it matches no files: run shopt -s nullglob before your loop.
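Applied to your loop, that would look something like this (a minimal sketch; the loop body stays exactly as you wrote it):
shopt -s nullglob        # an unmatched * now expands to an empty list
for file in *
do
    :                    # original loop body goes here, unchanged
done
With nullglob set, the loop simply never runs in an empty directory, so mv and friends are never invoked on a literal *.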
BTW, you might want to explicitly specify the uncompressed filename to produce, in case your logic doesn't completely agree with gunzip's (which it probably won't, if there's more than one dot in the name, or the file ends with .tgz or .taz):
gunzip -c "$file" >"$flname"
(you will need to remove the original yourself in this case, though)
You can avoid the need to move, too:
flname="${fl%%.*}-NxDWeb2"
And you might want to use trap to ensure your temporaries are cleaned up in the failure case (possible make your temporaries in $TMPDIR, too).
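For example, a cleanup trap might look like this (a sketch only; the scratch-directory name is illustrative, not part of the original script):
workdir=$(mktemp -d "${TMPDIR:-/tmp}/repack.XXXXXX")   # hypothetical scratch dir
trap 'rm -rf "$workdir"' EXIT                          # removed even if the script fails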

Related

How to make folders for individual files within a directory via bash script?

So I've got a movie collection that's dumped into a single folder (I know, bad practice in retrospect). I want to organize things a bit so I can use Radarr to grab all the appropriate metadata, but I need all the individual files in their own folders. I created the script below to try and automate the process a bit, but I get the following error.
Script:
#!/bin/bash
for f in /the/path/to/files/*; do
    [[ -d $f ]] && continue
    mkdir "${f%.*}"
    mv "$f" "${f%.*}"
done
EDIT
So I've now run the script through Shellcheck.net per the suggestion of Benjamin W. It doesn't throw any errors according to the site, though I still get the same errors when I try running the command.
EDIT 2
No errors now, but the script does nothing when executed.
Assignments are evaluated only once, and not whenever the variable being assigned to is used, which I think is what your script assumes.
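A quick illustration of that point (hypothetical values):
f=/movies/a.mkv
dir=${f%.*}        # evaluated once, right here: dir=/movies/a
f=/movies/b.mkv    # reassigning f later does NOT update dir
echo "$dir"        # still prints /movies/a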
You could use a loop like this:
for f in /path/to/all/the/movie/files/*; do
    mkdir "${f%.*}"
    mv "$f" "${f%.*}"
done
This uses parameter expansion instead of cut to get rid of the file extension.
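That is, ${f%.*} strips the shortest trailing .suffix (illustrative filename):
f='/the/path/to/files/Some Movie (2019).mkv'
echo "${f%.*}"     # -> /the/path/to/files/Some Movie (2019)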

Are glob expressions subject to caching? How can a refresh be forced?

My script downloads files from a web server in an infinite loop. It calls wget to get the newest files (ones I haven't gotten before), and then each new file needs to be processed. The problem is that after running wget, the files have been properly downloaded (based on an ls in a separate window), but sometimes my script (specifically, the line beginning for curFile in) sees them and sometimes it doesn't, which makes me think it is sometimes looking at an outdated cache.
while [ 5 -lt 10 ]; do
    timestamp=$(date +%s)
    wget -mbr -l0 --no-use-server-timestamps --user=username --password=password ftp://ftp.mysite.com/public_ftp/incoming/*.txt
    for curFile in ftp.mysite.com/public_ftp/incoming/*.txt; do
        curFileMtime=$(stat -c %W "$curFile")
        if ((curFileMtime > timestamp)); then
            echo "$curFile"
            cp "$curFile" CommLink/MDLFile
            cd CommLink
            SendMDLGetTab
            cd ..
        fi
    done
    sleep 120
done
The first few times through the loop this seems to work fine, then it becomes sporadic afterwards (sometimes it sees the new files and sometimes it doesn't). I've done a lot of googling, and found that bash does cache pathnames for use in running executables (so sometimes it tries to execute things that aren't there, if the executable has been recently removed) but I haven't found anything on caching non-executable filenames, which would result in it not seeing things that are there. Any ideas? If it is a caching problem, how can I force it to not look at the cache?
As the most immediate issue -- the -b argument to wget tells it to run in the background. Thus, with this flag set, the first subsequent command takes place while wget is still running.
Beyond that: Results of glob expressions -- such as ftp.mysite.com/public_ftp/incoming/*.txt -- are not cached by the shell. However, this glob is only evaluated once per loop: If a new text file isn't present at the start of that loop, it won't be picked up until the next iteration.
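You can see the once-per-statement expansion in isolation (an illustrative sketch, not from the question):
for f in *.txt; do      # the glob is expanded here, exactly once
    sleep 60            # a .txt file created during this minute...
    echo "$f"           # ...will not show up in this loop's list
done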
However, the mechanism the code in the question uses for excluding files already present before wget was run is prone to race conditions. I would suggest the following instead:
while IFS= read -r -d '' filename; do
    [[ "$filename" -nt CommLink/MDLFile ]] || continue   # retest in case MDLFile has changed
    cp -- "$filename" CommLink/MDLFile && {
        touch -r "$filename" CommLink/MDLFile   # copy mtime to destination
        (cd CommLink && exec SendMDLGetTab)     # scope cd to subshell
    }
done < <(find ftp.mysite.com/public_ftp/incoming/ \
            -name '*.txt' \
            -newer CommLink/MDLFile \
            -print0)
Some of the finer points:
The code above compares timestamps to the current copy of MDLFile, rather than to the beginning of the current iteration of the loop. This is more robust in terms of ensuring that updates are processed if a prior invocation of this script was interrupted after the wget but before the cp.
Using touch -r ensures that the new copy of MDLFile retains the mtime of the original. (One might replace the cp with ln -f to hardlink the inode to get this same effect without any race conditions and while only storing MDLFile once on-disk, if the side effects are acceptable).
The code above only performs operations intended to be run inside a subdirectory if the cd into that subdirectory succeeded, and scopes that cd by performing operations intended to be performed in a separate directory in a subshell. (The cost of this subshell is offset by using exec when running the external command its ultimate intent is to trigger).
Using a NUL-delimited stream, as in find -print0 | while IFS= read -r -d '' filename, ensures that all possible names -- including names with literal newlines -- can be correctly handled.
Whether timestamps are stored at or beyond integer-level resolution varies by filesystem; however, bash only supports integer math. The above -- wherein no numeric comparisons are performed by the shell -- is thus somewhat more robust.
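On that last point: the %W format used in the question asks stat for the file's birth time, which many filesystems simply report as 0; %Y (the mtime) is more portable, but is still whole seconds as far as bash arithmetic is concerned (a sketch reusing the question's variable names):
curFileMtime=$(stat -c %Y "$curFile")      # mtime in whole seconds, e.g. 1449262800
((curFileMtime > timestamp)) && echo "$curFile is newer"
# two files modified within the same second compare as equal here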

Prevent expansion of `~`

I have a script which sync's a few files with a remote host. The commands that I want to issue are of the form
rsync -avz ~/.alias user@y:~/.alias
My script looks like this:
files=(~/.alias ~/.vimrc)
for file in "${files[@]}"; do
    rsync -avz "${file}" "user@server:${file}"
done
But the ~ always gets expanded and in fact I invoke the command
rsync -avz /home/user/.alias user@server:/home/user/.alias
instead of the one above. But the path to the home directory is not necessarily the same locally as it is on the server. I can use e.g. sed to replace this part, but it gets extremely tedious to do this for several servers with all different paths. Is there a way to use ~ without it getting expanded while the script runs, such that rsync still understands that ~ means the home directory?
files=(~/.alias ~/.vimrc)
The paths are already expanded at this point. If you don't want that, escape them or quote them.
files=('~/.alias' \~/.vimrc)
Of course, then you can't use them, because you prevented the shell from expanding '~':
~/.alias: No such file or directory
You can expand the tilde later in the command using eval (always try to avoid eval though!) or a simple substitution:
for file in "${files[#]}"; do
rsync -avz "${file/#\~/$HOME/}" "user#server:${file}"
done
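For completeness, the eval variant would look something like this (prefer the substitution above; this is only a sketch and assumes the array holds simple paths without spaces or other metacharacters):
for file in "${files[@]}"; do
    eval "local_path=$file"     # the tilde is expanded when the assignment is re-parsed
    rsync -avz "$local_path" "user@server:${file}"
done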
You don't need to loop; you can just do:
rsync -avz ~/.alias/* 'user@y:~/.alias/'
EDIT: You can do:
files=(.alias .vimrc)
for file in "${files[@]}"; do
    rsync -avz ~/"${file}" 'user@server:~/'"${file}"
done

If-else-statement is working wrong in crontab?

When I use this:
*/5 6-18 * * 1-6 [ "$(ls -A /DIR_WHERE_FILES_ARE_OR_NOT/)" ] &&
rsync -au /DIR_WHERE_FILES_ARE_OR_NOT/ /DIR_WHERE_FILES_SHOLD_GO; \
mv /DIR_WHERE_FILES_ARE_OR_NOT/* /SAVE_DIR/ ||
mail -s "DIR IS EMPTY" myemail#klkldkl.de <<< "message"
I get two mails:
mv: cannot stat `/DIR_WHERE_FILES_ARE_OR_NOT/*': No such file or directory
and
"DIR IS EMPTY"
Why?
You get
mv: cannot stat `/DIR_WHERE_FILES_ARE_OR_NOT/*': No such file or directory
for exactly the reason stated: that directory is empty, hence it does not contain a file named * (asterisk). It's just the way glob expansion works in the shell: if the glob doesn't match anything, it is passed literally to the command. Since mv attempts to rename a non-existing file, it complains as shown.
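You can watch the literal pass-through happen (hypothetical empty directory):
mkdir -p /tmp/emptydir
echo /tmp/emptydir/*    # prints /tmp/emptydir/* verbatim: the glob matched nothing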
This would all be much more readable if, instead of a sequence of && and || operators in a crontab, you placed the whole logic in a script with equivalent if/else/fi constructs and just called the script from cron.
You get two mails because you explicitly send the first with mail -s. The second is from cron because the output on stderr and stdout is not empty.
Your commands are equivalent to
if [ "$(ls ...)" ]; then
rsync
fi
if ! mv; then
mail
fi
Note that there is no else.
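If you want an actual else, you have to spell it out, e.g. (a sketch of the logic the crontab entry seems to intend):
if [ "$(ls -A /DIR_WHERE_FILES_ARE_OR_NOT/)" ]; then
    rsync -au /DIR_WHERE_FILES_ARE_OR_NOT/ /DIR_WHERE_FILES_SHOLD_GO &&
        mv /DIR_WHERE_FILES_ARE_OR_NOT/* /SAVE_DIR/
else
    mail -s "DIR IS EMPTY" myemail@klkldkl.de <<< "message"
fi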
As user Jens already mentioned, and also from my own experience: unless you are running a single, very simple command, you should stick to script files. So, in your case, I would go with a script file. I'll give you an example.
#!/bin/bash

dir_where_files_are_or_not=/filespath
dir_where_files_should_go=/another/filespath
save_dir=/savefiles/path

# ok, let's start by checking whether the dir contains files
if [ "$(ls -A "$dir_where_files_are_or_not")" ]; then
    # dir contains files, so let's rsync and mv them
    rsync -au "$dir_where_files_are_or_not"/ "$dir_where_files_should_go"
    mv "$dir_where_files_are_or_not"/* "$save_dir"
else
    # dir is empty, let's send an email
    mail -s "DIR IS EMPTY" myemail@klkldkl.de <<< "message"
fi
Now, just put this code in a file, give it a name (for example "chkfiles"), and save it in a directory (I use /usr/local/sbin for all of my scripts).
Next, in a shell, run the command chmod +x /usr/local/sbin/chkfiles to make the file executable. Then add the script to your crontab.
I would suggest the following line inside crontab:
*/5 6-18 * * 1-6 /bin/bash /usr/local/sbin/chkfiles
I used /bin/bash to call the right interpreter for this script. It should now work as expected.
Important Notes:
Before running the script, you need to change the dir_where_files_are_or_not, dir_where_files_should_go and save_dir vars to your needs.
Do NOT include trailing slashes in the dirs; otherwise, the rsync and mv might not do what you really want (see the example below).
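The trailing slash matters because rsync treats it specially (general rsync behavior, shown with illustrative paths):
rsync -au /src/dir  /dest/    # copies the directory itself  -> /dest/dir/...
rsync -au /src/dir/ /dest/    # copies only its contents     -> /dest/...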
Regards
You get two mails because when mv fails, cron captures what is written to standard error and mails it to the owner, then runs the mail command. You can suppress the error message from mv to avoid the mail from cron.
mv /DIR_WHERE_FILES_ARE_OR_NOT/* /SAVE_DIR/ 2> /dev/null || mail -s "DIR IS EMPTY" myemail@klkldkl.de <<< "message"

Bash shell: how to add a name

I am trying to rename some zip files in bash with an _orig suffix, but I seem to be missing something. Any suggestions?
My goal:
move files to an orig directory
rename original files with a "_orig" in the name
The code I've tried to write:
mv -v $PICKUP/*.zip $ORIGINALS
for origfile in $(ls $ORIGINALS/*.zip); do
    echo "Adding _orig to zip file"
    echo
    added=$(basename $origfile '_orig').zip
    mv -v $ORIGINALS/$origfile.zip $ORIGINALS/$added.zip
done
Sorry, still kinda new at this.
Using (p)rename:
cd <ZIP DIR>
mkdir -p orig
rename 's#(.*?)\.zip#orig/$1_orig.zip#' *.zip
This rename is the Perl one from http://search.cpan.org/~pederst/rename/ (the default on many distros).
Also, take care never to use
    for i in $(ls $ORIGINALS/*.zip); do
but to use globs instead:
    for i in $ORIGINALS/*.zip; do
See http://porkmail.org/era/unix/award.html#ls.
I know you've got a solution already, but just for posterity, this simplified version of your own shell script should also work for the case you seem to be describing:
mkdir -p "$ORIGINALS"
for file in "$PICKUP"/*.zip; do
mv -v "$file" "$ORIGINALS/${file%.zip}_orig.zip"
done
This makes use of "Parameter Expansion" in bash (you can look that up in bash's man page). The initial mkdir -p simply ensures that the target directory exists. The quotes around $PICKUP and $ORIGINALS are intended to make it safe to include special characters like spaces and newlines in the directory names.
While prename is a powerful solution to many problems, it's certainly not the only hammer in the toolbox.
