Insert delay between lines of bash - bash

I have a very simple renaming script I'm running in OSX Terminal. It looks like this:
mv -nv /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1140122_alternate1.tif /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1140122_alternate1A.tif
I usually have several hundred lines of rename code like this one for all the files I have to rename.
However, I think the network security at work is messing with the code, because it will randomly jack up the file names. I think it's interrupting the commands; the code is so simple that I can't think of another reason why it wouldn't work.
I want to try adding a 1-second delay between each line, but how? I've read that something like sleep 1s might work, but do I have to add that between every single line? That's going to be a headache if so. If it is, is there another way?
UPDATE: I have a delay working but still getting the same problems as before. This is what Terminal returns:
mv -nv /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_alternate1.tif /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_alternate1A.tif
mv -nv /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_alternate2.tif /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_alternate2A.tif
mv -nv /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_alternate3.tif /Volumes/COMMON-LIC-PHOTO/DATA/James/Remv -nvest/1247136_alternate3A.tif
mv -nv /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_alternate4.tif /Volumes/COMMON-LIC-PHOTO/DATA/James/Remv -nTest/1247136_alternate4A.tif
mv -nv /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_lifestyle.tif /Volumes/COMMON-LIC-PHOTO/DATA/James/Renmv -nv /Volume36_lifestyleA.tif
mv -nv /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_standard.tif /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_standardA.tifç^C^C^C^C^C
It's throwing up all kinds of junk in the rename part. It's messing with the file names and the directory names and I can't figure out why.

If you are planning to perform all those mv commands from the terminal, you can make a bash alias:
alias mvd='sleep 2 && mv'
For a script, since non-interactive shells do not expand aliases by default, you can build a similar function at the beginning of your script instead:
function mvd { sleep 2 && mv "$@"; }
The only thing you need to do is to use the new mvd command instead of mv.
Tip: in the alias case, you can even name your alias mv (the same name as the command).
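With the function (or alias) defined, usage looks exactly like your original lines, just with mvd in place of mv:
mvd -nv /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1140122_alternate1.tif /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1140122_alternate1A.tif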

If you already have a script with hardcoded paths, e.g. the script looks like:
mv -nv /path1 /path2
mv -nv /path3 /path4
...
Then probably the simplest thing to do would be to define a function at the top of the script by adding:
mv() { command mv "$@"; sleep 1; }
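Here command mv bypasses the function lookup and runs the real mv binary, so the wrapper doesn't call itself recursively. A minimal sketch of the resulting script (paths are the placeholders from above):
#!/bin/bash
# shadow mv: run the real mv, then pause one second
mv() { command mv "$@"; sleep 1; }

mv -nv /path1 /path2
mv -nv /path3 /path4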

The following script just reads in your file of commands and executes a sleep after each one:
while IFS= read -r curr_line; do
    echo "curr_line: $curr_line"
    return_msg=$( $curr_line )   # execute cmd (relies on word splitting)
    # may want to do error checking on the exit status $? and $return_msg
    sleep 1
done < ./input_file_of_original_cmds.txt # read in that file
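Note that $curr_line is executed through plain word splitting, which breaks as soon as a path contains spaces or quotes. If your command file is trusted, one alternative (a sketch, only for input you control) is eval, which re-parses each line the way the shell originally would:
return_msg=$( eval "$curr_line" )   # honors quoting inside the line; trusted input only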

Related

How to make folders for individual files within a directory via bash script?

So I've got a movie collection that's dumped into a single folder (I know, bad practice in retrospect.) I want to organize things a bit so I can use Radarr to grab all the appropriate metadata, but I need all the individual files in their own folders. I created the script below to try and automate the process a bit, but I get the following error.
Script
#! /bin/bash
for f in /the/path/to/files/* ;
do
[[ -d $f ]] && continue
mkdir "${f%.*}"
mv "$f" "${f%.*}"
done
EDIT
So I've now run the script through Shellcheck.net per the suggestion of Benjamin W. It doesn't throw any errors according to the site, though I still get the same errors when I try running the command.
EDIT 2
No errors now, but the script does nothing when executed.
Assignments are evaluated only once, and not whenever the variable being assigned to is used, which I think is what your script assumes.
You could use a loop like this:
for f in /path/to/all/the/movie/files/*; do
mkdir "${f%.*}"
mv "$f" "${f%.*}"
done
This uses parameter expansion instead of cut to get rid of the file extension.
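To illustrate that expansion with a hypothetical filename: ${f%.*} removes the shortest suffix matching .*, i.e. just the final extension:
f=/movies/Some.Movie.2010.mkv
echo "${f%.*}"   # prints /movies/Some.Movie.2010 (only .mkv is stripped)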

In bash I'm building an update script, how to update the updater script

I am building a little script to update application files on a Raspberry Pi.
It will do the following:
Download a zip file of the application files
Unzip them
Copy each one to the right place, and make it executable etc. as needed.
The problem I'm having is that one of the files is updatescript.sh.
I've read that it is dangerous to update / change a bash script while it is executing. See Edit shell script while it's running
Is there a good way to achieve what I'm trying to do?
What you've read is badly overblown.
It's completely safe to overwrite a shell script in-place by mving a different file over it. When you do this, the old file handle is still valid, referring to the original unmodified file contents. What you can't safely do is edit the existing file in-place.
So, the below is fine (and is what all your OS-vendor update tools like RPM do in effect):
#!/usr/bin/env bash
tempfile=$(mktemp "$BASH_SOURCE".XXXXXX)
if curl https://example.com/whatever >"$tempfile" &&
   curl https://example.com/whatever.sig >"$tempfile.sig" &&
   gpgv "$tempfile.sig" "$tempfile"; then
    chown --reference="$BASH_SOURCE" -- "$tempfile"
    chmod --reference="$BASH_SOURCE" -- "$tempfile"
    sync # force your filesystem to fully flush file contents to disk
    mv -- "$tempfile" "$BASH_SOURCE" && rm -f -- "$tempfile.sig"
else
    rm -f -- "$tempfile" "$tempfile.sig"
    exit 1
fi
...whereas this is risky:
curl https://example.com/whatever >/usr/local/bin/whatever
So do the first thing, not the second: when downloading a new version of your script, write it to a different file, and only rename it over the original once the download has succeeded. That's what you want to do anyhow to ensure atomicity.
(There are also some demonstrations of code-signing validation practices above because, well, you need them when building an updater. You wouldn't be trying to distribute code via an automated download without verifying a signature, right? Because that's how one simple break-in to your web server results in every single one of your customers being 0wned. The above expects the public side of your code-signing keys to be in ~/.gnupg/trustedkeys.gpg, but you can put trustedkeys.gpg in any directory and point to it with the environment variable GNUPGHOME.)
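For instance (directory name hypothetical), pointing gpgv at a dedicated keyring directory for the verification step above would look like:
GNUPGHOME=/etc/myapp/gnupg gpgv "$tempfile.sig" "$tempfile"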
Even if you don't write your update code safely, the risk is still trivial to mitigate. If you move the body of your script into a function, such that it has to be completely read before any part of it can be executed, then there's no part of the file that isn't already read at the time when execution begins.
#!/usr/bin/env bash
main() {
echo "Logic all goes here"
}; { main "$@"; exit; }
Because { main "$@"; exit; } is part of a compound command, the parser reads the exit before it starts executing the main, so it's guaranteed that no further source-file content will be read after main exits, even if some future bash release didn't handle input line-by-line in the first place.
Basically do something along:
shouldbe="/tmp/$(basename "$0")"
if [ "$0" != "$shouldbe" ]; then
cp "$0" "$shouldbe"
exec env REALPATH="$0" "$shouldbe" "$@"
fi
Check if you are running from a temporary directory
If you are not, copy yourself and rerun from the temporary directory
You can even pass some variables/state along, by using environment variables or arguments. Then you can update yourself with a simple cp, as the old path isn't sourced (or even open) anymore.
cp "new_script_version.sh" "$REALPATH"
The script simply looks like this:
#!/bin/bash
# we need to be run from /tmp directory
shouldbe="/tmp/$(basename "$0")"
if [ "$0" != "$shouldbe" ]; then
cp "$0" "$shouldbe"
exec env REALPATH="$0" "$shouldbe" "$@"
fi
echo "Updatting...."
echo "downloading zip files"
echo "unziping zip files..."
echo "Copying each zip files etc."
cp directory"new_updatescript.sh "$REALPATH"
echo "Update succedded"
Live/test version available at tutorialspoint.
One would also want to add some flock locking to the scripts, just in case.
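A minimal sketch of such locking (lock-file path hypothetical): wrap the update body in a subshell that holds an exclusive flock, so two overlapping runs can't clobber each other mid-update:
(
    flock -n 9 || { echo "another update is already running"; exit 1; }
    # ... download/unzip/copy steps from above go here ...
) 9>/tmp/updatescript.lock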

How to backup filesystem with tar using a bash script?

I want to back up my Ubuntu filesystem, and I wrote this little script. It is very basic, but being my first try I am afraid of making mistakes. And since it will take a few hours to see results, I think it is better to ask you, as experienced programmers, whether I did something wrong.
I'm particularly interested in the > redirection: will it record only the output of mv, or also the output of tar?
Also, is the way I use variables inside the tar command correct?
#!/bin/bash
mybackupname="backup-fullsys-$(date +%Y-%m-%d).tar.gz"
{ time tar -cfpzv $mybackupname --exclude=/$mybackupname --exclude=/proc --exclude=/lost+found --exclude=/sys --exclude=/mnt --exclude=/media --exclude=/dev / && ls -gh $mybackupname && mv -v $mybackupname backups/filesystem/ ; } > backup-system.log
exit
Anything I should know before I run this?
Sandro, you might want to consider spacing things out in your script and producing individual errors. Makes things much easier to read.
#!/bin/bash
mybackupname="backup-fullsys-$(date +%Y-%m-%d).tar.gz"
# Record start time by epoch second
start=$(date '+%s')
# List of excludes in a bash array, for easier reading.
excludes=(--exclude=/$mybackupname)
excludes+=(--exclude=/proc)
excludes+=(--exclude=/lost+found)
excludes+=(--exclude=/sys)
excludes+=(--exclude=/mnt)
excludes+=(--exclude=/media)
excludes+=(--exclude=/dev)
if ! tar -czf "$mybackupname" "${excludes[@]}" /; then
status="tar failed"
elif ! mv "$mybackupname" backups/filesystem/ ; then
status="mv failed"
else
status="success: size=$(stat -c%s backups/filesystem/$mybackupname) duration=$((`date '+%s'` - $start))"
fi
# Log to system log; handle this using syslog(8).
logger -t backup "$status"
If you wanted to keep debug information (like the stderr of tar or mv), that could be handled with redirection to a tmpfile or debug file. But if the command is being run via cron and has output, cron should send it to you via email. A silent cron job is a successful cron job.
The series of ifs causes each program to be run as long as the previous one was successful. It's like chaining your commands with &&, but lets you run other code in case of failure.
Note that I've changed the order of options for tar, because the thing that comes after -f is the file you're saving things to. Also, the -p option is only useful when extracting files from a tar. Permissions are always saved when you create (-c) a tar.
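To see the difference (a sketch using the same variable): in a bundled option group, the characters after f are consumed as f's argument, so the original order silently writes to the wrong file:
tar -cfpzv "$mybackupname" /    # wrong: "pzv" becomes the archive name
tar -czvf "$mybackupname" /     # right: -f is last, so $mybackupname is the archive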
Others might wish to note that this usage of the stat command works on GNU/Linux, but not on other unices like FreeBSD or Mac OS X. On BSD, you'd use stat -f%z $mybackupname.
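If portability matters, one alternative (a sketch) is to sidestep stat entirely with wc -c, which is POSIX; some implementations pad the number with spaces, which arithmetic contexts and numeric comparisons ignore:
size=$(wc -c < "backups/filesystem/$mybackupname")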
The file redirection as you have it will only record the output of mv.
You can do
{ tar ... && mv ... ; } > logfile 2>&1
to capture the output of both, plus any errors that may occur.
It's a good idea to always be in the habit of quoting variables when they are expanded.
There's no need for the exit.

Is this a valid self-update approach for a bash script?

I'm working on a script that has gotten so complex I want to include an easy option to update it to the most recent version. This is my approach:
set -o errexit
SELF=$(basename $0)
UPDATE_BASE=http://something
runSelfUpdate() {
echo "Performing self-update..."
# Download new version
wget --quiet --output-document=$0.tmp $UPDATE_BASE/$SELF
# Copy over modes from old version
OCTAL_MODE=$(stat -c '%a' $0)
chmod $OCTAL_MODE $0.tmp
# Overwrite old file with new
mv $0.tmp $0
exit 0
}
The script seems to work as intended, but I'm wondering if there might be caveats with this kind of approach. I just have a hard time believing that a script can overwrite itself without any repercussions.
To be clearer, I'm wondering whether bash might read and execute the script line by line, so that after the mv, the exit 0 could be something else from the new script. I think I remember Windows behaving like that with .bat files.
Update: My original snippet did not include set -o errexit. To my understanding, that should keep me safe from issues caused by wget.
Also, in this case, UPDATE_BASE points to a location under version control (to ease concerns).
Result: Based on the input from these answers, I constructed this revised approach:
runSelfUpdate() {
echo "Performing self-update..."
# Download new version
echo -n "Downloading latest version..."
if ! wget --quiet --output-document="$0.tmp" $UPDATE_BASE/$SELF ; then
echo "Failed: Error while trying to wget new version!"
echo "File requested: $UPDATE_BASE/$SELF"
exit 1
fi
echo "Done."
# Copy over modes from old version
OCTAL_MODE=$(stat -c '%a' $SELF)
if ! chmod $OCTAL_MODE "$0.tmp" ; then
echo "Failed: Error while trying to set mode on $0.tmp."
exit 1
fi
# Spawn update script
cat > updateScript.sh << EOF
#!/bin/bash
# Overwrite old file with new
if mv "$0.tmp" "$0"; then
echo "Done. Update complete."
rm \$0
else
echo "Failed!"
fi
EOF
echo -n "Inserting update process..."
exec /bin/bash updateScript.sh
}
(At least it doesn't try to continue running after updating itself!)
The thing that makes me nervous about your approach is that you're overwriting the current script (mv $0.tmp $0) as it's running. There are a number of reasons why this will probably work, but I wouldn't bet large amounts that it's guaranteed to work in all circumstances. I don't know of anything in POSIX or any other standard that specifies how the shell processes a file that it's executing as a script.
Here's what's probably going to happen:
You execute the script. The kernel sees the #!/bin/sh line (you didn't show it, but I presume it's there) and invokes /bin/sh with the name of your script as an argument. The shell then uses fopen(), or perhaps open() to open your script, reads from it, and starts interpreting its contents as shell commands.
For a sufficiently small script, the shell probably just reads the whole thing into memory, either explicitly or as part of the buffering done by normal file I/O. For a larger script, it might read it in chunks as it's executing. But either way, it probably only opens the file once, and keeps it open as long as it's executing.
If you remove or rename a file, the actual file is not necessarily immediately erased from disk. If there's another hard link to it, or if some process has it open, the file continues to exist, even though it may no longer be possible for another process to open it under the same name, or at all. The file is not physically deleted until the last link (directory entry) that refers to it has been removed, and no processes have it open. (Even then, its contents won't immediately be erased, but that's going beyond what's relevant here.)
And furthermore, the mv command that clobbers the script file is immediately followed by exit 0.
BUT it's at least conceivable that the shell could close the file and then re-open it by name. I can't think of any good reason for it to do so, but I know of no absolute guarantee that it won't.
And some systems tend to do stricter file locking than most Unix systems do. On Windows, for example, I suspect that the mv command would fail because a process (the shell) has the file open. Your script might fail on Cygwin. (I haven't tried it.)
So what makes me nervous is not so much the small possibility that it could fail, but the long and tenuous line of reasoning that seems to demonstrate that it will probably succeed, and the very real possibility that there's something else I haven't thought of.
My suggestion: write a second script whose one and only job is to update the first. Put the runSelfUpdate() function, or equivalent code, into that script. In your original script, use exec to invoke the update script, so that the original script is no longer running when you update it. If you want to avoid the hassle of maintaining, distributing, and installing two separate scripts, you could have the original script create the update script with a unique name in /tmp; that would also solve the problem of updating the update script. (I wouldn't worry about cleaning up the autogenerated update script in /tmp; that would just reopen the same can of worms.)
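A minimal sketch of that suggestion (URL and names hypothetical): the running script writes a throwaway updater to /tmp and execs it, so nothing is reading the original when it gets replaced:
runSelfUpdate() {
    local updater
    updater=$(mktemp /tmp/selfupdate.XXXXXX) || exit 1
    cat >"$updater" <<'EOF'
#!/bin/sh
# $1 = script to replace, $2 = download URL
wget --quiet --output-document="$1.tmp" "$2" &&
    chmod 755 "$1.tmp" &&
    mv -- "$1.tmp" "$1"
rm -- "$0"   # the updater removes itself when done
EOF
    chmod +x "$updater"
    exec "$updater" "$0" "$UPDATE_BASE/$SELF"
}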
Yes, but ... I would recommend you keep a more layered version of your script's history, unless the remote host can also perform version-control with histories. That being said, to respond directly to the code you have posted, see the following comments ;-)
What happens to your system when wget has a hiccup and quietly overwrites part of your working script with a partial or otherwise corrupt copy? Your next step does a mv $0.tmp $0, so you've lost your working version. (I hope you have it in version control on the remote!)
You can check to see if wget returns any error messages
if ! wget --quiet --output-document=$0.tmp $UPDATE_BASE/$SELF ; then
echo "error on wget on $UPDATE_BASE/$SELF"
exit 1
fi
Also, rule-of-thumb tests will help, e.g.
if (( $(wc -c < $0.tmp) >= $(wc -c < $0) )); then
mv $0.tmp $0
fi
but are hardly foolproof.
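A stronger sanity check, assuming (hypothetically) that the server publishes a SHA-256 digest next to the script (sha256sum is GNU coreutils; on OS X you'd use shasum -a 256 instead):
if wget --quiet --output-document="$0.sha256" "$UPDATE_BASE/$SELF.sha256"; then
    expected=$(cut -d' ' -f1 < "$0.sha256")          # digest published by the server
    actual=$(sha256sum "$0.tmp" | cut -d' ' -f1)     # digest of what we downloaded
    if [ -n "$expected" ] && [ "$expected" = "$actual" ]; then
        mv "$0.tmp" "$0"
    fi
fi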
If your $0 could wind up with spaces in it, better to surround all references like "$0".
To be super bullet-proof, consider checking all command returns AND that OCTAL_MODE has a reasonable value:
OCTAL_MODE=$(stat -c '%a' "$0")
case ${OCTAL_MODE:--1} in
    -[1] )
        printf "Error : OCTAL_MODE was empty\n"
        exit 1
        ;;
    777|775|755 ) : nothing ;;
    * )
        printf "Error in OCTAL_MODE, found value=%s\n" "${OCTAL_MODE}"
        exit 1
        ;;
esac
if ! chmod "$OCTAL_MODE" "$0.tmp" ; then
    echo "error on chmod $OCTAL_MODE $0.tmp from $UPDATE_BASE/$SELF, can't continue"
    exit 1
fi
I hope this helps.
Very late answer here, but as I just solved this too, I thought it might help someone to post the approach:
#!/usr/bin/env bash
#
set -fb
readonly THISDIR=$(cd "$(dirname "$0")" ; pwd)
readonly MY_NAME=$(basename "$0")
readonly FILE_TO_FETCH_URL="https://your_url_to_downloadable_file_here"
readonly EXISTING_SHELL_SCRIPT="${THISDIR}/somescript.sh"
readonly EXECUTABLE_SHELL_SCRIPT="${THISDIR}/.somescript.sh"
function get_remote_file() {
readonly REQUEST_URL=$1
readonly OUTPUT_FILENAME=$2
readonly TEMP_FILE="${THISDIR}/tmp.file"
if [ -n "$(which wget)" ]; then
wget -O "${TEMP_FILE}" "$REQUEST_URL" >/dev/null 2>&1
if [[ $? -eq 0 ]]; then
mv "${TEMP_FILE}" "${OUTPUT_FILENAME}"
chmod 755 "${OUTPUT_FILENAME}"
else
return 1
fi
fi
}
function clean_up() {
    # clean up code (if required) that has to execute every time here
    :   # no-op; a bash function body cannot be empty
}
function self_clean_up() {
rm -f "${EXECUTABLE_SHELL_SCRIPT}"
}
function update_self_and_invoke() {
get_remote_file "${FILE_TO_FETCH_URL}" "${EXECUTABLE_SHELL_SCRIPT}"
if [ $? -ne 0 ]; then
cp "${EXISTING_SHELL_SCRIPT}" "${EXECUTABLE_SHELL_SCRIPT}"
fi
exec "${EXECUTABLE_SHELL_SCRIPT}" "$#"
}
function main() {
cp "${EXECUTABLE_SHELL_SCRIPT}" "${EXISTING_SHELL_SCRIPT}"
# your code here
}
if [[ $MY_NAME = \.* ]]; then
# invoke real main program
trap "clean_up; self_clean_up" EXIT
main "$#"
else
# update myself and invoke updated version
trap clean_up EXIT
update_self_and_invoke "$@"
fi

OSX bash script works but fails in crontab on SFTP

This topic has been discussed at length; however, I have a variant on the theme that I just cannot crack. Two days into this now, and I decided to ping the community. Thanks in advance for reading.
Executive summary: I have a script in OS X that runs fine and executes without issue or error when run manually. When I put the script in the crontab to run daily, it still runs, but it doesn't run all of the commands (specifically sftp).
I have read enough posts to go down the path of environment issues, so as you will see below, I hard-referenced the location of sftp in case of a PATH issue.
The only thing that I can think of is the IdentityFile. NOTE: I am putting this in the crontab for my user, not root. So I understand that it should pick up the id_dsa.pub that I have created (and that has already been shared with the server).
I am not trying to do any funky expect commands to bypass the password, etc. I don't know why, when run from cron, it skips the sftp line.
Please see the code below; any help is greatly appreciated. Thx.
#!/bin/bash
export DATE=`date +%y%m%d%H%M%S`
export YYMMDD=`date +%y%m%d`
PDATE=$DATE
YDATE=$YYMMDD
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
FEED="~/Dropbox/"
USER="user"
HOST="host.domain.tld"
A="/tmp/5nPR45bH"
>${A}.file1${PDATE}
>${A}.file2${PDATE}
BYEbye ()
{
rm ${A}.file1${PDATE}
rm ${A}.file2${PDATE}
echo "Finished cleaning internal logs"
exit 0
}
echo "get -r *" >> ${A}.file1${PDATE}
echo "quit" >> ${A}.file1${PDATE}
eval mkdir ${FEED}${YDATE}
eval cd ${FEED}${YDATE}
eval /usr/bin/sftp -b ${A}.file1${PDATE} ${USER}@${HOST}
BYEbye
exit 0
Not an answer, just comments about your code.
The way to handle filenames with spaces is to quote the variable: "$var" -- eval is not the way to go. Get into the habit of quoting all variables unless you specifically want to use the side effects of not quoting.
you don't need to export your variables unless there's a command you call that expects to see them in the environment.
you don't need to call date twice because the YYMMDD value is a substring of the DATE: YYMMDD="${DATE:0:6}"
just a preference: I use $HOME over ~ in a script.
you never use the "file2" temp file -- why do you create it?
since your sftp batch file is pretty simple, you don't really need a file for it:
printf "%s\n" "get -r *" "quit" | sftp -b - "$USER#$HOST"
Here's a rewrite, shortened considerably:
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
FEED_DIR="$HOME/Dropbox/$(date +%Y%m%d)"
USER="user"
HOST="host.domain.tld"
mkdir "$FEED_DIR" || { echo "could not mkdir $FEED_DIR"; exit 1; }
cd "$FEED_DIR"
{
    echo "get -r *"
    echo quit
} | sftp -b - "${USER}@${HOST}"
