Is this a valid self-update approach for a bash script? - bash

I'm working on a script that has gotten so complex I want to include an easy option to update it to the most recent version. This is my approach:
set -o errexit
SELF=$(basename $0)
UPDATE_BASE=http://something
runSelfUpdate() {
echo "Performing self-update..."
# Download new version
wget --quiet --output-document=$0.tmp $UPDATE_BASE/$SELF
# Copy over modes from old version
OCTAL_MODE=$(stat -c '%a' $0)
chmod $OCTAL_MODE $0.tmp
# Overwrite old file with new
mv $0.tmp $0
exit 0
}
The script seems to work as intended, but I'm wondering if there might be caveats with this kind of approach. I just have a hard time believing that a script can overwrite itself without any repercussions.
To be more clear: I'm wondering whether bash might read and execute the script line by line, so that after the mv, the exit 0 could be something else from the new script. I think I remember Windows behaving like that with .bat files.
Update: My original snippet did not include set -o errexit. To my understanding, that should keep me safe from issues caused by wget.
Also, in this case, UPDATE_BASE points to a location under version control (to ease concerns).
Result: Based on the input from these answers, I constructed this revised approach:
runSelfUpdate() {
echo "Performing self-update..."
# Download new version
echo -n "Downloading latest version..."
if ! wget --quiet --output-document="$0.tmp" $UPDATE_BASE/$SELF ; then
echo "Failed: Error while trying to wget new version!"
echo "File requested: $UPDATE_BASE/$SELF"
exit 1
fi
echo "Done."
# Copy over modes from old version
OCTAL_MODE=$(stat -c '%a' "$0")
if ! chmod $OCTAL_MODE "$0.tmp" ; then
echo "Failed: Error while trying to set mode on $0.tmp."
exit 1
fi
# Spawn update script
cat > updateScript.sh << EOF
#!/bin/bash
# Overwrite old file with new
if mv "$0.tmp" "$0"; then
echo "Done. Update complete."
rm \$0
else
echo "Failed!"
fi
EOF
echo -n "Inserting update process..."
exec /bin/bash updateScript.sh
}

(At least it doesn't try to continue running after updating itself!)
The thing that makes me nervous about your approach is that you're overwriting the current script (mv $0.tmp $0) as it's running. There are a number of reasons why this will probably work, but I wouldn't bet large amounts that it's guaranteed to work in all circumstances. I don't know of anything in POSIX or any other standard that specifies how the shell processes a file that it's executing as a script.
Here's what's probably going to happen:
You execute the script. The kernel sees the #!/bin/sh line (you didn't show it, but I presume it's there) and invokes /bin/sh with the name of your script as an argument. The shell then uses fopen(), or perhaps open() to open your script, reads from it, and starts interpreting its contents as shell commands.
For a sufficiently small script, the shell probably just reads the whole thing into memory, either explicitly or as part of the buffering done by normal file I/O. For a larger script, it might read it in chunks as it's executing. But either way, it probably only opens the file once, and keeps it open as long as it's executing.
If you remove or rename a file, the actual file is not necessarily immediately erased from disk. If there's another hard link to it, or if some process has it open, the file continues to exist, even though it may no longer be possible for another process to open it under the same name, or at all. The file is not physically deleted until the last link (directory entry) that refers to it has been removed, and no processes have it open. (Even then, its contents won't immediately be erased, but that's going beyond what's relevant here.)
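That rename-doesn't-destroy-the-open-file behavior is easy to demonstrate. A minimal sketch (file names are arbitrary):

```shell
#!/bin/bash
# Demonstrate that renaming a file out from under an open descriptor
# does not affect reads through that descriptor.
tmpdir=$(mktemp -d)
echo "original contents" > "$tmpdir/demo"

exec 3< "$tmpdir/demo"                # keep the original file open on fd 3
mv "$tmpdir/demo" "$tmpdir/old"       # rename it away
echo "new contents" > "$tmpdir/demo"  # create a new file under the old name

cat <&3                               # prints: original contents
exec 3<&-                             # close the descriptor
rm -rf "$tmpdir"
```

The shell interpreting a script is in essentially the same position as fd 3 here: it reads from the inode it opened, not from whatever file currently bears the name.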
And furthermore, the mv command that clobbers the script file is immediately followed by exit 0.
BUT it's at least conceivable that the shell could close the file and then re-open it by name. I can't think of any good reason for it to do so, but I know of no absolute guarantee that it won't.
And some systems tend to do stricter file locking than most Unix systems do. On Windows, for example, I suspect that the mv command would fail because a process (the shell) has the file open. Your script might fail on Cygwin. (I haven't tried it.)
So what makes me nervous is not so much the small possibility that it could fail, but the long and tenuous line of reasoning that seems to demonstrate that it will probably succeed, and the very real possibility that there's something else I haven't thought of.
My suggestion: write a second script whose one and only job is to update the first. Put the runSelfUpdate() function, or equivalent code, into that script. In your original script, use exec to invoke the update script, so that the original script is no longer running when you update it. If you want to avoid the hassle of maintaining, distributing, and installing two separate scripts, you could have the original script create the update script with a unique name in /tmp; that would also solve the problem of updating the update script. (I wouldn't worry about cleaning up the autogenerated update script in /tmp; that would just reopen the same can of worms.)
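That two-stage pattern could look roughly like this sketch; UPDATE_URL is a placeholder, and nothing is called automatically:

```shell
#!/bin/bash
# Sketch: generate a throwaway updater in /tmp with mktemp and exec
# into it, so this script is no longer being interpreted when its
# file gets replaced. UPDATE_URL is a hypothetical address.
UPDATE_URL="http://example.com/myscript"

run_self_update() {
    local updater
    updater=$(mktemp /tmp/selfupdate.XXXXXX) || exit 1
    # $0 and $UPDATE_URL expand NOW, while writing the updater;
    # the escaped \$0 refers to the updater itself at run time.
    cat > "$updater" <<EOF
#!/bin/bash
if wget --quiet --output-document="$0.tmp" "$UPDATE_URL"; then
    chmod 755 "$0.tmp" && mv "$0.tmp" "$0"
fi
rm -- "\$0"   # the updater deletes itself
EOF
    exec /bin/bash "$updater"
}
# call run_self_update from your option handling when an update is requested
```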

Yes, but ... I would recommend that you keep a more complete version history of your script, unless the remote host can also perform version control with histories. That being said, to respond directly to the code you have posted, see the following comments ;-)
What happens to your system when wget has a hiccup, quietly overwrites part of your working script with only a partial or otherwise corrupt copy? Your next step does a mv $0.tmp $0 so you've lost your working version. (I hope you have it in version control on the remote!)
You can check to see if wget returns any error messages
if ! wget --quiet --output-document=$0.tmp $UPDATE_BASE/$SELF ; then
echo "error on wget on $UPDATE_BASE/$SELF"
exit 1
fi
Also, rule-of-thumb sanity tests will help, e.g.
if (( $(wc -c < $0.tmp) >= $(wc -c < $0) )); then
mv $0.tmp $0
fi
but are hardly foolproof.
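A stronger rule-of-thumb is to check the download against a published checksum. A sketch, assuming the server also hosts a hypothetical companion .sha256 file next to the script (all names here are illustrative):

```shell
# Sketch: refuse the update unless the downloaded file matches a
# SHA-256 sum published alongside it. Takes the downloaded file, the
# install target, and a checksum file in sha256sum's "HASH  NAME" format.
verify_and_install() {
    local new_file=$1 target=$2 sum_file=$3
    local expected actual
    expected=$(cut -d' ' -f1 "$sum_file")
    actual=$(sha256sum "$new_file" | cut -d' ' -f1)
    if [ "$expected" != "$actual" ]; then
        echo "checksum mismatch; keeping the old version" >&2
        rm -f "$new_file"
        return 1
    fi
    mv "$new_file" "$target"
}
```

You would call it as something like `verify_and_install "$0.tmp" "$0" "$0.sum"` after fetching both the script and its checksum file.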
If your $0 could wind up with spaces in it, it's better to quote all references, like "$0".
To be really bullet-proof, consider checking all command returns AND that OCTAL_MODE has a reasonable value:
OCTAL_MODE=$(stat -c '%a' $0)
case ${OCTAL_MODE:--1} in
-[1] )
printf "Error : OCTAL_MODE was empty\n"
exit 1
;;
777|775|755 ) : nothing ;;
* )
printf "Error in OCTAL_MODEs, found value=${OCTAL_MODE}\n"
exit 1
;;
esac
if ! chmod $OCTAL_MODE $0.tmp ; then
echo "error on chmod $OCTAL_MODE %0.tmp from $UPDATE_BASE/$SELF, can't continue"
exit 1
fi
I hope this helps.

Very late answer here, but as I just solved this too, I thought it might help someone to post the approach:
#!/usr/bin/env bash
#
set -fb
readonly THISDIR=$(cd "$(dirname "$0")" ; pwd)
readonly MY_NAME=$(basename "$0")
readonly FILE_TO_FETCH_URL="https://your_url_to_downloadable_file_here"
readonly EXISTING_SHELL_SCRIPT="${THISDIR}/somescript.sh"
readonly EXECUTABLE_SHELL_SCRIPT="${THISDIR}/.somescript.sh"
function get_remote_file() {
readonly REQUEST_URL=$1
readonly OUTPUT_FILENAME=$2
readonly TEMP_FILE="${THISDIR}/tmp.file"
if [ -n "$(which wget)" ]; then
if wget -O "${TEMP_FILE}" "$REQUEST_URL" > /dev/null 2>&1; then
mv "${TEMP_FILE}" "${OUTPUT_FILENAME}"
chmod 755 "${OUTPUT_FILENAME}"
else
return 1
fi
fi
}
function clean_up() {
# clean up code (if required) that has to execute every time goes here;
# bash forbids an empty function body, so keep at least a no-op
:
}
function self_clean_up() {
rm -f "${EXECUTABLE_SHELL_SCRIPT}"
}
function update_self_and_invoke() {
get_remote_file "${FILE_TO_FETCH_URL}" "${EXECUTABLE_SHELL_SCRIPT}"
if [ $? -ne 0 ]; then
cp "${EXISTING_SHELL_SCRIPT}" "${EXECUTABLE_SHELL_SCRIPT}"
fi
exec "${EXECUTABLE_SHELL_SCRIPT}" "$#"
}
function main() {
cp "${EXECUTABLE_SHELL_SCRIPT}" "${EXISTING_SHELL_SCRIPT}"
# your code here
}
if [[ $MY_NAME = \.* ]]; then
# invoke real main program
trap "clean_up; self_clean_up" EXIT
main "$#"
else
# update myself and invoke updated version
trap clean_up EXIT
update_self_and_invoke "$@"
fi

Related

In bash I'm building an update script, how to update the updater script

I am building a little script to update application files on a raspberry pi.
It will do the following:
Download a zip file of the application files
Unzip them
Copy each one to the right place and make it executable etc as needed.
The problem I'm having is that one of the files is updatescript.sh.
I've read that it is dangerous to update / change a bash script while it is executing. See Edit shell script while it's running
Is there a good way to achieve what I'm trying to do?
What you've read is badly overblown.
It's completely safe to overwrite a shell script in-place by mving a different file over it. When you do this, the old file handle is still valid, referring to the original unmodified file contents. What you can't safely do is edit the existing file in-place.
So, the below is fine (and is what all your OS-vendor update tools like RPM do in effect):
#!/usr/bin/env bash
tempfile=$(mktemp "$BASH_SOURCE".XXXXXX)
if curl https://example.com/whatever >"$tempfile" &&
curl https://example.com/whatever.sig >"$tempfile.sig" &&
gpgv "$tempfile.sig" "$tempfile"; then
chown --reference="$BASH_SOURCE" -- "$tempfile"
chmod --reference="$BASH_SOURCE" -- "$tempfile"
sync # force your filesystem to fully flush file contents to disk
mv -- "$tempfile" "$BASH_SOURCE" && rm -f -- "$tempfile.sig"
else
rm -f -- "$tempfile" "$tempfile.sig"
exit 1
fi
...whereas this is risky:
curl https://example.com/whatever >/usr/local/bin/whatever
So do the first thing, not the second: when downloading a new version of your script, write it to a different file, and only rename it over the original once the download has succeeded. That's what you want to do anyhow to ensure atomicity.
(There are also some demonstrations of code-signing validation practices above because, well, you need them when building an updater. You wouldn't be trying to distribute code via an automated download without verifying a signature, right? Because that's how one simple breakin to your web server results in every single one of your customers being 0wned. The above expects the public side of your code-signing keys to be in ~/.gnupg/trustedkeys.gpg, but you can put trustedkeys.gpg in any directory and point to it with the environment variable GNUPGHOME).
Even if you don't write your update code safely, the risk is still trivial to mitigate. If you move the body of your script into a function, such that it has to be completely read before any part of it can be executed, then there's no part of the file that isn't already read at the time when execution begins.
#!/usr/bin/env bash
main() {
echo "Logic all goes here"
}; { main "$@"; exit; }
Because { main "$@"; exit; } is part of a compound command, the parser reads the exit before it starts executing the main, so it's guaranteed that no further source-file content will be read after main exits, even if some future bash release didn't handle input line-by-line in the first place.
Basically do something along:
shouldbe="/tmp/$(basename "$0")"
if [ "$0" != "$shouldbe" ]; then
cp "$0" "$shouldbe"
exec env REALPATH="$0" "$shouldbe" "$#"
fi
Check if you are running from a temporary directory
If you are not, copy yourself and rerun from the temporary directory
You can even pass some variables/state along, by using environmental variables or arguments. Then you can update yourself by using simple cp, as the old path isn't sourced (or even opened) anymore.
cp "new_script_version.sh" "$REALPATH"
The script simply looks like this:
#!/bin/bash
# we need to be run from /tmp directory
shouldbe="/tmp/$(basename "$0")"
if [ "$0" != "$shouldbe" ]; then
cp "$0" "$shouldbe"
exec env REALPATH="$0" "$shouldbe" "$#"
fi
echo "Updatting...."
echo "downloading zip files"
echo "unziping zip files..."
echo "Copying each zip files etc."
cp directory"new_updatescript.sh "$REALPATH"
echo "Update succedded"
Live/test version available at tutorialspoint.
One would also implement some flock locking to the scripts just in case.
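The flock idea could look roughly like this sketch, using flock(1) from util-linux (Linux; not available on stock macOS). The lock-file path is arbitrary:

```shell
#!/bin/bash
# Sketch: hold an exclusive lock for the lifetime of the script so an
# updater and a running copy can't race each other.
LOCKFILE="/tmp/myscript.lock"

exec 9> "$LOCKFILE"                  # open (and create) the lock file on fd 9
if ! flock --nonblock 9; then
    echo "another instance holds the lock; exiting" >&2
    exit 1
fi
echo "lock acquired; safe to run or update"
# the lock is released automatically when the script (and fd 9) exits
```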

Simple BASH script needed: moving and renaming files

Decades ago I was a programmer (IBM assembly, Fortran, COBOL, MS DOS scripting, a bit of Visual Basic.) Thus I'm familiar with the generalities of IF-Then-Else, For loops, etc.
However, I'm now needing to delve into Bash for my current job, and I'm having a difficult time with syntax and appropriate commands for what I need.
I'm in need of a trivial (concept-wise) script, which will:
Determine if a specific folder (e.g., ~/Desktop/Archive Folder) exists on the user Desktop
If not, create it ("Archive")
Move all files/folders on desktop - except for ~/Desktop/Archive, into "Archive Folder" - AND appending a timestamp onto the end of the filenames being moved.
It is this very last piece - the timestamp addition - which is holding me up.
I'm hoping a clear and simple solution can be sent my way. Here is what I've come up with so far:
#!/bin/bash
shopt -s extglob
FOLDERARCH="Archive Folder"
cd ~/Desktop
if [ ! -d $"FOLDERARCH" ]; then
mkdir "$FOLDERARCH"
echo "$FOLDERARCH did not exist, was created"
fi
mv !(-d "$FOLDERARCH") "$FOLDERARCH"
One final note: the script above works (without the timestamp piece) yet also ends with the message
mv: rename Archive Folder to Folder/Archive Folder: Invalid argument
Why?
Any help will be deeply, deeply appreciated. Please assume I know essentially zilch about the BASH environment, cmds and their arguments - this first request for assistance marks my first step into the journey of becoming at least proficient.
Update
First: much gratitude for the replies I've gotten; they've been very useful.
I've now got what is essentially a working version, but with some oddities I do not understand and, after hours of attempted research, have yet to understand/solve.
I'm hoping for some insight; I feel I'm on the verge of making some real headway in comprehending, but these anomalies are hindering my progress. Here's my (working, with "issues") code so far:
shopt -s extglob
FOLDERARCH="Archives"
NEWARCH=$(date +%F_%T)
cd ~/Desktop
if [ ! -d $"FOLDERARCH" ]; then
mkdir "$FOLDERARCH"
echo "$FOLDERARCH did not exist, was created"
fi
mkdir "$FOLDERARCH/$NEWARCH"
mv !(-d "$FOLDERARCH") $FOLDERARCH/$NEWARCH
This in fact largely accomplishes my goal, but:
In the case where the desktop Archives folder already exists, I'm expecting the if-then construct to simply fall through (with no echo msg) to the following mkdir command, but instead the msg "Archives did not exist, was created" is output anyway (erroneously). Any answers as to why?
The script completes with the following msg:
mv: rename Archives to Archives/2016-01-10_00:06:54/Archives: Invalid argument
I don't understand this at all; what should be happening is that all files/folders on the desktop EXCEPT the /Desktop/Archives folder should be moved into a newly created "subfolder" of /Desktop/Archives, e.g., /Desktop/Archives/2016-01-10_00:06:54. In fact, the move accomplishes my goal, but that the message arises makes no sense to me. What is the invalid argument?
One last note: at this point in my newbie-status I'm looking for code which is clear and easy to read, versus much more elegant/sophisticated one-line piped-command solutions. I look forward to working my way up to those in due time.
You have several options. One of the simplest is to loop over the directories below ~/Desktop and if they are not "$FOLDERARCH", move them to "$FOLDERARCH", e.g.:
for i in */; do
[ "$i" != "$FOLDERARCH"/ ] && mv "$i" "$FOLDERARCH"
done
I haven't run a test case, but something similar to the following should work.
#!/bin/bash
shopt -s extglob
FOLDERARCH="Archive Folder"
cd ~/Desktop || { printf "failed to change to '~/Desktop'\n"; exit 1; }
if [ ! -d "$FOLDERARCH" ]; then
if mkdir "$FOLDERARCH"; then
echo "$FOLDERARCH did not exist, was created"
else
echo "error: failed to create '$FOLDERARCH'"
exit 1
fi
fi
for i in */; do
[ "$i" != "$FOLDERARCH"/ ] && mv "$i" "$FOLDERARCH"
done
I apologize, I forgot the datestamp portion. As pointed out in the comments, you can include the datestamp (set the format to your taste) with something similar to the following:
tstamp=$(date +%s)
for i in */; do
[ "$i" != "$FOLDERARCH"/ ] && mv "$i" "$FOLDERARCH/${i}_${tstamp}"
done

Synchronizing Current Directory Between Two Zsh Sessions

I have two iTerm windows running zsh: one I use to edit documents in vim; the other I use to execute shell commands. I would like to synchronize the current working directories of the two sessions. I thought I could do this by outputting the new directory to a file ~/.cwd every time I change directories
alias cd="cd; pwd > ~/.cwd"
and creating a shell script ~/.dirsync that monitors the contents of ~/.cwd every second and changes directory if the other shell has updated it.
#!/bin/sh
echo $(pwd) > ~/.cwd
alias cd="cd; echo $(pwd) > ~/.cwd"
while true
do
if [[ $(pwd) != $(cat ~/.cwd) ]]
then
cd $(cat ~/.cwd)
fi
sleep 1
done
I would then append the following line of code to the end of my ~/.zshrc.
~/.dirsync &
However, it did not work. I then found out that a shell script always executes in its own subshell. Does anyone know of a way to make this work?
Caveat emptor: I'm doing this on Ubuntu 10.04 with gnome-terminal, but it should work on any *NIX platform running zsh.
I've also changed things slightly. Instead of mixing "pwd" and "cwd", I've stuck with "pwd" everywhere.
Recording the Present Working Directory
If you want to run a function every time you cd, the preferred way is to use the chpwd function or the more extensible chpwd_functions array. I prefer chpwd_functions since you can dynamically append and remove functions from it.
# Records $PWD to file
function +record_pwd {
echo "$(pwd)" > ~/.pwd
}
# Removes the PWD record file
function +clean_up_pwd_record {
rm -f ~/.pwd
}
# Adds +record_pwd to the list of functions executed when "cd" is called
# and records the present directory
function start_recording_pwd {
if [[ -z $chpwd_functions[(r)+record_pwd] ]]; then
chpwd_functions=(${chpwd_functions[@]} "+record_pwd")
fi
+record_pwd
}
# Removes +record_pwd from the list of functions executed when "cd" is called
# and cleans up the record file
function stop_recording_pwd {
if [[ -n $chpwd_functions[(r)+record_pwd] ]]; then
chpwd_functions=("${(#)chpwd_functions:#+record_pwd}")
+clean_up_pwd_record
fi
}
Adding a + to the +record_pwd and +clean_up_pwd_record function names is a hack-ish way to hide it from normal use (similarly, the VCS_info hooks do this by prefixing everything with +vi).
With the above, you would simply call start_recording_pwd to start recording the present working directory every time you change directories. Likewise, you can call stop_recording_pwd to disable that behavior. stop_recording_pwd also removes the ~/.pwd file (just to keep things clean).
By doing things this way, synchronization can easily be made opt-in (since you may not want this for every single zsh session you run).
First Attempt: Using the preexec Hook
Similar to the suggestion of @Celada, the preexec hook gets run before executing a command. This seemed like an easy way to get the functionality you want:
autoload -Uz add-zsh-hook
function my_preexec_hook {
if [[ -r ~/.pwd ]] && [[ $(pwd) != $(cat ~/.pwd) ]]; then
cd "$(cat ~/.pwd)"
fi
}
add-zsh-hook preexec my_preexec_hook
This works... sort of. Since the preexec hook runs before each command, it will automatically change directories before running your next command. However, up until then, the prompt stays in the last working directory, so it tab completes for the last directory, etc. (By the way, a blank line doesn't count as a command.) So, it sort of works, but it's not intuitive.
Second Attempt: Using signals and traps
In order to get a terminal to automatically cd and re-print the prompt, things got a lot more complicated.
After some searching, I found out that $$ (the shell's process ID) does not change in subshells. Thus, a subshell (or background job) can easily send signals to its parent. Combine this with the fact that zsh allows you to trap signals, and you have a means of polling ~/.pwd periodically:
# Used to make sure USR1 signals are not taken as synchronization signals
# unless the terminal has been told to do so
local _FOLLOWING_PWD
# Traps all USR1 signals
TRAPUSR1() {
# If following the .pwd file and we need to change
if (($+_FOLLOWING_PWD)) && [[ -r ~/.pwd ]] && [[ "$(pwd)" != "$(cat ~/.pwd)" ]]; then
# Change directories and redisplay the prompt
# (Still don't fully understand this magic combination of commands)
[[ -o zle ]] && zle -R && cd "$(cat ~/.pwd)" && precmd && zle reset-prompt 2>/dev/null
fi
}
# Sends the shell a USR1 signal every second
function +check_recorded_pwd_loop {
while true; do
kill -s USR1 "$$" 2>/dev/null
sleep 1
done
}
# PID of the disowned +check_recorded_pwd_loop job
local _POLLING_LOOP_PID
function start_following_recorded_pwd {
_FOLLOWING_PWD=1
[[ -n "$_POLLING_LOOP_PID" ]] && return
# Launch signalling loop as a disowned process
+check_recorded_pwd_loop &!
# Record the signalling loop's PID
_POLLING_LOOP_PID="$!"
}
function stop_following_recorded_pwd {
unset _FOLLOWING_PWD
[[ -z "$_POLLING_LOOP_PID" ]] && return
# Kill the background loop
kill "$_POLLING_LOOP_PID" 2>/dev/null
unset _POLLING_LOOP_PID
}
If you call start_following_recorded_pwd, this launches +check_recorded_pwd_loop as a disowned background process. This way, you won't get an annoying "suspended jobs" warning when you go to close your shell. The PID of the loop is recorded (via $!) so it can be stopped later.
The loop just sends the parent shell a USR1 signal every second. This signal gets trapped by TRAPUSR1(), which will cd and reprint the prompt if necessary. I don't understand having to call both zle -R and zle reset-prompt, but that was the magic combination that worked for me.
There is also the _FOLLOWING_PWD flag. Since every terminal will have the TRAPUSR1 function defined, this prevents them from handling that signal (and changing directories) unless you actually specified that behavior.
As with recording the present working directory, you can call stop_following_recorded_pwd to stop the whole auto-cd thing.
Putting both halves together:
function begin_synchronize {
start_recording_pwd
start_following_recorded_pwd
}
function end_synchronize {
stop_recording_pwd
stop_following_recorded_pwd
}
Finally, you will probably want to do this:
trap 'end_synchronize' EXIT
This will automatically clean up everything just before your terminal exits, thus preventing you from accidentally leaving orphaned signalling loops around.

Shell script file existence on Mac issue

Ok so I have written a .sh file in Linux Ubuntu and it works perfectly. However on a Mac it always returns that the file was not found even when it is in the same directory. Can anyone help me out?
.sh file:
if [ ! -f file-3*.jar ]; then
echo "[INFO] jar could not be found."
exit
fi
Just thought I'd add, this isn't for more than one file, it's for a file that is renamed to multiple endings.
In a comment to @Paul R's answer, you said "The shell script is also in the same directory as the jar file. So they can just double click it after assigning SH files to open with terminal by default." I suspect that's the problem -- when you run a shell script by double-clicking it, it runs with the working directory set to the user's home directory, not the directory where the script's located. You can work around this by having the script cd to the directory it's in:
cd "$(dirname "$BASH_SOURCE")"
EDIT: $BASH_SOURCE is, of course, a bash extension not available in other shells. If your script can't count on running in bash, use this instead:
case "$0" in
*/*)
cd "$(dirname "$0")" ;;
*)
me="$(which "$0")"
if [ -n "$me" ]; then
cd "$(dirname "$me")"
else
echo "Can't locate script directory" >&2
exit 1
fi ;;
esac
BTW, the construct [ ! -f file-3*.jar ] makes me nervous, since it'll fail bizarrely if there's ever more than one matching file. (I know, that's not supposed to happen; but things that aren't supposed to happen have any annoying tendency to happen anyway.) I'd use this instead:
matchfiles=(file-3*.jar)
if [ ! -f "${matchfiles[0]}" ]; then
...
Again, if you can't count on bash extensions, here's an alternative that should work in any POSIX shell:
if [ ! -f "$(echo file-3*.jar)" ]; then
Note that this will fail (i.e. act as though the file didn't exist) if there's more than one match.
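If you want the POSIX version to tolerate multiple matches, you can expand the glob into the positional parameters and test only the first one. A sketch, reusing the file-name pattern from the question:

```shell
#!/bin/sh
# Sketch: "set --" expands the glob into "$@"; testing "$1" behaves
# sensibly whether there are zero, one, or many matching jars
# (with zero matches, "$1" is the literal unexpanded pattern).
set -- file-3*.jar
if [ ! -f "$1" ]; then
    echo "[INFO] jar could not be found."
else
    echo "found: $1"
fi
```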
I think the problem lies elsewhere, as the script works as expected on Mac OS X here:
$ if [ ! -f file-3*.jar ]; then echo "[INFO] jar could not be found."; fi
[INFO] jar could not be found.
$ touch file-302.jar
$ if [ ! -f file-3*.jar ]; then echo "[INFO] jar could not be found."; fi
$
Perhaps your script is being run under the wrong shell, or in the wrong working directory ?
It's not that it doesn't work for you, it doesn't work for your users? The default shell for OS X has changed over the years (see this post) - but it looks like your comment says you have the #! in place.
Are you sure that your users have the JAR file in the right place? Perhaps it's not the script being wrong as much as it's telling you the correct answer - the required file is missing from where the script is being run.
This isn't so much an answer, as a strategy: consider some serious logging. Echo messages such as "[INFO] jar could not be found." both to the screen and to a log file, then add extra logging, such as the values of $PWD, $SHELL and $0 to the log. Then, when your customers/co-workers try to run the script and fail, they can email the log to you.
I would probably use something like
screenlog() {
echo "$*"
echo "$*" >> "$LOGFILE"
}
log() {
echo "$*" >> "$LOGFILE"
}
Define $LOGFILE at the top of your script. Then pepper your script with statements like screenlog "[INFO] jar could not be found." or log "\$PWD: $PWD".

OSX bash script works but fails in crontab on SFTP

this topic has been discussed at length; however, I have a variant on the theme that I just cannot crack. Two days into this now and decided to ping the community. Thanks in advance for reading.
Exec summary: I have a script on OS X that runs fine and executes without issue or error when run manually. When I put the script in the crontab to run daily it still runs, but it doesn't run all of the commands (specifically SFTP).
I have read enough posts to go down the path of environment issues, so as you will see below, I hard-referenced the location of SFTP in the event of a PATH issue...
The only thing that I can think of is the IdentityFile. NOTE: I am putting this in the crontab for my user, not root. So I understand that it should pick up on the id_dsa.pub that I have created (and that has already been shared with the server).
I am not trying to do any funky expect commands to bypass the password, etc. I don't know why, when run from cron, it skips the SFTP line.
please see the code below.. and help is greatly appreciated.. thx
#!/bin/bash
export DATE=`date +%y%m%d%H%M%S`
export YYMMDD=`date +%y%m%d`
PDATE=$DATE
YDATE=$YYMMDD
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
FEED="~/Dropbox/"
USER="user"
HOST="host.domain.tld"
A="/tmp/5nPR45bH"
>${A}.file1${PDATE}
>${A}.file2${PDATE}
BYEbye ()
{
rm ${A}.file1${PDATE}
rm ${A}.file2${PDATE}
echo "Finished cleaning internal logs"
exit 0
}
echo "get -r *" >> ${A}.file1${PDATE}
echo "quit" >> ${A}.file1${PDATE}
eval mkdir ${FEED}${YDATE}
eval cd ${FEED}${YDATE}
eval /usr/bin/sftp -b ${A}.file1${PDATE} ${USER}@${HOST}
BYEbye
exit 0
Not an answer, just comments about your code.
The way to handle filenames with spaces is to quote the variable: "$var" -- eval is not the way to go. Get into the habit of quoting all variables unless you specifically want to use the side effects of not quoting.
you don't need to export your variables unless there's a command you call that expects to see them in the environment.
you don't need to call date twice because the YYMMDD value is a substring of the DATE: YYMMDD="${DATE:0:6}"
just a preference: I use $HOME over ~ in a script.
you never use the "file2" temp file -- why do you create it?
since your sftp batch file is pretty simple, you don't really need a file for it:
printf "%s\n" "get -r *" "quit" | sftp -b - "$USER#$HOST"
Here's a rewrite, shortened considerably:
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
FEED_DIR="$HOME/Dropbox/$(date +%Y%m%d)"
USER="user"
HOST="host.domain.tld"
mkdir "$FEED_DIR" || { echo "could not mkdir $FEED_DIR"; exit 1; }
cd "$FEED_DIR"
{
echo "get -r *"
echo quit
} |
sftp -b - "${USER}@${HOST}"
