Shell script file existence check fails on Mac - macos

Ok so I have written a .sh file on Linux (Ubuntu) and it works perfectly. However, on a Mac it always reports that the file was not found, even though it is in the same directory. Can anyone help me out?
.sh file:
if [ ! -f file-3*.jar ]; then
    echo "[INFO] jar could not be found."
    exit
fi
Just thought I'd add, this isn't for more than one file, it's for a file that is renamed to multiple endings.

In a comment to @Paul R's answer, you said "The shell script is also in the same directory as the jar file. So they can just double click it after assigning SH files to open with terminal by default." I suspect that's the problem -- when you run a shell script by double-clicking it, it runs with the working directory set to the user's home directory, not the directory where the script is located. You can work around this by having the script cd to the directory it's in:
cd "$(dirname "$BASH_SOURCE")"
EDIT: $BASH_SOURCE is, of course, a bash extension not available in other shells. If your script can't count on running in bash, use this instead:
case "$0" in
*/*)
cd "$(dirname "$0")" ;;
*)
me="$(which "$0")"
if [ -n "$me" ]; then
cd "$(dirname "$me")"
else
echo "Can't locate script directory" >&2
exit 1
fi ;;
esac
BTW, the construct [ ! -f file-3*.jar ] makes me nervous, since it'll fail bizarrely if there's ever more than one matching file. (I know, that's not supposed to happen; but things that aren't supposed to happen have an annoying tendency to happen anyway.) I'd use this instead:
matchfiles=(file-3*.jar)
if [ ! -f "${matchfiles[0]}" ]; then
...
Again, if you can't count on bash extensions, here's an alternative that should work in any POSIX shell:
if [ ! -f "$(echo file-3*.jar)" ]; then
Note that this will fail (i.e. act as though the file didn't exist) if there's more than one match.
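If you really might end up with more than one match and still need to stay POSIX, a small loop over the glob avoids that failure mode. Here's a sketch reusing the filename pattern from the question:
found=no
for f in file-3*.jar; do
    # an unmatched glob stays literal, so -f fails and found stays "no"
    if [ -f "$f" ]; then
        found=yes
        break
    fi
done
if [ "$found" = no ]; then
    echo "[INFO] jar could not be found."
    exit 1
fi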

I think the problem lies elsewhere, as the script works as expected on Mac OS X here:
$ if [ ! -f file-3*.jar ]; then echo "[INFO] jar could not be found."; fi
[INFO] jar could not be found.
$ touch file-302.jar
$ if [ ! -f file-3*.jar ]; then echo "[INFO] jar could not be found."; fi
$
Perhaps your script is being run under the wrong shell, or in the wrong working directory?

So it's not that it doesn't work for you, it doesn't work for your users? The default shell on OS X has changed over the years (see this post), but it looks from your comment like you have the #! line in place.
Are you sure that your users have the JAR file in the right place? Perhaps it's not the script being wrong as much as it's telling you the correct answer - the required file is missing from where the script is being run.

This isn't so much an answer as a strategy: consider some serious logging. Echo messages such as "[INFO] jar could not be found." both to the screen and to a log file, then add extra logging, such as the values of $PWD, $SHELL and $0, to the log. Then, when your customers/co-workers try to run the script and fail, they can email the log to you.
I would probably use something like
screenlog() {
    echo "$*"
    echo "$*" >> "$LOGFILE"
}

log() {
    echo "$*" >> "$LOGFILE"
}
Define $LOGFILE at the top of your script. Then pepper your script with statements like screenlog "[INFO] jar could not be found." or log "\$PWD: $PWD".
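Putting it together, a minimal sketch of what the top of the script might look like (the log file location is just an example, and the screenlog/log functions above are assumed to be defined first):
LOGFILE="$HOME/myscript.log"   # example location; pick whatever suits your users

screenlog "[INFO] starting up"
log "\$PWD: $PWD"
log "\$SHELL: $SHELL"
log "\$0: $0"

if [ ! -f file-3*.jar ]; then
    screenlog "[INFO] jar could not be found."
    exit 1
fi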

Related

script file not found when using source

I have a bash script in a file named reach.sh.
This file is given execute permission using chmod 755 /Users/vb/Documents/util/bash/reach.sh.
I then created an alias using alias reach='/Users/vb/Documents/util/bash/reach.sh'
So far this works great.
It happens that I need to run this script in my current process, so theoretically I would need to add . or source before my script path.
So I now have alias reach='source /Users/vb/Documents/util/bash/reach.sh'
At this point when I run my alias reach, the script is failing.
Error /Users/vb/Documents/util/bash/reach.sh:7: = not found
Line 7 if [ "$1" == "cr" ] || [ "$1" == "c" ]; then
Full script
#!/bin/bash

# env
REACH_ROOT="/Users/vb/Documents/bitbucket/fork/self"

# process
if [ "$1" == "cr" ] || [ "$1" == "c" ]; then
    echo -e "Redirection to subfolder"
    cd ${REACH_ROOT}/src/cr
    pwd
else
    echo -e "Redirection to root folder"
    cd ${REACH_ROOT}
    pwd
fi
Any idea what I could be missing?
I'm running my script from zsh, which is not bash, so when I force it to run in my current process (with source) it is interpreted by zsh and no longer recognizes the bash-only syntax.
In your question, you say "It happens that I need to run this script in my current process", so I'm wondering why you are using source at all. Just run the script. Observe:
bash-script.sh
#!/bin/bash
if [ "$1" == "aaa" ]; then
    echo "AAA"
fi
zsh-script.sh
#!/bin/zsh
echo "try a ..."
./bash-script.sh a
echo "try aaa ..."
./bash-script.sh aaa
echo "try b ..."
./bash-script.sh b
Output from ./zsh-script.sh:
try a ...
try aaa ...
AAA
try b ...
If, in zsh-script.sh, I put source in front of each ./bash-script.sh, I do get the behavior you described in your question.
But, if you just need to "run this script in my current process", well, then ... just run it.
source tries to read a file as lines to be interpreted by the current shell, which is zsh as you have said. But simply running it causes the first line (the #!/bin/bash "shebang" line) to start a new shell that interprets the lines itself. That completely avoids the problem of using bash syntax from within a zsh context.
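For reference, assuming the script doesn't actually need to change the calling zsh session, the two aliases from the question behave quite differently:
# runs reach.sh in its own bash process, as the #!/bin/bash line requests
alias reach='/Users/vb/Documents/util/bash/reach.sh'

# reads reach.sh line by line into the current zsh, which is what triggers the "= not found" error
alias reach='source /Users/vb/Documents/util/bash/reach.sh'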

Shell Script that monitors a folder for new files

I'm not a pro at shell scripting, that's why I'm asking here :).
Let's say I've got a folder. I need a script that monitors that folder for new files (no prefix of the file names is given). When a new file gets copied into that folder, another script should start. Once the second script has processed the file successfully, the file should be deleted.
I hope you can give me some ideas on how to achieve such script :)
Thank you very much in advance.
Thomas
Try this:
watcher.sh:
#!/bin/bash
if [ -z "$1" ]; then
    echo "You need to specify a dir as argument."
    echo "Usage:"
    echo "$0 <dir>"
    exit 1
fi

while true; do
    for a in "$1"/*; do
        [ -e "$a" ] || continue   # skip the literal pattern when the dir is empty
        # call otherscript with the file as argument; remove the file only if
        # otherscript exits with status 0 (success)
        otherscript "$a" && rm "$a"
    done
    sleep 2s
done
Don't forget to make it executable
chmod +x ./watcher.sh
call it with:
./watcher.sh <dirname>
Try inotify (http://man7.org/linux/man-pages/man7/inotify.7.html); you may need to install inotify-tools (http://www.ibm.com/developerworks/linux/library/l-ubuntu-inotify/) to use it from a shell script.
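If inotify-tools is available (Linux only), an event-driven version of the watcher could look roughly like this; the name process_file.sh is a placeholder for the second script, not something from the question:
#!/bin/bash
# watch for files that finish writing or are moved into the directory
WATCH_DIR="$1"

inotifywait -m -e close_write -e moved_to --format '%w%f' "$WATCH_DIR" |
while read -r newfile; do
    # hand the new file to the second script; delete it only on success
    ./process_file.sh "$newfile" && rm -- "$newfile"
done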

bash script mv command not working

I'm trying to write a script that renames a file at login in OSX Lion.
Here is my script so far:
#!/bin/bash
if [ -f /Users/$1/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/Contents/MacOS/ksadmin ]; then
mv ~/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/Contents/MacOS/ksadmin ~/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/Contents/MacOS/ksadmin1
say "Successful"
else
say "Unsuccessful"
fi
I've created a LoginHook which executes the script. I know it executes at login because the computer speaks when it finds the "ksadmin" file. I know it finds the "ksadmin" file because the computer says "Successful". I have also manually renamed the file, logged out and back in, and the computer says "Unsuccessful".
The problem is that the script doesn't rename "ksadmin" to "ksadmin1". Have I written the command properly?
Any ideas would be great.
Morgan
What are the permissions on ksadmin? If it's read only for your login id and a ksadmin1 already exists, then you may need a mv -f.
Also, you may want to expand "~" to the absolute path. Not sure exactly when your script gets executed, but perhaps bash cannot expand it yet.
Thanks to cdarke, Miquel and mVChr for help. (A LoginHook runs as root and receives the short name of the user logging in as $1, so ~ does not point at that user's home directory.) The solution to the problem is as follows:
#!/bin/bash
if [ -f /Users/$1/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/Contents/MacOS/ksadmin ]; then
mv /Users/$1/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/Contents/MacOS/ksadmin /Users/$1/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/Contents/MacOS/ksadmin1
say "Successful"
else
say "Unsuccessful"
fi
Version I use for deployment:
#!/bin/bash
if [ -f /Users/$1/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/Contents/MacOS/ksadmin ]; then
mv /Users/$1/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/Contents/MacOS/ksadmin /Users/$1/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/Contents/MacOS/ksadmin1
fi

Is this a valid self-update approach for a bash script?

I'm working on a script that has gotten so complex I want to include an easy option to update it to the most recent version. This is my approach:
set -o errexit
SELF=$(basename $0)
UPDATE_BASE=http://something
runSelfUpdate() {
    echo "Performing self-update..."

    # Download new version
    wget --quiet --output-document=$0.tmp $UPDATE_BASE/$SELF

    # Copy over modes from old version
    OCTAL_MODE=$(stat -c '%a' $0)
    chmod $OCTAL_MODE $0.tmp

    # Overwrite old file with new
    mv $0.tmp $0
    exit 0
}
The script seems to work as intended, but I'm wondering if there might be caveats with this kind of approach. I just have a hard time believing that a script can overwrite itself without any repercussions.
To be more clear, I'm wondering, if, maybe, bash would read and execute the script line-by-line and after the mv, the exit 0 could be something else from the new script. I think I remember Windows behaving like that with .bat files.
Update: My original snippet did not include set -o errexit. To my understanding, that should keep me safe from issues caused by wget.
Also, in this case, UPDATE_BASE points to a location under version control (to ease concerns).
Result: Based on the input from these answers, I constructed this revised approach:
runSelfUpdate() {
    echo "Performing self-update..."

    # Download new version
    echo -n "Downloading latest version..."
    if ! wget --quiet --output-document="$0.tmp" $UPDATE_BASE/$SELF ; then
        echo "Failed: Error while trying to wget new version!"
        echo "File requested: $UPDATE_BASE/$SELF"
        exit 1
    fi
    echo "Done."

    # Copy over modes from old version
    OCTAL_MODE=$(stat -c '%a' $SELF)
    if ! chmod $OCTAL_MODE "$0.tmp" ; then
        echo "Failed: Error while trying to set mode on $0.tmp."
        exit 1
    fi

    # Spawn update script
    cat > updateScript.sh << EOF
#!/bin/bash
# Overwrite old file with new
if mv "$0.tmp" "$0"; then
    echo "Done. Update complete."
    rm \$0
else
    echo "Failed!"
fi
EOF

    echo -n "Inserting update process..."
    exec /bin/bash updateScript.sh
}
(At least it doesn't try to continue running after updating itself!)
The thing that makes me nervous about your approach is that you're overwriting the current script (mv $0.tmp $0) as it's running. There are a number of reasons why this will probably work, but I wouldn't bet large amounts that it's guaranteed to work in all circumstances. I don't know of anything in POSIX or any other standard that specifies how the shell processes a file that it's executing as a script.
Here's what's probably going to happen:
You execute the script. The kernel sees the #!/bin/sh line (you didn't show it, but I presume it's there) and invokes /bin/sh with the name of your script as an argument. The shell then uses fopen(), or perhaps open() to open your script, reads from it, and starts interpreting its contents as shell commands.
For a sufficiently small script, the shell probably just reads the whole thing into memory, either explicitly or as part of the buffering done by normal file I/O. For a larger script, it might read it in chunks as it's executing. But either way, it probably only opens the file once, and keeps it open as long as it's executing.
If you remove or rename a file, the actual file is not necessarily immediately erased from disk. If there's another hard link to it, or if some process has it open, the file continues to exist, even though it may no longer be possible for another process to open it under the same name, or at all. The file is not physically deleted until the last link (directory entry) that refers to it has been removed, and no processes have it open. (Even then, its contents won't immediately be erased, but that's going beyond what's relevant here.)
And furthermore, the mv command that clobbers the script file is immediately followed by exit 0.
BUT it's at least conceivable that the shell could close the file and then re-open it by name. I can't think of any good reason for it to do so, but I know of no absolute guarantee that it won't.
And some systems tend to do stricter file locking than most Unix systems do. On Windows, for example, I suspect that the mv command would fail because a process (the shell) has the file open. Your script might fail on Cygwin. (I haven't tried it.)
So what makes me nervous is not so much the small possibility that it could fail, but the long and tenuous line of reasoning that seems to demonstrate that it will probably succeed, and the very real possibility that there's something else I haven't thought of.
My suggestion: write a second script whose one and only job is to update the first. Put the runSelfUpdate() function, or equivalent code, into that script. In your original script, use exec to invoke the update script, so that the original script is no longer running when you update it. If you want to avoid the hassle of maintaining, distributing, and installing two separate scripts, you could have the original script create the update script with a unique name in /tmp; that would also solve the problem of updating the update script. (I wouldn't worry about cleaning up the autogenerated update script in /tmp; that would just reopen the same can of worms.)
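A minimal sketch of that suggestion, assuming the download and chmod steps from the question have already run; the mktemp template is illustrative:
runSelfUpdate() {
    # create a uniquely named updater in /tmp
    updater=$(mktemp /tmp/selfupdate.XXXXXX) || exit 1
    cat > "$updater" << EOF
#!/bin/bash
# replace the old script with the freshly downloaded copy, then remove myself
mv "$0.tmp" "$0" && echo "Update complete."
rm -- "\$0"
EOF
    exec /bin/bash "$updater"   # the original script stops running at this point
}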
Yes, but ... I would recommend you keep a more layered version of your script's history, unless the remote host can also perform version-control with histories. That being said, to respond directly to the code you have posted, see the following comments ;-)
What happens to your system when wget has a hiccup and quietly overwrites part of your working script with only a partial or otherwise corrupt copy? Your next step does a mv $0.tmp $0, so you've lost your working version. (I hope you have it in version control on the remote!)
You can check whether wget reports an error:
if ! wget --quiet --output-document=$0.tmp $UPDATE_BASE/$SELF ; then
    echo "error on wget on $UPDATE_BASE/$SELF"
    exit 1
fi
Also, rule-of-thumb tests will help, e.g.
if (( $(wc -c < $0.tmp) >= $(wc -c < $0) )); then
    mv $0.tmp $0
fi
but are hardly foolproof.
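A somewhat stronger check, assuming you can also publish a checksum file next to the script (the .sha256 URL and the availability of sha256sum are assumptions on my part, not part of the original setup):
expected=$(wget -qO- "$UPDATE_BASE/$SELF.sha256" | awk '{print $1}')
actual=$(sha256sum "$0.tmp" | awk '{print $1}')
if [ -n "$expected" ] && [ "$expected" = "$actual" ]; then
    mv "$0.tmp" "$0"
else
    echo "checksum missing or mismatched for $0.tmp, keeping current version" >&2
    exit 1
fi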
If your $0 could wind up with spaces in it, it's better to surround all references with quotes, like "$0".
To be super bullet-proof, consider checking all command returns AND that OCTAL_MODE has a reasonable value:
OCTAL_MODE=$(stat -c '%a' $0)
case ${OCTAL_MODE:--1} in
    -[1] )
        printf "Error : OCTAL_MODE was empty\n"
        exit 1
        ;;
    777|775|755 ) : nothing ;;
    * )
        printf "Error in OCTAL_MODEs, found value=${OCTAL_MODE}\n"
        exit 1
        ;;
esac

if ! chmod $OCTAL_MODE $0.tmp ; then
    echo "error on chmod $OCTAL_MODE $0.tmp from $UPDATE_BASE/$SELF, can't continue"
    exit 1
fi
I hope this helps.
Very late answer here, but as I just solved this too, I thought it might help someone to post the approach:
#!/usr/bin/env bash
#
set -fb

readonly THISDIR=$(cd "$(dirname "$0")" ; pwd)
readonly MY_NAME=$(basename "$0")
readonly FILE_TO_FETCH_URL="https://your_url_to_downloadable_file_here"
readonly EXISTING_SHELL_SCRIPT="${THISDIR}/somescript.sh"
readonly EXECUTABLE_SHELL_SCRIPT="${THISDIR}/.somescript.sh"

function get_remote_file() {
    readonly REQUEST_URL=$1
    readonly OUTPUT_FILENAME=$2
    readonly TEMP_FILE="${THISDIR}/tmp.file"
    if [ -n "$(which wget)" ]; then
        # download quietly to a temporary file
        wget -O "${TEMP_FILE}" "$REQUEST_URL" > /dev/null 2>&1
        if [[ $? -eq 0 ]]; then
            mv "${TEMP_FILE}" "${OUTPUT_FILENAME}"
            chmod 755 "${OUTPUT_FILENAME}"
        else
            return 1
        fi
    fi
}

function clean_up() {
    # clean up code (if required) that has to execute every time here
    :
}

function self_clean_up() {
    rm -f "${EXECUTABLE_SHELL_SCRIPT}"
}

function update_self_and_invoke() {
    get_remote_file "${FILE_TO_FETCH_URL}" "${EXECUTABLE_SHELL_SCRIPT}"
    if [ $? -ne 0 ]; then
        cp "${EXISTING_SHELL_SCRIPT}" "${EXECUTABLE_SHELL_SCRIPT}"
    fi
    exec "${EXECUTABLE_SHELL_SCRIPT}" "$@"
}

function main() {
    cp "${EXECUTABLE_SHELL_SCRIPT}" "${EXISTING_SHELL_SCRIPT}"
    # your code here
}

if [[ $MY_NAME = \.* ]]; then
    # invoke real main program
    trap "clean_up; self_clean_up" EXIT
    main "$@"
else
    # update myself and invoke updated version
    trap clean_up EXIT
    update_self_and_invoke "$@"
fi

Quick bash script to run a script in a specified folder?

I am attempting to write a bash script that changes directory and then runs an existing script in the new working directory.
This is what I have so far:
#!/bin/bash
cd /path/to/a/folder
./scriptname
scriptname is an executable file that exists in /path/to/a/folder - and (needless to say), I do have permission to run that script.
However, when I run this mind-numbingly simple script (above), I get the response:
scriptname: No such file or directory
What am I missing?! The commands work as expected when entered at the CLI, so I am at a loss to explain the error message. How do I fix this?
Looking at your script makes me think that you want to launch a script which is located in the initial directory. Since you change your directory before executing it, it won't work.
I suggest the following modified script:
#!/bin/bash
SCRIPT_DIR=$PWD
cd /path/to/a/folder
$SCRIPT_DIR/scriptname
Try changing the script to print some diagnostics first:
cd /path/to/a/folder
pwd
ls
./scriptname
which'll show you what it thinks it's doing.
I usually have something like this in my useful script directory:
#!/bin/bash

# Provide usage information if no arguments were supplied
if [[ "$#" -le 0 ]]; then
    echo "Usage: $0 <executable> [<argument>...]" >&2
    exit 1
fi

# Get the executable by removing the last slash and anything before it
X="${1##*/}"

# Get the directory by removing the executable name
D="${1%$X}"

# Check if the directory exists
if [[ -d "$D" ]]; then
    # If it does, cd into it
    cd "$D"
else
    if [[ "$D" ]]; then
        # Complain if a directory was specified, but does not exist
        echo "Directory '$D' does not exist" >&2
        exit 1
    fi
fi

# Check if the executable is, well, executable
if [[ -x "$X" ]]; then
    # Run the executable in its directory with the supplied arguments
    exec ./"$X" "${@:2}"
else
    # Complain if the executable is not present or not executable
    echo "Executable '$X' does not exist in '$D'" >&2
    exit 1
fi
Usage:
$ cdexec
Usage: /home/archon/bin/cdexec <executable> [<argument>...]
$ cdexec /bin/ls ls
ls
$ cdexec /bin/xxx/ls ls
Directory '/bin/xxx/' does not exist
$ cdexec /ls ls
Executable 'ls' does not exist in '/'
One source of such error messages under those conditions is a broken symlink.
However, you say the script works when run from the command line. I would also check to see whether the directory is a symlink that's doing something other than what you expect.
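A few quick checks along those lines (just a sketch using the paths from the question):
ls -l /path/to/a/folder/scriptname    # is it a symlink, and does the target exist?
file /path/to/a/folder/scriptname     # also flags CRLF line endings or an unexpected file type
head -1 /path/to/a/folder/scriptname  # does the #! line point at an interpreter that exists?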
Does it work if you call it in your script with the full path instead of using cd?
#!/bin/bash
/path/to/a/folder/scriptname
What about when called that way from the command line?
