Understanding a Bash if statement that invokes a command

Does anyone know what this is doing:
if ! /fgallery/fgallery -v -j3 /images /usr/share/nginx/html/ "${GALLERY_TITLE:-Gallery}"; then
    mkdir -p /usr/share/nginx/html
I understand the first part to be saying "if the /fgallery/fgallery directory doesn't exist", but after this it is not clear to me.

In Bash, we can build an if based on the exit status of a command this way:
if command; then
    echo "Command succeeded"
else
    echo "Command failed"
fi
The then part is executed when the command exits with status 0, and the else part otherwise.
Your code is doing exactly that, with one twist: the leading ! negates the exit status, so the then branch runs when fgallery fails. (Note also that /fgallery/fgallery is a program being executed here, not a directory being tested.)
It can be rewritten as:
/fgallery/fgallery -v -j3 /images /usr/share/nginx/html/ "${GALLERY_TITLE:-Gallery}"; fgallery_status=$?
if [ "$fgallery_status" -ne 0 ]; then
    mkdir -p /usr/share/nginx/html
fi
But the former construct is more elegant and less error prone.
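As a self-contained illustration of the negated form (the grep command here is just an arbitrary example):
# The then-branch runs only when the command fails, because ! inverts the exit status
if ! grep -q "myhost" /etc/hosts; then
    echo "myhost is not in /etc/hosts"
fi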
See these posts:
How to conditionally do something if a command succeeded or failed
Why is testing "$?" to see if a command succeeded or not, an antipattern?

A way to ignore exit status in gitlab job pipeline

In our project we have a shell script which is to be sourced to set up environment variables for the subsequent build process or to run the built applications.
It contains a block which checks the already set variables and does some adjustment.
# part of setup.sh
for LIBRARY in "${LIBRARIES_WE_NEED[@]}"
do
    echo $LD_LIBRARY_PATH | \grep $LIBRARY > /dev/null
    if [ $? -ne 0 ]
    then
        echo Adding $LIBRARY
        LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$LIBRARY
    else
        echo Not adding $LIBRARY
    fi
done
i.e. it checks if a path to a library is already in $LD_LIBRARY_PATH and if not, adds it.
(To be fair, this could be written differently (like here), but assume the script is supposed to achieve something which is very hard to do without calling a program, checking $? and then either doing one thing or doing another thing).
The .gitlab-ci.yml then contains
before_script:
  - yum install -y <various packages>
  - source setup.sh
but the runner decides to stop the before_script the very moment $? is non-zero, i.e. when the if statement decides to add a path to $LD_LIBRARY_PATH.
Now it is nice that the gitlab runner checks $? after each line of my script, but here it'd be great if the lines in .gitlab-ci.yml were considered atomic.
Is there a way to avoid the intermediate checks of $? in a script that's sourced in .gitlab-ci.yml?
Use command_that_might_fail || true to mask the exit status of said command.
Also note that you can use grep -q to prevent output:
echo "$LD_LIBRARY_PATH" | grep -q "$LIBRARY" || true
This will however also mask $?, which you might not want. If you want to check whether the command exited correctly, you might use:
if echo "$LD_LIBRARY_PATH" | grep -q "$LIBRARY"; then
echo "Adding $LIBRARY"
else
...
fi
I suspect that gitlab-ci sets -e, which you can disable with set +e:
set +e # Disable exit on error
for library in "${LIBRARIES_WE_NEED[@]}"; do
    ...
done
set -e # Enable exit on error
Further reading: Why double quotes matter and Pitfalls with set -e
Another trick I use is a special kind of "|| true", combined with keeping access to the previous exit code.
- exit_code=0
- ./myScript.sh || exit_code=$?
- if [ ${exit_code} -ne 0 ]; then echo "It failed!" ; else echo "It worked!"; fi
The exit_code=$? assignment always evaluates to "true", so you get a non-failing command, but you also capture exit_code and can do whatever you want with it.
Note that you shouldn't skip the first line, or exit_code will be uninitialized: on a successful run of the script the or'ed part is never executed, and the if ends up being
if [ -ne 0 ];
instead of
if [ 0 -ne 0 ];
which causes a syntax error.
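A quick way to see that pitfall in an interactive shell (a minimal sketch, not from the original pipeline):
unset exit_code
true || exit_code=$?    # the command succeeds, so the assignment never runs
[ ${exit_code} -ne 0 ]  # expands to: [ -ne 0 ]
# bash: [: -ne: unary operator expected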

Testing server setup bash scripts

I'm just learning to write bash scripts.
I'm writing a script to setup a new server.
How should I go about testing the script?
E.g., I use apt install for certain packages like Apache, PHP etc., and then a couple of lines down there is an error.
I then need to fix the error and run the script again, but it will run all the install commands again.
The system will probably say the package is already installed, but what if there are commands which append strings to files?
If these are run again, they will append the same string to the file a second time.
What is the best approach to writing bash scripts like this?
Can you do test runs which roll back everything after an error or at the end of the script?
Or, even better, have the script continue from the line where the error occurred the next time it is run?
I'm doing this on an Ubuntu 18.04 server.
It's a matter of how readable you want it to be, but:
[ -f .step01-done ] || your install command && touch .step01-done
[ -f .step02-done ] || your other install command && touch .step02-done
maybe a little easier to read:
if ! [ -f .step01-done ]; then
    if your install command ; then
        touch .step01-done
    fi
fi
if ! [ -f .step02-done ]; then
    if your other install command ; then
        touch .step02-done
    fi
fi
...or something in between.
Now, I would suggest creating a directory somewhere, logging output from the commands to files there (maybe tee it), and definitely putting all the files you create with touch there. That way, if you start the script from another directory by accident, it won't matter. You just need to make sure that apt-get, or whatever you use, actually returns false if it fails. It should.
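A minimal sketch of that idea (the directory, marker names, and package are arbitrary examples):
#!/bin/bash
set -o pipefail                 # so a failing install is not masked by tee
mkdir -p /root/mysetup
cd /root/mysetup || exit 1
if ! [ -f step01.done ]; then
    # log the step's output and only mark it done if apt-get itself succeeded
    if apt-get install -y apache2 2>&1 | tee step01.log; then
        touch step01.done
    fi
fi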
You could even make a function that does it in a nice way...
#!/bin/bash
function do_cmd() {
    if [ -f "$1.done" ]; then
        echo "$2: skipping already completed step"
        return 0
    fi
    echo -n "$2: "
    $3 1> "$1.out" 2> "$1.err"
    if [ $? -eq 0 ]; then
        echo "ok"
        touch "$1.done"
        return 0
    else
        echo "failed"
        echo -e "see \"$1.out\" and/or \"$1.err\" for details."
        return 1
        # could "exit 1" instead
    fi
}
[ -d /root/mysetup ] || mkdir /root/mysetup
if ! [ -d /root/mysetup ]; then
    echo "failed to find or create /root/mysetup directory"
    exit 1
fi
cd /root/mysetup
# ---------------- your steps go here -------------------
do_cmd prog1 "installing prog1" "apt-get install prog1" || exit 1
do_cmd prog2 "installing prog2" "apt-get install prog2" || exit 1
do_cmd startfoo "starting foo service" "service foo start" || exit 1
echo "all setup functions finished."
You would use:
do_cmd identifier "description" "command or function"
identifier: unique identifier used when files are generated:
    identifier.out: standard output from the command
    identifier.err: standard error from the command
    identifier.done: created when the command is successful
description: this is actually printed to the terminal when the step is being executed.
command or function: this is the actual command to run.

Prompt for `sudo` only if Bash script runs into "Permission denied"

Let's say I have a very simple script which creates a link in a certain directory and exits if that fails.
ln -s "/opt/myapp" "${1}/link" || exit 1;
Right now it just quits if it runs into errors. I want to change it so that, only when it runs into permission errors while creating the link, it executes the following lines instead of exiting:
echo "The target directory requires root privileges to access."
sudo ln -s "/opt/myapp" "${1}/myapp" || exit 1;
I don't want to prompt the users to run as root unless they absolutely have to.
ln seems to return exit code 1 on failure, regardless of whether the problem was permissions or some other error such as a directory not existing, so I can't use that to detect which problem it ran into.
And if I instead search the output of ln for the string "Permission denied", I'm assuming it will fail on non-English operating systems.
I don't know of any ways to categorize ln exit reasons, or at least any documentation about specific exit codes you could test with $?, but you can test for relevant permissions with the standard test or [ command:
SOURCEFILE="/opt/myapp"
DESTDIR="${1}"
DESTTARGET="${DESTDIR}/myapp"
if [ ! -d "$DESTDIR" -o ! -e "$SOURCEFILE" ]; then
echo "Source file does not exist or destination directory does not exist." >&2
elif [ ! -r "$SOURCEFILE" -o ! -w "$DESTDIR" ]; then
echo "Source file is not readable or destination directory is not writable." >&2
# Run sudo command here
else
# Should work, run command here
fi

Bash: How to test for failure of mkdir command?

I'm writing a bash script and want to do robust error checking in it.
Simulating a failure for mv is easy: all you have to do is try to move a file that doesn't exist, and it fails.
However, I want to simulate mkdir failing. mkdir could fail for any number of reasons: problems with the disk, lack of permissions, and so on. But I'm not sure how to simulate a failure.
Just use
mkdir your_directory/
if [ $? -ne 0 ] ; then
    echo "fatal"
else
    echo "success"
fi
where $? stands for the exit code from the last command executed.
To create parent directories, when these don't exist, run mkdir -p parent_directory/your_directory/
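For example, a quick way to see the difference (a sketch; the paths are arbitrary):
mkdir parent_directory/your_directory/      # fails if parent_directory does not exist
echo $?                                     # 1
mkdir -p parent_directory/your_directory/   # creates the whole chain
echo $?                                     # 0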
if ! mkdir your_directory 2>/dev/null; then
print_error
exit
fi
or
mkdir your_directory 2>/dev/null || { print_error; exit; }
mkdir will fail if the directory already exists (unless you are using -p), and return an error code of 1 (on my system), so create the directory first to test this on your own system. (Although I would assume that is standard across all shells.)
Alternatively, make the parent directory read-only.
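A minimal sketch of that approach (note it will not fail when run as root, since root bypasses permission checks):
parent=$(mktemp -d)
chmod a-w "$parent"             # make the parent read-only
if ! mkdir "$parent/child" 2>/dev/null; then
    echo "mkdir failed as expected"
fi
chmod u+w "$parent" && rmdir "$parent"   # clean up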
In your script, you could also put a check for the new directory:
mkdir -p new_dir ;
if [ -d new_dir ]; then
    cd new_dir && ......   # anything else you want
else
    echo "error in directory creation";
    exit 2 ;
fi
If you are lazy, a simple set -e at the beginning of your script is enough. Often you just want to print an error and then terminate if something goes wrong.
Not exactly what you asked for, but perhaps what you want.
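For illustration, a minimal sketch of that approach:
#!/bin/bash
set -e                 # abort on the first command that exits non-zero
mkdir your_directory   # if this fails, the script stops here
echo "directory created"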

Quick bash script to run a script in a specified folder?

I am attempting to write a bash script that changes directory and then runs an existing script in the new working directory.
This is what I have so far:
#!/bin/bash
cd /path/to/a/folder
./scriptname
scriptname is an executable file that exists in /path/to/a/folder - and (needless to say), I do have permission to run that script.
However, when I run this mind-numbingly simple script (above), I get the response:
scriptname: No such file or directory
What am I missing?! The commands work as expected when entered at the CLI, so I am at a loss to explain the error message. How do I fix this?
Looking at your script makes me think that the script you want to launch is located in the initial directory. Since you change the directory before executing it, it won't work.
I suggest the following modified script:
#!/bin/bash
SCRIPT_DIR=$PWD
cd /path/to/a/folder
$SCRIPT_DIR/scriptname
To debug this, try:
cd /path/to/a/folder
pwd
ls
./scriptname
which'll show you what it thinks it's doing.
I usually have something like this in my useful script directory:
#!/bin/bash
# Provide usage information if no arguments were supplied
if [[ "$#" -le 0 ]]; then
    echo "Usage: $0 <executable> [<argument>...]" >&2
    exit 1
fi
# Get the executable by removing the last slash and anything before it
X="${1##*/}"
# Get the directory by removing the executable name
D="${1%$X}"
# Check if the directory exists
if [[ -d "$D" ]]; then
    # If it does, cd into it
    cd "$D"
else
    if [[ "$D" ]]; then
        # Complain if a directory was specified but does not exist
        echo "Directory '$D' does not exist" >&2
        exit 1
    fi
fi
# Check if the executable is, well, executable
if [[ -x "$X" ]]; then
    # Run the executable in its directory with the supplied arguments
    exec ./"$X" "${@:2}"
else
    # Complain if the executable does not exist or is not executable
    echo "Executable '$X' does not exist in '$D'" >&2
    exit 1
fi
Usage:
$ cdexec
Usage: /home/archon/bin/cdexec <executable> [<argument>...]
$ cdexec /bin/ls ls
ls
$ cdexec /bin/xxx/ls ls
Directory '/bin/xxx/' does not exist
$ cdexec /ls ls
Executable 'ls' does not exist in '/'
One source of such error messages under those conditions is a broken symlink.
However, you say the script works when run from the command line. I would also check to see whether the directory is a symlink that's doing something other than what you expect.
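A quick way to check both (a sketch using the path from the question):
ls -l /path/to/a/folder/scriptname   # a symlink shows its target here
file /path/to/a/folder/scriptname    # reports "broken symbolic link to ..." if the target is missing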
Does it work if you call it in your script with the full path instead of using cd?
#!/bin/bash
/path/to/a/folder/scriptname
What about when called that way from the command line?
