I have the following bash script
aws s3 sync s3://test/ s3://test-li/
if [[ ! $? -eq 0 ]]; then
echo "Unable to copy from test bucket"
exit 1
fi
Is that the right way to run a command and check its return value?
Your code will work, but any good shell script declares what shell should interpret it. You should add a first line to your script (no spaces at the left margin):
#!/bin/bash
(Or whatever the correct path to bash is in your environment; 99% of the time it is /bin/bash.)
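If bash might be installed elsewhere (as on some BSD systems), a common portable alternative is to let env locate it:
#!/usr/bin/env bash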
Your test is slightly baroque; there is an -ne operator that you could use, i.e.
if [[ $? -ne 0 ]] ; then
. . .
Or you can go advanced and let if directly test the return code from your aws command, i.e.
if ! aws s3 sync s3://test/ s3://test-li/ ; then
echo "Unable to copy from test bucket"
exit 1
fi
In this case, you need the ! so that the block executes when the command fails.
You could even capture any output so you can review error messages etc. with
if ! aws s3 sync s3://test/ s3://test-li/ > /tmp/aws_launchlog.txt 2>&1 ; then
echo "Unable to copy from test bucket"
exit 1
fi
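If you go this route, you could, for example, dump the captured log on failure so the error details end up in the script's own output (the log path is just an example):
if ! aws s3 sync s3://test/ s3://test-li/ > /tmp/aws_launchlog.txt 2>&1 ; then
    echo "Unable to copy from test bucket" >&2
    cat /tmp/aws_launchlog.txt >&2   # show what aws reported
    exit 1
fi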
IHTH
Unlike most other languages, in shells like bash the if statement is followed by a command; the brackets are fooling you. [[ is actually a shell keyword (as is !), a development of [, which is a shell built-in also known as the test command.
Use [[ when you wish to do pattern matching, use (( when you wish to do arithmetic comparisons.
if (( some_variable > 0 ))
then
...
fi
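And a pattern-matching sketch with [[ (the variable and pattern are just examples):
if [[ $filename == *.txt ]]
then
    echo "looks like a text file"
fi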
If you just want to test whether a command worked (returned zero), then any form of brackets is superfluous.
if ! aws s3 sync s3://test/ s3://test-li/
then
# Always send error messages to stderr
echo "Unable to copy from test bucket" >&2
exit 1
fi
Having said that, there are thousands of scripts out there in the wild that do what you have done, and they still work. Unfortunately.
Your way will work. A simpler way is to put the command directly in the if:
if ! aws s3 sync s3://test/ s3://test-li/
then
echo "Unable to copy from test bucket"
exit 1
fi
bash's set -e is quite useful in these cases.
#!/bin/bash
set -e
aws s3 sync s3://test/ s3://test-li/
# if `aws` returns non-zero, the following code will not be executed
echo 'it succeeded!'
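If you still want a custom error message under set -e, one option (a sketch, not the only way) is an ERR trap, which runs just before the shell exits on the failing command:
#!/bin/bash
set -e
trap 'echo "Unable to copy from test bucket" >&2' ERR
aws s3 sync s3://test/ s3://test-li/
echo 'it succeeded!'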
I need to check if a file exists in a GitLab deployment pipeline. How do I do it efficiently and reliably?
Use gsutil ls gs://bucket/object-name and check the return value for 0.
If the object does not exist, the return value is 1.
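A minimal sketch of that check (bucket and object names are placeholders):
if gsutil ls gs://bucket/object-name > /dev/null 2>&1; then
    echo "object exists"
else
    echo "object does not exist"
fi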
You can add the following shell script in a GitLab job:
#!/usr/bin/env bash
set -o pipefail
set -u
gsutil -q stat gs://your_bucket/folder/your_file.csv
PATH_EXIST=$?
if [ ${PATH_EXIST} -eq 0 ]; then
echo "Exist"
else
echo "Not Exist"
fi
I used the gcloud CLI and gsutil with the stat command and the -q option.
In this case, the command returns 0 if the file exists and 1 otherwise.
This answer evolved from Mazlum Tosun's answer. Because I think it is a substantial improvement, with fewer lines and no switching of global settings, it needs to be a separate answer.
Ideally the answer would be something like this
- gsutil stat $BUCKET_PATH
- if [ $? -eq 0 ]; then
... # do if file exists
else
... # do if file does not exists
fi
$? stores the exit status of the previous command: 0 on success. This works fine in a local console. The problem with GitLab is that if the file does not exist, "gsutil stat $BUCKET_PATH" will produce a non-zero exit code and the whole pipeline will stop at that line with an error. We need to catch the error while still recording whether the command succeeded.
We will use the or operator || to suppress the error. FILE_EXISTS=false will only be executed if gsutil stat fails.
- gsutil stat $BUCKET_PATH || FILE_EXISTS=false
- if [ "$FILE_EXISTS" = false ]; then
... # do stuff if file does not exist
else
... # do stuff if file exists
fi
Also, we can use the -q flag to make gsutil stat silent, if that is desired.
- gsutil -q stat $BUCKET_PATH || FILE_EXISTS=false
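If you prefer the variable never to be unset, you could also initialize it explicitly before the check (a small defensive addition, not required for the logic above):
- FILE_EXISTS=true
- gsutil -q stat $BUCKET_PATH || FILE_EXISTS=false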
Clean and simple: how do I check with bash for certain parts of the folder I'm currently in?
#!/usr/bin/sh
CURRENTFOLDER=$(pwd)
echo "${CURRENTFOLDER}"
CHECKFOLDER="/home/*/domains/*/public_html"
if [ $CURRENTFOLDER ! $CHECKFOLDER ]
then
echo "Current folder is not /home/user/domains/domain.com/public_html"
exit
fi
User and domain are variable; I don't need to know them for this check, just the 3 pre-defined folders in the variable CHECKFOLDER.
There's a problem with this approach.
For example in bash the following expression evaluates to true:
[[ /www/user/domains/local/public_html == /www/*/public_html ]]
It is more accurate to use a bash regex:
[[ /www/user/domains/local/public_html =~ ^/www/[^/]+/public_html$ ]]
So your code would become:
#!/bin/bash
current_folder=$PWD
check_folder='^/home/[^/]+/domains/[^/]+/public_html$'
if ! [[ $current_folder =~ $check_folder ]]
then
echo "Current folder is not /home/user/domains/domain.com/public_html"
exit
fi
BTW, the shebang needs to be bash, not sh. And it's kind of dangerous to capitalize your variables.
Try this (almost) Shellcheck-clean code:
#! /usr/bin/sh
curr_phpath=''
for phpath in /home/*/domains/*/public_html/; do
if [ "$phpath" -ef . ]; then
curr_phpath=$phpath
break
fi
done
if [ -z "$curr_phpath" ]; then
echo "Current folder is not /home/user/domains/domain.com/public_html" >&2
exit 1
fi
Because of aliasing mechanisms (e.g. symbolic links, bind mounts) it is very difficult in general to determine if two paths reference the same file or directory by comparing them textually. See How to check if two paths are equal in Bash? for more information. This solution uses a more reliable mechanism to determine if the current directory is one of the valid ones.
Since the shebang line references sh instead of bash, the code avoids Bashisms. It's been tested with both bash and dash (probably the most common non-Bash sh).
See Correct Bash and shell script variable capitalization for an explanation of why the code does not use ALL_UPPERCASE variable names.
The [ "$phpath" -ef . ] test is true if the .../public_html path being checked is the same directory as the current directory. The -ef operator is not in POSIX so it is not guaranteed to be supported by an sh shell, and Shellcheck (correctly) warns about it. However, it is supported in both bash and dash, and sh is usually one of those (on Linux at least).
You can save a step just by changing to the directory instead of checking.
First, check that your glob matches exactly one path.
Then cd to it, which also confirms it's a directory.
#! /bin/bash
IFS="$(printf '\n\t')"
files=( $(compgen -G '/home/*/domains/*/public_html') )
if [[ "${#files[#]}" != 1 ]]
then
printf 'Multiple matches\n' >&2
exit 1
fi
if ! cd "${files[0]}"
then
printf 'Cannot chdir\n' >&2
exit 1
fi
In our project we have a shell script which is to be sourced to set up environment variables for the subsequent build process or to run the built applications.
It contains a block which checks the already-set variables and makes some adjustments.
# part of setup.sh
for LIBRARY in "${LIBRARIES_WE_NEED[@]}"
do
echo $LD_LIBRARY_PATH | \grep $LIBRARY > /dev/null
if [ $? -ne 0 ]
then
echo Adding $LIBRARY
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$LIBRARY
else
echo Not adding $LIBRARY
fi
done
i.e. it checks if a path to a library is already in $LD_LIBRARY_PATH and if not, adds it.
(To be fair, this could be written differently (like here), but assume the script is supposed to achieve something that is very hard to do without calling a program, checking $?, and then doing either one thing or another.)
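For reference, one common idiom for that membership test without an external program is a case pattern match; this is just a sketch, not necessarily what the linked alternative does:
case ":$LD_LIBRARY_PATH:" in
    *":$LIBRARY:"*) echo "Not adding $LIBRARY" ;;
    *) echo "Adding $LIBRARY"
       LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$LIBRARY ;;
esac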
The .gitlab-ci.yml then contains
before_script:
- yum install -y <various packages>
- source setup.sh
but the runner decides to stop the before_script the very moment $? is non-zero, i.e. when the if statement decides to add a path to $LD_LIBRARY_PATH.
Now it is nice that the gitlab runner checks $? after each line of my script, but here it'd be great if the lines in .gitlab-ci.yml were considered atomic.
Is there a way to avoid the intermediate checks of $? in a script that's sourced in .gitlab-ci.yml?
Use command_that_might_fail || true to mask the exit status of said command.
Also note that you can use grep -q to prevent output:
echo "$LD_LIBRARY_PATH" | grep -q "$LIBRARY" || true
This will, however, also mask $?, which you might not want. If you want to check whether the command exited correctly, you might use:
if echo "$LD_LIBRARY_PATH" | grep -q "$LIBRARY"; then
echo "Adding $LIBRARY"
else
...
fi
I suspect that gitlab-ci sets -e, which you can disable with set +e:
set +e # Disable exit on error
for library in "${LIBRARIES_WE_NEED[@]}"; do
...
done
set -e # Enable exit on error
Future reading: Why double quotes matter and Pitfalls with set -e
Another trick that I use is a special kind of || true, combined with keeping access to the previous exit code.
- exit_code=0
- ./myScript.sh || exit_code=$?
- if [ ${exit_code} -ne 0 ]; then echo "It failed!" ; else echo "It worked!"; fi
The exit_code=$? assignment always succeeds, so you get a non-failing command, but you also receive the exit code and can do whatever you want with it.
Note that you shouldn't skip the first line, or exit_code will be uninitialized (since on a successful run of the script the or'ed part is never executed and the if ends up being)
if [ -ne 0 ];
instead of
if [ 0 -ne 0 ];
which causes a syntax error.
I'm trying to write a script that will accept exactly one argument. I'm still learning, so I don't understand what's wrong with my code: even though I change the number of inputs, the code just exits. (Note: I'm going to use $dir in later if/then statements, but I haven't included them here.)
#!/bin/bash
echo -n "Specify the name of the directory"
read dir
if [ $# -ne 1 ]; then
echo "Script requires one and only one argument"
exit
fi
You can use https://www.shellcheck.net/ to double check your syntax.
$# tells you how many arguments the script was called with.
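For example, in a hypothetical call:
# called as: ./script.bash a b c
echo $#   # prints 3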
Here you have two options.
Option 1: Use arguments
#!/bin/bash
if [[ $# -ne 1 ]]
then
echo "Script requires one and only one argument"
exit 1
else
echo "ok, arg1 is $1"
fi
To call the script do: ./script.bash argument
Use [[ ]] for testing conditions (http://mywiki.wooledge.org/BashFAQ/031)
exit 1: by default, when a script exits with a 0 status code, it means it worked OK. Here, since it is an error, specify a non-zero value.
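A hypothetical session showing the status from the caller's side:
./script.bash            # no argument given: prints the error message
echo $?                  # prints 1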
Option 2: Do not use arguments, ask the user for a value.
Note: this version does not use arguments at all.
#!/bin/bash
read -r -p "Specify the name of the directory: " dir
if [[ ! -d "$dir" ]]
then
echo "Error, directory $dir does not exist."
exit 1
else
echo "ok, directory $dir exists."
fi
To call the script do: ./script.bash without any arguments.
You should research bash tutorials to learn how to use arguments.
I want to create a directory with increasing numbers every time I run the script. My current solution is:
COUNTER=1
while mkdir $COUNTER; (( $? != 0 ))
do
COUNTER=$((COUNTER + 1))
done
Is separating the commands in the while condition with a ; (semicolon) the best practice?
The very purpose of while and the shell's other control statements is to run a command and examine its exit code. You can use ! to negate the exit code.
while ! mkdir "$COUNTER"
do
COUNTER=$((COUNTER + 1))
done
Notice also the quoting; see further Why is testing "$?" to see if a command succeeded or not, an anti-pattern?
As such, if you want two commands to run and only care about the exit code from the second, a semicolon is the correct separator. Often, you want both commands to succeed; then, && is the correct separator to use.
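For example (the directory name is hypothetical):
mkdir newdir && cd newdir   # cd runs only if mkdir succeeded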
You don't need to test the exit status; just check if the directory exists already and increment. Here is one way:
#!/usr/bin/env bash
counter=1
while [[ -e $counter ]]; do
((counter++))
done
if ! mkdir "$counter"; then ##: mkdir failed
echo failed ##: execute this code
fi
For a POSIX sh shell:
#!/usr/bin/env sh
counter=1
while [ -e "$counter" ]; do
counter=$((counter+1))
done
if ! mkdir "$counter"; then ##: If mkdir failed
echo failed ##: Execute this code
fi
The bang ! negates the exit status of mkdir.
See help test | grep '^[[:blank:]]*!'
Well, if you're just going to negate the exit status of mkdir inside the while loop, then you might as well use until, which is the opposite of while:
counter=1
until mkdir "$counter"; do
    counter=$((counter + 1))
done