I am trying to check the return code of every command inside my bash script, so I created a function for this named check_command_return_code. This function is called from other functions that run commands, and it seems to work as expected except for the envsubst command.
This is my check_command_return_code:
check_command_return_code(){
    "$@"
    if [ "$?" -ne 0 ]; then
        echo "[ERROR] Error with command $@"
        exit 1
    fi
    echo "[SUCCESS] Command $@ has successfully run"
}
I also wrote this function in order to substitute environment variables inside a YAML file:
substitute_env_variables_into_file(){
    echo "Create new file named $2 from $1 by substituting environment variables within it"
    check_command_return_code envsubst < $1 > $2
}
I call the function that performs the substitution like this:
substitute_env_variables_into_file "./ingress-values.yaml" "./ingress-values-subst.yaml"
This is my ingress-values.yaml file:
controller:
  replicaCount: 2
  service:
    loadBalancerIP: "$INTERNAL_LOAD_BALANCER_IP"
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
I expect my ingress-values-subst.yaml to look like this:
controller:
  replicaCount: 2
  service:
    loadBalancerIP: "my_private_ip"
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
Unfortunately, ingress-values-subst.yaml also ends up containing the echo output of my check_command_return_code function, as you can see:
controller:
  replicaCount: 2
  service:
    loadBalancerIP: "my_private_ip"
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
[SUCCESS] Command envsubst has successfully run
I enabled "debug" mode with the following command in order to get more verbosity:
set -x
These are the logs from the output of my script:
++ substitute_env_variables_into_file ./private-ingress-values.yaml ./private-ingress-values-subst.yaml
++ echo 'Create new file named ./ingress/private-ingress-values-subst.yaml from ./private-ingress-values.yaml by substituting environment variables within it'
Create new file named ./private-ingress-values-subst.yaml from ./private-ingress-values.yaml by substituting environment variables within it
++ check_command_return_code envsubst
++ envsubst
++ '[' 0 -ne 0 ']'
++ echo '[SUCCESS] Command envsubst has successfully run'
I don't understand why the parameters of my envsubst command are not passed into my check_command_return_code function, as you can see in the previous logs.
Thanks in advance for your help
I don't understand why the parameters of my envsubst command are not passed into my check_command_return_code
Redirections are not parameters. Redirections are opened at the time the line is executed.
When you do your_function > file, standard output is redirected to file for the whole duration of the function, including for every command run inside your_function.
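A tiny illustration of that behaviour (the function and file names here are made up):

log_and_list() {
    echo "some log line"    # this echo is also caught by the redirection
    ls
}

log_and_list > out.txt
# out.txt now contains "some log line" followed by the ls output,
# which is exactly what happens to the [SUCCESS] message in your case.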
Wrap it in yet another function:
myenvsubst() {
    envsubst < "$1" > "$2"
}
check_command_return_code myenvsubst "$1" "$2"
Or better yet, write log information to standard error, or another file descriptor.
echo "[ERROR] Error with command $*" >&2
Check your scripts with shellcheck to find problems such as these:
< $1 > $2
$1 and $2 are not quoted. They should be < "$1" > "$2".
if [ "$?" -ne 0 ]; then
is an antipattern. Prefer if ! "$@"; then.
echo "[ERROR] Error with command $#"
is an odd usage of quoted $#. Prefer $*, or move to a separate argument.
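Putting those suggestions together, a sketch of how the functions from the question could look (one possible rewrite, not the only one):

myenvsubst() {
    envsubst < "$1" > "$2"
}

check_command_return_code() {
    if ! "$@"; then
        echo "[ERROR] Error with command $*" >&2
        exit 1
    fi
    echo "[SUCCESS] Command $* has successfully run" >&2
}

substitute_env_variables_into_file() {
    echo "Create new file named $2 from $1 by substituting environment variables within it"
    check_command_return_code myenvsubst "$1" "$2"
}

Because the log messages now go to standard error and the redirection lives inside myenvsubst, only the envsubst output can ever end up in the target file.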
Related
I have a script that I'd like people to source, but only optionally, so they can run it with or without sourcing it; it's up to them.
e.g. The following should both work:
$ . test.sh
$ test.sh
The problem is, test.sh contains exit statements if correct args aren't passed in. If someone sources the script, then the exit commands exit the terminal!
I've done a bit of research and saw from this StackOverflow post that I could detect whether it's being sourced and do something different, but what would that something different be?
The normal way to exit from a sourced script is simply to return (optionally adding the desired exit code) outside of any function. Assuming the -e flag is set when the script is run as a command, this will also exit the script:
#!/bin/sh -eu
if [ $# = 0 ]
then
    echo "Usage $0 <argument>" >&2
    return 1
fi
If we're running without -e, we might be able to return || exit instead.
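In bash, a sketch of that fallback could look like this (the 2>/dev/null only hides the "can only return from a function or sourced script" warning that bash prints when the script is executed rather than sourced):

if [ $# = 0 ]
then
    echo "Usage $0 <argument>" >&2
    return 1 2>/dev/null || exit 1
fi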
There may be better ways to do this, but here's a sample script showing how I got this to work:
bparks@home
$ set | grep TESTVAR
bparks@home
$ ./test.sh
Outputs some useful information to the console. Please pass one arg.
bparks@home
$ set | grep TESTVAR
bparks@home
$ . ./test.sh
Outputs some useful information to the console. Please pass one arg.
bparks@home
$ set | grep TESTVAR
bparks@home
$ ./test.sh asdf
export TESTVAR=me
bparks@home
$ set | grep TESTVAR
bparks@home
$ . ./test.sh asdf
bparks@home
$ set | grep TESTVAR
TESTVAR=me
bparks@home
$
test.sh
#!/usr/bin/env bash

# store if we're sourced or not in a variable
(return 0 2>/dev/null) && SOURCED=1 || SOURCED=0

exitIfNotSourced(){
    [[ "$SOURCED" != "0" ]] || exit;
}

showHelp(){
    IT=$(cat <<EOF
Outputs some useful information to the console. Please pass one arg.
EOF
    )
    echo "$IT"
}

# Show help if no args supplied - works if sourced or not sourced
if [ -z "$1" ]
then
    showHelp
    exitIfNotSourced;
    return;
fi

# your main script follows
# this sample shows exporting a variable if sourced,
# and outputting this to stdout if not sourced
if [ "$SOURCED" == "1" ]
then
    export TESTVAR=me
else
    echo "export TESTVAR=me"
fi
Check out this answer for a better description and a proper solution.
And here is how it is done in docker-entrypoint.sh in the official MySQL image:
# check to see if this file is being run or sourced from another script
_is_sourced() {
    # https://unix.stackexchange.com/a/215279
    [ "${#FUNCNAME[@]}" -ge 2 ] \
        && [ "${FUNCNAME[0]}" = '_is_sourced' ] \
        && [ "${FUNCNAME[1]}" = 'source' ]
}
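In that entrypoint script the check is then used roughly like this (paraphrased; _main stands for the image's main routine, so treat this as a sketch rather than a verbatim quote):

# if we are sourced from elsewhere, don't run the main logic
if ! _is_sourced; then
    _main "$@"
fi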
* the wording of the question is terrible, sorry!
I have some bash functions I created:
test() { echo "hello world"; }
test2() { echo "hello world"; }
Then in my .bashrc I source the file that has the above functions: . ~/my_bash_scripts/testFile
In the terminal I can run test and get hello world.
Is there a way for me to add a parent command that holds all my functions together? For example personal test, personal test2.
Similar to every other gem out there, I downloaded a Twitter one. All of its methods are prefixed with the letter t, as in t status to write a status, instead of just status.
You are asking about writing a command-line program. Here is a simple one:
#!/usr/bin/env bash
if [[ $# -eq 0 ]]; then
    echo "no command specified"
    exit
elif [[ $# -gt 1 ]]; then
    echo "only one argument expected"
    exit
fi

case "$1" in
    test)
        echo "hello, this is test1"
        ;;
    test2)
        echo "hello, this is test2"
        ;;
    *)
        echo "unknown command: $1"
        ;;
esac
Then save it and make it executable by running chmod +x script.sh, and in your .bashrc file, add alias personal="/fullpath/to/the/script.sh".
This is just a very basic and simple example using bash; of course, you can use any language you like, e.g. Python, Ruby, Node, etc.
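With that alias loaded (open a new shell or source your .bashrc first), usage looks roughly like this:

$ personal test
hello, this is test1
$ personal test2
hello, this is test2
$ personal something-else
unknown command: something-else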
Use arguments to determine final outputs.
You can use "$#" for number of arguments.
For example,
if [ $# -ne 2 ]; then
    # TODO: print usage
    exit 1
fi
The above code exits if the number of arguments is not equal to 2.
So the following bash program
echo $#
with
thatscript foo bar baz quux
will output 4.
Finally, you can combine the arguments to decide what to write to stdout.
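As a sketch of that idea, a hypothetical personal wrapper function could dispatch on its first argument to the functions you already source (the function names match the question; the wrapper itself is made up):

personal() {
    if [ $# -lt 1 ]; then
        echo "usage: personal <function> [args...]" >&2
        return 1
    fi
    case "$1" in
        test|test2) "$@" ;;                            # run the matching function with any remaining arguments
        *) echo "unknown command: $1" >&2; return 1 ;;
    esac
}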
If you want to flag some functions as your personal functions: no, there is no explicit way to do that; essentially, all shell functions belong to you (although some may be defined by your distro maintainer or system administrator as system-wide defaults).
What you could do is collect the output from declare -F at the very top of your personal shell startup file; any function not in that list is your personal function.
SYSFNS=$(declare -F | awk '{ a[++i] = $3 }
END { for (n=1; n<=i; n++) printf "%s%s", (n>1? ":" : ""), a[n] }')
This generates a variable SYSFNS which contains a colon-separated list of system-declared functions.
With that defined, you can check out which functions are yours:
myfns () {
    local fun
    declare -F |
        while read -r _ _ fun; do
            case :$SYSFNS: in *:"$fun":*) continue;; esac
            echo "$fun"
        done
}
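Assuming the SYSFNS snapshot was taken before your own files were sourced, usage is then simply:

. ~/my_bash_scripts/testFile   # defines test and test2
myfns                          # lists only the functions declared after the snapshot, i.e. yours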
I am currently learning bash programming and don't really understand why passing arguments is not working for me.
I have a script like this:
#!/bin/bash
# the following environment variables must be set before running this script
#   SIM_DIR         name of directory containing armsim
#   TEST_DIR        name of the directory containing this script and the expected outputs
#   LOG_DIR         name of the directory that your output is written to by the run_test2 script
#   ARMSIM_VERBOSE  set to "-v" for verbose logging or leave unset

# First check the environment variables are set
giveup=0
if [[ ${#SIM_DIR} -eq 0 || ${#TEST_DIR} -eq 0 || ${#LOG_DIR} -eq 0 ]] ; then
    echo One or more of the following environment variables must be set:
    echo SIM_DIR, TEST_DIR, LOG_DIR
    giveup=1
fi

# Now check the verbose flag
if [[ ${#ARMSIM_VERBOSE} != 0 && "x${ARMSIM_VERBOSE}" != "x-v" ]] ; then
    echo ARMSIM_VERBOSE must be unset, empty or set to -v
    giveup=1
fi

# Stop if environment is not set up
if [ ${giveup} -eq 1 ] ; then
    exit 0
fi

cd ${TEST_DIR}
for i in test2-*.sh; do
    echo "**** Running test ${i%.sh} *****"
    ./$i > ${LOG_DIR}/${i%.sh}.log
done
When I run the .sh file and pass in three example arguments as below:
$ ./run_test2 SIM_DIR TEST_DIR LOG_DIR
It still shows: One or more of the following environment variables must be set:
SIM_DIR, TEST_DIR, LOG_DIR
Can anyone guide me on this? Thank you.
That's not how it's intended to work. The environment variables must be set beforehand, either in the script or in the terminal, like this:
export SIM_DIR=/home/someone/simulations
export TEST_DIR=/home/someone/tests
export LOG_DIR=/home/someone/logs
./run_test2
If you use these variables frequently, you might want to export them in ~/.bashrc. The syntax is identical to the exports in the above example.
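If you only need them for a single run, you can also put the assignments on the same line as the command instead of exporting them first (standard shell behaviour; the paths are just placeholders):

SIM_DIR=/home/someone/simulations TEST_DIR=/home/someone/tests LOG_DIR=/home/someone/logs ./run_test2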
Environment variables aren't really arguments in the sense I understand from your question/example. It sounds to me like you want to give arguments to a function/script; if you do that, you can find your arguments in $1 through $9 (and beyond, via ${10}, ${11}, and so on), and the number of arguments is stored in $#.
Example function that expects two arguments:
my_func() {
    if [ $# -ne 2 ]; then
        printf "You need to give 2 arguments\n"
        return
    fi
    printf "Your first argument: %s\n" "$1"
    printf "Your second argument: %s\n" "$2"
}

# Call the function like this
my_func arg1 arg2
I want to write code like this:
command="some command"
safeRunCommand $command
safeRunCommand() {
    cmnd=$1
    $($cmnd)
    if [ $? != 0 ]; then
        printf "Error when executing command: '$command'"
        exit $ERROR_CODE
    fi
}
But this code does not work the way I want. Where did I make the mistake?
Below is the fixed code:
#!/bin/ksh
safeRunCommand() {
    typeset cmnd="$*"
    typeset ret_code

    echo cmnd=$cmnd
    eval $cmnd
    ret_code=$?
    if [ $ret_code != 0 ]; then
        printf "Error: [%d] when executing command: '$cmnd'" $ret_code
        exit $ret_code
    fi
}

command="ls -l | grep p"
safeRunCommand "$command"
Now if you look into this code, the few things that I changed are:
use of typeset is not necessary, but it is a good practice: it makes cmnd and ret_code local to safeRunCommand.
use of ret_code is not necessary, but it is a good practice to store the return code in some variable (and store it as soon as possible), so that you can use it later, like I did in printf "Error: [%d] when executing command: '$cmnd'" $ret_code.
pass the command with quotes surrounding it, like safeRunCommand "$command". If you don't, cmnd will get only the value ls and not ls -l. This is even more important if your command contains pipes.
you can use typeset cmnd="$*" instead of typeset cmnd="$1" if you want to keep the spaces. Try both, depending upon how complex your command argument is.
eval is used to evaluate the command string, so that a command containing pipes works fine.
Note: Do remember that some commands, like grep, return 1 even though there isn't any error: if grep finds something it returns 0, else 1.
I tested this with KornShell and Bash, and it worked fine. Let me know if you face issues running it.
Try
safeRunCommand() {
    "$@"
    if [ $? != 0 ]; then
        printf "Error when executing command: '$1'"
        exit $ERROR_CODE
    fi
}
It should be $cmnd instead of $($cmnd). It works fine with that on my box.
Your script works only for one-word commands, like ls. It will not work for "ls cpp". For this to work, replace cmnd="$1"; $cmnd with "$@". And do not run your script as command="some cmd"; safeRunCommand command. Run it as safeRunCommand some cmd.
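With that "$@" version, the call sites pass the command and its arguments directly instead of packing them into a string, for example (the commands here are only illustrative):

safeRunCommand ls -l /tmp
safeRunCommand mkdir -p build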
Also, when you have to debug your Bash scripts, execute with '-x' flag. [bash -x s.sh].
There are several things wrong with your script.
Functions (subroutines) should be declared before attempting to call them. You probably want to return() but not exit() from your subroutine to allow the calling block to test the success or failure of a particular command. That aside, you don't capture 'ERROR_CODE' so that is always zero (undefined).
It's good practice to surround your variable references with curly braces, too. Your code might look like:
#!/bin/sh

command="/bin/date -u"      #...Example Only

safeRunCommand() {
    cmnd="$@"               #...ensure whitespace is passed and preserved
    $cmnd
    ERROR_CODE=$?           #...so we have it for the command we want
    if [ ${ERROR_CODE} != 0 ]; then
        printf "Error when executing command: '${command}'\n"
        exit ${ERROR_CODE}  #...consider 'return()' here
    fi
}

safeRunCommand $command
command="cp"
safeRunCommand $command
The normal idea would be to run the command and then use $? to get the exit code. However, sometimes you have multiple cases in which you need to get the exit code. For example, you might need to hide its output, but still return the exit code, or print both the exit code and the output.
ec() { [[ "$1" == "-h" ]] && { shift && eval $* > /dev/null 2>&1; ec=$?; echo $ec; } || eval $*; ec=$?; }
This will give you the option to suppress the output of the command you want the exit code for. When the output is suppressed for the command, the exit code will directly be returned by the function.
I personally like to put this function in my .bashrc file.
Below I demonstrate a few ways in which you can use this:
# In this example, the output for the command will be
# normally displayed, and the exit code will be stored
# in the variable $ec.
$ ec echo test
test
$ echo $ec
0
# In this example, the exit code is output
# and the output of the command passed
# to the `ec` function is suppressed.
$ echo "Exit Code: $(ec -h echo test)"
Exit Code: 0
# In this example, the output of the command
# passed to the `ec` function is suppressed
# and the exit code is stored in `$ec`
$ ec -h echo test
$ echo $ec
0
Solution to your code using this function
#!/bin/bash
if [[ "$(ec -h 'ls -l | grep p')" != "0" ]]; then
echo "Error when executing command: 'grep p' [$ec]"
exit $ec;
fi
You should also note that the exit code you will be seeing is that of the grep command being run, as it is the last command executed, not that of ls.
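As an aside (not part of the ec function above), if you ever need the status of every stage of a pipeline rather than only the last command, bash keeps them in the PIPESTATUS array, read immediately after the pipeline:

ls -l | grep p
echo "${PIPESTATUS[@]}"   # e.g. "0 1" if ls succeeded but grep matched nothing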
Is there any variable in bash that contains the name of the .sh file executed? The line number would be great too.
I want to use it in error messages such as:
echo "ERROR: [$FILE:L$LINE] $somefile not found"
#!/bin/bash
echo $LINENO
echo `basename $0`
$LINENO for the current line number
$0 for the current file. I used basename to ensure you only get the file name and not the path.
UPDATE:
#!/bin/bash
MY_NAME=`basename $0`
function ouch {
    echo "Fail @ [${MY_NAME}:${1}]"
    exit 1
}
ouch $LINENO
You have to pass the line number as a parameter if you use the function approach; otherwise you will get the line inside the function definition.
I find the "BASH_SOURCE" and "BASH_LINENO" built-in arrays very useful:
$ cat xx
#!/bin/bash
_ERR_HDR_FMT="%.23s %s[%s]: "
_ERR_MSG_FMT="${_ERR_HDR_FMT}%s\n"
error_msg() {
    printf "$_ERR_MSG_FMT" $(date +%F.%T.%N) ${BASH_SOURCE[1]##*/} ${BASH_LINENO[0]} "${@}"
}
error_msg "here"
error_msg "and here"
Invoking xx yields
2010-06-16.15:33:13.069 xx[11]: here
2010-06-16.15:33:13.073 xx[14]: and here
You just need to
echo $LINENO
echo $(basename $0)
Here's how to do it in a reusable function. If the following is in a file named script:
#!/bin/bash
debug() {
    echo "${BASH_SOURCE[1]##*/}:${FUNCNAME[1]}[${BASH_LINENO[0]}]" > /dev/tty
}
debug
This produces the output:
script:main[5]
Which indicates the line on which debug was called.
The following will print out the filename, function, line and an optional message.
Also works in zsh for extra goodness.
# Say the file, line number and optional message for debugging
# Inspired by bash's `caller` builtin
# Thanks to https://unix.stackexchange.com/a/453153/143394
function yelp () {
    # shellcheck disable=SC2154  # undeclared zsh variables in bash
    if [[ $BASH_VERSION ]]; then
        local file=${BASH_SOURCE[1]##*/} func=${FUNCNAME[1]} line=${BASH_LINENO[0]}
    else  # zsh
        emulate -L zsh  # because we may be sourced by zsh `emulate bash -c`
        # $funcfiletrace has format: file:line
        local file=${funcfiletrace[1]%:*} line=${funcfiletrace[1]##*:}
        local func=${funcstack[2]}
        [[ $func =~ / ]] && func=source  # $func may be filename. Use bash behaviour
    fi
    echo "${file##*/}:$func:$line $*" > /dev/tty
}
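A quick usage sketch (the file name and line number in the sample output are hypothetical; they depend on where the call really sits):

build() {
    yelp "about to build"
}
build
# prints something like: myscript.sh:build:42 about to build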