Leaving sourced shell script without exiting terminal - bash

I'm writing a shell script to save some keystrokes and avoid typos. I would like to keep the script as a single file that calls internal functions, and to abort a function early when problems arise, without closing the terminal.
my_script.sh
#!/bin/bash
exit_if_no_git() {
    # if no git directory found, exit
    # ...
    exit 1
}
branch() {
    exit_if_no_git
    # some code...
}
push() {
    exit_if_no_git
    # some code...
}
feature() {
    exit_if_no_git
    # some code...
}
bug() {
    exit_if_no_git
    # some code...
}
I would like to call it via:
$ branch
$ feature
$ bug
$ ...
I know I can source the script in my .bash_profile, but when I execute one of the commands and there is no .git directory, it exits with status 1 as expected. Since the script is sourced, however, this also exits the terminal itself.
Is there an alternative way to leave the functions early that doesn't also exit the terminal?

Instead of defining a function exit_if_no_git, define one as has_git_dir:
has_git_dir() {
    local dir=${1:-$PWD}                # allow optional argument
    while [[ $dir = */* ]]; do          # while not at root...
        [[ -d $dir/.git ]] && return 0  # ...if a .git exists, return success
        dir=${dir%/*}                   # ...otherwise trim the last element
    done
    return 1                            # if nothing was found, return failure
}
...and, elsewhere:
branch() {
    has_git_dir || return
    # ...actual logic here...
}
That way the functions are short-circuited, but no shell-level exit occurs.
It's also possible to exit a file being sourced by running return at top level within it: sourcing stops at that point, so functions defined later in the file are never even defined.
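To see the short-circuiting in action, here is a small self-contained sketch: it repeats the has_git_dir definition from above, builds a throwaway tree with mktemp (the paths are illustrative), and checks both outcomes:

```shell
#!/bin/bash
# Same definition as above, repeated so this sketch runs standalone.
has_git_dir() {
    local dir=${1:-$PWD}
    while [[ $dir = */* ]]; do
        [[ -d $dir/.git ]] && return 0
        dir=${dir%/*}
    done
    return 1
}

tmp=$(mktemp -d)
mkdir -p "$tmp/repo/.git" "$tmp/repo/src" "$tmp/plain"

has_git_dir "$tmp/repo/src" && echo "inside a repo"      # .git found one level up
has_git_dir "$tmp/plain"    || echo "not inside a repo"  # nothing found up the tree

rm -rf "$tmp"
```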


Jenkins pipeline undefined variable

I'm trying to build a Jenkins Pipeline for which a parameter is
optional:
parameters {
    string(
        name: 'foo',
        defaultValue: '',
        description: 'foo is foo'
    )
}
My purpose is calling a shell script and providing foo as argument:
stages {
    stage('something') {
        sh "some-script.sh '${params.foo}'"
    }
}
The shell script will do the Right Thing™ if the provided value is the empty
string.
Unfortunately I can't just get an empty string. If the user does not provide
a value for foo, Jenkins will set it to null, and I will get null
(as string) inside my command.
I found this related question but the only answer is not really helpful.
Any suggestion?
The OP here realized that a wrapper script can be helpful. I (ironically) called it junkins-cmd and I call it like this:
stages {
    stage('something') {
        sh "junkins-cmd some-script.sh '${params.foo}'"
    }
}
Code:
#!/bin/bash
helpme() {
    cat <<EOF
Usage: $0 <command> [parameters to command]
This command is a wrapper for jenkins pipeline. It tries to overcome jenkins
idiotic behaviour when calling programs without polluting the remaining part
of the toolkit.
The given command is executed with the fixed version of the given
parameters. Current fixes:
- 'null' is replaced with ''
EOF
} >&2
trap helpme EXIT
command="${1:?Missing command}"; shift
trap - EXIT
typeset -a params
for p in "$@"; do
    # Jenkins pipeline uses 'null' when the parameter is undefined.
    [[ "$p" = 'null' ]] && p=''
    params+=("$p")
done
exec $command "${params[@]}"
Beware: params+=("$p") is not portable among shells (arrays are not part of POSIX sh): hence this ugly script runs under #!/bin/bash.
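The argument-fixing loop can be exercised on its own; fix_jenkins_nulls below is an illustrative name, not part of the original wrapper:

```shell
#!/bin/bash
# Map any argument that is the literal string 'null' to an empty string,
# mirroring the loop in the wrapper above (function name is illustrative).
fix_jenkins_nulls() {
    local -a fixed=()
    local p
    for p in "$@"; do
        [[ $p == null ]] && p=''
        fixed+=("$p")
    done
    printf '%s\n' "${fixed[@]}"
}

fix_jenkins_nulls alpha null beta   # second line of output is empty
```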

Set redirection from inside function in bash

I’d like to do something of this form:
one() {
    redirect_stderr_to '/tmp/file_one'
    # function commands
}
two() {
    redirect_stderr_to '/tmp/file_two'
    # function commands
}
one
two
This would run one and two in succession, redirecting stderr to the respective files. The working equivalent would be:
one() {
    # function commands
}
two() {
    # function commands
}
one 2> '/tmp/file_one'
two 2> '/tmp/file_two'
But that is a bit ugly. I’d rather just have all the redirection instructions inside the functions themselves. It’d be easier to manage. I have a feeling this might not be possible, but want to be sure.
The simplest and most robust approach is function-level redirection: note how each redirection below is applied to the function as a whole, after its closing }, and is scoped to that function (no need to reset anything afterwards):
# Define functions with redirected stderr streams.
one() {
    # Write something to stderr:
    echo one >&2
} 2> '/tmp/file_one'
two() {
    # Write something to stderr:
    echo two >&2
} 2> '/tmp/file_two'
one
two
# Since the function-level redirections are localized to each function,
# this will again print to the terminal.
echo "done" >&2
Documentation links (thanks, @gniourf_gniourf):
Shell Functions in the Bash reference manual
Function Definition Command in the POSIX spec
Note that this implies that the feature is POSIX-compliant, and you can use it in sh (POSIX-features-only) scripts, too.
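A small sketch confirming the scoping claim, using a throwaway file from mktemp: the redirection takes effect for the duration of each call, and the shell's own stderr is untouched afterwards:

```shell
#!/bin/bash
err_file=$(mktemp)

# The redirection after the closing brace is performed each time the
# function is called, and only for the duration of the call.
noisy() {
    echo "to the log" >&2
} 2> "$err_file"

noisy
grep -q 'to the log' "$err_file" && echo "redirected"
echo "still on the shell's stderr" >&2   # unaffected by the function's redirection
```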
You can use the exec builtin (note that the effect of exec is not undone when the function returns):
one() {
    exec 2> '/tmp/file_one'
    # function commands
}
two() {
    exec 2> '/tmp/file_two'
    # function commands
}
one # stderr redirected to /tmp/file_one
echo "hello world" >&2 # this is also redirected to /tmp/file_one
exec 2> "$(tty)" # here you are setting the default again (your terminal)
echo "hello world" >&2 # this is written in your terminal
two # stderr redirected to /tmp/file_two
Now, if you want to apply the redirection only to the function, the best approach is in mklement0's answer.
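If you do want to restore stderr after an exec redirection, a more robust alternative to exec 2> "$(tty)" (which fails when there is no controlling terminal, e.g. under cron) is to save the original stderr on a spare file descriptor first. A minimal sketch, with a throwaway log file:

```shell
#!/bin/bash
log=$(mktemp)

exec 3>&2            # save the original stderr on fd 3
exec 2> "$log"       # redirect stderr to a file
echo "captured" >&2  # goes to the file
exec 2>&3 3>&-       # restore stderr from fd 3, then close fd 3
echo "back on the original stderr" >&2
```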
You can also use a subshell inside each function:
#!/bin/bash
one() {
    (
        # function commands
    ) 2> /tmp/file_one
}
two() {
    (
        # function commands
    ) 2> /tmp/file_two
}
one
two

Only use functions of .ksh script into other script

I have a script a.sh which has :
a() {
    echo "123"
}
echo "dont"
Then I have other script b.sh which has :
b() {
    echo "345"
}
All I want to do is use a in b, but when I source a.sh I don't want it to run its top-level code, i.e. the echo "dont" line.
I just want to source it for now. The reason for sourcing is that I can then call any of the functions whenever I want to.
So I put source a.sh in b.sh, but it doesn't work.
If I do . ./a.sh in b.sh it runs everything in a.sh.
One approach which will work on any POSIX-compliant shell is this:
# put function definitions at the top
# put function definitions at the top
a() {
    echo "123"
}

# divide them with a conditional return
[ -n "$FUNCTIONS_ONLY" ] && return

# put direct code to execute below
echo "dont"
...and in your other script:
FUNCTIONS_ONLY=1 . ./a.sh
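A self-contained check of the guard, writing the library to a throwaway file: when sourced with FUNCTIONS_ONLY set, the top-level echo is skipped but the functions are still defined:

```shell
#!/bin/bash
lib=$(mktemp)
cat > "$lib" <<'EOF'
a() {
    echo "123"
}
[ -n "$FUNCTIONS_ONLY" ] && return
echo "dont"
EOF

# Sourced with the guard: "dont" is not printed, but a is callable.
out=$(FUNCTIONS_ONLY=1 . "$lib"; a)
echo "$out"     # just 123
```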
Make a library of common functions in a file called functionLib.sh like this:
#!/bin/sh
a() {
    echo Inside a, with $1
}
b() {
    echo Inside b, with $1
}
Then in script1, do this:
#!/bin/sh
. functionLib.sh # Source in the functions
a 42 # Use one
b 37 # Use another
and in another script, script2 re-use the functions:
#!/bin/sh
. functionLib.sh # Source in the functions
a 23 # Re-use one
b 24 # Re-use another
I have adopted a style in my shell scripts that allows me to design every script as a potential library, making it behave differently when it is sourced (with . .../path/script) and when it is executed directly. You can compare this to the python if __name__ == '__main__': trick.
I have not found a method that is portable across all Bourne shell descendants without explicitly referring to the script's name, but this is what I use:
a() {
    echo a
}
b() {
    echo b
}

(
    program=xyzzy
    set -u -e
    case $0 in
        *${program}) : ;;
        *) exit ;;
    esac

    # main
    a
    b
)
The rules for this method are strict:
- Start the file with nothing but function definitions: no variable assignments or any other activity.
- Then, at the very end, open a subshell: ( ... )
- The first action inside the subshell tests whether the file is being sourced. If so, exit from the subshell; if not, run the main code.
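If portability beyond bash is not a concern, a common alternative (not part of the answer above) compares BASH_SOURCE against $0: the two differ exactly when the file is being sourced. A sketch that exercises both paths against a throwaway script:

```shell
#!/bin/bash
s=$(mktemp)
cat > "$s" <<'EOF'
a() { echo a; }
b() { echo b; }
if [[ ${BASH_SOURCE[0]} != "$0" ]]; then
    :      # sourced: only define the functions
else
    a      # executed directly: run main
    b
fi
EOF

bash "$s"     # runs main: prints "a" and "b"
( . "$s" )    # sourced in a subshell: prints nothing, but a and b get defined
```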

Stop processing sourced file and continue

When writing bash that sources another file, sometimes I want to skip processing if some conditions are true. For now, I've been either:
Wrapping the entire sub file in nested if statements
Wrapping the call to the source file in nested if statements
Both of these strategies have some drawbacks. It'd be so much better if I could write my scripts with this code style:
main.sh
echo "before"
. other
echo "after"
other.sh
# guard
if true; then
    # !! return to main somehow
fi
if true; then
    # !! return to main somehow
fi
# commands
echo "should never get here"
Is there a way to do this so that the echo "after" line in main gets called?
Yes, you could return:
if true; then
    return
fi
Quoting help return:
return: return [n]
    Return from a shell function.

    Causes a function or sourced script to exit with the return value
    specified by N. If N is omitted, the return status is that of the
    last command executed within the function or script.

    Exit Status:
    Returns N, or failure if the shell is not executing a function or script.
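Putting the question's layout together as a runnable sketch (the sourced file is created with mktemp):

```shell
#!/bin/bash
other=$(mktemp)
cat > "$other" <<'EOF'
# guard
if true; then
    return      # stops sourcing here; control returns to main
fi
echo "should never get here"
EOF

echo "before"
. "$other"
echo "after"    # this line is reached
```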

Remove temporary files at end of bourne shell script

I've tried to use trap to remove a temporary file at the end of a Bourne shell script, but this doesn't work:
trap "trap \"rm \\\"$out\\\"\" EXIT INT TERM" 0
This is inside a function, by the way, hence the attempt at a nested trap.
How do I do it?
You can only have one trap set for each signal. If different parts of your script need to perform different cleanup actions, you’ll have to create lists of cleanup actions. Then set a single trap handler that performs all the required cleanup actions.
Here’s an example:
set -xv
PROG="$(basename -- "${0}")"
# set up your trap handler
TEMP_FILES=()

trap_handler() {
    for F in "${TEMP_FILES[@]}"; do
        rm -f "${F}"
    done
}
trap trap_handler 0 1 2 3 15

something_that_uses_temp_files() {
    mytemp="$(mktemp -t "${PROG}")"
    TEMP_FILES+=("${mytemp}")
    date > "${mytemp}"
    # ...
}
# ...
something_that_uses_temp_files
# ...
There’s a single trap handler, but you can register cleanup actions from anywhere in the script by appending to the TEMP_FILES array. The cleanup actions can be registered from inside functions too.
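A quick check that the single handler really does clean up every registered file on exit, run in a child bash so the trap on 0 (EXIT) fires:

```shell
#!/bin/bash
f1=$(mktemp); f2=$(mktemp)

# The child script registers both files and exits normally; its EXIT trap
# removes everything accumulated in TEMP_FILES.
bash -c '
    TEMP_FILES=()
    trap_handler() {
        for F in "${TEMP_FILES[@]}"; do
            rm -f "${F}"
        done
    }
    trap trap_handler 0
    TEMP_FILES+=("$1" "$2")
' _ "$f1" "$f2"

[ -e "$f1" ] || echo "first file removed"
[ -e "$f2" ] || echo "second file removed"
```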
If you’re not using a shell with arrays, the basic idea is the same, but the implementation details will be a little bit different. For example, you could store the list as a colon-separated string and iterate through its elements in the trap handler with the ${parameter%%word} and ${parameter#word} expansions, which are available in every POSIX shell:
#!/bin/sh
set -xv
PROG="$(basename -- "${0}")"
# set up your trap handler
TEMP_FILES=""

trap_handler() {
    while [ -n "${TEMP_FILES}" ]; do
        CUR_FILE="${TEMP_FILES%%:*}"
        TEMP_FILES="${TEMP_FILES#*:}"
        if [ "${CUR_FILE}" = "${TEMP_FILES}" ]; then
            # there were no colons -- CUR_FILE is the last file to process
            TEMP_FILES=""
        fi
        if [ -n "${CUR_FILE}" ]; then
            rm -f "${CUR_FILE}"
        fi
    done
}
trap trap_handler 0 1 2 3 15
something_that_uses_temp_files() {
    mytemp="$(mktemp -t "${PROG}")"
    TEMP_FILES="${TEMP_FILES}:${mytemp}"
    date > "${mytemp}"
    # ...
}
# ...
something_that_uses_temp_files
something_that_uses_temp_files
# ...
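The colon-splitting idiom from the handler, isolated into a tiny helper (split_colons is an illustrative name) that prints one element per line using only POSIX parameter expansion:

```shell
#!/bin/sh
# Walk a colon-separated list without arrays, exactly as the trap handler
# above does: peel off the head with ${list%%:*} and the tail with
# ${list#*:} until nothing is left.
split_colons() {
    list=$1
    while [ -n "$list" ]; do
        cur="${list%%:*}"
        rest="${list#*:}"
        if [ "$cur" = "$rest" ]; then
            rest=""     # no colon left: cur was the last element
        fi
        list="$rest"
        echo "$cur"
    done
}

split_colons "alpha:beta:gamma"
```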
