I'm writing a script where I want to dynamically create an environment variable name, and check if it has been set.
#!/bin/bash
#######################################
# Builds options string
# Globals:
# JVM_OPTS_DIR
# Arguments:
# 1. Options env variable name
# 2. File containing options defaults
# Returns:
# Options
#######################################
function buildOpts() {
declare -n opts=$1
declare -n excludeOpts="EXCLUDE_$1"
local -r optsFile=$2
local x=
if [ -z ${excludeOpts+x} ]; then
while read -r o; do
if [ -n "${o// }" ]; then
x+=" $o"
fi
done <"$JVM_OPTS_DIR/$optsFile"
fi
if [ -n "$opts" ]; then
for o in $opts; do
x+=" $o"
done
fi
printf '%s' "$x"
}
# https://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html
# Standard options
stdOpts=$(buildOpts STD_OPTS std.opts)
...
javaCmd="java $stdOpts $nonStdOpts $advRtOpts $advCompOpts $advServOpts $advGcOpts -jar $APP_DIR/app.jar"
printf '%s' "$javaCmd"
# eval $javaCmd "$@"
The above script serves as a Docker entrypoint:
docker build -t jdk . && docker run --rm -it jdk -e APP_NAME=test -e APP_LOG_DIR=test -e APP_DIR=test -e STD_OPTS='a b' -e EXCLUDE_STD_OPTS=true
However, I don't see a and b included in the javaCmd, nor do I see the EXCLUDE working. Basically, none of the if conditions in buildOpts behave as expected.
I'm a backend programmer, and not a Bash wizard. Help.
You need to put the -e flags before the image name.
The syntax of docker run is:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
Anything after the image name will be treated as [COMMAND] [ARG...].
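For example, the run command from the question works once the -e flags are moved in front of the image name:
docker build -t jdk . && docker run --rm -it \
  -e APP_NAME=test -e APP_LOG_DIR=test -e APP_DIR=test \
  -e STD_OPTS='a b' -e EXCLUDE_STD_OPTS=true \
  jdk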
I want to write some wrappers around the sha1sum function in bash. From the manpage:
SHA1SUM(1) User Commands SHA1SUM(1)
NAME
sha1sum - compute and check SHA1 message digest
SYNOPSIS
sha1sum [OPTION]... [FILE]...
DESCRIPTION
Print or check SHA1 (160-bit) checksums.
With no FILE, or when FILE is -, read standard input.
How can I set up my wrapper so that it works in the same way? I.e.:
my_wrapper(){
# some code here
}
that could work both as:
my_wrapper PATH_TO_FILE
and
echo -n "blabla" | my_wrapper
I think this is somehow related to Redirect standard input dynamically in a bash script, but I'm not sure how to do it 'nicely'.
Edit 1
I program in a rather defensive way, so I use this throughout my script:
# exit if a command fails
set -o errexit
# make sure to show the error code of the first failing command
set -o pipefail
# do not overwrite files too easily
set -o noclobber
# exit if try to use undefined variable
set -o nounset
Anything that works with that?
You can use this simple wrapper:
args=("$@") # save arguments into an array
set -o noclobber -o nounset -o pipefail -o errexit
set -- "${args[@]}" # restore positional arguments from the array
my_wrapper() {
    [[ -f ${1-} ]] && sha1sum "$1" || sha1sum
}
my_wrapper "$@"
Note that you can use:
my_wrapper PATH_TO_FILE
or:
echo -n "blabla" | my_wrapper
This code works for me; put it in a file named wrapper:
#!/bin/bash
my_wrapper(){
if [[ -z "$1" ]];then
read PARAM
else
PARAM="$1"
fi
echo "PARAM:$PARAM"
}
Load the function into your environment:
. ./wrapper
Test the function with piped input:
root@51ce582167d0:~# echo hello | my_wrapper
PARAM:hello
Test the function with a parameter:
root@51ce582167d0:~# my_wrapper bybye
PARAM:bybye
OK, so the answers posted here are often fine, but in my case, with the defensive programming options:
# exit if a command fails
set -o errexit
# exit if try to use undefined variable
set -o nounset
things do not work so well. So I am now using something like this:
digest_function(){
    # argument is either a filename or read from std input,
    # similar to the sha*sum functions
    if [[ "$#" = "1" ]]
    then
        # this needs to be a file that exists
        if [ ! -f "$1" ]
        then
            echo "File not found! Aborting..."
            exit 1
        else
            local ARGTYPE="Filename"
            local PARAM="$1"
        fi
    else
        local ARGTYPE="StdInput"
        local PARAM
        PARAM=$(cat)
    fi
    local DIGEST
    if [[ "${ARGTYPE}" = "Filename" ]]
    then
        DIGEST=$(sha1sum "${PARAM}")
    else
        DIGEST=$(echo -n "${PARAM}" | sha1sum)
    fi
    # print the result so callers can capture it
    echo "${DIGEST}"
}
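With the digest printed at the end, the function can be used the same way as sha1sum itself (the file name below is only a placeholder):
digest_function /path/to/some/file       # digest of a file
echo -n "blabla" | digest_function       # digest of standard input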
I am missing my bash aliases in fish, and don't want to manually convert all of them to fish functions.
How can I get them all from bash into fish?
Bonus points if:
the solution supports an iterative process, as in: I can easily change the aliases in bash and re-convert/re-import them into fish
the solution also imports bash functions
Convert bash aliases to bash scripts
I decided to do this instead of the approach below, putting the scripts into ~/bin/, which is in my PATH. This allows me to use them from whatever shell I am currently running, and it prevents potential problems with quoting.
Use it like so:
# converts all bash aliases to script files
convert_bash_aliases_to_scripts
# removes all scripts previously converted by this script
convert_bash_aliases_to_scripts clean
bash conversion script:
#!/bin/bash
# Convert bash aliases to bash scripts.
#
# Copyright 2018 <hoijui.quaero@gmail.com>, licensed under the GPL-3.0+
#
# Usage:
# convert_bash_aliases_to_scripts # converts all bash aliases to script files
# convert_bash_aliases_to_scripts clean # removes all scripts previously converted by this script
COLOR_RED=$'\e[0;31m'
COLOR_ORANGE=$'\e[0;33m'
COLOR_BLUE=$'\e[0;34m'
COLOR_BLUE_LIGHT=$'\e[1;34m'
COLOR_GREEN=$'\e[0;32m'
COLOR_BROWN=$'\e[0;33m'
COLOR_YELLOW=$'\e[1;33m'
COLOR_WHITE=$'\e[1;37m'
COLOR_CYAN=$'\e[0;36m'
COLOR_PURPLE=$'\e[0;35m'
COLOR_GRAY=$'\e[1;30m'
COLOR_GRAY_LIGHT=$'\e[0;37m'
COLOR_NONE=$'\e[m' # No Color
OUTPUT_DIR=~/bin/converted/aliases
LINKS_DIR=~/bin
README_FILE_NAME="README.md"
README_FILE="$OUTPUT_DIR/$README_FILE_NAME"
if [ "$1" = "clean" ]
then
for script_file in $(find "$LINKS_DIR" -maxdepth 1 -type l)
do
conv_script_file="$OUTPUT_DIR/$(basename $script_file)"
if [ -e $conv_script_file ] && [ "$(readlink --canonicalize $script_file)" = "$(realpath $conv_script_file)" ]
then
script_name=$(basename $script_file)
echo "removing converted bash alias-script: $script_name"
rm $conv_script_file \
&& rm $script_file
fi
done
rm $README_FILE 2> /dev/null
rmdir $OUTPUT_DIR 2> /dev/null
exit 0
fi
SOURCE_FILES="${HOME}/.bashrc ${HOME}/.bash_aliases"
mkdir -p $OUTPUT_DIR
echo -e "# Bash alias conversion scripts\n\nsee $0\n\nWARNING: Do NOT manually edit files in this directory. instead, copy them to $LINKS_DIR (replacing the symbolic link that already exists there), and edit that new file.\nIf you edit the files in this dir, it will be replaced on the next (re)conversion from aliases." \
> $README_FILE
AUTO_IMPORT_WARNING="# WARNING Do NOT edit this file by hand, as it was auto-generated from a bash alias, and may be overwritten in the future. please read ${README_FILE}"
function _is_link_to {
local file_link=$1
local file_target=$2
test -e $file_target \
&& test "$(readlink --canonicalize $file_link)" = "$(realpath $file_target)"
return $?
}
for source_file in $SOURCE_FILES
do
IFS=$'\n'
for a in $(cat $source_file | grep "^alias")
do
a_name="$(echo "$a" | sed -e 's/alias \([^=]*\)=.*/\1/')"
a_command="$(echo "$a" | sed -e 's/alias \([^=]*\)=//' -e 's/[ \t]*#.*$//')"
if echo "${a_command:0:1}" | grep -q -e "[\'\"]"
then
# unquote
a_command_start=1
let a_command_end="${#a_command} - 2"
else
# leave as is
a_command_start=0
let a_command_end="${#a_command}"
fi
script_file="$LINKS_DIR/$a_name"
conv_script_file="$OUTPUT_DIR/$a_name"
# Check whether the script already exists.
# If so, we skip importing it, unless it is just a link to a previously imported script.
log_action="none"
log_action_color="${COLOR_NONE}"
log_content=""
if [ -e $script_file ] && ! _is_link_to $script_file $conv_script_file
then
log_action="skipped (exists)"
log_action_color="${COLOR_ORANGE}"
log_content=""
else
if [ -e $script_file ]
then
log_action="reimporting"
log_action_color="${COLOR_BLUE}"
else
log_action="importing"
log_action_color="${COLOR_GREEN}"
fi
# write the script file to a temporary location
conv_script_file_tmp="${conv_script_file}_BAK"
echo "#!/bin/bash" > $conv_script_file_tmp
echo -e "$AUTO_IMPORT_WARNING" >> $conv_script_file_tmp
echo -e "#\n# Imported bash alias '$a_name' from file '$source_file'" >> $conv_script_file_tmp
cat >> "${conv_script_file_tmp}" <<EOF
${a_command:${a_command_start}:${a_command_end}} \${@}
EOF
if diff -N ${conv_script_file_tmp} ${conv_script_file} > /dev/null
then
log_content="no change"
log_content_color="${COLOR_NONE}"
else
log_content="changed"
log_content_color="${COLOR_GREEN}"
fi
log_content=$(printf "%s %10s -> %s${COLOR_NONE}" "${log_content_color}" "${log_content}" "$a_command")
mv "${conv_script_file_tmp}" "${conv_script_file}"
# make the script executable
chmod +x $conv_script_file
# remove the link if it already exists (in case of reimport)
rm $script_file 2> /dev/null
# .. and re-create it as local symbolic link
# to the function in the imports dir
ln --symbolic --relative $conv_script_file $script_file
fi
printf "%s%20s: %-25s${COLOR_NONE}%s\n" "${log_action_color}" "${log_action}" "$a_name" "${log_content}"
done
done
Deprecated: Creating fish wrappers that execute bash code
Below is a script that creates fish wrappers for the local bash aliases: for each bash alias, it takes the contents and creates a fish alias/script that executes the code in a bash sub-shell.
It is not optimal, but is sufficient for most of my aliases.
WARNING: An imported function might act differently than it does in bash. You might lose data or accidentally DDoS your coworkers when using them.
Use it like so:
# imports (or reimports) all bash aliases into fish functions, permanently
import_bash_aliases
# removes all fish functions previously imported by this script
import_bash_aliases clean
Save this in ~/.config/fish/functions/import_bash_aliases.fish:
#!/usr/bin/fish
# Fish function to import bash aliases
#
# Copyright 2018 <hoijui.quaero@gmail.com>, licensed under the GPL-3.0+
#
# This script is based on a script from Malte Biermann,
# see: https://glot.io/snippets/efh1c4aec0
#
# WARNING: There is no guarantee that the imported aliases work the same way
# as they do in bash, so be cautious!
#
# Usage:
# import_bash_aliases # imports (or reimports) all bash aliases into fish functions, permanently
# import_bash_aliases clean # removes all fish functions previously imported by this script from bash aliases
function import_bash_aliases --description 'Converts bash aliases to .fish functions.\nThis might be called repeatedly, and will not override functions that are already defined in fish, unless they are merely an older import from this script.'
set -l FISH_FUNCTIONS_DIR ~/.config/fish/functions
set -l BASH_IMPORTS_DIR_NAME bash-imports
set -l BASH_IMPORTS_DIR $FISH_FUNCTIONS_DIR/$BASH_IMPORTS_DIR_NAME
set -l README_FILE $BASH_IMPORTS_DIR/README.md
if test "$argv[1]" = "clean"
for fun_file in (find $FISH_FUNCTIONS_DIR -maxdepth 1 -name '*.fish')
set -l imp_fun_file $BASH_IMPORTS_DIR/(basename $fun_file)
if test -e $imp_fun_file ; and test (readlink --canonicalize $fun_file) = (realpath $imp_fun_file)
set -l fun_name (basename $fun_file '.fish')
echo "removing imported bash alias/function $fun_name"
rm $imp_fun_file
and rm $fun_file
and functions --erase $fun_name
end
end
rm $README_FILE ^ /dev/null
rmdir $BASH_IMPORTS_DIR ^ /dev/null
return 0
end
set -l SOURCE_FILES ~/.bashrc ~/.bash_aliases
mkdir -p $BASH_IMPORTS_DIR
echo -e "# Bash alias imports\n\nsee `$argv[0]`\n\nWARNING: Do NOT manually edit files in this directory. instead, copy them to $FISH_FUNCTIONS_DIR (replacing the symbolic link that already exists there), and edit that new file.\nIf you edit the files in this dir, it will be replaced on the next (re)import from bash aliases." \
> $README_FILE
set -l UNUSED_STUB_MSG "The bash alias corresponding to this function was NOT imported, because a corresponding function already exists at %s\n"
set -l AUTO_IMPORT_WARNING "# WARNING Do NOT edit this file by hand, as it was auto-generated from a bash alias, and may be overwritten in the future. please read {$README_FILE}"
function _fish_func_exists
set -l fun_name $argv[1]
# This also detects in-memory functions
functions --query $fun_name
# This also detects script files in the functions dir
# that do not contain a function with the same name
or test -e "$FISH_FUNCTIONS_DIR/$fun_name.fish"
return $status
end
function _is_link_to
set -l file_link $argv[1]
set -l file_target $argv[2]
test -e $file_target
and test (readlink --canonicalize $file_link) = (realpath $file_target)
return $status
end
for source_file in $SOURCE_FILES
for a in (cat $source_file | grep "^alias")
set -l a_name (echo $a | sed -e 's/alias \([^=]*\)=.*/\1/')
set -l a_command (echo $a | sed -e 's/alias \([^=]*\)=//' -e 's/[ \t]*#[^\'\"]\+$//')
set -l fun_file "$FISH_FUNCTIONS_DIR/$a_name.fish"
set -l imp_fun_file "$BASH_IMPORTS_DIR/$a_name.fish"
# Check whether the function already exists.
# If so, we skip importing it, unless it is just a link to a previously imported function.
if _fish_func_exists $a_name; and not _is_link_to $fun_file $imp_fun_file
set_color red
printf "%20s: %-25s\n" "skipping (exists)" $a_name
set_color normal
#printf $UNUSED_STUB_MSG $fun_file > $imp_fun_file
else
set_color green
printf "%20s: %-25s -> %s\n" "(re-)importing" $a_name $a_command
set_color normal
# remove the link, in case of re-importing
rm $fun_file ^ /dev/null
# write the function file
echo "#!/usr/bin/fish" > $imp_fun_file
echo "\
$AUTO_IMPORT_WARNING
function $a_name -d 'bash alias "$a_name" import'
bash -c $a_command' '\$argv''
end
" \
>> $imp_fun_file
# make the script executable
chmod +x $imp_fun_file
# .. and re-create it as local symbolic link
# to the function in the imports dir
ln --symbolic --relative $imp_fun_file $fun_file
end
end
end
# (re-)load all the functions we just defined
exec fish
end
I have a script on my local machine but need to run it on a remote machine without copying it over there (i.e., I can't sftp it over and just run it there).
I currently have the following functioning command
echo 'cd /place/to/execute' | cat - test.sh | ssh -T user@hostname
However, I also need to provide a commandline argument to test.sh.
I tried just adding it after the .sh, like I would for local execution, but that didn't work:
echo 'cd /place/to/execute' | cat - test.sh "arg" | ssh -T user@hostname
"cat: arg: No such file or directory" is the resulting error
You need to override the arguments:
echo 'set -- arg; cd /place/to/execute' | cat - test.sh | ssh -T user@hostname
The above will set the first argument to arg.
Generally:
set -- arg1 arg2 arg3
will overwrite the $1, $2, $3 in bash.
This will basically make the result of cat - test.sh a standalone script that doesn't need any arguments.
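If you need several arguments, or arguments containing spaces, quote them inside the set -- command (a sketch reusing the placeholder paths from the question):
echo 'set -- "first arg" second; cd /place/to/execute' | cat - test.sh | ssh -T user@hostname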
It depends on the complexity of the script that you have. You might want to rewrite it to use rpcsh functionality to remotely execute shell functions from your script.
Using https://gist.github.com/Shadowfen/2b510e51da6915adedfb saved into /usr/local/include/rpcsh.inc (for example), you could have a script like this:
#!/bin/sh
source /usr/local/include/rpcsh.inc
MASTER_ARG=""
function ahelper() {
# used by doremotely just to show that we can
echo "master arg $1 was passed in"
}
function doremotely() {
# this executes on the remote host
ahelper $MASTER_ARG > ~/sample_rpcsh.txt
}
# main
MASTER_ARG="newvalue"
# send the function(s) and variable to the remote host and then execute it
rpcsh -u user -h host -f "ahelper doremotely" -v MASTER_ARG -r doremotely
This will give you a ~/sample_rpcsh.txt file on the remote host that contains
master arg newvalue was passed in
Copy of rpcsh.inc (in case the link goes bad):
#!/bin/sh
# create an inclusion guard (to prevent multiple inclusion)
if [ ! -z "${RPCSH_GUARD+xxx}" ]; then
# already sourced
return 0
fi
RPCSH_GUARD=0
# rpcsh -- Runs a function on a remote host
# This function pushes out a given set of variables and functions to
# another host via ssh, then runs a given function with optional arguments.
# Usage:
# rpcsh -h remote_host -u remote_login -v "variable list" \
# -f "function list" -r mainfunc [-- param1 [param2]* ]
#
# The "function list" is a list of shell functions to push to the remote host
# (including the main function to run, and any functions that it calls).
#
# Use the "variable list" to send a group of variables to the remote host.
#
# Finally "mainfunc" is the name of the function (from "function list")
# to execute on the remote side. Any additional parameters specified (after
# the --) gets passed along to mainfunc.
#
# You may specify multiple -v "variable list" and -f "function list" options.
#
# Requires that you setup passwordless access to the remote system for the script
# that will be running this.
rpcsh() {
if ! args=("$(getopt -l "host:,user:,pushvars:,pushfuncs:,run:" -o "h:u:v:f:r:A" -- "$@")")
then
echo getopt failed
logger -t ngp "rpcsh: getopt failed"
exit 1
fi
sshvars=( -q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null )
eval set -- "${args[@]}"
pushvars=""
pushfuncs=""
while [ -n "$1" ]
do
case $1 in
-h|--host) host=$2;
shift; shift;;
-u|--user) user=$2;
shift; shift;;
-v|--pushvars) pushvars="$pushvars $2";
shift; shift;;
-f|--pushfuncs) pushfuncs="$pushfuncs $2";
shift; shift;;
-r|--run) run=$2;
shift; shift;;
-A) sshvars=( "${sshvars[@]}" -A );
shift;;
-i) sshvars=( "${sshvars[@]}" -i $2 );
shift; shift;;
--) shift; break;;
esac
done
remote_args=( "$@" )
vars=$([ -z "$pushvars" ] || declare -p $pushvars 2>/dev/null)
ssh ${sshvars[@]} ${user}@${host} "
#set -x
$(declare -p remote_args )
$vars
$(declare -f $pushfuncs )
$run ${remote_args[@]}
"
}
During the configuration of a Symfony 2 project, it is required to set appropriate privileges on the cache and log directories.
The documentation describes two ways to do it. One of them calls the setfacl command with the -m flag. However, not every version supports this flag. Is it possible to check whether this command, or any other command, supports a given flag?
For example, with the following pseudocode:
if [ checkmods --command=setfacl --modificator=-m ]
setfacl -m ....
else
chmod ...
You can parse the usage information by running setfacl --help and check whether it contains the flag. For example:
if setfacl --help | grep -q -- -m,
then
echo "setfacl -m supported"
else
echo "setfacl -m not supported"
fi
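If you want something closer to the pseudocode from the question, you could wrap the check in a small helper (a sketch; the name supports_flag is made up, and grepping --help output is only a heuristic, since help formats differ between tools and versions):
supports_flag() {
    # usage: supports_flag COMMAND FLAG, e.g. supports_flag setfacl -m
    "$1" --help 2>&1 | grep -q -- "$2"
}

if supports_flag setfacl -m
then
    echo "setfacl -m supported"
else
    echo "setfacl -m not supported"
fi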
If you want to do it for any command which has the --help option, take a look at the _parse_help function available in your bash-completion file.
http://anonscm.debian.org/gitweb/?p=bash-completion/bash-completion.git;a=blob;f=bash_completion
# Parse GNU style help output of the given command.
# @param $1 command; if "-", read from stdin and ignore rest of args
# @param $2 command options (default: --help)
#
_parse_help()
{
eval local cmd=$( quote "$1" )
local line
{ case $cmd in
-) cat ;;
*) LC_ALL=C "$( dequote "$cmd" )" ${2:---help} 2>&1 ;;
esac } \
| while read -r line; do
[[ $line == *([ $'\t'])-* ]] || continue
# transform "-f FOO, --foo=FOO" to "-f , --foo=FOO" etc
while [[ $line =~ \
((^|[^-])-[A-Za-z0-9?][[:space:]]+)\[?[A-Z0-9]+\]? ]]; do
line=${line/"${BASH_REMATCH[0]}"/"${BASH_REMATCH[1]}"}
done
__parse_options "${line// or /, }"
done
}
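For instance, assuming a bash-completion version that still provides _parse_help and the usual Debian/Ubuntu install path (both assumptions), the check could look roughly like this:
# source the bash-completion library to get _parse_help; path varies by distribution
. /usr/share/bash-completion/bash_completion
if _parse_help setfacl | grep -qx -- '-m'
then
    echo "setfacl -m supported"
fi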
Sometimes I execute a command and only realize afterwards that I passed the wrong parameter to it (like a restart of a Heroku application). I'd like to modify bash in such a way that if it sees a command containing a certain string, it will prompt me whether I'm sure or not. For example (imagine the string is tempus):
$ heroku restart --app tempus
Now, I'd like bash to show a Y/N prompt, and only execute the command if I type y. If I type n, the command should not be executed. How could I handle this?
I don't know of any way to intercept all bash commands, but you can intercept predetermined commands using the following trick.
Create a directory (say ~/interception) and set it as the first entry in $PATH
Create the following script in that directory with a list of commands you wish to intercept and the full path to the actual command
[bash]$ cat intercept.sh
#!/bin/bash
# map commands to full path
declare -A COMMANDS
COMMANDS[heroku]=/usr/bin/heroku
COMMANDS[grep]=/bin/grep
# ... more ...
CMD=$(basename $0) # command used to call this script
if [[ ! -z "${COMMANDS[$CMD]+x}" ]]; then # mapping found
# Do what you wish here. You can even modify/inspect the params.
echo "intercepted $CMD command... "
"${COMMANDS[$CMD]}" "$@" # run actual command with all params
else
echo "Unknown command $CMD"
fi
In the same directory, create symlinks to that script using the name of the commands you wish to intercept
[bash]$ ln -s intercept.sh grep
[bash]$ ln -s intercept.sh heroku
Now, each time you call the command, that script is invoked via the symlink and it can then do your bidding before calling the actual command.
You can extend this further by sourcing $COMMANDS from a config file and creating helper commands to augment the config file and create/remove the symlinks. You would then be able to manage the whole setup using commands such as:
intercept_add `which heroku`
intercept_remove heroku
intercept_list
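A rough sketch of those helpers, assuming the mapping is moved out of intercept.sh into a config file such as ~/interception/commands.conf that intercept.sh sources (all of these names are made up for illustration):
INTERCEPT_DIR=~/interception
CONF="$INTERCEPT_DIR/commands.conf"

intercept_add() {                 # e.g. intercept_add /usr/bin/heroku
    local target=$1 name
    name=$(basename "$target")
    # record the mapping and create the symlink to the dispatcher
    echo "COMMANDS[$name]=$target" >> "$CONF"
    ln -s "$INTERCEPT_DIR/intercept.sh" "$INTERCEPT_DIR/$name"
}

intercept_remove() {              # e.g. intercept_remove heroku
    sed -i "/^COMMANDS\[$1\]=/d" "$CONF"
    rm -f "$INTERCEPT_DIR/$1"
}

intercept_list() {
    # print the intercepted command names from the config file
    sed -n 's/^COMMANDS\[\([^]]*\)\]=.*/\1/p' "$CONF"
}
In intercept.sh, the hard-coded assignments would then be replaced by declare -A COMMANDS followed by source "$CONF".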
Because bash itself doesn't support command-line filters, it's not possible to intercept commands directly.
Here is a dirty solution:
Find all executables in PATH and create wrapper functions for each of them.
Each wrapper function then calls the prefilter() function if it is declared.
If prefilter() fails, the command is canceled.
SOURCE: cmd-wrap.sh
#!/bin/bash
# The shebang is only useful for debugging. Don't execute this script; source it instead.
function create_wrapper() {
local exe="$1"
local name="${exe##*/}"
# Only create wrappers for non-builtin commands
[ `type -t "$name"` = 'file' ] || return
# echo "Create command wrapper for $exe"
eval "
function $name() {\
if [ \"\$(type -t prefilter)\" = 'function' ]; then \
prefilter \"$name\" \"\$@\" || return; \
fi; \
$exe \"\$@\";
}"
}
# It's also possible to add pre/post hookers by install
# [ `type -t \"$name-pre\"` = 'function' ] && \"$name-pre\" \"\$@\"
# into the dynamic generated function body.
function _create_wrappers() {
local paths="$PATH"
local path
local f n
while [ -n "$paths" ]; do
path="${paths%%:*}"
if [ "$path" = "$paths" ]; then
paths=
else
paths="${paths#*:}"
fi
# For each path element:
for f in "$path"/*; do
if [ -x "$f" ]; then
# Don't create wrapper for strange command names.
n="${f##*/}"
[ -n "${n//[a-zA-Z_-]/}" ] || create_wrapper "$f"
fi
done
done
unset _create_wrappers # Remove the installer.
unset create_wrapper # Remove the helper fn, which isn't used anymore.
}
_create_wrappers
To utilize it for your problem:
source it in bash:
. ./cmd-wrap.sh
Create your version of prefilter() to check if any argument contains the string:
function prefilter() {
local a y
for a in "$@"; do
if [ "$a" != "${a/tempus}" ]; then
echo -n "WARNING: The command contains tempus. Continue?"
read y
[ "$y" = 'Y' ] || [ "$y" = 'y' ]
return $?
fi
done
return 0
}
Run
heroku restart --app tempus
but not
/usr/bin/heroku restart --app tempus
to make use of the wrapper function.
The easiest way is to use aliases. This simple example should work for you:
This protects you from executing the heroku command with tempus in the arguments:
function protect_heroku {
# use grep to determine if the bad string is in arguments
echo "$*" | grep tempus > /dev/null
# if string is not in arguments
if [ $? != 0 ]; then
# run the protected command using its full path, so as not to trigger alias
/path/to/heroku "$@"
else
# get user confirmation
echo -n "Are you sure (y/n)? "
read CONFIRM
if [ "$CONFIRM" = y ]; then
# run the protected command using its full path
/path/to/heroku "$@"
fi
fi
}
# This is the key:
# This alias command means that 'heroku' from now refers
# to the function protect_heroku, rather than /bin/heroku
alias heroku=protect_heroku
Put this code into your bash profile ~/.profile and then log out and log back in. From now on, bash will protect you from accidentally running heroku with tempus.
The simplest way is to replace heroku with a script that does the checking before executing the real heroku. Another way would be to add a bash alias for heroku.
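A minimal sketch of that replacement-script approach, assuming the real binary lives at /usr/bin/heroku and the wrapper is placed earlier in PATH (both paths are assumptions):
#!/bin/bash
# Wrapper named heroku, placed ahead of the real binary in PATH.
if [[ "$*" == *tempus* ]]; then
    read -r -p "Command mentions tempus. Are you sure (y/n)? " answer
    [[ "$answer" == y ]] || exit 1
fi
exec /usr/bin/heroku "$@"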