I'm trying to customize my bash prompt and I'm having trouble with a few conditionals.
My current PS1 looks like this.
export PS1="\
$PS1USERCOLOR\u\
$COLOR_WHITE@\
$COLOR_GREEN\h\
$COLOR_WHITE:\
$COLOR_YELLOW\W\
\`if type parse_git_branch > /dev/null 2>&1; then parse_git_branch; fi\`\
\`if [ \$? = 0 ]; then echo -e '$COLOR_WHITE'; else echo -e '$COLOR_RED'; fi\`\$\
$COLOR_WHITE"
The first 6 lines just set regular PS1 stuff.
Line 7 then calls a function to display the current git branch and status if applicable.
Line 8 then tests the return code of the previous command and changes the colour of the $ on the end.
Line 9 sets the prompt back to white ready for the user's command.
However line 8 is responding to the return code from line 7's function and not the previous command as I first expected.
I've tried moving line 8 before line 7 and everything works as it should, but I don't want line 8 before line 7; the $ must be on the end.
I've tried setting a variable earlier on to the value of $? and then testing that variable, like so:
export PS1="\
\`RETURN=\$?\`\
$PS1USERCOLOR\u\
$COLOR_WHITE@\
$COLOR_GREEN\h\
$COLOR_WHITE:\
$COLOR_YELLOW\W\
\`if type parse_git_branch > /dev/null 2>&1; then parse_git_branch; fi\`\
\`if [ \$RETURN = 0 ]; then echo -e '$COLOR_WHITE'; else echo -e '$COLOR_RED'; fi\`\$\
$COLOR_WHITE"
But this doesn't work.
Does anybody have any idea how to solve my problem?
The proper way is to use PROMPT_COMMAND like so:
prompt_cmd () {
    LAST_STATUS=$?
    PS1="$PS1USERCOLOR\u"
    PS1+="$COLOR_WHITE@"
    PS1+="$COLOR_GREEN\h"
    PS1+="$COLOR_WHITE:"
    PS1+="$COLOR_YELLOW\W"
    if type parse_git_branch > /dev/null 2>&1; then
        PS1+=$(parse_git_branch)
    fi
    if [[ $LAST_STATUS = 0 ]]; then
        PS1+="$COLOR_WHITE"
    else
        PS1+="$COLOR_RED"
    fi
    PS1+='\$'
    PS1+="$COLOR_WHITE"
}
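Then wire it up so bash actually calls the function before each prompt:
PROMPT_COMMAND=prompt_cmd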
Since PROMPT_COMMAND is evaluated prior to each prompt, you simply execute code that sets PS1 the way you like for each prompt instance, rather than trying to embed deferred logic in the string itself.
A couple of notes:
You must save $? in the first line of the code, before the value you want is overwritten.
I use double quotes for most of the steps, except for \$; you could use PS1+="\\\$" if you like.
The standard solution to this problem is to make use of the bash environment variable PROMPT_COMMAND. If you set this variable to the name of a shell function, said function will be executed before each bash prompt is shown. Then, inside said function, you can set up whatever variables you want. Here's how I do almost exactly what you're looking for in my .bashrc:
titlebar_str='\[\e]0;\u@\h: \w\a\]'
time_str='\[\e[0;36m\]\t'
host_str='\[\e[1;32m\]\h'
cwd_str='\[\e[0;33m\]$MYDIR'
git_str='\[\e[1;37m\]`/usr/bin/git branch --no-color 2> /dev/null | /bin/grep -m 1 ^\* | /bin/sed -e "s/\* \(.*\)/ [\1]/"`\[\e[0m\]'
dolr_str='\[\e[0;`[ $lastStatus -eq 0 ] && echo 32 || echo 31`m\]\$ \[\e[0m\]'
export PS1="$titlebar_str$time_str $host_str $cwd_str$git_str$dolr_str"
function prompt_func {
    # Capture the exit status currently in existence so we don't overwrite it with
    # any operations performed here.
    lastStatus=$?
    # ... run some other commands (which will have their own return codes) to set MYDIR
}
export PROMPT_COMMAND=prompt_func
Now bash will run prompt_func before displaying each new prompt. The exit status of the preceding command is captured in lastStatus. Because git_str, dolr_str, etc. are defined with single quotes, the variables (including lastStatus) and commands inside them are then re-evaluated when bash dereferences PS1.
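As a tiny illustration of that quoting/timing difference (the two variable names here are made up for the example):
now_str="$(date +%T)"    # expanded once, when the assignment runs
later_str='$(date +%T)'  # kept literal, re-expanded each time the prompt is drawn
PS1="$now_str $later_str \$ "
The first timestamp stays frozen, while the second one updates at every prompt (as long as the promptvars shell option is enabled, which it is by default).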
Solved it!
I need to use the PROMPT_COMMAND variable to set the RETURN variable. PROMPT_COMMAND is a command that is run just before each prompt is displayed.
My script now looks like this:
PROMPT_COMMAND='RETURN=$?'
export PS1="\
$PS1USERCOLOR\u\
$COLOR_WHITE@\
$COLOR_GREEN\h\
$COLOR_WHITE:\
$COLOR_YELLOW\W\
\`if type parse_git_branch > /dev/null 2>&1; then parse_git_branch; fi\`\
\`if [[ \$RETURN = 0 ]]; then echo -e '$COLOR_WHITE'; else echo -e '$COLOR_RED'; fi\`\$\
$COLOR_WHITE"
Related
I have a function that runs a set of scripts that set variables, functions, and aliases in the current shell.
reloadVariablesFromScript() {
    for script in "${scripts[@]}"; do
        . "$script"
    done
}
If one of the scripts has an error, I want to exit the script and then exit the function, but not to kill the shell.
reloadVariablesFromScript() {
    for script in "${scripts[@]}"; do
        {(
            set -e
            . "$script"
        )}
        if [[ $? -ne 0 ]]; then
            >&2 echo $script failed. Skipping remaining scripts.
            return 1
        fi
    done
}
This would do what I want, except that the variables the script sets don't end up in the current shell, whether the script succeeds or fails.
Without the subshell, set -e causes the whole shell to exit, which is undesirable.
Is there a way I can either prevent the called script from continuing on an error without killing the shell or else set/export variables, aliases, and functions from within a subshell?
The following script simulates my problem:
test() {
    {(
        set -e
        export foo=bar
        false
        echo Should not have gotten here!
        export bar=baz
    )}
    local errorCode=$?
    echo foo="'$foo'". It should equal 'bar'.
    echo bar="'$bar'". It should not be set.
    if [[ $errorCode -ne 0 ]]; then
        echo Script failed correctly. Exiting function.
        return 1
    fi
    echo Should not have gotten here!
}
test
If worst comes to worst, since these scripts don't actually edit the filesystem, I can run each script in a subshell, check the exit code, and if it succeeds, run it outside of a subshell.
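That fallback would look something like this sketch (assuming the scripts are safe to source twice):
reloadVariablesFromScript() {
    for script in "${scripts[@]}"; do
        # dry run in a throwaway subshell just to catch errors
        if ! ( set -e; . "$script" ) > /dev/null 2>&1; then
            >&2 echo "$script failed. Skipping remaining scripts."
            return 1
        fi
        # it succeeded, so source it for real in the current shell
        . "$script"
    done
}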
Note that set -e has a number of surprising behaviors -- relying on it is not universally considered a good idea. That caveat being given, though: we can shuffle environment variables, aliases, and shell functions out as text:
envTest() {
    local errorCode newVars
    newVars=$(
        set -e
        {
            export foo=bar
            false
            echo Should not have gotten here!
            export bar=baz
        } >&2
        # generate code which, when eval'd, recreates our functions and variables
        declare -p | egrep -v '^declare -[^[:space:]]*r'
        declare -f
        alias -p
    ); errorCode=$?
    if (( errorCode == 0 )); then
        eval "$newVars"
    fi
    printf 'foo=%q. It should equal %q\n' "$foo" "bar"
    printf 'bar=%q. It should not be set.\n' "$bar"
    if [[ $errorCode -ne 0 ]]; then
        echo 'Script failed correctly. Exiting function.'
        return 1
    fi
    echo 'Should not have gotten here!'
}
envTest
Note that this code only applies either export if the entire script segment succeeds; the question text and comments appear to indicate that this is acceptable, if not desired.
Have a look at this little script:
#!/bin/bash
function do_something() {(
    set -e
    mkdir "/opt/some_folder" # <== returns 1 -> abort?
    echo "mkdir returned $?" # <== sets $? to 0 again
    rsync $( readlink -f "${BASH_SOURCE[0]}" ) /opt/some_folder/ # <== returns 23 -> abort?
    echo "rsync returned $?" # <== sets $? to 0 again
)}
# here every command inside `do_something` will be executed - regardless of errors
echo "run do_something in if-context.."
if ! do_something ; then
echo "running do_something did not work"
fi
# here `do_something` aborts on first error
echo "run do_something standalone.."
do_something
echo $?
I was trying to do what was suggested here (don't miss the extra parentheses introducing a subshell), but I didn't execute the function (do_something in my case) separately; I ran it together with the if expression.
Now when I run if ! do_something, the set -e command seems to have no effect.
Can someone explain this to me?
This is expected and described in the Bash Reference Manual.
-e
[...]
The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test in an if statement, [...].
[...]
If a compound command or shell function executes in a context where -e is being ignored, none of the commands executed within the compound command or function body will be affected by the -e setting, even if -e is set and a command returns a failure status. If a compound command or shell function sets -e while executing in a context where -e is ignored, that setting will not have any effect until the compound command or the command containing the function call completes.
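A stripped-down way to watch that rule in action (a sketch mirroring the do_something pattern above; f is a made-up name):
f() ( set -e; false; echo "reached the echo"; )

echo "standalone:"
f                 # the subshell body hits false and aborts; nothing is printed, f returns 1
echo "f returned $?"

echo "inside if:"
if ! f; then      # -e is ignored in this context, so false does not abort, the echo runs, and f returns 0
    echo "f reported failure"
fi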
Using a function to change settings and traps overcomes this limitation, at least in Homebrew Bash 5.2.15(1).
If I start with this:
errexit_ignore() {
    set +e
    trap - ERR
}
errexit_fail() {
    set -e
    trap failed ERR
}
errexit_fail
I can later do horrible (but useful) things like this:
for customization in ${customizations[@]}; do
    log_mark 2 "Checking ${customization} ..."
    errexit_ignore
    diff -qs "${expected}/${customization}" "${tmpdir}/${customization}"
    diff_exitcode="${?}"
    errexit_fail
    if [ "${diff_exitcode}" != "0" ]; then ## If it's not the expected file, the customization may have already been applied
        errexit_ignore
        diff -qs "${tmpdir}/${customization}" "${customized}/${customization}"
        diff_exitcode="${?}"
        errexit_fail
        if [ "${diff_exitcode}" != "0" ]; then ## If it's neither the expected file nor the customized file, either the default has changed or the customization has changed
            errexit_ignore
            git ls-files --error-unmatch "${customized}/${customization}"
            track_exitcode="${?}" ## Detect untracked file
            git diff --exit-code "${customized}/${customization}"
            diff_exitcode="${?}" ## Detect modified tracked file
            errexit_fail
            if [ "${track_exitcode}" != "0" -o "${diff_exitcode}" != "0" ]; then ## If the customization has uncommitted changes, assume the default hasn't changed
                log_mark 1 "Customized ${customization} will be updated"
            else ## If the default has changed, manual review is needed (which may result in an updated customization)
                diff -u "${expected}/${customization}" "${tmpdir}/${customization}" || :
                diff -u "${tmpdir}/${customization}" "${customized}/${customization}" || :
                abort "Default version of ${customization} has changed, expected version must be updated and customization must be checked for compatibility"
            fi
        else
            log_mark 1 "Customized ${customization} already in place"
        fi
    else
        log_mark 1 "Default ${customization} has not changed"
    fi
done
Notes:
After trap function ERR, set +e doesn't work unless you also trap - ERR (see the sketch after these notes)
Neither errexit_ignore nor errexit_fail can be defined on a single line (I'm not sure why not)
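A minimal sketch of the interaction the first note seems to describe (failed here is just a stand-in handler that prints a message):
failed() { echo "ERR trap fired"; }
trap failed ERR
set +e
false        # the ERR trap still fires here, even though errexit is off
trap - ERR
false        # with the trap removed, nothing fires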
I need to differentiate two cases: ( …subshell… ) vs $( …command substitution… )
I already have the following function which differentiates between being run in either a command substitution or a subshell and being run directly in the script.
#!/bin/bash
set -e
function setMyPid() {
    myPid="$(bash -c 'echo $PPID')"
}
function echoScriptRunWay() {
    local myPid
    setMyPid
    if [[ $myPid == $$ ]]; then
        echo "function run directly in the script"
    else
        echo "function run from subshell or substitution"
    fi
}
echoScriptRunWay
echo "$(echoScriptRunWay)"
( echoScriptRunWay; )
Example output:
function run directly in the script
function run from subshell or substitution
function run from subshell or substitution
Desired output
But I want to update the code so it differentiates between command substitution and subshell. I want it to produce the output:
function run directly in the script
function run from substitution
function run from subshell
P.S. I need to differentiate these cases because Bash has different behavior for the built-in trap command when run in command substitution and in a subshell.
P.P.S. I also care about the echoScriptRunWay | cat case, but that's a new question for me, which I created here.
I don't think one can reliably test if a command is run inside a command substitution.
You could test if stdout differs from the stdout of the main script, and if it does, boldly infer it might have been redirected. For example
samefd() {
    # Test if the passed file descriptors share the same inode
    perl -MPOSIX -e "exit 1 unless (fstat($1))[1] == (fstat($2))[1]"
}

exec {mainstdout}>&1

whereami() {
    if ((BASHPID == $$))
    then
        echo "In parent shell."
    elif samefd 1 $mainstdout
    then
        echo "In subshell."
    else
        echo "In command substitution (I guess so)."
    fi
}
whereami
(whereami)
echo $(whereami)
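Assuming perl is available and the script's stdout is still the terminal, those three calls should print something like:
In parent shell.
In subshell.
In command substitution (I guess so).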
I am studying the book "Beginning Linux Programming 4th ed" and chapter 2 is about shell programming. I was impressed by the example on page 53, and tried to develop a script to display more on that. Here is my code:
#!/bin/bash
var1=10
var2=20
var3=30
var4=40
for i in 1 2 3 4 # This works as intended!
do
x=var$i
y=$(($x))
echo $x = $y # But we can avoid declaring extra parameters x and y, see next line
printf " %s \n" "var$i = $(($x))"
done
for j in 1 2 3 4 #This has problems!
do
psword=PS$j
#eval psval='$'PS$i # Produces the same output as the next line
eval psval='$'$psword
echo '$'$psword = $psval
#echo "\$$psword = $psval" #The same as previous line
#echo $(eval '$'PS${i}) #Futile attempts
#echo PS$i = $(($PS${i}))
#echo PS$i = $(($PS{i}))
done
#I can not make it work as I want : the output I expect is
#PS1 = \[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$
#PS2 = >
#PS3 =
#PS4 = +
How can I get the intended output? When I run it as it is I only get
PS1 =
PS2 =
PS3 =
PS4 = +
What happened to PS1 and PS2?
Why don't I get the same values that I get with
echo $PS1
echo $PS2
echo $PS3
echo $PS4
because that is what I am trying to get.
A shell running a script is always a non-interactive shell. You can force the script to run in interactive mode using the '-i' option:
Try to change:
#!/bin/bash
to:
#!/bin/bash -i
See the INVOCATION section in 'man bash' (bash.bashrc is where your PS1 is defined):
When an interactive shell that is not a login shell is started, bash reads and executes commands from /etc/bash.bashrc and ~/.bashrc, if these files exist. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of /etc/bash.bashrc and ~/.bashrc.
When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed:
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi
but the value of the PATH variable is not used to search for the file name.
you can also read: http://tldp.org/LDP/abs/html/intandnonint.html
simple test:
$ cat > test.sh
echo "PS1: $PS1"
$ ./test.sh
PS1:
$ cat > test.sh
#!/bin/bash -i
echo "PS1: $PS1"
$ ./test.sh
PS1: ${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[01;34m\] \w \$\[\033[00m\]
Use indirect expansion:
for j in 0 1 2 3 4; do
    psword="PS$j"
    echo "$psword = ${!psword}"
done
I want to write code like this:
command="some command"
safeRunCommand $command
safeRunCommand() {
cmnd=$1
$($cmnd)
if [ $? != 0 ]; then
printf "Error when executing command: '$command'"
exit $ERROR_CODE
fi
}
But this code does not work the way I want. Where did I make the mistake?
Below is the fixed code:
#!/bin/ksh
safeRunCommand() {
    typeset cmnd="$*"
    typeset ret_code
    echo cmnd=$cmnd
    eval $cmnd
    ret_code=$?
    if [ $ret_code != 0 ]; then
        printf "Error: [%d] when executing command: '$cmnd'" $ret_code
        exit $ret_code
    fi
}
command="ls -l | grep p"
safeRunCommand "$command"
Now if you look into this code, the few things that I changed are:
use of typeset is not necessary, but it is a good practice. It makes cmnd and ret_code local to safeRunCommand
use of ret_code is not necessary, but it is a good practice to store the return code in some variable (and store it ASAP), so that you can use it later like I did in printf "Error: [%d] when executing command: '$cmnd'" $ret_code
pass the command with quotes surrounding it, as in safeRunCommand "$command". If you don't, cmnd will get only the value ls and not ls -l. This matters even more if your command contains pipes.
you can use typeset cmnd="$*" instead of typeset cmnd="$1" if you want to keep the spaces. You can try both, depending upon how complex your command argument is.
'eval' is used so that a command containing pipes works correctly
Note: Do remember that some commands, like grep, give 1 as the return code even though there isn't any error. If grep found something it will return 0, else 1.
I tested with KornShell and Bash, and it worked fine. Let me know if you face issues running this.
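To see why the quoting and the eval matter, here is a rough sketch using the same kind of command string:
cmnd="ls -l | grep p"
$cmnd        # word-splits: ls receives -l, |, grep and p as literal arguments -- no pipeline runs
eval $cmnd   # re-parses the string, so ls -l really is piped into grep p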
Try
safeRunCommand() {
    "$@"
    if [ $? != 0 ]; then
        printf "Error when executing command: '$1'"
        exit $ERROR_CODE
    fi
}
It should be $cmd instead of $($cmd). It works fine with that on my box.
Your script works only for one-word commands, like ls. It will not work for "ls cpp". For this to work, replace cmd="$1"; $cmd with "$@". And, do not run your script as command="some cmd"; safeRun command. Run it as safeRun some cmd.
Also, when you have to debug your Bash scripts, execute with '-x' flag. [bash -x s.sh].
There are several things wrong with your script.
Functions (subroutines) should be declared before attempting to call them. You probably want to return() but not exit() from your subroutine to allow the calling block to test the success or failure of a particular command. That aside, you don't capture 'ERROR_CODE' so that is always zero (undefined).
It's good practice to surround your variable references with curly braces, too. Your code might look like:
#!/bin/sh
command="/bin/date -u" #...Example Only
safeRunCommand() {
    cmnd="$@"       #...ensure whitespace passed and preserved
    $cmnd
    ERROR_CODE=$?   #...so we have it for the command we want
    if [ ${ERROR_CODE} != 0 ]; then
        printf "Error when executing command: '${command}'\n"
        exit ${ERROR_CODE}  #...consider 'return()' here
    fi
}
safeRunCommand $command
command="cp"
safeRunCommand $command
The normal idea would be to run the command and then use $? to get the exit code. However, sometimes you have multiple cases in which you need to get the exit code. For example, you might need to hide its output, but still return the exit code, or print both the exit code and the output.
ec() { [[ "$1" == "-h" ]] && { shift && eval $* > /dev/null 2>&1; ec=$?; echo $ec; } || eval $*; ec=$?; }
This will give you the option to suppress the output of the command you want the exit code for. When the output is suppressed for the command, the exit code will directly be returned by the function.
I personally like to put this function in my .bashrc file.
Below I demonstrate a few ways in which you can use this:
# In this example, the output for the command will be
# normally displayed, and the exit code will be stored
# in the variable $ec.
$ ec echo test
test
$ echo $ec
0
# In this example, the exit code is output
# and the output of the command passed
# to the `ec` function is suppressed.
$ echo "Exit Code: $(ec -h echo test)"
Exit Code: 0
# In this example, the output of the command
# passed to the `ec` function is suppressed
# and the exit code is stored in `$ec`
$ ec -h echo test
$ echo $ec
0
Solution to your code using this function
#!/bin/bash
if [[ "$(ec -h 'ls -l | grep p')" != "0" ]]; then
echo "Error when executing command: 'grep p' [$ec]"
exit $ec;
fi
You should also note that the exit code you will be seeing is for the grep command being run, since it is the last command executed -- not for the ls.
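If you ever do need the status of the ls half of such a pipeline, bash keeps the per-command statuses in the PIPESTATUS array (a general bash feature, separate from the ec function above):
ls -l | grep p
echo "ls exited with ${PIPESTATUS[0]}, grep exited with ${PIPESTATUS[1]}"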