I've tried to use trap to remove a temporary file at the end of a Bourne shell script, but this doesn't work:
trap "trap \"rm \\\"$out\\\"\" EXIT INT TERM" 0
This is inside a function, by the way, hence the attempt at a nested trap.
How do I do it?
You can only have one trap set for each signal. If different parts of your script need to perform different cleanup actions, you’ll have to create lists of cleanup actions. Then set a single trap handler that performs all the required cleanup actions.
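A quick demonstration of that limitation at an interactive prompt (a second trap for the same signal silently replaces the first rather than stacking):

$ trap 'echo one' EXIT
$ trap 'echo two' EXIT   # replaces the first handler entirely
$ exit
two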
Here’s an example:
set -xv
PROG="$(basename -- "${0}")"

# set up your trap handler
TEMP_FILES=()
trap_handler() {
    for F in "${TEMP_FILES[@]}"; do
        rm -f "${F}"
    done
}
trap trap_handler 0 1 2 3 15

something_that_uses_temp_files() {
    mytemp="$(mktemp -t "${PROG}")"
    TEMP_FILES+=("${mytemp}")
    date > "${mytemp}"
    # ...
}

# ...
something_that_uses_temp_files
# ...
There’s a single trap handler, but you can register cleanup actions from anywhere in the script by appending to the TEMP_FILES array. The cleanup actions can be registered from inside functions too.
If you're not using a shell with arrays, the basic idea is the same, but the implementation details will be a little different. For example, you could store the list as a colon-separated string variable, and use the ${parameter%%word} and ${parameter#word} expansions available in every POSIX shell to iterate through its elements in the trap handler:
#!/bin/sh
set -xv
PROG="$(basename -- "${0}")"

# set up your trap handler
TEMP_FILES=""
trap_handler() {
    while [ -n "${TEMP_FILES}" ]; do
        CUR_FILE="${TEMP_FILES%%:*}"
        TEMP_FILES="${TEMP_FILES#*:}"
        if [ "${CUR_FILE}" = "${TEMP_FILES}" ]; then
            # there were no colons -- CUR_FILE is the last file to process
            TEMP_FILES=""
        fi
        if [ -n "${CUR_FILE}" ]; then
            rm -f "${CUR_FILE}"
        fi
    done
}
trap trap_handler 0 1 2 3 15

something_that_uses_temp_files() {
    mytemp="$(mktemp -t "${PROG}")"
    TEMP_FILES="${TEMP_FILES}:${mytemp}"
    date > "${mytemp}"
    # ...
}

# ...
something_that_uses_temp_files
something_that_uses_temp_files
# ...
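As an aside, if you are on bash specifically, another way to approach the original question's nested-trap idea is to chain onto the existing EXIT handler instead of keeping a list. This is only a sketch (the helper name add_exit_trap is made up here); it relies on the fact that trap -p EXIT prints the current handler in re-inputtable form:

add_exit_trap() {
    local new_cmd=$1
    # `trap -p EXIT` prints e.g.:  trap -- 'old command' EXIT
    # Re-parsing that output as shell words extracts the old command safely,
    # even if it contains quotes.
    eval "local -a words=($(trap -p EXIT))"
    local prev=${words[2]-}
    trap "${prev:+$prev; }$new_cmd" EXIT
}

tmp1=$(mktemp) && add_exit_trap "rm -f '$tmp1'"
tmp2=$(mktemp) && add_exit_trap "rm -f '$tmp2'"   # both removals now run on exit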
Why is this make_request function ending just after a single traversal?
make_request(){
    path="${1//' '/'%20'}"
    echo $path
    mkdir -p $HOME/"$1"
    $(curl --output $HOME/"$1"/$FILE_NAME -v -X GET $BASE_URL"/"$API_METHOD"/"$path &> /dev/null)
    # sample response from curl
    # {
    #   "count": 2,
    #   "items": [
    #     {"path": "somepath1", "type": "folder"},
    #     {"path": "somepath2", "type": "folder"},
    #   ]
    # }
    count=$(jq ".count" $HOME/"$1"/$FILE_NAME)
    for (( c=0; c<$count; c++ ))
    do
        child=$(jq -r ".items[$c] .path" $HOME/"$1"/$FILE_NAME);
        fileType=$(jq -r ".items[$c] .type" $HOME/"$1"/$FILE_NAME);
        if [ "$fileType" == "folder" ]; then
            make_request "$child"
        fi
    done
}
make_request "/"
make_request "/" should give the following output:
/folder
/folder/folder1-1
/folder/folder1-1/folder1-2
/folder/folder2-1
/folder/folder2-1/folder2-2
/folder/folder2-1/folder2-3 ...
but I am getting the following:
/folder
/folder/folder1-1
/folder/folder1-1/folder1-2
You are using global variables everywhere, so the inner (recursive) call changes the loop variables c and count of the outer call, resulting in bogus output.
Minimal example:
f() {
    this_is_global=$1
    echo "enter $1: $this_is_global"
    ((RANDOM%2)) && f "$(($1+1))"
    echo "exit $1: $this_is_global"
}
Running f 1 prints something like
enter 1: 1
enter 2: 2
enter 3: 3
exit 3: 3
exit 2: 3
exit 1: 3
Solution: Make the variables local by writing local count=$(...) and so on. For your loop, you also have to add a separate local c statement above the for.
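Applied to the minimal example above, the fix is a single local declaration (a sketch; the variable is renamed since it is no longer global):

f() {
    local v=$1           # local: each recursive call now gets its own copy
    echo "enter $1: $v"
    ((RANDOM%2)) && f "$(($1+1))"
    echo "exit $1: $v"   # now matches the value printed on entry
}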
As currently written, all variables have global scope. This means all function calls overwrite and reference the same set of variables, so when a child call returns, the parent finds its variables have been overwritten by the child; that in turn leads to the type of behavior seen here.
In this particular case the loop variable c leaves the deepest child call with a value of c=$count, so all the parent loops now see c=$count and exit. It actually gets a bit more interesting because count is also changing with each function call. The earlier comment suggesting set -x (i.e., enabling debug mode) before the first function call will show what's going on with each variable at each call.
What the OP wants is to ensure each function call works with its own local copy of each variable. The easiest approach is to add a local <variable_list> statement at the top of the function, making sure to list all variables that should be treated as local, e.g.:
local path count c child fileType
Alternatively, declare each variable local at the point where it is first used:
...
local count; # <------ VARIABLE MADE LOCAL
count=$(jq ".count" $HOME/"$1"/$FILE_NAME)
local c; # <------ VARIABLE MADE LOCAL
for (( c=0; c<$count; c++ ))
do
....
done
...
I have a long (~2,000 lines) script that I'm trying to log for future debugging. Right now I have:
function log_with_time()
{
    while read a; do
        echo `date +'%H:%M:%S.%4N '` " $a" >> $LOGFILE
    done
}
exec 7> >(log_with_time)
BASH_XTRACEFD=7
PS4=' exit($?)ln:$LINENO: '
set -x
echo "helloWorld 1"
which gives me very nice logging for any and all commands that are run:
15:18:03.6359 exit(0)ln:28: echo 'helloWorld 1'
The issue that I'm running into is that xtrace seems to be asynchronous. With longer scripts, the log times fall behind the actual time the commands are called, and the exit code doesn't match the logged command.
There has to be a better way to do this but I'd be happy if I could just synchronize xtrace.
...
tldr: How can I generally log the time, command and exit code for all commands in a script?
...
(First time posting, feedback appreciated)
UPDATE:
exec {BASH_XTRACEFD}>>$LOGFILE
PS4=' time:$(date +%H:%M:%S.%4N) ln:$LINENO: '
set -x
fail()
{
    echo "fail" >> $LOGFILE
    return 1
}
trap 'echo exit:$? >> $LOGFILE' DEBUG
fail
solves all of my synchronization issues. Exit codes and timestamps are working beautifully. My only issue now is one of formatting: the trap itself gets reported by xtrace.
time:18:30:07.6080 ln:27: fail
time:18:30:07.6089 ln:12: echo fail
fail
time:18:30:07.6126 ln:13: return 1
time:18:30:07.6134 ln:28: echo exit:1
exit:1
I've tried setting +x in the trap but then set +x gets logged. If I could find a way to omit one line from xtrace, this log would be perfect.
The async behavior is coming from the process substitution -- anything in >(...) is running in its own subshell on the other end of a FIFO. Since it's a separate process, it's inherently unsynchronized.
You don't need log_with_time here at all, though, and so you don't need BASH_XTRACEFD redirecting to a process substitution in the first place. Consider:
# aside: $(date ...) has a *huge* amount of performance overhead here. Personally, I'd
# advise against using it, unless you really need all that precision; $SECONDS will
# be orders-of-magnitude cheaper.
PS4=' prior-exit:$? time:$(date +%H:%M:%S.%4N) ln:$LINENO: '
...thereafter:
$ true
prior-exit:0 time:16:01:17.2509 ln:28: true
$ false
prior-exit:0 time:16:01:18.4242 ln:29: false
$ false
prior-exit:1 time:16:01:19.2963 ln:30: false
$ true
prior-exit:1 time:16:01:20.2159 ln:31: true
$ true
prior-exit:0 time:16:01:20.8650 ln:32: true
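If relative timestamps are enough for you, the cheaper variant hinted at in the comment above might look like this (a sketch; $SECONDS only has whole-second resolution, but it costs no fork per traced line):

PS4=' prior-exit:$? t:+${SECONDS}s ln:$LINENO: '
set -x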
Per conversation with Charles Duffy in the comments, to whom all credit is given:
Process substitution >(...) is asynchronous, allowing the log writing to fall behind and out of sync with the xtrace.
Instead use:
exec {BASH_XTRACEFD}>>$LOGFILE
PS4=' time:$(date +%H:%M:%S.%4N) ln:$LINENO: '
for synchronously logging the time and line.
Furthermore, xtrace is triggered before running the command, making it a bad candidate for capturing exit codes. Instead use:
trap 'echo exit:$? >> $LOGFILE' DEBUG
to log the exit code of each command. The DEBUG trap fires just before each command is executed, at which point $? still holds the exit status of the command that just finished, so every command's status gets logged right before the next one runs. Note that this won't report on every step inside a function call the way xtrace will.
No solution yet for omitting the trap from xtrace, but it's good enough:
LOGFILE="SomeFile.log"
exec {BASH_XTRACEFD}>>$LOGFILE
PS4=' time:$(date +%H:%M:%S.%4N) ln:$LINENO: '
set -x
fail() # test function that returns 1
{
    echo "fail" >> $LOGFILE
    return 1
}

success() # test function that returns 0
{
    echo "success" >> $LOGFILE
    return 0
}
trap 'echo $? >> $LOGFILE' DEBUG
fail
success
echo "complete"
yields:
time:14:10:22.2686 ln:21: trap 'echo $? >> $LOGFILE' DEBUG
time:14:10:22.2693 ln:23: echo 0
0
time:14:10:22.2736 ln:23: fail
time:14:10:22.2741 ln:12: echo fail
fail
time:14:10:22.2775 ln:13: return 1
time:14:10:22.2782 ln:24: echo 1
1
time:14:10:22.2830 ln:24: success
time:14:10:22.2836 ln:17: echo success
success
time:14:10:22.2873 ln:18: return 0
time:14:10:22.2881 ln:26: echo 0
0
time:14:10:22.2912 ln:26: echo complete
I'm writing a shell script to save some keystrokes and avoid typos. I would like to keep the script as a single file that calls internal methods/functions and terminates those functions if problems arise, without leaving the terminal.
my_script.sh
#!/bin/bash
exit_if_no_git() {
    # if no git directory found, exit
    # ...
    exit 1
}

branch() {
    exit_if_no_git
    # some code...
}

push() {
    exit_if_no_git
    # some code...
}

feature() {
    exit_if_no_git
    # some code...
}

bug() {
    exit_if_no_git
    # some code...
}
I would like to call it via:
$ branch
$ feature
$ bug
$ ...
I know I can source git_extensions.sh in my .bash_profile, but when I execute one of the commands and there is no .git directory, it will exit 1 as expected, but this also exits the terminal itself (since the script is sourced).
Is there an alternative to exit for terminating the functions, one that doesn't also exit the terminal?
Instead of defining a function exit_if_no_git, define one as has_git_dir:
has_git_dir() {
    local dir=${1:-$PWD}               # allow optional argument
    while [[ $dir = */* ]]; do         # while not at root...
        [[ -d $dir/.git ]] && return 0 # ...if a .git exists, return success
        dir=${dir%/*}                  # ...otherwise trim the last element
    done
    return 1                           # if nothing was found, return failure
}
...and, elsewhere:
branch() {
    has_git_dir || return
    # ...actual logic here...
}
That way the functions are short-circuited, but no shell-level exit occurs.
It's also possible to exit a file being sourced using return: if return is run at the top level of such a file, sourcing stops at that point, preventing later functions within it from even being defined.
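A minimal sketch of that last point (the guard condition here is made up for illustration):

# git_extensions.sh -- intended to be sourced, not executed
if ! command -v git >/dev/null 2>&1; then
    echo "git not found; git helpers not loaded" >&2
    return   # top-level return in a sourced file: stops sourcing, terminal stays open
fi

branch() {
    has_git_dir || return
    # some code...
}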
Consider this PS1
PS1='\n${_:+$? }$ '
Here is the result of a few commands
$ [ 2 = 2 ]
0 $ [ 2 = 3 ]
1 $
1 $
The first line shows no status as expected, and the next two lines show the
correct exit code. However on line 3 only Enter was pressed, so I would like the
status to go away, like line 1. How can I do this?
Here's a funny, very simple possibility: it uses the \# escape sequence of PS1 together with parameter expansions (and the way Bash expands its prompt).
The escape sequence \# expands to the command number of the command to be executed. This is incremented each time a command has actually been executed. Try it:
$ PS1='\# $ '
2 $ echo hello
hello
3 $ # this is a comment
3 $
3 $ echo hello
hello
4 $
Now, each time a prompt is to be displayed, Bash first expands the escape sequences found in PS1, then (provided the shell option promptvars is set, which is the default), this string is expanded via parameter expansion, command substitution, arithmetic expansion, and quote removal.
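You can watch that second expansion pass happen (a quick sketch; x is an arbitrary variable):

$ x=hello
$ PS1='${x} \$ '
hello $ shopt -u promptvars
${x} $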
The trick is then to have an array that will have the k-th field set (to the empty string) whenever the (k-1)-th command is executed. Then, using appropriate parameter expansions, we'll be able to detect when these fields are set and to display the return code of the previous command if the field isn't set. If you want to call this array __cmdnbary, just do:
PS1='\n${__cmdnbary[\#]-$? }${__cmdnbary[\#]=}\$ '
Look:
$ PS1='\n${__cmdnbary[\#]-$? }${__cmdnbary[\#]=}\$ '
0 $ [ 2 = 3 ]
1 $
$ # it seems that it works
$ echo "it works"
it works
0 $
To qualify for the shortest answer challenge:
PS1='\n${a[\#]-$? }${a[\#]=}$ '
that's 31 characters.
Don't use this, of course, as a is too trivial a name; also, \$ might be better than $.
If you don't like that the initial prompt is 0 $, you can very easily fix this by initializing the array __cmdnbary appropriately; put this somewhere in your configuration file:
__cmdnbary=( '' '' ) # Initialize the field 1!
PS1='\n${__cmdnbary[\#]-$? }${__cmdnbary[\#]=}\$ '
I got some time to play around with this over the weekend. Looking at my earlier (not-so-good) answer and the other answers, I think this may well be the smallest answer.
Place these lines at the end of your ~/.bash_profile:
PS1='$_ret$ '
trapDbg() {
    local c="$BASH_COMMAND"
    [[ "$c" != "pc" ]] && export _cmd="$c"
}

pc() {
    local r=$?
    trap "" DEBUG
    [[ -n "$_cmd" ]] && _ret="$r " || _ret=""
    export _ret
    export _cmd=
    trap 'trapDbg' DEBUG
}
export PROMPT_COMMAND=pc
trap 'trapDbg' DEBUG
Then open a new terminal and note this desired behavior on BASH prompt:
$ uname
Darwin
0 $
$
$
$ date
Sun Dec 14 05:59:03 EST 2014
0 $
$
$ [ 1 = 2 ]
1 $
$
$ ls 123
ls: cannot access 123: No such file or directory
2 $
$
Explanation:
This is based on trap 'handler' DEBUG and PROMPT_COMMAND hooks.
PS1 uses a variable _ret, i.e. PS1='$_ret$ '.
The DEBUG trap runs only when a command is actually executed, but PROMPT_COMMAND runs even when Enter is pressed on an empty line.
The DEBUG trap sets a variable _cmd to the actually executed command, using the Bash internal variable BASH_COMMAND.
The PROMPT_COMMAND hook sets _ret to "$? " if _cmd is non-empty, otherwise sets it to "". Finally it resets _cmd to the empty state.
The variable HISTCMD is updated every time a new command is executed. Unfortunately, the value is masked during the execution of PROMPT_COMMAND (I suppose for reasons related to not having history messed up with things which happen in the prompt command). The workaround I came up with is kind of messy, but it seems to work in my limited testing.
# This only works if the prompt has a prefix
# which is displayed before the status code field.
# Fortunately, in this case, there is one.
# Maybe use a no-op prefix in the worst case (!)
PS1_base=$'\n'
# Functions for PROMPT_COMMAND
PS1_update_HISTCMD () {
    # If HISTCONTROL contains "ignoredups" or "ignoreboth", this breaks.
    # We should not change it programmatically
    # (think principle of least astonishment etc)
    # but we can always gripe.
    case :$HISTCONTROL: in
        *:ignoredups:* | *:ignoreboth:* )
            echo "PS1_update_HISTCMD(): HISTCONTROL contains 'ignoredups' or 'ignoreboth'" >&2
            echo "PS1_update_HISTCMD(): Warning: Please remove this setting." >&2 ;;
    esac

    # PS1_HISTCMD needs to contain the old value of PS1_HISTCMD2 (a copy of HISTCMD)
    PS1_HISTCMD=${PS1_HISTCMD2:-$PS1_HISTCMD}

    # PS1_HISTCMD2 needs to be unset for the next prompt to trigger properly
    unset PS1_HISTCMD2
}
PROMPT_COMMAND=PS1_update_HISTCMD
# Finally, the actual prompt:
PS1='${PS1_base#foo${PS1_HISTCMD2:=${HISTCMD%$PS1_HISTCMD}}}${_:+${PS1_HISTCMD2:+$? }}$ '
The logic in the prompt is roughly as follows:
${PS1_base#foo...}
This displays the prefix. The stuff in #... is useful only for its side effects. We want to do some variable manipulation without having the values of the variables display, so we hide them in a string substitution. (This will display odd and possibly spectacular things if the value of PS1_base ever happens to begin with foo followed by the current command history index.)
${PS1_HISTCMD2:=...}
This assigns a value to PS1_HISTCMD2 (if it is unset, which we have made sure it is). The substitution would nominally also expand to the new value, but we have hidden it in a ${var#subst} as explained above.
${HISTCMD%$PS1_HISTCMD}
We assign to PS1_HISTCMD2 either the value of HISTCMD (when a new entry in the command history is being made, i.e. we are executing a new command) or an empty string (when the command line is empty). This works by trimming off of the value of HISTCMD any match on PS1_HISTCMD (using the ${var%subst} suffix-removal syntax).
${_:+...}
This is from the question. It will expand to ... something if the value of $_ is set and nonempty (which it is when a command is being executed, but not e.g. if we are performing a variable assignment). The "something" should be the status code (and a space, for legibility) if PS1_HISTCMD2 is nonempty.
${PS1_HISTCMD2:+$? }
There.
'$ '
This is just the actual prompt suffix, as in the original question.
So the key parts are the variables PS1_HISTCMD which remembers the previous value of HISTCMD, and the variable PS1_HISTCMD2 which captures the value of HISTCMD so it can be accessed from within PROMPT_COMMAND, but needs to be unset in the PROMPT_COMMAND so that the ${PS1_HISTCMD2:=...} assignment will fire again the next time the prompt is displayed.
I fiddled for a bit with trying to hide the output from ${PS1_HISTCMD2:=...}, but then realized that there is in fact something we want to display anyhow, so we just piggyback on that. You can't have a completely empty PS1_base, because the shell apparently notices and does not even attempt to perform a substitution when there is no value; but perhaps you can come up with a dummy value (a no-op escape sequence, perhaps?) if you have nothing else you want to display. Or maybe this could be refactored to run with a suffix instead, but that is probably going to be trickier still.
In response to Anubhava's "smallest answer" challenge, here is the code without comments or error checking.
PS1_base=$'\n'
PS1_update_HISTCMD () { PS1_HISTCMD=${PS1_HISTCMD2:-$PS1_HISTCMD}; unset PS1_HISTCMD2; }
PROMPT_COMMAND=PS1_update_HISTCMD
PS1='${PS1_base#foo${PS1_HISTCMD2:=${HISTCMD%$PS1_HISTCMD}}}${_:+${PS1_HISTCMD2:+$? }}$ '
This is probably not the best way to do it, but it seems to be working:
function pc {
    foo=$_
    fc -l > /tmp/new
    if cmp -s /tmp/{new,old} || test -z "$foo"
    then
        PS1='\n$ '
    else
        PS1='\n$? $ '
    fi
    cp /tmp/{new,old}
}
PROMPT_COMMAND=pc
Result
$ [ 2 = 2 ]
0 $ [ 2 = 3 ]
1 $
$
I needed to use the great script bash-preexec.sh. Although I don't like external dependencies, this was the only thing that helped me avoid getting 1 in $? after just pressing Enter without running any command.
This goes to your ~/.bashrc:
__prompt_command() {
    local exit="$?"
    PS1='\u#\h: \w \$ '
    # ${red} and ${clear} are color escape variables assumed to be defined elsewhere
    [ -n "$LASTCMD" -a "$exit" != "0" ] && PS1="[${red}${exit}${clear}] $PS1"
}
PROMPT_COMMAND=__prompt_command

[ -f ~/.bash-preexec.sh ] && . ~/.bash-preexec.sh
preexec() { LASTCMD="$1"; }
UPDATE: later I was able to find a solution without dependency on .bash-preexec.sh.
Here's a shell script:
globvar=0

function myfunc {
    let globvar=globvar+1
    echo "myfunc: $globvar"
}

myfunc
echo "something" | myfunc
echo "Global: $globvar"
When called, it prints out the following:
$ sh zzz.sh
myfunc: 1
myfunc: 2
Global: 1
$ bash zzz.sh
myfunc: 1
myfunc: 2
Global: 1
$ zsh zzz.sh
myfunc: 1
myfunc: 2
Global: 2
The question is: why does this happen, and which behavior is correct?
P.S. I have a strange feeling that the function behind the pipe is called in a forked shell... So, is there a simple workaround?
P.P.S. This function is a simple test wrapper. It runs a test application and analyzes its output, then increments the $PASSED or $FAILED variables. Finally, you get the number of passed/failed tests in global variables. The usage is like:
test-util << EOF | myfunc
input for test #1
EOF
test-util << EOF | myfunc
input for test #2
EOF
echo "Passed: $PASSED, failed: $FAILED"
Korn shell gives the same results as zsh, by the way.
Please see BashFAQ/024. Pipes create subshells in Bash and variables are lost when subshells exit.
Based on your example, I would restructure it something like this:
globvar=0

function myfunc {
    echo $(($1 + 1))
}

myfunc "$globvar"
globvar=$(echo "something" | myfunc "$globvar")
Piping something into myfunc in sh or bash causes a new shell to spawn. You can confirm this by adding a long sleep in myfunc: while it's sleeping, call ps and you'll see a subprocess. When the function returns, that subshell exits without changing the value in the parent process.
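A sketch confirming the subshell without any sleeping, using $BASHPID (bash 4+), which, unlike $$, reports the actual process ID inside a subshell:

myfunc() { echo "myfunc ran in PID $BASHPID"; }
echo "parent shell is PID $BASHPID"
myfunc               # same PID: runs in the current shell
echo hi | myfunc     # different PID: runs in a pipeline subshell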
If you really need that value to be changed, you'll need to return a value from the function and check $PIPESTATUS after, I guess, like this:
globvar=0

function myfunc {
    let globvar=globvar+1
    echo "myfunc: $globvar"
    return $globvar
}

myfunc
echo "something" | myfunc
globvar=${PIPESTATUS[1]}
echo "Global: $globvar"
The problem is 'which end of a pipeline using built-ins is executed by the original process?'
In zsh, it looks like the last command in the pipeline is executed by the main shell script when the command is a function or built-in.
In Bash (and sh is likely to be a link to Bash if you're on Linux), either both commands are run in a subshell, or the first command is run by the main process and the others are run in subshells.
Clearly, when the function is run in a sub-shell, it does not affect the variable in the parent shell (only the global in the sub-shell).
Consider adding an extra test:
echo Something | { myfunc; echo $globvar; }
echo $globvar