I am trying to echo the last command run inside a bash script. I found a way to do it with history, tail, head, and sed, which works fine when the command corresponds to a specific line in my script from the parser's standpoint. However, under some circumstances I don't get the expected output, for instance when the command is inside a case statement:
The script:
#!/bin/bash
set -o history
date
last=$(echo `history |tail -n2 |head -n1` | sed 's/[0-9]* //')
echo "last command is [$last]"
case "1" in
"1")
date
last=$(echo `history |tail -n2 |head -n1` | sed 's/[0-9]* //')
echo "last command is [$last]"
;;
esac
The output:
Tue May 24 12:36:04 CEST 2011
last command is [date]
Tue May 24 12:36:04 CEST 2011
last command is [echo "last command is [$last]"]
[Q] Can someone help me find a way to echo the last run command regardless of how/where this command is called within the bash script?
My answer
Despite the much appreciated contributions from my fellow SO'ers, I opted for writing a run function - which runs all its parameters as a single command and displays the command and its error code when it fails - with the following benefits:
-I only need to prepend the commands I want to check with run, which keeps them on one line and doesn't affect the conciseness of my script
-Whenever the script fails on one of these commands, the last output line of my script is a message that clearly displays which command failed along with its exit code, which makes debugging easier
Example script:
#!/bin/bash
die() { echo >&2 -e "\nERROR: $@\n"; exit 1; }
run() { "$@"; code=$?; [ $code -ne 0 ] && die "command [$*] failed with error code $code"; }
case "1" in
"1")
run ls /opt
run ls /wrong-dir
;;
esac
The output:
$ ./test.sh
apacheds google iptables
ls: cannot access /wrong-dir: No such file or directory
ERROR: command [ls /wrong-dir] failed with error code 2
I tested various commands with multiple arguments, bash variables as arguments, quoted arguments... and the run function didn't break them. The only issue I found so far is running an echo that breaks, but I don't plan to check my echos anyway.
Bash has built-in features to access the last command executed. But that's the last whole command (e.g. the whole case statement), not individual simple commands like you originally requested.
!:0 = the name of command executed.
!:1 = the first parameter of the previous command
!:4 = the fourth parameter of the previous command
!:* = all of the parameters of the previous command
!^ = the first parameter of the previous command (same as !:1)
!$ = the final parameter of the previous command
!:-3 = all parameters in range 0-3 (inclusive)
!:2-5 = all parameters in range 2-5 (inclusive)
!! = the previous command line
etc.
So, the simplest answer to the question is, in fact:
echo !!
...alternatively:
echo "Last command run was ["!:0"] with arguments ["!:*"]"
Try it yourself!
echo this is a test
echo !!
In a script, history expansion is turned off by default; you need to enable it with
set -o history -o histexpand
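Putting those pieces together, a minimal illustrative script (assembled from the snippets above; treat it as a sketch to try, not a guarantee for every bash version):
#!/bin/bash
set -o history -o histexpand
echo this is a test
echo !!   # should print: echo this is a test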
The command history is an interactive feature. Only complete commands are entered in the history. For example, the case construct is entered as a whole, when the shell has finished parsing it. Neither looking up the history with the history built-in nor printing it through shell expansion (!:p) does what you seem to want, which is to print invocations of simple commands.
The DEBUG trap lets you execute a command right before any simple command execution. A string version of the command to execute (with words separated by spaces) is available in the BASH_COMMAND variable.
trap 'previous_command=$this_command; this_command=$BASH_COMMAND' DEBUG
…
echo "last command is $previous_command"
Note that previous_command will change every time you run a command, so save it to a variable in order to use it. If you want to know the previous command's return status as well, save both in a single command.
cmd=$previous_command ret=$?
if [ $ret -ne 0 ]; then echo "$cmd failed with error code $ret"; fi
Furthermore, if you only want to abort on a failed command, use set -e to make your script exit on the first failed command. You can display the last command from the EXIT trap.
set -e
trap 'echo "exit $? due to $previous_command"' EXIT
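Assembled into one runnable script, a minimal sketch of the approach just described (it should report something like exit 1 due to false):
#!/bin/bash
set -e
# Record each simple command just before it runs.
trap 'previous_command=$this_command; this_command=$BASH_COMMAND' DEBUG
# On exit, report the status and the offending command.
trap 'echo "exit $? due to $previous_command"' EXIT
echo "foo"
false        # set -e aborts the script here
echo "bar"   # never reached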
Note that if you're trying to trace your script to see what it's doing, forget all this and use set -x.
After reading the answer from Gilles, I decided to see if the $BASH_COMMAND var was also available (and the desired value) in an EXIT trap - and it is!
So, the following bash script works as expected:
#!/bin/bash
exit_trap () {
local lc="$BASH_COMMAND" rc=$?
echo "Command [$lc] exited with code [$rc]"
}
trap exit_trap EXIT
set -e
echo "foo"
false 12345
echo "bar"
The output is
foo
Command [false 12345] exited with code [1]
bar is never printed because set -e causes bash to exit the script when a command fails, and the false command always fails (by definition). The 12345 passed to false is just there to show that the arguments of the failed command are captured as well (the false command ignores any arguments passed to it).
I was able to achieve this by using set -x in the main script (which makes the script print out every command that is executed) and writing a wrapper script which just shows the last line of output generated by set -x.
This is the main script:
#!/bin/bash
set -x
echo some command here
echo last command
And this is the wrapper script:
#!/bin/sh
./test.sh 2>&1 | grep '^\+' | tail -n 1 | sed -e 's/^\+ //'
Running the wrapper script produces this as output:
echo last command
history | tail -2 | head -1 | cut -c8-
tail -2 returns the last two command lines from history
head -1 returns just the first of those lines
cut -c8- returns just the command text, removing the history event number and the surrounding spaces (not a PID).
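For illustration, a history line typically looks like this (the leading field is the event number):
$ history | tail -2 | head -1
  487  ls -l /tmp
The event number and its padding occupy the first 7 columns here, which is why cut -c8- leaves just ls -l /tmp; the exact width can vary with the size of the number, so treat the 8 as an assumption to verify.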
There is a race condition between the last-command ($_) and last-error ($?) variables. If you try to store one of them in a variable of your own, both will already hold new values by the time the second assignment runs. Actually, the last command has no value at all in that case.
Here is what I did to store (nearly) both pieces of information in my own variables, so my bash script can determine if there was any error AND set the title to the last run command:
# This construct is needed because of a race condition when trying to obtain
# both the last command and the last error. With this, the information of the
# last error is implied by the corresponding case while the command is retrieved.
if [[ "${?}" == 0 && "${_}" != "" ]] ; then
# Last command MUST be retrieved first.
LASTCOMMAND="${_}" ;
RETURNSTATUS='✓' ;
elif [[ "${?}" == 0 && "${_}" == "" ]] ; then
LASTCOMMAND='unknown' ;
RETURNSTATUS='✓' ;
elif [[ "${?}" != 0 && "${_}" != "" ]] ; then
# Last command MUST be retrieved first.
LASTCOMMAND="${_}" ;
RETURNSTATUS='✗' ;
# Fixme: "$?" not changing state until command executed.
elif [[ "${?}" != 0 && "${_}" == "" ]] ; then
LASTCOMMAND='unknown' ;
RETURNSTATUS='✗' ;
# Fixme: "$?" not changing state until command executed.
fi
This construct retains the information of whether an error occurred, and obtains the last run command. Because of the race condition, I cannot store the actual error value. Besides, most commands don't even care about error numbers; they just return something different from '0'. You'll notice that if you use the errno extension of bash.
It should be possible with something like an "internal" script for bash, as in a bash extension, but I'm not familiar with anything like that, and it wouldn't be portable either.
CORRECTION
I didn't think it was possible to retrieve both variables at the same time. Although I like the style of the code, I assumed it would be interpreted as two commands. This was wrong, so my answer boils down to:
# Because of a race condition, both MUST be retrieved at the same time.
declare RETURNSTATUS="${?}" LASTCOMMAND="${_}" ;
if [[ "${RETURNSTATUS}" == 0 ]] ; then
declare RETURNSYMBOL='✓' ;
else
declare RETURNSYMBOL='✗' ;
fi
Although my post might not get any positive rating, I finally solved my problem myself.
And this seems appropriate regarding the initial post. :)
Related
Here's my problem: from the console, if I type the below,
var=`history 1`
echo $var
I get the desired output. But when I do the same inside a shell script, it is not showing any output. Also, for other commands like pwd, ls etc., the script shows the desired output without any issue.
As the value of the variable contains spaces, add quotes around it.
E.g.:
var='history 1'
echo $var
I believe all you need is the following:
1- Ask the user for the number of the history line to print in the script.
2- Run the script, take the input from the user, and get the output as follows:
cat get_history.ksh
echo "Enter the line number of history which you want to get.."
read number
if [[ $# -eq 0 ]]
then
echo "Usage of script: get_history.ksh number_of_lines"
exit
else
history "$number"
fi
I added logic to check the arguments: if the number of arguments passed is 0, the script exits.
By default history is turned off in a script, therefore you need to turn it on:
set -o history
var=$(history 1)
echo "$var"
Note the preferred use of $( ) rather than the deprecated backticks.
However, this will only look at the history of the current process, that is this shell script, so it is fairly useless.
Is the output of a Bash command stored in any register? E.g. something similar to $? capturing the output instead of the exit status.
I could assign the output to a variable with:
output=$(command)
but that's more typing...
You can use $(!!)
to recompute (not re-use) the output of the last command.
The !! on its own executes the last command.
$ echo pierre
pierre
$ echo my name is $(!!)
echo my name is $(echo pierre)
my name is pierre
The answer is no. Bash doesn't store the output of commands in any parameter or any block of its memory. Also, you can only access Bash through its allowed interface operations; Bash's private data is not accessible unless you hack it.
Very Simple Solution
One that I've used for years.
Script (add to your .bashrc or .bash_profile)
# capture the output of a command so it can be retrieved with ret
cap () { tee /tmp/capture.out; }
# return the output of the most recent command that was captured by cap
ret () { cat /tmp/capture.out; }
Usage
$ find . -name 'filename' | cap
/path/to/filename
$ ret
/path/to/filename
I tend to add | cap to the end of all of my commands. This way when I find I want to do text processing on the output of a slow running command I can always retrieve it with ret.
If you are on a Mac and don't mind storing your output in the clipboard instead of writing it to a variable, you can use pbcopy and pbpaste as a workaround.
For example, instead of doing this to find a file and diff its contents with another file:
$ find app -name 'one.php'
/var/bar/app/one.php
$ diff /var/bar/app/one.php /var/bar/two.php
You could do this:
$ find app -name 'one.php' | pbcopy
$ diff $(pbpaste) /var/bar/two.php
The string /var/bar/app/one.php is in the clipboard when you run the first command.
By the way, pb in pbcopy and pbpaste stand for pasteboard, a synonym for clipboard.
One way of doing that is by using trap DEBUG:
f() { bash -c "$BASH_COMMAND" >& /tmp/out.log; }
trap 'f' DEBUG
Now most recently executed command's stdout and stderr will be available in /tmp/out.log
Only downside is that it will execute a command twice: once to redirect output and error to /tmp/out.log and once normally. Probably there is some way to prevent this behavior as well.
Inspired by anubhava's answer, which I think is not actually acceptable as it runs each command twice.
save_output() {
exec 1>&3
{ [ -f /tmp/current ] && mv /tmp/current /tmp/last; }
exec > >(tee /tmp/current)
}
exec 3>&1
trap save_output DEBUG
This way the output of last command is in /tmp/last and the command is not called twice.
Yeah, why type extra lines each time; agreed.
You can pipe the output returned by a command into another command's input, but redirecting already-printed output back to input (1>&0) doesn't work, at least not for multi-line output.
Also, you won't want to write the same function again and again in each file. So let's try something else.
A simple workaround would be to use the printf builtin to store values in a variable.
printf -v myoutput "`cmd`"
such as
printf -v var "`echo ok;
echo fine;
echo thankyou`"
echo "$var" # don't forget the backquotes and quotes in either command.
Another customizable general solution (which I use myself) for running the desired command only once and getting the multi-line printed output of the command in an array variable, line by line.
If you are not exporting the files anywhere and intend to use it locally only, you can have Terminal set up the function declaration. You have to add the function to your ~/.bashrc file or your ~/.profile file. In the second case, you need to enable Run command as a login shell from Edit > Preferences > yourProfile > Command.
Make a simple function, say:
get_prev() # preferably pass the commands in quotes. Single commands might still work without.
{
# option 1: create an executable with the command(s) and run it
#echo $* > /tmp/exe
#bash /tmp/exe > /tmp/out
# option 2: if your command is a single command (no pipes, no semicolons); it may still not run correctly in some exceptional cases.
#echo `"$*"` > /tmp/out
# option 3: (I actually used below)
eval "$*" > /tmp/out # or simply "$*" > /tmp/out
# return the command(s) outputs line by line
IFS=$(echo -en "\n\b")
arr=()
exec 3</tmp/out
while read -u 3 -r line
do
arr+=($line)
echo $line
done
exec 3<&-
}
So what we did in option 1 was print the whole command to a temporary file /tmp/exe, run it, and save the output to another file /tmp/out, and then read the contents of the /tmp/out file line by line into an array.
Options 2 and 3 are similar, except that the commands are executed as such, without being written to an executable first.
In main script:
#run your command:
cmd="echo hey ya; echo hey hi; printf `expr 10 + 10`'\n' ; printf $((10 + 20))'\n'"
get_prev $cmd
#or simply
get_prev "echo hey ya; echo hey hi; printf `expr 10 + 10`'\n' ; printf $((10 + 20))'\n'"
Now, bash variables set inside a function remain visible outside it (they are global by default), so the arr variable created in the get_prev function is accessible even outside the function in the main script:
#get previous command outputs in arr
for((i=0; i<${#arr[@]}; i++))
do
echo ${arr[i]}
done
#if you're sure that your output won't have escape sequences you care about, you may simply print the array
printf "${arr[*]}\n"
Edit:
I use the following code in my implementation:
get_prev()
{
usage()
{
echo "Usage: alphabet [ -h | --help ]
[ -s | --sep SEP ]
[ -v | --var VAR ] \"command\""
}
ARGS=$(getopt -a -n alphabet -o hs:v: --long help,sep:,var: -- "$@")
if [ $? -ne 0 ]; then usage; return 2; fi
eval set -- $ARGS
local var="arr"
IFS=$(echo -en '\n\b')
for arg in $*
do
case $arg in
-h|--help)
usage
echo " -h, --help : opens this help"
echo " -s, --sep : specify the separator, newline by default"
echo " -v, --var : variable name to put result into, arr by default"
echo " command : command to execute. Enclose in quotes if multiple lines or pipelines are used."
shift
return 0
;;
-s|--sep)
shift
IFS=$(echo -en $1)
shift
;;
-v|--var)
shift
var=$1
shift
;;
-|--)
shift
;;
*)
cmd=$arg
;;
esac
done
if [ ${#} -eq 0 ]; then usage; return 1; fi
ERROR=$( { eval "$*" > /tmp/out; } 2>&1 )
if [ $ERROR ]; then echo $ERROR; return 1; fi
local a=()
exec 3</tmp/out
while read -u 3 -r line
do
a+=($line)
done
exec 3<&-
eval $var=\(\${a[@]}\)
print_arr $var # comment this to suppress output
}
print()
{
eval echo \${$1[@]}
}
print_arr()
{
eval printf "%s\\\n" "\${$1[@]}"
}
I've been using this to print the space-separated outputs of multiple/pipelined/both commands as line-separated:
get_prev -s " " -v myarr "cmd1 | cmd2; cmd3 | cmd4"
For example:
get_prev -s ' ' -v myarr whereis python # or "whereis python"
# can also be achieved (in this case) by
whereis python | tr ' ' '\n'
Now the tr command is useful in other places as well, such as
echo $PATH | tr ':' '\n'
But for multiple/piped commands... you know now. :)
Like konsolebox said, you'd have to hack into bash itself. Here is a quite good example of how one might achieve this. The stderred repository (actually meant for coloring stderr) gives instructions on how to build it.
I gave it a try: Defining some new file descriptor inside .bashrc like
exec 41>/tmp/my_console_log
(the number is arbitrary) and modify stderred.c accordingly so that content also gets written to fd 41. It kind of worked, but the result contains loads of NUL bytes and weird formatting and is basically binary data, not readable. Maybe someone with a good understanding of C could try that out.
If so, everything needed to get the last printed line is tail -n 1 [logfile].
Not sure exactly what you're needing this for, so this answer may not be relevant. You can always save the output of a command: netstat >> output.txt, but I don't think that's what you're looking for.
There are of course programming options; you could simply have a program read the text file above after the command is run and associate it with a variable. In Ruby, my language of choice, you can create a variable from command output using 'backticks':
output = `ls` #(this is a comment) create variable out of command
if output.include? "Downloads" #if statement to see if command includes 'Downloads' folder
print "there appears to be a folder named downloads in this directory."
else
print "there is no directory called downloads in this file."
end
Stick this in a .rb file and run it with ruby file.rb; it will create a variable from the command output and allow you to manipulate it.
If you don't want to recompute the previous command you can create a macro that scans the current terminal buffer, tries to guess the -supposed- output of the last command, copies it to the clipboard and finally types it to the terminal.
It can be used for simple commands that return a single line of output (tested on Ubuntu 18.04 with gnome-terminal).
Install the following tools: xdotool, xclip, ruby
In gnome-terminal go to Preferences -> Shortcuts -> Select all and set it to Ctrl+shift+a.
Create the following ruby script:
cat >${HOME}/parse.rb <<EOF
#!/usr/bin/ruby
stdin = STDIN.read
d = stdin.split(/\n/)
e = d.reverse
f = e.drop_while { |item| item == "" }
g = f.drop_while { |item| item.start_with? "${USER}@" }
h = g[0]
print h
EOF
In the keyboard settings add the following keyboard shortcut:
bash -c '/bin/sleep 0.3 ; xdotool key ctrl+shift+a ; xdotool key ctrl+shift+c ; ( (xclip -out | ${HOME}/parse.rb ) > /tmp/clipboard ) ; (cat /tmp/clipboard | xclip -sel clip ) ; xdotool key ctrl+shift+v '
The above shortcut:
copies the current terminal buffer to the clipboard
extracts the output of the last command (only one line)
types it into the current terminal
I have an idea that I don't have time to try to implement immediately.
But what if you do something like the following:
$ MY_HISTORY_FILE=`get_temp_filename`
$ MY_HISTORY_FILE=$MY_HISTORY_FILE bash -i 2>&1 | tee $MY_HISTORY_FILE
$ some_command
$ cat $MY_HISTORY_FILE
$ # ^You'll want to filter that down in practice!
There might be issues with IO buffering. Also the file might get too huge. One would have to come up with a solution to these problems.
I think using script command might help. Something like,
script -c bash -qf fifo_pid
Then use bash features to set variables after parsing the captured output.
Demo for non-interactive commands only: http://asciinema.org/a/395092
For also supporting interactive commands, you'd have to hack the script binary from util-linux to ignore any screen-redrawing console codes, and run it from bashrc to save your login session's output to a file.
You can use -exec to run a command on the output of a command, reusing that output, as in the example with find below:
find . -name anything.out -exec rm {} \;
You are saying here: find a file called anything.out in the current folder; if found, remove it. If it is not found, everything after -exec is skipped.
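The same pattern works with other commands in place of rm; for example, a hedged variant that searches inside every matching file (the {} + form passes many files to each grep invocation):
find . -name '*.log' -exec grep -H "ERROR" {} +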
I want to execute a script and have it run a command every x minutes.
Also any general advice on any resources for learning bash scripting could be really cool. I use Linux for my personal development work, so bash scripts are not totally foreign to me, I just haven't written any of my own from scratch.
If you want to run a command periodically, there are 3 ways:
using the crontab command, e.g. * * * * * command (runs every minute)
using a loop like: while true; do ./my_script.sh; sleep 60; done (not precise)
using a systemd timer (see the sketch below)
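For the systemd option, a minimal sketch of a service/timer unit pair (unit names and the script path are illustrative, not from the question):
# /etc/systemd/system/my_script.service
[Unit]
Description=Run my script once

[Service]
Type=oneshot
ExecStart=/usr/local/bin/my_script.sh

# /etc/systemd/system/my_script.timer
[Unit]
Description=Run my script every 5 minutes

[Timer]
OnCalendar=*:0/5

[Install]
WantedBy=timers.target
Enable it with systemctl enable --now my_script.timer.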
See cron
Some pointers for best bash scripting practices :
http://mywiki.wooledge.org/BashFAQ
Guide: http://mywiki.wooledge.org/BashGuide
ref: http://www.gnu.org/software/bash/manual/bash.html
http://wiki.bash-hackers.org/
USE MORE QUOTES!: http://www.grymoire.com/Unix/Quote.html
Scripts and more: http://www.shelldorado.com/
In addition to @sputnick's answer, there is also watch. From the man page:
Execute a program periodically, showing output full screen
By default this is every 2 seconds. watch is useful for tailing logs, for example.
macOS users: here's a partial implementation of the GNU watch command (as of version 0.3.0) for interactive periodic invocations for primarily visual inspection:
It is syntax-compatible with the GNU version and fails with a specific error message if an unimplemented feature is used.
Notable limitations:
The output is not limited to one screenful.
Displaying output differences is not supported.
Using precise timing is not supported.
Colored output is always passed through (--color is implied).
Also implements a few non-standard features, such as waiting for success (-E) to complement waiting for error (-e) and showing the time of day of the last invocation as well as the total time elapsed so far.
Run watch -h for details.
Examples:
watch -n 1 ls # list current dir every second
watch -e 'ls *.lockfile' # list lock files and exit once none exist anymore.
Source code (paste into a script file named watch, make it executable, and place in a directory in your $PATH; note that syntax highlighting here is broken, but the code works):
#!/usr/bin/env bash
THIS_NAME=$(basename "$BASH_SOURCE")
VERSION='0.1'
# Helper function for exiting with error message due to runtime error.
# die [errMsg [exitCode]]
# Default error message states context and indicates that execution is aborted. Default exit code is 1.
# Prefix for context is always prepended.
# Note: An error message is *always* printed; if you just want to exit with a specific code silently, use `exit n` directly.
die() {
echo "$THIS_NAME: ERROR: ${1:-"ABORTING due to unexpected error."}" 1>&2
exit ${2:-1} # Note: If the argument is non-numeric, the shell prints a warning and uses exit code 255.
}
# Helper function for exiting with error message due to invalid parameters.
# dieSyntax [errMsg]
# Default error message is provided, as is prefix and suffix; exit code is always 2.
dieSyntax() {
echo "$THIS_NAME: PARAMETER ERROR: ${1:-"Invalid parameter(s) specified."} Use -h for help." 1>&2
exit 2
}
# Get the elapsed time since the specified epoch time in format HH:MM:SS.
# Granularity: whole seconds.
# Example:
# tsStart=$(date +'%s')
# ...
# getElapsedTime $tsStart
getElapsedTime() {
date -j -u -f '%s' $(( $(date +'%s') - $1 )) +'%H:%M:%S'
}
# Command-line help.
if [[ "$1" == '--help' || "$1" == '-h' ]]; then
cat <<EOF
SYNOPSIS
$THIS_NAME [-n seconds] [opts] cmd [arg ...]
DESCRIPTION
Executes a command periodically and displays its output for visual inspection.
NOTE: This is a PARTIAL implementation of the GNU \`watch\` command, for OS X.
Notably, the output is not limited to one screenful, and displaying
output differences and using precise timing are not supported.
Also, colored output is always passed through (--color is implied).
Unimplemented features are marked as [NOT IMPLEMENTED] below.
Conversely, features specific to this implementation are marked as [NONSTD].
Reference version is GNU watch 0.3.0.
CMD may be a simple command with separately specified
arguments, if any, or a single string containing one or more
;-separated commands (including arguments) - in the former case the command
is directly executed by bash, in the latter the string is passed to \`bash -c\`.
Note that GNU watch uses sh, not bash.
To use \`exec\` instead, specify -x (see below).
By default, CMD is re-invoked indefinitely; terminate with ^-C or
exit based on conditions:
-e, --errexit
exits once CMD indicates an error, i.e., returns a non-zero exit code.
-E, --okexit [NONSTD]
is the inverse of -e: runs until CMD returns exit code 0.
By default, all output is passed through; the following options modify this
behavior; note that suppressing output only relates to CMD's output, not the
messages output by this utility itself:
-q, --quiet [NONSTD]
suppresses stdout output from the command invoked;
-Q, --quiet-both [NONSTD]
suppresses both stdout and stderr output.
-l, --list [NONSTD]
list-style display; i.e., suppresses clearing of the screen
before every invocation of CMD.
-n secs, --interval secs
interval in seconds between the end of the previous invocation of CMD
and the next invocation - 2 seconds by default, fractional values permitted;
thus, the interval between successive invocations is the specified interval
*plus* the last CMD's invocation's execution duration.
-x, --exec
uses \`exec\` rather than bash to execute CMD; this requires
arguments to be passed to CMD to be specified as separate arguments
to this utility and prevents any shell expansions of these arguments
at invocation time.
-t, --no-title
suppresses the default title (header) that displays the interval,
and (NONSTD) a time stamp, the time elapsed so far, and the command executed.
-b, --beep
beeps on error (bell signal), i.e., when CMD reports a non-zero exit code.
-c, --color
IMPLIED AND ALWAYS ON: colored command output is invariably passed through.
-p, --precise [NOT IMPLEMENTED]
-d, --difference [NOT IMPLEMENTED]
EXAMPLES
# List files in home folder every second.
$THIS_NAME -n 1 ls ~
# Wait until all *.lockfile files disappear from the current dir, checking every 2 secs.
$THIS_NAME -e 'ls *.lockfile'
EOF
exit 0
fi
# Make sure that we're running on OSX.
[[ $(uname) == 'Darwin' ]] || die "This script is designed to run on OS X only."
# Preprocess parameters: expand compressed options to individual options; e.g., '-ab' to '-a -b'
params=() decompressed=0 argsReached=0
for p in "$@"; do
if [[ $argsReached -eq 0 && $p =~ ^-[a-zA-Z0-9]+$ ]]; then # compressed options?
decompressed=1
params+=(${p:0:2})
for (( i = 2; i < ${#p}; i++ )); do
params+=("-${p:$i:1}")
done
else
(( argsReached && ! decompressed )) && break
[[ $p == '--' || ${p:0:1} != '-' ]] && argsReached=1
params+=("$p")
fi
done
(( decompressed )) && set -- "${params[@]}"; unset params decompressed argsReached p # Replace "$@" with the expanded parameter set.
# Option-parameters loop.
interval=2 # default interval
runUntilFailure=0
runUntilSuccess=0
quietStdOut=0
quietStdOutAndStdErr=0
dontClear=0
noHeader=0
beepOnErr=0
useExec=0
while (( $# )); do
case "$1" in
--) # Explicit end-of-options marker.
shift # Move to next param and proceed with data-parameter analysis below.
break
;;
-p|--precise|-d|--differences|--differences=*)
dieSyntax "Sadly, option $1 is NOT IMPLEMENTED."
;;
-v|--version)
echo "$VERSION"; exit 0
;;
-x|--exec)
useExec=1
;;
-c|--color)
# a no-op: unlike the GNU version, we always - and invariably - pass color codes through.
;;
-b|--beep)
beepOnErr=1
;;
-l|--list)
dontClear=1
;;
-e|--errexit)
runUntilFailure=1
;;
-E|--okexit)
runUntilSuccess=1
;;
-n|--interval)
shift; interval=$1;
errMsg="Please specify a positive number of seconds as the interval."
interval=$(bc <<<"$1") || dieSyntax "$errMsg"
(( 1 == $(bc <<<"$interval > 0") )) || dieSyntax "$errMsg"
[[ $interval == *.* ]] || interval+='.0'
;;
-t|--no-title)
noHeader=1
;;
-q|--quiet)
quietStdOut=1
;;
-Q|--quiet-both)
quietStdOutAndStdErr=1
;;
-?|--?*) # An unrecognized switch.
dieSyntax "Unrecognized option: '$1'. To force interpretation as non-option, precede with '--'."
;;
*) # 1st data parameter reached; proceed with *argument* analysis below.
break
;;
esac
shift
done
# Make sure we have at least a command name
[[ -n "$1" ]] || dieSyntax "Too few parameters specified."
# Suppress output streams, if requested.
# Duplicate stdout and stderr first.
# This allows us to produce output to stdout (>&3) and stderr (>&4) even when suppressed.
exec 3<&1 4<&2
if (( quietStdOutAndStdErr )); then
exec &> /dev/null
elif (( quietStdOut )); then
exec 1> /dev/null
fi
# Set an exit trap to ensure that the duplicated file descriptors are closed.
trap 'exec 3>&- 4>&-' EXIT
# Start loop with periodic invocation.
# Note: We use `eval` so that compound commands - e.g. 'ls; bash --version' - can be passed.
tsStart=$(date +'%s')
while :; do
(( dontClear )) || clear
(( noHeader )) || echo "Every ${interval}s. [$(date +'%H:%M:%S') - elapsed: $(getElapsedTime $tsStart)]: $@"$'\n' >&3
if (( useExec )); then
(exec "$@") # run in *subshell*, otherwise *this* script will be replaced by the process invoked
else
if [[ $* == *' '* ]]; then
# A single argument with interior spaces was provided -> we must use `bash -c` to evaluate it properly.
bash -c "$*"
else
# A command name only or a command name + arguments were specified as separate arguments -> let bash run it directly.
"$#"
fi
fi
ec=$?
(( ec != 0 && beepOnErr )) && printf '\a'
(( ec == 0 && runUntilSuccess )) && { echo $'\n'"[$(date +'%H:%M:%S') - elapsed: $(getElapsedTime $tsStart)] Exiting as requested: exit code 0 reported." >&3; exit 0; }
(( ec != 0 && runUntilFailure )) && { echo $'\n'"[$(date +'%H:%M:%S') - elapsed: $(getElapsedTime $tsStart)] Exiting as requested: non-zero exit code ($ec) reported." >&3; exit 0; }
sleep $interval
done
If you need a command to run periodically so that you can monitor its output, use watch [options] command. For example, to monitor free memory, run:
watch -n 1 free -m
the -n 1 option sets the update interval to 1 second (the default is 2 seconds).
Check man watch or the online manual for details.
If you need to monitor changes in one or multiple files (typically, logs), tail is your command of choice, for example:
# monitor one log file
tail -f /path/to/logs/file.log
# monitor multiple log files concurrently
tail -f $(ls /path/to/logs/*.log)
the -f (for “follow”) option tells tail to output new content as the file grows.
Check man tail or the online manual for details.
I want to execute the script and have it run a command every {time interval}
cron (https://en.wikipedia.org/wiki/Cron) was designed for this purpose. If you run man cron or man crontab you will find instructions for how to use it.
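For example, a crontab entry (added via crontab -e) that runs a hypothetical script every 10 minutes would look like:
*/10 * * * * /path/to/my_script.sh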
any general advice on any resources for learning bash scripting could be really cool. I use Linux for my personal development work, so bash scripts are not totally foreign to me, I just haven't written any of my own from scratch.
If you are comfortable working with bash, I recommend reading through the bash manpage first (man bash) -- there are lots of cool tidbits.
Avoiding Time Drift
Here's what I do to remove the time it takes for the command to run and still stay on schedule:
#One-liner to execute a command every 600 seconds avoiding time drift
#Runs the command at each multiple of :10 minutes
while sleep $(echo 600-`date "+%s"`%600 | bc); do ls; done
This will drift off by no more than one second; then it will snap back in sync with the clock. If you need less than 1 second of drift and your sleep command supports floating point numbers, try including nanoseconds in the calculation, like this:
while sleep $(echo 6-`date "+%s.%N"`%6 | bc); do date '+%FT%T.%N'; done
Here is my solution to reduce drift from the loop payload's run time.
tpid=0
while true; do
wait ${tpid}
sleep 3 & tpid=$!
{ body...; }
done
This approximates a timer-object approach, with the sleep command executed in parallel with all the other commands, including even the true in the condition check. I think it's the most precise variant without drift that gets by without the date command.
There could be a true timer-object bash function, implementing a timer event by just an 'echo' call, then piped into a loop with the read command, like this:
timer | { while read ev; do...; done; }
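A minimal sketch of such a timer function (the name, interval, and payload are illustrative; it loops until interrupted with Ctrl-C):
# emit one 'tick' event per interval; the reader runs the payload for each event
timer() { while sleep "$1"; do echo tick; done; }
timer 3 | while read -r ev; do date '+%T'; done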
I was faced with this challenge recently. I wanted a way to execute a piece of the script every hour within a bash script file that runs on crontab every 5 minutes, without having to use sleep or rely fully on crontab. If the TIME_INTERVAL is not met, the script will fail at the first condition. If the TIME_INTERVAL is met but the TIME_CONDITION is not, the script will fail at the second condition. The below script works hand-in-hand with crontab - adjust accordingly.
NOTE: touch -m "$0" changes the modification timestamp of the bash script file. You will have to create a separate file for storing the last script run time if you don't want to change the modification timestamp of the bash script file.
CURRENT_TIME=$(date "+%s")
LAST_SCRIPT_RUN_TIME=$(date -r "$0" "+%s")
TIME_INTERVAL='3600'
START_TIME=$(date -d '07:00:00' "+%s")
END_TIME=$(date -d '16:59:59' "+%s")
TIME_DIFFERENCE=$((${CURRENT_TIME} - ${LAST_SCRIPT_RUN_TIME}))
TIME_CONDITION=$((${START_TIME} <= ${CURRENT_TIME} && ${CURRENT_TIME} <= ${END_TIME}))
if [[ "$TIME_DIFFERENCE" -le "$TIME_INTERVAL" ]];
then
>&2 echo "[ERROR] FAILED - script failed to run because of time conditions"
elif [[ "$TIME_CONDITION" = '0' ]];
then
>&2 echo "[ERROR] FAILED - script failed to run because of time conditions"
elif [[ "$TIME_CONDITION" = '1' ]];
then
>&2 touch -m "$0"
>&2 echo "[INFO] RESULT - script ran successfully"
fi
Based on the answer from @david-h and its comment from @kvantour, I wrote a new version which uses bash only, i.e. without bc.
export intervalsec=60
while sleep $(LANG=C now="$(date -Ins)"; printf "%0.0f.%09.0f" $((${intervalsec}-1-$(date "+%s" -d "$now")%${intervalsec})) $((1000000000-$(printf "%.0f" $(date "+%9N" -d "$now"))))); do \
date '+%FT%T.%N'; \
done
bash arithmetic operations ($(())) can only operate on integers. That's why the seconds and the nanoseconds have to be calculated separately.
printf is used to combine the two calculations as well as to remove and re-add leading zeros. Leading zeros from "+%N" must be removed because otherwise the number gets interpreted as octal instead of decimal, and they must be added again before merging into the floating point.
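A quick demonstration of the octal pitfall (safe to paste into an interactive shell):
echo $(( 010 ))      # prints 8: a leading zero makes arithmetic treat the number as octal
echo $(( 10#010 ))   # prints 10: the 10# prefix forces base 10
echo $(( 09 ))       # error: 9 is not a valid octal digit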
Because the concept needs separate date commands, the date is read once and cached, then reused, to prevent flips across a second boundary.
set -e (or a script starting with #!/bin/sh -e) is extremely useful to automatically bomb out if there is a problem. It saves me having to error check every single command that might fail.
How do I get the equivalent of this inside a function?
For example, I have the following script that exits immediately on error with an error exit status:
#!/bin/sh -e
echo "the following command could fail:"
false
echo "this is after the command that fails"
The output is as expected:
the following command could fail:
Now I'd like to wrap this into a function:
#!/bin/sh -e
my_function() {
echo "the following command could fail:"
false
echo "this is after the command that fails"
}
if ! my_function; then
echo "dealing with the problem"
fi
echo "run this all the time regardless of the success of my_function"
Expected output:
the following command could fail:
dealing with the problem
run this all the time regardless of the success of my_function
Actual output:
the following command could fail:
this is after the command that fails
run this all the time regardless of the success of my_function
(ie. the function is ignoring set -e)
This presumably is expected behaviour. My question is: how do I get the effect and usefulness of set -e inside a shell function? I'd like to be able to set something up such that I don't have to individually error check every call, but the script will stop on encountering an error. It should unwind the stack as far as is needed until I do check the result, or exit the script itself if I haven't checked it. This is what set -e does already, except it doesn't nest.
I've found the same question asked outside Stack Overflow but no suitable answer.
I eventually went with this, which apparently works. I tried the export method at first, but then found that I needed to export every global (constant) variable the script uses.
Disable set -e, then run the function call inside a subshell that has set -e enabled. Save the exit status of the subshell in a variable, re-enable set -e, then test the var.
f() { echo "a"; false; echo "Should NOT get HERE"; }
# Don't pipe the subshell into anything or we won't be able to see its exit status
set +e ; ( set -e; f ) ; err_status=$?
set -e
## cleaner syntax which POSIX sh doesn't support. Use bash/zsh/ksh/other fancy shells
if ((err_status)) ; then
echo "f returned false: $err_status"
fi
## POSIX-sh features only (e.g. dash, /bin/sh)
if test "$err_status" -ne 0 ; then
echo "f returned false: $err_status"
fi
echo "always print this"
You can't run f as part of a pipeline, or as part of a && or || command list (except as the last command in the pipe or list), or as the condition in an if or while, or in other contexts that ignore set -e. This code also can't be in any of those contexts, so if you use this in a function, callers have to use the same subshell / save-exit-status trickery. This use of set -e for semantics similar to throwing/catching exceptions is not really suitable for general use, given the limitations and hard-to-read syntax.
trap err_handler_function ERR has the same limitations as set -e, in that it won't fire for errors in contexts where set -e won't exit on failed commands.
You might think the following would work, but it doesn't:
if ! ( set -e; f );then ##### doesn't work, f runs ignoring -e
echo "f returned false: $?"
fi
set -e doesn't take effect inside the subshell because it remembers that it's inside the condition of an if. I thought being a subshell would change that, but only being in a separate file and running a whole separate shell on it would work.
From the documentation of set -e:
When this option is on, if a simple command fails for any of the reasons listed in Consequences of Shell Errors or returns an exit status value > 0, and is not part of the compound list following a while, until, or if keyword, and is not a part of an AND or OR list, and is not a pipeline preceded by the ! reserved word, then the shell shall immediately exit.
In your case, false is a part of a pipeline preceded by ! and a part of if. So the solution is to rewrite your code so that it isn't.
In other words, there's nothing special about functions here. Try:
set -e
! { false; echo hi; }
You may directly use a subshell as your function definition and set it to exit immediately with set -e. This limits the scope of set -e to the function subshell only and avoids switching between set +e and set -e later.
In addition, you can use a variable assignment in the if test and then echo the result in an additional else statement.
# use subshell for function definition
f() (
set -exo pipefail
echo a
false
echo Should NOT get HERE
exit 0
)
# next line also works for non-subshell function given by agsamek above
#if ret="$( set -e && f )" ; then
if ret="$( f )" ; then
true
else
echo "$ret"
fi
# prints
# ++ echo a
# ++ false
# a
This is a bit of a kludge, but you can do:
export -f f
if sh -ec f; then
...
This will work if your shell supports export -f (bash does).
Note that this will not terminate the script. The echo after the false in f will not execute, nor will the body of the if, but statements after the if will be executed.
If you are using a shell that does not support export -f, you can get the semantics you want by running sh in the function:
f() { sh -ec '
echo This will execute
false
echo This will not
'
}
Note/Edit: As a commenter pointed out, this answer uses bash, and not sh like the OP used in his question. I missed that detail when I originally posted an answer. I will leave this answer up anyway since it might be of interest to some passersby.
Y'aaaaaaaaaaaaaaaaaaallll ready for this?
Here's a way to do it by leveraging the DEBUG trap, which runs before each command and sort of makes errors behave like the exception/try/catch idioms from other languages. Take a look. I've made your example one more 'call' deep.
#!/bin/bash
# Get rid of that disgusting set -e. We don't need it anymore!
# functrace allows RETURN and DEBUG traps to be inherited by each
# subshell and function. Plus, it doesn't suffer from that horrible
# erasure problem that -e and -E suffer from when the command
# is used in a conditional expression.
set -o functrace
# A trap to bubble up the error unless our magic command is encountered
# ('catch=$?' in this case) at which point it stops. Also don't try to
# bubble the error if were not in a function.
trap '{
code=$?
if [[ $code != 0 ]] && [[ $BASH_COMMAND != '\''catch=$?'\'' ]]; then
# If were in a function, return, else exit.
[[ $FUNCNAME ]] && return $code || exit $code
fi
}' DEBUG
my_function() {
my_function2
}
my_function2() {
echo "the following command could fail:"
false
echo "this is after the command that fails"
}
# the || isn't necessary, but the 'catch=$?' is.
my_function || catch=$?
echo "Dealing with the problem with errcode=$catch (⌐■_■)"
echo "run this all the time regardless of the success of my_function"
and the output:
the following command could fail:
Dealing with the problem with errcode=1 (⌐■_■)
run this all the time regardless of the success of my_function
I haven't tested this in the wild, but off the top of my head, there are a bunch of pros:
It's actually not that slow. I've run the script in a tight loop with and without the functrace option, and the times are very close to each other over 10,000 iterations.
You could expand on this DEBUG trap to print a stack trace without doing that whole looping over $FUNCNAME and $BASH_LINENO nonsense. You kinda get it for free (besides actually doing an echo line).
Don't have to worry about that shopt -s inherit_errexit gotcha.
Join all commands in your function with the && operator. It's not too much trouble and will give the result you want.
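Applied to the question's function, a minimal sketch:
my_function() {
echo "the following command could fail:" &&
false &&
echo "this is after the command that fails"
}
When false fails, the && chain short-circuits, the remaining echo is skipped, and the function returns the failing status, so if ! my_function; then ... fi behaves as desired.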
This is by design and POSIX specification. We can read in man bash:
If a compound command or shell function executes in a context where -e is being ignored, none of the commands executed within the compound command or function body will be affected by the -e setting, even if -e is set and a command returns a failure status. If a compound command or shell function sets -e while executing in a context where -e is ignored, that setting will not have any effect until the compound command or the command containing the function call completes.
therefore you should avoid relying on set -e within functions.
Given the following example (from the Austin Group):
set -e
start() {
some_server
echo some_server started successfully
}
start || echo >&2 some_server failed
the set -e is ignored within the function, because the function is a command in an AND-OR list other than the last.
The above behaviour is specified and required by POSIX (see: Desired Action):
The -e setting shall be ignored when executing the compound list following the while, until, if, or elif reserved word, a pipeline beginning with the ! reserved word, or any command of an AND-OR list other than the last.
I know this isn't what you asked, but you may or may not be aware that the behavior you seek is built into "make". Any part of a "make" process that fails aborts the run. It's a wholly different way of "programming", though, than shell scripting.
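For instance, a minimal illustrative Makefile where a failing command aborts the run (note that recipe lines must be indented with tabs):
all:
	@echo "step one"
	false
	@echo "never reached: make stops because false returned non-zero"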
You will need to call your function in a subshell (inside parentheses ()) to achieve this.
I think you want to write your script like this:
#!/bin/sh -e
my_function() {
echo "the following command could fail:"
false
echo "this is after the command that fails"
}
(my_function)
if [ $? -ne 0 ] ; then
echo "dealing with the problem"
fi
echo "run this all the time regardless of the success of my_function"
Then the output is (as desired):
the following command could fail:
dealing with the problem
run this all the time regardless of the success of my_function
If a subshell isn't an option (say you need to do something crazy like set a variable) then you can just check every single command that might fail and deal with it by appending || return $?. This causes the function to return the error code on failure.
It's ugly, but it works.
#!/bin/sh
set -e
my_function() {
echo "the following command could fail:"
false || return $?
echo "this is after the command that fails"
}
if ! my_function; then
echo "dealing with the problem"
fi
echo "run this all the time regardless of the success of my_function"
gives
the following command could fail:
dealing with the problem
run this all the time regardless of the success of my_function