Send commands to a GNU screen - bash

I have a GNU screen named demo, I want to send commands to it. How do I do this?
screen -S demo -X /home/aa/scripts/outputs.sh
yields "No screen session found."
and doing screen -ls shows that it isn't running.

If the Screen session isn't running, you won't be able to send things to it. Start it first.
Once you've got a session, you need to distinguish between Screen commands and keyboard input. screen -X expects a Screen command. The stuff command sends input, and if you want to run that program from a shell prompt, you'll have to pass a newline as well.
screen -S demo -X stuff '/home/aa/scripts/outputs.sh
'
Note that this may be the wrong approach. Are you sure you want to type into whatever is active in that session? To direct the input at a particular window, use
screen -S demo -p 1 -X stuff '/home/aa/scripts/outputs.sh
'
where 1 is the window number (you can use its title instead).
To start a new window in that session, use the screen command instead. (That's the screen Screen command, not the screen shell command.)
screen -S demo -p 1 -X screen '/home/aa/scripts/outputs.sh'

I put this together to capture the output from the commands. It also handles stdin if you want to pipe some input.
function xscreen {
# Usage: xscreen <screen-name> command...
local SCREEN_NAME=$1
shift
# Create screen if it doesn't exist
if ! screen -list | grep -q "$SCREEN_NAME" ; then
screen -dmS $SCREEN_NAME
fi
# Create I/O pipes
local DIR=$( mktemp -d )
local STDIN=$DIR/stdin
local STDOUT=$DIR/stdout
local STDERR=$DIR/stderr
mkfifo $STDIN $STDOUT $STDERR
trap 'rm -f $STDIN $STDOUT $STDERR; rmdir $DIR' RETURN
# Print output and kill stdin when both pipes are closed
{ cat $STDERR >&2 & cat $STDOUT & wait ; fuser -s -PIPE -k -w $STDIN ; } &
# Start the command (clear line with ^A^K, enter command with redirects, press Enter ^M)
screen -S $SCREEN_NAME -p0 -X stuff "$(echo -ne '\001\013') { $* ; } <$STDIN 1> >(tee $STDOUT) 2> >(tee $STDERR >&2)$(echo -ne '\015')"
# Forward stdin
cat > $STDIN
# Just in case stdin is closed
wait
}
Taking it a step further, it can be useful to call this function over ssh:
ssh user@host -n xscreen somename 'echo hello world'
Maybe combine it with something like ssh user@host "$(typeset -f xscreen); xscreen ..." so you don't have to have the function already defined on the remote host.
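The typeset -f trick can be tried locally, using a plain bash -c child in place of ssh (the greet function here is purely illustrative):

```shell
#!/usr/bin/env bash
greet() { echo "hello from $1"; }
# Inline the function's definition into a brand-new shell, just as the
# ssh command line above would do on the remote host:
bash -c "$(typeset -f greet); greet subshell"   # prints: hello from subshell
```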
A longer version in a bash script that handles the return status and syntax errors:
#!/bin/bash
function usage {
echo "$(basename $0) [[user@]server:[port]] <screen-name> command..." >&2
exit 1
}
[[ $# -ge 2 ]] || usage
SERVER=
SERVERPORT="-p 22"
SERVERPAT='^(([a-z]+@)?([A-Za-z0-9.]+)):([0-9]+)?$'
if [[ "$1" =~ $SERVERPAT ]]; then
SERVER="${BASH_REMATCH[1]}"
[[ -n "${BASH_REMATCH[4]}" ]] && SERVERPORT="-p ${BASH_REMATCH[4]}"
shift
fi
function xscreen {
# Usage: xscreen <screen-name> command...
local SCREEN_NAME=$1
shift
if ! screen -list | grep -q "$SCREEN_NAME" ; then
echo "Screen $SCREEN_NAME not found." >&2
return 124
# Create screen if it doesn't exist
#screen -dmS $SCREEN_NAME
fi
# Create I/O pipes
local DIR=$( mktemp -d )
mkfifo $DIR/stdin $DIR/stdout $DIR/stderr
echo 123 > $DIR/status
trap 'rm -f $DIR/{stdin,stdout,stderr,status}; rmdir $DIR' RETURN
# Forward ^C to screen
trap "screen -S $SCREEN_NAME -p0 -X stuff $'\003'" INT
# Print output and kill stdin when both pipes are closed
{
cat $DIR/stderr >&2 &
cat $DIR/stdout &
wait
[[ -e $DIR/stdin ]] && fuser -s -PIPE -k -w $DIR/stdin
} &
READER_PID=$!
# Close all the pipes if the command fails to start (e.g. syntax error)
{
# Kill the sleep when this subshell is killed. Ugh.. bash.
trap 'kill $(jobs -p)' EXIT
# Try to write nothing to stdin. This will block until something reads.
echo -n > $DIR/stdin &
TEST_PID=$!
sleep 2.0
# If the write failed and we're not killed, it probably didn't start
if [[ -e $DIR/stdin ]] && kill $TEST_PID 2>/dev/null; then
echo 'xscreen timeout' >&2
wait $TEST_PID 2>/dev/null
# Send ^C to clear any half-written command (e.g. no closing braces)
screen -S $SCREEN_NAME -p0 -X stuff $'\003'
# Write nothing to output, triggers SIGPIPE
echo -n 1> $DIR/stdout 2> $DIR/stderr
# Stop stdin by creating a fake reader and sending SIGPIPE
cat $DIR/stdin >/dev/null &
fuser -s -PIPE -k -w $DIR/stdin
fi
} &
CHECKER_PID=$!
# Start the command (clear line with ^A^K, enter command with redirects, press Enter ^M)
screen -S $SCREEN_NAME -p0 -X stuff "$(echo -ne '\001\013') { $* ; echo \$? > $DIR/status ; } <$DIR/stdin 1> >(tee $DIR/stdout) 2> >(tee $DIR/stderr >&2)$(echo -ne '\015')"
# Forward stdin
cat > $DIR/stdin
kill $CHECKER_PID 2>/dev/null && wait $CHECKER_PID 2>/dev/null
# Just in case stdin is closed early, wait for output to finish
wait $READER_PID 2>/dev/null
trap - INT
return $(cat $DIR/status)
}
if [[ -n $SERVER ]]; then
ssh $SERVER $SERVERPORT "$(typeset -f xscreen); xscreen $@"
RET=$?
if [[ $RET == 124 ]]; then
echo "To start screen: ssh $SERVER $SERVERPORT \"screen -dmS $1\"" >&2
fi
exit $RET
else
xscreen "$1" "${@:2}"
fi

How to change the terminal title to currently running process?

I know how to change the Terminal Window title. What I am trying to find out is how to make bash (not zsh) write out the currently running process, so if I do
$ ls -lF
I would get something like this for the title
/home/me/currentFolder (ls -lF)
Getting the last executed command would be too late since the command has executed already, so it won't set the title with the command that was executed.
In addition to @markp-fuso's answer, here's how I did it to make it work with Starship.
function set_win_title() {
local cmd=" ($@)"
if [[ "$cmd" == " (starship_precmd)" || "$cmd" == " ()" ]]
then
cmd=""
fi
if [[ $PWD == $HOME ]]
then
if [[ $SSH_TTY ]]
then
echo -ne "\033]0; 🏛ī¸ # $HOSTNAME ~$cmd\a" < /dev/null
else
echo -ne "\033]0; 🏠 ~$cmd\a" < /dev/null
fi
else
BASEPWD=$(basename "$PWD")
if [[ $SSH_TTY ]]
then
echo -ne "\033]0; 🌩ī¸ $BASEPWD # $HOSTNAME $cmd\a" < /dev/null
else
echo -ne "\033]0; 📁 $BASEPWD $cmd\a" < /dev/null
fi
fi
}
starship_precmd_user_func="set_win_title"
eval "$(starship init bash)"
trap "$(trap -p DEBUG | awk -F"'" '{print $2}');set_win_title \${BASH_COMMAND}" DEBUG
Note this differs from the Custom pre-prompt and pre-execution Commands in Bash instructions in that the trap is set after starship init, which I have noted in a bug.
UPDATE: my previous answer (below) displays the previous command in the title bar.
Ignoring everything from my previous answer and starting from scratch:
trap 'echo -ne "\033]0;${PWD}: (${BASH_COMMAND})\007"' DEBUG
Running the following at the command prompt:
$ sleep 10
The window title bar changes to /my/current/directory: (sleep 10) while the sleep 10 is running.
Running either of these:
$ sleep 1; sleep 2; sleep 3
$ { sleep 1; sleep 2; sleep 3; }
The title bar changes as each sleep command is invoked.
Running this:
$ ( sleep 1; sleep 2; sleep 3 )
The title bar does not change (the trap does not fire inside a subshell).
One last one:
$ echo $(sleep 3; echo abc)
The title bar displays (echo $(sleep 3; echo abc)).
previous answer
Adding to this answer:
store_command() {
declare -g last_command current_command
last_command=$current_command
current_command=$BASH_COMMAND
return 0
}
trap store_command DEBUG
PROMPT_COMMAND='echo -ne "\033]0;${PWD}: (${last_command})\007"'
Additional reading materials re: trap / DEBUG:
bash guide on traps
SO Q&A
You can combine setting the window title with setting the prompt.
Here's an example using bashs PROMPT_COMMAND:
tputps () {
echo -n '\['
tput "$@"
echo -n '\]'
}
prompt_builder () {
# Window title - operating system command (OSC) ESC + ]
echo -ne '\033]0;'"${USER}@${HOSTNAME}:$(dirs)"'\a' >&2
# username, green
tputps setaf 2
echo -n '\u'
# directory, orange
tputps setaf 208
echo -n ' \w'
tputps sgr0 0
}
prompt_cmd () {
PS1="$(prompt_builder) \$ "
}
export PROMPT_COMMAND=prompt_cmd
On Linux, add the following function to your bashrc file.
Steps:
Open the bashrc file
vi ~/.bashrc
Write this function in the bashrc file
function set-title() {
if [[ -z "$ORIG" ]]; then
ORIG=$PS1
fi
TITLE="\[\e]2;$*\a\]"
PS1=${ORIG}${TITLE}
}
Save the file, then reload it
source ~/.bashrc
Call the function
set-title "tab1"
The easiest way to change the title of the terminal I could think of is to use echo in shell script
echo -ne "\033]0;Your title\007"
And to open a new tab with a new title name:
gnome-terminal --tab -t "Your title"

Write and read from a fifo from two different script

I have two bash scripts.
One writes to a fifo. The second reads from the fifo, but AFTER the first one has finished writing.
But something does not work, and I do not understand where the problem is. Here is the code.
The first script is (the writer):
#!/bin/bash
fifo_name="myfifo";
# If it does not exist, create the fifo;
[ -p $fifo_name ] || mkfifo $fifo_name;
exec 3<> $fifo_name;
echo "foo" > $fifo_name;
echo "bar" > $fifo_name;
The second script is (the reader):
#!/bin/bash
fifo_name="myfifo";
while true
do
if read line <$fifo_name; then
# if [[ "$line" == 'ar' ]]; then
# break
#fi
echo $line
fi
done
Can anyone help me please?
Thank you
Replace the second script with:
#!/bin/bash
fifo_name="myfifo"
while true
do
if read line; then
echo $line
fi
done <"$fifo_name"
This opens the fifo only once and reads every line from it.
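A minimal self-contained run of that pattern (the fifo name and lines are illustrative):

```shell
#!/usr/bin/env bash
dir=$(mktemp -d)
fifo="$dir/myfifo"
mkfifo "$fifo"
# The writer blocks on open until the reader opens its end of the fifo.
{ echo foo; echo bar; } > "$fifo" &
# Open the fifo once; the loop sees every line, then EOF when the writer exits.
while read -r line; do
  echo "got: $line"
done < "$fifo"
wait
rm -rf "$dir"
```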
The problem with your setup is that the fifo is created by the wrong script: if you want writes to happen only while the reader is running, the reader should create (and clean up) the fifo. To correct the problem you will need to do something like this:
reader: fifo_read.sh
#!/bin/bash
fifo_name="/tmp/myfifo" # fifo name
trap "rm -f $fifo_name" EXIT # set trap to rm fifo_name at exit
[ -p "$fifo_name" ] || mkfifo "$fifo_name" # if fifo not found, create
exec 3< $fifo_name # redirect fifo_name to fd 3
# (not required, but makes read clearer)
while :; do
if read -r -u 3 line; then # read line from fifo_name
if [ "$line" = 'quit' ]; then # if line is quit, quit
printf "%s: 'quit' command received\n" "$fifo_name"
break
fi
printf "%s: %s\n" "$fifo_name" "$line" # print line read
fi
done
exec 3<&- # reset fd 3 redirection
exit 0
writer: fifo_write.sh
#!/bin/bash
fifo_name="/tmp/myfifo"
# If the fifo does not exist, exit :)
[ -p "$fifo_name" ] || {
printf "\n Error fifo '%s' not found.\n\n" "$fifo_name"
exit 1
}
[ -n "$1" ] &&
printf "%s\n" "$1" > "$fifo_name" ||
printf "pid: '%s' writing to fifo\n" "$$" > "$fifo_name"
exit 0
operation: (start reader in 1st terminal)
$ ./fifo_read.sh # you can background with & at end
(launch writer in second terminal)
$ ./fifo_write.sh "message from writer" # second terminal
$ ./fifo_write.sh
$ ./fifo_write.sh quit
output in 1st terminal:
$ ./fifo_read.sh
/tmp/myfifo: message from writer
/tmp/myfifo: pid: '28698' writing to fifo
/tmp/myfifo: 'quit' command received
The following script should do the job:
#!/bin/bash
FIFO="/tmp/fifo"
if [ ! -e "$FIFO" ]; then
mkfifo "$FIFO"
fi
for script in "$@"; do
echo $script > $FIFO &
done
while read script; do
/bin/bash -c $script
done < $FIFO
Given two scripts a.sh and b.sh that print "a" and "b" to stdout, respectively, one will get the following result (given that the script above is called test.sh):
./test.sh /tmp/a.sh /tmp/b.sh
a
b

shell script ssh command exit status

In a loop in shell script, I am connecting to various servers and running some commands. For example
#!/bin/bash
FILENAME=$1
cat $FILENAME | while read HOST
do
0</dev/null ssh $HOST 'echo password| sudo -S
echo $HOST
echo $?
pwd
echo $?'
done
Here I am running "echo $HOST" and "pwd" commands and I am getting exit status via "echo $?".
My question is that I want to store the exit status of each remotely run command in a variable and then, based on whether the command succeeded, write a log entry to a local file.
Any help and code is appreciated.
ssh will exit with the exit code of the remote command. For example:
$ ssh localhost exit 10
$ echo $?
10
So after your ssh command exits, you can simply check $?. You need to make sure that you don't mask your return value. For example, your ssh command finishes up with:
echo $?
This will always return 0. What you probably want is something more like this:
while read HOST; do
echo $HOST
if ssh $HOST 'somecommand' < /dev/null; then
echo SUCCESS
else
echo FAIL
fi
done
You could also write it like this:
while read HOST; do
echo $HOST
ssh $HOST 'somecommand' < /dev/null
if [ $? -eq 0 ]; then
echo SUCCESS
else
echo FAIL
fi
done
You can assign the exit status to a variable as simple as doing:
variable=$?
Right after the command you are trying to inspect. Do not echo $? before or the new value of $? will be the exit code of echo (usually 0).
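For example, with false standing in for a failing ssh command:

```shell
#!/usr/bin/env bash
false                 # stand-in for: ssh $HOST 'somecommand'
status=$?             # capture immediately - any echo in between would reset $?
if [ "$status" -eq 0 ]; then
  echo "SUCCESS"
else
  echo "FAIL with status $status"   # prints: FAIL with status 1
fi
```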
An interesting approach is to retrieve the whole output of each ssh command into a local variable using backticks, separating the parts with a special character (for simplicity, say ":"), something like:
export MYVAR=`ssh $HOST 'echo -n ${HOSTNAME}\:;pwd'`
after this you can use awk to split MYVAR into your results and continue bash testing.
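The awk split might look like this, with a hard-coded MYVAR standing in for the real ssh output:

```shell
#!/usr/bin/env bash
MYVAR="myhost:/home/user"          # what the backticks would have captured
remote_host=$(awk -F: '{print $1}' <<< "$MYVAR")
remote_pwd=$(awk -F: '{print $2}' <<< "$MYVAR")
echo "$remote_host"                # prints: myhost
echo "$remote_pwd"                 # prints: /home/user
```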
Perhaps prepare the log file on the other side and pipe it to stdout, like this:
ssh -n user@example.com 'x() { local ret; "$@" >&2; ret=$?; echo "[`date +%Y%m%d-%H%M%S` $ret] $*"; return $ret; };
x true
x false
x sh -c "exit 77";' > local-logfile
Basically just prefix everything on the remote you want to invoke with this x wrapper. It works for conditionals, too, as it does not alter the exit code of a command.
You can easily loop this command.
This example writes into the log something like:
[20141218-174611 0] true
[20141218-174611 1] false
[20141218-174611 77] sh -c exit 77
Of course you can make it more parsable or adapt it to your wishes for how the logfile should look. Note that the uncaught normal stdout of the remote programs is written to stderr (see the redirection in x()).
If you need a recipe to catch and prepare output of a command for the logfile, here is a copy of such a catcher from https://gist.github.com/hilbix/c53d525f113df77e323d - but yes, this is a bit bigger boilerplate to "Run something in current context of shell, postprocessing stdout+stderr without disturbing return code":
# Redirect lines of stdin/stdout to some other function
# outfn and errfn get following arguments
# "cmd args.." "one line full of output"
: catch outfn errfn cmd args..
catch()
{
local ret o1 o2 tmp
tmp=$(mktemp "catch_XXXXXXX.tmp")
mkfifo "$tmp.out"
mkfifo "$tmp.err"
pipestdinto "$1" "${*:3}" <"$tmp.out" &
o1=$!
pipestdinto "$2" "${*:3}" <"$tmp.err" &
o2=$!
"${@:3}" >"$tmp.out" 2>"$tmp.err"
ret=$?
rm -f "$tmp.out" "$tmp.err" "$tmp"
wait $o1
wait $o2
return $ret
}
: pipestdinto cmd args..
pipestdinto()
{
local x
while read -r x; do "$@" "$x" </dev/null; done
}
STAMP()
{
date +%Y%m%d-%H%M%S
}
# example output function
NOTE()
{
echo "NOTE `STAMP`: $*"
}
ERR()
{
echo "ERR `STAMP`: $*" >&2
}
catch_example()
{
# Example use
catch NOTE ERR find /proc -ls
}
See the second to last line for an example usage.

Execute a shell function with timeout

Why would this work
timeout 10s echo "foo bar" # foo bar
but this wouldn't
function echoFooBar {
echo "foo bar"
}
echoFooBar # foo bar
timeout 10s echoFooBar # timeout: failed to run command `echoFooBar': No such file or directory
and how can I make it work?
As Douglas Leeder said you need a separate process for timeout to signal to. Workaround by exporting function to subshells and running subshell manually.
export -f echoFooBar
timeout 10s bash -c echoFooBar
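Put together as a runnable sketch (assuming GNU coreutils timeout is installed):

```shell
#!/usr/bin/env bash
echoFooBar() { echo "foo bar"; }
export -f echoFooBar              # child bash processes inherit the function
timeout 10s bash -c echoFooBar    # prints: foo bar
```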
timeout is a command - so it is executing in a subprocess of your bash shell. Therefore it has no access to your functions defined in your current shell.
The command timeout is given is executed as a subprocess of timeout - a grand-child process of your shell.
You might be confused because echo is both a shell built-in and a separate command.
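You can see the builtin/external distinction with type (the second result assumes coreutils timeout is installed):

```shell
#!/usr/bin/env bash
type -t echo      # prints: builtin
type -t timeout   # prints "file" - an external command, blind to shell functions
```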
What you can do is put your function in its own script file, chmod it to be executable, then execute it with timeout.
Alternatively fork, executing your function in a sub-shell - and in the original process, monitor the progress, killing the subprocess if it takes too long.
There's an inline alternative also launching a subprocess of bash shell:
timeout 10s bash <<EOT
function echoFooBar {
echo foo
}
echoFooBar
sleep 20
EOT
You can create a function which would allow you to do the same as timeout but also for other functions:
function run_cmd {
cmd="$1"; timeout="$2";
grep -qP '^\d+$' <<< $timeout || timeout=10
(
eval "$cmd" &
child=$!
trap -- "" SIGTERM
(
sleep $timeout
kill $child 2> /dev/null
) &
wait $child
)
}
And could run as below:
run_cmd "echoFooBar" 10
Note: The solution came from one of my questions:
Elegant solution to implement timeout for bash commands and functions
If you just want to add a timeout as an additional option for an existing script, you can make it test for the timeout option and then call itself recursively without that option.
example.sh:
#!/bin/bash
if [ "$1" == "-t" ]; then
timeout 1m $0 $2
else
#the original script
echo $1
sleep 2m
echo YAWN...
fi
running this script without timeout:
$./example.sh -other_option # -other_option
# YAWN...
running it with a one minute timeout:
$./example.sh -t -other_option # -other_option
function foo(){
for i in {1..100};
do
echo $i;
sleep 1;
done;
}
cat <( foo ) # Will work
timeout 3 cat <( foo ) # Will work
timeout 3 cat <( foo ) | sort # Won't work, as sort will fail
cat <( timeout 3 cat <( foo ) ) | sort -r # Will work
This function uses only builtins
Maybe consider evaling "$*" instead of running "$@" directly, depending on your needs
It starts a job with the command string specified after the first arg that is the timeout value and monitors the job pid
It checks every second; bash supports timeouts down to 0.01s, so that can be tweaked
Also if your script needs stdin, read should rely on a dedicated fd (exec {tofd}<> <(:))
Also you might want to tweak the kill signal (the one inside the loop) which is default to -15, you might want -9
## forking is evil
timeout() {
to=$1; shift
"$@" & local wp=$! start=0
while kill -0 $wp 2>/dev/null; do
read -t 1
start=$((start+1))
if [ $start -ge $to ]; then
kill $wp && break
fi
done
}
Putting my comment to Tiago Lopo's answer into more readable form:
I think it's more readable to impose a timeout on the most recent subshell: this way we don't need to eval a string, and the whole script can be highlighted as shell by your favourite editor. I simply moved the commands that the eval'd subshell used to run into a shell function (tested with zsh, but it should work with bash):
timeout_child () {
trap -- "" SIGTERM
child=$!
timeout=$1
(
sleep $timeout
kill $child
) &
wait $child
}
Example usage:
( while true; do echo -n .; sleep 0.1; done) & timeout_child 2
And this way it also works with a shell function (if it runs in the background):
print_dots () {
while true
do
sleep 0.1
echo -n .
done
}
> print_dots & timeout_child 2
[1] 21725
[3] 21727
...................[1] 21725 terminated print_dots
[3] + 21727 done ( sleep $timeout; kill $child; )
I have a slight modification of @Tiago Lopo's answer that can handle commands with multiple arguments. I've also tested TauPan's solution, but it does not work if you use it multiple times in a script, while Tiago's does.
function timeout_cmd {
local arr
local cmd
local timeout
arr=( "$@" )
# timeout: first arg
# cmd: the other args
timeout="${arr[0]}"
cmd=( "${arr[@]:1}" )
(
eval "${cmd[@]}" &
child=$!
echo "child: $child"
trap -- "" SIGTERM
(
sleep "$timeout"
kill "$child" 2> /dev/null
) &
wait "$child"
)
}
Here's a fully functional script that you can use to test the function above:
$ ./test_timeout.sh -h
Usage:
test_timeout.sh [-n] [-r REPEAT] [-s SLEEP_TIME] [-t TIMEOUT]
test_timeout.sh -h
Test timeout_cmd function.
Options:
-n Dry run, do not actually sleep.
-r REPEAT Repeat everything multiple times [default: 1].
-s SLEEP_TIME Sleep for SLEEP_TIME seconds [default: 5].
-t TIMEOUT Timeout after TIMEOUT seconds [default: no timeout].
For example, you can launch it like this:
$ ./test_timeout.sh -r 2 -s 5 -t 3
Try no: 1
- Set timeout to: 3
child: 2540
-> retval: 143
-> The command timed out
Try no: 2
- Set timeout to: 3
child: 2593
-> retval: 143
-> The command timed out
Done!
#!/usr/bin/env bash
#shellcheck disable=SC2128
SOURCED=false && [ "$0" = "$BASH_SOURCE" ] || SOURCED=true
if ! $SOURCED; then
set -euo pipefail
IFS=$'\n\t'
fi
#################### helpers
function check_posint() {
local re='^[0-9]+$'
local mynum="$1"
local option="$2"
if ! [[ "$mynum" =~ $re ]] ; then
(echo -n "Error in option '$option': " >&2)
(echo "must be a positive integer, got $mynum." >&2)
exit 1
fi
if ! [ "$mynum" -gt 0 ] ; then
(echo "Error in option '$option': must be positive, got $mynum." >&2)
exit 1
fi
}
#################### end: helpers
#################### usage
function short_usage() {
(>&2 echo \
"Usage:
test_timeout.sh [-n] [-r REPEAT] [-s SLEEP_TIME] [-t TIMEOUT]
test_timeout.sh -h"
)
}
function usage() {
(>&2 short_usage )
(>&2 echo \
"
Test timeout_cmd function.
Options:
-n Dry run, do not actually sleep.
-r REPEAT Repeat everything multiple times [default: 1].
-s SLEEP_TIME Sleep for SLEEP_TIME seconds [default: 5].
-t TIMEOUT Timeout after TIMEOUT seconds [default: no timeout].
")
}
#################### end: usage
help_flag=false
dryrun_flag=false
SLEEP_TIME=5
TIMEOUT=-1
REPEAT=1
while getopts ":hnr:s:t:" opt; do
case $opt in
h)
help_flag=true
;;
n)
dryrun_flag=true
;;
r)
check_posint "$OPTARG" '-r'
REPEAT="$OPTARG"
;;
s)
check_posint "$OPTARG" '-s'
SLEEP_TIME="$OPTARG"
;;
t)
check_posint "$OPTARG" '-t'
TIMEOUT="$OPTARG"
;;
\?)
(>&2 echo "Error. Invalid option: -$OPTARG.")
(>&2 echo "Try -h to get help")
short_usage
exit 1
;;
:)
(>&2 echo "Error. Option -$OPTARG requires an argument.")
(>&2 echo "Try -h to get help")
short_usage
exit 1
;;
esac
done
if $help_flag; then
usage
exit 0
fi
#################### utils
if $dryrun_flag; then
function wrap_run() {
( echo -en "[dry run]\\t" )
( echo "$@" )
}
else
function wrap_run() { "$@"; }
fi
# Execute a shell function with timeout
# https://stackoverflow.com/a/24416732/2377454
function timeout_cmd {
local arr
local cmd
local timeout
arr=( "$@" )
# timeout: first arg
# cmd: the other args
timeout="${arr[0]}"
cmd=( "${arr[@]:1}" )
(
eval "${cmd[@]}" &
child=$!
echo "child: $child"
trap -- "" SIGTERM
(
sleep "$timeout"
kill "$child" 2> /dev/null
) &
wait "$child"
)
}
####################
function sleep_func() {
local secs
local waitsec
waitsec=1
secs=$(($1))
while [ "$secs" -gt 0 ]; do
echo -ne "$secs\033[0K\r"
sleep "$waitsec"
secs=$((secs-waitsec))
done
}
command=("wrap_run" \
"sleep_func" "${SLEEP_TIME}"
)
for i in $(seq 1 "$REPEAT"); do
echo "Try no: $i"
if [ "$TIMEOUT" -gt 0 ]; then
echo " - Set timeout to: $TIMEOUT"
set +e
timeout_cmd "$TIMEOUT" "${command[@]}"
retval="$?"
set -e
echo " -> retval: $retval"
# check if (retval % 128) == SIGTERM (== 15)
if [[ "$((retval % 128))" -eq 15 ]]; then
echo " -> The command timed out"
fi
else
echo " - No timeout"
"${command[@]}"
retval="$?"
fi
done
echo "Done!"
exit 0
This small modification to TauPan's answer adds some useful protection. If the child process being waited for has already exited before sleep $timeout completes, the kill command attempts to kill a process that no longer exists. This is probably harmless, but there is no absolute guarantee that the same PID has not been re-assigned. To obviate this, a quick check tests that the child PID exists and that its parent is the shell it was forked from. Also, trying to kill a non-existent process generates errors which, if not suppressed, can easily fill up logs.
I also used a more aggressive kill -9. This is the only way to kill a process that is blocking not on the shell command but instead from the file system eg. read < named_pipe.
A consequence of this is that the kill -9 $child command send its kill signal asynchronously to the process and hence generates a message into the calling shell. This can be suppressed by re-directing the wait $child > /dev/null 2>&1. With obvious consequences for debugging.
#!/bin/bash
function child_timeout () {
child=$!
timeout=$1
(
#trap -- "" SIGINT
sleep $timeout
if [ -n "$(ps -o pid= -o comm= --ppid $$ | grep -o "$child")" ]; then
kill -9 $child
fi
) &
wait $child > /dev/null 2>&1
}
( tail -f /dev/null ) & child_timeout 10
This one liner will exit your Bash session after 10s
$ TMOUT=10 && echo "foo bar"

bash: redirect (and append) stdout and stderr to file and terminal and get proper exit status

To redirect (and append) stdout and stderr to a file, while also displaying it on the terminal, I do this:
command 2>&1 | tee -a file.txt
However, is there another way to do this such that I get an accurate value for the exit status?
That is, if I test $?, I want to see the exit status of command, not the exit status of tee.
I know that I can use ${PIPESTATUS[0]} here instead of $?, but I am looking for another solution that would not involve having to check PIPESTATUS.
Perhaps you could put the exit value from PIPESTATUS into $?
command 2>&1 | tee -a file.txt ; ( exit ${PIPESTATUS} )
Another possibility, with some bash flavours, is to turn on the pipefail option:
pipefail
If set, the return value of a pipeline is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands in the pipeline exit successfully. This option is disabled by default.
set -o pipefail
...
command 2>&1 | tee -a file.txt || echo "Command (or tee?) failed with status $?"
This having been said, the only way of achieving PIPESTATUS functionality portably (e.g. so it'd also work with POSIX sh) is a bit convoluted, i.e. it requires a temp file to propagate a pipe exit status back to the parent shell process:
{ command 2>&1 ; echo $? >"/tmp/~pipestatus.$$" ; } | tee -a file.txt
if [ "`cat \"/tmp/~pipestatus.$$\"`" -ne 0 ] ; then
...
fi
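A self-contained run of the temp-file approach (the exit code 3 and the sh -c command are arbitrary stand-ins):

```shell
#!/bin/sh
tmp="/tmp/pipestatus.$$"
# The left side of the pipe records its own status before the pipeline ends.
{ sh -c 'echo some output; exit 3'; echo $? > "$tmp"; } | cat > /dev/null
status=$(cat "$tmp")
rm -f "$tmp"
echo "command exited with $status"   # prints: command exited with 3
```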
or, encapsulating for reuse:
log2file() {
LOGFILE="$1" ; shift
{ "$#" 2>&1 ; echo $? >"/tmp/~pipestatus.$$" ; } | tee -a "$LOGFILE"
MYPIPESTATUS="`cat \"/tmp/~pipestatus.$$\"`"
rm -f "/tmp/~pipestatus.$$"
return $MYPIPESTATUS
}
log2file file.txt command param1 "param 2" || echo "Command failed with status $?"
or, more generically perhaps:
save_pipe_status() {
STATUS_ID="$1" ; shift
"$#"
echo $? >"/tmp/~pipestatus.$$.$STATUS_ID"
}
get_pipe_status() {
STATUS_ID="$1" ; shift
return `cat "/tmp/~pipestatus.$$.$STATUS_ID"`
}
save_pipe_status my_command_id ./command param1 "param 2" | tee -a file.txt
get_pipe_status my_command_id || echo "Command failed with status $?"
...
rm -f "/tmp/~pipestatus.$$."* # do this in a trap handler, too, to be really clean
There is an arcane POSIX way of doing this:
exec 4>&1; R=$({ { command1; echo $? >&3; } | { command2 >&4; }; } 3>&1); exec 4>&-
It will set the variable R to the return value of command1, and pipe output of command1 to command2, whose output is redirected to the output of parent shell.
Use process substitution:
command > >( tee -a "$logfile" ) 2>&1
tee runs in a subshell so $? holds the exit status of command.
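For instance (the exit status 7 is chosen arbitrarily, and tee's copy of the output is discarded here to keep the demo quiet):

```shell
#!/usr/bin/env bash
logfile=$(mktemp)
# tee lives inside the process substitution, so $? reflects the command itself:
bash -c 'echo some output; exit 7' > >(tee -a "$logfile" > /dev/null) 2>&1
echo "exit status: $?"   # prints: exit status: 7
rm -f "$logfile"
```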
