Correctly execute a bash command that has interactive input - bash

I am trying to execute a command from bash and retrieve its stdout, stderr and exit code.
So far so good; there are plenty of ways.
The problem begins when the program has interactive input.
More precisely, I execute "git commit" (without -m) and "GNU nano" is launched so I can enter a commit message.
If I simply use:
git commit
or
exec git commit
I can see the prompt, but I can't get stdout/stderr.
If I use
output=`git commit 2>&1`
or
output=$(git commit 2>&1)
I can retrieve stdout/stderr, but I can't see the prompt.
I can still do ctrl+X to abort the git commit.
My first attempt was via a function call, and my script ended up hanging on a blank screen where ctrl+x / ctrl+c didn't work.
function Execute()
{
    if [[ $# -eq 0 ]]; then
        echo "Error : function 'Execute' called without argument."
        exit 3
    fi
    local msg=$("$@" 2>&1)
    local error=$?
    if [[ $error -ne 0 ]]; then
        echo "Error : '$(printf '%q ' "$@")' returned '$error' error code."
        echo "$1 message :"
        echo "$msg"
        echo
        exit 1
    fi
}
Execute git commit
I'm beginning to run out of ideas/knowledge. Is what I want to do impossible? Or is there a way that I don't know?

Try this which processes every line output to stdout or stderr and redirects based on content:
#!/bin/env bash
foo() {
    printf 'prompt: whos on first?\n' >&2
    printf 'error: uh-oh\n' >&2
}
var=$(foo 2>&1 | awk '{print | "cat>&"(/prompt/ ? 2 : 1)}' )
echo "var=$var"
$ ./tst.sh
prompt: whos on first?
var=error: uh-oh
or this which just processes stderr:
#!/bin/env bash
foo() {
    printf 'prompt: whos on first?\n' >&2
    printf 'error: uh-oh\n' >&2
}
var=$(foo 2> >(awk '{print | "cat>&"(/prompt/ ? 2 : 1)}') )
echo "var=$var"
$ ./tst.sh
prompt: whos on first?
var=error: uh-oh
The awk command splits its input to stderr or stdout based on content, and only stdout is saved in the variable var. I don't know if your prompt comes on stderr or stdout, or where you really want it to go, so massage to suit with respect to what should go to stdout vs stderr and what you want captured in the variable vs printed to the screen. You just need something in the prompt that can be recognized as such, so you can separate the prompt from the rest of the stdout and stderr, print the prompt to stderr, and redirect everything else to stdout.
Alternatively here's a version that prints the first line (regardless of content) to stderr for display and everything else to stdout for capture:
$ cat tst.sh
#!/bin/env bash
foo() {
    printf 'prompt: whos on first?\n' >&2
    printf 'error: uh-oh\n' >&2
}
var=$(foo 2>&1 | awk '{print | "cat>&"(NR>1 ? 1 : 2)}' )
echo "var=$var"
$ ./tst.sh
prompt: whos on first?
var=error: uh-oh
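If the interactive part is specifically the editor that git spawns, another sketch worth trying (assuming nano and a terminal available at /dev/tty; GIT_EDITOR is git's standard editor override and is run via the shell, so redirections inside it take effect) is to hand the editor the terminal directly, so only git's own messages are captured:
# Sketch: nano talks to the real terminal; git's output is captured as usual.
output=$(GIT_EDITOR='nano </dev/tty >/dev/tty 2>&1' git commit 2>&1)
error=$?
echo "exit code : $error"
echo "$output"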

Related

Bash script customize stdout and stderr in script

I want to execute some scripts in install.sh, which looks like:
#!/bin/bash
./script1.sh
./script2.sh
./script3.sh
...
It executes a bunch of scripts, so I want to distinguish stdout and stderr by color (green for stdout, red for stderr), and also where the outputs come from.
The output format I want is:
script1.sh: Hello # in green color (stdout)
script2.sh: Cannot read a file. # in red color (stderr)
My goal is to print outputs in scripts in format of:
{script_name}: {green_if_stdout, red_if_stderr}
I don't want to edit every single command in all scripts.
Is there any way to override (or customize) all stdout and stderr outputs in the script?
#!/bin/bash
override_stdout_and_stderr
echo "Start" # It also prints as green color
./script1.sh
./script2.sh
./script3.sh
...
restore_if_needed
You asked how to color all of stdout and stderr and prefix all lines with the script name.
The answer below uses redirection to send stdout to one process and stderr to another process. Credit to how to redirect stderr.
awk is used to prefix the incoming output with the needed color, red or green, then print each line of input, clearing the color setting once the line is printed.
#!/bin/bash
function colorize()
{
    "$@" 2> >( awk '{ printf "'$1':""\033[0;31m" $0 "\033[0m\n"}' ) \
         1> >( awk '{ printf "'$1':""\033[0;32m" $0 "\033[0m\n"}' )
}
colorize ./script1.sh
#!/bin/sh
# script1.sh
echo "Hello GREEN"
>&2 echo "Hello RED"
Expect output similar to that of this command:
printf 'script1.sh:\033[0;32mHello GREEN\033[0m\nscript1.sh:\033[0;31mHello RED\033[0m\n'
Using read instead of awk:
#!/bin/bash
function greenchar()
{
    while read ln ; do
        printf "$1:\033[0;32m${ln}\033[0;0m\n" >&1
    done
}
function redchar()
{
    while read ln ; do
        printf "$1:\033[0;31m${ln}\033[0;0m\n" >&2
    done
}
function colorize()
{
    "$@" 2> >( redchar "$1" ) 1> >( greenchar "$1" )
}
colorize ./script2.sh
#!/bin/bash
# script2.sh
echo "Hello GREEN"
>&2 echo "Hello RED"
>&1 echo "YES OR NO?"
select yn in "Yes" "No"; do
    case $yn in
        Yes) echo "YOU PICKED YES" ; break;;
        No) echo "YOU PICKED NO" ; break;;
    esac
done
Example output; the output is similar to that of these commands:
RED="\033[0;31m"
GRN="\033[0;32m"
NC="\033[0;0m"
printf "./script1.sh:${GRN}Hello GREEN${NC}\n"
printf "./script1.sh:${GRN}YES OR NO?${NC}\n"
printf "./script1.sh:${RED}Hello RED${NC}\n"
printf "./script1.sh:${RED}1) Yes${NC}\n"
printf "./script1.sh:${RED}2) No${NC}\n"
printf "${NC}1${NC}\n"
printf "./script1.sh:${GRN}YOU PICKED YES${NC}\n"

/bin/sh: capture stderr into a variable

I am assigning the output of a command to variable A:
A=$(some_command)
How can I "capture" stderr into a variable B ?
I have tried some variations with 2>&1 and read but that does not work:
A=$(some_command) 2>&1 | read B
echo $B
Here's a code snippet that might help you
# capture stderr into a variable and print it
echo "capture stderr into a variable and print it"
var=$(lt -l /tmp 2>&1)   # 'lt' is a deliberate typo so the command fails and writes to stderr
echo $var
capture stderr into a variable and print it
zsh: command not found: lt
# capture stdout into a variable and print it
echo "capture stdout into a variable and print it"
var=$(ls -l /tmp)
echo $var
# capture both stderr and stdout into a variable and print it
echo "capture both stderr and stdout into a variable and print it"
var=$(ls -l /tmp 2>&1)
echo $var
# more classic way of executing a command which I always follow is as follows. This way I am always in control of what is going on and can act accordingly
if somecommand ; then
echo "command succeeded"
else
echo "command failed"
fi
If you have to capture the output and stderr in different variables, then the following might help as well
## create a file using file descriptor for stdout
exec 3> stdout.txt
# create a file using file descriptor for stderr
exec 4> stderr.txt
A=$($1 /tmp 2>&4 >&3);
## close file descriptor
exec 3>&-
exec 4>&-
## open file descriptor for reading
exec 3< stdout.txt
exec 4< stderr.txt
## read from file using file descriptor
read line <&3
read line2 <&4
## close file descriptor
exec 3<&-
exec 4<&-
## print line read from file
echo "stdout: $line"
echo "stderr: $line2"
## delete file
rm stdout.txt
rm stderr.txt
You can try running it with the following
╰─ bash test.sh pwd
stdout: /tmp/somedir
stderr:
╰─ bash test.sh pwdd
stdout:
stderr: test.sh: line 8: pwdd: command not found
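If you'd rather avoid the temporary files, a common fd-swap idiom captures only stderr while stdout passes through untouched (a sketch; some_command stands in for your real command):
# fd 3 temporarily holds the real stdout so it can bypass the $(...) capture.
{ B=$(some_command 2>&1 1>&3); } 3>&1
echo "stderr: $B"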
As noted in a comment, your use case may be better served by other scripting languages. An example: in Perl you can achieve what you want quite simply:
#!/usr/bin/env perl
use v5.26; # or earlier versions
use Capture::Tiny 'capture'; # library is not in core
my $cmd = 'date';
my @arg = ('-R', '-u');
my ($stdout, $stderr, $exit) = capture {
    system( $cmd, @arg );
};
say "STDOUT: $stdout";
say "STDERR: $stderr";
say "EXIT: $exit";
I'm sure similar solutions are available in python, ruby, and all the rest.
I gave it another try using process substitution and came up with this:
# command with no error
date +%b > >(read A; if [ "$A" = 'Sep' ]; then echo 'September'; fi ) 2> >(read B; if [ ! -z "$B" ]; then echo "$B"; fi >&2)
September
# command with error
date b > >(read A; if [ "$A" = 'Sep' ]; then echo 'September'; fi ) 2> >(read B; if [ ! -z "$B" ]; then echo "$B"; fi >&2)
date: invalid date ‘b’
# command with both at the same time should work too
I had no success "exporting" the variables from the subprocesses back to the original script. It might be possible though. I just couldn't figure it out.
But this at least gives you access to stdout and stderr as variables. This means you can do whatever processing you want on them. It depends on your use case whether this is of any help to you. Good luck :-)

Bash Script - If Pyscript Returns Non-zero, Write Errors to Output File AND Quit Bash Script

Have the below in a bash script -
python3 run_tests.py 2>&1 | tee tests.log
If I run python3 run_tests.py alone, I can do the below to exit the script:
python3 run_tests.py
if [ $? -ne 0 ]; then
    echo 'ERROR: pytest failed, exiting ...'
    exit $?
fi
However, the above working code doesn't write the output of pytest to a file.
When I run python3 run_tests.py 2>&1 | tee tests.log, the output of pytest goes to the file, but the pipeline always returns status 0, since the tee job ran successfully.
I need a way to somehow capture the returned code of the python script tests prior to writing to the file. Either that, or something that accomplishes the same end result of quitting the job if a test fails while also getting the failures in the output file.
Any help would be appreciated! :)
The exit status of a pipeline is the status of the last command, so $? is the status of tee, not pytest.
In bash you can use the $PIPESTATUS array to get the status of each command in the pipeline.
python3 run_tests.py 2>&1 | tee tests.log
status=${PIPESTATUS[0]} # status of run_tests.py
if [ $status -ne 0 ]; then
echo 'ERROR: pytest failed, exiting ...'
exit $status
fi
Note that you need to save the status in another variable, because $? and $PIPESTATUS are updated after each command.
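Alternatively, if all you need is for the pipeline to report failure when any stage fails, bash's pipefail option makes $? the status of the last command to exit non-zero (a sketch using the question's command):
set -o pipefail
python3 run_tests.py 2>&1 | tee tests.log
status=$?   # non-zero if run_tests.py failed, even though tee succeeded
if [ $status -ne 0 ]; then
    echo 'ERROR: pytest failed, exiting ...'
    exit $status
fi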
I don't have python on my system so using awk to produce output and a specific exit status instead:
$ { awk 'BEGIN{print "foo"; exit 1}'; ret="$?"; } > >(tee tests.log)
foo
$ echo "$ret"
1
$ cat tests.log
foo
or if you want a script:
$ cat tst.sh
#!/usr/bin/env bash
#########
exec 3>&1 # save fd 1 (stdout) in fd 3 to restore later
exec > >(tee tests.log) # redirect stdout of this script to go to tee
awk 'BEGIN{print "foo"; exit 1}' # run whatever command you want
ret="$?" # save that command's exit status
exec 1>&3 3>&- # restore stdout and close fd 3
#########
echo "here's the result:"
echo "$ret"
cat tests.log
$ ./tst.sh
foo
here's the result:
1
foo
Obviously just test the value of ret to exit or not, e.g.:
if (( ret != 0 )); then
    echo 'the sky is falling' >&2
    exit "$ret"
fi
You could also wrap the command call in a function if you like:
$ cat tst.sh
#!/usr/bin/env bash
doit() {
    local ret=0
    exec 3>&1                        # save fd 1 (stdout) in fd 3 to restore later
    exec > >(tee tests.log)          # redirect stdout of this script to go to tee
    awk 'BEGIN{print "foo"; exit 1}' # run whatever command you want
    ret="$?"                         # save that command's exit status
    exec 1>&3 3>&-                   # restore stdout and close fd 3
    return "$ret"
}
doit
echo "\$?=$?"
cat tests.log
$ ./tst.sh
foo
$?=1
foo
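Mapped back to the question's pytest command, the same pattern might look like this (a sketch; python3 run_tests.py and tests.log are the names from the question):
exec 3>&1                   # save stdout
exec > >(tee tests.log)     # everything on stdout now also lands in tests.log
python3 run_tests.py 2>&1   # fold stderr into the captured stream
ret="$?"                    # save pytest's exit status
exec 1>&3 3>&-              # restore stdout and close fd 3
if (( ret != 0 )); then
    echo 'ERROR: pytest failed, exiting ...'
    exit "$ret"
fi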

setting variables inside a compound command in bash fails (command group)

The group command { list; } should execute list in the current shell environment.
This allows things like variable assignments to be visible outside of the command group (http://mywiki.wooledge.org/BashGuide/CompoundCommands).
I use it to send output to a logfile as well as terminal:
{ { echo "Result is 13"; echo "ERROR: division by 0" 1>&2; } | tee -a stdout.txt; } 3>&1 1>&2 2>&3 | tee -a stderr.txt;
On the topic "pipe stdout and stderr to two different processes in shell script?" read here: pipe stdout and stderr to two different processes in shell script?.
{ echo "Result is 13"; echo "ERROR: division by 0" 1>&2; }
simulates a command with output to stdout and stderr.
I want to evaluate the exit status also. /bin/true and /bin/false simulate a command that may succeed or fail. So I try to save $? to a variable r:
~$ r=init; { /bin/true; r=$?; } | cat; echo $r;
init
~$ r=init; { /bin/true; r=$?; } 2>/dev/null; echo $r;
0
As you can see the above pipeline construct does not set variable r while the second command line leads to the expected result. Is it a bug or is it my fault? Thanks.
I tested Ubuntu 12.04.2 LTS (~$) and Debian GNU/Linux 7.0 (wheezy) (~#) with the following versions of bash:
~$ echo $BASH_VERSION
4.2.25(1)-release
~# echo $BASH_VERSION
4.2.37(1)-release
I think you missed that /bin/true returns 0 and /bin/false returns 1:
$ r='res:'; { /bin/true; r+=$?; } 2>/dev/null; echo $r;
res:0
And
$ r='res:'; { /bin/false; r+=$?; } 2>/dev/null; echo $r;
res:1
I tried a test program:
x=0
{ x=$$ ; echo "$$ $BASHPID $x" ; }
echo $x
x=0
{ x=$$ ; echo "$$ $BASHPID $x" ; } | cat
echo $x
And indeed - it looks like the pipe forces the prior code into another process, but without reinitialising bash - so $BASHPID changes but $$ does not.
See Difference between bash pid and $$ for more details on the difference between $$ and $BASHPID.
Also outputting $BASH_SUBSHELL shows that the second bit is running in a subshell (level 1), and the first is at level 0.
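A quick way to see that level change without a helper script:
echo "outer: $BASH_SUBSHELL"                # prints 0 in the current shell
{ echo "in pipe: $BASH_SUBSHELL"; } | cat   # prints 1: the group runs in a subshell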
bash executes all elements of a pipeline as subprocesses; if they're shell builtins or command groups, that means they execute in subshells and so any variables they set don't propagate to the parent shell. This can be tricky to work around in general, but if all you need is the exit status of the command group, you can use the $PIPESTATUS array to get it:
$ { false; } | cat; echo "${PIPESTATUS[@]}"
1 0
$ { false; } | cat; r=${PIPESTATUS[0]}; echo $r
1
$ { true; } | cat; r=${PIPESTATUS[0]}; echo $r
0
Note that this only works for getting the exit status of the last command in the group:
$ { false; true; false; uselessvar=$?; } | cat; r=${PIPESTATUS[0]}; echo $r
0
... because uselessvar=$? succeeded.
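If the status you care about is from an earlier command in the group, one sketch is to save it and exit the group with it explicitly; the exit only leaves the pipeline's subshell, not your shell, because the group runs as a pipeline element:
$ { false; st=$?; true; exit "$st"; } | cat; r=${PIPESTATUS[0]}; echo $r
1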
Using a variable to hold the exit status is not an appropriate method with pipelines:
~$ r=init; { /bin/true; r=$?; } | cat; echo $r;
init
The pipeline creates a subshell. In the pipe, the exit status is assigned to a (local) copy of the variable r, whose value is then dropped.
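One blunt but reliable workaround (a sketch, not my final solution below) is to smuggle the status out of the subshell through a temporary file:
tmp=$(mktemp)
{ /bin/false; echo $? > "$tmp"; } | cat
r=$(< "$tmp"); rm -f "$tmp"
echo $r   # prints 1; the shell waits for the whole pipeline before reading the file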
So I want to add my solution to the originating challenge: send output to a logfile as well as the terminal while keeping track of the exit status. I decided to use another file descriptor. Formatting in a single line may be a bit confusing ...
{ { r=$( { { { echo "Result is 13"; echo "ERROR: division by 0" 1>&2; /bin/false; echo $? 1>&4; } | tee stdout.txt; } 3>&1 1>&2 2>&3 | tee stderr.txt; } 4>&1 1>&2 2>&3 ); } 3>&1; } 1>stdout.term 2>stderr.term; echo r=$r
... so I apply some indentation:
{
{
: # no operation
r=$( {
{
{
echo "Result is 13"
echo "ERROR: division by 0" 1>&2
/bin/false; echo $? 1>&4
} | tee stdout.txt;
} 3>&1 1>&2 2>&3 | tee stderr.txt;
} 4>&1 1>&2 2>&3 );
} 3>&1;
} 1>stdout.term 2>stderr.term; echo r=$r
Do not mind the line "no operation". It turned out that the forum's formatting checker relies on it and would otherwise insist: "Your post appears to contain code that is not properly formatted as code. Please indent all code by 4 spaces using the code toolbar button or the CTRL+K keyboard shortcut. For more editing help, click the [?] toolbar icon."
If executed it yields the following output:
r=1
For demonstration purposes I redirected terminal output to the files stdout.term and stderr.term.
root@voipterm1:~# cat stdout.txt
Result is 13
root@voipterm1:~# cat stderr.txt
ERROR: division by 0
root@voipterm1:~# cat stdout.term
Result is 13
root@voipterm1:~# cat stderr.term
ERROR: division by 0
Let me explain:
The following group command simulates some command that yields an error code of 1 along with some error message. File descriptor 4 is declared in step 3:
{
echo "Result is 13"
echo "ERROR: division by 0" 1>&2
/bin/false; echo $? 1>&4
} | tee stdout.txt;
By the following code stdout and stderr streams are swapped using file descriptor 3 as a dummy. This way error messages are sent to the file stderr.txt:
{
...
} 3>&1 1>&2 2>&3 | tee stderr.txt;
Exit status has been sent to file descriptor 4 in step 1. It is now redirected to file descriptor 1 which defines the value of variable r. Error messages are redirected to file descriptor 2 while normal output ("Result is 13") is attached to file descriptor 3:
r=$( {
...
} 4>&1 1>&2 2>&3 );
Finally file descriptor 3 is redirected to file descriptor 1. This controls the output "Result is 13":
{
...
} 3>&1;
The outermost curly brace just shows how the command behaves.
Gordon Davisson suggested exploiting the array variable PIPESTATUS, which contains a list of exit status values from the processes in the most-recently-executed foreground pipeline. This may be a promising approach, but it leads to the question of how to hand its value over to the enclosing pipeline.
~# r=init; { { echo "Result is 13"; echo "ERROR: division by 0" 1>&2; } | tee -a stdout.txt; r=${PIPESTATUS[0]}; } 3>&1 1>&2 2>&3 | tee -a stderr.txt; echo "Can you tell me the exit status? $r"
ERROR: division by 0
Result is 13
Can you tell me the exit status? init

How to add timestamp to STDERR redirection

In bash/ksh can we add timestamp to STDERR redirection?
E.g. myscript.sh 2> error.log
I want to get a timestamp written on the log too.
If you're talking about an up-to-date timestamp on each line, that's something you'd probably want to do in your actual script (but see below for a nifty solution if you have no power to change it). If you just want a marker date on its own line before your script starts writing, I'd use:
( date 1>&2 ; myscript.sh ) 2>error.log
What you need is a trick to pipe stderr through another program that can add timestamps to each line. You could do this with a C program but there's a far more devious way using just bash.
First, create a script which will add the timestamp to each line (called predate.sh):
#!/bin/bash
while read line ; do
    echo "$(date): ${line}"
done
For example:
( echo a ; sleep 5 ; echo b ; sleep 2 ; echo c ) | ./predate.sh
produces:
Fri Oct 2 12:31:39 WAST 2009: a
Fri Oct 2 12:31:44 WAST 2009: b
Fri Oct 2 12:31:46 WAST 2009: c
Then you need another trick that can swap stdout and stderr, this little monstrosity here:
( myscript.sh 3>&1 1>&2- 2>&3- )
Then it's simple to combine the two tricks by timestamping stdout and redirecting it to your file:
( myscript.sh 3>&1 1>&2- 2>&3- ) | ./predate.sh >error.log
The following transcript shows this in action:
pax> cat predate.sh
#!/bin/bash
while read line ; do
    echo "$(date): ${line}"
done
pax> cat tstdate.sh
#!/bin/bash
echo a to stderr then wait five seconds 1>&2
sleep 5
echo b to stderr then wait two seconds 1>&2
sleep 2
echo c to stderr 1>&2
echo d to stdout
pax> ( ( ./tstdate.sh ) 3>&1 1>&2- 2>&3- ) | ./predate.sh >error.log
d to stdout
pax> cat error.log
Fri Oct 2 12:49:40 WAST 2009: a to stderr then wait five seconds
Fri Oct 2 12:49:45 WAST 2009: b to stderr then wait two seconds
Fri Oct 2 12:49:47 WAST 2009: c to stderr
As already mentioned, predate.sh will prefix each line with a timestamp and the tstdate.sh is simply a test program to write to stdout and stderr with specific time gaps.
When you run the command, you actually get "d to stdout" written to stderr (but that's your TTY device or whatever else stdout may have been when you started). The timestamped stderr lines are written to your desired file.
The devscripts package in Debian/Ubuntu contains a script called annotate-output which does that (for both stdout and stderr).
$ annotate-output make
21:41:21 I: Started make
21:41:21 O: gcc -Wall program.c
21:43:18 E: program.c: Couldn't compile, and took me ages to find out
21:43:19 E: collect2: ld returned 1 exit status
21:43:19 E: make: *** [all] Error 1
21:43:19 I: Finished with exitcode 2
Here's a version that uses a while read loop like pax's, but doesn't require extra file descriptors or a separate script (although you could use one). It uses process substitution:
myscript.sh 2> >( while read line; do echo "$(date): ${line}"; done > error.log )
Using pax's predate.sh:
myscript.sh 2> >( predate.sh > error.log )
The program ts from the moreutils package pipes standard input to standard output, and prefixes each line with a timestamp.
To prefix stdout lines: command | ts
To prefix both stdout and stderr: command 2>&1 | ts
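For example (ts's default prefix format is "%b %d %H:%M:%S"; the timestamps below are illustrative):
$ (echo a; sleep 2; echo b) | ts
Oct 02 12:31:39 a
Oct 02 12:31:41 b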
I like those portable shell scripts, but I'm a little disturbed that they fork/exec date(1) for every line. Here is a quick perl one-liner to do the same more efficiently:
perl -p -MPOSIX -e 'BEGIN {$|=1} $_ = strftime("%T ", localtime) . $_'
To use it, just feed this command input through its stdin:
(echo hi; sleep 1; echo lo) | perl -p -MPOSIX -e 'BEGIN {$|=1} $_ = strftime("%T ", localtime) . $_'
Rather than writing a script to pipe to, I prefer to write the logger as a function inside the script, and then send the entirety of the process into it with braces, like so:
# Vars
logfile=/path/to/scriptoutput.log
# Defined functions
teelogger(){
    log=$1
    while read line ; do
        print "$(date +"%x %T") :: $line" | tee -a $log
    done
}
# Start process
{
    echo 'well'
    sleep 3
    echo 'hi'
    sleep 3
    echo 'there'
    sleep 3
    echo 'sailor'
} | teelogger $logfile
If you want to redirect back to stdout, I found you have to do this:
myscript.sh >> >( while read line; do echo "$(date): ${line}"; done )
Not sure why I need the > in front of the ( rather than <(, but that's what works.
I was too lazy for all the current solutions... so I figured out a new one (works for stdout; could be adjusted for stderr as well):
echo "output" | xargs -L1 -I{} bash -c "echo \$(date +'%x %T') '{}'" | tee error.log
would save to file and print something like that:
11/3/16 16:07:52 output
Details:
-L1 means "for each new line"
-I{} means "replace {} by input"
bash -c is used to update $(date) each time for new call
%x %T formats timestamp to minimal form.
It should work like a charm if stdout and stderr don't contain quotes (" or `). If they do (or could), it's better to use:
echo "output" | awk '{ cmd = "date +%H:%M:%S"; cmd | getline d; close(cmd); print d, $0 }' | tee error.log
(Took from my answer in another topic: https://stackoverflow.com/a/41138870/868947)
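To adjust it for stderr, as mentioned, the same pipeline can hang off a process substitution (a sketch; myscript.sh is a hypothetical stand-in for your command):
myscript.sh 2> >(xargs -L1 -I{} bash -c "echo \$(date +'%x %T') '{}'" | tee error.log)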
How about timestamping the remaining output, redirecting all to stdout?
This answer combines some techniques from above, as well as from unix stackexchange here and here. bash >= 4.2 is assumed, but other advanced shells may work. For < 4.2, replace printf with a (slower) call to date.
: ${TIMESTAMP_FORMAT:="%F %T"} # override via environment
_loglines() {
    while IFS= read -r _line ; do
        printf "%(${TIMESTAMP_FORMAT})T@%s\n" '-1' "$_line";
    done;
}
exec 7<&2 6<&1
exec &> >( _loglines )
# Logit
To restore stdout/stderr:
exec 1>&6 2>&7
You can then use tee to send the timestamps to stdout and a logfile.
_tmpfile=$(mktemp)
exec &> >( _loglines | tee $_tmpfile )
Not a bad idea to have cleanup code in case the process exits with an error:
trap "_cleanup \$?" 0 SIGHUP SIGINT SIGABRT SIGBUS SIGQUIT SIGTRAP SIGUSR1 SIGUSR2 SIGTERM
_cleanup() {
    exec >&6 2>&7
    [[ "$1" != 0 ]] && cat "$_tmpfile"
    rm -f "$_tmpfile"
    exit "${1:-0}"
}
#!/bin/bash
DEBUG=1
LOG=$HOME/script_${0##*/}_$(date +%Y.%m.%d-%H.%M.%S-%N).log
ERROR=$HOME/error.log
exec 2> $ERROR
exec 1> >(tee -ai $LOG)
if [ $DEBUG = 0 ] ; then
    exec 4> >(xargs -i echo -e "[ debug ] {}")
else
    exec 4> /dev/null
fi
# test
echo " debug sth " >&4
echo " log sth normal "
type -f this_is_error
echo " errot sth ..." >&2
echo " finish ..." >&2>&4
# close descriptor 4
exec 4>&-
This thing: nohup myscript.sh 2> >( while read line; do echo "$(date): ${line}"; done > mystd.err ) < /dev/null &
works as such, but when I log out and log back in to the server it stops working; that is, mystd.err stops getting populated with the stderr stream, even though my process (myscript.sh here) is still running.
Does someone know how to get the lost stderr back into the mystd.err file?
Thought I would add my 2 cents worth..
#!/bin/sh
timestamp(){
    name=$(printf "$1%*s" `expr 15 - ${#1}`)
    awk "{ print strftime(\"%b %d %H:%M:%S\"), \"- $name -\", $$, \"- INFO -\", \$0; fflush() }";
}
echo "hi" | timestamp "process name" >> /tmp/proccess.log
printf "$1%*s" `expr 15 - ${#1}`
Spaces the name out so it looks nice; 15 is the maximum width issued, increase if desired.
Outputs >> Date - Process name - Process ID - INFO - Message
Jun 27 13:57:20 - process name - 18866 - INFO - hi
Redirections are taken in order. Try this:
Given a script -
$: cat tst
echo a
sleep 2
echo 1 >&2
echo b
sleep 2
echo 2 >&2
echo c
sleep 2
echo 3 >&2
echo d
I get the following
$: ./tst 2>&1 1>stdout | sed 's/^/echo $(date +%Y%m%dT%H%M%S) /; e'
20180925T084008 1
20180925T084010 2
20180925T084012 3
And as much as I dislike awk, it does avoid the redundant subcalls to date.
$: ./tst 2>&1 1>stdout | awk "{ print strftime(\"%Y%m%dT%H%M%S \") \$0; fflush() }" >stderr
$: cat stderr
20180925T084414 1
20180925T084416 2
20180925T084418 3
$: cat stdout
a
b
c
d
