I am currently using the following to log stdout and stderr. I want to log echo statements along with a timestamp.
exec 3>&1 1>>${LOG_FILE} 2>&1
How can I achieve this in shell?
Depending on the exact intended behavior, you may want the ts utility (from moreutils).
% echo message | ts
Apr 16 10:56:39 message
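ts also accepts a strftime-style format string, so the redirection from the question could become a pipe through ts; a sketch, where myscript.sh stands in for your script:
# timestamp everything (stdout and stderr) into LOG_FILE
./myscript.sh 2>&1 | ts '%Y-%m-%d %H:%M:%S' >> "${LOG_FILE}"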
You could also alias echo.
% sh
$ alias echo='echo $(date)'
$ echo message
Mon Apr 16 10:57:55 UTC 2018 message
Or make it a shell function.
#!/bin/sh
echo() {
command echo $(date) "$@"
}
echo message
Here I'm using a function to simplify the line, but the main thing to look for is process substitution, that's the >(logit) syntax. REPLY is the default variable used by read.
logit() {
while read
do
echo "$(date) $REPLY" >> ${LOG_FILE}
done
}
LOG_FILE="./logfile"
exec 3>&1 1>> >(logit) 2>&1
echo "Hello world"
ls -l does_not_exist
The example puts the following into logfile:
Mon 16 Apr 2018 09:18:33 BST Hello world
Mon 16 Apr 2018 09:18:33 BST ls: does_not_exist: No such file or directory
You might wish to change the bare $(date) to use a better format. If you are running bash 4.2 or later you can use printf instead, which is more efficient. For example, instead of the echo:
# -1 means "current time"
printf "%(%Y-%m-%d %T)T %s\n" -1 "$REPLY" >> ${LOG_FILE}
Gives:
2018-04-16 09:26:50 Hello world
2018-04-16 09:26:50 ls: does_not_exist: No such file or directory
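Putting that together, the whole logit function with the printf timestamp might look like this (a sketch, assuming bash 4.2+):
logit() {
    while read -r   # -r keeps backslashes intact; REPLY is still the default variable
    do
        printf "%(%Y-%m-%d %T)T %s\n" -1 "$REPLY" >> "${LOG_FILE}"
    done
}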
Brilliant, thanks @cdarke! I'm now using something like the following so that I have more fine-grained control in the main part of my scripts - it logs everything I want and I still see stdout:
logouts() {
while read
do
printf "%(%T)T %s\n" -1 "stdout: $REPLY" | tee -a ${LOG_FILE}
done
}
logerrs() {
while read
do
printf "%(%T)T %s\n" -1 "stderr: $REPLY" >> ${LOG_FILE}
done
}
LOG_FILE="./logfile"
main()
{
echo "Hello world" 1>> >(logouts) 2>> >(logerrs)
ls -l does_not_exist 1>> >(logouts) 2>> >(logerrs)
}
main "$@"
(Apologies to @cdarke: I couldn't format this into a comment - please upvote @cdarke's answer if you find this useful.)
Related
I am assigning the output of a command to variable A:
A=$(some_command)
How can I "capture" stderr into a variable B?
I have tried some variations with 2>&1 and read but that does not work:
A=$(some_command) 2>&1 | read B
echo $B
Here's a code snippet that might help you:
# capture stderr into a variable and print it
echo "capture stderr into a variable and print it"
var=$(lt -l /tmp 2>&1)
echo $var
capture stderr into a variable and print it
zsh: command not found: lt
# capture stdout into a variable and print it
echo "capture stdout into a variable and print it"
var=$(ls -l /tmp)
echo $var
# capture both stderr and stdout into a variable and print it
echo "capture both stderr and stdout into a variable and print it"
var=$(ls -l /tmp 2>&1)
echo $var
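And to capture only stderr while throwing stdout away, reverse the order of the redirections inside the command substitution:
# capture only stderr into a variable (2>&1 first, then stdout to /dev/null)
var=$(ls -l /nonexistent 2>&1 >/dev/null)
echo $var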
# A more classic way of executing a command, which I always follow, is as follows. This way I am always in control of what is going on and can act accordingly:
if somecommand ; then
echo "command succeeded"
else
echo "command failed"
fi
If you have to capture the output and stderr in different variables, then the following might help as well:
## create a file using file descriptor for stdout
exec 3> stdout.txt
# create a file using file descriptor for stderr
exec 4> stderr.txt
A=$($1 /tmp 2>&4 >&3);
## close file descriptor
exec 3>&-
exec 4>&-
## open file descriptor for reading
exec 3< stdout.txt
exec 4< stderr.txt
## read from file using file descriptor
read line <&3
read line2 <&4
## close file descriptor
exec 3<&-
exec 4<&-
## print line read from file
echo "stdout: $line"
echo "stderr: $line2"
## delete file
rm stdout.txt
rm stderr.txt
You can try running it with the following:
╰─ bash test.sh pwd
stdout: /tmp/somedir
stderr:
╰─ bash test.sh pwdd
stdout:
stderr: test.sh: line 8: pwdd: command not found
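One caveat: read line <&3 only captures the first line of each file. If the command can emit multiple lines, reading the whole temp files with command substitution is safer; a sketch:
# read the whole files instead of single lines (bash)
A=$(<stdout.txt)   # entire stdout, trailing newlines stripped
B=$(<stderr.txt)   # entire stderr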
As noted in a comment, your use case may be better served by other scripting languages. An example: in Perl you can achieve what you want quite simply:
#!/usr/bin/env perl
use v5.26; # or earlier versions
use Capture::Tiny 'capture'; # library is not in core
my $cmd = 'date';
my @arg = ('-R', '-u');
my ($stdout, $stderr, $exit) = capture {
system( $cmd, @arg );
};
say "STDOUT: $stdout";
say "STDERR: $stderr";
say "EXIT: $exit";
I'm sure similar solutions are available in Python, Ruby, and all the rest.
I gave it another try using process substitution and came up with this:
# command with no error
date +%b > >(read A; if [ "$A" = 'Sep' ]; then echo 'September'; fi ) 2> >(read B; if [ ! -z "$B" ]; then echo "$B"; fi >&2)
September
# command with error
date b > >(read A; if [ "$A" = 'Sep' ]; then echo 'September'; fi ) 2> >(read B; if [ ! -z "$B" ]; then echo "$B"; fi >&2)
date: invalid date 'b'
# command with both at the same time should work too
I had no success "exporting" the variables from the subprocesses back to the original script. It might be possible though. I just couldn't figure it out.
But this gives you at least access to stdout and stderr as a variable. This means you can do whatever processing you want on them as variables. It depends on your use case if this is of any help to you. Good luck :-)
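For what it's worth, one capture that does survive into the parent shell is to park the original stdout on a spare file descriptor. This sketch (bash; some_command is whatever you run) assigns stderr to B in the current shell while stdout still reaches the terminal:
# fd 3 holds the real stdout; inside $(), stderr goes to the capture, stdout to fd 3
{ B=$(some_command 2>&1 >&3); } 3>&1
echo "stderr was: $B"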
How can I save the error output when running my script to a file?
I have my code below, but it does not capture errors and save them to test14.log.
Can somebody give me a hint on what could be wrong in my code?
LOGFILE=/usr/local/etc/backups/test14.log
echo "$(date "+%m%d%Y %T") : Starting work" >> $LOGFILE 2>&1
Redirect the error output of the command itself rather than the error output of the append redirection, if that makes sense. I typically use functions for this type of process (similar to the above).
Try this:
2>&1 >> $LOGFILE
instead of
>> $LOGFILE 2>&1
###
function run_my_stuff {
echo "$(date "+%m%d%Y %T") : Starting work"
... more commands ...
echo "$(date "+%m%d%Y %T") : Done"
}
## call function and append to log
run_my_stuff 2>&1 >> ${LOGFILE} # OR
run_my_stuff 2>&1 | tee -a ${LOGFILE} # watch the log
Use this code:
#!/bin/bash
LOGFILE=/usr/local/etc/backups/test14.log
(
echo "$(date "+%m%d%Y %T") : Starting work"
... more commands ...
echo error 1>&2 # test stderr
echo "$(date "+%m%d%Y %T") : Done"
) >& $LOGFILE
The () makes BASH execute most of the script in a subshell. All output of the subshell (stderr and stdout) is redirected to the logfile.
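If you'd rather not wrap the body in ( ), an exec at the top of the script gives the same effect (here appending rather than truncating); a minimal sketch:
#!/bin/bash
LOGFILE=/usr/local/etc/backups/test14.log
exec >>"$LOGFILE" 2>&1   # everything from here on goes to the log
echo "$(date "+%m%d%Y %T") : Starting work"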
There are normally 2 outputs that you see on your screen when you execute a command:
STDOUT
STDERR
You can redirect them independently per command.
For example:
ls >out 2>err
will give you two files; out will contain the output from ls and err will be empty.
ls /nonexisting 1>out 2>err
will give you an empty out file and err will contain
"ls: cannot access /nonexisting: No such file or directory"
The 2>&1 redirects 2 (STDERR) to 1 (STDOUT).
Your echo does not have any errors, therefore there is no output on the STDERR.
So your code would probably need to be something like this:
echo "$(date "+%m%d%Y %T") : Starting work"
work_command > work_output 2>> $LOGFILE
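If the logged errors should carry timestamps too, a process substitution (bash) can wrap just the stderr stream; a sketch along the lines of the read loops above:
work_command > work_output 2> >(while read -r line; do echo "$(date) $line" >> "$LOGFILE"; done)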
Suppose I want to write a smart logging function log that reads the line immediately after the log invocation and stores it, together with its output, in the log file. The function can find, read and execute the line of code in question. The problem is that when the function returns, bash executes the line again.
Everything works fine except that the assignment to BASH_LINENO[0] is silently discarded. Reading http://wiki.bash-hackers.org/syntax/shellvars#bash_lineno I learned that the variable is not read-only.
function log()
{
BASH_LINENO[0]=$((${BASH_LINENO[0]}+1))
file=${BASH_SOURCE[1]##*/}
linenr=$((${BASH_LINENO[0]} + 1 ))
line=`sed "1,$((${linenr}-1)) d;${linenr} s/^ *//; q" $file`
if [ -f /tmp/tmp.txt ]; then
rm /tmp/tmp.txt
fi
exec 3>&1 4>&2 >>/tmp/tmp.txt 2>&1
set -x
eval $line
exitstatus=$?
set +x
exec 1>&3 2>&4 4>&- 3>&-
#Here goes the code that parses the /tmp/tmp.txt and stores it in the log
if [ "$exitstatus" -ne "0" ]; then
exit $exitstatus
fi
}
#Test case:
log
echo "Unfortunately this line gets appended twice" | tee -a bla.txt;
After consulting the wisdom of users on the bug-bash@gnu.org mailing list, it appears that modifying the call stack is not possible after all. Here is the answer I got from Chet Ramey:
BASH_LINENO is a call stack; assignments to it should be (and are)
ignored. That's been the case since at least bash-3.2 (that's where I
quit looking).
There is an indirect way to force bash to not execute the next
command: set the extdebug option and have the DEBUG trap return a
non-zero status.
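For illustration, here is a minimal sketch of the technique Chet describes (extdebug plus a DEBUG trap that returns a non-zero status):
#!/bin/bash
shopt -s extdebug
skip=1
# under extdebug, a non-zero return from the DEBUG trap skips the command about to run
trap 'if (( skip )); then skip=0; false; fi' DEBUG
echo "this line is skipped"
echo "this line runs"
trap - DEBUG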
The above technique works very well for my purposes. I was finally able to write a production version of the log function.
#!/bin/bash
shopt -s extdebug
repetition_count=0
_ERR_HDR_FMT="%.8s %s@%s:%s:%s"
_ERR_MSG_FMT="[${_ERR_HDR_FMT}]%s \$ "
msg() {
printf "$_ERR_MSG_FMT" $(date +%T) $USER $HOSTNAME $PWD/${BASH_SOURCE[2]##*/} ${BASH_LINENO[1]}
echo ${@}
}
function rlog()
{
case $- in *x*) USE_X="-x";; *) USE_X=;; esac
set +x
if [ "${BASH_LINENO[0]}" -ne "$myline" ]; then
repetition_count=0
return 0;
fi
if [ "$repetition_count" -gt "0" ]; then
return -1;
fi
if [ -z "$log" ]; then
return 0
fi
file=${BASH_SOURCE[1]##*/}
line=`sed "1,$((${myline}-1)) d;${myline} s/^ *//; q" $file`
if [ -f /tmp/tmp.txt ]; then
rm /tmp/tmp.txt
fi
echo "$line" > /tmp/tmp2.txt
mymsg=`msg`
exec 3>&1 4>&2 >>/tmp/tmp.txt 2>&1
set -x
source /tmp/tmp2.txt
exitstatus=$?
set +x
exec 1>&3 2>&4 4>&- 3>&-
repetition_count=1 #This flag is to prevent multiple execution of the current line of code. This condition gets checked at the beginning of the function
frstline=`sed '1q' /tmp/tmp.txt`
[[ "$frstline" =~ ^(\++)[^+].*$ ]]
# echo "BASH_REMATCH[1]=${BASH_REMATCH[1]}"
eval 'tmp="${BASH_REMATCH[1]}"'
pluscnt=$(( (${#tmp} + 1) *2 ))
pluses="\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+"
pluses=${pluses:0:$pluscnt}
commandlines="`awk \" gsub(/^${pluses}\\s/,\\\"\\\")\" /tmp/tmp.txt`"
n=0
#There might be more than one command in the debugged line. The next loop appends each command to the log.
while read -r line; do
if [ "$n" -ne "0" ]; then
echo "+ $line" >>$log
else
echo "${mymsg}$line" >>$log
n=1
fi
done <<< "$commandlines"
#The next line extracts all lines that are prefixed by a sufficient number of "+" (usually 3) and come immediately after the last line prefixed with $pluses, i.e. after the last command line.
awk "BEGIN {flag=0} /${pluses}/ { flag=1 } /^[^+]/ { if (flag==1) print \$0; }" /tmp/tmp.txt | tee -a $log
if [ "$exitstatus" -ne "0" ]; then
echo "## Exit status: $exitstatus" >>$log
fi
echo >>$log
if [ "$exitstatus" -ne "0" ]; then
exit $exitstatus
fi
if [ -n "$USE_X" ]; then
set -x
fi
return -1
}
log_next_line='eval if [ -n "$log" ]; then myline=$(($LINENO+1)); trap "rlog" DEBUG; fi;'
logoff='trap - DEBUG'
The usage of the file is intended as follows:
#!/bin/bash
log=mylog.log
if [ -f mylog.log ]; then
rm mylog.log
fi
. ./log.sh
a=example
x=a
$log_next_line
echo "KUKU!"
$log_next_line
echo $x
$log_next_line
echo ${!x}
$log_next_line
echo ${!x} > /dev/null
$log_next_line
echo "Proba">/tmp/mtmp.txt
$log_next_line
touch ${!x}.txt
$log_next_line
if [ $(( ${#a} + 6 )) -gt 10 ]; then echo "Too long string"; fi
$log_next_line
echo "\$a and \$x">/dev/null
$log_next_line
echo $x
$log_next_line
ls -l
$log_next_line
mkdir /ddad/adad/dad #Generates an error
The output (mylog.log):
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:14] $ echo 'KUKU!'
KUKU!
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:16] $ echo a
a
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:18] $ echo example
example
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:20] $ echo example
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:22] $ echo 1,2,3
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:24] $ touch example.txt
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:26] $ '[' 13 -gt 10 ']'
+ echo 'Too long string'
Too long string
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:28] $ echo '$a and $x'
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:30] $ echo a
a
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:32] $ ls -l
total 12
-rw-rw-r-- 1 adam adam 0 gru 4 13:39 example.txt
lrwxrwxrwx 1 adam adam 66 gru 4 13:29 log.sh -> /home/Adama-docs/Adam/Adam/MyDocs/praca/Puppet/bootstrap/common.sh
-rwxrwxr-x 1 adam adam 520 gru 4 13:29 log-test-case.sh
-rw-rw-r-- 1 adam adam 995 gru 4 13:39 mylog.log
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:34] $ mkdir /ddad/adad/dad
mkdir: cannot create directory ‘/ddad/adad/dad’: No such file or directory
## Exit status: 1
The standard output is unchanged.
Limitations
Limitations are serious, unfortunately.
Exit code of logged command gets discarded
First of all, the exit code of the logged command is discarded, so the user cannot test for it in the next statement. The current code exits the script if there was an error (which I believe is the best behavior). It is possible to modify the script to merely record the exit status instead of exiting.
Limited support for bash tracing
The function honors bash tracing with -x. If it finds that the user has tracing enabled, it temporarily disables the output (as it would interfere with the trace anyway) and restores it at the end. Unfortunately, it also appends a few extra lines to the trace.
Unless the user turns off logging (with $logoff), there is a considerable speed penalty for all commands after the first $log_next_line, even if no logging takes place.
In an ideal world the function would disable debug trapping (trap - DEBUG) after each invocation. Unfortunately I don't know how to do that, so beginning with the first $log_next_line macro, the interpretation of each line invokes a custom function.
I use this function before every key command in my complex bootstrapping scripts. With it I can see exactly what was executed, when, and what the output was, without needing to really understand the logic of the lengthy and sometimes messy scripts.
I have written a wrapper in bash which calls other shell scripts. However, I need to print only the output from the wrapper, suppressing the output from the called scripts, which I am logging to a log file instead.
Elaborating...
Basically I am using a function like this:
start_logging ${LOGFILE}
{
Funtion1
Funtion2
} 2>&1 | tee -a ${LOGFILE}
where start_logging is defined as follows (I only partially understand this function):
start_logging()
{
## usage: start_logging
## start a new log or append to existing log file
declare -i rc=0
if [ ! "${LOGFILE}" ];then
## display error and bail
fi
local TIME_STAMP=$(date +%Y%m%d:%H:%M:%S)
## open ${LOGFILE} or append to existing ${LOGFILE} with timestamp and actual command line
if [ ${DRY_RUN} ]; then
echo "DRY_RUN set..."
echo "${TIME_STAMP} Starting $(basename ${0}) run: '${0} ${ORIG_ARGS}'"
echo "DRY_RUN set..."
echo "Please ignore \"No such file or directory\" from tee..."
else
echo "${TIME_STAMP} Starting $(basename ${0}) run: '${0} ${ORIG_ARGS}'"
echo "${TIME_STAMP} Starting $(basename ${0}) run: '${0} ${ORIG_ARGS}'"
fi
return ${rc}
}
LOGFILE is defined in the wrapper as
{
TMPDIR="/tmp"
LOGFILE="${TMPDIR}/${$}/${BASENAME%.*}.log"
}
Now when it calls Funtion1 and Funtion2, which in turn call other bash scripts, it logs all the output to the file, i.e. ${TMPDIR}/${$}/${BASENAME%.*}.log, as well as to the bash terminal.
I want only what I have written in the wrapper to be echoed to the bash terminal; the rest should be recorded in the log.
Please note: the scripts called from the wrapper contain echo statements of their own, but I don't want their output displayed on the terminal.
Is it possible to achieve this?
You need to redirect both stdout + stderr of your called scripts into your logfile.
./your_other_script.sh >> /var/log/mylogfile.txt 2>&1
You get output on the terminal because of tee. So you can:
start_logging ${LOGFILE}
{
Funtion1
Funtion2
} 2>&1 | tee -a ${LOGFILE} >/dev/null
^^^^^^^^^^ - redirect the output from a tee to /dev/null
or simply remove the tee so that all logs are redirected only to the file:
start_logging ${LOGFILE}
{
Funtion1
Funtion2
} >${LOGFILE} 2>&1
or, for bigger parts of the script, enclose the part in a ( ) pair so it is executed in a subshell, and redirect the output to /dev/null:
(
start_logging ${LOGFILE}
{
Funtion1
Funtion2
} 2>&1 | tee -a ${LOGFILE}
) >/dev/null
In bash/ksh can we add timestamp to STDERR redirection?
E.g. myscript.sh 2> error.log
I want to get a timestamp written on the log too.
If you're talking about an up-to-date timestamp on each line, that's something you'd probably want to do in your actual script (but see below for a nifty solution if you have no power to change it). If you just want a marker date on its own line before your script starts writing, I'd use:
( date 1>&2 ; myscript.sh ) 2>error.log
What you need is a trick to pipe stderr through another program that can add timestamps to each line. You could do this with a C program but there's a far more devious way using just bash.
First, create a script which will add the timestamp to each line (called predate.sh):
#!/bin/bash
while read line ; do
echo "$(date): ${line}"
done
For example:
( echo a ; sleep 5 ; echo b ; sleep 2 ; echo c ) | ./predate.sh
produces:
Fri Oct 2 12:31:39 WAST 2009: a
Fri Oct 2 12:31:44 WAST 2009: b
Fri Oct 2 12:31:46 WAST 2009: c
Then you need another trick that can swap stdout and stderr, this little monstrosity here:
( myscript.sh 3>&1 1>&2- 2>&3- )
Then it's simple to combine the two tricks by timestamping stdout and redirecting it to your file:
( myscript.sh 3>&1 1>&2- 2>&3- ) | ./predate.sh >error.log
The following transcript shows this in action:
pax> cat predate.sh
#!/bin/bash
while read line ; do
echo "$(date): ${line}"
done
pax> cat tstdate.sh
#!/bin/bash
echo a to stderr then wait five seconds 1>&2
sleep 5
echo b to stderr then wait two seconds 1>&2
sleep 2
echo c to stderr 1>&2
echo d to stdout
pax> ( ( ./tstdate.sh ) 3>&1 1>&2- 2>&3- ) | ./predate.sh >error.log
d to stdout
pax> cat error.log
Fri Oct 2 12:49:40 WAST 2009: a to stderr then wait five seconds
Fri Oct 2 12:49:45 WAST 2009: b to stderr then wait two seconds
Fri Oct 2 12:49:47 WAST 2009: c to stderr
As already mentioned, predate.sh will prefix each line with a timestamp and the tstdate.sh is simply a test program to write to stdout and stderr with specific time gaps.
When you run the command, you actually get "d to stdout" written to stderr (but that's your TTY device or whatever else stdout may have been when you started). The timestamped stderr lines are written to your desired file.
The devscripts package in Debian/Ubuntu contains a script called annotate-output which does that (for both stdout and stderr).
$ annotate-output make
21:41:21 I: Started make
21:41:21 O: gcc -Wall program.c
21:43:18 E: program.c: Couldn't compile, and took me ages to find out
21:43:19 E: collect2: ld returned 1 exit status
21:43:19 E: make: *** [all] Error 1
21:43:19 I: Finished with exitcode 2
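If I recall the devscripts version correctly, annotate-output also accepts a date(1) format string as its first argument, so the timestamps can be adjusted; a sketch (myscript.sh is a stand-in):
annotate-output +%H:%M:%S ./myscript.sh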
Here's a version that uses a while read loop like pax's, but doesn't require extra file descriptors or a separate script (although you could use one). It uses process substitution:
myscript.sh 2> >( while read line; do echo "$(date): ${line}"; done > error.log )
Using pax's predate.sh:
myscript.sh 2> >( predate.sh > error.log )
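And to timestamp both streams into the same file, redirect stdout into the substitution first and then duplicate stderr onto it (a sketch):
myscript.sh > >( predate.sh >> error.log ) 2>&1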
The program ts from the moreutils package pipes standard input to standard output, and prefixes each line with a timestamp.
To prefix stdout lines: command | ts
To prefix both stdout and stderr: command 2>&1 | ts
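ts can also be pointed at a single stream with process substitution (bash); for example, to prefix only stderr while stdout passes through untouched:
command 2> >(ts >&2)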
I like those portable shell scripts, but I'm a little disturbed that they fork/exec date(1) for every line. Here is a quick perl one-liner to do the same more efficiently:
perl -p -MPOSIX -e 'BEGIN {$|=1} $_ = strftime("%T ", localtime) . $_'
To use it, just feed this command input through its stdin:
(echo hi; sleep 1; echo lo) | perl -p -MPOSIX -e 'BEGIN {$|=1} $_ = strftime("%T ", localtime) . $_'
Rather than writing a script to pipe to, I prefer to write the logger as a function inside the script, and then send the entirety of the process into it with braces, like so:
# Vars
logfile=/path/to/scriptoutput.log
# Defined functions
teelogger(){
log=$1
while read line ; do
echo "$(date +"%x %T") :: $line" | tee -a $log
done
}
# Start process
{
echo 'well'
sleep 3
echo 'hi'
sleep 3
echo 'there'
sleep 3
echo 'sailor'
} | teelogger $logfile
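The same function works for timestamping an existing script's output as well; for example (myscript.sh is hypothetical):
./myscript.sh 2>&1 | teelogger "$logfile"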
If you want to redirect back to stdout I found you have to do this:
myscript.sh >> >( while read line; do echo "$(date): ${line}"; done )
Not sure why I need the > in front of the ( (i.e. >( rather than <(), but that's what works.
I was too lazy to use any of the current solutions... so I figured out a new one (it works for stdout and could be adjusted for stderr as well):
echo "output" | xargs -L1 -I{} bash -c "echo \$(date +'%x %T') '{}'" | tee error.log
This saves to the file and prints something like this:
11/3/16 16:07:52 output
Details:
-L1 means "for each new line"
-I{} means "replace {} by input"
bash -c is used so that $(date) is re-evaluated for each new call
%x %T formats timestamp to minimal form.
It works like a charm as long as stdout and stderr don't contain quotes (" or `). If they do (or could), it's better to use:
echo "output" | awk '{cmd="date +%H:%M:%S"; cmd | getline d; print d,$0; close(cmd)}' | tee error.log
(Taken from my answer in another topic: https://stackoverflow.com/a/41138870/868947)
How about timestamping the remaining output, redirecting all to stdout?
This answer combines some techniques from above, as well as from unix stackexchange here and here. bash >= 4.2 is assumed, but other advanced shells may work. For < 4.2, replace printf with a (slower) call to date.
: ${TIMESTAMP_FORMAT:="%F %T"} # override via environment
_loglines() {
while IFS= read -r _line ; do
printf "%(${TIMESTAMP_FORMAT})T#%s\n" '-1' "$_line";
done;
}
exec 7<&2 6<&1
exec &> >( _loglines )
# Logit
To restore stdout/stderr:
exec 1>&6 2>&7
You can then use tee to send the timestamps to stdout and a logfile.
_tmpfile=$(mktemp)
exec &> >( _loglines | tee $_tmpfile )
It's not a bad idea to have cleanup code that runs when the process exits, dumping the captured log if there was an error:
trap "_cleanup \$?" 0 SIGHUP SIGINT SIGABRT SIGBUS SIGQUIT SIGTRAP SIGUSR1 SIGUSR2 SIGTERM
_cleanup() {
exec >&6 2>&7
[[ "$1" != 0 ]] && cat "$_tmpfile"
rm -f "$_tmpfile"
exit "${1:-0}"
}
#!/bin/bash
DEBUG=1
LOG=$HOME/script_${0##*/}_$(date +%Y.%m.%d-%H.%M.%S-%N).log
ERROR=$HOME/error.log
exec 2> $ERROR
exec 1> >(tee -ai $LOG)
if [ $DEBUG = 1 ] ; then
exec 4> >(xargs -i echo -e "[ debug ] {}")
else
exec 4> /dev/null
fi
# test
echo " debug sth " >&4
echo " log sth normal "
type -f this_is_error
echo " error sth ..." >&2
echo " finish ..." >&2>&4
# close descriptor 4
exec 4>&-
This thing: nohup myscript.sh 2> >( while read line; do echo "$(date): ${line}"; done > mystd.err ) < /dev/null &
It works as such, but when I log out and log back in to the server it stops working; that is, mystd.err stops getting populated with the stderr stream, even though my process (myscript.sh here) is still running.
Does someone know how to get the lost stderr back into the mystd.err file?
Thought I would add my 2 cents' worth...
#!/bin/sh
timestamp(){
name=$(printf "$1%*s" `expr 15 - ${#1}`)
awk "{ print strftime(\"%b %d %H:%M:%S\"), \"- $name -\", $$, \"- INFO -\", \$0; fflush() }";
}
echo "hi" | timestamp "process name" >> /tmp/proccess.log
printf "$1%*s" `expr 15 - ${#1}`
This spaces the name out so it looks nice, where 15 is the maximum width; increase it if desired.
outputs >> Date - Process name - Process ID - INFO - Message
Jun 27 13:57:20 - process name - 18866 - INFO - hi
Redirections are taken in order. Try this:
Given a script:
$: cat tst
echo a
sleep 2
echo 1 >&2
echo b
sleep 2
echo 2 >&2
echo c
sleep 2
echo 3 >&2
echo d
I get the following:
$: ./tst 2>&1 1>stdout | sed 's/^/echo $(date +%Y%m%dT%H%M%S) /; e'
20180925T084008 1
20180925T084010 2
20180925T084012 3
And as much as I dislike awk, it does avoid the redundant subcalls to date.
$: ./tst 2>&1 1>stdout | awk "{ print strftime(\"%Y%m%dT%H%M%S \") \$0; fflush() }" >stderr
$: cat stderr
20180925T084414 1
20180925T084416 2
20180925T084418 3
$: cat stdout
a
b
c
d