How to terminate the logging process inside a shell script without stopping the script? - shell

I am working with a shell script to run my analyses. To make sure I can confirm that the correct commands were executed, I am writing the complete STDOUT/STDERR output to a file.
My script looks like this:
#!/bin/bash
# Here is some preliminary stuff
echo " this still goes to STDOUT to mark the beginning of the script"
#### redirect all output to a log file
# set a log file into which all echo output will be redirected
touch "${projectName}_logfile.txt" # create the log file
exec &> "${projectName}_logfile.txt" # direct all output to the log file
echo "1. These steps should be written to the log file"
# exit
# exec >&-
echo "2. 2. these steps should be written to the STDOUT again!"
# The script should be able to continue here ...
As you can see, I have tried both exiting with the exit command and closing the file descriptor with exec again, but both attempts failed.
I would appreciate your help in understanding how to close the connection to the log file and redirect everything back to STDOUT/STDERR.
thanks
Assa

I would rather do it this way:
echo "to out 1"
{
echo "to log 1"
echo "to log 2"
} &> ./_logfile.txt
echo "to out 2"
Anyway, if you still need to use your approach, then you need to save the original file descriptors first:
exec 3<&1 # save original stdout to 3
exec 4<&2 # save original stderr to 4
And then restore:
exec 1>&3 # restore original stdout
exec 2>&4 # restore original stderr
Your example:
#!/usr/bin/env bash
echo " this still goes to STDOUT to mark the beginning of the script"
touch ./_logfile.txt # touch the log file
exec 3<&1 # save original stdout to 3
exec 4<&2 # save original stderr to 4
exec &> ./_logfile.txt # direct all out and err to the log file
echo "1. These steps should be written to the log file"
exec 1>&3 # restore original stdout
exec 2>&4 # restore original stderr
echo "2. 2. these steps should be written to the STDOUT again!"
# The script should be able to continue here ...
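For illustration (my addition), a hypothetical run of the script above, assuming it is saved as script.sh:
$ ./script.sh
 this still goes to STDOUT to mark the beginning of the script
2. These steps should be written to STDOUT again!
$ cat ./_logfile.txt
1. These steps should be written to the log file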

Related

I want to place after all: echo "etc" ( >> filename) [duplicate]

Is it possible to redirect all of the output of a Bourne shell script to somewhere, but with shell commands inside the script itself?
Redirecting the output of a single command is easy, but I want something more like this:
#!/bin/sh
if [ ! -t 0 ]; then
# redirect all of my output to a file here
fi
# rest of script...
Meaning: if the script is run non-interactively (for example, cron), save off the output of everything to a file. If run interactively from a shell, let the output go to stdout as usual.
I want to do this for a script normally run by the FreeBSD periodic utility. It's part of the daily run, which I don't normally care to see every day in email, so I don't have it sent. However, if something inside this one particular script fails, that's important to me and I'd like to be able to capture and email the output of this one part of the daily jobs.
Update: Joshua's answer is spot-on, but I also wanted to save and restore stdout and stderr around the entire script, which is done like this:
# save stdout and stderr to file
# descriptors 3 and 4,
# then redirect them to "foo"
exec 3>&1 4>&2 >foo 2>&1
# ...
# restore stdout and stderr
exec 1>&3 2>&4
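If you also want to avoid leaking the spare descriptors 3 and 4 to commands run later in the script, they can be closed in the same restoring exec (my addition, a minor tidy-up):
exec 1>&3 2>&4 3>&- 4>&-  # restore stdout/stderr and close the temporary descriptors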
Addressing the question as updated.
#...part of script without redirection...
{
#...part of script with redirection...
} > file1 2>file2 # ...and others as appropriate...
#...residue of script without redirection...
The braces '{ ... }' provide a unit of I/O redirection. The braces must appear where a command could appear - simplistically, at the start of a line or after a semi-colon. (Yes, that can be made more precise; if you want to quibble, let me know.)
You are right that you can preserve the original stdout and stderr with the redirections you showed, but it is usually simpler for the people who have to maintain the script later to understand what's going on if you scope the redirected code as shown above.
The relevant sections of the Bash manual are Grouping Commands and I/O Redirection. The relevant sections of the POSIX shell specification are Compound Commands and I/O Redirection. Bash has some extra notations, but is otherwise similar to the POSIX shell specification.
Typically we would place one of these at or near the top of the script. Scripts that parse their command lines would do the redirection after parsing (a sketch of this follows the examples below).
Send stdout to a file
exec > file
with stderr
exec > file
exec 2>&1
append both stdout and stderr to file
exec >> file
exec 2>&1
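As a rough sketch of the "redirect after parsing the command line" point above (my example, with assumed option names):
#!/bin/bash
# Parse options first, then switch on logging; the -l option and its handling are assumptions.
logfile=""
while getopts "l:" opt; do
  case "$opt" in
    l) logfile=$OPTARG ;;
    *) echo "usage: $0 [-l logfile]" >&2; exit 2 ;;
  esac
done
shift $((OPTIND - 1))
# Only after parsing do we redirect everything that follows.
if [ -n "$logfile" ]; then
  exec >>"$logfile" 2>&1
fi
echo "this goes to the log file if -l was given, otherwise to the terminal"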
As Jonathan Leffler mentioned in his comment:
exec has two separate jobs. The first one is to replace the currently executing shell (script) with a new program. The other is changing the I/O redirections in the current shell. This is distinguished by having no argument to exec.
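To make the distinction concrete, a small sketch (my addition, file names assumed):
#!/bin/bash
# 1) exec with only redirections: the current shell keeps running,
#    but its stdout and stderr now point at run.log.
exec >> run.log 2>&1
echo "this line goes to run.log"

# 2) exec with a command: the shell process is replaced by that command,
#    so nothing after this line would ever run.
exec sort run.log
echo "never reached"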
You can make the whole script a function like this:
main_function() {
do_things_here
}
then at the end of the script have this:
if [ -z "$TERM" ]; then
# if not run via terminal, log everything into a log file
main_function >> /var/log/my_uber_script.log 2>&1
else
# run via terminal, only output to screen
main_function
fi
Alternatively, you may log everything into the log file on each run and still output it to stdout by simply doing:
# log everything, but also output to stdout
main_function 2>&1 | tee -a /var/log/my_uber_script.log
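One caveat with the tee pipeline, added here as a note (not part of the original answer): afterwards $? reflects tee, not main_function; in bash you can recover the real status with PIPESTATUS or pipefail:
main_function 2>&1 | tee -a /var/log/my_uber_script.log
status=${PIPESTATUS[0]}   # exit status of main_function rather than of tee

set -o pipefail           # alternatively: make the whole pipeline fail if any part fails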
For saving the original stdout and stderr you can use:
exec [fd number]<&1
exec [fd number]<&2
For example, the following code will print "walla1" and "walla2" to the log file (~/a.txt), "walla3" to stdout, and "walla4" to stderr.
#!/bin/bash
exec 5<&1
exec 6<&2
exec 1> ~/a.txt 2>&1
echo "walla1"
echo "walla2" >&2
echo "walla3" >&5
echo "walla4" >&6
[ -t 0 ] || exec >> test.log # if stdin is not a terminal, append all further stdout to test.log
I finally figured out how to do it. I wanted not just to save the output to a file but also to find out whether the bash script ran successfully or not.
I've wrapped the bash commands inside a function and then called main_function with its output tee'd to a file. Afterwards, I've checked the exit status using if [ $? -eq 0 ].
#!/bin/bash
# process substitution (the > >(...) below) requires bash rather than plain sh
main_function() {
python command.py
}
main_function > >(tee -a "/var/www/logs/output.txt") 2>&1
if [ $? -eq 0 ]
then
echo 'Success!'
else
echo 'Failure!'
fi
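Since process substitution is a bash feature, here is a rough POSIX sh variant of the same idea (my sketch; the temporary status file is an assumption):
#!/bin/sh
main_function() {
python command.py
}
status_file=$(mktemp)
# the left side of the pipe runs in a subshell, so its exit status is saved to a file
{ main_function 2>&1; echo "$?" > "$status_file"; } | tee -a "/var/www/logs/output.txt"
if [ "$(cat "$status_file")" -eq 0 ]
then
echo 'Success!'
else
echo 'Failure!'
fi
rm -f "$status_file"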

Shell, redirect all output to a file but still print echo

I have multiple vagrant provisions (shell scripts), and I would like to redirect all command output to a file while keeping the echo output in stdout.
Currently I redirect all output by doing
exec &>> provision.log
at the beginning of each provision.
This works great, but the console is empty. So I would like to redirect everything except the echo commands to a file.
Ideally, if it were possible to redirect all output (including echo) to a file while also keeping the echo output on stdout, that would be best.
Result should look like:
Console
Starting provision "master"
Updating packages...
Installing MySQL...
ERROR! check provision.log for details
provision.log
Starting provision "master"
Updating packages...
...
(output from apt-get)
...
Installing MySQL...
...
(output from apt-get install mysql-server-5.5)
...
ERROR! check provision.log for details
I do realize I could attach an output redirection to every command, but that is quite messy.
You can take the approach of duplicating the stdout file descriptor and using a custom echo function that also writes to the duplicated descriptor.
#!/bin/bash
# open fd=3 redirecting to 1 (stdout)
exec 3>&1
# function echo: show echo output on the terminal as well as in the log
echo() {
# call the real echo twice: once on the redirected stdout (the log file) and once on fd=3 (the terminal)
command echo "$@"
command echo "$@" >&3
}
# redirect stdout to a log file
exec >>logfile
printf "%s\n" "going to file"
date
echo "on stdout"
pwd
# close fd=3
exec 3>&-
You might redirect only stdout and keep stderr on the terminal. Of course, your script should then echo to stderr, perhaps using echo something > /dev/stderr or echo something >&2.
You could also redirect the echoes to /dev/tty, which makes sense only if the script was started on a terminal (not, e.g., through at or crontab).
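A minimal sketch of that stderr (or /dev/tty) variant, under the same provisioning setup (my example; the say helper name is an assumption):
#!/bin/bash
exec >> provision.log               # everything written to stdout lands in the log
say() { echo "$*" >&2; }            # progress messages go to stderr, which stays on the console
# say() { echo "$*" > /dev/tty; }   # variant: write straight to the controlling terminal

say 'Starting provision "master"'
echo "detailed command output, ends up in provision.log"
apt-get update                      # example command; its stdout also goes to the log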

Use 'exec' to log all output, stop, and then re-attach to the same FDs/Named Pipe

Update: I think the code below may be headed in the wrong direction, but the question remains: can I open a pipe to log all output (file and console), pause that log and log to a new log (new file and console), and then re-attach to the FD for the original logger just by moving FDs around, without re-opening the original log file?
I'm trying to improve my knowledge of FDs in bash. I'm trying to log all output of the main "meta" test.sh, but log to a different file when I get to "sections", e.g. functions, sourced scripts, etc., and then go back to appending to the "meta" log.
I know I could accomplish this pretty easily with subshells, or by opening the 'meta' log again and appending from there, but can anyone help me accomplish it by switching FDs around?
#!/bin/bash
rm *.log
NAMED_PIPE="$(mktemp -u /tmp/pipe.XXXX)"
mknod $NAMED_PIPE p
tee <$NAMED_PIPE "./meta.log" &
section () {
echo SECTION: stdout
echo SECTION: stderr >&2
}
# link stdout->3 & stderr->4 and save stdout & stderr
exec 3>&1 4>&2 &> "$NAMED_PIPE"
echo METAstr: stdout
echo METAstr: stderr >&2
# restore stdout & stderr
exec 1>&3 2>&4
# sleep 1 # I think an additional delay prevents the possible race condition I'm seeing
# exec 1>&3- 2>&4- ... I think this would restore but close 3 & 4?
# do I need another named pipe here?
section 2>&1 # | tee section.log
# re-link to same pipe
exec 3>&1 4>&2 &> "$NAMED_PIPE"
echo METAend: stdout
echo METAend: stderr >&2
Without trying to log 'section', all the meta output gets printed after the script returns:
-bash-4.2# ./test.sh
SECTION: stdout
SECTION: stderr
-bash-4.2# METAstr: stdout
METAstr: stderr
METAend: stdout
METAend: stderr
And when I try to log 'section', I think it fouls up my FDs, so the following exec hangs:
-bash-4.2# ./test.sh
METAstr: stdout
METAstr: stderr
SECTION: stdout
SECTION: stderr
EDIT1:
Contents of meta.log after running the script without trying to tee section:
[root@master tmp]# cat meta.log
METAstr: stdout
METAstr: stderr
METAend: stdout
METAend: stderr
It logs the ending messages; the tee does not exit until the script does.
EDIT2:
Revision of EDIT1: I think it's a race condition. The FDs are being closed, but they're not closed by the time the final echo commands happen.
I was just going to write the same thing. It's a race condition. The second exec closes the writing end in your process, signalling an EOF to tee. tee will want to exit when it gets the EOF. If it does exit by the time you call the last exec, the last exec will block. If it hasn't exited yet at that point, it will not block, because a reading end of the FIFO will still be open.
Any delay will make it more likely tee will have exited by that time.
Spawning a process makes it very likely. I found with stracing (which slows the program down a little) it's about 50/50.
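One way to avoid that race, sketched here as an illustration rather than a drop-in fix: keep a write descriptor to the FIFO open for the whole script, so tee never sees EOF between the two exec calls and the second attach cannot block.
#!/bin/bash
# (the separate section.log from the question is omitted here for brevity)
NAMED_PIPE="$(mktemp -u /tmp/pipe.XXXX)"
mkfifo "$NAMED_PIPE"
tee meta.log < "$NAMED_PIPE" &
TEE_PID=$!

exec 5> "$NAMED_PIPE"          # hold the FIFO's write end open on fd 5 for the whole run
exec 3>&1 4>&2 >&5 2>&1        # log to the FIFO
echo "METAstr: stdout"
exec 1>&3 2>&4                 # back to the terminal for the section
echo "SECTION: on terminal"
exec >&5 2>&1                  # re-attach to the same FIFO; tee is still alive
echo "METAend: stdout"
exec 1>&3 2>&4 5>&-            # restore stdout/stderr and close the write end; tee now sees EOF
wait "$TEE_PID"
rm -f "$NAMED_PIPE"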

Logging stderr and stdout to log file and handling errors in bash script [duplicate]

RESOLUTION:
See Send stderr/stdout messages to function and trap exit signal
Edit
I see that I have not been precise enough in my original post, sorry about that! So now I will include a code example of my bash script along with refined questions:
My four questions:
How can I send stdout to a file and stderr to the log() function for the mysqldump command in STEP 2?
How can I send both stdout and stderr to the log() function for the gzip command in STEP 2 and the rsync command in STEP 3?
How do I read stdout and stderr inside the log() function in questions 1 and 2 above?
How can I still trap all errors in the onexit() function?
In general: I want stdout and stderr to go to the log() function so the messages can be written to a specific log file in a specific format, but in some cases, like the mysqldump command, stdout needs to go to a file. Also, when an error occurs and is sent to stderr, I want to end up in the onexit() function after the log statement has finished.
#!/bin/bash -Eu
# -E: ERR trap is inherited by shell functions.
# -u: Treat unset variables as an error when substituting.
# Example script for handling bash errors. Exit on error. Trap exit.
# This script is supposed to run in a subshell.
# See also: http://fvue.nl/wiki/Bash:_Error_handling
# Trap non-normal exit signals: 1/HUP, 2/INT, 3/QUIT, 15/TERM, and the ERR condition
trap onexit 1 2 3 15 ERR
# ****** VARIABLES STARTS HERE ***********
BACKUP_ROOT_DIR=/var/app/backup
BACKUP_DIR_DATE=${BACKUP_ROOT_DIR}/`date "+%Y-%m-%d"`
EMAIL_FROM="foo#bar.com"
EMAIL_RECIPIENTS="bar#foo.com"
...
# ****** VARIABLES ENDS HERE ***********
# ****** FUNCTIONS STARTS HERE ***********
# Function that checks if all folders exists and create new ones as required
function checkFolders()
{
if [ ! -d "${BACKUP_ROOT_DIR}" ] ; then
log "ERROR" "Backup directory doesn't exist"
exit
else
log "INFO" "All folders exists"
fi
if [ ! -d "${BACKUP_DIR_DATE}" ] ; then
mkdir ${BACKUP_DIR_DATE} -v
log "INFO" "Created new backup directory"
else
log "WARN" "Backup directory already exists"
fi
}
# Function executed when exiting the script, either because of an error or successfully run
function onexit() {
local exit_status=${1:-$?}
# Send email notification with the status
echo "Backup finished at `date` with status ${exit_status_text} | mail -s "${exit_status_text} - backup" -S from="${EMAIL_FROM}" ${EMAIL_RECIPIENTS}"
log "INFO" "Email notification sent with execution status ${exit_status_text}"
# Print script duration to the console
ELAPSED_TIME=$((${SECONDS} - ${START_TIME}))
log "INFO" "Backup finished" "startDate=\"${START_DATE}\", endDate=\"`date`\", duration=\"$((${ELAPSED_TIME}/60)) min $((${ELAPSED_TIME}%60)) sec\""
exit ${exit_status}
}
# Logs to custom log file according to preferred log format for Splunk
# Input:
# 1. severity (INFO,WARN,DEBUG,ERROR)
# 2. the message
# 3. additional fields
#
function log() {
local print_msg="`date +"%FT%T.%N%Z"` severity=\"${1}\",message=\"${2}\",transactionID=\"${TRANS_ID}\",source=\"${SCRIPT_NAME}\",environment=\"${ENV}\",application=\"${APP}\""
# check if additional fields set in the 3. parameter
if [ $# -eq 3 ] ; then
print_msg="${print_msg}, ${3}"
fi
echo ${print_msg} >> ${LOG_FILE}
}
# ****** FUNCTIONS ENDS HERE ***********
# ****** SCRIPT STARTS HERE ***********
log "INFO" "Backup of ${APP} in ${ENV} starting"
# STEP 1 - validate
log "INFO" "1/3 Checking folder paths"
checkFolders
# STEP 2 - mysql dump
log "INFO" "2/3 Dumping ${APP} database"
mysqldump --single-transaction ${DB_NAME} > ${BACKUP_DIR_DATE}/${SQL_BACKUP_FILE}
gzip -f ${BACKUP_DIR_DATE}/${SQL_BACKUP_FILE}
log "INFO" "Mysql dump finished."
# STEP 3 - transfer
# Files are only transferred if all commands have run successfully. Transfer is done using rsync
log "INFO" "3/3 Transferring backup file"
rsync -r -av ${BACKUP_ROOT_DIR}/ ${BACKUP_TRANSFER_USER}@${BACKUP_TRANSFER_DEST}
# ****** SCRIPT ENDS HERE ***********
onexit
Thanks!
Try this:
mylogger() { printf "Log: %s\n" "$(</dev/stdin)"; }
mysqldump ... 2>&1 >dumpfilename.sql | mylogger
Both Cyrus's answer and Oleg Vaskevich's answer offer viable solutions for redirecting stderr to a shell function.
What they both imply is that it makes sense for your function to accept stdin input rather than expecting input as an argument.
To explain the idiom they use:
mysqldump ... 2>&1 > stdout-file | log-func-that-receives-stderr-via-stdin
2>&1 ... redirects stderr to the original stdout
from that point on, any stderr output is redirected to stdout
> stdout-file then redirects the original stdout to the file stdout-file
from that point on, any stdout output is redirected to the file.
Since > stdout-file comes after 2>&1, the net result is:
stdout output is redirected to the file stdout-file
stderr output is sent to [the original] stdout
Thus, log-func-that-receives-stderr-via-stdin receives only the previous command's stderr output through the pipe, via its stdin.
Similarly, your original approach - command 2> >(logFunction) - works in principle, but requires that your log() function read from stdin rather than expect arguments:
The following illustrates the principle:
ls / nosuchfile 2> >(sed 's/^/Log: /') > stdout-file
ls / nosuchfile produces both stdout and stderr output.
stdout output is redirected to file stdout-file.
2> >(...) uses an [output] process substitution to redirect stderr output to the command enclosed in >(...) - that command receives input via its stdin.
sed 's/^/Log: /' reads from its stdin and prepends the string Log: to each input line.
Thus, your log() function should be rewritten to process stdin:
either: by implicitly passing the input to another stdin-processing utility such as sed or awk (as above).
or: by using a while read ... loop to process each input line in a shell loop:
log() {
# `read` reads from stdin by default
while IFS= read -r line; do
printf 'STDERR line: %s\n' "$line"
done
}
mysqldump ... 2> >(log) > stdout-file
Let's suppose that your log function looks like this (it just echoes the first argument):
log() { echo "$1"; }
To save the stdout of mysqldump to some file and call your log() function for every line in stderr, do this:
mysqldump 2>&1 >/your/sql_dump_file.dat | while IFS= read -r line; do log "$line"; done
If you wanted to use xargs, you could do it this way. However, you'd be starting a new shell every time.
export -f log
mysqldump 2>&1 >/your/sql_dump_file.dat | xargs -L1 bash -i -c 'log "$@"' _
