I have written a wrapper in bash that calls other shell scripts. However, I need to print only the output from the wrapper itself and suppress the output from the called scripts, which I am logging into a log file.
Elaborating: basically I am using a function like this:
start_logging ${LOGFILE}
{
Function1
Function2
} 2>&1 | tee -a ${LOGFILE}
where start_logging is defined as follows (I only partially understand this function):
start_logging()
{
    ## usage: start_logging
    ## start a new log or append to an existing log file
    declare -i rc=0
    if [ ! "${LOGFILE}" ]; then
        ## display error and bail
        echo "LOGFILE is not set" >&2
        return 1
    fi
    local TIME_STAMP=$(date +%Y%m%d:%H:%M:%S)
    ## open ${LOGFILE} or append to an existing ${LOGFILE} with a timestamp and the actual command line
    if [ -n "${DRY_RUN}" ]; then
        echo "DRY_RUN set..."
        echo "${TIME_STAMP} Starting $(basename ${0}) run: '${0} ${ORIG_ARGS}'"
        echo "Please ignore \"No such file or directory\" from tee..."
    else
        echo "${TIME_STAMP} Starting $(basename ${0}) run: '${0} ${ORIG_ARGS}'"
    fi
    return ${rc}
}
LOGFILE is defined in the wrapper as
TMPDIR="/tmp"
LOGFILE="${TMPDIR}/${$}/${BASENAME%.*}.log"
Now, when the wrapper calls Function1 and Function2, which in turn call other bash scripts, all of their output is logged to the file, i.e. ${TMPDIR}/${$}/${BASENAME%.*}.log, as well as printed on the bash terminal.
I want only what I echo in the wrapper itself to appear on the terminal; the rest should be recorded in the log.
Please note: the scripts called from the wrapper contain echo statements of their own, but I don't want their output displayed on the terminal.
Is it possible to achieve this?
You need to redirect both stdout and stderr of your called scripts into your logfile:
./your_other_script.sh >> /var/log/mylogfile.txt 2>&1
(Note the order: redirect stdout to the file first, then duplicate stderr onto it.)
You get output on the terminal because of tee. So you can redirect tee's own stdout to /dev/null:
start_logging ${LOGFILE}
{
Function1
Function2
} 2>&1 | tee -a ${LOGFILE} >/dev/null
or simply remove the tee, so that all output is redirected only into the file:
start_logging ${LOGFILE}
{
Function1
Function2
} >>${LOGFILE} 2>&1
(note the order: the file redirection must come before 2>&1, otherwise stderr still reaches the terminal)
or, for bigger parts of the script, enclose the part in a ( ) pair, which executes it in a subshell, and redirect the subshell's output to /dev/null:
(
start_logging ${LOGFILE}
{
Function1
Function2
} 2>&1 | tee -a ${LOGFILE}
) >/dev/null
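If you want a hybrid of the two, with your wrapper's own messages on the terminal and everything the called scripts print in the log, a common trick is to save the terminal's stdout on a spare file descriptor before redirecting. A minimal sketch under that assumption (Function1, Function2 and LOGFILE are from the question; the say helper is hypothetical):
exec 3>&1                        # save the terminal's stdout on FD 3
say() { echo "$@" >&3; }         # hypothetical helper: always prints to the terminal

say "wrapper starting..."
{
    Function1
    Function2
} >>"${LOGFILE}" 2>&1            # everything from the called scripts goes to the log
say "wrapper finished, log in ${LOGFILE}"
exec 3>&-                        # close the spare descriptor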
I am assigning the output of a command to variable A:
A=$(some_command)
How can I "capture" stderr into a variable B ?
I have tried some variations with 2>&1 and read but that does not work:
A=$(some_command) 2>&1 | read B
echo $B
Here's a code snippet that might help you
# capture stderr into a variable and print it
echo "capture stderr into a variable and print it"
var=$(lt -l /tmp 2>&1)   # 'lt' is a deliberate typo, so the command fails and writes to stderr
echo $var
Running that prints:
capture stderr into a variable and print it
zsh: command not found: lt
# capture stdout into a variable and print it
echo "capture stdout into a variable and print it"
var=$(ls -l /tmp)
echo $var
# capture both stderr and stdout into a variable and print it
echo "capture both stderr and stdout into a variable and print it"
var=$(ls -l /tmp 2>&1)
echo $var
# a more classic way of executing a command, which I always follow, is shown below.
# This way I am always in control of what is going on and can act accordingly
if somecommand ; then
echo "command succeeded"
else
echo "command failed"
fi
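Since the if statement tests the exit status of the last command, and a variable assignment passes through the exit status of its command substitution, you can combine both ideas in one step. A small sketch (somecommand is the placeholder from above):
# capture combined output and branch on the exit status at once
if output=$(somecommand 2>&1); then
    echo "command succeeded: $output"
else
    echo "command failed: $output"
fi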
If you have to capture the output and stderr in different variables, then the following might help as well
## create a file using file descriptor for stdout
exec 3> stdout.txt
# create a file using file descriptor for stderr
exec 4> stderr.txt
"$1" /tmp 2>&4 >&3   # run the command given as the script's first argument; stdout goes to FD 3, stderr to FD 4
## close file descriptor
exec 3>&-
exec 4>&-
## open file descriptor for reading
exec 3< stdout.txt
exec 4< stderr.txt
## read from file using file descriptor
read line <&3
read line2 <&4
## close file descriptor
exec 3<&-
exec 4<&-
## print line read from file
echo "stdout: $line"
echo "stderr: $line2"
## delete file
rm stdout.txt
rm stderr.txt
You can try running it like this:
$ bash test.sh pwd
stdout: /tmp/somedir
stderr:
$ bash test.sh pwdd
stdout:
stderr: test.sh: line 8: pwdd: command not found
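If the file-descriptor bookkeeping above feels heavy, the same separation is commonly done with a throwaway temp file: command substitution grabs stdout, while stderr goes to the file and is read back afterwards. A minimal sketch (some_command stands in for your command):
tmp_err=$(mktemp)                  # temp file for the error stream
A=$(some_command 2>"$tmp_err")     # command substitution captures stdout only
B=$(<"$tmp_err")                   # read the captured stderr back into B
rm -f "$tmp_err"
echo "stdout: $A"
echo "stderr: $B"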
As noted in a comment, your use case may be better served by other scripting languages. An example: in Perl you can achieve what you want quite simply:
#!/usr/bin/env perl
use v5.26; # or earlier versions
use Capture::Tiny 'capture'; # library is not in core
my $cmd = 'date';
my @arg = ('-R', '-u');
my ($stdout, $stderr, $exit) = capture {
system( $cmd, @arg );
};
say "STDOUT: $stdout";
say "STDERR: $stderr";
say "EXIT: $exit";
I'm sure similar solutions are available in python, ruby, and all the rest.
I gave it another try using process substitution and came up with this:
# command with no error
date +%b > >(read A; if [ "$A" = 'Sep' ]; then echo 'September'; fi ) 2> >(read B; if [ ! -z "$B" ]; then echo "$B"; fi >&2)
September
# command with error
date b > >(read A; if [ "$A" = 'Sep' ]; then echo 'September'; fi ) 2> >(read B; if [ ! -z "$B" ]; then echo "$B"; fi >&2)
date: invalid date 'b'
# command with both at the same time should work too
I had no success "exporting" the variables from the subprocesses back to the original script. It might be possible though. I just couldn't figure it out.
But this gives you at least access to stdout and stderr as a variable. This means you can do whatever processing you want on them as variables. It depends on your use case if this is of any help to you. Good luck :-)
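The variables don't make it back because each process substitution runs in a separate process. If all you need is stderr in a variable while stdout keeps flowing to the terminal, a file-descriptor swap in the current shell avoids that problem. A sketch using the question's names:
# stdout of some_command passes through to the terminal,
# stderr lands in B, and B is set in the current shell
{ B=$(some_command 2>&1 >&3); } 3>&1
echo "stderr was: $B"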
I have a set of bash log functions which enable me to comfortably redirect all output to a log file and bail out in case something happens:
#! /usr/bin/env bash
# This script is meant to be sourced
export SCRIPT=$0
if [ -z "${LOG_FILE}" ]; then
export LOG_FILE="./log.txt"
fi
# https://stackoverflow.com/questions/11904907/redirect-stdout-and-stderr-to-function
# If the message is piped, log receives only the type; if the message is passed
# as a parameter, it receives the type and the message
log() {
local TYPE
local IN
local PREFIX
local LINE
TYPE="$1"
if [ -n "$2" ]; then
IN="$2"
else
if read -r LINE; then
IN="${LINE}"
fi
while read -r LINE; do
IN="${IN}\n${LINE}"
done
IN=$(echo -e "${IN}")
fi
if [ -n "${IN}" ]; then
PREFIX=$(date +"[%X %d-%m-%y - $(basename "${SCRIPT}")] ${TYPE}: ")
IN="$(echo "${IN}" | awk -v PREFIX="${PREFIX}" '{printf PREFIX}$0')"
touch "${LOG_FILE}"
echo "${IN}" >> "${LOG_FILE}"
fi
}
# receives message as parameter or piped, logs as info
info() {
log "( INFO )" "$#"
}
# receives message as parameter or piped, logs as an error
error() {
log "(ERROR )" "$#"
}
# logs error and exits
fail() {
error "$1"
exit 1
}
# Reroutes stdout to info and stderr to error
log_wrap()
{
"$#" > >(info) 2> >(error)
return $?
}
Then I use the functions as follows:
LOG_FILE="logging.log"
source "log_functions.sh"
info "Program started"
log_wrap some_command arg0 arg1 --kwarg=value || fail "Program failed"
Which works. Since log_wrap redirects stdout and stderr, I don't want it interfering with commands composed using pipes or redirections, such as:
log_wrap echo "File content" > ~/user_file || fail "user_file could not be created."
log_wrap echo "File content" | sudo tee ~/root_file > /dev/null || fail "root_file could not be created."
So I want a way to group those commands so their redirection is solved and then pass that to log_wrap. I am aware of two ways of grouping:
Subshells: they are not meant to be passed around; naturally, this:
log_wrap ( echo "File content" > ~/user_file ) || fail "user_file could not be created."
throws a syntax error.
Braces (grouping?, context?): When called inside a command, the brace is interpreted as an argument.
log_wrap { echo "File content" > ~/user_file } || fail "user_file could not be created."
Is roughly equivalent (in my understanding) to:
log_wrap '{' echo "File content" > ~/user_file '}' || fail "user_file could not be created."
To recapitulate, my question is: Is there a way to pass a composition of commands, in my case composed by redirection/piping, to a bash function?
The way it's set up, you can only pass what POSIX calls simple commands: command names and arguments. No compound commands like subshells or brace groups will work.
However, you can use functions to run arbitrary code in a simple command:
foo() { { echo "File content" > ~/user_file; } || fail "user_file could not be created."; }
log_wrap foo
You could also consider just automatically applying your wrapper to all commands in the rest of the script using exec:
exec > >(info) 2> >(error)
{ echo "File content" > ~/user_file; } || fail "user_file could not be created.";
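One caveat with the exec form: from that point on everything is logged, including output you may still want on the terminal. Saving the original descriptors first lets you switch back. A hedged sketch building on the info/error functions above:
exec 3>&1 4>&2               # save the original stdout and stderr
exec > >(info) 2> >(error)   # from here on, everything is logged
echo "this line is logged as info"
ls /nonexistent              # its error message is logged as error
exec 1>&3 2>&4 3>&- 4>&-     # restore the terminal and close the spares
echo "back on the terminal"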
I'm trying to create a function that will log whether a command executed successfully or not.
function LOG_CMD() {
"$#"
local exit_code=$?
if [ $exit_code -eq 0 ]; then
echo -e "[$(date)]\t[SUCCESS]${#}" | sudo tee -a $LOG_FILE
else
echo -e "[$(date)]\t[ERROR]${#}" | sudo tee -a $LOG_FILE
fi
}
This works for most commands, but I'm having problems with anything that uses a pipe. For example, when I try to use a pipe and tee to create a config file, the log entry gets written to the config file:
LOG_CMD echo "ALTER USER '${1}'@'localhost' IDENTIFIED BY '${2}';" | sudo tee -a /sql-init
Because I'm often writing to files that the user won't have permission for, I've avoided appending to files with >>.
Pipes are not arguments; they separate two completely different commands. The only way to do what you want is to pass a single string argument to LOG_CMD, then use eval to execute it.
LOG_CMD() {
eval "$1"
local exit_code=$?
if [ "$exit_code" -eq 0 ]; then
result=SUCCESS
else
result=ERROR
fi
printf '[%s]\t[%s] %s\n' "$(date)" "$result" "$1" | sudo tee -a "$LOG_FILE"
}
LOG_CMD "echo \"ALTER USER '${1}'#'localhost' IDENTIFIED BY '${2}';\" | sudo tee -a /sql-init"
Keep in mind the dangers of passing a dynamically constructed command to eval, however.
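If you'd rather avoid eval, the function trick from an earlier answer applies here too: keep the original "$@"-based LOG_CMD from the question and move the pipeline into a named function, so LOG_CMD only ever sees a simple command. A sketch (write_sql_init is a hypothetical name):
# hypothetical wrapper: the pipe lives inside the function,
# so the "$@"-style LOG_CMD from the question handles it as a simple command
write_sql_init() {
    echo "ALTER USER '${1}'@'localhost' IDENTIFIED BY '${2}';" | sudo tee -a /sql-init
}
LOG_CMD write_sql_init "$1" "$2"
The log will then record the function name and its arguments rather than the full pipeline.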
How can I save the error output when running my script to a file?
I have my code below, but it does not capture errors and save them to test14.log.
Can somebody give me a hint on what could be wrong in my code?
LOGFILE=/usr/local/etc/backups/test14.log
"$(date "+%m%d%Y %T") : Starting work" >> $LOGFILE 2>&1
Send the error output of the command to the log as well, and mind the order of the redirections. I typically use functions for this type of process (similar to below).
Use this:
>> $LOGFILE 2>&1
which appends stdout to the file and then duplicates stderr onto it, rather than
2>&1 >> $LOGFILE
which points stderr at the terminal's stdout before the file redirection takes effect, so errors never reach the log.
###
function run_my_stuff {
echo "$(date "+%m%d%Y %T") : Starting work"
... more commands ...
echo "$(date "+%m%d%Y %T") : Done"
}
## call function and append to log
run_my_stuff >> ${LOGFILE} 2>&1 # OR
run_my_stuff 2>&1 | tee -a ${LOGFILE} # watch the log
Use this code:
#!/bin/bash
LOGFILE=/usr/local/etc/backups/test14.log
(
echo "$(date "+%m%d%Y %T") : Starting work"
... more commands ...
echo error 1>&2 # test stderr
echo "$(date "+%m%d%Y %T") : Done"
) >& $LOGFILE
The () makes BASH execute most of the script in a subshell. All output of the subshell (stderr and stdout) is redirected to the logfile.
There are normally 2 outputs that you see on your screen when you execute a command:
STDOUT
STDERR
You can redirect them independently per command.
For example:
ls >out 2>err
will give you two files; out will contain the output from ls and err will be empty.
ls /nonexisting 1>out 2>err
will give you an empty out file and err will contain
"ls: cannot access /nonexisting: No such file or directory"
The 2>&1 redirects 2 (STDERR) to 1 (STDOUT)
Your echo does not have any errors, therefore there is no output on the STDERR.
So your code would probably need to be something like this:
echo "$(date "+%m%d%Y %T") : Starting work"
work_command > work_output 2>> $LOGFILE
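The same idea scales to a whole block: group the commands with braces and send only the group's stderr to the log, so normal output stays on the terminal. A minimal sketch (work_command_1 and work_command_2 are placeholders):
{
    echo "$(date "+%m%d%Y %T") : Starting work"
    work_command_1
    work_command_2
    echo "$(date "+%m%d%Y %T") : Done"
} 2>> "$LOGFILE"    # only errors from the block are appended to the log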
I have a huge bash script and I want to log specific blocks of code to specific, small log files (instead of just one huge log file).
I have the following two methods:
# in this case, 'log' is a bash function
# Using code block & piping
{
# ... bash code ...
} | log "file name"
# Using Process Substitution
log "file name" < <(
# ... bash code ...
)
Both methods may interfere with the proper execution of the bash script, e.g. when assigning values to a variable (like the problem presented here).
How do you suggest to log the output of commands to log files?
Edit:
This is what I tried to do (among many other variations), but it doesn't work as expected:
function log()
{
if [ -z "$counter" ]; then
counter=1
echo "" >> "./General_Log_File" # Create the summary log file
else
(( ++counter ))
fi
echo "" > "./${counter}_log_file" # Create specific log file
# Display text-to-be-logged on screen & add it to the summary log file
# & write text-to-be-logged to it's corresponding log file
exec 1> >(tee "./${counter}_log_file" | tee -a "./General_Log_File") 2>&1
}
log # Logs the following code block
{
# ... Many bash commands ...
}
log # Logs the following code block
{
# ... Many bash commands ...
}
The results vary between executions: sometimes the log files are created and sometimes they aren't (which raises an error).
You could try something like this:
function log()
{
local logfile=$1
local errfile=$2
exec > $logfile
exec 2> $errfile # if $errfile is not an empty string
}
log $fileA $errfileA
echo stuff
log $fileB $errfileB
echo more stuff
This would redirect all stdout/stderr from current process to a file without any subprocesses.
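Note that because log() uses exec, the redirection is permanent for the rest of the script; there is no way back to the terminal unless you saved a descriptor first. A sketch of that variant:
exec 5>&1 6>&2        # keep handles on the original stdout/stderr
log "$fileA" "$errfileA"
echo "this goes to the file"
exec 1>&5 2>&6        # back to the terminal when needed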
Edit: the below might then be a good solution, but it is untested:
pipe=$(mktemp -u)   # -u: generate a name only, so mknod can create the FIFO
mknod $pipe p
exec 1>$pipe
function log()
{
if ! [[ -z "$teepid2" ]]; then
kill $teepid2
else
tee <$pipe general_log_file &
teepid1=$!
count=1
fi
tee <$pipe ${count}_logfile &
teepid2=$!
(( ++count ))
}
log
echo stuff
log
echo stuff2
if ! [[ -z "$teepid1" ]]; then kill $teepid1; fi
Thanks to Sahas, I managed to achieve the following solution:
function log()
{
[ -z "$counter" ] && counter=1 || (( ++counter ))
if [ -n "$teepid" ]; then
exec 1>&- 2>&- # close file descriptors to signal EOF to the `tee`
# command in the bg process
wait $teepid # wait for bg process to exit
fi
# Display text-to-be-logged on screen and
# write it to the summary log & to it's corresponding log file
( tee "${counter}.log" < "$pipe" | tee -a "Summary.log" 1>&4 ) &
teepid=$!
exec 1>"$pipe" 2>&1 # redirect stdout & stderr to the pipe
}
# Create temporary FIFO/pipe
pipe_dir=$(mktemp -d)
pipe="${pipe_dir}/cmds_output"
mkfifo "$pipe"
exec 4<&1 # save value of FD1 to FD4
log # Logs the following code block
{
# ... Many bash commands ...
}
log # Logs the following code block
{
# ... Many bash commands ...
}
if [ -n "$teepid" ]; then
exec 1>&- 2>&- # close file descriptors to signal EOF to the `tee`
# command in the bg process
wait $teepid # wait for bg process to exit
fi
It works - I tested it.
References:
Force bash script to use tee without piping from the command line (superuser.com) - helped a lot
I/O Redirection (tldp.org)
$! - PID Variable (tldp.org)
TEST Operators: Binary Comparison (tldp.org)
For simple redirection of a bash code block, without using a dedicated function, do:
(
echo "log this block of code"
# commands ...
# ...
# ...
) &> output.log
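To keep the output of earlier runs, append instead of truncating:
(
    echo "log this block of code"
    # commands ...
) &>> output.log      # bash 4+; use '>> output.log 2>&1' in older shells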