Logging functions in bash and stdout

I'd like to be able to put log messages in the middle of bash functions, without affecting the output of those very functions. For example, consider the following functions log() and get_animals():
# print a log message
log ()
{
echo "Log message: $1"
}
get_animals()
{
log "Fetching animals"
echo "cat dog mouse"
}
values=`get_animals`
echo $values
After which $values contains the string "Log message: Fetching animals cat dog mouse".
How should I modify this script so that "Log message: Fetching animals" is printed to the terminal, and $values contains only "cat dog mouse"?

choroba's solution to another question shows how to use exec to open a new file descriptor.
Translating that solution to this question gives something like:
# Open a new file descriptor that redirects to stdout:
exec 3>&1
log ()
{
echo "Log message: $1" 1>&3
}
get_animals()
{
log "Fetching animals"
echo "cat dog mouse"
}
animals=`get_animals`
echo Animals: $animals
Executing the above produces:
Log message: Fetching animals
Animals: cat dog mouse
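If you no longer need the extra descriptor later in the script, you can close it; a small optional addition (my note, not part of the original answer):
exec 3>&-   # close file descriptor 3 when done logging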
More information about using I/O redirection and file descriptors in Bash can be found at:
Bash Guide for Beginners, section 8.2.3, Redirection and file descriptors
Advanced Bash-Scripting Guide, Chapter 20, I/O Redirection

You can redirect the output to stderr (file descriptor 2) using >&2.
Example:
# print a log message
log ()
{
echo "Log message: $1" >&2
}
get_animals()
{
log "Fetching animals"
echo "cat dog mouse"
}
values=`get_animals`
echo $values
The backticks capture only what goes to stdout, not stderr. The terminal, on the other hand, displays both.
If you really want the log message on stdout, you can redirect stderr back to stdout after assigning to the variable:
# print a log message
log ()
{
echo "Log message: $1" >&2
}
get_animals()
{
log "Fetching animals"
echo "cat dog mouse"
}
values=`get_animals` 2>&1
echo $values
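Note that the placement of the 2>&1 matters. A quick sketch of the contrast, using the same log and get_animals functions as above:
# 2>&1 inside the substitution: stderr is captured along with stdout
values=$(get_animals 2>&1)
echo "$values"    # -> Log message: Fetching animals cat dog mouse
# 2>&1 outside the substitution (as above): only stdout is captured;
# the log message goes to the script's stdout instead of its stderr
values=$(get_animals) 2>&1
echo "$values"    # -> cat dog mouse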

#
#------------------------------------------------------------------------------
# echo the passed params and print them to a log file and the terminal
# with a timestamp, $host_name, and the PID of $0
# usage:
# doLog "INFO some info message"
# doLog "DEBUG some debug message"
# doLog "WARN some warning message"
# doLog "ERROR some really ERROR message"
# doLog "FATAL some really fatal message"
#------------------------------------------------------------------------------
doLog(){
type_of_msg=$(echo "$*"|cut -d" " -f1)
msg=$(echo "$*"|cut -d" " -f2-)
[[ $type_of_msg == DEBUG ]] && [[ $do_print_debug_msgs -ne 1 ]] && return
[[ $type_of_msg == INFO ]] && type_of_msg="INFO " # one space for aligning
[[ $type_of_msg == WARN ]] && type_of_msg="WARN " # as well
# print to the terminal if we have one
test -t 1 && echo " [$type_of_msg] `date "+%Y.%m.%d-%H:%M:%S %Z"` [$run_unit][@$host_name] [$$] ""$msg"
# define a default log file if none is specified in the conf file
test -z "$log_file" && \
mkdir -p "$product_instance_dir/dat/log/bash" && \
log_file="$product_instance_dir/dat/log/bash/$run_unit.`date "+%Y%m"`.log"
echo " [$type_of_msg] `date "+%Y.%m.%d-%H:%M:%S %Z"` [$run_unit][@$host_name] [$$] ""$msg" >> "$log_file"
}
#eof func doLog
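doLog assumes a few variables that are defined elsewhere in the surrounding framework. A minimal, hypothetical setup to try the function standalone (the names come from the snippet above, but these values are made up):
# hypothetical values for the variables doLog expects
host_name=$(hostname)
run_unit=$(basename "$0" .sh)
product_instance_dir=/tmp/myproduct
do_print_debug_msgs=1    # set to 0 to drop DEBUG messages
doLog "INFO some info message"
doLog "DEBUG shown only because do_print_debug_msgs=1"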

You could redirect log output to the standard error stream:
log()
{
echo 1>&2 "Log message: $1"
}
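Used with the question's functions, the call site then stays unchanged; a quick sketch:
animals=$(get_animals)    # "Log message: Fetching animals" appears on the terminal via stderr
echo "Animals: $animals"  # -> Animals: cat dog mouse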

Related

bash script stops logging in the middle

I trimmed my script down, but my log function stops working and I don't understand why. I adapted a script that returns values through stdout, so we can't put anything else on stdout or it corrupts this set of bash scripts. I am on macOS Catalina.
#!/bin/bash
set -e
function log {
MESSAGE=$1
>&2 echo "$MESSAGE"
}
log "message works"
command -v tac >&2
log "test and not work too"
TAC_EXISTS=$?
command -v tail >&2
TAIL_EXISTS=$?
log "message not work"
function log {
MESSAGE=$1
>&2 echo "$MESSAGE"
}
Why don't you write it as
function log {
echo "$1" >&2
}
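That fixes the syntax, but a more likely culprit is the set -e at the top of the script: tac is not installed on macOS by default, so command -v tac exits nonzero and set -e aborts the script before the later log calls run. A sketch of one way to record the probe's result without tripping set -e (my suggestion, not from the original answer):
# run the probe inside a condition so a failure doesn't trigger set -e
if command -v tac >/dev/null 2>&1; then TAC_EXISTS=0; else TAC_EXISTS=1; fi
if command -v tail >/dev/null 2>&1; then TAIL_EXISTS=0; else TAIL_EXISTS=1; fi
log "this message is reached now"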

Passing subshell to bash function

I have a set of bash log functions which enable me to comfortably redirect all output to a log file and bail out in case something happens:
#! /usr/bin/env bash
# This script is meant to be sourced
export SCRIPT=$0
if [ -z "${LOG_FILE}" ]; then
export LOG_FILE="./log.txt"
fi
# https://stackoverflow.com/questions/11904907/redirect-stdout-and-stderr-to-function
# If the message is piped, log receives just the type;
# if the message is a parameter, log receives type, message
log() {
local TYPE
local IN
local PREFIX
local LINE
TYPE="$1"
if [ -n "$2" ]; then
IN="$2"
else
if read -r LINE; then
IN="${LINE}"
fi
while read -r LINE; do
IN="${IN}\n${LINE}"
done
IN=$(echo -e "${IN}")
fi
if [ -n "${IN}" ]; then
PREFIX=$(date +"[%X %d-%m-%y - $(basename "${SCRIPT}")] ${TYPE}: ")
IN="$(echo "${IN}" | awk -v PREFIX="${PREFIX}" '{printf PREFIX}$0')"
touch "${LOG_FILE}"
echo "${IN}" >> "${LOG_FILE}"
fi
}
# receives message as parameter or piped, logs as info
info() {
log "( INFO )" "$#"
}
# receives message as parameter or piped, logs as an error
error() {
log "(ERROR )" "$#"
}
# logs error and exits
fail() {
error "$1"
exit 1
}
# Reroutes stdout to info and stderr to error
log_wrap()
{
"$@" > >(info) 2> >(error)
return $?
}
Then I use the functions as follows:
LOG_FILE="logging.log"
source "log_functions.sh"
info "Program started"
log_wrap some_command arg0 arg1 --kwarg=value || fail "Program failed"
Which works. Since log_wrap redirects stdout and stderr, I don't want it interfering with commands composed using piping or redirections, such as:
log_wrap echo "File content" > ~/user_file || fail "user_file could not be created."
log_wrap echo "File content" | sudo tee ~/root_file > /dev/null || fail "root_file could not be created."
So I want a way to group those commands so their redirection is solved and then pass that to log_wrap. I am aware of two ways of grouping:
Subshells: They are not meant to be passed around, naturally this:
log_wrap ( echo "File content" > ~/user_file ) || fail "user_file could not be created."
throws a syntax error.
Braces (grouping?, context?): When called inside a command, the brace is interpreted as an argument.
log_wrap { echo "File content" > ~/user_file } || fail "user_file could not be created."
Is roughly equivalent (in my understanding) to:
log_wrap '{' echo "File content" > ~/user_file '}' || fail "user_file could not be created."
To recapitulate, my question is: Is there a way to pass a composition of commands, in my case composed by redirection/piping, to a bash function?
The way it's set up, you can only pass what POSIX calls simple commands -- command names and arguments. No compound commands like subshells or brace groups will work.
However, you can use functions to run arbitrary code in a simple command:
foo() { { echo "File content" > ~/user_file; } || fail "user_file could not be created."; }
log_wrap foo
You could also consider just automatically applying your wrapper to all commands in the rest of the script using exec:
exec > >(info) 2> >(error)
{ echo "File content" > ~/user_file; } || fail "user_file could not be created.";

Execute correctly a bash command that has interactive input

I am trying to execute a command from bash and retrieve its stdout, stderr and exit code.
So far so good; there are plenty of ways to do that.
The problem begins when the program has interactive input.
More precisely, I execute "git commit" (without -m), and "GNU nano" is launched so that I can enter a commit message.
If I simply use:
git commit
or
exec git commit
I can see the prompt, but I can't get stdout/stderr.
If I use
output=`git commit 2>&1`
or
output=$(git commit 2>&1)
I can retrieve stdout/stderr, but I can't see the prompt.
I can still press Ctrl+X to abort the git commit.
My first attempt was via a function call, and my script ends up hanging on a blank screen where Ctrl+X / Ctrl+C don't work.
function Execute()
{
if [[ $# -eq 0 ]]; then
echo "Error : function 'Execute' called without argument."
exit 3
fi
local msg
msg=$("$@" 2>&1)   # assign separately from 'local' so $? reflects the command's exit code
local error=$?
if [[ $error -ne 0 ]]; then
echo "Error : '$(printf '%q ' "$@")' returned error code '$error'."
echo "$1 message :"
echo "$msg"
echo
exit 1
fi
}
Execute git commit
I am beginning to run out of ideas/knowledge. Is what I want to do impossible? Or is there a way that I don't know?
Try this, which processes every line written to stdout or stderr and redirects it based on content:
#!/bin/env bash
foo() {
printf 'prompt: whos on first?\n' >&2
printf 'error: uh-oh\n' >&2
}
var=$(foo 2>&1 | awk '{print | "cat>&"(/prompt/ ? 2 : 1)}' )
echo "var=$var"
$ ./tst.sh
prompt: whos on first?
var=error: uh-oh
or this, which just processes stderr:
#!/bin/env bash
foo() {
printf 'prompt: whos on first?\n' >&2
printf 'error: uh-oh\n' >&2
}
var=$(foo 2> >(awk '{print | "cat>&"(/prompt/ ? 2 : 1)}') )
echo "var=$var"
$ ./tst.sh
prompt: whos on first?
var=error: uh-oh
The awk command splits its input between stderr and stdout based on content, and only stdout is saved in the variable var. I don't know whether your prompt arrives on stderr or stdout, or where you really want it to go, so massage to suit: decide what should go to stdout vs stderr, and what you want captured in the variable vs printed to the screen. You just need something in the prompt that can be recognized as such, so that the prompt can be separated from the rest of stdout and stderr and printed to stderr while everything else gets redirected to stdout.
Alternatively, here's a version that prints the first line (regardless of content) to stderr for display and everything else to stdout for capture:
$ cat tst.sh
#!/bin/env bash
foo() {
printf 'prompt: whos on first?\n' >&2
printf 'error: uh-oh\n' >&2
}
var=$(foo 2>&1 | awk '{print | "cat>&"(NR>1 ? 1 : 2)}' )
echo "var=$var"
$ ./tst.sh
prompt: whos on first?
var=error: uh-oh
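A further variation for the git commit case specifically (my sketch, not from the answer above): keep stdout attached to the terminal so the nano editor still works, and capture only stderr in the variable:
# 2>&1 first points stderr at the capture pipe, then >/dev/tty gives
# stdout back to the terminal so the interactive editor can run
output=$(git commit 2>&1 >/dev/tty)
echo "captured stderr: $output"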

How can I log my logins/logouts and screen locks/unlocks in gnome

I want to create a logfile with a log of certain events like:
log into gnome
lock screen
unlock screen
log out
My plan was to write a script that runs in the background as a child process of the gnome session. It would start by appending "LOGIN", monitor for screen locking/unlocking, and append "LOGOUT" when it received a SIGHUP (meaning the session ended).
I wrote a script [1] which works if I start it in a shell, but it's clunky. I want this program running in the background -- I don't want to have to remember to start it each time I log in.
Can someone point me in the right direction?
[1] The script:
#!/bin/bash
# param $1: type, in:
# ["SCREEN_LOCKED",
# "SCREEN_UNLOCKED",
# "LOGIN",
# "LOGOUT",
# "SIGINT",
# "SIGTERM"]
function write_log {
if [ -z "$1" ]; then
set -- "unspecified"   # positional parameters cannot be assigned directly
fi
echo -e "$1\t$(date)" >> "$LOG"
}
function notify {
echo "$#" >&2
}
function show_usage {
notify "Usage: $0 login <address> <logfile>"
notify "Parameters:"
notify " login: You must use the string 'login' to avoid seeing this message."
notify " <logfile>: File to store logs."
notify ""
notify "This script is designed to go in the bashrc file, and be called in the"
notify "form of: $0 login '$USER#$(uname -n)' >>/path/to/logfile &"
notify ""
}
if [ "$#" -eq 0 ]; then
show_usage
exit 1
fi
if [ "$1" != "login" ]; then
show_usage
notify "Error: first parameter must be the string 'login'."
exit 1
fi
LOG="$2"
if [ -z "$LOG" ]; then
notify "Error: please specify a logfile."
exit 1
elif [ -f "$LOG" ]; then
# If the logfile exists, verify that the last action was a LOGOUT.
LASTACTION=$(tail -1 "$LOG" | awk '{print $1}')
if [ "$LASTACTION" != "LOGOUT" ]; then
notify "Logfile '$LOG' exists but last action was not logout: $LASTACTION"
exit 1
fi
else
# If the file does not exist, create it.
touch "$LOG" || ( notify "Cannot create logfile: '$2'" && exit 1 )
fi
# Begin by logging in:
write_log "LOGIN"
# Handle signals by logging:
trap "write_log 'LOGOUT'; exit" SIGHUP
trap "write_log 'INTERRUPTED_SIGINT'; exit 1" SIGINT
trap "write_log 'INTERRUPTED_SIGTERM'; exit 1" SIGTERM
# Monitor gnome for screen locking. Log these events.
dbus-monitor --session "type='signal',interface='org.gnome.ScreenSaver'" | \
(
while true; do
read X;
if echo $X | grep "boolean true" &> /dev/null; then
write_log "SCREEN_LOCKED"
elif echo $X | grep "boolean false" &> /dev/null; then
write_log "SCREEN_UNLOCKED"
fi
done
)
I also have such a script; it works well when started with a desktop file in the autostart directory:
$ cat ~/.config/autostart/watcher.sh.desktop
[Desktop Entry]
Type=Application
Exec=/home/<username>/hg/programs/system/watcher/watcher.sh
Hidden=false
X-GNOME-Autostart-enabled=true
Name[de_DE]=watcher
Name=watcher
Comment[de_DE]=
Comment=
Nowadays I think it's better to listen to the LockedHint rather than screensaver messages. That way you're not tied to a screensaver implementation.
Here's a simple script to do that:
gdbus monitor -y -d org.freedesktop.login1 | grep LockedHint
Gives this:
/org/freedesktop/login1/session/_32: org.freedesktop.DBus.Properties.PropertiesChanged ('org.freedesktop.login1.Session', {'LockedHint': <true>}, @as [])
/org/freedesktop/login1/session/_32: org.freedesktop.DBus.Properties.PropertiesChanged ('org.freedesktop.login1.Session', {'LockedHint': <false>}, @as [])
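To feed those events into the question's write_log function, a rough sketch (the matched strings are assumptions based on the sample output above):
gdbus monitor -y -d org.freedesktop.login1 | \
while read -r line; do
case "$line" in
*"'LockedHint': <true>"*) write_log "SCREEN_LOCKED" ;;
*"'LockedHint': <false>"*) write_log "SCREEN_UNLOCKED" ;;
esac
done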

logging blocks of code to log files in bash

I have a huge bash script and I want to log specific blocks of code to specific, small log files (instead of just one huge log file).
I have the following two methods:
# in this case, 'log' is a bash function
# Using code block & piping
{
# ... bash code ...
} | log "file name"
# Using Process Substitution
log "file name" < <(
# ... bash code ...
)
Both methods may interfere with the proper execution of the bash script, e.g. when assigning values to a variable (like the problem presented here).
How do you suggest to log the output of commands to log files?
Edit:
This is what I tried to do (besides many other variations), but doesn't work as expected:
function log()
{
if [ -z "$counter" ]; then
counter=1
echo "" >> "./General_Log_File" # Create the summary log file
else
(( ++counter ))
fi
echo "" > "./${counter}_log_file" # Create specific log file
# Display text-to-be-logged on screen & add it to the summary log file
# & write text-to-be-logged to its corresponding log file
exec 1> >(tee "./${counter}_log_file" | tee -a "./General_Log_File") 2>&1
}
log # Logs the following code block
{
# ... Many bash commands ...
}
log # Logs the following code block
{
# ... Many bash commands ...
}
The results vary between executions: sometimes the log files are created and sometimes they aren't (which raises an error).
You could try something like this:
function log()
{
local logfile=$1
local errfile=$2
exec > "$logfile"
exec 2> "$errfile" # if $errfile is not an empty string
}
log $fileA $errfileA
echo stuff
log $fileB $errfileB
echo more stuff
This would redirect all stdout/stderr from current process to a file without any subprocesses.
Edit: The below might be a good solution then, but not tested:
pipe=$(mktemp -u)   # -u: print a name without creating the file, so mknod can create the FIFO
mknod "$pipe" p
exec 1>$pipe
function log()
{
if ! [[ -z "$teepid2" ]]; then
kill $teepid2
else
tee <$pipe general_log_file &
teepid1=$!
count=1
fi
tee <$pipe ${count}_logfile &
teepid2=$!
(( ++count ))
}
log
echo stuff
log
echo stuff2
if ! [[ -z "$teepid1" ]]; then kill $teepid1; fi
Thanks to Sahas, I managed to achieve the following solution:
function log()
{
[ -z "$counter" ] && counter=1 || (( ++counter ))
if [ -n "$teepid" ]; then
exec 1>&- 2>&- # close file descriptors to signal EOF to the `tee`
# command in the bg process
wait $teepid # wait for bg process to exit
fi
# Display text-to-be-logged on screen and
# write it to the summary log & to its corresponding log file
( tee "${counter}.log" < "$pipe" | tee -a "Summary.log" 1>&4 ) &
teepid=$!
exec 1>"$pipe" 2>&1 # redirect stdout & stderr to the pipe
}
# Create temporary FIFO/pipe
pipe_dir=$(mktemp -d)
pipe="${pipe_dir}/cmds_output"
mkfifo "$pipe"
exec 4<&1 # save value of FD1 to FD4
log # Logs the following code block
{
# ... Many bash commands ...
}
log # Logs the following code block
{
# ... Many bash commands ...
}
if [ -n "$teepid" ]; then
exec 1>&- 2>&- # close file descriptors to signal EOF to the `tee`
# command in the bg process
wait $teepid # wait for bg process to exit
fi
It works - I tested it.
References:
Force bash script to use tee without piping from the command line # superuser.com - helped a lot
I/O Redirection # tldp.org
$! - PID Variable # tldp.org
TEST Operators: Binary Comparison # tldp.org
For simple redirection of a bash code block, without using a dedicated function, do:
(
echo "log this block of code"
# commands ...
# ...
# ...
) &> output.log
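A related note (mine, not from the answer): with a brace group instead of a subshell the redirection works the same way, but the block runs in the current shell, so variable assignments made inside it survive -- which matters given the question's concern about assigning values to variables:
{
echo "log this block of code"
my_var="still set after the block"   # hypothetical variable; assignment persists
} &> output.log
echo "my_var is: $my_var"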
