Add timestamp to tee'd output, but not original output - bash

I'm writing a little budget script to keep an eye on my finances. I'd like to keep a log of all of my transactions and when they happened.
Currently, I pass each expense in as an argument:
f)
    echo "$OPTARG spent on food" | tee spendinglogs.log
    ... # take away money from monthly budget
    echo "$REMAINING_FOOD_BUDGET remaining" | tee spendinglogs.log
m)
    echo "$OPTARG spent on miscellaneous" | tee spendinglogs.log
    ... # take away money from monthly budget
    echo "$REMAINING_MISC_BUDGET remaining" | tee spendinglogs.log
... # etc
I don't want to timestamp output to the terminal, but I do want to timestamp output to the logs. Is there a way to do this?
For example
echo "$OPTARG spent on food" | tee `date %d-%m-%y %H_%M_%S` spendinglogs.log
But I can't imagine that working.

EDIT: Tested and updated with correct info
Check out ts from the moreutils package.
If you're using bash, you can tee to a process substitution, which tee sees as a file:
echo "$OPTARG spent on food" | tee >(ts "%d-%m-%y %H_%M_%S" > spendinglogs.log)
An earlier version of this answer also suggested pee, likewise from moreutils, but that was incorrect: pee appears to buffer stdin before sending it to the output pipes, so it will not work for timestamping (it does work for commands where the timing is not important).
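To avoid repeating that pipeline for every option, you could wrap it in a small helper; a minimal sketch, assuming ts is available and using >> so each run appends to the log instead of truncating it (log_spend is a hypothetical name):
log_spend() {
    # print to the terminal unmodified, append a timestamped copy to the log
    tee >(ts "%d-%m-%y %H_%M_%S" >> spendinglogs.log)
}

echo "$OPTARG spent on food" | log_spend
echo "$REMAINING_FOOD_BUDGET remaining" | log_spend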

Try this:
echo something 2>&1 | while read -r line; do echo "$line"; echo "$(date) $line" >> something.log; done

function tee_with_timestamps () {
    local logfile=$1
    while read data; do
        echo "${data}" | sed -e "s/^/$(date '+%T') /" >> "${logfile}"
        echo "${data}"
    done
}

echo "test" | tee_with_timestamps "file.txt"

The snippet below prepends a timestamp to each line written to the log file, while the console output is left without any timestamp.
exec &> >(tee -a >(sed "s/^/$(date) /" >> "${filename}.log"))
&> : redirects both stdout and stderr
>> : appends to the file
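One caveat: $(date) in the sed expression is expanded only once, when the redirection is set up, so every logged line carries the same timestamp. If per-line timestamps matter, a read loop inside the process substitution is one workaround; a sketch under that assumption, reusing "${filename}.log" as the log path:
exec &> >(tee >(while IFS= read -r line; do
    printf '%s %s\n' "$(date)" "$line"
done >> "${filename}.log"))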

Related

How can I grep a list of names from case?

So as an example, I have a bunch of apps that are constantly writing to /var/log/app/*/nonsence.file. There's nothing else in those folders, just logs from this one set of apps, so I can easily do:
cat /var/log/app/*/nonsence.file
and I'll get a nice stream of the app logs.
Mixed into this stream are periodic references to people. I'd like to build a script to trigger when certain names appear in the stream.
I can do this easily enough:
cat /var/log/app/*/nonsence.file | grep 'greg\|john\|suzy\|stacy'
and I can put THAT into a simple script thusly:
#!/bin/sh
NAME=`cat /var/log/app/*/nonsence.file | grep 'greg\|john\|suzy\|stacy'`

case "$NAME" in
    "greg" )  echo "I found greg!"  >> ~/names.meh ;;
    "john" )  echo "I found john!"  >> ~/names.meh ;;
    "suzy" )  echo "I found suzy!"  >> ~/names.meh ;;
    "stacy" ) echo "I found stacy!" >> ~/names.meh ;;
    * )       echo "forever alone..." >> ~/names.meh ;;
esac
easy peasy!
The trouble is, the list of names changes from time to time, and I would really like a neater list.
After some thinking, I believe what I REALLY want to do is add each name in the case section only. So what do I need to do in the NAME variable assignment to make the command grep for the names referenced in the case section?
cat file | grep is a useless use of cat. Just grep file.
Commands in a pipeline are block-buffered by default.
The >> ~/names.meh is just repetition. Specify it once for the whole block.
Backticks ` are discouraged. It's preferred to use $(..) instead.
Each time NAME=... is assigned, the whole file is read, while you say you want:
... I'd like to build a script to trigger when certain names appear in the stream.
which suggests you want to react as soon as a name appears in the stream, not some time later.
You may try:
patterns=(greg john suzy stacy)

printf "%s\n" /var/log/app/*/nonsence.file |
# tail each file at the same time by spawning a background process for each
xargs -P0 -n1 tail -F -n+1 |
# grep for the patterns
# pass the patterns from a file
# the <(...) is a process substitution, a bash extension
grep --line-buffered -f <(printf "%s\n" "${patterns[@]}") -o |
# for each grepped name execute a different action
while IFS= read -r line; do
    case "$line" in
        "greg") someaction; ;;
        # etc
        *) echo "Internal error - unhandled pattern"; ;;
    esac
done >> ~/names.meh
Because specifying the patterns twice is lame, you could use an associative array to map the patterns to function names, or just use uniquely named functions and generate the pattern list from them:
pattern_greg() { echo "greg"; }
pattern_kamil() { echo "well, not greg"; }

patterns=($(declare -F | sed 's/declare -f //; /^pattern_/!d; s/pattern_//'))

... |
while IFS= read -r line; do
    if declare -f pattern_"$line" >/dev/null 2>&1; then
        pattern_"$line"
    else
        echo "Internal error occurred"
    fi
done
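With this layout, supporting another name only requires defining one more pattern_* function; for example (pattern_suzy is a hypothetical handler):
pattern_suzy() { echo "I found suzy!"; }
# the patterns array is rebuilt from declare -F, so suzy gets grepped for automatically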
Alternatively (but I like the functions better):
greg_function() { echo do something; }
kamil_callback() { echo do something else; }

declare -A patterns
patterns=([greg]=greg_function [kamil]=kamil_callback)

... | grep -f <(printf "%s\n" "${!patterns[@]}") ... |
while IFS= read -r line; do
    # check whether the array element is set
    if [[ -n "${patterns[$line]}" ]]; then
        "${patterns[$line]}"
    else
        echo error
    fi
done

Bash: send some output to stdout and pipe the rest from inside a loop

How can I print some output from a loop that is otherwise piped to another command?
for f in "${!myList[@]}"; do
    echo "$f" > /dev/stdout # echoed to stdout, how to?
    unzip -qqc "$f"         # piped to the awk script
done | awk -f script.awk
You can use /dev/stderr or the second file descriptor:
echo something >&2 | grep nothing
echo something >/dev/stderr | grep nothing
You can use another file descriptor that will be connected to stdout:
# for a single command group
{ echo something >&3 | grep nothing; } 3>&1
# or for everywhere
exec 3>&1
echo something >&3 | grep nothing
# same as above with named file descriptor
exec {LOG}>&1
echo 123 >&$LOG | grep nothing
You can also redirect the output to the current controlling terminal /dev/tty (if there is one):
echo something >/dev/tty | grep nothing
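Applied to the loop in the question, a minimal sketch using stderr for the per-file message (assuming stderr still points at the terminal):
for f in "${!myList[@]}"; do
    echo "$f" >&2      # goes to the terminal, bypassing the pipe
    unzip -qqc "$f"    # only this reaches the awk script
done | awk -f script.awk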

Ignoring all but the (multi-line) results of the last query sent to a program

I have an executable that accepts queries from stdin and responds to them, reading until EOF. Additionally I have an input file and a special command, let's call those EXEC, FILE and CMD respectively.
What I need to do is:
Pass FILE to EXEC as input.
Disregard all the output corresponding to commands read from FILE (send it to /dev/null).
Pass CMD as the last command.
Fetch output for the last command and save it in a variable.
EXEC's output can be multiline for each query.
I know how to pass FILE + CMD into the EXEC:
echo ${CMD} | cat ${FILE} - | ${EXEC}
but I have no idea how to fetch only output resulting from CMD.
Is there a magical one-liner that does this?
After looking around I've found the following partial solution:
mkfifo mypipe
(tail -f mypipe) | ${EXEC} &

cat ${FILE} | while read line; do
    echo ${line} > mypipe
done
echo ${CMD} > mypipe
This allows me to redirect my input, but now the output gets printed to screen. I want to ignore all the output produced by EXEC in the while loop and get only what it prints for the last line.
I tried what first came into my mind, which is:
(tail -f mypipe) | ${EXEC} > somefile &
But it didn't work, the file was empty.
This is race-prone -- I'd suggest putting in a delay after the kill, or using an explicit sigil to determine when it's been received. That said:
#!/usr/bin/env bash
# route FD 4 to your output routine
exec 4> >(
    output=; trap 'output=1' USR1
    while IFS= read -r line; do
        [[ $output ]] && printf '%s\n' "$line"
    done
); out_pid=$!
# Capture the PID for the process substitution above; note that this requires a very
# new version of bash (4.4?)
[[ $out_pid ]] || { echo "ERROR: Your bash version is too old" >&2; exit 1; }
# Run your program in another process substitution, and close the parent's handle on FD 4
exec 3> >("$EXEC" >&4) 4>&-
# cat your file to FD 3...
cat "$file" >&3
# UGLY HACK: Wait to let your program finish flushing output from those commands
sleep 0.1
# notify the subshell writing output to disk that the ignored input is done...
kill -USR1 "$out_pid"
# UGLY HACK: Wait to let the subprocess actually receive the signal and set output=1
sleep 0.1
# ...and then write the command for which you actually want content logged.
echo "command" >&3
In validating this answer, I'm doing the following:
EXEC=stub_function
stub_function() {
    local count line
    count=0
    while IFS= read -r line; do
        (( ++count ))
        printf '%s: %s\n' "$count" "$line"
    done
}

cat >file <<EOF
do-not-log-my-output-1
do-not-log-my-output-2
do-not-log-my-output-3
EOF
file=file

export -f stub_function
export file EXEC
Output is only:
4: command
You could pipe it into sed:
var=$(YOUR COMMAND | sed '$!d')
This will put only the last line into the variable.
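For example:
$ var=$(printf '%s\n' first second last | sed '$!d')
$ echo "$var"
last
Note that if the last query produces several lines of output, this keeps only the final one of them.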
I think that your program EXEC does something special (opens a connection or remembers state). When that is not the case, you can simply run it twice:
${EXEC} < ${FILE} > /dev/null
myvar=$(echo ${CMD} | ${EXEC})
Or with normal commands:
# Do not use (printf "==%s==\n" 1 2 3 ; printf "oo%soo\n" 4 5 6) | cat
printf "==%s==\n" 1 2 3 | cat > /dev/null
myvar=$(printf "oo%soo\n" 4 5 6 | cat)
When you need to give all input to one process, perhaps you can think of a marker that you can filter on:
(printf "==%s==\n" 1 2 3 ; printf "%s\n" "marker"; printf "oo%soo\n" 4 5 6) | cat | sed '1,/marker/ d'
You should examine your EXEC to see what could serve as a marker. When it is running SQL, you might use something like
(cat ${FILE}; echo 'select "DamonMarker" from dual;' ; echo ${CMD} ) |
${EXEC} | sed '1,/DamonMarker/ d'
and write this in a var with
myvar=$( (cat ${FILE}; echo 'select "DamonMarker" from dual;' ; echo ${CMD} ) |
${EXEC} | sed '1,/DamonMarker/ d' )

echo to stdout and append to file

I have this:
echo "all done creating tables" >> ${SUMAN_DEBUG_LOG_PATH}
but that only appends to the file; it does not write to stdout.
How can I write to stdout and append to a file in the same bash line?
Something like this?
echo "all done creating tables" | tee -a "${SUMAN_DEBUG_LOG_PATH}"
Use the tee command
$ echo hi | tee -a foo.txt
hi
$ cat foo.txt
hi
Normally tee is used; however, here is a version using just bash:
#!/bin/bash
function mytee (){
    fn=$1
    shift
    IFS= read -r LINE
    printf '%s\n' "$LINE"
    printf '%s\n' "$LINE" >> "$fn"
}

SUMAN_DEBUG_LOG_PATH=/tmp/abc
echo "all done creating tables" | mytee "${SUMAN_DEBUG_LOG_PATH}"

Why does this pipeline behave differently in ksh93 compared to bash?

I had an issue with some ksh93 code. As it was very complex, I started reducing it to an example that reproduces the same issue. I ended up with this:
set -o pipefail;
{
    echo "progress" 1>&3 | false
} 3>&1 | cat | \
while read pv_output; do
    echo "meanwhile ... we got "
    echo $pv_output | cat
done
echo $?
When I run this code with ksh93 it outputs "0", when I run it with bash it outputs "1".
# echo "ksh93";ksh93 ./x1.sh ;echo "bash"; bash ./x1.sh
ksh93
meanwhile ... we got
progress
0
bash
meanwhile ... we got
progress
1
However, if I start fiddling with the code and remove the first cat, both shells return "1":
set -o pipefail;
{
    echo "progress" 1>&3 | false
} 3>&1 | \
while read pv_output; do
    echo "meanwhile ... we got "
    echo $pv_output | cat
done
echo $?
Or... if I leave the first cat in but remove the second one from inside the while loop, they work the same way again:
set -o pipefail;
{
    echo "progress" 1>&3 | false
} 3>&1 | cat | \
while read pv_output; do
    echo "meanwhile ... we got "
    echo $pv_output
done
echo $?
Now, for this example I used cat for simplicity's sake. In real life the first cat is actually an awk script processing the output of a complicated command, and the second cat is actually a sed. I mention these so that it is clear that the cat command itself is not the culprit.
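A bash-specific way to see which stage reports the failure is to print PIPESTATUS right after the pipeline; this is only a diagnostic sketch and does not by itself explain the ksh93 difference:
set -o pipefail
{ echo "progress" 1>&3 | false; } 3>&1 | cat | while read pv_output; do :; done
echo "exit: $? pipestatus: ${PIPESTATUS[*]}"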
