Bash script - Modify the output of a command and print it into a file

I'm trying to get the text output of a specified command, modify it somehow (e.g. add a prefix before the output) and print it into a file (.txt or .log):
LOG_FILE=...
LOG_ERROR_FILE=..
command_name >> ${LOG_FILE} 2>> ${LOG_ERROR_FILE}
I would like to do it in one line: modify what the command returns and print it into the files. The same goes for the error output and the regular output.
I'm a beginner in bash scripts, so please be understanding.

Create a function to execute commands and capture stderr and stdout into variables.
function execCommand(){
    local command="$@"
    {
        # the subshell below swaps the command's stdout (wrapped in NUL
        # bytes) onto stderr, so read the stderr chunk first, then stdout
        IFS=$'\n' read -r -d '' STDERR;
        IFS=$'\n' read -r -d '' STDOUT;
    } < <((printf '\0%s\0' "$($command)" 1>&2) 2>&1)
}
function testCommand(){
    grep foo bar
    echo "return code $?"
}
execCommand testCommand
echo "err: $STDERR"
echo "out: $STDOUT"
execCommand "touch /etc/foo"
echo "err: $STDERR"
echo "out: $STDOUT"
execCommand "date"
echo "err: $STDERR"
echo "out: $STDOUT"
Output:
err: grep: bar: No such file or directory
out: return code 2
err: touch: cannot touch '/etc/foo': Permission denied
out:
err:
out: Mon Jan 31 16:29:51 CET 2022
Now you can modify $STDERR and $STDOUT:
execCommand testCommand && { echo "$STDERR" > err.log; echo "$STDOUT" > out.log; }
Explanation: look at the answer from madmurphy. In short, the command's stdout is wrapped in NUL bytes and swapped onto stderr, so the two read calls can split the merged stream back into separate variables.
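Tying this back to the original question, here is a minimal sketch built on execCommand (the OUT:/ERR: prefixes and the use of sed are my own placeholders, not from the thread):
execCommand command_name
# prefix each captured stream and append it to the matching log file
printf '%s\n' "$STDOUT" | sed 's/^/OUT: /' >> "${LOG_FILE}"
printf '%s\n' "$STDERR" | sed 's/^/ERR: /' >> "${LOG_ERROR_FILE}"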

A pipe (|) and/or a redirect (>) is the answer, it seems.
So, as a bogus example to show what I mean: to get all the interfaces that the command ip a spits out, you could pipe its output to a processing command and redirect the result into a file.
ip a | awk -F': *' '/^[0-9]/ { print $2 }' > my_file.txt
If you wish to send it to separate processing, you could redirect into a sub-shell:
$ command -V cd curl bogus > >(awk '{print $NF}' > stdout.txt) 2> >(sed 's/.*\s\(\w\+\):/\1/' > stderr.txt)
$ cat stdout.txt
builtin
(/usr/bin/curl)
$ cat stderr.txt
bogus not found
But it might be better for readability to process in a separate step:
$ command -V cd curl bogus >stdout.txt 2>stderr.txt
$ sed -i 's/.*\s//' stdout.txt
$ sed -i 's/.*\s\(\w\+\):/\1/' stderr.txt
$ cat stdout.txt
builtin
(/usr/bin/curl)
$ cat stderr.txt
bogus not found
There are a myriad of ways to do what you ask, and I guess the situation will have to decide what to use, but here's a start.
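For the one-liner asked about, process substitution is one way to prefix each stream differently while appending to separate files. A sketch, assuming LOG_FILE and LOG_ERROR_FILE are set and using placeholder OUT:/ERR: prefixes:
# stdout is filtered by the first sed, stderr by the second
command_name > >(sed 's/^/OUT: /' >> "${LOG_FILE}") 2> >(sed 's/^/ERR: /' >> "${LOG_ERROR_FILE}")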

To modify the output and write it to a file, while modifying the error stream differently and writing it to a different file, you just need to manipulate the file descriptors appropriately, e.g.:
#!/bin/sh
# A command that writes trivial data to both stdout and stderr
cmd() {
echo 'Hello stdout!'
echo 'Hello stderr!' >&2
}
# Filter both streams and redirect to different files
{ cmd 2>&1 1>&3 | sed 's/stderr/cruel world/' > "$LOG_ERROR_FILE"; } 3>&1 |
sed 's/stdout/world/' > "$LOG_FILE"
The technique is to redirect the error stream to stdout so it can flow into the pipe (2>&1), and then redirect the output stream to an ancillary file descriptor (1>&3), which is being redirected into a different pipe (3>&1).
You can clean it up a bit by moving the file redirections into an earlier exec call, e.g.:
#!/bin/sh
cmd() {
echo 'Hello stdout!'
echo 'Hello stderr!' >&2
}
exec > "$LOG_FILE"
exec 2> "$LOG_ERROR_FILE"
# Filter both streams and redirect to different files
{ cmd 2>&1 1>&3 | sed 's/stderr/cruel world/' >&2; } 3>&1 | sed 's/stdout/world/'
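A sample run, assuming the snippet is saved as filter-demo.sh (a hypothetical name) and the log paths are passed in as environment variables:
$ LOG_FILE=out.log LOG_ERROR_FILE=err.log ./filter-demo.sh
$ cat out.log
Hello world!
$ cat err.log
Hello cruel world!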

Related

Copy stderr to stdout without using tee

I know there are many similar questions, but none of the scenarios satisfies my requirement.
I have a cron job which backs up MySQL databases. Currently, I redirect stderr to Slack and stdout to syslog like this:
mysql-backup.sh 1> >(logger -it DB_BACKUP) 2> >(push-to-slack.sh)
This way, we are instantly notified about any errors during the backup process, and stdout is kept in syslog, but the stderr messages are missing from syslog.
In short, I need stdout+stderr in syslog (with date, PID etc.) and to pipe (or redirect) stderr to push-to-slack.sh.
Solutions without temporary files are preferred.
This sends stderr to push-to-slack.sh while sending both stderr and stdout to logger:
{ mysql-backup.sh 2>&1 1>&3 | tee >(push-to-slack.sh); } 3>&1 | logger -it DB_BACKUP
Reproducible Example
Let's create a function that produces both stdout and stderr:
$ fn() { echo out; echo err>&2; }
Now, let's run the analog of our command above:
$ { fn 2>&1 1>&3 | tee err_only; } 3>&1 | cat >both
$ cat err_only
err
$ cat both
out
err
We can see that err_only captured only the stderr while both captured both stdout and stderr.
(Note to nitpickers: yes, I know that the cat above is "useless", but I am keeping the command parallel to the one the OP needs.)
Without using tee
If you really, seriously can't use tee, then we can do something similar with a shell loop:
{ fn 2>&1 1>&3 | (while read -r line; do echo "$line" >&3; echo "$line"; done >err_only); } 3>&1 | cat >both
Or, using awk, where print>"err_only" writes each line to the file and the trailing 1 passes it through to stdout:
{ fn 2>&1 1>&3 | awk '{print>"err_only"} 1'; } 3>&1 | cat >both

bash stdout some information and pipe other from inside loop

How to print output from a loop which is piped to some other command:
for f in "${!myList[@]}"; do
    echo "$f" > /dev/stdout # echoed to stdout, how to?
    unzip -qqc "$f"         # piped to awk script
done | awk -f script.awk
You can use /dev/stderr or the second file descriptor:
echo something >&2 | grep nothing
echo something >/dev/stderr | grep nothing
You can use another file descriptor that will be connected to stdout:
# for a single command group
{ echo something >&3 | grep nothing; } 3>&1
# or for everywhere
exec 3>&1
echo something >&3 | grep nothing
# same as above with named file descriptor
exec {LOG}>&1
echo 123 >&$LOG | grep nothing
You can also redirect the output to current controlling terminal /dev/tty (if there is one):
echo something >/dev/tty | grep nothing
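Applied to the loop from the question, the single-command-group form would look like this (a sketch, assuming myList and script.awk exist as described):
{ for f in "${!myList[@]}"; do
    echo "$f" >&3   # bypasses the pipe, goes to the original stdout
    unzip -qqc "$f" # flows through the pipe into the awk script
done | awk -f script.awk; } 3>&1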

How to prepend stdout and stderr output with timestamp when redirecting into log files?

In Linux I'm starting a program called $cmd in an init script (SysVInit). I'm already redirecting stdout and stderr of $cmd into two different logfiles called $stdout_log and $stderr_log. Now I also want to add a timestamp in front of every line printed into the logfiles.
I tried to write a function called log_pipe as follows:
log_pipe() {
    while IFS= read -r line; do
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] $line"
    done
}
then pipe the output of my command into this function and after that redirect it to the log files as follows:
$cmd | log_pipe >> "$stdout_log" 2>> "$stderr_log" &
What I get is an empty $stdout_log (stdout), which should be okay because $cmd normally doesn't print anything, and a $stderr_log file with only timestamps but without the error texts.
Where is my faulty reasoning?
PS: Because the problem exists within an init script, I only want to use basic shell commands and no extra packages.
In any POSIX shell, try:
{ cmd | log_pipe >>stdout.log; } 2>&1 | log_pipe >>stderr.log
Also, if you have GNU awk (sometimes called gawk), then log_pipe can be made simpler and faster:
log_pipe() { awk '{print strftime("[%Y-%m-%d %H:%M:%S]"),$0}'; }
Example
As an example, let's create the command cmd:
cmd() { echo "This is out"; echo "This is err">&2; }
Now, let's run our command and look at the output files:
$ { cmd | log_pipe >>stdout.log; } 2>&1 | log_pipe >>stderr.log
$ cat stdout.log
[2019-07-04 23:42:20] This is out
$ cat stderr.log
[2019-07-04 23:42:20] This is err
The problem
cmd | log_pipe >> "$stdout_log" 2>> "$stderr_log"
The above redirects stdout from cmd to log_pipe. The stdout of log_pipe is redirected to $stdout_log and the stderr of log_pipe is redirected to $stderr_log. The problem is that the stderr of cmd is never redirected. It goes straight to the terminal.
As an example, consider this cmd:
cmd() { echo "This is out"; echo "This is err">&2; }
Now, let's run the command:
$ cmd | log_pipe >>stdout.log 2>>stderr.log
This is err
We can see that This is err is not sent to the file stderr.log. Instead, it appears on the terminal. It is never seen by log_pipe. stderr.log only captures error messages from log_pipe.
In Bash, you can also redirect to a subshell using process substitution:
logger.sh
#!/bin/bash
while read -r line; do
    echo "[$(date +%Y-%m-%d\ %H:%M:%S)] $line"
done
Redirection:
cmd > >(logger.sh > stdout.log) 2> >(logger.sh > stderr.log)
This works, but my command has to run in the background because it is within an init script, therefore I have to do:
({ cmd | log_pipe >>stdout.log; } 2>&1 | log_pipe >>stderr.log) &
echo $! > "$pid_file"
right?
But I think in this case the PID in the $pid_file is not the PID of $cmd...
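Right: $! there is the PID of the backgrounded subshell, not of $cmd. One workaround (a sketch of my own, not from the thread) is to background $cmd inside the group and record its PID before the pipes start consuming its output:
# inside the inner group, $! is the PID of the backgrounded cmd
({ { cmd & echo "$!" > "$pid_file"; wait; } | log_pipe >>stdout.log; } 2>&1 | log_pipe >>stderr.log) &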

How to suppress output of command in shell script?

For example:
$ cat a.txt
1 2
1 6
In my script I have:
{ cat $HOME/SANITY/file.txt | grep 1 >> $HOME/SANITY/new.txt } > /dev/null
cut -d' ' -f2
Now I don't want the results to be shown when running the script with this code.
You can redirect the output. If you only use your_command > /dev/null, only stdout will be redirected. If you want to suppress stderr as well, redirect stderr to stdout and stdout to /dev/null:
your_command > /dev/null 2>&1
2>&1 redirects stderr to wherever stdout currently points (here, /dev/null).
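Note that the order of the redirections matters. With a hypothetical command make_noise that writes to both streams:
make_noise > /dev/null 2>&1 # silences both streams
make_noise 2>&1 > /dev/null # stderr was duplicated onto the terminal before stdout moved, so errors still show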
In your case, simply use:
grep 1 "$HOME/SANITY/file.txt" >> "$HOME/SANITY/new.txt"
And for general purposes:
command_foo_bar > /dev/null # or any other non-special file

How to pipe stdout and stderr to separate commands in a bash script?

I have a bash script executing a long run command. I want to prefix each line printed by the command to stdout with $stdprefix and each line printed to stderr with $errprefix.
I don't want to store the output in variables or, even worse, in files, because I'd have to wait until the command finishes execution to see the output.
You can use:
# your prefixes
stdprefix="stdout: "
errprefix="stderr: "
# sample command to produce output and error
cmd() { echo 'output'; echo >&2 'error'; }
Now, to process stdout and stderr independently:
{ cmd 2>&3 | awk -v p="$stdprefix" '{print p $0}'; } 3>&1 1>&2 |
awk -v p="$errprefix" '{print p $0}'
Output:
stderr: error
stdout: output
Just replace cmd with your long running command.
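One caveat: with a long-running command, awk may block-buffer when writing to a pipe or file instead of a terminal, which would delay the live output the question asks for. In most awk implementations (gawk, mawk) a per-line fflush() avoids that; the same command with flushing added:
{ cmd 2>&3 | awk -v p="$stdprefix" '{print p $0; fflush()}'; } 3>&1 1>&2 |
awk -v p="$errprefix" '{print p $0; fflush()}'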
