How to print output from a loop which is piped to some other command:
for f in "${!myList[#]}"; do
echo $f > /dev/stdout # echoed to stdout, how to?
unzip -qqc $f # piped to awk script
done | awk -f script.awk
You can use /dev/stderr or file descriptor 2:
echo something >&2 | grep nothing
echo something >/dev/stderr | grep nothing
You can use another file descriptor that will be connected to stdout:
# for a single command group
{ echo something >&3 | grep nothing; } 3>&1
# or for everything that follows
exec 3>&1
echo something >&3 | grep nothing
# same as above with a named file descriptor
exec {LOG}>&1
echo 123 >&$LOG | grep nothing
You can also redirect the output to the current controlling terminal /dev/tty (if there is one):
echo something >/dev/tty | grep nothing
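Putting it together with the loop from the question (a sketch, assuming the keys of myList are the zip file names, as in the question), you can duplicate stdout onto file descriptor 3 before the pipeline starts:
exec 3>&1
for f in "${!myList[@]}"; do
    echo "$f" >&3   # progress message bypasses the pipe and reaches the terminal
    unzip -qqc "$f" # file contents still flow into the awk script
done | awk -f script.awk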
I'm trying to get the text output of a specified command, modify it somehow (e.g. add a prefix before the output), and print it into a file (.txt or .log):
LOG_FILE=...
LOG_ERROR_FILE=..
command_name >> "${LOG_FILE}" 2>> "${LOG_ERROR_FILE}"
I would like to do it in one line: modify what the command returns and print it into the files, for both the error output and the regular output.
I'm a beginner at bash scripts, so please be understanding.
Create a function to execute commands and capture stderr and stdout in variables.
function execCommand(){
local command="$#"
{
IFS=$'\n' read -r -d '' STDERR;
IFS=$'\n' read -r -d '' STDOUT;
} < <((printf '\0%s\0' "$($command)" 1>&2) 2>&1)
}
function testCommand(){
grep foo bar
echo "return code $?"
}
execCommand testCommand
echo err: $STDERR
echo out: $STDOUT
execCommand "touch /etc/foo"
echo err: $STDERR
echo out: $STDOUT
execCommand "date"
echo err: $STDERR
echo out: $STDOUT
Output:
err: grep: bar: No such file or directory
out: return code 2
err: touch: cannot touch '/etc/foo': Permission denied
out:
err:
out: Mon Jan 31 16:29:51 CET 2022
Now you can modify $STDERR and $STDOUT:
execCommand testCommand && { echo "$STDERR" > err.log; echo "$STDOUT" > out.log; }
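For the prefix use case from the question, that could look like this (a sketch; the OUT:/ERR: prefixes are just illustrations):
execCommand testCommand && { echo "$STDOUT" | sed 's/^/OUT: /' > out.log; echo "$STDERR" | sed 's/^/ERR: /' > err.log; }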
Explanation: see the answer from madmurphy.
A pipe | and/or a redirect > is the answer, it seems.
As a bogus example to show what I mean: to get all the interfaces that the command ip a spits out, you could pipe that into the processing commands and redirect the output into a file.
ip a | awk -F': *' '/^[0-9]/ { print $2 }' > my_file.txt
If you wish to send it to separate processing, you could redirect into a sub-shell:
$ command -V cd curl bogus > >(awk '{print $NF}' > stdout.txt) 2> >(sed 's/.*\s\(\w\+\):/\1/' > stderr.txt)
$ cat stdout.txt
builtin
(/usr/bin/curl)
$ cat stderr.txt
bogus not found
But it might be better for readability to process in a separate step:
$ command -V cd curl bogus >stdout.txt 2>stderr.txt
$ sed -i 's/.*\s//' stdout.txt
$ sed -i 's/.*\s\(\w\+\):/\1/' stderr.txt
$ cat stdout.txt
builtin
(/usr/bin/curl)
$ cat stderr.txt
bogus not found
There are a myriad of ways to do what you ask, and I guess the situation will have to decide what to use, but here's a start.
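For instance, to add a distinct prefix to each stream while appending to the two log files from the question, you could combine process substitution with sed (a sketch; the OUT:/ERR: prefixes are just illustrations):
command_name > >(sed 's/^/OUT: /' >> "${LOG_FILE}") 2> >(sed 's/^/ERR: /' >> "${LOG_ERROR_FILE}")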
To modify the output and write it to a file, while modifying the error stream differently and writing to a different file, you just need to manipulate the file descriptors appropriately. eg:
#!/bin/sh
# A command that writes trivial data to both stdout and stderr
cmd() {
echo 'Hello stdout!'
echo 'Hello stderr!' >&2
}
# Filter both streams and redirect to different files
{ cmd 2>&1 1>&3 | sed 's/stderr/cruel world/' > "$LOG_ERROR_FILE"; } 3>&1 |
sed 's/stdout/world/' > "$LOG_FILE"
The technique is to redirect the error stream to stdout so it can flow into the pipe (2>&1), and then redirect the output stream to an ancillary file descriptor, which is being redirected into a different pipe.
You can clean it up a bit by moving the file redirections into an earlier exec call. eg:
#!/bin/sh
cmd() {
echo 'Hello stdout!'
echo 'Hello stderr!' >&2
}
exec > "$LOG_FILE"
exec 2> "$LOG_ERROR_FILE"
# Filter both streams and redirect to different files
{ cmd 2>&1 1>&3 | sed 's/stderr/cruel world/' >&2; } 3>&1 | sed 's/stdout/world/'
I have an executable that accepts queries from stdin and responds to them, reading until EOF. Additionally I have an input file and a special command, let's call those EXEC, FILE and CMD respectively.
What I need to do is:
Pass FILE to EXEC as input.
Disregard all the output corresponding to commands read from FILE (send it to /dev/null).
Pass CMD as the last command.
Fetch output for the last command and save it in a variable.
EXEC's output can be multiline for each query.
I know how to pass FILE + CMD into the EXEC:
echo ${CMD} | cat ${FILE} - | ${EXEC}
but I have no idea how to fetch only output resulting from CMD.
Is there a magical one-liner that does this?
After looking around I've found the following partial solution:
mkfifo mypipe
(tail -f mypipe) | ${EXEC} &
cat ${FILE} | while read line; do
echo ${line} > mypipe
done
echo ${CMD} > mypipe
This allows me to redirect my input, but now the output gets printed to screen. I want to ignore all the output produced by EXEC in the while loop and get only what it prints for the last line.
I tried what first came into my mind, which is:
(tail -f mypipe) | ${EXEC} > somefile &
But it didn't work, the file was empty.
This is race-prone -- I'd suggest putting in a delay after the kill, or using an explicit sigil to determine when it's been received. That said:
#!/usr/bin/env bash
# route FD 4 to your output routine
exec 4> >(
output=; trap 'output=1' USR1
while IFS= read -r line; do
[[ $output ]] && printf '%s\n' "$line"
done
); out_pid=$!
# Capture the PID for the process substitution above; note that this requires a very
# new version of bash (4.4?)
[[ $out_pid ]] || { echo "ERROR: Your bash version is too old" >&2; exit 1; }
# Run your program in another process substitution, and close the parent's handle on FD 4
exec 3> >("$EXEC" >&4) 4>&-
# cat your file to FD 3...
cat "$file" >&3
# UGLY HACK: Wait to let your program finish flushing output from those commands
sleep 0.1
# notify the subshell writing output to disk that the ignored input is done...
kill -USR1 "$out_pid"
# UGLY HACK: Wait to let the subprocess actually receive the signal and set output=1
sleep 0.1
# ...and then write the command for which you actually want content logged.
echo "command" >&3
In validating this answer, I'm doing the following:
EXEC=stub_function
stub_function() {
local count line
count=0
while IFS= read -r line; do
(( ++count ))
printf '%s: %s\n' "$count" "$line"
done
}
cat >file <<EOF
do-not-log-my-output-1
do-not-log-my-output-2
do-not-log-my-output-3
EOF
file=file
export -f stub_function
export file EXEC
Output is only:
4: command
You could pipe it through sed:
var=$(YOUR COMMAND | sed '$!d')
This will put only the last line into the variable. Note that this assumes the response to CMD is a single line; since EXEC's output can be multiline per query, you may need the marker approach from the next answer instead.
I think that your program EXEC does something special (opens a connection or remembers state). When that is not the case, you can use:
${EXEC} < ${FILE} > /dev/null
myvar=$(echo ${CMD} | ${EXEC})
Or with normal commands:
# Do not use (printf "==%s==\n" 1 2 3 ; printf "oo%soo\n" 4 5 6) | cat
printf "==%s==\n" 1 2 3 | cat > /dev/null
myvar=$(printf "oo%soo\n" 4 5 6 | cat)
When you need to give all input to one process, perhaps you can think of a marker that you can filter on:
(printf "==%s==\n" 1 2 3 ; printf "%s\n" "marker"; printf "oo%soo\n" 4 5 6) | cat | sed '1,/marker/ d'
You should examine your EXEC to see what could be used. When it is running SQL, you might use something like
(cat ${FILE}; echo 'select "DamonMarker" from dual;' ; echo ${CMD} ) |
${EXEC} | sed '1,/DamonMarker/ d'
and capture it in a variable with
myvar=$( (cat ${FILE}; echo 'select "DamonMarker" from dual;' ; echo ${CMD} ) |
${EXEC} | sed '1,/DamonMarker/ d' )
I am trying to check the output of a command and run different commands depending on the output.
count="1"
for f in "$#"; do
BASE=${f%.*}
# if [ -e "${BASE}_${suffix}_${suffix2}.mp4" ]; then
echo -e "Reading GPS metadata using MediaInfo file ${count}/${##} "$(basename "${BASE}_${suffix}_${suffix2}.mp4")"
mediainfo "${BASE}_${suffix}_${suffix2}.mp4" | grep "©xyz" | head -n 1
if [[ $? != *xyz* ]]; then
echo -e "WARNING!!! No GPS information found! File ${count}/${##} "$(basename "${BASE}_${suffix}_${suffix2}.mp4")" || exit 1
fi
((count++))
done
MediaInfo is the command I am checking the output of.
If a video file has "©xyz" atom written into it the output looks like this:
$ mediainfo FILE | grep "©xyz" | head -n 1
©xyz : +60.9613-125.9309/
$
otherwise it is null
$ mediainfo FILE | grep "©xyz" | head -n 1
$
The above code does not work and echoes the warning even when ©xyz is present.
Any ideas what I am doing wrong?
The syntax you are using to capture the output of the mediainfo command is plain wrong. When using grep, you can use its return code (the value of $?) directly in the if-conditional:
if mediainfo "${BASE}_${suffix}_${suffix2}.mp4" | grep -q "©xyz" 2> /dev/null;
then
..
The -q flag instructs grep to run silently without writing any results to stdout, and the 2>/dev/null part suppresses any errors written to stderr, so the if-conditional passes when the string is present and fails when it is not.
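Applied to the loop in the question, the check becomes something like this (a sketch, with the warning message shortened):
if ! mediainfo "${BASE}_${suffix}_${suffix2}.mp4" | grep -q "©xyz" 2> /dev/null; then
    echo "WARNING!!! No GPS information found! File ${count}/$# $(basename "${BASE}_${suffix}_${suffix2}.mp4")"
fi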
$? is the exit code of the command: a number between 0 and 255. It's not related to stdout, where your value "xyz" is written.
To match in stdout, you can just use grep:
if mediainfo "${BASE}_${suffix}_${suffix2}.mp4" | grep -q "©xyz"
then
echo "It contained that thing"
else
echo "It did not"
fi
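If you also need the matched line itself, for example to extract the coordinates later, capture it into a variable first and test whether it is empty (a sketch):
xyz=$(mediainfo "${BASE}_${suffix}_${suffix2}.mp4" | grep "©xyz" | head -n 1)
if [[ -n $xyz ]]; then
    echo "GPS data: $xyz"
fi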
I'm trying to execute a block of code only if the string SVN_BRANCH is not found in /etc/profile. My current code looks like the following:
a = cat /etc/profile
b = `$a | grep 'SVN_BRANCH'`
not_if "$b"
{
....
...
...
}
This fails as given. How should it be done?
grep can take a file as an argument; you don't need to cat the file and then pass the content to grep through a pipe. That's totally unnecessary.
This is an example of if else block with grep:
if grep -q "pattern" filepath;then
echo "do something"
else
echo "do something else"
fi
Note:
The -q option is for quiet operation. It hides the output of the grep command (errors will still be printed).
If you want it to not print any errors too then use this:
if grep -sq "pattern" filepath;then
Or this:
if grep "pattern" filepath >/dev/null 2>&1;then
>/dev/null redirects stdout to /dev/null
2>&1 then sends stderr to the same place, so both streams are discarded
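Since you want to run your block only when SVN_BRANCH is not found, negate the test with the same pattern:
if ! grep -q 'SVN_BRANCH' /etc/profile; then
    echo "do something"
fi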
You can use the exit code of the grep command to determine whether to execute your code block, like this:
cat /etc/profile | grep SVN_BRANCH >/dev/null 2>&1 || {
....
....
}
I'd like to do different things to the stdout and stderr of a particular command. Something like
cmd |1 stdout_1 | stdout_2 |2 stderr_1 | stderr_2
where stdout_x is a command specifically for stdout and stderr_x is specifically for stderr. It's okay if stderr from every command gets piped into my stderr commands, but it's even better if the stderr could be strictly from cmd. I've been searching for some syntax that may support this, but I can't seem to find anything.
You can make use of a different file descriptor:
{ cmd 2>&3 | stdout_1; } 3>&1 1>&2 | stderr_1
Example:
{ { echo 'out'; echo >&2 'error'; } 2>&3 | awk '{print "stdout: " $0}'; } 3>&1 1>&2 |
awk '{print "stderr: " $0}'
stderr: error
stdout: out
Or else use process substitution:
cmd 2> >(stderr_1) > >(stdout_1)
Example:
{ echo 'out'; echo >&2 'error'; } 2> >(awk '{print "stderr: " $0}') \
> >(awk '{print "stdout: " $0}')
stderr: error
stdout: out
to pipe stdout and stderr separately from your cmd.
You can use process substitution and redirection to achieve this:
cmd 2> >(stderr_1 | stderr_2) | stdout_1 | stdout_2
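For example, with the same toy command as in the other answers, and a final >&2 inside the substitution so the filtered stderr stays on stderr rather than riding down the stdout pipe (the tr stages are just illustrations of chaining):
{ echo 'out'; echo >&2 'error'; } 2> >(awk '{print "stderr: " $0}' | tr 'a-z' 'A-Z' >&2) |
    awk '{print "stdout: " $0}' | tr 'a-z' 'A-Z'
STDOUT: OUT
STDERR: ERROR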
The most straightforward solution would be something like this:
(cmd | gets_stdout) 2>&1 | gets_stderr
The main drawback being that if gets_stdout itself has any output on stdout, that will also go to gets_stderr. If that is a problem, you should use one of anubhava's or Kevin's answers.
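You can see that drawback with a quick test: the output of gets_stdout also arrives at gets_stderr.
( { echo 'out'; echo >&2 'error'; } | awk '{print "stdout: " $0}' ) 2>&1 | awk '{print "stderr: " $0}'
stderr: stdout: out
stderr: error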
Late answer: as with all previous answers, stderr lands in stdout at the end (anubhava's answer was nearly complete).
To pipe stderr and stdout independently (or identical) and keep them in stdout and stderr, we can use file descriptors.
Solution:
{ { cmd | stdout_pipe ; } 2>&1 1>&3 | stderr_pipe; } 3>&1 1>&2
Explanation:
piping in the shell always happens on stdout (file descriptor 1)
we therefore apply the pipe for stdout directly (leaving stderr intact)
then we park stdout in a temporary file descriptor and move stderr to stdout, allowing us to pipe this in the next step
at the end we first save the original stdout into our temporary file descriptor (3>&1) and then send the "current" stdout (the piped stderr) back to stderr (1>&2); the order of these two redirections matters, since redirections are processed left to right
Example:
using
cmd = { echo 'out'; echo >&2 'error'; }
stdout_pipe = awk '{print "stdout: " $0}'
stderr_pipe = awk '{print "stderr: " $0}'
{ { { echo 'out'; echo >&2 'error'; } \
| awk '{print "stdout: " $0}'; } 2>&1 1>&3 \
| awk '{print "stderr: " $0}'; } 1>&2 3>&1
Note: this example has an extra pair of { } to "combine" both echo commands into a single one
You see the difference when redirecting the output of a script that uses this, or when using it in a terminal that colors stderr.
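A quick way to verify that the streams really stay separate (a sketch): wrap the whole pipeline in a subshell and discard its stderr; only the stdout line survives.
( { { { echo 'out'; echo >&2 'error'; } \
    | awk '{print "stdout: " $0}'; } 2>&1 1>&3 \
    | awk '{print "stderr: " $0}'; } 3>&1 1>&2 ) 2>/dev/null
stdout: out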