I want to execute some scripts from install.sh, which looks like this:
#!/bin/bash
./script1.sh
./script2.sh
./script3.sh
...
It executes a bunch of scripts, so I want to distinguish stdout from stderr by color (green for stdout, red for stderr), and also see where each line of output comes from.
The output format I want is:
script1.sh: Hello # in green color (stdout)
script2.sh: Cannot read a file. # in red color (stderr)
My goal is to print the scripts' output in the format:
{script_name}: {green_if_stdout, red_if_stderr}
I don't want to edit every single command in all scripts.
Is there any way to override (or customize) all stdout and stderr outputs in the script?
#!/bin/bash
override_stdout_and_stderr
echo "Start" # It also prints as green color
./script1.sh
./script2.sh
./script3.sh
...
restore_if_needed
You asked how to color all of stdout and stderr and prefix all lines with the script name.
The answer below uses redirection to send stdout to one process and stderr to another. Credit to "how to redirect stderr".
awk prefixes each incoming line with the needed color (red or green), prints the line, and clears the color setting after printing.
#!/bin/bash
function colorize()
{
"$#" 2> >( awk '{ printf "'$1':""\033[0;31m" $0 "\033[0m\n"}' ) \
1> >( awk '{ printf "'$1':""\033[0;32m" $0 "\033[0m\n"}' )
}
colorize ./script1.sh
#!/bin/sh
# script1.sh
echo "Hello GREEN"
>&2 echo "Hello RED"
Expect output similar to that of this command:
printf 'script1.sh:\033[0;32mHello GREEN\033[0m\nscript1.sh:\033[0;31mHello RED\033[0m\n'
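A variant of the same idea that sidesteps quoting problems: pass the label into awk with -v instead of splicing "$1" into the awk program (which breaks if the script name contains quotes), and use "$@" so arguments with spaces survive. This is a sketch, not a drop-in replacement:

```shell
#!/bin/bash
# Sketch: like colorize above, but the label is passed via awk -v
# so the shell never splices it into the awk source text.
colorize() {
    "$@" 2> >( awk -v name="$1" '{ printf "%s:\033[0;31m%s\033[0m\n", name, $0 }' ) \
         1> >( awk -v name="$1" '{ printf "%s:\033[0;32m%s\033[0m\n", name, $0 }' )
}

colorize sh -c 'echo "Hello GREEN"; echo "Hello RED" >&2'
```

Using printf with a fixed format string here also avoids problems when an output line itself contains a % character.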
Using read instead of awk:
#!/bin/bash
function greenchar()
{
while IFS= read -r ln ; do
printf '%s:\033[0;32m%s\033[0;0m\n' "$1" "$ln" >&1
done
}
function redchar()
{
while IFS= read -r ln ; do
printf '%s:\033[0;31m%s\033[0;0m\n' "$1" "$ln" >&2
done
}
function colorize()
{
"$@" 2> >( redchar "$1" ) 1> >( greenchar "$1" )
}
colorize ./script2.sh
#!/bin/bash
# script2.sh
echo "Hello GREEN"
>&2 echo "Hello RED"
>&1 echo "YES OR NO?"
select yn in "Yes" "No"; do
case $yn in
Yes) echo "YOU PICKED YES" ; break;;
No) echo "YOU PICKED NO" ; break;;
esac
done
Example output; it is similar to the output of these commands:
RED="\033[0;31m"
GRN="\033[0;32m"
NC="\033[0;0m"
printf "./script1.sh:${GRN}Hello GREEN${NC}\n"
printf "./script1.sh:${GRN}YES OR NO?${NC}\n"
printf "./script1.sh:${RED}Hello RED${NC}\n"
printf "./script1.sh:${RED}1) Yes${NC}\n"
printf "./script1.sh:${RED}2) No${NC}\n"
printf "${NC}1${NC}\n"
printf "./script1.sh:${GRN}YOU PICKED YES${NC}\n"
Related
I am assigning the output of a command to variable A:
A=$(some_command)
How can I "capture" stderr into a variable B ?
I have tried some variations with 2>&1 and read but that does not work:
A=$(some_command) 2>&1 | read B
echo $B
Here's a code snippet that might help you
# capture stderr into a variable and print it
echo "capture stderr into a variable and print it"
var=$(lt -l /tmp 2>&1)
echo $var
capture stderr into a variable and print it
zsh: command not found: lt
# capture stdout into a variable and print it
echo "capture stdout into a variable and print it"
var=$(ls -l /tmp)
echo $var
# capture both stderr and stdout into a variable and print it
echo "capture both stderr and stdout into a variable and print it"
var=$(ls -l /tmp 2>&1)
echo $var
# The more classic way of executing a command, which I always follow, is
# shown below. This way I am always in control of what is going on and can
# act accordingly.
if somecommand ; then
echo "command succeeded"
else
echo "command failed"
fi
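The capture and the explicit check combine naturally, since the exit status of a plain `var=$(cmd)` assignment is the exit status of `cmd` itself. A minimal illustration:

```shell
# The exit status of an assignment with command substitution is the
# exit status of the command, so capture and check can be combined:
if var=$(ls /tmp 2>&1); then
    echo "command succeeded: $var"
else
    echo "command failed: $var"
fi
```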
If you have to capture the output and stderr in different variables, then the following might help as well
## create a file using file descriptor for stdout
exec 3> stdout.txt
# create a file using file descriptor for stderr
exec 4> stderr.txt
A=$($1 /tmp 2>&4 >&3); # run the command given as $1, with /tmp as its argument
## close file descriptor
exec 3>&-
exec 4>&-
## open file descriptor for reading
exec 3< stdout.txt
exec 4< stderr.txt
## read from file using file descriptor
read line <&3
read line2 <&4
## close file descriptor
exec 3<&-
exec 4<&-
## print line read from file
echo "stdout: $line"
echo "stderr: $line2"
## delete file
rm stdout.txt
rm stderr.txt
You can try running it with the following
$ bash test.sh pwd
stdout: /tmp/somedir
stderr:
$ bash test.sh pwdd
stdout:
stderr: test.sh: line 8: pwdd: command not found
As noted in a comment, your use case may be better served by another scripting language. For example, in Perl you can achieve what you want quite simply:
#!/usr/bin/env perl
use v5.26; # or earlier versions
use Capture::Tiny 'capture'; # library is not in core
my $cmd = 'date';
my @arg = ('-R', '-u');
my ($stdout, $stderr, $exit) = capture {
system( $cmd, @arg );
};
say "STDOUT: $stdout";
say "STDERR: $stderr";
say "EXIT: $exit";
I'm sure similar solutions are available in Python, Ruby, and all the rest.
I gave it another try using process substitution and came up with this:
# command with no error
date +%b > >(read A; if [ "$A" = 'Sep' ]; then echo 'September'; fi ) 2> >(read B; if [ ! -z "$B" ]; then echo "$B"; fi >&2)
September
# command with error
date b > >(read A; if [ "$A" = 'Sep' ]; then echo 'September'; fi ) 2> >(read B; if [ ! -z "$B" ]; then echo "$B"; fi >&2)
date: invalid date 'b'
# command with both at the same time should work too
I had no success "exporting" the variables from the subprocesses back to the original script. It might be possible, though; I just couldn't figure it out.
But this at least gives you access to stdout and stderr as variables, which means you can do whatever processing you want on them. Whether this helps depends on your use case. Good luck :-)
I am trying to execute a command from bash and retrieve stdout, stderr, and the exit code.
So far so good; there are plenty of ways to do that.
The problem begins when the program has interactive input.
More precisely, I execute "git commit" (without -m), and "GNU nano" is launched so that I can enter a commit message.
If I use simply :
git commit
or
exec git commit
I can see the prompt, but I can't get stdout/stderr.
If I use
output=`git commit 2>&1`
or
output=$(git commit 2>&1)
I can retrieve stdout/stderr, but I can't see the prompt.
I can still press Ctrl+X to abort the git commit.
My first attempt was via a function call, and my script ended up hanging on a blank screen where Ctrl+X / Ctrl+C don't work.
function Execute()
{
if [[ $# -eq 0 ]]; then
echo "Error : function 'Execute' called without argument."
exit 3
fi
local msg=$("$# 2>&1")
local error=$?
if [[ $error -ne 0 ]]; then
echo "Error : '"$(printf '%q ' "$#")"' return '$error' error code."
echo "$1 message :"
echo "$msg"
echo
exit 1
fi
}
Execute git commit
I'm beginning to run out of ideas/knowledge. Is what I want to do impossible? Or is there a way that I don't know of?
Try this which processes every line output to stdout or stderr and redirects based on content:
#!/bin/env bash
foo() {
printf 'prompt: whos on first?\n' >&2
printf 'error: uh-oh\n' >&2
}
var=$(foo 2>&1 | awk '{print | "cat>&"(/prompt/ ? 2 : 1)}' )
echo "var=$var"
$ ./tst.sh
prompt: whos on first?
var=error: uh-oh
or this which just processes stderr:
#!/bin/env bash
foo() {
printf 'prompt: whos on first?\n' >&2
printf 'error: uh-oh\n' >&2
}
var=$(foo 2> >(awk '{print | "cat>&"(/prompt/ ? 2 : 1)}') )
echo "var=$var"
$ ./tst.sh
prompt: whos on first?
var=error: uh-oh
The awk command splits its input between stderr and stdout based on content, and only stdout is saved in the variable var. I don't know whether your prompt arrives on stderr or stdout, or where you really want it to go, so massage this to suit: decide what should go to stdout vs. stderr, and what you want captured in the variable vs. printed to the screen. You just need something recognizable in the prompt so you can separate it from the rest of the stdout and stderr, print the prompt to stderr, and redirect everything else to stdout.
Alternatively here's a version that prints the first line (regardless of content) to stderr for display and everything else to stdout for capture:
$ cat tst.sh
#!/bin/env bash
foo() {
printf 'prompt: whos on first?\n' >&2
printf 'error: uh-oh\n' >&2
}
var=$(foo 2>&1 | awk '{print | "cat>&"(NR>1 ? 1 : 2)}' )
echo "var=$var"
$ ./tst.sh
prompt: whos on first?
var=error: uh-oh
I have two bash scripts. One writes to a fifo; the second reads from the fifo, but only AFTER the first one has finished writing. Something does not work, though, and I do not understand where the problem is. Here is the code.
The first script is (the writer):
#!/bin/bash
fifo_name="myfifo";
# If it does not exist, create the fifo
[ -p $fifo_name ] || mkfifo $fifo_name;
exec 3<> $fifo_name;
echo "foo" > $fifo_name;
echo "bar" > $fifo_name;
The second script is (the reader):
#!/bin/bash
fifo_name="myfifo";
while true
do
if read line <$fifo_name; then
# if [[ "$line" == 'ar' ]]; then
# break
#fi
echo $line
fi
done
Can anyone help me please?
Thank you
Replace the second script with:
#!/bin/bash
fifo_name="myfifo"
while true
do
if read line; then
echo $line
fi
done <"$fifo_name"
This opens the fifo only once and reads every line from it.
The problem with your setup is that the fifo creation is in the wrong script: if you want to restrict fifo access to the time when the reader is actually running, the reader should create it. To correct the problem you will need to do something like this:
reader: fifo_read.sh
#!/bin/bash
fifo_name="/tmp/myfifo" # fifo name
trap "rm -f $fifo_name" EXIT # set trap to rm fifo_name at exit
[ -p "$fifo_name" ] || mkfifo "$fifo_name" # if fifo not found, create
exec 3< $fifo_name # redirect fifo_name to fd 3
# (not required, but makes read clearer)
while :; do
if read -r -u 3 line; then # read line from fifo_name
if [ "$line" = 'quit' ]; then # if line is quit, quit
printf "%s: 'quit' command received\n" "$fifo_name"
break
fi
printf "%s: %s\n" "$fifo_name" "$line" # print line read
fi
done
exec 3<&- # reset fd 3 redirection
exit 0
writer: fifo_write.sh
#!/bin/bash
fifo_name="/tmp/myfifo"
# If it does not exist, exit :)
[ -p "$fifo_name" ] || {
printf "\n Error fifo '%s' not found.\n\n" "$fifo_name"
exit 1
}
[ -n "$1" ] &&
printf "%s\n" "$1" > "$fifo_name" ||
printf "pid: '%s' writing to fifo\n" "$$" > "$fifo_name"
exit 0
operation: (start reader in 1st terminal)
$ ./fifo_read.sh # you can background with & at end
(launch writer in second terminal)
$ ./fifo_write.sh "message from writer" # second terminal
$ ./fifo_write.sh
$ ./fifo_write.sh quit
output in 1st terminal:
$ ./fifo_read.sh
/tmp/myfifo: message from writer
/tmp/myfifo: pid: '28698' writing to fifo
/tmp/myfifo: 'quit' command received
The following script should do the job:
#!/bin/bash
FIFO="/tmp/fifo"
if [ ! -e "$FIFO" ]; then
mkfifo "$FIFO"
fi
for script in "$#"; do
echo $script > $FIFO &
done
while read script; do
/bin/bash -c $script
done < $FIFO
Given two scripts a.sh and b.sh that print "a" and "b" to stdout, respectively, you will get the following result (given that the script above is called test.sh):
./test.sh /tmp/a.sh /tmp/b.sh
a
b
Best,
Julian
I know this syntax
var=`myscript.sh`
or
var=$(myscript.sh)
Will capture the result (stdout) of myscript.sh into var. I could redirect stderr into stdout if I wanted to capture both. How to save each of them to separate variables?
My use case here is if the return code is nonzero I want to echo stderr and suppress otherwise. There may be other ways to do this but this approach seems it will work, if it's actually possible.
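For that specific use case (echo stderr only on a nonzero return code), a temp-file sketch is enough. The run_quiet name here is a hypothetical helper of mine, not a standard command:

```shell
#!/bin/bash
# Sketch: run a command, hide its stderr on success, replay it on failure.
# run_quiet is a hypothetical helper name, not a standard command.
run_quiet() {
    local tmp rc
    tmp=$(mktemp)
    "$@" 2>"$tmp"          # stdout passes through untouched
    rc=$?
    if [ "$rc" -ne 0 ]; then
        cat "$tmp" >&2     # command failed: show its stderr
    fi
    rm -f "$tmp"
    return "$rc"
}

run_quiet true                                            # prints nothing
run_quiet sh -c 'echo oops >&2; exit 3' || echo "failed with status $?"
```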
There's a really ugly way to capture stderr and stdout in two separate variables without temporary files (if you like plumbing), using process substitution, source, and declare appropriately. I'll call your command banana. You can mimic such a command with a function:
banana() {
echo "banana to stdout"
echo >&2 "banana to stderr"
}
I'll assume you want standard output of banana in variable bout and standard error of banana in variable berr. Here's the magic that'll achieve that (Bash≥4 only):
. <({ berr=$({ bout=$(banana); } 2>&1; declare -p bout >&2); declare -p berr; } 2>&1)
So, what's happening here?
Let's start from the innermost term:
bout=$(banana)
This is just the standard way to assign to bout the standard output of banana, the standard error being displayed on your terminal.
Then:
{ bout=$(banana); } 2>&1
will still assign to bout the stdout of banana, but the stderr of banana is displayed on terminal via stdout (thanks to the redirection 2>&1).
Then:
{ bout=$(banana); } 2>&1; declare -p bout >&2
will do as above, but will also display on the terminal (via stderr) the content of bout with the declare builtin: this will be reused soon.
Then:
berr=$({ bout=$(banana); } 2>&1; declare -p bout >&2); declare -p berr
will assign to berr the stderr of banana and display the content of berr with declare.
At this point, you'll have on your terminal screen:
declare -- bout="banana to stdout"
declare -- berr="banana to stderr"
with the line
declare -- bout="banana to stdout"
being displayed via stderr.
A final redirection:
{ berr=$({ bout=$(banana); } 2>&1; declare -p bout >&2); declare -p berr; } 2>&1
will have the previous displayed via stdout.
Finally, we use a process substitution to source the content of these lines.
You mentioned the return code of the command too. Change banana to:
banana() {
echo "banana to stdout"
echo >&2 "banana to stderr"
return 42
}
We'll also have the return code of banana in the variable bret like so:
. <({ berr=$({ bout=$(banana); bret=$?; } 2>&1; declare -p bout bret >&2); declare -p berr; } 2>&1)
You can do without sourcing and a process substitution by using eval too (and it works with Bash<4 too):
eval "$({ berr=$({ bout=$(banana); bret=$?; } 2>&1; declare -p bout bret >&2); declare -p berr; } 2>&1)"
And all this is safe, because the only stuff we're sourcing or evaling is obtained from declare -p and will always be properly escaped.
Of course, if you want the output in an array (e.g., with mapfile, if you're using Bash≥4—otherwise replace mapfile with a while–read loop), the adaptation is straightforward.
For example:
banana() {
printf 'banana to stdout %d\n' {1..10}
echo >&2 'banana to stderr'
return 42
}
. <({ berr=$({ mapfile -t bout < <(banana); } 2>&1; declare -p bout >&2); declare -p berr; } 2>&1)
and with return code:
. <({ berr=$({ mapfile -t bout < <(banana; bret=$?; declare -p bret >&3); } 3>&2 2>&1; declare -p bout >&2); declare -p berr; } 2>&1)
There is no way to capture both without a temporary file.
You can capture stderr to a variable and pass stdout through to the user's screen (sample from here):
exec 3>&1 # Save the place that stdout (1) points to.
output=$(command 2>&1 1>&3) # Run command. stderr is captured.
exec 3>&- # Close FD #3.
# Or this alternative, which captures stderr, letting stdout through:
{ output=$(command 2>&1 1>&3-) ;} 3>&1
But there is no way to capture both stdout and stderr:
What you cannot do is capture stdout in one variable, and stderr in another, using only FD redirections. You must use a temporary file (or a named pipe) to achieve that one.
You can do:
OUT=$(myscript.sh 2> errFile)
ERR=$(<errFile)
Now $OUT will have standard output of your script and $ERR has error output of your script.
An easy, but not elegant way: Redirect stderr to a temporary file and then read it back:
TMP=$(mktemp)
var=$(myscript.sh 2> "$TMP")
err=$(cat "$TMP")
rm "$TMP"
While I have not found a way to capture stderr and stdout to separate variables in bash, I send both to the same variable with…
result=$( { grep "JUNK" ./junk.txt; } 2>&1 )
… then I check the exit status "$?" and act appropriately on the data in $result.
# NAME
# capture - capture the stdout and stderr output of a command
# SYNOPSIS
# capture <result> <error> <command>
# DESCRIPTION
# This shell function captures the stdout and stderr output of <command> in
# the shell variables <result> and <error>.
# ARGUMENTS
# <result> - the name of the shell variable to capture stdout
# <error> - the name of the shell variable to capture stderr
# <command> - the command to execute
# ENVIRONMENT
# The following variables are modified in the caller's context:
# - <result>
# - <error>
# RESULT
# Returns the exit code of <command>.
# SOURCE
capture ()
{
# Name of shell variable to capture the stdout of command.
result=$1
shift
# Name of shell variable to capture the stderr of command.
error=$1
shift
# Local AWK program to extract the error, the result, and the exit code
# parts of the captured output of command.
local evaloutput='
{
output [NR] = $0
}
END \
{
firstresultline = NR - output [NR - 1] - 1
if (Var == "error") \
{
for (i = 1; i < firstresultline; ++ i)
{
printf ("%s\n", output [i])
}
}
else if (Var == "result") \
{
for (i = firstresultline; i < NR - 1; ++ i)
{
printf ("%s\n", output [i])
}
}
else \
{
printf ("%d", output [NR])
}
}'
# Capture the stderr and stdout output of command, as well as its exit code.
local output="$(
{
local stdout
stdout="$($*)"
local exitcode=$?
printf "\n%s\n%d\n%d\n" \
"$stdout" "$(echo "$stdout" | wc -l)" "$exitcode"
} 2>&1)"
# extract the stderr, the stdout, and the exit code parts of the captured
# output of command.
printf -v $error "%s" \
"$(echo "$output" | gawk -v Var="error" "$evaloutput")"
printf -v $result "%s" \
"$(echo "$output" | gawk -v Var="result" "$evaloutput")"
return $(echo "$output" | gawk "$evaloutput")
}
I have a script that prints in a loop. I want the loop to print differently the first time from all other times (i.e., it should print differently if anything has been printed at all). I am thinking a simple way would be to check whether anything has been printed yet (i.e., stdout has been written to). Is there any way to determine that?
I know I could also write to a variable and test whether it's empty, but I'd like to avoid a variable if I can.
I think this will do what you need. If you echo something between # THE SCRIPT ITSELF and # END, then THE FOLLOWING DATA HAS BEEN WRITTEN TO STDOUT will be printed; otherwise STDOUT HAS NOT BEEN TOUCHED will be printed.
#!/bin/bash
readonly TMP=$(mktemp /tmp/test_XXXXXX)
exec 3<> "$TMP" # open tmp file as fd 3
exec 4>&1 # save current value of stdout as fd 4
exec >&3 # redirect stdout to fd 3 (tmp file)
# THE SCRIPT ITSELF
echo Hello World
# END
exec >&4 # restore save stdout
exec 3>&- # close tmp file
TMP_SIZE=$(stat -f %z "$TMP")
if [ $TMP_SIZE -gt 0 ]; then
echo "THE FOLLOWING DATA HAS BEEN WRITTEN TO STDOUT"
echo
cat "$TMP"
else
echo "STDOUT HAS NOT BEEN TOUCHED"
fi
rm "$TMP"
So, output of the script as is:
THE FOLLOWING DATA HAS BEEN WRITTEN TO STDOUT
Hello World
and if you remove the echo Hello World line:
STDOUT HAS NOT BEEN TOUCHED
And if you really want to test that while running the script itself, you can do that, too :-)
#!/bin/bash
#FIRST ELSE
function echo_fl() {
TMP_SIZE=$(stat -f %z "$TMP")
if [ $TMP_SIZE -gt 0 ]; then
echo $2
else
echo $1
fi
}
TMP=$(mktemp /tmp/test_XXXXXX)
exec 3 "$TMP" # open tmp file as fd 3
exec 4>&1 # save current value of stdout as fd 4
exec >&3 # redirect stdout to fd 3 (tmp file)
# THE SCRIPT ITSELF
for f in fst snd trd; do
echo_fl "$(echo $f | tr a-z A-Z)" "$f"
done
# END
exec >&4 # restore save stdout
exec 3>&- # close tmp file
TMP_SIZE=$(stat -f %z "$TMP")
if [ $TMP_SIZE -gt 0 ]; then
echo "THE FOLLOWING DATA HAS BEEN WRITTEN TO STDOUT"
echo
cat "$TMP"
else
echo "STDOUT HAS NOT BEEN TOUCHED"
fi
rm "$TMP"
output is:
THE FOLLOWING DATA HAS BEEN WRITTEN TO STDOUT
FST
snd
trd
as you can see: Only the first line (FST) has caps on. That's what the echo_fl function does for you: if it's the first line of output, it echoes the first argument; if not, it echoes the second argument :-)
It's hard to tell what you are trying to do here, but if your script is printing to stdout, you could simply pipe it to perl:
yourcommand | perl -pe 'if ($. == 1) { print "First line is: $_" }'
It all depends on what kind of changes you are attempting to do.
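The same first-line special-casing works in awk, if perl isn't handy. A sketch of the equivalent idea:

```shell
# awk equivalent: NR is the current line number, so NR == 1 selects
# the first line; "next" skips the default print action for it.
printf 'alpha\nbeta\ngamma\n' |
    awk 'NR == 1 { print "First line is: " $0; next } { print }'
# prints:
# First line is: alpha
# beta
# gamma
```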
You cannot use the -f option with %z on GNU stat (stat -f %z is the BSD/macOS spelling): the line TMP_SIZE=$(stat -f %z "$TMP") produces output that fails the numeric test in if [ $TMP_SIZE -gt 0 ]. On GNU systems use stat -c %s "$TMP" instead.
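A portable alternative for the size check is wc -c, which behaves the same on GNU and BSD userlands. A sketch of the fix applied to the script above:

```shell
# Portable size check: wc -c reads the byte count without running into
# stat's GNU (-c %s) vs. BSD/macOS (-f %z) flag differences.
TMP=$(mktemp /tmp/test_XXXXXX)
echo "Hello World" > "$TMP"
TMP_SIZE=$(wc -c < "$TMP")
if [ "$TMP_SIZE" -gt 0 ]; then
    echo "THE FOLLOWING DATA HAS BEEN WRITTEN TO STDOUT"
    cat "$TMP"
fi
rm "$TMP"
```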