Grep stderr in bash script for any output

In my bash script I use grep in different logs like this:
LOGS1=$(grep -E -i 'err|warn' /opt/backup/exports.log /opt/backup/imports.log && grep "tar:" /opt/backup/h2_backups.log /opt/backup/st_backups.log)
if [ -n "$LOGS1" ]; then
COLOUR="yellow"
MESSAGE="Logs contain warnings. Backups may be incomplete. Investigate these warnings:\n$LOGS1"
Instead of checking whether each log exists (there are many more logs than this), I want to check stderr while the script runs to see if I get any output. If one of the logs does not exist, grep produces an error like this: grep: /opt/backup/st_backups.log: No such file or directory
I've tried to read stderr with commands like command 2> >(grep "file" >&2), but that does not seem to work.
I know I can pipe the output to a file, but I'd rather just handle stderr when there is any output instead of reading the file. Or is there any reason why piping to a file is better?

Send the standard error (file descriptor 2) to standard output (file descriptor 1) and assign it to the variable Q:
$ Q=$(grep text file 2>&1)
$ echo $Q
grep: file: No such file or directory

This is the default behaviour: stderr is normally connected to your terminal (and unbuffered), so you see errors even while piping stdout somewhere. If you want to merge stderr with stdout, this is the syntax:
command >file 2>&1
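Applied to the backup-log check from the question, you can also capture only stderr, leaving stdout alone; the following is a minimal sketch (the log path is an illustrative, deliberately missing file):

```shell
#!/usr/bin/env bash
# Capture ONLY stderr into a variable. Redirections are processed left to
# right: 2>&1 points stderr at the command substitution's pipe first, then
# >/dev/null discards stdout. So only error messages land in ERRS.
ERRS=$(grep -E -i 'err|warn' /no/such/exports.log 2>&1 >/dev/null)

if [ -n "$ERRS" ]; then
    printf 'grep reported problems:\n%s\n' "$ERRS"
fi
```

The order of the two redirections matters: writing `>/dev/null 2>&1` instead would discard both streams.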


echo to file and terminal [duplicate]

In bash, calling foo displays any output from that command on stdout.
Calling foo > output redirects any output from that command to the specified file (in this case, 'output').
Is there a way to redirect output to a file and have it display on stdout?
The command you want is named tee:
foo | tee output.file
For example, if you only care about stdout:
ls -a | tee output.file
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both are written as stdout; tee then also writes that combined stream to the given output file.
Furthermore, if you want to append to the log file, use tee -a as:
program [arguments...] 2>&1 | tee -a outfile
$ program [arguments...] 2>&1 | tee outfile
2>&1 merges the stderr stream into the stdout stream.
tee outfile takes the stream it gets and writes it to the screen and to the file "outfile".
This is probably what most people are looking for. The likely situation is some program or script is working hard for a long time and producing a lot of output. The user wants to check it periodically for progress, but also wants the output written to a file.
The problem (especially when mixing stdout and stderr streams) is that there is reliance on the streams being flushed by the program. If, for example, all the writes to stdout are not flushed, but all the writes to stderr are flushed, then they'll end up out of chronological order in the output file and on the screen.
It's also bad if the program only outputs 1 or 2 lines every few minutes to report progress. In such a case, if the output was not flushed by the program, the user wouldn't even see any output on the screen for hours, because none of it would get pushed through the pipe for hours.
Update: The program unbuffer, part of the expect package, will solve the buffering problem. This will cause stdout and stderr to write to the screen and file immediately and keep them in sync when being combined and redirected to tee. E.g.:
$ unbuffer program [arguments...] 2>&1 | tee outfile
Another way that works for me is,
<command> |& tee <outputFile>
as shown in the GNU Bash manual.
Example:
ls |& tee files.txt
If ‘|&’ is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
For more information, refer to the Bash manual's section on redirection.
You can primarily use Zoredache's solution, but if you don't want to overwrite the output file, you should invoke tee with the -a option, as follows:
ls -lR / | tee -a output.file
Something to add ...
The unbuffer package has support issues with some packages under Fedora and Red Hat releases.
Setting aside those troubles, the following worked for me:
bash myscript.sh 2>&1 | tee output.log
Thank you ScDF & matthew, your inputs saved me a lot of time.
Using tail -f output should work.
In my case I had a Java process writing output logs. The simplest solution to display the logs and also redirect them into a file (named logfile here) was:
my_java_process_run_script.sh |& tee logfile
The result was the Java process running with its output logs displayed on screen and written to the file named logfile.
You can do this for your entire script by putting something like this at the beginning of your script:
#!/usr/bin/env bash
test x$1 = x$'\x00' && shift || { set -o pipefail ; ( exec 2>&1 ; $0 $'\x00' "$@" ) | tee mylogfile ; exit $? ; }
# do whatever you want
This redirects both stderr and stdout to the file called mylogfile while letting everything go to stdout at the same time.
It uses a few tricks:
use exec without a command to set up redirections,
use tee to duplicate the outputs,
restart the script with the wanted redirections,
use a special first parameter (a single NUL character, specified with the $'string' bash notation) to mark that the script has been restarted (no equivalent parameter should be in use by your original script),
try to preserve the original exit status when restarting the script, using the pipefail option.
Ugly, but useful for me in certain situations.
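If a bash-only solution is acceptable, the same whole-script logging can be sketched more simply with exec and process substitution, avoiding the restart trick entirely (mylogfile is again just an example name):

```shell
#!/usr/bin/env bash
# From this point on, the stdout and stderr of every command go both to
# the terminal and to mylogfile. Bash-only: process substitution is not
# POSIX sh.
exec > >(tee -a mylogfile) 2>&1

echo "this reaches the terminal and mylogfile"
echo "so does this error" >&2
```

One caveat: the script's exit can race with tee finishing its writes, so the last lines may appear in the file slightly after the script returns.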
Bonus answer since this use-case brought me here:
In the case where you need to do this as some other user
echo "some output" | sudo -u some_user tee /some/path/some_file
Note that the echo happens as you while the file write happens as "some_user". What will NOT work is running the echo as "some_user" and redirecting the output with >> "some_file", because the file redirection would happen as you.
Hint: tee also supports append with the -a flag; if you need to replace a line in a file as another user, you could execute sed as the desired user.
< command > |& tee filename # this creates the file "filename" with the command's output as its content; if the file already exists, its existing content is overwritten.
< command > | tee >> filename # this appends the output to the file, but does not print it to standard output (the screen); tee -a filename is the usual way to append.
I want to print something by using "echo" on screen and append that echoed data to a file
echo "hi there, Have to print this on screen and append to a file"
tee is perfect for this, but the following will also do the job:
ls -lr / > output; cat output

Piping perf output into a file [duplicate]

This question already has answers here:
How to redirect and append both standard output and standard error to a file with Bash
(8 answers)
Closed 6 years ago.
I know that in Linux, to redirect output from the screen to a file, I can use either > or tee. However, I'm not sure why part of the output is still printed to the screen and not written to the file.
Is there a way to redirect all output to file?
That part is written to stderr, use 2> to redirect it. For example:
foo > stdout.txt 2> stderr.txt
or if you want in same file:
foo > allout.txt 2>&1
Note: this works in (ba)sh, check your shell for proper syntax
All POSIX operating systems have 3 streams: stdin, stdout, and stderr. stdin is the input stream, and it can be fed from another command's stdout or stderr. stdout is the primary output, which is redirected with >, >>, or |. stderr is the error output; it is handled separately so that errors do not get passed to a downstream command or mixed into a file they might break. Normally stderr is sent to a log of some kind, or printed directly, even when stdout is redirected. To redirect both to the same place, use:
$command &> /some/file
EDIT: thanks to Zack for pointing out that the above solution is not portable--use instead:
$command > file 2>&1
If you want to silence the error, do:
$command 2> /dev/null
To get the output on the console AND in a file file.txt for example.
make 2>&1 | tee file.txt
Note: & (in 2>&1) specifies that 1 is not a file name but a file descriptor.
Use this: "required command here" > log_file_name 2>&1
Detailed description of the redirection operators in Unix/Linux.
The > operator redirects the output, usually to a file, but it can also be to a device. You can also use >> to append.
If you don't specify a number, the standard output stream is assumed, but you can also redirect errors:
> file redirects stdout to file
1> file redirects stdout to file
2> file redirects stderr to file
&> file redirects stdout and stderr to file
/dev/null is the null device: it takes any input you give it and throws it away. It can be used to suppress any output.
Credits to osexp2003 and j.a. …
Instead of putting:
&>> your_file.log
behind a line in:
crontab -e
I use:
#!/bin/bash
exec &>> your_file.log
…
at the beginning of a BASH script.
Advantage: You have the log definitions within your script. Good for Git etc.
You can use the exec command to redirect all stdout/stderr output of any later commands.
sample script:
exec 2> your_file2 > your_file1
# your other commands ...
It might be the standard error. You can redirect it:
... > out.txt 2>&1
Command:
foo >> output.txt 2>&1
appends to the output.txt file, without replacing the content.
Use >> to append:
command >> file

How to use the set command to redirect output to another file

Bash:
I was wondering how I could use the set command to redirect the output to another file.
For example, I would like to use the set command to re-direct the output to a file called "output", how would I do so?
I've tried using set -h output but it didn't seem to work. Any help is appreciated.
The set command is not for redirecting output. Maybe you are thinking of redirections.
For example, if you would like to redirect the output of echo 'hi!' to a file, you could run echo 'hi!' > output.
You might be thinking of exec
#!/usr/bin/env bash
echo "script starting"
# make a backup of fd1 (stdout) as fd3, then redirect fd1 to an output file
exec 3>&1 1>/tmp/output.file
# your script here
echo "hello world"
echo "...and so on..."
# turn off logging
exec 1>&3 3>&-
echo done
Running that looks like:
$ bash script.sh
script starting
done
$ cat /tmp/output.file
hello world
...and so on...
You can use this method to log output to both the terminal and the output file, using a process substitution:
exec 3>&1 1> >(tee /tmp/output.file)
and other fancy things, like adding a timestamp to the output:
exec 3>&1 1> >(ts | tee /tmp/output.file)
The set command allows you to set or unset values of shell options and positional parameters; it is not the right choice for redirecting output to a file. To write the output of a command to a file, there are multiple ways, and some of the commonly used ones are listed below:
command > output.log
The standard output stream will be redirected to the file only, it
will not be visible in the terminal. If the file already exists, it
gets overwritten.
command >> output.log
The standard output stream will be redirected to the file only, it
will not be visible in the terminal. If the file already exists, the
new data will get appended to the end of the file.
command 2> output.log
The standard error stream will be redirected to the file only, it will
not be visible in the terminal. If the file already exists, it gets
overwritten.
command 2>> output.log
The standard error stream will be redirected to the file only, it will
not be visible in the terminal. If the file already exists, the new
data will get appended to the end of the file.
command &> output.log
Both the standard output and standard error stream will be redirected
to the file only, nothing will be visible in the terminal. If the file
already exists, it gets overwritten.
command &>> output.log
Both the standard output and standard error stream will be redirected
to the file only, nothing will be visible in the terminal. If the file
already exists, the new data will get appended to the end of the file.
command | tee output.log
The standard output stream will be copied to the file, it will still
be visible in the terminal. If the file already exists, it gets
overwritten.
command | tee -a output.log
The standard output stream will be copied to the file, it will still
be visible in the terminal. If the file already exists, the new data
will get appended to the end of the file.
command |& tee output.log
Both the standard output and standard error streams will be copied to
the file while still being visible in the terminal. If the file
already exists, it gets overwritten.
command |& tee -a output.log
Both the standard output and standard error streams will be copied to
the file while still being visible in the terminal. If the file
already exists, the new data will get appended to the end of the file.
command 2>&1 | tee output.log
Both the standard output and standard error streams will be copied to
the file while still being visible in the terminal; this is the
POSIX-portable equivalent of |& tee. If the file already exists, it gets
overwritten.
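A quick way to convince yourself of the difference between the plain | tee rows and the 2>&1 | tee rows above is a helper that writes one line to each stream (the function and file names are made up for this demo):

```shell
#!/usr/bin/env bash
# A helper that writes one line to stdout and one to stderr.
both() { echo "to stdout"; echo "to stderr" >&2; }

# Plain tee: only stdout reaches the file; "to stderr" goes straight
# to the terminal, bypassing the pipe.
both | tee only_stdout.log

# Merged first: both lines travel through the pipe, so both reach the
# file (and the terminal).
both 2>&1 | tee both_streams.log
```

Afterwards, only_stdout.log contains just the "to stdout" line, while both_streams.log contains both lines.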

Suppressing console output and errors while running a shell script

I am trying to output a list of all running services, the corresponding package, and the status on my Linux (CentOS) box using the below code snippet:
for i in $(service --status-all | grep -v "not running" | grep -E 'running|stopped' | awk '{print $1}'); do
packagename=$(rpm -qf "/etc/init.d/$i")
servicestatus=$(service --status-all | grep "$i" | awk '{print $NF}');
echo "$tdydate", $(ifconfig | sed -En 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'), "$i", "$packagename", "$servicestatus" >> "$HOME/MyLog/running_services.csv"
done
However, when I run the script, I get errors on the console like:
error: file /etc/init.d/cupsd: No such file or directory
error: file /etc/init.d/hald: No such file or directory
error: file /etc/init.d/lvmetad: No such file or directory
error: file /etc/init.d/postmaster: No such file or directory
Probably this happens when it tries to find the service names in the init.d directory; however, this is not true for all services.
Now, how can I suppress this console output? I don't want to see it on the console. Any pointers?
You could wrap the entire thing in parentheses and add a 2>/dev/null:
(
...
script
...
) 2>/dev/null
That will run the script/snippet in a subshell.
See this page for more info on redirecting.
To suppress stdout and stderr, you simply redirect the output like this: command >/dev/null 2>&1.
The first redirection, >/dev/null, sends standard output to /dev/null, which suppresses it. Standard output is what commands write to the screen, not errors (e.g. when executing echo "hello", the hello is a line of stdout).
The second redirection, 2>&1, couples stderr (standard error) to the same location as standard output (so also /dev/null).
A note on the notation: there are three standard file descriptors in Linux (although you can create more). 1 represents standard output, 2 standard error, and 0 standard input. So you could also redirect standard error (i.e. error messages) to a log file, like this: 2>/path/to/log/file.log.
Google for file descriptor for more information.
So you could, as pacman-- suggested, wrap the entire code and redirect the output, but you could also call your script like this: bash script.sh >/dev/null 2>&1, which is more elegant because you can execute the script without the final redirection when debugging, at least in my opinion.
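A third, more surgical option for the script in the question: silence stderr only on the command that produces the noise, so unrelated errors still surface. A runnable sketch, where ls of a deliberately nonexistent path stands in for the failing rpm -qf call:

```shell
#!/usr/bin/env bash
# Discard stderr on just the noisy lookup; every other command in the
# script still reports errors normally. The path below is intentionally
# missing for the demo.
packagename=$(ls /no/such/init.d/cupsd 2>/dev/null)

echo "package: ${packagename:-unknown}"   # prints "package: unknown"
```

In the original loop this would be packagename=$(rpm -qf "/etc/init.d/$i" 2>/dev/null), keeping the rest of the script's output intact.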

Get whole output stream from remote machine after running .rb file using ssh [duplicate]

This question already has answers here:
How to redirect and append both standard output and standard error to a file with Bash
(8 answers)
Closed 6 years ago.
