Suppressing console output and errors while running a shell script - shell

I am trying to output a list of all running services, the corresponding package, and the status on my Linux (CentOS) box using the below code snippet:
for i in $(service --status-all | grep -v "not running" | grep -E 'running|stopped' | awk '{print $1}'); do
packagename=$(rpm -qf "/etc/init.d/$i")
servicestatus=$(service --status-all | grep "$i" | awk '{print $NF}')
echo $tdydate, $(ifconfig | sed -En 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'), $i, $packagename, $servicestatus >> "$HOME/MyLog/running_services.csv"
done
However, when I run the script, I get errors on the console like:
error: file /etc/init.d/cupsd: No such file or directory
error: file /etc/init.d/hald: No such file or directory
error: file /etc/init.d/lvmetad: No such file or directory
error: file /etc/init.d/postmaster: No such file or directory
Presumably this happens when rpm looks for the service's init script in the /etc/init.d directory; not every service has one there.
Now, how can I suppress this console output? I don't want to see these errors on the console. Any pointers?

You could wrap the entire thing in parentheses and add a 2>/dev/null:
(
...
script
...
) 2>/dev/null
That will run the script/snippet in a subshell.
See this page for more info on redirecting.
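Applied to the snippet from the question, a minimal sketch (trimmed to the relevant parts) would look like:
(
for i in $(service --status-all | grep -E 'running|stopped' | awk '{print $1}'); do
    packagename=$(rpm -qf "/etc/init.d/$i")   # errors from missing init scripts are swallowed by the outer redirection
    echo "$i, $packagename"
done
) 2>/dev/null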

To suppress stdout and stderr, you simply redirect the output like this: command >/dev/null 2>&1.
The first redirection, >/dev/null, redirects the standard output to /dev/null, which suppresses it. Standard output is what commands write to the screen, not errors (e.g. when executing echo "hello", the hello is a line of stdout).
The second redirection, 2>&1, couples stderr (standard error) to the same location as standard output (so also to /dev/null).
A note on the notation: there are three standard file descriptors in Linux (you can open more with exec if needed). 1 refers to standard output, 2 to standard error, and 0 to standard input. So you could also redirect standard error (i.e. error messages) to a log file, like this: 2>/path/to/log/file.log.
Google for file descriptor for more information.
So you could, as pacman-- suggested, wrap the entire code and redirect the output, but you could also call your script like this: bash script.sh >/dev/null 2>&1. That is arguably more elegant, because you can still run the script without the redirections when you need to debug, at least in my opinion.
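And if you'd rather keep other diagnostics visible and silence only the failing rpm lookup, you can redirect stderr for just that one command; a sketch based on the question's loop:
packagename=$(rpm -qf "/etc/init.d/$i" 2>/dev/null)   # suppresses "No such file or directory" for services without an init script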

Related

Redirect scp output that matches pattern

I want redirect output from scp into a file, but only lines that match a pattern. For example, I'm getting a lot of permission denied errors, so I want to log all the files where that is the case. I found that I can redirect all output of scp into a file like this:
source my_scp_script.sh > output.log 2>&1
The script is just a simple call to scp. I'm stuck on how I can match a pattern like "Permission Denied" and only write those lines to the file, not everything since there are thousands of files that are successful.
EDIT: I forgot to clarify that I have tried using grep, but it does not work when doing it like this: source my_scp_script.sh | grep Permission > output.log
There is nothing special about scp; you just need to get familiar with shell I/O redirections.
(The Linux Documentation Project's Advanced Bash-Scripting Guide is IMO a great resource of information; §20 I/O Redirection contains a nice summary.)
You could use source my_scp_script.sh 1> /tmp/stdout 2> /tmp/stderr to redirect the output written to the file descriptors 1 (aka stdout; note that 1> can also be written as >, which is more commonly used) and 2 (aka stderr) to temporary files. You can then inspect /tmp/std{out,err} and will find that the Permission Denied errors are written to stderr.
A simple pipeline | grep connects stdout with grep's stdin,
but as you need to connect stderr, you can use
source my_scp_script.sh 2>&1 | grep "Permission Denied" > output.log
to redirect file descriptor 2 to file descriptor 1 before piping the combined stdout to grep.
With |& being a shortcut for 2>&1 |, you can finally simplify this to:
source my_scp_script.sh |& grep "Permission Denied" > output.log
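If you want to convince yourself which stream a given message is on, here is a small self-contained sketch (not scp-specific) that fakes one stdout line and one stderr line:
{ echo "copied ok"; echo "Permission denied: secret.txt" >&2; } | grep "Permission"    # no match: the stderr line bypasses grep and hits the terminal directly
{ echo "copied ok"; echo "Permission denied: secret.txt" >&2; } |& grep "Permission"   # matches: |& pipes stderr into grep as well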

echo to file and terminal [duplicate]

In bash, calling foo displays any output from that command on stdout.
Calling foo > output redirects any output from that command to the specified file (in this case 'output').
Is there a way to redirect output to a file and have it display on stdout?
The command you want is named tee:
foo | tee output.file
For example, if you only care about stdout:
ls -a | tee output.file
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both are written to stdout; tee then copies that combined stream to the given output file as well as the screen.
Furthermore, if you want to append to the log file, use tee -a as:
program [arguments...] 2>&1 | tee -a outfile
$ program [arguments...] 2>&1 | tee outfile
2>&1 merges the stderr and stdout streams into one.
tee outfile takes the stream it gets and writes it to the screen and to the file "outfile".
This is probably what most people are looking for. The likely situation is some program or script is working hard for a long time and producing a lot of output. The user wants to check it periodically for progress, but also wants the output written to a file.
The problem (especially when mixing stdout and stderr streams) is that there is reliance on the streams being flushed by the program. If, for example, all the writes to stdout are not flushed, but all the writes to stderr are flushed, then they'll end up out of chronological order in the output file and on the screen.
It's also bad if the program only outputs 1 or 2 lines every few minutes to report progress. In such a case, if the output was not flushed by the program, the user wouldn't even see any output on the screen for hours, because none of it would get pushed through the pipe for hours.
Update: The program unbuffer, part of the expect package, solves the buffering problem. It causes stdout and stderr to be written to the screen and to the file immediately, and keeps them in chronological order when combined and redirected to tee. E.g.:
$ unbuffer program [arguments...] 2>&1 | tee outfile
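If unbuffer isn't available, GNU coreutils' stdbuf can often achieve a similar effect by forcing line buffering (note this is an alternative suggestion, and it only helps for programs that use C stdio buffering):
$ stdbuf -oL -eL program [arguments...] 2>&1 | tee outfile   # -oL/-eL make stdout/stderr line-buffered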
Another way that works for me is:
<command> |& tee <outputFile>
as shown in the GNU Bash manual.
Example:
ls |& tee files.txt
If ‘|&’ is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
For more information, see the Bash manual's section on redirection.
You can primarily use Zoredache's solution, but if you don't want to overwrite the output file, invoke tee with the -a option, as follows:
ls -lR / | tee -a output.file
Something to add: the unbuffer package has support issues with some packages under Fedora and Red Hat releases.
Setting those troubles aside, the following worked for me:
bash myscript.sh 2>&1 | tee output.log
Thank you ScDF & matthew, your inputs saved me a lot of time.
Redirecting the output to a file and then using tail -f output in another terminal should also work.
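That is, run the script with its output redirected to a file and follow it from a second terminal; a sketch with a hypothetical script name:
./my_long_task.sh > output 2>&1 &   # run in the background, all output goes into "output"
tail -f output                      # watch the file grow from (another) terminal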
In my case I had a Java process producing output logs. The simplest solution to display the output logs and redirect them into a file (named logfile here) was:
my_java_process_run_script.sh |& tee logfile
The result was the Java process running with its output logs displayed on screen and written to the file logfile at the same time.
You can do that for your entire script by putting something like this at the beginning of your script:
#!/usr/bin/env bash
test x"$1" = x$'\x00' && shift || { set -o pipefail ; ( exec 2>&1 ; $0 $'\x00' "$@" ) | tee mylogfile ; exit $? ; }
# do whatever you want
This redirects both stderr and stdout to the file called mylogfile while also letting everything go to stdout at the same time.
It uses some dirty tricks:
use exec without a command to set up the redirections,
use tee to duplicate the outputs,
restart the script with the wanted redirections,
use a special first parameter (a single NUL character, written with the $'string' special bash notation) to flag that the script has been restarted (your original code must not use an equivalent first parameter),
try to preserve the original exit status when restarting the script by using the pipefail option.
Ugly, but useful for me in certain situations.
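If your bash supports process substitution, a much simpler variant of the same idea (a sketch, reusing the mylogfile name from above) avoids the restart trick entirely:
#!/usr/bin/env bash
exec > >(tee -a mylogfile) 2>&1   # from here on, stdout and stderr go to both the terminal and mylogfile
echo "this line appears on screen and in mylogfile"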
Bonus answer since this use-case brought me here:
In the case where you need to do this as some other user
echo "some output" | sudo -u some_user tee /some/path/some_file
Note that the echo will happen as you, while the file write will happen as "some_user". What will NOT work is running the echo as "some_user" and redirecting the output with >> "some_file", because the file redirection would happen as you.
Hint: tee also supports append with the -a flag, if you need to replace a line in a file as another user you could execute sed as the desired user.
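For example, the sed variant of that hint might look like this (hypothetical path and pattern):
sudo -u some_user sed -i 's/^old setting$/new setting/' /some/path/some_file   # the in-place edit runs as some_user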
< command > |& tee filename # creates the file "filename" with the command's output as its content; if the file already exists, the existing content is overwritten
< command > | tee >> filename # appends the output to the file, but doesn't print it to standard output (the screen)
I want to print something using "echo" on the screen and append that echoed data to a file:
echo "hi there, Have to print this on screen and append to a file"
tee is perfect for this (a sketch follows below), but this will also do the job:
ls -lr / > output && cat output
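For completeness, the tee version of that echo, using -a so the file is appended to rather than overwritten (some_file.txt is a hypothetical name):
echo "hi there, Have to print this on screen and append to a file" | tee -a some_file.txt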

Get whole output stream from remote machine after running .rb file using ssh [duplicate]

I know that in Linux, to redirect output from the screen to a file, I can use either > or tee. However, I'm not sure why part of the output is still shown on the screen and not written to the file.
Is there a way to redirect all output to file?
That part is written to stderr, use 2> to redirect it. For example:
foo > stdout.txt 2> stderr.txt
or if you want in same file:
foo > allout.txt 2>&1
Note: this works in (ba)sh, check your shell for proper syntax
All POSIX operating systems have 3 standard streams: stdin, stdout, and stderr. stdin is the input; it can be fed by another command's stdout or stderr. stdout is the primary output, which is redirected with >, >>, or |. stderr is the error output; it is handled separately so that error messages do not get passed to the next command or mixed into a data file they might break, and it normally still reaches the terminal (or a log of some kind) even when stdout is redirected. To redirect both to the same place, use:
$command &> /some/file
EDIT: thanks to Zack for pointing out that the above solution is not portable; use this instead:
$command > file 2>&1
If you want to silence the error, do:
$command 2> /dev/null
To get the output on the console AND in a file, for example file.txt:
make 2>&1 | tee file.txt
Note: & (in 2>&1) specifies that 1 is not a file name but a file descriptor.
Use this: "require command here" > log_file_name 2>&1
A detailed description of the redirection operators in Unix/Linux:
The > operator redirects the output, usually to a file, but it can be to a device. You can also use >> to append.
If you don't specify a number then the standard output stream is assumed, but you can also redirect errors:
> file redirects stdout to file
1> file redirects stdout to file
2> file redirects stderr to file
&> file redirects stdout and stderr to file
/dev/null is the null device it takes any input you want and throws it away. It can be used to suppress any output.
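A tiny demonstration of those operators, using a command that writes to both streams (the file names are arbitrary):
ls /etc /nonexistent > stdout.txt 2> stderr.txt   # listing into stdout.txt, the "No such file" error into stderr.txt
ls /etc /nonexistent &> both.txt                  # both streams into one file (bash)
ls /etc /nonexistent 2> /dev/null                 # error discarded, listing still on screen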
Credits to osexp2003 and j.a. …
Instead of putting:
&>> your_file.log
behind a line in:
crontab -e
I use:
#!/bin/bash
exec &>> your_file.log
…
at the beginning of a Bash script.
Advantage: You have the log definitions within your script. Good for Git etc.
You can use the exec command to redirect all stdout/stderr output of any subsequent commands.
Sample script:
exec 2> your_file2 > your_file1
# your other commands ...
It might be the standard error. You can redirect it:
... > out.txt 2>&1
Command:
foo >> output.txt 2>&1
appends to the output.txt file, without replacing the content.
Use >> to append:
command >> file

Grep stderr in bash script for any output

In my bash script I use grep in different logs like this:
LOGS1=$(grep -E -i 'err|warn' /opt/backup/exports.log /opt/backup/imports.log && grep "tar:" /opt/backup/h2_backups.log /opt/backup/st_backups.log)
if [ -n "$LOGS1" ]; then
COLOUR="yellow"
MESSAGE="Logs contain warnings. Backups may be incomplete. Investigate these warnings:\n$LOGS1"
Instead of checking whether each log exists (there are many more logs than this), I want to check stderr while the script runs to see if I get any output. If one of the logs does not exist, grep produces an error like this: grep: /opt/backup/st_backups.log: No such file or directory
I've tried to read stderr with commands like command 2> >(grep "file" >&2), but that does not seem to work.
I know I can pipe the output to a file, but I'd rather just handle the stderr when there is any output instead of reading the file. Or is there any reason why piping to a file is better?
Send standard error (file descriptor 2) to standard output (file descriptor 1) and assign the result to the variable Q:
$ Q=$(grep text file 2>&1)
$ echo $Q
grep: file: No such file or directory
This is the default behaviour: stderr is normally connected to your terminal (and unbuffered), so you see errors even as you pipe stdout somewhere. If you want to merge stderr with stdout, the syntax is:
command >file 2>&1
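And if you want the variable to receive only stderr while stdout is thrown away, the order of the redirections matters; something like this should work:
Q=$(grep text file 2>&1 >/dev/null)   # 2>&1 first points stderr at the capture pipe, then >/dev/null discards stdout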

Capture stderr into a pipe from command substitution

I have a program that returns answers on stdout and errors on stderr.
Unfortunately the program ends by emitting some text on stderr even if successful.
I would like to store the program's output in a variable using command substitution, as:
ans=$(prog) 2>&1 | grep -v success
This doesn't work. I tried putting the 2>&1 inside the parens, but as I suspected, $ans then gets the success text.
Any ideas?
Not sure what you're trying to get, but this is probably the command you want:
ans=$(prog 2>&1 | grep -v success)
If you want to filter 'success' out of the standard error stream only, you could use something like this:
ans=$({ ./foo 3>&2 2>&1 >&3- | grep -v success; } 2>&1)
And just in case, as noted in BashFAQ/002:
What you cannot do is capture stdout in one variable, and stderr in another, using only FD redirections. You must use a temporary file (or a named pipe) to achieve that one.
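A minimal sketch of that temporary-file approach, using prog from the question:
tmp=$(mktemp)             # temporary file for stderr
out=$(prog 2>"$tmp")      # stdout into $out, stderr into the temp file
err=$(<"$tmp")            # read the captured stderr into $err
rm -f "$tmp"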
