Redirect scp output that matches pattern - bash

I want to redirect output from scp into a file, but only the lines that match a pattern. For example, I'm getting a lot of permission denied errors, so I want to log all the files for which that is the case. I found that I can redirect all output of scp into a file like this:
source my_scp_script.sh > output.log 2>&1
The script is just a simple call to scp. I'm stuck on how I can match a pattern like "Permission Denied" and write only those lines to the file, not everything, since there are thousands of files that transfer successfully.
EDIT: I forgot to clarify that I have tried using grep, but it does not work when doing it like this: source my_scp_script.sh | grep Permission > output.log

There is nothing special about scp; you just need to get familiar with shell I/O redirections.
(The Linux Documentation Project's Advanced Bash-Scripting Guide is IMO a great resource of information; §20 I/O Redirection contains a nice summary.)
You could use source my_scp_script.sh 1> /tmp/stdout 2> /tmp/stderr to redirect the output written to the file descriptors 1 (aka stdout; note that 1> can also be written as >, which is more commonly used) and 2 (aka stderr) to temporary files. You can then inspect /tmp/std{out,err} and will find that the Permission Denied errors are written to stderr.
A simple pipeline | grep connects stdout with grep's stdin,
but as you need to connect stderr, you can use
source my_scp_script.sh 2>&1 | grep "Permission Denied" > output.log
to redirect file descriptor 2 to file descriptor 1 before piping the combined stdout to grep.
With |& being a shortcut for 2>&1 |, you can finally simplify this to:
source my_scp_script.sh |& grep "Permission Denied" > output.log
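If you also want to keep the full transcript while logging only the failures, a minimal sketch (the -i flag makes the match case-insensitive, since the exact wording and capitalization of scp's message can vary; full.log and denied.log are placeholder names):
source my_scp_script.sh 2>&1 | tee full.log | grep -i "permission denied" > denied.log
Here tee writes the combined stdout/stderr to full.log and passes every line on, so grep still sees the whole stream and filters only the denied files into denied.log.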

Related

Pipe-ing perf output into a file [duplicate]

This question already has answers here:
How to redirect and append both standard output and standard error to a file with Bash
(8 answers)
Closed 6 years ago.
I know that in Linux, to redirect output from the screen to a file, I can use either > or tee. However, I'm not sure why part of the output is still shown on the screen and not written to the file.
Is there a way to redirect all output to a file?
That part is written to stderr; use 2> to redirect it. For example:
foo > stdout.txt 2> stderr.txt
or, if you want both in the same file:
foo > allout.txt 2>&1
Note: this works in (ba)sh; check your shell for the proper syntax.
All POSIX operating systems have 3 standard streams: stdin, stdout, and stderr. stdin is the input stream; it can be fed from another command's stdout or stderr. stdout is the primary output, which is redirected with >, >>, or |. stderr is the error output; it is handled separately so that errors do not get piped into a command or written to a file that they might break. Normally stderr goes to a log of some kind, or is dumped directly to the terminal, even when stdout is redirected. To redirect both to the same place, use:
$command &> /some/file
EDIT: thanks to Zack for pointing out that the above solution is not portable; use instead:
$command > file 2>&1
If you want to silence the error, do:
$command 2> /dev/null
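To see the two streams being separated, here is a tiny illustrative sketch (the messages and file names are made up):
( echo "this goes to stdout"; echo "this goes to stderr" >&2 ) > out.txt 2> err.txt
cat out.txt    # contains: this goes to stdout
cat err.txt    # contains: this goes to stderr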
To get the output on the console AND in a file (file.txt, for example):
make 2>&1 | tee file.txt
Note: & (in 2>&1) specifies that 1 is not a file name but a file descriptor.
Use this: "required command here" > log_file_name 2>&1
Detailed description of the redirection operators in Unix/Linux.
The > operator redirects the output, usually to a file, but it can be to a device. You can also use >> to append.
If you don't specify a number then the standard output stream is assumed, but you can also redirect errors (a short example follows the list below):
> file redirects stdout to file
1> file redirects stdout to file
2> file redirects stderr to file
&> file redirects stdout and stderr to file
/dev/null is the null device it takes any input you want and throws it away. It can be used to suppress any output.
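A quick sketch putting these operators together (the file and directory names are arbitrary, and &> is bash-specific):
ls /etc > listing.txt              # stdout to a file
ls /nonexistent 2> errors.txt      # stderr to a file
ls /etc /nonexistent &> all.txt    # both streams to the same file
ls /nonexistent 2> /dev/null       # throw the error message away
ls /etc >> listing.txt             # append instead of overwrite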
Credits to osexp2003 and j.a. …
Instead of putting:
&>> your_file.log
behind a line in:
crontab -e
I use:
#!/bin/bash
exec &>> your_file.log
…
at the beginning of a BASH script.
Advantage: You have the log definitions within your script. Good for Git etc.
You can use the exec command to redirect all stdout/stderr output of any commands that come after it.
Sample script:
exec 2> your_file2 > your_file1
your other commands.....
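A complete, runnable sketch of this approach (the log file names are placeholders):
#!/bin/bash
# From this point on, stderr goes to errors.log and stdout goes to output.log.
exec 2> errors.log > output.log

echo "this line ends up in output.log"
ls /no/such/dir    # this error message ends up in errors.log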
It might be the standard error. You can redirect it:
... > out.txt 2>&1
Command:
foo >> output.txt 2>&1
appends to the output.txt file, without replacing the content.
Use >> to append:
command >> file

bash stdout redirection in a for loop

Hello,
I'm struggling with output redirection within a bash for loop. I have several similar bash scripts, each launched on a different input file. Each script calls a tool with a few arguments.
The bash scripts look like this:
my_tool --input input_file --arg1 arg1_name --arg2 arg2_name > output_dir/result_X.dat
X being a string which is unique for each script.
I run all of them in a for loop:
for script in scripts_dir/*.sh
do
    bash $script
done
However, the output of each run still displays on the terminal. The specified output files are created, but empty. How can I solve that? I found other questions on Stack Overflow where the answer is to redirect the whole loop into one big output file, but I'd like to avoid that.
If I replace > output_dir/result_X.dat with > /dev/null: the output still displays on the terminal.
If I replace > output_dir/result_X.dat with 2> /dev/null: nothing is displayed.
Thanks in advance.
When you start my_tool, there are normally 3 file-descriptors available in the tool:
STDIN
STDOUT
STDERR
STDIN is used for input, and therefore irrelevant for this question. STDOUT is used for standard output. This is file-descriptor 1. If you do
ls 1> /dev/null
the STDOUT of ls is written to /dev/null. If you do not add the 1, as in ls > /dev/null, it is assumed that you mean STDOUT.
STDERR is used as standard output for error messages, in the broadest sense of the word. The number of STDERR is 2.
Using ls instead of your my_tool: ls > file will put the listing in the file. ls /non_existing_dir > file will also send the STDOUT of ls to the file, but there is no output on STDOUT, and because STDERR is not redirected, the error message is sent to the terminal.
So, to conclude,
ls . /non_existing_dir 2>stderr >stdout
will put the directory listing of . in the file stdout and the error for the non-existing directory in stderr.
With 2>&1 you redirect the output of file descriptor 2 (STDERR) to file descriptor 1 (STDOUT).
To complicate things a bit, you can add other file descriptor numbers. For example:
exec 3>file
will put the output of file descriptor 3 (which is newly created) in file. And
ls 1>&3
will then redirect the output of file descriptor 1 to file descriptor 3, effectively putting the output of ls in file.
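Applied to the loop in the question, a minimal sketch (this assumes my_tool writes its progress messages to STDERR, which would explain why they still reach the terminal; the errors_*.log names are just placeholders):
for script in scripts_dir/*.sh
do
    # stdout already goes to output_dir/result_X.dat inside each script;
    # capture each script's stderr in its own file instead of the terminal.
    bash "$script" 2> "output_dir/errors_$(basename "$script" .sh).log"
done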

Suppressing console output and errors while running Shell Script

I am trying to output a list of all running services, the corresponding package, and the status on my Linux (CentOS) box using the code snippet below:
for i in $(service --status-all | grep -v "not running" | grep -E running\|stopped | awk '{print $1}'); do
    packagename=$(rpm -qf /etc/init.d/$i)
    servicestatus=$(service --status-all | grep $i | awk '{print $NF}');
    echo $tdydate, $(ifconfig | sed -En 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'), $i, $packagename, $servicestatus >> "$HOME/MyLog/running_services.csv"
done
However, when I run the script, I get errors on the console like:
error: file /etc/init.d/cupsd: No such file or directory
error: file /etc/init.d/hald: No such file or directory
error: file /etc/init.d/lvmetad: No such file or directory
error: file /etc/init.d/postmaster: No such file or directory
Probably, this is when it tries to find service names in init.d directory. However, this is not true for all services.
Now, how can I suppress this console output? I don't want to see this on console. Any pointers?
You could wrap the entire thing in parentheses and do a 2>/dev/null:
(
...
script
...
) 2>/dev/null
That will run the script / snippet in a subshell.
See this page for more info on redirecting.
To suppress stdout and stderr, you simply redirect the output like this: command >/dev/null 2>&1.
The first redirection >/dev/null will redirect the standard output to /dev/null, which suppresses it. This standard output is what commands write to the screen, not errors (e.g. when executing echo "hello", the hello is a line of stdout).
The second redirection 2>&1 will couple stderr or standard error to the same location as standard output (so also to /dev/null).
A note on the notation: there are three standard file descriptors in Linux (although you can create more). 1 refers to standard output, 2 to standard error, and 0 to standard input. So you could also redirect standard error (i.e. error messages) to a log file, like this: 2>/path/to/log/file.log.
Google for file descriptor for more information.
So you could, as pacman-- suggested, wrap the entire code and redirect the output, but you could also call your script like this: bash script.sh >/dev/null 2>&1. This is more elegant because you can run the script without the trailing redirections when you want to debug, at least in my opinion.
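Putting both suggestions together, a small sketch (script.sh and the log path are placeholders):
# Discard everything, both output and errors:
bash script.sh >/dev/null 2>&1

# Or discard the output but keep the errors in a log for later inspection:
bash script.sh >/dev/null 2>"$HOME/MyLog/service_errors.log"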

Get whole output stream from remote machine after running .rb file using ssh [duplicate]

This question already has answers here:
How to redirect and append both standard output and standard error to a file with Bash
(8 answers)
Closed 6 years ago.

Grep stderr in bash script for any output

In my bash script I use grep in different logs like this:
LOGS1=$(grep -E -i 'err|warn' /opt/backup/exports.log /opt/backup/imports.log && grep "tar:" /opt/backup/h2_backups.log /opt/backup/st_backups.log)
if [ -n "$LOGS1" ]; then
COLOUR="yellow"
MESSAGE="Logs contain warnings. Backups may be incomplete. Investigate these warnings:\n$LOGS1"
Instead of checking whether each log exists (there are many more logs than this), I want to check stderr while the script runs to see if I get any output. If one of the logs does not exist, it will produce an error like this: grep: /opt/backup/st_backups.log: No such file or directory
I've tried to read stderr with commands like command 2> >(grep "file" >&2), but that does not seem to work.
I know I can pipe the output to a file, but I would rather just handle the stderr when there is any output instead of reading the file. Or is there any reason why piping to a file is better?
Send the standard error (file descriptor 2) to standard output (file descriptor 1) and assign it to the variable Q:
$ Q=$(grep text file 2>&1)
$ echo $Q
grep: file: No such file or directory
This is the default behaviour: stderr is normally attached to your terminal (and unbuffered), so you see errors even as you pipe stdout somewhere. If you want to merge stderr with stdout, then this is the syntax:
command >file 2>&1
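If you only want whatever grep writes to stderr (such as the missing-file messages) and not its matches, one sketch is to redirect stderr onto the captured stream first and then discard stdout (the variable name ERRORS is arbitrary):
# Order matters: 2>&1 first points stderr at the captured stdout,
# then >/dev/null discards the original stdout (the matches themselves).
ERRORS=$(grep -E -i 'err|warn' /opt/backup/exports.log /opt/backup/imports.log 2>&1 >/dev/null)
if [ -n "$ERRORS" ]; then
    echo "grep reported problems: $ERRORS"
fi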
