mysqldump - redirect grepped STDERR to a file - bash

mysqldump writes the dump itself (the *.sql content) to STDOUT, and also produces STDERR output that I need to filter and write to a file.
mysqldump --user=db_username --password=db_password --add-drop-database --host=db_host db_name 2>> '/srv/www/data_appsrv/logs/mysql_date_ym.log' > '/mnt/backup_srv/backup/daily/file_nm.sql'
The command above writes STDERR to mysql_date_ym.log.
I need to exclude the "Warning: Using a password" string from the STDERR that is written to mysql_date_ym.log.
I tried variations with grep, 2>> and >, but none of them works.

This should work:
command 1>stdout.txt 2> >(grep -v "Thing to remove from stderr" >stderr.txt)
This redirects STDOUT to stdout.txt, and sends STDERR through the process substitution into the grep filter, which finally writes to stderr.txt.
So your full command would look like this:
mysqldump --user=db_username --password=db_password --add-drop-database --host=db_host db_name 1>'/mnt/backup_srv/backup/daily/file_nm.sql' 2> >(grep -v "Warning: Using a password" > '/srv/www/data_appsrv/logs/mysql_date_ym.log')
You could also use a sed script to run on the mysql_date_ym.log file after redirecting STDOUT/STDERR normally to the output files. Deleting the matching lines outright avoids leaving empty lines behind:
sed -i '/Warning: Using a password/d' mysql_date_ym.log
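As an aside, the warning only appears because the password is passed on the command line. A hedged alternative (the credentials file path here is illustrative) is to keep the credentials in an option file, so the warning is never generated; note that --defaults-extra-file must be the first option:
# /path/to/creds.cnf (chmod 600), containing:
# [client]
# user=db_username
# password=db_password
mysqldump --defaults-extra-file=/path/to/creds.cnf --add-drop-database --host=db_host db_name 2>> '/srv/www/data_appsrv/logs/mysql_date_ym.log' > '/mnt/backup_srv/backup/daily/file_nm.sql'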

Related

Redirect stdout to file and tee stderr to the same file

I am running a command which will (very likely) output text to both stderr and stdout. I want to save both stderr and stdout to the same file, but I only want stderr printing to the terminal.
How can I get this to work? I've tried mycommand 1>&2 | tee file.txt >/dev/null but that doesn't print anything to the terminal.
If You Don't Need Perfect Ordering
Use two separate copies of tee, both appending to the same file: the one on stdout forwards to /dev/null, while the one on stderr forwards back to stderr (and thus the terminal):
mycommand \
2> >(tee -a file.txt >&2) \
> >(tee -a file.txt >/dev/null)
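A quick way to convince yourself this works, using a stand-in command that writes to both streams:
{ echo out; echo err >&2; } \
2> >(tee -a file.txt >&2) \
> >(tee -a file.txt >/dev/null)
# The terminal shows only "err"; file.txt ends up with both lines (order not guaranteed).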
If You Do Need Perfect Ordering
See Separately redirecting and recombining stderr/stdout without losing ordering

Exclude specific lines from sftp connection from logfile

I have multiple scripts to connect to an sftp server and put/get files. Recently the sftp admins added a large banner printed at every login, which has bloated my log files; I'd like to exclude it, since it inflates the file sizes and makes the logs somewhat unreadable.
As is, the commands are of the following format:
sftp user@host <<EOF &>> ${LOGFILE}
get ...
put ...
exit
EOF
Now, I've tried grepping out all the new lines, which all start with a pipe character (they basically drew a box to put the banner in).
sftp user@host <<EOF | grep -v '^|' &>> ${LOGFILE}
get ...
put ...
exit
EOF
This excludes the lines from ${LOGFILE} but throws them to stdout instead, which means they end up in another log file, where we also don't want them (these scripts are called by a scheduler). Oddly, it also seems to filter out the first line of the connection attempt output,
Connecting to <host>...
and redirects that to stdout as well. Not the end of the world, but I do find it odd.
How can I filter the lines beginning with | so they don't show anywhere?
In
sftp user@host <<EOF &>> ${LOGFILE}
You are redirecting both stdout and stderr to the logfile in append mode (&>>). But when you use
sftp user@host <<EOF | grep -v '^|' &>> ${LOGFILE}
you are only redirecting stdout to grep, leaving the stderr output of sftp to pass through untouched. Finally, you are redirecting stdout and stderr of grep to the logfile again.
In fact, you are interested in redirecting both (stdout and stderr) from sftp, so you can use something like:
sftp user@host <<EOF |& grep -v '^|' >> ${LOGFILE}
or, in older versions of bash, using the specific redirection instead of the shorthand:
sftp user@host <<EOF 2>&1 | grep -v '^|' >> ${LOGFILE}
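You can verify the fix without touching the real server by simulating the banner with a stand-in command (the echo lines are illustrative):
{ echo 'Connecting to host...' >&2; echo '| banner line |' >&2; echo 'sftp> get file'; } 2>&1 | grep -v '^|' >> ${LOGFILE}
# ${LOGFILE} receives the connect line and the transfer line; the boxed banner line is dropped.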

Bash: Pipe stdout and catch stderr into a variable

Is it possible to pipe normal stdout further to another program, but store stderr into a variable?
This is the usecase:
mysqldump database | gzip > database.sql
In this scenario I would like to catch all errors/warnings produced by mysqldump and store them into a variable, but the normal stdout (which is the dump) should continue being piped to gzip.
Any ideas about how to accomplish this?
You can do the following:
mysqldump database 2> dump_errors | gzip > database.sql
error_var=$( cat dump_errors )
rm dump_errors
Here, all errors by mysqldump are redirected to a file called 'dump_errors', and stdout is piped to gzip, which in turn writes to database.sql.
Contents of 'dump_errors' are then assigned to the variable 'error_var', and the file 'dump_errors' is removed.
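A slightly more defensive sketch of the same idea, using mktemp so concurrent runs don't clobber each other's error file (variable names are illustrative):
err_file=$(mktemp)                 # private scratch file for stderr
mysqldump database 2> "$err_file" | gzip > database.sql
error_var=$(< "$err_file")         # bash shorthand for $(cat "$err_file")
rm -f "$err_file"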
Note the following redirections:
$ sort 1> output 2> errors # redirects stdout to "output", stderr to "errors"
$ sort 1> output 2>&1 # stderr goes to stdout, stdout writes to "output"
$ sort 2> errors 1>&2 # stdout goes to stderr, stderr writes to "errors"
$ sort 2>&1 > output # tie stderr to screen, redirect stdout to "output"
$ sort > output 2>&1 # redirect stdout to "output", tie stderr to "output"
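The last two lines of that list trip people up; a stand-in command makes the difference visible:
{ echo out; echo err >&2; } 2>&1 > out1   # "err" appears on the terminal, out1 contains "out"
{ echo out; echo err >&2; } > out2 2>&1   # nothing on the terminal, out2 contains both lines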
You could do something like:
errors=$(mysqldump database 2>&1 > >(gzip > database.sql))
Here, I'm using process substitution to get gzip to use mysqldump's output as stdin. Given the order of redirections (2>&1 before >), mysqldump's stderr should now be used for the command substitution.
Testing it out:
$ a=$(sh -c 'echo foo >&2; echo bar' 2>&1 > >(gzip > foo))
$ gunzip < foo
bar
$ echo $a
foo
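A nice property of this form is that the assignment's exit status is mysqldump's, so error handling still works (same command as above):
if ! errors=$(mysqldump database 2>&1 > >(gzip > database.sql)); then
    echo "dump failed: $errors" >&2
fi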

Write STDOUT & STDERR to a logfile, also write STDERR to screen

I would like to run several commands, and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone).
Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile.
{ command1 && command2 && command3 ; } > logfile.log 2>&1
Here is what I want to do with the output of these commands:
STDERR and STDOUT for all commands go to a logfile, in case I need it later. I usually won't look in here unless there are problems.
Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored.
It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this:
{ command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" stefanl@example.org
The problem I run into is that STDERR loses its identity during I/O redirection: '2>&1' merges STDERR into STDOUT, so I can no longer pick the errors out, and with '2> error.log' they no longer appear on screen.
Here are a couple of juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error, so I use the '--keep-going' flag.
{ ./configure && make --keep-going && make install ; } > build.log 2>&1
Or, here's a simple (and perhaps sloppy) build-and-deploy script, which will keep going in the event of an error.
{ ./configure && make --keep-going && make install && rsync -av /foo devhost:/foo ; } > build-and-deploy.log 2>&1
I think what I want involves some sort of Bash I/O Redirection, but I can't figure this out.
(./doit >> log) 2>&1 | tee -a log
This will take stdout and append it to the log file.
The stderr will then get converted to stdout, which is piped to tee, which appends it to the log (if you have Bash 4, you can replace 2>&1 | with |&) and sends it on to stdout, which will either appear on the tty or can be piped to another command.
I used append mode for both so that regardless of which order the shell redirection and tee open the file, you won't blow away the original. That said, it may be possible that stderr/stdout is interleaved in an unexpected way.
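Here is the same pattern with a stand-in command, so you can see which stream lands where:
( { echo out; echo err >&2; } >> log ) 2>&1 | tee -a log
# The terminal shows "err"; log ends up with both lines (possibly interleaved oddly).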
If your system has /dev/fd/* nodes you can do it as:
( exec 5>logfile.txt ; { command1 && command2 && command3 ;} 2>&1 >&5 | tee /dev/fd/5 )
This opens file descriptor 5 to your logfile. Executes the commands with standard error directed to standard out, standard out directed to fd 5 and pipes stdout (which now contains only stderr) to tee which duplicates the output to fd 5 which is the log file.
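With stand-in commands, the behavior looks like this:
( exec 5>logfile.txt ; { echo ok && echo bad >&2 ;} 2>&1 >&5 | tee /dev/fd/5 )
# "ok" lands only in logfile.txt; "bad" lands in logfile.txt and on the terminal.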
Here is how to run one or more commands, capturing the standard output and error in the order in which they are generated, writing them to a logfile, and displaying only the standard error on any terminal screen you like. This works in bash on Linux and probably in most other environments. I will use an example to show how it's done.
Preliminaries:
Open two windows (shells, tmux sessions, whatever)
I will demonstrate with some test files, so create the test files:
touch /tmp/foo /tmp/foo1 /tmp/foo2
in window1:
mkfifo /tmp/fifo
0</tmp/fifo cat - >/tmp/logfile
Then, in window2:
(ls -l /tmp/foo /tmp/nofile /tmp/foo1 /tmp/nofile /tmp/nofile; echo successful test; ls /tmp/nofile1111) 2>&1 1>/tmp/fifo | tee /tmp/fifo 1>/dev/pts/2
where you replace /dev/pts/2 with whatever tty you want the stderr displayed on.
The reason for the various successful and unsuccessful commands in the subshell is simply to generate a mingled stream of output and error messages, so that you can verify the correct ordering in the log file. Once you understand how it works, replace the “ls” and “echo” commands with scripts or commands of your choosing.
With this method, the ordering of output and error is preserved, the syntax is simple and clean, and there is only a single reference to the output file. Plus there is flexibility in putting the extra copy of stderr wherever you want.
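Once you are done testing, the fifo and scratch files can be removed; the cat in window1 exits on its own when the last writer closes the fifo:
rm /tmp/fifo /tmp/foo /tmp/foo1 /tmp/foo2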
Try:
command 2>&1 | tee output.txt
Additionally, you can direct stdout and stderr to different places:
command > stdout.txt 2> stderr.txt
command 2>&1 > stdout.txt | program_for_stderr
So some combination of the above should work for you -- e.g. you could save stdout to a file, and stderr to both a file and piping to another program (with tee).
Add this at the beginning of your script:
#!/bin/bash
set -e
outfile=logfile
exec > >(cat >> "$outfile")
exec 2> >(tee -a "$outfile" >&2)
# write your code here
STDOUT and STDERR will be written to $outfile; only STDERR will be seen on the console.
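For example, with those two exec lines at the top, a script body like this behaves as described:
echo "routine message"        # goes only to $outfile
echo "something broke" >&2    # goes to $outfile and to the console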

How to silence output in a Bash script?

I have a program that outputs to stdout, and I would like to silence that output in a Bash script while redirecting it to a file.
For example, running the program will output:
% myprogram
% WELCOME TO MY PROGRAM
% Done.
I want the following script to not output anything to the terminal:
#!/bin/bash
myprogram > sample.s
If it outputs to stderr as well, you'll want to silence that too. You can do that by redirecting file descriptor 2:
# Send stdout to out.log, stderr to err.log
myprogram > out.log 2> err.log
# Send both stdout and stderr to out.log
myprogram &> out.log # New bash syntax
myprogram > out.log 2>&1 # Older sh syntax
# Log output, hide errors.
myprogram > out.log 2> /dev/null
Redirect stderr to stdout
This will redirect the stderr (which is descriptor 2) to file descriptor 1, which is the stdout.
2>&1
Redirect stdout to File
Now when you perform this, you are redirecting the stdout to the file sample.s:
myprogram > sample.s
Redirect stderr and stdout to File
Combining the two commands will result in redirecting both stderr and stdout to sample.s
myprogram > sample.s 2>&1
Redirect stderr and stdout to /dev/null
Redirect to /dev/null if you want to completely silence your application:
myprogram >/dev/null 2>&1
All output:
scriptname &>/dev/null
Portable:
scriptname >/dev/null 2>&1
Portable:
scriptname >/dev/null 2>/dev/null
For newer bash (not portable), you can instead close the descriptors outright, though programs that then write to them will see errors:
scriptname >&- 2>&-
If you are still struggling to find an answer, especially if you produced a file for the output, and you prefer a clear alternative:
echo "hi" | grep "use this hack to hide the output :) "
(The output disappears because grep prints nothing when its pattern never matches.)
If you want STDOUT and STDERR both [everything], then the simplest way is:
#!/bin/bash
myprogram >& sample.s
then run it like ./script, and you will get no output to your terminal. :)
the ">&" means STDERR and STDOUT. the & also works the same way with a pipe: ./script |& sed
that will send everything to sed
Try with:
myprogram &>/dev/null
to get no output
Useful variations:
Get only the STDERR in a file, while hiding any STDOUT, even if the program to hide doesn't exist at all (never hangs):
stty -echo; ./programMightNotExist 2> errors.log; stty echo
(Note the semicolons rather than &&, so that echo is restored even if the program fails.)
Detach completely and silence everything; even killing the parent script won't abort ./prog (this behaves just like nohup):
./prog </dev/null >/dev/null 2>&1 &
nohup can be used as well to fully detach, as follow:
nohup ./prog &
A log file nohup.out will be created alongside the script; use tail -f nohup.out to read it.
Note: This answer is related to the question "How to turn off echo while executing a shell script in Linux", which was in turn marked as a duplicate of this one.
To actually turn off the echo the command is:
stty -echo
(This is useful, for instance, when you want to enter a password and you don't want it to be readable.) Remember to turn echo back on at the end of your script; otherwise the person who runs your script won't see what they type from then on. To turn echo back on, run:
stty echo
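A defensive sketch: a trap guarantees echo is restored even if the script dies partway through (the prompt text is illustrative):
stty -echo
trap 'stty echo' EXIT    # restore echo no matter how the script exits
read -r -p "Password: " password
printf '\n'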
For output only on error:
so [command]
