Exclude specific lines of sftp connection output from logfile - bash

I have multiple scripts that connect to an sftp server and put/get files. Recently the sftp admins added a large banner that is printed on every login, which has blown up my log files. I'd like to exclude it, as it inflates the file sizes and makes the logs somewhat unreadable.
As is, the commands are of the following format:
sftp user@host <<EOF &>> ${LOGFILE}
get ...
put ...
exit
EOF
Now, I've tried grepping out all the new banner lines, which all start with a pipe character (they basically drew a box to put the banner in).
sftp user@host <<EOF | grep -v '^|' &>> ${LOGFILE}
get ...
put ...
exit
EOF
This excludes the lines from ${LOGFILE} but sends them to stdout instead, which means they end up in another log file, where we also don't want them (these scripts are called by a scheduler). Oddly, it also seems to filter out the first line of the connection attempt output,
Connecting to <host>...
and redirect that to stdout as well. Not the end of the world, but I do find it odd.
How can I filter the lines beginning with | so they don't show anywhere?

In
sftp user@host <<EOF &>> ${LOGFILE}
you are redirecting both stdout and stderr to the logfile in append mode (&>>). But when you use
sftp user@host <<EOF | grep -v '^|' &>> ${LOGFILE}
you are only piping stdout to grep, leaving the stderr output of sftp to pass through untouched. Finally, you are again redirecting stdout and stderr, this time of grep, to the logfile.
In fact, you are interested in redirecting both stdout and stderr of sftp, so you can use something like:
sftp user@host <<EOF |& grep -v '^|' >> ${LOGFILE}
or, in older versions of bash, using the explicit redirection instead of the shorthand:
sftp user@host <<EOF 2>&1 | grep -v '^|' >> ${LOGFILE}
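Putting the bash 4 form together with the here-document from the question, the full command would look like this (the get/put lines are the question's own placeholders):
sftp user@host <<EOF |& grep -v '^|' >> ${LOGFILE}
get ...
put ...
exit
EOF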

Related

mysqldump - redirect greped STDERR to a file

mysqldump generates STDOUT (used to dump *.sql) and also generates STDERR, which I need to filter and write to a file.
mysqldump --user=db_username --password=db_password --add-drop-database --host=db_host db_name 2>> '/srv/www/data_appsrv/logs/mysql_date_ym.log' > '/mnt/backup_srv/backup/daily/file_nm.sql'
The code above will write STDERR to mysql_date_ym.log
I need to exclude the Warning: Using a password string from the STDERR that is written to mysql_date_ym.log.
I tried variations with grep, 2>> and >, but none of them works.
This should work
command 1>stdout.txt 2> >(grep -v "Thing to remove from stderr" >stderr.txt)
This will redirect STDOUT to stdout.txt, and STDERR via process substitution into the grep filter and finally to stderr.txt.
So your full command would look like this:
mysqldump --user=db_username --password=db_password --add-drop-database --host=db_host db_name 1>'/mnt/backup_srv/backup/daily/file_nm.sql' 2> >(grep -v "Warning: Using a password" > '/srv/www/data_appsrv/logs/mysql_date_ym.log')
You could also use a sed script to run on the mysql_date_ym.log file after redirecting STDOUT/STDERR normally to the output files.
sed -i 's/Warning: Using a password//g' mysql_date_ym.log
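Note that the s///g form only blanks out the warning text and leaves empty lines behind; if you would rather drop the matching lines entirely, sed's d command does that:
sed -i '/Warning: Using a password/d' mysql_date_ym.log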

How can I conditionally copy output to a file without repeating echo/printf statements? [duplicate]

I know how to redirect stdout to a file:
exec > foo.log
echo test
this will put the 'test' into the foo.log file.
Now I want to redirect the output into the log file AND keep it on stdout
i.e. it can be done trivially from outside the script:
script | tee foo.log
but I want to declare it within the script itself
I tried
exec | tee foo.log
but it didn't work.
#!/usr/bin/env bash
# Redirect stdout ( > ) into a named pipe ( >() ) running "tee"
exec > >(tee -i logfile.txt)
# Without this, only stdout would be captured - i.e. your
# log file would not contain any error messages.
# SEE (and upvote) the answer by Adam Spiers, which keeps STDERR
# as a separate stream - I did not want to steal from him by simply
# adding his answer to mine.
exec 2>&1
echo "foo"
echo "bar" >&2
Note that this is bash, not sh. If you invoke the script with sh myscript.sh, you will get an error along the lines of syntax error near unexpected token '>'.
If you are working with signal traps, you might want to use the tee -i option to avoid disruption of the output if a signal occurs. (Thanks to JamesThomasMoon1979 for the comment.)
Tools that change their output depending on whether they write to a pipe or a terminal (ls using colors and columnized output, for example) will detect that, with the above construct, they are writing to a pipe.
There are options to enforce the colorizing / columnizing (e.g. ls -C --color=always). Note that this will result in the color codes being written to the logfile as well, making it less readable.
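If color codes do end up in the log file, one option is to strip the ANSI escape sequences from it afterwards. A rough sketch using GNU sed (the pattern covers common SGR color sequences and is not exhaustive):
sed -i 's/\x1b\[[0-9;]*m//g' logfile.txt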
The accepted answer does not preserve STDERR as a separate file descriptor. That means
./script.sh >/dev/null
will not output bar to the terminal, only to the logfile, and
./script.sh 2>/dev/null
will output both foo and bar to the terminal. Clearly that's not the behaviour a normal user is likely to expect. This can be fixed by using two separate tee processes, both appending to the same log file:
#!/bin/bash
# See (and upvote) the comment by JamesThomasMoon1979
# explaining the use of the -i option to tee.
exec > >(tee -ia foo.log)
exec 2> >(tee -ia foo.log >&2)
echo "foo"
echo "bar" >&2
(Note that the above does not initially truncate the log file - if you want that behaviour you should add
>foo.log
to the top of the script.)
The POSIX.1-2008 specification of tee(1) requires that output is unbuffered, i.e. not even line-buffered, so in this case it is possible that STDOUT and STDERR could end up on the same line of foo.log; however that could also happen on the terminal, so the log file will be a faithful reflection of what could be seen on the terminal, if not an exact mirror of it. If you want the STDOUT lines cleanly separated from the STDERR lines, consider using two log files, possibly with date stamp prefixes on each line to allow chronological reassembly later on.
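For example, a minimal sketch of that two-log-file variant, assuming the ts command from moreutils is available for the timestamps (the log file names are arbitrary):
exec  > >(ts '%F %T' | tee -ia stdout.log)
exec 2> >(ts '%F %T' | tee -ia stderr.log >&2)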
Solution for busybox, macOS bash, and non-bash shells
The accepted answer is certainly the best choice for bash. I'm working in a Busybox environment without access to bash, and it does not understand the exec > >(tee log.txt) syntax. It also does not handle exec >$PIPE properly; it tries to create an ordinary file with the same name as the named pipe, which fails and hangs.
Hopefully this will be useful to someone else who doesn't have bash.
Also, for anyone using a named pipe, it is safe to rm $PIPE, because that unlinks the pipe from the VFS, but the processes that use it still maintain a reference count on it until they are finished.
Note that the use of $* is not necessarily safe.
#!/bin/sh
if [ "$SELF_LOGGING" != "1" ]
then
# The parent process will enter this branch and set up logging
# Create a named piped for logging the child's output
PIPE=tmp.fifo
mkfifo $PIPE
# Launch the child process with stdout redirected to the named pipe
SELF_LOGGING=1 sh $0 $* >$PIPE &
# Save PID of child process
PID=$!
# Launch tee in a separate process
tee logfile <$PIPE &
# Unlink $PIPE because the parent process no longer needs it
rm $PIPE
# Wait for child process, which is running the rest of this script
wait $PID
# Return the error code from the child process
exit $?
fi
# The rest of the script goes here
Inside your script file, put all of the commands within parentheses, like this:
(
echo start
ls -l
echo end
) | tee foo.log
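To also capture stderr with this approach, merge it into stdout before the pipe (the ls of a nonexistent path below is only there to produce an error message):
(
echo start
ls -l /nonexistent
echo end
) 2>&1 | tee foo.log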
Easy way to make a bash script log to syslog. The script output is available both through /var/log/syslog and through stderr. syslog will add useful metadata, including timestamps.
Add this line at the top:
exec &> >(logger -t myscript -s)
Alternatively, send the log to a separate file:
exec &> >(ts |tee -a /tmp/myscript.output >&2 )
This requires moreutils (for the ts command, which adds timestamps).
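To follow what the script has logged, filter on the tag passed to logger (the log path and available tools vary by distribution; journalctl is only present on systemd systems):
tail -f /var/log/syslog | grep myscript
journalctl -t myscript -f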
Using the accepted answer, my script kept returning early (right after exec > >(tee ...)), leaving the rest of the script running in the background. As I couldn't get that solution to work my way, I found another workaround for the problem:
# Logging setup
logfile=mylogfile
mkfifo ${logfile}.pipe
tee < ${logfile}.pipe $logfile &
exec &> ${logfile}.pipe
rm ${logfile}.pipe
# Rest of my script
This makes the output from the script go from its process, through the pipe, into the background tee process, which logs everything to disk and to the script's original stdout.
Note that exec &> redirects both stdout and stderr; we could redirect them separately if we like, or change it to exec > if we just want stdout.
Even though the pipe is removed from the file system at the beginning of the script, it will continue to function until the processes finish. We just can't reference it by file name after the rm line.
Bash 4 has a coproc command which establishes a named pipe to a command and allows you to communicate through it.
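For illustration, a minimal coproc-based sketch (assumes Bash 4+; the coproc name and log file are arbitrary):
#!/usr/bin/env bash
exec 3>&1                                  # keep a handle on the original stdout
coproc TEELOG { tee -a logfile.txt >&3; }  # tee appends to the log and writes to the real stdout (fd 3)
exec >&"${TEELOG[1]}" 2>&1                 # send the script's stdout and stderr into the coproc
echo "this line goes to the terminal and to logfile.txt"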
Can't say I'm comfortable with any of the solutions based on exec. I prefer to use tee directly, so I make the script call itself with tee when requested:
# my script:
check_tee_output()
{
    # copy (append) stdout and stderr to log file if TEE is unset or true
    if [[ -z $TEE || "$TEE" == true ]]; then
        echo '-------------------------------------------' >> log.txt
        echo '***' $(date) $0 "$@" >> log.txt
        TEE=false $0 "$@" 2>&1 | tee --append log.txt
        exit $?
    fi
}
check_tee_output "$@"
rest of my script
This allows you to do this:
your_script.sh args # tee
TEE=true your_script.sh args # tee
TEE=false your_script.sh args # don't tee
export TEE=false
your_script.sh args # tee
You can customize this, e.g. make TEE=false the default instead, make TEE hold the log file instead, etc. I guess this solution is similar to jbarlow's, but simpler; maybe mine has limitations that I have not come across yet.
Neither of these is a perfect solution, but here are a couple things you could try:
exec >foo.log
tail -f foo.log &
# rest of your script
or
PIPE=tmp.fifo
mkfifo $PIPE
tee foo.log <$PIPE &
exec >$PIPE
# rest of your script
rm $PIPE
The second one would leave a pipe file sitting around if something goes wrong with your script, which may or may not be a problem (i.e. maybe you could rm it in the parent shell afterwards).
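If the leftover pipe is a concern, one option (a sketch building on the second suggestion) is to remove it from an EXIT trap so it is cleaned up even if the script fails:
PIPE=tmp.fifo
mkfifo $PIPE
trap 'rm -f $PIPE' EXIT
tee foo.log <$PIPE &
exec >$PIPE
# rest of your script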

Bash output from expect script to two different files

I am trying to output to two different files using tee. My first file will basically be tail -f /myfile and my second output will be a subset of the first file. I have looked online and found suggestions saying we can use something like
| tee >(proc1) >(proc2)
I have tried the above but both my files are blank.
Here is what i have so far:
myscript.sh
ssh root@server 'tail -f /my/dir/text.log' | tee >(/mydir/my.log) >(grep 'string' /mydir/my.log > /mydir/mysecond.log)
myexpect.sh
#!/usr/bin/expect -f
set pass password
spawn /my/dir/myexpect.sh
expect {
"key fingerprint" {send "yes/r"; exp_contiue}
"assword: " {send "$pass\r"}
}
interact
In your script, there are some problems in the usage of tee:
tee >(/mydir/my.log): this can be replaced with tee /mydir/my.log, since tee writes to stdout as well as to the files given as arguments, i.e. /mydir/my.log.
grep 'string' /mydir/my.log > /mydir/mysecond.log: as mentioned, tee also writes to stdout, so there is no need to grep the string from the file; you can grep from stdout directly, using a pipeline.
So the whole command can be modified as follows:
ssh root@server 'tail -f /my/dir/text.log | tee /mydir/my.log | grep --line-buffered "string" > /mydir/mysecond.log'
Edit:
Regarding your follow-up question:
The command appears to hang because tail -f keeps waiting for the monitored file to grow. If you don't want the command to keep running, remove the -f from tail.
Depending on whether tail is run with -f, there are two different ways to get grep to write the file:
For the plain tail case: grep can write the file without any further changes.
For the tail -f case: grep needs --line-buffered so that its output is line-buffered instead of block-buffered.
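If you later replace grep with a filter that has no line-buffering option of its own, coreutils' stdbuf can force line-buffered output in the tail -f case; a sketch along the lines of the command above:
ssh root@server 'tail -f /my/dir/text.log | tee /mydir/my.log | stdbuf -oL sed -n "/string/p" > /mydir/mysecond.log'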

Copy stderr and stdout to a file as well as the screen in ksh

I'm looking for a solution (similar to the bash code below) to copy both stdout and stderr to a file in addition to the screen within ksh on Solaris.
The following code works great in the bash shell:
#!/usr/bin/bash
# Clear the logfile
>logfile.txt
# Redirect all script output to a logfile as well as their normal locations
exec > >(tee -a logfile.txt)
exec 2> >(tee -a logfile.txt >&2)
date
ls -l /non-existent/path
For some reason this is throwing a syntax error on Solaris. I assume it's because I can't do process substitution, and I've seen some posts suggesting the use of mkfifo, but I've yet to come up with a working solution.
Does anyone know of a way that all output can be redirected to a file in addition to the default locations?
Which version of ksh are you using? The >() is not supported in ksh88, but is supported in ksh93 - the bash code should work unchanged (aside from the #! line) on ksh93.
If you are stuck with ksh88 (poor thing!) then you can emulate the bash/ksh93 behaviour using a named pipe:
#!/bin/ksh
# Clear the logfile
>logfile.txt
pipe1="/tmp/mypipe1.$$"
pipe2="/tmp/mypipe2.$$"
trap 'rm "$pipe1" "$pipe2"' EXIT
mkfifo "$pipe1"
mkfifo "$pipe2"
tee -a logfile.txt < "$pipe1" &
tee -a logfile.txt >&2 < "$pipe2" &
# Redirect all script output to a logfile as well as their normal locations
exec >"$pipe1"
exec 2>"$pipe2"
date
ls -l /non-existent/path
This is a second version of the script that makes it possible to send stderr to a different file, by changing the target of the second tee.
How about this:
(some commands ...) 2>&1 | tee logfile.txt
Add -a to the tee command line for subsequent invocations to append rather than overwrite.
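For example, the appending variant would be:
(some commands ...) 2>&1 | tee -a logfile.txt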
In ksh, the following works very well for me
LOG=log_file.$(date +%Y%m%d%H%M%S).txt
{
ls
date
... whatever command
} 2>&1 | tee -a $LOG

Write STDOUT & STDERR to a logfile, also write STDERR to screen

I would like to run several commands, and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone).
Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile.
{ command1 && command2 && command3 ; } > logfile.log 2>&1
Here is what I want to do with the output of these commands:
STDERR and STDOUT for all commands go to a logfile, in case I need them later; I usually won't look in here unless there are problems.
Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored.
It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this:
{ command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" stefanl@example.org
The problem I run into is that STDERR loses context during I/O redirection: a 2>&1 converts STDERR into STDOUT, and if I instead do 2> error.log I can no longer see the errors on the screen.
Here are a couple juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error so I use the '--keep-going' flag.
{ ./configure && make --keep-going && make install ; } > build.log 2>&1
Or, here's a simple (And perhaps sloppy) build and deploy script, which will keep going in the event of an error.
{ ./configure && make --keep-going && make install && rsync -av --keep-going /foo devhost:/foo ; } > build-and-deploy.log 2>&1
I think what I want involves some sort of Bash I/O Redirection, but I can't figure this out.
(./doit >> log) 2>&1 | tee -a log
This will take stdout and append it to the log file.
The stderr will then get converted to stdout, which is piped to tee; tee appends it to the log (if you have Bash 4, you can replace 2>&1 | with |&) and sends it to stdout, which will either appear on the tty or can be piped to another command.
I used append mode for both so that, regardless of the order in which the shell redirection and tee open the file, you won't blow away the original. That said, it is possible that stderr and stdout end up interleaved in an unexpected way.
If your system has /dev/fd/* nodes you can do it as:
( exec 5>logfile.txt ; { command1 && command2 && command3 ;} 2>&1 >&5 | tee /dev/fd/5 )
This opens file descriptor 5 on your logfile and executes the commands with standard error redirected to standard out (the pipe) and standard out redirected to fd 5, so only stderr travels down the pipe to tee, which writes it both to the screen and to /dev/fd/5, i.e. the log file.
Here is how to run one or more commands, capturing standard output and standard error, in the order in which they are generated, to a logfile, while displaying only the standard error on any terminal screen you like. It works in bash on Linux and probably in most other environments. I will use an example to show how it's done.
Preliminaries:
Open two windows (shells, tmux sessions, whatever)
I will demonstrate with some test files, so create the test files:
touch /tmp/foo /tmp/foo1 /tmp/foo2
in window1:
mkfifo /tmp/fifo
0</tmp/fifo cat - >/tmp/logfile
Then, in window2:
(ls -l /tmp/foo /tmp/nofile /tmp/foo1 /tmp/nofile /tmp/nofile; echo successful test; ls /tmp/nofile1111) 2>&1 1>/tmp/fifo | tee /tmp/fifo 1>/dev/pts/2
Where you replace /dev/pts/2 with whatever tty you want the stderr to display.
The reason for the various successful and unsuccessful commands in the subshell is simply to generate a mingled stream of output and error messages, so that you can verify the correct ordering in the log file. Once you understand how it works, replace the “ls” and “echo” commands with scripts or commands of your choosing.
With this method, the ordering of output and error is preserved, the syntax is simple and clean, and there is only a single reference to the output file. Plus there is flexibility in putting the extra copy of stderr wherever you want.
Try:
command 2>&1 | tee output.txt
Additionally, you can direct stdout and stderr to different places:
command > stdout.txt 2> stderr.txt
command > stdout.txt 2> >(program_for_stderr)
So some combination of the above should work for you -- e.g. you could save stdout to a file, and stderr to both a file and piping to another program (with tee).
add this at the beginning of your script
#!/bin/bash
set -e
outfile=logfile
exec > >(cat >> $outfile)
exec 2> >(tee -a $outfile >&2)
# write your code here
STDOUT and STDERR will be written to $outfile, only STDERR will be seen on the console
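For example, with a couple of hypothetical commands in place of the "write your code here" comment:
echo "this line ends up only in the logfile"
ls /no/such/path    # hypothetical failing command; its error goes to the logfile and to the console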
