method to output stdin AND stdout in the output - shell

I need a simple way to capture both the issued command (from a script) and the resultant output to a log file.
Here's a simple example:
Command:
grep '^#PermitRootLogin' /etc/ssh/sshd_config
Output:
#PermitRootLogin no
Required result:
grep '^#PermitRootLogin' /etc/ssh/sshd_config
#PermitRootLogin no
By redirecting stdin I seem to be stomping on stdout; it shouldn't be so difficult, but it's eluding me for some reason.
Using tee just creates a log file with extraneous noise, and I'd like to use the file for a report at the end (no noise).
Thanks in advance,
TT

Wrap your desired behaviour in a function, i.e.
function stomp {
echo "$@"    # print the command line itself
"$@"         # then run it
}
then call it like so
stomp grep '^#PermitRootLogin' /etc/ssh/sshd_config
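If you also want the clean log file from the original question, one minimal sketch (report.log is just an assumed name) is to pipe the wrapped call through tee in append mode:
stomp grep '^#PermitRootLogin' /etc/ssh/sshd_config 2>&1 | tee -a report.log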

There's the script utility, which records everything you type plus whatever any program writes to stdout, in a file named typescript. However, it is very thorough: it also records control characters (carriage returns and so on) plus all the shell prompts, so you will most likely want to post-process the typescript file.
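If you do go the script route, a hedged sketch of that approach (file names are illustrative; tr -d '\r' is one simple way to strip the carriage returns that script records):
script session.log
grep '^#PermitRootLogin' /etc/ssh/sshd_config
exit
tr -d '\r' < session.log > report.log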
Maybe it's easier to just
echo "grep '^#PermitRootLogin' /etc/ssh/sshd_config" > file
grep '^#PermitRootLogin' /etc/ssh/sshd_config >> file
Then you have the command and its output in file.


echo to file and terminal [duplicate]

In bash, calling foo would display any output from that command on stdout.
Calling foo > output would redirect any output from that command to the file specified (in this case 'output').
Is there a way to redirect output to a file and have it display on stdout?
The command you want is named tee:
foo | tee output.file
For example, if you only care about stdout:
ls -a | tee output.file
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1 redirects file descriptor 2 (stderr/standard error) into file descriptor 1 (stdout/standard output), so that both are written to stdout; tee then copies this combined stream to the screen and to the given output file.
Furthermore, if you want to append to the log file, use tee -a as:
program [arguments...] 2>&1 | tee -a outfile
$ program [arguments...] 2>&1 | tee outfile
2>&1 merges the stderr stream into the stdout stream.
tee outfile takes the stream it gets and writes it to the screen and to the file "outfile".
This is probably what most people are looking for. The likely situation is some program or script is working hard for a long time and producing a lot of output. The user wants to check it periodically for progress, but also wants the output written to a file.
The problem (especially when mixing stdout and stderr streams) is that there is reliance on the streams being flushed by the program. If, for example, all the writes to stdout are not flushed, but all the writes to stderr are flushed, then they'll end up out of chronological order in the output file and on the screen.
It's also bad if the program only outputs 1 or 2 lines every few minutes to report progress. In such a case, if the output was not flushed by the program, the user wouldn't even see any output on the screen for hours, because none of it would get pushed through the pipe for hours.
Update: The program unbuffer, part of the expect package, will solve the buffering problem. This will cause stdout and stderr to write to the screen and file immediately and keep them in sync when being combined and redirected to tee. E.g.:
$ unbuffer program [arguments...] 2>&1 | tee outfile
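If unbuffer isn't available, stdbuf from GNU coreutils is a possible alternative; it forces line buffering on the program's stdout and stderr (it only helps for programs that use the default stdio buffering):
$ stdbuf -oL -eL program [arguments...] 2>&1 | tee outfile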
Another way that works for me is,
<command> |& tee <outputFile>
as described in the GNU Bash manual.
Example:
ls |& tee files.txt
If ‘|&’ is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
For more information, see the redirection section of the Bash manual.
You can use Zoredache's solution, but if you don't want to overwrite the output file, invoke tee with the -a option as follows:
ls -lR / | tee -a output.file
Something to add: the unbuffer package has support issues with some packages under Fedora and Red Hat releases.
Setting those troubles aside, the following worked for me:
bash myscript.sh 2>&1 | tee output.log
Thank you ScDF & matthew, your inputs saved me a lot of time.
Using tail -f output should work.
In my case I had a Java process producing output logs. The simplest way to display the logs and also redirect them into a file (named logfile here) was:
my_java_process_run_script.sh |& tee logfile
The result was the Java process running with its output logs displayed on screen and written to the file named logfile.
You can do that for your entire script by putting something like this at the beginning of the script:
#!/usr/bin/env bash
test "x$1" = x$'\x00' && shift || { set -o pipefail ; ( exec 2>&1 ; "$0" $'\x00' "$@" ) | tee mylogfile ; exit $? ; }
# do whatever you want
This redirects both stderr and stdout to the file called mylogfile while still letting everything go to stdout at the same time.
It uses a few tricks:
use exec without a command to set up the redirections,
use tee to duplicate the output,
restart the script with the wanted redirections in place,
use a special first parameter (a NUL character written with the $'string' bash notation) to mark that the script has been restarted (your original script must not take an equivalent parameter),
try to preserve the original exit status when restarting the script by using the pipefail option.
Ugly but useful for me in certain situations.
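A less convoluted variant, assuming your bash supports process substitution, is to route the whole script through tee with exec near the top; this is only a sketch and does not preserve the restart and exit-status handling of the trick above:
#!/usr/bin/env bash
exec > >(tee mylogfile) 2>&1   # everything below goes to the screen and to mylogfile
# do whatever you want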
Bonus answer since this use-case brought me here:
In the case where you need to do this as some other user
echo "some output" | sudo -u some_user tee /some/path/some_file
Note that the echo runs as you, while the file write happens as "some_user". What will NOT work is running the echo as "some_user" and redirecting the output with >> "some_file", because the file redirection would be performed by your own shell, as you.
Hint: tee also supports appending with the -a flag; if you need to replace a line in a file as another user, you could run sed as the desired user.
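For instance, a hedged sketch of that sed idea (the path, pattern and user name are illustrative; note that sed -i rewrites the file via a temporary copy, so some_user also needs write access to the containing directory):
sudo -u some_user sed -i 's/^old_setting=.*/old_setting=new_value/' /some/path/some_file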
<command> |& tee filename # this creates a file "filename" with the command's output (including stderr) as its content; if the file already exists, its previous content is overwritten.
<command> | tee >> filename # this appends the output to the file, but it does not print it on standard output (the screen), because tee's own stdout is redirected into the file.
I want to print something on screen using echo and also append that echoed data to a file:
echo "hi there, Have to print this on screen and append to a file"
tee is perfect for this, but this will also do the job
ls -lr / > output && cat output
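For completeness, the tee form alluded to above would be (a minimal sketch; output is just an example file name):
echo "hi there, Have to print this on screen and append to a file" | tee -a output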

Reading full file from Standard Input and Supplying it to a command in ksh

I am trying to read contents of a file given from standard input into a script. Any ideas how to do that?
Basically what I want is:
someScript.ksh < textFile.txt
Inside the ksh, I am using a binary which will read data from "textFile.txt" if the file is given on the standard input.
Any ideas how do I "pass" the contents of the given input file, if any, to another binary inside the script?
You haven't really given us enough information to answer the question, but here are a few ideas.
If you have a script that you want to accept data on stdin, and that script calls something else that expects data to be passed in as a filename on the command line, you can take stdin and dump it to a temporary file. Something like:
#!/bin/sh
tmpfile=$(mktemp tmpXXXXXX)
cat > "$tmpfile"
/some/other/command "$tmpfile"
rm -f "$tmpfile"
(In practice, you would probably use trap to clean up the temporary file on exit).
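A hedged sketch of that trap-based cleanup, keeping the same placeholder command:
#!/bin/sh
tmpfile=$(mktemp tmpXXXXXX) || exit 1
trap 'rm -f "$tmpfile"' EXIT      # remove the temp file when the script exits
cat > "$tmpfile"                  # capture stdin into the temp file
/some/other/command "$tmpfile"    # hand the file name to the other command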
If instead the script is calling another command that also expects input on stdin, you don't really have to do anything special. Inside your script, stdin of anything you call will be connected to stdin of the calling script, and as long as you haven't previously consumed the input you should be all set.
E.g., given a script like this:
#!/bin/sh
sed 's/hello/goodbye/'
I can run:
echo hello world | sh myscript.sh
And get:
goodbye world

Grep stderr in bash script for any output

In my bash script I use grep in different logs like this:
LOGS1=$(grep -E -i 'err|warn' /opt/backup/exports.log /opt/backup/imports.log && grep "tar:" /opt/backup/h2_backups.log /opt/backup/st_backups.log)
if [ -n "$LOGS1" ] ]; then
COLOUR="yellow"
MESSAGE="Logs contain warnings. Backups may be incomplete. Invetigate these warnings:\n$LOGS"
Instead of checking whether each log exists (there are many more logs than this), I want to check stderr while the script runs to see if I get any output. If one of the logs does not exist, grep will produce an error like this: grep: /opt/backup/st_backups.log: No such file or directory
I've tried to read stderr with commands like command 2> >(grep "file" >&2), but that does not seem to work.
I know I can pipe the output to a file, but I'd rather just handle the stderr when there is any output instead of reading the file. Or is there any reason why piping to a file is better?
Send the standard error (file descriptor 2) to standard output (file descriptor 1) and assign it to the variable Q:
$ Q=$(grep text file 2>&1)
$ echo $Q
grep: file: No such file or directory
This is the default behaviour: stderr is normally connected to your terminal (and is unbuffered), so you still see errors even while piping stdout somewhere. If you want to merge stderr with stdout, then this is the syntax:
command >file 2>&1
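If what you want is only grep's own error messages (not its matches), you can capture just stderr by sending stdout elsewhere inside the command substitution; a hedged sketch (GREP_ERRS is a made-up variable name, and the matches themselves are discarded here):
GREP_ERRS=$(grep -E -i 'err|warn' /opt/backup/exports.log /opt/backup/imports.log 2>&1 1>/dev/null)
if [ -n "$GREP_ERRS" ]; then
    echo "grep reported problems: $GREP_ERRS"
fi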

automatically send stdin to interactive bash script

I have a compiled program which I run from the shell; as I run it, it asks me for an input file on stdin. I want to run that program in a bash loop with a predefined input file, such as:
for i in $(seq 100); do
input.txt | ./myscript
done
but of course this won't work. How can I achieve that? I cannot edit the source code.
Try
for i in $(seq 100); do
./myscript < input.txt
done
Pipes (|) are inter-process. That is, they stream between processes. What you're looking for is file redirection (e.g. <, > etc.)
Redirection simply means capturing output from a file, command,
program, script, or even code block within a script and sending it as
input to another file, command, program, or script.
You may see cat used for this e.g. cat file | mycommand. Given the above, this usage is redundant and often the winner of a 'Useless use of cat' award.
You can use:
./myscript < input.txt
to send the content of input.txt to the stdin of myscript.
Based on your comments, it looks like myscript prompts for a file name and you want to always respond with input.txt. Did you try this?
for i in $(seq 100); do
echo input.txt | ./myscript
done
You might want to just try this first:
echo input.txt | ./myscript
just in case.
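If you are using bash, a here-string is an equivalent, slightly more compact sketch of the same idea:
for i in $(seq 100); do
./myscript <<< "input.txt"
done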

Why reading and writing the same file through I/O redirection results in an empty file in Unix?

If I redirect the output of a command to the same file it reads from, its contents are erased.
sed 's/abd/def/g' a.txt > a.txt
Can anyone explain why?
The first thing the redirection does is to open the file for writing, thus clearing any existing contents. sed then tries to read this empty file you have just created, and does nothing. The file is then closed, containing nothing.
The redirection operations <, >, etc. are handled by the shell. When you give a command to the shell that includes redirection, the shell will first open the file. In the case of > the file will be opened for writing, which means it gets truncated to zero size. After the redirection files have been opened, the shell starts a new process, binding its standard input, output, and error to any possible redirected files, and only then executes the command you gave. So when the sed command in your example begins execution, a.txt has already been truncated by the shell.
Incidentally, and somewhat tangentially, this is also the reason why you cannot use redirection directly with sudo because it is the shell that needs the permissions to open the redirection file, not the command being executed.
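For example (a hedged illustration, assuming you are not root), this fails with "Permission denied" because your unprivileged shell opens the file:
sudo echo 'PermitRootLogin no' >> /etc/ssh/sshd_config
whereas this works, because tee runs under sudo and opens the file itself:
echo 'PermitRootLogin no' | sudo tee -a /etc/ssh/sshd_config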
You need to use the -i option to edit the file in place (with BSD/macOS sed the backup suffix is a separate argument, as below; GNU sed wants it attached, as in -i.bck):
sed -i .bck 's/abd/def/g' a.txt
EDIT: as noted by neil, the redirection first opens the file for writing, thus clearing it.
EDIT2: this might be interesting for some readers.
On OSX, if you want to use -i with an empty extension to prevent the creation of a backup file, you need to use the -e switch as well, otherwise it fails to parse the arguments correctly:
sed -i -e 's/abc/def/g' a.txt
The stdout and stderr redirections are prepared first, then stdin, and only then does the command execute; so a.txt is already cleared by the stdout redirection, and when the command runs there is no content left to read.
try
sed -i 's/abd/def/g' a.txt
or
sed 's/abd/def/g' a.txt 1<> a.txt
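The 1<> form opens a.txt read-write without truncating it, which can work here because the replacement text is the same length as the original. A more general, portable sketch is to write to a temporary file and rename it over the original:
sed 's/abd/def/g' a.txt > a.txt.tmp && mv a.txt.tmp a.txt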

Resources