linux - bash: pipe _everything_ to a logfile

In an interactive bash script I use
exec > >(tee -ia logfile.log)
exec 2>&1
to write the script's output to a logfile. However, if I ask the user to input something, that input is not written to the file:
read UserInput
I also issue commands with $UserInput as a parameter. These commands are not written to the logfile either.
The logfile should contain everything my script does, i.e. what the user entered interactively and also the resulting commands along with their output.
Of course I could use set -x and/or echo "user input: $UserInput", but this would also be sent to the screen. I don't want to read anything on the screen except what my script or the commands echo.
How can this be done?
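One way to get the interactive input into the log without printing anything extra on the screen (a sketch, not from this thread; it reuses logfile.log from the question, and some-command is a placeholder) is to keep a second file descriptor that writes only to the logfile:
#!/bin/bash
exec 3>>logfile.log                 # FD 3 writes to the logfile only
exec > >(tee -ia logfile.log) 2>&1  # stdout/stderr still go to screen + log
read -r -p "Input: " UserInput
printf 'user input: %s\n' "$UserInput" >&3  # logged, never shown on screen
some-command "$UserInput"           # placeholder; its output goes through tee
Note that direct writes to FD 3 and tee's writes are not guaranteed to interleave in order, but the input does end up in the log.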

Related

Echo a command back to the bash shell prompt?

I'm trying to implement a simple command-line util that will allow users to select from a set of commands and then echo those command strings back to the shell. I don't want the shell to execute said commands; I want the commands simply echoed to the prompt, so the user can verify or change them before pressing the return key.
No idea where to start. Of course, echoing the command to stdout is easy enough with a log or println kind of call, but that would go to the stdout of the current process. Ideally, I would like the stdout of that process to become the stdin of the shell, but only onto the prompt line, not piped into a new shell or executed as a command. Is this possible?
e.g.
$ help # user asks for help
1. you can do this
2. you can do that
? 1 # user chooses 1, help echoes back a string to the parent shell $$
$ this-command --flags # simply ends up on prompt line, but doesn't exec
Is this possible without a hook in the terminal ui or tty?
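For the "verify or change before running" part, one workaround inside the same process (a sketch, not an injection into the parent shell's prompt) is bash's read -e -i, which pre-fills an editable readline buffer; actually pushing keystrokes into the parent shell's tty would need something like the TIOCSTI ioctl or a tool such as xdotool.
#!/bin/bash
# Sketch: the menu picked a command; let the user edit it before it runs.
# 'this-command --flags' is the placeholder from the question.
suggested='this-command --flags'
read -e -p '$ ' -i "$suggested" cmd  # -e: readline editing, -i: pre-filled text
eval "$cmd"                          # executed only after the user presses Enter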

bash hangs when exec > > is called and an additional bash script is executed with output to stdin [duplicate]

I have a shell script which writes all output to a logfile
and to the terminal. This part works fine, but if I execute the script,
a new shell prompt only appears if I press Enter. Why is that, and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I'm testing this, there always is a new shell prompt, it's just that sometimes the string output comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee; see this question on unix.stackexchange. Fragile workarounds aside, the easiest way to solve this that I see is to put your whole script inside a group command:
{
    your-code-here
} | tee logfile
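One caveat with this form (a hedged note, not from the answer): the script's exit status becomes tee's, so pass the group's own status through explicitly:
#!/bin/bash
{
    your-code-here              # placeholder for the real script body
} | tee logfile
exit "${PIPESTATUS[0]}"         # exit with the group's status, not tee's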
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as @chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you wanted to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, however, that this will suppress stderr output altogether.
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2 # back up the original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$! # track PID of tee after starting it
cleanup() { # define a function we'll call during shutdown
  retval=$?
  exec >&"$orig_stdout"  # copy the original stdout back to FD 1, overwriting the pipe to tee
  exec 2>&"$orig_stderr" # if something redirected stderr through tee as well, restore it
  wait "$tee_pid"        # now, wait until tee exits
  exit "$retval"         # and complete the exit with our original exit status
}
trap cleanup EXIT # configure the function above to be called during shutdown
echo "Writing something to stdout here"

Piping multiple commands to bash, pipe behavior question

I have this command sequence that I'm having trouble understanding:
[me#mine ~]$ (echo 'test'; cat) | bash
echo $?
1
echo 'this is the new shell'
this is the new shell
exit
[me#mine ~]$
As far as I can understand, here is what happens:
A pipe is created.
stdout of echo 'test' is sent to the pipe.
bash receives 'test' on stdin.
echo $? returns 1, which is what happens when you run test without args.
cat runs.
It is copying stdin to stdout.
stdout is sent to the pipe.
bash will execute whatever you type in, but stderr won't get printed to the screen (we used |, not |&).
I have three questions:
It looks like, even though we run two commands, we use the same pipe and bash process for both commands. Is that the case?
Where do the prompts go?
When something like cat uses stdin, does it take exclusive ownership of stdin as long as the shell runs, or can other things use it?
I suspect I'm missing some detail with ttys, but I'm not sure. Any help or details or man excerpt appreciated!
So...
Yes, there's a single pipe sending commands to a single instance of bash. Note:
$ echo 'date "+%T hello $$"; sleep 1; date "+%T world $$"' | bash
22:18:52 hello 72628
22:18:53 world 72628
There are no prompts. From the man page:
An interactive shell is one started without non-option arguments (unless -s is specified) and without the -c option whose standard input and error are both connected to terminals. PS1 is set and $- includes i if bash is interactive.
So a shell reading from a pipe is not an interactive shell, and therefore prints no prompt.
Stdin and stdout can only connect to one thing at a time. cat will take stdin from the process that ran it (for example, your interactive shell) and send its stdout through the pipe to bash. If you need multiple things to be able to submit to the stdin of that cat, consider using a named pipe.
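A minimal named-pipe sketch of that last point (paths are assumed, not from the thread): several writers can feed a single bash instance through a FIFO held open on a spare descriptor.
#!/bin/bash
mkfifo /tmp/cmds             # assumed path for the FIFO
bash < /tmp/cmds &           # one bash instance reads commands from the FIFO
exec 3>/tmp/cmds             # hold the FIFO open so bash doesn't see EOF yet
echo 'echo "writer one: $$"' >&3
echo 'echo "writer two: $$"' >&3
exec 3>&-                    # closing the last writer sends EOF; bash exits
wait
rm /tmp/cmds
Both lines print the same PID, confirming that a single bash instance consumed both writes.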
Does that cover it?

Unix output redirection

I have a script which prompts the user to select options like 'y' or 'n'.
If 'y' is selected, the script proceeds with further execution and if 'n' is selected then it stops.
I want the output of this script to be redirected to a log file, so I used the command below:
./script stop >> script_RUN.log 2>&1
The problem is that the script starts running but does not prompt for the 'y' or 'n' options; it writes the prompt to script_RUN.log instead.
How can I make the script prompt the user for options and redirect the rest of the execution to script_RUN.log?
You can try using the tee command instead:
./script stop | tee script_RUN.log
NOTE:
Only the standard output of the program will be saved (stderr is not captured by the pipe).
EDIT:
If you don't want to see the output on the console at all, just redirect it to /dev/null, for example:
./script stop | tee script_RUN.log > /dev/null
The line above will write to the log file but will NOT print to the console.
This works as it has to, really: you are redirecting stdout and stderr from the very start. Instead, you should redirect output inside the script, after the prompt. I think this would be helpful for you:
redirect COPY of stdout to log file from within bash script itself
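A sketch of that suggestion (command names are placeholders): prompt while stdout still points at the terminal, then redirect everything that follows.
#!/bin/bash
read -r -p "Proceed? (y/n): " answer   # prompt still reaches the terminal
[ "$answer" = "y" ] || exit 0
exec >> script_RUN.log 2>&1            # from here on, everything is logged
echo "user chose: $answer"
run-the-rest-of-the-script             # placeholder for the real work
The script is then invoked as plain ./script stop, without the outer redirection.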

redirect all output in a bash script when using set -x

I have a bash script that has set -x in it. Is it possible to redirect the debug prints of this script and all its output to a file? Ideally I would like to do something like this:
#!/bin/bash
set -x
(some magic command here...) > /tmp/mylog
echo "test"
and get the
+ echo test
test
output in /tmp/mylog, not on stdout.
This is what I've just googled, and I remember using it myself some time ago...
Use exec to redirect both standard output and standard error of all commands in a script:
#!/bin/bash
logfile=$$.log
exec > "$logfile" 2>&1
For more redirection magic, check out the Advanced Bash-Scripting Guide - I/O Redirection.
If you also want to see the output and debug on the terminal in addition to in the log file, see redirect COPY of stdout to log file from within bash script itself.
If you want to handle the destination of the set -x trace output independently of normal STDOUT and STDERR, see bash storing the output of set -x to log file.
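For completeness, bash 4.1+ can point the trace at its own descriptor via BASH_XTRACEFD, keeping normal stdout and stderr untouched (a sketch; /tmp/trace.log is an assumed path):
#!/bin/bash
exec 19>/tmp/trace.log   # dedicated descriptor for the trace
BASH_XTRACEFD=19         # bash 4.1+: set -x output goes to FD 19
set -x
echo "test"              # '+ echo test' lands in /tmp/trace.log only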
The -x output goes to stderr, so to log it, do:
set -x
exec 2>/tmp/mylog
To redirect stderr and stdout, appending to the file:
exec &>> "$LOG_FILE_NAME"
To overwrite the file instead:
exec &> "$LOG_FILE_NAME"
In my case, the script was being called multiple times from elsewhere and I wasn't seeing everything, so I appended instead, and it worked:
exec 1>>FILENAME 2>&1
set -x
To avoid confusion, be sure to delete FILENAME before each run.
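Alternatively (my note, not from the answer), truncate the file at the top of the script instead of deleting it by hand:
#!/bin/bash
: > FILENAME           # truncate the log once per run (FILENAME as above)
exec 1>>FILENAME 2>&1  # then append, as in the answer
set -x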
