tee to a log within a bash script, while preserving stdout as a TTY - bash

Similar to "redirect COPY of stdout to log file from within bash script itself", but I'd also like to preserve stdout as a TTY device.
For example, I have the following scripts:
/tmp/teed-off$ cat some-script
#!/usr/bin/env ruby
if $stdout.tty?
  puts "stdout is a TTY"
else
  puts "stdout is NOT a TTY"
end
/tmp/teed-off$ cat wrapper
#!/usr/bin/env bash
exec > >(tee some-script.log)
./some-script
When I run them, the wrapper eats stdout as a TTY device:
/tmp/teed-off$ ./some-script
stdout is a TTY
/tmp/teed-off$ ./wrapper
stdout is NOT a TTY
How can I flip that behavior around so that the script believes that it's in a TTY even when executed via the wrapper?

It won't be trivial, but I think you can do it via pseudo-ttys. I'm not sure that there's any standard tool, other than perhaps expect, that would do it for you.
It takes a bit of thinking about. You'd have a control program that would open the pseudo-tty master, then the slave. The slave would be connected to the output of ./some-script. The master would be read by the control program, which would copy the data it reads from the master to the file and to standard output.
I've not tried coding that up. I'm not sure whether you could do it with standard shell commands; I can't think of any way. So, I think there will be some C coding to be done.
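For what it's worth, the script(1) utility (which a later answer on this page also reaches for) implements essentially that control-program idea: it allocates a pseudo-tty, runs the command on the slave side, and copies whatever it reads from the master to both the terminal and a log file. A hedged sketch of the wrapper rewritten that way, assuming the util-linux script that accepts -q and -c (BSD/macOS script takes its arguments differently):
#!/usr/bin/env bash
# script(1) runs ./some-script on the slave side of a pseudo-tty, so the
# script sees stdout as a TTY, while everything read from the master side
# is copied to both the terminal and some-script.log.
script -qc ./some-script some-script.log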

Look for dup2; it duplicates a file descriptor:
int dup2(int oldfd, int newfd);
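In bash itself, the closest analogue is an exec redirection that duplicates a descriptor, which is how the other answers here juggle file descriptors. A small hedged illustration:
exec 3>&1            # make fd 3 a copy of fd 1, roughly dup2(1, 3) in C
echo "still the original stdout" >&3
exec 3>&-            # close the duplicate when finished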

Related

bash hangs when exec > > is called and an additional bash script is executed with output to stdin [duplicate]

I have a shell script which writes all output to a logfile
and the terminal; this part works fine. But when I execute the script,
a new shell prompt only appears if I press Enter. Why is that, and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I'm testing this, there always is a new shell prompt; it's just that sometimes the string "output" comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee; see this question on unix.stackexchange. Fragile workarounds aside, the easiest way to solve this that I see is to put your whole script inside a list:
{
  your-code-here
} | tee logfile
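Applied to the script from the question, that would look something like this (a hedged sketch):
#!/bin/bash
{
  echo "output"
  # ...the rest of your commands...
} | tee logfile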
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
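For example, a hedged sketch of that diagnostic session (yourscript is a placeholder for the script above):
script transcript.txt    # start capturing everything written to the terminal
./yourscript             # run the script showing the odd prompt behavior
exit                     # end the capture, then inspect transcript.txt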
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as #chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you wanted to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, however, that this will suppress stderr output altogether.
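If you'd rather keep stderr somewhere instead of discarding it, a hedged variant is to give it its own file (stderr.log is an illustrative name):
script -qc 'yourScript 2>stderr.log' /dev/null > logfile # Linux syntax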
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2 # back up the original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$! # track the PID of tee after starting it
cleanup() { # define a function we'll call during shutdown
  retval=$?
  exec >&$orig_stdout   # copy the original stdout back to FD 1, overwriting the pipe to tee
  exec 2>&$orig_stderr  # if something redirected stderr through tee as well, undo that too
  wait "$tee_pid"       # now, wait until tee exits
  exit "$retval"        # and complete the exit with our original exit status
}
trap cleanup EXIT # configure the function above to be called during cleanup
echo "Writing something to stdout here"

Is there a way to redirect all stdout and stderr to systemd journal from within script?

I like the idea of using systemd's journal to view and manage the logs of my own scripts. I have become aware that you can log to the journal from user scripts on a per-message basis:
echo 'hello' | systemd-cat -t myscript -p emerg
Is there a way to redirect all messages to journald, even those generated by other commands? Like:
exec &> systemd-cat
Update:
Some partial success.
Tried Inian's suggestion from terminal.
~/scripts/myscript.sh 2>&1 | systemd-cat -t myscript.sh
and it worked, stdout and stderr were directed to systemd's journal.
Curiously,
~/scripts/myscript.sh &> | systemd-cat -t myscript.sh
didn't work in my Bash terminal.
I still need to find a way to do this inside my script for when other programs call my script.
I tried:
exec 2>&1 | systemd-cat -t myscript.sh
but it doesn't work.
Update 2:
From the terminal,
systemd-cat ~/scripts/myscript.sh
works. But I'm still looking for a way to do this from within the script.
A pipe to systemd-cat is a process which needs to run concurrently with your script. Bash offers a facility for this, though it's not portable to POSIX sh.
exec > >(systemd-cat -t myscript -p emerg) 2>&1
The >(command) process substitution starts another process and returns a pseudo-filename (something like /dev/fd/63) which you can redirect into. This is basically a wrapper for the mkfifo hacks you could do if you wanted to port this to POSIX sh.
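For reference, a hedged sketch of what that mkfifo hack might look like in POSIX sh (the FIFO path handling is illustrative, not a hardened implementation):
#!/bin/sh
fifo=/tmp/myscript.$$.fifo                     # illustrative path, not collision-proof
mkfifo "$fifo" || exit 1
systemd-cat -t myscript -p emerg < "$fifo" &   # reader end, running in the background
exec > "$fifo" 2>&1                            # writer end: all further output goes to it
rm "$fifo"                                     # the name can go; the open FDs remain
echo "hello via the FIFO"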
If your script happens to not be a shell script, but some other programming language that allows loading extension modules linked to -lsystemd, there is another way. There is a library function sd_journal_stream_fd that quite precisely matches the task at hand. Calling it from bash itself (as opposed to some child) seems difficult at best. In Python for instance, it is available as systemd.journal.stream. What this function does in essence is connecting a unix domain stream socket and communicating what kind of data is being transmitted (e.g. priority). The difficult part with a shell here is making it connect a unix domain socket (as opposed to connecting in a child).
The key idea to this answer was given by Freenode/libera.chat user grawity.
Apparently, and for reasons that are beyond me, you can't really redirect all stdout and stderr to journald from within a script because it has to be piped in. To work around that I found a trick people were using with syslog's logger which works similarly.
You can wrap all your code into a function and then pipe the function into systemd-cat.
#!/bin/bash
mycode(){
  echo "hello world"
  echor "echo typo producing error"
}
mycode | systemd-cat -t myscript.sh
exit 0
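If you also want stderr in the journal (the echor line above produces an error on stderr), a hedged variant merges it before the pipe:
mycode 2>&1 | systemd-cat -t myscript.sh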
And then to search the journal logs:
journalctl -t myscript.sh --since yesterday
I'm disappointed there isn't a more direct way of doing this.

Piping multiple commands to bash, pipe behavior question

I have this command sequence that I'm having trouble understanding:
[me#mine ~]$ (echo 'test'; cat) | bash
echo $?
1
echo 'this is the new shell'
this is the new shell
exit
[me#mine ~]$
As far as I can understand, here is what happens:
A pipe is created.
stdout of echo 'test' is sent to the pipe.
bash receives 'test' on stdin.
echo $? returns 1, which is what happens when you run test without args.
cat runs.
It is copying stdin to stdout.
stdout is sent to the pipe.
bash will execute whatever you type in, but stderr won't get printed to the screen (we used |, not |&).
I have three questions:
It looks like, even though we run two commands, we use the same pipe and bash process for both commands. Is that the case?
Where do the prompts go?
When something like cat uses stdin, does it take exclusive ownership of stdin as long as the shell runs, or can other things use it?
I suspect I'm missing some detail with ttys, but I'm not sure. Any help or details or man excerpt appreciated!
So...
Yes, there's a single pipe sending commands to a single instance of bash. Note:
$ echo 'date "+%T hello $$"; sleep 1; date "+%T world $$"' | bash
22:18:52 hello 72628
22:18:53 world 72628
There are no prompts. From the man page:
An interactive shell is one started without non-option arguments (unless -s is specified) and without the -c option whose standard input and error are both connected to terminals. PS1 is set and $- includes i if bash is interactive.
So a pipe is not an interactive shell, and therefore has no prompt.
Stdin and stdout can only connect to one thing at a time. cat will take stdin from the process that ran it (for example, your interactive shell) and send its stdout through the pipe to bash. If you need multiple things to be able to submit to the stdin of that cat, consider using a named pipe.
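A minimal, hedged sketch of that named-pipe idea (the path and the submitted commands are illustrative):
mkfifo /tmp/cmds                     # create the named pipe
bash < /tmp/cmds &                   # bash reads its commands from the pipe
exec 3> /tmp/cmds                    # hold a writer open so bash never sees EOF
echo 'echo hello from the fifo' >&3  # submit a command through fd 3
echo 'date' > /tmp/cmds              # other processes can submit commands too
exec 3>&-                            # closing the last writer lets bash exit
rm /tmp/cmds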
Does that cover it?

Want to redirect the output of the nohup command [duplicate]

I have a problem with the nohup command.
When I run my job, I have a lot of data. The output nohup.out becomes too large and my process slows down. How can I run this command without getting nohup.out?
The nohup command only writes to nohup.out if the output would otherwise go to the terminal. If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
nohup command >/dev/null 2>&1 # doesn't create nohup.out
Note that the >/dev/null 2>&1 sequence can be abbreviated to just >&/dev/null in most (but not all) shells.
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
nohup command >/dev/null 2>&1 & # runs in background, still doesn't create nohup.out
On Linux, running a job with nohup automatically closes its input as well. On other systems, notably BSD and macOS, that is not the case, so when running in the background, you might want to close input manually. While closing input has no effect on the creation or not of nohup.out, it avoids another problem: if a background process tries to read anything from standard input, it will pause, waiting for you to bring it back to the foreground and type something. So the extra-safe version looks like this:
nohup command </dev/null >/dev/null 2>&1 & # completely detached from terminal
Note, however, that this does not prevent the command from accessing the terminal directly, nor does it remove it from your shell's process group. If you want to do the latter, and you are running bash, ksh, or zsh, you can do so by running disown with no argument as the next command. That will mean the background process is no longer associated with a shell "job" and will not have any signals forwarded to it from the shell. (A disowned process gets no signals forwarded to it automatically by its parent shell - but without nohup, it will still receive a HUP signal sent via other means, such as a manual kill command. A nohup'ed process ignores any and all HUP signals, no matter how they are sent.)
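Put together, a hedged sketch of the fully detached and disowned form (command is the placeholder used throughout this answer):
nohup command </dev/null >/dev/null 2>&1 &   # no nohup.out, no terminal I/O
disown                                       # drop the job from the shell's job table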
Explanation:
In Unixy systems, every source of input or target of output has a number associated with it called a "file descriptor", or "fd" for short. Every running program ("process") has its own set of these, and when a new process starts up it has three of them already open: "standard input", which is fd 0, is open for the process to read from, while "standard output" (fd 1) and "standard error" (fd 2) are open for it to write to. If you just run a command in a terminal window, then by default, anything you type goes to its standard input, while both its standard output and standard error get sent to that window.
But you can ask the shell to change where any or all of those file descriptors point before launching the command; that's what the redirection (<, <<, >, >>) and pipe (|) operators do.
The pipe is the simplest of these... command1 | command2 arranges for the standard output of command1 to feed directly into the standard input of command2. This is a very handy arrangement that has led to a particular design pattern in UNIX tools (and explains the existence of standard error, which allows a program to send messages to the user even though its output is going into the next program in the pipeline). But you can only pipe standard output to standard input; you can't send any other file descriptors to a pipe without some juggling.
The redirection operators are friendlier in that they let you specify which file descriptor to redirect. So 0<infile reads standard input from the file named infile, while 2>>logfile appends standard error to the end of the file named logfile. If you don't specify a number, then input redirection defaults to fd 0 (< is the same as 0<), while output redirection defaults to fd 1 (> is the same as 1>).
Also, you can combine file descriptors together: 2>&1 means "send standard error wherever standard output is going". That means that you get a single stream of output that includes both standard out and standard error intermixed with no way to separate them anymore, but it also means that you can include standard error in a pipe.
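For example (a hedged one-liner; the command and pattern are illustrative):
make 2>&1 | grep -i error    # error messages from stderr now flow through the pipe too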
So the sequence >/dev/null 2>&1 means "send standard output to /dev/null" (which is a special device that just throws away whatever you write to it) "and then send standard error to wherever standard output is going" (which we just made sure was /dev/null). Basically, "throw away whatever this command writes to either file descriptor".
When nohup detects that neither its standard error nor output is attached to a terminal, it doesn't bother to create nohup.out, but assumes that the output is already redirected where the user wants it to go.
The /dev/null device works for input, too; if you run a command with </dev/null, then any attempt by that command to read from standard input will instantly encounter end-of-file. Note that the merge syntax won't have the same effect here; it only works to point a file descriptor to another one that's open in the same direction (input or output). The shell will let you do >/dev/null <&1, but that winds up creating a process with an input file descriptor open on an output stream, so instead of just hitting end-of-file, any read attempt will trigger a fatal "invalid file descriptor" error.
nohup some_command > /dev/null 2>&1&
That's all you need to do!
Have you tried redirecting all three I/O streams:
nohup ./yourprogram > foo.out 2> foo.err < /dev/null &
You might want to use the detach program. You use it like nohup but it doesn't produce an output log unless you tell it to. Here is the man page:
NAME
detach - run a command after detaching from the terminal
SYNOPSIS
detach [options] [--] command [args]
Forks a new process, detaches it from the terminal, and executes command with the specified arguments.
OPTIONS
detach recognizes a couple of options, which are discussed below. The
special option -- is used to signal that the rest of the arguments are
the command and args to be passed to it.
-e file
Connect file to the standard error of the command.
-f Run in the foreground (do not fork).
-i file
Connect file to the standard input of the command.
-o file
Connect file to the standard output of the command.
-p file
Write the pid of the detached process to file.
EXAMPLE
detach xterm
Start an xterm that will not be closed when the current shell exits.
AUTHOR
detach was written by Robbert Haarman. See http://inglorion.net/ for
contact information.
Note I have no affiliation with the author of the program. I'm only a satisfied user of the program.
The following command will let you run something in the background without getting nohup.out:
nohup command | tee &
In this way, you will be able to get console output while running script on the remote server:
sudo bash -c "nohup /opt/viptel/viptel_bin/log.sh $* &> /dev/null" &
Redirecting the output of sudo causes sudo to ask for the password again, thus an awkward mechanism is needed for this variant.
If you have a Bash shell on your Mac/Linux machine in front of you, try out the steps below to understand redirection practically.
Create a two-line script called zz.sh:
#!/bin/bash
echo "Hello. This is a proper command"
junk_errorcommand
The echo command's output goes to the STDOUT file stream (file descriptor 1).
The error command's output goes to the STDERR file stream (file descriptor 2).
Currently, simply executing the script sends both STDOUT and STDERR to the screen.
./zz.sh
Now start with standard output redirection:
zz.sh > zfile.txt
In the above, "echo" (STDOUT) goes into zfile.txt, whereas "error" (STDERR) is displayed on the screen.
The above is the same as:
zz.sh 1> zfile.txt
Now you can try the opposite, and redirect "error" (STDERR) into the file. The STDOUT from the "echo" command goes to the screen.
zz.sh 2> zfile.txt
Combining the above two, you get:
zz.sh 1> zfile.txt 2>&1
Explanation:
FIRST, send STDOUT 1 to zfile.txt
THEN, send STDERR 2 to STDOUT 1 itself (by using the &1 reference).
Therefore, both 1 and 2 go into the same file (zfile.txt).
Finally, you can wrap the whole thing in nohup ... & to run it in the background:
nohup zz.sh 1> zfile.txt 2>&1 &
You can run the command below:
nohup <your command> > <outputfile> 2>&1 &
e.g.
I have a nohup command inside a script:
./Runjob.sh > sparkConcuurent.out 2>&1

Can I only show stdout/stderr in case of a trapped error in bash?

How I wish it to work:
When no errors are trapped by bash (i.e. nothing returns a non-zero exit code, unless overridden by || true), be silent: hide stdout and stderr.
When an error is trapped by bash, be verbose: write stdout and stderr.
In my script below, only the captured stdout and stderr are missing.
#!/bin/bash
exec 5>&1 >/dev/null
exec 6>&2 2>/dev/null
error_handler() {
  local return_code="$?"
  local last_err="$BASH_COMMAND"
  local stdout= # How to read FD 5?
  local stderr= # How to read FD 6?
  exec 1>&5
  exec 2>&6
  echo "ERROR!
scriptname: $0
BASH_COMMAND: $last_err
\$?: $return_code
stdout: $stdout
stderr: $stderr
" 1>&2
  exit 1
}
trap "error_handler" ERR
echo "Some message..."
# Some command fails, i.e. return a non-zero exit code.
mkdir
I could probably redirect stdout/stderr to a temporary file and use cat to show it in case an error was bash trapped. Would be a bit better, if that temporary file wasn't required. Any idea?
Credit:
This question was inspired by the question How to undo exec > /dev/null in bash? and the answer by Charles Duffy.
Let's look at the I/O redirection carefully:
exec 5>&1 >/dev/null
exec 6>&2 2>/dev/null
We see that file descriptor 5 is a duplicate of the original standard output, but that standard output is going to /dev/null. Similarly, 6 is a duplicate of standard error, but standard error is going to /dev/null.
Now let's consider what happens when you run:
ls -l /dev/null /dev/not-actually/there
The ls command writes the output for /dev/null to /dev/null because that's where its standard output is directed. Similarly, it writes the error for the non-existent file /dev/not-actually/there to /dev/null because that's where its standard error is directed.
Thus, both the standard output and standard error of the command are irrevocably lost.
Given the expressed requirements, there isn't going to be a simple solution. Your best bet is probably to redirect both standard output and standard error to the same file (but be aware that the interleaving of error and normal output may be different because the output is a file). Alternatively, you can direct standard output and standard error to two separate files and show them when necessary.
Note that you will need to consider emptying the output file(s) after each command (letting the trap report the contents before the file(s) is/are emptied) so that you don't report the standard output or standard error of commands 1-9 when command 10 fails.
Doing this neatly and handling pipelines correctly, etc, is not trivial. I'm not sure whether to suggest a function that's passed the command and arguments (tricky for pipelines) or some other technique.
I've used the 'capture everything in one file' technique in cron-run scripts that mail the output when appropriate. It isn't wholly satisfactory, but it is a lot better than not having the error messages at all.
You can consider playing with expect and/or pseudo-ttys, but doing a good job will be really hard.
Your file descriptors 5 and 6 are write-only. There's no way for the shell to read its own output; bidirectional pipes are deadlocks waiting to happen even when it's not the same process on both ends.
I would go with the temp file idea.
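A hedged sketch of that temp-file idea, loosely following the structure of the script in the question (file names and layout are illustrative):
#!/bin/bash
out_log=$(mktemp) err_log=$(mktemp)
exec 5>&1 6>&2                  # keep the original stdout/stderr on FDs 5 and 6
exec >"$out_log" 2>"$err_log"   # silence the script, but capture everything

error_handler() {
  local return_code="$?"
  local last_err="$BASH_COMMAND"
  exec 1>&5 2>&6                # restore the original stdout/stderr
  echo "ERROR!
scriptname: $0
BASH_COMMAND: $last_err
\$?: $return_code
stdout: $(cat "$out_log")
stderr: $(cat "$err_log")" 1>&2
  exit 1
}
trap error_handler ERR
trap 'rm -f "$out_log" "$err_log"' EXIT

echo "Some message..."
mkdir    # fails (no operand), so the ERR trap fires and reports the captured output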
The actual paths of the I/O files are hidden from shells and other applications; you need a program that knows how to dig for the details. lsof may come to your rescue, if your system supports it. Try adding the following to your error routine:
local name0="$(basename "$0")";
lsof -p$$ -d5,6 2>/dev/null |
egrep "^${name0:0:5}[^ ]* +[^ ]+ +[^ ]+ +[56][a-zA-Z]* "
This will require some tweaking to make it robust (short program names, program names with spaces in them, ...) and more friendly (say, printing "stdout" for 5 and "stderr" for 6, and replacing the program name in column 1). But when you tweak, beware the huge variations in output formats you may encounter, not just between systems but between different file types on the same system. I leave this as an exercise for the student.
