To clarify, by reporting output here I mean the lines that start with [1]:
$ echo hello world >&2 &
[1] 11714
hello world
[1]+ Done echo hello world 1>&2
which means I do want hello world to be output.
I did a lot of searching on this, and the solutions I found were:
run the command in a subshell;
use set +m, which deals with the Done message only;
suppress it explicitly with { cmd & } 2>/dev/null, which won't suppress Done and also suppresses all of my stderr.
But in my situation they don't work well, since I want extra parallelism. The framework should be:
cmd1 &>>log &
cmd2 &>>log &
wait
cat file &
cat file2 >&2 &
wait
If I put things into subshells, the reporting output is suppressed, but then wait won't block the program.
The other two options don't work, as I've stated.
The worst part is that I am expecting something to be output to stderr. So I am looking for a way to totally suppress these reporting messages, or any other workaround you can come up with.
This is very ugly but it looks like it works in a quick test.
set +m
{ { sleep 2; echo stdout; echo stderr >&2; } 2>&3- & } 3>&2 2>/dev/null
Create fd 3 as a copy of fd 2 then redirect fd 2 to /dev/null (to suppress the background id/pid message).
Then, for the backgrounded command list, move fd 3 back to fd 2 so things that try to use it go where you wanted them to.
My first attempt had the fd 3 move in the outer brace command list, but that didn't suppress the id/pid message correctly (I guess that happened too quickly or something).
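For what it's worth, the same juggling seems to drop into the parallel framework from the question without breaking wait, since the backgrounded jobs stay children of the main shell. This is only an untested sketch (cmd1, cmd2, file and file2 are the placeholders from the question):
set +m
# fd 3 keeps a copy of the real stderr; fd 2 points at /dev/null only while the job is launched
{ { cmd1 &>>log; } 2>&3- & } 3>&2 2>/dev/null
{ { cmd2 &>>log; } 2>&3- & } 3>&2 2>/dev/null
wait
{ { cat file; } 2>&3- & } 3>&2 2>/dev/null
{ { cat file2 >&2; } 2>&3- & } 3>&2 2>/dev/null
wait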
Is there a filename that is assignable to a variable (i.e. not a magic builtin shell token like &1) that will let me redirect to stdout?
What I finally want to do is run something like this in a cron script:
LOG=/tmp/some_file
...
some_command 2>&1 >> $LOG
echo "blah" >> $LOG
...
Conveniently, this lets me turn off log noise by redirecting to /dev/null later when I'm sure there is nothing that can fail (or, at least, nothing that I care about!) without rewriting the whole script. Yes, turning off logging isn't precisely best practice -- but once this script works, there is not much that can conceivably go wrong, and trashing the disk with megabytes of log info that nobody wants to read isn't desired.
In case something unexpectedly fails 5 years later, it is still possible to turn on logging again by flipping a switch.
On the other hand, while writing and debugging the script, which involves calling it manually from the shell, it would be extremely nice if it could just dump the output to the console. That way I wouldn't need to tail the logfile manually.
In other words, what I'm looking for is something like /proc/self/fd/0 in bash-talk that I can assign to LOG. As it happens, /proc/self/fd/0 works just fine on my Linux box, but I wonder if there isn't such a thing built into bash already (which would generally be preferable).
Basic solution:
#!/bin/bash
LOG=/dev/null
# uncomment next line for debugging (logging)
# LOG=/tmp/some_file
{
  some_command
  echo "blah"
} | tee 1>$LOG 2>&1
More evolved:
#!/bin/bash
ENABLE_LOG=0 # 1 to log standard & error outputs
LOG=/tmp/some_file
{
  some_command
  echo "blah"
} | if (( $ENABLE_LOG ))
then
  tee 1>$LOG 2>&1
fi
A more elegant solution, based on DevSolar's idea:
#!/bin/bash
# uncomment next line for debugging (logging)
# exec 1> >(tee /tmp/some_file) 2>&1
some_command
echo "blah"
Thanks to the awesome hints by olibre and suvayu, I came up with this (for the record, the version that I'm using now):
# log to file
# exec 1>> /tmp/logfile 2>&1
# be quiet
# exec 1> /dev/null 2>&1
# dump to console
exec 2>&1
I just need to uncomment one of the three, depending on what is desired, and never worry about anything else again. This logs all subsequent output either to a file or to the console, or not at all.
No output duplicated, works universally the same for every command (without explicit redirects), no weird stuff, and as easy as it gets.
If I have understood your requirement clearly, the following should do what you want
exec >> $LOG
exec 2>&1
Stdout and stderr of all subsequent commands will be appended to the file $LOG.
Use /dev/stdout
Here's another SO answer that mentions this solution: Difference between stdout and /dev/stdout
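As a rough sketch of how that could plug into the cron-script pattern from the question (some_command and the paths are just the placeholders used above; note the redirections are ordered so that stderr follows stdout into $LOG):
#!/bin/bash
# Pick exactly one of these; nothing else in the script has to change.
LOG=/dev/stdout       # dump everything to the console while debugging
#LOG=/tmp/some_file   # log to a file
#LOG=/dev/null        # silence the noise once the script is trusted

some_command >> "$LOG" 2>&1
echo "blah" >> "$LOG"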
I'm aware of the bash "capture output" capability, along the lines of (two separate files):
sub.sh: echo hello
main.sh: greeting="$(./sub.sh)"
This will set the greeting variable to be hello.
However, I need to write a script where I want to just capture some information, allowing the rest to go to "normal" standard output:
sub.sh: xyzzy hello ; plugh goodbye
main.sh: greeting="$(./sub.sh)"
What I would like is for hello to be placed in the greeting variable but goodbye to be sent to the standard output of main.sh.
What do the magic commands xyzzy and plugh need to be replaced with above (or what can I do in main.sh) in order to achieve this behaviour? I suspect it could be done with some sneaky fiddling with file-descriptor redirections, but I'm not sure. If it's not possible, I'll have to resort to writing one of the items to a temporary file to be picked up later, but I'd prefer not to do that.
To make things clearer, here's the test case I'm using (currently using the non-working file handle 3 method). First sub.sh:
echo xx_greeting >&3 # This should be captured to variable.
echo xx_stdout # This should show up on stdout.
echo xx_stderr >&2 # This should show up on stderr.
Then main.sh:
greeting="$(./sub.sh)" 3>&1
echo "Greeting was ${greeting}"
And I run it thus:
./main.sh >/tmp/out 2>/tmp/err
expecting to see the following files:
/tmp/out:
xx_stdout
Greeting was xx_greeting
/tmp/err:
xx_stderr
This can be done by introducing an extra file descriptor as follows. First the sub.sh script for testing, which simply writes a different thing to three different descriptors (implicit >&1 for the first):
echo for-var
echo for-out >&3
echo for-err >&2
Second, the main.sh which calls it:
exec 3>&1
greeting="$(./sub.sh)"
echo "Variable is ${greeting}"
Then you simply run it ensuring you know what output is going to the different locations:
pax> ./main.sh > xxout 2> xxerr
pax> cat xxout
for-out
Variable is for-var
pax> cat xxerr
for-err
Hence you can see that, when calling sub.sh from main.sh, stuff written to file handle 1 goes to the capture variable, stuff written to file handle 2 goes to standard error, and stuff written to file handle 3 goes to standard output.
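If you would rather keep sub.sh exactly as written in the question (capture text on fd 3, normal output on fd 1), a variant along these lines should also work; this is just a sketch, and only main.sh changes:
exec 4>&1                              # fd 4 = a copy of main.sh's real stdout
greeting="$(./sub.sh 3>&1 1>&4 4>&-)"  # inside $( ), fd 1 is the capture pipe, so 3>&1
                                       # points fd 3 at it, then 1>&4 restores fd 1 to
                                       # the real stdout before sub.sh runs
exec 4>&-
echo "Greeting was ${greeting}"
With the same ./main.sh >/tmp/out 2>/tmp/err invocation, xx_greeting lands in the variable, xx_stdout in /tmp/out, and xx_stderr in /tmp/err.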
I want to log all stdout and stderr to separate files and add a timestamp to each line.
I tried the following, which works but is missing timestamps.
#!/bin/bash
debug_file=./stdout.log
error_file=./stderr.log
exec > >(tee -a "$debug_file") 2> >(tee -a "$error_file")
echo "hello"
echo "hello world"
this-will-fail
and-so-will-this
Adding timestamps (I only want timestamps prefixed to the log output):
#!/bin/bash
debug_file=./stdout.log
error_file=./stderr.log
log () {
  file=$1; shift
  while read -r line; do
    printf '%(%s)T %s\n' -1 "$line"
  done >> "$file"
}
exec > >(tee >(log "$debug_file")) 2> >(tee >(log "$error_file"))
echo "hello"
echo "hello world"
this-will-fail
and-so-will-this
The latter adds timestamps to the logs, but it also has a chance of messing up my terminal window. Reproducing this behavior was not straightforward; it only happened every now and then. I suspect it has to do with the subroutine/buffer still having output flowing through it.
Examples of the script messing up my terminal:
# expected/desired behavior
user@system:~ ./log_test
hello
hello world
./log_test: line x: this-will-fail: command not found
./log_test: line x: and-so-will-this: command not found
user@system:~ # <-- cursor blinks here
# erroneous behavior
user@system:~ ./log_test
hello
hello world
user@system:~ ./log_test: line x: this-will-fail: command not found
./log_test: line x: and-so-will-this: command not found
# <-- cursor blinks here
# erroneous behavior
user@system:~ ./log_test
hello
hello world
./log_test: line x: this-will-fail: command not found
user@system:~
./log_test: line x: and-so-will-this: command not found
# <-- cursor blinks here
# erroneous behavior
user@system:~ ./log_test
hello
hello world
user@system:~
./log_test: line x: this-will-fail: command not found
./log_test: line x: and-so-will-this: command not found
# <-- cursor blinks here
For fun, I put a sleep 2 at the end of the script to see what would happen, and the problem never occurred again.
Hopefully someone knows the answer or can point me in the right direction.
Thanks
Edit
Judging from another question answered by Charles Duffy, what I'm trying to achieve is not really possible in bash.
Separately redirecting and recombining stderr/stdout without losing ordering
The trick is to make sure that tee, and the process substitution running your log function, exit before the script as a whole does -- so that when the shell that started the script prints its prompt, there isn't any backgrounded process that might write more output after it's done.
As a working example (albeit one focused more on explicitness than terseness):
#!/usr/bin/env bash
stdout_log=stdout.log; stderr_log=stderr.log
log () {
  file=$1; shift
  while read -r line; do
    printf '%(%s)T %s\n' -1 "$line"
  done >> "$file"
}
# first, make backups of your original stdout and stderr
exec {stdout_orig_fd}>&1 {stderr_orig_fd}>&2
# for stdout: start your process substitution, record its PID, start tee, record *its* PID
exec {stdout_log_fd}> >(log "$stdout_log"); stdout_log_pid=$!
exec {stdout_tee_fd}> >(tee "/dev/fd/$stdout_log_fd"); stdout_tee_pid=$!
exec {stdout_log_fd}>&- # close stdout_log_fd so the log process can exit when tee does
# for stderr: likewise
exec {stderr_log_fd}> >(log "$stderr_log"); stderr_log_pid=$!
exec {stderr_tee_fd}> >(tee "/dev/fd/$stderr_log_fd" >&2); stderr_tee_pid=$!
exec {stderr_log_fd}>&- # close stderr_log_fd so the log process can exit when tee does
# now actually swap out stdout and stderr for the processes we started
exec 1>&$stdout_tee_fd 2>&$stderr_tee_fd {stdout_tee_fd}>&- {stderr_tee_fd}>&-
# ...do the things you want to log here...
echo "this goes to stdout"; echo "this goes to stderr" >&2
# now, replace the FDs going to tee with the backups...
exec >&"$stdout_orig_fd" 2>&"$stderr_orig_fd"
# ...and wait for the associated processes to exit.
while :; do
  ready_to_exit=1
  for pid_var in stderr_tee_pid stderr_log_pid stdout_tee_pid stdout_log_pid; do
    # kill -0 just checks whether a PID exists; it doesn't actually send a signal
    kill -0 "${!pid_var}" &>/dev/null && ready_to_exit=0
  done
  (( ready_to_exit )) && break
  sleep 0.1 # avoid a busy-loop eating unnecessary CPU by sleeping before next poll
done
So What's With The File Descriptor Manipulation?
A few key concepts to make sure we have clear:
All subshells have their own copies of the file descriptor table as created when they were fork()ed off from their parent. From that point forward, each file descriptor table is effectively independent.
A process reading from (the read end of) a FIFO (or pipe) won't see an EOF until all programs writing to (the write end of) that FIFO have closed their copies of the descriptor.
...so, if you create a FIFO pair, fork() off a child process, and let the child process write to the write end of the FIFO, whatever's reading from the read end will never see an EOF until not just the child, but also the parent, closes their copies.
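A throwaway illustration of that last point (just a sketch; demo.fifo is a scratch name):
mkfifo demo.fifo
cat demo.fifo &           # reader: sees no EOF until every write handle is closed
exec 5>demo.fifo          # the parent shell now holds one write handle
( echo "written by a child" ) >&5   # a child writes; its copy closes when it exits
exec 5>&-                 # only when the parent also closes does cat see EOF and exit
wait
rm demo.fifo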
Thus, the gymnastics you see here:
When we run exec {stdout_log_fd}>&-, we're closing the parent shell's copy of the FIFO writing to the log function for stdout, so the only remaining copy is the one used by the tee child process -- so that when tee exits, the subshell running log exits too.
When we run exec 1>&$stdout_tee_fd {stdout_tee_fd}>&-, we're doing two things: First, we make FD 1 a copy of the file descriptor whose number is stored in the variable stdout_tee_fd; second, we delete the stdout_tee_fd entry from the file descriptor table, so only the copy on FD 1 remains. This ensures that later, when we run exec >&"$stdout_orig_fd", we're deleting the last remaining write handle to the stdout tee function, causing tee to get an EOF on stdin (so it exits, thus closing the handle it holds on the log function's subshell and letting that subshell exit as well).
Some Final Notes On Process Management
Unfortunately, how bash handles subshells created for process substitutions has changed substantially between still-actively-deployed releases; so while in theory it's possible to use wait "$pid" to let a process substitution exit and collect its exit status, this isn't always reliable -- hence the use of kill -0.
However, if wait "$pid" worked, it would be strongly preferable, because the wait() syscall is what removes a previously-exited process's entry from the process table: It is guaranteed that a PID will not be reused (and a zombie process-table entry is left as a placeholder) if no wait() or waitpid() invocation has taken place.
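For what it's worth, on a bash release where waiting on process-substitution PIDs does behave, the polling loop above could collapse to something like this (a sketch only, not reliable across versions):
for pid in "$stdout_tee_pid" "$stdout_log_pid" "$stderr_tee_pid" "$stderr_log_pid"; do
  wait "$pid" 2>/dev/null   # silently skip any PID this shell no longer tracks
done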
Modern operating systems try fairly hard to avoid short-term PID reuse, so wraparound is not an active concern in most scenarios. However, if you're worried about this, consider using the flock-based mechanism discussed in https://stackoverflow.com/a/31552333/14122 for waiting for your process substitutions to exit, instead of kill -0.
EDIT: Corrected process/thread terminology
My shell script has a foreground process that reads user input and a background process that prints messages. I would like to print these messages on the line above the input prompt rather than interrupting the input. Here's a canned example:
sleep 5 && echo -e "\nINFO: Helpful Status Update!" &
echo -n "> "
read input
When I execute it and type "input" a bunch of times, I get something like this:
> input input input inp
INFO: Helpful Status Update!
ut input
But I would like to see something like this:
INFO: Helpful Status Update!
> input input input input input
The solution need not be portable (I'm using bash on Linux), though I would like to avoid ncurses if possible.
EDIT: According to @Nick, previous lines are inaccessible for historical reasons. However, my situation only requires modifying the current line. Here's a proof of concept:
# Make named pipe
mkfifo pipe
# Spawn background process
while true; do
  sleep 2
  echo -en "\033[1K\rINFO: Helpful Status Update!\n> `cat pipe`"
done &
# Start foreground user input
echo -n "> "
pid=-1
collected=""
IFS=""
while true; do
  read -n 1 c
  collected="$collected$c"
  # Named pipes block writes, so must do background process
  echo -n "$collected" >> pipe &
  # Kill last loop's (potentially) still-blocking pipe write
  if kill -0 $pid &> /dev/null; then
    kill $pid &> /dev/null
  fi
  pid=$!
done
This produces mostly the correct behavior, but lacks CLI niceties like backspace and arrow navigation. These could be hacked in, but I'm still having trouble believing that a standard approach hasn't already been developed.
The original ANSI codes still work in a bash terminal on Linux (and macOS), so you can use \033[F, where \033 is the ESCape character. You can generate this in a bash terminal by typing Ctrl-V followed by the Escape key; you should see ^[ appear. Then type [F. If you test the following script:
echo "original line 1"
echo "^[[Fupdated line 1"
echo "line 2"
echo "line 3"
You should see output:
updated line 1
line 2
line 3
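The same effect can be had without embedding a literal ESC byte by letting printf generate it; adding \033[K clears the old line so a shorter replacement doesn't leave stray characters behind (a sketch):
printf 'original line 1\n'
printf '\033[F\033[Kupdated line 1\n'   # \033[F: up to the previous line; \033[K: erase it
printf 'line 2\n'
printf 'line 3\n'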
EDIT:
I forgot to add that using this in your script will cause the cursor to return to the beginning of the line, so further input will overwrite what you have typed already. You could use control-R on the keyboard to cause bash to re-type the current line and return the cursor to the end of the line.
So I was playing around with the language Octave, and it has this useful command called diary that logs stdout to a file for anything in between diary on and diary off:
diary on
a = [4 5, 2 6, 2 1]
a + 1
diary off
The above would save a file called diary in the working directory with the output of a, then a+1. It was super helpful for debugging, especially when looking at large datasets.
I was looking at other scripting languages and wondered if they have equivalents. The best I could come up with was echo hello.dat >> diary.txt for every single line. Does a tool exist that could achieve this functionality for bash? If not, how about Python? It seems like a basic thing, but I don't know how to do it.
If you don't need contents to keep going to the TTY, and want to redirect both stdout and stderr:
exec 3>&1 4>&2 >>diary.txt 2>&1
echo "Everything here goes to diary.txt"
echo "...without having to redirect each line separately"
exec >&3 2>&4
If you do need contents to keep going to the TTY:
exec 3>&1 4>&2 > >(tee -a diary.txt) 2>&1
echo "Everything here goes to diary.txt"
echo "...without having to redirect each line separately"
exec >&3 2>&4
Note that you can't redirect both stdout and stderr to the file without either losing their ordering (i.e. having two separate copies of tee and having to hope that they finish flushing in the same order in which you started writing to them) or losing information on which piece of output went to which descriptor.
The above can also be done with a multi-line block with a single redirection, which will do both the setup and the cleanup automatically:
{
  echo "Everything here goes to diary.txt"
  echo "...without having to redirect each line separately"
} >>diary.txt 2>&1
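For something closer to Octave's diary on / diary off toggle, the same exec juggling can be wrapped in a pair of functions. This is only a sketch -- diary_on and diary_off are made-up names, and the caveat about tee still flushing in the background applies here as well:
diary_on() {
  # keep copies of the real stdout/stderr, then route both through tee
  exec 3>&1 4>&2 > >(tee -a "${1:-diary.txt}") 2>&1
}

diary_off() {
  # put stdout/stderr back and drop the saved copies
  exec >&3 2>&4 3>&- 4>&-
}

diary_on mydiary.txt
echo "captured in mydiary.txt and still shown on the terminal"
diary_off
echo "back to normal"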