I am so sorry but I am really confused as to how stdout and logging work in Expect.
Actually I just want to log the stdout of a process spawned by Expect to a file.
With an easy command like:
echo hello
Stdout is:
hello
So I was just testing this with Expect:
$ expect
expect1.1> log_file log.txt
expect1.2> spawn echo hello
spawn echo hello
1978
expect1.3>
But in log.txt I got:
expect1.2> spawn echo hellospawn echo hello
1978
expect1.3>
How do I get expect to log just the stdout from the spawned process? In this case it is just:
hello
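For reference, a sketch of one common approach (not from the original post; it relies on Expect's log_user and log_file -a behaviour): run the commands as a script rather than interactively, suppress echoing of the spawned process to the screen, and force its output into the log, e.g. from a shell:
expect -c '
    log_user 0              ;# do not echo the spawned output to the terminal
    log_file -a log.txt     ;# -a logs output even though log_user is 0
    spawn echo hello
    expect eof              ;# wait for the spawned process to finish
    exit
'
log.txt should then contain only the output of the spawned command (hello, typically with a trailing carriage return), without the interactive prompt lines.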
Related
I am trying to save some logs from bash functions which execute tools (some of them run in subshells). In addition, I would like to print all errors to the terminal.
My code leads to a SIGPIPE and exit code 141 upon hitting ctrl-c, plus a strange log file. The SIGPIPE seems to be caused by the redirection of stdout to stderr within the trap, which breaks the stdout stream of the tee command. Interestingly, the code terminates as expected with exit code 130 without the redirection used in the trap, or without the cat command.
I am still unable to fix or explain the resulting log file. Why are there some echoes twice and why are the trap echoes written to the file as well?
Why isn't the SIGPIPE caused earlier by the redirection within the function?
trap '
    echo trap_stdout
    echo trap_stderr >&2
' INT
fun(){
    echo fun_stdout
    echo fun_stderr >&2
    ( sleep 10 | cat )
}
echo > log
fun >> log 2> >(tee -a log)
Resulting log file:
fun_stdout
fun_stderr
fun_stderr
trap_stdout
EDIT: working example according to oguz ismail's answer:
exec 3>> log                   # fd 3: appends to the log file
exec 4> >(tee -ai log >&2)     # fd 4: tee (ignoring SIGINT) appends to log and echoes to terminal stderr
fun 2>&4 >&3                   # fun's stderr goes through tee, its stdout straight to the log
exec 3>&-
exec 4>&-
Why are there some echoes twice
fun's stdout is redirected to log before its stderr is redirected to the FIFO created for tee; thus tee inherits a stdout that is already redirected to log. I can prove that like so:
$ : > file 2> >(date)
$ cat file
Sat Jul 25 18:46:31 +03 2020
Changing the order of redirections will fix that. E.g.:
fun 2> >(tee -a log) >> log
and why are the trap echoes written to the file as well?
If the trap set for SIGINT is triggered while the shell is still executing fun, it's perfectly normal that the redirections associated with fun take effect.
To connect the trap action's stdout and stderr to those of the main shell, you can do:
exec 3>&1 4>&2
handler() {
    : # handle SIGINT here
} 1>&3 2>&4
trap handler INT
Or something along those lines; the idea is to make copies of the main shell's stdout and stderr.
Why isn't the SIGPIPE caused earlier by the redirection within the function?
Because tee is alive while echo fun_stderr >&2 is being executed. And sleep does not write anything to its stdout, so it cannot trigger a SIGPIPE.
The reason this script terminates due to a SIGPIPE is that tee also receives the SIGINT generated by the keyboard and terminates before the trap action associated with SIGINT is executed. As a result, while executing echo trap_stderr >&2, the shell receives the SIGPIPE, since its stderr is connected to a pipe that was closed moments earlier.
To avoid this, as already suggested, you can make tee ignore SIGINT. You don't need to set an empty trap for that though, the -i option is enough.
fun 2> >(tee -a -i log) >> log
The source of the SIGPIPE is that the SIGINT (initiated by Ctrl-C) is sent to ALL running processes: both the "main" bash process (executing the 'fun' function) and the subshell executing the 'tee -a'. As a result, on Ctrl-C, both get killed. When the main process tries to send 'trap_stderr' to the "tee" process, it gets SIGPIPE, because the "tee" has already died.
Given the role of the 'tee -a', it makes sense to protect it from the SIGINT and allow it to run until 'fun' completes (or is killed). Consider the following change to the last line:
fun >> log 2> >(trap '' INT ; tee -a log >&2)
which will produce the following console output and log file:
Console (stderr):
fun_stderr
^Ctrap_stderr
Log file (no duplicates):
fun_stdout
fun_stderr
trap_stdout
trap_stderr
The above also addresses the second question, about duplicate lines in the log file. The duplicates are the result of using tee to send each stderr line to the log file AND to stdout. Given that stdout has just been redirected (by the '>>log') to the log file, both copies of the output are sent to the log file, and none to the terminal.
Given that the redirections are performed sequentially, changing the 'tee' line to send its output to the original stderr (instead of the already redirected stdout) will show the output on the terminal (or wherever stderr points).
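If you prefer not to rely on the process substitution inheriting the terminal's stderr, a sketch of an equivalent variant (the fd number 4 is arbitrary) is to copy the original stderr to a spare descriptor first and point tee's stdout at it:
exec 4>&2                                      # copy the terminal's stderr to a spare fd
fun >> log 2> >(trap '' INT; tee -a log >&4)   # tee writes its second copy to the saved fd 4
exec 4>&-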
Let's say you have a series of scripts that you don't own and, therefore, can't modify, that may spawn background processes without redirecting stdout and stderr. I've noticed that in bash, tee'ing the output, as shown in the following example, does not return when the script is done if the background process is still running (and has open file descriptors for stdout or stderr).
./runme.sh 2>&1| tee runme.out
Where runme.sh is defined as:
#!/bin/bash
# Start a fake daemon
perl -e 'while(1) { sleep(1) }' &
printf "Enter your name: "
read name
echo "Goodbye $name"
How can I run scripts like this in bash while capturing all output and get back to the prompt when the script is done?
An alternative syntax could be to use process substitution:
./runme.sh > >(tee runme.out) 2>&1
This way tee is no longer a child process of the current shell, so the shell waits only for runme.sh to terminate, whereas in a pipeline it waits for all processes in the pipeline to terminate.
Note that tee and subprocesses are still running after runme.sh terminates.
does not return when the script is done if the background process is still running (and has open file descriptors for stdout or stderr)
So don't do that. Daemon tools will generally redirect stdout/err for this reason, and you can do it manually too:
perl -e 'while(1) { sleep(1) }' < /dev/null > mydaemon.log 2>&1 &
Now that it's not keeping the pipe open, you can tee robustly without hacks.
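For illustration, a sketch of runme.sh with that change applied (this assumes you are in fact able to edit the script, which the question rules out in the general case):
#!/bin/bash
# Start the fake daemon, detached from the script's stdout/stderr
perl -e 'while(1) { sleep(1) }' < /dev/null > mydaemon.log 2>&1 &

printf "Enter your name: "
read name
echo "Goodbye $name"
With the daemon detached like this, ./runme.sh 2>&1 | tee runme.out returns as soon as the script finishes.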
Update: I think the code below may be headed in the wrong direction, but the question remains: can I open a pipe to log all output (file & console), pause that log and log to a new log (new file & console), and then re-attach to the FD for the original logger just by moving FDs around, without re-opening the original log file?
Trying to improve my knowledge of FDs in bash. I'm trying to log all output of the main "meta" test.sh -- but log to a different file when I get to "sections" -- e.g. functions, sourced scripts, etc. And then go back to appending to the "meta" log.
I know I could pretty easily accomplish this with subshells -- or by opening the 'meta' log again and append from there, but can anyone help accomplish this by switching FDs around?
#!/bin/bash
rm *.log
NAMED_PIPE="$(mktemp -u /tmp/pipe.XXXX)"
mknod $NAMED_PIPE p
tee <$NAMED_PIPE "./meta.log" &
section () {
    echo SECTION: stdout
    echo SECTION: stderr >&2
}
# link stdout->3 & stderr->4 and save stdout & stderr
exec 3>&1 4>&2 &> "$NAMED_PIPE"
echo METAstr: stdout
echo METAstr: stderr >&2
# restore stdout & stderr
exec 1>&3 2>&4
# sleep 1 # I think an additional delay prevents the possible race condition I'm seeing
# exec 1>&3- 2>&4- ... I think this would restore but close 3 & 4?
# do I need another named pipe here?
section 2>&1 # | tee section.log
# re-link to same pipe
exec 3>&1 4>&2 &> "$NAMED_PIPE"
echo METAend: stdout
echo METAend: stderr >&2
Without trying to log 'section', all the meta output gets printed after the script returns:
-bash-4.2# ./test.sh
SECTION: stdout
SECTION: stderr
-bash-4.2# METAstr: stdout
METAstr: stderr
METAend: stdout
METAend: stderr
And when trying to log 'section', I think it fouls up my FDs, so the following exec hangs:
-bash-4.2# ./test.sh
METAstr: stdout
METAstr: stderr
SECTION: stdout
SECTION: stderr
EDIT1:
Contents of meta.log after running the script without trying to tee section:
[root@master tmp]# cat meta.log
METAstr: stdout
METAstr: stderr
METAend: stdout
METAend: stderr
It logs the ending messages; the tee does not exit until the script does.
EDIT2:
Revision of EDIT1: I think it's a race condition. I think the FDs are being closed, but they're not closed by the time the final echo commands happen.
I was just going to write the same thing. It's a race condition. The second exec closes the writing end in your process, signalling an EOF to tee. tee will want to exit when it gets the EOF. If it does exit by the time you call the last exec, the last exec will block. If it hasn't exited yet at that point, it will not block, because a reading end of the FIFO will still be open.
Any delay will make it more likely tee will have exited by that time.
Spawning a process makes it very likely. I found with stracing (which slows the program down a little) it's about 50/50.
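One way to sidestep the race (a sketch, not from the original answer, reusing the meta.log and section names from the script above) is to hold an extra write descriptor on the FIFO for the whole script, so tee never sees EOF between the exec swaps and only exits when that descriptor is closed at the end:
#!/bin/bash
NAMED_PIPE="$(mktemp -u /tmp/pipe.XXXX)"
mkfifo "$NAMED_PIPE"
tee <"$NAMED_PIPE" "./meta.log" &
exec 9> "$NAMED_PIPE"            # extra writer keeps tee alive across sections

section () {
    echo SECTION: stdout
    echo SECTION: stderr >&2
}

exec 3>&1 4>&2 &> "$NAMED_PIPE"
echo METAstr: stdout
echo METAstr: stderr >&2
exec 1>&3 2>&4                   # restore; tee keeps reading because fd 9 is still open

section 2>&1 | tee section.log

exec 3>&1 4>&2 &> "$NAMED_PIPE"
echo METAend: stdout
echo METAend: stderr >&2
exec 1>&3 2>&4

exec 9>&-                        # close the last writer: tee finally gets EOF
wait                             # wait for tee to flush meta.log and exit
rm -f "$NAMED_PIPE"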
I'm writing a bash script and using the following trick to redirect standard output into a named pipe which is consumed by tee:
exec > >(tee -a $LOGFILE) 2>&1
However, when the script exits, it does not return the shell until I press enter. Is there a simple way to fix this while still using this approach?
Edit: This is the environment I'm running this in:
Centos 7
Bash version 4.2.45
Contents of simple script called redirect.sh:
#!/bin/bash
exec > >(tee -a /tmp/haha) 2>&1
echo "hi there"
exit 0
Sample session:
[root@linux-ha-1 ~]# ./redirect.sh
[root@linux-ha-1 ~]# hi there
[root@linux-ha-1 ~]#
The prompt is being printed; unfortunately, it is printed before tee's output is printed (which is why it appears before hi there in the sample output).
Since the tee process is running asynchronously, there is no guarantee that it will send its output to the console before the script terminates. What you really want to do is close the pipe feeding the tee process and then wait for it to terminate before exiting from the script. This cannot be done with process substitution, unfortunately, but it can be accomplished either with coprocesses (in bash 4) or using named pipes, as is explained in the answer to bash: How do I ensure termination of process substitution used with exec?
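For completeness, here is a sketch of the named-pipe variant (the FIFO path and fd numbers are arbitrary; /tmp/haha is kept from the example above): route the script's output through a FIFO, and wait for the backgrounded tee before exiting:
#!/bin/bash
FIFO=$(mktemp -u /tmp/log_fifo.XXXX)   # hypothetical FIFO path
mkfifo "$FIFO"
tee -a /tmp/haha < "$FIFO" &
TEE_PID=$!

exec 3>&1 4>&2          # keep copies of the original stdout/stderr
exec > "$FIFO" 2>&1     # send everything the script prints through the FIFO

echo "hi there"

exec 1>&3 2>&4          # restore; this closes the FIFO's last writer
wait "$TEE_PID"         # block until tee has flushed its output and exited
rm -f "$FIFO"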
For a simpler (but unreliable) solution, close the pipes feeding the tee process (which will force it to close) and then wait a few milliseconds:
#!/bin/bash
exec 3>&1 > >(tee -a /tmp/haha) 2>&1
echo "hi there"
exec 1>&3 2>&3     # close the pipes feeding tee, forcing it to exit
sleep 0.1          # give tee a moment to flush before the script exits
All I want to do is just redirect the executed command's stdout to a pipe. An example will explain it better than I can.
$ echo "Hello world" | cowsay
outputs "Hello world" via cowsay. I want to preprocess the terminal's / bash's stdout so that it passes through cowsay, i.e.
$ echo "Hello world"
this should output the same as the first command.
Thanks in advance.
You can use process substitution:
#!/bin/bash
exec > >(cowsay)
echo "Hello world"
However, there are caveats. If you start anything in the background, the cow will wait for it to finish:
#!/bin/bash
exec > >(cowsay)
echo "Hello world"
sleep 30 & # Runs in the background
exit # Script exits immediately but no cow appears
In this case, the script will exit with no output. 30 seconds later, when sleep exits, the cow suddenly shows up.
You can fix this by telling the programs to write somewhere else, so that the cow doesn't have to wait for them:
#!/bin/bash
exec > >(cowsay)
echo "Hello world"
sleep 30 > /dev/null & # Runs in the background and doesn't keep the cow waiting
exit # Script exits and the cow appears immediately.
If you don't start anything in the background, one of your tools or programs does. To find which ones, redirect or comment them out one by one until cows appear.
You can use a named pipe:
mkfifo /tmp/cowsay_pipe
cowsay < /tmp/cowsay_pipe &
exec > /tmp/cowsay_pipe # Redirect all future output to the pipe
echo "Hello world"