Can't redirect command output to file - bash

I've been writing a bash script to install PBIS Open. When I execute the command domainjoin-cli join $domain $join_account $password, I can see the output on the terminal. However, if I try to capture that output and save it to a file, the file is empty.
I've tried adding <cmd> > output.txt
I've tried using
script output.txt
<cmd>
exit
I've searched for a day now but I can't seem to find a working solution.

There are two output streams, stdout and stderr. The output is probably coming out on stderr, and > by itself captures only stdout.
Try executing with
<cmd> &> filename
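As a quick illustration of the difference (the filenames here are illustrative, and ls on a missing path is used because it writes its complaint to stderr):

```shell
# 'ls' on a missing path writes its error message to stderr, not stdout,
# so a plain '>' redirect captures nothing. Filenames are illustrative.
ls /no/such/path > only-stdout.txt     # error still appears on the terminal
ls /no/such/path &> both-streams.txt   # stdout AND stderr land in the file
wc -c only-stdout.txt                  # 0 bytes: the error never hit stdout
```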


Where does `set -x` cause bash to print to?

If I set -x in my bash session (v4.1.2(2) - CentOS 6.10), I get:
$ ls /root
+ ls --color=auto /root
ls: cannot open directory /root: Permission denied
Great, it echoes the command I ran and prints it to the terminal. This is expected. Now if I redirect both stdout and stderr to another file:
$ ls /root &> stuff.txt
+ ls --color=auto /root
It still prints the command to the terminal.
QUESTION
Where is set -x having bash print to if it isn't stderr or stdout?
The set -x command prints tracing information to stderr.
When you run this command...
ls /root &> stuff.txt
You're only redirecting stdout and stderr for the ls command. You're not changing either for your current shell, which is where you have run set -x.
As Mad Physicist points out, the technical answer is "it logs to BASH_XTRACEFD", which defaults to stderr. You can redirect trace logging for the current shell to another file by doing something like:
# open a new file descriptor for logging
exec 4> trace.log
# redirect trace logs to fd 4
BASH_XTRACEFD=4
# enable tracing
set -x
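Put together as a minimal runnable sketch (trace.log is an illustrative name; BASH_XTRACEFD requires bash 4.1 or later):

```shell
#!/usr/bin/env bash
# Route 'set -x' tracing to trace.log via a dedicated file descriptor,
# keeping the terminal's stderr free for real error messages.
exec 4> trace.log    # open fd 4 backing the trace log
BASH_XTRACEFD=4      # tell bash to write xtrace output to fd 4
set -x
echo "hello"         # prints normally; '+ echo hello' goes to trace.log
set +x
```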
When you execute a command, you can redirect its standard output (also visible as /dev/stdout) directly to a file. If the command generates error output (generally sent to /dev/stderr), you can redirect that to a file as well:
$ command > /path/to/output.txt 2> /path/to/error.txt
When you execute the command set -x, you ask bash to generate a trace of the commands being executed. It does this by sending messages to /dev/stderr. Unlike a normal command's output, this trace cannot easily be redirected per command, because bash itself produces it on /dev/stderr while executing the script. If you want to catch the trace, you have to redirect the error output of bash itself. This can be done with the command
exec 2> /path/to/trace.txt
Note: the trace file will then also contain the error output of every command executed in the script.
Examples:
#!/usr/bin/env bash
set -x
command
This sends all output and error output to the terminal
#!/usr/bin/env bash
set -x
command 2> /path/to/command.err
This sends the output of command and the trace of bash to the terminal but catches the error output of command in a file
#!/usr/bin/env bash
set -x
exec 2> /path/to/trace.err
command 2> /path/to/command.err
This sends the output of command to the terminal, the error output of command to a file, and the trace of the script to /path/to/trace.err
#!/usr/bin/env bash
set -x
exec 2> /path/to/trace_and_command.err
command
This sends the output of command to the terminal, the trace and the error of command to a file.

How to capture the output of a bash command which prompts for a user's confirmation without blocking the output nor the command

I need to capture the output of a bash command which prompts for a user's confirmation without altering its flow.
I know only 2 ways to capture a command output:
- output=$(command)
- command > file
In both cases, the whole process is blocked without any output.
For instance, without --assume-yes:
output=$(apt purge 2>&1 some_package)
I cannot print the output back because the command is not done yet.
Any suggestion?
Edit 1: The user must be able to answer the prompt.
EDIT 2: I used dash-o's answer to complete a bash script allowing a user to remove/purge all obsolete packages (which have no installation candidate) from any Debian/Ubuntu distribution.
To capture partial output from a command that is waiting for a prompt, one can tail a temporary file, potentially with 'tee' to keep the output flowing to the terminal if needed. The downside of this approach is that stderr needs to be tied to stdout, making it hard to tell the two apart (if that matters).
#!/bin/bash
log=/path/to/log-file
: > "$log"                       # start with an empty log file
(
    # Poll until the prompt string shows up in the log
    while ! grep -q -F 'continue?' "$log" ; do sleep 2 ; done
    output=$(<"$log")            # partial output captured so far
    echo do-something "$output"
) &
# Run command with output to terminal
apt purge some_package 2>&1 | tee -a "$log"
# If output to terminal is not needed, replace the command above with
# apt purge some_package >> "$log" 2>&1
There is no generic way to tell (from a script) when exactly a program prompts for input. The above code looks for the prompt string ('continue?'), so this will have to be customized per command.
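The mechanics can be demonstrated without apt at all. In this self-contained simulation, fake_cmd and prompt.log are invented names standing in for the real prompting command and the temp file, and the answer "y" is piped in so the demo runs unattended (in the real script the user types it):

```shell
#!/usr/bin/env bash
# Simulate a prompting command while a background watcher polls the log
# for the prompt string. 'fake_cmd' and 'prompt.log' are demo names.
log=prompt.log
: > "$log"

fake_cmd() {
    echo "The following packages will be REMOVED: some_package"
    echo "Do you want to continue? [Y/n]"
    read -r answer                  # would block on a real terminal
    echo "answered: $answer"
}

( while ! grep -q -F 'continue?' "$log"; do sleep 0.2; done
  echo "[watcher] prompt seen; partial output is in $log" ) &

# In real use the user answers interactively; here we feed 'y' ourselves.
echo "y" | fake_cmd 2>&1 | tee -a "$log"
wait
```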

Redirect output to a file AND console

Currently I have a script where I want all output to be redirected to both a file and the console.
#!/bin/bash
touch /mnt/ybdata/ybvwconf.log
{
... [Do some script code here]
} 2>&1 | tee -a /mnt/ybdata/ybvwconf.log
Above, you can see my current code, which works perfectly fine. It prints all the output to the console as well as piping it to the ybvwconf.log file. However, I was looking for a way to eliminate the curly brackets. Something along the lines of this:
#!/bin/bash
touch /mnt/ybdata/ybvwconf.log
exec 2>&1 | tee -a /mnt/ybdata/ybvwconf.log
... [Do some script code here]
I have tried this approach and sadly it does not work. I don't get any errors but no content appears in my log file. Any ideas what might be wrong?
You can place this at the top of your script to redirect both stdout and stderr to a file while still showing them on the terminal.
#!/bin/bash
exec &> >(tee /mnt/ybdata/ybvwconf.log; exit)
# your script code goes here
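A runnable sketch of that pattern (demo.log is an illustrative name; the trailing sleep is there only because tee runs as a separate process and may not have flushed by the time the script exits):

```shell
#!/usr/bin/env bash
# Mirror everything the script prints (stdout and stderr) to both the
# terminal and a log file. 'demo.log' is an illustrative name.
exec &> >(tee -a demo.log)

echo "this line goes to the screen and to demo.log"
ls /no/such/dir          # the error message is mirrored as well
sleep 1                  # give the background tee a moment to flush
```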

Bash Redirect to a file

I am trying to redirect output of a command to a file. The command I am using (zypper) downloads packages from the internet. The command I am using is
zypper -x -n in geany >> log.txt
The command gradually prints output to the console while it runs, but with the redirection above the output is written to the file all at once, after the command finishes executing. How do I redirect the output to the file as it appears on the terminal, rather than all at the end?
Not with bash itself, but via the tee command:
zypper -x -n in geany | tee log.txt
&>>FILE COMMAND
will append the output of COMMAND to FILE
In your case
&>>log.txt zypper -x -n in geany
If you want to pipe a command through a filter, you must ensure that the command writes to standard output (file descriptor 1); if it writes to standard error (file descriptor 2), you have to redirect 2 to 1 before the pipe. Keep in mind that only stdout passes through a pipe.
So you have to do so:
2>&1 COMMAND | FILTER
If you want to grep the output and at the same time keep it in a log file, you have to duplicate it with tee, and use a filter like ... | tee log-file | grep options
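For example, with a stand-in for zypper (long_cmd and its messages are invented for the demo):

```shell
#!/usr/bin/env bash
# 'long_cmd' stands in for zypper: it writes to both stdout and stderr.
long_cmd() {
    echo "Retrieving package geany"
    echo "error: mirror timed out" >&2
}

# Merge stderr into stdout, keep a full copy in the log, show only errors.
long_cmd 2>&1 | tee full.log | grep 'error'
```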

Write STDOUT & STDERR to a logfile, also write STDERR to screen

I would like to run several commands, and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone).
Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile.
{ command1 && command2 && command3 ; } > logfile.log 2>&1
Here is what I want to do with the output of these commands:
STDERR and STDOUT for all commands go to a logfile, in case I need it later; I usually won't look in here unless there are problems.
Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored.
It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this:
{ command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" stefanl@example.org
The problem I run into is that STDERR loses its identity during I/O redirection: '2>&1' merges STDERR into STDOUT, so I can no longer single out the errors, and with a plain 2> error.log the errors no longer appear in the combined log.
Here are a couple of juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error, so I use the '--keep-going' flag.
{ ./configure && make --keep-going && make install ; } > build.log 2>&1
Or, here's a simple (And perhaps sloppy) build and deploy script, which will keep going in the event of an error.
{ ./configure && make --keep-going && make install && rsync -av /foo devhost:/foo ; } > build-and-deploy.log 2>&1
I think what I want involves some sort of Bash I/O Redirection, but I can't figure this out.
(./doit >> log) 2>&1 | tee -a log
This takes stdout and appends it to the log file.
The stderr is then converted to stdout, which is piped to tee; tee appends it to the log (if you have Bash 4, you can replace 2>&1 | with |&) and sends it on to its own stdout, which will either appear on the tty or can be piped to another command.
I used append mode for both so that regardless of which order the shell redirection and tee open the file, you won't blow away the original. That said, it may be possible that stderr/stdout is interleaved in an unexpected way.
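The pattern can be seen with a stand-in for ./doit (the function and file names below are invented for the demo):

```shell
#!/usr/bin/env bash
# 'doit' stands in for the real command: one line to stdout, one to stderr.
doit() { echo "out line"; echo "err line" >&2; }

: > log                            # start with an empty log
(doit >> log) 2>&1 | tee -a log    # stdout appended directly; stderr via tee

# Afterwards 'log' holds both lines, and "err line" was also shown on screen.
```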
If your system has /dev/fd/* nodes you can do it as:
( exec 5>logfile.txt ; { command1 && command2 && command3 ;} 2>&1 >&5 | tee /dev/fd/5 )
This opens file descriptor 5 to your logfile. Executes the commands with standard error directed to standard out, standard out directed to fd 5 and pipes stdout (which now contains only stderr) to tee which duplicates the output to fd 5 which is the log file.
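Here is a toy version of the trick. It uses tee -a rather than plain tee, since on Linux reopening the log through /dev/fd/5 without append mode can truncate what fd 5 already wrote; the echoed strings are invented for the demo:

```shell
#!/usr/bin/env bash
# Toy version of the fd-5 trick: both streams end up in logfile.txt,
# while only stderr also continues down the pipe to the terminal.
( exec 5> logfile.txt
  { echo "stdout line"; echo "stderr line" >&2; } 2>&1 >&5 | tee -a /dev/fd/5 )
```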
Here is how to run one or more commands, capturing the standard output and error in the order in which they are generated to a logfile, while displaying only the standard error on any terminal screen you like. Works in bash on Linux, and probably in most other environments. I will use an example to show how it's done.
Preliminaries:
Open two windows (shells, tmux sessions, whatever)
I will demonstrate with some test files, so create the test files:
touch /tmp/foo /tmp/foo1 /tmp/foo2
in window1:
mkfifo /tmp/fifo
0</tmp/fifo cat - >/tmp/logfile
Then, in window2:
(ls -l /tmp/foo /tmp/nofile /tmp/foo1 /tmp/nofile /tmp/nofile; echo successful test; ls /tmp/nofile1111) 2>&1 1>/tmp/fifo | tee /tmp/fifo 1>/dev/pts/2
Where you replace /dev/pts/2 with whatever tty you want the stderr to display.
The reason for the various successful and unsuccessful commands in the subshell is simply to generate a mingled stream of output and error messages, so that you can verify the correct ordering in the log file. Once you understand how it works, replace the “ls” and “echo” commands with scripts or commands of your choosing.
With this method, the ordering of output and error is preserved, the syntax is simple and clean, and there is only a single reference to the output file. Plus there is flexibility in putting the extra copy of stderr wherever you want.
Try:
command 2>&1 | tee output.txt
Additionally, you can direct stdout and stderr to different places:
command > stdout.txt 2> stderr.txt
command > stdout.txt |& program_for_stderr
So some combination of the above should work for you -- e.g. you could save stdout to a file, and stderr to both a file and piping to another program (with tee).
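One such combination, sketched with invented names (run_demo and combined.log): stdout goes to the log only, while stderr goes to the same log and back to the screen. The trailing sleep only gives the background tee time to flush before the script exits:

```shell
#!/usr/bin/env bash
# Save stdout to a log only; send stderr to the same log AND the screen.
# 'run_demo' and 'combined.log' are invented for this sketch.
run_demo() { echo "quiet progress"; echo "loud error" >&2; }

run_demo > combined.log 2> >(tee -a combined.log >&2)
sleep 1   # let the background tee flush before the script exits
```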
Add this at the beginning of your script:
#!/bin/bash
set -e
outfile=logfile
exec > >(cat >> "$outfile")
exec 2> >(tee -a "$outfile" >&2)
# write your code here
STDOUT and STDERR will be written to $outfile, only STDERR will be seen on the console
