I don't understand bash exec [duplicate]

This question already has answers here:
How to redirect output of an entire shell script within the script itself?
(6 answers)
Closed 7 years ago.
#!/bin/bash
#...
exec >> logfile
#cmd
I guess it saves the output.
Could you explain it with an example?
Thank you very much!
I often use this command to list files that contain some text:
find / -type f -exec grep -l "bash" {} \;
but I don't really understand that either.

Yes, it sends any further output to the file named logfile. In other words, it redirects standard output (also known as stdout) to the file logfile.
Example
Let's start with this script:
$ cat >script.sh
#!/bin/bash
echo First
exec >>logfile
echo Second
If we run the script, we see output from the first echo statement but not the second:
$ bash script.sh
First
The output from the second echo statement went to the file logfile:
$ cat logfile
Second
$
If we had used exec >logfile, then the logfile would be overwritten each time the script was run. Because we used >> instead of >, however, the output will be appended to logfile. For example, if we run it once again:
$ bash script.sh
First
$ cat logfile
Second
Second
Documentation
This is documented in man bash:
exec [-cl] [-a name] [command [arguments]]
If command is specified, it replaces the shell. No new process is created. The arguments become the arguments to command. If the -l option is supplied, the shell places a dash at the beginning of the zeroth argument passed to command. This is what login(1) does. The -c option causes command to be executed with an empty environment. If -a is supplied, the shell passes name as the zeroth argument to the executed command. If command cannot be executed for some reason, a non-interactive shell exits, unless the execfail shell option is enabled. In that case, it returns failure. An interactive shell returns failure if the file cannot be executed. If command is not specified, any redirections take effect in the current shell, and the return status is 0. If there is a redirection error, the return status is 1. [Emphasis added.]
In your case, no command argument is specified. So, the exec command performs redirections which, in this case, means any further stdout is sent to file logfile.
find command and -exec
The find command has a -exec option. For example:
find / -type f -exec grep -l "bash" {} \;
Other than the similarity in name, the -exec here has absolutely nothing to do with the shell command exec.
The construct -exec grep -l "bash" {} \; tells find to execute the command grep -l "bash" on any files that it finds. This is unrelated to the shell command exec >>logfile which executes nothing but has the effect of redirecting output.
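For comparison, here is a small sketch of the two -exec terminators that find accepts (the search path is just an example):
find /etc -type f -exec grep -l "bash" {} \;   # runs one grep per file found
find /etc -type f -exec grep -l "bash" {} +    # passes many files to a single grep invocation, which is usually faster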

Everything the script writes to stdout will no longer appear on the terminal; it will go to the logfile instead.
See example below:
#!/bin/bash
# reassign-stdout.sh
LOGFILE=logfile.txt
exec 6>&1 # Link file descriptor #6 with stdout.
# Saves stdout.
exec > $LOGFILE # stdout replaced with file "logfile.txt".
# ----------------------------------------------------------- #
# All output from commands in this block sent to file $LOGFILE.
echo -n "Logfile: "
date
echo "-------------------------------------"
echo
echo "Output of \"ls -al\" command"
echo
ls -al
echo; echo
echo "Output of \"df\" command"
echo
df
# ----------------------------------------------------------- #
exec 1>&6 6>&- # Restore stdout and close file descriptor #6.
echo
echo "== stdout now restored to default == "
echo
ls -al
echo
exit 0

This redirects standard output produced by your script, and by any other programs it subsequently calls, into the logfile. So if your script runs in a terminal, after it executes
exec >> logfile
you'll see no further output in your terminal window, but you'll find it in logfile. logfile will be created if it doesn't exist and appended to every time you run your script.

Related

I want to place after all: echo "etc" ( >> filename) [duplicate]

Is it possible to redirect all of the output of a Bourne shell script to somewhere, but with shell commands inside the script itself?
Redirecting the output of a single command is easy, but I want something more like this:
#!/bin/sh
if [ ! -t 0 ]; then
# redirect all of my output to a file here
fi
# rest of script...
Meaning: if the script is run non-interactively (for example, cron), save off the output of everything to a file. If run interactively from a shell, let the output go to stdout as usual.
I want to do this for a script normally run by the FreeBSD periodic utility. It's part of the daily run, which I don't normally care to see every day in email, so I don't have it sent. However, if something inside this one particular script fails, that's important to me and I'd like to be able to capture and email the output of this one part of the daily jobs.
Update: Joshua's answer is spot-on, but I also wanted to save and restore stdout and stderr around the entire script, which is done like this:
# save stdout and stderr to file
# descriptors 3 and 4,
# then redirect them to "foo"
exec 3>&1 4>&2 >foo 2>&1
# ...
# restore stdout and stderr
exec 1>&3 2>&4
Addressing the question as updated.
#...part of script without redirection...
{
#...part of script with redirection...
} > file1 2>file2 # ...and others as appropriate...
#...residue of script without redirection...
The braces '{ ... }' provide a unit of I/O redirection. The braces must appear where a command could appear - simplistically, at the start of a line or after a semi-colon. (Yes, that can be made more precise; if you want to quibble, let me know.)
You are right that you can preserve the original stdout and stderr with the redirections you showed, but it is usually simpler for the people who have to maintain the script later to understand what's going on if you scope the redirected code as shown above.
The relevant sections of the Bash manual are Grouping Commands and I/O Redirection. The relevant sections of the POSIX shell specification are Compound Commands and I/O Redirection. Bash has some extra notations, but is otherwise similar to the POSIX shell specification.
Typically we would place one of these at or near the top of the script. Scripts that parse their command lines would do the redirection after parsing (see the sketch after the snippets below).
Send stdout to a file
exec > file
with stderr
exec > file
exec 2>&1
append both stdout and stderr to file
exec >> file
exec 2>&1
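As a sketch of the "redirect after parsing" remark above (the -l option and the default log path are assumptions for illustration):
#!/bin/sh
logfile=/var/log/myscript.log
while getopts l: opt; do
    case $opt in
        l) logfile=$OPTARG ;;
    esac
done
shift $((OPTIND - 1))
exec >> "$logfile" 2>&1   # from here on, everything is appended to the log
echo "started at $(date)"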
As Jonathan Leffler mentioned in his comment:
exec has two separate jobs. The first one is to replace the currently executing shell (script) with a new program. The other is changing the I/O redirections in the current shell. This is distinguished by having no argument to exec.
You can make the whole script a function like this:
main_function() {
do_things_here
}
then at the end of the script have this:
if [ -z "$TERM" ]; then
# if not run via terminal, log everything into a log file
main_function >> /var/log/my_uber_script.log 2>&1
else
# run via terminal, only output to screen
main_function
fi
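A variation on that check (a sketch, not taken from the answer above): test whether stdout is attached to a terminal instead of relying on $TERM:
if [ -t 1 ]; then
    main_function                                       # interactive: output goes to the screen
else
    main_function >> /var/log/my_uber_script.log 2>&1   # non-interactive: append everything to the log
fi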
Alternatively, you may log everything into logfile each run and still output it to stdout by simply doing:
# log everything, but also output to stdout
main_function 2>&1 | tee -a /var/log/my_uber_script.log
For saving the original stdout and stderr you can use:
exec [fd number]<&1
exec [fd number]<&2
For example, the following code will print "walla1" and "walla2" to the log file (a.txt), "walla3" to stdout, "walla4" to stderr.
#!/bin/bash
exec 5<&1
exec 6<&2
exec 1> ~/a.txt 2>&1
echo "walla1"
echo "walla2" >&2
echo "walla3" >&5
echo "walla4" >&6
[ -t 0 ] || exec >> test.log   # if stdin is not a terminal, append all further output to test.log
I finally figured out how to do it. I wanted to not just save the output to a file but also, find out if the bash script ran successfully or not!
I've wrapped the commands inside a function and then called main_function with its output teed to a file. Afterwards, I've checked the exit status using if [ $? -eq 0 ].
#!/bin/bash
main_function() {
python command.py
}
main_function > >(tee -a "/var/www/logs/output.txt") 2>&1
if [ $? -eq 0 ]
then
echo 'Success!'
else
echo 'Failure!'
fi

How can I conditionally copy output to a file without repeating echo/printf statements? [duplicate]

I know how to redirect stdout to a file:
exec > foo.log
echo test
this will put the 'test' into the foo.log file.
Now I want to redirect the output into the log file AND keep it on stdout
i.e. it can be done trivially from outside the script:
script | tee foo.log
but I want to declare it within the script itself
I tried
exec | tee foo.log
but it didn't work.
#!/usr/bin/env bash
# Redirect stdout ( > ) into a named pipe ( >() ) running "tee"
exec > >(tee -i logfile.txt)
# Without this, only stdout would be captured - i.e. your
# log file would not contain any error messages.
# SEE (and upvote) the answer by Adam Spiers, which keeps STDERR
# as a separate stream - I did not want to steal from him by simply
# adding his answer to mine.
exec 2>&1
echo "foo"
echo "bar" >&2
Note that this is bash, not sh. If you invoke the script with sh myscript.sh, you will get an error along the lines of syntax error near unexpected token '>'.
If you are working with signal traps, you might want to use the tee -i option to avoid disruption of the output if a signal occurs. (Thanks to JamesThomasMoon1979 for the comment.)
Tools that change their output depending on whether they write to a pipe or a terminal (ls using colors and columnized output, for example) will detect the above construct as meaning that they output to a pipe.
There are options to enforce the colorizing / columnizing (e.g. ls -C --color=always). Note that this will result in the color codes being written to the logfile as well, making it less readable.
The accepted answer does not preserve STDERR as a separate file descriptor. That means
./script.sh >/dev/null
will not output bar to the terminal, only to the logfile, and
./script.sh 2>/dev/null
will output both foo and bar to the terminal. Clearly that's not
the behaviour a normal user is likely to expect. This can be
fixed by using two separate tee processes both appending to the same
log file:
#!/bin/bash
# See (and upvote) the comment by JamesThomasMoon1979
# explaining the use of the -i option to tee.
exec > >(tee -ia foo.log)
exec 2> >(tee -ia foo.log >&2)
echo "foo"
echo "bar" >&2
(Note that the above does not initially truncate the log file - if you want that behaviour you should add
>foo.log
to the top of the script.)
The POSIX.1-2008 specification of tee(1) requires that output is unbuffered, i.e. not even line-buffered, so in this case it is possible that STDOUT and STDERR could end up on the same line of foo.log; however that could also happen on the terminal, so the log file will be a faithful reflection of what could be seen on the terminal, if not an exact mirror of it. If you want the STDOUT lines cleanly separated from the STDERR lines, consider using two log files, possibly with date stamp prefixes on each line to allow chronological reassembly later on.
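A minimal sketch of that two-file, timestamp-prefixed idea (bash; the file names and date format are assumptions):
#!/bin/bash
stamp() { while IFS= read -r line; do printf '%s %s\n' "$(date +%FT%T)" "$line"; done; }
exec > >(stamp >> out.log)    # stdout lines, each prefixed with a timestamp
exec 2> >(stamp >> err.log)   # stderr lines, kept in their own file
echo "normal output"
echo "an error" >&2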
Solution for busybox, macOS bash, and non-bash shells
The accepted answer is certainly the best choice for bash. I'm working in a Busybox environment without access to bash, and it does not understand the exec > >(tee log.txt) syntax. It also does not do exec >$PIPE properly, trying to create an ordinary file with the same name as the named pipe, which fails and hangs.
Hopefully this would be useful to someone else who doesn't have bash.
Also, for anyone using a named pipe, it is safe to rm $PIPE, because that unlinks the pipe from the VFS, but the processes that use it still maintain a reference count on it until they are finished.
Note the use of $* is not necessarily safe.
#!/bin/sh
if [ "$SELF_LOGGING" != "1" ]
then
# The parent process will enter this branch and set up logging
# Create a named piped for logging the child's output
PIPE=tmp.fifo
mkfifo $PIPE
# Launch the child process with stdout redirected to the named pipe
SELF_LOGGING=1 sh $0 $* >$PIPE &
# Save PID of child process
PID=$!
# Launch tee in a separate process
tee logfile <$PIPE &
# Unlink $PIPE because the parent process no longer needs it
rm $PIPE
# Wait for child process, which is running the rest of this script
wait $PID
# Return the error code from the child process
exit $?
fi
# The rest of the script goes here
Inside your script file, put all of the commands within parentheses, like this:
(
echo start
ls -l
echo end
) | tee foo.log
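One caveat with this form is that the script's overall exit status becomes tee's, not the group's. A minimal sketch of working around that with bash's pipefail option:
#!/bin/bash
set -o pipefail
(
    echo start
    false          # stand-in for a command that fails
) | tee foo.log
echo "pipeline status: $?"   # 1 with pipefail; it would be 0 without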
An easy way to make a bash script log to syslog. The script's output is available both through /var/log/syslog and through stderr. syslog will add useful metadata, including timestamps.
Add this line at the top:
exec &> >(logger -t myscript -s)
Alternatively, send the log to a separate file:
exec &> >(ts |tee -a /tmp/myscript.output >&2 )
This requires moreutils (for the ts command, which adds timestamps).
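To read the syslog copy back, something along these lines usually works (the exact log path and tooling depend on the distribution, so treat these as assumptions):
grep myscript /var/log/syslog   # traditional syslog file
journalctl -t myscript          # on systemd-based systems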
Using the accepted answer, my script kept returning control prematurely (right after 'exec > >(tee ...)'), leaving the rest of my script running in the background. As I couldn't get that solution to work my way, I found another solution/workaround to the problem:
# Logging setup
logfile=mylogfile
mkfifo ${logfile}.pipe
tee < ${logfile}.pipe $logfile &
exec &> ${logfile}.pipe
rm ${logfile}.pipe
# Rest of my script
This sends the script's output through the pipe into the backgrounded 'tee' process, which logs everything to disk and to the script's original stdout.
Note that 'exec &>' redirects both stdout and stderr; we could redirect them separately if we like, or change to 'exec >' if we just want stdout.
Even though the pipe is removed from the file system at the beginning of the script, it will continue to function until the processes finish. We just can't reference it by file name after the rm line.
Bash 4 has a coproc command which establishes a named pipe to a command and allows you to communicate through it.
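A minimal sketch of that coproc idea (assumes bash 4 or later; the log file name is an assumption, and output relayed through the coprocess can lag slightly behind direct terminal output):
#!/bin/bash
exec 3>&1                                   # keep a copy of the original stdout on fd 3
coproc LOGGER { tee -a logfile.txt >&3; }   # tee copies its input to the log and to the saved stdout
exec 1>&"${LOGGER[1]}" 2>&1                 # route the script's stdout and stderr into the coprocess
echo "hello"                                # appears on the terminal and in logfile.txt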
Can't say I'm comfortable with any of the solutions based on exec. I prefer to use tee directly, so I make the script call itself with tee when requested:
# my script:
check_tee_output()
{
# copy (append) stdout and stderr to log file if TEE is unset or true
if [[ -z $TEE || "$TEE" == true ]]; then
echo '-------------------------------------------' >> log.txt
echo '***' $(date) $0 "$@" >> log.txt
TEE=false "$0" "$@" 2>&1 | tee --append log.txt
exit $?
fi
}
check_tee_output "$@"
rest of my script
This allows you to do this:
your_script.sh args # tee
TEE=true your_script.sh args # tee
TEE=false your_script.sh args # don't tee
export TEE=false
your_script.sh args # tee
You can customize this, e.g. make TEE=false the default instead, make TEE hold the log file name instead, etc. I guess this solution is similar to jbarlow's, but simpler; maybe mine has limitations that I have not come across yet.
Neither of these is a perfect solution, but here are a couple things you could try:
exec >foo.log
tail -f foo.log &
# rest of your script
or
PIPE=tmp.fifo
mkfifo $PIPE
tee foo.log <$PIPE &   # start the reader first; opening the fifo for writing blocks until a reader exists
exec >$PIPE
# rest of your script
rm $PIPE
The second one would leave a pipe file sitting around if something goes wrong with your script, which may or may not be a problem (i.e. maybe you could rm it in the parent shell afterwards).
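A sketch of that cleanup, using a trap so the pipe is removed even if the script exits part-way (same fifo name as above):
PIPE=tmp.fifo
trap 'rm -f "$PIPE"' EXIT   # remove the pipe on any exit
mkfifo $PIPE
tee foo.log <$PIPE &
exec >$PIPE
# rest of your script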

How to capture shell script output

I have a unix shell script. I have put -x in the shell to see all the execution steps. Now I want to capture these in one log file on a daily basis.
Please see the script below.
#!/bin/ksh -x
logfile=path.log.date
print "copying file" | tee $logfile
scp -i key source destination | tee -a $logfile
exit 0
The first line of a shell script is known as the shebang; it indicates which interpreter will execute the script.
Because that line starts with #, the interpreter itself treats it as a comment.
To capture the output, redirect it while running the script:
ksh -x scriptname >> output_file
Note: this will log what your script is doing, line by line.
There are two cases: if ksh is your interactive shell, do the I/O redirection there directly; if you are using some other shell to execute a .ksh script, the I/O redirection is handled by that shell. The following method should work for most shells.
$ cat somescript.ksh
#!/bin/ksh -x
printf "Copy file \n";
printf "Do something else \n";
Run it:
$ ./somescript.ksh 1>some.log 2>&1
some.log will contain,
+ printf 'Copy file \n'
Copy file
+ printf 'Do something else \n'
Do something else
In your case, there is no need to specify a logfile or use tee inside the script. The script would look something like this:
#!/bin/ksh -x
printf "copying file\n"
scp -i key user@server /path/to/file
exit 0
Run it:
$ ./myscript 1>/path/to/logfile 2>&1
1>/path/to/logfile sends stdout to the logfile, and 2>&1 then sends stderr to the same place.
I would prefer to explicitly redirect the output (including stderr with 2>, because set -x sends its trace to stderr).
This keeps the shebang short, and you don't have to cram the redirection and filename-building into it.
#!/bin/ksh
logfile=path.log.date
exec >> $logfile 2>&1 # redirecting all output to logfile (appending)
set -x # switch on debugging
# now start working
echo "print something"

How can I redirect stdout and stderr with variant?

Normally, we use
sh script.sh 1>t.log 2>t.err
to redirect the output.
How can I use a variable to hold the redirections, like this:
string="1>t.log 2>t.err"
sh script.sh $string
You need to use the 'eval' shell builtin for this purpose. From the bash man page:
eval [arg ...]
The args are read and concatenated together into a single command. This command is then read and executed by the shell, and its exit status is returned as the value of eval. If there are no args, or only null arguments, eval returns 0.
Run your command like below:
eval sh script.sh $string
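For example, with the string from the question, eval re-reads the line so the redirections take effect (a small illustration):
string="1>t.log 2>t.err"
eval sh script.sh $string   # the shell re-parses this as: sh script.sh 1>t.log 2>t.err
cat t.log                   # the script's stdout ended up here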
However, do you really need to run script.sh through the sh command? If you instead put an interpreter line in the script itself (#!/bin/sh as the first line) and give it execute permission, you can run it directly and get the exit code of the command you actually ran. Below is an example of running a command with and without sh. Notice the difference in exit codes.
Note: I had only one file, try1.sh, in my current directory, so the ls command was bound to exit with return code 2.
$ ls try1.sh try1.sh.backup 1>out.txt 2>err.txt
$ echo $?
2
$ eval sh ls try1.sh try1.sh.backup 1>out.txt 2>err.txt
$ echo $?
127
In the second case, the exit code is that of the sh shell. In the first case, the exit code is that of the ls command. Choose carefully depending on your needs.
I figured out one way, but it's ugly:
echo script.sh $string | sh
I think you can just put the name into a string variable
and then use redirection:
file_name="file1"
outfile="$file_name"".log"
errorfile="$file_name"".err"
sh script.sh 1> $outfile 2> $errorfile


Resources