I want to redirect the stderr of all the exec commands in my scripts to /dev/null, like this: exec hostname 2> /dev/null
As a first attempt I tried to create an alias for that, like this: alias exec='function _myexec() { exec "$@" 2> /dev/null; unset -f _myexec; }; _myexec'
but this, and every other simple alias for exec I have tried, just hangs.
I would appreciate your thoughts.
I am trying to use this in my bashrc:
something(){
cmd="sudo $@"
...
exec {*}$cmd
}
because sudo with eval exec "$cmd" closes my session after the command. But the expansion operator {*} does not work;
the command fails with:
bash: exec: {*}sudo: not found
Do you have any ideas about that? And how can I keep the sudo session active afterwards?
Edit: how can I exec a sudo command in the same shell?
Thanks a lot
You can try like this:
something(){
cmd="sudo ${@}"
...
exec ${cmd}
}
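To illustrate the pattern, here is a runnable sketch in which echo stands in for sudo (so nothing privileged is needed):

```shell
#!/bin/bash
# Sketch of the pattern above; echo stands in for sudo.
something() {
    cmd="echo ${@}"
    # exec replaces the current (sub)shell with the command.
    exec ${cmd}
}

# Command substitution forks a subshell, which exec then replaces,
# so the outer shell survives and captures the output.
out=$(something hello world)
echo "captured: ${out}"
```

Note that when exec runs directly in your interactive shell (not in a subshell or a separate script), it replaces that shell, which is exactly why the session closes after the command.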
Update:
Try using -S option with sudo to read the password from standard input, and -s option to run a shell with elevated privileges:
something() {
cmd="sudo -S -s ${@}"
...
echo "$PASSWORD" | $cmd
}
My Bash script calls a lot of commands, most of them output something. I want to silence them all. Right now, I'm adding &>/dev/null at the end of most command invocations, like this:
some_command &>/dev/null
another_command &>/dev/null
command3 &>/dev/null
Some of the commands have flags like --quiet or similar, still, I'd need to work it out for all of them and I'd just rather silence all of them by default and only allow output explicitly, e.g., via echo.
You can use the exec command to redirect everything for the rest of the script.
You can use 3>&1 to save the old stdout stream on FD 3, so you can redirect output to that if you want to see the output.
exec 3>&1 &>/dev/null
some_command
another_command
command_you_want_to_see >&3
command3
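If you later want to turn output back on for the rest of the script (not just for a single command), you can restore stdout and stderr from the saved FD 3; a minimal sketch:

```shell
#!/bin/bash
# Save the original stdout on FD 3, then silence stdout and stderr.
exec 3>&1 &>/dev/null

echo "this line is silenced"
echo "this line still reaches the terminal" >&3

# Restore stdout and stderr from the saved descriptor.
exec 1>&3 2>&3

echo "output is back on"
```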
You can create a function:
run_cmd_silent () {
# echo "Running: ${1}"
${1} > /dev/null 2>&1
}
You can remove the commented line to print the actual command you run.
Now you can run your commands like this, e.g.:
run_cmd_silent "git clone git@github.com:prestodb/presto.git"
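For example, with a command that normally prints (echo stands in for a real command here):

```shell
#!/bin/bash
run_cmd_silent () {
    # echo "Running: ${1}"
    ${1} > /dev/null 2>&1
}

# Prints nothing, even though echo normally writes to stdout.
run_cmd_silent "echo hello"

# The command's exit status is preserved, so you can still check it.
if run_cmd_silent "true"; then echo "command succeeded"; fi
```

One caveat: because the command is passed as a single string and word-split, arguments containing spaces or shell metacharacters will not survive intact; accepting "$@" and calling the function as run_cmd_silent git clone ... is more robust.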
I know exec is for executing a program in the current process, as quoted here:
exec replaces the current program in the current process, without
forking a new process. It is not something you would use in every
script you write, but it comes in handy on occasion.
I'm looking at a bash script a line of which I can't understand exactly.
#!/bin/bash
LOG="log.txt"
exec &> >(tee -a "$LOG")
echo Logging output to "$LOG"
Here, exec doesn't have any program name to run. What does it mean? It seems to be capturing the execution output to a log file. I would understand if it were exec program |& tee log.txt, but here I cannot understand exec &> >(tee -a log.txt). Why is there another > after &>?
What's the meaning of the line? (I know -a option is for appending and &> is for redirecting including stderr)
EDIT: after I selected the solution, I found that exec &> >(tee -a "$LOG") works when the shell is bash (not sh), so I changed the initial #!/bin/sh to #!/bin/bash. But exec &>> "$LOG" works for both bash and sh.
From man bash:
exec [-cl] [-a name] [command [arguments]]
If command is not specified, any redirections take effect in the
current shell, [...]
And the rest:
&> # redirects stdout and stderr
>(cmd) # redirects to a process
See process substitution.
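Putting the two pieces together, a small self-contained demonstration (the log path is arbitrary; the redirection runs in a child shell here so the log can be inspected afterwards):

```shell
#!/bin/bash
LOG=$(mktemp)

# In the child shell, exec with no command makes the redirection apply
# to the shell itself: stdout and stderr both feed the tee process,
# which appends to the log and forwards to the original stdout.
bash -c "
  exec &> >(tee -a '$LOG')
  echo 'to stdout'
  echo 'to stderr' >&2
"

sleep 1   # give tee a moment to flush before reading the log
grep -c 'to std' "$LOG"   # counts both logged lines
```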
Our shell script contains the header
#!/bin/bash -x
that causes the commands to also be listed. Instead of having to type
$ ./script.sh &> log.txt
I would like to add a command to this script that will log all following output (also) to a log file. How is this possible?
You can place this line at the start of your script:
# redirect stdout/stderr to a file
exec &> log.txt
EDIT: As per comments below:
#!/bin/bash -x
# redirect stdout/stderr to a file and still show them on terminal
exec &> >(tee log.txt; exit)
I run my_program via a bash wrapper script, and use exec to prevent forking a separate process:
#! /bin/bash
exec my_program >> /tmp/out.log 2>&1
Now I would like to duplicate all output into two different files, but still prevent forking, so I do not want to use a pipe and tee like this:
#! /bin/bash
exec my_program 2>&1 | tee -a /tmp/out.log >> /tmp/out2.log
How to do that with bash?
The reasons for avoiding forking are to make sure that:
all signals sent to the bash script also reach my_program (including non-trappable signals).
waitpid(2) on the bash script can never return before my_program has also terminated.
I think the best you can do is to redirect standard output and error to tee via a process substitution:
exec > >( tee -a /tmp/out.log >> /tmp/out2.log) 2>&1
then exec to replace the bash script with your program (which will keep the same open file handles to standard output).
exec my_program
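Put together, the wrapper looks like this; in the runnable sketch below, echo stands in for my_program and mktemp replaces the fixed /tmp paths:

```shell
#!/bin/bash
# Build the wrapper as a separate script so the logs can be inspected
# after it exits (echo stands in for my_program).
LOG1=$(mktemp)
LOG2=$(mktemp)
wrapper=$(mktemp)

cat > "$wrapper" <<EOF
#!/bin/bash
# Redirect this shell's stdout/stderr into tee: tee appends to the
# first log and forwards everything into the second.
exec > >( tee -a "$LOG1" >> "$LOG2" ) 2>&1
# Replace the shell with the program; it inherits the open descriptors,
# so no extra process sits between the caller and the program.
exec echo "hello from my_program"
EOF

bash "$wrapper"
sleep 1    # let tee finish writing
cat "$LOG1"
cat "$LOG2"
```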