I want to be able to use nohup and time together in a bash shell while running mpiexec such that the output (stdout), errors (stderr) and time all end up in 1 file. Here is what I am using right now:
nohup bash -c "time mpiexec.hydra -np 120 -f nodefile ./executable -i 1000 > results.log 2>&1"
However, what is happening is that the time output goes to a file called nohup.out, while stdout and stderr go to results.log.
Has anyone figured this out?
You could enclose the whole command in curly braces, { ... ; }, to redirect its entire output, such as:
{ nohup bash -c "time mpiexec.hydra -np 120 -f nodefile ./executable -i 1000" ; } > results.log 2>&1
GNU Bash reference:
Bash provides two ways to group a list of commands to be executed as a unit. When commands are grouped, redirections may be applied to the entire command list.
(...) : creates a subshell
{...} : doesn't
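For example, a quick sketch of the difference (grouped.log is just a placeholder file name):
{ echo "stdout line"; echo "stderr line" >&2; } > grouped.log 2>&1    # both lines land in grouped.log
( cd /tmp && pwd ); pwd    # the cd only affects the subshell; the second pwd is back where you started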
EDIT:
Here, though, I'm not sure the grouping is even required.
I think the issue is that your enclosing quotes cover too much.
You should redirect only the output of the nohup command itself, such as:
nohup "foo-cmd -bar argFooBar" > results.log 2>&1
So try it with the redirection outside the command string passed to bash:
nohup bash -c "time mpiexec.hydra -np 120 -f nodefile ./executable -i 1000" > results.log 2>&1
Man page for nohup
To save output to FILE, use 'nohup COMMAND > FILE'
Simple example:
$ nohup bash -c "time echo 'this is stdout' > results.log 2>&1" >> results.log
nohup: ignoring input and redirecting stderr to stdout
$ cat results.log
this is stdout
real 0m0.001s
user 0m0.000s
sys 0m0.000s
So, in your case, try this:
nohup bash -c "time mpiexec.hydra -np 120 -f nodefile ./executable -i 1000 > results.log 2>&1" >> results.log
When I run the bash test:
(exec -l -a specialname /bin/bash -c 'echo $0' ) 2> error
the run-builtins test fails. After some searching, I found that it outputs
^[7^[[r^[[999;999H^[[6n
to stderr, so I redirected it to a file named error.
If I cat the file, it outputs a blank line.
I opened it in vim and found this:
^[7^[[r^[[999;999H^[[6n
why?
After a long search, I found that bash reads the /etc/profile file, which contains the following:
if [ -x /usr/bin/resize ]; then
    /usr/bin/resize >/dev/null
fi
So bash executes the resize program. On my system this program comes from busybox, and the busybox source file console-tools/resize.c contains:
fprintf(stderr, ESC"7" ESC"[r" ESC"[999;999H" ESC"[6n");
so it outputs that to stderr.
To see it yourself, run the command:
(exec -l -a specialname /bin/bash -c 'export PS1=test; echo ${PS1}') 2> err.log
vi err.log
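If you want to keep those escape codes out of anything that captures stderr, one option is to also discard resize's stderr in /etc/profile (a sketch, assuming you can edit that file):
if [ -x /usr/bin/resize ]; then
    /usr/bin/resize >/dev/null 2>&1
fi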
I'm trying to log the time for the execution of a command, so I'm doing that by using the builtin time command in bash. I also wish to redirect the stderr and stdout to a logfile at the same time. However, it doesn't seem to be working as the stderr just spills out onto my terminal.
Here is the command:
rm -rf doxygen
mkdir doxygen
bash -c 'time "/cygdrive/d/Program Files/doxygen/bin/doxygen.exe" Doxyfile > doxygen/doxygen.log 1>&2' genfile > doxygen/time 1>&2 &
What am I doing wrong here?
You are using 1>&2 instead of 2>&1.
With the lengths of names reduced, you're trying to run:
bash -c 'time doxygen Doxyfile > doxygen.log 1>&2' genfile > doxygen.time 1>&2 &
The > doxygen.log sends standard output to the file; the 1>&2 then changes your mind and sends standard output to the same place that standard error is going. Similarly with the outer pair of redirections.
If you used:
bash -c 'time doxygen Doxyfile > doxygen.log 2>&1' genfile > doxygen.time 2>&1 &
then you send standard error to the same place that standard output goes — twice.
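A quick way to see the difference (out.log is a placeholder; the nonexistent path just forces an error message):
ls / /nonexistent > out.log 1>&2    # out.log ends up empty; both the listing and the error hit the terminal
ls / /nonexistent > out.log 2>&1    # both the listing and the error end up in out.log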
Incidentally, do you realize that the genfile serves as the $0 for the script run by bash -c '…'? I'm not convinced it is needed in your script. To see this, try:
bash -c 'echo 0=$0; echo 1=$1; echo 2=$2' genfile jarre oxygene
When run, this produces:
0=genfile
1=jarre
2=oxygene
Just a little question about timing programs on Linux: the time command allows you to measure the execution time of a program:
[ed@lbox200 ~]$ time sleep 1
real 0m1.004s
user 0m0.000s
sys 0m0.004s
Which works fine. But if I try to redirect the output to a file, it fails.
[ed@lbox200 ~]$ time sleep 1 > time.txt
real 0m1.004s
user 0m0.001s
sys 0m0.004s
[ed@lbox200 ~]$ cat time.txt
[ed@lbox200 ~]$
I know there are other implementations of time with the option -o to write a file but
my question is about the command without those options.
Any suggestions?
Try
{ time sleep 1 ; } 2> time.txt
which combines the STDERR of "time" and your command into time.txt
Or use
{ time sleep 1 2> sleep.stderr ; } 2> time.txt
which puts STDERR from "sleep" into the file "sleep.stderr" and only STDERR from "time" goes into "time.txt"
Simple. The GNU time utility has an option for that.
But you have to ensure that you are not using your shell's builtin time command; at least the bash builtin does not provide that option. That's why you need to give the full path of the time utility:
/usr/bin/time -o time.txt sleep 1
Wrap time and the command you are timing in a set of parentheses.
For example, the following times ls and writes the result of ls and the results of the timing into outfile:
$ (time ls) > outfile 2>&1
Or, if you'd like to separate the output of the command from the captured output from time:
$ (time ls) > ls_results 2> time_results
If you care about the command's error output you can separate them like this while still using the built-in time command.
{ time your_command 2> command.err ; } 2> time.log
or
{ time your_command 2>&1 ; } 2> time.log
In the first form the command's errors go to their own file (since stderr is being taken over by time's report); in the second they are merged into the command's normal stdout instead.
Unfortunately you can't send it to another handle (like 3>&2) since that will not exist anymore outside the {...}
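If you open the extra descriptor before the braces, though, the command's errors can be routed back to the real stderr while the timing still goes to its own file (a sketch; your_command stands for whatever you are timing):
exec 3>&2                                    # duplicate the real stderr onto fd 3
{ time your_command 2>&3 ; } 2> time.log     # command errors -> real stderr, timing -> time.log
exec 3>&-                                    # close fd 3 again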
That said, if you can use GNU time, just do what @Tim Ludwinski said.
\time -o time.log command
Since the output of the time command goes to stderr, redirecting it to standard output is more intuitive if you want to do further processing.
{ time sleep 1; } 2>&1 | cat > time.txt
If you are using GNU time instead of the bash built-in, try
time -o outfile command
(Note: GNU time formats a little differently than the bash built-in).
&>out time command >/dev/null
in your case
&>out time sleep 1 >/dev/null
then
cat out
I ended up using:
/usr/bin/time -ao output_file.txt -f "Operation took: %E" echo lol
Where "a" is append
Where "o" is proceeded by the file name to append to
Where "f" is format with a printf-like syntax
Where "%E" produces 0:00:00; hours:minutes:seconds
I had to invoke /usr/bin/time because the bash "time" keyword was shadowing it and doesn't have the same options.
I was just trying to get output to file, not the same thing as OP
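Other ways to reach the external binary without spelling out the full path (a sketch; ls is just a stand-in command, and this assumes GNU time is installed as the external time):
command time -o time.log ls    # 'command' stops the reserved-word lookup, so the external time runs
\time -o time.log ls           # quoting the word with a backslash has the same effect, as shown above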
If you don't want to touch the original process' stdout and stderr, you can redirect stderr to file descriptor 3 and back:
$ { time { perl -le "print 'foo'; warn 'bar';" 2>&3; }; } 3>&2 2> time.out
foo
bar at -e line 1.
$ cat time.out
real 0m0.009s
user 0m0.004s
sys 0m0.000s
You could use that for a wrapper (e.g. for cronjobs) to monitor runtimes:
#!/bin/bash
echo "[$(date)]" "$#" >> /my/runtime.log
{ time { "$#" 2>&3; }; } 3>&2 2>> /my/runtime.log
#!/bin/bash
set -e
_onexit() {
    [[ $TMPD ]] && rm -rf "$TMPD"
}
TMPD="$(mktemp -d)"
trap _onexit EXIT
_time_2() {
    # run the wrapped command with its stderr sent back to the real stderr (fd 3)
    "$@" 2>&3
}
_time_1() {
    time _time_2 "$@"
}
_time() {
    declare time_label="$1"
    shift
    exec 3>&2
    # time's report (the shell's stderr here) goes to the per-label timing file
    _time_1 "$@" 2>"$TMPD/timing.$time_label"
    echo "time[$time_label]"
    cat "$TMPD/timing.$time_label"
}
_time a _do_something
_time b _do_another_thing
_time c _finish_up
This has the benefit of not spawning subshells, and the final pipeline has its stderr restored to the real stderr.
If you are using csh you can use:
/usr/bin/time --output=outfile -p $SHELL -c 'your command'
For example:
/usr/bin/time --output=outtime.txt -p csh -c 'cat file'
If you want just the time in a shell variable then this works:
var=`{ time <command> ; } 2>&1 1>/dev/null`
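For instance, with sleep 1 as a stand-in command and the more readable $( ) form:
duration=$( { time sleep 1 ; } 2>&1 1>/dev/null )
echo "$duration"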
I have this
script -q -c "continuously_running_command" /dev/null > out
When the above command line is running, I can stop it with CTRL+C.
However, I'd like to run the above command line in the background so that I can stop it with kill -9 %1.
But when I try to run this
script -q -c "continuously_running_command" /dev/null > out &
I get
[2]+ Stopped (tty output) script -q -c "continuously_running_command" /dev/null 1>out
Question:
How can I run the above commandline in back ground?
In order to background a process with redirection to a file, you must also redirect stderr. With stdout and stderr redirected, you can then background the process:
script -q -c "continuously_running_command" /dev/null > out 2>&1 &
Fully working example:
#!/bin/bash
i=$((0+0))
while test "$i" -lt 100; do
((i+=1))
echo "$i"
sleep 1
done
Running the script and tail of output file while backgrounded:
alchemy:~/scr/tmp/stack> ./back.sh > outfile 2>&1 &
[1] 31779
alchemy:~/scr/tmp/stack> tailf outfile
10
11
12
13
14
...
100
[1]+ Done ./back.sh > outfile 2>&1
In the case of:
script -q -c "continuously_running_command" /dev/null
The problem in this case is that script itself redirects all dialog with the session to FILE, in this case to /dev/null. So you either need to issue the command without sending the typescript to /dev/null, or redirect stderr to out:
script -q -c "continuously_running_command" out 2>&1 &
or
script -q -c "continuously_running_command" /dev/null 2>out &
I have a program that outputs to stdout and would like to silence that output in a Bash script while piping to a file.
For example, running the program will output:
% myprogram
% WELCOME TO MY PROGRAM
% Done.
I want the following script to not output anything to the terminal:
#!/bin/bash
myprogram > sample.s
If it outputs to stderr as well you'll want to silence that. You can do that by redirecting file descriptor 2:
# Send stdout to out.log, stderr to err.log
myprogram > out.log 2> err.log
# Send both stdout and stderr to out.log
myprogram &> out.log # New bash syntax
myprogram > out.log 2>&1 # Older sh syntax
# Log output, hide errors.
myprogram > out.log 2> /dev/null
Redirect stderr to stdout
This will redirect stderr (which is descriptor 2) to file descriptor 1, which is stdout.
2>&1
Redirect stdout to File
Now when you perform this, you are redirecting stdout to the file sample.s:
myprogram > sample.s
Redirect stderr and stdout to File
Combining the two commands will result in redirecting both stderr and stdout to sample.s
myprogram > sample.s 2>&1
Redirect stderr and stdout to /dev/null
Redirect to /dev/null if you want to completely silence your application.
myprogram >/dev/null 2>&1
All output:
scriptname &>/dev/null
Portable:
scriptname >/dev/null 2>&1
Portable:
scriptname >/dev/null 2>/dev/null
To close the descriptors instead (bash-specific, not portable; note this is not the same as discarding output, and programs may then fail on write):
scriptname >&- 2>&-
If you are still struggling to find an answer, especially if you produced a file for the output and you prefer a clear alternative:
echo "hi" | grep "use this hack to hide the output :) "
If you want STDOUT and STDERR both [everything], then the simplest way is:
#!/bin/bash
myprogram >& sample.s
then run it like ./script, and you will get no output to your terminal. :)
the ">&" means STDERR and STDOUT. the & also works the same way with a pipe: ./script |& sed
that will send everything to sed
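For example (the sed expression here is just an illustration that prefixes every captured line):
./script |& sed 's/^/captured: /'    # both stdout and stderr flow through sed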
Try with:
myprogram &>/dev/null
to get no output
Useful variations:
Get only STDERR in a file, while hiding any STDOUT, even if the program to hide doesn't exist at all (never hangs):
stty -echo && ./programMightNotExist 2> errors.log && stty echo
Detach completely and silence everything; even killing the parent script won't abort ./prog (behaves just like nohup):
./prog </dev/null >/dev/null 2>&1 &
nohup can be used as well to fully detach, as follows:
nohup ./prog &
A log file nohup.out will be created next to the script; use tail -f nohup.out to read it.
Note: This answer is related to the question "How to turn off echo while executing a shell script Linux", which was in turn marked as a duplicate of this one.
To actually turn off the echo the command is:
stty -echo
(This is useful, for instance, when you want to enter a password and you don't want it to be readable.) Remember to turn echo back on at the end of your script, otherwise the person who runs your script won't see what they type from then on. To turn echo back on, run:
stty echo
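For instance, a minimal password prompt using this pattern (the variable name is arbitrary):
stty -echo
read -r -p "Password: " password
stty echo
echo    # move to a new line, since the user's Enter was not echoed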
For output only on error:
so [command]
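Here so is the answerer's own wrapper, not a standard utility; a minimal sketch of what such a wrapper might look like (the name and exact behavior are assumptions):
#!/bin/bash
# so: run a command silently, but replay its combined output if the command fails
tmp=$(mktemp)
"$@" > "$tmp" 2>&1
status=$?
if [ "$status" -ne 0 ]; then
    cat "$tmp" >&2
fi
rm -f "$tmp"
exit "$status"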