I need to write the time taken to execute this command to a txt file:
time ./program.exe
How can I do this in a bash script?
I tried with >> time.txt, but that doesn't work (the output does not go to the file; it still goes to the screen).
Getting time in bash to write to a file is hard work because time is a bash built-in command. (On Mac OS X, there's an external command, /usr/bin/time, that does a similar job but with a different output format and less recalcitrance.)
You need to use:
(time ./program.exe) 2> time.txt
It writes to standard error (hence the 2> notation). However, if you don't use the sub-shell (the parentheses), it doesn't work; the output still comes to the screen.
Alternatively, and without a sub-shell, you can use:
{ time ./program.exe; } 2> time.txt
Note the space after the open brace and the semi-colon before the close brace; both are necessary on a single line. The braces must appear where a command could appear, and must be standalone symbols. (If you struggle hard enough, you'll come up with exceptions such as ...;}|something or ...;}2>&1; in both of those, the brace is still recognized as a standalone symbol. If you try ...;}xyz, though, the shell will (probably) fail to find a command called }xyz.)
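A quick way to compare the two forms (a sketch, using sleep 1 as a stand-in for ./program.exe):
(time sleep 1) 2> subshell.txt      # sub-shell form
{ time sleep 1; } 2> braces.txt     # brace-group form
cat subshell.txt braces.txt         # both files now hold the real/user/sys lines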
I need to run several commands, each in its own terminal. If I do this:
xterm -xrm '*hold: true' -e (time ./Program.exe) >> time.txt & sleep 2
it doesn't work and tells me Syntax error: "(" unexpected. How do I fix this?
You would need to do something like:
xterm -xrm '*hold: true' -e sh -c "(time ./Program.exe) 2> time.txt & sleep 2"
The key change is to run the shell with the script coming from the argument to the -c option; you can replace sh with /bin/bash or an equivalent name. That should get around any 'Syntax error' issues. I'm not quite sure what triggers that error, though, so there may be a simpler and better way to deal with it. It's also conceivable that xterm's -e option only takes a single string argument, in which case, I suppose you'd use:
xterm -xrm '*hold: true' -e 'sh -c "(time ./Program.exe) 2> time.txt & sleep 2"'
You can read the bash and xterm manuals as well as I can.
I'm not sure why you run the timed program in background mode, but that's your problem, not mine. Similarly, the sleep 2 is not obviously necessary if the hold: true keeps the terminal open.
time_elapsed=$( { time sh -c "./program.exe"; } 2>&1 | grep real | awk '{print $NF}' )
echo "$time_elapsed" > file.txt
This should capture just the elapsed (real) time and write it to the desired file.
You can also redirect the full timing output to a file using 2> file.txt, as explained in another reply.
It's not easy to redirect the output of the bash builtin time.
One solution is to use the external time program:
/bin/time --append -o time.txt ./program.exe
(on most systems it's a GNU program, so use info time rather than man to get its documentation).
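If you only want the elapsed time in the file, GNU time can also format its output; a sketch, assuming the GNU version is what lives at /usr/bin/time:
/usr/bin/time -f '%e' --append -o time.txt ./program.exe  # appends just the elapsed seconds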
Just enclose the command to time in a { .. }:
{ time ./program.exe; } 2>&1
Of course, the output of builtin time goes to stderr, thus the needed redirection 2>&1.
Then, since it may appear tricky to capture the output, use a second { .. } to make the command easier to read; this works:
{ { time ./program.exe; } 2>&1; } >> time.txt # This works.
However, the correct construct should simply have the capture reversed, like this:
{ time ./program.exe; } >> time.txt 2>&1 # Correct.
To discard any output from the command itself, redirect its output to /dev/null, like this:
{ time ./program.exe >/dev/null 2>&1; } >> time.txt 2>&1 # Better.
And, as now there is only output on stderr, we can capture just that:
{ time ./program.exe >/dev/null 2>&1; } 2>> time.txt # Best.
The output from ./program.exe should be redirected as shown, or it may well end up inside time.txt.
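As a quick sanity check of the final form (a sketch, with ls standing in for ./program.exe):
{ time ls >/dev/null 2>&1; } 2>> time.txt
cat time.txt   # only the real/user/sys lines appear; nothing from ls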
Is there a filename that is assignable to a variable (i.e. not a magic builtin shell token like &1) that will let me redirect to stdout?
What I finally want to do is run something like this in a cron script:
LOG=/tmp/some_file
...
some_command >> $LOG 2>&1
echo "blah" >> $LOG
...
Conveniently, this lets me turn off log noise by redirecting to /dev/null later when I'm sure there is nothing that can fail (or, at least, nothing that I care about!) without rewriting the whole script. Yes, turning off logging isn't precisely best practice -- but once this script works, there is not much that can conceivably go wrong, and trashing the disk with megabytes of log info that nobody wants to read isn't desired.
In case something unexpectedly fails 5 years later, it is still possible to turn on logging again by flipping a switch.
On the other hand, while writing and debugging the script, which involves calling it manually from the shell, it would be extremely nice if it could just dump the output to the console. That way I wouldn't need to tail the logfile manually.
In other words, what I'm looking for is something like /proc/self/fd/0 in bash-talk that I can assign to LOG. As it happens, /proc/self/fd/0 works just fine on my Linux box, but I wonder if there isn't such a thing built into bash already (which would generally be preferable).
Basic solution:
#!/bin/bash
LOG=/dev/null
# uncomment next line for debugging (logging)
# LOG=/tmp/some_file
{
some_command
echo "blah"
} 2>&1 | tee 1>"$LOG"   # 2>&1 on the group so stderr is piped too
More evolved:
#!/bin/bash
ENABLE_LOG=0 # 1 to log standard & error outputs
LOG=/tmp/some_file
{
some_command
echo "blah"
} 2>&1 | if (( $ENABLE_LOG ))
then
tee 1>"$LOG"
else
cat > /dev/null   # drain the pipe so the commands don't get SIGPIPE
fi
A more elegant solution, building on DevSolar's idea:
#!/bin/bash
# uncomment next line for debugging (logging)
# exec 1> >(tee /tmp/some_file) 2>&1
some_command
echo "blah"
Thanks to the awesome hints by olibre and suvayu, I came up with this (for the record, the version that I'm using now):
# log to file
# exec 1>> /tmp/logfile 2>&1
# be quiet
# exec 1> /dev/null 2>&1
# dump to console
exec 2>&1
Just uncomment one of the three, depending on what is desired, and don't worry about anything else, ever again. This logs all subsequent output to a file, dumps it to the console, or discards it entirely.
No output is duplicated, it works the same for every subsequent command (without explicit redirects), there is no weird stuff, and it is as easy as it gets.
If I have understood your requirement clearly, the following should do what you want
exec >> $LOG
exec 2>&1
Stdout and stderr of all subsequent commands will be appended to the file $LOG.
Use /dev/stdout
Here's another SO answer that mentions this solution: Difference between stdout and /dev/stdout
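Applied to the script from the question, that looks roughly like this (a sketch; assumes a system that provides /dev/stdout, as Linux does):
#!/bin/bash
LOG=/dev/stdout        # dump to console while debugging
# LOG=/tmp/some_file   # log to a file
# LOG=/dev/null        # be quiet
some_command >> "$LOG" 2>&1
echo "blah" >> "$LOG"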
I have a shell script in which the first thing I do is ask the user how many minutes they want the script to run for:
#!/usr/bin/ksh
echo "How long do you want the script to run for in minutes?:\c"
read scriptduration
loopcnt=0
interval=1
date2=$(date +%H:%M:%S)
(( intervalsec = $interval * 1 ))
totalmin=${1:-$scriptduration}
(( loopmax = ${totalmin} * 60 ))
ofile=/home2/s499929/test.log
echo "$date2 total runtime is $totalmin minutes at 2 sec intervals"
while(( $loopmax > $loopcnt ))
do
date1=$(date +%H:%M:%S)
pid=`/usr/local/bin/lsof | grep 16752 | grep LISTEN |awk '{print $2}'` > /dev/null 2>&1
count=$(netstat -an|grep 16752|grep ESTABLISHED|wc -l| sed "s/ //g")
process=$(ps -ef | grep $pid | wc -l | sed "s/ //g")
port=$(netstat -an | grep 16752 | grep LISTEN | wc -l| sed "s/ //g")
echo "$date1 activeTCPcount:$count activePID:$pid activePIDcount=$process listen=$port" >> ${ofile}
sleep $intervalsec
(( loopcnt = loopcnt + 1 ))
done
It works great if I kick it off and input the values manually. But if I want to run this for 3 hours I need to kick off the script to run in the background.
I have tried just running ./scriptname & and I get this:
$ How long do you want the test to run for in minutes:360
ksh: 360: not found.
[2] + Stopped (SIGTTIN) ./test.sh &
And the script dies. Is this possible? Any suggestions on how I can accept this one input and then run in the background? Thanks!
You could do something like this:
test.sh arg1 arg2 &
Just refer to arg1 and arg2 as $1 and $2, respectively, in the bash script. ($0 is the name of the script)
So,
test.sh 360 &
will pass 360 as the first argument to the bash or ksh script which can be referred to as $1 in the script.
So the first few lines of your script would now be:
#!/usr/bin/ksh
scriptduration=$1
loopcnt=0
...
...
With bash you can start the script in the foreground and, after you have finished with the user input, interrupt it by hitting Ctrl-Z.
Then type
$ bg %
and the script will continue to run in the background.
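An illustrative session (job numbers and formatting vary by shell):
$ ./test.sh
How long do you want the script to run for in minutes?:360
^Z
[1] + Stopped                 ./test.sh
$ bg %1
[1]   ./test.sh &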
Why You're Getting What You're Getting
When you run the script in the background, it can't take any user input; in fact, the program will freeze if it expects user input, until it's put back in the foreground. However, output has to go somewhere, so the output still goes to the screen even though the program is running in the background. That's why you see the prompt.
The prompt your program displays is meaningless because you can't type into it. Instead, when you type 360, your shell interprets it as a command, because your input goes to the shell's command prompt, not to your program.
You want your program to be in the foreground for the input, but run in the background. You can't do both at once.
Solutions To Your Dilemma
You can have two programs. The first takes the input, and the second runs the actual program in the background.
Something like this:
#! /bin/ksh
read time?"How long in seconds do you want to run the job? "
my_actual_job.ksh $time &
In fact, you could even have a mechanism to run the job in the background if the time is over a certain limit, but otherwise run the job in the foreground.
#! /bin/ksh
readonly MAX_FOREGROUND_TIME=30
read time?"How long in seconds do you want to run the job? "
if [ $time -gt $MAX_FOREGROUND_TIME ]
then
my_actual_job.ksh $time &
else
my_actual_job.ksh $time
fi
Also remember that if your job is in the background, it can still print to the screen. You can redirect the output elsewhere, but if you don't, it'll print to the screen at inopportune times. For example, you could be in vi editing a file, and suddenly have the output appear smack in the middle of your vi session.
I believe there's an easy way to tell if your job is in the background, but I can't remember it offhand. You could find your current process ID by looking at $$, then look at the output of jobs -p and see whether that process ID is in the list. However, I'm sure someone will come up with an easier way to tell.
It is also possible that a program could throw itself into the background, though note that bg takes a job specification such as %1 rather than a process ID like $$.
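One heuristic, sketched below, is to compare the script's process group with the terminal's foreground process group; they differ for background jobs. This assumes a ps that supports the tpgid column (procps on Linux does):
pgid=$(ps -o pgid= -p $$ | tr -d ' ')     # this script's process group
tpgid=$(ps -o tpgid= -p $$ | tr -d ' ')   # the terminal's foreground process group
if [ "$pgid" != "$tpgid" ]; then
    echo "this job is running in the background"
fi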
Some Hints
If you're running Kornshell, you might consider taking advantage of many of Kornshell's special features:
print: The print command is more flexible and robust than echo. Take a look at the manpage for Kornshell and see all of its features.
read: Note that you can use the read var?"prompt" form of the read command.
readonly: Use readonly to declare constants. That way, you don't accidentally change the value of that variable later. Besides, it's good programming technique.
typeset: Take a look at typeset in the ksh manpage. The typeset command can help you declare particular variables as floating point vs. integer, and can automatically do things like zero filling and right or left justification.
Some things not specific to Kornshell:
The awk and sed commands can also do what grep does, so there's no reason to filter something through grep and then through awk or sed.
You can also match several patterns with one grep by using the -e parameter, but note the semantics: grep -e foo -e bar matches lines containing foo or bar, whereas grep foo | grep bar matches only lines containing both.
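A quick demonstration of the difference:
printf 'foo\nbar\nfoo bar\n' | grep -e foo -e bar    # prints all three lines (OR)
printf 'foo\nbar\nfoo bar\n' | grep foo | grep bar   # prints only "foo bar" (AND)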
Hope this helps.
I've tested this with ksh and it worked. The trick is to let the script call itself with the time to wait as a parameter:
if [ -z "$1" ]; then
echo "How long do you want the test to run for in minutes:\c"
read scriptduration
echo "running task in background"
$0 $scriptduration &
exit 0
else
scriptduration=$1
fi
loopcnt=0
interval=1
# ... and so on
So are you using bash or ksh? In bash, you can do this:
{ echo 360 | ./test.sh ; } &
It could work for ksh also.
I need a way to make a process keep a certain file open forever. Here's an example of what I have so far:
sleep 1000 > myfile &
It works for a thousand seconds, but I really don't want to write some complicated sleep/loop construct. This post suggested that cat behaves like an infinite sleep. So I tried this:
cat > myfile &
It almost looks like a mistake, doesn't it? It seemed to work from the command line, but in a script the file connection did not stay open. Any other ideas?
Rather than using a background process, you can also just use bash to open one of its file descriptors:
exec 5>myfile
(The special use of exec here allows changing the current file descriptor redirections - see man bash for details). This will open file descriptor 5 to "myfile" (use >> if you don't want to empty the file).
You can later close the file again with:
exec 5>&-
(One possible downside of this is that the FD gets inherited by every program that the shell runs in the meantime. Mostly this is harmless - e.g. your greps and seds will generally ignore the extra FD - but it could be annoying in some cases, especially if you spawn any processes that stay around, because they will then keep the FD open.)
Note: If you are using a newer version of bash (4.1 or later) you can use a slightly different syntax:
exec {fd}>myfile
This allocates a new file descriptor, and puts it in the variable fd. This can help ensure that scripts don't accidentally overwrite each other's file descriptors. To close the file later, use
exec {fd}>&-
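A short sketch of the {fd} form in use (bash 4.1+; myfile is the file from the question):
exec {fd}>myfile             # bash picks a free descriptor and stores its number in $fd
echo "held open on FD $fd"
echo "some data" >&"$fd"     # write through the saved descriptor
exec {fd}>&-                 # close it when finished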
The reason that cat > myfile & works is that cat copies its standard input to the file.
If you launch it with an ampersand (in the background) from an interactive shell, it won't get ANY input, including end-of-file, which means it will wait forever and write nothing to the output file.
You can get an equivalent effect, except WITHOUT dependency on standard input (the latter is what makes it not work in your script), with this command:
tail -f /dev/null > myfile &
On the cat > myfile & issue running in terminal vs not running as part of a script: In a non-interactive shell the stdin of a backgrounded command & gets implicitly redirected from /dev/null.
So, cat > myfile & in a script effectively becomes cat </dev/null > myfile &, which terminates cat immediately.
See the POSIX standard on the Shell Command Language & Asynchronous Lists:
The standard input for an asynchronous list, before any explicit redirections are
performed, shall be considered to be assigned to a file that has the same
properties as /dev/null. If it is an interactive shell, this need not happen.
In all cases, explicit redirection of standard input shall override this activity.
# some tests: use lsof to inspect what FD 0 of the background job points at
sh -c 'sleep 10 & lsof -p ${!}'        # non-interactive: stdin is (like) /dev/null
sh -c 'sleep 10 0<&0 & lsof -p ${!}'   # explicit redirection overrides that
sh -ic 'sleep 10 & lsof -p ${!}'       # interactive: stdin stays on the tty
# in a script, change:
- cat > myfile &
+ cat 0<&0 > myfile &
tail -f myfile
This 'follows' the file, and outputs any changes to the file. If you don't want to see the output of tail, redirect output to /dev/null or something:
tail -f myfile > /dev/null
You may want to use the --retry option, depending on your specific case. See man tail for more information.
I tried to redirect the output of the time command, but I couldn't:
$ time ls > filename
real 0m0.000s
user 0m0.000s
sys 0m0.000s
In the file I can see the output of the ls command, not that of time.
Please explain why this happens and how I can redirect the output of time.
No need to launch a sub-shell; a code block will do as well.
{ time ls; } 2> out.txt
or
{ time ls > /dev/null 2>&1 ; } 2> out.txt
You can redirect the time output using:
(time ls) &> file
The parentheses make the shell treat (time ls) as a single command whose output can be redirected.
The command time sends its output to STDERR (instead of STDOUT). That's because the command executed by time (in this case ls) normally writes its own output to STDOUT.
If you want to capture the output of time, then type:
(time ls) 2> filename
That captures only the output of time, while the output of ls goes to the console as normal. If you want to capture both in one file, type:
(time ls) &> filename
2> redirects STDERR, &> redirects both.
time is a shell builtin and I'm not sure if there is a way to redirect its output directly. However, you can use
/usr/bin/time instead, which definitely accepts the usual output redirections.
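For example, since the external time reports on stderr, the timing and the command's output can be split into separate files (a sketch using ls):
/usr/bin/time ls > listing.txt 2> timing.txt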
The reason why redirection does not seem to work with time is that it's a bash reserved word (not a builtin!) when used in front of a pipeline. bash(1):
If the time reserved word precedes a pipeline, the elapsed as well as
user and system time consumed by its execution are reported when the
pipeline terminates.
So, to redirect output of time, either use curly braces:
{ time ls; } 2> filename
Or call /usr/bin/time:
/usr/bin/time ls 2> filename
If you don't want to mix the output from time and the command, then with GNU time you can use -o file, like this:
/usr/bin/time -o tim grep -e k /tmp 1>out 2>err
Here tim is the output of time, while out and err are the stdout and stderr of grep.
I use the stdout-and-stderr redirection method with braces for testing.
The &>>rpt form is equivalent to >>rpt 2>&1, just shorter.
The braces execute the command(s) in the current shell. See man bash.
{ time ls a*; } &>>rpt
Not the canonical use case, but another way to go.
Longer running simple tasks can be launched in a detached "screen" terminal with logged output. You could even give the log a unique name.
Primarily this method is good for something that will take hours, is invoked over SSH, and needs to be "checked up on" from time to time, in preference to backgrounding and disowning.
screen -dmL time -v ./crackpassword
You get the same output a terminal would get, with the caveat that this is asynchronous. Of course it could be a script. The output may need tweaking.
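A rough workflow (assumes GNU screen; screenlog.0 is its default log name for the first window):
screen -dmL time -v ./crackpassword   # start a detached session with logging on
screen -ls                            # confirm the session is running
tail -f screenlog.0                   # check up on the output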
It's really annoying to type this whenever I don't want to see a program's output. I'd love to know if there is a shorter way to write:
$ program >/dev/null 2>&1
Generic shell is the best, but other shells would be interesting to know about too, especially bash or dash.
program >& /dev/null
(bash/zsh shorthand for redirecting both stdout and stderr)
You can write a function for this:
function nullify() {
    "$@" >/dev/null 2>&1
}
To use this function:
nullify program arg1 arg2 ...
Of course, you can name the function whatever you want. It can be a single character for example.
By the way, you can use exec to redirect stdout and stderr to /dev/null temporarily. I don't know if this is helpful in your case, but I thought of sharing it.
# Save stdout, stderr to file descriptors 6, 7 respectively.
exec 6>&1 7>&2
# Redirect stdout, stderr to /dev/null
exec 1>/dev/null 2>/dev/null
# Run program.
program arg1 arg2 ...
# Restore stdout, stderr.
exec 1>&6 2>&7
In bash, zsh, and dash:
$ program >&- 2>&-
It may also appear to work in other shells, but any attempt to write to the closed descriptors will fail with a "bad file descriptor" error.
Note that this solution closes the file descriptors rather than redirecting them to /dev/null, which could potentially cause programs to abort.
Most shells support aliases, and zsh additionally supports global aliases (alias -g), which are expanded anywhere on the command line, not just in command position. For instance, in my .zshrc I have things like:
alias -g no='2> /dev/null > /dev/null'
Then I just type
program no
If /dev/null is too much to type, you could (as root) do something like:
ln -s /dev/null /n
Then you could just do:
program >/n 2>&1
But of course, scripts you write in this way won't be portable to other systems without setting up that symlink first.
It's also worth noting that oftentimes redirecting output is not really necessary. Many Unix and Linux programs accept a "silent flag", usually -n or -q, that suppresses any output and just sets the exit status to indicate success or failure.
For example
grep foo bar.txt >/dev/null 2>&1
if [ $? -eq 0 ]; then
do_something
fi
Can be rewritten as
grep -q foo bar.txt
if [ $? -eq 0 ]; then
do_something
fi
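And since if tests a command's exit status directly, the explicit $? check can be folded away entirely:
if grep -q foo bar.txt; then
    do_something
fi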
Edit: the >(:) or |: based solutions might cause an error, because : doesn't read stdin. Still, it might not be as bad as closing the file descriptor, as proposed in Zaz's answer.
For bash and bash-compliant shells (zsh...):
$ program &>/dev/null
OR
$ program &> >(:) # may actually cause an error or abort
For all shells:
$ program >/dev/null 2>&1
OR
$ program 2>&1|: # may actually cause an error or abort
$ program 2>&1 > >(:) does not work in dash, because dash does not support process substitution.
Explanations:
2>&1 redirects stderr (file descriptor 2) to stdout (file descriptor 1).
| is the regular piping of stdout to the stdin of another command.
: is a shell builtin which does nothing (it is equivalent to true).
&> redirects both stdout and stderr outputs to a file.
>(your-command) is process substitution. It is replaced with a path to a special file, for instance: /proc/self/fd/6. This file is used as input file for the command your-command.
Note: a process trying to write to a closed file descriptor gets an EBADF (bad file descriptor) error, which is more likely to abort the program than writing into a pipe whose reader has exited; the latter causes an EPIPE error (see Charles Duffy's comment).
Ayman Hourieh's solution works well for one-off invocations of overly chatty programs. But if there's only a small set of commonly called programs for which you want to suppress output, consider silencing them by adding the following to your .bashrc file (or the equivalent, if you use another shell):
CHATTY_PROGRAMS=(okular firefox libreoffice kwrite)
for PROGRAM in "${CHATTY_PROGRAMS[@]}"
do
    printf -v eval_str '%q() { command %q "$@" &>/dev/null; }' "$PROGRAM" "$PROGRAM"
    eval "$eval_str"
done
This way you can continue to invoke programs using their usual names, but their stdout and stderr output will disappear into the bit bucket.
Note also that certain programs allow you to configure how much logging/debugging output they spew. For KDE applications, you can run kdebugdialog and selectively or globally disable debugging output.
It seems to me that the most portable solution, and best answer, would be a macro on your terminal (PC).
That way, no matter what server you log in to, it will always be there.
If you happen to run Windows, you can get the desired outcome with AHK (google it, it's open source) in two tiny lines of code. It can translate any string of keys into any other string of keys, in situ.
You type "ugly.sh >>NULL" and it will rewrite it as "ugly.sh >/dev/null 2>&1" or whatnot.
Solutions for other platforms are somewhat more difficult. AppleScript can paste in keyboard presses, but can't be triggered that easily.