Is there an assignable filename that redirects to stdout? - bash

Is there a filename that is assignable to a variable (i.e. not a magic builtin shell token like &1) that will let me redirect to stdout?
What I finally want to do is run something like this in a cron script:
LOG=/tmp/some_file
...
some_command 2>&1 >> $LOG
echo "blah" >> $LOG
...
Conveniently, this lets me turn off log noise by redirecting to /dev/null later when I'm sure there is nothing that can fail (or, at least, nothing that I care about!) without rewriting the whole script. Yes, turning off logging isn't precisely best practice -- but once this script works, there is not much that can conceivably go wrong, and trashing the disk with megabytes of log info that nobody wants to read isn't desired.
In case something unexpectedly fails 5 years later, it is still possible to turn on logging again by flipping a switch.
On the other hand, while writing and debugging the script, which involves calling it manually from the shell, it would be extremely nice if it could just dump the output to the console. That way I wouldn't need to tail the logfile manually.
In other words, what I'm looking for is something like /proc/self/fd/0 in bash-talk that I can assign to LOG. As it happens, /proc/self/fd/0 works just fine on my Linux box, but I wonder if there isn't such a thing built into bash already (which would generally be preferable).

Basic solution:
#!/bin/bash
LOG=/dev/null
# uncomment next line for debugging (logging)
# LOG=/tmp/some_file
{
  some_command
  echo "blah"
} | tee 1>$LOG 2>&1
More evolved:
#!/bin/bash
ENABLE_LOG=0 # 1 to log standard & error outputs
LOG=/tmp/some_file
{
  some_command
  echo "blah"
} | if (( $ENABLE_LOG ))
then
  tee 1>$LOG 2>&1
fi
A more elegant solution, based on DevSolar's idea:
#!/bin/bash
# uncomment next line for debugging (logging)
# exec 1> >(tee /tmp/some_file) 2>&1
some_command
echo "blah"

Thanks to the awesome hints by olibre and suvayu, I came up with this (for the record, the version that I'm using now):
# log to file
# exec 1>> /tmp/logfile 2>&1
# be quiet
# exec 1> /dev/null 2>&1
# dump to console
exec 2>&1
Just need to uncomment one of the three, depending on what is desired, and don't worry about anything else, ever again. This either logs all subsequent output to a file or to console, or not at all.
No output duplicated, works universally the same for every command (without explicit redirects), no weird stuff, and as easy as it gets.
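For completeness, here is a sketch of how that switch might sit inside a full cron-style script; some_command is just a placeholder for the real work and the paths are only examples:
#!/bin/bash
# Uncomment exactly one of the three exec lines; every later command inherits it.
# log to file
# exec 1>> /tmp/logfile 2>&1
# be quiet
# exec 1> /dev/null 2>&1
# dump to console (handy while debugging)
exec 2>&1

echo "run started at $(date)"
some_command            # no per-command redirects needed
echo "run finished with status $?"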

If I have understood your requirement clearly, the following should do what you want
exec 2>&1
exec >> $LOG
Stdout and stderr of all subsequent commands will be appended to the file $LOG.

Use /dev/stdout
Here's another SO answer that mentions this solution: Difference between stdout and /dev/stdout
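Applied to the script in the question, that might look like the following sketch (some_command is a placeholder; note that 2>&1 comes after the file redirect so both streams land in $LOG, and on Linux /dev/stdout also tolerates >>):
#!/bin/bash
# While debugging, send everything to the console:
LOG=/dev/stdout
# In production, switch to a real file or to /dev/null:
# LOG=/tmp/some_file
# LOG=/dev/null

some_command >> "$LOG" 2>&1
echo "blah" >> "$LOG"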

Related

BASH - Best practice to use output from one script in another

I'm developing a BASH script which invokes another BASH script which prints a line to stdout. That output is captured by the first BASH script and used later. It works, but it has the downside that any other output which is printed by the second script will cause this part to behave unexpectedly, because there will be extra content.
main.sh
#!/bin/bash
# Invoke worker.sh and capture its standard output to stats
stats=$(worker.sh --generate-stats)
echo "stats=$stats"
worker.sh
#!/bin/bash
[[ $1 == "--generate-stats" ]] && echo "cpu=90 mem=50 disk=15"
In this over-simplified example, it's not a problem to use this construct, but as worker.sh grows in size and complexity, it's hard to remember that no other command can print to stdout without confounding the behavior, and if someone else works on worker.sh without realizing they can't print to stdout, it can easily get fouled. So what is considered good practice to generate output in one script and use it in the other?
I'm wondering if a fifo would be appropriate, or another file descriptor, or just a plain file. Or if exec should be used in this case, something like what is shown here https://www.tldp.org/LDP/abs/html/x17974.html:
#!/bin/bash
exec 6>&1 # Link file descriptor #6 with stdout.
# Saves stdout.
exec >&2 # stdout now goes to stderr
echo "Didn't know I shouldn't print to stdout"
exec 1>&6 6>&- # Restore stdout and close file descriptor #6.
[[ $1 == "--generate-stats" ]] && echo "cpu=90 mem=50 disk=15"
But I wouldn't want to use that if it's not considered good practice.
Many command-line utilities have quiet and verbose modes; it's generally considered good practice to have the most verbose output (debugging, tracing, etc.) be separated to standard error anyway, but it's common to have normal output be formatted for human legibility (e.g. include table headings and column separators) and quiet mode output be just the bare data for programmatic use. (For one example, see docker images vs docker images -q). So that would be my recommendation - have worker.sh take a flag indicating whether its output is being consumed programmatically, and write it such that its output is all sent via a function that checks that flag and filters appropriately.
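A minimal sketch of that recommendation; the --porcelain flag and the emit helper are made-up names for illustration, not part of the original worker.sh:
#!/bin/bash
# worker.sh (sketch): all normal output funnels through one helper,
# and debug chatter goes to stderr so it never pollutes captured stdout.
PORCELAIN=0
[[ $1 == "--porcelain" ]] && { PORCELAIN=1; shift; }

emit() {
    if (( PORCELAIN )); then
        printf '%s\n' "$*"              # bare data for programmatic callers
    else
        printf 'stats: %s\n' "$*"       # human-friendly formatting
    fi
}

echo "debug: gathering stats" >&2       # safe: stderr, not stdout
[[ $1 == "--generate-stats" ]] && emit "cpu=90 mem=50 disk=15"
main.sh would then call it as stats=$(./worker.sh --porcelain --generate-stats), and nothing a future editor prints to stderr can break that capture.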
Maybe a different approach would be for the second script to test whether its stdout is being used programmatically:
gash.sh:
#!/bin/bash
data=$(./another.sh)
echo "Received $data"
another.sh:
#!/bin/bash
# for -t see man isatty(3). 1 is file descriptor 1 - stdout
if [ -t 1 ]; then
echo "stdout is a terminal"
else
echo "stdout is not a terminal"
fi
Gives (where $ is a generic keyboard prompt):
$ bash gash.sh
Received stdout is not a terminal
$ bash another.sh
stdout is a terminal
You could then set a flag to change script behaviour (ls(1) does a similar thing). However, you should be prepared for this:
$ bash another.sh|more
stdout is not a terminal
$ bash another.sh > out.txt
$ cat out.txt
stdout is not a terminal

bash script write executing time in a file

I need to write the time taken to execute this command in a txt file:
time ./program.exe
How can I do in bash script?
I tried with >> time.txt but that doesn't work (the output does not go to the file; it still goes to the screen).
Getting the output of time in bash into a file is hard work, because time is a bash built-in command. (On Mac OS X, there's an external command, /usr/bin/time, that does a similar job but with a different output format and less recalcitrance.)
You need to use:
(time ./program.exe) 2> time.txt
It writes to standard error (hence the 2> notation). However, if you don't use the sub-shell (the parentheses), it doesn't work; the output still comes to the screen.
Alternatively, and without a sub-shell, you can use:
{ time ./program.exe; } 2> time.txt
Note the space after the open brace and the semi-colon; both are necessary on a single line. The braces must appear where a command could appear, and must be standalone symbols. (If you struggle hard enough, you'll come up with ...;}|something or ...;}2>&1; both of these identify the brace as a standalone symbol. If you try ...;}xyz, the shell will (probably) fail to find a command called }xyz, though.)
I need to run more commands in more terminals. If I do this:
xterm -xrm '*hold: true' -e (time ./Program.exe) >> time.exe & sleep 2
it doesn't work and tells me Syntax error: "(" unexpected. How do I fix this?
You would need to do something like:
xterm -xrm '*hold: true' -e sh -c "(time ./Program.exe) 2> time.txt & sleep 2"
The key change is to run the shell with the script coming from the argument to the -c option; you can replace sh with /bin/bash or an equivalent name. That should get around any 'Syntax error' issues. I'm not quite sure what triggers that error, though, so there may be a simpler and better way to deal with it. It's also conceivable that xterm's -e option only takes a single string argument, in which case, I suppose you'd use:
xterm -xrm '*hold: true' -e 'sh -c "(time ./Program.exe) 2> time.txt & sleep 2"'
You can read the manuals for bash and xterm as well as I can.
I'm not sure why you run the timed program in background mode, but that's your problem, not mine. Similarly, the sleep 2 is not obviously necessary if the hold: true keeps the terminal open.
time_elapsed=$( { time sh -c "./program.exe"; } 2>&1 | grep "real" | awk '{print $NF}' )
echo "$time_elapsed" > file.txt
This should give you the exact time consumed, written to the desired file.
You can also redirect the output of time to a file using 2> file.txt, as explained in another reply.
It's not easy to redirect the output of the bash builtin time.
One solution is to use the external time program:
/bin/time --append -o time.txt ./program.exe
(on most systems it's a GNU program, so use info time rather than man to get its documentation).
Just enclose the command to time in a { .. }:
{ time ./program.exe; } 2>&1
Of course, the output of builtin time goes to stderr, thus the needed redirection 2>&1.
Capturing that output may then appear tricky; one option is a second { .. } to make the command easier to read, and this works:
{ { time ./program.exe; } 2>&1; } >> time.txt # This works.
However, the correct construct simply puts the redirections in the reverse order, like this:
{ time ./program.exe; } >> time.txt 2>&1; # Correct.
To discard any output from the command itself, redirect its output to /dev/null, like this:
{ time ./program.exe >/dev/null 2>&1; } >> time.txt 2>&1 # Better.
And since there is now output only on stderr, we can capture just that:
{ time ./program.exe >/dev/null 2>&1; } 2>> time.txt # Best.
The output from ./program should be redirected, or it may well end up inside time.txt.

How to save the command you are about to execute in bash?

Is there a better way to save a command line before it is executed?
A number of my /bin/bash scripts construct a very long command line. I generally save the command line to a text file for easier debugging and (sometimes) execution.
My code is littered with this idiom:
echo >saved.txt cd $NEW_PLACE '&&' command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
cd $NEW_PLACE && command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
Obviously updating code in two places is error-prone. Less obvious is that certain parts need to be quoted in the first line but not in the second, so I cannot do the update by a simple copy-and-paste. If the command includes quotes, it gets even more complicated.
There has got to be a better way! Suggestions?
How about creating a helper function which logs and then executes the command? "$@" will expand to whatever command you pass in.
log() {
  echo "$@" >> /tmp/cmd.log
  "$@"
}
Use it by simply prepending log to any existing command. It won't handle && or || though, so you'll have to log those commands separately.
log cd $NEW_PLACE && log command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
Are you looking for set -x (or bash -x)? This writes every command to standard error before executing it.
1. Use script, and you will get everything archived.
2. Use -x for tracing your script, e.g. run it as bash -x script_name args...
3. Use set -x in your current bash (your commands will be echoed with substituted globs and variables).
4. Combine 2 and 3 with 1.
If you just execute the command file immediately after creating it, you will only need to construct the command once, with one level of escapes.
If that would create too many discrete little command files, you could create shell procedures and then run an individual one.
(echo fun123 '()' {
echo echo something important
echo }
) > saved.txt
. saved.txt
fun123
It sounds like your goal is to keep a good log of what your script did so that you can debug it when things go bad. I would suggest using the -x parameter in your shebang like so:
#!/bin/sh -x
# the -x above makes bash print out every command before it is executed.
# you can also use the -e option to make bash exit immediately if any command
# returns a non-zero return code.
Also, see my answer on a previous question about redirecting all of this debug output to a log when --log is passed into your shell script. This will redirect all stdout and stderr. Occasionally, you'll still want to write to the terminal to give the user feedback. You can do this by saving stdout to a new file descriptor and using that with echo (or other programs):
exec 3>&1 # save stdout to fd 3
# perform log redirection as per above linked answer
# now all stdout and stderr will be redirected to the file and console.
# remove the `tee` command if you want it to go just to the file.
# now if you want to write to the original stdout (i.e. terminal)
echo "Hello World" >&3
# "Hello World" will be written to the terminal and not the logs.
I suggest you look into the xargs command. It was made to solve the problem of programmatically building up argument lists and passing them off to executables for batch processing.
http://en.wikipedia.org/wiki/Xargs
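For instance (a rough illustration; files.txt is a made-up input list, and the -a/-d options are GNU xargs extensions):
# Run rm once (or a few times) with many filenames read from files.txt:
xargs -a files.txt -d '\n' rm --

# Or build the list from another command; -n caps the arguments per invocation:
find /tmp -name '*.log' -print0 | xargs -0 -n 50 gzip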

Is there a command-line shortcut for ">/dev/null 2>&1"

It's really annoying to type this whenever I don't want to see a program's output. I'd love to know if there is a shorter way to write:
$ program >/dev/null 2>&1
Generic shell is the best, but other shells would be interesting to know about too, especially bash or dash.
>& /dev/null
You can write a function for this:
function nullify() {
  "$@" >/dev/null 2>&1
}
To use this function:
nullify program arg1 arg2 ...
Of course, you can name the function whatever you want. It can be a single character for example.
By the way, you can use exec to redirect stdout and stderr to /dev/null temporarily. I don't know if this is helpful in your case, but I thought of sharing it.
# Save stdout, stderr to file descriptors 6, 7 respectively.
exec 6>&1 7>&2
# Redirect stdout, stderr to /dev/null
exec 1>/dev/null 2>/dev/null
# Run program.
program arg1 arg2 ...
# Restore stdout, stderr.
exec 1>&6 2>&7
In bash, zsh, and dash:
$ program >&- 2>&-
It may also appear to work in other shells because &- is a bad file descriptor.
Note that this solution closes the file descriptors rather than redirecting them to /dev/null, which could potentially cause programs to abort.
Most shells support aliases. For instance, in my .zshrc I have things like:
alias -g no='2> /dev/null > /dev/null'
Then I just type
program no
If /dev/null is too much to type, you could (as root) do something like:
ln -s /dev/null /n
Then you could just do:
program >/n 2>&1
But of course, scripts you write in this way won't be portable to other systems without setting up that symlink first.
It's also worth noting that oftentimes redirecting output is not really necessary. Many Unix and Linux programs accept a "silent flag", usually -n or -q, that suppresses any output and only returns a value indicating success or failure.
For example
grep foo bar.txt >/dev/null 2>&1
if [ $? -eq 0 ]; then
do_something
fi
Can be rewritten as
grep -q foo bar.txt
if [ $? -eq 0 ]; then
do_something
fi
Edit: the >(:) or |: based solutions might cause an error because : doesn't read stdin, though this might not be as bad as closing the file descriptor, as proposed in Zaz's answer.
For bash and bash-compliant shells (zsh...):
$ program &>/dev/null
OR
$ program &> >(:) # Should actually cause error or abortion
For all shells:
$ program 2>&1 >/dev/null
OR
$ program 2>&1|: # Should actually cause error or abortion
$ program 2>&1 > >(:) does not work in dash, because dash does not support process substitution.
Explanations:
2>&1 redirects stderr (file descriptor 2) to stdout (file descriptor 1).
| is the regular piping of stdout to the stdin of another command.
: is a shell builtin which does nothing (it is equivalent to true).
&> redirects both stdout and stderr outputs to a file.
>(your-command) is process substitution. It is replaced with a path to a special file, for instance: /proc/self/fd/6. This file is used as input file for the command your-command.
Note: A process trying to write to a closed file descriptor will get an EBADF (bad file descriptor) error which is more likely to cause abortion than trying to write to | true. The latter would cause an EPIPE (pipe) error, see Charles Duffy's comment.
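For the curious, here is a quick sketch of what the process substitution form expands to (the exact path varies by system, and program is just a placeholder):
$ echo >(true)
/dev/fd/63        # often /proc/self/fd/NN on Linux

# So "program &> >(:)" roughly means: open that path for writing and hand it to ":",
# which never reads -- hence the EPIPE mentioned in the note above.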
Ayman Hourieh's solution works well for one-off invocations of overly chatty programs. But if there's only a small set of commonly called programs for which you want to suppress output, consider silencing them by adding the following to your .bashrc file (or the equivalent, if you use another shell):
CHATTY_PROGRAMS=(okular firefox libreoffice kwrite)
for PROGRAM in "${CHATTY_PROGRAMS[@]}"
do
    printf -v eval_str '%q() { command %q "$@" &>/dev/null; }' "$PROGRAM" "$PROGRAM"
    eval "$eval_str"
done
This way you can continue to invoke programs using their usual names, but their stdout and stderr output will disappear into the bit bucket.
Note also that certain programs allow you to configure how much logging/debugging output they spew. For KDE applications, you can run kdebugdialog and selectively or globally disable debugging output.
Seems to me, that the most portable solution, and best answer, would be a macro on your terminal (PC).
That way, no matter what server you log in to, it will always be there.
If you happen to run Windows, you can get the desired outcome with AutoHotkey (AHK; it's open source) in two tiny lines of code. It can translate any string of keys into any other string of keys, in situ.
You type "ugly.sh >>NULL" and it will rewrite it as "ugly.sh > /dev/null 2>&1" or whatever you like.
Solutions for other platforms are somewhat more difficult. AppleScript can paste in keyboard presses, but can't be triggered that easily.

What's the difference between "&> foo" and "> foo 2>&1"?

There seem to be two bash idioms for redirecting STDOUT and STDERR to a file:
fooscript &> foo
... and ...
fooscript > foo 2>&1
What's the difference? It seems to me that the first one is just a shortcut for the second one, but my coworker contends that the second one will produce no output even if there's an error with the initial redirect, whereas the first one will spit redirect errors to STDOUT.
EDIT: Okay... it seems like people are not understanding what I am asking, so I will try to clarify:
Can anyone give me an example where the two specific lines written above will yield different behavior?
From the bash manual:
There are two formats for redirecting standard output and standard error:
&>word
and
>&word
Of the two forms, the first is preferred. This is semantically equivalent to
>word 2>&1
The phrase "semantically equivalent" should settle the issue with your coworker.
The situation where the two lines have different behavior is when your script is not running in bash but some simpler shell in the sh family, e.g. dash (which I believe is used as /bin/sh in some Linux distros because it is more lightweight than bash). In that case,
fooscript &> foo
is interpreted as two commands: the first one runs fooscript in the background, and the second one truncates the file foo. The command
fooscript > foo 2>&1
runs fooscript in the foreground and redirects its output and standard error to the file foo. In bash I think the lines will always do the same thing.
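You can watch the difference with a throwaway script (a sketch; fooscript here is just a stand-in that writes one line to each stream, and the exact interleaving of the dash output may vary):
$ cat > fooscript <<'EOF'
#!/bin/sh
echo "to stdout"
echo "to stderr" >&2
EOF
$ chmod +x fooscript

$ bash -c './fooscript &> foo'
$ cat foo
to stdout
to stderr

$ dash -c './fooscript &> foo'
to stdout
to stderr
$ cat foo         # empty: dash backgrounded the script and merely truncated foo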
The main reason to use 2>&1, in my experience, is when you want to append all output to a file rather than overwrite the file. With &> syntax, you can't append. So with 2>&1, you can write something like program >> alloutput.log 2>&1 and get stdout and stderr output appended to the log file.
&>foo is less typing than >foo 2>&1, and less error-prone (you can't get it in the wrong order), but achieves the same result.
2>&1 is confusing, because you need to put it after the 1> redirect, unless stdout is being redirected to a | pipe, in which case it goes before...
$ some-command 2>&1 >foo # does the unexpected
$ some-command >foo 2>&1 # does the same as
$ some-command &>foo # this and
$ some-command >&foo # compatible with other shells, but trouble if the filename is numeric
$ some-command 2>&1 | less # but the redirect goes *before* the pipe here...
&> foo # Redirects all output (stdout and stderr) to foo.
2>&1 # will redirect stderr to stdout.
2>&1 depends on the order in which it is specified on the command line. Where &> sends both stdout and stderr to wherever, 2>&1 sends stderr to where stdout is currently going at that point in the command line. Thus:
command > file 2>&1
is different than:
command 2>&1 > file
where the former is redirecting both stdout and stderr to file, the latter redirects stderr to where stdout is going before it is redirected to the file (in this case, probably the terminal.) This is useful if you wanted to do something like:
command 2>&1 > file | less
Where you want to use less to page through the output of stderr and store the output of stdout to a file.
2>&1 might be useful for cases where you aren't redirecting stdout to somewhere else, but rather you just want stderr to be sent to the same place (such as the console) as stdout (perhaps if the command you are running is already sending stderr somewhere else other than the console).
I know it's a pretty old posting, but sharing something that could answer the question "...will yield different behavior?":
Scenario:
When you are trying to use "tee" and want to preserve the exit code using process substitution in bash...
someScript.sh 2>&1 >( tee "toALog") # this fails to capture the output to the log
where as :
someScript.sh >& >( tee "toALog") # works fine!
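The exit-status point is easy to check directly; in this sketch, false stands in for a failing someScript.sh:
$ false | tee "toALog"; echo $?
0                 # $? is tee's status, the failure is lost
$ false >& >(tee "toALog"); echo $?
1                 # with process substitution, $? is the script's own status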
