I have a problem storing the output of a command in a variable within a bash script.
I know that, in general, there are two ways to do this:
either
foo=$(bar)
# or
foo=`bar`
but for the Java version query, this doesn't seem to work.
I did:
version=$(java --version)
This doesn't store the value in the variable; the output is even still printed to the terminal, which really shouldn't be the case with command substitution.
I also tried redirecting the output to a file, but that fails as well.
version=$(java -version 2>&1)
The version flag takes only one dash, and if you redirect stderr, which is where the message is written, you'll get the desired result.
As a side note, double-dash options are an unofficial standard on Unix-like systems, but since Java tries to behave almost identically across platforms, it departs from the Unix/Linux convention here and behaves the same in this regard as on Windows and, I suspect, on macOS.
That is because java -version writes to stderr and not stdout. You should use:
version=$(java -version 2>&1)
in order to redirect stderr to stdout.
You can see this by running the following two commands:
java -version > /dev/null
java -version 2> /dev/null
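Once the output is captured, you can pull out just the version number. A minimal sketch, assuming the typical format where the first line looks like openjdk version "11.0.2" ... (the awk field may need adjusting for your JDK's exact output):

version=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}')
# prints e.g. 11.0.2
echo "$version"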
Related
I am using this answer to compare against the minimum version number that is required. But before I get to the comparison, I am stuck on how to extract the version number in the first place.
My current script looks like this
#!/usr/bin/env bash
x=`pgsync -v`
echo "---"
echo $x
and its output is
> ./version-test.sh
0.6.7
---
I have also tried x="$(pgsync -v)" and I am still getting an empty string. What am I doing wrong here?
If you're trying to capture a command's output in a variable and it's instead getting printed to the terminal, that's a sign the command isn't writing to its standard output but to another stream, usually standard error. So just redirect it:
x=$(pgsync -v 2>&1)
As an aside, writing an explicitly requested version number to standard error instead of standard output is counterintuitive and arguably a bug.
Also, prefer $() command substitution to backticks; see Bash FAQ 082 for details.
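Since the goal is a minimum-version check, here is a sketch of one common way to do the comparison with sort -V (GNU coreutils; the 0.6.0 minimum is just a placeholder):

required=0.6.0
actual=$(pgsync -v 2>&1)
# sort -V orders version strings numerically; if the required version
# sorts first (or ties), the installed version meets the minimum
if [ "$(printf '%s\n' "$required" "$actual" | sort -V | head -n1)" = "$required" ]; then
  echo "pgsync $actual >= $required"
else
  echo "pgsync $actual is older than $required" >&2
fi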
There are plenty of threads here discussing how to do this for scripts or for the command line (mostly involving pipes, redirections, and tee).
What I didn't find is a solution that can be set up once and then just works globally, without modifying individual scripts or adding something to every command line.
What I want to achieve is something like described in the top answer of
How do I write stderr to a file while using "tee" with a pipe?
Isn't it possible to configure the bash session so that all stderr output is logged to a file while still being written to the console? Something I could add to .bashrc, so that it is automatically set up every time I log in?
Software: Bash 4.2.24(1)-release (x86_64-pc-linux-gnu), xterm, Ubuntu 12.04
Try this variation on #0xC0000022L's previous solution (put it in your .bash_profile):
exec 2> >( tee log.file > /dev/tty )
A couple of caveats:
The prompt and anything you type at the command line are printed to stderr, and so will be logged in your file.
There could be an issue with the newline that terminates a command not being displayed in your terminal; I observe it on my Linux host, but not on my Mac OS X laptop. Perhaps someone else can explain and/or fix the issue. For example, if I type "echo stdout", I see the following:
$ echo stdoutstdout
$
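Building on that, here is a sketch of what the .bashrc entry could look like, guarded so it only applies to interactive sessions (the log path is just an example, and the caveats above still apply):

# only set up the stderr tee in interactive shells
if [[ $- == *i* ]]; then
  exec 2> >( tee -a "$HOME/stderr.log" > /dev/tty )
fi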
I have a question about what I think is an operator or argument passer but google hasn't turned up anything. The script this is contained in is
#!/bin/sh
ln mopac.in FOR005
mopac >& FOR006
mv FOR006 mopac.out
When I call "mopac mopac.in" directly, the program runs fine. For my needs, however, mopac is called from within another program via this script, and it seems the input file is not being passed, so mopac does not run. I don't understand what ">&" is supposed to do, so I am having trouble troubleshooting.
Thanks.
>& FILE is deprecated bash shorthand (borrowed from csh) for > FILE 2>&1, that is, redirect both standard output and standard error to FILE. (If /bin/sh is not bash, as is true on a number of Linux distributions, this will produce an error.) Older bash (before 3.0) preferred this form, so most newer bash versions still understand it, although very recent bash may have removed it, as deprecated constructs have been getting dropped of late.
Your script there is not passing mopac.in at all; it appears to assume that mopac will read its input from FOR005, and it uses ln to make the input available under that name. Perhaps you should change the script to take mopac.in as a parameter, just as when you run it directly.
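If mopac does accept the input file as an argument, as your direct invocation suggests, a sketch of the simplified script might be:

#!/bin/sh
# pass the input file explicitly and capture both stdout and stderr portably
mopac mopac.in > mopac.out 2>&1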
Explanation here: http://tldp.org/LDP/abs/html/io-redirection.html
>&j
# Redirects, by default, file descriptor 1 (stdout) to j.
# All stdout gets sent to file pointed to by j.
I'm trying this in Ruby.
I have a shell script to which I can pass a command that will be executed by the shell after some initial environment variables have been set. So in Ruby code I'm doing this:
# ruby code
my_results = `some_script -allow username -cmd "perform_action"`
The issue is that since the script "some_script" runs "perform_action" in its own environment, I'm not seeing the result when I output the variable "my_results". A Ruby puts of "my_results" just gives me some initial comments from before the script processes the command "perform_action".
Any clues how I can get the output of perform_action into "my_results"?
Thanks.
The backticks will only capture stdout. If the script redirects its stdout, or writes to any other handle (like stderr), that output will not show up; otherwise, it should. Whether something goes to stdout does not depend on the environment, only on redirection or on writing directly to a different handle.
Try to see whether your script actually prints to stdout from the shell:
$ some_script -allow username -cmd "perform_action" > just_stdout.log
$ cat just_stdout.log
In any case, this is not a Ruby question. (Or at least it isn't if I understood you correctly.) You would get the same answer for any language.
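If it turns out the script writes its useful output to stderr, as in the earlier threads, a minimal fix on the Ruby side is to merge stderr into stdout inside the backticks (a sketch):

# 2>&1 is interpreted by the shell that runs the backtick command
my_results = `some_script -allow username -cmd "perform_action" 2>&1`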
This question already has answers here: How to trick an application into thinking its stdout is a terminal, not a pipe
Various bash commands I use (fancy diffs, build scripts, etc.) produce lots of color output.
When I redirect this output to a file and then cat or less the file later, the colorization is gone, presumably because the act of redirecting the output stripped out the color codes that tell the terminal to change colors.
Is there a way to capture colorized output, including the colorization?
One way to capture colorized output is with the script command. Running script will start a bash session where all of the raw output is captured to a file (named typescript by default).
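A minimal sketch of that workflow (the file name is just an example):

script build.log      # starts a subshell; everything below is recorded
make                  # or any command that colorizes its output
exit                  # ends the recording
less -R build.log     # view later with the color codes rendered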
Redirecting doesn't strip colors, but many commands will detect when they are sending output to a terminal, and will not produce colors by default if not. For example, on Linux ls --color=auto (which is aliased to plain ls in a lot of places) will not produce color codes if outputting to a pipe or file, but ls --color will. Many other tools have similar override flags to get them to save colorized output to a file, but it's all specific to the individual tool.
Even once you have the color codes in a file, to see them you need to use a tool that leaves them intact. less has a -r flag to show file data in "raw" mode; this displays color codes. Edit: Slightly newer versions also have a -R flag, which is specifically aware of color codes and displays them properly, with better support for things like line wrapping/trimming than raw mode, because less can tell which things are control codes and which are actual characters going to the screen.
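For example, with GNU ls you might force color on the way into the file and then view it with less -R (a sketch; the file name is a placeholder):

ls --color=always > listing.txt   # force color codes despite the redirect
less -R listing.txt               # render the codes instead of showing them raw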
Inspired by the other answers, I started using script. I had to use -c to get it working, though. All the other answers, including tee and the various script examples, did not work for me.
Context:
Ubuntu 16.04
running behavior tests with behave and starting shell command during the test with python's subprocess.check_call()
Solution:
script --flush --quiet --return /tmp/ansible-output.txt --command "my-ansible-command"
Explanation for the switches:
--flush was needed because otherwise the output does not stream live; it arrives in big chunks
--quiet suppresses script's own output
-c, --command directly provides the command to execute; piping from my command to script did not work for me (no colors)
--return makes script propagate the exit code of my command, so I know whether my command failed
I found that using script to preserve colors when piping to less doesn't really work (less gets all messed up, and on exit bash is all messed up too) because less is interactive. script seems to really mess up input coming from stdin even after exiting.
So instead of running:
script -q /dev/null cargo build | less -R
I redirect /dev/null to it before piping to less:
script -q /dev/null cargo build < /dev/null | less -R
So now script doesn't mess with stdin and gets me exactly what I want. It's the equivalent of command | less but it preserves colors while also continuing to read new content appended to the file (other methods I tried wouldn't do that).
Some programs remove colorization when they detect that their output is not a TTY (i.e., when you redirect it into another program). You can force some of them to emit color anyway, and tell the pager to render it, for example with less -R.
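For example, with grep (a sketch; --color=always forces the codes even into a pipe):

grep --color=always 'pattern' file.txt | less -R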
This question over on Super User helped me when my other answer (involving tee) didn't work. It involves using unbuffer to make the command think its output is going to a terminal.
I installed it using sudo apt install expect tcl rather than sudo apt-get install expect-dev.
I needed to use this method when redirecting the output of apt, ironically.
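unbuffer is just a prefix on the command; a sketch for the apt case (the log name is a placeholder):

# the command believes it has a terminal, so it keeps its colors;
# 2>&1 merges stderr so tee captures everything
unbuffer apt list --upgradable 2>&1 | tee apt.log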
I use tee: pipe the command's output to tee filename and it'll keep the colour. And if you don't want to see the output on the screen (which is what tee is for: showing and redirecting output at the same time), then just send the output of tee to /dev/null:
command | tee filename > /dev/null