Bash - Command substituting $(ping google.com) outputs to terminal [duplicate]

This question already has answers here:
How to store standard error in a variable
(20 answers)
Closed 12 months ago.
I've been writing a script whose weird behavior has been driving me nuts, but I think I've found the problem: a command substitution like this
out="$(ping google.com)"
if done while the internet isn't available, outputs this to the terminal
ping: google.com: Temporary failure in name resolution
even though, from my understanding, the command being substituted is run in a subshell, and so the output of the command should not go to stdout, but only be passed as the value of the variable. In fact, if done while the internet is available, the command substitution outputs nothing to the terminal, as expected.
I'm not sure if this is what's causing problems in my script, because I'm running a slightly more elaborate command (out="$(timeout 5 ping google.com | grep -c failure)"), but my theory is that something weird is happening that messes up later operations with variables and substitutions.
Why is this happening? And why does it only happen when the ping command fails to reach google.com? Thank you for your time.

The output is not going to stdout, it's going to stderr, and is printed to the terminal directly. Use out="$(ping google.com 2>&1)" to get all the output (stderr and stdout) in your out variable, or consider using exit codes for your command.
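For example, a sketch combining both suggestions (ping -c 1 and timeout are the GNU/Linux variants; adjust for your platform):
# capture stdout and stderr together, and branch on the exit status
if out="$(timeout 5 ping -c 1 google.com 2>&1)"; then
    echo "network is up"
else
    echo "ping failed: $out"
fi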

Related

Is it possible to make a .bat Bash hybrid?

In cmd, it is possible to use Linux commands with the ubuntu or bash commands, but they are very fickle. In batch, it is also possible to make a VBScript-batch hybrid, which got me thinking, is it possible to make a Bash-batch hybrid? Besides being a tongue-twister, I feel that Bash-batch scripts may be really useful.
What I have tried so far
So far I've tried using the bare bash and ubuntu commands on their own, since they switch the normal command prompt to the Ubuntu/Bash shell, but even if you put commands after ubuntu/bash they don't show or do anything.
After I tried that, I tried using the ubuntu -run command, but like I said earlier, it’s really fickle and inconsistent on what things work and what things don't. It is less inconsistent when you pipe things into it, but it still usually doesn't work.
I looked here since it seemed like it would answer my question, and I tried it, but it didn't work since it required another program (I think).
I also looked at this, and I guess it failed miserably, but it's an interesting concept.
What I've gotten from all of my research is that when this topic comes up, most people think of a file that can be run either as a .bat file or as a .sh shell file, rather than my goal: a file that runs both batch and Bash commands in the same instance.
What I want this for relates to my other question where I am trying to hash a string instead of a file in cmd, and you could do it with a Bash command, but I would still like to keep the file as a batch file.
Sure you can use Bash in batch, assuming it is available. Just use the command bash -c 'cmd', where cmd is the command that you want to run in Bash.
The following batch line pipes Hello to the cat -A command, which prints it including the invisible characters:
echo Hello | bash -c "cat -A"
Compare the output with the result of the version completely written in Bash:
bash -c "echo Hello | cat -A"
They will differ slightly: cmd's echo includes the trailing space before the pipe and terminates the line with CRLF, so the batch version prints something like Hello ^M$ while the all-Bash version prints Hello$.
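Tying this back to the string-hashing goal, the same pattern works for any Bash tool (a sketch, assuming WSL's bash is on PATH; sha256sum is my stand-in for whatever hash you need):
echo some string | bash -c "sha256sum"
Just remember the caveat above: cmd's echo adds a trailing space and CRLF, so this hashes those extra bytes along with the string.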

How do I send commands to the ADB shell directly from my app?

I want to send commands in the ADB shell itself, as if I had done the following in cmd:
>adb shell
shell#:/ <command>
I am using Python 3.4 on a 64-bit Windows 7 machine. I can send one-line shell commands simply using subprocess.getoutput, such as:
subprocess.getoutput('adb pull /storage/sdcard0/file.txt')
as long as the adb commands themselves are recognized by adb specifically, such as pull and push. However, other commands such as grep need to be run IN the shell, like above, since they are not recognized by adb itself. For example, the following line will not work:
subprocess.getoutput('adb shell ls -l | grep ...')
To enter the commands in the shell, I thought I needed some kind of expect library, as that is what 'everyone' suggests; however pexpect, wexpect, and winexpect all failed to work. They were written for Python 2, and after they were ported to Python 3 and I went through the .py files by hand, even the ones tweaked for Windows, nothing worked, each failing for different reasons.
How can I send the input I want to the adb shell directly?
If none of the already recommended shortcuts work for you, you can still go the 'regular' way, using subprocess.Popen to enter commands in the adb shell:
import subprocess
import time
from subprocess import PIPE

cmd1 = 'adb shell'             # opens an interactive adb shell
cmd2 = 'ls -l | grep ...'      # the command to run inside that shell
p = subprocess.Popen(cmd1.split(), stdin=PIPE)
time.sleep(1)                  # give the shell a moment to start
p.stdin.write(cmd2.encode('utf-8'))   # write() expects bytes
p.stdin.write('\n'.encode('utf-8'))   # don't forget the newline
p.stdin.flush()
time.sleep(3)                  # give the command time to finish
p.kill()                       # close the pipe and the shell
Some things to remember:
even though you import subprocess, you still need to invoke subprocess.Popen (and import PIPE, as above)
sending cmd1 as a string or as items in a list should work too, but .split() does the trick and is easier on the eyes
since you only specified that you want to enter input to the shell, you only need stdin=PIPE. stdout=PIPE would only be necessary if you wanted to receive output from the shell
time.sleep(1) isn't strictly necessary, but since many people have complained about input timing differences between Python 2 and 3, consider using it; 'they' might have been using versions of 'expect' that need the shell's reply first. This code also worked when I tested it with time.sleep(0) instead
stdin.write will raise an error if the input is not encoded properly; Python's default strings are Unicode. A bytes literal like b'ls ...' did not work for me in my tests, but .encode() did. Don't forget the trailing newline!
even with .encode() the line might not get sent immediately, so to be sure it is good to include a flush()
time.sleep(3) is completely unnecessary, but if your command takes a long time to execute (e.g. a recursive search through the entire device piped out to a txt file on the memory card), give it some extra time before killing anything
remember to kill the process. If you don't, the pipe may remain open, and even after exiting the test app on the console the next command still went to the shell, even though the prompt appeared to be my regular cmd prompt
Amichai, I have to start by pointing out that your own "solution" is pretty awful, and your explanation makes it even worse: it does all those unnecessary things only because you don't see how shell command parsing works (here I mean your PC's OS shell, not adb).
When all you needed was just this one command:
subprocess.check_output(['adb', 'shell', 'ls /storage/sdcard0 | grep ...']).decode('utf-8')
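For comparison, the same idea from a plain shell: quoting the pipeline as a single argument keeps the pipe on the device side instead of on your PC (substitute a real pattern for the ...):
adb shell 'ls /storage/sdcard0 | grep ...'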

Does stdout get stored somewhere in the filesystem or in memory? [duplicate]

This question already has answers here:
Send output of last command to a file automatically in bash?
(3 answers)
Closed 8 years ago.
I know I can save the result of a command to a variable using last_output=$(my_cmd), but what I'd really want is for $last_output to get updated every time I run a command. Is there a variable, zsh module, or plugin that I could install?
I guess my question is: does stdout get permanently written somewhere (at least before the next command)? That way I could manipulate the results of the previous command without having to re-run it. This would be really useful for commands that take a long time to run.
If you run the following:
exec > >(tee save.txt)
# ... stuff here...
exec >/dev/tty
...then your stdout for everything run between the two commands will go both to stdout, and to save.txt.
You could, of course, write a shell function which does this for you:
with_saved_output() {
  "$@" \
    2> >(tee "$HOME/.last-command.err" >&2) \
    | tee "$HOME/.last-command.out"
}
...and then use it at will:
with_saved_output some-command-here
...and zsh almost certainly will provide a mechanism to wrap interactively-entered commands. (In bash, which I can speak to more directly, you could do the same thing with a DEBUG trap).
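A minimal sketch of that bash hook (it only shows where per-command logic would attach; it does not by itself save any output):
# the DEBUG trap fires before each command; $BASH_COMMAND holds the command about to run
trap 'echo "running: $BASH_COMMAND" >&2' DEBUG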
However, even though you can, you shouldn't do this: When you split stdout and stderr into two streams, information about the exact ordering of writes is lost, even if those streams are recombined later.
Thus, the output
O: this is written to stdout first
E: this is written to stderr second
could become:
E: this is written to stderr second
O: this is written to stdout first
when these streams are individually passed through tee subprocesses to have copies written to disk. There are also buffering concerns created, and differences in behavior caused by software which checks whether it's outputting to a TTY and changes its behavior (for instance, software which turns color-coded output on when writing directly to console, and off when writing to a file or pipeline).
stdout is just a file handle that by default is connected to the console, but could be redirected.
yourcommand > save.txt
If you want to display the output on the console and save it to a file at the same time, you can pipe the output to tee, a command that writes everything it receives on stdin to stdout and to a file of your choice:
yourcommand | tee save.txt
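One caveat worth adding: in a pipeline, $? reflects tee's exit status, not yourcommand's. In bash you can recover the original status from PIPESTATUS:
yourcommand | tee save.txt
echo "${PIPESTATUS[0]}"   # exit status of yourcommand, not tee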

Redirecting standard output to a file containing the pid of the logging process [duplicate]

This question already has answers here:
Redirecting standard output to a file containing the pid of the logging process
(2 answers)
Closed 2 years ago.
I've searched for a while, but I can't find an answer or come up with a solution of my own, so I turn to you guys. This is the first question I've actually asked here :)
I would like to run several instances of the same program, and redirect each of these programs' standard output to a file that contains that same process' pid, something like:
my_program > <pid of the instance of my_program that is called in this command>.log
I'm aware that this is not even close to the way to go :P I have tinkered around with exec and $PPID but to no avail. My bash-fu is weak :| please help me, point me somewhere! Thanks!
You can wrap your program's execution in a bash script. The bash process is replaced by your program on the exec call, so $$ (the shell's own PID) becomes your program's PID:
#!/bin/bash
exec my_program > $$.log
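Running several instances as in the question is then straightforward; a sketch, assuming the script above is saved as run_logged.sh and made executable (run_logged.sh is my name for it, not from the question):
./run_logged.sh &   # each instance execs my_program, so each log is named after that instance's PID
./run_logged.sh &
./run_logged.sh &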
You cannot know the PID of a process before you have created it.
Therefore this is not possible directly; you would have to rewrite the program being called to use getpid() and build a log name from its own PID.

Can colorized output be captured via shell redirect? [duplicate]

This question already has answers here:
How to trick an application into thinking its stdout is a terminal, not a pipe
(9 answers)
Closed 5 years ago.
Various bash commands I use -- fancy diffs, build scripts, etc, produce lots of color output.
When I redirect this output to a file, and then cat or less the file later, the colorization is gone -- presumably b/c the act of redirecting the output stripped out the color codes that tell the terminal to change colors.
Is there a way to capture colorized output, including the colorization?
One way to capture colorized output is with the script command. Running script will start a bash session where all of the raw output is captured to a file (named typescript by default).
Redirecting doesn't strip colors, but many commands will detect when they are sending output to a terminal, and will not produce colors by default if not. For example, on Linux ls --color=auto (which is aliased to plain ls in a lot of places) will not produce color codes if outputting to a pipe or file, but ls --color will. Many other tools have similar override flags to get them to save colorized output to a file, but it's all specific to the individual tool.
Even once you have the color codes in a file, to see them you need to use a tool that leaves them intact. less has a -r flag to show file data in "raw" mode; this displays color codes. edit: Slightly newer versions also have a -R flag which is specifically aware of color codes and displays them properly, with better support for things like line wrapping/trimming than raw mode because less can tell which things are control codes and which are actually characters going to the screen.
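Putting the two halves together with GNU ls, for example:
ls --color=always > save.txt   # force color codes even though output goes to a file
less -R save.txt               # render the codes as colors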
Inspired by the other answers, I started using script. I had to use -c to get it working, though. None of the other answers, including tee and the various script examples, worked for me.
Context:
Ubuntu 16.04
running behavior tests with behave and starting shell commands during the test with Python's subprocess.check_call()
Solution:
script --flush --quiet --return /tmp/ansible-output.txt --command "my-ansible-command"
Explanation for the switches:
--flush was needed, because otherwise the output is hard to observe live; it arrives in big chunks
--quiet suppresses the script tool's own output
-c, --command directly provides the command to execute, piping from my command to script did not work for me (no colors)
--return to make script propagate the exit code of my command so I know if my command has failed
I found that using script to preserve colors when piping to less doesn't really work (less is all messed up, and on exit bash is all messed up) because less is interactive. script seems to really mess up input coming from stdin even after exiting.
So instead of running:
script -q /dev/null cargo build | less -R
I redirect /dev/null to it before piping to less:
script -q /dev/null cargo build < /dev/null | less -R
So now script doesn't mess with stdin and gets me exactly what I want. It's the equivalent of command | less but it preserves colors while also continuing to read new content appended to the file (other methods I tried wouldn't do that).
Some programs remove colorization when they realize the output is not a TTY (i.e. when you redirect them into another program or a file). You can tell some of those to use color forcefully, and tell the pager to turn colorization on, for example with less -R.
This question over on superuser helped me when my other answer (involving tee) didn't work. It involves using unbuffer to make the command think it's running from a shell.
I installed it using sudo apt install expect tcl rather than sudo apt-get install expect-dev.
I needed to use this method when redirecting the output of apt, ironically.
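Usage is just a prefix (a sketch; unbuffer ships with the expect package, and my-command stands in for whatever you are capturing):
unbuffer my-command | tee output.txt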
I use tee: pipe the command's output to tee filename and it'll keep the colour. And if you don't want to see the output on the screen (which is what tee is for: showing and redirecting output at the same time), just send the output of tee to /dev/null:
command | tee filename > /dev/null
