Testing for pipe or console standard input with ARGF - ruby

I have written a script that I would like to take input either from a pipe or from a filename provided as an argument. ARGF makes it easy to handle this flexibly, except in the incorrect usage case where neither is provided: STDIN is then opened and the script hangs until the user types something on the console.
I would like to catch this incorrect usage to display an error message and exit the program, but I haven't been able to find a way. ARGF.eof? seemed like a possible candidate, but it also hangs until some input is received.
Is there a simple way for Ruby to discriminate between STDIN provided by a pipe and one from the console?

You can use
$stdin.tty?
For example:
$ ruby -e 'puts $stdin.tty?'
> true
$ echo "hello" | ruby -e 'puts $stdin.tty?'
> false
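For the ARGF case in the question, one way to use this is to run the check before the first read; a minimal sketch (the usage message and per-line processing are just placeholders):
# Bail out if no filename was given and STDIN is attached to the console.
if ARGV.empty? && $stdin.tty?
  abort "usage: script.rb [FILE] (or pipe data on standard input)"
end

# Otherwise read from the named file(s) or from the pipe as usual.
ARGF.each_line do |line|
  puts line.upcase   # placeholder processing
end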

Related

How to get version number and then compare it to minimum

I am using this answer to compare against the minimum version number that is required. But before I get to the comparison, I am stuck on how to extract the version number.
My current script looks like this:
#!/usr/bin/env bash
x=`pgsync -v`
echo "---"
echo $x
and its output is
> ./version-test.sh
0.6.7
---
I have also tried with x="$(pgsync -v)" and I am still getting an empty string. What am I doing wrong here?
If you're trying to capture a command's output in a variable and it's instead getting printed to the terminal, that's a sign the command isn't writing to its standard output but to another stream, usually standard error. So just redirect it:
x=$(pgsync -v 2>&1)
As an aside, writing an explicitly requested version number to standard error instead of standard output is counterintuitive and arguably a bug.
Also, prefer $() command substitution to backticks; see Bash FAQ 082 for details.

Why will a script not accept input on stdin when invoked from bash's bind?

I have a number of bash/bind tools that I've written to simplify my command-line existence, and have recently wanted to make one of these tools interactive. If I try to read from stdin in one of these scripts, execution locks up at the point of reading. My example here is in python, but I have seen the exact same behavior when the invoked script is written in ruby:
~> cat tmp.py
import sys
sys.stdout.write(">>>")
sys.stdout.flush()
foo = sys.stdin.readline()
print "foo: %s" % foo,
~> python tmp.py
>>>yodeling yoda
foo: yodeling yoda
So the script works. When I invoke it, I can give it input and it prints what I fed it.
~> bind -x '"\eh":"echo yodeling yoda"'
[output deleted]
~> [Alt-H]
yodeling yoda
bind works as expected. The bound keystroke invokes the command. I use this stuff all the time, but until now, I've only invoked scripts that required no stdin reads.
Let's bind [Alt-H] to the script:
~> bind -x '"\eh":"python tmp.py"'
[output deleted]
Now we're configured to have the script read from stdin while invoked by the bound keystroke. Hitting [Alt-H] starts the script, but nothing typed is echoed back. Even hitting [Ctrl-D] doesn't end it. The only way to get out is to hit [Ctrl-C], killing the process in the readline. (sys.stdin.read() suffers the same fate.)
~> [Alt-H]
>>>Traceback (most recent call last):
File "tmp.py", line 7, in <module>
foo = sys.stdin.readline()
KeyboardInterrupt
As I mentioned at the top, I see the same issue with ruby, so I know it's nothing to do with the language I'm using. (I've omitted the script.)
~> bind -x '"\eh":"ruby tmp.rb"'
[Output deleted]
~> [Alt-H]
>>>tmp.rb:3:in `gets': Interrupt
from tmp.rb:3
I've looked through the Bash Reference Manual entry on bind, and it says nothing about a restriction on input. Any thoughts?
EDIT:
If I cat /proc/[PID]/fd/0 while the process is stuck, I see the script's input being displayed. (Oddly enough, a fair number of characters - seemingly at random - fail to appear here. This symptom only appears after I've given a few hundred bytes of input.)
Found this, a description of how and when a terminal switches between cooked and raw modes. Calling stty cooked and stty echo at the beginning of prompting, then stty sane or stty raw afterward, triggers a new cascade of problems, mostly relating to how bound characters are handled; suffice it to say that it destroys most alt bindings (and more) until return has been hit a couple of times.
In the end, the best answer proved to be cooking the tty and manually turning on echo, then reverting the tty settings back to where they were when I started:
import os
import subprocess
import sys

def _get_terminal_settings():
    # Save the current terminal settings so they can be restored afterwards.
    proc = subprocess.Popen(['/bin/stty', '-g'], stdout=subprocess.PIPE)
    settings = proc.communicate()[0]
    # Put the terminal back into cooked mode with echo so readline() behaves normally.
    os.system('stty cooked echo')
    return settings

def _set_terminal_settings(settings):
    # Restore whatever settings were saved before prompting.
    os.system('stty %s' % settings)

...
...
settings = _get_terminal_settings()
user_input = sys.stdin.readline()
_set_terminal_settings(settings)
...
...
You should be able to do this in any language you choose.
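For example, a rough Ruby equivalent of the same stty save/restore idea (a sketch only, shelling out to stty the same way and reading a single line) might be:
# Save the current terminal settings, then force cooked mode with echo.
settings = `stty -g`.chomp
system('stty cooked echo')
user_input = $stdin.gets
# Put the terminal back the way we found it.
system("stty #{settings}")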
If you're curious about why this insanity is required, I would encourage you to read the link I added (under EDIT), above. The article doesn't cover anywhere near enough detail, but you'll at least understand more than I did when I started.
Hmm, my guess is that what's happening is that the Python script is running and waiting for input from stdin when you press [Alt-H], but that its stdin is not the same as the stdin of the calling script. When you type something, it goes to the Bash script's stdin, not the Python script's. Perhaps look up a way to "reverse pipe" or forward the stdin from the bash shell to the stdin of a called script?
Edit:
Okay, I researched it a bit, and it looks like pipes might work. Here's a really informative link:
bash - redirect specific output from 2nd script back to stdin of 1st program?

Ruby not showing output of internal process

I'm trying this in Ruby.
I have a shell script to which I can pass a command which will be executed by the shell after some initial environment variables have been set. So in ruby code I'm doing this..
# ruby code
my_results = `some_script -allow username -cmd "perform_action"`
The issue is that since the script "some_script" runs "perform_action" in its own environment, I'm not seeing the result when I output the variable "my_results". So a Ruby puts of "my_results" just gives me some initial comments before the script processes the command "perform_action".
Any clues how I can get the output of perform_action into "my_results"?
Thanks.
The backticks will only capture stdout. If you are redirecting stdout, or writing to any other handle (like stderr), it will not show up in its output; otherwise, it should. Whether something goes into stdout or not is not dependent on an environment, only on redirection or direct writing to a different handle.
Try to see whether your script actually prints to stdout from shell:
$ some_script -allow username -cmd "perform_action" > just_stdout.log
$ cat just_stdout.log
In any case, this is not a Ruby question. (Or at least it isn't if I understood you correctly.) You would get the same answer for any language.
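If you would rather capture both streams from Ruby instead of merging them in the shell, the standard library's Open3 can do it; a small sketch using the command line from the question:
require 'open3'

# capture3 returns stdout, stderr and the exit status separately.
out, err, status = Open3.capture3('some_script -allow username -cmd "perform_action"')
puts "stdout: #{out}"
puts "stderr: #{err}"
puts "exit status: #{status.exitstatus}"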

clojure -- how to run a program without piping its output?

I want to use something like shell-out [ http://richhickey.github.com/clojure-contrib/shell-out-api.html ], but without capturing any of the output. Of course the output can be passed to print, but this is slightly less than desirable (e.g. in the case that the subprocess may fail).
edit
Sorry, I want the subprocess to output to the same stdout as the parent process.
Also see this third party library
https://github.com/Raynes/conch
It provides direct access to the streams.
EDIT: Before Clarification
You can wrap the shell command with a sh and then pipe to /dev/null like so:
(clojure.java.shell/sh "sh" "-c" "echo hello > /dev/null")
;; {:exit 0, :out "", :err ""}
This will silence the output before it gets to Clojure.
EDIT: After Clarification
Passing output and stderr to print should work as long as the output comes out quickly enough. If you want something with continuous output of error messages and standard output, looking at the source for the "sh" function should help.
Personally, I would make my own version of clojure.java.shell/sh and, for each stream, create a thread that pipes the output directly to out using something like IOUtils.copy from org.apache.commons.io.

When the input is from a pipe, does STDIN.read run until EOF is reached?

Sorry if this is a naïve question, but let's say I have a Ruby program called processor.rb that begins with data = STDIN.read. If I invoke this program like this
cat textfile.txt | processor.rb
Does STDIN.read wait for cat to pipe the entire textfile.txt in? Or does it assign some indeterminate portion of textfile.txt to the data variable?
I'm asking this because I recently saw a strange bug in one of my programs that suggests that the latter is the case.
The read method reads the entire input, as-is, and returns only when the process producing the output has finished and closed its end of the pipe (EOF). With output from cat, if you call read a subsequent time, you will get back 0 bytes.
In simple terms, a process is allowed to append to its output at any time, which is the case with things like 'tail -f', so you can't be assured that you have read all the data from STDIN without actually checking.
Your OS may implement cat or shell pipes slightly differently, though. I'm not familiar with what POSIX dictates for behavior here.
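To see this in practice, here is a small sketch of what processor.rb might do (the byte-count messages are just for illustration and go to stderr so they don't interfere with a pipeline):
# STDIN.read blocks until the writer closes its end of the pipe, then returns everything.
data = STDIN.read
warn "read #{data.bytesize} bytes in one call"

# At EOF a further read returns an empty string instead of blocking.
warn "second read: #{STDIN.read.inspect}"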
It's probably line-buffered and reads until it encounters a newline or EOF.
