I'm trying to find some documentation on how STDIN is handled in Ruby.
I've experimented with this simple script:
# test.rb
loop do
puts "stdin: #{$stdin.gets}"
sleep 2
end
which I've run from bash (on OS X) with:
$ ruby test.rb
As I expected, the call to $stdin.gets is blocking, and the loop waits for the next input. The 2 second sleep time even allows me to enter more lines in one go, and the loop correctly prints them in order, then blocks again once STDIN is drained:
$ ruby test.rb
a
stdin: a
b
stdin: b
c
d
e
stdin: c
stdin: d
stdin: e
So far, all good. I was expecting this.
Then, I made a test with a pipe:
$ mkfifo my_pipe
$ ruby test.rb < my_pipe
And, in another shell:
$ echo "Hello" > my_pipe
This time, it behaved a bit differently.
At first it did wait, blocking the loop. But then, after the first input was passed through the pipe, it kept looping and printing empty strings:
$ ruby test.rb < my_pipe
stdin: Hello
stdin:
stdin:
stdin: Other input
stdin:
So my question is: why the difference? Does it treat the pipe like an empty file? Where is this documented? The docs don't say anything about the blocking behaviour, but they do say that:
Returns nil if called at end of file.
It's a start.
So the short answer is yes, you are getting an EOF from the pipe, because of the way echo works: it opens the pipe for writing, writes to it, then closes it (i.e. sends EOF). A new call to echo will open it back up, write to it, and close it again.
If you had instead used a program that wrote the lines of a file with a 3-second sleep between them, you would see that your application performs blocking waits until that program exits (at which point the never-ending EOFs return).
# slow_write.rb
# e.g. run with: ruby slow_write.rb input.txt > my_pipe
ARGF.each do |line|
  puts line
  STDOUT.flush
  sleep 3
end
I should note that this behavior is not specific to Ruby. The C stdio library behaves exactly the same way, and since most languages are built on the C primitives, they share this behavior as well.
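For illustration, here is a rough sketch of how test.rb could react to that nil return instead of printing empty lines forever (whether to exit or reopen the pipe is up to you):

# test_eof.rb -- a sketch: treat a nil return from gets as EOF
loop do
  line = $stdin.gets
  if line.nil?                      # every writer has closed its end of the pipe
    puts "writer closed the pipe (EOF), exiting"
    break                           # or reopen the pipe here if you prefer
  end
  puts "stdin: #{line}"
  sleep 2
end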
I am using the following lines of code in my Ruby program on Ubuntu:
data = ARGF.read
if data.length != 0
  .....
end
The program runs fine when I run it as "cat file.txt | ruby test.rb"; however, I am unable to handle the following issues:
- When run as "cat | ruby test.rb", the program goes into an endless loop.
- When run as "ruby test.rb", the program goes into an endless loop.
- When run as "cat file1.txt | ruby test.rb", the program gives a "cat: file1.txt: No such file or directory" error.
Any input will be highly appreciated.
I think you misunderstand what ARGF is used for. ARGF.read gives all the data of all the files passed as arguments.
When you don't give any input file, it waits for you to provide the input through stdin. Since you are on Ubuntu, you can just press Ctrl+D, which ends the stream, and then the data is processed normally.
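If you would rather have the script fail fast instead of sitting there when nothing is piped in, a small guard along these lines works (a sketch of mine, using IO#tty? to detect an interactive terminal):

# a sketch: refuse to block on an interactive terminal when no files are given
if ARGV.empty? && $stdin.tty?
  abort "usage: ruby test.rb FILE  (or pipe data on stdin)"
end

data = ARGF.read
if data.length != 0
  # ... process data as before ...
end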
In bash 1:
$ mkfifo /tmp/pipe
$ echo 'something' > /tmp/pipe
Now it hangs and waits for that data to be read.
In bash 2:
$ </tmp/pipe
Now shell 1 goes away: it is closed and my terminal is gone.
Why is this happening?
In the bash manual it is written:
The command substitution $(cat file) can be replaced by the
equivalent but faster $(< file).
So I was experimenting to see whether a plain "< file" works in a similar way to cat for dumping file content to the terminal.
$ bash --version | head -1
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
$ cat /proc/version
Linux version 3.16.0-71-generic (buildd@lgw01-46) (gcc version 4.8.2 (Ubuntu 4.8.2-19ubuntu1) ) #92~14.04.1-Ubuntu SMP Thu May 12 23:31:46 UTC 2016
Edit
After seeing initial comments and answers I will add a bit of clarification.
I'm not concerned about different command line syntaxes.
But what I was really after is this: with $ < /tmp/pipe in the reader shell, the writer shell exits, but with $ cat /tmp/pipe in the reader shell, the writer shell does not exit. Why?
I see that I did not really phrase that in the title or the body, so I should probably open another question.
From the pipe(7) manual page:
If all file descriptors referring to the read end of a pipe have been closed, then a write(2) will cause a SIGPIPE signal to be generated for the calling process.
What happens is that when the reading shell has finished reading and closes its end of the pipe, the writing shell will receive the SIGPIPE signal, and if it doesn't catch it then the shell will be terminated. (Since echo is a shell builtin, the write happens in the shell process itself, which is why the whole writing shell dies rather than just a child process.)
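The same mechanism is easy to reproduce from Ruby; a rough sketch, assuming the /tmp/pipe FIFO from the question, with a reader such as head -1 /tmp/pipe (or the bare < /tmp/pipe) in the other shell. Ruby reports the broken pipe as an Errno::EPIPE exception rather than dying from the signal:

# sigpipe_demo.rb -- a sketch: write to the FIFO after its reader has gone away
begin
  File.open('/tmp/pipe', 'w') do |pipe|    # blocks until a reader opens the FIFO
    pipe.syswrite("hello\n")
    sleep 1                                # give the reader time to close its end
    pipe.syswrite("world\n")               # raises Errno::EPIPE if the reader is gone
  end
rescue Errno::EPIPE
  warn "read end closed: broken pipe"
end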
In the manual, the $ sign is part of the expansion (as with a variable), not the command prompt.
Try the following scripts:
1)
#!/bin/bash
echo $(< /tmp/pipe);
2)
#!/bin/bash
echo $(cat /tmp/pipe);
Both work correctly.
When you type < /tmp/pipe, you connect the standard input of the current shell to the named pipe instead. bash works by continuously reading from its input and executing what it reads as a command.
In shell 1, echo something > /tmp/pipe opens the pipe for writing, writes the string, then blocks until something reads it. As soon as echo completes, it will close its end of the pipe.
< /tmp/pipe opens the pipe for reading, and connects it to shell 2's standard input.
Shell 2 reads from the pipe (and tries to execute a command).
Back in shell 1, the echo, having unblocked after the 2nd shell read from the pipe, completes. The write end of the pipe closes.
With the write end of the pipe closed, shell 2 sees end-of-file when it tries to read another command, and then exits (a read on a pipe whose write end is closed returns EOF; SIGPIPE is only raised on writes).
(An alternate possibility is that shell 2 exits if the command it reads from the pipe and tries to execute causes an error.)
$(< file), on the other hand, is a special case of command substitution. When bash sees that, it simply reads from file itself, rather than spawning a cat process and capturing its output.
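As a side note on the reading side, a Ruby sketch (again assuming the /tmp/pipe FIFO from the question) shows what a reader actually sees when the writer closes its end:

# a sketch: once the writer closes its end, gets simply returns nil (EOF);
# no signal is involved on the read side
File.open('/tmp/pipe', 'r') do |pipe|
  while (line = pipe.gets)
    puts "read: #{line}"
  end
  puts "writer closed its end: EOF"
end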
I'm using:
- Ruby 1.9.3-p448
- Windows Server 2008
I have a file that contains commands that are used by a program. I'm using it in this way:
C:\> PATH_TO_FOLDER/program.exe file.txt
File.txt has some commands, so program.exe will do the following:
- Executes the commands
- Reads from a DB using an ODBC method used by the program
- Outputs the result to a txt file
Run from PowerShell, this command works fine and as expected.
Now I have this in a file (app.rb):
require 'sinatra'
require 'open3'
get '/process' do
program_path = "path to program.exe"
file_name = "file.txt"
Open3.popen3(program_path, file_name) do |i, o, e, w|
# I have some commands here to execute but just as an example I'm using o.read
puts o.read
end
end
Now, when I use this by accessing http://localhost/process, Open3 works by doing the following (I'm not 100% sure, but after trying several times I think this is the only possibility):
- Reads the commands and executes them (this is OK)
- Tries to read from the DB using the ODBC method (here is my problem: I need to receive some output from Open3 so I can show it in a browser, but I guess when it tries to read, it starts another process that Open3 is not aware of, so Open3 goes on and finishes without waiting for it)
- Exits
I've found the following:
- Use Thread#join (in this case, w.join) to wait for the process to finish, but it doesn't work
- Open4 seems to handle child processes, but it doesn't work on Windows
- Process.wait(pid), in this case pid = w.pid, but it also doesn't work
- Timeout.timeout(n), but the problem here is that I'm not sure how long it will take
Is there any way of handling this (waiting for the Open3 subprocess so I get proper output)?
We had a similar problem getting the exit status, and this is what we did:
require 'open3'
require 'timeout'

# cmd here is your command array, e.g. [program_path, file_name]
Open3.popen3(*cmd) do |stdin, stdout, stderr, wait_thr|
  # print stdout and stderr as they come in
  threads = [stdout, stderr].collect do |output|
    Thread.new do
      while ((line = output.gets rescue '') != nil) do
        unless line.strip.empty?   # the original used ActiveSupport's blank?
          puts line
        end
      end
    end
  end

  # wait_thr.value blocks until the child exits and returns its Process::Status
  process_status = wait_thr.value

  # wait for the logging threads to finish before continuing
  # so we don't lose any logging output
  threads.each(&:join)

  # wait up to 5 minutes to make sure the process has really exited
  Timeout::timeout(300) do
    while !process_status.exited?
      sleep(1)
    end
  end rescue nil

  process_status.exitstatus.to_i
end
Using Open3.popen3 is easy only for trivial cases. I do not know the real code for handling the input, output and error channels of your subprocess. Neither do I know the exact behaviour of your subprocess: does it write on stdout? Does it write on stderr? Does it try to read from stdin?
This is why I assume that there are problems in the code that you replaced by puts o.read.
A good summary about the problems you can run into is on http://coldattic.info/shvedsky/pro/blogs/a-foo-walks-into-a-bar/posts/63.
Though I disagree with the author of the article, Pavel Shved, when it comes to finding a solution: he recommends his own solution, while I just use one of the wrapper functions for popen3 in my projects, Open3.capture*. They do all the difficult things, like waiting on stdout and stderr at the same time.
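For example, a minimal sketch of that approach (program.exe and file.txt stand in for your real paths):

require 'open3'

stdout_str, stderr_str, status = Open3.capture3("program.exe", "file.txt")
if status.success?
  puts stdout_str                          # everything the program wrote to stdout
else
  warn "exited with #{status.exitstatus}: #{stderr_str}"
end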
I am trying to build an application that enables the user to interact with a command-line interactive shell, like IRB or Python. This means that I need to pipe user input into the shell and the shell's output back to the user.
I was hoping this was going to be as easy as piping STDIN, STDOUT, and STDERR, but most shells seem to respond differently to STDIN input as opposed to direct keyboard input.
For example, here is what happens when I pipe STDIN into python:
$ python 1> py.out 2> py.err <<EOI
> print 'hello'
> hello
> print 'goodbye'
> EOI
$ cat py.out
hello
$ cat py.err
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
NameError: name 'hello' is not defined
It seems that Python is interpreting STDIN as a script file, and it doesn't produce any of the interactive interface, like the ">>>" at the beginning of each line. It also stops at the first error, since we do not see "goodbye" in the outfile.
Here is what happens with irb (Interactive Ruby):
$ irb 1> irb.out 2> irb.err <<EOI
> puts 'hello'
> hello
> puts 'goodbye'
> EOI
$ cat irb.out
Switch to inspect mode.
puts 'hello'
hello
nil
hello
NameError: undefined local variable or method `hello' for main:Object
from (irb):2
from /path/to/irb:16:in `<main>'
puts 'goodbye'
goodbye
nil
$ cat irb.err
IRB responds differently from Python: it continues executing commands even when there is an error. However, it still lacks the interactive shell interface.
How can an application interact with an interactive shell environment?
Technically, your first sample is not piping the input to Python; it is coming from a file — and yes, file input is treated differently.
The way to persuade a program that its input is coming from a terminal is using a pseudo-tty. There's a master side and a slave side; you'll hook the shell (Python, Ruby) to the slave side, and have your controlling program write to and read from the master side.
That's fairly tricky to set up. You may do better using expect or one of its clones to manage the pseudo-tty. Amongst other related questions, see How to perform automated Unix input?
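In Ruby, the standard pty and expect libraries give you that master/slave pair directly; here is a rough sketch of mine, using irb as the example child program:

require 'pty'
require 'expect'

PTY.spawn("irb --simple-prompt") do |reader, writer, pid|
  reader.expect(/>> /, 5)                  # wait for the first prompt
  writer.puts "puts 'hello'"
  output, = reader.expect(/>> /, 5)        # everything up to the next prompt
  print output if output                   # echoed input, "hello", "=> nil", prompt
  writer.puts "exit"
  Process.wait(pid)
end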
Here is some sample Ruby code:
r = gets
puts r
If the script is executed standalone from the console, it works fine. But if I run it via a pipeline:
echo 'testtest' | ruby test.rb
gets seems to be redirected to the pipeline's input, but I need some user input from the terminal. How can I do that?
Stdin has been attached to the receiving end of the pipe by the invoking shell. If you really need interactive input you have a couple of choices. You can open the tty input directly, leaving stdin bound to the pipe:
tty_input = open('/dev/tty') {|f| f.gets }
/dev/tty works under Linux and OS X, but might not work everywhere.
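A slightly fuller sketch of that idea: the piped data still arrives on stdin, while the prompt is answered from the terminal via /dev/tty.

# a sketch: read the piped data from stdin, ask the user via /dev/tty
piped = $stdin.read                        # e.g. "testtest" from the echo
print "Continue? [y/N] "
answer = File.open("/dev/tty") { |tty| tty.gets }
puts "piped: #{piped.strip}, answered: #{answer.to_s.strip}"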
Alternatively, you can use a different form of redirection, process substitution, under bash to supply the (formerly piped) input as a pseudo-file passed as an argument and leave stdin bound to your terminal:
ruby test.rb <(echo 'testtest')
# test.rb
input = open(ARGV[0])                           # the <(echo 'testtest') pseudo-file
std_input = $stdin.gets                         # $stdin.gets, not plain gets, which would read from ARGV here
input.each_line { |line| process_line(line) }   # process_line is your own handler