$stdin.gets is not working when execute ruby script via pipeline - ruby

Here comes a sample ruby code:
r = gets
puts r
If the script is executed standalone from the console, it works fine. But if I run it via a pipeline:
echo 'testtest' | ruby test.rb
gets seems to be redirected to the pipeline input, but I need some actual user input.
How can I do that?

Stdin has been attached to the receiving end of the pipe by the invoking shell. If you really need interactive input you have a couple of choices. You can open the tty input directly, leaving stdin bound to the pipe:
tty_input = open('/dev/tty') {|f| f.gets }
/dev/tty works under Linux and OS X, but might not work everywhere.
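For example, here is a minimal sketch of test.rb along those lines, reading the piped data from stdin while prompting on the terminal (the prompt text and variable names are only illustrative):
# test.rb -- read the piped data from stdin, but prompt the user on the tty
piped = $stdin.read                    # everything arriving through the pipe
answer = File.open('/dev/tty', 'r+') do |tty|
  tty.print 'Keep going? (y/n) '       # prompt goes to the terminal, not to stdout
  tty.gets.to_s.chomp
end
puts "piped data: #{piped.inspect}"
puts "user said:  #{answer}"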
Alternatively, you can use a different form of redirection, process substitution, under bash to supply the (formerly piped) input as a pseudo-file passed as an argument and leave stdin bound to your terminal:
ruby test.rb <(echo 'testtest')
# test.rb
input = open(ARGV[0])                          # the substituted "file" from <(...)
std_input = $stdin.gets                        # stdin is still the terminal here
input.each_line { |line| process_line(line) }

Related

How to check for command return codes inside a ruby shell script?

I have a simple shell script written in ruby that runs some predefined commands and saves the output strings.
The script works well, but I need a way to branch conditionally if the command fails. I've tried using the $? object, but the script exits before it gets there.
#!/usr/bin/ruby
def run_command(cmd)
  `#{cmd}`
  if $?.success?
    # continue as normal
  else
    # ignore this command and move on
  end
end
run_command('ls')
run_command('not_a_command')
# Output:
# No such file or directory - not_a_command (Errno::ENOENT)...
I've tried $?.exitstatus or even just puts $? but it always exits before it gets there because the script is obviously running the command before hitting that line.
Is there a way to check if the command will run before actually running it in the script?
Hope that's clear, thanks for your time!
Use system (which returns true or false depending on the exit code, or nil if the command could not be run at all) instead of backticks (which return the output string):
if system(cmd)
  ...
else
  ...
end
If you want it to run quietly without polluting your logs / output:
system(cmd, out: File::NULL, err: File::NULL)
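For reference, the run_command method from the question could look roughly like this with system; the puts messages are only illustrative:
#!/usr/bin/ruby
def run_command(cmd)
  if system(cmd, out: File::NULL, err: File::NULL)
    puts "#{cmd} succeeded"                 # continue as normal
  else
    puts "#{cmd} failed or was not found"   # system returns false or nil instead of raising
  end
end

run_command('ls')
run_command('not_a_command')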

Ruby STDIN, blocking vs not blocking

I'm trying to find some documentation on how STDIN is handled in Ruby.
I've experimented with this simple script:
# test.rb
loop do
  puts "stdin: #{$stdin.gets}"
  sleep 2
end
That I've run from bash (on OS X) with:
$ ruby test.rb
As I expected, the call to $stdin.gets is blocking, and the loop waits for the next input. The 2 second sleep time even allows me to enter more lines in one go, and the loop correctly prints them in order, then stops again when STDIN is cleared:
$ ruby test.rb
a
stdin: a
b
stdin: b
c
d
e
stdin: c
stdin: d
stdin: e
So far, all good. I was expecting this.
Then, I made a test with a pipe:
$ mkfifo my_pipe
$ ruby test.rb < my_pipe
And, in another shell:
$ echo "Hello" > my_pipe
This time, it behaved a bit differently.
At first it did wait, blocking the loop. But then, after the first input was passed through the pipe, it kept looping and printing empty strings:
$ ruby test.rb < my_pipe
stdin: Hello
stdin:
stdin:
stdin: Other input
stdin:
So my question is: why the difference? Does it treat the pipe like an empty file? Where is this documented? The docs don't say anything about the blocking behaviour, but they do say that:
Returns nil if called at end of file.
It's a start.
So the short answer is yes, you are getting an EOF from the pipe. The way echo works is that it opens the pipe for writing, writes to it, then closes it (i.e. sends EOF). Then a new call to echo will open it back up, write to it, and close it again.
If you had instead used a program that printed the lines of a file with a 3 second sleep between them, you would see your application perform blocking waits until that program exits (at which point the never-ending EOFs would return).
# slow_write.rb
ARGF.each do |line|
  puts line
  STDOUT.flush
  sleep 3
end
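For example, assuming the named pipe from the question is still set up, you could run this from the second shell (some_file.txt is just a placeholder):
$ ruby slow_write.rb some_file.txt > my_pipe
While that writer is running, the gets calls block normally between lines; once it exits and closes the pipe, gets goes back to returning nil immediately.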
I should note that this behavior is not specific to Ruby. The C stdio library has the exact same behavior, and since most languages use the C primitives as their basis they have the same behavior as well.
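If you want the reader loop from the question to stop (or do something else) at EOF instead of spinning on empty lines, a minimal variation is to check for the nil return explicitly:
# test.rb -- stop once every writer has closed the pipe
loop do
  line = $stdin.gets
  break if line.nil?        # nil means EOF: no writer has the pipe open any more
  puts "stdin: #{line}"
  sleep 2
end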

Ruby - Open3 not finishing subprocess

I'm using:
- Ruby 1.9.3-p448
- Windows Server 2008
I have a file that contains commands used by a program. I'm using it in this way:
C:\> PATH_TO_FOLDER/program.exe file.txt
file.txt has some commands, so program.exe will do the following:
- Execute commands
- Reads from a DB using an ODBC method used by program
- Outputs result in a txt file
Using PowerShell, this command works fine and as expected.
Now I have this in a file (app.rb):
require 'sinatra'
require 'open3'
get '/process' do
  program_path = "path to program.exe"
  file_name = "file.txt"
  Open3.popen3(program_path, file_name) do |i, o, e, w|
    # I have some commands here to execute but just as an example I'm using o.read
    puts o.read
  end
end
Now, when using this by accessing http://localhost/process, Open3 works by doing this (I'm not 100% sure, but after trying several times I think this is the only explanation):
- Reads the commands and executes them (this is OK)
- Tries to read from the DB using the ODBC method (here is my problem: I need to receive some output from Open3 so I can show it in the browser, but I guess when it tries to read it starts another process that Open3 is not aware of, so Open3 goes on and finishes without waiting for it)
- Exits
I've found the following:
- Thread#join (in this case, w.join) to wait for the process to finish, but it doesn't work
- Open4 seems to handle child processes, but it doesn't work on Windows
- Process.wait(pid), in this case pid = w.pid, but it also doesn't work
- Timeout.timeout(n), but the problem here is that I'm not sure how long it will take
Is there any way of handling this (waiting for the Open3 subprocess so I get the proper output)?
We had a similar problem getting the exit status, and this is what we did:
Open3.popen3(*cmd) do |stdin, stdout, stderr, wait_thr|
  # print stdout and stderr as they come in
  threads = [stdout, stderr].collect do |output|
    Thread.new do
      while (line = output.gets)          # gets returns nil at EOF
        puts line unless line.strip.empty?
      end
    end
  end

  # wait_thr.value blocks until the child exits and returns its Process::Status
  process_status = wait_thr.value # .exitstatus

  # wait for the logging threads to finish before continuing
  # so we don't lose any logging output
  threads.each(&:join)

  # wait up to 5 minutes to make sure the process has really exited
  begin
    Timeout.timeout(300) do
      sleep(1) until process_status.exited?
    end
  rescue Timeout::Error
    nil
  end

  process_status.exitstatus.to_i
end
Using Open3.popen3 is easy only for trivial cases. I do not know the real code for handling the input, output and error channels of your subprocess. Neither do I know the exact behaviour of your subprocess: does it write on stdout? Does it write on stderr? Does it try to read from stdin?
This is why I assume that there are problems in the code that you replaced by puts o.read.
A good summary about the problems you can run into is on http://coldattic.info/shvedsky/pro/blogs/a-foo-walks-into-a-bar/posts/63.
Though I disagree with the author of the article, Pavel Shved, when it comes to finding a solution: he recommends his own tool, whereas I just use one of the wrapper functions for popen3 in my projects, Open3.capture*. They do all the difficult things like waiting for stdout and stderr at the same time.
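For instance, here is a rough sketch of the Sinatra handler from the question rewritten with Open3.capture3, which runs the command and hands back its output and exit status (paths and messages are only illustrative, and this is not a guaranteed fix for the ODBC child-process issue):
require 'sinatra'
require 'open3'

get '/process' do
  program_path = "path to program.exe"
  file_name = "file.txt"

  # capture3 runs the command, collects stdout/stderr and waits for the exit status
  out, err, status = Open3.capture3(program_path, file_name)

  if status.success?
    out                              # show the command's output in the browser
  else
    "command failed: #{err}"
  end
end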

how to enter an input to gets function via script in ruby

Whenever gets is called, is there any way to enter input via the script itself, instead of entering it manually in Windows?
For example:
puts "enter your choice"
ch=gets
puts ch
In the above script, when gets is called, is there any way to supply that input from the script in Windows?
Thanks in advance.
The gets function simply reads from $stdin, so all you have to do is open a new File or StringIO for reading and then assign it to $stdin.
For example, if you have a file called pancakes.txt and you do this:
$stdin = File.new('pancakes.txt', 'r')
puts gets
Then you'll see the first line of pancakes.txt on the standard output.
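The same trick works with an in-memory StringIO, which is handy for tests; a minimal sketch (the "42\n" input is just an example, and it assumes ARGV is empty so gets falls back to standard input):
require 'stringio'

$stdin = StringIO.new("42\n")   # pretend the user typed "42" and pressed Enter

puts "enter your choice"
ch = gets
puts ch                         # => 42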
1) If you want to provide external input to STDIN when invoking your script
Let's say your gets command is inside a file named prog.rb. If you'd like to provide some fixed input to STDIN when running prog.rb, you could run it using a pipe from the command line:
echo "My input to gets" | ruby prog.rb
This will output
enter your choice
My input to gets
in the shell without requiring manual intervention.
2) An example for feeding STDIN from within the same script:
class MyIO
  def gets
    "1\n"
  end
end

$stdin = MyIO.new

puts "enter your choice"
ch = gets
puts ch # => 1

Running a shell command from Ruby: capturing the output while displaying the output?

I have a problem.
I want to run a ruby script from another ruby script and capture its output, while also letting it print to the screen.
runner
#!/usr/bin/env ruby
print "Enter your password: "
password = gets.chomp
puts "Here is your password: #{password}"
The script file that I run:
start.rb
output = `runner`
puts output.match(/Here is your (password: .*)/).captures[0].to_s
As you can see, there is a problem.
While the first line of start.rb runs, the screen is empty.
I cannot see the "Enter your password: " prompt from runner.
Is there a way to display the output of the runner script before it has finished, and still let me capture it to a string so I can process the information, e.g. using match like in this example?
runner.rb
STDOUT.print "Enter your password: "
password = gets.chomp
puts "Here is your password: #{password}"
Note the STDOUT.print: the prompt is written to the real standard output, while the later puts (which writes to $stdout) is captured by the buffer.
start.rb
require "stringio"
buffer = StringIO.new
$stdout = buffer
require "runner"
$stdout = STDOUT
buffer.rewind
puts buffer.read.match(/Here is your (password: .*)/).captures[0].to_s
output
Enter your password: hello
password: hello
I recently did a write-up on this here: Output Buffering with Ruby
Try this:
rd, wr = IO.pipe

pid = Process.fork do
  $stdout.reopen(wr)
  rd.close
  exec("command")
end

wr.close
rd.each do |line|
  puts "line from command: #{line}"
end
Process.wait(pid)
It is similar if you want to capture stderr instead. If you need to capture both it would be a bit more difficult (Kernel.select?).
Edit: Some explanation. This is an ancient Unix procedure: pipe + fork + calls to dup2 (reopen) depending on what you want. In a nutshell: you create a pipe as a means of communication between child and parent. After the fork, each peer closes the pipe endpoint it does not use, the child remaps (reopens) the channel you need onto the write end of the pipe, and finally the parent reads from the read end of the pipe.
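If you also want to keep what the command printed for later processing (as in the question), the read loop above can accumulate the lines while it echoes them; a small variation, with the regexp taken from the question:
captured = ""
rd.each do |line|
  puts "line from command: #{line}"   # still echoed to the screen
  captured << line                    # and kept for later processing
end
Process.wait(pid)

if (m = captured.match(/Here is your (password: .*)/))
  puts m.captures[0]
end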
For script-independent output logging, you might want to enable it from the terminal emulator (the shell container).
screen -L
OR
xterm -l
This will capture all output produced by any shell or program running inside the emulator, including output generated by your ruby scripts.
You could use tee to write the contents to a file or a pipe, and read the file afterwards.
Have a look at POpen4.
It claims to be platform independent (but I do not think it works in JRuby, where you can use IO#popen instead).
Have your script do its prompt output to stderr.
echo "Enter something" >&2
read answer
echo "output that will be captured"
This will be done for you if you use read -p to issue the prompt:
read -p "Enter something" answer
echo "output that will be captured"
io = IO.popen(<your command here>)
log = io.readlines
io.close
Now the log variable holds the output of the executed command. Parse it, convert it, or do whatever you want with it.
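If you want to see the output as it is produced instead of only after readlines returns, the block form of IO.popen lets you print and collect line by line; a sketch (the ruby -e one-liner just stands in for a slow command):
log = []
IO.popen(["ruby", "-e", "3.times { |i| puts i; STDOUT.flush; sleep 1 }"]) do |io|
  io.each_line do |line|
    puts line      # show the line as soon as it arrives
    log << line    # and keep a copy for later processing
  end
end
puts "captured #{log.size} lines"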
