Ruby - Open3 not finishing subprocess

I'm using:
- Ruby 1.9.3-p448
- Windows Server 2008

I have a file that contains commands used by a program, and I invoke it like this:
C:\> PATH_TO_FOLDER/program.exe file.txt
file.txt contains some commands, so program.exe will do the following:
- Execute the commands
- Read from a DB using an ODBC method provided by the program
- Output the result to a txt file
Run from PowerShell, this command works fine and as expected.
Now I have this in a file (app.rb):
require 'sinatra'
require 'open3'

get '/process' do
  program_path = "path to program.exe"
  file_name = "file.txt"
  Open3.popen3(program_path, file_name) do |i, o, e, w|
    # I have some commands here to execute but just as an example I'm using o.read
    puts o.read
  end
end
Now when I use this by accessing http://localhost/process, Open3 works by doing this (I'm not 100% sure, but after trying several times I think this is the only explanation):
- Reads the commands and executes them (this is OK)
- Tries to read from the DB using the ODBC method (here is my problem: I need to receive some output from Open3 so I can show it in a browser, but I guess when it tries to read, it starts another process that Open3 is not aware of, so Open3 goes on and finishes without waiting for it)
- Exits
I've found the following:
- Use Thread#join (in this case, w.join) in order to wait for the process to finish, but it doesn't work.
- Open4 seems to handle child processes, but it doesn't work on Windows.
- Process.wait(pid), in this case pid = w.pid, but it also doesn't work.
- Timeout.timeout(n); the problem here is that I'm not sure how long it will take.
Is there any way of handling this? (waiting for Open3 subprocess so I get proper output).

We had a similar problem getting the exit status, and this is what we did:
require 'open3'
require 'timeout'

Open3.popen3(*cmd) do |stdin, stdout, stderr, wait_thr|
  # print stdout and stderr as it comes in
  threads = [stdout, stderr].collect do |output|
    Thread.new do
      # gets returns nil at EOF; rescue nil also ends the loop if the
      # stream is closed underneath us (the original's rescue '' could
      # loop forever, since '' != nil)
      while (line = (output.gets rescue nil))
        # the original used Rails' String#blank?; strip.empty? is the
        # plain-Ruby equivalent for whitespace-only lines
        unless line.strip.empty?
          puts line
        end
      end
    end
  end
  # get exit code as a Process::Status object
  process_status = wait_thr.value #.exitstatus
  # wait for logging threads to finish before continuing
  # so we don't lose any logging output
  threads.each(&:join)
  # wait up to 5 minutes to make sure the process has really exited
  Timeout.timeout(300) do
    until process_status.exited?
      sleep(1)
    end
  end rescue nil
  process_status.exitstatus.to_i
end

Using Open3.popen3 is easy only for trivial cases. I do not know the real code for handling the input, output and error channels of your subprocess. Neither do I know the exact behaviour of your subprocess: Does it write to stdout? Does it write to stderr? Does it try to read from stdin?
This is why I assume that there are problems in the code that you replaced by puts o.read.
A good summary about the problems you can run into is on http://coldattic.info/shvedsky/pro/blogs/a-foo-walks-into-a-bar/posts/63.
Though I disagree with the author of the article, Pavel Shved, when it comes to finding a solution. He recommends his own solution. I just use one of the wrapper functions for popen3 in my projects: Open3.capture*. They do all the difficult things like waiting for stdout and stderr at the same time.
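For example, a minimal sketch of that approach, reusing the placeholder program path and file name from the question:
require 'open3'

# capture3 drains stdout and stderr concurrently (so neither pipe can
# fill up and deadlock the child) and waits for the process to exit
# before returning.
stdout, stderr, status = Open3.capture3("path to program.exe", "file.txt")
puts stdout
warn stderr unless stderr.empty?
puts "exit status: #{status.exitstatus}"
This works on Windows too; all the reading and waiting is handled for you.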


Git Hook - Ruby Code - Interactive Input

I am trying to take user input in git hook execution code (a commit-msg hook), but Ruby does not stop at the input point; it executes the code as if the input were a puts statement. Here is the code I tried without success:
#!/usr/bin/env ruby
require 'open3'

def take_input_here
  Open3.popen3("pwd", :chdir => "/") { |stdin, stdout, stderr, thread|
    p stdout.read.chomp #=> "/"
  }
  input_val = gets.chomp
  puts input_val
  puts 'Hello World!'
end

take_input_here
puts "Commit Aborted."
Process.exit(1)
Can somebody please help me take this interactive input, or else suggest a good language for writing git hooks? Thanks in advance.
Most Git hooks are run with stdin either coming from a pipe to which Git writes information, or with stdin disconnected from the terminal entirely. The commit-msg hook falls into this second category.
It won't matter which language you use: reading stdin in a commit-msg hook will see EOF immediately, as stdin is connected to /dev/null (Linux/Unix) or NUL: (Windows).
On Unix-like systems, you can try opening /dev/tty. Note that if Git is being run from something that doesn't have a /dev/tty (some detached process, e.g., via cron) or where reading /dev/tty is bad for some other reason, this may cause other issues, so be careful with this.
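On such a system, a minimal sketch of that idea for the hook itself (the prompt text is an assumption):
# read the user's answer from the controlling terminal instead of stdin,
# which Git has redirected away for the commit-msg hook
answer = File.open('/dev/tty', 'r') do |tty|
  $stderr.print 'Proceed with commit? [y/N] '
  tty.gets.to_s.chomp
end
Process.exit(1) unless answer.downcase == 'y'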

Unix commands work on server but not in ruby ssh session

I am trying to learn how to use the net-ssh gem for Ruby. I want to execute the commands below after I log in (my home directory is /home/james):
cd /
pwd
ls
When I do this with PuTTY, it works and I can see a list of directories. But when I do it with Ruby code, it does not give me the same output.
require 'rubygems'
require 'net/ssh'

host = 'server'
user = 'james'
pass = 'password123'

def get_ssh(host, user, pass)
  ssh = nil
  begin
    ssh = Net::SSH.start(host, user, :password => pass)
    puts "conn successful!"
  rescue
    puts "error - cannot connect to host"
  end
  return ssh
end

conn = get_ssh(host, user, pass)

def exec(linux_code, conn)
  puts linux_code
  result = conn.exec!(linux_code)
  puts result
end

exec('cd /', conn)
exec('pwd', conn)
exec('ls', conn)
conn.close
Output -
conn successful!
cd /
nil
pwd
/home/james
ls
nil
I was expecting pwd to give me / instead of /home/james. That is how it works in putty. What is the mistake in the ruby code?
It seems like every command runs in its own environment, so the current directory is not carried over from exec to exec. You can verify this if you do:
exec('cd / && pwd', conn)
It will print /. It is not clear from the documentation how to make all the commands execute in the same environment, or whether this is even possible.
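A minimal workaround sketch, reusing the exec helper and conn defined in the question: chain everything that depends on shared state into a single call.
commands = ['cd /', 'pwd', 'ls']
# '&&' runs the commands in order in one shell invocation, sharing one
# working directory, and stops at the first failure
exec(commands.join(' && '), conn)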
This is because net/ssh is stateless: each exec! call runs the command in a new channel with its own shell, so no state (such as the current directory) carries over between executions.
You can use the rye gem, which implements a workaround for this, but I do not know whether it works with Ruby > 2, since its development is not very active.
Another way is to use a pty process, in which you open a pseudo terminal with the ssh command, then use its input and output files to write commands to the terminal and read the results. To read the results you need to use the select method of the IO class. But you need to learn how to use those utilities, since they are not that obvious for an inexperienced programmer.
And, yay, I found out how to do that, and in fact it is quite simple. I think I did not get to this solution last time because I was a little new to this whole net-ssh, pty terminal thing. But I finally found it, and here is an example.
require 'net/ssh'

shell = {} # this will save the open channel so that we can use it across threads
threads = []

# the shell thread
threads << Thread.new do
  # Connect to the server
  Net::SSH.start('localhost', 'your_user_name', password: 'your_password') do |session|
    # Open an ssh channel
    session.open_channel do |channel|
      # send a shell request; this will open an interactive shell to the server
      channel.send_channel_request "shell" do |ch, success|
        if success
          # Save the channel to be used in the other thread to send commands
          shell[:ch] = ch
          # Register a data event:
          # this will be triggered whenever there is data (output) from the server
          ch.on_data do |ch, data|
            puts data
          end
        end
      end
    end
    # run the connection's event loop so the channel callbacks fire
    session.loop
  end
end

# the commands thread
threads << Thread.new do
  # wait until the shell thread has opened the channel
  sleep 0.1 until shell[:ch]
  loop do
    # This will prompt for a command in the terminal
    print ">"
    cmd = gets
    # Make sure that cmd ends with '\n'; since in this example cmd
    # comes from the user, it already has a trailing eol
    shell[:ch].send_data cmd
    # exit if the user enters the exit command
    break if cmd == "exit\n"
  end
end

threads.each(&:join)
And here we are: an interactive terminal using the net-ssh Ruby gem.
For more info, look here (it covers the previous version 1, but it is very useful for understanding how every piece works), and here.

Ruby - Open3.popen3 / how to print the output

I have a little Ruby script which does a MySQL import in the manner of mysql -u <user> -p<pass> -h <host> <db> < file.sql, but utilizes Open3.popen3 to do so. This is what I have so far:
mysqlimp = "mysql -u #{mysqllocal['user']} "
mysqlimp << "-h #{mysqllocal['host']} "
mysqlimp << "-p#{mysqllocal['pass']} "
mysqlimp << "#{mysqllocal['db']}"
Open3.popen3(mysqlimp) do |stdin, stdout, stderr, wthr|
stdin.write "DROP DATABASE IF EXISTS #{mysqllocal['db']};\n"
stdin.write "CREATE DATABASE #{mysqllocal['db']};\n"
stdin.write "USE #{mysqllocal['db']};\n"
stdin.write mysqldump #a string containing the database data
stdin.close
stdout.each_line { |line| puts line }
stdout.close
stderr.each_line { |line| puts line }
stderr.close
end
That actually does the job, but there is one thing that bothers me concerning the output I would like to see.
If I change the first line to:
mysqlimp = "mysql -v -u #{mysqllocal['user']} " #note the -v
then the whole script hangs forever.
I guess that happens because the read and write streams block each other, and I also guess that stdout needs to be read regularly so that stdin will continue to be consumed. In other words, as long as the stdout buffer is full, the process waits until it is drained, but since reading is only done at the very bottom, that never happens.
I hope someone can verify my theory. How could I write code that prints everything from stdout and also writes everything to stdin?
Thanks in advance!
Since you are only writing to stdout, you can simply use Open3.popen2e, which consolidates stdout and stderr into a single stream.
To write newline-terminated strings to a stream, you can use puts as you would with $stdout in a simple hello world program.
You must use wait_thread.join or wait_thread.value to wait until the child process terminates.
In any case, you will have to start a separate thread for reading from the stream if you want to see the results immediately.
Example:
require 'open3'

cmd = 'sh'
Open3.popen2e(cmd) do |stdin, stdout_stderr, wait_thread|
  Thread.new do
    stdout_stderr.each { |l| puts l }
  end
  stdin.puts 'ls'
  stdin.close
  wait_thread.value
end
Your code, fixed:
require 'open3'

mysqldump = # ...

mysqlimp = "mysql -u #{mysqllocal['user']} "
mysqlimp << "-h #{mysqllocal['host']} "
mysqlimp << "-p#{mysqllocal['pass']} "
mysqlimp << "#{mysqllocal['db']}"

Open3.popen2e(mysqlimp) do |stdin, stdout_stderr, wait_thread|
  Thread.new do
    stdout_stderr.each { |l| puts l }
  end
  stdin.puts "DROP DATABASE IF EXISTS #{mysqllocal['db']};"
  stdin.puts "CREATE DATABASE #{mysqllocal['db']};"
  stdin.puts "USE #{mysqllocal['db']};"
  stdin.close
  wait_thread.value
end
Whenever you start a process from the command line or via fork, the process inherits stdin, stdout and stderr from the parent process. This means that if your command line runs in a terminal, the stdin, stdout and stderr of the new process are connected to the terminal.
Open3.popen3, on the other hand, does not connect stdin, stdout and stderr to the terminal, because you do not want direct user interaction. So we need something else.
For stdin, we need something with two abilities:
- The parent process needs something to enqueue data that the subprocess is supposed to get when it reads from stdin.
- The subprocess needs something that offers a read function like stdin does.
For stdout and stderr, we need something similar:
- The subprocess needs something to write to. puts and print should enqueue the data that the parent process is supposed to read.
- The parent process needs something that offers a read function in order to get the stdout and stderr data of the subprocess.
This means that for stdin, stdout and stderr, we need three queues (FIFOs) for communication between the parent process and the subprocess. These queues have to act a little bit like files, as they have to provide read, write (for puts and print), close and select (is data available?).
For this purpose, both Linux and Windows provide anonymous pipes. This is one of the conventional (local) interprocess communication mechanisms. And, well, Open3.popen3 really wants to do communication between two different processes. This is why Open3.popen3 connects stdin, stdout and stderr to anonymous pipes.
Each pipe, be it anonymous or named, has a buffer of limited size. This size depends on the operating system. The catch is: if the buffer is full and a process tries to write to the pipe, the operating system suspends the process until another process reads from the pipe.
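A toy sketch that makes this blocking visible, assuming the pipe buffer is smaller than 1 MB (64 KB is typical on Linux):
r, w = IO.pipe
writer = Thread.new do
  w.write("x" * 1_000_000) # far more than the pipe buffer holds
  w.close
end
sleep 1
puts writer.alive?  # => true: the writer is suspended, nobody is reading yet
r.read              # draining the pipe unblocks the writer
writer.join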
This may be your problem:
- You keep feeding data to your subprocess, but you do not read what your subprocess writes to stdout.
- Consequently, the output of your subprocess keeps accumulating in a buffer until the buffer is full.
- That is when the operating system suspends your subprocess (its puts or print blocks).
- You can still feed data to the anonymous pipe that is connected to the stdin of your subprocess until too much stdin data has accumulated and that pipe's buffer is full too. Then the operating system will suspend the parent process (stdin.write will block).
I advise you to use Open3.capture2e or a similar wrapper around Open3.popen3. You can pass data to the subprocess with the keyword argument :stdin_data.
If you insist on communicating with your subprocess "interactively", you need to learn about IO.select or using multi-threading. Both of them are quite a challenge. Better use Open3.capture*.
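Applied to the MySQL import here, a minimal sketch of the capture approach, reusing mysqlimp and mysqldump from the question:
require 'open3'

# capture2e feeds :stdin_data to the child while simultaneously draining
# the combined stdout/stderr stream, so neither pipe can fill up, and it
# returns only after the child has exited.
sql  = "DROP DATABASE IF EXISTS #{mysqllocal['db']};\n"
sql << "CREATE DATABASE #{mysqllocal['db']};\n"
sql << "USE #{mysqllocal['db']};\n"
sql << mysqldump
output, status = Open3.capture2e(mysqlimp, stdin_data: sql)
puts output
puts "import #{status.success? ? 'succeeded' : 'failed'}"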

$stdin.gets is not working when executing a ruby script via a pipeline

Here is some sample Ruby code:
r = gets
puts r
If the script is executed standalone from the console, it works fine. But if I run it via a pipeline:
echo 'testtest' | ruby test.rb
gets seems to be redirected to the pipeline input, but I need some user input.
How can I get it?
Stdin has been attached to the receiving end of the pipe by the invoking shell. If you really need interactive input, you have a couple of choices. You can open the tty input directly, leaving stdin bound to the pipe:
tty_input = open('/dev/tty') {|f| f.gets }
/dev/tty works under Linux and OS X, but might not work everywhere.
Alternatively, you can use a different form of redirection, process substitution, under bash to supply the (formerly piped) input as a pseudo-file passed as an argument, and leave stdin bound to your terminal:
ruby test.rb <(echo 'testtest')
# test.rb
input = open(ARGV[0])  # the process-substitution pseudo-file
std_input = gets       # still reads interactively from your terminal
input.each_line { |line| process_line(line) }

How to wait for system command to end

I'm converting an XLS file to a CSV file with a system command in Ruby.
After the conversion I process the CSV files, but the conversion is still running when the program wants to process the files, so at that point they are non-existent.
Can someone tell me if it's possible to let Ruby wait the right amount of time for the system command to finish?
Right now I'm using:
sleep 20
but if it ever takes longer, that isn't right of course.
What I do specifically is this:
# Call the program to convert the xls
command = "C:/Development/Tools/xls2csv/xls2csv.exe C:/TDLINK/file1.xls"

def do_stuff
  # This is where I use file1.csv; however, it isn't here yet
end

system(command)
do_stuff
Ruby's system("...") method is synchronous; i.e. it waits for the command it calls to return an exit code and system returns true if the command exited with a 0 status and false if it exited with a non-0 status. Ruby's backticks return the output of the commmand:
a = `ls`
will set a to a string with a listing of the current working directory.
So it appears that xls2csv.exe is returning an exit code before it finishes what it's supposed to do. Maybe this is a Windows issue. So it looks like you're going to have to loop until the file exists:
until File.exist?("file1.csv")
  sleep 1
end
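If the conversion can fail, that loop would spin forever. A hedged variant (the 60-second bound is an arbitrary choice) gives up eventually:
require 'timeout'

# wait at most 60 seconds for the converter's output file to appear;
# Timeout::Error is raised if it never does
Timeout.timeout(60) do
  sleep 1 until File.exist?("file1.csv")
end
do_stuff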
Try to use threads:
command = Thread.new do
  system('ruby program.rb') # a long-running program
end
command.join # the main program waits for the thread
puts "command complete"
