Can't pipe into Gvim from a Ruby subprocess - ruby

I've been attempting to use the PA_translator.vim plugin but found that it doesn't work in Win32 Gvim. This appears to be because, in an embedded Vim Ruby script, it's not possible to use any of the commands that pipe in from a subprocess. The original process builds a command dynamically and then launches a subprocess to obtain a JSON snippet like so:
cmd = "require 'net/http'; p Net::HTTP.get('%s', '%s')"% [host, path]
response = `ruby -e "#{cmd}"`
If I run this in a command-line Ruby script it works fine, but inside a Vim script the pipe appears to return an empty string.
I've also tried several other methods which produce the same result:
response = ''
IO.popen("ruby.exe", "w+") do |f|
  f.write cmd
  f.close_write
  response = f.read
  p response
end
And even:
require 'stringio'

def redirect
  orig_defout = $stdout
  $stdout = StringIO.new
  yield
  $stdout.string
ensure
  $stdout = orig_defout
end
response = redirect { eval cmd }
All of these seem to fail for the same reason: it's not possible to get the output from the pipe, and I get back an empty string. GVim is a true Win32 process, so is there some reason why piping from a subprocess won't work?
EDIT: If I try to capture piped output from embedded vim/perl, that works fine, so I guess it's some particular issue with the vim -> win32 -> ruby combination:
fun! SayHello()
perl << EOF
  $bob = `ls`;
  VIM::Msg($bob);
EOF
endfun
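One hedged workaround, if pipes are blocked but the filesystem is not, is to redirect the subprocess's stdout to a temporary file via the shell and read the file back. This is only a sketch: `run_via_tempfile` is a hypothetical helper, and the network call is replaced with a trivial `print` for illustration.

```ruby
require 'tempfile'

# Hypothetical workaround: avoid reading from a pipe entirely by
# redirecting the subprocess's stdout to a temp file, then reading it.
def run_via_tempfile(cmd)
  tmp = Tempfile.new('vim_ruby_out')
  begin
    # Shell-level redirection writes the subprocess output to the temp file.
    system(%Q{ruby -e "#{cmd}" > "#{tmp.path}"})
    File.read(tmp.path)
  ensure
    tmp.close
    tmp.unlink
  end
end

response = run_via_tempfile("print 'hello from subprocess'")
```

Whether shell redirection survives where Ruby's pipes fail inside Gvim is untested here, but it sidesteps the pipe read that appears to be the broken piece.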

Related

Can't print file after writing to it

For some reason I cannot get this to print anything with the final line.
prev_std = STDOUT
$stdout = File.open(reportname, 'w')
# Several things happen to print to STDOUT here
$stdout = STDOUT
# Tell them that the report was written
puts "Report written to #{ reportname }"
# Slurp in the report ( FIXME )
reporttext = File.open(reportname, 'r') { |f| f.read }
# Print out the report after ( FIXME )
puts reporttext
I've just written the report to the file, but somehow I can't read it back to the screen. I'm using the exact same string in the code to refer to the file in both cases. Checking at the shell prompt proves the file was written correctly, and yet I still can't get it to print to the screen.
What am I doing wrong here?
It looks like the issue comes from the file not being closed. Changing $stdout doesn't close the file object it used to refer to. Add $stdout.close on the line before you reassign it to the old stdout.
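Applied to the snippet above, the fix looks like this (a sketch; a temp file stands in for reportname):

```ruby
require 'tempfile'

tmpfile = Tempfile.new('report')
reportname = tmpfile.path

prev_std = STDOUT
$stdout = File.open(reportname, 'w')
puts "the report body"   # goes into the file, not the screen
$stdout.close            # flush and close BEFORE switching back
$stdout = prev_std

puts "Report written to #{reportname}"
reporttext = File.read(reportname)
puts reporttext          # now prints the report contents
```

Without the close, the buffered "report body" line may still be sitting in Ruby's IO buffer when the file is re-read, which is why the final puts printed nothing.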

Ruby - Using Kernel.exec with custom STDOUT

I'm trying to exec a shell process such that its standard output is prefixed with an identifier.
My approach is to write a custom IO object that re-implements write, passing it as the :out argument to exec (documented under Process::spawn).
require "delegate"
class PrefixedStdout < DelegateClass(IO)
  def initialize(prefix, io)
    @prefix = prefix
    super(io)
  end

  def write(str)
    super("#{@prefix}: #{str}")
  end
end
pr_stdout = PrefixedStdout.new("my_prefix", $stdout)
pr_stdout.write("hello\n") # outputs "my_prefix: hello"
exec("echo hello", out: pr_stdout) # outputs "hello"
Somehow exec is bypassing PrefixedStdout#write and calling $stdout.write directly. How do I force exec to use my prefixed output stream as its stdout?
What gets preserved in the other process is the underlying file descriptor (or rather, they are hooked up under the hood), so, as I commented, I don't think you'll ever get writes to that descriptor funnelled through your write method: exec replaces the running process with a new one.
A possible approach is to create a pipe, pass one end to your child process, and then read from the other end, inserting prefixes as needed.
For example you might do:
IO.pipe do |read_pipe, write_pipe|
  fork do
    exec("echo hello", out: write_pipe)
  end
  write_pipe.close
  while line = read_pipe.gets
    puts "prefix: #{line}"
  end
end
You might also be interested in IO.popen which wraps some of this up.
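For instance, IO.popen forks the child and hands you its stdout as an IO object, so the prefixing loop stays in the parent process (a minimal sketch):

```ruby
# IO.popen replaces the explicit pipe + fork + exec dance: the block
# receives an IO connected to the child's stdout.
lines = IO.popen("echo hello") { |out| out.readlines }
prefixed = lines.map { |line| "prefix: #{line}" }
prefixed.each { |l| print l }
```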
Somehow exec is bypassing PrefixedStdout#write and calling $stdout.write
Take a look at this example:
class MyIO < IO
  def initialize(fd)
    super
  end

  def write(str)
    STDOUT.puts 'write called'
    super
  end
end

fd = IO.sysopen("data.txt", "w")
io = MyIO.new(fd)
io.write "goodbye\n"

puts '---now with exec()...'
exec("echo hello", :out => io)
--output:--
write called
---now with exec()...
Now, what do you think is in the file data.txt?
spoiler:
$cat data.txt
hello
So passing an IO object to exec() 'works', but not the way you expected: exec() never calls io.write() to write the output of the child process to io. Instead, I assume exec() obtains the file descriptor for io, then passes it to some C code, which does some system level redirection of the output from the child process to the file data.txt.
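A small experiment supports this (a sketch, assuming a POSIX-ish environment, with data.txt replaced by a temp file): spawn only uses the IO object's file descriptor, so the child's output lands in the file without any Ruby write method being called.

```ruby
require 'tempfile'

tmp = Tempfile.new('out')
# Process.spawn (like exec) only takes the IO object's fileno: the
# child's stdout is wired to that descriptor at the OS level.
pid = Process.spawn("echo hello", out: tmp)
Process.wait(pid)

tmp.rewind
content = tmp.read   # written by the child, not by any Ruby #write call
```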
Do you have to use exec()? If not:
prefix = "prefix: "
cmd = 'echo hello'
output = `#{cmd}`
puts "#{prefix}#{output}"
--output:--
prefix: hello

Ruby: Printing system output in real time?

I have a ruby rake task that calls a bash script via:
Open3.popen3('/path/file_converter.sh', file_list, output_format)
That bash script outputs logs to the command line as it processes (which takes from 30 secs to 5 hours)
When I call the rake task, the output from bash is returned to the command line, but only as one large message after the entire script has run. Anyone know of a way to pipe the command-line output directly to Ruby output as it occurs?
According to the documentation you should be able to use the output stream given in the block:
Open3.popen3('/path/file_converter.sh', file_list, output_format) do |_, out, _, _|
  out.each_line do |line|
    puts line
  end
end
Put the output into a file and run the process in the background in a new thread. You can then parse the file as it grows.
class FileConverter
  def initialize
    @output_file = '/tmp/something.txt'
    output_format = 'foo'
    file_list = 'bar foo something'
    @child = Thread.new do
      `/path/file_converter.sh #{file_list} #{output_format} >#{@output_file} 2>&1`
    end
  end

  def data
    File.readlines(@output_file)
  end

  def parse
    while @child.alive?
      # parse data # TODO: need to implement real parsing
      sleep 0.5
    end
  end
end

fc = FileConverter.new
fc.parse

Pipe symbol before command in "open"

I stumbled over the following line of code
open("|cd lib && /opt/jruby/bin/jruby jasper_pdf.rb") { |input| open("log/jasper_pdf.log", "w") { |f| f.write(input.read) } }
What is the pipe symbol before the cd command for?
The Ruby documentation for Kernel#open says:
If path starts with a pipe character ("|"), a subprocess is created,
connected to the caller by a pair of pipes. The returned IO object may
be used to write to the standard input and read from the standard
output of this subprocess.
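A minimal illustration of that pipe prefix:

```ruby
# Kernel#open with a leading "|" spawns the command as a subprocess
# and returns an IO connected to its standard input and output.
io = open("|echo hello")
result = io.read   # reads the subprocess's standard output
io.close
```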
In your case it is used to log the output of the process spawned by the command /opt/jruby/bin/jruby jasper_pdf.rb to the file log/jasper_pdf.log.
It is roughly equivalent to using the Open3 module like this:
require 'open3'

Open3.popen2e('cd lib && /opt/jruby/bin/jruby jasper_pdf.rb') do |_, output, _|
  open('log/jasper_pdf.log', 'w') do |f|
    f.write(output.read)
  end
end

How to proxy a shell process in ruby

I'm creating a script to wrap jdb (java debugger). I essentially want to wrap this process and proxy the user interaction. So I want it to:
- start jdb from my script
- send the output of jdb to stdout
- pause and wait for input when jdb does
- when the user enters commands, pass them to jdb

At the moment I really want a pass-through to jdb. The reason for this is to initialize the process with specific parameters and potentially add more commands in the future.
Update:
Here's the shell of what ended up working for me using expect:
require 'pty'
require 'expect'

PTY.spawn("jdb -attach 1234") do |read, write, pid|
  write.sync = true
  while true do
    read.expect(/\r\r\n> /) do |s|
      s = s[0].split(/\r\r\n/)
      s.pop # get rid of prompt
      s.each { |line| puts line }
      print '> '
      STDOUT.flush
      write.print(STDIN.gets)
    end
  end
end
Use Open3.popen3(). e.g.:
require 'open3'

Open3.popen3("jdb args") { |stdin, stdout, stderr|
  # stdin  = jdb's input stream
  # stdout = jdb's output stream
  # stderr = jdb's stderr stream
  threads = []
  threads << Thread.new(stderr) do |terr|
    while (line = terr.gets)
      puts "stderr: #{line}"
    end
  end
  threads << Thread.new(stdout) do |tout|
    while (line = tout.gets)
      puts "stdout: #{line}"
    end
  end
  stdin.puts "blah"
  threads.each { |t| t.join } # in order to clean up when you're done
}
I've given you examples for threads, but you of course want to be responsive to what jdb is doing. The above is merely a skeleton for how you open the process and handle communication with it.
The Ruby standard library includes expect, which is designed for just this type of problem. See the documentation for more information.