Neovim's job-control example in :help job-control works well for bash scripts. However, I am unable to make it work for Ruby. Consider the following example:
set nocp
set buftype=nowrite
call jobstart('shell', 'bash', ['-c', 'for ((i = 0; i < 5; i++)); do sleep 2 && printf "Hello Bash!\n"; done'])
call jobstart('shell', 'ruby', ['-e', '5.times do sleep 2 and puts "Hello Ruby!" end'])
function JobHandler()
  if v:job_data[1] == 'exit'
    let str = v:job_data[0] . ' exited'
  else
    let str = join(v:job_data[2])
  endif
  call append(line('$'), str)
endfunction
au JobActivity shell* call JobHandler()
Running nvim -u NONE -S <filename> produces the following output:
Hello Bash!
Hello Bash!
Hello Bash!
Hello Bash!
Hello Bash!
1 exited
Hello Ruby! Hello Ruby! Hello Ruby! Hello Ruby! Hello Ruby!
2 exited
How do we make the Ruby example behave like the bash one?
It turns out that Ruby's output is being buffered. One has to force it to be flushed in order to see the desired output.
call jobstart('shell', 'ruby', ['-e', '$stdout.sync = true; 5.times do sleep 1 and puts "Hello Ruby!\n" end'])
My original problem was to run a Ruby test asynchronously. For it to work, I had to write $stdout.sync = true to a file and require it using -r:
call jobstart('shell', 'ruby', ['-r', '/tmp/opts.rb', '-I', 'test', 'test/unit/user_test.rb'])
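For reference, /tmp/opts.rb only needs that one line:
# /tmp/opts.rb -- required via ruby -r so the test's output is flushed immediately
$stdout.sync = true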
I'm new to Ruby. I'm trying to source my shell script from Ruby and execute functions defined in the sourced shell script.
Below is my shell script, /tmp/test.sh:
#!/bin/bash
function hello {
  echo "hello, this script is being called from ruby"
}
Below is my Ruby script, test.rb:
#!/usr/bin/ruby
system("source /tmp/test.sh")
puts $?.exitstatus
system("hello")
puts $?.exitstatus
Output using system:
[root@localhost ~]# ruby test.rb
127
127
I even tried the backtick approach, but I got the error below.
Code:
#!/usr/bin/ruby
status=`source /root/test.sh`
puts status
status2=`hello`
puts status2
Error:
ruby test.rb
test.rb:3:in ``': No such file or directory - source (Errno::ENOENT)
from test.rb:3:in `<main>'
Can anyone tell me what is wrong with my code?
You can use the session gem, or write a solution yourself. The underlying problem is that source is a shell builtin rather than an executable on disk (hence the 127 exit status and the Errno::ENOENT), and that each system or backtick call starts its own short-lived child process anyway, so a function sourced in one call no longer exists in the next. Keep a single shell alive and feed it both the source command and the function call:
script.sh:
#!/bin/bash
function hello() {
  echo "Hello, World!"
}
Ruby file:
IO.popen('bash', 'r+') do |sh|
  sh.puts 'source script.sh'   # define the function inside this persistent shell
  sh.puts 'hello'              # call it in the same shell
  sh.close_write               # send EOF so bash executes the input and exits
  puts sh.gets                 # read the first line the shell printed
end
# => Hello, World!
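If the sourced functions print more than one line, a small variation of the same idea (my sketch, not part of the original answer) reads everything at once instead of a single gets:
IO.popen('bash', 'r+') do |sh|
  sh.puts 'source script.sh'
  sh.puts 'hello'
  sh.puts 'hello'
  sh.close_write   # signal EOF so bash runs the input and exits
  puts sh.read     # everything the shell printed, not just one line
end
# => Hello, World!
# => Hello, World!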
Given the following file, run with Ruby 2.3.0p0:
#!/usr/bin/env ruby
# frozen_string_literal: true
# Exit cleanly from an early interrupt
Signal.trap("INT") { abort }
This is fine. However, this file:
# frozen_string_literal: true
#!/usr/bin/env ruby
# Exit cleanly from an early interrupt
Signal.trap("INT") { abort }
will result in the error:
syntax error near unexpected token `"INT"'
`Signal.trap("INT") { abort }'
Why?
A shebang has to appear on the file's initial line.
A file test.rb containing:
#!/usr/bin/env ruby
# foo bar
puts "hello from #{RbConfig.ruby}"
will be run via Ruby:
$ ./test.rb
hello from /.../ruby-2.3.0/bin/ruby
But if test.rb contains (first and second lines swapped):
# foo bar
#!/usr/bin/env ruby
echo "hello from $SHELL"
it will be run as an ordinary shell script:
$ ./test.rb
hello from /.../bin/zsh
Therefore, the error you are getting is not a Ruby error; it comes from your shell.
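You can check the Ruby side of this yourself: Ruby treats a #! line anywhere after line 1 as an ordinary comment, so both files are valid Ruby, and the difference lies entirely in which interpreter the OS and shell pick. A quick check (hypothetical file name late_shebang.rb):
# late_shebang.rb -- a shebang after the first line is just a comment to Ruby
# foo bar
#!/usr/bin/env ruby
puts "Ruby parsed this file without complaint"
Running ruby late_shebang.rb prints the message; running ./late_shebang.rb hands the file to your shell instead, which fails because puts is not a shell command.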
In bash I am currently facing a problem where I want to modify a global variable inside a function, return a proper status code via return and $?, and also assign all stdout output produced during the function call to a variable outside of the function.
While each of these individual tasks (modify a global variable, return a status code, assign stdout to a variable) is perfectly possible in bash, as is any combination of two of them, combining all three seems to be possible only in inconvenient ways.
Here is an example script I have prepared to demonstrate the problem:
#!/bin/bash
#
# This is the required output:
#
# -- cut here --
# RESULT: '2'
# OUTPUT: 'Hello World!'
# GLOBAL_VAR: '3'
# -- cut here --
#
GLOBAL_VAR=0
hello() {
  echo "Hello World!"
  GLOBAL_VAR=3
  return 2
}
# (1) normal bash command substitution (subshell)
# PROBLEM: GLOBAL_VAR is 0 but should be 3
# (hello is executed in subshell)
#
output=$(hello) ; result=$?
# (2) use 'read' and process substitution
# PROBLEM: RESULT and GLOBAL_VAR are 0
# (hello runs in a subshell and $? is
# read's return code)
#
#read output < <(hello) ; result=$?
# (3) normal function execution
# PROBLEM: output is not captured!
#
#hello ; result=$?
# (4) using lastpipe + read
# PROBLEM: GLOBAL_VAR is 0 but should be 3
# (a pipe generates a subshell?!?!)
#
#shopt -s lastpipe
#hello | read output ; result=${PIPESTATUS[0]}
# (5) ksh-like command substitution
# PROBLEM: Works, but ksh-syntax
# -> doesn't work in bash!
#
#output=${ hello; } ; result=$?
# (6) using a temp file to catch output of hello()
# WORKS, but ugly due to tmpfile and 2xsubshell use!
#
#tmp=$(mktemp)
#hello >${tmp} ; result=$?
#output=$(cat ${tmp})
#rm -f ${tmp}
###################################
# OUTPUT stuff
# this should output "2"
echo "RESULT: '${result}'"
# this should output "Hello World!"
echo "OUTPUT: '$output'"
# this should output "3"
echo "GLOBAL_VAR: '$GLOBAL_VAR'"
In this script I have added a function hello() which returns a status code of 2, sets the global variable GLOBAL_VAR to 3, and writes "Hello World!" to stdout.
In addition to this function I have added six potential ways of calling hello() to achieve exactly the output I require (shown in the comment block at the top of the script).
By commenting these six variants in and out you will see that only solution (6) fulfills all my requirements.
Especially interesting is solution (5), which shows the ksh syntax that works exactly as required: running the script with ksh prints all variables with their required values. Of course, this form of command substitution (${ cmd; }) isn't supported in bash, and I definitely need a bash solution, since the main script is bash-only and cannot be ported to ksh.
While solution (6) also fulfills my requirements, it writes the output of hello() to a temporary file and reads it back in afterwards. In terms of performance (several subshells, temp-file management) this isn't a real solution for me.
So the question is: is there another solution in bash that combines all three requirements, so that the script above produces exactly the output I want?
It's not clear from the question why you can't simply rewrite the function to behave differently. If you really can't change the function, you could change bash:
#!/bin/bash
GLOBAL_VAR=0
hello() {
  echo "Hello World!"
  GLOBAL_VAR=3
  return 2
}
echo () { output=$*; }   # temporarily shadow the echo builtin so hello's output lands in $output
hello
result=$?
unset -f echo            # restore the real echo
# this should output "2"
echo "RESULT: '$result'"
# this should output "Hello World!"
echo "OUTPUT: '$output'"
# this should output "3"
echo "GLOBAL_VAR: '$GLOBAL_VAR'"
The outputs match the expectations.
I'm looking for something equivalent to the backtick operator (``) that can display output while the shell command is executing.
I saw a solution in another post:
(Running a command from Ruby displaying and capturing the output)
output = []
IO.popen("ruby -e '3.times{|i| p i; sleep 1}'").each do |line|
  p line.chomp
  output << line.chomp
end
p output
This solution doesn't fit my needs since $? remains nil after the shell command finishes. The solution I'm looking for should also set $? (returning the value of $?.exitstatus some other way would also be sufficient).
Thanks!
First, I'd recommend using one of the methods in Open3.
I use capture3 for one of my systems where we need to grab the output of STDOUT and STDERR of a lot of command-line applications.
If you need a piped sub-process, try popen3 or one of the other "pipeline" commands.
Here's some code to illustrate how to use popen2, which ignores the STDERR channel. If you want to track that too, use popen3:
require 'open3'
output = []
exit_status = Open3.popen2(ENV, "ruby -e '3.times{|i| p i; sleep 1}'") { |stdin, stdout, thr|
  stdin.close
  stdout.each_line do |o|
    o.chomp!
    output << o
    puts %Q(Read from pipe: "#{ o }")
  end
  thr.value
}
puts "Output array: #{ output.join(', ') }"
puts "Exit status: #{ exit_status }"
Running that outputs:
Read from pipe: "0"
Read from pipe: "1"
Read from pipe: "2"
Output array: 0, 1, 2
Exit status: pid 43413 exit 0
The example code shows one way to do it.
It's not necessary to use each_line, but that demonstrates how you can read line-by-line until the sub-process closes its STDOUT.
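Since the question specifically asked for $?.exitstatus, note that thr.value returns a Process::Status, so the numeric code is available too; the script above could, for example, end with:
puts "Exit code: #{ exit_status.exitstatus }"   # => Exit code: 0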
capture3 doesn't accept a block; it waits until the child has closed its output and exits, then it returns the content, which is great when you want a blocking process. popen2 and popen3 have blocking and non-blocking versions, but I show only the non-blocking version here to demonstrate how to read and output the content as it comes in from the sub-process.
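As a rough sketch of the capture3 route mentioned above (blocking, no streaming; the command is the same ruby one-liner):
require 'open3'

# capture3 waits for the child to exit, then returns both streams plus the status
stdout_str, stderr_str, status = Open3.capture3("ruby -e '3.times{|i| p i; sleep 1}'")
puts stdout_str                            # "0\n1\n2\n"
warn stderr_str unless stderr_str.empty?   # nothing written to STDERR here
puts status.exitstatus                     # 0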
Try the following:
output = []
IO.popen("ruby -e '3.times{|i| p i; sleep 1 }'") do |f|
  f.each do |line|
    p line.chomp
    output << line.chomp
  end
end
p $?
prints
"0"
"1"
"2"
#<Process::Status: pid 2501 exit 0>
Using Open3:
require 'open3'
output = []
Open3.popen2("ruby -e '3.times{|i| p i; sleep 1}'") do |stdin, stdout, wait_thr|
  stdout.each do |line|
    p line.chomp
    output << line.chomp
  end
  p wait_thr.value
end
I would like to find out if there is a portable way to check in a Ruby script whether it will block if it attempts to read from STDIN. The following is an approach that works for Unix (and Cygwin) but not native Win32. (It is based on a Perl approach I learned long ago.)
$ cat read-stdin.rb
#! /usr/bin/ruby
# test of reading from STDIN
require 'fcntl'
# Trace info on input objects
$stdout.sync=TRUE if $DEBUG # make sure standard output and error synchronized
$stderr.print "ARGV=#{ARGV}\n" if $DEBUG
$stderr.print "ARGF=#{ARGF}\n" if $DEBUG
# See if input available, showing usage statement if not
blocking_stdin = FALSE
if (defined? Fcntl::F_GETFL) then
  $stderr.print "F_GETFL=#{Fcntl::F_GETFL} O_RDWR=#{Fcntl::O_RDWR}\n" if $DEBUG
  flags = STDIN.fcntl(Fcntl::F_GETFL, 0)
  $stderr.print "flags=#{flags}\n" if $DEBUG
  blocking_stdin = TRUE if ((flags & Fcntl::O_RDWR) == Fcntl::O_RDWR)
  $stderr.print "blocking_stdin=#{blocking_stdin}\n" if $DEBUG
end
if (blocking_stdin && (ARGV.length == 0)) then
  $stderr.print "usage: #{$0} [-]\n"
  Process.exit
end
# Read input and output it
$stderr.print "Input:\n" if $DEBUG
input_text = ARGF.read()
$stderr.print "Output:\n" if $DEBUG
print "#{input_text}\n"
Here is the interaction without debugging:
$ grep -v DEBUG read-stdin.rb >| /tmp/simple-read-stdin.rb
$ echo hey | ruby /tmp/simple-read-stdin.rb
hey
$ ruby /tmp/simple-read-stdin.rb
usage: /tmp/simple-read-stdin.rb [-]
Here is the interaction with debugging:
$ echo hey | ruby -d read-stdin.rb
ARGV=
ARGF=ARGF
F_GETFL=3 O_RDWR=2
flags=65536
blocking_stdin=false
Input:
Output:
hey
$ ruby -d read-stdin.rb
ARGV=
ARGF=ARGF
F_GETFL=3 O_RDWR=2
flags=98306
blocking_stdin=true
usage: read-stdin.rb [-]
I don't know whether it is universally portable, nor whether it is considered a good idea (blocking isn't such a bad concept), but IO has a non-blocking read method. You can use it like this:
chunk = nil
begin
  chunk = STDIN.read_nonblock(4096)
rescue Errno::EAGAIN
  # Handle the case where it would block
  chunk = 'nothing there...'
end
Though I find it a bit disappointing that, unlike IO#read, it doesn't work without specifying a buffer size, working around this with a loop is easy, as sketched below.
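A minimal sketch of such a loop (my own illustration, assuming you just want whatever is currently available without blocking):
buffer = ''
begin
  # keep pulling fixed-size chunks until the read would block or the stream ends
  loop { buffer << STDIN.read_nonblock(4096) }
rescue Errno::EAGAIN, EOFError
  # Errno::EAGAIN: nothing to read right now; EOFError: the stream is closed
end
puts buffer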