Losing STDOUT when using Open3.capture3 and rake? - ruby

A build system that I use at work invokes several external console applications, Node.js among others.
The issue I am seeing is that the STDOUT channel seems to stop working after Open3.capture3 is invoked. For instance, I have a task called compileLess:
desc "Compile LESS"
task :compileLess do
puts "Preparing to compile LESS..."
execute "recess less/bootstrap.less --compress > output/css/bootstrap.min.css"
puts "Finished compiling LESS"
end
def execute(cmdLine, print_stdout = false)
puts "Executing #{cmdLine}"
stdout, stderr, status = Open3.capture3(cmdLine)
puts stdout if print_stdout
return stdout, stderr, status
end
What I would expect to see is something like:
Preparing to compile LESS...
Executing recess less/bootstrap.less --compress > output/css/bootstrap.min.css
Finished compiling LESS
But after the call to Open3.capture3, puts and print no longer produce any visible output. I can force them to work by explicitly using:
STDOUT.puts "goodbye world"
I just want to know why it doesn't work.
Specs:
Windows 7 Professional 32-bit
Ruby 1.9.3p392 (2013-02-22) [i386-mingw32]
Rake, version 10.1.0
Node v0.10.22

You redirected the command's STDOUT to a file with > output/css/bootstrap.min.css.
The stdout string that capture3() returns is therefore empty, of course.
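If you want capture3 to see the compiled CSS, one option (just a sketch, reusing the recess command and paths from the question) is to drop the shell redirection and write the file from Ruby instead:

require 'open3'

# Sketch: let capture3 collect the compiler output rather than redirecting it
# in the shell, then write the file ourselves.
stdout, stderr, status = Open3.capture3("recess less/bootstrap.less --compress")
if status.success?
  File.write("output/css/bootstrap.min.css", stdout)
else
  STDERR.puts stderr
end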

Related

Git Hook - Ruby Code - Interactive Input

I am trying to take interactive input from git hook code (the commit-msg hook), but Ruby does not stop at the input point; it runs right past gets as if the input were just another puts statement. Here is the code I tried, which failed.
#!/usr/bin/env ruby
require 'open3'

def take_input_here
  Open3.popen3("pwd", :chdir => "/") do |stdin, stdout, stderr, thread|
    p stdout.read.chomp #=> "/"
  end
  input_val = gets.chomp
  puts input_val
  puts 'Hello World!'
end

take_input_here
puts "Commit Aborted."
Process.exit(1)
Can somebody please help me take this interactive input, or else suggest a good language for writing git hooks? Thanks in advance.
Most Git hooks are run with stdin either coming from a pipe to which Git writes information, or with stdin disconnected from the terminal entirely. The commit-msg hook falls into this second category.
It won't matter which language you use: reading stdin in a commit-msg hook will see EOF immediately, as stdin is connected to /dev/null (Linux/Unix) or NUL: (Windows).
On Unix-like systems, you can try opening /dev/tty. Note that if Git is being run from something that doesn't have a /dev/tty (some detached process, e.g., via cron) or where reading /dev/tty is bad for some other reason, this may cause other issues, so be careful with this.
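For example, a minimal sketch along those lines (assuming a Unix-like system with a controlling terminal; the prompt text is made up for illustration):

#!/usr/bin/env ruby
# Sketch: read a confirmation from the controlling terminal, since the
# hook's own stdin is connected to /dev/null.
answer = begin
  File.open("/dev/tty", "r+") do |tty|
    tty.print "Proceed with this commit message? [y/N] "
    tty.gets.to_s.chomp
  end
rescue SystemCallError
  ""  # no usable /dev/tty (e.g. a detached process); fall through and abort
end

unless answer.downcase == "y"
  puts "Commit Aborted."
  Process.exit(1)
end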

How to run multiple ruby scripts simultaneously

I am trying to run multiple Ruby scripts simultaneously on my Mac, and I'm not having any luck. I can see the Ruby processes start up, but then they immediately stop. The script works fine as a single process, with no errors. Here are some examples of things I've tried.
10.times do
  system "nohup ruby program.rb \"arg1 arg2\" &"
end

10.times do
  `nohup ruby program.rb \"arg1 arg2\" &`
end

10.times do
  system "ruby program.rb \"arg1 arg2\""
end
Do you need to start it from Ruby for any specific reason? Why not start it 10 times directly from bash? Like:
$ for i in $(seq 1 10); do nohup ruby foo.rb & done
Let me know..
nohup appends its output to nohup.out (in the current directory, or $HOME/nohup.out as a fallback) unless the output is explicitly redirected, so you should redirect the output of each invocation to a different file.
Also, to be on the safe side, I would redirect stdin from /dev/null - just in case the called program reads from stdin.
10.times do |i|
  system "nohup ruby program.rb 'arg1 arg2' </dev/null >#{ENV['HOME']}/nohup#{i}.out &"
end
BTW (and off topic): Are you sure, that you want to pass arg1 arg2 as a SINGLE argument to program.rb?
You can build a solution with fork, exec and wait from the Process module.
# start child processes
10.times { fork { exec(cmd) } }
# wait for child processes
10.times { Process.wait }
Or a bit longer to play around with (Tested with Ruby 1.8.7 on Ubuntu). Added rescue nil to suppress error when waiting.
10.times do |i|
  fork do
    ruby_cmd = "sleep(#{10-i});puts #{i}"
    exec("ruby -e \"#{ruby_cmd}\"")
  end
end
10.times { Process.wait rescue nil }
puts "Finished!"

Running several 'exec' in a ruby loop

I'm scanning a folder for audio files and converting them to MP3.
This works great in Ruby.
However, once the first transcoding is done, the whole loop stops.
Here's a breakdown of my code:
def scanFolder
  # lots of code above to get folder list, check for incorrect files etc..
  audioFileList.each { |getFile|
    exec_command = "ffmpeg #{getFile} #{newFileName}"
    exec exec_command
  }
end
What's happening is that it's transcoding the first file it finds, then it stops the whole function. Is there a way to force it to continue?
ffmpeg itself runs and finishes correctly at the moment, so it's not breaking anything.
exec replaces the current process by running the given command. Example:
2.0.0-p598 :001 > exec 'echo "hello"'
hello
shivam@bluegene:$
You can see how exec replaces the irb process with the echo command, which then exits automatically.
Therefore, try using system instead. Here is the same example using system:
2.0.0-p598 :003 > system 'echo "hello"'
hello
=> true
2.0.0-p598 :004 >
You can see I am still in irb and it has not exited after executing the command.
This makes your code as follows:
def scanFolder
  # lots of code above to get folder list, check for incorrect files etc..
  audioFileList.each { |getFile|
    exec_command = "ffmpeg #{getFile} #{newFileName}"
    system exec_command
  }
end
Along with shivam's answer about using system, the spawn method may also be useful here:
http://ruby-doc.org//core-2.1.5/Process.html#method-c-spawn
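As a rough sketch (reusing the audioFileList, getFile and newFileName names from the question), spawn starts each conversion in its own child process without replacing the current one, and Process.wait then collects the children:

# Sketch only: the variable names are assumed from the question's scanFolder.
pids = audioFileList.map do |getFile|
  Process.spawn("ffmpeg #{getFile} #{newFileName}")
end
pids.each { |pid| Process.wait(pid) }

Note this starts all conversions at once rather than one after another, so it may or may not fit the use case.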

Ruby - Open3 not finishing subprocess

I'm using:
- Ruby 1.9.3-p448
- Windows Server 2008
I have a file that contains commands used by a program; I'm running it this way:
C:\> PATH_TO_FOLDER/program.exe file.txt
file.txt has some commands, so program.exe will do the following:
- Execute the commands
- Read from a DB using an ODBC method used by the program
- Output the result to a txt file
Run from PowerShell, this command works fine and as expected.
Now I have this in a file (app.rb):
require 'sinatra'
require 'open3'
get '/process' do
  program_path = "path to program.exe"
  file_name = "file.txt"
  Open3.popen3(program_path, file_name) do |i, o, e, w|
    # I have some commands here to execute but just as an example I'm using o.read
    puts o.read
  end
end
Now, when I use this by accessing http://localhost/process, Open3 seems to work like this (I'm not 100% sure, but after trying several times I think this is what happens):
1. Reads the commands and executes them (this is OK)
2. Tries to read from the DB using the ODBC method (here is my problem: I need to receive some output from Open3 so I can show it in a browser, but I guess when it tries to read it starts another process that Open3 is not aware of, so Open3 goes on and finishes without waiting for it)
3. Exits
I've found the following:
- Use Thread#join (in this case, w.join) in order to wait for the process to finish, but it doesn't work
- Open4 seems to handle child processes but doesn't work on Windows
- Process.wait(pid), in this case pid = w.pid, but that also doesn't work
- Timeout.timeout(n); the problem here is that I'm not sure how long it will take
Is there any way of handling this? (waiting for Open3 subprocess so I get proper output).
We had a similar problem getting the exit status and this is what we did
Open3.popen3(*cmd) do |stdin, stdout, stderr, wait_thr|
  # print stdout and stderr as it comes in
  threads = [stdout, stderr].collect do |output|
    Thread.new do
      while ((line = output.gets rescue '') != nil) do
        unless line.blank?
          puts line
        end
      end
    end
  end

  # get exit code as a Process::Status object
  process_status = wait_thr.value #.exitstatus

  # wait for logging threads to finish before continuing
  # so we don't lose any logging output
  threads.each(&:join)

  # wait up to 5 minutes to make sure the process has really exited
  Timeout::timeout(300) do
    while !process_status.exited?
      sleep(1)
    end
  end rescue nil

  process_status.exitstatus.to_i
end
Using Open3.popen3 is easy only for trivial cases. I do not know the real code you use for handling the input, output and error channels of your subprocess, nor the exact behaviour of the subprocess itself: does it write to stdout? Does it write to stderr? Does it try to read from stdin?
This is why I assume that there are problems in the code that you replaced by puts o.read.
A good summary about the problems you can run into is on http://coldattic.info/shvedsky/pro/blogs/a-foo-walks-into-a-bar/posts/63.
I disagree with the author of the article, Pavel Shved, when it comes to finding a solution, though; he recommends his own solution. In my projects I just use one of the wrapper functions around popen3: Open3.capture*. They do all the difficult things, such as reading stdout and stderr at the same time.
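Applied to the question, a minimal sketch (assuming the program_path and file_name variables from the Sinatra handler above):

require 'open3'

# Sketch: capture3 drains stdout and stderr for you and only returns once
# program.exe has exited, so there is nothing left to wait for.
stdout, stderr, status = Open3.capture3(program_path, file_name)
puts stdout
STDERR.puts stderr unless stderr.empty?
puts "exit status: #{status.exitstatus}"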

Why does $?.exited? return an incorrect value after system() when I/O redirection is used?

I'm trying to execute a program using the system function in Ruby.
I need to capture the stdout and stderr of the program, so I'm using
a shell command that redirects stdout and stderr to files.
One important requirement is that I need to determine whether
the program exited normally or was killed by a signal.
The weird behavior I'm seeing is that when I redirect stdout and
stderr to files, $?.exited? is true even if the program was
killed by a signal! Here is a program that demonstrates the
problem:
#! /usr/bin/ruby

File.open("bad.c", "w") do |out|
  out.print <<'EOF'
#include <stdio.h>

int main(void) {
    int *p = 0;
    *p = 42;
    printf("%d\n", *p);
    return 0;
}
EOF
end

raise "Couldn't compile bad.c" unless system("gcc -o bad bad.c")

system("./bad")
puts $?.exited?

system("./bad > out 2> err")
puts $?.exited?
The output of this program is
false
true
However, I would expect
false
false
since the program is killed by a segfault in both cases.
The command ruby -v produces the output
ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux]
Any explanations and/or workarounds would be greatly appreciated!
Since you are performing shell redirection on the second system() call, ruby needs to invoke your shell to do the work. Even though your program is being killed, the shell ends up executing just fine.
You can, instead, do the redirection directly in ruby:
system("./bad", out: 'out', err: 'err');
puts $?.exited? # => false
For more options, check out the documentation for spawn() - the options on system() are processed the same way.
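As a follow-up sketch (reusing the ./bad binary built above): once Ruby performs the redirection itself, $? describes the child directly, so the signal can also be inspected.

system("./bad", out: 'out', err: 'err')
if $?.signaled?
  puts "killed by signal #{$?.termsig}"      # SIGSEGV in this example
else
  puts "exited with status #{$?.exitstatus}"
end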
