Ruby can't read TCP socket when run in background

I was writing a Slack bot in Ruby under Windows and everything worked just fine until I decided to run it on a Linux server. When I log into my shell and run the script, it works correctly in the foreground, but once I move it to the background it stops working. I get a timeout error on an HTTP request made with Net::HTTP, or an EOFError on the socket read.
I'm using Ruby 2.3 on Debian 7.
I think the Ruby process stops on its own, because I only get the errors once I return the process to the foreground, and if I run ps aux while the process is in the background, it is listed with the "T" (stopped) flag.
Since I want to become more familiar with Linux, I'd like to know what is causing the issue, rather than how to solve it.
EDIT: I found that my user input handler is causing the problem. Here is the problematic bit:
def input_handler
  return Thread.new {
    loop do
      user_input = gets.chomp
    end
  }
end

The problem looks like it's gets.
By default gets reads from STDIN. The documentation says:
Returns (and assigns to $_) the next line from the list of files in ARGV (or $*), or from standard input if no files are present on the command line.
The thread will block waiting for input from the keyboard, or it will read from the piped input if STDIN is redirected, or from a file given as an argument to the script on the command line. On Linux, a background process that tries to read from its controlling terminal is sent SIGTTIN, and the default action of that signal is to stop the process, which is exactly the "T" (stopped) state you see in ps aux.
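One way to avoid this (a minimal sketch of my own, not part of the original answer) is to start the interactive input thread only when STDIN is actually a terminal, so the bot keeps running when it is backgrounded:

# Hypothetical variant of input_handler: skip keyboard handling entirely
# when there is no terminal attached (background job, daemon, cron, ...).
def input_handler
  return nil unless $stdin.tty?   # no controlling terminal => no input thread
  Thread.new do
    loop do
      user_input = $stdin.gets
      break if user_input.nil?    # EOF: stop instead of spinning on nil
      puts "got: #{user_input.chomp}"
    end
  end
end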

Related

Process.spawn (Ruby 1.9.x): How to check if spawning was successful and detect errors?

Using a cross-platform solution (GNU/Linux, Windows), I want to spawn an external program in the background, capture its pid, and later on stop the program via the stored pid.
Consider this code in Ruby 1.9.x:
pid = Process.spawn("xxx")
puts pid
stdout/stderr:
8117
sh: 1: xxx: not found
No exception is thrown, and I don't see any way to detect the fact that the spawn was not successful (xxx is not a valid command).
What is the best way to detect that this spawn was not successful?
Process.spawn returns a process ID. If you get a process ID back, then technically the function itself did not fail. In Ruby >= 2.0.0, Process.spawn will raise Errno::ENOENT if it fails to find the command. As Ruby 1.9 is unsupported, the best solution is to upgrade Ruby.
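For example (my own sketch, not part of the original answer), on a newer Ruby the failure can be rescued directly; if the command string ends up being interpreted by a shell instead, no exception is raised and the shell's "not found" message on stderr is all you get:

begin
  pid = Process.spawn("xxx")
  puts "spawned pid #{pid}"
rescue Errno::ENOENT => e
  # Per the answer above, on Ruby >= 2.0 a command that cannot be found raises
  # here, as long as the string is executed directly rather than via a shell.
  warn "spawn failed: #{e.message}"
end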
A hack which may help would be to test if the process is actually running after the call returns. Sadly, this will be platform specific.
pid = Process.spawn("xxx")
case RUBY_PLATFORM
when /linux/i
  success = File.exist?("/proc/#{pid}")
when /windows/i
  # use win32api gem, Windows API call EnumProcesses
else
  # ?
end
Unfortunately, if the process has already finished by the time you test for its existence, you can't tell. In that case you probably also want to check its results (whatever it is supposed to produce) to confirm that it actually ran.
Another approach, if you control the program being launched, is to open a named pipe before launching it and have it send your Ruby program a message over the pipe saying that it is running. You can then read from the pipe after the spawn call in a non-blocking way and use Timeout to prevent it from blocking forever. A simpler, less clean approach would be to have that program write something deterministic to a file, which you can then check with a simple File.exist? test to see if it's there.
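A rough sketch of that simpler marker-file variant (my own illustration; "my_program" and the marker path are placeholders, and the child is assumed to create the marker file as soon as it starts):

require 'tmpdir'

marker = File.join(Dir.tmpdir, "my_program.started")
File.delete(marker) if File.exist?(marker)

pid = Process.spawn("my_program", marker)   # child creates the marker file on startup

started = false
10.times do                                 # poll for up to ~5 seconds
  if File.exist?(marker)
    started = true
    break
  end
  sleep 0.5
end
warn "my_program does not appear to have started" unless started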

spawning a command prompt in a different process and sending/receiving commands on Windows

I have a problem at hand which requires me to spawn a command prompt as a separate process, send some commands to it, and capture/parse the command output. This interaction needs to be a parent-child relationship: all the commands are put in a Ruby file, and when the Ruby file is run, the commands are sent to the console (command prompt) and its output is received and processed in the Ruby script.
The general logic which I would follow is:
Spawn a different process by using a fork and get a process id
Obtain streams for the process
Write to the input stream of the process and read from the output stream.
The environment I am using is a Windows XP machine with Ruby 1.9.2 installed on it. I downloaded the win32-process library found over here. By using that library, I could do step 1 as follows:
require 'win32/process'

APP_NAME = "C:\\Windows\\system32\\cmd.exe"
process_info = Process.create(:app_name        => APP_NAME,
                              :creation_flags  => Windows::Process::CREATE_NEW_CONSOLE,
                              :process_inherit => false,
                              :thread_inherit  => true,
                              :cwd             => "C:\\")
Since the win32-process library is based on using processes and threads on Windows, I tried to go through the MSDN help for it. While reading the Creation of a Console article, I found that the GetStdHandle method could be used to get handles to the input and output streams. But I could not find this method implemented anywhere in win32-process.
Can someone provide me with some guidance on how to proceed with steps 2 and 3?
Also, is there any other way which can be used to solve the problem at hand?
Also, I would like to learn more about inter-process communication or in general spawning and forking of processes, so can somebody please tell me some good references where I could study them?
Thanks in advance
Here's an example using IO.popen on Windows. IMO, if it works with the stdlib, don't use gems.
IO.popen("other_program", "w+") do |pipe|
pipe.puts "here, have some input"
pipe.close_write # If other_program process doesn't flush its output, you probably need to use this to send an end-of-file, which tells other_program to give us its output. If you don't do this, the program may hang/block, because other_program is waiting for more input.
output = pipe.read
end
# You can also use the return value from your block (the exit code is stored in $? as usual).
output = IO.popen("other_program", "w+") do |pipe|
  pipe.puts "here, have some input"
  pipe.close_write
  pipe.read
end

TCL hangs when trying to close TCL pipe

When launching tclsh and typing this:
close [open "|tclsh" w]
it works fine.
But when ~/.tclshrc contains package require Tk, the same line makes tclsh HANG!
The same issue occurs with all GUI packages like Tk, Itk, Img, and Iwidgets; with non-GUI packages like Itcl, it works fine.
How can I fix this issue? The point is to make tclsh not hang when typing close [open "|tclsh" w] with package require Tk in ~/.tclshrc.
The same issue occurs with wish: close [open "|wish" w] makes wish HANG (with an empty ~/.wishrc file)!
I got this issue on both 32 and 64 bit CentOS.
I have the following versions of packages: tcl-8.5.8, tk-8.5.8, img-1.3, itcl-3.4.b1, itk-3.3, iwidgets-4.0.1.
Tcl applications mostly exit when they have finished their script, whether or not it is provided interactively. However the Tk package changes things around so that when the end of the script is reached, it instead goes into a loop handling events. If you're relying on an end-of-file causing things to exit, that's going to look a lot like a hang, but really it's just waiting properly for the GUI app to finish (so it can report the exit status of the subprocess).
The fix is to make a channel-readable event handler for stdin in the subprocess. There's a few ways to do this in detail, but here's a simple one that can go at the end of the bulk of code that you normally send:
proc ReadFromStdin {} {
    if {[gets stdin line] >= 0} {
        uplevel "#0" $line
    } elseif {[eof stdin]} {
        exit
    } else {
        # Partial read; try later when rest of data available
    }
}
fileevent stdin readable ReadFromStdin
This assumes that each line is a full executable command; that might not be true, of course, but writing the code to use info complete to compose lines is less clear and possibly unnecessary here. (You know what you're actually sending better than I…)
My thought would be that it's waiting for wish to finish running, as per the man page:
If channelId is a blocking channel for a command pipeline then close waits for the child processes to complete.
Since wish enters an infinite loop (the event loop) and never exits, the close command will hang. Along the same lines, [package require Tk] (I believe) starts the event loop, so will cause the same behavior.
I'll admit, though, that I'm surprised it's loading .tclshrc at all, since:
If there exists a file .tclshrc (or tclshrc.tcl on the Windows platforms) in the home directory of the user, interactive tclsh evaluates the file as a Tcl script just before reading the first command from standard input.
It seems odd to me that [open "|tclsh" w] winds up in an interactive shell.
As a side note, [package require Tk] seems like a really strange thing to do in .tclshrc. In theory, you won't always want Tk (the window and the event loop) when running Tcl (i.e., command-line-only apps)... and when you do want it, you know you do. To each their own, I suppose; it just seems odd to me.

Piping stdin to ruby script via `myapp | myscript.rb`

I have an app that runs continuously, dumping output from a server and sending strings to stdout. I want to process this output with a Ruby script. The strings are \n-terminated.
For example, I'm trying to run this on the command line:
myapp.exe | my_script.rb
...with my_script.rb defined as:
while $stdin.gets
  puts $_
end
I ultimately am going to process the strings using regexes and display some summary data, but for now I'm just trying to get the basic functionality hooked up. When I run the above, I get the following error:
my_script.rb:1:in `gets': Bad file descriptor (Errno::EBADF)
from my_script.rb:1
I am running this on Windows Server 2003 R2 SP2 and Ruby 1.8.6.
How do I continuously process stdin in a Ruby script? (Continuously as in not processing a file, but running until I kill it.)
EDIT:
I was able to make this work, sort of. There were several problems standing in my way. For one thing, it may be that using Ruby to process the piped-in stdin from another process doesn't work on Windows 2003R2. Another direction, suggested by Adrian below, was to run my script as the parent process and use popen to connect to myapp.exe as a forked child process. Unfortunately, fork isn't implemented in Windows, so this didn't work either.
Finally I was able to download POpen4, a RubyGem that does implement popen on Windows. Using this in combination with Adrian's suggestion, I was able to write this script which does what I really want -- processes the output from myapp.exe:
file: my_script.rb
require 'rubygems'
require 'popen4'
status = POpen4::popen4("myapp.exe") do |stdout, stderr, stdin, pid|
  puts pid
  while s = stdout.gets
    puts s
  end
end
This script echoes the output from myapp.exe, which is exactly what I want.
Try just plain gets, without the $stdin. If that doesn't work, you might have to examine the output of myapp.exe for non-printable characters with another ruby script, using IO.popen.
gets doesn't always read from stdin; when the script is given command-line arguments, it tries to open those as files and read from them instead.
See SO.
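To make that concrete (my own illustration, not from the answer): Kernel#gets reads through ARGF, so it falls back to standard input only when no filename arguments were given.

# Invoked as:  myapp.exe | ruby my_script.rb    -> gets reads the piped stdin
# Invoked as:  ruby my_script.rb some_file.txt  -> gets reads some_file.txt instead
while (line = gets)
  puts line
end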
Try executing your Ruby script by explicitly calling ruby:
myapp.exe | ruby my_script.rb
I've experienced some odd behavior using stdin in Ruby when relying on Windows to invoke the correct program based on the file associations.

Can Ruby access output from shell commands as it appears?

My Ruby script is running a shell command and parsing the output from it. However, it seems the command is first executed and output saved in an array. I would like to be able to access the output lines in real time just as they are printed. I've played around with threads, but haven't got it to work. Any suggestions?
You are looking for pipes. Here is an example:
# This example runs the netstat command via a pipe
# and processes the data in Ruby as it comes back
pipe = IO.popen("netstat 3")
while (line = pipe.gets)
  print line
  print "and"
end
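One wrinkle worth noting (my addition, not part of the answer above): many programs switch to block-buffered output when writing to a pipe, so even with IO.popen the lines may arrive in bursts. On Unix-like systems, the stdlib's pty library can make the child believe it is talking to a terminal, which usually restores line-by-line output; the command below is just an example:

require 'pty'

PTY.spawn("ping -c 5 localhost") do |reader, writer, pid|
  begin
    reader.each_line { |line| print line }   # handle each line as it arrives
  rescue Errno::EIO
    # some platforms raise EIO when the child closes its side of the pty
  end
end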
When you call methods/functions that run system/shell commands, your interpreter spawns another process to run the command and waits for it to finish, then gives you the output.
Even if you use threads, the only thing you would accomplish is keeping your program from hanging while the command runs; you still won't get the output until it's done.
I think you can accomplish this with pipes, but I am not sure how.
#Marcel got it.