I'm testing Ruby XMLRPC support right now. It all works fine, except XMLRPC::Server#shutdown.
If I run the following Ruby 1.9.3 test code, it fails to shut down the server on both Windows 7 and OSX 10.7:
# server.rb
require "xmlrpc/server"
require 'thread'
Thread.new { sleep 10; $server.shutdown() }
$server = XMLRPC::Server.new( 1234 )
$server.add_handler( "test" ) { true }
$server.serve()
# client.rb
require "xmlrpc/client"
server = XMLRPC::Client.new( "localhost", "/", 1234 )
loop { server.call( "test" ); sleep 0.1 }
After ten seconds, the server writes "INFO going to shutdown ..." to stdout, but won't actually shut down and continues to handle incoming requests. What am I doing wrong?
Have you noticed that it shuts down properly when there are no incoming requests? Also, after you end the client, it will shut down as it should, returning the :Stop symbol. It waits for the client to stop pumping data before shutting down.
I have examined the XMLRPC::Server source code. There seems to be a bug/feature that prevents shutdown while a client holds a connection open with the HTTP keep-alive flag.
The workaround is to use call_async instead of call.
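For example, the client loop from the question can be switched to call_async, which performs each call on a fresh connection instead of holding a keep-alive connection open; a minimal sketch:
# client.rb
require "xmlrpc/client"
server = XMLRPC::Client.new( "localhost", "/", 1234 )
# call_async opens a new connection per request, so no keep-alive
# connection stays open to block the server's shutdown.
loop { server.call_async( "test" ); sleep 0.1 }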
Using Ruby (tested with versions 2.6.9, 2.7.5, 3.0.3, 3.1.1) and forking processes to handle socket communication, there seems to be a huge difference between macOS and Debian Linux.
While running on Debian, the forked processes get called in a balanced manner; that means: with 10 TCP server forks and 100 client calls, each fork gets 10 calls. The order of the pids in the call sequence is also always the same, even though it is not ordered by pid (caused by load when instantiating the forks).
Doing the same on macOS (Catalina), the forked processes do not get called in a balanced manner; that means: "pid A" might get called 23 or however many times, while e.g. "pid G" is never used.
Sample code (originally from: https://relaxdiego.com/2017/02/load-balancing-sockets.html)
#!/usr/bin/env ruby
# server.rb
require 'socket'
# Open a socket
socket = TCPServer.open('0.0.0.0', 9999)
puts "Server started ..."
# For keeping track of child pids
wpids = []
# Forward any relevant signals to the child processes.
[:INT, :QUIT].each do |signal|
  Signal.trap(signal) {
    wpids.each { |wpid| Process.kill(:KILL, wpid) }
  }
end
5.times {
  wpids << fork do
    loop {
      connection = socket.accept
      connection.puts "Hello from #{ Process.pid }"
      connection.close
    }
  end
}
Process.waitall
Run some netcat requests against the server from a second terminal:
for i in {1..20}; do nc -d localhost 9999; done
As said: running on Linux, each forked process gets 4 of the calls; doing the same on macOS, the usage per forked process is random.
Is there a solution or correction to make it work in a balanced manner on macOS as well?
The problem is that the default socket backlog size is 5 on macOS and 128 on Linux. You can change the backlog size by passing it to TCPServer#listen:
socket.listen(128)
Or you can use the backlog size from the environment variable SOMAXCONN:
socket.listen(ENV.fetch('SOMAXCONN', 128).to_i)
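In the sample server above, the call would go right after the socket is opened and before the workers are forked; a minimal sketch:
# server.rb (excerpt)
socket = TCPServer.open('0.0.0.0', 9999)
# Enlarge the accept backlog before forking the workers;
# 128 matches the Linux default mentioned above.
socket.listen(128)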
My Oracle DB is only accessible via a jumpoff server and is load balanced. As a result I run the following background tunnel command in bash:
ssh ${jumpoffUser}@${jumpoffIp} -L1521:ont-db01-vip:1521 -L1522:ont-db02-vip:1521 -fN
Before I run my commands on the db using sqlplus like so:
sqlplus #{@sqlUsername}/#{@sqlPassword}@'#{@sqlUrl}' @scripts/populateASDB.sql
This all works fine.
Now I want to rubyize this procedure.
Looking through the Ruby documentation, I could not find how to put the tunnel in the background (which would be my preference), but I did find documentation on local port forwarding, which I thought would emulate the above tunnel and subsequent sqlplus command.
Here is my code:
Net::SSH.start( @jumpoffIp, @jumpoffUser ) do |session|
  session.forward.local( 1521, 'ont-db01-vip', 1521 )
  session.forward.local( 1522, 'ont-db02-vip', 1521 )
  puts "About to populateDB"
  res = %x[sqlplus #{@sqlUsername}/#{@sqlPassword}@'#{@sqlUrl}' @scripts/populateASDB.sql > output.txt]
  puts "populateDb output #{res}"
  session.loop
end
When I run the above I get the line "About to populateDB", but it hangs on the actual run of the sqlplus command. Is there something wrong with my port forwarding code, or how do I translate the following:
ssh ${jumpoffUser}@${jumpoffIp} -L1521:ont-db01-vip:1521 -L1522:ont-db02-vip:1521 -fN
into Ruby code?
Try using this gem: https://github.com/net-ssh/net-ssh-gateway/
require 'net/ssh/gateway'
gateway = Net::SSH::Gateway.new(@jumpoffIp, @jumpoffUser)
# Forward local ports 1521 and 1522 to the two load-balanced DB hosts,
# mirroring the -L options of the original ssh command.
gateway.open('ont-db01-vip', 1521, 1521)
gateway.open('ont-db02-vip', 1521, 1522)
res = %x[sqlplus #{@sqlUsername}/#{@sqlPassword}@'#{@sqlUrl}' @scripts/populateASDB.sql > output.txt]
puts "populateDb output #{res}"
gateway.shutdown!
You have two problems.
1) You need to use 'session.loop { true }' so that the session actually keeps looping.
2) You don't start looping the session until your sqlplus command is done, but sqlplus needs the session looping (the forwarding to be up) while it runs.
So I suggest creating a background thread with Thread.new and killing that thread once sqlplus is done.
Thanks to David's answer, I came up with the following:
Net::SSH.start(ip_addr, 'user') do |session|
  session.forward.local( 9090, 'localhost', 9090 )
  # Need to run the event loop in the background for SSH callbacks to work
  t = Thread.new {
    session.loop { true }
  }
  commands.each do |command|
    command.call(9090)
  end
  Thread.kill(t)
end
I've implemented a very simple kind of server in Ruby, using TCPServer. I have a Server class with a serve method:
def serve
  # Do the actual serving in a child process
  @pid = fork do
    # Trap signal sent by #stop or by pressing ^C
    Signal.trap('INT') { exit }
    # Create a new server on port 2835 (1 ounce = 28.35 grams)
    server = TCPServer.new('localhost', 2835)
    @logger.info 'Listening on http://localhost:2835...'
    loop do
      socket = server.accept
      request_line = socket.gets
      @logger.info "* #{request_line}"
      socket.print "message"
      socket.close
    end
  end
end
and a stop method:
def stop
  @logger.info 'Shutting down'
  Process.kill('INT', @pid)
  Process.wait
  @pid = nil
end
I run my server from the command line, using:
if __FILE__ == $0
  server = Server.new
  server.logger = Logger.new(STDOUT)
  server.logger.formatter = proc { |severity, datetime, progname, msg| "#{msg}\n" }
  begin
    server.serve
    Process.wait
  rescue Interrupt
    server.stop
  end
end
The problem is that sometimes, when I run ruby server.rb from my terminal, the server starts, but when I try to make a request on localhost:2835, it fails. Only after several requests does it start serving some pages. In other cases, I need to stop and start the server again for it to serve pages properly. Why is this happening? Am I doing something wrong? I find this very weird...
The same thing applies to my specs: I have some specs defined, and some Capybara specs. Before each test I create and start a server, and after each test I stop it. The problem persists there too: tests sometimes pass, sometimes fail because the requested page could not be found.
Is there something fishy going on with my forking?
Would appreciate any answer because I have no more place to look...
Your code is not an HTTP server. It is a TCP server that sends the string "message" over the socket after receiving a newline.
The reason that your code isn't a valid HTTP server is that it doesn't conform to the HTTP protocol. One of the many requirements of the HTTP protocol is that the server respond with a message of the form
HTTP/1.1 <code> <reason>
Where <code> is a number and <reason> is a human-readable "status", like "OK" or "Server Error" or something along those lines. The string "message" obviously does not conform to this requirement.
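For illustration, here is roughly what the accept loop could print instead to satisfy that requirement (a minimal sketch of a fixed response, not a complete HTTP implementation):
socket.print "HTTP/1.1 200 OK\r\n"
socket.print "Content-Type: text/plain\r\n"
socket.print "Content-Length: #{'message'.bytesize}\r\n"
socket.print "\r\n"
socket.print "message"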
Here is a simple introduction to how you might build an HTTP server in Ruby: https://practicingruby.com/articles/implementing-an-http-file-server
Situation
I connect to a WebSocket with Chrome's Remote Debugging Protocol, using a Rails application and a class that includes Celluloid, or more specifically, celluloid-websocket-client.
The problem is that I don't know how to disconnect the WebSocket cleanly.
When an error happens inside the actor while the main program keeps running, Chrome somehow still treats the WebSocket as in use, not allowing me to attach again.
Code Example
Here's the code, completely self-contained:
require 'celluloid/websocket/client'
require 'json' # needed for JSON.dump below

class MeasurementConnection
  include Celluloid

  def initialize(url)
    @ws_client = Celluloid::WebSocket::Client.new url, Celluloid::Actor.current
  end

  # When WebSocket is opened, register callbacks
  def on_open
    puts "Websocket connection opened"
    # @ws_client.close to close it
  end

  # When raw WebSocket message is received
  def on_message(msg)
    puts "Received message: #{msg}"
  end

  # Send a raw WebSocket message
  def send_chrome_message(msg)
    @ws_client.text JSON.dump msg
  end

  # When WebSocket is closed
  def on_close(code, reason)
    puts "WebSocket connection closed: #{code.inspect}, #{reason.inspect}"
  end
end

MeasurementConnection.new ARGV[0].strip.gsub("\"","")

while true
  sleep
end
What I've tried
When I uncomment @ws_client.close, I get:
NoMethodError: undefined method `close' for #<Celluloid::CellProxy(Celluloid::WebSocket::Client::Connection:0x3f954f44edf4)
But I thought this was delegated? The .text method works, after all.
When I call terminate instead (to quit the actor), the WebSocket stays open in the background.
When I call terminate on the MeasurementConnection object that I create in the main code, the actor appears dead, but the connection is still not freed.
How to reproduce
You can test this yourself by starting Chrome with --remote-debugging-port=9222 as a command-line argument, then checking curl http://localhost:9222/json and using the webSocketDebuggerUrl from there, e.g.:
ruby chrome-test.rb $(curl http://localhost:9222/json 2>/dev/null | grep webSocket | cut -d ":" -f2-)
If no webSocketDebuggerUrl is available, then something is still connected to it.
It used to work when I was using EventMachine similar to this example, though with em-websocket-client instead of faye/websocket-client. There, upon stopping the EM loop (with EM.stop), the WebSocket would become available again.
I figured it out. I had been using version 0.0.1 of the celluloid-websocket-client gem, which did not delegate the close method.
Using 0.0.2 works, and the code looks like this:
In MeasurementConnection:
def close
  @ws_client.close
end
In the main code:
m = MeasurementConnection.new ARGV[0].strip.gsub("\"","")
m.close
while m.alive?
  m.terminate
  sleep(0.01)
end
I'm attempting to create a script in Ruby that connects to a Minecraft server via TCP and fetches the current number of players, much like the PHP script at http://www.webmaster-source.com/2012/07/05/checking-the-status-of-a-minecraft-server-with-php/
When running the code below I get �Took too long to log in
require 'socket'
server = TCPSocket.new '192.241.174.210', 25565
while line = server.gets
  puts line
end
server.close
What am I doing wrong here?
You're not sending this:
fwrite($sock, "\xfe");
from the script you linked. You have to send that before you call read, like they do.
Basically, the server is waiting for you to send data, and when you don't within the timeout, you are disconnected.
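A minimal sketch of the same fix in Ruby, assuming the server speaks the legacy 0xFE server-list ping the PHP script targets; the reply layout assumed here (one 0xFF byte, a two-byte length, then UTF-16BE text with fields split by the section sign) follows that script and may differ on newer servers:
require 'socket'

server = TCPSocket.new '192.241.174.210', 25565
server.write "\xfe"    # legacy server-list ping, like fwrite($sock, "\xfe") in PHP
response = server.read # the server replies with a kick packet containing the status
server.close

# Skip the 0xFF byte and the two-byte length, decode the UTF-16BE payload,
# and split on the section sign: [motd, current_players, max_players].
fields = response.byteslice(3..-1)
                 .force_encoding('UTF-16BE')
                 .encode('UTF-8', invalid: :replace, undef: :replace)
                 .split("\u00A7")
puts "#{fields[1]}/#{fields[2]} players online"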