I've implemented a very simple kind of server in Ruby, using TCPServer. I have a Server class with a serve method:
def serve
  # Do the actual serving in a child process
  @pid = fork do
    # Trap signal sent by #stop or by pressing ^C
    Signal.trap('INT') { exit }
    # Create a new server on port 2835 (1 ounce = 28.35 grams)
    server = TCPServer.new('localhost', 2835)
    @logger.info 'Listening on http://localhost:2835...'
    loop do
      socket = server.accept
      request_line = socket.gets
      @logger.info "* #{request_line}"
      socket.print "message"
      socket.close
    end
  end
end
and a stop method:
def stop
  @logger.info 'Shutting down'
  Process.kill('INT', @pid)
  Process.wait
  @pid = nil
end
I run my server from the command line, using:
if __FILE__ == $0
  server = Server.new
  server.logger = Logger.new(STDOUT)
  server.logger.formatter = proc { |severity, datetime, progname, msg| "#{msg}\n" }
  begin
    server.serve
    Process.wait
  rescue Interrupt
    server.stop
  end
end
The problem is that, sometimes, when I run ruby server.rb from my terminal, the server starts, but when I try to make a request on localhost:2835, it fails. Only after several requests does it start serving some pages. In other cases, I need to stop and start the server again for it to serve pages properly. Why is this happening? Am I doing something wrong? I find this very weird...
The same thing applies to my specs: I have some specs defined, among them some Capybara specs. Before each test I create and start a server, and after each test I stop it. The problem persists: tests sometimes pass and sometimes fail because the requested page could not be found.
Is there something fishy going on with my forking?
I would appreciate any answer because I have nowhere else to look...
Your code is not an HTTP server. It is a TCP server that sends the string "message" over the socket after receiving a newline.
The reason your code isn't a valid HTTP server is that it doesn't conform to the HTTP protocol. One of the many requirements of the HTTP protocol is that the server respond with a status line of the form

HTTP/1.1 <code> <reason>

where <code> is a number and <reason> is a human-readable status, like "OK" or "Server Error" or something along those lines. The bare string "message" obviously does not conform to this requirement.
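For illustration, here is a minimal sketch of a well-formed response (the headers and body are only an example); it would replace socket.print "message" in the loop:

body = "message"
socket.print "HTTP/1.1 200 OK\r\n"                    # status line
socket.print "Content-Type: text/plain\r\n"           # headers
socket.print "Content-Length: #{body.bytesize}\r\n"
socket.print "Connection: close\r\n"
socket.print "\r\n"                                   # blank line ends the headers
socket.print body                                     # then the body

A real server should also read the request headers up to the blank line before responding, not just the request line.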
Here is a simple introduction to how you might build an HTTP server in Ruby: https://practicingruby.com/articles/implementing-an-http-file-server
I have two network-connected computers sending and receiving UDP data. The sending computer is a Windows machine using SocketTest v3.0.0.
The receiving one is a MacBook using this Ruby code for an echo server:
require 'eventmachine'

EM.run do
  puts "EM.run"
  EM.open_datagram_socket '0.0.0.0', 9100 do |server|
    puts "socket open"
    def server.receive_data(data)
      puts "data received: #{data}"
      send_data("sending back: #{data}")
    end
  end
end
When I launch this program and send data from the Windows computer, nothing happens. But when I run this second program briefly alongside the EventMachine echo server:
require 'socket'

s = UDPSocket.new
while 1 do
  puts "sending..."
  s.send "hi", 0, "localhost", 9100
end
EventMachine prints several "hi" messages as intended, and from then on it also receives data from the network-connected computer properly (I see the "sending back" response on the Windows computer).
Why is that? My understanding is that UDP is connectionless, so the socket should take everything arriving on the given port. How does a packet from "localhost" trigger the socket to start listening to the network?
OK, so I gave up investigating this and did a workaround:
require 'eventmachine'
require 'socket'

EM.run do
  puts "EM.run"
  EM.open_datagram_socket '0.0.0.0', 9100 do |server|
    # send first packet from localhost to trigger network receiving (bug on my Macbook)
    s = UDPSocket.new
    s.send Time.now.to_s, 0, "localhost", 9100
    def server.receive_data(data)
      puts data
      send_data("OK")
      # forward data to another UDP port
      s = UDPSocket.new
      s.send data, 0, "localhost", 9101
    end
  end
end
Ugly, but it works. The socket s probably shouldn't be created twice, but whatever. Now I can receive the network packets in any program on port 9101.
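For completeness, a minimal sketch of a listener for the forwarded packets on port 9101 (the buffer size is arbitrary):

require 'socket'

r = UDPSocket.new
r.bind('localhost', 9101)
loop do
  data, _addr = r.recvfrom(1024)  # blocks until a datagram arrives
  puts "forwarded: #{data}"
end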
Situation
I connect to a WebSocket with Chrome's Remote Debugging Protocol, using a Rails application and a class that implements Celluloid, or more specifically, celluloid-websocket-client.
The problem is that I don't know how to disconnect the WebSocket cleanly.
When an error happens inside the actor, but the main program runs, Chrome somehow still makes the WebSocket unavailable, not allowing me to attach again.
Code Example
Here's the code, completely self-contained:
require 'celluloid/websocket/client'
require 'json' # JSON.dump is used below

class MeasurementConnection
  include Celluloid

  def initialize(url)
    @ws_client = Celluloid::WebSocket::Client.new url, Celluloid::Actor.current
  end

  # When WebSocket is opened, register callbacks
  def on_open
    puts "Websocket connection opened"
    # @ws_client.close to close it
  end

  # When raw WebSocket message is received
  def on_message(msg)
    puts "Received message: #{msg}"
  end

  # Send a raw WebSocket message
  def send_chrome_message(msg)
    @ws_client.text JSON.dump msg
  end

  # When WebSocket is closed
  def on_close(code, reason)
    puts "WebSocket connection closed: #{code.inspect}, #{reason.inspect}"
  end
end

MeasurementConnection.new ARGV[0].strip.gsub("\"","")

while true
  sleep
end
What I've tried
When I uncomment @ws_client.close, I get:
NoMethodError: undefined method `close' for #<Celluloid::CellProxy(Celluloid::WebSocket::Client::Connection:0x3f954f44edf4)>
But I thought this was delegated? At least the .text method works too?
When I call terminate instead (to quit the Actor), the WebSocket is still opened in the background.
When I call terminate on the MeasurementConnection object that I create in the main code, it makes the Actor appear dead, but still does not free the connection.
How to reproduce
You can test this yourself by starting Chrome with --remote-debugging-port=9222 as command-line argument, then checking curl http://localhost:9222/json and using the webSocketDebuggerUrl from there, e.g.:
ruby chrome-test.rb $(curl http://localhost:9222/json 2>/dev/null | grep webSocket | cut -d ":" -f2-)
If no webSocketDebuggerUrl is available, then something is still connecting to it.
It used to work when I was using EventMachine, similar to this example, except with em-websocket-client instead of faye/websocket-client. There, upon stopping the EM loop (with EM.stop), the WebSocket would become available again.
I figured it out. I had used version 0.0.1 of the celluloid-websocket-client gem, which did not delegate the close method.
Using 0.0.2 worked, and the code would look like this:
In MeasurementConnection:
def close
  @ws_client.close
end
In the main code:
m = MeasurementConnection.new ARGV[0].strip.gsub("\"","")
m.close
while m.alive?
  m.terminate
  sleep(0.01)
end
I have an application that does something like this using Ruby's Net::SSH gem:
key = '/home/creede/.ssh/secret.pem'
conn = nil
begin
  conn = Net::SSH::start('example.com', 'creede', :timeout => 60, :keys => [key])
rescue
  begin
    conn = Net::SSH::start('example.com', 'root', :timeout => 60, :keys => [key])
  rescue Exception => e
    puts "Can't connect to example.com: #{e.to_s}"
    if not conn.nil?
      if not conn.closed?
        conn.close
      end
    end
  end
end

issue = conn.exec!('cat /etc/issue')
conn.close
All well and good when we connect to the server on the first try. However, if the server needs to be logged into as root because the first connection attempt fails, the first attempted ssh process turns into a zombie. So does the second one if connecting as root fails too.
These zombies disappear when the parent process finishes, but I'd like to figure out how (if possible) to get rid of the zombies as soon as we know the connection fails.
I'm testing Ruby XMLRPC support right now. It all works fine, except XMLRPC::Server#shutdown.
If I run the following Ruby 1.9.3 test code, it fails to shut down the server on both Windows 7 and OS X 10.7:
# server.rb
require "xmlrpc/server"
require 'thread'
Thread.new { sleep 10; $server.shutdown() }
$server = XMLRPC::Server.new( 1234 )
$server.add_handler( "test" ) { true }
$server.serve()
# client.rb
require "xmlrpc/client"
server = XMLRPC::Client.new( "localhost", "/", 1234 )
loop { server.call( "test" ); sleep 0.1 }
After ten seconds, the server writes "INFO going to shutdown ..." to stdout, but won't actually shut down and continues to handle incoming requests. What am I doing wrong?
Have you noticed that without incoming requests it shuts down properly? Also, after you stop the client, the server shuts down as it should, returning the :Stop symbol. It waits for the client to stop pumping data before shutting down.
I have examined the XMLRPC::Server source code. There seems to be a bug/feature that prevents shutdown while a client uses a connection with the HTTP keep-alive flag.
The workaround is to use call_async instead of call.
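On the client side that looks like this (the same client as above, with call_async swapped in; call_async opens a fresh connection per request instead of holding a keep-alive connection, so the server can shut down between calls):

# client.rb
require "xmlrpc/client"

server = XMLRPC::Client.new( "localhost", "/", 1234 )
loop { server.call_async( "test" ); sleep 0.1 }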
I am looking for a way to have a Ruby-based webserver communicate over a pipe, not over TCP/IP. So I would send an HTTP request down the pipe, and I want to read the response back from the pipe. It would be used as a bundled/internal webserver (for RPC or something similar) for a desktop application. I don't want to deal with port configuration when there are multiple instances of my application running on the same machine.
Any ideas?
Thank you in advance.
Try a UNIXSocket. You use a local path to specify where the socket connection is, not a port, and you can easily handle multiple simultaneous connections.
# server.rb
require 'socket'

filename = '/tmp/example.sock'  # example socket path
queuesize = 5                   # example backlog size

File.delete( filename ) if File.exist? filename
server = UNIXServer.open( filename )
server.listen( queuesize )
puts "waiting on client connection"
while client = server.accept
  puts "got client connection #{client.inspect}"
  child_pid = fork do
    puts "Asking the client what they want"
    client.puts "Welcome to your server, what can I get for you?"
    until client.eof?
      line = client.gets
      puts "The client wants #{line.chomp.inspect}"
    end
    client.close
  end
  puts "running server (#{child_pid})"
  client.close
  Process.detach(child_pid)
end
server.close
# client.rb
require 'socket'

filename = '/tmp/example.sock'  # must match the server's socket path

puts "requesting server connection"
server = UNIXSocket.new( filename )
puts "got server connection #{server}"
line = server.gets
puts "The server said: #{line.chomp.inspect}"
%w{ a-pony a-puppy a-kitten a-million-dollars }.each do |item|
  server.puts item
end
server.close
A pipe is for one-way communication, so there is no way to run a webserver over one. You might try a Unix socket instead. But really, the simplest solution is to use the loopback interface (127.0.0.1). It's highly optimized, so speed won't be a problem.
Not an answer to your question. However, if you do end up having to use a TCP/IP HTTP server, you should ensure that it listens only on 127.0.0.1. Listening on the loopback address is quite fast, since it never touches the network, and it also makes things a tad more secure by stopping people from connecting from the outside.
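For example, with WEBrick from the standard library (a sketch; the port and handler are illustrative):

require 'webrick'

# Bind only to the loopback interface so outside hosts can't connect
server = WEBrick::HTTPServer.new( :BindAddress => '127.0.0.1', :Port => 2835 )
server.mount_proc('/') { |req, res| res.body = 'hello' }
trap('INT') { server.shutdown }
server.start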
Thin supports unix sockets.
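For instance (assuming Thin's --socket command-line flag; the socket path is hypothetical):

# config.ru
run proc { |env| [200, { 'Content-Type' => 'text/plain' }, ['hello']] }

Then start Thin bound to a Unix socket instead of a TCP port:

thin start --socket /tmp/myapp.sock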