How do you secure a socket with SSL in Ruby when you need to communicate over plaintext first?
I can't use OpenSSL::SSL::SSLServer because it's the client's responsibility to request an SSL connection first.
To make a long story short, I am attempting to implement RFC 3207, where the client sends the keyword "STARTTLS" and an SSL connection is then created.
My question is: "How do I create the SSL connection after the server has sent '220 OK'?"
I know I can use OpenSSL::SSL::SSLSocket on the client side, but I have no idea what to do on the server side.
If you know how to do this in a language other than Ruby, just post the code and I'll translate it. I've been working on this for about 8 hours, so I'll take everything I can get.
I have asked in #ruby-lang, but to no avail, and I have tried wrapping Socket objects in SSLSockets on the server and client at the same time, but that isn't working either.
In short, I'm very stuck, and I need all the help I can get.
I created this gist to illustrate how to set up a minimal TLS server. You may want to leave out lines 62-67; those are there to illustrate a new feature on trunk.
Other than that, it's a fully working TLS server that you can build on to add further functionality.
You may also want to change the server certificate's CN from "localhost" to a real domain if you want to use it seriously :)
You may notice that the largest part of the work is actually setting up the PKI aspects correctly. The core server part is this:
ctx = OpenSSL::SSL::SSLContext.new
ctx.cert = ... # the server certificate
ctx.key = ...  # the key associated with this certificate
ctx.ssl_version = :SSLv23

tcps = TCPServer.new('127.0.0.1', 8443)
ssls = OpenSSL::SSL::SSLServer.new(tcps, ctx)
ssls.start_immediately = true

begin
  loop do
    ssl = ssls.accept
    puts "Connected"
    begin
      while line = ssl.gets
        puts "Client says: #{line}"
        ssl.write(line) # simple echo, do something more useful here
      end
    ensure
      ssl.close
    end
  end
ensure
  tcps.close if tcps
end
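Since setting up the PKI correctly is the bulk of the work, here is a rough sketch of generating a throwaway self-signed certificate and key in Ruby to fill in the elided ctx.cert / ctx.key lines. The one-year validity, the CN of "localhost", and the SHA-256 digest are just my choices for testing, not something from the gist:

require 'openssl'

# Minimal self-signed certificate for testing (not for production use).
key  = OpenSSL::PKey::RSA.new(2048)
name = OpenSSL::X509::Name.parse('/CN=localhost')

cert = OpenSSL::X509::Certificate.new
cert.version    = 2
cert.serial     = 1
cert.subject    = name
cert.issuer     = name            # self-signed: issuer == subject
cert.public_key = key.public_key
cert.not_before = Time.now
cert.not_after  = Time.now + 365 * 24 * 60 * 60
cert.sign(key, OpenSSL::Digest::SHA256.new)

# These can then be assigned to the SSLContext:
# ctx.cert = cert
# ctx.key  = key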
You have to set the SSLServer's start_immediately field to false in order to start the SSL server in plain-text mode. At any point (i.e. when you receive the STARTTLS command from the client), you can call the SSLSocket's accept method to initiate the SSL/TLS handshake. The client will of course have to agree to the protocol :)
Here is a sample server I wrote to test this:
#!/usr/bin/ruby
require 'socket'
require 'openssl'

certfile = 'mycert.pem'   # contains both the certificate and the private key
port = 9002

server = TCPServer.new(port)

# Establish an SSL context
sslContext = OpenSSL::SSL::SSLContext.new
sslContext.cert = OpenSSL::X509::Certificate.new(File.read(certfile))
sslContext.key = OpenSSL::PKey::RSA.new(File.read(certfile))

# Create SSL server
sslServer = OpenSSL::SSL::SSLServer.new(server, sslContext)

# Don't expect an immediate SSL handshake upon connection.
sslServer.start_immediately = false

sslSocket = sslServer.accept
sslSocket.puts("Toast..")

# Server loop
while line = sslSocket.gets
  line.chomp!
  if "STARTTLS" == line
    # Start the TLS handshake on the existing connection
    sslSocket.accept
  end
  sslSocket.puts("Got '#{line}'")
end

sslSocket.close
I'm sure the original poster knows how to test STARTTLS, but the rest of us might need this reminder. I normally use the utilities from the GnuTLS package (gnutls-bin on Debian/Ubuntu) to test STARTTLS, because they let me start the handshake whenever I want to:
$ gnutls-cli --starttls --port 9002 --insecure localhost
This connects in plain text TCP socket mode. Type some lines and get them echoed. This traffic is unencrypted. If you send STARTTLS, the sslSocket.accept is called, and the server waits for SSL handshake. Press ctrl-d (EOF) to start handshake from the gnutls client, and watch it establish an encrypted SSL connection. Subsequent lines will be echoed as well, but the traffic is now encrypted.
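You can also exercise the STARTTLS upgrade with a small Ruby client. This is only a sketch against the sample server above; the host, port, and the disabled certificate verification are assumptions for local testing:

require 'socket'
require 'openssl'

plain = TCPSocket.new('localhost', 9002)
puts plain.gets            # greeting ("Toast..") over plain text

plain.puts('STARTTLS')     # ask the server to upgrade the connection

ctx = OpenSSL::SSL::SSLContext.new
ctx.verify_mode = OpenSSL::SSL::VERIFY_NONE   # testing only: self-signed cert

ssl = OpenSSL::SSL::SSLSocket.new(plain, ctx)
ssl.sync_close = true
ssl.connect                # client side of the TLS handshake

puts ssl.gets              # "Got 'STARTTLS'", sent after the handshake completed
ssl.puts('hello over TLS')
puts ssl.gets              # echoed back, now encrypted
ssl.close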
I have made some headway on this, saved for future use:
Yes, you should use OpenSSL::SSL::SSLSocket on both ends.
On the server side, you must create an OpenSSL::SSL::SSLContext object, passing in a symbol made of the protocol you wish to use with "_server" appended to the end (see OpenSSL::SSL::SSLContext::METHODS for the full list); in short, use :TLSv1_server for RFC 3207. Edit: you don't even need to do that; on the server side, create the context with certs and then call #accept on the socket to wait for the client to start the handshake.
Pass in SSL certificates to the ctx object
Edit as you please
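Putting those notes together, a minimal server-side sketch of that approach might look like the following. The port, the certificate paths, and the 220/250 replies are placeholders loosely modeled on RFC 3207, not taken from any of the code above:

require 'socket'
require 'openssl'

ctx = OpenSSL::SSL::SSLContext.new
ctx.cert = OpenSSL::X509::Certificate.new(File.read('server.crt'))  # placeholder path
ctx.key  = OpenSSL::PKey::RSA.new(File.read('server.key'))          # placeholder path

server = TCPServer.new(2525)
client = server.accept
client.puts '220 example.com ESMTP'

while line = client.gets
  if line.strip.upcase == 'STARTTLS'
    client.puts '220 OK'                       # tell the client to begin the handshake
    tls = OpenSSL::SSL::SSLSocket.new(client, ctx)
    tls.sync_close = true
    tls.accept                                 # server side of the TLS handshake
    client = tls                               # keep talking over the encrypted socket
  else
    client.puts "250 got #{line.strip}"
  end
end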
Related
I have a Ruby polling script that runs on a set of servers in an IP range. I very strongly prefer to do this polling by IP address, not by hostname, because:
1) I have defined IP address ranges to poll, and hostnames are arbitrary/change a lot
2) Because they change a lot, most of the hostnames do not have a reverse DNS lookup, so I can't engineer a list of hostnames from IPs
Previously our web servers had no problem with this polling, but on a new server that does not accept SSLv3 communication, this is the error I get when I run my poll:
/home/dashboard/.rvm/rubies/ruby-2.1.6/lib/ruby/2.1.0/net/http.rb:923:in `connect': SSL_connect returned=1 errno=0 state=unknown state: tlsv1 unrecognized name (OpenSSL::SSL::SSLError)
On the server side, this is the error:
nss_engine_init.c(1802): start function ownSSLSNISocketConfig for SNI
nss_engine_init.c(1827): Search [val = 172.16.99.18] failed, unrecognized name
When I run the poll with hostname, everything works fine.
Here is the crux of the HTTP Client code in Ruby:
def init_http(url)
  uri = URI.parse(url)
  http = Net::HTTP.new(uri.host, uri.port)
  http.verify_mode = OpenSSL::SSL::VERIFY_PEER
  http.read_timeout = 10
  http.use_ssl = true
  #http.ssl_version = 'TLSv1'
  return [http, uri]
end
As you can tell, I've been playing around with TLS and SSL version, because I figured that might be the issue. My next thought (that Google only has evidence of for Java) is, "How easy is it to just disable the SNI extension on my client?" The more general question is, "Can I keep using IP addresses with Ruby net/http while taking advantage of newer, more secure communication protocols?"
... tlsv1 unrecognized name (OpenSSL::SSL::SSLError)
This is in most cases not a problem you can solve by disabling SNI on the client side. SNI is required when you have multiple certificates on the same IP address; if you just connect by IP address and don't send the requested hostname (i.e. you disable SNI), the server will not know which certificate it should provide, which then results in the error above.
I very strongly prefer to do this polling by IP address, not by hostname, ...
If you have to deal with a server that requires SNI, then you have to use SNI, and you have to use it with the proper hostname, which is not necessarily the same name you would get from a reverse lookup.
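At the raw socket level, one way to satisfy both requirements is to open the TCP connection to the IP address you want to poll while still sending the real hostname in the SNI extension. This is only a sketch; the IP, port, and hostname are placeholders, and you still need to know which hostname the server expects:

require 'socket'
require 'openssl'

ip       = '172.16.99.18'      # address you want to poll (placeholder)
hostname = 'www.example.com'   # name the server's SNI configuration expects (placeholder)

tcp = TCPSocket.new(ip, 443)

ctx = OpenSSL::SSL::SSLContext.new
ctx.set_params(verify_mode: OpenSSL::SSL::VERIFY_PEER)

ssl = OpenSSL::SSL::SSLSocket.new(tcp, ctx)
ssl.hostname = hostname              # this is what populates the SNI extension
ssl.sync_close = true
ssl.connect
ssl.post_connection_check(hostname)  # verify the certificate matches the name

If I recall correctly, newer Ruby versions also expose this idea directly on Net::HTTP via an ipaddr= setter, but the Ruby 2.1.6 in the question predates that.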
The easiest way to solve this is to add the patch that @steffenullrich mentioned for bug 10613.
All I did was look at the diff, and edit the file myself, but you can use the Linux patch tool if you're familiar with it.
For those who are unsure of where /net/http.rb is located, it is in the same location as the rest of your Ruby sources. For example, mine was here:
/home/myuser/.rvm/rubies/ruby-2.1.6/lib/net/http.rb
Once you patch the file, set the .disable_sni property of your HTTP object to true, and SNI will not be required, allowing the use of IP addresses in mixed-TLS communication.
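For illustration, the init_http helper from the question might then look like this; note that disable_sni only exists after the patch from bug 10613 is applied, it is not part of stock net/http:

require 'net/http'
require 'uri'
require 'openssl'

def init_http(url)
  uri = URI.parse(url)
  http = Net::HTTP.new(uri.host, uri.port)   # uri.host is the raw IP address here
  http.use_ssl = true
  http.verify_mode = OpenSSL::SSL::VERIFY_PEER
  http.read_timeout = 10
  http.disable_sni = true                    # provided by the patch, not by stock net/http
  [http, uri]
end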
I am learning TCPSocket and have a simple server written:
require 'socket'
server = TCPServer.open(2000)
loop {
  client = server.accept
  p client.gets
  client.print("bar")
  client.close
}
and simple client written:
require 'socket'
hostname = 'localhost'
port = 2000
socket = TCPSocket.open(hostname, port)
socket.print("foo")
p socket.gets
When I run these in separate terminals with either the server or the client communicating one way (i.e. one "print"s and the other "gets"), I get the expected string on the other side. When I run them as written, with the client first "print"-ing a message to the server and then the server "gets"-ing it and "print"-ing a string back to the client, it just hangs. What is causing this issue?
Your program does the following:
The connection is established between client and server.
Client side
Calls print("foo") - exactly 3 bytes are transmitted to the server.
Calls gets - waits for data from the server, but the server never sends any.
Server side
Calls gets - Ruby's gets reads from the stream and always returns a whole line. But the server has received only "foo", and it has no idea whether that is a whole line or not, so it waits forever for a newline character that the client never sends.
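A minimal fix is to terminate the client's message with a newline, which puts does for you:

# client.rb - send a newline-terminated line so the server's gets can return
require 'socket'

socket = TCPSocket.open('localhost', 2000)
socket.puts("foo")   # puts appends "\n"; print("foo") never did
p socket.gets        # "bar" arrives once the server has responded and closed
socket.close

With the newline in place the server's gets returns, the server prints "bar" and closes the socket, and the client's gets then returns "bar" when it reaches end-of-file.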
I run several SQL statements in a transaction using the Ruby pg gem. The problem I bumped into is that the connection times out on these queries due to the firewall setup. The solution proposed here does not work because it requires a JDBC connection string, and I'm in Ruby (JRuby is not an option). Moving the driver program to AWS to remove the firewall is not an option either.
The code that I have is along the following lines:
conn = RedshiftHelper.get_redshift_connection
begin
  conn.transaction do
    # run my queries
  end
ensure
  conn.flush
  conn.finish
end
I'm now looking into the PG asynchronous API. I'm wondering if I can use is_busy to prevent the firewall from timing out, or something to that effect. I can't find good documentation on the topic, though. I'd appreciate any hints.
PS: I have solved this problem for a single query - I can trigger it asynchronously and track its completion using the system STV_INFLIGHT Redshift table. A transaction does not work this way, as I have to keep the connection open.
Ok, I nailed it down. Here are the facts:
Redshift is based on Postgres 8.0. To check that, connect to the Redshift instance using psql and note that it reports "server version 8.0".
Keepalive requests are specified at the level of the TCP socket (link).
Postgres 8.0 does not support the keepalive option when specifying a connection string (link to the 9.0 release changes, section E.19.3.9.1 on libpq).
The pg gem in Ruby is a wrapper around libpq.
Based on the facts above, TCP keepalive cannot be requested from Redshift via the connection string. However, pg lets you retrieve the socket used by the established connection. This means that even though libpq does not set the keepalive feature, we can still set it manually. The solution, thus:
class Connection
  attr_accessor :socket, :pg_connection

  def initialize(conn, socket)
    @socket = socket
    @pg_connection = conn
  end

  def method_missing(m, *args, &block)
    @pg_connection.send(m, *args, &block)
  end

  def close
    @socket.close
    @pg_connection.close
  end

  def finish
    @socket.close
    @pg_connection.close
  end
end

def get_connection
  conn = PGconn.open(...)
  socket_descriptor = conn.socket
  socket = Socket.for_fd(socket_descriptor)
  # Use TCP keep-alive feature
  socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_KEEPALIVE, 1)
  # Maximum keep-alive probes before assuming the connection is lost
  socket.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_KEEPCNT, 5)
  # Interval (in seconds) between keep-alive probes
  socket.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_KEEPINTVL, 2)
  # Maximum idle time (in seconds) before sending keep-alive probes
  socket.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_KEEPIDLE, 2)
  socket.autoclose = true
  return Connection.new(conn, socket)
end
The reason why I introduce a proxy Connection class is Ruby's tendency to garbage-collect IO objects (like sockets) when they go out of scope. This means the connection and the socket need to live in the same scope, which is achieved through this proxy class. My Ruby knowledge is not deep, so there may be a better way to handle the socket object.
This approach works, but I would be happy to learn if there are better/cleaner solutions.
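One possible simplification, offered as an untested assumption rather than something I have run against Redshift: mark the wrapper socket with autoclose = false so that garbage collection of the wrapper can never close libpq's file descriptor, which removes the need for the proxy class:

require 'socket'
require 'pg'

def enable_keepalive(conn)
  # conn is an already-opened PG connection (PGconn.open(...) as above).
  socket = Socket.for_fd(conn.socket)
  socket.autoclose = false   # GC of this wrapper must never close libpq's fd

  socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_KEEPALIVE, 1)
  socket.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_KEEPCNT, 5)
  socket.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_KEEPINTVL, 2)
  socket.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_KEEPIDLE, 2)

  conn                       # the plain PG connection can then be used directly
end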
The link you provided has the answer. I think you just want to follow the section at the top, which has settings for three different OSes; pick the one your code runs on (the client of the Amazon service).
Look in the section "To change TCP/IP timeout settings" - the settings apply to the OS that your code is running on, i.e. the machine acting as the client of the Amazon service, which is probably your server.
Linux — If your client is running on Linux, run the following command as the root user.
-- details omitted --
Windows — If your client runs on Windows, edit the values for the following registry settings under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters:
-- details omitted --
Mac — If your client is a Mac, create or modify the /etc/sysctl.conf file with the following values:
-- Details omitted --
I have been reading examples online about Ruby's TCPSocket and TCPServer, but I still can't find what the best practice is for this. If you have a running TCPServer and you want to keep the socket open across multiple connections/clients, who should be responsible for keeping it open, the server or the clients?
Let's say that you have a TCPServer running:
server = TCPServer.new(8000)
loop do
  client = server.accept
  while line = client.gets
    # process data from client
  end
  client.puts "Response from server"
  client.close # should the server close the socket?
end
And Client:
socket = TCPSocket.new 'localhost', 8000
while line = socket.gets
# process data from server
end
socket.close # should client close the socket here?
All of the examples I have seen have the socket.close at the end, which I would assume is not what I want as that would close the connection. Server and clients should maintain open connection as they will need to send data back and forth.
PS: I'm pretty much a noob on networking, so just kindly let me know if my question sounds completely dumb.
The server is usually responsible for keeping connections open because the client (being the one connecting to the server) can break the connection at any time.
Servers are usually in charge of everything that the client doesn't care about. A video game doesn't really care about the connection to the server as long as it's there. It just wants its data so it can keep running.
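As a sketch of what keeping the connection open looks like on the server side, here is a thread-per-client variant of the server from the question; the newline-delimited request/response framing is an assumption, not something specified there:

require 'socket'

server = TCPServer.new(8000)

loop do
  client = server.accept
  # One thread per client; the socket stays open until the client disconnects.
  Thread.new(client) do |sock|
    while line = sock.gets          # returns nil when the client closes its end
      sock.puts "Response from server: #{line.chomp}"
    end
    sock.close                      # clean up once the client has gone away
  end
end

The matching client simply keeps its socket open, writes a line whenever it has something to say, reads the reply, and only calls socket.close when it is completely done.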
I want to write a simple server socket in Ruby, which, when a client connects to it, prints a message and closes the client connection. I came up with:
require 'socket'
server = TCPServer.open('localhost',8800)
loop {
  client = server.accept
  Thread.start do
    s = client
    s.puts "Closing the connection. Bye!"
    s.close
  end
}
However, when I access "localhost:8800" in my browser, I am not getting that message; instead, it says the page was not found. What am I doing wrong here?
It is quite likely that your browser is expecting something on the remote end that talks HTTP.
What happens exactly is dependent upon your browser and the exact URI you typed in. It is also possible that your browser is connecting, getting the connection closed, and then displaying an error page.
If you want to see the server working, then use telnet from a command prompt: in one window type ruby ./myfilename.rb, and in another type telnet localhost 8800
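If you do want the browser to display the message, the server has to speak just enough HTTP for the browser to render a body. Here is a minimal sketch; the status line and headers are the bare minimum and not a complete HTTP implementation:

require 'socket'

server = TCPServer.open('localhost', 8800)
loop {
  client = server.accept
  Thread.start(client) do |s|
    body = "Closing the connection. Bye!"
    # A bare-bones HTTP response so the browser shows the body instead of an error page.
    s.print "HTTP/1.1 200 OK\r\n"
    s.print "Content-Type: text/plain\r\n"
    s.print "Content-Length: #{body.bytesize}\r\n"
    s.print "Connection: close\r\n"
    s.print "\r\n"
    s.print body
    s.close
  end
}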