Why isn't my Sinatra app working with SSL?

Alright so, I decided to make sure I can get this SSL stuff working before building the API, and I feel 95% of the way there.
So, I have a cert and key from Namecheap. All should be good there.
Here is my app.rb:
require 'sinatra/base'
require 'webrick'
require 'webrick/https'
require 'openssl'

class MyServer < Sinatra::Base
  set :bind, '0.0.0.0'

  get '/' do
    "Hello, world!\n"
  end
end

CERT_PATH = './ssl'

webrick_options = {
  :Port => 443,
  :Logger => WEBrick::Log::new($stderr, WEBrick::Log::DEBUG),
  :DocumentRoot => "/ruby/htdocs",
  :SSLEnable => true,
  :SSLVerifyClient => OpenSSL::SSL::VERIFY_NONE,
  :SSLCertificate => OpenSSL::X509::Certificate.new(File.open(File.join(CERT_PATH, "server.crt")).read),
  :SSLPrivateKey => OpenSSL::PKey::RSA.new(File.open(File.join(CERT_PATH, "server.key")).read),
  :SSLCertName => [["CN", WEBrick::Utils::getservername]],
  :app => MyServer
}

Rack::Server.start webrick_options
I run the program with
sudo ruby app.rb
And what's interesting is, on localhost (testing from my MacBook Pro, running El Capitan) I can access https://localhost and it just says the cert isn't trusted, and asks if I want to go in anyway. Great.
My EC2 instance, however, I can now access via a domain name, one that matches the cert of course. But the site just returns an ERR_CONNECTION_REFUSED (this is what displays in Chrome).
And of course, that error shows up whether or not I run the Sinatra server.
OK, so it sounds easy. Security group, right?
Well, according to EC2, I'm using a security group that has TCP port 443 enabled on inbound (HTTPS).
So, what gives? What am I not doing right? Why does it do what I expect on localhost but not on the ec2 instance?
Any help would be super appreciated.
Other information:
The server does appear to be running. sudo ruby app.rb gives me valid info about my cert, followed by:
[2016-01-22 03:36:52] DEBUG WEBrick::HTTPServlet::FileHandler is mounted on /.
[2016-01-22 03:36:52] DEBUG Rack::Handler::WEBrick is mounted on /.
[2016-01-22 03:36:52] INFO WEBrick::HTTPServer#start: pid=2499 port=443
If I remove the WEBrick SSL setup and change the port to 80, everything works fine. I can access this app from my site's domain, over http (not https) of course.
From the local machine, I am getting a response.
$ wget https://localhost
--2016-01-22 04:11:48-- https://localhost/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:443... connected.
ERROR: cannot verify localhost's certificate, issued by ‘/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Domain Validation Secure Server CA’:
Unable to locally verify the issuer's authority.
ERROR: no certificate subject alternative name matches requested host name ‘localhost’.
To connect to localhost insecurely, use `--no-check-certificate'.
This seems correct! So it does seem to be something with the server setup. I can't connect to it. =/ Again: the security group allows 443 and 80.
Things added since I asked the question originally, but that still haven't fixed the issue:
set :bind, '0.0.0.0'

Generally you don't want any Ruby web server actually handling SSL. You make it serve plain HTTP (accessible only via localhost), and then install a reverse proxy in front that handles all of the SSL communication.
For example:
1) Install nginx (a reverse proxy) and configure it to listen on port 443.
2) Set your Ruby app server to listen on 127.0.0.1:8080 (accept local connections only).
3) All requests hit nginx, which strips the SSL and sends the plain HTTP request on to your Ruby web server.
A very simple nginx config to get you started:
server {
  listen 443 ssl;
  server_name you.example.com;

  ssl_certificate /path/to/cert.pem;
  ssl_certificate_key /path/to/your.key;
  add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";

  location / {
    proxy_pass http://localhost:8080; # your ruby appserver
  }
}
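For the Ruby side of that setup, the Sinatra app then needs no SSL code at all. A minimal sketch (the 8080 port just has to match the proxy_pass target above):

# app.rb -- plain HTTP only; nginx terminates SSL in front of this
require 'sinatra/base'

class MyServer < Sinatra::Base
  set :bind, '127.0.0.1' # accept local connections only
  set :port, 8080        # must match proxy_pass above

  get '/' do
    "Hello, world!\n"
  end

  # start the server if this file is executed directly
  run! if app_file == $0
end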

Related

How to proxy net-sftp?

I'm using net-sftp which relies on the net-ssh gem.
I'm trying to connect to a remote log service via SFTP, and it requires IP whitelisting. All my current servers have dynamic IPs.
I'm trying to set up a static, secure, proxy server in Google Cloud. I don't really understand all the differences between all the types of proxying, but net-ssh appears to support...
socks4
socks5
'jump' proxy
I looked into setting up a socks5 proxy with Dante but it appears a bit overkill just to relay the SFTP connection through it, not to mention I think it sends passwords in plain text.
How would I go about proxying net-sftp through some server in the easiest way?
The easiest way would be to set up a jump-host server that can reach the target servers, and then connect to the target server by letting the jump-host proxy your connection through.
SSH makes it trivially easy:
ssh -J user@jump-host myuser@target-host
In your .ssh/config you can do the following:
### First jump-host. Directly reachable
Host jump-host
  HostName jump-host.example.org

### Host to jump to via jump-host.example.org
Host target-host
  HostName target-host.example.org
  ProxyJump jump-host
This will allow you to use net-ssh as usual. If you don't want to change the config file, then you will have to use 'net/ssh/proxy/jump':
require 'net/ssh/proxy/jump'

proxy = Net::SSH::Proxy::Jump.new('user@proxy')
Net::SSH.start('host', 'user', :proxy => proxy) do |ssh|
  ...
end
See this article for more info on Jump Hosts.
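And since the question is about net-sftp specifically: Net::SFTP.start passes its options straight through to Net::SSH, so the same proxy object works there. A sketch (host and user names below are placeholders):

require 'net/sftp'
require 'net/ssh/proxy/jump'

proxy = Net::SSH::Proxy::Jump.new('user@jump-host.example.org')

# :proxy is forwarded to Net::SSH.start under the hood
Net::SFTP.start('target-host.example.org', 'myuser', :proxy => proxy) do |sftp|
  sftp.download!('/remote/path/file.log', 'file.log')
end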

Couchdb ssl not listening on port 6984

I've been setting up CouchDB to run on SSL following the instructions from the couch docs. It's pretty straightforward: you make 3 adjustments to local.ini:
[daemons]
httpsd = {chttpd, start_link, [https]}

[ssl]
cert_file = absolute/path/to/cert.pem
key_file = absolute/path/to/key.pem
I've made the key and certificate with openssl no problem, but whenever I hit port 6984 on localhost (the port it's supposed to run on by default), the port just isn't active:
==> curl https://127.0.0.1:6984/
curl: (7) Failed to connect to 127.0.0.1 port 6984: Connection refused
I've inspected the port; nothing is running there. I can put a node.js server on the port and it works fine too. I can't find a similar situation to this anywhere. I'm running the macOS CouchDB application (v2.1.2). It appears that the SSL server daemon is just straight up not running at all. Everything else in Couch is working fine. Maybe I have to tweak the local.ini file to turn the daemon on? No idea really. Any suggestions are appreciated.
Not sure if this will ever be a very popular question, but just thought I'd point out that a very popular way to set up SSL with CouchDB is to use a proxy like HAProxy, due to annoyances with SSL and Erlang (which CouchDB is written in).
That being said, I solved my problem by setting up SSL termination at haproxy, which then forwards traffic to CouchDB on an internal port. On a macOS machine the steps were pretty easy.
1) Install haproxy with brew: brew install haproxy
2) Create a self-signed certificate with openssl, which haproxy needs for its SSL configuration (it's really just a concatenated file of your key and certificate):
openssl genrsa -out key.key 1024
openssl req -new -key key.key -out cert.csr
openssl x509 -req -days 365 -in cert.csr -signkey key.key -out certificate.crt
cat ./certificate.crt ./key.key | tee combined.pem
3) Create the haproxy configuration file (haproxy.cfg). This is just a pretty naive first implementation, but it is a good starting point. Note that "/absolute/path/to/combined.pem" would be changed to wherever the combined.pem file is actually located.
global
  maxconn 512
  spread-checks 5

defaults
  mode http
  log global
  monitor-uri /_haproxy_health_check
  option log-health-checks
  option httplog
  balance roundrobin
  option forwardfor
  option redispatch
  retries 4
  option http-server-close
  timeout client 150000
  timeout server 3600000
  timeout connect 500
  stats enable
  stats uri /_haproxy_stats
  # stats auth admin:admin # Uncomment for basic auth

frontend http-in
  # bind *:$HAPROXY_PORT
  bind *:443 ssl crt /absolute/path/to/combined.pem no-tls-tickets ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA384:AES128-SHA256:AES128-SHA:AES256-SHA256:AES256-SHA:!MD5:!aNULL:!DH:!RC4
  # Add these lines beneath bind, still within http-in
  reqadd X-Forwarded-Proto:\ https
  # Distinguish between secure and insecure requests
  acl secure dst_port eq 8000
  # Mark all cookies as secure if sent over SSL
  rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if secure
  # Add the HSTS header with a 1 year max-age
  rspadd Strict-Transport-Security:\ max-age=31536000 if secure
  # Redirect HTTP to HTTPS
  redirect scheme https code 301 if !{ ssl_fc }
  default_backend couchdbs

backend couchdbs
  option httpchk GET /_up
  http-check disable-on-404
  server couchdb1 127.0.0.1:5984 check inter 5s
4) Run CouchDB, then start haproxy from the directory housing the above haproxy.cfg: haproxy -f haproxy.cfg.
This is a simple point to start from. This setup can handle load balancing of multiple CouchDBs, and in production it would need a valid certificate from some authority. For anyone interested in, or having difficulty with, SSL and CouchDB in a macOS development environment, this is a decent solution that I found to work quite nicely.
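Once both are up, a quick sanity check from Ruby might look like this (a sketch; VERIFY_NONE only because the certificate above is self-signed, never for production):

require 'net/http'
require 'openssl'

# Hit haproxy on 443; it terminates SSL and forwards to CouchDB on 5984
uri = URI('https://127.0.0.1/_up')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_NONE # self-signed dev cert only

puts http.get(uri.path).body # => {"status":"ok"} if CouchDB is healthy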

Ruby `gem` command fails SSL verification on self-hosted gemserver

I've got a Ruby Gemserver running via Geminabox over http on port 9392.
It's behind an HAProxy load balancer which is enforcing https and doing SSL termination. Here's the relevant chunk(s) of my haproxy.cfg:
global
  daemon
  maxconn 256
  user nobody
  tune.ssl.default-dh-param 2048

defaults
  mode http
  timeout connect 5000ms
  timeout client 50000ms
  timeout server 50000ms
  option httpclose

frontend http-in
  bind *:80
  reqadd X-Forwarded-Proto:\ http
  redirect scheme https if !{ ssl_fc }

frontend www-https
  reqadd X-Forwarded-Proto:\ https
  bind *:443 ssl crt /usr/local/etc/haproxy/***********.pem
  default_backend home_server
  acl is_gems hdr(host) -i gems.example.com
  use_backend gems if is_gems

backend gems
  server gems1 192.168.100.102:9392 ssl verify none
When I try to add my gemserver from any other machine, I get:
Error fetching https://gems.example.com:
SSL_connect returned=1 errno=0 state=error: certificate verify failed (https://gems.boynton.io/specs.4.8.gz)
What's interesting is that this remains true whether the gemserver is running or not, leading me to believe my local gem client is rejecting the cert out of the gate. (Otherwise I'd have stuck this question on Server Fault.)
The cert is a Comodo PositiveSSL wildcard cert, not self-signed. I've confirmed the CA cert is in my local trust store (I'm on OS X El Capitan, so it's been added to my Keychain). It seems like maybe the gem command isn't using my system trust store.
I've googled around on this for two or three days to no avail -- everything I can find relates to rubygems.org and suggests gem update --system (I'm running the latest rubygems) or switching to http, both of which are rubygems.org-specific fixes.
How can I get gem to use my local trust store or take an additional cert?
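One diagnostic worth running (this is a guess at the cause, not a confirmed fix): gem goes through Ruby's OpenSSL bindings rather than the OS X Keychain, so you can ask Ruby where it actually looks for trusted CA certificates:

require 'openssl'

# Default locations Ruby's OpenSSL searches for CA certificates; gem
# inherits these. OpenSSL also honors the SSL_CERT_FILE environment
# variable as an override.
puts OpenSSL::X509::DEFAULT_CERT_FILE
puts OpenSSL::X509::DEFAULT_CERT_DIR

If the Comodo chain isn't present in either location, that would explain why the cert verifies in the browser but not for gem.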

Can't access Ruby server on VM from host machine

I have a VM set up running Ubuntu server 14.04. In my VM I have created the following Ruby/Sinatra app:
require 'sinatra'

set :environment, :production
set :bind, '0.0.0.0'

get '/' do
  "Hello World!"
end
When I execute this using ruby hello.rb I get the following output:
[2015-03-09 16:58:34] INFO WEBrick 1.3.1
[2015-03-09 16:58:34] INFO ruby 2.1.5 (2014-11-13) [x86_64-linux]
== Sinatra/1.4.5 has taken the stage on 4567 for production with backup from WEBrick
[2015-03-09 16:58:34] INFO WEBrick::HTTPServer#start: pid=2258 port=4567
Everything seems to work fine, but when I try to access localhost:4567 from my host machine (Windows 8.1) I get a GET http://localhost:4567/ net::ERR_CONNECTION_REFUSED error (in Chrome).
If I try to access the server from within my VM (e.g. by using wget http://localhost:4567), it works fine.
I also have Apache2.4 running in my VM, which works fine, but I disabled it when trying to access my Ruby server by running sudo service apache2 stop.
What could the problem be? I have no problem running regular Ruby files, and I can access my Ruby/Sinatra app if I use Apache2 with Phusion Passenger. But when I simply run ruby hello.rb I can't access it from my host machine.
localhost refers to your local host, which in the case of Windows is not the same as your Ubuntu instance.
You'll need to connect to your Ubuntu instance IP directly, whatever that is. Usually you can find out with ip addr or ifconfig.
If you need a friendly DNS name you can put in your browser, xip.io provides one.
If you're using Vagrant then you can configure port forwarding so you can still use localhost if you want. Without port forwarding you will not be able to connect indirectly.
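For example, with Vagrant the forwarding is a one-liner in your Vagrantfile (a sketch; adjust the ports to whatever your app binds):

# Vagrantfile -- makes http://localhost:4567 on the Windows host reach
# the Sinatra app listening on 4567 inside the Ubuntu guest
Vagrant.configure("2") do |config|
  config.vm.network "forwarded_port", guest: 4567, host: 4567
end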
I have the network adapter for my VM attached to NAT. I was forwarding ports 443, 22 and 80 to my VM, and accessing my server on those ports works fine. Since I was running the Ruby WEBrick server on the default port 4567, I just had to forward port 4567 from my host machine to my VM as well.
After that change, typing http://localhost:4567 into my web browser served up the content from my Ruby file.

How to log real client ip in rails log when behind proxy like nginx

Problem
I have a Rails 3.2.15 with Rack 1.4.5 setup on two servers. The first server is an nginx proxy serving static assets. The second server is a Unicorn serving the Rails app.
In Rails production.log I always see the nginx IP address (10.0.10.150) and not my client IP address (10.0.10.62):
Started GET "/" for 10.0.10.150 at 2013-11-21 13:51:05 +0000
I want to have the real client IP in logs.
Our Setup
The HTTP headers X-Forwarded-For and X-Real-IP are set up correctly in nginx, and I have defined 10.0.10.62 as not being a trusted proxy address by setting config.action_dispatch.trusted_proxies = /^127\.0\.0\.1$/ in config/environments/production.rb, thanks to another answer. I can check that it is working because I log them in the application controller:
in app/controllers/application_controller.rb:
class ApplicationController < ActionController::Base
  before_filter :log_ips

  def log_ips
    logger.info("request.ip = #{request.ip} and request.remote_ip = #{request.remote_ip}")
  end
end
in production.log:
request.ip = 10.0.10.150 and request.remote_ip = 10.0.10.62
Investigation
When investigating, I saw that Rails::Rack::Logger is responsible for logging the IP address:
def started_request_message(request)
  'Started %s "%s" for %s at %s' % [
    request.request_method,
    request.filtered_path,
    request.ip,
    Time.now.to_default_s ]
end
request is an instance of ActionDispatch::Request. It inherits from Rack::Request, which defines how the IP address is computed:
def trusted_proxy?(ip)
  ip =~ /^127\.0\.0\.1$|^(10|172\.(1[6-9]|2[0-9]|30|31)|192\.168)\.|^::1$|^fd[0-9a-f]{2}:.+|^localhost$/i
end

def ip
  remote_addrs = @env['REMOTE_ADDR'] ? @env['REMOTE_ADDR'].split(/[,\s]+/) : []
  remote_addrs.reject! { |addr| trusted_proxy?(addr) }

  return remote_addrs.first if remote_addrs.any?

  forwarded_ips = @env['HTTP_X_FORWARDED_FOR'] ? @env['HTTP_X_FORWARDED_FOR'].strip.split(/[,\s]+/) : []

  if client_ip = @env['HTTP_CLIENT_IP']
    # If forwarded_ips doesn't include the client_ip, it might be an
    # ip spoofing attempt, so we ignore HTTP_CLIENT_IP
    return client_ip if forwarded_ips.include?(client_ip)
  end

  return forwarded_ips.reject { |ip| trusted_proxy?(ip) }.last || @env["REMOTE_ADDR"]
end
The forwarded IP addresses are filtered with trusted_proxy?. Because our nginx server uses a public IP address and not a private one, Rack::Request#ip thinks it is not a proxy but rather the real client IP trying to do some IP spoofing. That's why I see the nginx IP address in my logs.
In log excerpts, client and servers have IP address 10.0.10.x because I am using virtual machines to reproduce our production environment.
Our current solution
To circumvent this behavior, I wrote a little Rack middleware located in app/middleware/remote_ip_logger.rb:
class RemoteIpLogger
  def initialize(app)
    @app = app
  end

  def call(env)
    remote_ip = env["action_dispatch.remote_ip"]
    Rails.logger.info "Remote IP: #{remote_ip}" if remote_ip
    @app.call(env)
  end
end
And I insert it just after the ActionDispatch::RemoteIp middleware:
config.middleware.insert_after ActionDispatch::RemoteIp, "RemoteIpLogger"
This way I can see the real client IP in logs:
Started GET "/" for 10.0.10.150 at 2013-11-21 13:59:06 +0000
Remote IP: 10.0.10.62
I feel a little uncomfortable with this solution. nginx + unicorn is a common setup for Rails applications. If I have to log the client IP myself, it means I have missed something. Is it because the nginx server uses a public IP address when communicating with the Rails server? Is there a way to customize the trusted_proxy? method of Rack::Request?
EDITED: added nginx configuration and an HTTP request capture.
/etc/nginx/sites-enabled/site.example.com.conf:
server {
  server_name site.example.com;
  listen 80;

  location ^~ /assets/ {
    root /home/deployer/site/shared;
    expires 30d;
  }

  location / {
    root /home/deployer/site/current/public;
    try_files $uri @proxy;
  }

  location @proxy {
    access_log /var/log/nginx/site.access.log combined_proxy;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_read_timeout 300;
    proxy_pass http://rails.example.com:8080;
  }
}
The nginx server is 10.0.10.150. The Rails server is 10.0.10.190. My machine is 10.0.10.62. When doing curl http://10.0.10.150/ from my machine, a tcpdump port 8080 -i eth0 -Aq -s 0 on the Rails server shows these request HTTP headers:
GET / HTTP/1.0
X-Forwarded-For: 10.0.10.62
X-Forwarded-Proto: http
Host: 10.0.10.150
Connection: close
User-Agent: curl/7.29.0
Accept: */*
And the rails log /home/deployer/site/current/log/production.log (Remote IP and request.ip lines being added by custom code):
Started GET "/" for 10.0.10.150 at 2013-11-22 08:01:17 +0000
Remote IP: 10.0.10.62
Processing by Devise::RegistrationsController#new as */*
request.ip = 10.0.10.150 and request.remote_ip = 10.0.10.62
Rendered devise/shared/_links.erb (0.1ms)
Rendered devise/registrations/new.html.erb within layouts/application (2.3ms)
Rendered layouts/_landing.html.erb (1.5ms)
Completed 200 OK in 8.9ms (Views: 7.5ms | ActiveRecord: 0.0ms)
In my opinion, your current approach is the only sane one. The only step that is missing is overwriting the IP address in env.
The typical REMOTE_ADDR seldom holds the correct IP if you've got any number of layers of proxies and load balancers and whatnot; you're not unique in this respect. Each layer potentially adds or changes remote-IP-related headers, and you cannot assume that each of those fields necessarily corresponds to a single IP address. Some will push or unshift an IP onto a list instead.
There is only one way to know for sure which field holds the correct value and how, and that is to dive in there and look. You've evidently done that already. Now, just overwrite env['REMOTE_ADDR'] with its correct value using your Rack middleware. There's little point in letting any piece of code you didn't write log or process the wrong IP address, as is happening now.
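A minimal sketch of that overwrite, reusing the value ActionDispatch::RemoteIp has already computed (the middleware name is mine; it sits after ActionDispatch::RemoteIp like the logger in the question, which corrects request.ip for everything downstream of it):

class RemoteAddrRewriter
  def initialize(app)
    @app = app
  end

  def call(env)
    # ActionDispatch::RemoteIp has already done the hard work; copy its
    # answer into REMOTE_ADDR so later middleware and the controller's
    # request.ip see the correct address
    remote_ip = env["action_dispatch.remote_ip"]
    env["REMOTE_ADDR"] = remote_ip.to_s if remote_ip
    @app.call(env)
  end
end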
(This being Ruby, you could also monkey patch Rack::Request, of course...)
For colorful reading that illustrates the degrees to which exotic setups can mess up attempts at finding a client's real IP address, see for instance the unending discussions that have occurred about this for WordPress:
https://core.trac.wordpress.org/ticket/9235
https://core.trac.wordpress.org/ticket/4198
https://core.trac.wordpress.org/ticket/4602
It's PHP, but the gist of the points raised applies equally well to Ruby. (Note that they're unresolved as I write this, too, and that they've been around for aeons.)
This seemed to do the trick for me (set in the nginx config):
proxy_set_header CLIENT_IP $remote_addr;
I ran into the same issue: a subset of our web clients access our Rails app (Rails 4.2.7) on our private network, and we got the wrong IP reported. So I thought I'd add what worked for us to resolve the problem.
I found Rails issue 5223 that provided a better workaround than double logging the IP like the question does. So, we monkey patch Rack to remove the private network from the list of trusted proxies like so:
module Rack
  class Request
    def trusted_proxy?(ip)
      ip =~ /^127\.0\.0\.1$/
    end
  end
end
That addresses the controller logging the wrong IP. The other half of the fix is to ensure that request.remote_ip is handled correctly. To do so, add the following to your config/environments/production.rb:
config.action_dispatch.trusted_proxies = [IPAddr.new('127.0.0.1')]
I was facing the same problem. To fix it I referred to your implementation; just adding the line below in config/application.rb fixed it:
config.middleware.insert_before Rails::Rack::Logger, 'RemoteIpLogger'
No need to write extra loggers; you will see the actual client IP in the first row itself:
Started GET "/" for 10.0.10.62 at 2013-11-22 08:01:17 +0000
And in app/middleware/remote_ip_logger.rb. My HTTP_X_FORWARDED_FOR contains a list of IPs, and the first one is the actual client's IP:
class RemoteIpLogger
  def initialize(app)
    @app = app
  end

  def call(env)
    if env["HTTP_X_FORWARDED_FOR"]
      remote_ip = env["HTTP_X_FORWARDED_FOR"].split(",")[0]
      env['REMOTE_ADDR'] = env["action_dispatch.remote_ip"] = env["HTTP_X_FORWARDED_FOR"] = remote_ip
      @app.call(env)
    else
      @app.call(env)
    end
  end
end
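One caveat with taking the first entry of X-Forwarded-For: clients can set that header themselves, so the first entry is only trustworthy when your proxy overwrites the header for external requests rather than appending to whatever arrived.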
Short and simple:
request.remote_ip
