I have a Ruby web application built with Sinatra, Rack and Puma. I'm using Sinatra to implement the controllers (MVC pattern), each handling a different route, and each controller class extends Sinatra::Base. I'd like to enable TLS so that all connections to the server are served over HTTPS.
My Rack config.ru looks like:
require 'sinatra/base'   # modular style, since the controllers extend Sinatra::Base
require 'rack'

# Start my database ...

run Rack::URLMap.new(
  '/api/foo' => FooController.new,
  '/api/bar' => BarController.new
)
Puma is picked up automatically by Rack.
How do I enable HTTPS? To start, I'm happy to use a self-signed certificate, but how can I configure the server with a valid cert later? None of this seems well-documented, which I find quite frustrating. Am I overlooking an option I can just set at the top level of my Rack config file, something like set :ssl => true maybe?
Similar yet fruitless SO posts:
How to make Sinatra work over HTTPS/SSL?
How to enable SSL for a standalone Sinatra app?
Since you mentioned that you use Puma, you can find this in their docs:
Need a bit of security? Use SSL sockets!
$ puma -b 'ssl://127.0.0.1:9292?key=path_to_key&cert=path_to_cert'
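If you prefer a config file over the command line, the same binding can be expressed with Puma's ssl_bind DSL in config/puma.rb. A minimal sketch, where the file paths are placeholders you must point at your own files:

```ruby
# config/puma.rb
# Placeholder paths -- substitute your actual key and certificate.
ssl_bind '127.0.0.1', '9292',
  key:  'path/to/server.key',
  cert: 'path/to/server.crt'
```

For local testing, a self-signed pair can be generated with openssl req -x509 -newkey rsa:2048 -nodes -keyout server.key -out server.crt -days 365, then swapped for a CA-issued certificate in production.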
In production deployments, a dedicated load balancer or reverse proxy (e.g. nginx, HAProxy, AWS ELB) is usually responsible for SSL termination, forwarding plain HTTP traffic to the application servers over the internal network. These heavy-duty front ends are generally faster, more stable, and better audited than doing TLS in the app server itself.
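As a minimal sketch of that setup, assuming Puma listens on 127.0.0.1:9292 and the server name and certificate paths are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name example.com;                          # placeholder

    ssl_certificate     /etc/ssl/certs/server.crt;    # placeholder paths
    ssl_certificate_key /etc/ssl/private/server.key;

    location / {
        proxy_pass http://127.0.0.1:9292;             # plain HTTP to Puma
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The X-Forwarded-Proto header lets the Rack app know the original request came in over HTTPS even though the internal hop is plain HTTP.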
Related
According to the definitions I've read, Puma is a kind of web server and Rack is an interface between a web server and an application server.
But lots of videos say Rack is an interface between a web framework and a web server. Can I interpret it as: we use a web framework to build our application, and Rack is the interface between that framework and the web server?
Another question: if Puma is a kind of web server, can I use Apache or Nginx to replace it?
Puma is an application server, more specifically a Rack app server. (There are others besides Puma: Unicorn, Passenger, etc. There are also application servers for other interfaces; for example, Tomcat and JBoss are Java application servers.) An application server accepts an HTTP request, parses it into a structure in the application's language, hands it off to the application, and awaits a response object, which it then returns to the client.
Nginx and Apache are general-purpose web servers. Apache does not know how to serve Rack applications, and Puma doesn't know how to do a bunch of the other things Nginx/Apache do (CGI scripts, URL rewriting, proxying, load balancing, blacklisting...).
Rack is a Ruby library that accepts parsed HTTP requests from an app server and funnels them through a configurable stack of middleware (session handling, for example) before passing the request object to your handler, then returns the handler's response object to the app server. This makes web development in Ruby easy. You can execute a Rack app directly (or rather, with the very simple server that ships with Rack), but that is not recommended outside development, which is where "proper" application servers come in: they know how to keep your app alive, restart it if it dies, keep a predetermined number of threads or workers running, things like that.
Thus, typically, you have your web server accept connections and, using a simple reverse proxy, pass the appropriate requests to your Rack application, which is running inside the Rack app server. This gives you the benefits of all the pieces involved.
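To make the interface concrete, here is a minimal sketch of what "a Rack app" means: any object that responds to call(env) and returns a status/headers/body triple. The greeting text is just an illustration:

```ruby
# A Rack application is any object responding to call(env).
# env is a Hash describing the parsed HTTP request; the return value
# is a three-element Array: [status, headers, body].
app = lambda do |env|
  [200, { 'content-type' => 'text/plain' }, ["Hello from #{env['PATH_INFO']}"]]
end

# The app server (Puma, Unicorn, ...) normally builds env and calls this;
# here we invoke it by hand with a minimal env Hash.
status, headers, body = app.call('PATH_INFO' => '/demo')
```

Middleware works the same way: each layer is itself a call(env) object wrapping the next one, which is why the stack is so easy to configure.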
We are using the 'redis' Ruby gem to connect to Dynomite from our Ruby application. If the Redis on a node is unavailable or gets killed, requests are not forwarded to nodes or replicas in other racks.
Is there any configuration we have to set to forward requests to other nodes when the Redis on that machine is not available?
Or is this feature simply not available in Dynomite?
Do I have to use some other gem instead of 'redis' to connect to Dynomite?
Please help.
I'm just researching this myself; I've discovered Dynomite doesn't itself provide failover. However, Netflix's own Dyno Java client does provide this functionality:
Any client that can talk to Memcached or Redis can talk to Dynomite, no change needed. However, there will be a few things missing, including failover strategy, request throttling, connection pooling, etc., unless our Dyno client is used (more details in the Client Architecture section).
Source: http://techblog.netflix.com/2014/11/introducing-dynomite.html
I want to use the xmpp4r gem to send notifications to gtalk from my Rails app. However, I am behind an HTTP proxy and hence cannot use regular Jabber. Also, xmpp4r supports HTTPBind, but it seems gtalk does not. So is there a way to use HTTPBind with gtalk?
Use proxifier http://rubygems.org/gems/proxifier.
If your proxy is already configured in the environment (e.g. via the http_proxy environment variable), you can simply add the following two lines to your code.
require "proxifier"
require "proxifier/env"
This patches TCPSocket so that connections are made through the proxy, which means xmpp4r's regular connection path works unchanged.
I currently have several apps running behind an Apache reverse proxy. I do this because I have one public IP address for multiple servers. I use VirtualHosts to route requests for each hostname to the right app. For example:
<VirtualHost *:80>
ServerAdmin webmaster@localhost
ServerName nagios.myoffice.com
ProxyPass / http://nagios.myoffice.com/
ProxyPassReverse / http://nagios.myoffice.com/
</VirtualHost>
This works fine for apps like PHP, Django and Rails, but I'd like to start experimenting with Node.js.
I've already noticed that apps behind the Apache proxy can't handle as high a load as when I access them directly, very likely because my Apache configuration is not ideal (perhaps too few simultaneous connections).
One of the coolest features I'd like to experiment with in node.js is the socket.io capabilities which I'm afraid will really expose the performance problem. Especially because, as I understand it, socket.io will keep one of my precious few Apache connections open constantly.
Can you suggest a reverse proxy server I can use in this situation that will let me use multiple virtualhosts and will not stifle the node.js apps performance too much or get in the way of socket.io experimentation?
I recommend node-http-proxy. Very active community and proven in production.
FEATURES
Reverse proxies incoming http.ServerRequest streams
Can be used as a CommonJS module in node.js
Uses event buffering to support application latency in proxied requests
Reverse or Forward Proxy based on simple JSON-based configuration
Supports WebSockets
Supports HTTPS
Minimal request overhead and latency
Full suite of functional tests
Battle-hardened through production usage at nodejitsu.com
Written entirely in Javascript
Easy to use API
Install using the following command
npm install http-proxy
Here is the Github page and the NPM page
Although this introduces a new technology, I'd recommend using nginx as a front end. nginx is a fast, hardened server written in C that is quite good at reverse proxying. Like node, it is event-driven and asynchronous.
You can use nginx to forward requests to various nodejs servers you are running, either load-balancing, or depending on the url (since it can do things like rewrites).
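As a sketch (the server name and port are placeholders), a virtual host that forwards to a local node process, including the Upgrade headers that WebSocket/socket.io connections need; note that WebSocket proxying requires nginx 1.3.13 or later:

```nginx
server {
    listen 80;
    server_name chat.myoffice.com;         # placeholder

    location / {
        proxy_pass http://127.0.0.1:3000;  # your node.js app
        proxy_http_version 1.1;
        # Pass the WebSocket upgrade handshake through to node
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

Adding more server blocks with different server_name values gives you the same multi-virtualhost routing you have in Apache today.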
I'm writing a simple chat application. The only "front-end" required is a single html file, a javascript file, and a few stylesheets. The majority of the application is the server-side EventMachine WebSocket server.
I'm also trying to host this on Heroku.
I currently have a Sinatra app that just serves the static files, and a separate app that serves the WebSocket server (on a different port).
Is there a way I can combine these so that I can start up one application which serves/responds to port 80 (for the static files) and another port for the WebSocket server?
It's probably not a good idea to have your WebSocket server run on a different port. WebSockets run on port 80 specifically because that port is not blocked on most networks. If you use a different port, you will find users behind some firewalls unable to use your application.
Running your event server separate from your web server is probably the best way to go.
If you want something a bit more experimental, Goliath has WebSocket support in the master branch and can also serve the needed static resources. If you look in the examples directory, there is a WebSocket server that also serves its own HTML page.