Nginx TCP stream optimization - performance

I am using Nginx to connect my PHP web server to my MySQL server via the stream module: Nginx runs on both the web server and the MySQL server, and the two are connected via TCP over TLS.
I have noticed that the initial connection from my application to the MySQL server takes 4-6 ms on every request, and I am wondering whether I can do something to reuse connections or speed things up in general; both servers are on the same local network.
This is the configuration on my web server:
stream {
    upstream mysql {
        server 192.168.10.5:3999;
    }

    server {
        listen 127.0.0.1:998 so_keepalive=30s:10s:6;
        proxy_pass mysql;
        proxy_connect_timeout 1s;

        proxy_ssl on;
        proxy_ssl_certificate mysql.client.crt;
        proxy_ssl_certificate_key mysql.client.key;
        proxy_ssl_protocols TLSv1.2;
        proxy_ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
        proxy_ssl_trusted_certificate root.crt;
        proxy_ssl_verify on;
        proxy_ssl_verify_depth 2;
        proxy_ssl_session_reuse on;
    }
}
This is the configuration on my MySQL server:
stream {
    upstream mysql_local {
        server 127.0.0.1:3306;
    }

    server {
        listen 3999 ssl so_keepalive=60s:30s:20;
        proxy_pass mysql_local;
        proxy_connect_timeout 1s;

        # SSL configuration - use server certificate & key
        ssl_certificate mysql.server.crt;
        ssl_certificate_key mysql.server.key;
        ssl_protocols TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
        ssl_prefer_server_ciphers on;
        ssl_client_certificate root.crt;
        ssl_verify_client on;
        ssl_session_cache shared:MYSQL:100m;
        ssl_session_tickets off;
        ssl_session_timeout 600m;
        ssl_handshake_timeout 5s;
    }
}
I tried setting up TCP keepalive on both sides, but I am not sure whether more is needed, or how to check whether connections are even reused. There is little documentation online about this kind of setup, let alone about optimizing it.
I suspect there is more I can do, because the same application also connects to Elasticsearch (which also runs on the MySQL server) via HTTP+SSL, with Nginx acting as an HTTP proxy. There the connection time is much lower even though it is HTTP, uses the same certificates, and also goes through Nginx as a go-between; it takes at most 1 ms to connect. I am using HTTP keepalive connections there.
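For reference, this is roughly what the keepalive setup on the HTTP side looks like; the upstream address, port, and connection count below are illustrative, and the TLS directives are omitted for brevity:

http {
    upstream elasticsearch {
        server 192.168.10.5:9200;
        # keep up to 16 idle connections to the upstream open for reuse
        keepalive 16;
    }

    server {
        location / {
            # upstream keepalive requires HTTP/1.1 and an empty Connection header
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass http://elasticsearch;
        }
    }
}

This connection reuse is presumably what makes the HTTP path fast; the stream module has no equivalent of the upstream keepalive directive.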

Someone from Nginx confirmed that TCP connection pooling / some kind of keepalive mechanism for the stream module is currently not supported and also not planned for the future.
Instead, I added ProxySQL in front of Nginx, which has connection pooling built in. It also has other benefits, such as statistics on which queries run most often and which take the longest, so it is actually a really good tool for finding out where an application can be optimized and which queries need additional care. You can even cache certain queries transparently, although I am not really interested in that. Be warned, though: ProxySQL's documentation is a bit spotty, and its configuration system seems more confusing than beneficial to me, but once you have set it up it runs really well.

Related

Nginx while serving lots of static files Aborted

Since yesterday, while loading a lot of tiny/regular images on my Nginx server, I have been getting very slow load times for certain images (in random order).
I've set the sendfile_max_chunk 128k; directive to try to mitigate the issue, but still no success.
The server loads at lightning speed, but some static files get Aborted and end up loading after 30s or more.
[screenshot of the issue]
TL;DR: You need to cache frequently requested files, allow multiple files to be requested within one HTTP session (keepalive), and use caching for SSL sessions to avoid delays from the SSL handshake.
Use open_file_cache
As you actively use SSL, please consider using ssl_session_cache, keepalive (keepalive_timeout, keepalive_requests), and the http2 module if available.
Read the official nginx tuning article.
open_file_cache example:
open_file_cache max=2048 inactive=12h;
open_file_cache_valid 12h;
open_file_cache_min_uses 2;
open_file_cache_errors off;
ssl_session_cache example:
ssl_session_cache shared:SSL:32m;
ssl_session_timeout 4h;
ssl_buffer_size 1400;
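keepalive example (values are illustrative; tune them to your traffic):
keepalive_timeout 65s;
keepalive_requests 1000;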
http2 example for the "default" server block (for more info, read the DO article):
server {
    listen 443 default_server ssl http2 deferred reuseport;
    listen [::]:443 default_server ssl http2 deferred reuseport ipv6only=on;
    server_name _;
}

How to deal with mixed content in a website which should be secured as https?

I am building a website on server A (with domain name registered), used for people to create and run their "apps".
These "apps" are actually docker containers running on server B, in the container, there lives a small web app which can be accessed directly like:
http://IP_ADDR_OF_SERVER_B:PORT
The PORT is a random big number one which maps to the docker container.
Now I have an SSL certificate working on server A, so it works fine when accessing:
https://DOMAIN_NAME_OF_SERVER_A
The problem is that I embed the "apps" in iframes loaded over plain "http" as above, so my browser (Chrome) refuses to open them and reports this error:
Mixed Content: The page at 'https://DOMAIN_NAME_OF_SERVER_A/xxx' was loaded over HTTPS, but requested an insecure resource 'http://IP_ADDR_OF_SERVER_B:PORT/xxx'. This request has been blocked; the content must be served over HTTPS.
So, how should I deal with such issue?
I am a full-stack beginner; I'd appreciate it a lot if you could share some knowledge on how to build a healthy https website while solving such a problem in a proper way.
Supplementary explanation
OK, I think I only gave the outline of the question; here are more details.
I see that it is straightforward to have the iframe requests served over https; then the browser won't complain anymore.
However, the trouble is that since all the "apps" are created/removed dynamically, it seems I would need to prepare a certificate for each one of them.
Will a self-signed certificate work without being blocked or complained about by the browser? Or is there a way to serve all the "apps" with one SSL certificate?
Software environment
Server A: runs a node.js website listening on port 5000, served with Nginx proxy_pass.
server {
    listen 80;
    server_name DOMAIN_NAME_OF_SERVER_A;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:5000;
    }
}

server {
    listen 443;
    server_name DOMAIN_NAME_OF_SERVER_A;

    ssl on;
    ssl_certificate /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_A.cer;
    ssl_certificate_key /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_A.key;
    ssl_session_timeout 5m;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:5000;
    }
}
Server B: runs node.js apps listening on different random high port numbers such as 50055, assigned dynamically when "apps" are created. (In fact these apps run in docker containers, though I think that doesn't matter.) It can run Nginx if needed.
Server A and Server B talk to each other over public traffic.
Solution
As all the answers say, especially the one from @eawenden, I need a reverse proxy to achieve my goal.
In addition, I did a few more things:
1. Assign a domain name to Server B for using a letsencrypt cert.
2. Proxy predefined url to specific port.
Therefore I set up a reverse proxy server using nginx on Server B, proxying all requests like:
https://DOMAIN_NAME_OF_SERVER_B/PORT/xxx
to
http://127.0.0.1:PORT/xxx
P.S.: the nginx reverse proxy config on Server B:
server {
    listen 443;
    server_name DOMAIN_NAME_OF_SERVER_B;

    ssl on;
    ssl_certificate /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_B.cer;
    ssl_certificate_key /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_B.key;
    ssl_session_timeout 5m;

    rewrite_log off;
    error_log /var/log/nginx/rewrite.error.log info;

    location ~ ^/(?<port>\d+)/ {
        rewrite ^/\d+?(/.*) $1 break;
        proxy_pass http://127.0.0.1:$port;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }
}
Thus everything seems to be working as expected!
Thanks again to all the answerers.
I had a mixed content issue on dynamic requests.
add_header 'Content-Security-Policy' 'upgrade-insecure-requests';
This resolved my issue on the nginx server.
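For context, the header goes in the relevant server (or location) block. A minimal sketch, where the server name, certificate paths, and backend port are placeholders:

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /etc/nginx/ssl/example.com.cer;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # ask browsers to upgrade insecure subresource requests to https
    add_header Content-Security-Policy "upgrade-insecure-requests" always;

    location / {
        proxy_pass http://127.0.0.1:5000;
    }
}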
The best way to do it would be to have a reverse proxy (Nginx supports them) that provides access to the docker containers:
A reverse proxy server is a type of proxy server that typically sits
behind the firewall in a private network and directs client requests
to the appropriate backend server. A reverse proxy provides an
additional level of abstraction and control to ensure the smooth flow
of network traffic between clients and servers.
- Source
Assign a domain name or just use the IP address of the reverse proxy and create a trusted certificate (Let's Encrypt provides free certificates). Then you can connect to the reverse proxy over HTTPS with a trusted certificate and it will handle connecting to the correct Docker container.
Here's an example of this type of setup geared specifically towards Docker: https://github.com/jwilder/nginx-proxy
The error message is pretty much telling you the solution.
This request has been blocked; the content must be served over HTTPS.
If the main page is loaded over HTTPS, then all the other page content, including the iframes, should also be loaded over HTTPS.
The reason is that insecure (non-HTTPS) traffic can be tampered with in transit, potentially being altered to include malicious code that alters the secure content. (Consider for example a login page with a script being injected that steals the userid and password.)
== Update to reflect the "supplemental information" ==
As I said, everything on the page needs to be loaded via HTTPS. Yes, self-signed certificates will work, but with some caveats: first, you'll have to tell the browser to allow them, and second, they're really only suitable for use in a development situation. (You do not want to get users in the habit of clicking through a security warning.)
The answer from @eawenden provides a solution for making all of the content appear to come from a single server, thus providing a way to use a single certificate. Be warned: a reverse proxy is a somewhat advanced topic and may be more difficult to set up in a production environment.
An alternative, if you control the servers for all of the iframes, may be to use a wildcard SSL certificate. This would be issued for example for *.mydomain.com, and would work for www.mydomain.com, subsite1.mydomain.com, subsite2.mydomain.com, etc., that is, for everything under mydomain.com.
Like others have said, you should serve all the content over HTTPS.
You could use an HTTP proxy to do this. This means that server A will handle the HTTPS connection and forward the request to server B over HTTP. Server B will then send the response back to server A, which will update the response headers to make it look like the response came from server A itself and forward the response to the user.
You would make each of your apps on server B available on a url on domain A, for instance https://www.domain-a.com/appOnB1 and https://www.domain-a.com/appOnB2. The proxy would then forward the requests to the right port on server B.
For Apache this would mean two extra lines in your configuration per app:
ProxyPass "/fooApp" "http://IP_ADDR_OF_SERVER_B:PORT"
ProxyPassReverse "/fooApp" "http://IP_ADDR_OF_SERVER_B:PORT"
The first line will make sure that Apache forwards this request to server B and the second line will make sure that Apache changes the address in the HTTP response headers to make it look like the response came from server A instead of server B.
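Since the rest of this question is nginx-based, a rough nginx equivalent of those two Apache lines might look like this (the /fooApp path and the IP_ADDR_OF_SERVER_B:PORT placeholder mirror the Apache example):

location /fooApp/ {
    # forward /fooApp/* to server B, stripping the /fooApp prefix
    proxy_pass http://IP_ADDR_OF_SERVER_B:PORT/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}

nginx's default proxy_redirect behavior rewrites Location response headers from the backend, which covers what ProxyPassReverse does for redirects.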
As you have a requirement to make this proxy dynamic, it might make more sense to set this proxy up inside your NodeJS app on server A, because that app probably already has knowledge about the different apps that live on server B. I'm no NodeJS expert, but a quick search turned up https://github.com/nodejitsu/node-http-proxy which looks like it would do the trick and seems like a well maintained project.
The general idea remains the same though: You make the apps on server B accessible through server A using a proxy, using server A's HTTPS set-up. To the user it will look like all the apps on server B are hosted on domain A.
After you set this up you can use https://DOMAIN_NAME_OF_SERVER_A/fooApp as the url for your iFrame to load the apps over HTTPS.
Warning: You should only do this if you can route this traffic internally (server A and B can reach each other on the same network), otherwise traffic could be intercepted on its way from server A to server B.
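If the hop from server A to server B must cross a public network after all, one option is to have nginx on server A proxy to B over HTTPS and verify B's certificate. A sketch, where the CA bundle path is an assumption about your distribution:

location /fooApp/ {
    proxy_pass https://DOMAIN_NAME_OF_SERVER_B/;
    # verify server B's certificate so this hop cannot be silently intercepted
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
    # send SNI so server B can select the right certificate
    proxy_ssl_server_name on;
}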

How to log errors from 2 different Lans to one sentry server

Sentry needs a value for the location where it is installed: SENTRY_URL_PREFIX. The problem is that I want to log errors to one installation via two different LANs.
Let's say the server running sentry has the IPs 192.168.1.1 and 10.0.0.1, and I want to log errors from 192.168.1.2 and from 10.0.0.2.
The connection between the sentry server (machine) and the machines that need to do the logging is fine, but I need to 'switch' the url-prefix setting in sentry for it to work with one or the other: if I set SENTRY_URL_PREFIX to http://10.0.0.1, it works and receives logs from that LAN, but all requests from the other LAN fail (a direct http request for the frontend gets an HTTP 400 response, for instance), and of course the other way around.
Details:
I'm running sentry 8.1.2 in docker (https://hub.docker.com/_/sentry/)
Interestingly enough, I read this in the changelog
SENTRY_URL_PREFIX has been deprecated, and moved to system.url-prefix inside of config.yml or it can be configured at runtime.
Starting sentry for the first time actually still seems to ask for the prefix, and changing the prefix does affect the connections, so to me it looks like this is the culprit. It could be that behind the scenes this is passed to the above-mentioned system.url-prefix, so that this setting is the problem, but I'm not sure.
Does anyone know how to run one server on two addresses?
The main issue is of course sending the errors, it's not a big deal to have the web-interface only visible from one ip.
While I'm not really sure how it is supposed to work, I could not get any logs to a server with a different system.url-prefix than the one I used in the call.
From twitter and the sentry group I gather that it does need the correct host for the interface, but that it shouldn't really break things otherwise. Not sure how to unbreak it, though.
The solution for me was just to address the sentry install from a single point. Because we need the separate NICs, we do this by putting a simple nginx reverse proxy in front of the setup and letting it set the host header. I used a default https://hub.docker.com/_/nginx/ image, and this config:
events {
    worker_connections 1024; ## Default: 1024
}

http {
    server {
        listen 80;

        location / {
            proxy_pass http://$internalip-sentry-knows;
            proxy_redirect http://$internalip-sentry-knows $externalip-we-use;
            proxy_set_header Host $internalip-sentry-knows;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
The port is exposed on both NICs, so this listens on both addresses we want to use and proxies requests nice and easy to sentry.
Disclaimer: the interface speaks of SENTRY_URL_PREFIX, but that's deprecated, so I use system.url-prefix; for all practical purposes in my answer they are interchangeable. This could be a source of confusion for someone who knows exactly what goes where :)
References:
* twitter conversation with Matt from sentry
* Groups response from David from sentry

SSL client certificate auth with ruby (sinatra)

How can I authorize an API in sinatra, so that only callers possessing a known client certificate (or one issued by a trusted CA) are allowed to call it?
Currently I am using the 'thin' webserver, but I am open to other options if that is necessary.
You can use nginx to take care of the client certificate - here is a blog post which shows how to set it up:
server {
    listen 443;
    ssl on;
    server_name example.com;

    ssl_certificate /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    ssl_client_certificate /etc/nginx/certs/ca.crt;
    ssl_verify_client optional;

    location / {
        root /var/www/example.com/html;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME /var/www/example.com/lib/Request.class.php;
        fastcgi_param VERIFIED $ssl_client_verify;
        fastcgi_param DN $ssl_client_s_dn;
        include fastcgi_params;
    }
}
We specify the server's certificate (server.crt) and private key (server.key). We specify the CA cert that we used to sign our client certificates (ca.crt). We set ssl_verify_client to optional, which tells nginx to attempt to verify the SSL certificate if one is provided. My API allows both authenticated and unauthenticated requests; however, if you only want to allow authenticated requests, you can go ahead and set this value to on.
You can use thin with nginx, but I believe that using passenger with nginx is more popular in this case, and quite easy to deploy.
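If you run the Sinatra app as a plain HTTP backend rather than over FastCGI, the same idea works with proxied headers. A sketch, assuming Sinatra's default port 4567 and hypothetical header names:

location / {
    # forward the client-certificate verification result and subject DN to the app
    proxy_set_header X-SSL-Client-Verify $ssl_client_verify;
    proxy_set_header X-SSL-Client-DN $ssl_client_s_dn;
    proxy_pass http://127.0.0.1:4567;
}

The Sinatra app can then reject any request whose X-SSL-Client-Verify header is not SUCCESS.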
ssl_verify_client optional is explained here:
ssl_verify_client
Syntax: ssl_verify_client on | off | optional | optional_no_ca
Default: off
Context: http | server
Reference: ssl_verify_client
This directive enables verification of the client identity. The 'optional' parameter checks the client identity using its certificate, if one was made available to the server.
Since you're using Thin I don't think this is possible at the moment because peer verification appears to be broken. See https://github.com/macournoyer/thin/pull/203:
"The get_peer_cert method of EM doesn't return anything unless the cert has been verified. The --ssl-verify option of thin actually doesn't do anything. These two behaviors combined mean that env['rack.peer_cert'], which was introduced in thin 1.2.8, always returns nil. Since --ssl-verify never actually caused a verification to happen, it is better to remove that option until a fully verification process is put in place. However, the peer_cert can be made available in --ssl mode by always "verifying" the cert, thereby providing the client supplied certificate, if there is one, available in env['rack.peer_cert']."
I believe Uri Agassi is partially correct in recommending passenger, but I worry that an nginx/thin combination introduces a security risk if you're expecting the cert to act as the authentication: if thin is ever moved off the nginx server, your thin server is exposed. I think an embedded app server solution is the way to go (à la passenger).

Load balance WebSocket connections to Tornado app using HAProxy?

I am working on a Tornado app that uses websocket handlers. I'm running multiple instances of the app using Supervisord, but I have trouble load balancing websocket connections.
I know nginx does not support websockets out of the box, but I followed the instructions at http://www.letseehere.com/reverse-proxy-web-sockets to use the nginx tcp_proxy module to reverse proxy websocket connections. However, this did not work, since the module can't route websocket URLs (e.g. ws://localhost:80/something), so it would not work with the URL routes I have defined in my Tornado app.
From my research around the web, it seems that HAProxy is the way to go to load balance my websocket connections. However, I'm having trouble finding any decent guidance to setup HAProxy to load balance websocket connections and also be able to handle websocket URL routes.
I would really appreciate some detailed directions on how to get this going. I am also fully open to other solutions as well.
It's not difficult to implement WebSocket support in haproxy, though I admit it's not yet easy to find documentation on it (hopefully this response will become one example). If you're using haproxy 1.4 (which I suppose you are), then it works just like any other HTTP request without your having to do anything, as the HTTP Upgrade is recognized by haproxy.
If you want to direct the WebSocket traffic to a different farm than the rest of HTTP, then you should use content switching rules, in short :
frontend pub-srv
    bind :80
    use_backend websocket if { hdr(Upgrade) -i WebSocket }
    default_backend http

backend websocket
    timeout server 600s
    server node1 1.1.1.1:8080 check
    server node2 2.2.2.2:8080 check

backend http
    timeout server 30s
    server www1 1.1.1.1:80 check
    server www2 2.2.2.2:80 check
If you're using 1.5-dev, you can even specify "timeout tunnel" to have a larger timeout for WS connections than for normal HTTP connections, which saves you from using overly long timeouts on the client side.
You can also combine Upgrade: WebSocket + a specific URL :
frontend pub-srv
    bind :80
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_ws_url path /something1 /something2 /something3
    use_backend websocket if is_websocket is_ws_url
    default_backend http
Last, please don't use the stupid 24h idle timeouts we sometimes see; it makes absolutely no sense to keep waiting 24 hours for a client with an established session. The web is much more mobile than in the 80s and connections are very ephemeral, so you'd end up with many FIN_WAIT sockets for nothing. Ten minutes is already quite long for the current internet.
Hoping this helps!
WebSockets do not traverse proxies too well, since after the handshake they no longer follow normal HTTP behavior.
Try using the secured WebSocket protocol (wss://). This will use the proxy CONNECT API, which hides the WebSocket protocol.
I used https://launchpad.net/txloadbalancer to do loadbalancing with Tornado websocket handlers. It's simple and worked well (I think).
The http block (nginx v1.3+ only):
upstream chatservice {
    # multiple tornado processes
    server 127.0.0.1:6661;
    server 127.0.0.1:6662;
    server 127.0.0.1:6663;
    server 127.0.0.1:6664;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
virtual host
server {
    listen 80;
    server_name chat.domain.com;
    root /home/duchat/www;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ @backend;
    }

    location @backend {
        proxy_pass_header Server;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_pass http://chatservice;
        internal;
    }
}
