net::ERR_HTTP2_PROTOCOL_ERROR 200 with http2 nginx proxy to java spring boot application - spring-boot

I run CapRover to enable HTTPS and domains on my Java Spring Boot embedded Tomcat server. The application uses HTTP/2 so that more concurrent SSE streams are possible.
Runs fine without proxy
Without the CapRover/nginx proxy the application runs fine (i.e. when I access ip:port on my server directly).
Error when nginx proxy enabled
When I add an nginx proxy via CapRover to enable HTTPS and a domain on the application, after a few minutes I receive this error in the browser console:
NS_ERROR_NET_PARTIAL_TRANSFER in FireFox.
net::ERR_HTTP2_PROTOCOL_ERROR 200 in Chrome.
This error occurs on the SSE stream endpoints. When the application loads I do receive SSE events on those endpoints, so they work initially; the error only appears after a certain period. Could it be a timeout of some kind?
This is the nginx config I have:
events {
}

http {
    server {
        listen 443;

        location / {
            proxy_pass https://MY_IP_ADDRESS_COMES_HERE:7372;
        }
    }
}

The proper directive is:
listen 443 ssl http2;
And you should add the paths to the SSL certificate and key with these directives:
ssl_certificate ...;
ssl_certificate_key ...;
Additionally, since you proxy to an HTTPS upstream, you need the proxy_ssl_server_name and proxy_ssl_verify directives; see the nginx documentation on the proxy_ssl_* directives.
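Putting those pieces together, a corrected config might look like the sketch below. The certificate paths are placeholders, and the SSE-related settings (proxy_buffering, proxy_read_timeout) are an assumption worth testing: nginx's default 60-second read timeout on an idle upstream would match the "error after a few minutes" symptom on a quiet SSE stream.

```nginx
events {
}

http {
    server {
        # Terminate TLS and speak HTTP/2 to the browser
        listen 443 ssl http2;

        ssl_certificate     /etc/nginx/ssl/example.crt;  # placeholder path
        ssl_certificate_key /etc/nginx/ssl/example.key;  # placeholder path

        location / {
            proxy_pass https://MY_IP_ADDRESS_COMES_HERE:7372;

            # Needed when the upstream itself serves TLS
            proxy_ssl_server_name on;
            proxy_ssl_verify      off;  # or "on" plus proxy_ssl_trusted_certificate

            # SSE: don't buffer the event stream, and allow idle
            # connections longer than the 60s default read timeout
            proxy_buffering    off;
            proxy_read_timeout 3600s;
        }
    }
}
```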

HAProxy redirect based on the selected server

I'm trying to set up HAProxy to change the destination URL depending on the server that gets picked by the load balancer. I have three web services, one deployed on each of AWS, GCP and Azure:
The AWS is located at address1.com/service
The AZR is located at address2.com/bucketName/service
The GCP is located at address3.com/api/service
But in HAProxy I can't put a slash (/) in the server address, so I can't force a request made to example.com/helloWorld to go, for example, to address3.com/api/helloWorld, which is what I need. I want HAProxy to pick one of the servers using the configured balance method, and then call the correct web service path depending on the server that was picked.
frontend example.com
    bind 0.0.0.0:80
    use_backend back_farm
    default_backend back_farm

backend back_farm
    mode http
    balance roundrobin
    option httpclose
    option forwardfor
    server awsback address1.com/
    server azrback address2.com/bucketName/
    server gcpback address3.com/api/
I tried creating an ACL to check the selected server, but it doesn't seem to recognize the %s argument mentioned in the docs, nor have I found a working replacement for it.
backend back_farm
    mode http
    balance roundrobin
    option httpclose
    option forwardfor
    server awsback address1.com
    server azrback address2.com
    server gcpback address3.com
    acl is_azr %s -i azrback
    acl is_gcp %s -i gcpback
    http-request redirect code 301 location http://%[hdr(host)]%[url,regsub(^/service,/api/service,)] if is_azr
    http-request redirect code 301 location http://%[hdr(host)]%[url,regsub(^/service,/bucketName/service,)] if is_gcp
What am I doing wrong in here, or what other way could I do this, to force the address to be changed depending on the destination server?
Any ideas or guidance?
Even though this isn't 100% an answer to the question, since I had to switch from HAProxy to NGINX to achieve what I intended, I wanted to leave the solution I found here for others who might find it useful.
I was able to achieve this using the following configuration:
upstream backend.example.com {
    server aws.backend.example.com:8091;
    server azr.backend.example.com:8092;
    server gcp.backend.example.com:8093;
}

server {
    listen 80;
    listen [::]:80;
    server_name backend.example.com;

    location / {
        proxy_pass http://backend.example.com;
    }
}

server {
    listen 8091;
    listen [::]:8091;
    server_name aws.backend.example.com;

    location / {
        proxy_pass http://address1.com:80/;
    }
}

server {
    listen 8092;
    listen [::]:8092;
    server_name azr.backend.example.com;

    location / {
        proxy_pass http://address2.com:80/bucketName/;
    }
}

server {
    listen 8093;
    listen [::]:8093;
    server_name gcp.backend.example.com;

    location / {
        proxy_pass http://address3.com:80/api/;
    }
}

Nginx TCP stream optimization

I am using Nginx to connect my PHP web server with my MySQL server (via the stream module): Nginx runs on both the web server and the MySQL server, and the two are connected via TCP over SSL.
I have noticed that the initial connection from my application to the MySQL server takes 4-6 ms for every request, and I am wondering whether I can do something to reuse connections or speed things up in general; both servers are on the same local network.
This is the configuration on my web server:
stream {
    upstream mysql {
        server 192.168.10.5:3999;
    }

    server {
        listen 127.0.0.1:998 so_keepalive=30s:10s:6;
        proxy_pass mysql;
        proxy_connect_timeout 1s;

        proxy_ssl on;
        proxy_ssl_certificate mysql.client.crt;
        proxy_ssl_certificate_key mysql.client.key;
        proxy_ssl_protocols TLSv1.2;
        proxy_ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
        proxy_ssl_trusted_certificate root.crt;
        proxy_ssl_verify on;
        proxy_ssl_verify_depth 2;
        proxy_ssl_session_reuse on;
    }
}
This is the configuration on my MySQL server:
stream {
    upstream mysql_local {
        server 127.0.0.1:3306;
    }

    server {
        listen 3999 ssl so_keepalive=60s:30s:20;
        proxy_pass mysql_local;
        proxy_connect_timeout 1s;

        # SSL configuration - use server certificate & key
        ssl_certificate mysql.server.crt;
        ssl_certificate_key mysql.server.key;
        ssl_protocols TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
        ssl_prefer_server_ciphers on;
        ssl_client_certificate root.crt;
        ssl_verify_client on;
        ssl_session_cache shared:MYSQL:100m;
        ssl_session_tickets off;
        ssl_session_timeout 600m;
        ssl_handshake_timeout 5s;
    }
}
I tried setting up TCP keepalive on both sides, but I am not sure whether something more is needed, or how to check whether connections are even being reused. There is little documentation online on this kind of setup, especially on optimizing it.
I suspect there is more I can do, because the same application also connects to Elasticsearch (also running on the MySQL server) via HTTP+SSL, with Nginx as an HTTP proxy. That connection uses the same certificates and also goes through Nginx as a go-between, yet it takes at most 1 ms to connect. I am using HTTP keepalive connections there.
Someone from Nginx confirmed that TCP connection pooling / some kind of keepalive method is currently not supported and also not planned for the future.
Instead, I added ProxySQL in front of Nginx, which has connection pooling built-in. It also has other benefits, like statistics on which queries are run the most, which take the longest time, etc., so it is actually a really good tool to find out where you can optimize an application and which queries might need additional care. You can even cache certain queries transparently, although I am not really interested in that. Beware though: the documentation of ProxySQL is a bit spotty, and the configuration system seems more confusing than beneficial to me, but once you have set it up it runs really well.
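For reference, a minimal ProxySQL setup along those lines might look like the sketch below. All addresses, ports and credentials are hypothetical; here ProxySQL's backend points at the local nginx stream listener from the config above (127.0.0.1:998), which then carries the traffic to MySQL over TLS, while the application connects to ProxySQL's pooled interface instead of MySQL directly.

```
# /etc/proxysql.cnf (sketch; all values are placeholders)
admin_variables =
{
    admin_credentials = "admin:admin"
    mysql_ifaces = "127.0.0.1:6032"
}

mysql_variables =
{
    # The application connects here instead of to MySQL directly
    interfaces = "127.0.0.1:6033"
}

mysql_servers =
(
    # Backend is the local nginx stream proxy, which adds TLS
    { address = "127.0.0.1", port = 998, hostgroup = 0 }
)

mysql_users =
(
    { username = "app_user", password = "app_password", default_hostgroup = 0 }
)
```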

CAS 5.2.0: How to configure CAS so that it listens on HTTP?

There is a load balancer between the user and CAS. The load balancer handles the SSL certificate, but from the load balancer to CAS the connection is plain HTTP.
How can I configure CAS so that it listens on HTTP?
I have tried this in my cas.properties, but it didn't solve my problem:
cas.server.httpProxy.enabled=true
cas.server.httpProxy.secure=false ## changed from True
cas.server.httpProxy.protocol=AJP/1.3
cas.server.httpProxy.scheme=http ## changed to http
cas.server.httpProxy.redirectPort=8080
cas.server.httpProxy.proxyPort=8080
cas.server.httpProxy.attributes.attributeName=attributeValue
I still get this warning:
"Non-secure Connection. You are currently accessing CAS over a non-secure connection. Single Sign On WILL NOT WORK. In order to have single sign on work, you MUST log in over HTTPS."
https://apereo.github.io/cas/5.2.x/installation/Configuration-Properties.html#http-proxying
Try adding
cas.server.http.port=8080
cas.server.http.protocol=org.apache.coyote.http11.Http11NioProtocol
cas.server.http.enabled=true

How to deal with mixed content on a website that should be served over HTTPS?

I am building a website on server A (with domain name registered), used for people to create and run their "apps".
These "apps" are actually docker containers running on server B; inside each container lives a small web app that can be accessed directly like:
http://IP_ADDR_OF_SERVER_B:PORT
The PORT is a random high port number that maps to the docker container.
Now I can make SSL certificate working on server A, so that it works fine by accessing:
https://DOMAIN_NAME_OF_SERVER_A
The problem is that I embed the "apps" in an iframe using the plain-http URL above, so my browser (Chrome) refuses to load it and reports this error:
Mixed Content: The page at 'https://DOMAIN_NAME_OF_SERVER_A/xxx' was loaded over HTTPS, but requested an insecure resource 'http://IP_ADDR_OF_SERVER_B:PORT/xxx'. This request has been blocked; the content must be served over HTTPS.
So, how should I deal with such issue?
I am a full stack green hand, I'd appreciate a lot if you can share some knowledge on how to build a healthy https website while solving such problem in a proper way.
Supplementary explanation
Ok I think I just threw out the outline of the question, here goes more details.
I see that the clean and straightforward fix is to serve the iframe requests over HTTPS as well; then this problem goes away.
However, the trouble is that since all the "apps" are dynamically created and removed, it seems I would need to prepare a certificate for each one of them.
Will a self-signed certificate work without being blocked or complained about by the browser? Or is there a way to serve all the "apps" with one SSL certificate?
Software environment
Server A: Running node.js website listening to port 5000 and served with Nginx proxy_pass.
server {
    listen 80;
    server_name DOMAIN_NAME_OF_SERVER_A;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:5000;
    }
}

server {
    listen 443;
    server_name DOMAIN_NAME_OF_SERVER_A;

    ssl on;
    ssl_certificate /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_A.cer;
    ssl_certificate_key /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_A.key;
    ssl_session_timeout 5m;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:5000;
    }
}
Server B: Running node.js apps listening on different random high ports such as 50055, assigned dynamically when "apps" are created. (In fact these apps run in docker containers, though I think that doesn't matter.) Can run Nginx if needed.
Server A and Server B talk with each other in public traffic.
Solution
Just as all the answers say, especially the one from @eawenden, I need a reverse proxy to achieve my goal.
In addition, I did a few more things:
1. Assign a domain name to Server B in order to use a Let's Encrypt certificate.
2. Proxy a predefined URL path to the specific port.
Therefore I set up a reverse proxy server using nginx on Server B, proxying all requests like:
https://DOMAIN_NAME_OF_SERVER_B/PORT/xxx
to
https://127.0.0.1:PORT/xxx
PS: the nginx reverse proxy config on Server B:
server {
    listen 443;
    server_name DOMAIN_NAME_OF_SERVER_B;

    ssl on;
    ssl_certificate /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_B.cer;
    ssl_certificate_key /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_B.key;
    ssl_session_timeout 5m;

    rewrite_log off;
    error_log /var/log/nginx/rewrite.error.log info;

    location ~ ^/(?<port>\d+)/ {
        rewrite ^/\d+?(/.*) $1 break;
        proxy_pass http://127.0.0.1:$port;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }
}
Thus everything seems to be working as expected!
Thanks again to all the answerers.
I had a mixed-content issue on dynamic requests. Adding
add_header 'Content-Security-Policy' 'upgrade-insecure-requests';
resolved my issue on an nginx server.
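For context, that directive goes inside the relevant server (or location) block; a minimal sketch, assuming an HTTPS server on a placeholder domain and backend:

```nginx
server {
    listen 443 ssl;
    server_name example.com;  # placeholder

    # Ask browsers to transparently upgrade http:// subresource
    # requests (images, iframes, XHR) on pages from this server
    add_header Content-Security-Policy "upgrade-insecure-requests" always;

    location / {
        proxy_pass http://127.0.0.1:5000;  # placeholder backend
    }
}
```

Note this only upgrades the URLs the browser requests; the backends still have to actually be reachable over HTTPS.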
The best way to do it would be to have a reverse proxy (Nginx supports them) that provides access to the docker containers:
A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server. A reverse proxy provides an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers.
- Source
Assign a domain name or just use the IP address of the reverse proxy and create a trusted certificate (Let's Encrypt provides free certificates). Then you can connect to the reverse proxy over HTTPS with a trusted certificate and it will handle connecting to the correct Docker container.
Here's an example of this type of setup geared specifically towards Docker: https://github.com/jwilder/nginx-proxy
The error message is pretty much telling you the solution.
This request has been blocked; the content must be served over HTTPS.
If the main page is loaded over HTTPS, then all the other page content, including the iframes, should also be loaded over HTTPS.
The reason is that insecure (non-HTTPS) traffic can be tampered with in transit, potentially being altered to include malicious code that alters the secure content. (Consider for example a login page with a script being injected that steals the userid and password.)
== Update to reflect the "supplemental information" ==
As I said, everything on the page needs to be loaded via HTTPS. Yes, self-signed certificates will work, but with some caveats: first, you'll have to tell the browser to allow them, and second, they're really only suitable for use in a development situation. (You do not want to get users in the habit of clicking through a security warning.)
The answer from #eawenden provides a solution for making all of the content appear to come from a single server, thus providing a way to use a single certificate. Be warned, reverse proxy is a somewhat advanced topic and may be more difficult to set up in a production environment.
An alternative, if you control the servers for all of the iframes, may be to use a wildcard SSL certificate. This would be issued for, say, *.mydomain.com, and would work for www.mydomain.com, subsite1.mydomain.com, subsite2.mydomain.com, etc.: everything under mydomain.com.
Like others have said, you should serve all the content over HTTPS.
You could use an HTTP proxy to do this. This means that server A will handle the HTTPS connection and forward the request to server B over HTTP. Server B will then send the response back to server A, which will update the response headers to make it look like the response came from server A itself, and forward the response to the user.
You would make each of your apps on server B available on a url on domain A, for instance https://www.domain-a.com/appOnB1 and https://www.domain-a.com/appOnB2. The proxy would then forward the requests to the right port on server B.
For Apache this would mean two extra lines in your configuration per app:
ProxyPass "/fooApp" "http://IP_ADDR_OF_SERVER_B:PORT"
ProxyPassReverse "/fooApp" "http://IP_ADDR_OF_SERVER_B:PORT"
The first line will make sure that Apache forwards this request to server B and the second line will make sure that Apache changes the address in the HTTP response headers to make it look like the response came from server A instead of server B.
As you have a requirement to make this proxy dynamic, it might make more sense to set this proxy up inside your NodeJS app on server A, because that app probably already has knowledge about the different apps that live on server B. I'm no NodeJS expert, but a quick search turned up https://github.com/nodejitsu/node-http-proxy which looks like it would do the trick and seems like a well maintained project.
The general idea remains the same though: You make the apps on server B accessible through server A using a proxy, using server A's HTTPS set-up. To the user it will look like all the apps on server B are hosted on domain A.
After you set this up you can use https://DOMAIN_NAME_OF_SERVER_A/fooApp as the url for your iFrame to load the apps over HTTPS.
Warning: You should only do this if you can route this traffic internally (server A and B can reach each other on the same network), otherwise traffic could be intercepted on its way from server A to server B.

Load balance WebSocket connections to Tornado app using HAProxy?

I am working on a Tornado app that uses websocket handlers. I'm running multiple instances of the app using Supervisord, but I have trouble load balancing websocket connections.
I know nginx does not support dealing with websockets out of the box, but I followed the instructions at http://www.letseehere.com/reverse-proxy-web-sockets to use the nginx tcp_proxy module to reverse-proxy websocket connections. However, this did not work, since that module can't route websocket URLs (e.g. ws://localhost:80/something), so it would not work with the URL routes defined in my Tornado app.
From my research around the web, it seems that HAProxy is the way to go to load balance my websocket connections. However, I'm having trouble finding any decent guidance to setup HAProxy to load balance websocket connections and also be able to handle websocket URL routes.
I would really appreciate some detailed directions on how to get this going. I am also fully open to other solutions as well.
It's not difficult to support WebSocket in HAProxy, though I admit it's not yet easy to find documentation on this (hopefully this response will serve as one example). If you're using HAProxy 1.4 (which I suppose you are), then it works just like any other HTTP request without you having to do anything, as the HTTP Upgrade handshake is recognized by HAProxy.
If you want to direct the WebSocket traffic to a different farm than the rest of the HTTP traffic, you should use content switching rules; in short:
frontend pub-srv
    bind :80
    use_backend websocket if { hdr(Upgrade) -i WebSocket }
    default_backend http

backend websocket
    timeout server 600s
    server node1 1.1.1.1:8080 check
    server node2 2.2.2.2:8080 check

backend http
    timeout server 30s
    server www1 1.1.1.1:80 check
    server www2 2.2.2.2:80 check
If you're using 1.5-dev, you can even specify "timeout tunnel" to have a larger timeout for WS connections than for normal HTTP connections, which saves you from using overly long timeouts on the client side.
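On 1.5-dev, that might look like the following sketch (the 1h value is an arbitrary example, not a recommendation):

```
backend websocket
    timeout tunnel 1h     # applies once the connection is upgraded
    timeout server 30s    # still governs the initial HTTP exchange
    server node1 1.1.1.1:8080 check
    server node2 2.2.2.2:8080 check
```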
You can also combine Upgrade: WebSocket with a specific URL:
frontend pub-srv
    bind :80
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_ws_url path /something1 /something2 /something3
    use_backend websocket if is_websocket is_ws_url
    default_backend http
Last, please don't use the stupid 24h idle timeouts we sometimes see; it makes absolutely no sense to wait 24 hours for a client with an established session nowadays. The web is much more mobile than in the 80s and connections are very ephemeral. You'd end up with many FIN_WAIT sockets for nothing. Ten minutes is already quite long for the current internet.
Hoping this helps!
WebSockets do not traverse proxies very well, since after the handshake they no longer follow normal HTTP behavior.
Try using the secure WebSocket protocol (wss://). This uses the proxy CONNECT API, which will hide the WebSocket protocol.
I used https://launchpad.net/txloadbalancer to do loadbalancing with Tornado websocket handlers. It's simple and worked well (I think).
http block for nginx (nginx v1.3+ only):
upstream chatservice {
    # multiple tornado processes
    server 127.0.0.1:6661;
    server 127.0.0.1:6662;
    server 127.0.0.1:6663;
    server 127.0.0.1:6664;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
virtual host
server {
    listen 80;
    server_name chat.domain.com;
    root /home/duchat/www;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ @backend;
    }

    location @backend {
        proxy_pass_header Server;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_pass http://chatservice;
        internal;
    }
}