Mixed Content problem for Plone behind Traefik reverse proxy - https

I just tried installing the Docker-based Plone, and it runs behind Traefik as a reverse proxy, but when I access it from a web browser, I get an error on the console like this:
Mixed Content: The page at 'https://new.mydomain.id/' was loaded over HTTPS, but requested an insecure stylesheet 'http://fonts.googleapis.com/css?family=Roboto:400,300,700'. This request has been blocked; the content must be served over HTTPS.
Mixed Content: The page at 'https://new.mydomain.id/' was loaded over HTTPS, but requested an insecure stylesheet 'http://new.mydomain.id/++resource++plone-admin-ui.css'. This request has been blocked; the content must be served over HTTPS.
Are there any special environment variables that can be passed to make all communication use HTTPS?
Previously, I installed the OJS3 web application behind the same reverse proxy and got the same error message, but it was resolved by passing the environment variable HTTPS=on to the container.
I hope there is a similar environment variable for Plone. I'm using Traefik 1.7.16.

You need to correctly configure the proxy rewrite URL, including Virtual Host Monster (VHM) parts. That way, Zope's VHM can correctly rewrite the request.
An example for Nginx can be found here:
https://docs.plone.org/manage/deploying/front-end/nginx.html#minimal-nginx-front-end-configuration-for-plone-on-ubuntu-debian-linux
Basically, the rewrite URL should look like this:
http://plone/VirtualHostBase/http/yoursite.com:80/Plone/VirtualHostRoot/
where:
http://plone is the protocol plus domain (or address) of the proxied server,
the first http after VirtualHostBase is the protocol of the frontend server,
yoursite.com is the domain of the frontend server,
80 is the port of the frontend server,
and /Plone is the path to the Plone site root.
More information on the VHM: https://zope.readthedocs.io/en/latest/zopebook/VirtualHosting.html
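For Traefik 1.7 specifically, the same rewrite can be done with a ReplacePathRegex frontend rule on the Plone container. A minimal sketch as docker-compose labels, assuming the official plone image listening on port 8080 and a Zope site with the id Plone (adjust the hostname, port and site id to your setup):
plone:
  image: plone
  labels:
    - "traefik.enable=true"
    - "traefik.port=8080"
    - "traefik.frontend.rule=Host:new.mydomain.id;ReplacePathRegex: ^/(.*) /VirtualHostBase/https/new.mydomain.id:443/Plone/VirtualHostRoot/$$1"
The https and :443 in the VirtualHostBase segment are what make Zope generate https:// links; the doubled $$ is docker-compose's escaping of a literal $ inside a label value.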

Related

Redirect http traffic to https on a non standard apache port

I have a website running at https://example.com:555
Ports 80 and 443 for the same public IP have been forwarded to another server.
I only want to redirect the URL from http://example.com:555/1618/?id=877 to https://example.com:555/1618/?id=877
Right now I am getting the 400 error "Your browser sent a request that this server could not understand..."
I am using Apache 2.4 on Ubuntu 20.04.
Any leads would be highly appreciated.
Thanks.
adding "ErrorDocument 400 https://example.com:555" in .htaccess
works but the id parameter is not passed
adding ErrorDocument in the localized-error-pages.conf in apache
works but the id parameter is not passed
configure Strict-Transport-Security
didnt work
trying different $1 , REQUEST_URI etc in .htaccess
did not work
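For reference, a sketch of the ErrorDocument workaround mentioned above, using the question's hostname and port. An external URL in an ErrorDocument makes Apache answer with a plain redirect to that literal address, which is why the original path and id parameter get dropped:
# .htaccess (or vhost config) for the port-555 site
# A plain-HTTP request hitting the TLS-only port is rejected with a 400 before
# mod_rewrite ever runs, so the error document is the only hook left, and it
# can only redirect to the fixed URL given here (no path or query string).
ErrorDocument 400 https://example.com:555/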

Swagger page being redirected from https to http

I have an AWS Elastic Load Balancer listening on HTTPS (443) using SSL and forwarding requests to EC2 instances over HTTP (80), with IIS hosting a .NET Web API application that uses Swashbuckle to describe the API methods.
The home page of the API (https://example.com) has a link to the Swagger documentation, which reads as https://example.com/swagger/ui/index.html when you hover over the link.
If I click on the link, the browser is redirected to http://example.com/swagger/ui/index.html, which displays a Page Not Found error,
but if I type https://example.com/swagger/ui/index.html directly into the browser, the Swagger page loads; however, when expanding the methods and clicking "Try it out", the Request URL starts with "http" again.
This configuration is only for the Stage and Production environments. Lower environments don't use the load balancer and just use http.
Any ideas on how to stop HTTPS being redirected to HTTP? And how to make Swagger display Request URLs using https?
Thank you
EDIT:
I'm using a custom index.html file
This seems to be a known issue for Swashbuckle. Quote:
"By default, the service root url is inferred from the request used to access the docs. However, there may be situations (e.g. proxy and load-balanced environments) where this does not resolve correctly. You can workaround this by providing your own code to determine the root URL."
What I did was provide the root URL and/or scheme to use based on the environment:
GlobalConfiguration.Configuration
    .EnableSwagger(c =>
    {
        ...
        c.RootUrl(req => GetRootUrlFromAppConfig(req));
        ...
        c.Schemes(GetEnvironmentScheme());
        ...
    })
    .EnableSwaggerUi(c =>
    {
        ...
    });
where
public static string[] GetEnvironmentScheme()
{
    ...
}
public static string GetRootUrlFromAppConfig(HttpRequestMessage request)
{
    ...
}
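For illustration only, here is a hypothetical way those two helpers could be filled in, assuming the scheme and root URL live in appSettings keys named swagger:scheme and swagger:rootUrl (the key names and fallback behaviour are assumptions for this sketch, not part of the original answer):
// Hypothetical helpers; requires using System; using System.Configuration; using System.Net.Http;
public static string[] GetEnvironmentScheme()
{
    // e.g. "https" for Stage/Production, "http" for lower environments
    var scheme = ConfigurationManager.AppSettings["swagger:scheme"];
    return new[] { string.IsNullOrEmpty(scheme) ? "http" : scheme };
}
public static string GetRootUrlFromAppConfig(HttpRequestMessage request)
{
    // Prefer the configured root URL; otherwise fall back to the scheme and host the request arrived on.
    var rootUrl = ConfigurationManager.AppSettings["swagger:rootUrl"];
    return string.IsNullOrEmpty(rootUrl)
        ? request.RequestUri.GetLeftPart(UriPartial.Authority)
        : rootUrl;
}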
The way I would probably do it is to have one main Swagger file and, during the build of your application, generate a different Swagger file based on environment parameters for the schemes and hosts.
That way you only have to manage one Swagger file across your environments, plus a few extra environment properties, host and schemes (if you don't already have them).
Since I don't know Swashbuckle, I can't answer your first question (the redirect) for sure.

Allow only CloudFront to read from origin servers?

I'm using origin servers on CloudFront (as opposed to s3) with signed URLs. I need a way to ensure that requests to my server are coming only from CloudFront. That is, a way to prevent somebody from bypassing CloudFront and requesting a resource directly on my server. How can this be done?
As per the documentation, there's no support for that yet. The only thing I can think of is restricting access further, although not entirely, by allowing only Amazon IP addresses to reach your web server. AWS should be able to provide the IP address ranges to you, as they have provided them to us.
This is what the docs say:
Using an HTTP Server for Private Content
You can use signed URLs for any CloudFront distribution, regardless of whether the origin is an Amazon S3 bucket or an HTTP server. However, for CloudFront to access your objects on an HTTP server, the objects must remain publicly accessible. Because the objects are publicly accessible, anyone who has the URL for an object on your HTTP server can access the object without the protection provided by CloudFront signed URLs. If you use signed URLs and your origin is an HTTP server, do not give the URLs for the objects on your HTTP server to your customers or to others outside your organization.
I've just done this for myself, and thought I'd leave the answer here where I started my search.
Here are the few lines you need to put in your .htaccess (assuming you've already turned the rewrite engine on):
RewriteCond %{HTTP_HOST} ^www-origin\.example\.com [NC]
RewriteCond %{HTTP_USER_AGENT} !^Amazon\ CloudFront$ [NC]
RewriteRule ^(.*)$ https://example.com/$1 [R=301,L]
This will redirect all visitors to your Cloudfront distribution - https://example.com in this, um, example - and only let www-origin.example.com work for Amazon CloudFront. If your website code is also on a different URL (a development or staging server, for example) this won't get in the way.
Caution: the user-agent is guessable and spoofable; a more secure way of achieving this would be to set a custom HTTP header in Cloudfront, and check for its value in .htaccess.
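A sketch of that more secure variant, assuming the CloudFront distribution is configured to add an origin custom header named X-Origin-Secret with a long random value (both the header name and the value here are placeholders, not anything CloudFront sets by itself):
# Reject any request that doesn't carry the secret header added by CloudFront.
RewriteCond %{HTTP:X-Origin-Secret} !^replace-with-a-long-random-value$
RewriteRule ^ - [F]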
I ended up creating 3 Security Groups filled solely with CloudFront IP addresses.
I found the list of IPs on this AWS docs page.
If you want to just copy and paste the IP ranges into the console, you can use this list I created:
Regional:
13.113.196.64/26, 13.113.203.0/24, 52.199.127.192/26, 13.124.199.0/24, 3.35.130.128/25, 52.78.247.128/26, 13.233.177.192/26, 15.207.13.128/25, 15.207.213.128/25, 52.66.194.128/26, 13.228.69.0/24, 52.220.191.0/26, 13.210.67.128/26, 13.54.63.128/26, 99.79.169.0/24, 18.192.142.0/23, 35.158.136.0/24, 52.57.254.0/24, 13.48.32.0/24, 18.200.212.0/23, 52.212.248.0/26, 3.10.17.128/25, 3.11.53.0/24, 52.56.127.0/25, 15.188.184.0/24, 52.47.139.0/24, 18.229.220.192/26, 54.233.255.128/26, 3.231.2.0/25, 3.234.232.224/27, 3.236.169.192/26, 3.236.48.0/23, 34.195.252.0/24, 34.226.14.0/24, 13.59.250.0/26, 18.216.170.128/25, 3.128.93.0/24, 3.134.215.0/24, 52.15.127.128/26, 3.101.158.0/23, 52.52.191.128/26, 34.216.51.0/25, 34.223.12.224/27, 34.223.80.192/26, 35.162.63.192/26, 35.167.191.128/26, 44.227.178.0/24, 44.234.108.128/25, 44.234.90.252/30
Global:
120.52.22.96/27, 205.251.249.0/24, 180.163.57.128/26, 204.246.168.0/22, 205.251.252.0/23, 54.192.0.0/16, 204.246.173.0/24, 54.230.200.0/21, 120.253.240.192/26, 116.129.226.128/26, 130.176.0.0/17, 99.86.0.0/16, 205.251.200.0/21, 223.71.71.128/25, 13.32.0.0/15, 120.253.245.128/26, 13.224.0.0/14, 70.132.0.0/18, 13.249.0.0/16, 205.251.208.0/20, 65.9.128.0/18, 130.176.128.0/18, 58.254.138.0/25, 54.230.208.0/20, 116.129.226.0/25, 52.222.128.0/17, 64.252.128.0/18, 205.251.254.0/24, 54.230.224.0/19, 71.152.0.0/17, 216.137.32.0/19, 204.246.172.0/24, 120.52.39.128/27, 118.193.97.64/26, 223.71.71.96/27, 54.240.128.0/18, 205.251.250.0/23, 180.163.57.0/25, 52.46.0.0/18, 223.71.11.0/27, 52.82.128.0/19, 54.230.0.0/17, 54.230.128.0/18, 54.239.128.0/18, 130.176.224.0/20, 36.103.232.128/26, 52.84.0.0/15, 143.204.0.0/16, 144.220.0.0/16, 120.52.153.192/26, 119.147.182.0/25, 120.232.236.0/25, 54.182.0.0/16, 58.254.138.128/26, 120.253.245.192/27, 54.239.192.0/19, 18.64.0.0/14, 120.52.12.64/26, 99.84.0.0/16, 130.176.192.0/19, 52.124.128.0/17, 204.246.164.0/22, 13.35.0.0/16, 204.246.174.0/23, 36.103.232.0/25, 119.147.182.128/26, 118.193.97.128/25, 120.232.236.128/26, 204.246.176.0/20, 65.8.0.0/16, 65.9.0.0/17, 120.253.241.160/27, 64.252.64.0/18
I'd like to note that by default, Security Groups only allow a maximum of 60 inbound and 60 outbound rules each, which is why I'm splitting these 122 IP ranges across 3 Security Groups.
After creating your 3 Security Groups, attach them to your EC2 (you can attach multiple Security Groups to an EC2). I left the EC2's default Security Group to only allow SSH traffic from my IP address.
Then you should be good to go! This forces users to use your CloudFront distribution and keeps your EC2's IP/DNS private.
AWS have finally created an AWS managed prefix list for CloudFront to Origin server requests. So no more need for custom Lambdas updating Security Groups etc.
Use the prefix com.amazonaws.global.cloudfront.origin-facing in your Security Groups etc.
See the following links for more info:
The What's New Announcement
The Documentation
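As a sketch of how to use it from the AWS CLI (the security group id is a placeholder, and the prefix list id returned by the first command differs per account and region):
# Look up the id of the CloudFront origin-facing managed prefix list
aws ec2 describe-managed-prefix-lists \
  --filters Name=prefix-list-name,Values=com.amazonaws.global.cloudfront.origin-facing
# Allow HTTPS from CloudFront only, using the prefix list id found above
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --ip-permissions 'IpProtocol=tcp,FromPort=443,ToPort=443,PrefixListIds=[{PrefixListId=pl-xxxxxxxx}]'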

Nginx will not stop rewriting

I am attempting to configure an ownCloud server that rewrites all incoming requests and ships them back out at the exact same domain and request URI, but with the scheme changed from http to https.
This failed miserably. I tried:
redirect 301 https://$hostname/$request_uri
and
rewrite ^ https://$hostname/$request_uri
Anyway, after removing that just to make sure the basic nginx configuration would work as it had prior to adding the SSL redirects/rewrites, it will NOT stop changing the scheme to https.
Is there a cached list somewhere in nginx's configuration that keeps hold of redirect/rewrite protocols? I cleared my browser cache completely and it will not stop.
AH HA!
in config/config.php there was a line
'forcessl' => true,
The stupid line got switched on when ownCloud received a request on port 443.
Turned it off, and standard HTTP ownCloud works, and neither Apache nor nginx is redirecting to SSL.
Phew.
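For anyone who does want the http-to-https redirect the question started with, the usual nginx form is a dedicated port-80 server block (the server_name is a placeholder):
server {
    listen 80;
    server_name cloud.example.com;
    # Send everything to the same host and URI over https
    return 301 https://$host$request_uri;
}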

IIS 7.5 URL rewrite to Geoserver giving Connection Reset error

I have Geoserver set up on a Windows 2008 server using Jetty as the web container on port 8080. If I browse to http://[servername]:8080/geoserver/www/test/test.html I get an HTML page returned as expected.
Then I set up IIS 7.5 using ARR and URL Rewrite at the application pool level to create a reverse proxy, so that http://[servername]/geoserver... is rewritten to http://[servername]:8080/geoserver... I am using match '.*' for the URL and 'geoserver/' for the condition.
Browsing to it gives a 'connection reset' error, and the IIS HTTP error log (C:\Windows\System32\LogFiles\HTTPERR) shows 'Connection_Dropped DefaultAppPool'.
If I change the URL rewrite action to a redirect, the HTML page is displayed as expected, but obviously the URL then shows as redirected to port 8080.
Solved this by changing the URL rewrite to use .* for the URL match condition.
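For context, a sketch of what such an ARR reverse-proxy rule typically looks like in web.config (the rule name and the localhost target are illustrative; this mirrors the match/condition setup described above rather than reproducing the exact rule used):
<system.webServer>
  <rewrite>
    <rules>
      <!-- Forward /geoserver/... to the Jetty instance on port 8080 -->
      <rule name="Geoserver reverse proxy" stopProcessing="true">
        <match url="^geoserver/(.*)" />
        <action type="Rewrite" url="http://localhost:8080/geoserver/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>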
