Override domain in TLS request - lets-encrypt

I am running Caddy 2.6.3. Let’s say I have two domains, domain1.net and domain2.net. Let’s Encrypt should be enabled for both, and the DNS challenge is not an option for me, so I can’t use wildcard certs.
I would like to redirect domain1.net, domain2.net, and www.domain2.net to www.domain1.net.
What I basically do is this:
domain1.net, domain2.net, www.domain2.net {
    redir https://www.domain1.net{uri}
}
www.domain1.net {
    file_server
}
This works well. However, I do not actually have just two domains; I have around 20, so listing all the domains one by one would not scale well. I changed to the following instead, to avoid having to spell out both www.domain.net and domain.net every time:
domain1.net, (www.)?domain2.net {
    redir https://www.domain1.net{uri}
}
www.domain1.net {
    file_server
}
My problem is that this breaks Let’s Encrypt, since the “real” domain names can’t be derived from the wildcard matcher. I would somehow have to tell Let’s Encrypt to always request the cert for the actual domain as soon as a request matches my host matcher:
domain1.net --> matches, so request cert for "domain1.net"
domain2.net --> matches, so request cert for "domain2.net"
www.domain2.net --> matches, so request cert for "www.domain2.net"
domain3.net --> does not match, do nothing
I see that there is dns_challenge_override_domain, yet it seems to only work for the DNS challenge.
I wonder if there is something like:
domain1.net, (www.)?domain2.net {
    redir https://www.domain1.net{uri}
    tls {
        override_request_domain {host} # this does not work, but this is what I probably want to do :D
    }
}
www.domain1.net {
    file_server
}
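For what it’s worth, Caddy’s on-demand TLS looks like a fit for this pattern (a sketch, not from the thread: on_demand defers issuance until a hostname is actually requested, and the ask endpoint URL below is a placeholder you would point at something that knows your domain list):

{
    on_demand_tls {
        # Caddy queries this endpoint before issuing a cert for a hostname;
        # the URL is a placeholder, not part of the original question.
        ask http://localhost:5555/check
    }
}

https:// {
    tls {
        on_demand
    }
    redir https://www.domain1.net{uri}
}

www.domain1.net {
    file_server
}

With on_demand, issuance no longer depends on enumerating hostnames in the site address, and the ask endpoint guards against certificates being issued for arbitrary domains pointed at the server.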

Related

index_not_found_exception - Elasticsearch

In image #1, as you can see, I am getting a valid ES response when firing a GET request. However, if I try doing the same thing through the NGINX reverse proxy that I have created and hit myip/elasticsearch, it returns the error shown in image #2. Can someone help me with this?
server {
    listen 80;
    server_name myip;

    location /elasticsearch/ {
        proxy_pass http://127.0.0.1:9200;
    }

    location /kibana/ {
        proxy_pass http://127.0.0.1:5601;
    }
}
The right way is to specify both of those slashes. The slash after 127.0.0.1:9200 is essential: without it, your request /elasticsearch/some/route would be passed as-is, while with the slash it would be passed as /some/route.

In nginx terms, this means you have specified a URI after the backend name. That is, the URI prefix specified in the location directive (/elasticsearch/) is stripped from the original URI (leaving some/route at this stage), and the URI specified after the backend name (/) is prepended to it, resulting in / + some/route = /some/route. You can specify any path in a proxy_pass directive; for example, with proxy_pass http://127.0.0.1:9200/prefix/ the request would be passed to the backend as /prefix/some/route.

With all of that said, you can see that specifying location /elasticsearch { ... } instead of location /elasticsearch/ { ... } would give you //some/route instead of /some/route. I'm not sure this is exactly the cause of your problem; however, configurations like
location /elasticsearch/ {
    proxy_pass http://127.0.0.1:9200/;
}
are more correct.
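To make the stripping/prepending rules concrete, here is how each proxy_pass form maps an incoming URI (a summary of the behavior described above; the three variants are shown side by side for illustration and would not all live in one server block):

location /elasticsearch/ {
    proxy_pass http://127.0.0.1:9200/;        # /elasticsearch/some/route -> /some/route
}
location /elasticsearch/ {
    proxy_pass http://127.0.0.1:9200;         # /elasticsearch/some/route -> /elasticsearch/some/route (as-is)
}
location /elasticsearch/ {
    proxy_pass http://127.0.0.1:9200/prefix/; # /elasticsearch/some/route -> /prefix/some/route
}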
Now, may I ask what you get with exactly this configuration in response to curl -i http://localhost:9200/ and curl -i http://localhost/? I want to see all the headers (except, of course, those containing private information).
The problem is the path: nginx is passing it unmodified. Add a slash to the proxy_pass URLs.
server {
    listen 80;
    server_name myip;

    location /elasticsearch/ {
        proxy_pass http://127.0.0.1:9200/;
    }

    location /kibana/ {
        proxy_pass http://127.0.0.1:5601/;
    }
}
From the documentation:
Note that in the first example above, the address of the proxied server is followed by a URI, /link/. If the URI is specified along with the address, it replaces the part of the request URI that matches the location parameter. For example, here the request with the /some/path/page.html URI will be proxied to http://www.example.com/link/page.html. If the address is specified without a URI, or it is not possible to determine the part of URI to be replaced, the full request URI is passed (possibly, modified).
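Expressed as configuration, the documentation's example would look like this (using the doc's own example.com names):

location /some/path/ {
    # A request for /some/path/page.html is proxied to http://www.example.com/link/page.html
    proxy_pass http://www.example.com/link/;
}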

PCEP-SR draft version 6, SR Explicit Route Object/Record Route Object subobjects

I am setting up Segment Routing via Pathman-SR with the ODL Nitrogen controller and vMX Juniper routers. To allow this, I have to change the IANA subobject code points, but I am unable to do it...
I followed this documentation, but still with no result:
https://docs.opendaylight.org/en/stable-carbon/user-guide/pcep-user-guide.html#segment-routing
https://test-odl-docs.readthedocs.io/en/latest/user-guide/pcep-user-guide.html
I tried to update the configuration via the REST API, but when I send a PUT request to:
/restconf/config/pcep-segment-routing-app-config:pcep-segment-routing-app-config
with the body:
<pcep-segment-routing-config xmlns="urn:opendaylight:params:xml:ns:yang:controller:pcep:segment-routing-app-config">
    <iana-sr-subobjects-type>true</iana-sr-subobjects-type>
</pcep-segment-routing-config>
I get the following error:
{
    "errors": {
        "error": [
            {
                "error-type": "protocol",
                "error-tag": "invalid-value",
                "error-message": "URI has bad format. Possible reasons:\n 1. \"pcep-segment-routing-app-config:pcep-segment-routing-app-config\" was not found in parent data node.\n 2. \"pcep-segment-routing-app-config:pcep-segment-routing-app-config\" is behind mount point. Then it should be in format \"/yang-ext:mount/pcep-segment-routing-app-config:pcep-segment-routing-app-config\"."
            }
        ]
    }
}
I think there is a typo in the URL in the doc; you have to use /restconf/config/pcep-segment-routing-app-config:pcep-segment-routing-config
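Putting it together, the corrected request might look like this (a sketch; the localhost:8181 endpoint and admin:admin credentials are assumptions based on a default ODL install):

curl -u admin:admin -X PUT \
    -H "Content-Type: application/xml" \
    -d '<pcep-segment-routing-config xmlns="urn:opendaylight:params:xml:ns:yang:controller:pcep:segment-routing-app-config">
            <iana-sr-subobjects-type>true</iana-sr-subobjects-type>
        </pcep-segment-routing-config>' \
    http://localhost:8181/restconf/config/pcep-segment-routing-app-config:pcep-segment-routing-config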
You can check this guide for reference:
https://docs.opendaylight.org/projects/bgpcep/en/stable-neon/pcep/pcep-user-guide-active-stateful-pce.html#iana-code-points

Cross domain session with Sinatra and AngularJS

I am using Sinatra as a web service and AngularJS to make the calls:
post '/loginUser' do
  session[:cui] = user['cui']
end

get '/cui' do
  return session[:cui].to_s
end
But it doesn't seem to work (the '/cui' call returns an empty string). Any help would be greatly appreciated.
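One thing worth ruling out, although the thread doesn't mention it: Sinatra's cookie-backed sessions are off by default, so session[:cui] only persists if they are enabled somewhere. A minimal sketch, with a placeholder secret:

enable :sessions
set :session_secret, 'replace_with_a_long_random_value' # placeholder, not from the thread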
UPDATE:
Setting headers['Access-Control-Allow-Credentials'] = 'true' in Sinatra allows me to send the session, but it seems like the $http directive is not using the browser's cookies.
On the Sinatra app:
before do
  headers['Access-Control-Allow-Methods'] = 'GET, POST, PUT, DELETE, OPTIONS'
  headers['Access-Control-Allow-Origin'] = 'http://localhost:4567'
  headers['Access-Control-Allow-Headers'] = 'accept, authorization, origin'
  headers['Access-Control-Allow-Credentials'] = 'true'
end
On the AngularJS app:
host = 'http://127.0.0.1:5445/'

@viewController = ($scope, $http) ->
  $scope.getCui = () ->
    $http.get(host + 'cui', { withCredentials: true }).success (data) ->
      $scope.cui = data
      console.log data
Explanation:
AngularJS uses its own cookie system, so we need to specify that the cookies can be passed through the $http.get call using the { withCredentials: true } configuration object. Sinatra needs to accept the cross-domain cookies, so we need the headers mentioned above.
Note: the 'Access-Control-Allow-Origin' header cannot be a wildcard when credentials are used.
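If more than one origin has to be served, a common workaround (a sketch, not something the thread shows) is to echo the request's Origin header rather than hard-coding one:

before do
  # Browsers reject a wildcard origin when credentials are involved,
  # so echo the caller's origin; check it against an allow-list in production.
  headers['Access-Control-Allow-Origin'] = request.env['HTTP_ORIGIN'] || ''
  headers['Access-Control-Allow-Credentials'] = 'true'
end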
One option around this would be to configure an HTTP server with a proxy pass, so you could hit the same domain without incurring a cross-origin error. That way you can continue to properly maintain your abstractions as two separate apps.
Here is a brief example with nginx:
upstream angular_app {
    server localhost:3003;
}

upstream sinatra_app {
    server localhost:3004;
}

server {
    listen 80;
    server_name local.angular_app.com;
    root /Users/username/source/angular_app/;

    location / {
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://angular_app; # forward app traffic to the angular_app upstream
    }

    location ~ ^/api/(.*)$ {
        proxy_set_header Host $http_host;
        proxy_read_timeout 1200;
        proxy_pass http://sinatra_app/;
    }
}
By routing at the server level, you can successfully bypass domain restrictions AND you can keep the applications separate.

HTTPS in PAC file

I am writing a .pac file for use with iOS 5 without jailbreak, but I am having trouble matching URLs that start with "https" (e.g. https://test.com).
Here is my script:
function FindProxyForURL(url, host) {
    if (shExpMatch(url, "https://*")) return "PROXY 123.123.123.123";
    return "DIRECT";
}
And once "https://test.com" is matched, how can I return "https://123.123.123.123" for that URL?
Use this:
if (shExpMatch(url, "https:**"))
This should fix it.
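For reference, the whole function with the suggested pattern might read as follows (a sketch: the :8080 port is a placeholder, since PAC "PROXY" directives normally take host:port, and a PAC result can only name a proxy, it cannot rewrite the URL's scheme):

function FindProxyForURL(url, host) {
    // Match any URL whose scheme is https.
    if (shExpMatch(url, "https:**")) {
        return "PROXY 123.123.123.123:8080"; // port is a placeholder
    }
    return "DIRECT";
}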

Https redirect and login cookies on Heroku with Play Framework

I have a Play! Framework project on Heroku with three deployments: one for my dev machine, one for beta on Heroku, and one for production on Heroku. Their HTTP and HTTPS URLs are as follows:
              DEV                       BETA                                   PRODUCTION
HTTP URL      http://localhost:9000     http://domain-beta.herokuapps.com      http://www.domain.com
HTTPS URL     https://localhost:9443    https://domain-beta.herokuapps.com     https://secure.domain.com
HTTPS Type    My cert                   Piggyback (using Heroku's cert)        Hostname-based SSL (using my cert)
I also have a class HttpsRequired with methods for requiring HTTPS and for redirecting back to HTTP (thanks to this post for the help).
public class HttpsRequired extends Controller {

    /** Called before every request to ensure that HTTPS is used. */
    @Before
    public static void redirectToHttps() {
        // If it's not secure, but Heroku has already done the SSL processing,
        // then it might actually be secure after all.
        if (!request.secure && request.headers.get("x-forwarded-proto") != null) {
            request.secure = request.headers.get("x-forwarded-proto").values.contains("https");
        }
        // Redirect if it's not secure.
        if (!request.secure) {
            String url = redirectHostHttps() + request.url;
            System.out.println("Redirecting to secure: " + url);
            redirect(url);
        }
    }

    /** Renames the host to be https://; handles both Heroku and local testing. */
    @Util
    public static String redirectHostHttps() {
        if (Play.id.equals("dev")) {
            String[] pieces = request.host.split(":");
            String httpsPort = (String) Play.configuration.get("https.port");
            return "https://" + pieces[0] + ":" + httpsPort;
        } else {
            if (request.host.endsWith("domain.com")) {
                return "https://secure.domain.com";
            } else {
                return "https://" + request.host;
            }
        }
    }

    /** Renames the host to be http://; handles both Heroku and local testing. */
    @Util
    public static String redirectHostNotHttps() {
        if (Play.id.equals("dev")) {
            String[] pieces = request.host.split(":");
            String httpPort = (String) Play.configuration.get("http.port");
            return "http://" + pieces[0] + ":" + httpPort;
        } else {
            if (request.host.endsWith("domain.com")) {
                return "http://www.domain.com";
            } else {
                return "http://" + request.host;
            }
        }
    }
}
I modified Secure.login() to call HttpsRequired.redirectToHttps() before it runs, to ensure that all passwords are submitted encrypted. Then, in my Security.onAuthenticated(), I redirect to the homepage on standard HTTP.
This works great on my dev and beta deployments, but in production all of my HTTP requests are redirected to the HTTPS login page. I can still use the whole site in HTTPS, but I want regular HTTP to work too.
All of my pages are protected as members-only and require users to log in, using the @With(Secure.class) annotation. I'm thinking that it must be related to the fact that the login happens at secure.domain.com instead of www.domain.com, and that they somehow generate different cookies.
Is there a way to change the login cookie created at secure.domain.com to make it work at www.domain.com?
Check out the documentation for the default cookie domain setting.
http://www.playframework.org/documentation/1.2.4/configuration#application.defaultCookieDomain
It explains how you can set a cookie to work across all subdomains.
application.defaultCookieDomain

Enables session/cookie sharing between subdomains. For example, to make cookies valid for all domains ending with ‘.example.com’, e.g. foo.example.com and bar.example.com:

application.defaultCookieDomain=.example.com
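Applied to this setup, the production deployment would share its cookie across www.domain.com and secure.domain.com with something like the following line in application.conf (a sketch; the %prod prefix assumes the production framework id is actually named prod):

%prod.application.defaultCookieDomain=.domain.com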
