HTTPS in PAC file - proxy

I am writing a .pac file for use with iOS 5 (no jailbreak), but I am having trouble matching URLs that start with "https" (e.g. https://test.com).
Here is my script:
function FindProxyForURL(url, host) {
    if (shExpMatch(url, "https://*")) return "PROXY 123.123.123.123";
    return 'DIRECT';
}
And once I have matched "https://test.com", how can I return "https://123.123.123.123" for that URL?

Use this:
if (shExpMatch(url, "https:**"))
This should fix it.
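For completeness, here is a minimal sketch of the whole file with that pattern (the :8080 port is an assumption, since PROXY directives normally include an explicit port):

// Placeholder proxy address; replace 123.123.123.123:8080 with your own.
function FindProxyForURL(url, host) {
    // shExpMatch does shell-style globbing, so "https:**" matches any URL whose scheme is https.
    if (shExpMatch(url, "https:**")) {
        return "PROXY 123.123.123.123:8080";
    }
    return "DIRECT";
}

As for the second part of the question: a PAC file can only decide how a request is routed (PROXY, SOCKS or DIRECT); FindProxyForURL cannot rewrite the requested URL to "https://123.123.123.123".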

Related

Override domain in TLS request

I am running Caddy 2.6.3. Say I have two domains, domain1.net and domain2.net. Let's Encrypt should be enabled for both; the DNS challenge is not an option for me, so I can't use wildcard certs.
I would like to redirect domain1.net, domain2.net, and www.domain2.net to www.domain1.net.
What I basically do is this:
domain1.net, domain2.net, www.domain2.net {
    redir https://www.domain1.net${uri}
}

www.domain1.net {
    file_server
}
This works well. However, I don't actually have just two domains; I have around 20, so listing every domain one by one would not scale well. I therefore changed to the following, to avoid having to specify both www.domain.net and domain.net every time:
domain1.net, (www.)?domain2.net {
    redir https://www.domain1.net${uri}
}

www.domain1.net {
    file_server
}
My problem is that this breaks Let's Encrypt, since the "real" domain names can't be derived from the wildcard matcher. I would somehow have to tell Let's Encrypt to request the cert for the actual domain as soon as a request matches my host matcher:
domain1.net --> matches, so request cert for "domain1.net"
domain2.net --> matches, so request cert for "domain2.net"
www.domain2.net --> matches, so request cert for "www.domain2.net"
domain3.net --> does not match, do nothing
I see that there is dns_challenge_override_domain, but it seems to apply only to the DNS challenge.
I wonder if there is something like:
domain1.net, (www.)?domain2.net {
    redir https://www.domain1.net${uri}
    tls {
        override_request_domain ${host}  # this does not work, but this is what I probably want to do :D
    }
}

www.domain1.net {
    file_server
}
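For what it's worth, the closest built-in mechanism to the behaviour described above may be Caddy's on-demand TLS, which obtains a certificate at handshake time for whatever hostname was requested, gated by an "ask" endpoint that approves the hostnames you actually own. A rough sketch, where the allow-list endpoint on localhost:5555 is hypothetical:

{
    on_demand_tls {
        # Hypothetical endpoint that returns 200 only for your ~20 domains.
        ask http://localhost:5555/allowed
    }
}

https:// {
    tls {
        on_demand
    }
    redir https://www.domain1.net{uri}
}

www.domain1.net {
    file_server
}

With on-demand issuance, the certificate for a hostname is requested the first time that hostname is actually visited, which is close to the "request the cert for the actual domain as soon as a request matches" behaviour, at the cost of the first request per hostname waiting for issuance.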

Is it possible to get webpack-dev-server to ignore all but a certain path in the proxy settings?

I've got my WDS running on port 9000, with the webpack bundles located under /dist/, and a back-end server running on port 55555.
Is there a way to get WDS to ignore (proxy to 55555) every call except those starting with /dist/?
I've got the following:
devServer: {
    port: 9000,
    proxy: {
        "/dist": "http://localhost:9000",
        "/": "http://localhost:55555"
    }
}
Trouble is, that root ("/") just overrides everything...
Thanks for any advice you can offer.
UPDATE:
I've gotten a little farther with the following:
proxy: {
    "/": {
        target: "http://localhost:55555",
        bypass: function(req, res, proxyOptions) {
            return (req.url.indexOf("/dist/") !== -1);
        }
    }
},
But the bypass just seems to kill the connection. I was hoping it would tell the (9000) server to not proxy when the condition is true. Anybody know a good source explaining "bypass"?
Webpack allows glob syntax for these patterns. As a result, you should be able to use an exclusion to match "all-but-dist".
Something like this may work (sorry I don't have webpack in front of me at the moment):
devServer: {
    port: 9000,
    proxy: {
        "!/dist/**/*": "http://localhost:55555"
    }
}
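If you would rather keep the bypass approach from the update: as far as I understand webpack-dev-server's bypass option, the callback should return a path to serve from the dev server itself (or null/undefined to keep proxying), not a boolean, so a sketch along these lines may behave as intended:

proxy: {
    "/": {
        target: "http://localhost:55555",
        bypass: function(req, res, proxyOptions) {
            // Serve bundle requests from the dev server itself;
            // everything else falls through to the proxy on 55555.
            if (req.url.indexOf("/dist/") !== -1) {
                return req.url;
            }
            return null;
        }
    }
}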

Getting comma separated ips from Http header

I have a Spring Boot app running on Tomcat. I have to resolve each IP to its geolocation: city, province, and country. However, sometimes I receive the IP address as a comma-separated string instead of a single IP address, for example 1.39.27.224, 8.37.225.221.
The code I am using to extract the IP from an HTTP request:
public static String getIp(final HttpServletRequest request) {
    PreConditions.checkNull(request, "request cannot be null");
    String ip = request.getHeader("X-FORWARDED-FOR");
    if (!StringUtils.hasText(ip)) {
        ip = request.getRemoteAddr();
    }
    return ip;
}
The X-Forwarded-For header can be used to identify the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer.
The general format of this field is
X-Forwarded-For: client, proxy1, proxy2
In the above example you can see that the request passed through proxy1 and proxy2.
In your case you should parse this comma-separated string and read the first value, which is the client's IP address.
Warning: it is easy to forge an X-Forwarded-For header, so you might get wrong information.
Please take a look at https://en.wikipedia.org/wiki/X-Forwarded-For to read more about this.
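A minimal sketch of that parsing, along the lines of the getIp method above (the name getClientIp and the trim call are just for illustration, not from the original code):

public static String getClientIp(final HttpServletRequest request) {
    String ip = request.getHeader("X-FORWARDED-FOR");
    if (!StringUtils.hasText(ip)) {
        return request.getRemoteAddr();
    }
    // Header format is "client, proxy1, proxy2"; the left-most entry is the client.
    return ip.split(",")[0].trim();
}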
Here is what I use in my servlet (running on Jetty behind HAProxy) -
I just try to get the first IP address in the X-Forwarded-For header:
private static final Pattern FIRST_IP_ADDRESS =
        Pattern.compile("^(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})");

public static String parseXff(HttpServletRequest httpReq) {
    String xff = httpReq.getHeader("X-Forwarded-For");
    if (xff != null) {
        Matcher matcher = FIRST_IP_ADDRESS.matcher(xff);
        if (matcher.find()) {
            return matcher.group(1);
        }
    }
    // return localhost when the servlet is accessed directly, without HAProxy
    return "127.0.0.1";
}

Https redirect and login cookies on Heroku with Play Framework

I have a Play! Framework Heroku project that has three deployments: one running on my dev machine, one for beta on Heroku, and one for production on Heroku. Their HTTP and HTTPS URLs are as follows:
              DEV                        BETA                                  PRODUCTION
HTTP URL      http://localhost:9000      http://domain-beta.herokuapps.com     http://www.domain.com
HTTPS URL     https://localhost:9443     https://domain-beta.herokuapps.com    https://secure.domain.com
HTTPS Type    My cert                    Piggyback (using Heroku's cert)       Hostname-based SSL (using my cert)
I also have a class HttpsRequired that has methods for requiring HTTPS, and for redirecting back to HTTP (thanks to this post for the help).
public class HttpsRequired extends Controller {

    /** Called before every request to ensure that HTTPS is used. */
    @Before
    public static void redirectToHttps() {
        // if it's not secure, but Heroku has already done the SSL processing,
        // then it might actually be secure after all
        if (!request.secure && request.headers.get("x-forwarded-proto") != null) {
            request.secure = request.headers.get("x-forwarded-proto").values.contains("https");
        }
        // redirect if it's not secure
        if (!request.secure) {
            String url = redirectHostHttps() + request.url;
            System.out.println("Redirecting to secure: " + url);
            redirect(url);
        }
    }

    /** Renames the host to be https://, handles both Heroku and local testing. */
    @Util
    public static String redirectHostHttps() {
        if (Play.id.equals("dev")) {
            String[] pieces = request.host.split(":");
            String httpsPort = (String) Play.configuration.get("https.port");
            return "https://" + pieces[0] + ":" + httpsPort;
        } else {
            if (request.host.endsWith("domain.com")) {
                return "https://secure.domain.com";
            } else {
                return "https://" + request.host;
            }
        }
    }

    /** Renames the host to be http://, handles both Heroku and local testing. */
    @Util
    public static String redirectHostNotHttps() {
        if (Play.id.equals("dev")) {
            String[] pieces = request.host.split(":");
            String httpPort = (String) Play.configuration.get("http.port");
            return "http://" + pieces[0] + ":" + httpPort;
        } else {
            if (request.host.endsWith("domain.com")) {
                return "http://www.domain.com";
            } else {
                return "http://" + request.host;
            }
        }
    }
}
I modified Secure.login() to call HttpsRequired.redirectToHttps() before it runs, to ensure that all passwords are submitted encrypted. Then, in my Security.onAuthenticated(), I redirect to the homepage on standard HTTP.
This works great on my dev and beta deployments, but in production all of my HTTP requests are redirected to the HTTPS login page. I can still use the whole site in HTTPS, but I want regular HTTP to work too.
All of my pages are protected as members-only and require users to log in, using the @With(Secure.class) annotation. I'm thinking that it must be related to the fact that the login happens at secure.domain.com instead of www.domain.com, and that they somehow generate different cookies.
Is there a way to change the login cookie created at secure.domain.com to make it work at www.domain.com?
Check out the documentation for the setting for default cookie domain.
http://www.playframework.org/documentation/1.2.4/configuration#application.defaultCookieDomain
It explains how you can set a cookie to work across all subdomains.
application.defaultCookieDomain
Enables session/cookie sharing between subdomains. For example, to make cookies valid for all domains ending with ‘.example.com’, e.g. foo.example.com and bar.example.com:
application.defaultCookieDomain=.example.com
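In this setup that would presumably mean something like the following in application.conf, so that the session cookie issued during login on secure.domain.com is also sent with requests to www.domain.com:

application.defaultCookieDomain=.domain.com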

How do you restrict access to certain paths using Lighttpd?

I would like to restrict access to my /admin URL to internal IP addresses only. Anyone on the open Internet should not be able to log in to my web site. Since I'm using Lighttpd, my first thought was to use mod_rewrite to redirect any outside request for the /admin URL back to my home page, but I don't know much about Lighty, and the docs don't say much about detecting a 192.168.0.0 IP range.
Try this:
$HTTP["remoteip"] == "192.168.0.0/16" {
/* your rules here */
}
Example from the docs:
# deny the access to www.example.org to all user which
# are not in the 10.0.0.0/8 network
$HTTP["host"] == "www.example.org" {
    $HTTP["remoteip"] != "10.0.0.0/8" {
        url.access-deny = ( "" )
    }
}
This worked for me:
$HTTP["remoteip"] != "192.168.1.1/254" {
$HTTP["url"] =~ "^/intranet/" {
url.access-deny = ( "" )
}
}
Note that != worked for me where == did not.
