I have a Play! Application which needs to use HTTP and HTTPS. The application is running behind a proxy server (using Apache) that forwards web requests to the play application.
The proxy is using one port for HTTP requests and another port that is intended for HTTPS requests. Note that the ports on the proxy are not the same ports as the ones used by the Play application (this is due to provider restrictions!).
The Play application is using the "standard" ports: 9000 for HTTP and 9443 for HTTPS. The proxy receives HTTP requests on port 8080 and forwards them to Play's port 9000. The proxy receives HTTPS requests on port 8090 and forwards them to Play's port 9443.
My problem is that when I use the secure() method in Play, Play's logic causes the app to attempt to use 9443 as the HTTPS port. This causes the request to be lost, because the proxy is using a different port.
The same appears to happen when I want to switch from HTTPS back to HTTP. I cannot seem to make the system go to the port used by the proxy.
Somehow I need to make the system go to the ports known to the proxy server, without screwing up my routes. Is there some way to do this?
Thanks in advance for any help/insights.
I have found my own "answer", though it is somewhat of a kludge.
It turns out that, based on what I can ascertain from the documentation, there really is no way to tell Play to switch between ports when a Play application is behind a proxy. This is because, while Play does recognize the port that a request comes in on, it has no way of telling what proxy port it should use when switching between secure and insecure ports. It knows, for example, that an HTTP request comes through proxy port 8080, and it knows that subsequent requests to its port 9000 will come from that proxy port. What it does not know how to do is switch to another proxy port when someone attempts to use HTTPS to access its port 9443. Consequently, if you have a page like
http://toproxy:8080/links that has one or more links that use the secure() method to force HTTPS, then Play will resolve the link to be https://toproxy:8080 -- despite the fact that the proxy server may want to use port 8090 for HTTPS requests. Because proxy port 8080 is redirected to Play's port 9000, use of that port for HTTPS requests always fails. This is true in Play 2.0 as well as Play 1.X.
I believe that Play needs some standard configuration parameter that can tell it to map proxy ports to its HTTP and HTTPS ports. That way, when it is behind a proxy server a developer can use the secure() method and Play will be able to resolve the URL to the correct proxy port. This capability should be available in 1.X as well as Version 2.
Until someone actually implements this (I might if I ever get the time, but with all that I am committed to do, people shouldn't hold their breath), my kludge is to simply use the redirect() method to switch between the HTTP and HTTPS proxy ports. The redirect() method apparently allows us to enter full URLs, so I simply redirect to the full URL of the page that I switch requests on.
For example: in the aforementioned page http://toproxy:8080/links, I may have a link to a login page that I want to protect using HTTPS. To do this I create two actions: one for the redirect to the proxy HTTPS port (for this example, call it gotoLogin()) and another for actually rendering the login page (for this example, call it loginPage() and give it a route of /loginpage).
In gotoLogin() I redirect to loginPage in the following manner:
redirect("https://toproxy:8090/loginpage");
This makes the request come in on proxy port 8090, which is forwarded to Play's port 9443.
When the user properly logs in, my authentication action simply uses another redirect() call:
redirect("http://toproxy:8080/destination_page");
This causes Play to go back to the proper proxy port for unsecured requests.
This is not the best solution for this problem. There is a possibility that the proxy server can somehow be configured to make the proper switch between HTTP and HTTPS ports, but investigating this may take some time for someone who is not an expert in proxy server configuration (which describes me!). Of course, the best solution would be to implement in Play the kind of proxy port handling that I have previously described.
But this solution works and can be adapted to many different situations. Unless someone comes up with another better solution, this solution is the one I am using.
Play should recognise the port the request is coming from, and use that port. It is possible your Apache config is not set up correctly.
Have you put the following line in your Apache config?
ProxyPreserveHost on
You may also need XForwardedSupport=127.0.0.1 depending on your server configuration.
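For reference, a reverse-proxy setup along these lines might look like the sketch below. This is only a hedged example using the port numbers from the question (8080 -> 9000 for HTTP, 8090 -> 9443 for HTTPS); it assumes mod_proxy, mod_proxy_http, mod_ssl and mod_headers are enabled, the certificate paths are placeholders, and the Listen lines may already exist elsewhere in your configuration.

Listen 8080
<VirtualHost *:8080>
    # Plain HTTP: forward proxy port 8080 to Play's port 9000
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:9000/
    ProxyPassReverse / http://127.0.0.1:9000/
</VirtualHost>

Listen 8090
<VirtualHost *:8090>
    # HTTPS: terminate TLS here and forward proxy port 8090 to Play's port 9443
    SSLEngine On
    SSLCertificateFile    /path/to/cert.pem
    SSLCertificateKeyFile /path/to/key.pem
    SSLProxyEngine On
    ProxyPreserveHost On
    RequestHeader set X-Forwarded-Proto "https"
    ProxyPass        / https://127.0.0.1:9443/
    ProxyPassReverse / https://127.0.0.1:9443/
</VirtualHost>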
Related
We have two applications running on IBM Cloud Cloud Foundry (appA and appB).
appA accesses appB over container-to-container networking, while appB is also available externally over a Gorouter route.
The thing is that as long as our app exposes http-8080, all is good.
Now we have to do container-to-container networking over HTTPS.
We configured the app to expose https-8080. Port 8080 is used because https://docs.cloudfoundry.org/devguide/custom-ports.html states that:
By default, apps only receive requests on port 8080 for both HTTP and TCP routing,
and so must be configured, or hardcoded, to listen on this port
Container-to-container networking now works as expected using HTTPS.
But we are no longer able to reach appB over the external Gorouter route.
What is the best way to have it all up and running as we expect?
There isn't a good answer to this question, at least at the time I write this.
You do have a couple options though:
Manually set up HTTPS for the internal route. To do this, you would need to use the instructions for your application/server of choice to configure HTTPS. Then use whatever functionality your buildpack provides to inject this configuration into the application container. This would also require you to bundle and push TLS certs with your application. The platform isn't going to provide you TLS certs if you take this option.
The trick to making both the internal and public route work is that you need your application to listen on both port 8080 and the port you choose for your HTTPS traffic. As long as you continue taking HTTP traffic on port 8080, then your public routes should keep working.
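As a very rough illustration of that dual-listener idea, here is a sketch in Python (your app may well use a different language or framework). It assumes you have bundled server.crt and server.key with the app and picked 8443 as the container-to-container HTTPS port; none of those names come from the question.

# Minimal sketch: serve HTTP on 8080 for Gorouter and HTTPS on 8443 for C2C.
# server.crt / server.key are assumed to be bundled with the app; 8443 is an
# arbitrarily chosen port for container-to-container HTTPS traffic.
import ssl
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello\n")

def serve_http():
    # Public traffic keeps arriving via Gorouter on port 8080, plain HTTP.
    ThreadingHTTPServer(("0.0.0.0", 8080), Handler).serve_forever()

def serve_https():
    # Internal (C2C) traffic is served over TLS on 8443.
    server = ThreadingHTTPServer(("0.0.0.0", 8443), Handler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")
    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    server.serve_forever()

if __name__ == "__main__":
    threading.Thread(target=serve_http, daemon=True).start()
    serve_https()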
If you want a quick, but not ideal solution you can use port 61001. For newer versions of Cloud Foundry, this port is used by Envoy to accept traffic to your app over HTTPS. Envoy then proxies the request to your app via HTTP over port 8080. You can use this port for your container to container traffic as well, however the configured subject name on the TLS cert won't match your route.
Here's an example of what the subject name will look like.
subject: OU=organization:639f74aa-5d97-4a47-a6b3-e9c2613729d8 + OU=space:10180e2b-33b9-44ee-9f8f-da96da17ac1a + OU=app:10a4752e-be17-41f5-bfb2-d858d49165f2; CN=b7520259-6428-4a52-60d4-5f25
Because it's using this format, you would need to have your clients ignore certificate subject name match errors (not ideal as that weakens HTTPS), or perhaps create a custom hostname matcher.
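For example, a client call that relaxes only the hostname check might look like the Python sketch below. The internal route appb.apps.internal is a placeholder, and as noted above this weakens HTTPS; it also assumes the CA that signed the Envoy certificate is already in the container's trust store.

# Hedged sketch: call an internal route over the Envoy HTTPS port (61001)
# while skipping only the hostname/subject-name check.
import ssl
import urllib.request

ctx = ssl.create_default_context()
ctx.check_hostname = False          # the cert's subject name won't match the route
# ctx.verify_mode = ssl.CERT_NONE   # only if the issuing CA is not trusted either

with urllib.request.urlopen("https://appb.apps.internal:61001/", context=ctx) as resp:
    print(resp.status, resp.read()[:100])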
For what it's worth, I don't think you want or need to change the port. That is typically used if your application is not flexible and unable to listen on port 8080. It changes the port for inbound traffic. Since you're only using C2C networking, you don't need that option.
What you want, from what I understand, is that you want HTTPS for C2C traffic. In that case, the public traffic doesn't matter. It can still go through Gorouter to port 8080. For your container-to-container traffic, you can pick any port you want. You just need to make sure the port you choose has network policy set to allow that traffic (by default all C2C traffic is blocked). Once the network policy is set, you can connect directly over whatever port you designate.
We are hosting an application in the preprod azure PCF environment which exposes websocket endpoints for client devices to connect to. Is there a prescribed methodology to secure the said websocket endpoint using TLS/SSL when hosted on PCF and running behind the PCF HAProxy?
I am having trouble interpreting this information, as in, are we supposed to expose port 4443 on the server and PCF shall by default pick it up to be a secure port that ensures unsecured connections cannot be established? Or does it require some configuration to be done on HAProxy?
Is there a prescribed methodology to secure the said websocket endpoint using TLS/SSL when hosted on PCF and running behind the PCF HAProxy?
A few things:
You don't need to configure certs or anything like that when deploying your app to PCF. The platform takes care of all that. In your case, it'll likely be handled by HAProxy, but it could be some other load balancer or even Gorouter, depending on how your platform operations team installed PCF. The net result is that TLS is terminated before it hits your app, so you don't need to worry about it.
Your app should always force users to HTTPS. How you do this depends on the language/framework you're using, but most have some functionality for this.
This process generally works by checking to see if the incoming request was over HTTP or HTTPS. If it's HTTP, then you issue a redirect to the same URL, but over HTTPS. This is important for all apps, not just ones using WebSockets. Encrypt all the things.
Do keep in mind that you are behind one or more reverse proxies, so if you are doing this manually, you'll need to consider what's in x-forwarded-proto or x-forwarded-port, not just the upstream connection which would be Gorouter, not your client's browser.
https://docs.pivotal.io/platform/application-service/2-7/concepts/http-routing.html#http-headers
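If you are doing this by hand, the check described above amounts to something like the following WSGI middleware sketch in Python (framework-specific settings or extensions are usually preferable; this is just to show the idea of trusting x-forwarded-proto rather than the upstream connection).

# Rough sketch of a redirect-to-HTTPS check based on the x-forwarded-proto header.
def force_https(app):
    def middleware(environ, start_response):
        proto = environ.get("HTTP_X_FORWARDED_PROTO", "http")
        if proto != "https":
            host = environ.get("HTTP_HOST", "")
            url = "https://" + host + environ.get("PATH_INFO", "/")
            if environ.get("QUERY_STRING"):
                url += "?" + environ["QUERY_STRING"]
            start_response("301 Moved Permanently", [("Location", url)])
            return [b""]
        return app(environ, start_response)
    return middleware

You would wrap your WSGI application with it (app = force_https(app)); most frameworks ship an equivalent setting or middleware.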
If you are forcing your users to HTTPS (as described above), then your users will be unable to initiate an insecure WebSocket connection to your app. Browsers like Chrome & Firefox have restrictions to prevent an insecure WebSocket connection from being made when the site was loaded over HTTPS.
You'll get a message like "The operation is insecure" in Firefox, or "Cannot connect: SecurityError: Failed to construct 'WebSocket': An insecure WebSocket connection may not be initiated from a page loaded over HTTPS." in Chrome.
I am having trouble interpreting this information, as in, are we supposed to expose port 4443 on the server and PCF shall by default pick it up to be a secure port that ensures unsecured connections cannot be established? Or does it require some configuration to be done on HAProxy?
From the application perspective, you don't do anything different. Your app is supposed to start and listen on the assigned port, i.e. what's in $PORT. This is the same for HTTP, HTTPS, WS & WSS traffic. In short, as an app developer you don't need to think about this when deploying to PCF.
The only exception would be if your platform operations team uses a load balancer that does not natively support WebSockets. In this case, to work around the issue they need to separate traffic: HTTP and HTTPS go on the traditional ports 80 and 443, and they route WebSockets on a different port. The PCF docs recommend 4443, which is where you're probably seeing that port. I can't tell you if your platform is set up this way, but if you know that you're using HAProxy, it is probably not.
https://docs.pivotal.io/platform/application-service/2-8/adminguide/supporting-websockets.html
At any rate, if you don't know just push an app and try to initiate a secure WebSocket connection over port 443 and see if it works. If it fails, try 4443 and see if that works. That or ask your platform operations team.
For what it's worth, even if you need to use port 4443, there is no difference to your application that runs on PCF. The only difference would be in your JavaScript code that initiates the WebSocket connection. It would need to know to use port 4443 instead of the default 443.
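To illustrate, only the port in the URL changes. The sketch below uses Python's third-party websockets package rather than browser JavaScript, and the host and path are placeholders.

# Minimal client sketch: the only difference is the explicit :4443 in the URL.
import asyncio
import websockets  # pip install websockets

async def main():
    uri = "wss://myapp.example.com:4443/ws"   # 4443 instead of the default 443
    async with websockets.connect(uri) as ws:
        await ws.send("ping")
        print(await ws.recv())

asyncio.run(main())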
I made a proxy server in Python 3. It listens on port 4444. It basically receives requests from clients and sends them to the server. I want to use it as a firewall for my DVWA server. So I added another piece of functionality to the proxy: before sending the request to the DVWA server, it validates the input.
But the problem is, the clients have to configure their proxy settings in the browser to use my proxy server. Is there any way to access the proxy without configuring the browser settings? Basically I want to host the proxy server instead of the original web server, so that all the traffic goes through the proxy before going to the web server.
Thanks in advance...
You don't say whether your Python3 proxy is hosted on the same machine as the DVWA.
Assuming it is, the solution is simple: a reverse-proxy configuration. Your proxy transparently accepts and forwards requests to your server, which then processes them and sends the responses back through the proxy to the client.
Have your proxy listen on port 80
Have the DVWA listen on a port other than 80 so it's not clashing (e.g. 8080)
Your proxy, which is now receiving requests for the IP/hostname which would otherwise go to the DVWA, then forwards them as usual.
The client/web browser is none the wiser that anything has changed. No settings need changing.
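A minimal sketch of that arrangement in Python 3, using only the standard library, might look like this. It assumes DVWA has been moved to 127.0.0.1:8080 and that this script runs on the same machine with permission to bind port 80; the validation hook is left as a placeholder.

# Minimal reverse-proxy sketch: listen on port 80, forward to DVWA on 8080.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
import http.client

BACKEND_HOST, BACKEND_PORT = "127.0.0.1", 8080  # where DVWA now listens

class ReverseProxyHandler(BaseHTTPRequestHandler):
    def _forward(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length) if length else None

        # This is where your request validation / "firewall" checks would go.

        conn = http.client.HTTPConnection(BACKEND_HOST, BACKEND_PORT)
        conn.request(self.command, self.path, body=body, headers=dict(self.headers))
        resp = conn.getresponse()

        self.send_response(resp.status)
        for name, value in resp.getheaders():
            if name.lower() not in ("transfer-encoding", "connection"):
                self.send_header(name, value)
        self.end_headers()
        self.wfile.write(resp.read())
        conn.close()

    do_GET = do_POST = do_HEAD = _forward

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 80), ReverseProxyHandler).serve_forever()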
That's the best case scenario, given the information provided in your question. Unfortunately, I can't give any alternative solutions without knowing the network layout, where the machines reside, and the intent of the project. Some things to consider:
do you have a proper separation of concerns for this middleware you're building?
what is the purpose of the proxy?
is it for debugging/observing traffic?
are you actually trying to build a Web Application Firewall?
HTTP proxy with SSL and DNS support.
I must be lacking some key concepts about proxying because I cannot grasp this. I am looking to run a simple HTTP or HTTPS proxy without interfering with SSL. Simply, a fully transparent proxy that can pass all the traffic through to the browser connected via the HTTP or HTTPS proxy, without modifying or intercepting any packets. I am not able to find any code online, or I'm not using the right keywords.
For example: in the browser you add server.someVPN.com:80 in the HTTP proxy field, and as soon as you try to visit a website, it prompts for authentication. Then it works perfectly with any domain, any security, any SSL, no further steps needed. Most VPN providers have this.
How is this possible? It even resolves DNS itself. I thought that with a transparent proxy, DNS resolution relies on the client. Preferably looking for a Node.js solution, but any language works.
Please don't propose any solutions such as SOCKS5, socket forwarding, DNS overriding or CA-based MITM. According to HTTP/1.1, which supports CONNECT, this should be easy.
Not looking to proxy specific domains; looking for an all-inclusive solution just like most VPN providers offer.
The way it works is that the browser knows it is talking to a proxy server. So, for example, if the browser wants to connect to https://www.example.com, it sends a CONNECT www.example.com:443 HTTP/1.1 request to the proxy server; the proxy server resolves www.example.com via DNS, then opens a TCP connection to www.example.com port 443 and proxies the TCP stream transparently to the client.
I don't know of any solution for Node.js. Common proxy servers include Squid, Privoxy and Apache Traffic Server.
See also: https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/CONNECT
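To make the mechanism concrete, here is a minimal sketch of a CONNECT-only proxy in Python (the question asked about Node.js, but the byte-level behaviour is the same). It handles only the CONNECT method, does no authentication, and the listening port is arbitrary.

# Minimal CONNECT tunnel: accept "CONNECT host:port", open a TCP connection
# to the target, then relay opaque (TLS) bytes in both directions.
import socket
import threading

LISTEN_PORT = 8888  # arbitrary local port for the proxy

def tunnel(src, dst):
    # Copy bytes one way until either side closes.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client):
    request = client.recv(4096).decode("latin-1")
    line = request.split("\r\n")[0]          # e.g. "CONNECT example.com:443 HTTP/1.1"
    method, target, _ = line.split(" ")
    if method != "CONNECT":
        client.close()
        return
    host, port = target.split(":")
    upstream = socket.create_connection((host, int(port)))  # the proxy resolves DNS here
    client.sendall(b"HTTP/1.1 200 Connection Established\r\n\r\n")
    threading.Thread(target=tunnel, args=(client, upstream), daemon=True).start()
    threading.Thread(target=tunnel, args=(upstream, client), daemon=True).start()

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", LISTEN_PORT))
    server.listen(50)
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()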
Found the solution right after I asked...
This module works perfectly https://github.com/mpangrazzi/harrier
Does exactly what I was asking for.
We often find fields like Address and Port in web browser proxy settings. I know that when we use a proxy to visit a page, the web browser requests the web page from the proxy server, but what I want to know is how the whole mechanism works. I have observed that many ISPs allow access only to a single IP (of their website) after we have exhausted our free data usage. But when we enter the site we want to browse in the proxy URL field and then type in the allowed IP, the site gets loaded. How does this work?
In general, your browser simply connects to the proxy address & port instead of whatever IP address the DNS name resolved to. It then makes the web request as per normal.
The web proxy reads the headers, uses the "Host" header of HTTP/1.1 to determine where the request is supposed to go, and then makes that request itself relaying all remaining data in both directions.
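As a small illustration of the client side of this, the snippet below (Python, with proxy.example.com:8080 as a placeholder for a real proxy) opens its TCP connection to the proxy instead of to the target site; for HTTPS URLs the library issues a CONNECT request through the same proxy.

# Sketch: route requests through a forward proxy instead of connecting directly.
import urllib.request

proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",  # HTTPS is tunnelled via CONNECT
})
opener = urllib.request.build_opener(proxy)
with opener.open("http://example.com/") as resp:
    print(resp.status, resp.headers.get("Content-Type"))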
Proxies will typically also do caching so if another person requests the same page from that proxy, it can just return the previous result. (This is simplified -- caching is a complex topic.)
Since the proxy is in complete control of the connection, it can choose to route the request elsewhere, scrape request and reply data, inject other things (like ads), or block you altogether. Use SSL to protect against this.
Some web proxies are "transparent". They reside on a gateway through which all IP traffic must pass and use the machine's networking stack to redirect outgoing connections to port 80 to a local port instead. It then behaves the same as though a proxy was defined in the browser.
Other proxies, like SOCKS, have a dedicated protocol that allows non-HTTP requests to be made as well.
There are two types of HTTP proxies: reverse proxies and forward proxies.
The web browser uses a forward proxy: basically, it sends all HTTP traffic through the proxy, and the proxy takes this traffic out to the internet. Every HTTP packet that comes out of your computer is sent to the proxy before going to the target site.
The ISP blocking does not work when you use a proxy because every packet that comes out of your machine is addressed to the proxy and not to the target site. The proxy could be getting its internet access through another ISP that has no such blocks.