We are hosting an application in the preprod Azure PCF environment which exposes WebSocket endpoints for client devices to connect to. Is there a prescribed methodology to secure such a WebSocket endpoint using TLS/SSL when the app is hosted on PCF and running behind the PCF HAProxy?
I am having trouble interpreting this information: are we supposed to expose port 4443 on the server, with PCF by default treating it as a secure port that ensures unsecured connections cannot be established? Or does it require some configuration on HAProxy?
Is there a prescribed methodology to secure such a WebSocket endpoint using TLS/SSL when the app is hosted on PCF and running behind the PCF HAProxy?
A few things:
You don't need to configure certs or anything like that when deploying your app to PCF. The platform takes care of all that. In your case it'll likely be handled by HAProxy, but it could be some other load balancer or even Gorouter, depending on how your platform operations team installed PCF. The net result is that TLS is terminated before traffic hits your app, so you don't need to worry about it.
Your app should always force users to HTTPS. How you do this depends on the language/framework you're using, but most have some functionality for this.
This process generally works by checking to see if the incoming request was over HTTP or HTTPS. If it's HTTP, then you issue a redirect to the same URL, but over HTTPS. This is important for all apps, not just ones using WebSockets. Encrypt all the things.
Do keep in mind that you are behind one or more reverse proxies, so if you are doing this manually you'll need to look at what's in x-forwarded-proto or x-forwarded-port, not just the scheme of the upstream connection, which comes from Gorouter rather than your client's browser.
https://docs.pivotal.io/platform/application-service/2-7/concepts/http-routing.html#http-headers
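For example, a rough sketch of that redirect in a Python/Flask app (the framework choice and handler name are just an illustration, not something PCF requires; most frameworks have an equivalent hook):

    # Hypothetical sketch: force HTTPS behind Gorouter/HAProxy by checking
    # the x-forwarded-proto header added by the platform's reverse proxies.
    from flask import Flask, request, redirect

    app = Flask(__name__)

    @app.before_request
    def force_https():
        # The hop to the app is plain HTTP from Gorouter, so the original
        # scheme is only visible in x-forwarded-proto.
        if request.headers.get("X-Forwarded-Proto", "http") != "https":
            return redirect(request.url.replace("http://", "https://", 1), code=301)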
If you are forcing your users to HTTPS (#1 above), then your users will be unable to initiate an insecure WebSocket connection to your app. Browsers like Chrome & Firefox have restrictions to prevent an insecure WebSocket connection from being made when the site was loaded over HTTPS.
You'll get a message like "The operation is insecure" in Firefox, or "Cannot connect: SecurityError: Failed to construct 'WebSocket': An insecure WebSocket connection may not be initiated from a page loaded over HTTPS." in Chrome.
I am having trouble interpreting this information: are we supposed to expose port 4443 on the server, with PCF by default treating it as a secure port that ensures unsecured connections cannot be established? Or does it require some configuration on HAProxy?
From the application perspective, you don't do anything different. Your app is supposed to start and listen on the assigned port, i.e. what's in $PORT. This is the same for HTTP, HTTPS, WS & WSS traffic. In short, as an app developer you don't need to think about this when deploying to PCF.
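As a minimal illustration (plain Python standard library; nothing here is PCF-specific beyond reading $PORT), the app simply binds to whatever port the platform assigns:

    # Sketch: listen on the platform-assigned port from $PORT. The same
    # listener receives HTTP, HTTPS, WS and WSS traffic routed by the platform.
    import os
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    port = int(os.environ.get("PORT", 8080))  # PCF sets PORT for each app instance
    HTTPServer(("0.0.0.0", port), SimpleHTTPRequestHandler).serve_forever()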
The only exception would be if your platform operations team uses a load balancer that does not natively support WebSockets. In that case, to work around the issue they need to separate traffic: HTTP and HTTPS go on the traditional ports 80 and 443, and WebSockets are routed on a different port. The PCF docs recommend 4443, which is probably where you're seeing that port. I can't tell you if your platform is set up this way, but if you know that you're using HAProxy, it probably is not (HAProxy handles WebSockets natively).
https://docs.pivotal.io/platform/application-service/2-8/adminguide/supporting-websockets.html
At any rate, if you don't know, just push an app and try to initiate a secure WebSocket connection over port 443 and see if it works. If it fails, try 4443 and see if that works. That, or ask your platform operations team.
For what it's worth, even if you need to use port 4443 there is no difference to your application running on PCF. The only difference would be in your JavaScript code that initiates the WebSocket connection. It would need to know to use port 4443 instead of the default 443.
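In a browser that just means putting the port into the wss:// URL. The same idea from a non-browser client, sketched with the Python websocket-client package (hostname and path are made up):

    # Hypothetical sketch: the only change for a platform that routes
    # WebSockets on 4443 is the explicit port in the wss:// URL.
    from websocket import create_connection  # pip install websocket-client

    ws = create_connection("wss://myapp.example.com:4443/ws")  # 443 -> 4443
    ws.send("hello")
    print(ws.recv())
    ws.close()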
Related
We have two applications running on IBM Cloud Cloud Foundry (appA and appB).
appA accesses appB over container-to-container networking, while appB is also available externally over a Gorouter route.
The thing is that while our app exposes http-8080, all is good.
Now we have to do container-to-container networking over HTTPS.
We configured the app to expose https-8080. Port 8080 is used because https://docs.cloudfoundry.org/devguide/custom-ports.html states that:
By default, apps only receive requests on port 8080 for both HTTP and TCP routing,
and so must be configured, or hardcoded, to listen on this port
Container-to-container networking now works as expected using HTTPS.
But we are no longer able to reach appB over the external Gorouter route.
What is the best way to have it all up and running as we expect?
There isn't a good answer to this question, at least at the time I write this.
You do have a couple options though:
Manually set up HTTPS for the internal route. To do this, you would need to use the instructions for your application/server of choice to configure HTTPS. Then use whatever functionality your buildpack provides to inject this configuration into the application container. This would also require you to bundle and push TLS certs with your application. The platform isn't going to provide you with TLS certs if you take this option.
The trick to making both the internal and public routes work is that your application needs to listen on both port 8080 and the port you choose for your HTTPS traffic. As long as you continue taking HTTP traffic on port 8080, your public routes should keep working.
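A rough sketch of that dual-listener idea in Python (the second port, cert and key paths are placeholders for whatever you bundle with the app):

    # Sketch: plain HTTP on 8080 for Gorouter routes, HTTPS on a second
    # port (8443 here, arbitrary) for container-to-container traffic.
    import ssl
    import threading
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    http_server = HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)

    https_server = HTTPServer(("0.0.0.0", 8443), SimpleHTTPRequestHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="app.crt", keyfile="app.key")  # certs you pushed
    https_server.socket = ctx.wrap_socket(https_server.socket, server_side=True)

    threading.Thread(target=http_server.serve_forever, daemon=True).start()
    https_server.serve_forever()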
If you want a quick, but not ideal, solution you can use port 61001. For newer versions of Cloud Foundry, this port is used by Envoy to accept traffic to your app over HTTPS. Envoy then proxies the request to your app via HTTP over port 8080. You can use this port for your container-to-container traffic as well; however, the configured subject name on the TLS cert won't match your route.
Here's an example of what the subject name will look like.
subject: OU=organization:639f74aa-5d97-4a47-a6b3-e9c2613729d8 + OU=space:10180e2b-33b9-44ee-9f8f-da96da17ac1a + OU=app:10a4752e-be17-41f5-bfb2-d858d49165f2; CN=b7520259-6428-4a52-60d4-5f25
Because it's using this format, you would need to have your clients ignore certificate subject name match errors (not ideal as that weakens HTTPS), or perhaps create a custom hostname matcher.
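For example, a client-side sketch in Python (the internal hostname and CA path are assumptions) that still verifies the certificate chain but skips the hostname check, which is what ignoring the subject name mismatch amounts to:

    # Sketch: trust the platform CA but skip hostname verification, because the
    # Envoy cert on 61001 carries the organization/space/app subject shown above.
    import ssl
    import urllib.request

    ctx = ssl.create_default_context(cafile="/path/to/platform-ca.crt")  # placeholder
    ctx.check_hostname = False           # don't require CN/SAN to match the route
    ctx.verify_mode = ssl.CERT_REQUIRED  # but still verify the certificate chain

    resp = urllib.request.urlopen("https://appb.apps.internal:61001/", context=ctx)
    print(resp.status)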
For what it's worth, I don't think you want or need to change the port. That is typically used if your application is not flexible enough to listen on port 8080; it changes the port for inbound traffic. Since you're only using C2C networking, you don't need that option.
What you want, from what I understand, is HTTPS for C2C traffic. In that case, the public traffic doesn't matter; it can still go through Gorouter to port 8080. For your container-to-container traffic, you can pick any port you want. You just need to make sure the port you choose has a network policy set to allow that traffic (by default all C2C traffic is blocked). Once the network policy is set, you can connect directly over whatever port you designate.
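For example, something along these lines with the cf CLI (the exact flag syntax varies between cf CLI versions, so check cf add-network-policy --help):

    cf add-network-policy appA --destination-app appB --protocol tcp --port 8080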
I made a proxy server in Python 3. It listens on port 4444. It basically receives requests from clients and sends them to the server. I want to use it as a firewall for my DVWA server, so I added another piece of functionality to the proxy: before sending the request to the DVWA server, it validates the input.
But the problem is, the clients have to configure their proxy settings in the browser to use my proxy server. Is there any way to access the proxy without configuring the browser settings? Basically, I want to host the proxy server instead of the original web server, so that all traffic goes through the proxy before reaching the web server.
Thanks in advance...
You don't say whether your Python3 proxy is hosted on the same machine as the DVWA.
Assuming it is, the solution is simple: a reverse-proxy configuration. Your proxy transparently accepts requests and forwards them to your server, which then processes them and sends the responses back through the proxy to the client.
Have your proxy listen on port 80
Have the DVWA listen on a port other than 80 so it's not clashing (e.g. 8080)
Your proxy, which is now receiving requests for the IP/hostname which would otherwise go to the DVWA, then forwards them as usual.
The client/web browser is none the wiser that anything has changed. No settings need changing.
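Since your proxy is already written in Python 3, a minimal reverse-proxy sketch of that layout could look like the following (the DVWA address and the validation check are placeholders for your existing code, and binding to port 80 typically needs elevated privileges):

    # Sketch: listen on 80, validate the request, forward to DVWA on 8080.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    import urllib.request

    DVWA = "http://127.0.0.1:8080"  # DVWA moved off port 80

    class Proxy(BaseHTTPRequestHandler):
        def do_GET(self):
            if not self.is_valid(self.path):        # your firewall/validation logic
                self.send_error(403, "Blocked by proxy")
                return
            upstream = urllib.request.urlopen(DVWA + self.path)
            body = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Type", upstream.headers.get("Content-Type", "text/html"))
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def is_valid(self, path):
            return "<script" not in path.lower()    # trivial example check

    HTTPServer(("0.0.0.0", 80), Proxy).serve_forever()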
That's the best case scenario, given the information provided in your question. Unfortunately, I can't give any alternative solutions without knowing the network layout, where the machines reside, and the intent of the project. Some things to consider:
do you have a proper separation of concerns for this middleware you're building?
what is the purpose of the proxy?
is it for debugging/observing traffic?
are you actually trying to build a Web Application Firewall?
Shall I use WebSocket on non-80 ports? Does it ruin the whole purpose of using existing web/HTTP infrastructures? And I think it no longer fits the name WebSocket on non-80 ports.
If I use WebSocket over other ports, why not just use TCP directly? Or is there any special benefits in the WebSocket protocol itself?
And since the current WebSocket handshake is in the form of an HTTP UPGRADE request, does it mean I have to enable the HTTP protocol on the port so that the WebSocket handshake can be accomplished?
Shall I use WebSocket on non-80 ports? Does it ruin the whole purpose
of using existing web/HTTP infrastructures? And I think it no longer
fits the name WebSocket on non-80 ports.
You can run a webSocket server on any port that your host OS allows and that your client will be allowed to connect to.
However, there are a number of advantages to running it on port 80 (or 443).
Networking infrastructure is generally already deployed and open on port 80 for outbound connections from the places that clients live (like desktop computers, mobile devices, etc...) to the places that servers live (like data centers). So, new holes in the firewall or router configurations, etc... are usually not required in order to deploy a webSocket app on port 80. Configuration changes may be required to run on different ports. For example, many large corporate networks are very picky about what ports outbound connections can be made on and are configured only for certain standard and expected behaviors. Picking a non-standard port for a webSocket connection may not be allowed from some corporate networks. This is the BIG reason to use port 80 (maximum interoperability from private networks that have locked down configurations).
Many webSocket apps running from the browser wish to leverage existing security/login/auth infrastructure already being used on port 80 for the host web page. Using that exact same infrastructure to check authentication of a webSocket connection may be simpler if everything is on the same port.
Some server infrastructures for webSockets (such as socket.io in node.js) use a combined server infrastructure (single process, one listener) to support both HTTP requests and webSockets. This is simpler if both are on the same port.
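socket.io is a Node.js example; the same single-port idea sketched in Python with aiohttp (routes and messages are made up) looks like this:

    # Sketch: one process, one listener, serving both plain HTTP and WebSocket.
    from aiohttp import web, WSMsgType

    async def index(request):
        return web.Response(text="hello over plain HTTP")

    async def ws_handler(request):
        ws = web.WebSocketResponse()
        await ws.prepare(request)            # completes the HTTP Upgrade handshake
        async for msg in ws:
            if msg.type == WSMsgType.TEXT:
                await ws.send_str("echo: " + msg.data)
        return ws

    app = web.Application()
    app.add_routes([web.get("/", index), web.get("/ws", ws_handler)])
    web.run_app(app, port=8080)              # same port for both kinds of traffic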
If I use WebSocket over other ports, why not just use TCP directly? Or
is there any special benefits in the WebSocket protocol itself?
The webSocket protocol was originally defined to work from a browser to a server. There is no generic TCP access from a browser so if you want a persistent socket without custom browser add-ons, then a webSocket is what is offered. As compared to a plain TCP connection, the webSocket protocol offers the ability to leverage HTTP authentication and cookies, a standard way of doing app-level and end-to-end keep-alive ping/pong (TCP offers hop-level keep-alive, but not end-to-end), a built in framing protocol (you'd have to design your own packet formats in TCP) and a lot of libraries that support these higher level features. Basically, webSocket works at a higher level than TCP (using TCP under the covers) and offers more built-in features that most people find useful. For example, if using TCP, one of the first things you have to do is get or design a protocol (a means of expressing your data). This is already built-in with webSocket.
And since the current WebSocket handshake is in the form of an HTTP UPGRADE request, does it mean I have to enable the HTTP protocol on the port so that the WebSocket handshake can be accomplished?
You MUST have an HTTP server running on the port that you wish to use webSocket on because all webSocket requests start with an HTTP request. It wouldn't have to be a heavily featured HTTP server, but it does have to handle the initial HTTP request.
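For reference, that initial request is an ordinary HTTP exchange; a typical opening handshake (header values taken from the example in RFC 6455, trimmed to the relevant headers) looks like:

    GET /chat HTTP/1.1
    Host: server.example.com
    Upgrade: websocket
    Connection: Upgrade
    Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
    Sec-WebSocket-Version: 13

    HTTP/1.1 101 Switching Protocols
    Upgrade: websocket
    Connection: Upgrade
    Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=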
Yes - use 443 (i.e., the HTTPS port) instead.
There's little reason these days to use port 80 (HTTP) for anything other than a redirect to port 443 (HTTPS), as certificates (via services like LetsEncrypt) are easy and free to set up.
The only possible exceptions to this rule are local development, and non-internet facing services.
Should I use a non-standard port?
I suspect this is the intent of your question. To this, I'd argue that doing so adds an unnecessary layer of complication with no obvious benefits. It doesn't add security, and it doesn't make anything easier.
But it does mean that specific firewall exceptions need to be made to host and connect to your websocket server. This means that people accessing your services from a corporate/school/locked down environment are probably not going to be able to use it, unless they can somehow convince management that it is mandatory. I doubt there are many good reasons to exclude your userbase in this way.
But there's nothing stopping you from doing it either...
In my opinion, yes you can. 80 is the default port, but you can change it to any port you like.
The WebSocket standard hasn't been ratified yet; however, from the draft it appears that the technology is meant to be implemented in web servers. pywebsocket implements a WebSocket server which can run as a dedicated server or be loaded as an Apache plugin.
So what I am wondering is: what's the ideal use of WebSockets? Does it make any sense to implement a service using dedicated WebSocket servers, or is it better to rethink it to run on top of WebSocket-enabled web servers?
The WebSocket protocol was designed with three models in mind:
A WebSocket server running completely separately from any web server.
A WebSocket server running separately from a web server, but with traffic proxied to the websocket server from the web server (allowing websocket and HTTP traffic to co-exist on the same port)
A WebSocket server running as a plugin in the web server.
The model you pick really depends on the application you are trying to build and some other constraints that may limit your choices.
For example, if your application is going to be served from a single web server and the WebSocket connection will always be back to that same server, then it probably makes sense to just run the WebSocket server as a plugin/module in the web server.
On the other hand if you have a general WebSocket service that is usable from many different web sites (for example, you could have continuous low-latency traffic updates served from a WebSocket server), then you probably want to run the WebSocket server separate from any web server.
Basically, the tighter the integration between your WebSocket service and your web service, the more likely you will want to run them together and on the same port.
There are some constraints that may force one model or another:
If you control the server(s) but not the incoming firewall rules, then you probably have no choice but to run the WebSocket server on the same port(s) as your HTTP/HTTPS server (e.g. 80 and 443). In which case you will have to use a web server plugin or proxy to the real WebSocket server.
On the other hand, if you do not have super-user permission on the server where you are running the WebSocket server, then you will probably not be able to use ports 80 and 443 (below 1024 is generally a privileged port range) and in that case it really doesn't matter whether you run the HTTP/S and WebSocket servers on the same port or not.
If you have cookie based authentication (such as OAuth) in the web server and you would like to re-use this for the WebSocket connections then you will probably want to run them together (special case of tight integration).
A Web Socket detects the presence of a proxy server and automatically sets up a tunnel to pass through the proxy. The tunnel is established by issuing an HTTP CONNECT statement to the proxy server, which requests for the proxy server to open a TCP/IP connection to a specific host and port. Once the tunnel is set up, communication can flow unimpeded through the proxy. Since HTTP/S works in a similar fashion, secure Web Sockets over SSL can leverage the same HTTP CONNECT technique. [1]
OK, sounds useful! But, in the client implementations I've seen thus far (Go [2], Java [3]) I do not see anything related to proxy detection.
Am I missing something or are these implementations just young? I know WebSockets is extremely new and client implementations may be equally young and immature. I just want to know if I'm missing something about proxy detection and handling.
[1] http://www.kaazing.org/confluence/display/KAAZING/What+is+an+HTML+5+WebSocket
[2] http://golang.org/src/pkg/websocket/client.go
[3] http://github.com/adamac/Java-WebSocket-client/raw/master/src/com/sixfire/websocket/WebSocket.java
Let me try to explain the different success rates you may have encountered. While the HTML5 Web Socket protocol itself is unaware of proxy servers and firewalls, it features an HTTP-compatible handshake so that HTTP servers can share their default HTTP and HTTPS ports (80 and 443) with a Web Sockets gateway or server.
The Web Socket protocol defines a ws:// and wss:// prefix to indicate a WebSocket and a WebSocket Secure connection, respectively. Both schemes use an HTTP upgrade mechanism to upgrade to the Web Socket protocol. Some proxy servers are harmless and work fine with Web Sockets; others will prevent Web Sockets from working correctly, causing the connection to fail. In some cases additional proxy server configuration may be required, and certain proxy servers may need to be upgraded to support Web Sockets.
If unencrypted WebSocket traffic flows through an explicit or a transparent proxy server on its way to the WebSocket server, then, whether or not the proxy server behaves as it should, the connection is almost certainly bound to fail today (in the future, proxy servers may become WebSocket aware). Therefore, unencrypted WebSocket connections should be used only in the simplest topologies.
If an encrypted WebSocket connection is used, then the use of Transport Layer Security (TLS) in the WebSocket Secure connection ensures that an HTTP CONNECT command is issued when the browser is configured to use an explicit proxy server. This sets up a tunnel, which provides low-level end-to-end TCP communication through the HTTP proxy, between the WebSocket Secure client and the WebSocket server. In the case of transparent proxy servers, the browser is unaware of the proxy server, so no HTTP CONNECT is sent. However, since the wire traffic is encrypted, intermediate transparent proxy servers may simply allow the encrypted traffic through, so there is a much better chance that the WebSocket connection will succeed if WebSocket Secure is used. Using encryption, of course, is not free, but it often provides the highest success rate.
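For context, the tunnel set up through an explicit proxy is just a plain HTTP CONNECT exchange (host name is hypothetical); the TLS handshake and the wss:// upgrade then flow through the tunnel untouched:

    CONNECT ws.example.com:443 HTTP/1.1
    Host: ws.example.com:443

    HTTP/1.1 200 Connection Established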
One way to see it in action is to download and install the Kaazing WebSocket Gateway--a highly optimized, proxy-aware WebSocket gateway, which provides native WebSocket support as well as a full emulation of the standard for older browsers.
The answer is that these clients simply do not support proxies.
-Occam
The communication channel is already established by the time the WebSocket protocol enters the scene. The WebSocket is built on top of TCP and HTTP so you don't have to care about the things already done by these protocols, including proxies.
When a WebSocket connection is established, it always starts with an HTTP/TCP connection which is later "upgraded" during the "handshake" phase of WebSocket. At that point the tunnel is established, so the proxies are transparent; there's no need to care about them.
Regarding websocket clients and transparent proxies,
I think websocket client connections will fail most of the time for the following reasons (not tested):
If the connection is in the clear, then since the client does not know it is communicating with an HTTP proxy server, it won't send the CONNECT instruction that turns the HTTP proxy into a TCP proxy (needed by the client after the WebSocket handshake). It could work if the proxy natively supports WebSocket and handles URLs with the ws scheme differently than http.
If the connection is over SSL, the transparent proxy cannot know which server it should connect to, since it would have to decrypt the host name in the HTTPS request. It could do so by either generating a self-signed certificate on the fly (like SSLStrip does) or providing its own static certificate and decrypting the communication, but if the client validates the server certificate it will fail (see https://serverfault.com/questions/369829/setting-up-a-transparent-ssl-proxy).
You mentioned Java proxies, and to respond to that I wanted to mention that Java-Websocket now supports proxies.
You can see the information about that here: http://github.com/TooTallNate/Java-WebSocket/issues/88
websocket-client, a Python package, supports proxies, at the very least over the secure scheme wss://, as in that case the proxy need not be aware of the traffic it forwards.
https://github.com/liris/websocket-client/commit/9f4cdb9ec982bfedb9270e883adab2e028bbd8e9
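As a small illustration (proxy host, port and URL are placeholders; the parameter names below are the ones websocket-client documents, but check the version you have installed):

    # Sketch: open a wss:// connection through an explicit HTTP proxy
    # using websocket-client's proxy parameters.
    from websocket import create_connection  # pip install websocket-client

    ws = create_connection(
        "wss://echo.example.com/ws",
        http_proxy_host="proxy.example.com",  # placeholder proxy
        http_proxy_port=3128,
    )
    ws.send("ping")
    print(ws.recv())
    ws.close()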