I have been working my way through setting up HTTPS using AWS. I have been attempting this with a self-signed certificate and am finding the process a bit problematic.
One question that has come up along the way is this business of server-side HTTPS. The client I am working with requests that when a user hits the server, the URL change to HTTPS. I am wondering whether "Server-Side HTTPS" means that the protocol is transparent to the end user.
Will they still see HTTP in the browser?
Thanks.
I don't know if this is an exact answer to your question; it's perhaps more a piece of advice. When using ELB, I have found it MUCH easier to install the SSL cert on the ELB and use SSL offload to forward requests from port 443 on the ELB to port 80 on the EC2 instances.
The pros of this:
There is only one place where you need to install the cert, rather than having to install it across a number of instances (or update the AMI and relaunch instances), making cert updates much easier to perform.
You get better performance on your web servers as they don't have to deal with SSL encryption.
Some cons:
The communication is not encrypted end-to-end so there is the technical (albeit unlikely) chance that the communication could be intercepted between ELB and servers. If you are dealing with something like PCI compliance this might matter to you.
If you needed to directly access one of the instances over HTTPS that would not be possible.
You may need to make sure your application is aware of the HTTPS-related headers (e.g. x-forwarded-proto) that the ELB injects into the request, if your application needs to check whether the request is over HTTPS.
There is no reason that this configuration would prevent you from redirecting incoming HTTP requests to HTTPS. You might, however, need to look at the x-forwarded-proto header to do any web-server or application-level redirects to HTTPS. The end user would have no way of knowing that the HTTPS wrapper for their request was being offloaded at the ELB.
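To illustrate that last point, here is a minimal sketch of such a redirect, assuming a Node.js/Express app running behind the ELB (the framework and route are assumptions, not part of your setup):

    // Minimal sketch: force HTTPS behind an ELB doing SSL offload (assumes Node.js/Express).
    // The ELB terminates TLS and forwards plain HTTP to port 80, adding x-forwarded-proto.
    const express = require('express');
    const app = express();

    app.use((req, res, next) => {
      // The ELB sets x-forwarded-proto to "https" when the client connected over TLS.
      if (req.headers['x-forwarded-proto'] !== 'https') {
        return res.redirect(301, 'https://' + req.headers.host + req.originalUrl);
      }
      next();
    });

    app.get('/', (req, res) => res.send('Hello over offloaded HTTPS'));
    app.listen(80);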
Related
I am trying to find (or write) a caching proxy tool that accepts all traffic from a specific container on my localhost (redirected there using iptables). I want to save this traffic and cache the responses; later, if I see that a request was already sent to a server, I want to return the cached response to the requesting party instead of sending the request to the server again.
I'm not sure exactly how big the problem I'm trying to deal with here is. I want to do it for all traffic, including HTTP, TLS and other TCP-based traffic (database connections and such). I tried mitmproxy, and it seems to deal pretty well with the HTTP and TLS part, but intercepting raw TCP traffic (for databases etc.) is not possible.
Any advice or resources I can use to accomplish that? (Not necessarily in Python.) How complex do you think this problem is? Do you think I can find a generic solution?
Thanks in advance!
We are hosting an application in the preprod azure PCF environment which exposes websocket endpoints for client devices to connect to. Is there a prescribed methodology to secure the said websocket endpoint using TLS/SSL when hosted on PCF and running behind the PCF HAProxy?
I am having trouble interpreting this information, as in, are we supposed to expose port 4443 on the server and PCF shall by default pick it up to be a secure port that ensures unsecured connections cannot be established? Or does it require some configuration to be done on HAProxy?
Is there a prescribed methodology to secure the said websocket endpoint using TLS/SSL when hosted on PCF and running behind the PCF HAProxy?
A few things:
You don't need to configure certs or anything like that when deploying your app to PCF. The platform takes care of all that. In your case, it'll likely be handled by HAProxy, but it could be some other load balancer or even Gorouter, depending on how your platform operations team installed PCF. The net result is that TLS is terminated before it hits your app, so you don't need to worry about it.
Your app should always force users to HTTPS. How you do this depends on the language/framework you're using, but most have some functionality for this.
This process generally works by checking to see if the incoming request was over HTTP or HTTPS. If it's HTTP, then you issue a redirect to the same URL, but over HTTPS. This is important for all apps, not just ones using WebSockets. Encrypt all the things.
Do keep in mind that you are behind one or more reverse proxies, so if you are doing this manually, you'll need to consider what's in x-forwarded-proto or x-forwarded-port, not just the upstream connection, which would be from Gorouter rather than your client's browser.
https://docs.pivotal.io/platform/application-service/2-7/concepts/http-routing.html#http-headers
If you are forcing your users to HTTPS (#1 above), then your users will be unable to initiate an insecure WebSocket connection to your app. Browsers like Chrome & Firefox have restrictions to prevent an insecure WebSocket connection from being made when the site was loaded over HTTPS.
You'll get a message like The operation is insecure in Firefox or Cannot connect: SecurityError: Failed to construct 'WebSocket': An insecure WebSocket connection may not be initiated from a page loaded over HTTPS. in Chrome.
I am having trouble interpreting this information, as in, are we supposed to expose port 4443 on the server and PCF shall by default pick it up to be a secure port that ensures unsecured connections cannot be established? Or does it require some configuration to be done on HAProxy?
From the application perspective, you don't do anything different. Your app is supposed to start and listen on the assigned port, i.e. what's in $PORT. This is the same for HTTP, HTTPS, WS & WSS traffic. In short, as an app developer you don't need to think about this when deploying to PCF.
The only exception would be if your platform operations team uses a load balancer that does not natively support WebSockets. In this case, to work around the issue they need to separate traffic: HTTP and HTTPS go on the traditional ports 80 and 443, and WebSocket traffic is routed on a different port. The PCF docs recommend 4443, which is where you're probably seeing that port. I can't tell you if your platform is set up this way, but if you know that you're using HAProxy, it is probably not.
https://docs.pivotal.io/platform/application-service/2-8/adminguide/supporting-websockets.html
At any rate, if you don't know, just push an app and try to initiate a secure WebSocket connection over port 443 and see if it works. If it fails, try 4443 and see if that works. Or ask your platform operations team.
For what it's worth, even if you need to use port 4443, there is no difference to your application that runs on PCF. The only difference would be in your JavaScript code that initiates the WebSocket connection. It would need to know to use port 4443 instead of the default 443.
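For what that JavaScript change could look like, here is a browser-side sketch (the host name and path are placeholders, not your actual route):

    // Browser-side sketch; myapp.example.com/socket is a placeholder for your PCF route.
    // Default case: secure WebSocket over the standard HTTPS port 443.
    var ws = new WebSocket('wss://myapp.example.com/socket');

    // If your platform routes WebSocket traffic separately on 4443, only the URL changes:
    // var ws = new WebSocket('wss://myapp.example.com:4443/socket');

    ws.onopen = function () { ws.send('hello'); };
    ws.onmessage = function (event) { console.log('received:', event.data); };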
HTTP proxy with SSL and DNS support.
I must be lacking some key concepts about proxying because I cannot grasp this. I am looking to run a simple HTTP or HTTPS proxy without interfering with SSL. Simply put, a fully transparent proxy that can pass all the traffic through to the browser connected via an HTTP or HTTPS proxy, without modifying or intercepting any packets. I am not able to find any code online, or I'm not using the right keywords.
For example: in the browser, you add server.someVPN.com:80 in the HTTP proxy field, and as soon as you try to visit a website it prompts for authentication. After that it works perfectly with any domain, any security, any SSL, no further steps needed. Most VPN providers have this.
How is this possible? It even resolves DNS itself; I thought that with a transparent proxy, DNS resolution relied on the client. Preferably looking for a Node.js solution, but any language works.
Please don't propose solutions such as SOCKS5, socket forwarding, DNS overriding or CA-based MITM. According to HTTP/1.1, which supports CONNECT, this should be easy.
Not looking to proxy specific domains; looking for an all-inclusive solution just like most VPN providers offer.
Found the answer too quickly; feel free to delete this post/question, admins.
The way it works is that the browser knows it is talking to a proxy server, so for example if the browser wants to connect to https://www.example.com, it sends CONNECT www.example.com:443 HTTP/1.1 to the proxy server. The proxy server resolves www.example.com via DNS, opens a TCP connection to www.example.com port 443, and proxies the TCP stream transparently to the client.
I don't know of any solution for Node.js. Common proxy servers include Squid, Privoxy and Apache Traffic Server.
See also: https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/CONNECT
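The mechanism itself is small enough to sketch. The following is an untested Node.js outline (no authentication or error handling, port and names arbitrary) of a proxy that handles CONNECT by opening a TCP connection and piping the bytes through:

    // Minimal sketch of a CONNECT-capable forward proxy (untested; no auth, no error handling).
    const http = require('http');
    const net = require('net');

    const proxy = http.createServer((req, res) => {
      // Plain HTTP (non-CONNECT) requests could be forwarded here; omitted for brevity.
      res.writeHead(501);
      res.end();
    });

    // Fired when a client sends "CONNECT host:port HTTP/1.1".
    proxy.on('connect', (req, clientSocket, head) => {
      const [host, port] = req.url.split(':');               // e.g. "www.example.com:443"
      const serverSocket = net.connect(parseInt(port, 10) || 443, host, () => {
        clientSocket.write('HTTP/1.1 200 Connection Established\r\n\r\n');
        serverSocket.write(head);                            // forward any bytes already buffered
        // From here on the proxy just shuffles opaque TCP, including the TLS handshake,
        // so it never needs to see or modify the encrypted traffic.
        serverSocket.pipe(clientSocket);
        clientSocket.pipe(serverSocket);
      });
    });

    proxy.listen(8080);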
Found the solution right after I asked...
This module works perfectly https://github.com/mpangrazzi/harrier
Does exactly what I was asking for.
I have an HTTPS site that needs data from an API that is only available over HTTP.
To get around the mixed-content warning, I changed it so the JS requests a path on the server, which then makes the HTTP request and returns the data.
Is this bad? If it is bad, why?
My understanding of what you're doing:
You are providing an HTTPS URL on your server which is essentially acting as a proxy, making a backend connection between your server and the API provider over HTTP.
If my understanding is correct, then what you're doing is better than just serving everything over HTTP...
You are providing security between the client and your server. Most security threats that would take advantage of a plain HTTP connection are in the local environment of the client, such as a shared local network, dodgy wifi in a cafe, school LANs, etc.
The connection between your server and the API provider is unencrypted, but apparently that is all they provide anyway. This is really the best you can do unless your API provider starts offering an HTTPS interface.
It's more secure than doing nothing and should eliminate the browser errors.
If there is a real need for stronger security (PCI compliance, HIPAA, etc.), however, you should stop using that API. That seems unlikely, though, considering the circumstantial evidence in your question.
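For illustration, a rough sketch of the relay route being discussed, assuming a Node.js/Express server and a hypothetical HTTP-only API endpoint (the URL below is made up):

    // Sketch of the relay route: the browser calls this over HTTPS, and only the
    // server-to-API hop is plain HTTP. The API URL below is a made-up placeholder.
    const express = require('express');
    const http = require('http');
    const app = express();

    app.get('/api/data', (req, res) => {
      http.get('http://legacy-api.example.com/data', (apiRes) => {
        res.status(apiRes.statusCode || 502);
        apiRes.pipe(res);                  // stream the upstream response back to the browser
      }).on('error', () => res.status(502).send('upstream API unreachable'));
    });

    app.listen(process.env.PORT || 3000);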
Our web application has a button that is supposed to send data to a server on the local network that in turn prints something on a printer.
So far this was easy: the button triggered an AJAX POST request to http://printerserver/print.php with a token; that page connected to the web application to verify the token, fetched the data to print, and then printed it.
However, we are now delivering our web application via HTTPS (and I would rather not go back to HTTP for this), and newer versions of Chrome and Firefox no longer make the request to the HTTP address; they don't even send the request to check CORS headers.
Now, what is a modern alternative to cross-protocol XHR? Do WebSockets suffer from the same problem? (A Google search did not make clear what the current state is here.) Can I use TCP sockets already? I would rather not switch to GET requests either, because the action is not idempotent and that might have practical implications with preloading and caching.
I can change the application on the printerserver in any way (so I could replace it with NodeJS or something) but I cannot change the users' browsers (to trust a self-signed certificate for printerserver for example).
You could store the print requests on the webserver in a queue and make the printserver periodically poll for requests to print.
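A rough sketch of the polling side, assuming Node.js on the print server (the queue URL, token, and sendToPrinter helper are all hypothetical placeholders):

    // Sketch: the print server polls the web app for queued print jobs over HTTPS.
    // The URL, token and sendToPrinter() below are placeholders, not an existing API.
    const https = require('https');

    function sendToPrinter(job) {
      console.log('printing', job);   // stand-in for the existing print logic
    }

    function poll() {
      https.get('https://myapp.example.com/print-queue?token=SECRET', (res) => {
        let body = '';
        res.on('data', (chunk) => { body += chunk; });
        res.on('end', () => {
          JSON.parse(body || '[]').forEach(sendToPrinter);
        });
      }).on('error', console.error);
    }

    setInterval(poll, 5000);          // check for new jobs every 5 seconds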
If that isn't possible, I would set up a tunnel or VPN between the webserver and printserver networks. That way you can make the print request from the webserver on the server side instead of from the client. If you use curl, there are flags to ignore invalid SSL certificates, etc. (I still suspect it's nicer to introduce a queue anyway, so the print requests aren't blocking.)
If the webserver can make an SSH connection to something on the network where the printserver is, you could do something like: ssh <params> user@host <some curl command here>.
A third option I can think of: if the printserver can be reached under a subdomain of the webserver's domain, like print.somedomain.com, you may be able to serve it with a certificate the browser already trusts for that domain. IIRC you would create a CSR (Certificate Signing Request) on the printserver and have a certificate issued for that name (for example a wildcard or SAN certificate covering somedomain.com). Perhaps it doesn't even need to be a subdomain per se, but maybe that's a requirement for the browser to accept it client-side.
The easiest way is to add a route to the webapp that does nothing more than relay the request to the print server. So make your AJAX POST request to https://myapp.com/print, and the server-side code powering that route makes a request to http://printerserver/print.php with the exact same POST content it received. As @dnozay said, this is commonly called a reverse proxy. Yes, to do that you'll have to reconfigure your printserver to accept (authenticated) requests from the webserver.
Alternatively, you could switch the printserver to https and directly call it from the client.
Note that an insecure (http) web-socket connection on a secure (https) page probably won't work either. And for good reason: generally it's a bad idea to mislead people by making insecure connections from what appears to them to be a secure page.
The server hosting the HTTPS webapp can reverse proxy the print server, but since the printer is local to the user, this may not work.
The print server should have the correct CORS headers
Access-Control-Allow-Origin: *
or:
Access-Control-Allow-Origin: https://www.example.com
However there are pitfalls with using the wildcard.
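As a sketch only (the existing print.php would send the equivalent headers; this is not a drop-in replacement, and the port and origin are placeholders), a small Node.js handler showing those headers and the preflight response:

    // Sketch of the CORS headers the print server needs to send, written as a Node.js
    // handler for illustration; print.php would set the equivalent headers.
    const http = require('http');

    http.createServer((req, res) => {
      // Prefer echoing the web app's exact origin over the '*' wildcard.
      res.setHeader('Access-Control-Allow-Origin', 'https://www.example.com');
      res.setHeader('Access-Control-Allow-Methods', 'POST, OPTIONS');
      res.setHeader('Access-Control-Allow-Headers', 'Content-Type');

      if (req.method === 'OPTIONS') {   // CORS preflight from the browser
        res.writeHead(204);
        return res.end();
      }

      // ... verify the token and print, as print.php does today ...
      res.end('queued');
    }).listen(8080);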
From what I understand from the question, printserver is not accessible from the web application so the reverse proxy solution won't work here.
You are restricted from making requests from the browser to the printserver by the browser's cross-origin policy.
If you wish to communicate with the printserver from an HTTPS page, you will need the printserver to expose print.php over HTTPS too.
You could create a DNS A record as a subdomain of your web application that resolves to the internal address of your printserver.
With those steps in place you should be able to update your printserver page to respond with permissive CORS headers, which the browser should then respect. I don't think the browser will even issue CORS requests across different protocol schemes (HTTPS vs. HTTP) or to internal domains without a TLD.