How to prevent an HTTPS man-in-the-middle attack from the server side?

In the HTTPS security model, the weakest part is the list of trusted CAs in the browser. There are many ways someone could inject an additional CA into that list so that users end up trusting the wrong party.
For example, on a public computer or a PC in your company, the administrator could force you to trust a CA he issued himself and pair it with an HTTPS proxy server that relays HTTPS connections. As a result, they are able to spy on your messages, logins, and passwords, even while the browser tells you that you are on a trusted SSL connection.
In this case, what can a web application developer do to protect users and the system?

As a web application developer there is very little you can do about this.
This issue needs to be dealt with further down the stack.
If someone halfway around the world wants to:
a. Put a false root CA on someone's computer
b. Issue a cert for your domain under that CA
c. Point someone's local DNS entry for your domain to a different IP
d. Impersonate your site
In none of the above steps is your application involved or consulted, so this is where good network administration and security are important.
Aside from that, maybe there's a legitimate reason for someone to do just this locally on their personal network. Who am I to stop them?
This is essentially what corporate web proxy filters do and they are within their rights to do it.
As far as stopping someone malicious from taking the above steps, that's something that falls to the administrators of your customers' machines.

Theoretically speaking, if the user's terminal is owned by an adversary, you've already lost and there's nothing you can do about it -- if push comes to shove they can filter out your countermeasures or even scrape and spoof the entire site.
In practice, you can do things to make the adversary's job harder, but it's an arms race. You'll likely have to use all the same sorts of countermeasures that malicious software uses against scanners -- because from the adversary's point of view your site is behaving maliciously by trying to prevent itself from being overridden! -- and know that anything you do can and will be rapidly countered if your adversary cares enough.
You could, for example, open sockets or use XmlHttpRequest from JavaScript or applets, but you can't stop your adversary from updating their filters to strip out anything you add.
You might get more mileage by emitting polymorphic output or using other anti-reverse-engineering techniques, so it appears that no two hits to the site produce similar code/resources sent to the browser. It's an inordinate amount of work but gives your adversary a puzzle to chew on if they want to play man in the middle.

Related

Securing communication on a portable intranet (changing IP addresses)

I have the following scenario:
A network will be set up on a Windows infrastructure
A website will be put on that network - It is not given a domain name and is not available on the internet. It will be addressed only via an internally recognised IP address.
A piece of software within that network will communicate with the website
(we want to avoid the 'Could not establish trust relationship' issue found with self-signed certificates, without reducing security as, I believe, the accepted answer does).
The website will also be viewed on tablets and PCs.
After a few days, the service will be put on a different network (with different IPs).
It will be installed on many PCs/networks.
I want to secure this via SSL, but it seems tricky following the 2015 update that disallowed issuing certificates for IP addresses.
This post suggests going via a public IP, but the solution may be completely offline in an area without internet access.
I've spent hours researching, but seem to be missing something.
How should this be done please?
I would set up a DNS server with an app.local domain that gets issued the certificate.
Even if you serve up the intermediate certificates in the TLS handshake (which you should ALWAYS do rather than rely on AIA), verifying the chain becomes problematic without internet access, as browsers won't be able to reach the CRL (Certificate Revocation List) URL. That is, of course, unless we're talking about your own CA (living on the same network) that issues the site certificate.
If everything you describe runs in a well-guarded sandbox then you probably don't need the TLS layer at all; ask yourself WHO the attacker is and WHAT you are protecting.
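If you do go the private-CA route suggested above, a minimal OpenSSL sketch might look like the following (file names, validity periods, and the "Internal Root CA" name are illustrative; app.local matches the suggestion above). The resulting ca.crt is what you would import into the trust store of every PC and tablet on the network:

# 1. Create a private root CA (key plus self-signed CA certificate, ~10 years)
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes -keyout ca.key -out ca.crt -subj "/CN=Internal Root CA"
# 2. Create a key and certificate signing request for the internal hostname
openssl req -newkey rsa:2048 -nodes -keyout app.key -out app.csr -subj "/CN=app.local"
# 3. Sign the request with the CA, adding the SAN that browsers require
printf "subjectAltName=DNS:app.local" > san.cnf
openssl x509 -req -in app.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 825 -sha256 -out app.crt -extfile san.cnf

Since the CA lives on the same network, the whole chain can be verified without any internet access.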

Edge AJAX calls fail to a domain with SSL pointing to localhost

We have a product which relies on a thin client installed on users' machines. We make an AJAX GET request to a domain pointing to localhost which has a real SSL certificate. This fails in Edge but works in every other browser, including IE11. Note that the same works if there is no SSL involved. It also works on Windows 10 Home edition.
Adding a datatype, content-type, or request method does not resolve this. The only way to fix this seems to be running the following command.
CheckNetIsolation LoopbackExempt -a -n="Microsoft.MicrosoftEdge_8wekyb3d8bbwe"
If this is expected behavior, can someone explain why Microsoft would block this on an Enterprise version while it works on Home edition?
Microsoft Edge, and Windows 10 apps in general, use AppContainer Isolation:
Isolating the application from network resources beyond those specifically allocated, AppContainer prevents the application from 'escaping' its environment and maliciously exploiting network resources. Granular access can be granted for Internet access, Intranet access, and acting as a server.
Your thin client is running on Windows 10 Enterprise Edge against an intranet SSL service (localhost), so access is restricted by default by this mechanism. With the command
CheckNetIsolation LoopbackExempt -a -n="Microsoft.MicrosoftEdge_8wekyb3d8bbwe"
you are exempting MS Edge from network isolation on the loopback network adapter (localhost), so your app client (and any other locally sourced app) can talk to any localhost service without restriction.
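For completeness, the same tool can list and remove exemptions, so the change is easy to audit and undo (run from an elevated command prompt):

:: Show all current loopback exemptions
CheckNetIsolation LoopbackExempt -s
:: Remove the Edge exemption once it is no longer needed
CheckNetIsolation LoopbackExempt -d -n="Microsoft.MicrosoftEdge_8wekyb3d8bbwe"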
This fails in Edge but works in every other browser, including IE11.
They clearly wanted to improve the default security policy of previous versions. It's never too late, MS :) There is actually an Enhanced Protected Mode (EPM) that could prevent your app from running on IE too. Chrome has its Google Chrome Sandbox that can also be tuned like this. Safari and Firefox also have sand-boxing features although I am not familiar with their particularities.
Note that the same works if there is no SSL involved.
Typically, if you are using SSL it is because you are dealing with sensitive data and/or a critical service. If you are not, it is OK to be more lax. Again, it's just a matter of security policy.
It also works on Windows 10 Home edition. If this is expected behavior, can someone explain why Microsoft would block this on an Enterprise version while it works on Home edition?
Enterprise versions of any product are known to be more restrictive, since their target users are more security-conscious (IT people typically don't want to expose their company's intranet payroll DB service to external attackers, and things like that). Also, in this case the default behavior can easily be defined/altered by experts in the IT department (check out domain security policies), so it's better to leave the default settings in "paranoid" mode and let the experts tweak them according to the company's needs.
Note there are other mechanisms at work when you are running a thin client in the browser that make this kind of protection redundant (the same-origin policy, XSS protection, and so on). Nevertheless, one can never be too safe: there are ways to work around those defenses, such as Self-XSS, that require isolation between the browser and the local network to avoid compromising the system. In the end, less exposed surface means fewer attack vectors, so isolation is good if you can afford it :)

SSL without a certificate

I am developing a backend section for a company where they need as much security as possible, because they will put sensitive information in it.
They asked me to add SSL, which I added (the website is coded in CodeIgniter), but I don't know if an SSL certificate is really needed.
Bearing in mind that this website will only be accessible from a set of two different IPs (the two offices they have), I don't think getting a certificate is needed. Am I right?
Edit 24 Feb:
The data has the information of projects, list of clients so it is sensitive.
So I think I will go with a self-signed SSL certificate.
Thank you all
There are a few other issues you should keep in mind, in addition to those raised in previous answers:
Will the users of this site be connecting and transferring this sensitive information over a wireless connection at any point in time? If this is the case, then yes, you need SSL.
HTTPS is not such a burden on server resources as it once was. Especially with a site only being used by a limited number of users at defined locations, you should certainly be able to provide for the maximum number of users.
If this is a private site, and cost is an issue, go with a self-signed certificate. The OpenSSL toolkit is your best bet for this; numerous guides for setting up self-signed certificates with OpenSSL are available (see the sketch after this answer).
Are there legal issues involved with this sensitive data? If you are transmitting customer information in a client database with phone numbers, postal addresses, email addresses, login information - or even more seriously, credit card numbers - then you need SSL. Ask yourself if you would trust a company who transmitted this same information of yours without SSL.
If the client asked for it and is paying for it, this isn't an undue burden on you as a developer, and as a developer you really never want to be in the position where you're arguing for less security. If there's a problem later, it comes back to bite you. Cover your rear end.
Combine this with IP-restricted access. If you can, do that at the Apache configuration level. If not, then do it with a .htaccess. Why at the Apache level? Again, that covers your rear end in case you forget to put the restrictions in a .htaccess, or in case someone else comes in and removes them by accident.
If there's even a question about it, use SSL.
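As referenced above, a minimal self-signed certificate sketch with the OpenSSL toolkit; the hostname backend.example.com and the one-year validity are placeholders, and the -addext flag assumes OpenSSL 1.1.1 or later:

# New 2048-bit key plus self-signed certificate, valid for one year
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout server.key -out server.crt -subj "/CN=backend.example.com" -addext "subjectAltName=DNS:backend.example.com"

Point your web server at server.key and server.crt, and have the two offices trust server.crt to silence the browser warning.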
An SSL certificate is needed because somebody could read the request content on the way to your server. You can't serve the page over HTTPS without a certificate (check whether you have the https protocol in the browser's address bar).
During development you can generate an SSL certificate yourself (use OpenSSL); there is no need to buy a valid one. There will be a warning in the browser that the certificate is not signed by any authority, but I don't think that is a big problem.
If you need to protect client-server communication (HTTP request/response headers and data) from being overheard, then you need to install SSL on your server and use the https protocol. This is useful when you want to hide, e.g., user credentials.
If it is just about allowing or denying access for the mentioned IP addresses, then it is enough to limit it in the HTTP server configuration (usually .htaccess).
Please keep in mind that HTTPS requests require more server/client resources (CPU), so it should not be used if not necessary.
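As a sketch of that last point, an .htaccess limiting the site to the two office IPs could look like this (the addresses are placeholders; the first form is Apache 2.4 syntax, the commented lines are the older 2.2 equivalent):

# Apache 2.4: allow only the two office addresses
Require ip 203.0.113.10 198.51.100.20
# Apache 2.2 equivalent:
# Order deny,allow
# Deny from all
# Allow from 203.0.113.10 198.51.100.20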

What is the best way to restrict access to a development website?

I have a site I am working on that I would like to display only to a few others for now. Is there anything wrong with setting up Windows usernames and using Windows auth to prompt the user before getting into the development site?
There are several ways, with varying degrees of security:
Don't put it on the internet - put it on a private network, and use a VPN to access it
Restrict access with HTTP authentication (as you suggest). The downside is that this can interfere with the actual site if it uses HTTP auth, or some other type of authentication, as part of the application.
Restrict access based on remote IP. Just allow the IPs of users you want to be able to access it.
Use a custom hostname. Have it on a public IP, but don't publish the hostname. This means making an entry in your HOSTS file (or configuring your own DNS server, if possible) so that "blah.mysite.com" goes to the site, but the name is not resolvable on the public internet. Obviously you'd only make the site accessible via that hostname (and not the IP).
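For the last option, the unpublished hostname is just a local hosts-file entry on each allowed machine; a sketch, assuming the server's address is 203.0.113.25 (a placeholder):

# /etc/hosts on Linux/macOS, C:\Windows\System32\drivers\etc\hosts on Windows
203.0.113.25    blah.mysite.com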
That depends on what you mean by "best": for example, do you mean "easiest" or "most secure"?
The best way might be to have it on a private network, which you attach to via VPN.
I do this frequently. I use Hamachi to let them access my dev box so they can see what's going on. They have access to it when they want, and/or when I allow. When they are done I evict them from my Hamachi network and change the password.
Hamachi is a software VPN, AKA LogMeIn Hamachi. They have a free version which works quite well.
Of course, there's nothing wrong with Windows auth. There are a couple of (not too big) drawbacks, though:
your website auth scheme is different from the final product.
you are giving them more access to the box than they really need.
automatically reimaging the machine and redeploying the website is more complex, as you have to automate the Windows account creation.
I would suggest two alternatives:
do whatever auth you plan on doing in the final website and make sure all pages require auth
do token-cookie-based auth: send them a link that sets a particular token in a cookie, and in your website code add a quick check for that token before you even go to the regular user auth (a sketch follows below)
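One hypothetical way to sketch the token-cookie idea without touching application code is an Apache mod_rewrite fragment in the dev site's .htaccess; the token value s3cr3t and the host dev.mysite.com are placeholders:

RewriteEngine On
# The link you send out (http://dev.mysite.com/?preview=s3cr3t) sets the cookie
RewriteCond %{QUERY_STRING} preview=s3cr3t
RewriteRule ^ - [CO=dev_token:s3cr3t:dev.mysite.com]
# Refuse anyone who has neither the cookie nor the secret link
RewriteCond %{HTTP_COOKIE} !dev_token=s3cr3t
RewriteCond %{QUERY_STRING} !preview=s3cr3t
RewriteRule ^ - [F]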
If you aren't married to IIS, and you need developers to be able to change the content, I would consider Apache + SSL + WebDav (aka Web Folders). This will allow you to offer a secure sandbox where developers can change and view the content without having user accounts on the server.
This setup requires some knowledge of Apache so it only makes sense if you are already using Apache or if you frequently need to provide outsiders access to your web server.
First useful link I found on the topic: http://pascal.thivent.name/2007/08/howto-setup-apache-224-webdav-under.html
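A hedged sketch of such a setup (mount point, realm, and paths are illustrative; it assumes the dav, dav_fs, ssl, and auth_basic modules are loaded and the lock directory exists):

# Lock database required by mod_dav
DavLockDB /var/lib/dav/lockdb
<Location /dev-site>
    Dav On
    # Refuse plain-HTTP access to the sandbox
    SSLRequireSSL
    AuthType Basic
    AuthName "Development sandbox"
    # Created with: htpasswd -c /etc/apache2/webdav.passwd developer
    AuthUserFile /etc/apache2/webdav.passwd
    Require valid-user
</Location>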
Why don't you just set up an NTFS user and assign it to the website (and remove anonymous access)?

Does disabling anonymous access in IIS create a security risk?

If I uncheck the "Enable anonymous access" checkbox in IIS so as to password-protect a site, i.e. by restricting read access to designated Windows accounts, does the resulting password dialogue, which is then presented to all anonymous HTTP requests, represent a security risk in that it (seemingly) offers all and sundry an unlimited number of attempts to guess at any Windows account password?
EDIT:
Okay, not much joy with this so far, so I'm attaching a bounty. Just 50 points, sorry; I am a man of modest means. To clarify what I'm after: does disabling anonymous access in IIS offer a password-guessing opportunity to the public which did not exist previously, or is it the case that the browser's credentials dialogue can be simulated by including a username and password in an HTTP request directly, and that the response would indicate whether the combination was correct even though the page was open to anonymous users anyway? Furthermore, are incorrect password attempts submitted via HTTP subject to the same lockout policy enforced for internal logins, and if so does this represent a very easy opportunity to deliberately lock out known usernames? Alternatively, if not, is there anything that can be done to mitigate this unlimited password-guessing opportunity?
The short answer to your question is yes. Any time you give any remote access to any resource on your network, it presents a security risk. Your best bet would be to follow IIS best practices and then take some precautions of your own: rename your built-in administrator account, enforce strong password policies, and change the server header. Removing anonymous access, while a password-guessing risk, is a very manageable one when used with a proper layered security model.
When you choose an authentication method other than Anonymous, you certainly can be subject to password hacking. However, the account that is used is subject to the standard account lockout policies set in Local Security Policy and your domain's security policy.
For example, if you have a local account "FRED" and the account lockout policy is set to 5 invalid attempts within 30 minutes, then this effectively prevents account password guessing, at the risk of a denial of service attack. However, setting the reset window to a value (15 minutes?) effectively limits the DOS.
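To illustrate, on a standalone IIS box that local policy can be inspected and set with the built-in net accounts command (on a domain you would normally use Group Policy instead); the values mirror the example above, noting that Windows requires the lockout duration to be at least as long as the reset window:

:: Show the current lockout policy
net accounts
:: Lock after 5 bad attempts; counter resets after 15 minutes; 30-minute lockout
net accounts /lockoutthreshold:5 /lockoutwindow:15 /lockoutduration:30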
Basic Authentication is not recommended for a non-SSL connection, since the password will travel in plain text.
Digest Authentication requires passwords to be stored on the server using a reversible encryption, so while better than Basic, Digest has its flaws.
Windows Integrated Authentication includes NTLM and Kerberos.
The IIS server should be configured via Group Policy or Local Security settings to disable LM authentication (Network security: LAN Manager authentication level set to "Send NTLMv2 response only" or higher; preferred is "Send NTLMv2 response only\refuse LM & NTLM") to prevent trivial LM hash cracking and NTLM man-in-the-middle proxy attacks.
Kerberos can be used; however, it only works if both machines are members of the same domain and the DCs can be reached. Since this doesn't typically happen over the internet, you can ignore Kerberos.
So the end result is, yes, disabling anonymous does open you up for password cracking attempts and DOS attacks, but these can be prevented and mitigated.
You should read about the different authentication mechanisms available: Basic, Digest, NTLM, certificates, etc. The IETF compiled a document that discusses the pros and cons of some of these (NTLM is a proprietary MS protocol).
Bottom line is: You are not done with just disabling anonymous access. You definitely have to consider carefully what the attack scenarios are, what the potential damage might be, what user may be willing to accept and so on.
If you introduce authorization, you need to address the risk of credentials being compromised. You should also consider whether what you actually want is confidential transport of the content: in that case you will have to introduce transport layer security like SSL.
I am by no means a hosting guru, and I imagine there are ways and means of doing this, but my personal opinion is that what you are talking about doing is definitely an unnecessary security risk. If this site is to be available on the internet, i.e. it will have public access, then you probably don't want to disable anonymous access in IIS.
Please remember that the point of being able to configure anonymous access for a site in IIS is so that you can create a user which has specific permission to read the relevant files for a particular site. What we are talking about here is file access on a physical disc. For one thing, a public web server should be in a DMZ and not part of your company's domain, so users should not be able to log in with their domain credentials anyway.
The only reason I could imagine for switching off anonymous access and forcing users to input their Windows credentials is a site which will only be used internally, and even then I would probably not choose to restrict access in this manner.
If you want to restrict access to content on a public website, then you would probably be better off writing something which handles authentication as part of the site itself, or a service which the site can consume. Then if someone were to obtain user credentials, all they would be able to do is gain access to the site, and there is no potential for a breach of your internal network by any means.
There is a reason why developers spend a lot of time writing user management solutions. You will find plenty of advice on how to write something like this and plenty of libraries that will do most of the work for you.
