One of my clients uses McAfee ScanAlert (i.e., HackerSafe). It basically hits the site with about 1500 bad requests a day looking for security holes. Since it demonstrates malicious behavior it is tempting to just block it after a couple bad requests, but maybe I should let it exercise the UI. Is it a true test if I don't let it finish?
Isn't it a security flaw of the site to let hackers throw everything in their arsenal against the site?
Well, you should focus on closing holes, rather than trying to thwart scanners (which is a futile battle). Consider running such tests yourself.
It's good that you block bad requests after a couple of attempts, but in this case you should let it continue.
If you block it after 5 bad requests, you won't know whether the 6th request would have crashed your site.
EDIT:
I meant that an attacker might send just one request, similar to one of the 1,495 you never tested because you blocked the scanner, and that single request might crash your site.
Preventing security breaches requires different strategies for different attacks. For instance, it would not be unusual to block traffic from certain sources during a denial of service attack. If a user fails to provide proper credentials more than 3 times the IP address is blocked or the account is locked.
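That kind of lockout logic is only a few lines. Here is a minimal sketch, assuming an in-memory store and a hypothetical authenticate callback; a real deployment would persist the counters in a shared store and would usually lock the account as well as the IP:

    import time

    MAX_FAILURES = 3
    LOCKOUT_SECONDS = 15 * 60

    # ip -> (failure_count, first_failure_time); a real deployment would use a
    # shared store such as Redis rather than a per-process dict.
    failed_logins = {}

    def is_locked_out(ip):
        count, first_seen = failed_logins.get(ip, (0, 0.0))
        if time.time() - first_seen >= LOCKOUT_SECONDS:
            failed_logins.pop(ip, None)        # window expired, forget old failures
            return False
        return count >= MAX_FAILURES

    def record_failure(ip):
        count, first_seen = failed_logins.get(ip, (0, time.time()))
        failed_logins[ip] = (count + 1, first_seen)

    def attempt_login(ip, username, password, authenticate):
        """'authenticate' is a hypothetical callable returning True for valid credentials."""
        if is_locked_out(ip):
            return "locked"                     # too many recent failures from this IP
        if authenticate(username, password):
            failed_logins.pop(ip, None)         # reset the counter on success
            return "ok"
        record_failure(ip)
        return "denied"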
When ScanAlert issues hundreds of requests, which may include SQL injection attempts, to name just one, it certainly matches what the site code should consider "malicious behavior".
In fact, just putting UrlScan or eEye SecureIIS in place may deny many such requests, but is that a true test of the site code? It's the job of the site code to detect malicious users/requests and deny them. At what layer is the test valid?
ScanAlert presents two distinct aspects: the sheer number of malformed requests, and the variety of what each individual request tests. It seems like the two pieces of advice that emerge are as follows:
The site code should not try to detect malicious traffic from a particular source and block that traffic, because that is a futile effort.
If you do attempt such a futile effort, at least make an exception for requests from ScanAlert in order to test lower layers.
If it's not hurting the performance of the site, I think it's a good thing. If you had 1000 clients of the same site all doing that, yeah, block it.
But if the site was built for that client, I think it's fair enough they do that.
Related
My MVC-5 website gets a lot of fake registrations, or real ones followed by attempts at privilege escalation. I can write a request filter - but how do I block web requests from one particular country? Are there publicly available lists of IPs that I can block ... or what? How do people approach solving this issue?
Depending on the country you might have a little difficulty blocking everything, but there are lists out there (such as the one maintained by Nirsoft). Usually it's better to block the specific IP addresses the bad behavior is originating from, using software that watches for the behavior and dynamically blocks it. That way you're covered regardless of where it originates, and you avoid the real pain of managing IP address blacklists by hand. I've used IPTables on Linux for this before and it works like a charm.
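The dynamic-blocking approach can be as simple as counting bad requests per source IP and appending an iptables DROP rule once a threshold is crossed. A rough sketch of the idea (the threshold is arbitrary, and in practice a tool like fail2ban already automates this pattern):

    import subprocess
    from collections import Counter

    BAD_REQUEST_THRESHOLD = 20   # bad requests from one IP before we block it

    bad_request_counts = Counter()
    blocked_ips = set()

    def block_ip(ip):
        # Append a DROP rule for this source address; requires root.
        subprocess.run(["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"], check=True)
        blocked_ips.add(ip)

    def note_bad_request(ip):
        """Call this from whatever detects the bad behavior (log parser, WAF hook, ...)."""
        bad_request_counts[ip] += 1
        if bad_request_counts[ip] >= BAD_REQUEST_THRESHOLD and ip not in blocked_ips:
            block_ip(ip)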
I want to keep no-good scrapers (a.k.a. bad bots that by definition ignore robots.txt) that steal content and consume bandwidth off my site. At the same time, I do not want to interfere with the user experience of legitimate human users, or stop well-behaved bots (such as Googlebot) from indexing the site.
The standard method for dealing with this has already been described here: Tactics for dealing with misbehaving robots. However, the solution presented and upvoted in that thread is not what I am looking for.
Some bad bots connect through tor or botnets, which means that their IP address is ephemeral and may well belong to a human being using a compromised computer.
I've therefore been thinking about how to improve the industry-standard method by letting the "false positives" (i.e. humans) that have their IP blacklisted get access to my website again. One idea is to stop blocking these IPs outright and instead ask them to pass a CAPTCHA before being allowed access. While I consider CAPTCHAs a PITA for legitimate users, vetting suspected bad bots with a CAPTCHA seems better than blocking access for these IPs completely. By tracking the session of users who complete the CAPTCHA, I should be able to determine whether they are human (and should have their IP removed from the blacklist), or robots smart enough to solve a CAPTCHA, who go on an even blacker list.
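Roughly, the flow I have in mind looks like this Flask-style sketch, where the blacklist, the example IP, and the verify_captcha helper are placeholders for whatever store and CAPTCHA provider would actually be used:

    from flask import Flask, request, session, redirect

    app = Flask(__name__)
    app.secret_key = "change-me"          # needed for session cookies

    # Placeholder in-memory blacklist and CAPTCHA check; a real setup would use a
    # shared datastore and a CAPTCHA provider's verification API.
    BLACKLISTED_IPS = {"203.0.113.7"}

    def verify_captcha(form):
        return form.get("captcha_answer") == "expected"   # placeholder check

    @app.before_request
    def vet_blacklisted_ips():
        if request.path == "/captcha":
            return None                                    # always allow the challenge page
        if request.remote_addr in BLACKLISTED_IPS and not session.get("captcha_passed"):
            return redirect("/captcha")                    # challenge instead of a hard block

    @app.route("/captcha", methods=["GET", "POST"])
    def captcha():
        if request.method == "POST" and verify_captcha(request.form):
            session["captcha_passed"] = True
            BLACKLISTED_IPS.discard(request.remote_addr)   # un-blacklist the vetted visitor
            return redirect("/")
        return '<form method="post"><input name="captcha_answer"><button>Submit</button></form>'

    @app.route("/")
    def index():
        return "Welcome"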
However, before I go ahead and implement this idea, I want to ask the good people here if they foresee any problems or weaknesses (I am already aware that some CAPTCHAs have been broken - but I think that I shall be able to handle that).
The question, I believe, is whether or not there are foreseeable problems with CAPTCHA. Before I dive into that, I also want to address how you plan on catching bots in order to challenge them with a CAPTCHA. Tor and proxy nodes change regularly, so that IP list will need to be constantly updated. You can use MaxMind for a decent list of proxy addresses as your baseline. You can also find services that publish the addresses of all the Tor nodes. But not all bad bots come from those two vectors, so you need to find other ways of catching them. If you add in rate limiting and spam lists, you should catch over 50% of the bad bots. Other tactics really have to be custom-built around your site.
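Rate limiting in particular is cheap to bolt on. A minimal sliding-window version looks something like this, where the window and threshold are placeholders you would tune per site, and anything caught here gets challenged rather than trusted:

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    MAX_REQUESTS_PER_WINDOW = 120      # placeholder threshold; tune per site

    recent_requests = defaultdict(deque)    # ip -> timestamps of requests in the window

    def over_rate_limit(ip):
        now = time.time()
        timestamps = recent_requests[ip]
        timestamps.append(now)
        while timestamps and now - timestamps[0] > WINDOW_SECONDS:
            timestamps.popleft()            # discard requests that fell out of the window
        return len(timestamps) > MAX_REQUESTS_PER_WINDOW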
Now to the problems with CAPTCHAs. First, there are services like http://deathbycaptcha.com/. I don't know if I need to elaborate on that one, but it kind of renders your approach useless. Many of the other ways people get around CAPTCHAs involve OCR software; the better a CAPTCHA is at beating OCR, the harder it is going to be on your users. Also, many CAPTCHA systems use client-side cookies that someone can solve once and then upload to all their bots.
The most famous, I think, is Karl Groves's list of 28 ways to beat CAPTCHA: http://www.karlgroves.com/2013/02/09/list-of-resources-breaking-captcha/
For full disclosure, I am a cofounder of Distil Networks, a SaaS solution to block bots. I often pitch our software as a more sophisticated system than simply using CAPTCHA and building it yourself, so my opinion of the effectiveness of your solution is biased.
Is it possible to create a script that runs outside of the server, or a browser add-on, that automatically fills in form values and then submits the form, ready to be parsed by the server? That way enormous numbers of fake accounts could be registered very easily. Imagine Facebook, which does not use any CAPTCHA visible to the human user: a browser add-on could perform the form submission, inserting values retrieved from a local database of fresh email addresses (since duplicate emails are rejected). Could thousands upon thousands of fake accounts be created each day across the globe?
What is the best method to prevent fake accounts? Even imagining the scenario of a center with rotating IPs and human beings registering just to choke the databases, reaching 30-50 million accounts in a year. Thanks
This is probably better on the Security.Stackexchange.com website, but...
According to the OWASP Guide to Authentication, CAPTCHAs are actually a bad thing. Not only do they not work and induce additional headaches, but in some cases (per OWASP) they are even illegal.
CAPTCHA
CAPTCHA (Completely automated Turing Tests To Tell Humans and Computers Apart) are illegal in any jurisdiction that prohibits discrimination against disabled citizens. This is essentially the entire world. Although CAPTCHAs seem useful, they are in fact, trivial to break using any of the following methods:
• Optical Character Recognition. Most common CAPTCHAs are solvable using specialist CAPTCHA breaking OCR software.
• Break a test, get free access to foo, where foo is a desirable resource.
• Pay someone to solve the CAPTCHAs. The current rate at the time of writing is $12 per 500 tests.
Therefore implementing CAPTCHAs in your software is most likely to be illegal in at least a few countries, and worse - completely ineffective.
Other methods are commonly used.
The most common, probably, is the e-mail verification process. You sign up, they send you an email, and only when you confirm it is the account activated and accessible.
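Stripped down, that flow is just an unguessable token tied to the address. A sketch, with a hypothetical send_email function standing in for whatever mailer you use:

    import secrets

    pending_confirmations = {}   # token -> email address awaiting confirmation

    def send_email(to_address, body):
        # Hypothetical stand-in for your real mailer (SMTP, a transactional API, ...).
        print(f"To {to_address}: {body}")

    def register(email):
        token = secrets.token_urlsafe(32)          # long, unguessable activation token
        pending_confirmations[token] = email
        send_email(email, f"Confirm your account: https://example.com/activate?token={token}")

    def activate(token):
        email = pending_confirmations.pop(token, None)
        if email is None:
            return False                            # unknown or already-used token
        # ...mark the account as active in your user store here...
        return True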
There are also several interesting alternatives to CAPTCHA that perform the same function, but in a manner that's (arguably, in some cases) less difficult.
A more involved option is to track form submissions from a single IP address and block obvious attacks, but that can be spoofed and bypassed.
Another technique is to use JavaScript to measure how long the user spent on the web page before submitting. Most bots will submit things almost instantly (if they even run the JavaScript at all), so checking that a second or two has elapsed since the page rendered can detect bots. But bots can be crafted to fool this as well.
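A variant that doesn't rely on the client running JavaScript at all is to sign the render time into a hidden field and check the elapsed time on the server. A rough sketch, where the two-second threshold is just a guess at a plausible human minimum:

    import hashlib
    import hmac
    import time

    SECRET_KEY = b"change-me"
    MIN_SECONDS = 2            # anything submitted faster than this is treated as a bot

    def issue_form_timestamp():
        """Embed the returned value in a hidden field when the form is rendered."""
        ts = str(int(time.time()))
        sig = hmac.new(SECRET_KEY, ts.encode(), hashlib.sha256).hexdigest()
        return f"{ts}.{sig}"

    def submitted_too_fast(field_value):
        """Check the hidden field when the form comes back."""
        try:
            ts, sig = field_value.split(".")
        except ValueError:
            return True                                     # malformed value: treat as a bot
        expected = hmac.new(SECRET_KEY, ts.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return True                                     # tampered timestamp
        return time.time() - int(ts) < MIN_SECONDS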
The Honeypot technique can also help to detect such form submissions. There's a nice example of implementation here.
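In essence the honeypot is just an extra field that CSS hides from humans but that naive bots dutifully fill in; any submission with a value in it gets rejected. A minimal check, with a hypothetical field name:

    HONEYPOT_FIELD = "website"   # hypothetical field name, hidden from humans via CSS

    def looks_like_bot(form_data):
        # Humans never see the field and leave it empty, so any value flags automation.
        return bool(form_data.get(HONEYPOT_FIELD, "").strip())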
This page also talks about a Form Token method. The Form Token is one I'd never heard of until just now in this context. It looks similar to an anti-csrf token in concept.
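As I read it, the form token works much like an anti-CSRF token: mint a one-time value per rendered form and reject any submission that doesn't echo it back. A sketch of that idea, assuming a dict-like session object:

    import secrets

    def issue_form_token(session):
        """Call when rendering the form and put the token in a hidden field."""
        token = secrets.token_urlsafe(32)
        session["form_token"] = token
        return token

    def consume_form_token(session, submitted_token):
        """Call when handling the submission; each token is valid exactly once."""
        expected = session.pop("form_token", None)
        return expected is not None and secrets.compare_digest(expected, submitted_token or "")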
All told, your best defense, as with anything security related, is a layered approach, using more than one defense. The idea is to make it more difficult than average, so that your attacker gives up and tries a different site. This won't block persistent attackers, but it will cut down on the drive-by attacks.
To answer your original question, it all depends on what preventative measures the website developer took to prevent people from automatic account creation.
Any competent developer would address this in the requirements gathering phase, and plan for it. But there are plenty of websites out there coded by incompetent developers/teams, even among big-name companies that should know better.
This is possible using simple scripts, which may or may not use a browser extension (for example scripts written in Perl, or shell scripts using wget/curl).
Most websites rely on tracking the number of requests received from a particular browser/IP before they enable CAPTCHA.
This information can be tracked on the server side with a finite expiry time, or, for clients using multiple IPs (for example users on a DHCP connection), it can be tracked using cookies.
Yes. It is very easy to automate form submissions, for instance using wget or curl. This is exactly why CAPTCHAs exist, to make it difficult to automate form submissions.
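For a sense of how little effort this takes, the Python equivalent of the wget/curl one-liner is a single POST; the URL and field names here are made up purely for illustration:

    import requests

    # Hypothetical endpoint and field names, for illustration only.
    SIGNUP_URL = "https://example.com/signup"

    response = requests.post(
        SIGNUP_URL,
        data={"email": "someone@example.com", "password": "hunter2"},
        timeout=10,
    )
    print(response.status_code)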
Verification e-mails are also helpful, although it'd be fairly straightforward to automate opening them and clicking on the verification links. They do provide protection if you're vigilant about blocking spammy e-mail domains (e.g. hotmail.com).
I have a site that works very well when everything is in HTTPS (authentication, web services etc). If I mix http and https it requires more coding (cross domain problems).
I don't seem to see many web sites that are entirely in HTTPS so I was wondering if it was a bad idea to go about it this way?
Edit: Site is to be hosted on Azure cloud where Bandwidth and CPU usage could be an issue...
EDIT 10 years later: The correct answer is now to use https only.
You lose a lot of features with https (mainly related to performance):
Proxies cannot cache pages
You cannot use a reverse proxy for performance improvement
You cannot host multiple domains on the same IP address
Obviously, the encryption consumes CPU
Maybe that's no problem for you though, it really depends on the requirements
HTTPS decreases server throughput so may be a bad idea if your hardware can't cope with it. You might find this post useful. This paper (academic) also discusses the overhead of HTTPS.
If you have HTTP requests coming from an HTTPS page, you'll force the user to confirm the loading of insecure data. Annoying on some websites I use.
This question and especially the answers are OBSOLETE. This question should be tagged: <meta name="robots" content="noindex"> so that it no longer appears in search results.
To make THIS answer relevant:
Google is now penalizing website search rankings when they fail to use TLS/https. You will ALSO be penalized in rankings for duplicate content, so be careful to serve a page EITHER as http OR https BUT NEVER BOTH (Or use accurate canonical tags!)
Google is also aggressively flagging insecure connections, which hurts conversions by frightening off would-be users.
This is in pursuit of a TLS-only web/internet, which is a GOOD thing. TLS is not just about keeping your passwords secure — it's about keeping your entire world-facing environment secure and authentic.
The "performance penalty" myth is really just based on antiquated obsolete technology. This is a comparison that shows TLS being faster than HTTP (however it should be noted that page is also a comparison of encrypted HTTP/2 HTTPS vs Plaintext HTTP/1.1).
It is fairly easy and free to implement using LetsEncrypt if you don't already have a certificate in place.
If you DO have a certificate, then batten down the hatches and use HTTPS everywhere.
TL;DR, here in 2019 it is ideal to use TLS site-wide, and advisable to use HTTP/2 as well.
</soapbox>
If you've no side effects then you are probably okay for now and might be happy not to create work where it is not needed.
However, there is little reason to encrypt all your traffic. Login credentials and other sensitive data certainly need it, but for the rest, one of the main things you would be losing out on is downstream caching: your servers, intermediate ISPs and users cannot cache HTTPS responses. This may not be completely relevant since it sounds like you are only providing services, but it entirely depends on your setup, whether there is any opportunity for caching, and whether performance is an issue at all.
It is a good idea to use all-HTTPS - or at least provide knowledgeable users with the option for all-HTTPS.
If there are certain cases where HTTPS is completely useless and in those cases you find that performance is degraded, only then would you default to or permit non-HTTPS.
I hate running into pointlessly all-https sites that handle nothing that really requires encryption. Mainly because they all seem to be 10x slower than every other site I visit. For example, most of the documentation pages on developer.mozilla.org force you to view them over https, for no reason whatsoever, and they always take a long time to load.
Our company runs a website which currently supports only http traffic.
We plan to support https traffic too as some of the customers who link to our pages want us to support https traffic.
Our website gets a moderate amount of traffic, but that is expected to increase over time.
So my question is this:
Is it a good idea to make our website https only? (redirect all http traffic to https)
Will this bring down the website's performance?
Has anyone done any sort of measurement?
PS: I am a developer who also doubles up as an Apache admin.
Yes, it will impact performance, but it's usually not too bad compared to running all the DB queries that go into the typical dynamically generated page.
Of course the real answer is: don't guess, benchmark it. Try it both ways and see the difference. You can use tools like siege and ab to simulate traffic.
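If you want a quick sanity check before reaching for siege or ab, even a few lines of Python comparing connection-setup time over plain HTTP and HTTPS will give you a feel for the handshake overhead; a rough sketch that measures latency only, not server throughput:

    import http.client
    import time

    def average_connect_time(host, use_tls, samples=20):
        conn_class = http.client.HTTPSConnection if use_tls else http.client.HTTPConnection
        total = 0.0
        for _ in range(samples):
            conn = conn_class(host, timeout=10)
            start = time.perf_counter()
            conn.connect()                  # TCP handshake, plus the TLS handshake when use_tls
            total += time.perf_counter() - start
            conn.close()
        return total / samples

    host = "example.com"                    # substitute your own site
    print("http :", average_connect_time(host, use_tls=False))
    print("https:", average_connect_time(host, use_tls=True))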
Also, I think you may have more luck with this question over at http://www.serverfault.com/
I wouldn't worry about the load on the server; unless you are serving high volumes of static content, the encryption itself won't create much of a burden, in my experience.
However, using SSL dramatically slows down web sites by creating a lot more latency in connection setup.
An encrypted session requires about* three times as much time to set up as an unencrypted one, and the exact time depends on the latency.
Even on low latency connections, it is noticeable to the end user, but on higher latency (e.g. different continents, especially Australasia where latency to America/Europe is quite high) it makes a dramatic difference and will severely impact the user experience.
There are things you can do to mitigate it, such as ensuring that keep-alives are on (But don't turn them on without understanding exactly what the impact is), minimising the number of requests and maximising the use of browser cache.
Using HTTPS also affects browser behaviour in some cases. Certain optimisations tend to get turned off for security reasons, and some web browsers don't store objects loaded over HTTPS in the disc cache, which means they'll need to get them again in a later session, further impacting the user experience.
* An estimate based on some informal measurement
Is it a good idea to make our website https only? (redirect all http traffic to https) Will this bring down the website's performance?
I'm not sure if you really mean all HTTP traffic or just page traffic. A lot of sites unnecessarily encrypt images, JavaScript and a bunch of other content that doesn't need to be hidden. This kind of content makes up most of the data transferred in a request, so if you do feel that HTTPS is taking too much out of the system, you can recommend that the programmers separate the content that needs to be secured from the content that does not.
Most web servers, unless severely underpowered, use only a small fraction of their CPU power for serving up content. Most production servers I've seen sit under 10%, even when handling some SSL traffic. I think it would be best to see where your current CPU usage is, and then do some of your own benchmarking to see how much extra CPU an SSL request uses. I would guess it isn't that much.
No, it is not a good idea to make any website HTTPS-only. Page loading might be a little slower, because your server has to perform an unnecessary redirect for every web page request. It is a better idea to serve over HTTPS only the pages that may contain secure/personal/sensitive information about users or the organization. Even when user information passes through other pages, you can use HTTPS there too. Pages whose content can be shown to the whole world can normally use HTTP. Finally, it comes down to your requirements: if all pages contain sensitive information, you may make the website HTTPS-only.