Is there a way to delete/get rid of a Man-in-the-Browser infection?

I was browsing around reading articles about MItB and still can't find a technical way of getting rid of an MItB infection. Hence, I was wondering: is there a way to technically remove an MItB attack? If you were to click an infected link on a computer or mobile device through a browser, which triggered the MItB and infected your web browser, could you destroy the vulnerability by reinstalling the browser, whether on your phone or computer? More importantly, does MItB behave any differently on computers versus phones?

Man-in-the-browser (MItB) is a nasty attack because "traditional" security mechanisms are not very effective against it. This is a classic example of a Trojan because the "enemy" is behind your city wall (security layers). Encryption won't help because the data the attacker is accessing is already decrypted. So the attacker has the chance to inject scripts, modify transactions, collect personal data, etc., without the user's knowledge. From the user's POV, everything is fine. They won't notice anything is wrong until the damage is done.
Your idea of reinstalling the browser is unlikely to work. The Trojan can survive the reinstall because it is not part of the browser itself. It is either an extension (or "browser helper object"), malicious JavaScript, or an external program which messes with the browser's API calls.
Also, active detection and mitigation by antivirus and other anti-malware software is not very successful. AV will detect some Trojans, but the detection rates are low. Trojans are, by design, engineered to avoid detection.
One approach you will often hear mentioned is 2-factor authentication or out-of-band transaction verification. The most common is to send a code to the user's phone or e-mail. In some systems, this code will also include information about the specific transaction which is being verified. The idea here is that the phone or other communication channel will not be impacted by the Trojan, so it should be safe from interference. But honestly I don't really think this is 100% safe. You will still have users who ignore any warning signs in the message and just blindly continue typing the verification code into their browser because they are (1) ignorant, (2) in a hurry, or both. And even then, you are assuming the out-of-band communication mechanism has not been compromised. That's a big assumption. If you're wrong, then it will be completely ineffective.
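The transaction-bound variant of this idea can be sketched in a few lines. This is an illustrative Python sketch, not any bank's actual scheme; the key, field names, and 6-digit truncation are assumptions (the truncation mirrors the RFC 4226 HOTP style):

```python
import hashlib
import hmac

def transaction_code(server_key: bytes, account: str, amount_cents: int, payee: str) -> str:
    """Derive a short verification code bound to the transaction details.

    If a MItB Trojan silently rewrites the payee or the amount, the code
    delivered out-of-band (SMS/e-mail) describes a *different* transaction,
    so an attentive user comparing the details can spot the tampering.
    """
    msg = f"{account}|{amount_cents}|{payee}".encode()
    digest = hmac.new(server_key, msg, hashlib.sha256).digest()
    # Truncate to a 6-digit code, in the style of RFC 4226 OTPs (assumed).
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"
```

The out-of-band message would then show the payee and amount alongside the code, which is exactly the detail that hurried users skip, as noted above.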
Another approach is to sidestep the problem and look at the user's behavior from the server side. If you can establish a model of their "normal" behavior, then there is a reasonable chance of identifying suspicious activity. What is suspicious activity? It can be anything like a sudden increase in large transactions, changing IP address in the middle of a session, and navigating between pages in an "unnatural" way. When this type of behavior is detected, you can notify the user or take steps like locking their account or just rejecting a transaction. Of course, this will be limited to a specific service (e.g. the user's bank) and there is always a chance of false positives. It doesn't address the root of the problem, because the user's platform will still be infected.
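As a toy illustration of such server-side behavioral monitoring, here is a hedged Python sketch; the signals (mid-session IP change, transactions far above a historical average) and thresholds are simplistic placeholders, not a production fraud model:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SessionMonitor:
    """Flag a session when the client IP changes mid-session or a
    transaction is far above the user's historical average."""
    baseline_avg: float          # user's historical average transaction size
    multiplier: float = 10.0     # "sudden large transaction" threshold (arbitrary)
    ip: Optional[str] = None
    alerts: List[str] = field(default_factory=list)

    def observe(self, ip: str, amount: float) -> None:
        if self.ip is None:
            self.ip = ip         # first request fixes the session IP
        elif ip != self.ip:
            self.alerts.append(f"IP changed mid-session: {self.ip} -> {ip}")
        if amount > self.baseline_avg * self.multiplier:
            self.alerts.append(f"Unusually large transaction: {amount:.2f}")
```

A real system would feed such alerts into step-up authentication or a transaction hold rather than act on any single signal, given the false-positive risk mentioned above.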
The defense right now is not detection but prevention: stop the Trojan from getting in. The most obvious measure: don't download, open, or execute anything unless you trust the source 100%. That means the source should use end-to-end encryption and have a trustworthy SSL (TLS) certificate, preferably with extended validation (EV).
Also make sure your OS is up to date with the latest security patches. Finally, don't use browsers with known vulnerabilities, and avoid suspicious browser plugins/extensions.

Related

How do I block bad bots from my site without interfering with real users?

I want to keep no-good scrapers (a.k.a. bad bots that, by definition, ignore robots.txt) that steal content and consume bandwidth off my site. At the same time, I do not want to interfere with the user experience of legitimate human users, or stop well-behaved bots (such as Googlebot) from indexing the site.
The standard method for dealing with this has already been described here: Tactics for dealing with misbehaving robots. However, the solution presented and upvoted in that thread is not what I am looking for.
Some bad bots connect through tor or botnets, which means that their IP address is ephemeral and may well belong to a human being using a compromised computer.
I've therefore been thinking about how to improve the industry-standard method by letting the "false positives" (i.e. humans) that have their IPs blacklisted regain access to my website. One idea is to stop blocking these IPs outright, and instead ask them to pass a CAPTCHA before being allowed access. While I consider CAPTCHAs to be a PITA for legitimate users, vetting suspected bad bots with a CAPTCHA seems to be a better solution than blocking access for these IPs completely. By tracking the sessions of users who complete the CAPTCHA, I should be able to determine whether they are human (and should have their IP removed from the blacklist), or robots smart enough to solve a CAPTCHA, placing them on an even blacker list.
However, before I go ahead and implement this idea, I want to ask the good people here if they foresee any problems or weaknesses (I am already aware that some CAPTCHAs have been broken - but I think that I shall be able to handle that).
The question, I believe, is whether or not there are foreseeable problems with CAPTCHAs. Before I dive into that, I also want to address how you plan on catching bots to challenge them with a CAPTCHA. Tor and proxy nodes change regularly, so that IP list will need to be constantly updated. You can use MaxMind for a decent list of proxy addresses as your baseline. You can also find services that update the addresses of all the Tor nodes. But not all bad bots come from those two vectors, so you need to find other ways of catching bots. If you add in rate limiting and spam lists, then you should get over 50% of the bad bots. Other tactics really have to be custom-built around your site.
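The rate-limiting part is straightforward to prototype. Here is a minimal sliding-window sketch in Python; the limit and window values are arbitrary, and the MaxMind/Tor list lookups discussed above would be separate inputs:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Per-IP sliding-window counter. IPs over the limit get challenged
    with a CAPTCHA (per the approach above) rather than blocked outright."""

    def __init__(self, limit: int = 100, window: float = 60.0):
        self.limit = limit        # max requests allowed per window
        self.window = window      # window length in seconds
        self.hits = defaultdict(deque)

    def should_challenge(self, ip: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        q.append(now)
        while q and now - q[0] > self.window:  # drop hits outside the window
            q.popleft()
        return len(q) > self.limit
```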
Now to talk about problems with CAPTCHAs. First, there are services like http://deathbycaptcha.com/. I don't know if I need to elaborate on that one, but it kind of renders your approach useless. Many of the other ways people get around CAPTCHAs involve OCR software. The better the CAPTCHA is at beating OCR, the harder it is going to be on your users. Also, many CAPTCHA systems use client-side cookies that someone can solve once and then upload to all their bots.
Most famous, I think, is Karl Groves's list of 28 ways to beat CAPTCHA: http://www.karlgroves.com/2013/02/09/list-of-resources-breaking-captcha/
For full disclosure, I am a cofounder of Distil Networks, a SaaS solution to block bots. I often pitch our software as a more sophisticated system than simply using a CAPTCHA and building it yourself, so my opinion of the effectiveness of your solution is biased.

Can someone just make a post to register.php and register one billion accounts?

Is it possible to create a script that is executed outside of the server, or with a browser add-on, that automatically fills in form values and then submits the form, all ready to be parsed by the server? That way, a billion fake accounts could be registered very easily in three minutes. Imagine Facebook, which does not use any CAPTCHA visible to humans: with a browser add-on that performs the form submission and inserts values retrieved from a local database of fresh e-mail addresses (since duplicate e-mails are checked), could thousands and thousands of fake accounts be created each day across the globe?
What is the best method to prevent fake accounts? Even imagining the scenario of a center with rotating IPs and human beings registering just to choke the databases, achieving 30-50 million accounts in a year. Thanks
This is probably better on the Security.Stackexchange.com website, but...
According to the OWASP Guide to Authentication, CAPTCHAs are actually a bad thing. Not only do they not work and induce additional headaches, but in some cases (per OWASP) they are illegal.
CAPTCHA
CAPTCHAs (Completely Automated Public Turing Tests to Tell Computers and Humans Apart) are illegal in any jurisdiction that prohibits discrimination against disabled citizens. This is essentially the entire world. Although CAPTCHAs seem useful, they are in fact trivial to break using any of the following methods:
• Optical Character Recognition. Most common CAPTCHAs are solvable using specialist CAPTCHA-breaking OCR software.
• Break a test, get free access to foo, where foo is a desirable resource.
• Pay someone to solve the CAPTCHAs. The current rate at the time of writing is $12 per 500 tests.
Therefore implementing CAPTCHAs in your software is most likely to be illegal in at least a few countries, and worse - completely ineffective.
Other methods are commonly used.
The most common, probably, is the e-mail verification process. You sign up, they send you an email, and only when you confirm it is the account activated and accessible.
There are also several interesting alternatives to CAPTCHA that perform the same function, but in a manner that's (arguably, in some cases) less difficult.
More difficult may be to track form submissions from a single IP address, and block obvious attacks. But that can be spoofed and bypassed.
Another technique is to use JavaScript to time how long the user spent on the web page before submitting. Most bots will submit things almost instantly (if they even run the JavaScript at all), so checking that a second or two has elapsed since the page rendered can detect bots. But bots can be crafted to fool this as well.
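Since the client's clock and script can't be trusted, the timing check has to be enforced server-side. One hedged sketch: embed a signed render timestamp in the form and verify the elapsed time on submission (the secret, token format, and 2-second threshold are illustrative assumptions):

```python
import hashlib
import hmac
import time

SECRET = b"demo-secret"  # hypothetical server-side key; rotate in practice

def issue_token(now: float = None) -> str:
    """Embed the page-render time in a tamper-evident token placed in the form."""
    ts = str(int(time.time() if now is None else now))
    sig = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    return f"{ts}:{sig}"

def looks_human(token: str, min_seconds: float = 2.0, now: float = None) -> bool:
    """Reject submissions that arrive faster than a human could fill the form."""
    ts, sig = token.split(":")
    expected = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                  # timestamp was tampered with
    now = time.time() if now is None else now
    return now - int(ts) >= min_seconds
```

As noted above, a bot can simply wait two seconds before submitting, so this only filters the naive ones.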
The Honeypot technique can also help to detect such form submissions. There's a nice example of implementation here.
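The server side of a honeypot is about as small as countermeasures get. A sketch, assuming a decoy field named `website` that is hidden from humans with CSS:

```python
def is_bot_submission(form: dict) -> bool:
    """Honeypot check: the form contains a field humans never see, e.g.
    <input name="website" style="display:none">. Humans leave it empty;
    naive bots fill in every field they find."""
    return bool(form.get("website", "").strip())
```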
This page also talks about a Form Token method. The Form Token is one I'd never heard of until just now in this context. It looks similar to an anti-CSRF token in concept.
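A form token can be sketched like an anti-CSRF token with replay protection: the server signs a per-session nonce into the form and accepts each token exactly once. This is an assumed design, not necessarily the one from the linked page:

```python
import hashlib
import hmac
import secrets

SERVER_KEY = b"rotate-me"  # hypothetical server secret

def new_form_token(session_id: str) -> str:
    """One-time token embedded in the form, tied to the session."""
    nonce = secrets.token_hex(8)
    sig = hmac.new(SERVER_KEY, f"{session_id}:{nonce}".encode(), hashlib.sha256).hexdigest()
    return f"{nonce}:{sig}"

def check_form_token(session_id: str, token: str, used: set) -> bool:
    """Valid only for the issuing session, and only once (replay defense)."""
    nonce, sig = token.split(":")
    expected = hmac.new(SERVER_KEY, f"{session_id}:{nonce}".encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected) or nonce in used:
        return False
    used.add(nonce)
    return True
```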
All told, your best defense, as with anything security-related, is a layered approach, using more than one defense. The idea is to make it more difficult than average, so that your attacker gives up and tries a different site. This won't block persistent attackers, but it will cut down on the drive-by attacks.
To answer your original question, it all depends on what preventative measures the website developer took to prevent people from automatic account creation.
Any competent developer would address this in the requirements gathering phase, and plan for it. But there are plenty of websites out there coded by incompetent developers/teams, even among big-name companies that should know better.
This is possible using simple scripts, which may or may not use a browser extension (for example, scripts written in Perl, or shell scripts using wget/curl).
Most websites rely on tracking the number of requests received from a particular browser/IP before they enable CAPTCHA.
This information can be tracked on the server side with a finite expiry time, or in case of clients using multiple IPs (for example users on DHCP connection), this information can be tracked using cookies.
Yes. It is very easy to automate form submissions, for instance using wget or curl. This is exactly why CAPTCHAs exist, to make it difficult to automate form submissions.
Verification e-mails are also helpful, although it'd be fairly straightforward to automate opening them and clicking on the verification links. They do provide protection if you're vigilant about blocking spammy e-mail domains (e.g. hotmail.com).
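To make the wget/curl point above concrete, here is how little code an automated registration POST takes. This sketch only builds the request rather than sending it, and the URL and field names are made up for illustration:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_signup_request(url: str, email: str, password: str) -> Request:
    """Construct the same POST a browser's signup form would produce.
    Looping this with fresh e-mail addresses is all 'automated account
    creation' amounts to when no countermeasure is in place."""
    data = urlencode({"email": email, "password": password}).encode()
    headers = {"User-Agent": "Mozilla/5.0"}  # trivially spoofed, as discussed
    return Request(url, data=data, headers=headers, method="POST")
```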

Is it reasonable to protect DRM'd content client-side?

Update: this question is specifically about protecting (enciphering / obfuscating) the content client-side vs. doing it before transmission from the server. What are the pros / cons of an approach like iTunes's, in which the files aren't ciphered / obfuscated before transmission?
As I added in my note in the original question, there are contracts in place that we need to comply with (as is the case for most services that implement DRM). We push for DRM-free, and most content-provider deals go along with it, but that doesn't free us of obligations already in place.
I recently read some information regarding how iTunes / FairPlay approaches DRM, and didn't expect to see that the server actually serves the files without any protection.
The quote in this answer seems to capture the spirit of the issue.
The goal should simply be to "keep
honest people honest". If we go
further than this, only two things
happen:
We fight a battle we cannot win. Those who want to cheat will succeed.
We hurt the honest users of our product by making it more difficult to use.
I don't see any impact on the honest users here; files would be tied to the user, regardless of whether this happens client- or server-side. It does give another chance to those in 1.
An extra bit of info: client environment is adobe air, multiple content types involved (music, video, flash apps, images).
So, is it reasonable to do like iTunes's FairPlay and protect the media client-side?
Note: I think unbreakable DRM is an unsolvable problem and, as with most people looking for an answer to this, the need for it relates to it already being in a contract with content providers ... in the likes of "reasonable best effort".
I think you might be missing something here. Users hate, hate, hate, HATE DRM. That's why no media company ever gets any traction when they try to use it.
The kicker here is that the contract says "reasonable best effort", and I haven't the faintest idea of what that will mean in a court of law.
What you want to do is make your client happy with the DRM you put on. I don't know what your client thinks DRM is, can do, costs in resources, or if your client is actually aware that DRM can be really annoying. You would have to answer that. You can try to educate the client, but that could be seen as trying to explain away substandard work.
If the client is not happy, the next fallback position is to get paid without litigation, and for that to happen, the contract has to be reasonably clear. Unfortunately, "reasonable best effort" isn't clear, so you might wind up in court. You may be able to renegotiate parts of the contract in the client's favor, or you may not.
If all else fails, you hope to win the court case.
I am not a lawyer, and this is not legal advice. I do see this as more of a question of expectations and possible legal interpretation than a technical question. I don't think we can help you here. You should consult with a lawyer who specializes in this sort of thing, and I don't even know what speciality to recommend. If you're in the US, call your local Bar Association and ask for a referral.
I don't see any impact on the honest users here; files would be tied to the user, regardless of whether this happens client- or server-side. It does give another chance to those in 1.
Files being tied to the user requires some method of verifying that there is a user. What happens when your verification server goes down (or is discontinued, as Wal-Mart did)?
There is no level of DRM that doesn't affect at least some "honest users".
Data can be copied
As long as client hardware, standalone, cannot distinguish between a "good" and a "bad" copy, you will end up limiting all general copies and copy mechanisms. Most DRM companies deal with this fact by telling me how much this technology sets me free. Almost as if people would start to believe it when they hear the same thing often enough...
Code can't be protected on the client. Protecting code on the server is a largely solved problem. Protecting code on the client isn't. All current approaches come with stringent restrictions.
Impact works in subtle ways. At the very least, you have the additional cost of implementing client-side DRM (and all follow-up costs, including the horde of "DMCA"-shouting lawyer gorillas). It is hard to prove that you will offset this cost with the increased revenue.
It's not just about code and crypto. Once you implement client-side DRM, you unleash a chain of events in Marketing, Public Relations, and Legal. As long as those don't end up alienating users, you may not need to worry.
To answer the question "is it reasonable", you have to be clear when you use the word "protect" what you're trying to protect against...
For example, are you trying to prevent:
authorized users from using their downloaded content via your app under certain circumstances (e.g. rental period expiry, copied to a different computer, etc)?
authorized users from using their downloaded content via any app under certain circumstances (e.g. rental period expiry, copied to a different computer, etc)?
unauthorized users from using content received from authorized users via your app?
unauthorized users from using content received from authorized users via any app?
known users from accessing unpurchased/unauthorized content from the media library on your server via your app?
known users from accessing unpurchased/unauthorized content from the media library on your server via any app?
unknown users from accessing the media library on your server via your app?
unknown users from accessing the media library on your server via any app?
etc...
"Any app" in the above can include things like:
other player programs designed to interoperate/cooperate with your site (e.g. for flickr)
programs designed to convert content to other formats, possibly non-DRM formats
hostile programs designed to
From the article you linked, you can start to see some of the possible limitations of applying the DRM client-side...
The third, originally used in PyMusique, a Linux client for the iTunes Store, pretends to be iTunes. It requested songs from Apple's servers and then downloaded the purchased songs without locking them, as iTunes would.
The fourth, used in FairKeys, also pretends to be iTunes; it requests a user's keys from Apple's servers and then uses these keys to unlock existing purchased songs.
Neither of these approaches required breaking the DRM being applied, or even hacking any of the products involved; they could be done simply by passively observing the protocols involved, and then imitating them.
So the question becomes: are you trying to protect against these kinds of attack?
If yes, then client-applied DRM is not reasonable.
If no (for example, you're only concerned about people using your app, like Apple/iTunes does), then it might be.
(Repeat this process for every situation you can think of. If the answer is always either "client-applied DRM will protect me" or "I'm not trying to protect against this situation", then using client-applied DRM is reasonable.)
Note that for the last four of my examples, while DRM would protect against those situations as a side-effect, it's not the best place to enforce those restrictions. Those kinds of restrictions are best applied on the server in the login/authorization process.
If the server serves the content without protection, it's because the encryption is per-client.
That being said, wireshark will foil your best-laid plans.
Encryption alone is usually just as good as sending a boolean telling you if you're allowed to use the content, since the bypass is usually just changing the input/output to one encryption API call...
You want to use heavy binary obfuscation on the client side if you want the protection to hold for literally more than 5 minutes. If you use decryption on the client side, make sure the data cannot be replayed and that the only way to bypass the system is to reverse-engineer the entire binary protection scheme. Properly done, this will stop all the script kiddies.
On another note, if this is a product to be run on an operating system, don't use processor specific or operating system specific anomalies such as the Windows PEB/TEB/syscalls and processor bugs, those will only make the program even less portable than DRM already is.
Oh and to answer the question title: No. It's a waste of time and money, and will make your product not work on my hardened Linux system.

Where should you enable SSL?

My last couple of projects have involved websites that sell a product/service and require a 'checkout' process in which users put in their credit card information and such. Obviously we got SSL certificates for the security of it plus giving peace of mind to the customers. I am, however, a little clueless as to the subtleties of it, and most importantly as to which parts of the website should 'use' the certificate.
For example, I've been to websites where the moment you hit the homepage you are put in https - mostly banking sites - and then there are websites where you are only put in https when you are finally checking out. Is it overkill to make the entire website run through https if it doesn't deal with something on the level of banking? Should I only make the checkout page https? What is the performance hit on going all out?
I personally go with "SSL from go to woe".
If your user never enters a credit card number, sure, no SSL.
But there's an inherent possible security leak from the cookie replay.
User visits site and gets assigned a cookie.
User browses site and adds data to cart ( using cookie )
User proceeds to payment page using cookie.
Right here there is a problem, especially if you have to handle payment negotiation yourself.
You have to transmit information from the non-secure domain to the secure domain, and back again, with no guarantees of protection.
If you do something dumb like sharing the same cookie between unsecure and secure, you may find some browsers (rightly) will just drop the cookie completely (Safari) for the sake of security, because if somebody sniffs that cookie in the open, they can forge it and use it in the secure mode too, degrading your wonderful SSL security to 0. And if the card details ever get even temporarily stored in the session, you have a dangerous leak waiting to happen.
If you can't be certain that your software is not prone to these weaknesses, I would suggest SSL from the start, so the initial cookie is transmitted securely.
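One concrete piece of the fix described above is marking the session cookie so browsers refuse to send it over plain HTTP or expose it to page scripts. A Python sketch using the standard library (the cookie name is arbitrary; the Secure and HttpOnly attributes are standard):

```python
from http.cookies import SimpleCookie

def session_cookie(token: str) -> str:
    """Build a Set-Cookie value with the Secure flag (HTTPS-only transmission)
    and HttpOnly (invisible to JavaScript), so an ID sniffed on the HTTP side
    can't be replayed against the HTTPS side."""
    c = SimpleCookie()
    c["session"] = token
    c["session"]["secure"] = True      # only sent over HTTPS
    c["session"]["httponly"] = True    # not readable by page scripts
    c["session"]["path"] = "/"
    return c["session"].OutputString()
```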
If the site is for public usage, you should probably put the public parts on HTTP. This makes things easier and more efficient for spiders and casual users. HTTP requests are much faster to initiate than HTTPS and this is very obvious especially on sites with lots of images.
Browsers also sometimes have a different cache policy for HTTPS than HTTP.
But it's alright to put them into HTTPS as soon as they log on, or just before. At the point at which the site becomes personalised and non-anonymous, it can be HTTPS from there onwards.
It's a better idea to use HTTPS for the log-on page itself, as well as any other forms, as it gives the user the padlock before they enter their info, which makes them feel better.
I have always done it on the entire website.
I too would use HTTPS all the way. This doesn't have a big performance impact (since browsers cache the negotiated symmetric key after the first connection) and protects against sniffing.
Sniffing was once on its way out because of fully switched wired networks, where you would have to work extra hard to capture anyone else's traffic (as opposed to networks using hubs), but it's on its way back because of wireless networks, which create a broadcast medium once again and make session hijacking easy, unless the traffic is encrypted.
I think a good rule of thumb is forcing SSL anywhere where sensitive information is going to possibly be transmitted. For example: I'm a member of Wescom Credit Union. There's a section on the front page that allows me to log on to my online bank account. Therefore, the root page forces SSL.
Think of it this way: will sensitive, private information be transmitted? If yes, enable SSL. Otherwise you should be fine.
In our organization we have three classifications of applications -
Low Business Impact - no PII, clear-text storage, clear-text transmission, no access restrictions.
Medium Business Impact - non-transactional PII e.g. email address. clear-text storage, SSL from datacenter to client, clear-text in data center, limited storage access.
High Business Impact - transactional data e.g. SSN, Credit Card etc. SSL within and outside of datacenter. Encrypted & Audited Storage. Audited applications.
We use these criteria to determine partitioning of data, and which aspects of the site require SSL. Computation of SSL is either done on server or through accelerators such as Netscaler. As level of PII increases so does the complexity of the audit and threat modelling.
As you can imagine we prefer to do LBI applications.
Generally anytime you're transmitting sensitive or personal data you should be using SSL - e.g. adding an item to a basket probably doesn't need SSL, logging in with your username/password, or entering your CC details should be encrypted.
I only ever redirect my sites to SSL when it requires the user to enter sensitive information. With a shopping cart as soon as they have to fill out a page with their personal information or credit card details I redirect them to a SSL page.
For the rest of the site its probably not needed - if they are just viewing information/products on your commerce site.
SSL is pretty computationally intensive and should not be used to transmit large amounts of data if possible. Therefore it would be better to enable it at the checkout stage, where the user would be transmitting sensitive information.
There is one major downside to a full-HTTPS site, and it's not the speed (that's OK).
It will be very hard to run YouTube, "Like" boxes, etc. without the insecure-content warning.
We have been running a fully secured website and shop for two years now, and this is the biggest drawback. We managed to get YouTube to work, but "AddThis" is still a big challenge. And if they change anything in the protocol, then it could be that all our YouTube movies go blank...

How do banks remember "your computer"?

As many of you probably know, online banks nowadays have a security system whereby you are asked some personal questions before you even enter your password. Once you have answered them, you can choose for the bank to "remember this computer" so that in the future you can login by only entering your password.
How does the "remember this computer" part work? I know it cannot be cookies, because the feature still works despite the fact that I clear all of my cookies. I thought it might be by IP address, but my friend with a dynamic IP claims it works for him, too (but maybe he's wrong). He thought it was MAC address or something, but I strongly doubt that! So, is there a concept of https-only cookies that I don't clear?
Finally, the programming part of the question: how can I do something similar myself in, say, PHP?
In fact they most probably use cookies. An alternative for them would be to use "flash cookies" (officially called "Local Shared Objects"). They are similar to cookies in that they are tied to a website and have an upper size limit, but they are maintained by the flash player, so they are invisible to any browser tools.
To clear them (and test this theory), you can use the instructions provided by Adobe. Another nifty (or maybe worrying, depending on your viewpoint) feature is that the LSO storage is shared by all browsers, so using LSOs you can identify users even if they switch browsers (as long as they are logged in as the same user).
The particular bank I was interested in is Bank of America.
I have confirmed that if I only clear my cookies or my LSOs, the site does not require me to re-enter info. If, however, I clear both, I had to go through additional authentication. Thus, that appears to be the answer in my particular case!
But thank you all for the heads-up regarding other banks, and possibilities such as including the User-Agent string.
This kind of session tracking is very likely done using a combination of a cookie with a unique ID identifying your current session, and the website pairing that ID with the last IP address you used to connect to their server. That way, if the IP changes but you still have the cookie, you're identified and logged in; and if the cookie is absent but you have the same IP address as the one saved on the server, then they set your cookie to the ID paired with that IP.
Really, it's that second possibility that is tricky to get right. If the cookie is missing and you only have your IP address to show for identification, it's quite unsafe to log someone in based only on that. So servers probably store additional info about you; LSOs seem like a good choice, geo-IP too, but the User-Agent not so much, because it doesn't really say anything about you: everybody using the same version of the same browser as you has the same one.
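The cookie-plus-IP pairing described above can be sketched as follows. This is a toy illustration, not any bank's actual scheme; in practice the IP fallback would be combined with the extra signals just mentioned:

```python
import hashlib
import secrets

class DeviceMemory:
    """Remember a random device token in a cookie, paired server-side with a
    hash of the last IP seen, so either signal can help re-identify a device."""

    def __init__(self):
        self.by_token = {}  # token -> hash of last-seen IP

    @staticmethod
    def _h(ip: str) -> str:
        return hashlib.sha256(ip.encode()).hexdigest()

    def remember(self, ip: str) -> str:
        token = secrets.token_hex(16)
        self.by_token[token] = self._h(ip)
        return token

    def is_known(self, token: str, ip: str) -> bool:
        if token not in self.by_token:
            return False
        self.by_token[token] = self._h(ip)  # cookie present: refresh paired IP
        return True

    def find_by_ip(self, ip: str):
        """Cookie absent: fall back to the last-seen IP (unsafe alone, as noted)."""
        h = self._h(ip)
        for token, ip_hash in self.by_token.items():
            if ip_hash == h:
                return token
        return None
```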
As an aside, it has been mentioned above that it could work with MAC addresses. I strongly disagree! Your MAC address never reaches your bank's server, as MAC addresses are only used to identify the two sides of an Ethernet link, and to connect to your bank you traverse a chain of such links: from your computer to your home router or your ISP, then from there to the first internet router you go through, then to the second, etc. Each time a new link is used, the machines on each side provide their own MAC addresses. So your MAC address can only be known to the machines directly connected to you through a switch or hub, because anything else that routes your packets will replace your MAC with its own. Only the IP address stays the same all the way.
If MAC addresses did go all the way, it would be a privacy nightmare, as all MAC addresses are unique to a single device, hence to a single person.
This is a slightly simplified explanation because it's not the point of the question, but it seemed useful to clear what looked like a misunderstanding.
It is possible for flash files to store a small amount of data on your computer. It's also possible that the bank uses that approach to "remember" your computer, but it's risky to rely on users having (and not having disabled) flash.
My bank's site makes me re-authenticate every time a new version of Firefox is out, so there's definitely a user-agent string component in some.
It could be a combination of cookies, and ip address logging.
Edit: I have just checked my bank and cleared the cookies. Now I have to re-enter all of my info.
I think it depends on the bank. My bank does use a cookie since I lose it when I wipe cookies.
Are you using a laptop? Does it remember you, after you delete your cookies, if you access from a different WiFi network? If so, IP/physical location mapping is highly unlikely.
Based on all these posts, the conclusions that I'm reaching are (1) it depends on the bank and (2) there's probably more than one piece of data that's involved, but see (1).
MAC address is possible.
IP to physical location mapping is also a possibility.
User agents and other HTTP headers are quite unique to each machine too.
I'm thinking about those websites that prevent you from using download accelerators. There must be a way.
