Third-party script detection with ad-blockers - adblock

Conceptually, how do ad-blockers detect third-party scripts? Is it a reverse firewall, do they read all of the incoming code when a page is requested, or something else?

In the case of Adblock Plus, it checks whether the requesting host has the same base domain as the requested host by comparing the two URLs that the browser provides when a new request is made.
Source:
Adblock Plus for Firefox
Adblock Plus for Chrome/Opera/Safari
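The comparison described above can be sketched in plain JavaScript. This is a simplification for illustration only: real ad-blockers determine the base (registrable) domain using the Public Suffix List, not the naive last-two-labels heuristic used here.

```javascript
// Naive base-domain extraction: keep the last two labels of the hostname.
// (Real ad-blockers consult the Public Suffix List instead, so that e.g.
// "example.co.uk" is handled correctly.)
function baseDomain(hostname) {
  return hostname.split(".").slice(-2).join(".");
}

// A request is "third party" if the page making it and the resource it
// asks for live under different base domains.
function isThirdParty(requestingUrl, requestedUrl) {
  const a = new URL(requestingUrl).hostname;
  const b = new URL(requestedUrl).hostname;
  return baseDomain(a) !== baseDomain(b);
}

console.log(isThirdParty("https://news.example.com/page", "https://cdn.example.com/ad.js")); // false
console.log(isThirdParty("https://news.example.com/page", "https://ads.tracker.net/ad.js")); // true
```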

Related

How to validate that a certain domain is reachable from browser?

Our single-page app embeds videos from YouTube for end-users' consumption. Everything works great if the user has access to the YouTube domain and to the content of that domain's pages.
However, we frequently run into users whose access to YouTube is blocked by a web filter box on their network, such as https://us.smoothwall.com/web-filtering/ . The challenge is that the filter doesn't actually kill the request; it simply returns another page instead, with an HTTP status of 200. The page usually says something along the lines of "hey, sorry, this content is blocked".
One option is to try to fetch https://www.youtube.com/favicon.ico to prove that the domain is reachable. The issue is that these filters usually install a custom SSL certificate so they can inspect the HTTP content (see: https://us.smoothwall.com/ssl-filtering-white-paper/), so I can't rely on TLS catching the content being swapped out from under me via an incorrect certificate; I will instead receive a perfectly valid favicon.ico file, except from a different site. There's also the whole CORS issue of issuing an XHR from our domain against youtube.com's domain, which means that if I want to fetch that favicon.ico I have to do it JSONP-style. However, even by using a plain old <img> I can't test the contents of the image, because of CORS (see Get image data in JavaScript?), so I'm stuck with that approach too.
Are there any proven and reliable ways of dealing with this situation and testing browser-level reachability towards a specific domain?
Cheers.
In general, web proxies that want to play nicely typically annotate the HTTP conversation with additional response headers that can be detected.
So one approach to building a man-in-the-middle detector may be to inspect those response headers and compare the results from when behind the MITM, and when not.
Many public websites will display the headers for an arbitrary request; redbot is one.
So perhaps you could ask the party whose content is being modified to visit a URL like: youtube favicon via redbot.
Once you gather enough samples, you could heuristically build a detector.
Also, some CDNs (e.g., Akamai) will allow customers to visit a URL from remote proxy locations in their network. That might give better coverage, although those locations are unlikely to be behind a blocking firewall.
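One possible shape for such a heuristic detector, as a sketch: collect response headers from samples gathered with and without the suspected MITM, then flag header names that intercepting proxies commonly inject. The header list below is illustrative, not exhaustive; a given filter box may add different headers or none at all.

```javascript
// Header names that intercepting proxies commonly add (illustrative list;
// tune it against the samples you actually collect).
const PROXY_HINT_HEADERS = ["via", "x-cache", "x-forwarded-for", "proxy-agent"];

// headers: a plain object mapping lower-cased header names to values,
// e.g. gathered from responses submitted by affected users.
function looksProxied(headers) {
  return PROXY_HINT_HEADERS.some((name) => name in headers);
}

console.log(looksProxied({ "content-type": "image/x-icon" })); // false
console.log(looksProxied({ "content-type": "image/x-icon", "via": "1.1 filterbox" })); // true
```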

Why we need HTTPS when we send result to user

The reasons we need HTTPS (secured/encrypted data over the network):
We need to receive user-side data securely (whether via a form or via the URL, whichever way users send their data to the server over the network), which is done by HTTP + SSL encryption. So, in that case, only the form (or whichever URL the user is posting/sending data to on the server) has to be a secure URL, not the page that I am sending to the browser. E.g., when I need to serve a customer registration form, I have to send it from the server as an https URL; if I don't, the browser will give a warning like a mixed content error. Instead, couldn't browsers have had some sort of parameter to indicate that the form I have must submit to a secure URL?
In some cases my server-side content can't be read by anyone other than those I allow; for that I can use HTTPS to deliver the content, with extra security measures on the server side.
Other than these two scenarios, I don't see any reason for having HTTPS-encrypted content over the network. Let's assume a site with 10+ CSS files, 10+ JS files, and 50+ images, with 200 KB of content weight and a total weight of maybe ~2-3 MB; if this whole content is encrypted, I have no doubt this is going to mean a minimum of 100-280 connections created between browser and server.
Please explain why we need to follow the way we deliver (most of us do it because browsers, search engines like Google, and W3C standards ask us to use it on every page).
why we need to follow the way we deliver
Because otherwise it's not secure. The browsers which warn about this are not wrong.
Let's assume a site with 10+ css, 10+ js
Just one .js file served over non-HTTPS and a man-in-the-middle attacker could inject arbitrary code into your HTTPS page, from which origin they can completely control the user's interaction with your site. That's why browsers don't allow it and give you the mixed content warning.
(And .css can have the same impact in many cases.)
Plus, it's just plain bad security usability to switch between HTTP and HTTPS for different pages. The user is likely to fail to notice the switch and may be tricked into entering data into (or accepting data from) a non-HTTPS page. All the attacker would have to do is change one of the links so that it pointed to HTTP instead of HTTPS, and the usual process would be subverted.
have no doubt this is going to be min. of 100 - 280 connection creation between browser and server.
HTTP[S] reuses connections. You don't pay the SSL handshake latency for every linked resource.
HTTPS today is really not expensive enough to be worth worrying about performance for a typical small web app.

security of sending passwords through Ajax

Is it ok to pass passwords like this or should the method be POST or does it not matter?
xmlhttp.open("GET","pas123",true);
xmlhttp.send();
Additional info: I'm building this using a local virtual web server, so I don't think I'll have HTTPS until I put up some money for a real web server :-)
EDIT: According to Gumbo's link, encodeURIComponent should be used. Should I do xmlhttp.send(encodeURIComponent(password)), or would this cause errors in the password matching?
Post it via HTTPS, then you don't need to worry about that ;)
But note that the page which sends the data must be accessed via https too, due to the same-origin policy.
As for your money limitation: you can use self-signed certificates, or you can get a certificate for free from https://startssl.com/.
All HTTP requests are sent as text, so the particulars of whether it's a GET or POST or PUT... don't really matter. What matters for security in transmission is that you send it via SSL (and handle it safely on the other end, of course).
You can use a self-signed cert until something better is made available. It will be a special hell later if you don't design with https in mind now :)
It shouldn't matter; the main reason for not using GET in conventional web forms is that the details are visible in the address bar, which isn't an issue when using AJAX.
All HTTP requests (GET/POST/etc.) are sent in plain text, so they could be obtained using network-tracing software (e.g. Wireshark). To protect against this you will need to use HTTPS.
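Putting the answers together, here is a minimal sketch of sending the password via POST over HTTPS, with encodeURIComponent applied to the value rather than the whole body (the URL and field name are placeholders for illustration; in production this must be an https:// URL):

```javascript
// Build the urlencoded body. encodeURIComponent escapes characters such as
// "&" and "=" so the server decodes exactly the password the user typed.
function encodeCredentials(password) {
  return "password=" + encodeURIComponent(password);
}

// Placeholder endpoint; swap in your real login URL (served over HTTPS).
function sendPassword(password) {
  const xhr = new XMLHttpRequest();
  xhr.open("POST", "https://example.com/login", true);
  xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
  xhr.send(encodeCredentials(password));
}

console.log(encodeCredentials("p&ss=1")); // password=p%26ss%3D1
```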

Is a change required only in the code of a web application to support HSTS?

If I want a client to always use an HTTPS connection, do I only need to include the headers in the application code, or do I also need to make a change on the server? Also, how is this different from simply redirecting a user to an HTTPS page every single time they attempt to use HTTP?
If you just have HTTP -> HTTPS redirects, a client might still try to POST sensitive data to you (or GET a URL that has sensitive data in it) over HTTP first; this would leave the data exposed publicly. If the client knew your site was HSTS, it would not even try to hit it via HTTP, and so that exposure is eliminated. It's a pretty small win IMO; the bigger risks are the vast number of root CAs that everyone trusts blindly thanks to policies at Microsoft, Mozilla, Opera, and Google.
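For concreteness: HSTS is just a response header your server emits on HTTPS responses, so whether it is "code" or "server config" depends only on where your responses get their headers. A sketch in JavaScript (the one-year max-age is a common choice, not a requirement):

```javascript
// Build a Strict-Transport-Security header value.
function hstsHeaderValue(maxAgeSeconds, includeSubDomains) {
  let value = "max-age=" + maxAgeSeconds;
  if (includeSubDomains) {
    value += "; includeSubDomains";
  }
  return value;
}

// In a Node.js HTTPS handler this would be applied per response, e.g.:
//   res.setHeader("Strict-Transport-Security", hstsHeaderValue(31536000, true));
console.log(hstsHeaderValue(31536000, true)); // max-age=31536000; includeSubDomains
```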

Cannot make ajax call between servers that differ only in port in HTML5/jQuery/Chrome stack

The parts
I am developing against two Pylons servers and testing locally. One server, on port 5000, is the called server. The other is on port 7000. The latter creates a cookie that specifies the same domain as the one used by the former server. Essentially, the first server uses credentials provided by the second server to impersonate the user.
The first server expects to find an auth token (a cookie, really) in its response.environ at run time. When I authenticate on the server on port 7000 and browse to a service on port 5000, the latter server uses the cookie created by the former and the app works.
The fly in the ointment is that the first server creates an HTML5 app that makes an ajax call to the second server, and I cannot get the cookie to be included in the ajax call. I believe that Chrome (the browser we are using/requiring, for HTML5-support reasons) refuses to send the cookie for cross-domain reasons: going from foo.net:7000 to foo.net:5000 is considered cross-domain.
Oh, and the ajax call is through jQuery.
The question
Is there any way to make an ajax call from an HTML5 app created on a port in the same domain to a server in the same domain but a different port?
What I've tried or discard out of hand
I do not believe I can use dynamic script-tag insertion, because I am making the call from javascript and the HTML is generated on the client at runtime from other javascript. At least, I don't think that is a desirable solution.
I don't believe Access-Control-Allow-* is applicable because I am going from client to server, not the other way.
I've seen this on jQuery and ports in ajax calls. I've seen this, too.
I know about the same-origin policy.
And this does not work.
I agree with Michael that the simplest solution is JSONP. But even with JSONP you need to configure your server so that it supports JSONP. Many servers deny this to keep their data secure and sound. JSONP expects your server to send data in a format that can be evaluated as valid JSON wrapped in your callback, but that's not the case for every JSONP request and response. So, just watch out for that.
The absolutely simplest solution to this is to use JSON/P. I wish there were an easier, softer way to accomplish this, but I certainly haven't found one.
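A sketch of the JSONP mechanics in plain JavaScript, assuming the server on the other port supports a callback query parameter (as the previous answer notes, that support is an assumption you must verify):

```javascript
// Append the callback parameter the server will wrap its JSON in.
function jsonpUrl(base, callbackName) {
  const sep = base.indexOf("?") === -1 ? "?" : "&";
  return base + sep + "callback=" + encodeURIComponent(callbackName);
}

// <script> tags are exempt from the same-origin policy, so the browser will
// happily load from foo.net:5000 even on a page served from foo.net:7000.
// The server replies with "cbName({...})", which executes and hands us the data.
function jsonp(url, callback) {
  const name = "jsonp_cb_" + Date.now();
  window[name] = function (data) {
    delete window[name];
    callback(data);
  };
  const script = document.createElement("script");
  script.src = jsonpUrl(url, name);
  document.head.appendChild(script);
}
```

Note that cookies are still sent subject to the browser's normal cookie rules, and JSONP only works for GET-style reads.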