I have tried using a Data URI with this CSS property:
background-image: url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAA9JREFUeNpiYGBg8AUIMAAAUgBOUWVeTwAAAABJRU5ErkJggg==");
And locally it works fine.
However, when I am debugging, the image appears to be missing in Chrome. If I try to navigate to the URL directly, I get: A potentially dangerous Request.Path value was detected from the client (:).
So obviously my application considers the URI for this image suspicious.
How do I get it to show?
I tried relaxing the validation using:
<httpRuntime requestPathInvalidCharacters="" requestValidationMode="2.0" />
<pages validateRequest="false"></pages>
Ideally I wouldn't want to relax the rules too much, only enough to get these data URI images loading.
I would bet that the application considers the request suspicious because of the Base-64 encoded URI. Encoding malicious URLs in Base-64 is a common strategy by attackers to get URLs through front end filters that strip and/or escape URLs, and to obscure the request from any humans reading the code. XSS attacks are commonly done by getting one of these URIs stored in a database and served back to other users.
Because of the high risks of XSS these days, I would hesitate to disable the check. If you can, just use a non-encoded URI. If you can't, you should ask yourself why. If you are trying to enhance security by obfuscating the URI, do know that this is very trivial for an attacker to decode. It is not any form of encryption, just a different way to represent data.
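In practice that just means serving the pixel as a regular file and referencing it normally; a minimal example (the path is hypothetical):

background-image: url("/images/transparent-pixel.png");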
I'm trying to find a method to take a URI/URL string from a user and determine a working, canonical form (or failing if the resource isn't valid). Simultaneously, it should also verify that the URL exists. So we're checking for both valid "syntax" and also existence.
For instance, a string like google.com should be turned into http://www.google.com, and a string like google.com/insights should be turned into http://www.google.com/insights. A string like http://thiswebsitedoesntexistatall.com should return some sort of error or exception.
I believe part of the solution will likely be calling an HTTP get_response() method and following redirects until I get a 200 OK status.
It seems like the URI.parse() method is not forgiving of leaving off the http. I realize I could write a simple thing to try adding http in front, etc., but I was hoping there was some existing gem or little-known library function that would be really forgiving about URLs and canonicalize them for me.
Both the built in net/http and HTTParty seem to be too strict for what I'm looking for. Is there a nice way to do this?
There are some problems with what you're asking for:
A URL parser shouldn't assume the value passed in is HTTP, when FTP and many other protocols are equally valid. If you know the protocol is likely to be HTTP, then you need to add the protocol.
If you try to connect to a site and follow redirects until you get a 200 response, you've only proven that the URL resolves to a valid page of some sort. That 200 could be an error page returned because the page you want is a dead link or invalid, or because the site is temporarily down. To prove otherwise, you have to have some intimate pre-knowledge about the page you're looking for, such as specific content to search for.
Assuming the URL is good after you follow the redirects is not quite safe either. Many sites add all sorts of session data to the URL, so what starts as a simple and clean URL can resolve to a long and convoluted one.
I'd recommend you look at the Addressable::URI gem. It's much more full-featured than Ruby's URI. It won't make the decisions for you, but at least it will give you a more complete API and can rewrite/normalize URLs. Cleaning them up and/or determining if they are good is still left as an exercise for you.
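A rough sketch of how that could look with Addressable plus net/http (the redirect limit and the GET-based existence check are my own choices here, not something the gem dictates):

require 'addressable/uri'
require 'net/http'

# Best-effort canonicalization: heuristic_parse is forgiving about a missing
# scheme ("google.com" becomes "http://google.com"), and normalize tidies the rest.
def canonicalize(str)
  uri = Addressable::URI.heuristic_parse(str).normalize
  raise ArgumentError, "no host in #{str.inspect}" unless uri.host
  uri
end

# Existence check: follow a few redirects and treat a 2xx as "exists".
# Remember this only proves some page answered, not that it is the one you want.
def exists?(uri, limit = 5)
  return false if limit.zero?
  response = Net::HTTP.get_response(URI.parse(uri.to_s))
  case response
  when Net::HTTPSuccess     then true
  when Net::HTTPRedirection then exists?(canonicalize(response['location']), limit - 1)
  else false
  end
rescue SocketError, Errno::ECONNREFUSED
  false
end

url = canonicalize("google.com/insights") # http://google.com/insights
puts exists?(url)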
The AntiForgeryToken is used to prevent CSRF attacks, however the links on MSDN don't give me much insight to what exactly the AntiForgeryToken does, or how it works, or why things are done the way they are.
From what I gather, it creates a hash inside a web page and a cookie. One or both of them use the hashed IPrincipal.Name and symmetric encryption.
Can anyone shed light as to:
How the AntiForgeryToken works internally
What should it be used to protect
What should it NOT be used to protect
What is the reasoning behind the implementation choices for #1 above?
Example:
Is the implementation safe from "double submit" cookie attacks and other common vulnerabilities?
Are there implementation issues if the user opens multiple tabs?
What makes MSFT's implementation different from the one available at SANS?
Okay, here is my best shot.
1) Internally, MVC uses RNG crypto methods to create a 128-bit string to act as the XSRF token. This string is stored in a cookie as well as in a hidden field somewhere on the form. The cookie name seems to be in the form of __RequestVerificationToken plus a Base64-encoded version of the application path (server side). The HTML part of this uses the AntiForgeryDataSerializer to serialize the following pieces of data:
- salt
- value (the token string)
- the ticks of the creation date
- the username (seems like Context.User)
The Validate method basically deserializes the values out of the cookie and out of the form and compares them field by field (salt/value/ticks/username).
2/3) I would think this discussion is more about when to use XSRF tokens and when not to. In my mind, you should use this on every form (I mean, why not?). The only thing I can think of that this doesn't protect against is whether you have actually hit the form in question or not. Knowing the Base64 encoding of the app name would allow an attacker to view the cookie during the XSRF attack. Maybe my interpretation of that is incorrect.
4) Not sure exactly what you are looking for here. I guess I would have built a mechanism that tries to store the XSRF token in the session (if one is already available) and, if not, falls back to the cookie approach. As for the type of crypto used, I found this SO article:
Pros and cons of RNGCryptoServiceProvider
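For reference, hooking the token up in your own code is the usual two-step MVC pattern (view helper plus action filter); this sketch uses a hypothetical controller and action, but the helper and attribute are the standard framework ones:

using System.Web.Mvc;

public class AccountController : Controller
{
    // In the view (ASPX syntax): <%= Html.AntiForgeryToken() %>
    // renders the hidden form field and sets the matching __RequestVerificationToken cookie.

    // The attribute runs the anti-forgery validation before the action executes and
    // throws HttpAntiForgeryException when the cookie and form tokens don't agree.
    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Transfer(string toAccount, decimal amount)
    {
        // Only reached when the anti-forgery check passed.
        return RedirectToAction("Index");
    }
}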
How could a client detect whether a server is using Search Engine Optimization techniques such as mod_rewrite to implement "SEO-friendly URLs"?
For example:
Normal url:
http://somedomain.com/index.php?type=pic&id=1
SEO friendly URL:
http://somedomain.com/pic/1
Since mod_rewrite runs server side, there is no way a client can detect it for sure.
The only thing you can do client side is to look for some clues:
Is the generated HTML dynamic, changing between calls? Then /pic/1 needs to be handled by some script and is most likely not the real URL.
As said before: are there <link rel="canonical"> tags? Then the website is telling the search engine which of several URLs with the same content it should use.
Modify parts of the URL and see if you get a 404. In /pic/1 I would modify the "1" (see the sketch below).
If there is no mod_rewrite, the server will return a 404. If there is, the error is handled by the server-side scripting language, which can return a 404 but in most cases returns a 200 page that prints an error.
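Something along these lines, assuming a shell with curl available (the domain is just the example from the question):

# Ask only for the status code: a rewritten route will often answer 200 for
# /pic/1 but 404 (or a scripted error page) for a mangled id.
curl -s -o /dev/null -w "%{http_code}\n" http://somedomain.com/pic/1
curl -s -o /dev/null -w "%{http_code}\n" http://somedomain.com/pic/99999999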
You can use a <link rel="canonical" href="..." /> tag.
The SEO aspect is usually about the words in the URL, so you can probably ignore any parts that are numeric. Usually SEO is applied over a group of like content, such that it has a common base URL, for example:
Base www.domain.ext/article, with fully URL examples being:
www.domain.ext/article/2011/06/15/man-bites-dog
www.domain.ext/article/2010/12/01/beauty-not-just-skin-deep
Such that the SEO aspect of the URL is the suffix. The algorithm to apply is to typify each "folder" after the common base, assigning it a "datatype" (numeric, text, alphanumeric), and then score as follows:
HTTP Response Code is 200: should be obvious, but you can get a 404 at www.domain.ext/errors/file-not-found that would pass the other checks listed.
Non-Numeric, with Separators, Spell-Checked: separators are usually dashes, underscores or spaces. Take each word and perform a spell check; score if the words are valid, including proper names.
Spell-Checked URL Text on Page: if the text passes a spell check, analyze the page content to see if it appears there.
Spell-Checked URL Text on Page Inside a Tag: if the prior is true, mark again if the text in its entirety is inside an HTML tag.
Tag is Important: if the prior is true and the tag is a <title> or <h#> tag.
Usually with this approach you'll have a max of 5 points, unless multiple folders in the URL meet the criteria, with higher values being better. Now you could probably improve this by using a Bayesian probability approach that uses the above to featurize URLs (i.e. detect the occurrence of some phenomenon), plus come up with some other clever featurizations. But then you've got to train the algorithm, which may not be worth it.
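A rough sketch of that scoring (in Python purely for brevity; the wordlist is a stand-in for a real spell checker, and the URL is the made-up example above):

import re
from urllib.parse import urlparse
from urllib.request import urlopen

# Stand-in for a real spell checker / dictionary lookup.
DICTIONARY = {"man", "bites", "dog", "beauty", "not", "just", "skin", "deep"}

def seo_score(url):
    score = 0
    try:
        with urlopen(url, timeout=10) as resp:
            if resp.status != 200:
                return 0
            score += 1                                  # check 1: answered with a 200
            page = resp.read().decode("utf-8", "replace").lower()
    except OSError:
        return 0
    for folder in urlparse(url).path.strip("/").split("/"):
        words = [w for w in re.split(r"[-_ ]+", folder.lower()) if w]
        if not words or all(w.isdigit() for w in words):
            continue                                    # numeric folders carry no SEO signal
        if all(w in DICTIONARY for w in words):
            score += 1                                  # check 2: separated, spell-checked words
            phrase = " ".join(words)
            if phrase in page:
                score += 1                              # check 3: the URL text appears on the page
                if re.search(r"<(title|h\d)[^>]*>[^<]*" + re.escape(phrase), page):
                    score += 2                          # checks 4/5: inside a <title>/<h#> tag
    return score

print(seo_score("http://www.domain.ext/article/2011/06/15/man-bites-dog"))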
Now, based on your example, you also want to capture situations where the URL has been designed so that a crawler will index it because the query parameters are now part of the URL instead. In that case you can still typify the suffix's folders to arrive at patterns of data types - in your example's case, that a common prefix is always trailed by an integer - and score those URLs as being SEO friendly as well.
I presume you would be using one of the curl variants.
You could try sending the same request but with different "user agent" values.
i.e. send the request once using user agent "Mozilla/5.0" and a second time using user agent "Googlebot". If the server is doing something special for web crawlers, then there should be a different response.
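For example, with curl (the Googlebot UA string here is abbreviated; servers that sniff crawlers usually match on the "Googlebot" token):

curl -s -A "Mozilla/5.0" http://somedomain.com/pic/1 -o browser.html
curl -s -A "Googlebot/2.1 (+http://www.google.com/bot.html)" http://somedomain.com/pic/1 -o crawler.html
diff browser.html crawler.html   # any difference suggests user-agent based handling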
With the frameworks today and the URL routing they provide, I don't even need to use mod_rewrite to create friendly URLs such as http://somedomain.com/pic/1, so I doubt you can detect anything. I would create such URLs for all visitors, crawlers or not. Maybe you can spoof some bot headers to pretend you're a known crawler and see if there's any change. Dunno how legal that is, tbh.
For dynamic URL patterns, it's better to use a <link rel="canonical" href="..." /> tag for the other duplicates.
Internet Explorer 8 has a new security feature, an XSS filter that tries to intercept cross-site scripting attempts. It's described this way:
The XSS Filter, a feature new to Internet Explorer 8, detects JavaScript in URL and HTTP POST requests. If JavaScript is detected, the XSS Filter searches for evidence of reflection, information that would be returned to the attacking Web site if the attacking request were submitted unchanged. If reflection is detected, the XSS Filter sanitizes the original request so that the additional JavaScript cannot be executed.
I'm finding that the XSS filter kicks in even when there's no "evidence of reflection", and am starting to think that the filter simply notices when a request is made to another site and the response contains JavaScript.
But even that is hard to verify because the effect seems to come and go. IE has different zones, and just when I think I've reproduced the problem, the filter doesn't kick in anymore, and I don't know why.
Anyone have any tips on how to combat this? What is the filter really looking for? Is there any way for a good-guy to POST data to a 3rd-party site which can return HTML to be displayed in an iframe and not trigger the filter?
Background: I'm loading a JavaScript library from a 3rd-party site. That JavaScript harvests some data from the current HTML page, and posts it to the 3rd-party site, which responds with some HTML to be displayed in an iframe. To see it in action, visit an AOL Food page and click the "Print" icon just above the story.
What does it really do? It allows third parties to link to a messed-up version of your site.
It kicks in when [a few conditions are met and] it sees a string in the query submission that also exists verbatim in the page, and which it thinks might be dangerous.
It assumes that if <script>something()</script> exists in both the query string and the page code, then it must be because your server-side script is insecure and reflected that string straight back out as markup without escaping.
But of course, apart from the fact that it's a perfectly valid query someone might have typed that matches by coincidence, it's also just as possible that the strings match because someone looked at the page and deliberately copied part of it out. For example:
http://www.bing.com/search?q=%3Cscript+type%3D%22text%2Fjavascript%22%3E
Follow that in IE8 and I've successfully sabotaged your Bing page so it'll give script errors, and the pop-out result bits won't work. Essentially it gives an attacker whose link is being followed license to pick out and disable parts of the page he doesn't like — and that might even include other security-related measures like framebuster scripts.
What does IE8 consider ‘potentially dangerous’? A lot more and a lot stranger things than just this script tag. What's more, it appears to match against a set of ‘dangerous’ templates using a text-pattern system (presumably regex), instead of any kind of HTML parser like the one that will eventually parse the page itself. Yes, use IE8 and your browser is pařṣinͅg HT̈́͜ML w̧̼̜it̏̔h ͙r̿e̴̬g̉̆e͎x͍͔̑̃̽̚.
‘XSS protection’ by looking at the strings in the query is utterly bogus. It can't be ‘fixed’; the very concept is intrinsically flawed. Apart from the problem of stepping in when it's not wanted, it can't ever really protect you from anything but the most basic attacks, and attackers will surely work around such blocks as IE8 becomes more widely used. If you've been forgetting to escape your HTML output correctly, you'll still be vulnerable; all XSS “protection” has to offer you is a false sense of security. Unfortunately Microsoft seems to like this false sense of security; there is similar XSS “protection” in ASP.NET too, on the server side.
So if you've got a clue about webapp authoring and you've been properly escaping output to HTML like a good boy, it's definitely a good idea to disable this unwanted, unworkable, wrong-headed intrusion by outputting the header:
X-XSS-Protection: 0
in your HTTP responses. (And using ValidateRequest="false" in your pages if you're using ASP.NET.)
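How you emit that header depends on your stack; for example, in PHP it's a one-liner that has to run before any output, and in ASP.NET something like Response.AppendHeader("X-XSS-Protection", "0") does the same per response:

<?php
// Disable IE8's XSS filter for this response; must be called before any output.
header('X-XSS-Protection: 0');
?>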
For everyone else, who still slings strings together in PHP without taking care to encode properly... well you might as well leave it on. Don't expect it to actually protect your users, but your site is already broken, so who cares if it breaks a little more, right?
To see it in action, visit an AOL Food page and click the "Print" icon just above the story.
Ah yes, I can see this breaking in IE8. Not immediately obvious where IE has made the hack to the content that's stopped it executing though... the only cross-domain request I can see that's a candidate for the XSS filter is this one to http://h30405.www3.hp.com/print/start:
POST /print/start HTTP/1.1
Host: h30405.www3.hp.com
Referer: http://recipe.aol.com/recipe/oatmeal-butter-cookies/142275?
csrfmiddlewaretoken=undefined&characterset=utf-8&location=http%253A%2F%2Frecipe.aol.com%2Frecipe%2Foatmeal-butter-cookies%2F142275&template=recipe&blocks=Dd%3Do%7Efsp%7E%7B%3D%25%3F%3D%3C%28%2B.%2F%2C%28%3D3%3F%3D%7Dsp%7Ct#kfoz%3D%25%3F%3D%7E%7C%7Czqk%7Cpspm%3Db3%3Fd%3Do%7Efsp%7E%7B%3D%25%3F%3D%3C%7D%2F%27%2B%2C.%3D3%3F%3D%7Dsp%7Ct#kfoz%3D%25%3F%3D%7E%7C%7Czqk...
That blocks parameter continues with pages more of the same gibberish. Presumably there is something in there that (by coincidence?) is reflected in the returned HTML and triggers one of IE8's messed-up ideas of what an XSS exploit looks like.
To fix this, HP need to make the server at h30405.www3.hp.com include the X-XSS-Protection: 0 header.
You should send me (ericlaw#microsoft) a network capture (www.fiddlercap.com) of the scenario you think is incorrect.
The XSS filter works as follows:
1. Is XSSFILTER enabled for this process?
If yes – proceed to next check
If no – bypass XSS Filter and continue loading
2. Is it a "document" load (like a frame, not a subdownload)?
If yes – proceed to next check
If no – bypass XSS Filter and continue loading
3. Is it an HTTP/HTTPS request?
If yes – proceed to next check
If no – bypass XSS Filter and continue loading
4. Does the RESPONSE contain an x-xss-protection header?
Value = 1: XSS Filter enabled (no URLAction check)
Value = 0: XSS Filter disabled (no URLAction check)
No header: proceed to next check
5. Is the site loading in a Zone where the URLAction enables XSS filtering? (By default: Internet, Trusted, Restricted)
If yes – proceed to next check
If no – bypass XSS Filter and continue loading
6. Is it a cross-site request? (Referrer check: does the final, post-redirect, fully-qualified domain name in the HTTP request Referer header match the fully-qualified domain name of the URL being retrieved?)
If the domains match – bypass XSS Filter and continue loading
If not – then the URL in the request should be neutered.
7. Does the heuristic indicate that the RESPONSE data came from unsafe REQUEST DATA?
If yes – modify the response.
Now, the exact details of #7 are quite complicated, but basically, you can imagine that IE does a match of request data (URL/Post Body) to response data (script bodies) and if they match, then the response data will be modified.
In your site's case, you'll want to look at the body of the POST to http://h30405.www3.hp.com/print/start and the corresponding response.
Actually, it's worse than it might seem. The XSS filter can make safe sites unsafe. Read here:
http://www.h-online.com/security/news/item/Security-feature-of-Internet-Explorer-8-unsafe-868837.html
From that article:
However, Google disables IE's XSS filter by sending the X-XSS-Protection: 0 header, which makes it immune.
I don't know enough about your site to judge if this may be a solution, but you can probably try.
More in depth, technical discussion of the filter, and how to disable it is here: http://michael-coates.blogspot.com/2009/11/ie8-xss-filter-bug.html
I am using jQuery to call PHP files via the $.get method:
function fetchDepartment(company_id)
{
    $.get("ajax/fetchDepartment.php?sec=departments&company_id="+company_id, function(data){
        $("#department_id").html(data);
    });
}
What I am thinking is: can I secure the filename even further?
Currently I have a global access check within the .php file that checks whether the user is logged in, whether he can access this data, etc.
But I am wondering if there are further steps I can take so a user couldn't see this filename, or what other steps you would recommend taking.
Encoded requests
You could make the request details effectively invisible to the casual miscreant by encoding almost all of the URL and then decoding the request details server-side.
The request details would include the action you wish to perform plus the parameters relevant to that action.
All requests would be sent to a single URL, where a server-side process would decode the request details and perform the relevant action as required.
Example Original URL:
/ajax/delete.php?parameter1=foo&parameter2=bar
Request details:
action=delete&parameter1=foo&parameter2=bar
Encoded request details (encoded using base64):
YWN0aW9uPWRlbGV0ZSZwYXJhbWV0ZXIxPWZvbyZwYXJhbWV0ZXIyPWJhcg==
Encoded URL:
/ajax/?request=YWN0aW9uPWRlbGV0ZSZwYXJhbWV0ZXIxPWZvbyZwYXJhbWV0ZXIyPWJhcg==
Most browsers provide window.btoa() for Base64 encoding natively (older versions of Internet Explorer do not), and where it's missing it's far from impossible to find a suitable shim or to write your own.
With obfuscated/minified client-side JavaScript it would be quite tricky for someone to determine how to make a request 'by hand'.
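A minimal client-side sketch, assuming jQuery is already on the page and window.btoa is available (older IE needs a Base64 shim); the endpoint and action names are just the examples from above:

// Funnel every AJAX action through one URL, with the real details Base64-encoded.
function sendEncodedRequest(action, params, callback) {
    var details = "action=" + action + "&" + $.param(params);
    $.get("ajax/", { request: window.btoa(details) }, callback);
}

// Usage: the equivalent of /ajax/delete.php?parameter1=foo&parameter2=bar
sendEncodedRequest("delete", { parameter1: "foo", parameter2: "bar" }, function (data) {
    // handle the server's response here
});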
Hide implementation details
There are a number of practices you can follow to make your application less susceptible to attack through URL misuse.
Let's start with a URL of: ajax/fetchDepartment.php?sec=departments&company_id=99
There's no need to reveal what server-side technology you're using (PHP) nor, through the query string (sec, company_id), what the query string values actually mean.
Masking the server-side technology
Assuming you have index.php defined as a default, the following URLs are equivalent:
ajax/fetchDepartment.php?sec=departments&company_id=99
ajax/fetchDepartment/index.php?sec=departments&company_id=99
ajax/fetchDepartment/?sec=departments&company_id=99
The third URL does not reveal the server-side technology you're using. This limits the range of possible attacks. It also makes it easier for you to switch over to a different server-side technology without changing your URLs.
Hiding the meaning of request parameters
ajax/fetchDepartment/?sec=departments&company_id=99
ajax/99/departments/
The latter URL still conveys enough information to perform the request without revealing what the information means.
Whilst someone could still change the values, they won't know what they're changing. This will make it more difficult for an attacker to evaluate and understand the result of any URL changes they make.
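If you go that route on Apache, mapping the friendly URL back to the real script is a single rewrite rule; this is only a sketch built from the example URL above:

# .htaccess sketch: serve ajax/99/departments/ from the existing PHP script
RewriteEngine On
RewriteRule ^ajax/(\d+)/([a-z]+)/?$ ajax/fetchDepartment/index.php?company_id=$1&sec=$2 [L,QSA]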
Pretty much the only way you can obscure the URL for a certain piece of information from the user is by not loading it through HTTP. Maybe you can load a set of data on the calling page, or on another page with a more generic URL.