I have a Web Forms website on IIS 7 and .NET 4.5.1, and I want HTTP requests to be validated using Microsoft's request validation. The web.config defaults for validateRequest and requestValidationMode are supposed to be "true" and "4.0" respectively, which should be what I want (I tried specifying them explicitly just in case):
<pages validateRequest="true">
<httpRuntime requestValidationMode="4.0" />
For some reason, when I enter an HTML tag (I tried < script > and < a >) into a form and submit it, I get the expected "potentially dangerous request" error, but the tag still gets saved in the database. Why did it go through? I simply take the textbox's Text value as-is and send it to my DB, but I expected the error to stop that from happening.
When I tried setting:
<httpRuntime requestValidationMode="2.0" />
The error was the same, but this time, the tag didn't end up in the database, which is what I want.
I would like to understand why the less safe validation mode "2.0" is the only one that actually prevents the request from going through in my case, which doesn't seem to make much sense. There must be something I'm missing; please let me know if I should provide more information.
I have found a solution to my own problem. Microsoft's documentation for requestValidationMode states that all values above "4.0" are interpreted as "4.0", but that isn't true. Reading this interesting page, I found out there is a "4.5" value that is valid and does exactly what I wanted.
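For reference, the fixed setting in my web.config now looks like this (only the requestValidationMode attribute matters here; keep whatever other httpRuntime attributes you already have):
<system.web>
  <httpRuntime requestValidationMode="4.5" />
</system.web>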
Related
I have been using the Thymeleaf th:onclick attribute to call a JavaScript function with parameters, as below:
th:onclick="|myFunction('${parameter1}')|"
But with Thymeleaf 3.1.10 this has been removed, and they suggest using the th:data attribute instead.
However, I found the workarounds below, and both of them appear to work perfectly.
th:attr="onclick=|myFunction('${parameter1}')|"
th:onclick="#{myFunction('${parameter1}')}">
Now I am not sure whether these workarounds are the correct way to do things and, if so, which one is better.
The first will work like you want -- however, you are bypassing the security restriction, and now your pages are vulnerable to JavaScript injection (which is the original reason this change was made).
The second one just plain doesn't work. It doesn't expand the variable ${parameter1}; instead it just encodes it as a URL, like this:
onclick="myFunction?$%7Bparameter1%7D"
You really should be doing it as shown on the page.
th:data-parameter1="${parameter1}" onclick="myFunction(this.getAttribute('data-parameter1'));"
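For example, on a button that could look something like this (the button element and the script body are just illustrative; myFunction and parameter1 are the names from your question):
<button type="button"
        th:data-parameter1="${parameter1}"
        onclick="myFunction(this.getAttribute('data-parameter1'));">
    Run
</button>
<script>
    /* receives the value as a plain string read from the data attribute */
    function myFunction(parameter1) {
        console.log(parameter1);
    }
</script>
Because the value is rendered into an ordinary data-* attribute, Thymeleaf escapes it as attribute text and nothing user-supplied ends up inside executable script, which is the point of the change.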
I wrote the following code in a JSF Facelet (XHTML file):
${cookie}
When I run the XHTML file on a web application server, the following is displayed on the screen:
{JSESSIONID=javax.servlet.http.Cookie#faf91d8}
However, this seems to be the reference of the Cookie instance rather than its contents.
I want to see the value (the session ID) inside the cookie.
I tried this code, but it did not work.
${cookie[value]}
I tried reading the following specification on the JCP site, but I could not find the answer:
https://jcp.org/en/jsr/detail?id=372
Could you please tell me how to properly write code to display a value from a cookie? I would appreciate your help.
As you can see from what is printed, it looks like a key/value pair, and since the spec says it maps a name to a single cookie,
#{cookie['JSESSIONID']}
is what returns the actual single Cookie object. But you still need its value, so
#{cookie['JSESSIONID'].value}
is most likely what you need.
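In the Facelet you can use it directly, or wrap it in a component (the h:outputText form assumes the usual JSF HTML namespace bound to the h prefix):
#{cookie['JSESSIONID'].value}
<h:outputText value="#{cookie['JSESSIONID'].value}" />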
See also
http://incepttechnologies.blogspot.com/p/jsf-implicit-objects.html
https://docs.oracle.com/javase/7/docs/api/java/net/HttpCookie.html
I was wondering if someone has a clue of what is happening here, and could point me in the right direction.
OK, let's put the code in context.
I have Ajax methods (jQuery) like this:
$xmlHttp("/Api/GetWaitingMessages", { count: 20 })
.always(processResult);
($xmlHttp simply wraps a jQuery Deferred and some basic $.ajax options)
And in our health-monitoring back office I see things like this:
Exception information:
Exception type: System.ArgumentException
Exception message: The parameters dictionary contains a null entry for parameter 'count' of non-nullable type 'System.Int32' for method 'System.Web.Mvc.ActionResult GetWaitingMessages(Int32)' in 'AjaxController'. An optional parameter must be a reference type, a nullable type, or be declared as an optional parameter.
Parameter name: parameters
Now the thing is, I placed some traces and try/catches (for testing) to make sure that jQuery never calls GetWaitingMessages with an empty or undefined "count", but as far as the health-monitoring exceptions go, GetWaitingMessages was invoked and passed null as a parameter. (From what I understand, MVC invokes action methods via reflection.)
By the way, the error only happens maybe once in many thousands of requests.
The signature of GetWaitingMessages is:
public virtual ActionResult GetWaitingMessages(int count)
{
....
}
So I suppose MVC shouldn't even hit the method, since there should be no signature match.
Does MVC have problems with high-traffic websites (i.e. multi-threading problems)?
The site mentioned above is running on a cluster of 5 web-farm servers with Network Load Balancing and IP affinity.
Each server gets around 1500 requests/sec at peak times.
The site uses URL rewriting to map domains to areas (i.e. test.com will simply insert /test into the URL), since it's a skinnable, multilingual white-label site.
Some more details on site configuration:
The controller that serves Ajax requests is decorated with
[SessionState(SessionStateBehavior.Disabled)]
HttpModules that were considered useless were removed, since we need to run runAllManagedModulesForAllRequests="true" in MVC. I could have set runAllManagedModulesForAllRequests="false" and then tried to figure out what to add, and in which order, but I found it simpler to just remove what I know is not essential.
<remove name="AnonymousIdentification" />
<remove name="Profile" />
<remove name="WindowsAuthentication" />
<remove name="UrlMappingsModule" />
<remove name="FileAuthorization" />
<remove name="RoleManager" />
<remove name="Session" />
<remove name="UrlAuthorization" />
<remove name="ScriptModule-4.0" />
<remove name="FormsAuthentication" />
The following are all activated and configured in the web.config:
<pages validateRequest="false" enableEventValidation="false" enableViewStateMac="true" clientIDMode="Static">
and also:
urlCompression
staticContent
caching
outputCache
EDIT: I just analyzed my trace logs a bit more. When the error occurs, I see Content-Length: 8, which corresponds to count=20. However, I do not see any query parameters in the logs. I dumped the HttpInputStream to the logs, and it's completely empty... but as I just mentioned, the logs also say that Content-Length = 8, so something is very wrong here.
Could IIS (or possibly the URL rewriting) be mixing things up somewhere along the way?
Any help would be greatly appreciated... I'm tearing my hair out trying to understand what could possibly be going wrong here.
Thanks, Robert
What type of request does your $xmlHttp wrapper issue to the server (GET, POST, or something else)?
What is the definition of the GetWaitingMessages action method?
It might very well be a case of mismatched accepted verbs or argument names.
I have a feeling that this could be a problem with MVC not being able to bind to your 'count' parameter. By default, it expects the parameter to be named 'id'.
You can try the following:
Modify your GetWaitingMessages action to define it with a parameter called 'id' instead of 'count'
Create a custom model binder as described in the accepted answer to the Stack Overflow question "asp.net mvc routing id parameter"
Hope this helps
EDIT: Just saw your reply to another answer stating that the action is a POST, in which case binding may not be an issue.
Just for testing, try this to see if there is any problem:
public virtual ActionResult GetWaitingMessages(FormCollection form)
{
var count=Int32.Parse(form["count"]);
....
}
Of course it will throw if the count field isn't set. If it always works correctly, then the problem is with routing or model binding.
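Along the same lines, purely as a diagnostic sketch of my own (the exception message itself points at it), you could make the parameter nullable so a missing value no longer throws and you can log what actually arrived (HttpStatusCodeResult assumes MVC 3 or later; use whatever error result you normally return):
public virtual ActionResult GetWaitingMessages(int? count)
{
    if (count == null)
    {
        // Log Request.Headers and the raw input stream here to see what really came in.
        return new HttpStatusCodeResult(400, "count is missing");
    }

    // ... existing logic, using count.Value ...
}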
How could a client detect whether a server is using search engine optimization techniques, such as using mod_rewrite to implement "SEO-friendly URLs"?
For example:
Normal URL:
http://somedomain.com/index.php?type=pic&id=1
SEO-friendly URL:
http://somedomain.com/pic/1
Since mod_rewrite runs server-side, there is no way a client can detect it for sure.
The only thing you can do on the client side is to look for some clues:
Is the HTML generated dynamically, and does it change between calls? Then /pic/1 would need to be handled by some script and is most likely not the real URL.
As said before: are there <link rel="canonical"> tags? Then the website wants to tell the search engine which of several URLs with the same content it should use.
Modify parts of the URL and see if you get a 404. In /pic/1 I would modify the "1".
If there is no mod_rewrite, it will return a 404. If there is, the error is handled by the server-side scripting language, which can return a 404 but in most cases will return a 200 page printing an error (see the sketch below).
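A rough sketch of such a probe (my own illustration, not part of the answer above; the URLs are the ones from the question):
using System;
using System.Net.Http;
using System.Threading.Tasks;

class RewriteProbe
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // The real URL from the question and a deliberately mangled variant of the same path.
            HttpResponseMessage real = await client.GetAsync("http://somedomain.com/pic/1");
            HttpResponseMessage mangled = await client.GetAsync("http://somedomain.com/pic/999999999");

            Console.WriteLine("real: " + (int)real.StatusCode + ", mangled: " + (int)mangled.StatusCode);
            // A plain 404 on the mangled path suggests a missing file/route;
            // a 200 that renders an error page suggests a script sitting behind a rewrite rule.
        }
    }
}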
You can use a <link rel="canonical" href="..." /> tag.
The SEO aspect is usually in the words in the URL, so you can probably ignore any parts that are numeric. Usually SEO is applied over a group of like content, such that it has a common base URL, for example:
Base www.domain.ext/article, with full URL examples being:
www.domain.ext/article/2011/06/15/man-bites-dog
www.domain.ext/article/2010/12/01/beauty-not-just-skin-deep
So the SEO aspect of the URL is the suffix. The algorithm to apply is to typify each "folder" after the common base, assigning it a "datatype" - numeric, text, alphanumeric - and then score as follows:
HTTP Response Code is 200: this should be obvious, but you can get a 404 at www.domain.ext/errors/file-not-found that would pass the other checks listed.
Non-Numeric, with Separators, Spell Checked: separators are usually dashes, underscores or spaces. Take each word and perform a spell check; the check passes if the words are valid - including proper names.
Spell-Checked URL Text on Page: if the text passes a spell check, analyze the page content to see if it appears there.
Spell-Checked URL Text on Page Inside a Tag: if the prior check is true, score again if the text in its entirety is inside an HTML tag.
Tag is Important: if the prior check is true and the tag is a <title> or <h#> tag.
Usually with this approach you'll have a max of 5 points, unless multiple folders in the URL meet the criteria, with higher values being better. Now you can probably improve this by using a Bayesian probability approach that uses the above to featurize (i.e. detects the occurrence of some phenomenon) URLs, plus come up with some other clever featurizations. But, then you've got to train the algorithm, which may not be worth it.
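To make the scoring concrete, here is a very rough sketch of my own (it only covers the checks that can be done without an HTML parser, and LooksLikeWords merely stands in for a real spell check):
using System;
using System.Linq;
using System.Text.RegularExpressions;

static class SeoUrlScorer
{
    // Stand-in for a real spell check: lowercase words joined by dashes, underscores or spaces.
    static bool LooksLikeWords(string segment)
    {
        return Regex.IsMatch(segment, @"^[a-z]+([-_ ][a-z]+)+$", RegexOptions.IgnoreCase);
    }

    public static int Score(Uri url, int statusCode, string pageText)
    {
        int score = 0;
        if (statusCode == 200) score++;                                  // check 1: response code

        var wordy = url.AbsolutePath.Trim('/').Split('/').LastOrDefault(LooksLikeWords);
        if (wordy == null) return score;
        score++;                                                         // check 2: non-numeric, separated, "spelled" words

        var phrase = wordy.Replace('-', ' ').Replace('_', ' ');
        if (pageText.IndexOf(phrase, StringComparison.OrdinalIgnoreCase) >= 0)
            score++;                                                     // check 3: URL text appears on the page

        // Checks 4 and 5 (text inside a tag, tag being <title>/<h#>) need an HTML parser.
        return score;
    }
}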
Now, based on your example, you also want to capture situations where the URL has been designed so that a crawler will index it, because the query parameters are now part of the URL path instead. In that case you can still typify the suffix's folders to arrive at patterns of data types - in your example's case, that a common prefix is always followed by an integer - and score those URLs as being SEO-friendly as well.
I presume you would be using one of the curl variants.
You could try sending the same request but with different User-Agent values.
I.e. send the request once using the user agent "Mozilla/5.0" and a second time using the user agent "Googlebot"; if the server is doing something special for web crawlers, then there should be a different response.
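For instance, something along these lines with plain curl (the user-agent strings are only examples, and some servers key on more than this header):
curl -A "Mozilla/5.0" -o normal.html http://somedomain.com/pic/1
curl -A "Googlebot/2.1 (+http://www.google.com/bot.html)" -o bot.html http://somedomain.com/pic/1
diff normal.html bot.html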
With today's frameworks and the URL routing they provide, I don't even need to use mod_rewrite to create friendly URLs such as http://somedomain.com/pic/1, so I doubt you can detect anything. I would create such URLs for all visitors, crawlers or not. Maybe you can spoof some bot headers to pretend you're a known crawler and see if there's any change. I don't know how legal that is, to be honest.
For the dynamic URL pattern, it's better to use the <link rel="canonical" href="..." /> tag for the other, duplicate URLs.
Internet Explorer 8 has a new security feature, an XSS filter that tries to intercept cross-site scripting attempts. It's described this way:
The XSS Filter, a feature new to Internet Explorer 8, detects JavaScript in URL and HTTP POST requests. If JavaScript is detected, the XSS Filter searches evidence of reflection, information that would be returned to the attacking Web site if the attacking request were submitted unchanged. If reflection is detected, the XSS Filter sanitizes the original request so that the additional JavaScript cannot be executed.
I'm finding that the XSS filter kicks in even when there's no "evidence of reflection", and am starting to think that the filter simply notices when a request is made to another site and the response contains JavaScript.
But even that is hard to verify because the effect seems to come and go. IE has different zones, and just when I think I've reproduced the problem, the filter doesn't kick in anymore, and I don't know why.
Anyone have any tips on how to combat this? What is the filter really looking for? Is there any way for a good-guy to POST data to a 3rd-party site which can return HTML to be displayed in an iframe and not trigger the filter?
Background: I'm loading a JavaScript library from a 3rd-party site. That JavaScript harvests some data from the current HTML page, and posts it to the 3rd-party site, which responds with some HTML to be displayed in an iframe. To see it in action, visit an AOL Food page and click the "Print" icon just above the story.
What does it really do? It allows third parties to link to a messed-up version of your site.
It kicks in when [a few conditions are met and] it sees a string in the query submission that also exists verbatim in the page, and which it thinks might be dangerous.
It assumes that if <script>something()</script> exists in both the query string and the page code, then it must be because your server-side script is insecure and reflected that string straight back out as markup without escaping.
But of course, apart from the fact that it's a perfectly valid query someone might have typed that matches by coincidence, it's also just as possible that they match because someone looked at the page and deliberately copied part of it out. For example:
http://www.bing.com/search?q=%3Cscript+type%3D%22text%2Fjavascript%22%3E
Follow that in IE8 and I've successfully sabotaged your Bing page so it'll give script errors, and the pop-out result bits won't work. Essentially it gives an attacker whose link is being followed license to pick out and disable parts of the page he doesn't like — and that might even include other security-related measures like framebuster scripts.
What does IE8 consider ‘potentially dangerous’? A lot more, and a lot stranger, things than just this script tag. What's more, it appears to match against a set of ‘dangerous’ templates using a text pattern system (presumably regex), instead of any kind of HTML parser like the one that will eventually parse the page itself. Yes, use IE8 and your browser is pařṣinͅg HT̈́͜ML w̧̼̜it̏̔h ͙r̿e̴̬g̉̆e͎x͍͔̑̃̽̚.
‘XSS protection’ by looking at the strings in the query is utterly bogus. It can't be ‘fixed’; the very concept is intrinsically flawed. Apart from the problem of stepping in when it's not wanted, it can't ever really protect you from anything but the most basic attacks — and the attackers will surely work around such blocks as IE8 becomes more widely used. If you've been forgetting to escape your HTML output correctly you'll still be vulnerable; all XSS “protection” has to offer you is a false sense of security. Unfortunately Microsoft seem to like this false sense of security; there is similar XSS “protection” in ASP.NET too, on the server side.
So if you've got a clue about webapp authoring and you've been properly escaping output to HTML like a good boy, it's definitely a good idea to disable this unwanted, unworkable, wrong-headed intrusion by outputting the header:
X-XSS-Protection: 0
in your HTTP responses. (And using ValidateRequest="false" in your pages if you're using ASP.NET.)
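If you're on IIS7/ASP.NET, one convenient place to emit that header site-wide is web.config (a minimal sketch; you could equally add it per response in code):
<system.webServer>
  <httpProtocol>
    <customHeaders>
      <!-- tells IE8 not to apply its XSS filter to responses from this site -->
      <add name="X-XSS-Protection" value="0" />
    </customHeaders>
  </httpProtocol>
</system.webServer>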
For everyone else, who still slings strings together in PHP without taking care to encode properly... well you might as well leave it on. Don't expect it to actually protect your users, but your site is already broken, so who cares if it breaks a little more, right?
To see it in action, visit an AOL Food page and click the "Print" icon just above the story.
Ah yes, I can see this breaking in IE8. Not immediately obvious where IE has made the hack to the content that's stopped it executing though... the only cross-domain request I can see that's a candidate for the XSS filter is this one to http://h30405.www3.hp.com/print/start:
POST /print/start HTTP/1.1
Host: h30405.www3.hp.com
Referer: http://recipe.aol.com/recipe/oatmeal-butter-cookies/142275?
csrfmiddlewaretoken=undefined&characterset=utf-8&location=http%253A%2F%2Frecipe.aol.com%2Frecipe%2Foatmeal-butter-cookies%2F142275&template=recipe&blocks=Dd%3Do%7Efsp%7E%7B%3D%25%3F%3D%3C%28%2B.%2F%2C%28%3D3%3F%3D%7Dsp%7Ct#kfoz%3D%25%3F%3D%7E%7C%7Czqk%7Cpspm%3Db3%3Fd%3Do%7Efsp%7E%7B%3D%25%3F%3D%3C%7D%2F%27%2B%2C.%3D3%3F%3D%7Dsp%7Ct#kfoz%3D%25%3F%3D%7E%7C%7Czqk...
That blocks parameter continues for pages with more gibberish. Presumably there is something in there that (by coincidence?) is reflected in the returned HTML and triggers one of IE8's messed-up ideas of what an XSS exploit looks like.
To fix this, HP need to make the server at h30405.www3.hp.com include the X-XSS-Protection: 0 header.
You should send me (ericlaw#microsoft) a network capture (www.fiddlercap.com) of the scenario you think is incorrect.
The XSS filter works as follows:
1. Is XSSFILTER enabled for this process?
   If yes – proceed to next check
   If no – bypass XSS Filter and continue loading
2. Is it a "document" load (like a frame, not a subdownload)?
   If yes – proceed to next check
   If no – bypass XSS Filter and continue loading
3. Is it an HTTP/HTTPS request?
   If yes – proceed to next check
   If no – bypass XSS Filter and continue loading
4. Does the RESPONSE contain an X-XSS-Protection header?
   Yes:
     Value = 1: XSS Filter enabled (no URLAction check)
     Value = 0: XSS Filter disabled (no URLAction check)
   No: proceed to next check
5. Is the site loading in a Zone where the URLAction enables XSS filtering? (By default: Internet, Trusted, Restricted)
   If yes – proceed to next check
   If no – bypass XSS Filter and continue loading
6. Is it a cross-site request? (Referer header: does the final (post-redirect) fully-qualified domain name in the HTTP request's Referer header match the fully-qualified domain name of the URL being retrieved?)
   If yes – bypass XSS Filter and continue loading
   If no – then the URL in the request should be neutered
7. Does the heuristic indicate that the RESPONSE data came from unsafe REQUEST DATA?
   If yes – modify the response
Now, the exact details of #7 are quite complicated, but basically, you can imagine that IE does a match of request data (URL/Post Body) to response data (script bodies) and if they match, then the response data will be modified.
In your site's case, you'll want to look at the body of the POST to http://h30405.www3.hp.com/print/start and the corresponding response.
Actually, it's worse than it might seem. The XSS filter can make safe sites unsafe. Read here:
http://www.h-online.com/security/news/item/Security-feature-of-Internet-Explorer-8-unsafe-868837.html
From that article:
However, Google disables IE's XSS filter by sending the X-XSS-Protection: 0 header, which makes it immune.
I don't know enough about your site to judge whether this may be a solution, but you can probably try it.
A more in-depth, technical discussion of the filter, and how to disable it, is here: http://michael-coates.blogspot.com/2009/11/ie8-xss-filter-bug.html