I'm new to JS and AJAX, and one day, I tried a cross-domain AJAX request. After some research, I found out that AJAX cannot work across domains (natively) because it would be insecure.
From Wikipedia: " This policy prevents a malicious script on one page from obtaining access to sensitive data on another web page through that page's Document Object Model. "
But how could an AJAX request access "sensitive data" when you can't do so with a plain HTTP request?
An AJAX request is an HTTP request.
AJAX stands for Asynchronous JavaScript And XML. It's more or less named after the first browser-based JavaScript HTTP client API, XMLHttpRequest.
HTTP requests are not inherently insecure, but certain things might make HTTP requests problematic.
A big one related to 'Ajax' requests is that, historically at least, an HTTP request can carry session/cookie information.
This means that if Ajax requests were not restricted in browser sandboxes (cross-domain), the owner of Site A could make a request to Site B on behalf of a user.
Example: You're logged into a popular social network. Your browser uses a cookie to identify your logged-in session. I send you a link to evil.example.org. If cross-site restrictions didn't exist, I could now make an HTTP request, carrying your session, to the social network and act on your behalf.
However, this is not the end of the story. It is possible to make cross-site requests. This is called a CORS request.
BUT: the way this works is that the owner of the site you want to make a request to has to allow it. In our previous example, that means the social network needs to explicitly allow "evil.example.org" to make these kinds of requests.
The way this site gives you permission is via CORS headers.
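As a rough sketch of what that opt-in could look like on the server side (this assumes a Node.js server; "https://trusted-partner.example" is a made-up origin that the site has decided to trust):

// Minimal sketch, assuming Node.js; the allowed origin is hypothetical.
const http = require('http');

http.createServer((req, res) => {
  // Tell the browser that this specific cross-origin caller may read the response.
  res.setHeader('Access-Control-Allow-Origin', 'https://trusted-partner.example');
  // Only needed if that caller should also be allowed to send cookies along.
  res.setHeader('Access-Control-Allow-Credentials', 'true');
  res.end('{"ok": true}');
}).listen(8080);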
Other ways to work around it are:
Frames that are hosted on the site you're trying to access. (with specific code)
A proxy you control.
If the server you're trying to access delivers its content in a very specific way. (again, you need control of the target server).
If you control the target server, your best option is to just use CORS. If you don't, your best bet is to set up a proxy you control.
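A rough illustration of the proxy option, assuming Node.js (the third-party URL is made up): your page calls /proxy on your own origin, which is same-origin and therefore allowed, and your server fetches the remote data for it.

const http = require('http');
const https = require('https');

http.createServer((clientReq, clientRes) => {
  if (clientReq.url === '/proxy') {
    // Fetch the cross-domain resource server-side, where no same-origin policy applies.
    https.get('https://third-party-api.example/data', (apiRes) => {
      clientRes.writeHead(apiRes.statusCode, { 'Content-Type': 'application/json' });
      apiRes.pipe(clientRes); // relay the API response back to the browser
    });
  } else {
    clientRes.writeHead(404);
    clientRes.end();
  }
}).listen(8080);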
I was trying to understand CORS. As per my understanding, it is a security mechanism implemented in browsers to prevent any AJAX request to a domain other than the one opened by the user (specified in the URL).
Now, due to this limitation, CORS was implemented to enable websites to make cross-origin requests. But as per my understanding, implementing CORS defeats the security purpose of the "Same Origin Policy" (SOP).
CORS just provides extra control over which requests the server wants to serve. Maybe it can help avoid spammers.
From Wikipedia:
To initiate a cross-origin request, a browser sends the request with
an Origin HTTP header. The value of this header is the site that
served the page. For example, suppose a page on
http://www.social-network.example attempts to access a user's data
in online-personal-calendar.example. If the user's browser implements
CORS, the following request header would be sent:
Origin: http://www.social-network.example
If online-personal-calendar.example allows the request, it sends an
Access-Control-Allow-Origin header in its response. The value of the
header indicates what origin sites are allowed. For example, a
response to the previous request would contain the following:
Access-Control-Allow-Origin: http://www.social-network.example
If the server does not allow the cross-origin request, the browser
will deliver an error to social-network.example page instead of
the online-personal-calendar.example response.
To allow access to all pages, a server can send the following response
header:
Access-Control-Allow-Origin: *
However, this might not be appropriate for situations in which
security is a concern.
What am I missing here? What is the intent of CORS: to secure the server or to secure the client?
Same-origin policy
What is it?
The same-origin policy is a security measure standardized among browsers. The "origin" mostly refers to a "domain". It prevents different origins from interacting with each other, to prevent attacks such as Cross Site Request Forgery.
How does a CSRF attack work?
Browsers allow websites to store information on a client's computer, in the form of cookies. These cookies have some information attached to them, like the name of the cookie, when it was created, when it will expire, who set the cookie etc. A cookie looks something like this:
Cookie: cookiename=chocolate; Domain=.bakery.example; Path=/ [// ;otherData]
So this is a chocolate cookie, which should be accessible from http://bakery.example and all of its subdomains.
This cookie might contain some sensitive data. In this case, that data is... chocolate. Highly sensitive, as you can see.
So the browser stores this cookie. And whenever the user makes a request to a domain on which this cookie is accessible, the cookie would be sent to the server for that domain. Happy server.
This is a good thing. Super cool way for the server to store and retrieve information on and from the client-side.
But the problem is that this allows http://malicious-site.example to send those cookies to http://bakery.example, without the user knowing! For example, consider the following scenario:
// malicious-site.example/attackpage
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://bakery.example/order/new?deliveryAddress="address of malicious user"');
xhr.send();
If you visit the malicious site and the above code executes, and the same-origin policy did not exist, the malicious user would place an order on your behalf and get the order delivered to his place... and you might not like this.
This happened because your browser sent your chocolate cookie to http://bakery.example, which made http://bakery.example think that you are making the request for the new order, knowingly. But you aren't.
This is, in plain words, a CSRF attack. A forged request was made across sites. "Cross Site Request Forgery". And it would not work, thanks to the same-origin policy.
How does Same-origin policy solve this?
It stops the malicious-site.example from making requests to other domains. Simple.
In other words, the browser would not allow any site to make a request to any other site. It would prevent different origins from interacting with each other through such requests, like AJAX.
However, resource loading from other hosts, such as images, scripts, stylesheets, iframes and form submissions, is not subject to this limitation. We need another wall to protect our bakery from the malicious site: CSRF tokens.
CSRF Tokens
As stated, the malicious site can still do something like this without violating the same-origin policy:
<img src='http://bakery.example/order/new?deliveryAddress="address of malicious user"'/>
And the browser will try to load an image from that URL, resulting in a GET request to that URL sending all the cookies. To stop this from happening, we need some server side protection.
Basically, we attach a random, unique token of suitable entropy to the user's session, store it on the server, and also send it to the client with the form. When the form is submitted, client sends that token along with the request, and server verifies if that token is valid or not.
Now that we have done this, when the malicious website sends the request again, it will always fail, since there is no feasible way for the malicious website to know the token for the user's session.
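A minimal sketch of that idea, assuming a Node.js server and an in-memory store (the function and variable names here are made up for illustration):

const crypto = require('crypto');

const sessionTokens = new Map(); // sessionId -> token

// When rendering the order form, generate a token and remember it server-side.
function issueToken(sessionId) {
  const token = crypto.randomBytes(32).toString('hex');
  sessionTokens.set(sessionId, token);
  return token; // embed this in a hidden <input> in the form
}

// When the form is submitted, reject the request unless the token matches.
function verifyToken(sessionId, submittedToken) {
  return sessionTokens.get(sessionId) === submittedToken;
}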
CORS
The policy can be relaxed when cross-site requests are genuinely required. This is known as CORS: Cross-Origin Resource Sharing.
This works by having the "domains" tell the browser to chill, and allow such requests. This "telling" thing can be done by passing a header. Something like:
Access-Control-Allow-Origin: // a single allowed origin, or just *
So if http://bakery.example passes this header to the browser, and the page making the request to http://bakery.example comes from an allowed origin, then the browser will let the request go through, along with the cookies.
There are rules according to which the origin is defined1. For example, different ports for the same domain are not the same origin. So the browser might decline this request if the ports are different. As always, our dear Internet Explorer is the exception to this. IE treats all ports the same way. This is non-standard and no other browser behaves this way. Do not rely on this.
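Roughly, for a page served from http://bakery.example/order (made-up URLs, and keeping the IE port quirk above in mind):

http://bakery.example/about        same origin (only the path differs)
http://bakery.example:8080/        different origin (different port)
https://bakery.example/            different origin (different scheme)
http://cake.bakery.example/        different origin (different host)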
JSONP
JSON with Padding is just a way to circumvent same-origin policy, when CORS is not an option. This is risky and a bad practice. Avoid using this.
What this technique involves is making a request to the other server like following:
<script src="http://badbakery.example/jsonpurl?callback=cake"></script>
Since same-origin policy does not prevent this2 request, the response of this request will be loaded into the page.
This URL would most probably respond with JSON content. But just including that JSON content on the page is not going to help. It would result in an error, of course. So http://badbakery.example accepts a callback parameter and modifies the JSON data, sending it wrapped in whatever is passed to the callback parameter.
So instead of returning,
{ user: "vuln", acc: "B4D455" }
which is invalid JavaScript on its own and would throw an error, it would return,
cake({user: "vuln", acc:"B4D455"});
which is valid JavaScript. It gets executed and probably stored somewhere, according to what the cake function does, so that the rest of the JavaScript on the page can use the data.
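A minimal sketch of the client side of that dance (badbakery.example and the cake callback are the hypothetical names from above):

// The "padding" in the response calls this function with the JSON as its argument.
function cake(data) {
  console.log(data.user, data.acc);
}

// <script> tags are not subject to the same-origin policy, so this loads and
// executes whatever badbakery.example sends back.
var script = document.createElement('script');
script.src = 'http://badbakery.example/jsonpurl?callback=cake';
document.head.appendChild(script);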
This is mostly used by APIs to send data to other domains. Again, this is a bad practice, can be risky, and should be strictly avoided.
Why is JSONP bad?
First of all, it is very limited. You can't handle errors if the request fails (at least not in a sane way). You can't retry the request, etc.
It also requires you to have a cake function in the global scope which is not very good. May the cooks save you if you need to execute multiple JSONP requests with different callbacks. This is solved by temporary functions by various libraries but is still a hackish way of doing something hackish.
Finally, you are inserting random JavaScript code in the DOM. If you aren't 100% sure that the remote service will return safe cakes, you can't rely on this.
References
1. https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy#Definition_of_an_origin
2. https://www.w3.org/Security/wiki/Same_Origin_Policy#Details
Other worthy reads
http://scarybeastsecurity.blogspot.dk/2009/12/generic-cross-browser-cross-domain.html
https://www.rfc-editor.org/rfc/rfc3986 (sorry :p)
https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy
https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)
The Same Origin Policy (SOP) is the policy browsers implement to prevent vulnerabilities via Cross Site Scripting (XSS). This is mainly for protecting the server, as there are many occasions when a server can be dealing with authentication, cookies, sessions, etc.
The Cross Origin Resource Sharing (CORS) is one of the few techniques for relaxing the SOP. Because SOP is "on" by default, setting CORS at the server-side will allow a request to be sent to the server via an XMLHttpRequest even if the request was sent from a different domain. This becomes useful if your server was intended to serve requests from other domains (e.g. if you are providing an API).
I hope this clears up the distinction between SOP and CORS and the purposes of each.
I'm not sure I understand what types of vulnerabilities this causes.
When I need to access data from an API I have to use ajax to request a PHP file on my own server, and that PHP file accesses the API. What makes this more secure than simply allowing me to hit the API directly with ajax?
For that matter, it looks like using JSONP http://en.wikipedia.org/wiki/JSONP you can do everything that cross-domain ajax would let you do.
Could someone enlighten me?
I think you're misunderstanding the problem that the same-origin policy is trying to solve.
Imagine that I'm logged into Gmail, and that Gmail has a JSON resource, http://mail.google.com/information-about-current-user.js, with information about the logged-in user. This resource is presumably intended to be used by the Gmail user interface, but, if not for the same-origin policy, any site that I visited, and that suspected that I might be a Gmail user, could run an AJAX request to get that resource as me, and retrieve information about me, without Gmail being able to do very much about it.
So the same-origin policy is not to protect your PHP page from the third-party site; and it's not to protect someone visiting your PHP page from the third-party site; rather, it's to protect someone visiting your PHP page, and any third-party sites to which they have special access, from your PHP page. (The "special access" can be because of cookies, or HTTP AUTH, or an IP address whitelist, or simply being on the right network — perhaps someone works at the NSA and is visiting your site, that doesn't mean you should be able to trigger a data-dump from an NSA internal page.)
JSONP circumvents this in a safe way, by introducing a different limitation: it only works if the resource is JSONP. So if Gmail wants a given JSON resource to be usable by third parties, it can support JSONP for that resource, but if it only wants that resource to be usable by its own user interface, it can support only plain JSON.
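To make that concrete, here is a rough sketch of what "supporting JSONP for that resource" might look like server-side, assuming a Node.js server and made-up data (a real implementation would also whitelist the callback name):

const http = require('http');

http.createServer((req, res) => {
  const url = new URL(req.url, 'http://localhost');
  const data = JSON.stringify({ user: 'someone', plan: 'free' });
  const callback = url.searchParams.get('callback');

  if (callback) {
    // JSONP: anyone who can add a <script> tag can consume this resource.
    res.setHeader('Content-Type', 'application/javascript');
    res.end(callback + '(' + data + ');');
  } else {
    // Plain JSON: only same-origin (or explicitly CORS-allowed) scripts can read it.
    res.setHeader('Content-Type', 'application/json');
    res.end(data);
  }
}).listen(8080);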
Many web services are not built to resist XSRF, so if a website can programmatically load user data via a request that carries cross-domain cookies just by virtue of the user having visited the site, anyone with the ability to run JavaScript can steal user data.
CORS is a planned secure alternative to XHR that solves the problem by not carrying credentials by default. The CORS spec explains the problem:
User agents commonly apply same-origin restrictions to network requests. These restrictions prevent a client-side Web application running from one origin from obtaining data retrieved from another origin, and also limit unsafe HTTP requests that can be automatically launched toward destinations that differ from the running application's origin.
In user agents that follow this pattern, network requests typically use ambient authentication and session management information, including HTTP authentication and cookie information.
EDIT:
The problem with just making XHR work cross-domain is that many web services expose ambient authority. Normally that authority is only available to code from the same origin.
This means that a user that trusts a web-site is trusting all the code from that website with their private data. The user trusts the server they send the data to, and any code loaded by pages served by that server. When the people behind a website and the libraries it loads are trustworthy, the user's trust is well-placed.
If XHR worked cross-origin, and carried cookies, that ambient authority would be available to code to anyone that can serve code to the user. The trust decisions that the user previously made may no longer be well-placed.
CORS doesn't inherit these problems because existing services don't expose ambient authority to CORS.
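A small sketch of what that default looks like from the client side (api.example.org is a made-up host):

var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.example.org/data');
// By default this cross-origin request is sent WITHOUT cookies or HTTP auth.
// Ambient credentials are only attached if both sides opt in: the client sets
// the flag below, and the server must also answer with
// Access-Control-Allow-Credentials: true (plus a specific, non-wildcard
// Access-Control-Allow-Origin).
xhr.withCredentials = true;
xhr.send();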
The pattern of JS -> Server (PHP) -> API makes it possible, and not only best practice but essential, to sanity-check what you get while it passes through the server. In addition, things like poisoned local resolvers (aka DNS worms) are much less likely on a server than on some random client.
As for JSONP: this is not a walking stick, but a crutch. IMHO it could be seen as an exploit against a misfeature of the HTML/JS combo that can't be removed without breaking existing code. Others might think differently about this.
While JSONP allows you to blindly execute code from somewhere out in the big bad world, nobody forces you to do so. Sane implementations of JSONP always use some sort of hashing etc. to verify that the provider of that code is trustworthy. Again, others might think differently.
With cross-site scripting you would have a web page able to pull data from anywhere, run it in the same context as the other data on the page, and in theory gain access to cookies and other security information that you would not want access given to. Cross-site scripting would be very insecure in this respect, since you could go to any page and, if allowed, the script on that page could load data from anywhere and then start executing bad code; hence the reason it is not allowed.
JSONP, on the other hand, allows you to get data in JSON format because you provide the necessary callback that the data is passed into. This gives you a measure of control: the data will not be executed by the browser unless the callback function evals it or tries to execute it. The data will be in a JSON format that you can then do whatever you wish with; however, it will not be executed, hence it is safer and hence the reason it is allowed.
The original XHR was never designed to allow cross-origin requests. The reason was a tangible security vulnerability that is primarily known by CSRF attacks.
In this attack scenario, a third-party site can force a victim's user agent to send forged but valid and legitimate requests to the origin site. From the origin server's perspective, such a forged request is indistinguishable from other requests by that user which were initiated by the origin server's web pages. The reason is that it is actually the user agent that sends these requests, and it automatically includes any credentials such as cookies, HTTP authentication, and even client-side SSL certificates.
Now such requests can be easily forged: Starting with simple GET requests by using <img src="…"> through to POST requests by using forms and submitting them automatically. This works as long as it’s predictable how to forge such valid requests.
But this is not the main reason to forbid cross-origin requests for XHR. Because, as shown above, there are ways to forge requests even without XHR and even without JavaScript. No, the main reason that XHR did not allow cross-origin requests is because it would be the JavaScript in the web page of the third party the response would be sent to. So it would not just be possible to send cross-origin requests but also to receive the response that can contain sensitive information that would then be accessible by the JavaScript.
That’s why the original XHR specification did not allow cross-origin requests. But as technology advances, there were reasonable requests for supporting cross-origin requests. That’s why the original XHR specification was extended to XHR level 2 (XHR and XHR level 2 are now merged) where the main extension is to support cross-origin requests under particular requirements that are specified as CORS. Now the server has the ability to check the origin of a request and is also able to restrict the set of allowed origins as well as the set of allowed HTTP methods and header fields.
Now to JSONP: to get the JSON response of a request in JavaScript and be able to process it, it would either need to be a same-origin request or, in the case of a cross-origin request, your server and the user agent would need to support CORS (the latter is only supported by modern browsers). But to be able to work with any browser, JSONP was invented: it is simply a valid JavaScript function call with the JSON as a parameter, which can be loaded as an external script via <script> that, similar to <img>, is not restricted to same-origin requests. But just like any other request, a JSONP request is also vulnerable to CSRF.
So to conclude it from the security point of view:
XHR is required to make requests for JSON resources to get their responses in JavaScript
XHR2/CORS is required to make cross-origin requests for JSON resources to get their responses in JavaScript
JSONP is a workaround for the cross-origin restriction on XHR
But also:
Forging requests is laughably easy, although forging valid and legitimate requests is harder (but often quite easy as well)
CSRF attacks are a threat not to be underestimated, so learn how to protect against CSRF
Let's consider the following scenario: assume I have a web app, and authentication of users is performed through a modal dialog window (let's say that when a user clicks the login button, an ajax request is sent, and depending on the callback I either close the window or display an error), and I use only the HTTP protocol. Why is this considered an insecure way to do things?
Also, please make sure that the modal dialog window is taken into account, because this is vital info. There may be some data displayed underneath the dialog window that becomes accessible if modality is broken.
The question includes both:
How can you break an app's security by utilizing an ajax call?
Is an Ajax HTTP request less secure than a regular form HTTP request?
Whoever told you that is wrong. Ajax through POST is not less secure than POST with regular forms, simply because it is the same thing.
Update 1 according to the last edit:
You cannot
No
Argument: the AJAX request is the same HTTP request as any other (such as a request sent by an HTML form). Absolutely the same. So by definition it cannot be less or more secure.
I don't know how else to explain it or what else to say: ajax is an HTTP request, the same request your browser makes when you open an SO page or when you post the SO question form.
I can rephrase your question to something like "Why A is less secure than A". Answer to it: A is not less secure than A, because A is A :-S
Any sensitive data should be channeled through HTTPS. GET data is sent in the query string. POST data is sent in the HTTP request body. Ajax can do both. Neither is secure on its own. You need channel-level encryption to really secure it.
HTTP isn't secure for private data because the data is transmitted in plaintext. This can be intercepted anywhere between the client and server (eg. wifi.) Ajax over HTTPS would be much better.
I think the issue is that you are using HTTP. No matter how you look at it, it won't be secure. If you use HTTPS, the ajax request will be just as secure as an HTML form.
So my answer would be: use HTTPS and you will be all set.
I'm no security expert, but I think it would be more secure to send it over HTTPS. Just googling shows me that it can be done securely, though:
http://www.indicthreads.com/1524/secure-ajax-based-user-authentication/
http://msdn.microsoft.com/en-us/magazine/cc793961.aspx (focused on ASP.NET)
etc.
Since browsers use the same network stack for HTTP and HTTPS, be it AJAX or not, there is no difference. All the headers, cookies, authentication, etc work exactly the same.
Why was it decided that XMLHttpRequest should not be allowed to make calls across the domain boundary? You can retrieve JavaScript, images, CSS, iframes, and just about any other content I can think of from other domains. Why are Ajax HTTP requests not allowed to cross domain boundaries? It seems like an odd limitation to impose, considering the only way I could see it being abused would be if someone were to inject JavaScript into the page. However, in that case, you could simply add an img, script, or iframe element to the document to get it to request the third-party URL and send it to the server.
[Edit]
Some of the answers point out the following reasons; let's point out why they don't constitute a major reason to disallow this.
XSRF (Cross Site Request Forgery, also known as CSRF)
You can do XSRF attacks without using this at all. As a general rule, XMLHttpRequest isn't used at all, simply because it's so hard to make an XMLHttpRequest in a way that's compatible with all major browsers. It's much easier to just add an img tag pointing to the URL if you want the browser to load it.
Posting to third party site
<script type="text/javascript">
$.post("http://some-bank.com/transfer-money.php",
{ amount: "10000", to_account: "xxxx" })
</script>
Could be accomplished with
<body onload="document.getElementById('InvisibleForm').submit()">
<div style="display:none">
<form id="InvisibleForm" action="http://some-bank.com/transfer-money.php" method="POST">
<input type="hidden" name="amount" value="10000">
<input type="hidden" name="to_account" value="xxxxx">
</form>
</div>
</body>
JPunyon: why would you leave the vulnerability in a new feature
You aren't creating any more insecurities. You are just inconveniencing developers who want to use it for good. Anybody who wants to use this feature for evil (aka awesome) could just use some other method of doing it.
Conclusion
I'm marking the answer from bobince as correct because he pointed out the critical problem. Because XMLHttpRequest allows you to post with credentials (cookies) to the destination site and read the data sent back, you could orchestrate some JavaScript that would submit a series of forms, including confirmation forms, complete with any random keys that were generated and put in place to try to prevent XSRF. In this way, you could browse through the target site, like a bank, and the bank's web server would be unable to tell that it wasn't just a regular user submitting all these forms.
Why are Ajax HTTP Requests not allowed to cross domain boundaries?
Because AJAX requests are (a) submitted with user credentials, and (b) allow the caller to read the returned data.
It is a combination of these factors that can result in a vulnerability. There are proposals to add a form of cross-domain AJAX that omits user credentials.
you could simply add an img, script, or iframe element to the document
None of those methods allow the caller to read the returned data.
(Except scripts where either it's deliberately set up to allow that, for permitted cross-domain scripting - or where someone's made a terrible cock-up.)
You can do XSS attacks without using this at all. Posting to third party site
That's not an XSS attack. That's a cross-site request forgery attack (XSRF). There are known ways to solve XSRF attacks, such as including one-time or cryptographic tokens to verify that the submission came deliberately from the user and was not launched from attacker code.
If you allowed cross-domain AJAX you would lose this safeguard. The attacking code could request a page from the banking site, read any authorisation tokens on it, and submit them in a second AJAX request to perform the transfer. And that would be a cross-site scripting attack.
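To spell out that danger, a purely hypothetical sketch (it is exactly what the same-origin policy blocks today; the bank URLs and the csrf_token field name are made up):

// Hypothetical attacker script, IF cross-domain XHR with credentials were allowed.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://some-bank.com/transfer-form');
xhr.onload = function () {
  // read the bank's anti-XSRF token out of the returned page...
  var token = xhr.responseText.match(/name="csrf_token" value="([^"]+)"/)[1];
  // ...then replay it in a second, forged request that now looks legitimate
  var post = new XMLHttpRequest();
  post.open('POST', 'http://some-bank.com/transfer-money.php');
  post.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  post.send('amount=10000&to_account=xxxx&csrf_token=' + token);
};
xhr.send();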
An important difference between the POST:
<body onload="document.getElementById('InvisibleForm').submit()" ...
and Ajax is that after any form POST the browser replaces the page, while after the Ajax call it does not. The result of the POST will be:
Clearly visible to the user.
The attack will be stuck at this point, because the response page from my-bank.com will take control. No bank will implement a one-click transfer.
The scenario of XSRF, if cross-domain Ajax were allowed, would look like the following:
User somehow visits www.bad-guy.com.
If there is no open page for my-bank.com in another instance of the browser, the attack is unsuccessful.
But if such a page is open and the user has already entered his username/password, that means there is a cookie for this session stored in the browser.
JavaScript code on the page from www.bad-guy.com makes an Ajax call to my-bank.com.
For the browser this is a regular HTTP call; it has to send the my-bank cookies to my-bank.com, and it does.
The bank processes this request because it cannot distinguish this call from the regular activity of the user.
The fact that JavaScript code can read the response is not important. In the attack case this might not even be necessary. What is really important is that the user in front of the computer will have no idea that this interaction is taking place. He will be looking at nice pictures on the www.bad-guy.com page.
JavaScript code makes several other calls to my-bank.com if this is needed.
The gist is that no injection or any page tampering is needed.
A better solution might be to allow the call itself but not send any cookies. This is a very simple solution that does not require any extensive development. In many cases the Ajax call goes to an unprotected location, and not sending cookies will not be a limitation.
The CORS (Cross Origin Resource Sharing) that is under discussion now, among other things, speaks about sending/not sending cookies.
Well, apparently you're not the only person that feels that way...
http://www.google.com/search?q=xmlhttp+cross+site
EDIT: There is an interesting discussion linked from the above search:
http://blogs.msdn.com/ie/archive/2008/06/23/securing-cross-site-xmlhttprequest.aspx
Looks like proposals are under way to permit cross site xmlhttp requests (IE 8, FF3 etc.), although I wish they'd been there when I was writing the code for my sites :)
And then there's the problem of compatibility... It will be a while before it's ubiquitous.
When you send an HTTP request to the server, the cookies set by the server are also sent back to it by the browser. The server uses those cookies to establish the fact that the user is logged in, etc.
This can be exploited by a malicious attacker who, with the help of some JavaScript, can steal information or perform unauthorised commands on other websites without the user knowing anything about this.
For example, one could ask a user to visit a site which has the following JavaScript code (assuming jQuery):
<script type="text/javascript">
$.post("http://some-bank.com/transfer-money.php",
{ amount: "10000", to_account: "xxxx" })
</script>
Now, if the user were really logged into the bank while the above code was executed, the attacker could have transferred USD 10K to the account XXX.
This kind of attack is called Cross-Site Request Forgery (XSRF). There is more info about it on Wikipedia.
It's mainly due to this reason the same-origin policy exists and browsers won't allow you to perform XMLHttpRequests on domains different from the origin.
There is some discussion going on to actually allow cross-domain XHR, but we have to see whether this really gets accepted.
It's a concern because it can be used for bad purposes, as you mentioned. It can also be used with good intent, and for that reason, cross domain protocols are being developed.
The two biggest concerns are when it is used in conjunction with cross-site scripting (XSS) and cross-site request forgery (CSRF). Both are serious threats (which is why they made it into the OWASP top 10 and the SANS 25).
the only way I could see it being abused, would be if someone were to inject Javascript
This is XSS. Far too many apps are still vulnerable, and if browser security models didn't prevent cross-domain AJAX, they would be opening their users to a considerable attack vector.
you could simply add an img, script, or iframe element to the document to get it to request the third party URL
Yes, but those will send an HTTP Referer header and (through other means) can be blocked to prevent CSRF. AJAX calls can spoof headers more easily and would allow other means of circumventing traditional CSRF protections.
I think another thing that separates this from a normal XSRF attack is that you can do stuff with the data you get back as well via javascript.
I don't know what the huge problem is. Have AJAX calls aimed at other domains first be sent to your application and then forwarded elsewhere with filtered data; parse the returned data if you really need to, and feed it to the user.
Handling sensitive AJAX requests? Nail down the incoming suckers by checking headers, storing session time data, or filtering incoming IP addresses down to sources you trust or your own applications.
What I'd personally like to see in the future is rock-solid security on all incoming requests by default on web servers, frameworks and CMSs, and then explicitly defined resources that will parse requests from outside sources.
With <form> you can post data, but you can't read it. With XHR you can do both.
Page like http://bank.example.com/display_my_password is safe against XSRF (assuming it only displays and not sets the password) and frames (they have same-origin policy). However cross-domain XHR would be a vulnerability.
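A tiny sketch of that asymmetry (the URL is the hypothetical one from this answer):

// A cross-site <form> POST is sent, but the response replaces the page on the
// other origin; the attacker's script never gets to read it.
// XHR, by contrast, hands the response body back to the calling script:
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://bank.example.com/display_my_password');
xhr.onload = function () {
  var secret = xhr.responseText; // this read access is what cross-domain XHR would give away
};
xhr.send();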
You turn unsuspecting visitors into denial of service attackers.
Also, imagine a cross-site script that steals all your Facebook stuff. It opens an iframe and navigates to facebook.com.
You're already logged in to Facebook (cookie), and it reads your data/friends. And does more nasties.