Use of JSON-P with Sensitive Information - ajax

I have a secured website that requires a user to authenticate, and would like to return sensitive data to the client from my API via JSON-P so that I can get around ajax cross-domain issues. I own both the client and server, so I am not concerned about the security from the client perspective (i.e. reading malicious js from the server).
I have been researching ways to secure the JSON-P to prevent Cross-Site Request Forgery, but haven't been able to clearly determine whether checking the Referer is a foolproof method for securing the data. As I understand it, the Referer header cannot be spoofed in this situation because the calls would be made from JavaScript, and headers cannot be changed. Is this a correct assumption?
I would like some clear-cut examples of why or why not checking the Referer would/wouldn't work to secure JSON-P.
Thanks!
EDIT:
Just to clarify - the JSON-P is secured via Spring Security, so it wouldn't only be secured by the Referer header. I am mostly concerned here about session hijacking...

JSONP URLs can be called with an ordinary curl command, and the HTTP Referer header can easily be forged.

I would like some clear-cut examples of why or why not checking the Referer would/wouldn't work to secure JSON-P.
Referer is not guaranteed to be sent, so:
if you require it to be present and match a trusted site, you will be breaking the app for everyone whose browser or network setup doesn't send it;
if you permit it to be absent to get around that, you open yourself to attack not just for those users, but for everyone where the attacker can induce Referer not to be sent (most notably, from HTTPS pages);
also, to behave properly with proxies you would have to no-cache all your responses (or Vary: Referer, but that won't work right in IE)
Referrer-checking is a weak and problematic method which sometimes sees use as a desperate last measure... it's not something you should build when you've got the choice. If you control both servers, you can easily include a request token on one page that gets recognised by the script on the other, as sketched below.
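A minimal sketch of that token approach, under several assumptions: the page served by the first server renders a per-session token (requestToken) into itself, the second server knows the same token, and https://api.example.com/data is a hypothetical JSON-P endpoint that rejects requests whose token doesn't match the session:

// Hypothetical: requestToken was rendered into this page server-side,
// and the JSON-P endpoint validates it before responding.
function handleData(data) {
  console.log("got:", data);
}

var script = document.createElement("script");
script.src = "https://api.example.com/data" +
             "?callback=handleData" +
             "&token=" + encodeURIComponent(requestToken);
document.head.appendChild(script);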


API instagram can't get data [duplicate]

I have a Grunt process which initiates an instance of express.js server. This was working absolutely fine up until just now when it started serving a blank page with the following appearing in the error log in the developer's console in Chrome (latest version):
XMLHttpRequest cannot load https://www.example.com/
No 'Access-Control-Allow-Origin' header is present on the requested
resource. Origin 'http://localhost:4300' is therefore not allowed access.
What is stopping me from accessing the page?
tl;dr — When you want to read data (mostly using client-side JS) from a different server, you need the server with the data to grant explicit permission to the code that wants the data.
There's a summary at the end and headings in the answer to make it easier to find the relevant parts. Reading everything is recommended though as it provides useful background for understanding the why that makes seeing how the how applies in different circumstances easier.
About the Same Origin Policy
This is the Same Origin Policy. It is a security feature implemented by browsers.
Your particular case is showing how it is implemented for XMLHttpRequest (and you'll get identical results if you were to use fetch), but it also applies to other things (such as images loaded onto a <canvas> or documents loaded into an <iframe>), just with slightly different implementations.
The standard scenario that demonstrates the need for the SOP can be demonstrated with three characters:
Alice is a person with a web browser
Bob runs a website (https://www.example.com/ in your example)
Mallory runs a website (http://localhost:4300 in your example)
Alice is logged into Bob's site and has some confidential data there. Perhaps it is a company intranet (accessible only to browsers on the LAN), or her online banking (accessible only with a cookie you get after entering a username and password).
Alice visits Mallory's website which has some JavaScript that causes Alice's browser to make an HTTP request to Bob's website (from her IP address with her cookies, etc). This could be as simple as using XMLHttpRequest and reading the responseText.
The browser's Same Origin Policy prevents that JavaScript from reading the data returned by Bob's website (which Bob and Alice don't want Mallory to access). (Note that you can, for example, display an image using an <img> element across origins because the content of the image is not exposed to JavaScript (or Mallory) … unless you throw canvas into the mix in which case you will generate a same-origin violation error).
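As a rough sketch of the attack the SOP blocks (the /confidential-data path is made up for illustration), Mallory's page might contain:

// Runs on Mallory's page; Alice's cookies for Bob's site ride along.
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://www.example.com/confidential-data");
xhr.onload = function () {
  // The Same Origin Policy blocks this read: without CORS permission
  // from Bob, the browser reports an error and exposes nothing here.
  console.log(xhr.responseText);
};
xhr.send();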
Why the Same Origin Policy applies when you don't think it should
For any given URL it is possible that the SOP is not needed. A couple of common scenarios where this is the case are:
Alice, Bob, and Mallory are the same person.
Bob is providing entirely public information
… but the browser has no way of knowing if either of the above is true, so trust is not automatic and the SOP is applied. Permission has to be granted explicitly before the browser will give the data it has received from Bob to some other website.
Why the Same Origin Policy applies to JavaScript in a web page but little else
Outside the web page
Browser extensions*, the Network tab in browser developer tools, and applications like Postman are installed software. They aren't passing data from one website to the JavaScript belonging to a different website just because you visited that different website. Installing software is usually a more conscious choice.
There isn't a third party (Mallory) who is considered a risk.
* Browser extensions do need to be written carefully to avoid cross-origin issues. See the Chrome documentation for example.
Inside the webpage
Most of the time, there isn't a great deal of information leakage when just showing something on a webpage.
If you use an <img> element to load an image, then it gets shown on the page, but very little information is exposed to Mallory. JavaScript can't read the image and copy it to her server (unless you use a crossOrigin attribute to explicitly request permission with CORS).
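For example, a sketch of that crossOrigin route (the image URL is a placeholder, and it only works if the image server also sends CORS headers):

var img = new Image();
img.crossOrigin = "anonymous"; // request the image in CORS mode
img.onload = function () {
  var canvas = document.createElement("canvas");
  canvas.width = img.width;
  canvas.height = img.height;
  canvas.getContext("2d").drawImage(img, 0, 0);
  // Without crossOrigin plus server-side CORS headers, the canvas would
  // be tainted and toDataURL() would throw a SecurityError.
  var data = canvas.toDataURL();
};
img.src = "https://images.example.com/photo.png";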
That said, some information does leak so, to quote Domenic Denicola (of Google):
The web's fundamental security model is the same origin policy. We
have several legacy exceptions to that rule from before that security
model was in place, with script tags being one of the most egregious
and most dangerous. (See the various "JSONP" attacks.)
Many years ago, perhaps with the introduction of XHR or web fonts (I
can't recall precisely), we drew a line in the sand, and said no new
web platform features would break the same origin policy. The existing
features need to be grandfathered in and subject to carefully-honed
and oft-exploited exceptions, for the sake of not breaking the web,
but we certainly can't add any more holes to our security policy.
This is why you need CORS permission to load fonts across origins.
Why you can display data on the page without reading it with JS
There are a number of circumstances where Mallory's site can cause a browser to fetch data from a third party and display it (e.g. by adding an <img> element to display an image). It isn't possible for Mallory's JavaScript to read the data in that resource though, only Alice's browser and Bob's server can do that, so it is still secure.
CORS
The Access-Control-Allow-Origin HTTP response header referred to in the error message is part of the CORS standard which allows Bob to explicitly grant permission to Mallory's site to access the data via Alice's browser.
A basic implementation would just include:
Access-Control-Allow-Origin: *
… in the response headers to permit any website to read the data.
Access-Control-Allow-Origin: http://example.com
… would allow only a specific site to access it, and Bob can dynamically generate that based on the Origin request header to permit multiple, but not all, sites to access it.
The specifics of how Bob sets that response header depend on Bob's HTTP server and/or server-side programming language. Users of Node.js/Express.js should use the well-documented CORS middleware. Users of other platforms should take a look at this collection of guides for various common configurations that might help.
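For instance, a minimal sketch with the Express cors middleware (the origins listed are placeholders):

const express = require("express");
const cors = require("cors"); // npm install cors

const app = express();

// Grant permission to a fixed list of sites; the middleware echoes the
// matching Origin back in Access-Control-Allow-Origin and answers
// preflight OPTIONS requests for you.
app.use(cors({
  origin: ["https://site-one.example", "https://site-two.example"],
}));

app.get("/data", (req, res) => res.json({ hello: "world" }));
app.listen(3000);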
NB: Some requests are complex and send a preflight OPTIONS request that the server will have to respond to before the browser will send the GET/POST/PUT/Whatever request that the JS wants to make. Implementations of CORS that only add Access-Control-Allow-Origin to specific URLs often get tripped up by this.
Obviously granting permission via CORS is something Bob would do only if either:
The data was not private or
Mallory was trusted
How do I add these headers?
It depends on your server-side environment.
If you can, use a library designed to handle CORS as they will present you with simple options instead of having to deal with everything manually.
Enable-Cors.org has a list of documentation for specific platforms and frameworks that you might find useful.
But I'm not Bob!
There is no standard mechanism for Mallory to add this header because it has to come from Bob's website, which she does not control.
If Bob is running a public API then there might be a mechanism to turn on CORS (perhaps by formatting the request in a certain way, or a config option after logging into a Developer Portal site for Bob's site). This will have to be a mechanism implemented by Bob though. Mallory could read the documentation on Bob's site to see if something is available, or she could talk to Bob and ask him to implement CORS.
Error messages which mention "Response for preflight"
Some cross-origin requests are preflighted.
This happens when (roughly speaking) you try to make a cross-origin request that:
Includes credentials like cookies
Couldn't be generated with a regular HTML form (e.g. has custom headers or a Content-Type that you couldn't use in a form's enctype).
If you are correctly doing something that needs a preflight
In these cases the rest of this answer still applies, but you also need to make sure that the server can listen for the preflight request (which will be OPTIONS, not the GET, POST, or whatever you were trying to send) and respond to it with the right Access-Control-Allow-Origin header, as well as Access-Control-Allow-Methods and Access-Control-Allow-Headers to allow your specific HTTP methods or headers.
If you are triggering a preflight by mistake
Sometimes people make mistakes when trying to construct Ajax requests, and sometimes these trigger the need for a preflight. If the API is designed to allow cross-origin requests but doesn't require anything that would need a preflight, then this can break access.
Common mistakes that trigger this include:
trying to put Access-Control-Allow-Origin and other CORS response headers on the request. These don't belong on the request, don't do anything helpful (what would be the point of a permissions system where you could grant yourself permission?), and must appear only on the response.
trying to put a Content-Type: application/json header on a GET request, which has no request body whose content needs describing (typically when the author confuses Content-Type and Accept).
In either of these cases, removing the extra request header will often be enough to avoid the need for a preflight (which will solve the problem when communicating with APIs that support simple requests but not preflighted requests).
Opaque responses (no-cors mode)
Sometimes you need to make an HTTP request, but you don't need to read the response. e.g. if you are posting a log message to the server for recording.
If you are using the fetch API (rather than XMLHttpRequest), then you can configure it to not try to use CORS.
Note that this won't let you do anything that you require CORS to do. You will not be able to read the response. You will not be able to make a request that requires a preflight.
It will let you make a simple request, not see the response, and not fill the Developer Console with error messages.
How to do it is explained by the Chrome error message given when you make a request using fetch and don't get permission to view the response with CORS:
Access to fetch at 'https://example.com/' from origin 'https://example.net' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
Thus:
fetch("http://example.com", { mode: "no-cors" });
Alternatives to CORS
JSONP
Bob could also provide the data using a hack like JSONP which is how people did cross-origin Ajax before CORS came along.
It works by presenting the data in the form of a JavaScript program that injects the data into Mallory's page.
It requires that Mallory trust Bob not to provide malicious code.
Note the common theme: The site providing the data has to tell the browser that it is OK for a third-party site to access the data it is sending to the browser.
Since JSONP works by appending a <script> element to load the data in the form of a JavaScript program that calls a function already in the page, attempting to use the JSONP technique on a URL that returns JSON will fail — typically with a CORB error — because JSON is not JavaScript.
Move the two resources to a single Origin
If the HTML document the JS runs in and the URL being requested are on the same origin (sharing the same scheme, hostname, and port) then the Same Origin Policy grants permission by default. CORS is not needed.
A Proxy
Mallory could use server-side code to fetch the data (which she could then pass from her server to Alice's browser through HTTP as usual).
It will either:
add CORS headers
convert the response to JSONP
exist on the same origin as the HTML document
That server-side code could be written & hosted by a third party (such as CORS Anywhere). Note the privacy implications of this: The third party can monitor who proxies what across their servers.
Bob wouldn't need to grant any permissions for that to happen.
There are no security implications here since that is just between Mallory and Bob. There is no way for Bob to think that Mallory is Alice and to provide Mallory with data that should be kept confidential between Alice and Bob.
Consequently, Mallory can only use this technique to read public data.
Do note, however, that taking content from someone else's website and displaying it on your own might be a violation of copyright and open you up to legal action.
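A minimal sketch of such a proxy in Express (Node 18+ for the built-in fetch; the upstream URL is a placeholder):

const express = require("express");
const app = express();

app.get("/proxy/data", async (req, res) => {
  // Server-to-server request: no browser involved, so no SOP applies.
  const upstream = await fetch("https://bob.example/public-data");
  const body = await upstream.text();
  // This response is same-origin from the page's point of view, so no
  // CORS headers are needed (add them if the page lives elsewhere).
  res.type(upstream.headers.get("content-type") || "text/plain").send(body);
});

app.listen(3000);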
Writing something other than a web app
As noted in the section "Why the Same Origin Policy applies to JavaScript in a web page but little else", you can avoid the SOP by not writing JavaScript in a webpage.
That doesn't mean you can't continue to use JavaScript and HTML, but you could distribute it using some other mechanism, such as Node-WebKit or PhoneGap.
Browser extensions
It is possible for a browser extension to inject the CORS headers in the response before the Same Origin Policy is applied.
These can be useful for development but are not practical for a production site (asking every user of your site to install a browser extension that disables a security feature of their browser is unreasonable).
They also tend to work only with simple requests (failing when handling preflight OPTIONS requests).
Having a proper development environment with a local development server is usually a better approach.
Other security risks
Note that SOP / CORS do not mitigate XSS, CSRF, or SQL Injection attacks which need to be handled independently.
Summary
There is nothing you can do in your client-side code that will enable CORS access to someone else's server.
If you control the server the request is being made to: Add CORS permissions to it.
If you are friendly with the person who controls it: Get them to add CORS permissions to it.
If it is a public service:
Read their API documentation to see what they say about accessing it with client-side JavaScript:
They might tell you to use specific URLs
They might support JSONP
They might not support cross-origin access from client-side code at all (this might be a deliberate decision on security grounds, especially if you have to pass a personalized API Key in each request).
Make sure you aren't triggering a preflight request you don't need. The API might grant permission for simple requests but not preflighted requests.
If none of the above apply: Get the browser to talk to your server instead, and then have your server fetch the data from the other server and pass it on. (There are also third-party hosted services that attach CORS headers to publicly accessible resources that you could use).
The target server must allow the cross-origin request. To allow it with Express, simply handle the HTTP OPTIONS request:
app.options('/url...', function (req, res, next) {
  // Tell the browser which origins, methods, and headers are allowed.
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Methods', 'POST');
  res.header('Access-Control-Allow-Headers', 'accept, content-type');
  // Let the browser cache this preflight answer (value in seconds).
  res.header('Access-Control-Max-Age', '1728000');
  return res.sendStatus(200);
});
Since this isn't mentioned in the accepted answer: this is not the case for this exact question, but it might help others who run into the same problem.
This is something you can do in your client-code to prevent CORS errors in some cases.
You can make use of Simple Requests.
In order to perform a 'Simple Request' the request needs to meet several conditions, e.g. only the POST, GET and HEAD methods are allowed, and only certain headers may be set (you can find all the conditions here).
If your client code does not explicitly set the affected headers (e.g. "Accept") to a fixed value, some clients may set these headers automatically to "non-standard" values, causing the server not to accept it as a Simple Request, which will give you a CORS error.
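A sketch of pinning a safelisted header so the request stays simple (the URL is a placeholder):

fetch("https://api.example.com/items", {
  method: "GET",
  headers: {
    // Accept is CORS-safelisted; a custom header or a Content-Type of
    // application/json would force a preflight instead.
    "Accept": "application/json",
  },
});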
This is happening because of CORS. CORS stands for Cross-Origin Resource Sharing. In simple words, this error occurs when we try to access a domain/resource from another domain.
Read More about it here: CORS error with jquery
To fix this, if you have access to the other domain, you will have to set the Access-Control-Allow-Origin header on the server. You can enable this for all requests/domains or for a specific domain.
These links may help:
How to get a cross-origin resource sharing (CORS) post request working
This CORS issue wasn't further elaborated (for other causes), so let me add mine.
I'm having this issue currently for a different reason.
My front end was returning the 'Access-Control-Allow-Origin' header error as well.
It turned out I had pointed at the wrong URL, so the header was never reflected properly (while I kept presuming it was). The front end on localhost was calling a non-secured http endpoint when it should have been https. Make sure the API endpoint your front end points at uses the correct protocol.
I got the same error in Chrome console.
My problem was, I was trying to go to the site using http:// instead of https://. So there was nothing to fix, just had to go to the same site using https.
This bug cost me 2 days. I checked my server log, and the preflight OPTIONS request/response between the browser (Chrome/Edge) and the server was OK. The main reason is that the GET/POST/PUT/DELETE server response for an XMLHttpRequest must also have the following header:
access-control-allow-origin: origin
"origin" is in the request header (Browser will add it to request for you). for example:
Origin: http://localhost:4221
you can add response header like the following to accept for all:
access-control-allow-origin: *
or response header for a specific request like:
access-control-allow-origin: http://localhost:4221
The message in browsers is not clear to understand: "...The requested resource"
note that:
CORS works well for localhost. different port means different Domain.
if you get error message, check the CORS config on the server side.
On most hosting services, just add the following in the .htaccess of the target server folder:
Header set Access-Control-Allow-Origin 'https://your.site.folder'
I had the same issue. In my case I fixed it by adding an additional timestamp parameter to my URL, even though the server I was accessing didn't require it.
Example: yoururl.com/yourdocument?timestamp=1234567
Note: I used an epoch timestamp.
"Get" request with appending headers transform to "Options" request. So Cors policy problems occur. You have to implement "Options" request to your server. Cors Policy about server side and you need to allow Cors Policy on your server side. For Nodejs server:details
app.use(cors)
For Java to integrate with Angular:details
#CrossOrigin(origins = "http://localhost:4200")
You should enable CORS to get it working.

CORS with client https certificates

I have a site with two https servers. One (frontend) serves up a UI made of static pages. The other (backend) serves up a microservice. Both of them happen to be using the same (test) X509 certificate to identify themselves. Individually, I can connect to them both over https requiring the client certificate "tester".
We were hiding CORS issues until now by going through an nginx setup that makes the frontend and backend appear that they are same Origin. I have implemented the headers 'Access-Control-Allow-Origin', 'Access-Control-Allow-Credentials' for all requests; with methods, headers for preflight check requests (OPTIONS).
In Chrome, cross-site like this works just fine. I can see that front-end URLs and backend URLs are different sites. I see the OPTIONS requests being made before backend requests are made.
Even though Chrome doesn't seem to need it, I did find the xmlhttprequest object that will be used to perform the request and did a xhr.withCredentials = true on it, because that seems to be what fetch.js does under the hood when it gets "credentials":"include". I noticed that there is an xhr.setRequestHeader function available that I might need to use to make Firefox happy.
Firefox behaves identically for the UI calls. But for all backend calls, I get a 405. When it does this, there is no network connection being made to the server. The browser just decided that this is a 405 without executing any https request. Even though this is different behavior from Chrome, it kind of makes sense. Both the front-end UI and backend service need a client certificate to be chosen. I chose the certificate "tester" when I connected to the UI. When it goes to make a backend request, it could assume that the same client certificate should be used to reach the back-end. But maybe it assumes that it could be different, and there is something else I need to tell Firefox.
Is anybody here using CORS in combination with two-way SSL certificates like this, and has had this Firefox problem and fixed it somehow? I suspect that it's not a server-side fix, but something that the client needs to do.
Edit: see the answer here: https://stackoverflow.com/a/74744206/537554
I haven't actually tested this using client certificates, but I seem to recall that Firefox will not send credentials if Access-Control-Allow-Origin is set to the * wildcard instead of an actual domain. See this page on MDN.
Also there's an issue with Firefox sending a CORS request to a server that expects the client certificate to be presented in the TLS handshake. Basically, Firefox will not send the certificate during the preflight, creating a chicken-and-egg problem. See this bug on Bugzilla.
When using CORS with credentials (basic auth, cookies, client certificate, etc.):
Access-Control-Allow-Credentials must be true
Access-Control-Allow-Origin must not be *
Access-Control-Allow-Origin must not be multi-value (neither duplicated nor comma-delimited)
Access-Control-Allow-Origin must be set to exactly the value from the request's Origin header in order for the request to work (either hard-coded that way or if it passes a whitelist of allowed values)
The preflight OPTIONS request must not require credentials (including the client certificate). Part of the purpose of the preflight is to ask what is allowed in a CORS request, and therefore sending credentials before knowing if they are allowed is incorrect.
The preflight OPTIONS request must return a 200-level response, generally 204
Note: For Access-Control-Allow-Origin, you may want to consider allowing the value null, since redirect chains (like the ones typically used for OAuth) can cause the Origin value in a request from a browser to be null.
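A sketch of those rules in Express (the whitelist entry is a placeholder):

const allowedOrigins = new Set(["https://frontend.example"]);

app.use((req, res, next) => {
  const origin = req.headers.origin;
  if (origin && allowedOrigins.has(origin)) {
    // Echo the single, exact Origin value; never "*" with credentials.
    res.header("Access-Control-Allow-Origin", origin);
    res.header("Access-Control-Allow-Credentials", "true");
    res.header("Vary", "Origin"); // keep shared caches per-origin
  }
  if (req.method === "OPTIONS") {
    res.header("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE");
    res.header("Access-Control-Allow-Headers", "Content-Type");
    return res.sendStatus(204); // preflight must get a 200-level response
  }
  next();
});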

What is the motivation behind the introduction of preflight CORS requests?

Cross-origin resource sharing is a mechanism that allows a web page to make XMLHttpRequests to another domain (from Wikipedia).
I've been fiddling with CORS for the last couple of days and I think I have a pretty good understanding of how everything works.
So my question is not about how CORS / preflight work, it's about the reason behind coming up with preflights as a new request type. I fail to see any reason why server A needs to send a preflight (PR) to server B just to find out if the real request (RR) will be accepted or not - it would certainly be possible for B to accept/reject RR without any prior PR.
After searching quite a bit I found this piece of information at www.w3.org (7.1.5):
To protect resources against cross-origin requests that could not originate from certain user agents before this specification existed a
preflight request is made to ensure that the resource is aware of this
specification.
I find this is the hardest to understand sentence ever. My interpretation (should better call it 'best guess') is that it's about protecting server B against requests from server C that is not aware of the spec.
Can someone please explain a scenario / show a problem that PR + RR solves better than RR alone?
I spent some time being confused as to the purpose of the preflight request but I think I've got it now.
The key insight is that preflight requests are not a security thing. Rather, they're a not-changing-the-rules thing.
Preflight requests have nothing to do with security, and they have no bearing on applications that are being developed now, with an awareness of CORS. Rather, the preflight mechanism benefits servers that were developed without an awareness of CORS, and it functions as a sanity check between the client and the server that they are both CORS-aware. The developers of CORS felt that there were enough servers out there that were relying on the assumption that they would never receive, e.g. a cross-domain DELETE request that they invented the preflight mechanism to allow both sides to opt-in. They felt that the alternative, which would have been to simply enable the cross-domain calls, would have broken too many existing applications.
There are three scenarios here:
Old servers, no longer under development, and developed before CORS. These servers may make assumptions that they'll never receive e.g. a cross-domain DELETE request. This scenario is the primary beneficiary of the preflight mechanism. Yes these services could already be abused by a malicious or non-conforming user agent (and CORS does nothing to change this), but in a world with CORS the preflight mechanism provides an extra 'sanity check' so that clients and servers don't break because the underlying rules of the web have changed.
Servers that are still under development, but which contain a lot of old code and for which it's not feasible/desirable to audit all the old code to make sure it works properly in a cross-domain world. This scenario allows servers to progressively opt-in to CORS, e.g. by saying "Now I'll allow this particular header", "Now I'll allow this particular HTTP verb", "Now I'll allow cookies/auth information to be sent", etc. This scenario benefits from the preflight mechanism.
New servers that are written with an awareness of CORS. According to standard security practices, the server has to protect its resources in the face of any incoming request -- servers can't trust clients to not do malicious things. This scenario doesn't benefit from the preflight mechanism: the preflight mechanism brings no additional security to a server that has properly protected its resources.
What was the motivation behind introducing preflight requests?
Preflight requests were introduced so that a browser could be sure it was dealing with a CORS-aware server before sending certain requests. Those requests were defined to be those that were both potentially dangerous (state-changing) and new (not possible before CORS due to the Same Origin Policy). Using preflight requests means that servers must opt-in (by responding properly to the preflight) to the new, potentially dangerous types of request that CORS makes possible.
That's the meaning of this part of the original specification: "To protect resources against cross-origin requests that could not originate from certain user agents before this specification existed a preflight request is made to ensure that the resource is aware of this specification."
Can you give me an example?
Let's imagine that a browser user is logged into their banking site at A.com. When they navigate to the malicious B.com, that page includes some Javascript that tries to send a DELETE request to A.com/account. Since the user is logged into A.com, that request, if sent, would include cookies that identify the user.
Before CORS, the browser's Same Origin Policy would have blocked it from sending this request. But since the purpose of CORS is to make just this kind of cross-origin communication possible, that's no longer appropriate.
The browser could simply send the DELETE and let the server decide how to handle it. But what if A.com isn't aware of the CORS protocol? It might go ahead and execute the dangerous DELETE. It might have assumed that—due to the browser's Same Origin Policy—it could never receive such a request, and thus it might have never been hardened against such an attack.
To protect such non-CORS-aware servers, then, the protocol requires the browser to first send a preflight request. This new kind of request is something that only CORS-aware servers can respond to properly, allowing the browser to know whether or not it's safe to send the actual DELETE.
Why all this fuss about the browser, can't the attacker just send a DELETE request from their own computer?
Sure, but such a request won't include the user's cookies. The attack that this is designed to prevent relies on the fact that the browser will send cookies (in particular, authentication information for the user) for the other domain along with the request.
That sounds like Cross-Site Request Forgery, where a form on site B.com can be submitted to A.com with the user's cookies and do damage.
That's right. Another way of putting this is that preflight requests were created so as to not increase the CSRF attack surface for non-CORS-aware servers.
But POST is listed as a method that doesn't require preflights. That can change state and delete data just like a DELETE!
That's true! CORS does not protect your site from CSRF attacks. Then again, without CORS you are also not protected from CSRF attacks. The purpose of preflight requests is just to limit your CSRF exposure to what already existed in the pre-CORS world.
Sigh. OK, I grudgingly accept the need for preflight requests. But why do we have to do it for every resource (URL) on the server? The server either handles CORS or it doesn't.
Are you sure about that? It's not uncommon for multiple servers to handle requests for a single domain. For example, it may be the case that requests to A.com/url1 are handled by one kind of server and requests to A.com/url2 are handled by a different kind of server. It's not generally the case that the server handling a single resource can make security guarantees about all resources on that domain.
Fine. Let's compromise. Let's create a new CORS header that allows the server to state exactly which resources it can speak for, so that additional preflight requests to those URLs can be avoided.
Good idea! In fact, the header Access-Control-Policy-Path was proposed for just this purpose. Ultimately, though, it was left out of the specification, apparently because some servers incorrectly implemented the URI specification in such a way that requests to paths that seemed safe to the browser would not in fact be safe on the broken servers.
Was this a prudent decision that prioritized security over performance, allowing browsers to immediately implement the CORS specification without putting existing servers at risk? Or was it shortsighted to doom the internet to wasted bandwidth and doubled latency just to accommodate bugs in a particular server at a particular time?
Opinions differ.
Well, at the very least browsers will cache the preflight for a single URL?
Yes. Though probably not for very long. In WebKit browsers the maximum preflight cache time is currently 10 minutes.
Sigh. Well, if I know that my servers are CORS-aware, and therefore don't need the protection offered by preflight requests, is there any way for me to avoid them?
Your only real option is to make sure that your requests use CORS-safe methods and headers. That might mean leaving out custom headers that you would otherwise include (like X-Requested-With), changing the Content-Type, or more.
Whatever you do, you must make sure that you have proper CSRF protections in place, since CORS will not block all unsafe requests. As the original specification puts it: "resources for which simple requests have significance other than retrieval must protect themselves from Cross-Site Request Forgery".
Consider the world of cross-domain requests before CORS. You could do a standard form POST, or use a script or an image tag to issue a GET request. You couldn't make any other request type other than GET/POST, and you couldn't issue any custom headers on these requests.
With the advent of CORS, the spec authors were faced with the challenge of introducing a new cross-domain mechanism without breaking the existing semantics of the web. They chose to do this by giving servers a way to opt-in to any new request type. This opt-in is the preflight request.
So GET/POST requests without any custom headers don't need a preflight, since these requests were already possible before CORS. But any request with custom headers, or PUT/DELETE requests, do need a preflight, since these are new to the CORS spec. If the server knows nothing about CORS, it will reply without any CORS-specific headers, and the actual request will not be made.
Without the preflight request, servers could begin seeing unexpected requests from browsers. This could lead to a security issue if the servers weren't prepared for these types of requests. The CORS preflight allows cross-domain requests to be introduced to the web in a safe manner.
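A sketch of the distinction (example.com stands in for any cross-origin API): the first request below was possible pre-CORS and goes straight out; the second is new to CORS and gets preflighted.

// Form-style POST: a "simple" request, sent without a preflight.
fetch("https://example.com/api", {
  method: "POST",
  body: new URLSearchParams({ q: "hello" }),
});

// DELETE with a custom header: the browser sends OPTIONS first and only
// proceeds if the server answers with the right CORS headers.
fetch("https://example.com/api", {
  method: "DELETE",
  headers: { "X-Requested-With": "XMLHttpRequest" },
});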
CORS allows you to specify more headers and method types than was previously possible with cross-origin <img src> or <form action>.
Some servers could have been (poorly) protected with the assumption that a browser cannot make, e.g., a cross-origin DELETE request or a cross-origin request with an X-Requested-With header, so such requests are "trusted".
To make sure that the server really-really supports CORS, and doesn't just happen to respond to random requests, the preflight is executed.
I feel that the other answers aren't focusing on the reason pre-flight enhances security.
Scenarios:
1) With pre-flight. An attacker forges a request from site dummy-forums.com while the user is authenticated to safe-bank.com
If the server does not check for the origin, and somehow has a flaw, the browser will issue a pre-flight request with the OPTIONS method. The server knows nothing of the CORS response the browser is expecting, so the browser will not proceed (no harm whatsoever).
2) Without pre-flight. An attacker forges the request under the same scenario as above; the browser will issue the POST or PUT request right away, and the server accepts it and might process it, potentially causing some harm.
If the attacker sends a request directly, cross-origin, from some random host, it's most likely a request with no authentication. That's a forged request, but not an XSRF one, so the server will check credentials and fail.
CORS doesn't attempt to prevent an attacker who has the credentials to issue requests, although a whitelist could help reduce this vector of attack.
The pre-flight mechanism adds safety and consistency between clients and servers.
I don't know if this is worth the extra handshake for every request, since caching is hardly usable there, but that's how it works.
Here's another way of looking at it, using code:
<!-- hypothetical exploit on evil.com -->
<!-- Targeting banking-website.example.com, which authenticates with a cookie -->
<script>
jQuery.ajax({
    method: "POST",
    url: "https://banking-website.example.com",
    data: JSON.stringify({
        sendMoneyTo: "Dr Evil",
        amount: 1000000
    }),
    contentType: "application/json",
    dataType: "json"
});
</script>
Pre-CORS, the exploit attempt above would fail because it violates the same-origin policy. An API designed this way did not need XSRF protection, because it was protected by the browser's native security model. It was impossible for a pre-CORS browser to generate a cross-origin JSON POST.
Now CORS comes on the scene – if opting-in to CORS via pre-flight was not required, suddenly this site would have a huge vulnerability, through no fault of their own.
To explain why some requests are allowed to skip the pre-flight, this is answered by the spec:
A simple cross-origin request has been defined as congruent with those
which may be generated by currently deployed user agents that do not
conform to this specification.
To untangle that, GET is not pre-flighted because it is a "simple method" as defined by 7.1.5. (The headers must also be "simple" in order to avoid the pre-flight).
The justification for this is that a "simple" cross-origin GET request could already be performed by e.g. <script src=""> (this is how JSONP works). Since any element with a src attribute can trigger a cross-origin GET, with no pre-flight, there would be no security benefit to requiring a pre-flight on "simple" XHRs.
Additionally, for HTTP request methods that can cause side-effects on
user data (in particular, for HTTP methods other than GET, or for POST
usage with certain MIME types), the specification mandates that
browsers "preflight" the request
Source
In a browser supporting CORS, reading requests (like GET) are already protected by the same-origin policy: A malicious website trying to make an authenticated cross-domain request (for example to the victim's internet banking website or router's configuration interface) will not be able to read the returned data because the bank or the router doesn't set the Access-Control-Allow-Origin header.
However, with writing requests (like POST) the damage is done when the request arrives at the webserver.* A webserver could check the Origin header to determine if the request is legit, but this check is often not implemented because either the webserver has no need for CORS or the webserver is older than CORS and is therefore assuming that cross-domain POSTs are completely forbidden by the same-origin policy.
That is why webservers are given the chance to opt-in into receiving cross-domain write requests.
* Essentially the AJAX version of CSRF.
Aren't the preflighted requests about performance? With preflighted requests, a client can quickly know if the operation is allowed before sending a large amount of data, e.g. JSON with the PUT method, or before sending sensitive data in authentication headers over the wire.
The fact that PUT, DELETE, and other methods, besides custom headers, aren't allowed by default (they need explicit permission via "Access-Control-Allow-Methods" and "Access-Control-Allow-Headers") sounds just like a double-check, because these operations could have more implications for the user's data than GET requests.
So, it sounds like:
"I saw that you allow cross-site requests from http://foo.example, BUT are you SURE that you'll allow DELETE requests? Did you consider the impacts that these requests might cause in the user data?"
I didn't understand the cited correlation between the preflighted requests and the benefits for old servers. A web service that was implemented before CORS, or without CORS awareness, will never receive ANY cross-site request, because its response won't have the "Access-Control-Allow-Origin" header in the first place.

Same origin Policy and CORS (Cross-origin resource sharing)

I was trying to understand CORS. As per my understanding, it is a security mechanism implemented in browsers to avoid any AJAX request to domain other than the one open by the user (specified in the URL).
Now, due to this limitation, CORS was implemented to enable websites to do cross-origin requests. But as per my understanding, implementing CORS defies the security purpose of the "Same Origin Policy" (SOP).
CORS just provides extra control over which requests the server wants to serve. Maybe it can avoid spammers.
From Wikipedia:
To initiate a cross-origin request, a browser sends the request with
an Origin HTTP header. The value of this header is the site that
served the page. For example, suppose a page on
http://www.social-network.example attempts to access a user's data
in online-personal-calendar.example. If the user's browser implements
CORS, the following request header would be sent:
Origin: http://www.social-network.example
If online-personal-calendar.example allows the request, it sends an
Access-Control-Allow-Origin header in its response. The value of the
header indicates what origin sites are allowed. For example, a
response to the previous request would contain the following:
Access-Control-Allow-Origin: http://www.social-network.example
If the server does not allow the cross-origin request, the browser
will deliver an error to social-network.example page instead of
the online-personal-calendar.example response.
To allow access to all pages, a server can send the following response
header:
Access-Control-Allow-Origin: *
However, this might not be appropriate for situations in which
security is a concern.
What am I missing here? What is the intent of CORS: to secure the server, or to secure the client?
Same-origin policy
What is it?
The same-origin policy is a security measure standardized among browsers. The "origin" mostly refers to a "domain". It prevents different origins from interacting with each other, to prevent attacks such as Cross Site Request Forgery.
How does a CSRF attack work?
Browsers allow websites to store information on a client's computer, in the form of cookies. These cookies have some information attached to them, like the name of the cookie, when it was created, when it will expire, who set the cookie etc. A cookie looks something like this:
Cookie: cookiename=chocolate; Domain=.bakery.example; Path=/ [// ;otherData]
So this is a chocolate cookie, which should be accessible from http://bakery.example and all of its subdomains.
This cookie might contain some sensitive data. In this case, that data is... chocolate. Highly sensitive, as you can see.
So the browser stores this cookie. And whenever the user makes a request to a domain on which this cookie is accessible, the cookie would be sent to the server for that domain. Happy server.
This is a good thing. Super cool way for the server to store and retrieve information on and from the client-side.
But the problem is that this allows http://malicious-site.example to send those cookies to http://bakery.example, without the user knowing! For example, consider the following scenario:
# malicious-site.example/attackpage
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://bakery.example/order/new?deliveryAddress="address of malicious user"');
xhr.send();
If you visit the malicious site, and the above code executes, and the same-origin policy was not there, the malicious user would place an order on your behalf, and get the order delivered to his place... and you might not like this.
This happened because your browser sent your chocolate cookie to http://bakery.example, which made http://bakery.example think that you are making the request for the new order, knowingly. But you aren't.
This is, in plain words, a CSRF attack. A forged request was made across sites. "Cross Site Request Forgery". And it would not work, thanks to the same-origin policy.
How does Same-origin policy solve this?
It stops the malicious-site.example from making requests to other domains. Simple.
In other words, the browser would not allow any site to make a request to any other site. It would prevent different origins from interacting with each other through such requests, like AJAX.
However, resource loading from other hosts like images, scripts, stylesheets, iframes, form submissions etc. are not subject to this limitation. We need another wall to protect our bakery from malicious site, by using CSRF Tokens.
CSRF Tokens
As stated, the malicious site can still do something like this without violating the same-origin policy:
<img src='http://bakery.example/order/new?deliveryAddress="address of malicious user"'/>
And the browser will try to load an image from that URL, resulting in a GET request to that URL sending all the cookies. To stop this from happening, we need some server side protection.
Basically, we attach a random, unique token of suitable entropy to the user's session, store it on the server, and also send it to the client with the form. When the form is submitted, the client sends that token along with the request, and the server verifies whether the token is valid.
Now that we have done this, if the malicious website sends the request again, it will always fail, since there is no feasible way for the malicious website to know the token for the user's session.
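A minimal sketch of that flow, assuming an Express app with express-session and urlencoded body parsing already configured (all routes and field names here are made up):

const crypto = require("crypto");

app.get("/order/form", (req, res) => {
  // Attach a random, unique token to the user's session...
  req.session.csrfToken = crypto.randomBytes(32).toString("hex");
  // ...and send it to the client with the form.
  res.send(`<form method="POST" action="/order/new">
    <input type="hidden" name="csrfToken" value="${req.session.csrfToken}">
    <input name="deliveryAddress">
    <button>Order</button>
  </form>`);
});

app.post("/order/new", (req, res) => {
  // Verify the submitted token against the session's copy; the malicious
  // site cannot know it, so forged requests fail here.
  if (req.body.csrfToken !== req.session.csrfToken) {
    return res.sendStatus(403);
  }
  res.send("Order placed.");
});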
CORS
When cross-site requests are required, the policy can be circumvented. This is known as CORS: Cross-Origin Resource Sharing.
This works by having the "domains" tell the browser to chill, and allow such requests. This "telling" thing can be done by passing a header. Something like:
Access-Control-Allow-Origin: //comma separated allowed origins list, or just *
So if http://bakery.example passes this header to the browser, and the page creating the request to http://bakery.example is present in the origin list, then the browser will let the request go, along with the cookies.
There are rules according to which the origin is defined1. For example, different ports for the same domain are not the same origin. So the browser might decline this request if the ports are different. As always, our dear Internet Explorer is the exception to this. IE treats all ports the same way. This is non-standard and no other browser behaves this way. Do not rely on this.
JSONP
JSON with Padding is just a way to circumvent same-origin policy, when CORS is not an option. This is risky and a bad practice. Avoid using this.
What this technique involves is making a request to the other server like following:
<script src="http://badbakery.example/jsonpurl?callback=cake"></script>
Since same-origin policy does not prevent this2 request, the response of this request will be loaded into the page.
This URL would most probably respond with JSON content. But just including that JSON content on the page is not gonna help. It would result in an error, of course. So http://badbakery.example accepts a callback parameter, and modifies the JSON data, sending it wrapped in whatever is passed to the callback parameter.
So instead of returning,
{ user: "vuln", acc: "B4D455" }
which is invalid JavaScript throwing an error, it would return,
cake({user: "vuln", acc:"B4D455"});
which is valid JavaScript; it would get executed, and probably stored somewhere by the cake function, so that the rest of the JavaScript on the page can use the data.
This is mostly used by APIs to send data to other domains. Again, this is a bad practice, can be risky, and should be strictly avoided.
Why is JSONP bad?
First of all, it is very much limited. You can't handle any errors if the request fails (at least not in a sane way). You can't retry the request, etc.
It also requires you to have a cake function in the global scope which is not very good. May the cooks save you if you need to execute multiple JSONP requests with different callbacks. This is solved by temporary functions by various libraries but is still a hackish way of doing something hackish.
Finally, you are inserting random JavaScript code in the DOM. If you aren't 100% sure that the remote service will return safe cakes, you can't rely on this.
References
1. https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy#Definition_of_an_origin
2. https://www.w3.org/Security/wiki/Same_Origin_Policy#Details
Other worthy reads
http://scarybeastsecurity.blogspot.dk/2009/12/generic-cross-browser-cross-domain.html
https://www.rfc-editor.org/rfc/rfc3986 (sorry :p)
https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy
https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)
The Same Origin Policy (SOP) is the policy browsers implement to prevent vulnerabilities via Cross Site Scripting (XSS). This is mainly for protecting the server, as there are many occasions when a server can be dealing with authentication, cookies, sessions, etc.
The Cross Origin Resource Sharing (CORS) is one of the few techniques for relaxing the SOP. Because SOP is "on" by default, setting CORS at the server-side will allow a request to be sent to the server via an XMLHttpRequest even if the request was sent from a different domain. This becomes useful if your server was intended to serve requests from other domains (e.g. if you are providing an API).
I hope this clears up the distinction between SOP and CORS and the purposes of each.

What makes cross domain ajax insecure?

I'm not sure I understand what types of vulnerabilities this causes.
When I need to access data from an API I have to use ajax to request a PHP file on my own server, and that PHP file accesses the API. What makes this more secure than simply allowing me to hit the API directly with ajax?
For that matter, it looks like using JSONP http://en.wikipedia.org/wiki/JSONP you can do everything that cross-domain ajax would let you do.
Could someone enlighten me?
I think you're misunderstanding the problem that the same-origin policy is trying to solve.
Imagine that I'm logged into Gmail, and that Gmail has a JSON resource, http://mail.google.com/information-about-current-user.js, with information about the logged-in user. This resource is presumably intended to be used by the Gmail user interface, but, if not for the same-origin policy, any site that I visited, and that suspected that I might be a Gmail user, could run an AJAX request to get that resource as me, and retrieve information about me, without Gmail being able to do very much about it.
So the same-origin policy is not to protect your PHP page from the third-party site; and it's not to protect someone visiting your PHP page from the third-party site; rather, it's to protect someone visiting your PHP page, and any third-party sites to which they have special access, from your PHP page. (The "special access" can be because of cookies, or HTTP AUTH, or an IP address whitelist, or simply being on the right network — perhaps someone works at the NSA and is visiting your site, that doesn't mean you should be able to trigger a data-dump from an NSA internal page.)
JSONP circumvents this in a safe way, by introducing a different limitation: it only works if the resource is JSONP. So if Gmail wants a given JSON resource to be usable by third parties, it can support JSONP for that resource, but if it only wants that resource to be usable by its own user interface, it can support only plain JSON.
Many web services are not built to resist XSRF, so if a web-site can programmatically load user data via a request that carries cross-domain cookies just by virtue of the user having visited the site, anyone with the ability to run javascript can steal user data.
CORS is a planned secure alternative to XHR that solves the problem by not carrying credentials by default. The CORS spec explains the problem:
User agents commonly apply same-origin restrictions to network requests. These restrictions prevent a client-side Web application running from one origin from obtaining data retrieved from another origin, and also limit unsafe HTTP requests that can be automatically launched toward destinations that differ from the running application's origin.
In user agents that follow this pattern, network requests typically use ambient authentication and session management information, including HTTP authentication and cookie information.
EDIT:
The problem with just making XHR work cross-domain is that many web services expose ambient authority. Normally that authority is only available to code from the same origin.
This means that a user that trusts a web-site is trusting all the code from that website with their private data. The user trusts the server they send the data to, and any code loaded by pages served by that server. When the people behind a website and the libraries it loads are trustworthy, the user's trust is well-placed.
If XHR worked cross-origin, and carried cookies, that ambient authority would be available to code to anyone that can serve code to the user. The trust decisions that the user previously made may no longer be well-placed.
CORS doesn't inherit these problems because existing services don't expose ambient authority to CORS.
The pattern of JS->Server(PHP)->API makes it not only possible, but best and even essential practice, to sanity-check what you get while it passes through the server. In addition to that, things like poisoned local resolvers (aka DNS worms) etc. are much less likely on a server than on some random client.
As for JSONP: This is not a walking stick, but a crutch. IMHO it could be seen as an exploit against a misfeature of the HTML/JS combo, that can't be removed without breaking existing code. Others might think different of this.
While JSONP allows you to unreflectedly execute code from somewhere in the big bad world, nobody forces you to do so. Sane implementations of JSONP always use some sort of hashing etc. to verify that the provider of that code is trustworthy. Again, others might think differently.
With cross-site scripting, you would have a web page that could pull data from anywhere, run it in the same context as the other data on the page, and in theory have access to cookies and other security information that you would not want to give access to. Cross-site scripting would be very insecure in this respect, since you would be able to go to any page, and, if allowed, the script on that page could load data from anywhere and then start executing bad code; hence the reason it is not allowed.
JSONP, on the other hand, allows you to get data in JSON format because you provide the necessary callback that the data is passed into; hence it gives you a measure of control, in that the data will not be executed by the browser unless the callback function does an exec or tries to execute it. The data will be in a JSON format that you can then do whatever you wish with; however, it will not be executed, hence it is safer, and hence the reason it is allowed.
The original XHR was never designed to allow cross-origin requests. The reason was a tangible security vulnerability that is primarily known by CSRF attacks.
In this attack scenario, a third-party site can force a victim's user agent to send forged but valid and legitimate requests to the origin site. From the origin server's perspective, such a forged request is indiscernible from other requests by that user which were initiated by the origin server's web pages. The reason is that it's actually the user agent that sends these requests, and it automatically includes any credentials such as cookies, HTTP authentication, and even client-side SSL certificates.
Now such requests can be easily forged: Starting with simple GET requests by using <img src="…"> through to POST requests by using forms and submitting them automatically. This works as long as it’s predictable how to forge such valid requests.
But this is not the main reason to forbid cross-origin requests for XHR. Because, as shown above, there are ways to forge requests even without XHR and even without JavaScript. No, the main reason that XHR did not allow cross-origin requests is that the response would be sent to the JavaScript in the third party's web page. So it would not just be possible to send cross-origin requests, but also to receive a response that can contain sensitive information, which would then be accessible to that JavaScript.
That’s why the original XHR specification did not allow cross-origin requests. But as technology advances, there were reasonable requests for supporting cross-origin requests. That’s why the original XHR specification was extended to XHR level 2 (XHR and XHR level 2 are now merged) where the main extension is to support cross-origin requests under particular requirements that are specified as CORS. Now the server has the ability to check the origin of a request and is also able to restrict the set of allowed origins as well as the set of allowed HTTP methods and header fields.
Now to JSONP: to get the JSON response of a request in JavaScript and be able to process it, it would either need to be a same-origin request or, in the case of a cross-origin request, your server and the user agent would need to support CORS (of which the latter is only supported by modern browsers). But to be able to work with any browser, JSONP was invented: it is simply a valid JavaScript function call with the JSON as a parameter, which can be loaded as an external JavaScript via <script> that, similar to <img>, is not restricted to same-origin requests. But like any other request, a JSONP request is also vulnerable to CSRF.
So to conclude it from the security point of view:
XHR is required to make requests for JSON resources to get their responses in JavaScript
XHR2/CORS is required to make cross-origin requests for JSON resources to get their responses in JavaScript
JSONP is a workaround to circumvent the cross-origin restriction on XHR
But also:
Forging requests is laughably easy, although forging valid and legitimate requests is harder (but often quite easy as well)
CSRF attacks are a threat not to be underestimated, so learn how to protect against CSRF
