I've created a RESTful API that supports GET/POST/PUT/DELETE requests. Now I want my API to have a JavaScript client library, and I thought I would use JSONP to bypass the cross-domain policy. That works, but of course only for GET requests.
So I started thinking about how to implement such a thing while, at the same time, trying to make it painless to use.
I thought I could edit my API implementation and check every HTTP request. If it's a JSONP request (it has a "callback" parameter in the query string), I force every API method to be executed via a GET request, even if it should be called by other methods like POST or DELETE.
This is not a RESTful approach to the problem, but it works. What do you think?
Maybe another solution could be to dynamically generate an IFrame to send non-GET requests. Any tips?
There are some relevant points on a pretty similar question here:
JSONP Implications with true REST
The cross-domain restrictions are there for a reason ;-)
JSONP allows you to expose a limited, safe, read-only view of the API to cross-domain access. If you subvert that, you're potentially opening up a huge security hole: malicious websites could make destructive calls to your API simply by including an image whose src points to the right part of the API.
Having your webapp expose certain functionality accessed through iframes, where all the AJAX occurs within the context of your webapp's domain, is definitely the safer choice. Even then you still need to take CSRF into consideration. (Take a look at Django's latest security announcement on the Django blog for a prime example: as of a release this week, all JavaScript calls to a Django webapp must be CSRF-validated by default.)
The iframe hack no longer works in recent browsers, so don't use it anymore (source: http://jquery-howto.blogspot.de/2013/09/jquery-cross-domain-ajax-request.html).
Maybe I'm just not understanding this right, but this doesn't seem to make sense to me.
I have an MVC4 project exposing an ASP.NET WebApi. It works great making calls to the API within that project, but obviously making calls to it from another running project (on another port) requires cross-site scripting.
But here's my question: Doesn't this defeat the purpose of an API? If I want to make calls to the reddit API from my site, the fact that this is considered cross-site scripting makes it not only a bad security practice, but in some cases impossible.
If XSS is required to do this, doesn't that make AJAX pretty useless as a whole?
Simple answer: Of course not!! Pretty much the whole of the modern web is built on AJAX; if it were so pointless, it would never have gone from an MS proprietary API to being the backbone of Web 2.0 and all that has come since.
Complex answer: Firstly, XSS is a form of attack/vulnerability, not a form of request. What you're referring to is the same-origin policy, which limits AJAX requests to the same domain, for security reasons.
JSONP is typically used to make async requests to third-party APIs. Your own API will typically sit on the same domain as your website, so you will not have problems. If your API must be on another domain, you can either look at CORS or set up a transparent reverse proxy to forward your requests to the other server.
Hopefully this all makes sense, it'll at least give you a good foundation of knowledge to build from.
Traditionally, most apps have had both a server and a client component. The server component would do all the heavy lifting, including making requests to other APIs. Since the API request is done server-side, the request could go to any remote API server. There was never any thought given to accessing APIs from the client, since people expected the server to do it.
In recent years, we've seen more and more functionality pushed from the server onto the client, specifically through JavaScript. But making remote requests is one of the things that couldn't move to the client, due to the browser's same-origin policy. So it's not that the purpose of the API is defeated; it's that we are now using APIs in ways we didn't conceive of before.
It would be irresponsible for browsers to suddenly ignore the same-origin policy. This would break the thousands of sites out there that depend on the same-origin policy for security. So instead, the W3C has proposed the Cross-Origin Resource Sharing (CORS) spec. The CORS spec allows requests to be made across domains, but does so securely by letting the server have the final say in who can access the API. This makes cross-domain requests possible without breaking existing APIs.
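As a rough illustration (the origin, port, and payload below are made up), a minimal Node.js handler granting cross-origin read access to a single trusted origin could look like this:

// Minimal sketch: a server opting in to CORS for one trusted origin.
// The origin, port, and payload are hypothetical.
var http = require('http');

http.createServer(function (req, res) {
  // The server has the final say: only this origin may read the response.
  res.setHeader('Access-Control-Allow-Origin', 'https://app.example.com');
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST');
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ message: 'readable by app.example.com only' }));
}).listen(8080);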
I know all about SQL injection, and about peeking into the JavaScript files that a website uses, and also that GET requests carry all of their information in the URL.
Is there any security concern that is special to AJAX and only pertains to using AJAX?
For example, sending POST requests via AJAX seems completely safe to me. Barring SQL injection, I can't think of one thing that could go wrong... is that correct?
Also, are "requests" of any kind that a user's browser sends or any information it receives available to be viewed by a third party who should not be viewing? And can that happen to AJAX post requests ('post' requests specifically; not 'get')?
It's like any other form of data input: validate your values, check the referrer, authenticate the session, use SSL.
I have recently been experimenting with building a cross-domain web API, and wow, has it been a bumpy journey. I have not had any problems with modern browsers such as Chrome, FF and Safari. The problem is with IE, which requires you to use XDR as opposed to $.ajax when making cross-domain calls. First question: if I were using Backbone.js, what is the recommended way of making cross-browser and cross-domain AJAX calls?
Another problem I had with IE was that when you make cross-domain AJAX requests, IE has a bunch of restrictions and limitations, such as "Only text/plain is supported for the request's Content-Type header". Therefore, in my case, I was unable to bind to my model using the C# MVC framework unless I bound it manually.
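To make the restriction concrete, a bare-bones XDomainRequest call looks roughly like the sketch below (the URL and data are placeholders): there is no way to set a JSON Content-Type, so both ends have to parse the body by hand.

// IE8/9 sketch (placeholder URL): XDomainRequest cannot set a Content-Type
// header, so the server sees text/plain and both sides parse manually.
var xdr = new XDomainRequest();
xdr.open('POST', 'http://api.example.com/items');
xdr.onload = function () {
  var data = JSON.parse(xdr.responseText); // manual binding on the client too
};
xdr.send(JSON.stringify({ name: 'test' }));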
Anyway, my second and last question is: how do companies like Instagram, Facebook, and Twitter go about building their APIs? I am not looking for a complete guide, but just want to know how difficult it is.
JSONP
The current standard is using JSONP. It is basically a trick: the JSON payload is wrapped in a single JavaScript function call, and the browser treats the response like a script file and executes it.
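For example (the callback name and data are made up), instead of returning the raw JSON, the server responds with a script-like payload:

// Response to a request ending in ?callback=handleUser (illustrative only):
handleUser({"id": 42, "name": "alice"});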
CORS
Moving forward the way to go is CORS. Sadly browser support (IE) isn't there yet and there are still some implementation differences between the modern browsers that do implement it.
HTTP Method Overloading
Some APIs overload GET and POST requests using an X-HTTP-Method-Override: PUT header or a ?_method=PUT query parameter.
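A rough sketch of the header variant with jQuery (endpoint and payload are invented): the request travels over the wire as POST, and the API treats it as PUT.

// Tunnel a PUT through POST via the override header (illustrative endpoint).
$.ajax({
  url: 'http://api.example.com/items/42',
  type: 'POST',
  headers: { 'X-HTTP-Method-Override': 'PUT' },
  contentType: 'application/json',
  data: JSON.stringify({ name: 'updated' })
});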
easyXDM
A number of API providers implement easyXDM. This tends to be used more when they provide a JavaScript API or widget API where developers load their JS and integrate it directly into the frontend code.
I'm not sure I understand what types of vulnerabilities this causes.
When I need to access data from an API I have to use ajax to request a PHP file on my own server, and that PHP file accesses the API. What makes this more secure than simply allowing me to hit the API directly with ajax?
For that matter, it looks like using JSONP (http://en.wikipedia.org/wiki/JSONP) you can do everything that cross-domain AJAX would let you do.
Could someone enlighten me?
I think you're misunderstanding the problem that the same-origin policy is trying to solve.
Imagine that I'm logged into Gmail, and that Gmail has a JSON resource, http://mail.google.com/information-about-current-user.js, with information about the logged-in user. This resource is presumably intended to be used by the Gmail user interface, but, if not for the same-origin policy, any site that I visited, and that suspected that I might be a Gmail user, could run an AJAX request to get that resource as me, and retrieve information about me, without Gmail being able to do very much about it.
So the same-origin policy is not to protect your PHP page from the third-party site; and it's not to protect someone visiting your PHP page from the third-party site; rather, it's to protect someone visiting your PHP page, and any third-party sites to which they have special access, from your PHP page. (The "special access" can be because of cookies, or HTTP AUTH, or an IP address whitelist, or simply being on the right network — perhaps someone works at the NSA and is visiting your site, that doesn't mean you should be able to trigger a data-dump from an NSA internal page.)
JSONP circumvents this in a safe way, by introducing a different limitation: it only works if the resource is served as JSONP. So if Gmail wants a given JSON resource to be usable by third parties, it can support JSONP for that resource, but if it only wants that resource to be usable by its own user interface, it can support only plain JSON.
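As a rough sketch of that opt-in (plain Node.js, with all names and data invented), the payload is only wrapped in a callback for the resource the provider has chosen to expose:

// Sketch: a provider opting one public resource in to JSONP (illustrative only;
// a real implementation would also validate the callback name).
var http = require('http');
var url = require('url');

http.createServer(function (req, res) {
  var payload = JSON.stringify({ stats: 'public data' });
  var cb = url.parse(req.url, true).query.callback; // present only for JSONP requests
  if (cb) {
    res.setHeader('Content-Type', 'application/javascript');
    res.end(cb + '(' + payload + ');');   // third parties can load this via <script>
  } else {
    res.setHeader('Content-Type', 'application/json');
    res.end(payload);                     // plain JSON for the site's own UI
  }
}).listen(8080);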
Many web services are not built to resist XSRF, so if a website can programmatically load user data via a request that carries cross-domain cookies just by virtue of the user having visited the site, anyone with the ability to run JavaScript can steal user data.
CORS is a planned, secure alternative for cross-origin XHR that solves the problem by not carrying credentials by default. The CORS spec explains the problem:
User agents commonly apply same-origin restrictions to network requests. These restrictions prevent a client-side Web application running from one origin from obtaining data retrieved from another origin, and also limit unsafe HTTP requests that can be automatically launched toward destinations that differ from the running application's origin.
In user agents that follow this pattern, network requests typically use ambient authentication and session management information, including HTTP authentication and cookie information.
EDIT:
The problem with just making XHR work cross-domain is that many web services expose ambient authority. Normally that authority is only available to code from the same origin.
This means that a user that trusts a web-site is trusting all the code from that website with their private data. The user trusts the server they send the data to, and any code loaded by pages served by that server. When the people behind a website and the libraries it loads are trustworthy, the user's trust is well-placed.
If XHR worked cross-origin, and carried cookies, that ambient authority would be available to anyone who can serve code to the user. The trust decisions that the user previously made may no longer be well-placed.
CORS doesn't inherit these problems because existing services don't expose ambient authority to CORS.
The JS -> Server (PHP) -> API pattern makes it possible, and not only best but essential practice, to sanity-check what you receive as it passes through the server. In addition to that, things like poisoned local resolvers (aka DNS worms) are much less likely on a server than on some random client.
As for JSONP: this is not a walking stick but a crutch. IMHO it could be seen as an exploit against a misfeature of the HTML/JS combo that can't be removed without breaking existing code. Others might think differently about this.
While JSONP allows you to blindly execute code from somewhere out in the big wide world, nobody forces you to do so. Sane implementations of JSONP always use some sort of hashing or similar check to verify that the provider of that code is trustworthy. Again, others might think differently.
With cross-site scripting, a web page would be able to pull data from anywhere, run it in the same context as the other data on the page, and in theory gain access to cookies and other security information that you would not want to expose. Cross-site scripting would be very insecure in this respect: you could go to any page and, if it were allowed, the script on that page could load data from anywhere and start executing bad code, which is the reason it is not allowed.
JSONP, on the other hand, lets you get data in JSON format because you provide the callback that the data is passed into, so it gives you a measure of control: the data will not be executed by the browser unless the callback function itself execs or otherwise tries to execute it. The data will be in a JSON format that you can then do whatever you wish with; however, it will not be executed, hence it is safer, and hence the reason it is allowed.
The original XHR was never designed to allow cross-origin requests. The reason was a tangible security vulnerability that is primarily known in the form of CSRF attacks.
In this attack scenario, a third-party site can force a victim's user agent to send forged but valid and legitimate requests to the origin site. From the origin server's perspective, such a forged request is indistinguishable from other requests by that user which were initiated by the origin server's web pages. The reason is that it's actually the user agent that sends these requests, and it will also automatically include any credentials such as cookies, HTTP authentication, and even client-side SSL certificates.
Now such requests can be easily forged, starting with simple GET requests using <img src="…"> and going all the way to POST requests using forms that are submitted automatically. This works as long as it's predictable how to forge such valid requests.
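As a hedged illustration of the POST case (the URL and field names are invented), an attacker's page only needs an auto-submitting form; the victim's cookies are attached by the browser, with no XHR or user action required:

<!-- Hypothetical attacker page: auto-submits a forged POST carrying the victim's cookies. -->
<form id="f" action="https://bank.example.com/transfer" method="POST">
  <input type="hidden" name="amount" value="1000">
</form>
<script>document.getElementById('f').submit();</script>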
But this is not the main reason to forbid cross-origin requests for XHR, because, as shown above, there are ways to forge requests even without XHR and even without JavaScript. No, the main reason that XHR did not allow cross-origin requests is that the response would be delivered to the JavaScript in the third party's web page. So it would not just be possible to send cross-origin requests but also to receive the response, which can contain sensitive information that would then be accessible to that JavaScript.
That's why the original XHR specification did not allow cross-origin requests. But as technology advanced, there were reasonable requests for supporting cross-origin requests. That's why the original XHR specification was extended to XHR level 2 (XHR and XHR level 2 are now merged), where the main extension is support for cross-origin requests under particular requirements, specified as CORS. Now the server has the ability to check the origin of a request and is also able to restrict the set of allowed origins as well as the set of allowed HTTP methods and header fields.
Now to JSONP: to get the JSON response of a request in JavaScript and be able to process it, it would either need to be a same-origin request or, in the case of a cross-origin request, both your server and the user agent would need to support CORS (the latter is only supported by modern browsers). But to work with any browser, JSONP was invented: it is simply a valid JavaScript function call with the JSON as a parameter, and it can be loaded as an external script via <script>, which, similar to <img>, is not restricted to same-origin requests. But like any other request, a JSONP request is also vulnerable to CSRF.
So to conclude it from the security point of view:
XHR is required to make requests for JSON resources to get their responses in JavaScript
XHR2/CORS is required to make cross-origin requests for JSON resources to get their responses in JavaScript
JSONP is a workaround to circumvent the cross-origin restriction on XHR
But also:
Forging requests is laughably easy, although forging valid and legitimate requests is harder (but often quite easy as well)
CSRF attacks are a threat not to be underestimated, so learn how to protect against CSRF
So, I implemented an API provider to be accessed by both web and mobile applications.
Most likely this will not be a large scale project, but I want to maximize my learning experience and geek out where I can.
Anyway, from what I understand, it seems like it's better to put the API provider service and the actual website on separate domains to make scaling easier.
For example, twitter has the website twitter.com and api.twitter.com.
One immediate issue would be dealing with the cross-domain issue with AJAX.
From what I gather, there are two ways to implement cross-domain AJAX:
JSONP: I heard about it, but don't know much beyond the name
Proxy Server: so, my website is built on top of ASP.NET MVC, and I was thinking about creating an APIProxy controller to handle all cross-domain API requests.
That way, I would make an AJAX call via $.ajax(settings) and pass in the website URL that corresponds to the APIProxy controller. The APIProxy controller would then make the appropriate POST calls to the API server, process the JSON responses, and return the response back to the AJAX callback functions.
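Roughly like this (the controller route and parameters are invented, just to illustrate the idea):

// Hypothetical sketch: a same-origin call to the proxy controller, which
// forwards the request to the API domain on the server side.
$.ajax({
  url: '/APIProxy/Forward',                 // same-origin MVC route (made up)
  type: 'POST',
  data: { target: '/v1/items', payload: JSON.stringify({ name: 'test' }) },
  success: function (json) {
    console.log(json);                      // response relayed from the remote API
  }
});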
I heard about flXHR, but I don't want to use Flash because devices like the iPad and a lot of mobile browsers don't support Flash.
Anyway, I just wanted to ask what are some of the best practices in managing a website with the API provider on a separate domain or subdomain.
When you request some JSON, it returns an object or array. Script tags are not subject to the same-domain rule. So instead of making an AJAX call, you would essentially do this:
<script src="http://api.example.com?param1=something&etc"></script>
That would load the JSON, and it would execute as JavaScript.
...But a simple object or array "executing" by itself isn't very useful. So when you request the JSON, you also include the name of a callback function. If the provider sees that a callback was provided, instead of just returning JSON, it actually returns JavaScript: the JSON is passed to your function as an argument.
Then, when the script loads, your function (which you already defined) is called, and given the JSON to work with.
That's JSONP.
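Put together, a minimal hand-rolled version might look like this sketch (the URL and callback name are placeholders):

// Minimal JSONP sketch (placeholder URL and callback name).
function handleData(json) {          // must be defined before the script loads
  console.log(json.param1);
}

var s = document.createElement('script');
s.src = 'http://api.example.com?param1=something&callback=handleData';
document.head.appendChild(s);
// The server responds with: handleData({"param1": "something"});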