I've made a Chrome browser extension that makes some simple GET/POST requests via JavaScript to my Sinatra server.
Every time a POST comes in, I see:
attack prevented by Rack::Protection::HttpOrigin
in the logs. The correct response is being sent back.
I imagine it has something to do with the browser's same-origin policy, but I'm not exactly sure what is going on. Is it expecting a header that is missing?
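For reference, here's a minimal sketch of the kind of POST the extension makes (the URL is hypothetical). The likely relevant detail is that the browser itself attaches an Origin header to cross-origin requests; from an extension it looks something like chrome-extension://<extension-id>, which won't match the Sinatra server's host, and a mismatched Origin is what Rack::Protection::HttpOrigin checks for:

    // The browser adds Origin: chrome-extension://<extension-id> on its own;
    // script cannot set or remove that header.
    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'http://myserver.example.com/endpoint', true); // hypothetical URL
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = function () {
      console.log(xhr.status, xhr.responseText);
    };
    xhr.send(JSON.stringify({ hello: 'world' }));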
This is something I was wondering, but could not get a definitive answer elsewhere.
Is an HTTP GET request asynchronous? Is it the same thing as an AJAX request?
If they're different, are there any major differences?
Not looking for opinions, just definitive answers.
Googling has just repeatedly led me to examples of one or the other.
HTTP is the most common protocol used to transfer data on the web. It's what the browser uses, typically on port 80, for all websites: pages, AJAX, etc.
GET is a particular "verb" used in an HTTP request. A GET request is usually distinct in that it doesn't have a request body and it doesn't expect to modify anything on the server, simply "get" data.
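For instance, a plain GET request on the wire looks something like this (the host and path are just examples):

    GET /index.html HTTP/1.1
    Host: www.example.com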
AJAX requests are essentially HTTP requests made from JavaScript code, rather than from navigation in the browser. They may be GET requests, or they may be other kinds of HTTP requests. Structurally they're no different from any other HTTP request made by the browser, they're just made from code instead of the browser's UI.
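To make that concrete, here's a minimal AJAX GET (the URL is hypothetical); on the wire, it produces an ordinary HTTP GET like the one above:

    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/some/data.json', true); // true = asynchronous
    xhr.onload = function () {
      console.log(xhr.status, xhr.responseText);
    };
    xhr.send();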
There is overlap between these three terms because they're not mutually exclusive versions of the same thing; they're apples and oranges, really. HTTP isn't an alternative to the other two; it would be contrasted with something like FTP. GET isn't an alternative to the other two; it would be contrasted with something like POST.
You can see a lot of this in action by taking a look at your browser's debugging tools. Visiting any reasonably active page (Stack Overflow, for example) will show you a number of requests being made and the server's responses to those requests. As you interact with a page that uses AJAX, watch those requests in the debugging tools and see how they're structured. Load a page or two by navigation and see how those requests are structured.
There's not much to it, really. It's all requests and responses, each of which is simply headers and content.
Ajax is used so web applications can send data to and retrieve data from a server asynchronously (in the background) without interfering with the display and behavior of the existing page.
HTTP GET and HTTP POST are methods in the HTTP protocol, i.e., ways to send and receive data.
If Ajax is the car, the HTTP protocol is the driving laws.
A few examples of everyday browsing that use Ajax:
Facebook Feed - When you scroll to the bottom of Facebook, a loader circle appears and more prior updates are loaded onto your wall. This happens without navigating to another page; the content is retrieved while you stay on the same page.
Google Omnibox Prediction - When you type part of your text in the omnibox, Google suggests completions while you're still typing.
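To make the "in the background" part concrete, here's a minimal sketch in the spirit of the feed example (the endpoint and element id are hypothetical):

    // Fetch more items when the user nears the bottom of the page,
    // without leaving it -- the request runs in the background.
    window.addEventListener('scroll', function () {
      var nearBottom =
        window.innerHeight + window.scrollY >= document.body.offsetHeight - 200;
      if (!nearBottom) return;

      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/feed/more-items', true); // hypothetical endpoint
      xhr.onload = function () {
        // Append the new items to the existing page; a real implementation
        // would also throttle so this doesn't fire repeatedly.
        document.getElementById('feed').insertAdjacentHTML('beforeend', xhr.responseText);
      };
      xhr.send();
    });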
First, work through the basics of GET vs POST.
An Ajax call can use GET, POST, PUT, or any other HTTP method.
To differentiate between an Ajax GET and a normal HTTP GET:
An Ajax GET seems asynchronous because the browser sends the request in the background (on a separate thread) rather than blocking the page.
An Ajax GET request often carries an additional X-Requested-With: XMLHttpRequest header; note that this is added by libraries such as jQuery, not by the browser itself.
A normal GET request is captured in browser history, while an Ajax GET is not.
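To be clear about that header: jQuery and similar libraries set it for you, and with a raw XMLHttpRequest you'd add it yourself (URL hypothetical):

    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/some/data', true);
    // jQuery adds this header automatically; plain XHR does not.
    xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
    xhr.send();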
Many (probably the majority) of AJAX calls are made by a browser on a web page, and that web page has a URL. Is it possible for a web server that's receiving the AJAX request to determine the URL of the web page where the AJAX call was made? I assume there isn't a standard that requires this data in the headers, but perhaps some browsers include that info? Obviously this doesn't apply if the AJAX call was made from a phone app or another application without a URL.
Very generically (though unreliably), check the incoming request headers for Referer. That should give you information about the source page.
Just keep in mind it can be spoofed, absent, etc., and shouldn't be considered bullet-proof (though it doesn't sound like you need it to be anyway).
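A minimal sketch of reading it, assuming a Node.js server purely for illustration (the asker's server could be anything):

    const http = require('http');

    http.createServer(function (req, res) {
      // Node lowercases header names; 'referer' keeps the HTTP spec's
      // historical misspelling.
      const referer = req.headers['referer'];
      console.log('Request came from:', referer || '(no Referer header)');
      res.end('ok');
    }).listen(8080);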
Via Firefox, if I do a GET text/html request to my web app, I get a 200 response back, and then Firefox sends 3 more of the same request right afterward. All return 200s. Does anyone know what would cause this?
Some other observations about the issue:
In Firebug's network tab, only one request shows up. I can only see the extra requests using Tamper Data or another tool that sees the HTTP requests sent from my browser.
This issue does not happen in prior versions of my web app. When I compare the responses that get returned by the two different versions of the web app, I can't see anything that would cause this issue (but then, I really don't know what to look for). The responses are identical except for the web app's cookies, which are different.
This issue happens with JavaScript enabled or disabled.
Something similar is happening with Chrome, though it seems to be sending only 2 extra requests.
I don't see any browser redirects (e.g., a meta refresh) in the HTML head.
This is only happening with text/html requests, not CSS requests, for example.
All 4 responses returned seem to have the complete HTML page in the body, and they also have the cookie that the web app uses.
In Tamper Data, the 'Load Flags' column (whatever that is) says the following: First request is VALIDATE_ALWAYS_LOAD_DOCUMENT_URI LOAD_INITIAL_DOCUMENT_URI; second and third requests are LOAD_NORMAL; fourth request is LOAD_FROM_CACHE VALIDATE_NEVER
I don't see it happening with POSTs.
It does not happen when the response is a 302.
If I go into the firefox config and set network.http.max-connections-per-server to 1, then Firefox only sends one request (the issue does not occur). (I don't think I can ask all our users to do that. :-))
Why this issue is a problem:
This site has been around a long time and wasn't designed to handle duplicate requests, so this behavior is likely to cause problems.
I'm working on extensions for Firefox and Chrome. The data used by my extensions is mostly generated from AJAX requests. The type of data being returned is private, so it needs to be secure. My server supports HTTPS and the AJAX calls are being sent to an HTTPS domain. Information is being sent back and forth, and the extensions are working correctly.
My questions are:
Do the extensions actually make secure connections with the server, or is this considered the same as cross-domain posting, i.e., sending a request from an HTTP page to an HTTPS page?
Am I putting my users' information at more risk during the transfers than if the user were to access the information directly from an HTTPS web page in the browser?
Thanks in advance!
The browser absolutely makes a secure connection when you use HTTPS. Certainly, a browser would never downgrade the security of your connection without telling you: it will either complete the request as written or throw some sort of error if that is not possible.
Extensions for both Chrome and Firefox are permitted to make cross-domain AJAX requests. In Chrome, you simply need to supply the protocol/name of the host as a permission in your manifest.json. In Firefox, I think you may need to use Components.classes to get a cross-domain requester, as described in the MDN page for Using XMLHttpRequest, but I'm not 100% sure about that. Just try doing a normal request and see if it succeeds; if not, use the Components.classes solution.
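For the Chrome side, the host permission goes in manifest.json. A minimal sketch in the manifest v2 style the answer describes (extension name and host are hypothetical):

    {
      "name": "My Extension",
      "version": "1.0",
      "manifest_version": 2,
      "permissions": [
        "https://secure.example.com/*"
      ]
    }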
I have done a bit of testing on this myself (during the server-side processing of a DWR Framework Ajax request handler, to be exact), and it seems you CAN successfully manipulate cookies, but this goes against much of what I have read about Ajax best practices and how browsers interpret the response from an XmlHttpRequest. Note I have tested on:
IE 6 and 7
Firefox 2 and 3
Safari
and in all cases standard cookie operations on the HttpServletResponse object during Ajax request handling were correctly interpreted by the browser. But I would like to know whether it is best practice to push the cookie manipulation to the client side, or whether this (much cleaner) server-side cookie handling can be trusted.
I would welcome answers both specific to the DWR Framework and Ajax in general.
XMLHttpRequest always uses the web browser's connection framework. This is a requirement for AJAX programs to work correctly, as the user would get logged out if the XHR object lacked access to the browser's cookie pool.
It's theoretically possible for a web browser to share session cookies without sharing the browser's connection framework, but this has never (to my knowledge) happened in practice. Even the Flash plugin uses the web browser's connections.
Thus the end result is that it IS safe to manipulate cookies via AJAX. Just keep in mind that the AJAX call might never happen. They are not guaranteed events, so don't count on them.
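A minimal sketch of why this works (the endpoint is hypothetical): a Set-Cookie header on an XHR response goes into the browser's ordinary cookie pool, just as with a page load:

    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/touch-cookie', true); // server replies with a Set-Cookie header
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4) {
        // Any non-HttpOnly cookie set by that response is visible here and
        // will be sent automatically on subsequent requests.
        console.log(document.cookie);
      }
    };
    xhr.send();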
In the context of DWR it may not be "safe".
The DWR site says:
It is important that you treat the HTTP request and response as read-only. While HTTP headers might get through OK, there is a good chance that some browsers will ignore them.
I've taken this to mean that setting cookies or request attributes is a no-no.
That said, I have code which does set request attributes (code I wrote before I read that page) and it appears to work fine (apart from deleting cookies, which I mentioned in my comment above).
Manipulating cookies on the client side is rather the opposite of "best practice". And it shouldn't be necessary, either. HttpOnly cookies weren't introduced for nothing.
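For illustration of that last point (cookie name and value are made up): the server marks a cookie HttpOnly in its response header, and script on the page can then never read it, so it can't be managed client-side at all:

    // Response header sent by the server:
    //   Set-Cookie: session=abc123; HttpOnly; Secure
    // In page script, the HttpOnly cookie never shows up:
    console.log(document.cookie); // "session" will not appear here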