I'm writing a web service in Sinatra. I use middleware to add simple, dumb CORS support (by simply stamping Access-Control-Allow-Origin: * onto every response). I know there's a gem with more robust support, but this does what I want in three lines of code. Except...
Except that if @app.call(env) raises an exception, I never get a chance to modify the headers. The exception bubbles all the way up to Rack::ShowExceptions, and I can't find a way to inject my extra header into its response.
Do I have to stop using Rack::ShowExceptions? Do I have to monkey-patch it? Should I put more middleware further down the stack that catches error responses from non-CORS-aware handlers and adds the headers? I'm not sure I know how to do any of those.
Turns out I guessed right: I simply had to make sure that use DumbCorsSupport came before use Rack::ShowExceptions. It's worth noting, though, that at least in the browsers I'm using, if your 500 page doesn't offer the same CORS headers as the rest of your service, your client-side error callbacks won't fire!
I noticed that an AJAX request was taking too long, and apparently, after a while, Chrome decided to execute the request again. So I ended up with two duplicate AJAX requests (and thus two database entries).
I googled this Chrome behaviour, and someone said it was possible to tell Chrome not to retry AJAX requests after a timeout with a "do not retry" flag.
Questions:
Is it possible in Angular to set up $http with this flag?
Any other solution? The first time I had this problem I thought it was a double-click problem, but now I'm pretty sure it's not. (In fact, I always disable buttons while the request is in flight.)
I feel like I don't have enough information, so don't downvote me if this is wrong. But when you say a "do not retry" flag, do you mean a custom header? If so, you can attach custom headers in your config like this, which will attach the header to all HTTP requests throughout your application (I've personally used this to include the X-Requested-With header):
.config(["$httpProvider", function($httpProvider) {
$httpProvider.defaults.headers.common['do-not-retry'] = 'true';
}])
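For what it's worth, I don't know of any standard "do not retry" header that browsers respect, so the server would have to be written to look for whatever you send. A more robust guard against duplicate side effects is to send a client-generated ID with each logical request and have the server ignore repeats. A rough sketch (the X-Request-Id header name and the 60-second timeout are my own choices, not an established convention your server will automatically understand):

.factory("dedupingPost", ["$http", function($http) {
    // One ID per logical request: if the browser retries, the server sees
    // the same ID again and can discard the duplicate instead of inserting
    // a second database entry.
    return function(url, data) {
        var requestId = Date.now() + "-" + Math.random().toString(36).slice(2);
        return $http.post(url, data, {
            headers: { "X-Request-Id": requestId },
            timeout: 60000 // fail client-side after 60s instead of hanging
        });
    };
}])

The timeout option (a number of milliseconds, or a promise) is standard $http config; the deduplication itself has to be implemented on the server.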
I'm using AFNetworking 2.0 to receive a response from the server. The first response works fine. However, after I change the data on the admin site, and verify in a browser that the change was made, I run the app again and still get the previous response. I don't understand why. It seems that AFNetworking is caching the old response, but I want to download the current feed. Can anyone help?
I had the exact opposite problem: I was getting the same image from my server twice, so AFNetworking wasn't caching. As I debugged it, I realized that I was calling two slightly different URLs; in one case I was specifying an option that was the default on the server.
So this gave me the idea for a workaround hack for you. It isn't the right answer, but it should work: just pass a useless parameter to the server, and change that parameter on each call.
https://example.com/myrequest?index=0
then
https://example.com/myrequest?index=1
where index is the unused parameter.
Note: this is actually a pretty gross hack. It should get you running, but you really should find the "correct" answer.
I found the source of my problem with SuperAgent (http://visionmedia.github.com/superagent/) on Firefox. I'm not sure if SuperAgent is doing it in its AJAX call or if Firefox is triggering it.
Essentially, every time I make an AJAX call, an OPTIONS request is fired at the URL before the actual AJAX call. Quite annoying, since the server currently doesn't support OPTIONS. How can I make the call without it going to crap and without re-coding the server?
Thanks
OK, found out some more details. Thankfully, testing on Safari gave me more insight into what was actually happening, and I applied my knowledge here.
It seems to be standard behaviour for browsers to send an OPTIONS request before making the actual cross-origin AJAX call. Seems a bit overbearing.
So to get around it, I simply added a catch-all in my reverse proxy server to handle each OPTIONS call. You can see the question below for the code:
Play! 2.0 easy fix to OPTIONS response for router catch-all?
And if you want to read up more on why browsers are doing this, see here:
Why am I getting an OPTIONS request instead of a GET request?
OPTIONS is from the CORS standard.
Disabling web security in PhantomJS (--web-security=no) also helped to resolve this problem, because I didn't have access to the API server to make changes for the OPTIONS method.
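One more option, for what it's worth: the preflight is only sent for "non-simple" cross-origin requests. If you can live with form encoding instead of JSON, a sketch like this (the URL is a placeholder) should keep the request "simple" and skip the OPTIONS round trip, as long as you don't add custom headers:

// SuperAgent's browser build exposes a global; .type("form") switches the
// body to application/x-www-form-urlencoded, which is one of the content
// types CORS treats as "simple", so no OPTIONS preflight is required.
superagent
    .post("http://api.example.com/things") // placeholder URL
    .type("form")
    .send({ q: "test" })
    .end(function (res) {
        // note: newer SuperAgent versions pass (err, res) instead
        console.log(res.status);
    });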
I am currently working on a Django project. There is a problem which isn't a true bug but is very annoying: often, when I try to debug my Django app by setting breakpoints, I get this error on the server end:
error: [Errno 32] Broken pipe
After reading this other post, Django + WebKit = Broken pipe, I learned that this has nothing to do with the server; it's down to the client browser being used. Basically, the browser has an HTTP request timeout, and if it doesn't receive a response within that window, it closes the connection to the server.
I find this timeout isn't really needed during debugging; indeed, it causes headaches. Is there any way I can lift it or increase it for my browser (Chrome)? Or is there a substitute browser without this constraint?
Note: Although I am using Django and have mentioned it, this isn't a Django-specific question. It's more a question about how to make my debugging process more effective.
I prefer using the Linux/Unix curl command for debugging web applications. It's a good approach, especially if you want to focus on one specific request: for example, a POST that doesn't work for some set of parameters, or cookies that aren't set as expected.
Of course it may take some time at the beginning to find out how to use it, but then you will have total control over every single piece of the request: timeouts, cookies, headers and so on. It's very helpful, because you can be sure that what you wanted to send is actually sent (no additional data is added by the web browser).
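For example, a typical debugging call against a local Django server might look like this (the URL, cookie, and form fields are placeholders, not anything your app will recognize). Note that curl applies no response timeout by default, so unlike a browser it will wait patiently while you sit at a breakpoint:

curl -v \
    --cookie "sessionid=abc123" \
    -H "X-Requested-With: XMLHttpRequest" \
    -d "field1=value1&field2=value2" \
    http://localhost:8000/my/view/

The -v flag prints the exact request and response headers, and -d both supplies the form body and switches the request to POST.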
There are several topics about the problem with cross-domain AJAX. I've been looking at these and the conclusion seems to be this:
Apart from using something like JSONP, or a proxy solution, you should not be able to do a basic jQuery $.post() to another domain.
My test code looks something like this (running on "http://myTestdomain.tld/path/file.html"):
var myData = {datum1: "datum", datum2: "datum"};
$.post("http://External-Ip:port", myData, function(response) { alert(response); });
When I tried this (the reason I started looking), the Chrome console told me:
XMLHttpRequest cannot load http://External-IP:port/page.php. Origin http://myTestdomain.tld is not allowed by Access-Control-Allow-Origin.
Now this is, as far as I can tell, expected: I should not be able to do this. The problem is that the POST actually DOES come through. I've got a simple script running that saves the $_POST to a file, and it is clear the POST gets through. Any real data I return is not delivered to my calling script, which again seems expected because of the access-control issue. But the fact that the POST actually arrived at the server got me confused.
Is it correct that I assume the above code, running on "myTestdomain", should not be able to do a simple $.post() to the other domain (External-IP)?
Is it expected that the request actually arrives at the external IP's script, even though no output is received? Or is this a bug? (I'm using Chrome 11.0.696.60.)
I posted a ticket about this on the WebKit bugtracker earlier, since I thought it was weird behaviour and possibly a security risk.
Since security-related tickets aren't publicly viewable, I'll quote the reply from Justin Schuh here:
This is implemented exactly as required by the spec. For simple cross-origin requests <http://www.w3.org/TR/cors/#simple-method> there is no pre-flight check; the request is made and the response cannot be read if the appropriate headers do not authorize the requesting origin. Functionally, this is no different than creating a form and using script to make an off-origin POST (which has always been possible).
So: you're allowed to do the POST, since you could have done that anyway by embedding a form and triggering the submit button with JavaScript, but you can't see the result, because you wouldn't be able to see it in the form scenario either.
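To make the equivalence concrete, this is roughly the form trick the reply refers to (same placeholder endpoint as in the question):

// Build and submit an off-origin form from script. The POST reaches the
// server just like the XHR does, but the page navigates away and the
// response is never exposed to the originating script.
var form = document.createElement("form");
form.method = "POST";
form.action = "http://External-Ip:port"; // placeholder from the question
var field = document.createElement("input");
field.type = "hidden";
field.name = "datum1";
field.value = "datum";
form.appendChild(field);
document.body.appendChild(form);
form.submit();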
A solution would be to add a header to the script running on the target server, e.g.
<?php
header("Access-Control-Allow-Origin: http://your_source_domain");
....
?>
Haven't tested that, but according to the spec, that should work.
Firefox 3.6 seems to handle it differently, by first doing an OPTIONS request to see whether or not it can do the actual POST. Firefox 4 does the same thing Chrome does, or at least it did in my quick experiment. There's more about this at https://developer.mozilla.org/en/http_access_control
The important thing to note about the JavaScript same-origin policy restriction is that it is something built into modern browsers for security - it is not a limitation of the technology or something enforced by servers.
To answer your question, neither of these are bugs.
Requests are not stopped from reaching the server - this gives the server the option to allow these cross-domain requests by setting the appropriate headers.
The response is also received back by the browser. Before the use of the access-control headers, responses to cross-domain requests would be stopped dead in their tracks by a security-conscious browser - the browser would receive the response but would not hand it off to the script. With the access-control headers, the server has the option of indicating to a compliant browser that it would like to allow certain origin URLs to make cross-domain requests.
The exact behaviour on the response side might differ between browsers - I can't recall for sure now, but I think Chrome calls the success callback when using jQuery's ajax(), just with an empty response. IIRC, Firefox will not invoke the success callback at all.
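If you want to see which callback your browser actually fires, a quick sketch like this (same placeholder endpoint as in the question) makes the behaviour visible:

// With no Access-Control-Allow-Origin header on the response, expect the
// error path in most browsers (typically with xhr.status of 0); as noted
// above, some browsers may instead fire success with an empty body.
$.ajax({
    url: "http://External-Ip:port/page.php", // placeholder from the question
    type: "POST",
    data: { datum1: "datum", datum2: "datum" },
    success: function (data) { console.log("success, response:", data); },
    error: function (xhr) { console.log("blocked, status:", xhr.status); }
});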
The same thing happens for me. You are able to POST across domains but not to receive a response. That matches what I expected, and it happens for me in Firefox, Chrome, and IE.
One way to sort of get around this caveat is to have a local PHP file which fetches the data via cURL and passes the response back to your JavaScript. (Kind of restating what you said you knew already.)
Yes, it's correct, and you won't be able to do that unless you use some kind of proxy.
No, the request won't reach the external IP, since there is such a limitation.