Ajax call from local HTML page in WebKit Qt

I'm trying to perform an Ajax/XMLHttpRequest from within a local HTML file in a Qt 4.7 RC QWebView. It consistently fails with an empty responseText and status 0. I've set the following:
page->settings()->setAttribute(QWebSettings::LocalContentCanAccessRemoteUrls,true);
but it has no effect (though I can load remote images without problems). It seems to be a known issue, and I'm not sure whether a solution exists yet:
https://bugs.webkit.org/show_bug.cgi?id=31875
Any ideas for a workaround would be very helpful. Basically, what I'm trying to do is run an HTML/JavaScript web app in QWebView that talks to a local server at 127.0.0.1, and this problem is a show-stopper. Interestingly, the actual query is sent and my server responds with 200 and the requested data, but the response never arrives in my JavaScript callbacks.
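For reference, a minimal sketch of the kind of request that fails (the port and path are made up); the request goes out and the server answers 200, but under this bug the callback sees status 0 and an empty responseText:
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://127.0.0.1:8080/data", true); // hypothetical port and path
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4) {
    alert(xhr.status);       // 0 instead of 200
    alert(xhr.responseText); // "" instead of the payload
  }
};
xhr.send();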

I'm not sure about your question, but are you sure you aren't running into the AJAX same-origin sandbox that WebKit enforces? In Firefox, IE and others, using AJAX across different domains does not work. For example, http://demo1.demo.com is a different origin than demo2.demo.com.

Related

How do I prevent SuperAgent AJAX from calling OPTIONS?

I found the source of my problem with SuperAgent (http://visionmedia.github.com/superagent/) on Firefox. I'm not sure if SuperAgent is doing it in its AJAX call or if Firefox is triggering it.
Essentially, every time I make an AJAX call, an OPTIONS request is fired at the URL before the actual AJAX call. This is quite annoying, since the server currently doesn't support OPTIONS. How can I make the call without everything breaking and without re-coding the server?
Thanks
OK, I found out some more details. Thankfully, testing on Safari gave me more insight into what was actually happening, and I applied that knowledge here.
It turns out to be standard behaviour: browsers send an OPTIONS preflight before certain cross-origin AJAX calls. It seems a bit overbearing.
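As far as I can tell, the trigger in my case is that the request is "non-simple" under CORS: SuperAgent serializes object bodies as application/json by default, and that content type alone forces a preflight. A sketch (the URL is made up):
var request = require("superagent"); // or window.superagent in the browser

// Object bodies go out as application/json by default, which is a
// non-simple content type under CORS, so the browser preflights with OPTIONS:
request
  .post("http://api.example.com/things")
  .send({ name: "foo" })
  .end(function (res) { console.log(res.status); });

// Forcing a "simple" content type avoids the preflight:
request
  .post("http://api.example.com/things")
  .type("form") // application/x-www-form-urlencoded
  .send({ name: "foo" })
  .end(function (res) { console.log(res.status); });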
So to get around it I simply added a catch-all in my reverse proxy server to handle each OPTIONS call. You can see the question below for the code:
Play! 2.0 easy fix to OPTIONS response for router catch-all?
And if you want to read up more on why browsers are doing this, see here:
Why am I getting an OPTIONS request instead of a GET request?
OPTIONS is from the CORS standard.
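The linked question shows the Play! 2.0 routes/controller version; purely as a sketch of the same catch-all idea, a Node/Express version might look like this (header values are illustrative):
var express = require("express");
var app = express();

// Answer every OPTIONS preflight before it reaches the real routes.
app.use(function (req, res, next) {
  res.header("Access-Control-Allow-Origin", "*"); // tighten to your origin in production
  res.header("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
  res.header("Access-Control-Allow-Headers", "Content-Type, X-Requested-With");
  if (req.method === "OPTIONS") return res.sendStatus(204);
  next();
});

app.listen(3000);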
Disabling web security in PhantomJS also helped resolve this problem (--web-security=no), because I didn't have access to the API server to add handling for the OPTIONS method.

How to debug a failed ajax request in google chrome?

I have a web application that crashes on AJAX requests in Google Chrome (it works in every other web browser it was tested in). After debugging, I found that the error is caused by response.responseText being undefined. The xhr object looks like this:
argument: undefined
isAbort: false
isTimeout: undefined
status: 0
statusText: "communication failure"
tId: 3
In the debugger's 'Network' tab I get "(failed)"; however, all the headers are there and I can even copy the response body (which is valid JSON) to the clipboard.
My question is: how can I debug this problem? Where can I find additional information about what causes this request to fail?
I finally found the solution to my problem: AdBlock. When it blocks an AJAX request, it just says "communication failure".
The first thing I would double-check is that the data coming back in the response is valid JSON. Just pass it through a JSON validator like JSONLint: http://jsonlint.com/
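You can also do the same check straight from the Chrome console; paste the copied response body into a string variable first (the variable name is just a placeholder):
try {
  JSON.parse(copiedResponseBody); // placeholder: the body you copied from the Network tab
  console.log("valid JSON");
} catch (e) {
  console.error("invalid JSON: " + e.message);
}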
I assume that you are using something like jQuery to make your AJAX requests. If so, make sure that you are using the development (uncompressed) version of the library. Then find the particular function that you are using (e.g. $.ajax), use the Chrome inspector to insert a breakpoint where the AJAX response is first handled (e.g. https://github.com/jquery/jquery/blob/master/src/ajax.js#L579), and step through the code, inspecting the various return values to see exactly what is going wrong.
If you are not using something like jQuery to make AJAX calls, then I'd recommend using a framework to avoid possible cross-browser compatibility issues like you might be experiencing right now.
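One more low-level option, as a sketch: replay the request with a bare XMLHttpRequest and log every lifecycle event. A request killed by an extension or by the network layer typically fires "error" with status 0 (the URL is a placeholder):
var xhr = new XMLHttpRequest();
xhr.open("GET", "/your/ajax/endpoint", true); // placeholder URL
["loadstart", "load", "error", "abort", "loadend"].forEach(function (type) {
  xhr.addEventListener(type, function () {
    console.log(type, "status:", xhr.status);
  });
});
xhr.send();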

How can I scrape an image that doesn't have an extension?

Sometimes I come across an image that I can't scrape and save. An example of this is:
https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487
When I hit the URL from Internet Explorer I see the image, but when I try to fetch it with the code below, GetResponse throws "System.Net.WebException: The remote server returned an error: (403) Forbidden":
string url = "https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487";
WebRequest request = WebRequest.Create(url);
WebResponse response = request.GetResponse(); // throws WebException: 403 Forbidden
Any ideas on how to get this image?
Edit:
I am able to save images that do have extensions. For example, I can scrape the following image just fine:
https://s3.amazonaws.com/plumdistrict.com-production/perks/12659/image/original.jpg?1326828951
Although HTTP was originally supposed to be stateless, there are a lot of implementations that rely on it not being stateless. I could configure my web server to only accept requests for "http://mydomain.com/sexy_avatar.jpg" if you provide a cookie proving you are logged in; if not, I send you a 303 redirect to "http://mydomain.com/avatar_for_public_use.jpg".
Amazon could be doing the same. Try loading the web page in Chrome and look at the Network view in the developer tools (Ctrl+Shift+J) to see all the headers supplied to the website. Maybe you even need to do a full navigation in the same session before you are allowed to see the image. This is certainly the case in many web applications I have developed :-)
Well, it looks like the image is being generated by a script (possibly retrieved from a database). The server should be sending a Content-Type header along with it... but it doesn't seem to be, which I believe is a violation of the standard.
My Linux box knows full well that it's a JPEG image once it's on my hard drive, because it examines file headers rather than relying on extensions. Perhaps there is a tool that does the same on Windows?
Edit: Actually, on further contemplation, it seems odd that you'd get a 403 for that. Perhaps the server is actually blocking you from retrieving the file in that manner.

Cross domain ajax POST in chrome

There are several topics about the problems with cross-domain AJAX. I've been looking at them, and the conclusion seems to be this:
Apart from using something like JSONP or a proxy solution, you should not be able to do a basic jQuery $.post() to another domain.
My test code looks something like this (running on "http://myTestdomain.tld/path/file.html"):
var myData = { datum1: "datum", datum2: "datum" };
// "return" is a reserved word and can't be used as a parameter name
$.post("http://External-Ip:port", myData, function (response) { alert(response); });
When I tried this (the reason I started looking), chrome-console told me:
XMLHttpRequest cannot load
http://External-IP:port/page.php. Origin
http://myTestdomain.tld is not allowed
by Access-Control-Allow-Origin.
Now this is, as far as I can tell, expected: I should not be able to do this. The problem is that the POST actually DOES come through. I've got a simple script running that saves the $_POST to a file, and it is clear the POST gets through. Any real data I return is not delivered to my calling script, which again seems expected because of the access-control issue. But the fact that the POST actually arrived at the server got me confused.
Is it correct that I assume that above code running on "myTestdomain" should not be able to do a simple $.post() to the other domain (External-IP)?
Is it expected that the request actually arrives at the external IP's script, even though the output is not received? Or is this a bug? (I'm using Chrome 11.0.696.60.)
I posted a ticket about this on the WebKit bugtracker earlier, since I thought it was weird behaviour and possibly a security risk.
Since security-related tickets aren't publicly viewable, I'll quote the reply from Justin Schuh here:
This is implemented exactly as required by the spec. For simple cross-origin requests (http://www.w3.org/TR/cors/#simple-method) there is no pre-flight check; the request is made and the response cannot be read if the appropriate headers do not authorize the requesting origin. Functionally, this is no different than creating a form and using script to make an off-origin POST (which has always been possible).
So: you're allowed to do the POST, since you could have done that anyway by embedding a form and triggering the submit button with JavaScript, but you can't see the result, because you wouldn't be able to see it in the form scenario either.
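To make the "form scenario" concrete, here is the equivalent that has always been possible: it fires the same cross-origin POST, and your script likewise never gets to read the response (a sketch; the field name and URL are made up):
var form = document.createElement("form");
form.method = "POST";
form.action = "http://External-Ip:port/page.php"; // made-up target

var field = document.createElement("input");
field.type = "hidden";
field.name = "datum1";
field.value = "datum";
form.appendChild(field);

document.body.appendChild(form);
form.submit(); // the POST is sent, the page navigates, and no script reads the reply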
A solution would be to add a header to the script running on the target server, e.g.
<?php
header("Access-Control-Allow-Origin: http://your_source_domain");
// ... rest of the script
?>
Haven't tested that, but according to the spec, that should work.
Firefox 3.6 seems to handle it differently, by first doing an OPTIONS request to see whether or not it can do the actual POST. Firefox 4 does the same thing Chrome does, or at least it did in my quick experiment. More about that is at https://developer.mozilla.org/en/http_access_control
The important thing to note about the JavaScript same-origin policy restriction is that it is something built into modern browsers for security - it is not a limitation of the technology or something enforced by servers.
To answer your question, neither of these are bugs.
Requests are not stopped from reaching the server - this gives the server the option to allow these cross-domain requests by setting the appropriate headers.
The response is also received by the browser. Before the access-control headers existed, responses to cross-domain requests would be stopped dead in their tracks by a security-conscious browser - the browser would receive the response but would not hand it off to the script. With the access-control headers, the server can set headers indicating to a compliant browser that it would like to allow certain origin URLs to make cross-domain requests.
The exact behaviour on response might differ between browsers - I can't recall for sure now but I think Chrome calls the success callback function when using jQuery's ajax() but the response is empty. IIRC, Firefox will not invoke the success function.
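Hedged, since behaviour differed between versions, but the difference I remember looks roughly like this with jQuery (same made-up URL as above):
$.ajax({
  url: "http://External-Ip:port/page.php", // made-up target
  type: "POST",
  data: myData,
  success: function (data, textStatus, jqXHR) {
    // Chrome (at the time): success could fire with an empty response body
    console.log("success, body:", JSON.stringify(data));
  },
  error: function (jqXHR) {
    // Firefox (at the time): typically ended up here with status 0
    console.log("error, status:", jqXHR.status);
  }
});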
I get the same thing happening for me: you are able to POST across domains but are not able to receive a response. This is what I expected, and it happens for me in Firefox, Chrome, and IE.
One way to work around this caveat is to have a local PHP file that fetches the data via cURL and relays the response to your JavaScript. (Kind of restating what you said you knew already.)
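Something like this, sketched in Node.js rather than PHP (host and port are placeholders): serve it from the page's own origin and point the $.post() at it instead of the external IP.
var http = require("http");

http.createServer(function (clientReq, clientRes) {
  var upstream = http.request({
    host: "external-ip",          // placeholder: the real target host
    port: 8080,                   // placeholder: the real target port
    path: clientReq.url,
    method: clientReq.method,
    headers: clientReq.headers
  }, function (upstreamRes) {
    clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
    upstreamRes.pipe(clientRes);  // relay the body back to the browser
  });
  upstream.on("error", function () { clientRes.writeHead(502); clientRes.end(); });
  clientReq.pipe(upstream);       // relay the POST body upstream
}).listen(3000);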
Yes, it's correct: you won't be able to do that unless you use a proxy.
No, the request won't get through to the external IP as long as that restriction is in place.

Allowing Cross domain ajax calls from firefox

I want to change Firefox's settings so that it allows cross-domain AJAX calls; because of its security model, Firefox doesn't allow them (calls within the same domain are fine). I have the code given below, which works fine in Safari, but Firefox doesn't display the results: since the code runs from my local machine and calls the csce server, Firefox blocks the request and returns an error. I know it would start working if I uploaded the code to the csce server, but I want to run it from my machine. Can anyone help me resolve this? I have spent the past couple of days just searching for a solution.
Kindly suggest how to achieve this, or should I go with some older version of Firefox?
I googled and set the browser parameters in the config file as specified on this site, but it still doesn't work:
http://code.google.com/p/httpfox/issues/detail?id=20
Maybe you could use Privoxy and tell it to inject something like "Access-Control-Allow-Origin: *" into the server response.
To do this, go into the file user.filter (create it if it doesn't exist) in Privoxy's configuration directory and insert something like this:
SERVER-HEADER-FILTER: allow-crossdomain
s|Server: .*|Access-Control-Allow-Origin: *|
Instead of Server, you can also use any other header that's always present and you don't need.
And this into user.action:
{+server-header-filter{allow-crossdomain}}
csce.unl.edu
Note: I didn't test it.
https://developer.mozilla.org/En/HTTP_access_control
http://config.privoxy.org/user-manual/
This appears to enable cross-domain requests from file:// pages in Firefox 4, although it prompts the user, so it might not be suitable for more than simple test pages:
netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
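A sketch of how it was typically used; note the call has to happen inside the same function that makes the request, and this API only ever worked in older Firefox builds (it was removed later):
// Works only from file:// pages in old Firefox; prompts the user for permission.
// The privilege is scoped to the calling function, so request from the same one.
function fetchCrossDomain(url, callback) {
  netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) callback(xhr.responseText);
  };
  xhr.send();
}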
