WebKit under Windows with PyQt doesn't get remote resources via XHR

I would like to write a Qt application that uses WebKit as its GUI to get data from a server and display it. I got it working under Linux and OS X without problems, but under Windows the XMLHttpRequest always returns status 0 and I don't know why. Here is the PyQt code I use:
import sys, os
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4.QtWebKit import *
app = QApplication(sys.argv)
web = QWebView()
web.page().settings().setAttribute(QWebSettings.LocalContentCanAccessRemoteUrls, True)
path = os.path.abspath(os.path.join(os.path.dirname(__file__), 'index.html'))
url = "file://localhost/" + path
web.load(QUrl(url))
web.show()
sys.exit(app.exec_())
and here is the HTML/JS I use to test it:
<!DOCTYPE html>
<title>TEST</title>
<h1>TEST</h1>
<div id="test"></div>
<script type="text/javascript">
function t(text) { document.getElementById("test").innerHTML = text }
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function() {
    if (this.status != 0)
        t(this.responseText)
    else
        t("Status is 0")
}
xhr.open("GET", "https://jeena.net/")
xhr.send()
</script>
On Linux it opens a new window with a WebKit view, loads the local index.html file into it, and renders it, showing the TEST headline. It then runs the XMLHttpRequest code to fetch the website's content and insert it into the prepared div via innerHTML.
On Windows it loads and shows the title, but when the XHR code runs the status is always 0 and never changes, no matter what I do.
As far as I understand, LocalContentCanAccessRemoteUrls should make it possible for the XHR to fetch content from the remote website even on Windows. Any idea why this is not working? I am using PyQt 4.9.6 on my Windows machine with Python 2.7.

I think there are two simple things to try to solve this problem.
My first thought is that it could be due to a cross-domain request.
There seems to be no easy way to disable cross-domain protection in QtWebKit.
I got the information from this stackoverflow question:
QtWebkit Same-Origin-policy
As stated in the accepted answer:
"By default, Qt doesn't expose method to disable / whitelist the same origin policy. Extended the same (qwebsecurityorigin.cpp) and able to get it working."
But since you've got everything working on Linux and Mac, the above may not be the cause.
Another possibility is that OpenSSL is not enabled in your Qt build on Windows. I noticed you are requesting an https page, which requires OpenSSL. You can change the page to an http one to quickly test this possibility.
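A quick way to run that check (a sketch reusing the t() helper from the test page; only the URL scheme changes): if the plain-http request succeeds while the https one stays at status 0, the Qt build is likely missing OpenSSL.
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function() {
    // only inspect the final state to avoid reporting intermediate ones
    if (this.readyState === 4)
        t(this.status === 200 ? this.responseText : "Failed with status " + this.status);
};
xhr.open("GET", "http://jeena.net/");
xhr.send();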

Related

Prevent external files in src of iframe from being cached (with CloudFlare)

I am trying to make a playground like Plunker. I just noticed a very odd behavior in production (with active mode in Cloudflare), whereas it works well on localhost.
The playground previews index_internal.html in an iframe; that file may contain links to other files (e.g., .html, .js, .css), and the iframe is able to resolve external references such as <script src="script.js"></script>.
So each time a user modifies their file (e.g., script.js) in the editor, my program saves the new file into a temporary folder on the server, then refreshes the iframe with iframe.src = iframe.src. This works well on localhost.
However, I realized that in production the browser always keeps loading the initial script.js, even though users modify it in the editor and a new version is written to the folder on the server. For example, what I see in Dev Tools ==> Network is always the initial version of script.js, whereas I can verify with less that the new version of script.js is saved on the server.
Does anyone know why this happens, and how to fix it?
Edit 1:
I tried the following, none of which worked for script.js:
var iframe = document.getElementById('myiframe');
iframe.contentWindow.location.reload(true);
iframe.contentDocument.location.reload(true);
iframe.contentWindow.location.href = iframe.contentWindow.location.href
iframe.contentWindow.src = iframe.contentWindow.src
iframe.contentWindow.location.replace(iframe.contentWindow.location.href)
I tried adding a version parameter; it worked for index_internal.html but did not reload script.js:
var newSrc = iframe.src.split('?')[0]
iframe.src = newSrc + "?uid=" + Math.floor((Math.random() * 100000) + 1);
If I turn Cloudflare to development mode, script.js is reloaded, but I do want to keep Cloudflare in active mode.
I found it. We can create a custom caching rule in Cloudflare:
https://support.cloudflare.com/hc/en-us/articles/200168306-Is-there-a-tutorial-for-Page-Rules-#cache
For example, I could set the Cache Level to Bypass for the folder www.mysite.com/tmp/*.
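If you control the origin server, an alternative is to mark the temp files as uncacheable, since Cloudflare generally honors origin Cache-Control headers. A minimal sketch, assuming a Node/Express server serves the temp folder:
var express = require('express');
var app = express();
// serve the playground's temp folder with caching disabled,
// so Cloudflare (and the browser) always fetch the latest version
app.use('/tmp', express.static('tmp', {
    setHeaders: function (res) {
        res.set('Cache-Control', 'no-store');
    }
}));
app.listen(8000);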

How does React deal with pre-compiled HTML from PhantomJS?

I compiled my React app using webpack and got a bundle file, bundles.js. My bundles.js contains a component that makes API calls to get its data.
I put this file in my HTML and pass the URL to PhantomJS to pre-compile static HTML for SEO reasons.
I am witnessing something strange here: the AJAX calls to the APIs are not getting fired at all.
For example, I have a component called Home which is rendered when I request the URL /home. My Home component makes an AJAX request to the backend (django-rest) to get some data. But when I load the home page in PhantomJS, this API call is never fired.
Am I missing something here?
I have been rendering React-based apps in PhantomJS since 2014. Make sure you use the latest PhantomJS version, v2.x. Problems with PhantomJS occur because it uses an older WebKit engine, so if you use CSS3 features, make sure they are prefixed correctly, for example the flexbox layout.
On the JS side, PhantomJS does not support many newer APIs (for example fetch); to fix this, add the polyfills and you're fine. The most complicated thing is tracking down errors: use console.log and evaluate code inside PhantomJS. There is also a debugging mode, which is quite difficult to use but can help you track down complex errors. I used the WebKit-based browser Aurora to track down some of the issues.
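For example (a minimal sketch; the polyfill filename is a placeholder), you can guard for a missing API before your bundle runs:
// load a fetch polyfill only when the API is missing, as in PhantomJS 2.x
if (typeof window.fetch !== 'function') {
    document.write('<script src="fetch-polyfill.js"><\/script>');
}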
For debugging the network traffic, try logging the requested and received events:
var page = require('webpage').create();
page.onResourceRequested = function(request) {
    // log every request the page makes, including the React app's XHR calls
    console.log('Request ' + JSON.stringify(request, undefined, 4));
};
page.onResourceReceived = function(response) {
    // log each response (or response chunk) as it arrives
    console.log('Receive ' + JSON.stringify(response, undefined, 4));
};
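To exercise these hooks, open the page being pre-rendered (the URL below is a placeholder) and keep the process alive long enough for the asynchronous calls to show up:
page.open('http://localhost:8000/home', function(status) {
    console.log('Page loaded: ' + status);
    // give the React component's API calls a few seconds to appear in the log
    setTimeout(function() { phantom.exit(); }, 3000);
});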

Show warning on page only when IE7

I want to show a warning on a particular page I have if and only if the user is using IE 7.
I am currently seeing an issue where users running IE8 in compatibility mode are also shown the warning message, which reads:
Please note: Some customers using Internet Explorer 7 web browser may not be able to use parts of this site. You may wish to upgrade.
How can I fix the page so that the customer is only shown this warning if they are really using IE7?
The specifics depend on the web framework in use (ASP.NET etc.), but you can check the browser version by inspecting the user agent submitted in the HTTP request. Each browser sends a user agent string that includes its version number.
Here's a link with a few options if you're using ASP.NET:
http://msdn.microsoft.com/en-us/library/ms537509(v=vs.85).aspx
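A client-side check is also possible (a sketch; the element id is hypothetical). IE8 and later running in compatibility view report themselves as MSIE 7.0 but still include a Trident token in the user agent, while real IE7 has no Trident token:
var ua = navigator.userAgent;
// real IE7: "MSIE 7.0" with no Trident token
// IE8+ in compatibility view: "MSIE 7.0" plus "Trident/4.0" (or higher)
var isRealIE7 = /MSIE 7\.0/.test(ua) && !/Trident/.test(ua);
if (isRealIE7) {
    document.getElementById('ie7-warning').style.display = 'block';
}
The snippet below takes the broader approach of warning users of any outdated browser via browser-update.org.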
<script type="text/javascript">
// browser-update.org notification; vs lists the minimum acceptable versions
// (i = Internet Explorer, f = Firefox, o = Opera, s = Safari)
var $buoop = {vs:{i:7,f:5,o:12,s:5,n:9}};
$buoop.ol = window.onload;
window.onload = function(){
    try { if ($buoop.ol) $buoop.ol(); } catch (e) {}
    // inject the browser-update.org script once the page has loaded
    var e = document.createElement("script");
    e.setAttribute("type", "text/javascript");
    e.setAttribute("src", "//browser-update.org/update.js");
    document.body.appendChild(e);
}
</script>

HTML 5 Web Worker example doesn't work in Chrome 8.0.552.231

I'm following this example at: http://www.whatwg.org/specs/web-workers/current-work/
page.html
<!DOCTYPE HTML>
<html>
<head>
    <title>Worker example: One-core computation</title>
</head>
<body>
    <p>The highest prime number discovered so far is: <output id="result"></output></p>
    <script>
        var worker = new Worker('worker.js');
        worker.onmessage = function (event) {
            document.getElementById('result').textContent = event.data;
        };
    </script>
</body>
</html>
worker.js
var n = 1;
search: while (true) {
    n += 1;
    for (var i = 2; i <= Math.sqrt(n); i += 1)
        if (n % i == 0)
            continue search;
    // found a prime!
    postMessage(n);
}
This example works fine in Firefox and Safari 5.0.2 (6533.18.5) on Mac OS X, but doesn't work in Chrome. When I debug it, worker.js is not even listed as one of the sources. What is bizarre is that the example page linked on the same website, which is the same code as my local copy, works fine in Chrome, while my local copy does not.
When I try to run code manually in the JavaScript debugger, like this:
var w = new Worker('worker.js')
I get an error saying:
Error: SECURITY_ERR: DOM Exception 18
Did anyone else have this experience? Can anyone suggest a solution?
Thanks
Are you viewing this file via the file:/// protocol or over http://? You'll have to serve the page over HTTP; Chrome blocks workers on pages loaded from local files.
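One way to make the failure explicit (a small sketch reusing the example page's own elements) is to check the protocol before constructing the worker:
// Chrome refuses to construct Workers on pages loaded via file://
if (window.location.protocol === 'file:') {
    document.getElementById('result').textContent =
        'Serve this page over HTTP; workers are blocked under file://.';
} else {
    var worker = new Worker('worker.js');
    worker.onmessage = function (event) {
        document.getElementById('result').textContent = event.data;
    };
}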
Uncaught Error: SECURITY_ERR: DOM Exception 18 when I try to set a cookie
rxgx is spot on; I've seen this error often. As for a solution, either purchase some cheap shared hosting for development, or run a web server on your own machine. For Windows, download and install the Apache installer made available by the Apache Foundation and follow the instructions. For Mac OS X, just enable Web Sharing in the Sharing section of System Preferences. For Linux, install an apache or lighttpd package through your package manager.
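On Ubuntu, for example, installing Apache is typically a one-liner:
sudo apt-get install apache2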

Problems with jQuery getJSON using local files in Chrome

I have a very simple test page that uses XHR requests via jQuery's $.getJSON and $.ajax methods. The same page works in some situations and not in others. Specifically, it doesn't work in Chrome on Ubuntu.
I'm testing on Ubuntu 9.10 with Chrome 5.0.342.7 beta and Mac OSX 10.6.2 with Chrome 5.0.307.9 beta.
It works correctly when the files are installed on a web server, from both Ubuntu/Chrome and Mac/Chrome (try it out here).
It works correctly when the files are installed on the local hard drive in Mac/Chrome (accessed with file:///...).
It FAILS when the files are installed on the local hard drive in Ubuntu/Chrome (accessed with file:///...).
The small set of 3 files can be downloaded in a tar/gzip file from here:
http://issues.tauren.com/testjson/testjson.tgz
When it works, the Chrome console will say:
XHR finished loading: "http://issues.tauren.com/testjson/data.json".
index.html:16Using getJSON
index.html:21
Object
result: "success"
__proto__: Object
index.html:22success
XHR finished loading: "http://issues.tauren.com/testjson/data.json".
index.html:29Using ajax with json dataType
index.html:34
Object
result: "success"
__proto__: Object
index.html:35success
XHR finished loading: "http://issues.tauren.com/testjson/data.json".
index.html:46Using ajax with text dataType
index.html:51{"result":"success"}
index.html:52undefined
When it doesn't work, the Chrome console will show this:
index.html:16Using getJSON
index.html:21null
index.html:22Uncaught TypeError: Cannot read property 'result' of null
index.html:29Using ajax with json dataType
index.html:34null
index.html:35Uncaught TypeError: Cannot read property 'result' of null
index.html:46Using ajax with text dataType
index.html:51
index.html:52undefined
Notice that it doesn't even show the XHR requests, although the success handler runs. I swear this was working previously in Ubuntu/Chrome, and I am worried something got messed up. I already uninstalled and reinstalled Chrome, but that didn't help.
Can someone try it out locally on your Ubuntu system and tell me if you have any trouble? Note that it seems to be working fine in Firefox.
Another way to do it is to start a local HTTP server in your directory. On Ubuntu and Mac OS with Python installed, it's a one-liner.
Go to the directory containing your web files and run:
python -m SimpleHTTPServer
Then connect to http://localhost:8000/index.html with any web browser to test your page.
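If you are on Python 3, the module was renamed, and the equivalent one-liner is:
python3 -m http.server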
This is a known issue with Chrome.
Here's the link in the bug tracker:
Issue 40787: Local files doesn't load with Ajax
On Windows, Chrome might be installed in your AppData folder:
"C:\Users\<username>\AppData\Local\Google\Chrome\Application"
Before you run the command, make sure all of your Chrome windows are closed and Chrome is not otherwise running; otherwise the command-line parameter will not take effect.
chrome.exe --allow-file-access-from-files
You can put your JSON in a .js file and save it to a global variable. It is not asynchronous, but it can help.
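A minimal sketch of that approach (the file name and variable are hypothetical; the payload matches the test data above):
// data.js: the JSON payload assigned to a global variable
var jsonData = { "result": "success" };

// in index.html, load it with a plain script tag instead of an XHR:
// <script src="data.js"></script>
// <script>console.log(jsonData.result);</script>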
An additional way to get around the problem is to leverage Flash Player's local-only security sandbox and its ExternalInterface methods. You can have JavaScript ask a Flash application published with the local-only security sandbox to load the file from the hard drive, and Flash can pass the data back to JavaScript via its ExternalInterface class. I've tested this in Chrome, FF, and IE9, and it works well. I'd be happy to share the code if anyone is interested.
EDIT: I've started a google code (ironic?) project for the implementation: http://code.google.com/p/flash-loader/
@Mike On Mac, type this in Terminal:
open -b com.google.chrome --args --disable-web-security
This code worked fine with sheet.json locally, using browser-sync as the local server.
But on my remote server I got a 404 for the sheet.json file in Chrome, while it worked fine in Safari and Firefox.
Renaming sheet.json to sheet.JSON made it work on the remote server.
Has anyone else seen this?
getthejason = function(){
    var dataurl = 'data/sheet.JSON';
    var xhr = new XMLHttpRequest();
    xhr.open('GET', dataurl, true);
    xhr.responseType = 'text';
    // attach the handler before send() so the load event cannot be missed
    xhr.onload = function() {
        // ... handle xhr.responseText here
    };
    xhr.send();
    console.log('getthejason!');
};
