Ajax, subdomains, 200 response, and images -- ok?

This is a very similar question to AJAX, Subdomains and the 200 OK response (and JavaScript Same Origin Policy - How does it apply to different subdomains?), but with a twist. I have a situation in which:
A domain (www.example.com)
Where a page at a subdomain (sd.example.com/cat/id)
Needs to make ajax-style requests to another subdomain (cdn.example.com)
In contrast to the aforementioned question, what I am requesting is images: GET requests for image files (using jQuery's .load()).
This seemed to be working just fine. Because it appeared to work, the same-origin policy didn't immediately occur to me when someone pointed out it was generating errors in Firebug.
Images ARE loading at localhost (an Apache VirtualHost URL of test.sd.example.com/cat/id).
However, now that it has come to mind thanks to that question I linked, I am concerned that this will not work reliably in production.
Will this continue to work in a production environment -- and will it work reliably cross-browser?
Answer: No -- it only looked like it was working; it wasn't really.
If not, how do I fix it? (I don't think I can JSONP images... can I?)
Answer: Keep setting the src of the image, and wait to show it until the load event has fired.
If so, how do I stop the Firebug errors? If I can. (They're scaring fellow devs.)
Answer: Same as above -- drop the step where you actually do a GET request for the image file.
Initial Code
function loadImage(imageUrl, placeTarget){
    var i = new Image();
    var img = $(i);
    img.hide()
        // NOTE: .load() called with a URL performs an AJAX GET --
        // this is the cross-domain request that triggers the Firebug errors
        .load(imageUrl, function(e){
            // console.log("loadImage: loaded");
            placeTarget.attr("src", imageUrl);
            return true;
        })
        .error(function(){
            // error handling - do this part
            // console.log("loadImage: error");
            return false;
        });
    return;
} // loadImage
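Per the answers above, here is a minimal corrected sketch (assuming placeTarget is a jQuery-wrapped img element): bind the load handler first, then set src directly, so the browser fetches the image itself and no cross-domain XHR is ever made:

function loadImage(imageUrl, placeTarget){
    placeTarget.hide()
        .bind('load', function(){
            // show only once the browser has finished fetching the image
            $(this).show();
        })
        .bind('error', function(){
            // error handling - do this part
        })
        .attr('src', imageUrl); // plain img fetch -- not subject to the same-origin policy
}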

Why not insert the images into the page by creating image elements and setting the src? What could be simpler?
edit: ... via JavaScript
I'm not sure this is exactly right, but in jQuery:
img = $('<img>');
img.load(fade_into_next); // bind the load handler before setting src
img.attr('src', 'http://somewhere.com/some_image.jpg');
$('#place_to_add').append(img);

Check this post: JavaScript Same Origin Policy - How does it apply to different subdomains?

Related

Prevent JavaScript redirects in Firefox

I want to prevent JavaScript redirects in Firefox for one domain (youtube.com), and I was wondering whether there's a plugin that will do it. I'm trying to use NoScript, and I'd like to allow scripts globally because I don't want to disable most JavaScript, but this seems to just allow JavaScript redirects. Is there a way for me to just disable JavaScript redirects (or ideally, display a prompt)?
The only other way I can think of doing it is to write my own extension that messes around with window.onbeforeunload and window.unload, but ideally I'd like to use an existing addon.
var {utils: Cu, classes: Cc, interfaces: Ci, results: Cr} = Components;
Cu.import('resource://gre/modules/Services.jsm');

var myobserve = function(aSubject, aTopic, aData) {
    var httpChannel = aSubject.QueryInterface(Ci.nsIHttpChannel);
    if (httpChannel.loadFlags & Ci.nsIHttpChannel.LOAD_REPLACE) {
        // it's a redirect -- block it
        httpChannel.cancel(Cr.NS_BINDING_ABORTED);
    }
}
I use the presence of flags to test for redirects. Here are some notes I took on flags a while back; I'm not totally sure how accurate they are:
If a request has LOAD_DOCUMENT_URI it usually also has LOAD_INITIAL_DOCUMENT_URI when it's the top window, but for view-source we just see LOAD_DOCUMENT_URI. For a frame we also just see LOAD_DOCUMENT_URI. JS and CSS files etc. (I think only some images fire http-on-modify-request, not sure -- maybe all, but not if cached) come in with LOAD_CLASSIFY_URI or no flags (0).
Note, however, that if there is a redirect, you will get LOAD_DOCUMENT_URI and LOAD_INITIAL_DOCUMENT_URI on the initial request and then on consequent redirects until the final one. All redirects have the LOAD_REPLACE flag, though.
Note: I think all images fire http-on-modify-request, but cached images don't. Images can also have the LOAD_REPLACE flag.
Of course to start observing:
Services.obs.addObserver(myobserve, 'http-on-modify-request', false);
and to stop:
Services.obs.removeObserver(myobserve, 'http-on-modify-request');
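The question asks about blocking redirects for one domain only (youtube.com). As a hedged sketch under that assumption, the observer above could first check the channel's target host (httpChannel.URI is an nsIURI):

var myobserve = function(aSubject, aTopic, aData) {
    var httpChannel = aSubject.QueryInterface(Ci.nsIHttpChannel);
    // only cancel redirects whose target host is youtube.com or a subdomain of it
    if ((httpChannel.loadFlags & Ci.nsIHttpChannel.LOAD_REPLACE) &&
        /(^|\.)youtube\.com$/.test(httpChannel.URI.host)) {
        httpChannel.cancel(Cr.NS_BINDING_ABORTED);
    }
}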

Disable X-Frame-Option on client side

I would like to disable the X-Frame-Options header on the client side in Firefox (and Chrome).
What I've found:
Overcoming "Display forbidden by X-Frame-Options"
A non-client side solution isn't suitable for my purpose
https://bugzilla.mozilla.org/show_bug.cgi?id=707893
This seems to be pretty close. I tried creating user.js in the profile directory with the line user_pref("b2g.ignoreXFrameOptions", true); but it didn't work. The second-to-last entry seems to imply compiling Firefox from modified source? If that's the case, it's also not a possible solution for me.
I just wrote a little HTML page with some JS that loops through a list of YouTube videos by successively loading them into an iframe. I know YouTube supports playlists, but they suck, and I don't want to download the videos.
Also, it would be nice if the browser only ignores the X-Frame-Option for local files. This would somewhat minimize the security hole I tear open by disabling this. As for Chrome, a solution would be nice but isn't that important.
I guess another approach would be to intercept incoming TCP/IP packets which contain an HTTP response and remove this header line, but that is quite an overkill.
[edit]
Using youtube.com/embed is a bad workaround since a lot of videos don't allow embedding...
This can be easily achieved using an HTTP Observer through a Firefox extension. That observer will look something like this:
let myListener =
{
    observe : function (aSubject, aTopic, aData)
    {
        if (aTopic == "http-on-examine-response")
        {
            let channel = aSubject.QueryInterface(Ci.nsIHttpChannel);
            try
            {   // getResponseHeader will throw if the header isn't set
                let hasXFO = channel.getResponseHeader('X-Frame-Options');
                if (hasXFO)
                {
                    // Header found, disable it
                    channel.setResponseHeader('X-Frame-Options', '', false);
                }
            }
            catch (e) {}
        }
    }
}
You can find further info, such as how to install the observer, on MDN [1][2]
[1] : https://developer.mozilla.org/en/docs/Observer_Notifications#HTTP_requests
[2] : https://developer.mozilla.org/en-US/docs/Setting_HTTP_request_headers#Registering
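For completeness, a minimal sketch of registering and unregistering the listener above (assuming the usual extension environment, where Ci is shorthand for Components.interfaces):

const {utils: Cu, interfaces: Ci} = Components;
Cu.import('resource://gre/modules/Services.jsm');

// start stripping the header:
Services.obs.addObserver(myListener, 'http-on-examine-response', false);

// ... and when the add-on shuts down:
Services.obs.removeObserver(myListener, 'http-on-examine-response');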
Using diegocr's code, I've created a Firefox add-on to allow the display of webpages that have X-Frame-Options in their header, so they will be displayed when accessed via an iframe. It can be downloaded/installed here: https://addons.mozilla.org/en-US/firefox/addon/ignore-x-frame-options/
The Firefox extension mentioned by René Houkema in the other answer no longer works, so I created a new one.
https://addons.mozilla.org/fr/firefox/addon/ignore-x-frame-options-header/
This extension is also compatible with Quantum.
Source & updates:
https://github.com/ThomazPom/Moz-Ext-Ignore-X-Frame-Options

Crawlable Ajax content. SEO-ing without hashbang. Is my way ok?

I'm going to build my application based on ajax, and my URLs are something like:
http://server.com/module/#function_name,param1,param2...etc
After reading some discussions of Google's suggestion, the hashbang (#!), it wasn't hard for me to realize that it was not the best solution. There are several reasons:
The URL is pretty ugly, anyway.
It's terrible if someday Google (or some other search engine) suggests a better solution than the hashbang. I'd have to keep my ugly hashbang URLs, or write some JS code to keep links to my page alive.
HTML5 pushState will be popular someday.
For all the reasons above, I decided to do it my own way: my navigation links will be like this:
<a href="http://server.com/module/for-crawler/function-name/param1/param2/...">
Some text </a>
And some jQuery code will make it load ajax content instead of changing pages like a normal link:
$(function(){
    $('a').live('click', function(e){
        var realURL = translateURL( $(this).attr('href') );
        loadContent( realURL );
        e.preventDefault();
        return false;
    });
});
/* -- the function translateURL will turn a URL like:
..... http://server.com/module/for-crawler/function-name/param1/param2/...
into:
..... http://server.com/module/#function-name/param1/param2/...
That's the real URL I think all ajaxers are used to dealing with
*/
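translateURL itself isn't shown; a minimal sketch of what the comment describes could be:

// hypothetical implementation -- maps the crawler-friendly path
// onto the hash-based URL the ajax loader expects
function translateURL(crawlerUrl){
    return crawlerUrl.replace('/for-crawler/', '/#');
}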
When a crawler reads my page, it will follow the URL in the href attribute, and I will serve it a static, non-JS version of the page just for Google to read. After some days my page gets indexed, and users will see it in Google's results like this:
http://server.com/module/for-crawler/function-name/param1/param2/...
I'm then going to use JS to redirect the user to my normal ajax version, I mean, to the real URL:
http://server.com/module/#function-name/param1/param2/...
That's the best approach I can think of at this time. Please give me advice: should I do it that way, or can I do better? Thanks, everyone!
Depending on your audience, I would suggest using HTML5 pushState anyway.
If the client does not support HTML5 pushState, simply let it use the same version of your app that the crawlers get. In my opinion a page reload is not as bad as a hashed URL. Since users share URLs, your hashed URL gets exposed to other users, and it wouldn't work for, say, Facebook's link-sharing previews or any other client that doesn't support JavaScript.
Instead, I would only use the crawler-friendly app in combination with HTML5 pushState. With pushState you always expose a single URL, independent of the client's JavaScript support.
First, detect whether PushState is supported:
function supports_history_api() {
    return !!(window.history && history.pushState);
}
Then your click-handler would look something like this:
$('a').live('click', function(e){
    var url = $(this).attr('href');
    e.preventDefault();
    loadContent( url );
    history.pushState({"url": url}, $(this).attr('title'), url);
    return false;
});
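One addition worth making: pushState alone doesn't handle the back/forward buttons, so you would also want a popstate handler. A sketch, reusing the loadContent helper from the snippets above:

window.addEventListener('popstate', function(e){
    // e.state is whatever object was passed to pushState for that entry
    if (e.state && e.state.url) {
        loadContent(e.state.url);
    }
});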

Cannot make unload event fire in Firefox 4

I have run ajax-calls on the unload event for about a year.
It has generally worked in FF and IE, but not 100% of the time; I cannot say when it has failed.
I register the event by writing in the body tag:
onunload="...."
I got error messages in FF4 since the unload event also wanted to write to a div tag of the page that had just unloaded. I fixed this by making the ajax routine write nothing if the id of the target div is 'dummy'.
I am no expert on AJAX, but the following code has worked:
http://yorabbit.info/e-dog.info/tmp/ajax_ex.php (the link is a text-page)
(You call ajaxfunction2 with the following arguments: filename, queryString for PHP, string to show in target div during update, name of target div)
I don't get any error messages in the FF error console, and IE9 works. Is there any way I can make it work in FF too? I have just started trying FF4, but my impression is that it works less well than FF3.
Thanks.
(I am on a trip and may not have the possibility to reply immediately, but I really appreciate suggestions and will reply in due course.)
EDIT:
I had better add this:
The AJAX call I make on unload only sends some data (how long the user stayed on the page) to the PHP/MySQL server.
I don't know what is happening here, but Firefox 4 has made notable changes to how unloading works: For example, if you do an alert() during a link click event, it will no longer freeze the page, but load the new location anyway. Maybe this is something similar.
However, you are never guaranteed that the Ajax call will finish if it is not synchronous, in any browser - the request may or may not come back with a response before the page is closed. Whether it works comes down to chance and the user's network speed.
Try using a synchronous request first, as outlined here: How does jQuery's synchronous AJAX request work?
This will usually guarantee that the request comes back. However, use it very sparingly - blocking behaviour at page unload can be very annoying for the user, and can even freeze the browser.
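As a sketch (with a hypothetical endpoint and a hypothetical counter variable - adjust to your own PHP script), a synchronous jQuery call on unload would look like:

window.onbeforeunload = function(){
    $.ajax({
        type: 'POST',
        url: '/track_time.php',          // hypothetical endpoint
        async: false,                    // block until the request completes
        data: { seconds: secondsOnPage } // hypothetical counter kept by the page
    });
};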
I suggest using jQuery instead of keeping track of browser changes yourself.
Solution:
Find working sample here: http://jsfiddle.net/ezmilhouse/4PMcc/1/
Assuming that your internal links are relative, and your external links therefore start with 'http':
<a href="http://www.external-site.com">Leave ...</a>
<a href="/internal-page">Stay ...</a>
You could hijack a tags via jQuery events and ask the user to confirm leaving (in the case of external links). In the 'ok' case you kick off your 'onleave' ajax call (async=false, so it completes before the page goes away) and redirect the user to the external link:
$('a').live('click', function(event){
    // cache link
    var link = $(this).attr('href');
    // check if external link (assuming that internal links are relative)
    if (link.substr(0,4) === "http") {
        // prevent default a tag event
        event.preventDefault();
        // popup confirm message
        var reply = confirm('Do you really want to leave?');
        if (reply) {
            var url = 'http://mydomain.com/ajax.php';
            var data = {'foo': 'bar', 'fee': 'bo'};
            // kick off your 'onleave' ajax call
            // forced to be synchronous
            $.ajax({
                type: "POST",
                async: false,
                url: url,
                data: data,
                success: function( data ) {
                    // ok case: leave page, via cached link
                    window.location.href = link;
                }
            });
        }
        return false;
    }
});

How can I return binary image data from an abortable AJAX request and set the result to the src of an HTML/DOM image?

I'm writing a web application that involves a continuous cycle of creating (and removing) a fair number of images on a webpage. Each image is dynamically generated by the server.
var img = document.createElement("img");
img.src = "http://mydomain.com/myImageServer?param=blah";
In certain cases, some of these images outlive their usefulness before they've finished downloading. At that point, I remove them from the DOM.
The problem is that the browser continues to download those images even after they've been removed from the DOM. That creates a bottleneck, since I have new images waiting to be downloaded, but they have to wait for the old unneeded images to finish downloading first.
I would like to abort those unneeded image downloads. The obvious solution seems to be to request the binary image data via AJAX (since AJAX requests can be aborted), and set the img.src once the download is complete:
// Code sample uses jQuery, but jQuery is not a necessity
var img = document.createElement("img");
var xhr = $.ajax({
    url: "http://mydomain.com/myImageServer?param=blah",
    context: img,
    success: ImageLoadedCallback
});

function ImageLoadedCallback(data)
{
    this.src = data;
}

function DoSomethingElse()
{
    if (condition)
        xhr.abort();
}
But the problem is that this line does not work the way I had hoped:
this.src = data;
I've searched high and low. Is there no way to set an image source to binary image data sent via AJAX?
You would have to base64-encode the data into a data: URI to achieve that. But it wouldn't work in IE6-7, and there are limitations on how much data you can put in there, especially on IE8. It might be worth doing as an optimisation for browsers where it's supported, but I wouldn't rely on it.
Another possible approach is to use the XMLHttpRequest to preload the image, then just discard the response and set the src of a new Image to point to the same address. The image should be loaded from the browser's cache. However, in the case where you're generating the images dynamically you would need to pay some attention to your caching headers to ensure the response is cachable.
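A rough sketch of that preload-then-reuse-cache idea (assuming the server sends cache-friendly headers for the image URL):

var url = "http://mydomain.com/myImageServer?param=blah";
// abortable preload; the response body itself is discarded
var xhr = $.ajax({
    url: url,
    success: function(){
        var img = document.createElement("img");
        img.src = url; // should now be served from the browser cache
        document.body.appendChild(img);
    }
});
// later, if the image is no longer needed:
// xhr.abort();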
Try e.g.
this.src="data:image/png;base64,XXX"
...where XXX is your binary data, base64-encoded. Adjust the content-type if necessary. I wouldn't be optimistic about wide browser support for this, though.
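To actually produce that base64 payload from an XHR in older browsers, one classic trick (a hedged sketch; img is assumed to be the image element created earlier) is to request the bytes as a binary string via a charset=x-user-defined override and then btoa() them:

var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://mydomain.com/myImageServer?param=blah', true);
// pass the raw bytes through without charset mangling
xhr.overrideMimeType('text/plain; charset=x-user-defined');
xhr.onload = function(){
    var binary = '';
    for (var i = 0; i < xhr.responseText.length; i++) {
        // mask off the high byte added by the charset trick
        binary += String.fromCharCode(xhr.responseText.charCodeAt(i) & 0xff);
    }
    img.src = 'data:image/png;base64,' + btoa(binary);
};
xhr.send();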
You should be able to use data URIs, similar to the solution I identified in an earlier question. Note that this will not work with older browsers.
