IE history push state - ajax

I have a webpage where the user can display the terms and conditions without reloading the page, via AJAX. That in itself is no problem; however, I am also trying to push a history state.
That works fine in most browsers, except in IE. There, for some inexplicable reason, the content is loaded via AJAX, but a new tab is also opened with the previous page. How can I fix this?
You can see the example on this webpage (http://galaxy-battle.de): try clicking on "T&Cs" in the "Join" box.

IE9 and below don't support pushState. You get an exception when calling the following line:
window.history.pushState(null, null, pathFullPage);
SCRIPT438: Object doesn't support property or method 'pushState'
?terms_and_conditions, line 62 character 21
You might be interested in the workarounds discussed at Emulate/polyfill history.pushstate() in IE.

Old but still current question. I just want to say I would recommend not trying to emulate pushState in IE.
Instead, you can use feature detection (a sketch follows below):
if history.pushState is not null (the browser supports pushState), use it and load your content with nifty JavaScript
if history.pushState is null (the browser does not support pushState), follow the link and do a full page change
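A minimal sketch of that detection (loadContent() is a hypothetical helper that fetches the terms via AJAX and swaps them into the page; the link id is likewise an assumption):
// Enhance the link only when the browser supports the History API;
// IE<=9 simply follows the link and gets a full page load.
var link = document.getElementById('terms-link'); // hypothetical id
link.onclick = function () {
    if (window.history && window.history.pushState) {
        loadContent(this.href);                        // AJAX content swap
        window.history.pushState(null, '', this.href); // update the address bar
        return false;                                  // cancel normal navigation
    }
    return true; // no pushState support: let the browser follow the link
};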
Of course, this means IE<=9 users won't have all the cool animations other users have. But the question I want to ask is: do you want links to your website containing # spreading around the net?
Users may find your app useful and paste links to it on the web. That's cool, because it brings you some Google juice. Now, if such a user is on IE and you use history.js, they will paste a link containing a hash.
That defeats proper indexing of your app's public pages, and it also looks ugly. My personal opinion is that a JS animation or lightbox for IE users isn't worth those trade-offs.

Related

AJAX website, problems with history and SEO

I have a few problems I could use some input on.
I have a website where all the content is loaded with AJAX, and it works quite well. There are a few issues with that approach, though, mostly UX issues.
Users cannot copy a URL for the loaded content, since the address bar always shows only the default URL.
SEO takes a hit, since the content cannot be crawled; the sitemap is only about 2 pages, even though a normal user browsing the site sees a lot more.
Browser history (back and forward) does not work; hitting the back button goes to the main page.
Now, I have searched and read a lot.
Google has a hack that seems to allow the site to be crawled, but only if you use # in your URLs; it does not work with a plain URL, which leads me to...
Manipulating the browser history with pushState/popState.
Now, I have tried getting it to work, but I just can't get my head around which approach is best. Should I redo all my AJAX?
Right now I have 2 div boxes, and I switch between them with loaded content to get a nice, smooth transition between pages. My front page is basically just 2 empty divs, nothing else. It works, but I get the feeling it is a pretty bad way to do it. Thoughts?
If anyone knows some good guides, feel free to share them; as I said, I have read a lot, but I might have missed some golden ones.
Google does execute some JavaScript when indexing and ranking pages. However, text which is not immediately visible to users is demoted when establishing content relevancy.
Manipulating the browser history with pushState/popState.
It is very unlikely Google will trust your content if you need to use those tricks. And content which is not trusted is not ranked.
UPDATE: Manipulating browser history with pushState is ok.
Moreover, if your URLs change all the time, Google won't appreciate it, unless you manage to set canonical links.
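If you do adopt pushState, a minimal sketch of wiring it up so that copyable URLs and back/forward both work (loadContent() is a hypothetical AJAX helper):
// Navigate by pushing a real, crawlable URL and swapping the content in.
function navigate(url) {
    loadContent(url);                                // hypothetical AJAX helper
    window.history.pushState({ url: url }, '', url); // address bar shows a real URL
}

// Back/forward: the browser fires popstate instead of reloading the page.
window.onpopstate = function (e) {
    if (e.state && e.state.url) {
        loadContent(e.state.url);
    }
};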

AdSense on history.pushState enabled page

First off, I know this has been discussed over and over again. But let's take this as a "late 2012 edition" since things tend to change rapidly on the internet.
I have this web page which is a "classical" web page with full page refreshes. Every internal click produces new content. We can show AdSense ads this way without a problem.
Now I have started looking into "ajaxifying" (PJAX) the whole page for performance reasons (I've actually made a prototype version, and it works superbly). The whole thing works only in browsers that support history.pushState: whenever a user clicks on an internal link, an AJAX request is triggered that fetches only the content part of the page (everything between the header and footer) and replaces the old content with it.
The end result is that the user is presented with a brand-new page (including the changed URL and what not); only the mechanism for delivering the page has changed (full reload vs. AJAX). As far as Google (and older browsers) are concerned, this is still a regular page with regular links (progressive enhancement and all that).
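For concreteness, the click-interception part of such a PJAX setup might look roughly like this (a sketch assuming jQuery; the #content container and the _pjax parameter are assumptions, not part of any Google API):
$(document).on('click', 'a[href^="/"]', function (e) {
    if (!window.history.pushState) return;  // older browsers: normal full reload
    e.preventDefault();
    var url = this.href;
    // Ask the server for just the content fragment between header and footer.
    $.get(url, { _pjax: true }, function (fragment) {
        $('#content').html(fragment);             // swap only the middle of the page
        window.history.pushState(null, '', url);  // keep the URL in sync
    });
});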
And yet there isn't a way to display AdSense, what with the document.write calls and AdSense's TOS ruining the party.
My question: is there a Google-approved way (I'm not interested in hacks that will get us banned) to display AdSense ads on a page like this? I haven't found one. Or, if there isn't, does Google have any plans to support this in the future? Again, I haven't found anything related to this.
Update
After some more digging around, I came across Google DFP, which seems to support async loading of ads. But I'm not sure I can load AdSense ads through it dynamically without breaking the TOS. I'm 100% sure I can load other ads this way, but not AdSense. Could somebody clear this up for me?
According to this page, loading AdSense ads through DFP makes you subject to both the DFP and AdSense terms. So I guess if you are following the current AdSense terms, you are not allowed to do what you are talking about... at the same time, Google provides a rather easy method to do exactly what you want with DFP...
It's still a grey area...

Javascript user request layer

I am currently writing a compatibility layer between browsers, and for this I need to ask the user to confirm an action. Currently the only standard way to do this in JavaScript is window.confirm, which is synchronous, and I do not want to block the whole site. So I am looking for a library which can display an asynchronous, browser-like prompt (e.g. the ones browsers use for Geolocation).
EDIT: Like the native prompts, I do not need/want the user interaction to be modal. Just displaying it and reacting to the user's input, that is all.
I remember having seen such sites, but cannot remember where.
Can someone point me in the right direction?
As a bonus, it would be great if it worked and looked like the native prompts in IE, FF and Opera.
The jQuery UI library has a dialog plugin that can be made modal (or left non-modal, as asked for here). Since it is plain JS, it does not block the rest of the page's execution.
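A minimal sketch of a non-blocking confirm built on that widget (the #confirm element and the callback wiring are assumptions):
// A non-modal, non-blocking "confirm" using jQuery UI's dialog widget.
function asyncConfirm(message, callback) {
    $("#confirm").text(message).dialog({
        modal: false, // do not lock the rest of the page
        buttons: {
            "OK": function () { $(this).dialog("close"); callback(true); },
            "Cancel": function () { $(this).dialog("close"); callback(false); }
        }
    });
}

// Usage: execution continues immediately; the answer arrives in the callback.
asyncConfirm("Allow this action?", function (accepted) {
    console.log("user said:", accepted);
});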

Screen scraping an ASP.NET web page to retrieve data displayed in the grid view

I am using Ruby to screen scrape a web page (created in ASP.NET) which uses a GridView to display data. I can successfully read the data displayed on page 1 of the grid, but I am unable to figure out how to move to the next page of the grid to read all the data.
The problem is that the page-number hyperlinks are not normal hyperlinks (with a URL); instead, they are JavaScript hyperlinks which cause a postback to the same page.
An example of such a hyperlink:
6
I recommend using Watir, a Ruby library designed for browser testing, if you're already using Ruby for processing. For one thing, it gives you a much nicer interface to the DOM elements on the page, and it makes clicking links like this easier:
ie.link(:text, '6').click
Then, of course you have easier methods for navigating the table as well. It's easy enough to automate this process:
(1..total_number_of_pages).each do |next_page|
  # Watir's :text matcher expects a string, so convert the page number
  ie.link(:text, next_page.to_s).click
  # table processing goes here
end
I don't know your use case, but this approach has its advantages and disadvantages. For one thing, it actually runs a browser instance, so if this is something you need to frequently run quietly in the background in completely automated way, this may not be the best approach. On the other hand, if it's ok to launch a browser instance, then you don't have to worry about all that postback nonsense, and you can just click the link as if you were a user.
Watir: http://wtr.rubyforge.org/
You'll need to figure out the actual URL.
Option 1a: Open the page in a browser with good developer support (e.g. Firefox with the web development tools) and look through the source to find where __doPostBack is defined. Figure out what URL it's constructing. Note that it might not be in the main page source, but instead in something that the page loads.
Option 1b: Ditto, but have Ruby do it. If you're fetching the page with Net::HTTP, you've got the tools to find the definition of __doPostBack already (the body as a string, Ruby's grep, and the ability to request additional files, such as those in script tags). A paraphrased version of that helper is sketched after these options.
Option 2: Monitor the traffic between a browser and the page (e.g. with a logging proxy) to find out what the URL is.
Option 3: Ask the owner of the web page.
Option 4: Guess. This may not be as bad as it sounds (e.g. if the original URL ends with "...?page=1" or something) but in general this is the least likely to work.
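For reference, the __doPostBack helper that ASP.NET emits into the page usually looks roughly like this (paraphrased; the exact form reference varies per page):
// Simplified version of the client-side helper ASP.NET renders. It stuffs
// the clicked control's id into hidden fields and submits the form, which
// is why the pager "links" trigger a POST back to the same URL.
function __doPostBack(eventTarget, eventArgument) {
    var theForm = document.forms[0];                // the page's server-side form
    theForm.__EVENTTARGET.value = eventTarget;      // e.g. the GridView's id
    theForm.__EVENTARGUMENT.value = eventArgument;  // e.g. "Page$2" for paging
    theForm.submit();
}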
Edit (in response to your comment on the other question):
Assuming you're using the Net::HTTP library, you can do a postback by replacing your get with a post, e.g. my_http.post(my_url, form_data) instead of my_http.get(my_url); note that post also takes the encoded form body as its second argument.
Edit (in response to danieltalsky's answer):
Watir may be a really good solution for you (I'm kicking myself for not having thought of it), but be aware that you may have to manually fire the event or jump through other hoops to get what you want. As a specific gotcha, with any asynchronous fetch like this you need to make sure the full response has come back before you scrape it; that isn't a problem when you're doing the request inline yourself.
You will have to perform the postback. The data is passed with a form POST back to the server. Like Markus said, use something like Firebug or the Developer Tools in IE 8, plus Fiddler, to watch the traffic. But honestly, this is a Web Forms page using the bloated GridView, so you will be in for a fun adventure. ;)
You'll need to do some investigation to figure out which HTTP request the JavaScript execution is performing. I've used the Mozilla browser with the Firebug plugin and also the "Live HTTP Headers" plugin to help determine what is going on. It will likely become clear which requests you need to make in order to move to the next page. Make sure you pay attention to any cookies getting set.
I've had really good success using Mechanize for scraping. It wraps all of the HTTP communication, HTML parsing and searching (using Nokogiri), redirection, and holding onto cookies. But it doesn't know how to execute JavaScript, which is why you will need to figure out what HTTP request to perform on your own.

Pass information back from an iframe?

Right now I'm building a Firefox plugin that duplicates some functionality on my website. It takes in an email address and then returns information to the user. The easiest way to do this in the plugin is to use an iframe and render that super-simple form from my website. All of this works great, but to make the plugin really useful, I would like it to have access to the information that the iframe renders, so it can use it in the current window the user is in.
Is it possible to pass information back through an iframe in this manner? I know there are quite a few domain-access restrictions with iframes, so any help or insight is appreciated!
I've done this two ways.
If the iframe is on the same domain as the parent website, you can just access window.parent in JavaScript.
If it isn't, however... I've done a dirty trick. I'll share it here, as it may help.
We created a page on the other domain which calls window.parent.parent. We put that in a hidden iframe inside the iframed page and send it a querystring argument or two. It's not pretty, but it gets around cross-domain scripting problems.
This basically means that you have this sort of thing:
admin.example.com
  content.example.com - iframe
    admin.example.com?contentid=350 - hidden iframe that makes a window.parent.parent call
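A sketch of what that hidden innermost iframe's page might run (notifyParent is a hypothetical function that the top-level admin.example.com page exposes):
// Runs inside the hidden iframe, which is served from admin.example.com and
// therefore shares the top window's origin. window.parent is the
// content.example.com frame; window.parent.parent is the top page.
var match = /[?&]contentid=([^&#]*)/.exec(window.location.search);
if (match) {
    window.parent.parent.notifyParent(decodeURIComponent(match[1]));
}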
Is the point of this whole exercise functional testing of your website? If so, instead of your custom Firefox plugin, consider using Selenium to automate interactions with websites. It works with all major browsers and supports the kind of page-element inspection you are trying to do (using XPath). It also features a Firefox plugin called Selenium IDE that allows you to conveniently "record" your interactions with a website for automated playback later.
