Difference between pushState and hashbangs? - ajax

I wanted to ask you guys: what is the better approach for anchor links, hashbangs or pushState? I have read that hashbangs are outdated and that pushState would be better for SEO, but pushState is only supported in modern browsers...
So what would you recommend for changing the URL of my anchor links, and why?
Best regards

Related

AngularJS / AJAX app and search engine crawlers

I've got a web app which heavily uses AngularJS / AJAX and I'd like it to be crawlable by Google and other search engines. My understanding is that I need to do something special to make it work, as described here: https://developers.google.com/webmasters/ajax-crawling
Unfortunately, that looks quite nasty and I'd rather not introduce the hash tags. What I'd like to do is to serve a static page to Googlebot (based on the User-Agent), either directly or by sending it a 302 redirect. That way, the web app can be the same, and the whole Googlebot workaround is nicely isolated until it is no longer necessary.
My worry is that Google may mistakenly assume that I'm trying to trick Googlebot, while my goal is to help it. What do you guys think about this approach, and what would you recommend?
Recently I came upon this excellent post from yearofmoo, explaining in detail how to make your Angular app SEO friendly. In essence, when bots see a URI with a hash tag they will know it's an ajaxed page and will try to reach the same URI by replacing '#!' in your URI with '?_escaped_fragment_='. This alternative URI instructs bots that they should expect to find a definitive static version of the page they were accessing.
Of course, to achieve this you'd have to introduce hash tags into your URIs. I don't see why you are trying to avoid them. Isn't Gmail using hash tags?
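
To make the scheme concrete, here is a minimal sketch of the server side of that mapping, assuming a Node.js/Express app; the renderSnapshot helper and the example routes are hypothetical placeholders, not part of any particular library:

    // Sketch: serve a static snapshot when a crawler requests the
    // ?_escaped_fragment_= form of a #! URL (hypothetical Express app).
    var express = require('express');
    var app = express();

    app.use(function (req, res, next) {
      var fragment = req.query._escaped_fragment_;
      if (fragment === undefined) {
        return next(); // normal visitor: serve the JavaScript app as usual
      }
      // e.g. GET /?_escaped_fragment_=/mail corresponds to /#!/mail
      res.send(renderSnapshot(decodeURIComponent(fragment)));
    });

    function renderSnapshot(route) {
      // Placeholder: a real app would return pre-rendered HTML for the
      // client-side route (from a headless browser, templates, or a cache).
      return '<html><body><h1>Snapshot of ' + route + '</h1></body></html>';
    }

    app.listen(3000);

The point of the scheme is that the crawler never runs your JavaScript; it only ever sees the static HTML returned for the _escaped_fragment_ URL.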
Yeah, unfortunately, if you want to be indexed you have to adhere to the scheme :( If you're running a Ruby app, there's a gem that implements the crawling scheme for any Rack app...
gem install google_ajax_crawler
A writeup of how to use it is at http://thecodeabode.blogspot.com.au/2013/03/backbonejs-and-seo-google-ajax-crawling.html, with source code at https://github.com/benkitzelman/google-ajax-crawler
Have a look at these links; they will give you a good direction:
Set up your own prerender service using Prerender.io's open source code:
https://prerender.io/
Use an existing service such as BromBone, Seo.js or SEO4AJAX:
http://www.brombone.com/
http://getseojs.com/
http://www.seo4ajax.com/
Create your own service for rendering and serving snapshots to search engines. Read this article. It will give you the big picture:
http://scotch.io/tutorials/javascript/angularjs-seo-with-prerender-io
As of May 2014, Googlebot executes JavaScript. Check Webmaster Tools to see how Google sees your site.
http://googlewebmastercentral.blogspot.no/2014/05/understanding-web-pages-better.html
Edit: Note that this does not mean other crawlers (Bing, Facebook, etc.) will execute JavaScript. You may still need to take additional steps to ensure that these crawlers can see your site.

Facebook and Ajax

How does Facebook's Ajax work? 2-3 months ago they were using #, but now the whole address bar is changing.
The first approach used is called "Ajax Crawling" (also refer to this answer).
But I think the new approach you are talking about is just the HTML5 History API. GitHub is using this approach for their tree browsing, and you can learn more about it here. (I recommend all readers to read the article and watch the video, as it's very informative.)
EDIT:
Just to point out that Facebook is definitely using the HTML5 History API (direct link from the previous github article).
They still use # as far as I can tell (but maybe we are on different versions?). For me, their links point to different pages, but they intercept my click and turn it into an Ajax request instead. Maybe this is to give cleaner URLs when copying, and/or to make the site work without JS?
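
For readers who want to see the mechanics, here is a minimal sketch of the pushState/popstate pattern discussed above; loadContent, the #content element, and the URL scheme are illustrative placeholders, not Facebook's or GitHub's actual code:

    // Navigate via Ajax while keeping a real, copyable URL in the address bar.
    function navigate(url) {
      loadContent(url);                          // fetch and render via Ajax
      history.pushState({ url: url }, '', url);  // update the URL without a reload
    }

    // The Back/Forward buttons fire popstate; re-render the stored state.
    window.addEventListener('popstate', function (event) {
      if (event.state) {
        loadContent(event.state.url);
      }
    });

    function loadContent(url) {
      // Placeholder: fetch a page fragment and inject it into the DOM.
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url);
      xhr.onload = function () {
        document.getElementById('content').innerHTML = xhr.responseText;
      };
      xhr.send();
    }

Unlike the hash-based approach, the URL in the address bar is a real server path, so copying it into a new tab can return a full server-rendered page.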

When using Ajax History and Bookmark, is it always good to use "#!" instead of just "#"?

Facebook implements Ajax history (Back and Forward buttons) and bookmarking using #! instead of just # in the URL. Is it always a good idea to do that? I was thinking that a regular anchor could otherwise interfere with the Ajax history mechanism, triggering it to process a normal anchor.
So, the Ajax history function will only process the hash portion when it sees #! instead of just #.
And is using ! compatible with major browsers? If Facebook is using !, my guess is that it is fairly well supported.
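
As a rough illustration of that convention, a history handler might ignore ordinary anchors and only route on #! fragments; handleRoute and the wiring here are hypothetical, just showing the idea:

    // Only treat the fragment as an Ajax route when it starts with "#!",
    // so ordinary in-page anchors like #section2 keep their normal behaviour.
    window.addEventListener('hashchange', function () {
      var hash = window.location.hash;
      if (hash.indexOf('#!') === 0) {
        handleRoute(hash.slice(2)); // strip "#!" and route the rest
      }
    });

    function handleRoute(path) {
      // Placeholder for your routing logic (load content via Ajax, etc.).
      console.log('Ajax route changed to: ' + path);
    }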
See Google's Making AJAX Applications Crawlable for a possible use case (I don't know if this is why Facebook used this fragment).
Update: This answer has been superseded by this article. It discusses the issues with hashbangs (#!), hashes (#), and the HTML5 History API (pushState, popstate), and the solutions.
In regards to usability on your website, it doesn't matter and you can use anything you like.
In regards to search engine optimization, using it and not using it lead down different avenues.
For instance, Facebook uses the ! according to the Google proposal for Making AJAX Applications Crawlable. Adding the ! tells Google that it should listen in on that Ajax request and add that URL to search engine results. This is great for websites which have already implemented Ajax, as all you need to do is add the !.
The downside of this is that it only solves the problem of making your ajax crawlable. It does not solve the problems of:
Keeping the URLs clean and consistent for Ajax and non-Ajax users. E.g. you could end up with www.facebook.com/profile.php?pid=123#!profile.php?pid=123
Keeping the website accessible by Non-Ajax users.
Keeping the URLs the same for both Ajax and Non-Ajax users.
It requires some severely complicated server-side changes for escaping and translating states with regard to query strings.
It is not compatible with the new HTML5 popstate functionality (part of the History API), which is designed to truly solve these problems.
For websites which don't currently use Ajax for everything, the problems above make it far better NOT to use the Google proposal; it is only a workaround for sites like Facebook which went Ajax crazy and needed a desperate solution for SEO. There are alternatives which solve more of these problems (and, with HTML5 popstate now available, can solve all of them). One such alternative is jQuery Ajaxy (as seen on balupton.com), which works by upgrading your website into an Ajax application while keeping the experience rich and interactive for Ajax-enabled users and continuing to work perfectly for Ajax-disabled users. A sketch of that progressive-enhancement idea follows.
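
Here is a minimal sketch of that approach using plain DOM APIs; the a.ajax selector, #content element, and loadContent helper are illustrative, not jQuery Ajaxy's actual API (pair it with a popstate handler like the one sketched earlier for Back/Forward support):

    // Links keep real hrefs, so the site works without JavaScript and
    // stays crawlable; with JavaScript, clicks are upgraded into Ajax
    // navigation whose pushState URLs are identical for both kinds of user.
    var links = document.querySelectorAll('a.ajax');
    for (var i = 0; i < links.length; i++) {
      links[i].addEventListener('click', function (event) {
        event.preventDefault();                    // stop the full page load
        var url = this.getAttribute('href');
        loadContent(url);                          // hypothetical Ajax loader
        history.pushState({ url: url }, '', url);  // same URL with or without JS
      });
    }

    function loadContent(url) {
      // Placeholder: fetch the page body via Ajax and swap it into the DOM.
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url);
      xhr.onload = function () {
        document.getElementById('content').innerHTML = xhr.responseText;
      };
      xhr.send();
    }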

Could someone explain hash tag usage for deeplinking ajax applications?

I am currently trying to fully appreciate how and when to use hash tags in URLs when building an Ajax-powered website. There seems to be a distinct lack of reading material on the web regarding this technique, and as such I don't feel like I've got a good handle on it.
Could someone explain in the simplest terms how the hash tag can be used in URLs to enable things like loading pages via Ajax?
Thanks
You might want to take a look at Google's Making AJAX Applications Crawlable website.
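
In the simplest terms: the part of the URL after # is never sent to the server, but scripts can read it and react when it changes, which is what makes Ajax states bookmarkable. A minimal sketch, where the /fragments/ URL scheme and the #content element are illustrative assumptions:

    // Hash-based deep linking: /app#about and /app#contact are bookmarkable,
    // the Back button works, and no full page reload is needed.
    function loadPage() {
      var page = window.location.hash.slice(1) || 'home'; // "#about" -> "about"
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/fragments/' + page + '.html');    // illustrative scheme
      xhr.onload = function () {
        document.getElementById('content').innerHTML = xhr.responseText;
      };
      xhr.send();
    }

    window.addEventListener('hashchange', loadPage); // fires on Back/Forward too
    window.addEventListener('load', loadPage);       // honour a deep link on first visit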

full ajax site and SEO

I am planning to start a full-Ajax site project, and I was wondering about SEO.
The site will have URLs like www.mysite.gr/#/category1 etc.
Can Google crawl the site?
Is there anything I have to know about full Ajax and SEO?
Any reading suggestions are welcome
Thanks
https://stackoverflow.com/questions/768233/do-hashes-in-urls-affect-seo
You might want to read about so called progressive enhancement.
Google supports indexing of AJAX sites, but unfortunately it involves extra work for the developer. See http://code.google.com/web/ajaxcrawling/docs/getting-started.html
I don't think Google is capable of doing so (yet)
http://googlewebmastercentral.blogspot.com/2009/10/proposal-for-making-ajax-crawlable.html
However, you can of course make your site usable with or without JavaScript. That way, browsers will get the full candy stuff, and Google (and text browsers) can still navigate your site.
In addition to SEO, you also need to think about usability standards here. A site that is that reliant on AJAX isn't going to work for things like screen readers as well as spiders. You need a system for graceful degradation. A website that can't function without JavaScript isn't really a functioning website.
The search engines will spider the initial page load - what happens to the page (with ajax) after that is irrelevant to listings.
Google itself doesn't crawl Ajax content, but it advises a mechanism for it. For this you first need to change # to #!.
The whole process of making AJAX content SEO-friendly is explained here, along with simple ASP.NET code to start working on it.
Imagine having to hit the “refresh” button in your browser to update your Twitter feed rather than just hitting the button on the page itself and having it instantly update. These are the types of problems that AJAX solves, although it does come with its pitfalls. Google might claim it’s able to crawl and parse AJAX websites, yet it’s risky to just take its word for it and leave your website’s organic traffic up to chance. Even though Google can usually index dynamic AJAX content, it’s not always that simple. This guide covers some of the things that can go wrong and how you can make sure your AJAX website is crawlable: https://prerender.io/ajax-seo/
