Google indexing of AJAX site: how to transition from _escaped_fragment_ method?

My site currently uses hashbang URLs along with Google's now-deprecated recommendation of serving a static page when a URL is requested with the _escaped_fragment_ query parameter.
Example of a static pre-generated page using the deprecated method:
https://tweepi.com/app/#!/help is statically served when requesting https://tweepi.com/app/?_escaped_fragment_=/help
I am building a dynamic page and do not want to keep regenerating a static HTML file. I read Google's new recommendation, and it simply says not to disallow Googlebot from crawling your site's CSS or JS files.
Assuming a new dynamic page with the URL https://tweepi.com/app/#!/reviews, what status code should the following URL return to ensure the best SEO results when Google/Bing crawls my site? 404? 500? A 301 redirect?
https://tweepi.com/app/?_escaped_fragment_=/reviews

For SEO bots: serve the HTML snapshot for https://tweepi.com/app/?_escaped_fragment_=/reviews with status code 200; that means nothing special needs to happen for https://tweepi.com/app/#!/reviews itself.
For users: return 404 for https://tweepi.com/app/?_escaped_fragment_=/reviews
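A minimal sketch of that logic as an Express-style Node handler (the bot user-agent regex and the renderSnapshot helper are assumptions for illustration, not part of the site described above):

const express = require('express');
const app = express();

const BOT_UA = /googlebot|bingbot|yandex/i; // assumed crawler user agents

// Assumed pre-render helper; in practice this would serve a stored snapshot.
function renderSnapshot(fragment) {
  return '<html><body><h1>Snapshot for ' + fragment + '</h1></body></html>';
}

app.get('/app/', function (req, res) {
  var fragment = req.query._escaped_fragment_;
  if (fragment !== undefined) {
    if (BOT_UA.test(req.get('User-Agent') || '')) {
      res.status(200).send(renderSnapshot(fragment)); // snapshot for crawlers
    } else {
      res.sendStatus(404); // users should never land on the ?_escaped_fragment_= form
    }
    return;
  }
  res.sendFile(__dirname + '/index.html'); // normal SPA entry point
});

app.listen(3000);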

Related

ngRoute for Single Page AJAX app and Google Search indexing

I have read numerous resources on using ngRoute's html5Mode(true) in AngularJS single-page applications (SPAs) to facilitate Google indexing of an AJAX SPA. I still have the following question.
Setup:
I have set up a server rewrite that sends browsers to the root of my AngularJS app for URLs that are not found, like so:
RewriteRule ^(.*) /SPA/index.html [NC,L]
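(For completeness, the "not found" behavior described above is usually expressed with RewriteCond guards, so that existing files and directories are still served directly; a sketch of the usual full form:)
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ /SPA/index.html [NC,L]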
I also verified in Google Webmaster Tools that Google can access, index, and correctly render an individual page using a direct URL like http://braginski.com/SPA/about
This page is referenced in index.html like this:
<a href="#about">About</a>
Question:
Now that I have verified that Googlebot can render my AJAX app correctly, can I just submit an XML sitemap to Google with all the URLs I would like indexed? This sitemap will include all URLs in the conventional form (no #, no !), like so:
/SPA/about (full URL: http://braginski.com/SPA/about)
Google will then index each URL (my rewrite will ensure that each URL like /SPA/about is routed to the root of the AngularJS app for proper rendering). This way I don't have to deal with _escaped_fragment_ or any other middleware pre-rendering?
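For reference, such a sitemap would contain one entry per clean route (only the /SPA/about route from above is shown; additional routes would be listed the same way):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://braginski.com/SPA/about</loc>
  </url>
  <!-- one <url> entry per conventional (no #, no !) route -->
</urlset>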
Thanks

Ajax content indexing, Google

I've followed the instructions from the Google website to enable Ajax crawling on my AngularJS site by adding the following meta tag:
<meta name="fragment" content="!">
The rendered content has some links like:
<a href="/user/1/">User 1</a>
<a href="/user/2/">User 2</a>
<a href="/user/3/">User 3</a>
It also has some Ajax tabs which render dynamic content, like:
<a href="#!/popular">Popular</a>
<a href="#!/recent">Recent</a>
Looking at the server logs, GoogleBot did come and correctly passed the _escaped_fragment_ in the URI (%2f is the URL-encoded "/"):
_escaped_fragment_=%2fpopular
_escaped_fragment_=%2frecent
The problem is that, looking at the actual indexed content using site:www.somesite.com and at the server logs, I see that GoogleBot attempted to index pages like:
/user/1/#!/popular
/user/1/#!/recent
Why would something like this happen, considering those user URLs are relative and don't have #! on them to indicate AJAX content? And is there a way to prevent this?
If those tab URLs are available on all pages, Googlebot will simply add them.
So if it goes to User 1 and the Popular tab is there again, it's logical that Google loads /user/1/#!/popular.
You might want to know that I've solved this puzzle with a script that's on GitHub: https://github.com/kubrickology/Logical-escaped_fragment
Simply build your AJAX pages with: __init()

Prevent users from landing on non-AJAX (non-#!) pages on my site without a redirect loop, plus SEO _escaped_fragment_ trouble

My site is AJAX, but it pulls content from .html files. Some of those files have been indexed without the #!, so they just function as a basic HTML site. I want to redirect users that land on the non-AJAX page to the #! version. I tried a redirect (without thinking it through) and it created an endless loop with the dynamic content.
If you look at the code, you will see that it uses JS to place the static pages into a content wrapper.
I am also having trouble with an SEO issue, where Google does not appear to be requesting the _escaped_fragment_ version... that, or I need some help. I thought that since the site pulls content from HTML files, I could just copy those pages and name them _escaped_fragment_=page.html, but that is not working. I tried a redirect, but Google Fetch just showed the redirect request and not the content.
It was a template that I purchased... I figured out how to modify the theme and content, but this is beyond me.
Closed
I decided to scrap the hashbang method. I have real pages, and I decided to let them be searched and indexed. I am still waiting on a solution to pull only the body into the AJAX content wrapper; however, I was able to apply basic CSS to the pages without messing anything up when they load into the main page via AJAX.
I used
$("a").attr("href", function (i, href) { return "/#" + href; }); // add a hash in front of each internal link
to add a hash to the clean URLs that were internal from the main menu. This created a loop if added to the pages themselves, so on the pages I used a clean URL (with a leading "/") plus an onclick redirect to the AJAX version:
onclick="window.location = '/#link.html'; return false;"
I had a JS redirect that detected whether there was a hash before the page link and, if not, added it; however, Google did not like it! Sure, the pages are not as nice. That said, I have content for non-JS-enabled browsers. As soon as I get main.js modified so that it ignores head elements, I can dress them up even more. Each page has links that will get a user to the AJAX version, including the home button "/#".
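Put together, one such menu entry looks like this (the page name is hypothetical): crawlers and non-JS browsers follow the clean href, while JS users are redirected to the AJAX version:

<a href="/link.html" onclick="window.location = '/#link.html'; return false;">Link</a>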

Why is my ajax content not being indexed by google

I have tried to set up my site ( http://www.diablo3values.com ) according to the guidelines set out here: https://developers.google.com/webmasters/ajax-crawling/ However, it appears that Google has updated its index (I can see the revisions to the meta description tags), but the AJAX content does not show up in the index.
I am trying to use the “Handle pages without hash fragments” option.
If you view either of the following:
http://www.diablo3values.com/?_escaped_fragment_=
http://www.diablo3values.com/about?_escaped_fragment_=
you will correctly see the HTML snapshot with my content. (Those are the two pages I am most concerned about.)
Any ideas? Am I doing something wrong? How do you get Google to correctly recognize the tag?
I'm typing this as an answer, since it got a little too long to be a comment.
First of all, your links seem to point to localhost:8080/about, and not /about, which is probably why Google doesn't index them in the first place.
Second, here's my experience with pushState URLs and Google AJAX crawling:
My experience is that AJAX crawling with pushState URLs is handled a little differently by Google than with hashbang URLs. Since Google won't know that your URL is a pushState URL (it looks just like a regular URL), you need to add <meta name="fragment" content="!"> to all your pages, not only the "root" page. And Google doesn't seem to know that the pages are part of the same application, so it treats every page as a separate AJAX application. So Googlebot will never actually build a navigation structure inside _escaped_fragment_, like _escaped_fragment_=/about, as it would with a hashbang URL (#!/about). Instead, it will request /about?_escaped_fragment_= (which you apparently already have set up). This goes for all your "deep links": instead of /?_escaped_fragment_=/thelink, Google will always request /thelink?_escaped_fragment_=.
But as I said initially, the reason it doesn't work for you is probably that you have localhost:8080 URLs in the HTML generated for _escaped_fragment_.
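A sketch of the server-side mapping this implies, as an Express-style Node handler (the route pattern and the renderSnapshot helper are assumptions for illustration):

const express = require('express');
const app = express();

// Assumed pre-render helper; in practice this would serve a stored snapshot.
function renderSnapshot(page) {
  return '<html><body><h1>Snapshot for ' + page + '</h1></body></html>';
}

// With pushState URLs every deep link gets its own ?_escaped_fragment_= request,
// e.g. /about?_escaped_fragment_=  (not /?_escaped_fragment_=/about)
app.get('/:page?', function (req, res) {
  if ('_escaped_fragment_' in req.query) {
    res.send(renderSnapshot(req.params.page || 'index'));
  } else {
    res.sendFile(__dirname + '/index.html'); // the SPA handles routing client-side
  }
});

app.listen(3000);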
Googlebot only knows to crawl the escaped fragment if your URLs conform to the hashbang standard. As users navigate your site, your URLs need to be:
http://www.diablo3values.com/
http://www.diablo3values.com/#!contact
http://www.diablo3values.com/#!about
Googlebot actually needs to see these URLs in the source code so that it can follow them. Then it knows to download the following URLs:
http://www.diablo3values.com/?_escaped_fragment_=contact
http://www.diablo3values.com/?_escaped_fragment_=about
On your site you appear to be loading a new page on each click, and then also loading the content of each page via AJAX. This is not how I would expect an AJAX site to work. Usually the point of using AJAX is that the user never has to load a whole new page: when the user clicks, the new content section is loaded and inserted into the current page. You serve the navigation once, and then you only serve escaped fragments of the content.
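A minimal sketch of that pattern using jQuery (element IDs and the fragment path are hypothetical):

// Serve the navigation once; on click, swap only the content section.
$('nav a').on('click', function (e) {
  e.preventDefault();
  var page = this.getAttribute('href').replace('#!', ''); // e.g. "/contact"
  window.location.hash = '#!' + page;                // keeps a crawlable hashbang URL
  $('#content').load('/fragments' + page + '.html'); // hypothetical fragment location
});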

Yandex AJAX crawling

For AJAX crawling by Googlebot I use the "_escaped_fragment_" argument on my website.
Now I have checked Yandex's search results for my site, and I saw that the AJAX responses don't exist in the search results.
Is there an option like "_escaped_fragment_" for Yandex?
Or should I check the user agent and, if it includes "YandexBot", serve the non-AJAX page?
Thank you
I found out that Yandex also supports Google's AJAX crawling proposal.
There is no need to change any code if you have already optimized your site for Googlebot crawling.
