Does Google show AJAX-based website results in search? - ajax

I am making a website which shows all the latest information on the front page. The site works entirely via AJAX, so all the data lives in the server's database. If someone searches through a search engine for a particular piece of data that is available on my site, will the search engine be able to find it?

By default, no, content that is only reachable via AJAX is not going to be crawled by Google. There is a convention you can use, however, to ensure that it does get crawled: the "hashbang".
When Google (and other search engines) encounter a link whose fragment starts with "#!", they will "crawl" the dynamic content returned from the AJAX call. To take advantage of this, rather than having something like:
<a href="#mycontent">trigger ajax</a>
...you will want to use something like:
<a href="#!mycontent">trigger ajax</a>
There is some more info on the technique here (and on lots of other sites, just google it): http://www.seomoz.org/blog/how-to-allow-google-to-crawl-ajax-content
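For completeness, here is a minimal client-side sketch of the idea, assuming jQuery and a hypothetical /content endpoint that returns HTML for a named section: on page load, read the hashbang fragment and fetch the matching content via AJAX.

// Minimal sketch: read the "#!..." fragment and load the matching content.
// Assumptions: jQuery is loaded; "/content" and "#main" are placeholders.
$(function () {
  var hash = window.location.hash;      // e.g. "#!mycontent"
  if (hash.indexOf('#!') === 0) {
    var section = hash.substring(2);    // e.g. "mycontent"
    $.get('/content', { section: section }, function (html) {
      $('#main').html(html);            // inject the AJAX-loaded markup
    });
  }
});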

Some insight: http://coding.smashingmagazine.com/2011/09/27/searchable-dynamic-content-with-ajax-crawling/
Also, you might want to look into HTML caching; it will boost your SEO performance.

Related

MEAN-SEO not working as expected

I have a project in meanjs.
It has html5mode disabled, so my URLs look like this:
http://localhost:3000/#!/products
I am trying to implement AJAX snapshots in order to allow Google's crawlers to see content generated by JavaScript on the client side.
I installed a module called MEAN-SEO:
http://blog.meanjs.org/post/78474995741/mean-seo
Now when I access the following URL:
http://localhost:3000/?_escaped_fragment_=
I am redirected to:
http://localhost:3000/?_escaped_fragment_=/#!/
And when I click on "products", or when I access it directly, I am redirected to:
http://localhost:3000/?_escaped_fragment_=/#!/products
After reading the Google specification detailed here https://developers.google.com/webmasters/ajax-crawling/docs/getting-started , what I need to get is something without hashbangs, like the following:
http://localhost:3000/?_escaped_fragment_=/products
What am I doing wrong?
Kind Regards.
Any specific reason why you want html5mode off?
Here is something a lot of people have missed: search engines (both Google and Bing) can now handle AJAX-based content.
Their crawlers now understand pushState, so if you just turn html5mode on you don't need any special handling to get your SEO working. You can load your content via AJAX, you can set title tags and meta tags with JavaScript, and so on, and the crawlers will understand your content the same as if you had rendered it server-side. There is no need to do HTML snapshotting or _escaped_fragment_ handling for SEO anymore.
This has been announced on their developer blogs but unfortunately most of the documentation hasn't been updated with this information, so it's gone under the radar for a lot of people.
One word of warning though, Facebook does not handle pushstates, so if you want to support the Facebook crawler you still need to handle that separately.
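For reference, turning html5mode on in AngularJS is a small config change; a minimal sketch (the module name "myApp" is a placeholder for your own) might look like this:

// Minimal sketch: enable html5mode so URLs use pushState instead of "#!".
// You also need a <base href="/"> tag in your index.html for this to work.
angular.module('myApp').config(['$locationProvider', function ($locationProvider) {
  $locationProvider.html5Mode(true);
}]);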

AJAX Crawling with question mark instead of hashbang

Where I'm at: I've read Google's documentation regarding its AJAX crawling, and I've searched around a bit on this site and others, but I'm quite confused, as it seems that all the problems address the same issue: AJAX crawling with hashbangs.
I've developed an app which, among other purposes, lets the user search for locations worldwide, using an AJAX searcher quite similar to Google's, but my app uses the question mark exclusively in its AJAX URLs instead of the hashbang. Due to compatibility issues, changing to the hashbang is not an option.
Not only am I confused by the fact that I could not find anyone else using the question mark instead of the hashbang, I'm also wondering whether there is any documentation regarding my issue: how to let Googlebot crawl all my AJAX content when I'm using the question mark instead of a hashbang in my AJAX app.
The AJAX crawling scheme was created explicitly for applications and websites using the hashbang (#!) in their URL structure, because the fragment part of a URL only exists on the client side; the URL rewriting in the spec, i.e. from #! to ?_escaped_fragment_=, is meant to solve that.
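To make that rewriting concrete, here is an illustrative sketch (for the simple case of a URL with no pre-existing query string; example.com is a placeholder):

// Illustration only: how a crawler rewrites a hashbang URL under the scheme.
// Per the spec, special characters in the fragment are also percent-encoded.
function toEscapedFragment(url) {
  // "http://example.com/#!/products" -> "http://example.com/?_escaped_fragment_=/products"
  return url.replace('#!', '?_escaped_fragment_=');
}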
Since most of the web already makes use of JavaScript in one way or another, we (Google) needed a better solution, so we started executing JavaScript in the pages we crawl, effectively rendering every page just like a normal browser would. To quote our blog post, Understanding web pages better:
In order to solve this problem, we decided to try to understand pages by executing JavaScript. It’s hard to do that at the scale of the current web, but we decided that it’s worth it. We have been gradually improving how we do this for some time. In the past few months, our indexing system has been rendering a substantial number of web pages more like an average user’s browser with JavaScript turned on.
You can also see what we "see" using Fetch as Google in Search Console (formerly Webmaster Tools); read more about the feature in our post titled Rendering pages with Fetch as Google.
Before you do anything else, please try to fetch a few pages from your site with Fetch as Google. You might not have to do anything at all; it might actually work out of the box. And the good news is that it's not only Google that's rendering pages!

How can I make my AngularJS Wordpress AJAX blog searchable and SEO-friendly?

I'm working on a WordPress site which displays posts through a JSON API and AngularJS. I render all post thumbnails on a page, and when one is clicked the post is rendered in an overlay on the same page. The post URL becomes something like mysite.com/#!/post-name.
Here's the development page http://givakt.kund.griffel.se/blogg-jobb/
Since everything is fetched by AJAX calls, none of this info is available to search engines. I have tried to figure out a good approach to make it indexable, but it's all very new ground to me.
Would it be possible to serve content from, or redirect the search engine to, a PHP-rendered (WordPress) page, say mysite.com/post-name, while it thinks it's getting the correct content at mysite.com/#!/post-name? Is it even allowed, or is it frowned upon? The actual content would of course be as identical as possible at both sources.
I'm not sure if this is a legitimate approach, however, or if it could even work. Is there any other, easier or preferred, approach that I'm missing?
BTW, I have read http://www.yearofmoo.com/2012/11/angularjs-and-seo.html about how to use PhantomJS and so on to provide indexable pages. So what I'm basically asking is whether there's a way to utilize WordPress pages to serve the content instead.
I'm not exactly sure how to do it in terms of technicalities, but Google is usually not happy if you show one version of a page to search engines and something else to actual visitors. It's called cloaking. Just keep that in mind.

Google Search optimisation for ajax calls

I have a page on my site with a list of things which gets updated frequently. This list is created by calling the server via JSONP, getting JSON back, and transforming it into HTML. Fast and slick.
Unfortunately, Google isn't able to index it. After reading up on how to get this done according to Google's AJAX crawling guide, I am a bit confused and need some clarification and confirmation:
Only the AJAX pages need to implement the rules, right?
I currently have a REST URL like:
[site]/base/junkets/browse.aspx?page=1&rows=18&sidx=ScoreAll&sord=desc&callback=jsonp1295964163067
This would need to become something like:
[site]/base/junkets/browse.aspx#page=1&rows=18&sidx=ScoreAll&sord=desc&callback=jsonp1295964163067
And when Google calls it like this:
[site]/base/junkets/browse.aspx#!page=1&rows=18&sidx=ScoreAll&sord=desc&callback=jsonp1295964163067
I would have to deliver the HTML snapshot.
Why replace the ? with # ?
Creating HTML snapshots seems very cumbersome. Would it suffice to just serve simple links? In my case I would be happy if Google would only index the individual item pages.
It looks like you've misunderstood the AJAX crawling guide. The #! notation is to be used on links to the page your AJAX application lives within, not in the URL of the service your application makes calls to. For example, if I access your app by going to example.com/app/, then you'd make the page crawlable by instead linking to example.com/app/#!page=1.
Now when Googlebot sees that URL in a link, instead of going to example.com/app/#!page=1 – which means issuing a request for example.com/app/ (recall that the hash is never sent to the server) – it will request example.com/app/?_escaped_fragment_=page=1. If _escaped_fragment_ is present in a request, you know to return the static HTML version of your content.
Why is all of this necessary? Googlebot does not execute script (nor does it know how to index your JSON objects), so it has no way of knowing what ends up in front of your users after your scripts run and content is loaded. So, your server has to do the heavy lifting of producing an HTML version of what your users ultimately see in the AJAXy version.
So what are your next steps?
First, either change the links pointing to your application to include #!page=1 (or whatever), or add <meta name="fragment" content="!"> to your app's HTML. (See item 3 of the AJAX crawling guide.)
When the user changes pages (if this is applicable), you should also update the hash to reflect the current page. You could simply set location.hash='#!page=n';, but I'd recommend using the excellent jQuery BBQ plugin to help you manage the page's hash. (This way, you can listen to changes to the hash if the user manually changes it in the address bar.) Caveat: the currently released version of BBQ (1.2.1) does not support AJAX crawlable URLs, but the most recent version in the Git master (1.3pre) does, so you'll need to grab it here. Then, just set the AJAX crawlable option:
$.param.fragment.ajaxCrawlable(true);
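If you'd rather not pull in a plugin, here is a minimal sketch of the same idea using the native hashchange event (the fragment format and the loadPage function are placeholders, not part of BBQ):

// Minimal sketch: react when the "#!page=n" fragment changes.
function loadPage(n) {
  // ...fetch and render page n via AJAX (placeholder)...
}

window.addEventListener('hashchange', function () {
  var match = /^#!page=(\d+)$/.exec(window.location.hash);
  if (match) {
    loadPage(parseInt(match[1], 10));   // user used back/forward or edited the hash
  }
});

// When the user changes pages through the UI, update the hash as well:
// window.location.hash = '#!page=' + n;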
Second, you'll have to add some server-side logic to example.com/app/ to detect the presence of _escaped_fragment_ in the query string, and return a static HTML version of the page if it's there. This is where Google's guidance on creating HTML snapshots might be helpful. It sounds like you might want to pursue option 3. You could also modify your service to output HTML in addition to JSON.
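The exact check depends on your stack; purely as an illustration (Node/Express is used here only to show the shape of the logic, not the asker's ASP.NET setup, and the snapshot body is a stand-in for real rendering), it could look like this:

// Illustrative sketch: branch on the presence of _escaped_fragment_.
var express = require('express');
var app = express();

app.get('/app/', function (req, res) {
  var fragment = req.query._escaped_fragment_;
  if (fragment !== undefined) {
    // Crawler request: return a static HTML snapshot of the state described
    // by the fragment (e.g. "page=1"). Real code would render the same
    // content the AJAX version shows.
    res.send('<html><body>Snapshot for ' + fragment + '</body></html>');
  } else {
    // Normal visitor: serve the regular AJAX-driven page.
    res.sendFile(__dirname + '/app/index.html');
  }
});

app.listen(3000);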
I've more or less given up on this. There really seems to be no alternative to generating the HTML on the server and delivering it in the HTML body if you want Google to index your directory.
I even tried adding a section wrapped in a .NET user control which implemented a simple HTML version of the directory. But Google managed to ignore that too.
So in the end my directory has been de-AJAXified. :(

Search engine optimization dos and don'ts for AJAX

I've created an AJAX-enabled web application. In my application, all content [that I want to appear in search results] is loaded using AJAX. However, I have observed that despite a valid sitemap submitted to Google, my page ranking is very, very poor.
What do I need to do, and what should I avoid, in order to improve the page ranking?
Thanks in advance.
You probably want to make it bookmark- and history-enabled. There are many ways; one of them is jQuery's history plugin: https://github.com/tkyk/jquery-history-plugin
You probably want to create a page for search engines to crawl your website with links like http://www.mysite.com/foobar.php#!fetch_content=xyz. The #! is a convention recognized by Google for crawling and indexing AJAX content.
reference: http://googlewebmastercentral.blogspot.com/2007/11/spiders-view-of-web-20.html
Don'ts would be interesting, but here's a do that applies to all of JS as well.
Make sure that all links degrade gracefully. This can easily be achieved by giving the links real URLs that lead to the same content that would otherwise be loaded via AJAX, so the site still works when JS is not enabled. This also makes crawling your website possible.
You would then prevent the default navigation on all the affected links when JavaScript is running; a sketch of the pattern follows.
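Purely as a hedged illustration (the URL, the "ajax-link" class, and the "#main" container are placeholders, assuming jQuery):

// Markup assumption (placeholder URL):
//   <a class="ajax-link" href="/articles/my-post">My post</a>
// The real href keeps the link crawlable and usable without JavaScript;
// with JavaScript enabled we intercept the click and load via AJAX instead.
$(document).on('click', 'a.ajax-link', function (event) {
  event.preventDefault();                      // suppress the normal navigation
  $('#main').load(this.getAttribute('href'));  // fetch the same content via AJAX
});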
