Non-SPA AJAX Partials for SEO
Sadly, nearly all of the Angular SEO examples assume the use of a single-page application (SPA). My app is not a SPA. Currently, my stack is:
Node/Express - for routing and rendering Jade templates. The URLs are real and don't use HTML5 pushState, hash-bangs, or anything similar. For this reason, _escaped_fragment_ won't work for me (I don't think).
Angular - for communicating with my RESTful API(s).
My problem is that my page only includes pieces that are loaded via AJAX; the rest of the page is rendered server side. Node/Express is not responsible for any of that logic: Angular pulls in the data that ends up in my first h1.
Googlebot and similar crawlers see <h1>{{this_unrendered_string}}</h1>, which is no good.
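To make that concrete, here is roughly what's going on (the module, controller, and endpoint names below are made up, not my actual code):

// The Jade template that Express renders contains, more or less:
//   h1(ng-controller='HeadlineCtrl') {{headline}}
// Angular only fills the binding in after its XHR completes, so anything that
// doesn't execute JavaScript sees the raw {{headline}} string.
angular.module('shop', [])
  .controller('HeadlineCtrl', function ($scope, $http) {
    $http.get('/api/headline').then(function (response) {
      $scope.headline = response.data.text;
    });
  });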
Has anyone come up with any clever solutions for working around this scenario?
FWIW, I found a service called SEO.js that will host a rendered version of any page I pass to it. If I could just tell Googlebot and similar crawlers "Hey, don't use this page, use this page instead"... But I'm not entirely sure how search engines feel about content being served from a different host. Maybe some trickery could work here...
Google has documented an approach to "Making AJAX Applications Crawlable" here: https://developers.google.com/webmasters/ajax-crawling/
Implementing this isn't completely simple (basically you have to run a headless browser and return HTML snapshots in response to specially formatted requests from Google).
It's not as simple as just returning a snapshot whenever you detect Googlebot, but doing it this way probably eliminates any risk of being penalized for cloaking.
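A rough sketch of what that hook can look like in Express (the getSnapshot helper here is hypothetical; the real work of generating snapshots with a headless browser happens elsewhere). Since your URLs don't use hash-bangs, you opt pages in with <meta name="fragment" content="!">, and the crawler then re-requests them with an _escaped_fragment_ query parameter:

var express = require('express');
var app = express();

app.use(function (req, res, next) {
  if (req.query._escaped_fragment_ === undefined) return next();  // normal visitors get the app as usual
  // Crawler request: serve the pre-rendered snapshot for this URL instead.
  getSnapshot(req.path, function (err, html) {                     // getSnapshot is a placeholder
    if (err) return next(err);
    res.send(html);
  });
});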
There are a few companies that offer this as a service; I'm getting on well with this one: https://ajaxsnapshots.com - they say that Bing and Yandex (the Russian search engine) support it too.
AjaxSnapshots has an API you can use to tell them when your page is ready to snapshot; you could call that after all of your client-side rendering is done.
Related
I have developed a website using AngularJS and ASP.NET Web API.
The problem is that the AJAX-rendered content is not crawlable by Google, and no one can find the website using Google search.
After reading many articles regarding this issue, including one with explanatory links going out to everything, the Google AJAX crawling protocol, and a Stack Overflow question, I couldn't find a proper solution. The ones that mention ASP.NET solutions are talking about MVC, and I only need simple REST via Web API; the other articles don't deal with ASP.NET at all.
Is there any simple explanation?
I'm the one who asked this same question long ago, so I will answer from my experience:
Firstly, if all your content is accessible via unique URIs (including the hashbang, if you use it), modern search engines should index it just fine. In fact, Google can index JavaScript-generated content now. You can check this via the Google Webmaster tools and see how your site is indexed.
Secondly, there are libraries that help you serve pre-rendered content to search engines if you need to, but in my case I didn't bother much with it, since Google is indexing JS nicely.
I've seen others ask this question, and maybe I'm missing something or this is outdated, but I don't see why AngularJS needs to be an issue with SEO.
Say you have a landing page and it has a bunch of links. Assuming you're using HTML5 mode in AngularJS (and I'm not sure that's 100% necessary) and something like ngRoute, then the links on the landing page can work both as "Angular" (JavaScript) links and "old school" (full page load) links.
If you're a human user you can click a link and it will do angular magic and adjust the content without loading the full page. Ok, all fine.
But if you instead copy the link and paste it in a new tab or new browser, it will still work - assuming you've set up routes correctly.
I'm not an SEO expert by any stretch of the imagination, but as I understand it, having links that load pages and having those pages contain real and useful content is the core of SEO, and done this way, AngularJS should work fine. The key thing to check is that the link still works when you copy and paste it (not just click it).
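For what it's worth, here's a minimal sketch of that setup (route and template names are invented); the server just has to answer every app URL with the same page so pasted links still load:

angular.module('app', ['ngRoute'])
  .config(function ($locationProvider, $routeProvider) {
    $locationProvider.html5Mode(true);   // real URLs instead of #/ fragments (needs <base href="/">)
    $routeProvider
      .when('/products/:id', { templateUrl: '/partials/product.html', controller: 'ProductCtrl' })
      .otherwise({ redirectTo: '/' });
  });

// Express side: any deep link falls back to the same shell page.
app.get('/products/:id', function (req, res) {
  res.render('index');
});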
I am displaying internationalized strings within a Polymer element as follows:
<div>
<span class="content">{{myContent}}</span>
</div>
... and have the following Dart code:
@observable String myContent;
//...
void onUpdateLocale(_locale) {
myContent = getMyContent();
}
//...
getMyContent() => Intl.message('All my content ...', name:'myContent',
desc: 'This is my content',
args: [],
examples: {'None' : 0});
However, when Google crawls the app, it only pulls "{{myContent}}" and not its interpolated value, the actual internationalized content. Is there a way to work around this and make an internationalized Polymer.dart app that is also SEO-friendly?
It's not really clear. Although Google recently announced that they are evaluating JavaScript when indexing pages, I've not seen any deep evaluation of how this compares to the server-rendered-pages approach.
And then there is the issue of non-Google search engines like Bing.
Polymer, as it stands today, doesn't really do server-side rendering, and as far as I can tell the team doesn't have plans to offer that in the near future.
If your project/business depends on SEO, I would not risk using Polymer.
You have a few options to address this issue:
Use phantom.js to render the page on the server side whenever a crawler requests the page (a rough sketch of this approach follows this list).
Use a third-party service like AjaxSnapshots.
Forget Polymer and use the React.js component framework. React can render its virtual DOM on the server side. This works seamlessly if you are using Node.js frameworks. It should be possible with JVM frameworks as well, since Java 6+ ships with a JavaScript engine (vastly improved in Java 8; Google "Nashorn").
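For reference, the phantom.js route from the first option boils down to a script along these lines (the fixed wait is a naive stand-in for "client-side rendering is finished"; a real setup would signal readiness explicitly):

// snapshot.js - run with: phantomjs snapshot.js http://localhost:3000/some/page
var page = require('webpage').create();
var url = require('system').args[1];

page.open(url, function (status) {
  if (status !== 'success') { phantom.exit(1); }
  window.setTimeout(function () {
    console.log(page.content);   // the fully rendered HTML, ready to return to the crawler
    phantom.exit();
  }, 2000);
});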
Google has a spec that lets you serve search engines snapshots of your page's HTML after all necessary JavaScript (or Dart) has run: https://developers.google.com/webmasters/ajax-crawling/
The basic idea is to render the pages on the server side and then follow a set of URL conventions that lets you serve search engines the pre-generated HTML in a way that they won't confuse with cloaking.
Google, Bing, Yandex and some social bots support this spec.
You can implement this spec yourself or use a service that does it for you (I work for one of these: https://ajaxsnapshots.com). The solution is typically plugged in at the web server level, so you don't need to make any changes to your app.
So, I don't know much about Polymer, aside from the documentation on data binding I just viewed. It seems fairly similar to Google's AngularJS, in that it uses JavaScript in a declarative way to render data into an HTML document. That being the case, the browser (and any crawler that doesn't run JavaScript) still fundamentally sees the underlying {{something}} bindings as raw strings. The JS libraries are then what turn that data into text on the screen.
That being the case, you might consider handling SEO like Angular developers do. Here is the definitive resource on the subject: http://www.yearofmoo.com/2012/11/angularjs-and-seo.html
I've got a fairly unique situation that I don't believe any of the other topics here cover.
I have an ecommerce module that is dynamically loaded/embedded into third-party sites: no iframe, just straight JSON to the web client, rendered into content. I have no access to these third-party sites at all, other than my JavaScript file being loaded from their page and dynamically generating the content.
I'm aware of the #! method, but that's no good here. My JS does generate "URLs" within the embedded platform, but they're fake and for the address bar only, and I don't believe Google's crawlers can reach that far.
So my question is: is there a meta tag we can set that points outside the URL, i.e. back to my server, to static crawlable content? For example, pointing the canonical to my server... but again, I don't think that would work.
If you implement #!, then you have to make sure the URLs you're embedded in support the _escaped_fragment_ parameter versions, which you probably can't. It's server-side stuff.
You probably can't influence the canonical tag of the page either. It again has to be done server side. Any meta tag you set via JavaScript will not be seen by a bot.
Disqus solved this problem by providing an API so the embedding websites could get their comments server side and render them in plain HTML. WordPress has a plugin to do this. Disqus is also one of the few systems whose AJAX pages Google has worked out how to crawl.
Some plugins ask people to also include a plain link alongside the JavaScript. Be careful with this, as you may break Google's guidelines if you do it wrong. But you may be able to integrate the plain link with your plugin so that it directs bots and users to a crawlable version of the content.
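As a sketch of the Disqus-style approach, you could expose a plain-HTML endpoint on your own server that embedding sites (or a plugin you give them) fetch server side; the route, loadProduct, and escapeHtml below are placeholders, not a real API:

app.get('/embed/:productId', function (req, res) {
  loadProduct(req.params.productId, function (err, product) {
    if (err) return res.sendStatus(404);
    // Static, crawlable markup that the host page can include directly.
    res.send('<div class="my-module">' +
             '<h2>' + escapeHtml(product.name) + '</h2>' +
             '<p>' + escapeHtml(product.description) + '</p>' +
             '</div>');
  });
});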
Look into Google's crawlable AJAX standard (and why it's a bad idea) and canonical URLs.
Now you can actually do this. A complete guide and examples can be found here: https://github.com/kubrickology/Logical-escaped_fragment
I have been looking at Hulu's new website and I am very impressed from a developer's standpoint (as well as a designer's).
I have found that, unless you switch between HTTP/HTTPS, you are served content entirely from JSON requests. It is a HUGE feat to have this level of AJAX while maintaining back-button support as well as allowing each URL to be visited directly.
I want to create a website like this as a learning experience. Is there any type of framework out there that can give me this kind of support?
I was thinking I could...
leverage jQuery
use client-side MVVM frameworks like KnockoutJS?
use ASP.NET MVC content negotiation to serve HTML or JSON determined by an Accept header, using the same codebase (see the sketch after this list)
use the same template for client-side and server-side rendering
provide ways to update the page title/meta tags/etc.
AJAX forms/widgets/etc. would still be used, but I am thinking about page-level AJAX using JSON and client-side templates.
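For the content-negotiation piece, this is roughly what I have in mind, sketched in Express/Node just to show the idea (not my actual ASP.NET MVC code; loadProduct is a placeholder):

app.get('/products/:id', function (req, res) {
  loadProduct(req.params.id, function (err, product) {
    if (err) return res.sendStatus(404);
    res.format({
      html: function () { res.render('product', product); },  // full page: direct visits, bots, no-JS users
      json: function () { res.json(product); }                 // data only: page-level AJAX navigation
    });
  });
});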
What do you think? Any frameworks out there? Any patterns I could follow?
It is always best to first build a website without AJAX support, then add AJAX on top of that. Doing this means that:
users without JavaScript can already access your website
users can already visit any URL directly.
Adding AJAX support can be accomplished with various JavaScript libraries. To render JSON content, you will want to look at JavaScript templating. You will want to use JavaScript templating even on the server side for when you add AJAX support (e.g. templates with the .ejs file extension). This will probably require appropriate libraries to run JavaScript on the server.
When you add AJAX support, you will want to use the "History.js" library for browser back/forward/history support.
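A rough sketch of how that wiring tends to look (the selectors, URL scheme, and renderTemplate call are made up):

// Intercept in-site links and push a real URL onto the history stack.
$(document).on('click', 'a.ajax', function (e) {
  e.preventDefault();
  History.pushState(null, null, this.href);
});

// On every state change (including back/forward), fetch JSON and re-render the content area.
History.Adapter.bind(window, 'statechange', function () {
  var state = History.getState();
  $.getJSON(state.url, function (data) {
    $('#content').html(renderTemplate(data));  // renderTemplate: your client-side template of choice
    document.title = data.title;
  });
});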
Make no mistake. This is a HUGE project (unless your website only has a few pages). So it is going to take a LONG time to add all the AJAX support to the best possible standard.
To answer your bullet point about using the same template server side as well as client side: check out Dust. It was originally developed by akdubya, but has since been adopted and enhanced by LinkedIn. They use it to render templates client side in their mobile app. Personally, I've used it on the server side and it works great.
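A minimal sketch of what that looks like on the server (template source and data below are invented); the same compiled template can be shipped to the browser and rendered there with the same call:

var dust = require('dustjs-linkedin');

var source = '<h1>{title}</h1><ul>{#items}<li>{.}</li>{/items}</ul>';
dust.loadSource(dust.compile(source, 'page'));   // register the compiled template under the name "page"

dust.render('page', { title: 'Hello', items: ['a', 'b'] }, function (err, html) {
  if (err) throw err;
  console.log(html);   // <h1>Hello</h1><ul><li>a</li><li>b</li></ul>
});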
I am planning to start a full-AJAX site project, and I was wondering about SEO.
The site will have URLs like www.mysite.gr/#/category1, etc.
Can Google crawl the site?
Is there something I have to be aware of regarding full AJAX and SEO?
Any reading suggestions are welcome.
Thanks
https://stackoverflow.com/questions/768233/do-hashes-in-urls-affect-seo
You might want to read about so-called progressive enhancement.
Google supports indexing of AJAX sites, but unfortunately it involves extra work for the developer. See http://code.google.com/web/ajaxcrawling/docs/getting-started.html
I don't think Google is capable of doing so (yet)
http://googlewebmastercentral.blogspot.com/2009/10/proposal-for-making-ajax-crawlable.html
However, you can of course make your site usable with or without JavaScript. That way, browsers will get the full eye candy, and Google (and text browsers) can still navigate your site.
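The enhancement itself can be as small as intercepting normal links; without JavaScript the link is just a regular page load (the selectors below are made up):

// <a class="nav" href="/category1">Category 1</a>  -- works as a plain link on its own
$(document).on('click', 'a.nav', function (e) {
  e.preventDefault();
  // Fetch the same URL and swap in just its content area, instead of a full page load.
  $('#content').load(this.href + ' #content > *');
});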
In addition to SEO, you also need to think about usability standards here. A site that is that reliant on AJAX isn't going to work for things like screen readers, just as it won't for spiders. You need a system for graceful degradation. A website that can't function without JavaScript isn't really a functioning website.
The search engines will spider the initial page load - what happens to the page (with AJAX) after that is irrelevant to listings.
Google itself doesn't crawl AJAX content, but it advises a mechanism for it. For this you first need to change # to #! in your URLs; Google then requests those pages with ?_escaped_fragment_= in place of the #!, and your server answers with rendered HTML.
The whole process of making AJAX content SEO-friendly is explained here, along with simple ASP.NET code to start working from.
Imagine having to hit the “refresh” button in your browser to update your Twitter feed rather than just hitting the button on the page itself and having it instantly update? These are the types of problems that AJAX solves, although it does come with its pitfalls. Google might claim it’s able to crawl and parse AJAX websites, yet it’s risky to just take its word for it and leave your website’s organic traffic up to chance. Even though Google can usually index dynamic AJAX content, it’s not always that simple. This guide covers some of the things that can go wrong and how you can make sure your AJAX website is crawlable: https://prerender.io/ajax-seo/