I'm taking an online course from a site that does not provide for asking questions about the content, so I am asking you.
This is one of the MUSTS for page titles:
"On a so-called single-page application — in which AJAX is used to bring in new content without refreshing or loading the entire web page — any time that the URL changes, the page title should be updated accordingly"
I always thought that if you used AJAX to change part of a page, the page itself would remain static. So why would the URL change with an AJAX call? Are there any examples showing what this looks like?
AJAX requests don’t change the browser URL or history. To mimic the behaviour of traditional websites, single page apps will modify the URL and history and use the URL to change the view.
For more on changing the browser history, see Adding and modifying History entries.
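For example, here is a minimal sketch of that pattern (loadView, the paths, and the titles below are made up for illustration; see the MDN article for the full API):

// Hypothetical helper that fetches the content for `path` via AJAX
// and inserts it into the current page
function loadView(path) {
  // ...AJAX request and DOM update go here...
}

// When the app navigates, push a new history entry, swap the view,
// and keep the page title in sync with the new URL
function navigate(path, title) {
  history.pushState({ path: path }, "", path);
  loadView(path);
  document.title = title;
}

// Back/Forward change the URL without any request being made,
// so restore the matching view from the stored state
window.addEventListener("popstate", function (event) {
  if (event.state) {
    loadView(event.state.path);
  }
});

Example call: navigate("/articles/42", "Article 42 - Example Site");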
Related
I was reading up on AJAX and how it empowers us to exchange data with a server behind the scenes and consequently avoid full page reloads. My confusion lies here: I don't really understand what a full-page reload means. That's probably because I've been working with AJAX/React from the start, so I've never actually seen one of my webpages fully reload when I access stuff from a database or an API.
It'd be great if someone could explain what they are and why we needed them before AJAX.
A full page load is where the entire page is downloaded from the server. A page typically consists of several sections: header, footer, navigation, and content. In a classic web application without AJAX, a user clicks on a link to another page, and has to download the full page, even though only the main content is changing. The header, footer, and navigation all get downloaded again even though they don't change.
With AJAX there is the opportunity to only change the parts of the page that will change. When a user clicks on the link, JavaScript loads just the content for that link and inserts it into the current page. The header, footer, and navigation don't need to reload.
This introduces other problems that need attention.
When AJAX inserts new content into the page, the URL doesn't change. That makes it difficult for users to bookmark or link to specific content. Well written AJAX applications use history.pushState() to update the URL when loading content via AJAX.
There are then two paths to get to every piece of content. Users can either load the URL containing that content directly, or load the content into some other page by following a link. Web developers need to test and ensure both work.
Search engines have trouble crawling AJAX-powered sites. For best compatibility, you need to employ server-side rendering (SSR) or pre-rendering to serve initial content on page load that doesn't require JavaScript.
Even for Googlebot (which executes JavaScript), care must be taken to make an AJAX-powered site crawlable. Googlebot doesn't simulate user actions like clicking, scrolling, hovering, or moving the mouse.
Content needs to appear on page load without any user interaction.
You must use <a href=...> links for navigation so that Googlebot can find other pages by scanning the document object model (DOM). For users, JavaScript can intercept clicks on those links and prevent a full page load by using return false from the onclick handler or event.preventDefault() in the click handler.
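A rough sketch of that pattern (the data-ajax attribute, the #content container, and the /fragments endpoint are assumptions for illustration, not part of the answer above):

// Intercept clicks on normal <a href> links so JS-enabled users get an
// in-page update while Googlebot still finds crawlable URLs in the DOM
document.addEventListener("click", function (event) {
  var link = event.target.closest("a[data-ajax]");
  if (!link) {
    return;
  }
  event.preventDefault(); // stop the full page load for users with JavaScript

  var url = link.getAttribute("href");
  // Hypothetical endpoint that returns only the main content's HTML
  fetch("/fragments" + url)
    .then(function (response) { return response.text(); })
    .then(function (html) {
      document.getElementById("content").innerHTML = html;
      history.pushState(null, "", url); // keep the address bar in sync
    });
});

Without JavaScript (or for Googlebot), the same <a href="..."> links still lead to full pages, so nothing breaks.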
I have tried to set my site up (http://www.diablo3values.com) according to the guidelines set out at https://developers.google.com/webmasters/ajax-crawling/. However, it appears that Google has updated its indexes (because I can see the revisions to the meta description tags), but the AJAX content does not show up in the index.
I am trying to use the “Handle pages without hash fragments” option.
If you view either of the following:
http://www.diablo3values.com/?_escaped_fragment_=
http://www.diablo3values.com/about?_escaped_fragment_=
you will correctly see the HTML snapshot with my content (those are the two pages I am most concerned about).
Any ideas? Am I doing something wrong? How do you get Google to correctly recognize the tag?
I'm typing this as an answer, since it got a little too long to be a comment.
First of all, your links seem to point to localhost:8080/about, and not /about, which is probably why Google doesn't index it in the first place.
Second, here's my experience with pushState URLs and Google's AJAX crawling:
AJAX crawling with pushState URLs is handled a little differently by Google than with hashbang URLs. Since Google won't know that your URL is a pushState URL (it looks just like a regular URL), you need to add <meta name="fragment" content="!"> to all your pages, not only the "root" page. And Google doesn't seem to know that the pages are part of the same application, so it treats every page as a separate AJAX application. The Googlebot will therefore never build a navigation structure inside _escaped_fragment_, like _escaped_fragment_=/about, as it would with a hashbang URL (#!/about). Instead, it will request /about?_escaped_fragment_= (which you apparently already have set up). This goes for all your "deep links": instead of /?_escaped_fragment_=/thelink, Google will always request /thelink?_escaped_fragment_=.
But as said initially, the reason it doesn't work for you is probably that you have localhost:8080 URLs in your _escaped_fragment_ generated HTML.
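For illustration, a rough sketch (assuming a Node/Express server, which is not part of the question) of how those _escaped_fragment_ requests can be answered with a pre-rendered snapshot while normal visitors get the JavaScript application:

var express = require("express");
var app = express();

// Hypothetical: return a static HTML snapshot for the given path
function renderSnapshot(path) {
  return "<html><body><!-- pre-rendered content for " + path + " --></body></html>";
}

// Googlebot rewrites a pushState URL like /about into /about?_escaped_fragment_=
app.get("*", function (req, res, next) {
  if (req.query._escaped_fragment_ !== undefined) {
    res.send(renderSnapshot(req.path)); // serve the snapshot to the crawler
  } else {
    next(); // normal visitors fall through to the regular JavaScript app
  }
});

app.listen(8080);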
Googlebot only knows to crawl the escaped fragment if your URLs conform to the hash-bang standard. As users navigate your site, your URLs need to be:
http://www.diablo3values.com/
http://www.diablo3values.com/#!contact
http://www.diablo3values.com/#!about
Googlebot actually needs to see these URLs in the source code so that it can follow them. Then it knows to download the following URLs:
http://www.diablo3values.com/?_escaped_fragment_=contact
http://www.diablo3values.com/?_escaped_fragment_=about
On your site you appear to be loading a new page on each click, and then loading the content of each page via AJAX too. This is not how I would expect an AJAX site to work. Usually the purpose of using AJAX is so that the user never has to load a whole new page. When the user clicks, the new content section is loaded and inserted into the page. You serve the navigation once and then you only serve escaped fragments of the content.
How can I manage page state history in Share (Surf?) so that I remember, for example, which YUI tab was active and which page the pager was on?
I noticed that Alfresco Share does something like that after a form submit: you get redirected to the exact same page URL where you were before. If any "AJAX state" parameters (I don't know what they are called) are in the URL, like #something=asdf, you get the same URL back.
But when using manual navigation, like moving through site pages, those parameters aren't saved.
Is this even a good idea to do? To save page state in session for example?
Some pages have support for URL parameters that are passed in. In those cases the browser history is used; e.g. when editing metadata in the full-page metadata view, the user is sent back to the page he came from. This is done in JavaScript by calling window.history.go(-1) after the form submit, but it only works when parameters are set/retrieved via the URL. The document library implements page-specific JavaScript for setting the URL and parsing parameters from it.
In some places Alfresco uses the preference service to permanently store user settings across different pages. For example, this is used in the document library for the "show folders" and "simple/thumbnail view" buttons. Here is some sample code from the document library JavaScript setting a preference option:
// Preference keys used by the document library
var PREFERENCES_DOCLIST = "org.alfresco.share.documentList",
    PREF_SHOW_FOLDERS = PREFERENCES_DOCLIST + ".showFolders";

// Store the user's "show folders" choice via the preference service
var preferences = new Alfresco.service.Preferences();
preferences.set(PREF_SHOW_FOLDERS, true);
The evaluation of the properties is usually done in the Share component webscripts, you can have a look in share\WEB-INF\classes\alfresco\site-webscripts\org\alfresco\components\documentlibrary\include\documentlist.lib.js for an example.
In any case you have to dig into Alfresco's JavaScript code in the browser and on the Share tier to see how to implement it.
You could check the parameters sent to the server side after submitting the form in Firebug (a Firefox plugin) and then use the same parameters.
Also, perhaps you should use the YUI Browser History Manager.
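A rough sketch of how that might look with YUI 2's Browser History Manager (the "tabs" module name, the tab ids, and selectTab are assumptions; the page also needs the hidden input field and IFrame described in the YUI docs):

// Register a "tabs" module; the callback fires whenever its state changes,
// including when the user navigates with the Back/Forward buttons
var initialTab = YAHOO.util.History.getBookmarkedState("tabs") || "tab0";
YAHOO.util.History.register("tabs", initialTab, function (state) {
  selectTab(state); // hypothetical function that activates the matching YUI tab
});

YAHOO.util.History.onReady(function () {
  selectTab(YAHOO.util.History.getCurrentState("tabs"));
});

// Expects <input id="yui-history-field" type="hidden"> and the history IFrame
YAHOO.util.History.initialize("yui-history-field", "yui-history-iframe");

// Call this when the user switches tabs, so the state lands in the URL/history
function onTabChange(tabId) {
  YAHOO.util.History.navigate("tabs", tabId);
}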
I am looking at the Gawker blogs (http://io9.com, http://lifehacker.com/) and I'm curious about how they are made.
When I click on a link, only the article part of the page reloads, displaying a loading icon while it does.
But what I can't figure out is that the links point to new URLs like io9.com/something/something, and it's not like the AJAX pages I've seen, where something like site.com/#something is appended to the URL from JavaScript to mark the page after an AJAX request.
Can I change the full URL from JavaScript, or what is happening?
When that happens, the website is using the HTML5 History API. This API can change the URL (via JavaScript) without reloading the page.
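For example, running something like this from JavaScript swaps the address bar to a new path without any reload (the path is just an illustration):

// Adds a history entry and rewrites the visible URL; no request is made
history.pushState({}, "", "/something/something");

The page itself stays as it is; the site then loads the new article content with AJAX and inserts it.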
See caniuse.com for browser support.
If you would like to implement it in your website, backbonejs.org would be very useful.
My website uses ajax.
I've got a user list page which lists users in an AJAX table (with paging, more information, and so on).
The url of this page is :
/user-list
The user list is created by AJAX. When the user clicks on a user, he is redirected to a page whose URL is: /member/memberName
So we can see here that AJAX is used to generate content and not to manage navigation (with the # character).
I want to detect bots so that all pages get indexed.
So for normal visitors I want to display an AJAX table with paging and cool AJAX effects (more info...), and when I detect a bot I want to display all users (without paging) with a plain link to each member page, like this:
<a href="/member/JohnBob">JohnBob</a>
Do you think I could be blacklisted with this technique? If so, could you please suggest an alternative solution that keeps these clean URLs and doesn't require redeveloping the user list (without AJAX)?
Google supports a specification to make AJAX crawlable:
http://code.google.com/web/ajaxcrawling/docs/specification.html
I did an experiment and it works:
http://seo-website-designer.com/SEO-Ajax-Google-Solution
As this is a Google specification, you won't get penalised (unless you abuse it).
That said, only Google supports it at the moment (AFAIK).
Also, I believe following the concept of Progressive Enhancement is a better approach. That is, create a working HTML website, then let JavaScript enhance it.
Maybe use the URLs with an onclick handler to trigger your AJAX scripting? Something like this, where loadMember would be your own AJAX loader:
<a href="/member/JohnBob" onclick="loadMember('JohnBob'); return false;">JohnBob</a>
I don't think Google would punish you for this. You primarily use JavaScript, but you do provide a fallback for their bot, so your site doesn't get any less accessible.
EDIT
OK, I misunderstood. Then my guess would be that you basically have two options:
1. Write a separate part of your site where bots end up, or,
2. Rewrite your current site to, for example, always serve a 'full' page, with an option to fetch only, say, the content div. Then you can load just the content with JavaScript, but bots will always get a complete page (see the sketch below).
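A rough sketch of option 2, assuming the full page keeps the user list in a div with id "content" (the id and the URL handling are just illustrations): the client fetches the full page in the background, swaps in only that div, and updates the clean URL, while bots simply follow the normal links and get the complete page.

// Load the full page in the background and swap in only its content div
function loadContentOnly(url) {
  fetch(url)
    .then(function (response) { return response.text(); })
    .then(function (html) {
      var doc = new DOMParser().parseFromString(html, "text/html");
      var incoming = doc.getElementById("content");
      if (incoming) {
        document.getElementById("content").innerHTML = incoming.innerHTML;
        history.pushState(null, "", url); // keep the clean URL in the address bar
      }
    });
}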