I am trying to test an Orchard website that has AJAX content with "Fetch as Google". Shouldn't Google replace http://cmbbeta.azurewebsites.net/#! with http://cmbbeta.azurewebsites.net/?_escaped_fragment_ (both links work)? When I hit my beta website with Fetch as Google, the preview shows me that the page is loading the AJAX content, not the static one.
Am i missing something?
The preview that appears when you hover over the link always seems to show the dynamic website. The important thing to look at is the fetch result, which you can access by clicking the "Success" link in the "Fetch Status" column.
This is probably not affecting your site, but the Fetch as Google feature doesn't work for AJAX URLs that are specified with the <meta> tag. See here.
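For reference, the server side of that (now deprecated) AJAX crawling scheme is often handled roughly like the minimal sketch below, assuming an Express app; renderSnapshot() is a placeholder, not a real library call. The <meta> variant mentioned above uses <meta name="fragment" content="!"> in the page head instead of a #! in the URL.

```javascript
// Minimal sketch: when the crawler requests ?_escaped_fragment_=..., serve a
// pre-rendered HTML snapshot instead of the normal AJAX-driven page.
var express = require('express');
var app = express();

function renderSnapshot(fragment) {
  // Placeholder: in practice, return pre-rendered HTML for this fragment.
  return '<html><body><h1>Snapshot for ' + fragment + '</h1></body></html>';
}

app.get('/', function (req, res) {
  var fragment = req.query._escaped_fragment_;
  if (fragment !== undefined) {
    res.send(renderSnapshot(fragment));      // static HTML for the crawler
  } else {
    res.sendFile(__dirname + '/index.html'); // normal AJAX shell for users
  }
});

app.listen(3000);
```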
I am trying to figure out why my website's posts and pages, such as my resume, get a "Complete" status with a green check mark (seemingly no errors or redirects) when I fetch and render as Google, but all of them "render" to look like my homepage. The PageSpeed Insights tool appears to use the same rendering engine, since it has the same issue.
Notes:
The HTML served from my website on initial page load is the correct HTML and content. No redirects occur. The initial page load does not fetch content via JS. I mention this because, although my website is not a single-page application (I'm using WordPress), I do use AJAX in combination with a POST variable flag to fetch new page content when the user navigates to the next page (after the initial page load); a rough sketch of this pattern follows these notes.
I have verified that all of my pages have been indexed using the "site:" trick in Google search. They are indexed properly, but they aren't "rendering" properly.
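A rough sketch of the pattern I mean, with simplified placeholders (the a.internal-link and #content selectors and the ajax_nav flag name are illustrative, not my exact code):

```javascript
// Rough sketch: the first page load is plain HTML; later navigation posts a
// flag so the server returns only the next page's content, which jQuery then
// swaps into the page. Selectors and the flag name are illustrative assumptions.
jQuery(document).on('click', 'a.internal-link', function (event) {
  event.preventDefault();
  jQuery.post(this.href, { ajax_nav: 1 }, function (html) {
    jQuery('#content').html(html);  // replace current content with the new page
  });
});
```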
Should I be worried? Should I just ignore that the pages aren't rendering properly? It doesn't make any sense. Is anyone else having this issue?
Your resume page is served with a Content-Type of image/gif, so Google thinks the page is an image.
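If it helps to confirm this, here is a minimal way to check the header a URL responds with (the URL below is a placeholder, and Node 18+ is assumed for the global fetch API):

```javascript
// Check the Content-Type header a page responds with.
// An HTML page should report something like "text/html; charset=UTF-8";
// "image/gif" would explain why Google renders the URL as an image.
const url = 'https://example.com/resume/';  // placeholder for the resume page

fetch(url, { method: 'HEAD' })
  .then((res) => console.log('Content-Type:', res.headers.get('content-type')))
  .catch(console.error);
```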
On my website, when users click on a subject (URL: http://mywebsite/subject/id(1234567)), the title and description are shown and the content of that subject is loaded by AJAX.
I added this URL (http://mywebsite/subject/id(1234567)) in Fetch as Google (in Google Webmaster Tools) and clicked Fetch and Render. The Rendering tab shows the AJAX content (the result for "This is how Googlebot saw the page:" shows the content that was loaded by AJAX), but the Fetch tab does not. The Fetch tab contains only the HTML source of my page, without the AJAX content.
Does this mean that Googlebot will index the AJAX content of my website?
The answer used to be no: you had to use the AJAX crawling scheme described at https://developers.google.com/webmasters/ajax-crawling/
But yesterday Google changed the rules of the game:
http://googlewebmastercentral.blogspot.com/2015/10/deprecating-our-ajax-crawling-scheme.html
Google states "we are generally able to render and understand your web pages like modern browsers."
I'm working on a website that uses HTML5's pushState and popstate in combination with AJAX calls to dynamically load WordPress posts and pages without causing a page refresh.
I've got that working fine, but I would love it if the black WordPress toolbar/admin bar that shows at the top of the site when you're logged in as an admin also reflected the change of content. Is there any way at all to make this happen, so that when I go from a post to a page, for example, the "Edit" link in the admin bar updates?
I don't think it's as easy as I hope it is, and if it can't be worked out I think I'll just disable the admin bar on the front end. But it could be that I'm missing something.
Thanks in advance!
I'm working on this myself. I'm going to build an AJAX action and a jQuery function to do this, and I will post here when it's done. For now I've instructed my users to just refresh to get the edit link: if you're using HTML5 history, you're already on the permalink you want anyway, so refresh and let the server regenerate the bar. Not perfect, but not terrible.
Another option is to put edit post links in your template.
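In the meantime, here is a minimal sketch of what such a jQuery helper might look like, assuming WordPress's default admin bar markup (#wp-admin-bar-edit) and that your AJAX navigation code knows the newly loaded post's ID; the custom event name is purely illustrative.

```javascript
// Minimal sketch: after an AJAX page load, repoint the admin bar's "Edit" link
// at the edit screen for the newly loaded post. Assumes the default
// #wp-admin-bar-edit markup; the "content:loaded" event name is illustrative.
function updateAdminBarEditLink(postId) {
  var editLink = jQuery('#wp-admin-bar-edit a');
  if (editLink.length && postId) {
    editLink.attr('href', '/wp-admin/post.php?post=' + postId + '&action=edit');
  }
}

// Trigger this from your AJAX navigation code once the new content is in the DOM.
jQuery(document).on('content:loaded', function (event, postId) {
  updateAdminBarEditLink(postId);
});
```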
I am looking at the Gawker blogs (http://io9.com, http://lifehacker.com/) and I'm curious about how they are made.
When I click on a link, only the article part of the page reloads, displaying a loading icon while it does.
But what I can't figure out is that the links point to new URLs like io9.com/something/something; it's not like the AJAX pages I've seen, where a site.com/#something fragment is appended to the URL from JavaScript to mark the page after an AJAX request.
Can I change the full URL from JavaScript, or what is happening?
When that happens, the website is using the HTML5 History API. This API can change the URL (via JavaScript) without reloading the page.
See caniuse.com for browser support.
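A minimal sketch of the pattern, assuming a hypothetical loadArticle() helper, an #article container, and a[data-ajax] link markup (all illustrative, not what Gawker actually uses):

```javascript
// Minimal sketch of the History API pattern: intercept link clicks, load the
// new article via AJAX, and update the address bar without a full reload.
function loadArticle(url) {
  // Illustrative: assumes the server returns the article markup for this URL.
  fetch(url)
    .then(function (res) { return res.text(); })
    .then(function (html) {
      document.querySelector('#article').innerHTML = html;
    });
}

document.addEventListener('click', function (event) {
  var link = event.target.closest('a[data-ajax]');
  if (!link) return;
  event.preventDefault();
  loadArticle(link.href);                               // swap in the new article
  history.pushState({ url: link.href }, '', link.href); // change the visible URL
});

// Handle Back/Forward by re-rendering the article for the restored URL.
window.addEventListener('popstate', function (event) {
  if (event.state && event.state.url) {
    loadArticle(event.state.url);
  }
});
```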
If you would like to implement it on your website, backbonejs.org would be very useful.
Can anyone tell me what URLs like blablalba.com/#!/dasdas are?
Twitter uses them.
I have a problem with requesting pages using AJAX: when I hit the Back button, it doesn't load the previous page that was loaded using AJAX.
But I saw on Twitter that when you are on the Timeline tab/page, click the #Mention tab, and then hit the Back button, it brings you back to the Timeline tab/page again, not to your previously loaded (non-AJAX) page. Is there a relation between this and URLs with /#!/ characters?
I am not familiar with Twitter, but your description reads a bit as if you were looking for Chris Coyier's screencast on using the hash fragment from JavaScript.
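For what it's worth, here is a minimal sketch of the hash-fragment approach that screencast covers; the loadTab() helper, the /ajax/ endpoint, and the #content selector are illustrative assumptions.

```javascript
// Minimal sketch of hash-based navigation, which is what #!/ URLs rely on:
// changing the part after # doesn't reload the page, and the hashchange event
// fires on Back/Forward, so the right content can be restored.
function loadTab(name) {
  // Illustrative: fetch the tab's content via AJAX and insert it into the page.
  fetch('/ajax/' + name)
    .then(function (res) { return res.text(); })
    .then(function (html) {
      document.querySelector('#content').innerHTML = html;
    });
}

window.addEventListener('hashchange', function () {
  // "#!/mentions" -> "mentions"; fall back to the default tab.
  var tab = window.location.hash.replace(/^#!?\/?/, '') || 'timeline';
  loadTab(tab);
});
```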