Typically, if it were a regular page-by-page website, I would install the analytics javascript just before the closing body tag.
But with a site where content is shown in overlays (i.e. a one-page portfolio site), how can analytics be installed to track views?
Thanks for any insight!
See Tracking Google Analytics Page Views with Angular.js. I'm not sure whether you are using something like Angular or just plain javascript, but you could use the technique described there: set a hash URL whenever the user clicks on a different part of the page. By giving each interaction its own URL, you can track how users move through your single-page site.
For more information see Pushing Functions onto the Queue.
In the javascript that opens the overlay, you can add:
_gaq.push(['_trackPageview', '/url/of/page']);
or, with the newer analytics.js:
ga('send','pageview','/url/of/page');
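For example, a click handler that opens an overlay could send a virtual pageview at the same time. This is just a sketch: openOverlay, showOverlay, and the '/overlay/...' paths are hypothetical names, and it assumes analytics.js (the ga function) is already loaded on the page.

// Sketch: track each overlay as a virtual pageview (assumes analytics.js is loaded).
function openOverlay(overlayId) {
    showOverlay(overlayId); // your existing code that displays the overlay

    var virtualPath = '/overlay/' + overlayId;
    window.location.hash = virtualPath;   // optional: makes the state linkable
    ga('send', 'pageview', virtualPath);  // records the overlay as a pageview
}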
I have a project in MEAN.JS.
It has html5mode disabled, so my URLs look like this:
http://localhost:3000/#!/products
I am trying to implement AJAX snapshots in order to allow Google's crawlers to see content generated by javascript on the client side.
I installed a module called MEAN-SEO:
http://blog.meanjs.org/post/78474995741/mean-seo
Now when I access the following URL:
http://localhost:3000/?_escaped_fragment_=
I am redirected to:
http://localhost:3000/?_escaped_fragment_=/#!/
And when I click on "products", or when I access it directly, I am redirected to:
http://localhost:3000/?_escaped_fragment_=/#!/products
After reading the Google specification detailed at https://developers.google.com/webmasters/ajax-crawling/docs/getting-started , what I need to get is something without hashbangs, like the following:
http://localhost:3000/?_escaped_fragment_=/products
What am I doing wrong?
Kind Regards.
Any specific reasons why you want html5mode off?
Here is something a lot of people have missed: search engines (both Google and Bing) can now handle AJAX-based content.
Their crawlers now understand pushState, so if you just turn html5mode on you don't need any special handling to get your SEO working. You can load your content via AJAX, set title tags and meta tags with javascript, and so on, and the crawlers will understand your content the same as if you had rendered it server-side. There is no need to do HTML snapshotting or escaped_fragment handling for SEO anymore.
This has been announced on their developer blogs but unfortunately most of the documentation hasn't been updated with this information, so it's gone under the radar for a lot of people.
One word of warning, though: Facebook's crawler does not handle pushState, so if you want to support it you still need to handle that separately.
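For reference, turning html5mode on in Angular 1.x is a one-line config change. A minimal sketch, assuming the module name 'myApp', a <base href="/"> tag in the page head, and a server that rewrites all routes to index.html:

// Sketch: enable html5mode so Angular uses pushState URLs instead of #! hashbangs.
angular.module('myApp').config(['$locationProvider', function ($locationProvider) {
    $locationProvider.html5Mode(true);
}]);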
Let's say I am creating a webapp for a library. My base URL is http://mylibrary.com. I want to use "pretty" URLs as follows:
http://mylibrary.com/books (list all books)
http://mylibrary.com/books/book1 (details of a particular book)
At present my approach is to create a single-page app and use the history API to manage the URLs, i.e. I load all CSS and JS files when the user visits the home page. From then on I just get data from the server with AJAX, in JSON format, and create the required HTML using javascript.
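For example, clicking a book currently does something like this sketch (the /api/books endpoint, the #content container, and the markup are made up for illustration):

// Sketch of the current approach: fetch JSON, build HTML, update the URL.
function showBook(slug) {
    fetch('/api/books/' + slug)                    // hypothetical JSON endpoint
        .then(function (res) { return res.json(); })
        .then(function (book) {
            document.getElementById('content').innerHTML =
                '<h1>' + book.title + '</h1><p>' + book.description + '</p>';
            history.pushState({ slug: slug }, '', '/books/' + slug);
        });
}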
But I have learnt that this is not so good from an SEO point of view. If a crawler were to visit http://mylibrary.com/books, it would not see the book list at all because the AJAX calls would not take place.
My question is: what other approach could I use to design this kind of app? Specifically:
Should the server create the entire web page and send it to the browser? I mean, will the response from the server include everything from <html> to </html>, or only the required parts?
Can programming languages like PHP send the generated HTML to clients efficiently? I would rather have the webserver do it.
It appears to me that in this scenario AJAX would have very little role to play, other than maybe changing minor parts of the page. Is that a correct understanding? And here I was thinking AJAX is the modern way of doing things.
A library would have many books, so the list would be long.
Using AJAX allows you to fetch only the part of it the user is trying to read, without having to retrieve the entire list or navigate by reloading.
So for low-bandwidth and impatient users, AJAX is a godsend.
For crawlers that need the entire page to collect data from, not so much.
So really you want to provide different content depending on the visitor.
How do you identify a web crawler?
IMHO: serve the page from PHP. If the user agent is a robot, provide the full list; otherwise provide the fancy AJAX-based site that shows only what you want, when you want.
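Since other questions here are MEAN-based, here is the same idea sketched in javascript with Express instead of PHP. The bot pattern and view names are illustrative, and it assumes a view engine is configured; also note that serving crawler-specific content can look like cloaking, so keep the two versions equivalent.

// Sketch: serve a plain server-rendered list to bots, the AJAX app to everyone else.
var express = require('express');
var app = express();

var BOT_PATTERN = /googlebot|bingbot|yandex|baiduspider/i; // illustrative, not exhaustive

app.get('/books', function (req, res) {
    if (BOT_PATTERN.test(req.headers['user-agent'] || '')) {
        res.render('books-static'); // full HTML list for crawlers
    } else {
        res.render('books-app');    // JS-driven page that fetches books via AJAX
    }
});

app.listen(3000);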
I've got a unique situation that I don't believe any of the other topics here cover.
I have an ecommerce module that is dynamically loaded / embedded into third-party sites: no iframe, just JSON sent straight to the web client and rendered into the page content. I have no access to these third-party sites at all, other than my javascript file being loaded from their page and dynamically generating the content.
I'm aware of the #! method, but that's no good here. My JS does generate "urls" within the embedded platform, but they're fake, for the address bar only, and I don't believe Google's crawlers can reach that far.
So my question is: is there a meta tag we can set to point outside the host URL, i.e. back to my server with static crawlable content? Something like pointing the canonical there... but again, I don't think that would work.
If you implement #!, then you have to make sure the URL you're embedded in supports the escaped_fragment parameter versions, which you probably can't; that is server-side stuff.
You probably can't influence the canonical tag of the page either; it again has to be done server-side. Any meta tag you set via JavaScript will not be seen by a bot.
Disqus solved the problem by providing an API so the embedding websites could get their comments server-side and render them in plain HTML. WordPress has a plugin to do this. Disqus is also one of the few systems whose AJAX pages Google has worked out how to crawl.
Some plugins ask people to also include a plain link with the JavaScript. Be careful with this, as you may break Google's guidelines if you do it wrong. But you may be able to integrate the plain link with your plugin so that it directs bots and users to a crawlable version of the content.
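As a sketch of that last pattern, the snippet an embedding site pastes in could pair the widget script with a real link (all names and URLs here are hypothetical):

<!-- Plain, crawlable link that the widget script replaces when JS runs. -->
<a href="https://my-server.example/products/123" class="my-widget">View this product</a>
<script src="https://my-server.example/widget.js" async></script>

Bots and no-JS users follow the real link to crawlable content on your server; for everyone else, widget.js finds the anchor and swaps in the dynamic storefront.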
Look into Google's crawlable ajax standard (and why it's a bad idea) and canonical URLs.
Now you can actually do this. A complete guide and examples can be found here: https://github.com/kubrickology/Logical-escaped_fragment
I have built a site based on AJAX navigation.
I built it in such a way that whenever someone without javascript visits my site, the nav links, which usually load content via AJAX, act like normal links and the user can browse through the pages as usual.
Since Googlebot doesn't run javascript, it should theoretically be able to go through all the links and corresponding pages as usual, right? They are valid links with the href attribute pointing to the corresponding page.
Now I was wondering whether that's sufficient, or whether I need to implement this method from Google too, to make sure Google sees all my content?
Thanks for your insights, and excuse my poor English!
If you can navigate your site by viewing the source (Ctrl-U in Chrome), Google can also crawl your site. Yes, it's that simple.
We have web applications elgifto.com and roadbrake.com in which we used AJAX in many places, especially to update major portions of a page. All the important functionality of elgifto.com was implemented using AJAX. Now we have run into a few issues due to the AJAX implementation:
1. All the content implemented using AJAX is not available to the SEO bots, and it is hurting the page rank of our site.
2. Users will not be able to bookmark some of the pages, as they are always available through AJAX.
3. When we want to direct the user from one page through an anchor link to another page having AJAX, we find it difficult.
So now we are thinking of removing AJAX from these pages and using it only for small pieces of functionality, something similar to marking a question as a favorite on SO. Before going ahead and removing it, we want to know experts' opinions on this. Thanks.
The problem is not AJAX per se, but your implementation of it. For instance, you can fix the 'bookmark' problem the way Google Maps does: provide a generated link for each state of your webapp.
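A minimal sketch of those state links using the history API, assuming a renderState function and a '/app/...' URL scheme that stand in for your own code:

// Sketch: give every application state its own real, bookmarkable URL.
function navigateTo(state) {
    renderState(state);                                  // assumed existing render code
    history.pushState(state, '', '/app/' + state.view);  // shareable state link
}

// Restore the state on back/forward navigation or when a bookmark is opened.
window.addEventListener('popstate', function (e) {
    if (e.state) renderState(e.state);
});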
SEO can be fixed by supplying these state links to the crawlers, either organically through links in your site, or by supplying a list (a sitemap).
If you implement 2, you can fix 1 and 3 with those links.
In the end you must figure out whether the effort is worth it, and whether you are overusing AJAX, of course, but the statements you've made are not set in stone at all.
I'm constantly developing AJAX-based websites, with no problems for SEO at all. You just have to use it in the best possible way.
For example, I have a website with normal links pointing to normal webpages (PHP pages); this handles normal navigation if a user doesn't have JS enabled. But if a user has JS enabled, a script changes the links' behavior to fetch only the content of the page needed.
This way you still have physically separate webpages with all their content, which will be indexed as normal.
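A sketch of that enhancement script: the data-ajax attribute, the #content container, and a server that returns just the page fragment when it sees the X-Requested-With header are all assumptions about how the markup and backend are set up.

// Sketch: real links work without JS; with JS we fetch just the page content.
document.addEventListener('click', function (e) {
    var link = e.target.closest('a[data-ajax]');
    if (!link) return;
    e.preventDefault();

    fetch(link.href, { headers: { 'X-Requested-With': 'XMLHttpRequest' } })
        .then(function (res) { return res.text(); })
        .then(function (html) {
            document.getElementById('content').innerHTML = html;
            history.pushState(null, '', link.href); // keep the URL in sync
        });
});

Without javascript (or for a bot), the links are followed normally and the server returns the full page, so both versions show the same content.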