I am working with a WordPress theme that is driven by AJAX. It's a nice way to load content, but not so great for SEO purposes.
The URLs all end with the 'same' string, for example #menu-item-44 (the only difference is the number at the end).
Because it is AJAX driven, I cannot make use of WordPress' permalink structure, so my question really is: can I fix this with a rewrite in my .htaccess file?
For example: www.somesite.com/#menu-item-44 becomes www.somesite.com/contact
Your help will be much appreciated!
Thanks
You can't rewrite the # part of a URL because it never gets sent to the server.
Look into JavaScript's history.pushState() and Google's AJAX crawling solution using hash bangs (#!).
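For example, a rough sketch of the pushState approach (the selector, container and URLs here are just placeholders for whatever the theme actually uses):

$('a.menu-item').click(function (e) {
    e.preventDefault();
    var href = $(this).attr('href');           // e.g. /contact
    $('#content').load(href + ' #content');    // fetch the real page and pull in its content
    history.pushState(null, '', href);         // show a crawlable URL in the address bar
});

// handle back/forward so the content matches the URL again
window.onpopstate = function () {
    $('#content').load(location.pathname + ' #content');
};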
I have human-friendly URLs without index.php; I had to modify the .htaccess file for that. I always use CodeIgniter like this, so my URLs always look like this:
www.example.com/controller/function/parameter
So if I have an extra URL parameter, then the URL looks like this:
www.example.com/controller/function/parameter?archive=2013
Now, what I want to do: if there is an 'archive' parameter in the URL, then also add it to the URL when the anchor() function creates a link.
We have some different stuff every year (like stylesheets), so I need to make this navigation automatic. Am I thinking in the right direction?
And the answer is "Yes".
I slightly modified the solution from here:
How to extend anchor() function to anchor_admin() in CodeIgniter?
Hi, I started a fact-check wiki where each fact-check page ends in a question mark, for example:
http://wecheck.org/wiki/Did_Mitt_Romney_ever_work_as_a_garbage_collector%3F
But when I share this link on many sites, including Facebook, by pasting it into a comment box, the %3F gets stripped (the site thinks it's the start of a query string, I guess), making the link unreachable. I have to use bit.ly to link to the page, which is inconvenient and a problem for novice users.
I think I may be able to use mod_rewrite to take the %3F off. My current rewrite rules are:
RewriteEngine On
RewriteRule ^/?wiki(/.*)?$ %{DOCUMENT_ROOT}/w/index.php [L]
RewriteRule ^/?$ %{DOCUMENT_ROOT}/w/index.php [L]
How would I modify them to strip out the %3F ?
It doesn't look like you want to strip out the %3F. MediaWiki has its own routing, so if you mess with the title names you're more likely to break something than to fix anything. You need to modify your MediaWiki to either disallow pages with a ? at the end, or add a module or wiki bot that goes through all the pages and, for every page that ends with ?, creates a #REDIRECT [[]] page without the ? pointing to the page with the ?.
The answer is to create pages that do not have question marks at the end and then set
$wgRestrictDisplayTitle = false; in LocalSettings.php
and use the following magic words in the page markup:
{{DISPLAYTITLE:{{PAGENAME}}?}}
You can see an example here: http://wecheck.org/wiki/Question_Mark_Problem
Old way
When I used to load pages asynchronously in projects that required the content to be indexed by search engines, I used a really simple technique:
<a id="example" href="ajax/page.html">Page</a>
<script type="text/javascript">
$('#example').click(function(e){
    // JS users get the AJAX version; crawlers and non-JS users just follow the plain href
    e.preventDefault();
    $.ajax({
        url: 'ajax/page.html',
        success: function(data){
            $('#content').html(data);
        }
    });
});
</script>
Edit: I used to implement the hashchange event to support bookmarking for JavaScript users.
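A minimal sketch of that (the paths are illustrative and it assumes native hashchange support):

$(window).on('hashchange', function () {
    var page = location.hash.replace('#', '') || 'home';
    $('#content').load('ajax/' + page + '.html');
});

// run once on load so a bookmarked URL like example.com/#about also works
$(window).trigger('hashchange');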
New way
Recently Google came up with the idea of AJAX crawling; read about it here:
http://code.google.com/web/ajaxcrawling/
http://www.asual.com/jquery/address/samples/crawling/
Basically they suggest changing "website.com/#page" to "website.com/#!page" and adding a page that serves the fragment's content, like "website.com/?_escaped_fragment_=page".
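As far as I understand it, on the client that would mean something roughly like this (paths are illustrative):

$(function () {
    var hash = location.hash;                        // e.g. "#!page"
    if (hash.indexOf('#!') === 0) {
        var page = hash.substring(2);                // "page"
        $('#content').load('ajax/' + page + '.html');
    }
});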
What's the benefit of using the new way?
To me it seems that the new way adds a lot more work and complexity to something that I previously did in a simple way: I designed the website to work without AJAX, and then I added AJAX and the hashchange event (to support the back button and bookmarking) as a final stage.
From an SEO perspective, what are the benefits of using the new way?
The idea is to make the AJAX applications crawlable. According to the HTTP specifications, URLs refer to the same document regardless of the fragment identifier (the part after the hash mark). Therefore search engines ignore the fragment identifier: if you have a link to www.example.com/page#content, the crawler will simply request www.example.com/page.
With the new schemes, when you use the #! notation the crawler knows that the link refers to additional content. The crawler transforms the URL into another (ugly) URL and requests it from your web server. The web server is supposed to respond with static HTML representing the AJAX content.
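On the server side that could look roughly like this sketch (assuming a Node.js server and a hypothetical snapshots/ directory of pre-rendered HTML; adapt it to whatever stack you actually run):

var http = require('http');
var fs = require('fs');
var url = require('url');

http.createServer(function (req, res) {
    var fragment = url.parse(req.url, true).query._escaped_fragment_;
    if (fragment !== undefined) {
        // the crawler requested /?_escaped_fragment_=page instead of /#!page,
        // so return the static HTML the AJAX call would normally build client-side
        fs.readFile('snapshots/' + fragment + '.html', function (err, html) {
            if (err) { res.writeHead(404); res.end(); return; }
            res.writeHead(200, {'Content-Type': 'text/html'});
            res.end(html);
        });
    } else {
        // normal visitors get the JavaScript-driven page
        fs.readFile('index.html', function (err, html) {
            res.writeHead(200, {'Content-Type': 'text/html'});
            res.end(html);
        });
    }
}).listen(8080);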
EDIT Regarding the original question: if you already had regular links to static pages, then this scheme doesn't help you.
The advantage is not really applicable to you, because you are using progressive enhancement. The new Google feature is for applications written entirely in JavaScript, which therefore can't be read by the crawler. I don't think you need to do anything here.
The idea behind it is that JavaScript users can bookmark pages too, I think. If you take a look at your 'old' method, it's just replacing content on the page; there is no way to copy the URL to show the page in its current state to other people.
So, if you've implemented the new #! method, you have to make sure that these URLs point to the correct pages, through Javascript.
I think it's just easier for Google to be sure that you're not working with duplicate content. I'm including the hash like foo/#/bar.html in the URLs and passing it to the permalink structure, but I'm not quite sure whether Google likes it or not.
Interesting question though. +1
Is there any way to fetch the raw contents of a CSS file?
Let's imagine that I wanted to fetch any vendor-specific CSS properties from a CSS file. I would need to somehow grab the CSS contents and parse them accordingly. Or I could just use the DOM to access the rules of a CSS file.
The problem is that, when using the DOM, nearly all browsers (except <= IE8) strip out any custom properties that do not relate to their own engine (WebKit strips out -moz, -o and -ms). So it wouldn't be possible to fetch the full CSS contents.
If I were to use AJAX to fetch the contents of the CSS file, and that CSS file were hosted on another domain, the same-origin policy would kick in and the CSS contents could not be fetched.
If one were to use a cross-domain AJAX approach, the only option would be JSONP, which wouldn't work since we're not loading any JavaScript code (therefore there is no callback).
Is there any other way to fetch the contents?
If a CSS file is on the same domain as the page you're running the script on, you can just use AJAX to pull in the CSS file:
$.get("/path/to/the.css", function(data) {/* ... */});
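From there you could, for example, pull the vendor-specific declarations out of the returned text with a rough regex (just a sketch, not a real CSS parser):

$.get("/path/to/the.css", function (data) {
    // matches declarations like "-moz-border-radius: 5px" or "-webkit-transition: all 1s"
    var vendorProps = data.match(/-(?:webkit|moz|o|ms)-[a-z-]+\s*:[^;}]+/g) || [];
    console.log(vendorProps);
});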
If not, you could try using Yahoo! Pipes as a proxy and get the CSS with JSONP.
As for parsing, you can check out Sizzle to parse the selectors. You could also take the CSS grammar (published in the CSS standard) and feed it to a JS lex/yacc parser to parse out the document. I'll leave you to get creative with that.
Good luck!
No, you've pretty much covered it. Browsers other than IE strip out unknown rules from their object models both in the style/currentStyle objects and in the document.styleSheets interface. (It's usually IE6-7 whose CSS you want to patch up, of course.)
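For illustration, walking document.styleSheets only ever shows you what the engine kept, so the foreign-vendor rules are already gone by the time you look:

for (var i = 0; i < document.styleSheets.length; i++) {
    try {
        var rules = document.styleSheets[i].cssRules;
        for (var j = 0; j < rules.length; j++) {
            console.log(rules[j].cssText);   // e.g. -moz-* properties won't appear here in WebKit
        }
    } catch (e) {
        // cross-origin stylesheets are inaccessible, which is the same-origin problem again
    }
}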
If you wanted to suck a stylesheet in from an external domain you would need proxy-assisted AJAX. And parsing CSS from scratch would be a big, nasty job, especially if you needed to replicate browser quirks. I would strenuously avoid any such thing!
JSONP is still a valid solution, though it would hurt the eyes somewhat. Basically, in addition to the callback padding, you would have to add one JSON property "padding" and pass the CSS as a value. For example, a call to a script, http://myserver.com/file2jsonp/?jsonp=myCallback&textwrapper=cssContents could return this:
myCallback({"cssContents":"body{text-decoration:blink;}\nb{text-size:10em;}"});
You'd have to text-encode all line breaks and wrap the contents of the CSS file in quotes (after encoding any existing quotes). I had to resort to doing this with a Twitter XML feed. It felt like such a horrible idea when I built it, but it did its job.
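On the consuming side, jQuery can handle the callback plumbing for you; roughly, using the hypothetical endpoint above:

$.ajax({
    url: 'http://myserver.com/file2jsonp/',
    dataType: 'jsonp',
    jsonp: 'jsonp',                        // the endpoint reads the callback name from "jsonp="
    data: { textwrapper: 'cssContents' },
    success: function (data) {
        var css = data.cssContents;        // the raw CSS text, ready for parsing
    }
});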
I like that, these days, we have an option for how we get our web content from the server: we can make an old-style HTTP request (with its own URL in the browser) or we can make an AJAX call and replace parts of the DOM on the fly.
My question is this: how do you decide which method to use when there's an option to use either?
In the "old days" we'd have to redraw the entire page (including the parts that didn't change) if we wanted to show updated content. Now that AJAX has matured we don't need to do that any more; we could, conceivably, render a "page" once and just update the changing parts as needed. But what would be the consequences of doing so? Is there a good rule of thumb for doing a full-page reload vs a partial-page reload via AJAX?
If you want people to be able to bookmark individual pages, use HTTP requests.
If you are changing context, use HTTP requests.
If you are dividing functionality between different pages for better maintainability, use HTTP requests.
If you want to maximize your page views, use HTTP requests.
Lots of good reasons to still use HTTP requests - Stack Overflow is a wonderful example of those divisions between AJAX and HTTP requests. Figure out why each function is HTTP or AJAX and I'm sure you will derive lots more reasons for when to use each.
My simple rule:
Do everything with AJAX, especially if it's an application rather than just pages of content. The exception is content people are likely to want to link to directly, like in a blog; then it is easier to do regular full pages.
Of course there are many blended combinations; it doesn't have to be one or the other completely.
A fair amount of development overhead goes into partial-page reloads with AJAX. You need to create additional JavaScript handlers for all returned data. If you return full HTML blocks, you still need to specify where the content should go and whether or not it replaces other content. You may have to re-render header tags to reflect content changes, and you have to implement a history solution to make sure search engines can index each page (using the SWFAddress jQuery plugin, for example). If you return JSON-encoded data, you have an additional processing step.
The bandwidth saved by skipping a full page refresh is offset by an increase in JS code and event bindings, which can affect page rendering speed as well as visual effects.
It all really depends on your target audience and the overall feel you are trying to go for on your page. AJAX and preloaders are flashy, and people love flashy things. If you believe the end-user experience will improve by adding partial page loads, by all means implement them.