What would be a good general approach to caching a web page where most of the content, which lives in a database, almost never changes (e.g. descriptions) but a small part changes very frequently (e.g. stock levels)?
I want to keep the web page cached as long as possible. Would it be an option to get the dynamic content via an AJAX request? Do better approaches exist?
You could request the stock data from a separate URL and use JavaScript to insert it into the document. That way, the HTML/CSS/JS stays the same and can be cached, while the stock information is loaded with JavaScript rather than being inserted into the HTML by the server.
You could create a URL that returns JSON for this purpose (and similarly for other information that you wish to include using JavaScript).
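For example, a minimal client-side sketch, assuming jQuery is available and that a hypothetical /api/stock JSON endpoint and a #stock-count element exist:
    // Fetch the frequently changing stock data from a separate (uncached or
    // briefly cached) JSON endpoint and patch it into the otherwise static page.
    $.getJSON('/api/stock?product=123', function (stock) {   // hypothetical endpoint
      $('#stock-count').text(stock.count);                   // assumed element and field name
    });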
Let's say I am creating a webapp for a library. My base url is http://mylibrary.com. I want to use "pretty" URLs as follows:
http://mylibrary.com/books (list all books)
http://mylibrary.com/books/book1 (details of a particular book)
At present my approach is to create a single-page app and use the History API to manage the URLs, i.e. I load all CSS and JS files when the user visits the home page. From then on I just get data from the server via AJAX, in JSON format, and create the required HTML using JavaScript.
But I have learnt that this is not so good from an SEO point of view. If a crawler were to visit http://mylibrary.com/books it would not see the book list at all, because the AJAX calls would not take place.
My question is: what is the alternative approach to designing this kind of app? Specifically:
Should the server create the entire web page and send it to the browser? I mean, will the response from the server include everything from <html> to </html>, or only the required parts?
Can server-side languages like PHP send the HTML to clients efficiently? I would rather have the web server do it.
It appears to me that in this scenario AJAX would have very little role to play other than maybe changing minor parts of the page. Is that a correct understanding? ...and here I was thinking AJAX is the modern way of doing things.
A library would have many books, so the list would be long.
Using AJAX allows you to fetch only the part of it the user is trying to read, without having to retrieve the entire list or navigate by reloading.
So for low-bandwidth and impatient users, AJAX is a godsend.
For crawlers that need the entire page to collect data from, not so much.
So really you want to provide different content depending on the visitor.
How to identify web-crawler?
IMHO: serve the page from PHP; if the user agent is a robot, provide the full list, otherwise provide the fancy AJAX-based site that shows only what you want, when you want.
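As a minimal sketch of that idea, shown with Node/Express rather than PHP (the bot pattern and renderFullBookList are crude placeholders, not a real bot-detection solution):
    var express = require('express');
    var app = express();

    // Very rough bot detection based on the User-Agent header; real-world lists are longer.
    function isCrawler(req) {
      var ua = (req.get('User-Agent') || '').toLowerCase();
      return /googlebot|bingbot|slurp|baiduspider/.test(ua);
    }

    app.get('/books', function (req, res) {
      if (isCrawler(req)) {
        res.send(renderFullBookList());                 // hypothetical server-side HTML renderer
      } else {
        res.sendFile(__dirname + '/public/index.html'); // the AJAX-driven page for normal visitors
      }
    });

    app.listen(3000);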
Imagine a website that is highly cached, where the output of almost every GET action is cached into an HTML file that is accessible directly from the HTTP server without having to perform a server-side CGI operation. Now imagine that, in addition, JavaScript is used to fetch and filter that HTML using AJAX. The AJAX response contains only the appropriate part of the page (so for standard HTML pages it will contain everything except the surrounding layout, for modals it will contain only the modal box HTML, etc.).
Now let's imagine that the HTML content may be cached neutrally (when nobody is logged in) or cached for someone who is logged in. There are certain areas of the page that are tied to session data (like the welcome message, the profile link, etc.) and that data is specific to the session. But since we're using JavaScript, we can buffer the AJAX response, change the session element values, and then stick it into the DOM, all while the user is unaware of any session hot-swapping. Of course, this only applies to GET requests and pages where the actual content is not 100% session dependent.
Now here is my question. If I were to implement this (and trust me, I will), how might I actually keep track of session activity while the user is browsing the page? With a traditional server-side operation, whenever the user accesses a page the server-side framework updates the session and keeps tabs on the session-related variables. With static HTTP requests, all server-side involvement is avoided. So I will need to figure out some way of keeping track of what's going on with the session; here are my approaches:
1) Perform two AJAX requests (or an additional one when needed):
When the user requests a page, the contents are downloaded as static HTML. At the same time, another AJAX request is issued to a session-specific URL/server to update and query the status of the session. This can be done side by side, or only after every few requests.
Pros = HTML files are left unchanged, HTML files can be given an ETag or a far-future Expires header, JavaScript can cache only the static HTML and use it for offline browsing, and a session server can be dedicated, optimized and configured for session activity.
Cons = Two AJAX requests are performed, there is excessive polling for potentially redundant data, and session handling may have to be separated from the content server.
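A minimal client-side sketch of approach 1, assuming jQuery, a hypothetical /session/status endpoint, a cached HTML fragment URL and some assumed markup:
    // Load the cacheable static HTML and the session state in parallel,
    // then hot-swap the session-specific elements before showing the page.
    $.when(
      $.get('/cache/products.html'),              // static, heavily cached HTML fragment (assumed URL)
      $.getJSON('/session/status')                // small, uncacheable session endpoint (assumed URL)
    ).done(function (htmlResult, sessionResult) {
      var html = htmlResult[0];
      var session = sessionResult[0];

      var $page = $(html);
      $page.find('.welcome-message').text('Welcome, ' + (session.name || 'guest')); // assumed markup
      $page.find('.profile-link').attr('href', session.profileUrl || '/login');     // assumed markup

      $('#content').html($page);                  // insert into the DOM only after the swap
    });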
2) Use a midway proxy that appends the session data as trailing JSON
A request is made to the server. A proxy in between accesses the session data locally and then performs another HTTP request (either locally or remotely), whose response is concatenated with the session data fetched just before. The browser receives a clean copy of the HTML along with the JavaScript-readable session content, and everything is updated at the same moment.
Pros = Everything is downloaded at once, only one connection required, works like a normal HTTP request would
Cons = Caching gets difficult when a dynamic content proxy is used, the Content-Length header may need to be recalculated to account for the appended data, and it may not work with some browsers?
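A rough sketch of approach 2 as a thin Node/Express proxy. For simplicity it reads the cached HTML from disk instead of making a second HTTP request; the cache layout, lookupSession and the window.SESSION convention are all assumptions:
    var express = require('express');
    var fs = require('fs');
    var app = express();

    app.get('/page/:name', function (req, res) {
      // Serve the pre-rendered, statically cached HTML...
      var html = fs.readFileSync('cache/' + req.params.name + '.html', 'utf8'); // assumed cache layout

      // ...and append the session data as a trailing script block for the client JS to pick up.
      var session = lookupSession(req);            // hypothetical session lookup
      var trailer = '<script>window.SESSION = ' + JSON.stringify(session) + ';</script>';

      res.type('html').send(html + trailer);
    });

    app.listen(3000);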
3) Use Comet for session data
A persistent, reverse-AJAX Comet connection could be established at the start of the website connection. Then all static-HTML requests could be accessed normally, and all session-related requests could go over the Comet connection.
Pros = Separation of static content and dynamic content.
Cons = Comet isn't supported very well and doesn't work very well, server latency, may conflict with same origin policy.
How do you guys think this problem should be solved? Do you think it's doable?
The solution I've found is to keep templated markup and dynamic data separate from each other. It's too much work and too messy to implement this on your own, so you can go as far as using an MVC framework that combines JSON requests with templating (AngularJS, KnockoutJS, Ember.js, etc.) or you can just stick to using templates in general. Keep in mind that this destroys SEO.
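A tiny KnockoutJS-style sketch of that separation (assuming Knockout and jQuery are loaded; the /session/status endpoint and field names are hypothetical):
    <!-- Static, cacheable template: bindings instead of baked-in session values -->
    <p>Welcome, <span data-bind="text: userName"></span></p>

    <script>
      // Dynamic data: fetched separately and bound on the client.
      var viewModel = { userName: ko.observable('guest') };
      ko.applyBindings(viewModel);

      $.getJSON('/session/status', function (session) {   // hypothetical endpoint
        viewModel.userName(session.name);                  // assumed field name
      });
    </script>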
I have a website which is displayed to visitors via a kiosk. People can interact with it. However, since the website is not locally hosted and is loaded over an internet connection, the page loads are slow.
I would like to implement some kind of lazy caching mechanism so that, as people browse, the pages and the resources they reference get cached and subsequent loads of the same page are instant.
I considered using HTML5 offline caching (the application cache), but it requires me to specify all the resources in the manifest file, and this is not feasible for me, as the website is pretty large.
Is there any other way to implement this? Perhaps using HTTP caching headers? I would also need some way to invalidate the cache at some point to "push" the new changes to the browser...
The usual approach to handling problems like this is with HTTP caching headers, combined with smart construction of URLs for resources referenced by your pages.
The general idea is this: every resource loaded by your page (images, scripts, CSS files, etc.) should have a unique, versioned URL. For example, instead of loading /images/button.png, you'd load /images/button_v123.png and when you change that file its URL changes to /images/button_v124.png. Typically this is handled by URL rewriting over static file URLs, so that, for example, the web server knows that /images/button_v124.png should really load the /images/button.png file from the web server's file system. Creating the version numbers can be done by appending a build number, using a CRC of file contents, or many other ways.
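A minimal Node sketch of generating such versioned URLs at render time (the naming scheme and the 8-character content hash are arbitrary choices for illustration, not something a web server understands out of the box):
    var crypto = require('crypto');
    var fs = require('fs');
    var path = require('path');

    // Turn /images/button.png into /images/button_v3fa92c1b.png based on a hash of the
    // file contents, so the URL changes whenever the file does.
    function versionedUrl(publicDir, assetPath) {
      var file = path.join(publicDir, assetPath);
      var hash = crypto.createHash('md5').update(fs.readFileSync(file)).digest('hex').slice(0, 8);
      var parsed = path.posix.parse(assetPath);
      return path.posix.join(parsed.dir, parsed.name + '_v' + hash + parsed.ext);
    }

    // The matching rewrite on the server strips the "_v<hash>" part again, so the
    // versioned URL maps back to the real file on disk.
    function stripVersion(url) {
      return url.replace(/_v[0-9a-f]{8}(\.[a-z0-9]+)$/i, '$1');
    }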
Then you need to make sure that, wherever URLs are constructed in the parent page, they refer to the versioned URL. This obviously requires dynamic code used to construct all URLs, which can be accomplished either by adjusting the code used to generate your pages or by server-wide plugins which affect all text/html requests.
Then you set the Expires header for all resource requests (images, scripts, CSS files, etc.) to a date far in the future (e.g. 10 years from now). This effectively caches them forever. It means that all resources loaded by each of your pages will always be fetched from cache; cache invalidation never happens, which is OK because when the underlying resource changes, the parent page will use a new URL to find it.
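For instance, with an Express-style static file server this could look like the following sketch (the public directory, the /static prefix and the one-year lifetime are assumptions):
    var express = require('express');
    var app = express();

    // Versioned assets can be cached "forever": a year here, marked immutable so the
    // browser never even revalidates them. Changed files get new URLs anyway.
    app.use('/static', express.static('public', {
      maxAge: '365d',
      immutable: true
    }));

    app.listen(3000);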
Finally, you need to figure out how you want to cache your "parent" pages. How you do this is a judgement call. You can use ETag/If-None-Match HTTP headers to check for a new version of the page every time, which will very quickly load the page from cache if the server reports that it hasn't changed. Or you can use Expires (and/or Cache-Control: max-age) to reload the parent page from cache for a given period of time before checking the server.
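Continuing the Express-style sketch above, the revalidation variant for the parent page might look like this (renderParentPage is hypothetical; Express adds the ETag itself):
    // Make the browser revalidate the parent page on every load, but answer with a
    // cheap 304 Not Modified when the ETag still matches (handled by Express).
    app.get('/', function (req, res) {
      res.set('Cache-Control', 'no-cache');   // "no-cache" means cache, but revalidate first
      res.send(renderParentPage());           // hypothetical function producing the page HTML
    });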
If you want to do something even more sophisticated, you can always put a custom proxy server on the kiosk-- in that case you'd have total, centralized control over how caching is done.
I have a page on my site which has a list of things which gets updated frequently. This list is created by calling the server via jsonp, getting json back and transforming it into html. Fast and slick.
Unfortunately, Google isn't able to index it. After reading up on how to get this done according to Google's AJAX crawling guide, I am a bit confused and need some clarification and confirmation:
Only the AJAX pages need to implement the rules, right?
I currently have a REST URL like:
[site]/base/junkets/browse.aspx?page=1&rows=18&sidx=ScoreAll&sord=desc&callback=jsonp1295964163067
this would need to become something like:
[site]/base/junkets/browse.aspx#page=1&rows=18&sidx=ScoreAll&sord=desc&callback=jsonp1295964163067
And when Google calls it like this:
[site]/base/junkets/browse.aspx#!page=1&rows=18&sidx=ScoreAll&sord=desc&callback=jsonp1295964163067
I would have to deliver the HTML snapshot.
Why replace the ? with # ?
Creating HTML snapshots seems very cumbersome. Would it suffice to just serve simple links? In my case I would be happy if Google would only index the pages for the things themselves.
It looks like you've misunderstood the AJAX crawling guide. The #! notation is to be used on links to the page your AJAX application lives within, not on the URL of the service your application makes calls to. For example, if I access your app by going to example.com/app/, then you'd make the page crawlable by instead linking to example.com/app/#!page=1.
Now when Googlebot sees that URL in a link, instead of going to example.com/app/#!page=1 – which means issuing a request for example.com/app/ (recall that the hash is never sent to the server) – it will request example.com/app/?_escaped_fragment_=page=1. If _escaped_fragment_ is present in a request, you know to return the static HTML version of your content.
Why is all of this necessary? Googlebot does not execute script (nor does it know how to index your JSON objects), so it has no way of knowing what ends up in front of your users after your scripts run and content is loaded. So, your server has to do the heavy lifting of producing an HTML version of what your users ultimately see in the AJAXy version.
So what are your next steps?
First, either change the links pointing to your application to include #!page=1 (or whatever), or add <meta name="fragment" content="!"> to your app's HTML. (See item 3 of the AJAX crawling guide.)
When the user changes pages (if this is applicable), you should also update the hash to reflect the current page. You could simply set location.hash='#!page=n';, but I'd recommend using the excellent jQuery BBQ plugin to help you manage the page's hash. (This way, you can listen to changes to the hash if the user manually changes it in the address bar.) Caveat: the currently released version of BBQ (1.2.1) does not support AJAX crawlable URLs, but the most recent version in the Git master (1.3pre) does, so you'll need to grab it from the project's Git repository. Then, just set the AJAX crawlable option:
$.param.fragment.ajaxCrawlable(true);
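As a rough usage sketch (assuming BBQ's $.bbq.getState/$.bbq.pushState API; loadPage and currentPage are hypothetical pieces of your app):
    // React to hash changes (back/forward, manual edits, or your own pushState calls)
    // and load the page the hash refers to via AJAX.
    $(window).bind('hashchange', function () {
      var page = $.bbq.getState('page') || 1;
      loadPage(page);                                 // hypothetical: fetch the JSON and render it
    });

    // When the user navigates, update the hash instead of loading a new document.
    $('#next-page').click(function () {
      $.bbq.pushState({ page: currentPage + 1 });     // assumed variable tracking the current page
    });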
Second, you'll have to add some server-side logic to example.com/app/ to detect the presence of _escaped_fragment_ in the query string, and return a static HTML version of the page if it's there. This is where Google's guidance on creating HTML snapshots might be helpful. It sounds like you might want to pursue option 3. You could also modify your service to output HTML in addition to JSON.
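A minimal sketch of that server-side check, shown with Node/Express rather than ASP.NET (renderStaticSnapshot is hypothetical):
    var express = require('express');
    var app = express();

    app.get('/app/', function (req, res) {
      var fragment = req.query._escaped_fragment_;
      if (fragment !== undefined) {
        // Googlebot requested /app/?_escaped_fragment_=page=1: serve the HTML snapshot.
        res.send(renderStaticSnapshot(fragment));       // hypothetical server-rendered HTML for that state
      } else {
        // Normal visitors get the AJAX-driven page.
        res.sendFile(__dirname + '/public/app.html');
      }
    });

    app.listen(3000);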
I've more or less given up on this. There really seems to be no alternative to generating the HTML on the server and delivering it in the HTML body if you want Google to index your directory.
I even tried adding a section wrapped in a .NET user control which implemented a simple HTML version of the directory, but Google managed to ignore that too.
So in the end my directory has been de-ajaxified. :(
Is there any way to allow search engines to list JSON or XML AJAX data?
I don't think there is a way to directly allow crawlers to index XML and JSON.
I would recommend trying to design your site using progressive enhancement. First, make all of the JSON and XML available in HTML form for users who don't use JavaScript. These users include some people with disabilities and the crawlers used by search engines. That will ensure your content is searchable.
Once you have that working and tested, add your AJAX functionality. You might do this by serving HTML, XML and JSON from a single URL using content negotiation, or you might have separate URLs.
Another graceful solution is to implement your AJAX calls as requests for full HTML pages and have your JavaScript use only the bit that it is interested in, e.g. a div with id "content". The suitability of this solution would depend on your exact requirements.
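With jQuery, for example, .load() supports exactly this pattern (a sketch; the URL and the #content id are assumptions):
    // Request the full, crawler-friendly HTML page, but inject only its #content div
    // into the current document.
    $('#content').load('/books?page=2 #content');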
Hmm, no, not really. Search engines crawl your HTML and they don't really bother clicking around or even just loading your page into a browser and letting the AJAX magic happen. Flash and JSON objects are by themselves invisible to search engines, and to make them visible, you have to transform them into HTML.
The newest technique for getting AJAX requests to be listed in search engines is to ensure they have their own URL. This technique stems from the one used by Flash applications, where each page has a unique identifier preceded by a pound (#) sign.
There are currently a few jQuery plugins which will allow you to manage this:
SWFAddress - Deep Linking for Flash & AJAX
jQuery History Plugin
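The core idea behind both plugins, stripped down to plain JavaScript, is just to give each AJAX state its own hash URL and react to hash changes (showBook is a hypothetical function in your app):
    // Give each AJAX view its own URL by writing its state into the hash...
    function openBook(id) {
      location.hash = '#book=' + id;
    }

    // ...and restore the right view whenever the hash changes (back/forward, bookmarks, links).
    window.onhashchange = function () {
      var match = location.hash.match(/#book=(\w+)/);
      if (match) {
        showBook(match[1]);   // hypothetical: fetch the book's data via AJAX and render it
      }
    };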