How to prepare a half-dynamic web page with the best performance? [closed] - performance

Imagine you have a web page with some static content and some dynamic content based on the user's session. For example, you may see a page with a menu at the top that displays the username, while the remaining content is completely cacheable and static.
One simple solution to achieve that:
You can handle the dynamic part of the page on the client side with an AJAX request (which is not cacheable), e.g. single-page applications.
Another solution could be that the client sends a request to a middleware (e.g. an API gateway); the middleware fetches the static part from a cache and the dynamic part from the backend, then returns the aggregated content to the client.
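For illustration, a minimal sketch of that second idea, assuming a small Node/Express gateway (Node 18+ for the global fetch); the backend URLs and the <!--MENU--> placeholder are made up:

const express = require('express');
const app = express();

let shellCache = null; // naive in-memory cache for the static part of the page

app.get('/page', async (req, res) => {
  // Fetch and cache the static shell once; serve it from memory afterwards.
  if (!shellCache) {
    shellCache = await fetch('http://backend.internal/static-shell').then(r => r.text());
  }
  // The session-dependent part is fetched from the backend on every request.
  const menu = await fetch('http://backend.internal/api/session-menu', {
    headers: { cookie: req.headers.cookie || '' }
  }).then(r => r.text());
  // Stitch the two parts together and return the aggregated page.
  res.send(shellCache.replace('<!--MENU-->', menu));
});

app.listen(3000);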
In my view, the worst solution is to disable caching altogether.
What Facebook does is load the dynamic part with the first request, then load the remaining content with XHR requests.
Questions:
What is the best practice for this issue?
What would be the drawback of the second solution?
What do you think about the Stack Overflow top menu that displays your username?

An AJAX request (or fetch, or any other HTTP-based request) may well be cached by using a RESTful service.
For more fine-grained control over what should be cached, you could use a service worker, for example by adding https://developers.google.com/web/tools/workbox/ to your application.
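To make that concrete, here is a hand-rolled sketch of a service worker that caches the static parts and leaves a session endpoint uncached (Workbox wraps this kind of logic for you); the cache name, asset list, and /api/session path are assumptions:

const STATIC_CACHE = 'static-v1';

self.addEventListener('install', (event) => {
  // Pre-cache the static shell and assets.
  event.waitUntil(
    caches.open(STATIC_CACHE).then((cache) =>
      cache.addAll(['/', '/styles.css', '/app.js'])
    )
  );
});

self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  // Let the dynamic, session-dependent endpoint always hit the network.
  if (url.pathname.startsWith('/api/session')) return;
  // Cache-first for everything else.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});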
If your dynamic data has to be updated live, you should also have a look at WebSockets. Depending on your stack, you could use a wrapper library like SignalR or socket.io, or simply follow one of the tutorials at http://websocketd.com/

Related

Difference between autocomplete and ajax? [closed]

What is the difference between AJAX and an autocomplete function?
I know autocomplete is a software function that completes words or strings without the user needing to type them in full.
AJAX seems similar to it, plus other functionality.
AJAX stands for "Asynchronous JavaScript and XML" and offers an alternative to the traditional request-response cycle.
With AJAX you can get data from the server without your browser having to re-render the whole page.
Autocomplete, on the other hand, mostly uses AJAX to fetch possible results on every keystroke the user makes.
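As a rough sketch of that pattern (the /suggest endpoint and its response shape are assumptions):

const input = document.querySelector('#search');
const list = document.querySelector('#suggestions');

input.addEventListener('input', async () => {
  const q = input.value.trim();
  if (!q) { list.innerHTML = ''; return; }
  // Ask the server for suggestions without reloading the page.
  const res = await fetch('/suggest?q=' + encodeURIComponent(q));
  const data = await res.json();
  // Render the returned suggestions under the input field.
  list.innerHTML = data.suggestions.map((s) => '<li>' + s + '</li>').join('');
});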
Read more about AJAX here:
http://en.wikipedia.org/wiki/Ajax_(programming)
Autocomplete can work only with data suggestions from the cache, whereas AJAX works with data suggestions from the server in addition to the data from the cache.
Using AJAX, we can render certain parts of a web page without re-rendering the whole page, which significantly reduces network bandwidth.

Ruby on Rails - KeyCode in Ruby [closed]

Is there a way to check whether a key is pressed in Ruby?
For example:
In CoffeeScript/JavaScript, you are able to say:
$(document).keypress (event) ->
  alert event.keyCode
Is this possible in Ruby as well? (I know I could use Node.js instead of Rails, but that's not my question.)
Since Ruby is not interpreted in your browser like JavaScript, it cannot do what you are trying to do by itself. The browser never gets to see any of your Ruby code, only the resulting HTML (and JS) after your controller has finished the appropriate method for a request.
Rails is REST-based, so each request is executed separately and no state is kept between requests, save for the information in a cookie or similar means. It is not constantly running, waiting for input or anything like that.
However, you can simply embed JS code like the snippet you posted within your Ruby templates. This JS can then check for a keypress event and send a new (AJAX) request to the server for additional actions. You will need to process the returned values and display them manually using JS code.
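A small sketch of that approach, assuming jQuery is available and a hypothetical /key_events route on the Rails side:

$(document).on('keypress', function (event) {
  // Forward the pressed key to a Rails controller action via AJAX.
  $.ajax({
    url: '/key_events',
    type: 'POST',
    data: { key_code: event.which },
    success: function (response) {
      // Display whatever the Rails action returned.
      $('#result').text(response.message);
    }
  });
});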
If your page or application makes heavy use of such dynamic features, other languages might be a better fit.

Single Page Apps vs PageAx [closed]

Single-page apps are well known, but PageAx seems to be less well known. I stumbled across it by accident while learning MVC, and it has worked quite well for me so far.
(Note: I am aware that this is a topic which may be regarded as "not answerable" and "should be closed" yet I feel this is an important topic which has not yet been covered. Note that this is Single Page Apps vs. PageAx (not Ajax). I am looking for benefits/disadvantage type discussion. I am making up the name "PageAx" as I have not found a better term for it.)
SPA - communicates with the controller via AJAX and returns JSON.
PageAx - communicates with the controller via AJAX and returns partial views which replace divs.
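A rough illustration of the two styles, with made-up endpoints and assuming jQuery:

// SPA style: the server returns JSON and the client builds the markup.
$.getJSON('/api/orders', function (orders) {
  var rows = orders.map(function (o) {
    return '<li>' + o.name + '</li>';
  });
  $('#orders').html(rows.join(''));
});

// "PageAx" style: the server returns a rendered partial view and the
// client simply swaps it into a div.
$('#orders').load('/orders/list');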
Here are the benefits of PageAx over SPAs, as I see them:
Little to no JavaScript on the client.
I find Partial Views very easy to write on the server side.
Benefits of SPA over PageAx:
I can't think of any.
Disadvantages of PageAx over SPA:
Slightly larger payload (but I doubt it would be discernible to the end user).
I can't think of any other.
Disadvantages of SPA over PageAx:
Seems like the amount of JavaScript and the required libraries present a fairly large learning curve (i.e., more so than MVC).
So to reiterate the question: are there any advantages to SPA over PageAx? The basic reason for the question is that I am starting yet another web project and need to decide which way to take it.
This depends on the complexity of your application.
Returning JSON is beneficial because hey—that means you have a full-fledged API that you can reuse in a mobile app or a desktop client. Even if you later decide to completely re-do your frontend, you will already have a ready-to-use backend to code against.
Also, if your webapp is highly dynamic and interactive, replacing partial views may not be enough. You may want to have finer control over transitions (e.g. animating them). For example, see Medium-Style Page Transitions: you can't do something like this with partial AJAX views.
On the other hand, if this flexibility doesn't buy you anything, rendering partial views on the server may work very well for you. Here's David from 37signals blogging about it. He calls this approach SJR (Server-generated JavaScript Responses):
This doesn’t mean that there’s no place for generating JSON on the server and views on the client. We do that for the minority case where UI fidelity is very high and lots of view state is maintained, like our calendar. When that route is called for, we use Sam’s excellent Eco template system (think ERB for CoffeeScript).
If your web application is all high-fidelity UI, it’s completely legit to go this route all the way. You’re paying a high price to buy yourself something fancy. No sweat. But if your application is more like Basecamp or Github or the majority of applications on the web that are proud of their document-based roots, then you really should embrace SJR with open arms.
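For reference, a tiny sketch of what an SJR-style response could look like: the server answers an AJAX form submission with JavaScript (in Rails, typically from a *.js.erb template) that the browser evaluates; the selectors here are made up.

// Appends the freshly created comment and clears the form.
$('#comments').append('<li>Nice post!</li>');
$('#new_comment input[type=text]').val('');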

Ajax / Deep linking and Google indexing / SEO - Is it a bad idea? [closed]

I'm about to embark on building a music-oriented website for a friend's band, and I want to build something like this template. It uses AJAX and deep linking.
My worry is that this site will not be crawlable by Google. Is there anything I can do, or code I can adjust, to make it crawlable?
Many thanks in advance!
That template doesn't look crawlable to me. Googlebot will never find your content. If I go to the page for the template and view source, then search for "Gigs schedule with filter", I can't find it in the page source. That is because that particular content is loaded with AJAX and not part of the page source.
That template does not use Google's crawlable AJAX standard with #! in the URL: https://developers.google.com/webmasters/ajax-crawling/ Googlebot will not index the content on your site if you use that template.
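For context, a rough sketch of how that (now long-deprecated) #! scheme worked on the server side, assuming an Express-style handler; renderSnapshot is a hypothetical helper:

const express = require('express');
const app = express();

// Googlebot rewrites example.com/#!/gigs to example.com/?_escaped_fragment_=/gigs
// and expects a pre-rendered HTML snapshot in return.
app.get('/', (req, res) => {
  const fragment = req.query._escaped_fragment_;
  if (fragment !== undefined) {
    return res.send(renderSnapshot(fragment)); // full HTML for crawlers (hypothetical helper)
  }
  res.sendFile(__dirname + '/index.html'); // the normal AJAX-driven page for users
});

app.listen(3000);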
Furthermore, there appear to be some URL issues. I see these two very similar URLs: http://radykal.de/themeforest/stylico/features.html and http://radykal.de/themeforest/stylico/?page=features.html. As a user, if I visit the second URL, I get the content but I don't see the navigation. It seems likely that if Googlebot were to find the content, it would index that second URL and use it as the landing page for your visitors. Missing navigation in that case would not be a good user experience, as users would not be able to navigate your site.

General web page loading speed and performance best practices [closed]

What are some general (not specific to LAMP, .NET, Ruby, MySQL, etc.) tactics and best practices to improve page loading speed?
I am looking for tips about caching, HTTP headers, external file minification (CSS, JS), etc.
And for good tools like Google PageSpeed and Yahoo YSlow.
A "ultimate resource" wiki style checklist of "things not to forget" (moderated and updated by all the wizards here on SO) is the end goal. So folks don't have to Google around endlessly for outdated blog posts on the subject. ;)
I hope the "subjective" mods go easy on me, I know this is a bit open ended. And similar questions have been asked here before. And this material overlaps with the domain of ServerFault and Webmasters too a bit. But there is no central "wiki" question that really covers this so I am hoping to start one. There are great questions like this that I refer to on SO all the time! Thanks
Caching of page content
Load JavaScript at the bottom of the page
Minify CSS (and JavaScript)
CSS and JavaScript should be in their own (external) files
If possible, combine all JS or CSS files into one of each type (saves server requests)
Use Google's hosted jQuery and jQuery UI (as they are likely already cached on some computers)
Gzip compression (see the sketch after this list)
Images should be the same size as the width and height given in the markup (avoid client-side resizing)
Use image sprites when appropriate (but don't overdo it)
Proper use of HTML elements, i.e. using <h1>...<h6> tags for headings
Avoid div-itis (or the now more popular ul-itis)
Focus JavaScript selectors as much as possible, i.e. $('h1.title') is much quicker than $('.title')
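A minimal sketch of two of the items above (gzip and long-lived caching of static assets), assuming a Node/Express server; the paths and lifetime are assumptions:

const express = require('express');
const compression = require('compression'); // gzip middleware from npm

const app = express();
app.use(compression()); // gzip all compressible responses

// Serve CSS/JS/images with a long cache lifetime so browsers and proxies
// can reuse them without re-downloading.
app.use('/assets', express.static('public', { maxAge: '30d' }));

app.listen(8080);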
Make your dynamic content more static.
If you can render your public pages as static content, you'll help proxies, caches, reverse proxies, and things like web application accelerators and DDoS-prevention infrastructure.
This can be done in several ways: by handling the cache headers, of course, but you can even think about real static pages with AJAX queries to feed the dynamic content, or a mix of the two solutions, using cache headers to make your main pages static for hours for most browsers and reverse proxies.
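A sketch of that mix, assuming an Express app; the URLs, lifetimes, and helper functions are made up:

const express = require('express');
const app = express();

// The main page is the same for everyone, so browsers and shared proxies may cache it.
app.get('/', (req, res) => {
  res.set('Cache-Control', 'public, max-age=3600');
  res.send(renderStaticShell()); // hypothetical helper rendering the static page
});

// The per-user fragment fetched by AJAX must never be cached by shared proxies.
app.get('/api/current-user', (req, res) => {
  res.set('Cache-Control', 'private, no-store');
  res.json({ username: lookupUserFromCookie(req) }); // hypothetical helper
});

app.listen(8080);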
The static-with-AJAX solution has a major drawback: SEO. Bots will not see your dynamic content. You need a way to feed bots this dynamic data (and a way to handle users reaching this data from search-engine URLs, which is a big pain). So the pattern is to keep the really important SEO data in the static page, not in the AJAX-loaded dynamic content, and to reserve the fancy AJAX interactions for the user experience. The user experience on a composite page can still be more dynamic than what the search-engine bots see; for example, refresh the latest news every hour for bots, but every minute for users.
You also need to prevent premature use of session cookies. Most proxy caches will avoid caching any HTTP request containing a cookie (and this follows the official HTTP specification). The problem is often an application that has the login form on all pages and needs an existing session on the POST of the login form. This can be fixed with separate login pages, or with redirects on the login POST. Cookie handling in a reverse-proxy cache can also be tuned in modern caches like Varnish with some configuration settings.
Edit: One advanced usage of a reverse proxy can be really useful: ESI, for example with Varnish ESI. You can put tags in your rendered HTML that the ESI reverse proxy will identify. Each of these identified regions can have a different TTL (Time To Live), let's say one day for the whole page, 10 minutes for a latest-news block, and 0 for the chat block. The reverse proxy will then fill these blocks from its own cache or from your backend.
Since the web has existed, handling proxies and caches has always been the main technique to fool the user into thinking the web is fast.
