Ever since I started ASP.NET MVC development, my experience has been almost 80% jQuery and only 20% C#.
Now I am beginning to use Knockout.js to give myself better control over the view on the page.
The question I am now facing is: should I feed the browser a "skeleton markup page" and load all data via AJAX calls, which in turn populate a JS viewmodel object and therefore the view, or should I initially populate the data via a partial view and use JS page data management for the subsequent client experience?
Right now I am doing the latter, but this requires me to write the data retrieval/display logic twice: once in JS and once in the MVC Razor view.
I am not planning to support browsers with JS disabled, so maybe I should do everything via JS Knockout viewmodel initialization?
There are many additional variables.
1. How many requests per second should your app handle in the future? If many, then with full page generation you may be able to cache the resulting web page, decreasing load on the server.
2. What kind of clients do you have? If they are slow (like low-cost mobile phones), then generating full HTML on the client can be slow.
3. Do your clients appreciate fast responses over a slow network? With full server-side page generation you can achieve a smaller number of requests and faster responses.
On the other hand, if this is an internal, department-level business app with a good network, a low number of requests, and good client computers, then you can surely go with a minimal initial page and populate everything with AJAX. Also, as Arbiter pointed out, JSON can be smaller in size than HTML, so if you have a large amount of data you can save on network traffic by using JSON.
There is also a third, middle way. You can generate the JSON data and embed it directly in the web page (like <script>CallMyJSGenerateMethod({generatedJSON: "goes here"})</script>). This way you'd have only one (JavaScript) procedure for HTML generation, a small number of requests (with an even lower amount of data), and the ability to cache the web page. Still, you'd need reasonably capable clients, so point 2 still stands.
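A rough sketch of what that client-side generation could look like; the function name matches the example above, but the element id and item fields are just placeholders:
```javascript
// This function must be loaded before the inline script that the server emits, e.g.
// CallMyJSGenerateMethod({generatedJSON: [{title: "First"}, {title: "Second"}]});
function CallMyJSGenerateMethod(payload) {
  var container = document.getElementById('items'); // placeholder element id
  payload.generatedJSON.forEach(function (item) {
    var li = document.createElement('li');
    li.textContent = item.title;                     // assumes each item has a "title" field
    container.appendChild(li);
  });
}
```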
This is more of an opinion, but in my estimation it's the most often-asked question I get about building web apps: do I build the pages with HTML/MVC on the server, or do I use all JS? There is no clear right answer that fits all scenarios. Both are great choices. Dmitry's points are all valid, too.
Other things to consider are whether you need to stick with ASP.NET on the server or if other server tech will be used (PHP?). What skills does your dev team have? Will the pages you are creating change a lot on the client, or are they relatively static?
I personally lean towards the client side instead of server-side generation, but it's mostly a preference.
I am starting to write my first web app with Node.js and Express. I have two approaches in mind.
1. Create two sets of routes: one that sends JSON, one that renders the page using a templating engine.
2. Create a static website that makes API calls to the backend using AJAX, and have only routes for the API.
I understand that approach #2 depends on AJAX support in the browser, but if this was your project, based on the advantages and disadvantages of each approach, which would you choose and why?
If I am reading it right, in both options #1 and #2 the first set of routes is an API that returns JSON rather than sends it.
Assuming that in #2 you wouldn't create static pages with JavaScript doing AJAX calls by hand, but rather still use Express static routing, like app.use('/', express.static(path.join(__dirname, 'dist'))); the difference between the two approaches isn't that big.
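A rough sketch of approach #2 under that assumption; the /api/items route, the sample data, and the dist folder are placeholders:
```javascript
// Minimal Express app: a static front end plus JSON-only API routes.
const express = require('express');
const path = require('path');

const app = express();

// Serve the pre-built client (HTML/JS/CSS) from dist/.
app.use('/', express.static(path.join(__dirname, 'dist')));

// API routes only return JSON; the client renders it with AJAX calls.
app.get('/api/items', function (req, res) {
  res.json([{ id: 1, title: 'First item' }]); // placeholder data
});

app.listen(3000);
```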
Unless you already know a supported template language, such as Mustache, the cons are that you have to pick one and then learn it (which isn't always an easy task, in my experience!).
If you don't know one, then depending on your application you might still benefit from learning and using one. As an example, think of a very generic UI where a single template could be reused many times, like a generic database UI similar to, say, the well-known phpMyAdmin.
In the case of static routing, you can achieve similar results by using a JavaScript framework that has components or templates, such as Angular. If you aren't planning to use one, that could result in a lot of code duplication of otherwise reusable widgets. Even when using one, I can imagine cases where a template engine would result in less code (or at the very least fewer files in your project). I'm not sure, though, whether it will be easier to navigate, and moreover to cover with tests, as the project grows.
Without knowing more about your project it is hard to give more advice.
If the product you're developing is primarily static content with some dynamic pieces here and there, I'd recommend serving HTML from your backend via a templating system. A good example of this would be a blog. You'll end up with better SEO, and there are fewer moving pieces to grok in this approach.
However, if you want to develop a single-page application, I recommend using your backend entirely as an API and writing your client-side logic entirely in React/Vue/Angular/whatever. Single-page applications have their front ends written entirely in JavaScript and are best suited to highly dynamic, "app-like" experiences online. Think Gmail or Facebook.
In my current project we use both approaches. Many views are static, and data is obtained from API calls (we just use Angular) or bootstrapped values (we load some objects with the template, like user roles, etc.).
However, some views are rendered on the server side, because that easily allows us to dynamically inject values, tokens, or other supporting data without making extra requests.
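For example, the bootstrapped values amount to something like this (the variable and property names are made up for illustration):
```javascript
// The server-side template writes something like the following into an inline script:
// window.bootstrappedData = { userRoles: ['admin', 'editor'], authToken: 'abc123' };

// Client code (an Angular service, or plain JS) then just reads it, with no extra request:
var roles = (window.bootstrappedData || {}).userRoles || [];
if (roles.indexOf('admin') !== -1) {
  console.log('show the admin menu'); // placeholder for real UI logic
}
```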
So, I vote for flexibility
I used to utilize the MVC 3 Razor engine to render pages. Sometimes I had to use AJAX calls to transfer Razor-rendered HTML and insert it into the page using jQuery. As a new project is starting, we are considering the MVC 4 Single Page Application framework, which is new to us. I took a first glance at it, which left me with mixed feelings:
On the one hand, it implies all your data is transferred as JSON and the client does all the work of rendering it and encoding some UI-related logic, which is great for server and network performance. On the other hand, the client (HTML + JavaScript) becomes much heavier, with a bunch of magic strings and hidden relations inside it, which seems hard to maintain. We are used to VS IntelliSense and type-safe .NET server code for rendering pages, which we would have to exchange for client scripts and Knockout binding statements in the case of a SPA.
I wonder: are there any other pros and cons of using a SPA compared to Razor, besides the obvious ones I've mentioned here? Thanks.
Razor is a server-based technology, whereas SPA (Single Page Application) is an architectural approach used on the client (web browser). The two can be used together.
From a high level, a SPA moves the rendering and data retrieval to the client. The web server becomes a services tier sitting in front of the database. An MVC pattern works best when using a SPA; frameworks like Knockout.js and Backbone.js can be used for this. The net result is a rich, responsive, desktop-like experience.
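For a flavor of what this looks like on the client, here is a minimal Knockout viewmodel sketch; the /api/orders endpoint and property names are made up for illustration, and it assumes jQuery for the AJAX call:
```javascript
// Minimal Knockout viewmodel: the server returns JSON,
// and rendering happens in the browser via data-bind attributes.
function OrdersViewModel() {
  var self = this;
  self.orders = ko.observableArray([]);  // bound to e.g. <ul data-bind="foreach: orders">
  self.isLoading = ko.observable(true);

  // Hypothetical services-tier endpoint returning JSON.
  $.getJSON('/api/orders', function (data) {
    self.orders(data);
    self.isLoading(false);
  });
}

ko.applyBindings(new OrdersViewModel());
```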
To achieve this you'll need to be a decent JavaScript programmer, or be willing to learn JavaScript.
Yes, it's moving business requirements from C# into JavaScript. In Visual Studio there is limited IntelliSense for JavaScript, so to have confidence in your JavaScript you'll need to lean on unit testing. The upside is the rich user experience (think Gmail or Google Maps).
I think it sounds like you are already fairly well apprised of most of the trade-offs here; you'll have reduced network load with an SPA, and will shift a measure of the processing to the client. You will, however, increase the complexity of your code and make the system slightly harder to maintain (simply because of the increased complexity, not due to any architectural problems inherent in SPAs).
Another thing to keep in mind is compatibility. The reason I mentioned a "false choice" in my comment on your question is that to keep the site usable for folks with JavaScript disabled, you will still need to provide regular, whole-page views. This is also a good idea for the sake of SEO; a crawler will browse your site as a user with JS disabled and can then index it. The site should then handle such incoming URLs properly, so that those with JS enabled find themselves in your SPA looking at the same content (as opposed to being dumped in the "no JS" view unnecessarily).
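As a hedged sketch of that URL handling on the client, where renderSpaView and navigateTo stand in for whatever routing your SPA actually uses:
```javascript
// On load, take over the server-rendered URL and show the same content through the SPA.
document.addEventListener('DOMContentLoaded', function () {
  renderSpaView(window.location.pathname);   // e.g. "/articles/42"

  // Keep the back/forward buttons working for in-SPA navigation.
  window.addEventListener('popstate', function () {
    renderSpaView(window.location.pathname);
  });
});

function navigateTo(path) {
  history.pushState({}, '', path);           // update the address bar without a reload
  renderSpaView(path);
}

function renderSpaView(path) {
  // Placeholder: map the path to a client-side view/template.
  console.log('rendering SPA view for', path);
}
```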
There's something else I'll mention as a possibility that might help with the above, but it breaks the ideals of an SPA: using Ajax-loaded partials in some places, rather than JSON data. For example, say you have a typical "Contact Email" form on the site; you want that to load within the context of the SPA, but it's probably easier to do so by loading the partial via AJAX. (Though certainly, yes, you could do it with a JSON object describing the fields to display in the email form.)
There will also likely be content that is more "content" than "data", which you may still wish to load via partials and Ajax.
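A minimal sketch of that partial-loading approach, assuming jQuery and a hypothetical /contact/form route that returns an HTML fragment:
```javascript
// Fetch a server-rendered partial and drop it into the SPA's content area.
// "#contact-link", "#spa-content", and "/contact/form" are placeholders for your own markup/routes.
$('#contact-link').on('click', function (e) {
  e.preventDefault();
  $('#spa-content').load('/contact/form', function (responseText, status) {
    if (status === 'error') {
      $('#spa-content').text('Sorry, the contact form could not be loaded.');
    }
  });
});
```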
An SPA is definitely an interesting project, and I'm about to deploy one myself. I've used a mix of JSON and partials in it, but that may not be your own choice.
I'm planning on experimenting a bit with the HTML5 History API on my website to asynchronously render new content and save states for the browsers that support it. Obviously this means making a lot of AJAX requests to the server, and I've hit a snag in terms of design approach. I've some areas on the site that already render content asynchronously in small ways, and in those places I've just been rolling my own solutions to generate the new HTML on the client side.
However, what I'm trying to do now will require a bit more of a robust solution, and I'd like to do it in a way that takes advantage of the MVC flow rather than relying on a JavaScript templating engine or my own wacky JavaScript to handle the raw data returned by my controllers. Since this feature will only be relevant to certain HTML5-capable browsers, I'd rather not introduce a lot of extra bloat on the client side for something a lot of people may not even see.
Essentially, what I'm wondering is: is there a way in Cake to take advantage of the presentation logic that's already in my view files to selectively generate and return just the new, ready-to-go HTML that I need instead of reinventing the wheel to do it on the client side from raw data returned by the controller?
I don't really get your problem, but it sounds like you want to cache the view, which is in fact rendered by a view class but sent back to the browser through the controller using the CakeResponse object.
Caching: http://book.cakephp.org/2.0/en/core-libraries/caching.html
Response: http://book.cakephp.org/2.0/en/controllers/request-response.html#cakeresponse
I am currently designing an application that will have a few different pages, and each page will have components that update through AJAX. The layout is similar to the new Twitter design where 'Home', 'Discover', and 'Connect' are separate pages, but interacting within the page (such as clicking 'Followers' or 'Following') uses AJAX.
Since the design requires an initial page load with several components (in the context of Twitter: tweets, followers, following), each of which can be updated individually through AJAX, I thought it'd be best to have a default controller for serving pages, and other controllers with actions that, rather than serving full pages, strictly handle querying the database and returning JSON objects. This way, on initial page load several HMVC requests can be made to gather the data for each component, and AJAX calls can also be made to update each component individually.
My idea is to have a Controller_Default that handles serving pages. In the context of Twitter, Controller_Default would contain:
action_home()
action_connect()
action_discover()
I would then have other Controllers that don't deal with serving full pages, but rather components of pages. For instance, in the context of Twitter Controller_Tweet may have:
action_get()
which returns a JSON object containing tweets for a specific user. action_home() could then make several HMVC requests to get the data for the several different components of the page (e.g. make requests to 'tweet/get', 'followers/get', 'following/get'). While on the page, however, AJAX calls could be made to the function-specific controllers (e.g. 'tweet/get') to update the content.
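Roughly, the client-side refresh I have in mind would look something like this (element IDs and the JSON shape are just placeholders):
```javascript
// Refresh a single page component from its function-specific controller.
function refreshTweets(userId) {
  $.getJSON('tweet/get', { user_id: userId }, function (response) {
    var $list = $('#tweets').empty();
    $.each(response.tweets, function (i, tweet) {   // assumes { tweets: [{ text: "..." }] }
      $('<li>').text(tweet.text).appendTo($list);
    });
  });
}

// e.g. re-fetch after a user action, or poll periodically:
setInterval(function () { refreshTweets(42); }, 30000);
```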
My question: is this a good design? Does it make sense to have the pages served through a default controller, with page components served (in JSON format) through other function specific controllers?
If there is any confusion regarding the question please feel free to ask for clarification!
One of the strengths of the HMVC pattern is that employing this type of layered application doesn't lock you into a workflow that might be difficult to change later on.
From what you've indicated above, this would be perfectly acceptable as a way of serving content to a client; the default controller wraps sub-requests, which avoids multiple AJAX calls from the client to achieve the same goal.
Two suggestions I might make:
Ensure that your Twitter back-end requests are abstracted out and managed in a library, to make the application DRYer and easier to maintain.
Consider whether the default controller is making only the absolutely necessary calls on each request. Employ caching to avoid pulling infrequently changed data on every request (e.g., followers might only be updated every 30 seconds). This of course depends entirely on your application requirements, but if you get heavily loaded you could quickly find your Twitter API request limit being reached.
One final observation: if you do find the server is experiencing high load and Twitter API requests are taking a long time to return, consider provisioning another server and installing a copy of your application. You can then "point" sub-requests from the default gateway application to your second application server, which should help improve response times if the two servers are connected by a high-speed link.
I'm still pretty new to AJAX and JavaScript, but I'm getting there slowly.
I have a web-based application that relies heavily on MySQL; individual user accounts are accessed, and the UI is populated with user-specific data.
I'm working on getting rid of a tabbed navigation bar that currently loads new pages because all that changes from page to page is information within one box.
The thing is that box needs to reload info from the database, etc.
I have had great help from users here showing that I need to call the database within the PHP page that AJAX is calling.
OK, so pardon the lengthy intro. What I'm wondering is: are there any specific limitations to what AJAX can call that I need to know about? For example, someone mentioned that it's best not to call script files, and that I should remove scripts from the PHP page being called and keep them in the 'parent' page. Are there any other things like this that I need to keep in mind?
To clarify: I'm not looking to discuss the merits/drawbacks of the technology. I'm wondering about specific coding implementation details that I need to be aware of (for example, I didn't realize until yesterday that even if I had established a MySQL connection on the page, I would need to re-establish that connection in the called page as well... makes perfect sense now).
XMLHttpRequest, which powers AJAX, has a number of limitations. I recommend brushing up on the same-origin policy; it's a pivotal rule because it limits where AJAX calls can be made.
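For illustration, a bare XMLHttpRequest call (the PHP page, parameters, and element id are placeholders). Note that the request has to go to the same scheme, host, and port that served the page, or the browser will block it under the same-origin policy:
```javascript
var xhr = new XMLHttpRequest();
xhr.open('GET', 'user_data.php?user_id=42', true);  // must be a same-origin URL
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    var data = JSON.parse(xhr.responseText);        // assumes the PHP page returns JSON
    document.getElementById('info-box').textContent = data.summary; // placeholder id/field
  }
};
xhr.send();
```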
First, you can't have JavaScript embedded in the HTTP response to an AJAX call. That's a security issue.
You don't mention the dynamics of the database, but if the data to be displayed in the tabs doesn't have to be real-time, why not cache it server-side?
I find that, like any other technique, AJAX works best in tightly controlled conditions. It wouldn't make much sense for updating nearly the whole page, unless you find that the user experience is improved by an on-page 'loader'. Without going into workarounds, the disadvantages include losing the browser back button/history, issues such as the one your friend mentioned, embedded resources and other rich content suffering as well, and an extra layer of complexity to deal with in your app. Don't treat it as magic sauce for your app: make sure every use delivers specific results that benefit your client/audience.
IMHO, it's best to put your client-side JavaScript in a separate file and then import it; it's a neater container. One thing I've faced before is getting XML back that contains code to run, such as more JavaScript. It's worth checking early on whether this is likely, and avoiding it, rather than having to resort to evals.