I'm trying to get my head around Cappuccino. I'd like my Stack Overflow peers to review the architecture below and see if it makes sense - the aim is to utilize the unique benefits of Django and Cappuccino without doubling up where the technologies overlap...
When the web browser requests a 'friendly' URL (e.g. /, /articles, etc.):
1) Django's urls.py matches this to a view.
2) The view, rather than doing Django's typical work of filling in a template with the locals dict, directly returns the small 'stub' HTML used in a Cappuccino app (a sketch follows the list).
3) The client receives the Cappuccino HTML.
4) The client requests the Objective-J JS URLs mentioned in the stub HTML.
5) The end-user app is executed and displayed in the browser.
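A minimal sketch of what such a stub view could look like (the URL names and the stub.html template are assumptions, not part of the question):

    # urls.py -- map each 'friendly' URL onto the same stub view
    from django.urls import path
    from . import views

    urlpatterns = [
        path('', views.app_stub),
        path('articles/', views.app_stub),
    ]

    # views.py -- return the Cappuccino stub rather than a filled-in template
    from django.shortcuts import render

    def app_stub(request):
        # No template context needed: the stub is static HTML whose <script>
        # tags pull in the Objective-J runtime and the application code.
        return render(request, 'stub.html')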
The browser now has a working app. When the user does something that requires data from the server:
1) The browser sends an XMLHttpRequest to a URL.
2) Django's urls.py matches this to a view.
3) The view does its work, perhaps interacting with the DB model, but instead of rendering a template it returns some JSON (see the sketch below).
4) The client receives the JSON and does whatever it needs to do.
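A sketch of the JSON side, assuming a hypothetical Article model:

    # views.py -- answer XMLHttpRequests with JSON instead of HTML
    import json

    from django.http import HttpResponse
    from .models import Article  # hypothetical model

    def articles_json(request):
        data = [{'id': a.id, 'title': a.title} for a in Article.objects.all()]
        return HttpResponse(json.dumps(data), content_type='application/json')

Recent Django versions also ship django.http.JsonResponse, which folds the json.dumps call and the content type into one step.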
Does this make sense? We still have the benefit of friendly URLs, and of the database schema being created for us from our model code. However, rather than using templates, we're providing Cappuccino stub pages and JSON responses, in order to give users something more like a real app and less like the output of an HTML templating engine.
Is there perhaps a better way of doing things? What do other Pythonistas use? Thanks for your feedback.
For a low-traffic site, using Django's routing layer would be fine, but if you plan on getting a significant amount of traffic, you might consider having your proxying web server handle the stubs.
As for the rest, it works and the TurboGears community has been doing it for years (I was a TG committer so that's what I normally use). The TG architecture of returning a dictionary to a template makes this trivial since you just set 'json' as your template engine.
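In TurboGears 2 that looks roughly like this (the controller and the data it returns are illustrative):

    # a TurboGears 2 controller: the same dict could feed an HTML template;
    # naming 'json' in @expose serializes it to JSON instead
    from tg import TGController, expose

    class ArticlesController(TGController):
        @expose('json')
        def index(self):
            return dict(articles=[{'id': 1, 'title': 'Hello'}])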
Doing the same thing in Django isn't much more complicated. Just use the serialization tools to write the result to the response rather than using the templating calls.
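For example, with Django's serialization framework (the queryset is illustrative):

    # Django's serializers turn a queryset straight into a JSON string
    from django.core import serializers
    from django.http import HttpResponse
    from .models import Article  # hypothetical model

    def article_list_json(request):
        payload = serializers.serialize('json', Article.objects.all())
        return HttpResponse(payload, content_type='application/json')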
Note that when you do an architecture like this, it's considerably easier to manage if you keep all the application logic in one place. Putting some app logic in Django and some in the browser causes things to start getting messy fairly quickly. If you treat your server as a dumb persistence layer (with the exception of validation/authentication/authorization), life is easier.
FWIW, I find Sproutcore to be easier to work with than Cappuccino if you're interested in heavier non-progressive enhancement frameworks.
I am starting to write my first web app with Node.js and Express. I have two approaches in mind.
1) Create two sets of routes: one that sends JSON, and one that renders the page using a templating engine.
2) Create a static website that makes API calls to the backend using AJAX, and have routes only for the API.
I understand that approach #2 depends on AJAX support in the browser, but if this was your project, based on the advantages and disadvantages of each approach, which would you choose and why?
If I am reading it right, the first set of routes in option #1 and the routes in option #2 amount to the same thing: an API that returns JSON rather than sends it.
Assuming that in #2 you wouldn't create static pages with JavaScript doing AJAX calls, but rather would still use Express static routing like app.use('/', express.static(path.join(__dirname, 'dist')));, the difference between the two approaches isn't that big.
Unless you already know a supported template language, such as Mustache, the cons are that you have to learn one, and before that, pick one (which isn't always an easy task, in my experience!).
If you don't know one, then depending on your application you might still benefit from learning and using one. As an example, think of a very generic UI where a single template could be reused many times - like a generic database UI similar to, say, the well-known phpMyAdmin.
In the case of static routing, you can achieve similar results by using a JavaScript framework that has components or templates, such as Angular. If you aren't planning to use one, that could result in a lot of duplication of otherwise reusable widgets. And even when using one, I can imagine cases where a template engine would result in less code (or at least fewer files in your project). I'm not sure, though, whether it would be easier to navigate, and moreover to cover with tests, as the project grows.
Without knowing more about your project it is hard to give more advice.
If the product you're developing is primarily static content with some dynamic pieces here and there, I'd recommend serving HTML from your backend via a templating system. A good example of this would be a blog. You'll end up with better SEO, and there are fewer moving pieces to grok in this approach.
However, if you want to develop a single-page application, I recommend using your backend purely as an API and writing your client-side logic entirely in React/Vue/Angular/whatever. Single-page applications have their front ends written entirely in JavaScript and are best suited to highly dynamic, 'app-like' experiences online. Think Gmail or Facebook.
In my current project we use both approaches. Many views are static, and data is obtained from API calls (we just use Angular) or from bootstrapped values (we load some objects with the template, like user roles etc.).
However, some views are rendered server-side, because that easily allows us to dynamically inject values, tokens or other supporting data without making extra requests.
So, I vote for flexibility.
As part of a product we are deploying, clients need to access a remote API on our servers to access content and data. Nonetheless, for various reasons and for some clients, a solution where the entire page is on our servers is not desirable (reasons include control over design, but mostly SEO, and them wanting this content to be available under "their domain")... A script that accesses the API server-side is not desirable due to other issues.
My idea follows (and I will point out its flaws so others can please suggest alternatives):
1) Make a simple script to be hosted on the client's server which catches all traffic under a certain URI path (a catch-all script, similar to any framework router), e.g. /MyApp/*. This script would always return a single response: a "loader" of JavaScript and styling... (a sketch follows below)
2) Through the JavaScript returned by the script above, extract the URL, take the portion of the URI after the desired path (/MyApp/[*]), and send it in an external call using JSONP or regular CORS AJAX; the response is then styled appropriately and displayed.
With this, URLs such as /MyApp/abc and /MyApp/def would have the same HTML/JS in the browser source, but the JS would load different data from the AJAX call, therefore showing different content...
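A sketch of step 1 as a catch-all route (Flask is used purely for illustration here; the question doesn't say what stack the clients' servers run):

    # a catch-all route that always returns the same 'loader' page; the
    # loader JS reads window.location, extracts the part after /MyApp/,
    # and fetches the real content from the remote API
    from flask import Flask

    app = Flask(__name__)

    LOADER_HTML = """<!doctype html>
    <html><head><script src="/static/loader.js"></script></head>
    <body><div id="content">Loading...</div></body></html>"""

    @app.route('/MyApp/', defaults={'subpath': ''})
    @app.route('/MyApp/<path:subpath>')
    def loader(subpath):
        # subpath ('abc', 'def', ...) is ignored server-side on purpose
        return LOADER_HTML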
This would seem like a good solution; the only drawback is that, from my understanding, Google and other search engines wouldn't ever be able to access the content for abc and def - they would only see the "loader JavaScript and styling" (obviously enough, they aren't going to be running the JS)...
So this is better than #! in that it wouldn't mess with URLs, but it would still depend on JS, so it's not search-engine friendly...
Due to server restrictions, I'd much rather have a simple "catch-all" page and have the API called from the client side than impose minimum requirements such as curl, etc... Plus I'd have access to the end-user's IP address more easily this way (although I could build a more elaborate proxy - which would make installation much harder on the clients' servers)...
Is there a way of achieving this without connecting to the API from the server-side?
The easiest method of doing this IMO is to have an AJAX controller (assuming MVC design) to handle all remote requests. Have each action in your controller return JSON, and then you have easy access to the data with a server-side call.
Otherwise you are using the #! solution (which you don't like, and rightly so..), or using JSONP (a hassle as well).
I used to use the MVC 3 Razor engine to render pages. Sometimes I had to use AJAX calls to transfer Razor-rendered HTML and insert it into the page using jQuery. As a new project is starting, we are considering the MVC 4 Single Page Application framework, which is new to us. I took a first glance at it, which left me with mixed feelings:
On the one hand, it implies all your data is transferred as JSON and the client does all the work of rendering it and encoding some UI-related logic, which is great for server and network performance. On the other hand, the client (HTML + JavaScript) becomes much heavier, with a bunch of magic strings and hidden relations inside it, which seems hard to maintain. We are used to VS IntelliSense and type-safe .NET server code for rendering pages, which we would have to exchange for client scripts and Knockout binding statements in the case of an SPA.
I wonder what the pros and cons of using an SPA are compared to Razor, other than the obvious ones I've mentioned here? Thanks
Razor is a server based technology where SPA (Single Page Application) is an architecture approach used on the client (web browser). Both can be used together.
From a high level, an SPA moves the rendering and data retrieval to the client. The web server becomes a services tier sitting in front of the database. An MVC pattern works best when using an SPA; frameworks like Knockout.js and Backbone.js can be used for this. The net result is a rich, responsive, desktop-like experience.
To achieve this you'll need to be a decent JavaScript programmer, or be willing to learn JavaScript.
Yes, it moves business requirements from C# into JavaScript. In Visual Studio there is limited IntelliSense for JavaScript, so to have confidence in your JavaScript you'll need to lean on unit testing. The upside is the rich user experience (think Gmail or Google Maps).
I think it sounds like you are already fairly well apprised of most of the trade-offs here; you'll have reduced network load with SPA, and will shift a measure of the processing to the client. You will, however, increase the complexity of your code, and make it slightly harder to easily maintain the system (simply because of the increased complexity - not due to any architectural problems inherent in SPA).
Another thing to keep in mind is compatibility. The reason I mentioned a "false choice" in my comment to your question is that to keep the site usable for folks with Javascript disabled, you will still need to provide regular, whole-page views. This is also a good idea to do for the sake of SEO; a crawler will browse your site as a user with JS disabled, and can then index your site. The site should then handle such incoming URLs properly so that those with JS enabled will find themselves in your SPA looking at the same content (as opposed to being dumped in the "no JS" view unnecessarily).
There's something else I'll mention as a possibility that might help with the above, but it breaks the ideals of an SPA; that is, using Ajax-loaded partials in some places, rather than JSON data. For example, say you have a typical "Contact EMail" form on the site; you want that to load within the context of the SPA, but it's probably easier to do so by loading the partial via AJAX. (Though certainly, yes; you could do it with a JSON object describing the fields to display in the e-mail form).
There will also likely be content that is more "content" than "data", which you may still wish to load via partials and Ajax.
An SPA is definitely an interesting project, and I'm about to deploy one myself. I've used a mix of JSON and partials in it, but that may not be your own choice.
I'm planning on experimenting a bit with the HTML5 History API on my website to asynchronously render new content and save states for the browsers that support it. Obviously this means making a lot of AJAX requests to the server, and I've hit a snag in terms of design approach. I've some areas on the site that already render content asynchronously in small ways, and in those places I've just been rolling my own solutions to generate the new HTML on the client side.
However, what I'm trying to do now will require a bit more of a robust solution, and I'd like to do it in a way that takes advantage of the MVC flow rather than relying on a javascript templating engine or my own whacky javascript to handle the raw data returned by my controllers. Since this feature will only be relevant to certain HTML5 capable browsers, I'd rather not introduce a lot of extra bloat on the client side for something a lot of people may not even see.
Essentially, what I'm wondering is: is there a way in Cake to take advantage of the presentation logic that's already in my view files to selectively generate and return just the new, ready-to-go HTML that I need instead of reinventing the wheel to do it on the client side from raw data returned by the controller?
I don't really get your problem, but it sounds like you want to cache the view, which is in fact rendered by a view class but sent back to the browser through the controller using the CakeResponse object.
Caching: http://book.cakephp.org/2.0/en/core-libraries/caching.html
Response: http://book.cakephp.org/2.0/en/controllers/request-response.html#cakeresponse
I am trying to find the optimal architecture for an ajax-heavy Django application I'm currently building. I'd like to keep a consistent way of doing forms, validation, fetching data, JSON message format but find it exceedingly hard to find a solution that can be used consistently.
Can someone point me in the right direction or share their view on best practice?
I make everything as normal views which display normally in the browser. That includes all the replies to AJAX requests (sub pages).
When I want to make bits of the site more dynamic I then use jQuery to do the AJAX, or in this case AJAH and just load the contents of one of the divs in the sub page into the requesting page.
This technique works really well - it is very easy to debug the sub pages as they are just normal pages, and jQuery makes your life very easy using these as part of an AJA[XH]ed page.
For all the answers to this, I can't believe no one's mentioned django-piston yet. It's mainly promoted for use in building REST APIs, but it can output JSON (which jQuery, among others, can consume) and works just like views in that you can do anything with a request, making it a great option for implementing AJAX interactions (or AJAJ [JSON], AJAH, etc whatever). It also supports form validation.
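A piston handler is a thin class wired to a URL (the model here is hypothetical; piston serializes whatever the handler returns using the emitter configured in urls.py):

    # handlers.py -- a django-piston handler that can emit JSON for jQuery
    from piston.handler import BaseHandler
    from myapp.models import Article  # hypothetical model

    class ArticleHandler(BaseHandler):
        allowed_methods = ('GET',)
        model = Article

        def read(self, request, id=None):
            if id is not None:
                return Article.objects.get(pk=id)
            return Article.objects.all()

    # urls.py would expose it via piston.resource.Resource, e.g.
    # article_resource = Resource(handler=ArticleHandler)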
I can't think of any standard way to add AJAX to a Django application, but you can have a look at this tutorial.
You will also find more details on Django's page about Ajax.
Two weeks ago I wrote up how I implement sub-templates so that they can be used in both "normal" and "ajax" requests (to Django it is the same request either way). Maybe it is helpful for you.
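The pattern is roughly this: render the same fragment either inside a full page or on its own, depending on whether the request came in via AJAX (template names and the Article model are illustrative):

    # views.py -- one view serving both full-page and AJAX requests;
    # 'articles/list.html' extends the site layout and includes
    # 'articles/_list.html', which is the bare shared fragment
    from django.shortcuts import render
    from .models import Article  # hypothetical model

    def article_list(request):
        is_ajax = request.headers.get('x-requested-with') == 'XMLHttpRequest'
        template = 'articles/_list.html' if is_ajax else 'articles/list.html'
        return render(request, template, {'articles': Article.objects.all()})

jQuery sets the X-Requested-With header on its AJAX calls, which is what the check above relies on.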
+1 to Nick for pages displaying normally in the browser. That seems to be the best starting point.
The problem with the simplest AJAX approaches, such as Nick and vikingosegundo propose, is that you'll have to rely on the innerHTML property in your Javascript. This is the only way to dump the new HTML sent in the JSON. Some would consider this a Bad Thing.
Unfortunately I'm not aware of a standard way to replicate the display of forms in JavaScript that matches Django's rendering. My approach (which I'm still working on) is to subclass Django's Form class so it outputs bits of JavaScript along with the HTML from as_p() etc. These then replicate the form by manipulating the DOM.
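That idea might look roughly like this (registerForm is a hypothetical client-side function the emitted script would call):

    # forms.py -- a sketch: append a script tag describing the form's
    # fields so client-side JS can rebuild the same form in the DOM
    import json

    from django import forms
    from django.utils.safestring import mark_safe

    class JSForm(forms.Form):
        def as_p(self):
            html = super(JSForm, self).as_p()
            fields = {name: f.__class__.__name__
                      for name, f in self.fields.items()}
            script = '<script>registerForm(%s);</script>' % json.dumps(fields)
            return mark_safe(str(html) + script)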
From experience I know that managing an application where you generate the HTML on the server side and just "insert" it into your pages becomes a nightmare. It is also impossible to test using the Django test framework. If you're using Selenium or a similar tool, it's OK, but you need to wait for the AJAX request to return, so you end up with tons of sleeps in your test script, which may slow down your test suite.
If the purpose of using the AJAX technique is to create a good user interface, I would recommend going all in, like the Gmail interface, and doing everything in the browser with JavaScript. I have written several apps like this using nothing but jQuery, state machines for managing UI state, and JSON with REST on the backend. Django, IMHO, is a perfect match for the backend in this case. There is even third-party software for generating a REST interface to your models; I've never used it myself, but as far as I know it is great for the simple things, though you of course still need to do your own business logic.
With this approach, you do run into the problem of duplicating code between the JS and your backend, such as form handling, validation, etc. I have been thinking about solving this by generating structured information about the forms and validation logic which I can use in JS. This could be compiled at deploy time and loaded like any other JS file.
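One way that deploy-time step might look: walk a form class's fields and dump a spec the client-side validation code can consume (every name here is an assumption):

    # build_form_specs.py -- run at deploy time to emit JSON descriptions
    # of Django forms for reuse by client-side validation code
    import json

    from myapp.forms import ContactForm  # hypothetical form

    def form_spec(form_cls):
        spec = {}
        for name, field in form_cls.base_fields.items():
            spec[name] = {
                'type': field.__class__.__name__,
                'required': field.required,
                'max_length': getattr(field, 'max_length', None),
            }
        return spec

    if __name__ == '__main__':
        print(json.dumps({'ContactForm': form_spec(ContactForm)}, indent=2))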
Also, avoid XML. The browsers are slow at parsing it, it is a pain to generate and a pain to work with in the browser. Use JSON.
I'm currently testing:
jQuery & backbone.js client-side
django-piston (intermediate layer)
I will write up my findings later on my blog: http://blog.sserrano.com