Server-side caching in OpenRasta - caching

I have a site that is being polled rather hard for the JSON representation of the same resources from multiple clients (browsers, other applications, unix shell scripts, python scripts, etc).
I would like to add some caching such that certain resources are cached on the server for a configurable amount of time, to avoid the CPU hit of handling the request and serialising the resource to JSON. I could of course cache them myself within the handlers, but I would then take the serialisation hit on every request, and would have to modify loads of handlers too.
I have looked at the openrasta-caching module, but I think it only controls the browser cache?
So, any suggestions for how I can get OpenRasta to cache the rendered representation of the resource, after the codec has generated it?
Thanks

openrasta-caching does have preliminary support for server-side caching, with an API you can map to the ASP.NET server-side cache via the ServerCaching attribute. That said, it's not complete, and neither is openrasta-caching as a whole. It's a 0.2 that would need a couple of days of work to become a solid v1 covering all the scenarios I wanted to support that the ASP.NET caching infrastructure doesn't: mainly, making caching in OpenRasta behave exactly like an HTTP intermediary rather than the object- and .NET-centric cache that exists in ASP.NET land, including client control of the server cache for those times you want the client to be allowed to force the server to redo the query. As I currently have no client project working on caching, it's difficult to justify further work on that plugin, so I'm not coding anything for it right now. I do have four free days coming up, though, so DM me if you want openrasta-caching bumped to 0.3 with whatever requirements of yours would fit in four days of work.
You can implement something simpler yourself using an IOperationInterceptor that plugs into the ASP.NET pipeline, or be more web-friendly and implement your caching out of process using Squid, relying on openrasta-caching to generate the correct HTTP caching headers.
That said, for your problem you may not even need server-side caching if the cost is in the JSON serialisation. If you map a Last-Modified date or an ETag to what the handler returns, OpenRasta will correctly generate a 304 where appropriate and bypass JSON rendering altogether, provided your clients issue conditional requests (and they should).
There's also a proposed API to let you optimise further by first querying only the Last-Modified date/ETag, so the 304 can be returned without retrieving any data.
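For illustration, here is a minimal sketch of the conditional-request mechanics the answer relies on. It is not OpenRasta's API (that project is C#); it's a plain Node/TypeScript server with a hypothetical loadResource() standing in for the handler, showing how a matching ETag lets the server answer 304 without sending the JSON body:

```typescript
// Minimal sketch of conditional GET handling (not OpenRasta; loadResource()
// is a made-up placeholder for whatever the handler would return).
import { createServer } from "node:http";
import { createHash } from "node:crypto";

function loadResource(): unknown {
  return { id: 42, status: "ok" }; // placeholder for the real resource
}

createServer((req, res) => {
  const body = JSON.stringify(loadResource());
  const etag = `"${createHash("sha1").update(body).digest("hex")}"`;

  if (req.headers["if-none-match"] === etag) {
    // Client's cached copy is still valid: 304 with no body, so nothing is
    // serialised onto the wire.
    res.writeHead(304, { ETag: etag });
    res.end();
    return;
  }

  res.writeHead(200, { "Content-Type": "application/json", ETag: etag });
  res.end(body);
}).listen(8080);
```

Note that in this sketch the ETag is derived from the serialised body, so the serialisation cost is still paid on the server; mapping the ETag to something cheaper (a version number or timestamp, as the proposed API above suggests) avoids even that.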

Related

How do Web Applications communicate with a server?

Let's say I wanted to create a collaborative web application where users can simultaneously work on, e.g., a diagram. I have my interactive web site (HTML/CSS/JavaScript) on the client side and, let's say, a Java application synchronizing a centralized model on the server side. How would client and server exchange messages? To give a better idea of how the system is supposed to work, here are some details:
Requirements
The underlying technology should support authorized communication, to show/hide sensitive information.
The model of the diagram should only exist on the server (centralized model, no copies), and changes should be broadcast to other users (other browsers currently editing the same document).
After receiving changes to the model, the server shall perform computationally more demanding tasks such as consistency checks, and inform the user if, e.g., an action is invalid.
Constraints
The system must not poll or set flags (communication should be direct).
The website must not be reloaded during use.
Software running on the client must be restricted to JavaScript.
Technology I've found
Here are some of the solutions I've looked at, ranked from (imho) most suitable to least, whereby I'm not sure if the list is even complete and all my assumptions are correct.
Java Web Service using javax.jws - The service would offer an API described by WSDL and be capable of responding with SOAP messages over HTTP, but can JavaScript process these messages?
Servlets & Java Server Pages (JSPs) - Would, as far as I know, require a page reload to fetch new content (isn't this pretty much the same as PHP? Both are executed on the server to generate HTML pages...)
I've already found this question which, however, does not provide concrete technologies in the answer. Thank you for any hints and clarification, I dearly appreciate it!
As you have already found out, there are many approaches to this common problem.
SOAP (sometimes called "Web Services" or "WS-*") is a standard that was developed for application-to-application communication. Its strengths are precise specifications (and therefore many good libraries), interface sharing between client and server (through WSDL), and discoverability.
However, SOAP has very limited, if any, support in browsers.
The most common modern approach is probably RESTful web services based on XML or JSON. When used in a browser as client, JSON is preferred, as the unmarshalling is trivial in JavaScript. Many frameworks and libraries like Angular and jQuery take on the tedious cross-browser AJAX request crafting. Some boilerplate is required to wrap the data in HTML, however. This downside can also be an upside if you consider the cleaner architecture and maintainability.
Since you're using Java on your server, you might also want to look into JSF's AJAX capabilities. It doesn't really use REST (no well-defined resources), but loads server-side rendered chunks of your HTML page asynchronously (read: AJAX) and re-renders them in the DOM. This can be seen as a compromise between real REST and full page reloading. The good side is that you don't have to define any resources beyond your backing beans. The downsides are possible overhead, since you're downloading HTML instead of pure data, and poor testability, since you cannot access the rendered chunks outside the JSF context.
Wholly different issues are 1. the synchronization of concurrent requests and 2. server-to-client communication. For the latter you can either look into Web Sockets (the next big thing) or good old polling.
If concurrent edits of the same data are a concern, you'll have to implement some strategy for deciding whether edits can be merged or whether one has to be declined. The client will need to handle such situations and offer the user the option to review, override, revert, etc.
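One common shape for such a strategy is optimistic concurrency with a version number; here is a minimal sketch (type and field names are made up for illustration):

```typescript
// Each saved model carries a version; an edit based on a stale version is
// declined and the client is asked to re-sync, review, and retry.
interface Edit {
  baseVersion: number; // version of the model the client edited
  change: string;      // the actual change payload
}

function applyEdit(
  currentVersion: number,
  edit: Edit
): { accepted: boolean; newVersion: number } {
  if (edit.baseVersion !== currentVersion) {
    return { accepted: false, newVersion: currentVersion };
  }
  // ...merge edit.change into the central model here...
  return { accepted: true, newVersion: currentVersion + 1 };
}
```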
EDIT: more on Java servlets and JSP
Servlets are nothing but Java classes that handle specific HTTP requests, usually bound to certain URL paths. They're comparable to PHP pages: in PHP the web server gets a request, finds the corresponding PHP file, parses the script, and returns the response. Servlets do the same; only the syntax is very different. JSP is mostly an attempt at bringing a more PHP-ish syntax to Java; behind the curtains, JSP pages are compiled into plain Java servlets.
Java servlets have a long history that predates AJAX, but many frameworks have taken on the challenge of implementing flexible resource handling on the server. Most notable are Jersey and RESTEasy, which are implementations of the Java EE standard JAX-RS, and Spring, a request-based framework that competes well against the Java EE standards. Then there is also JSF, which is sometimes called a successor to JSP. To the user it's little different from JSP, but the syntax is nicer and IMHO it is more feature-rich, especially considering extensions like PrimeFaces. In contrast to JSP, JSF embraces AJAX, and many built-in components can communicate with the server using AJAX, including calling Java functions and re-rendering parts of the page.
I would recommend you try WebSockets, since connections are persistent between clients and server and either side can send a message at any time.
You could implement a WebSocket server in Java on the server and use JavaScript's built-in WebSocket functionality on the clients.
Take a look at https://www.websocket.org/ to learn more about websockets.
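The answer suggests a Java WebSocket server; as a language-neutral illustration of the broadcast pattern it describes, here is a minimal sketch using Node's "ws" package (an assumption on my part; any WebSocket server library follows the same shape):

```typescript
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  socket.on("message", (data) => {
    // A real server would first validate the change against the central model
    // (consistency checks) and reject invalid actions.
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(data.toString()); // push the accepted change to everyone else
      }
    }
  });
});
```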

CQRS: what backend to use for my View-Store?

My CQRS-based architecture currently has 4 components. It is more of a prototype so nothing is set in stone yet.
CommandProcessor: Gets commands, executes them, etc. (duh ^^), publishes events. Azure-based.
ViewProcessor: Gets view requests, responds with the view. Subscribes to events to update the view store. Azure-based.
WebClient: AJAX-heavy web portal, sends commands and requests (JSON) views. Azure-based.
DesktopClient: Not much to say, also sends commands and requests views (undecided whether JSON or some other format). Obviously not Azure-based.
My original approach was to use an in-memory view store. Azure VMs have quite a bit of memory available, and I didn't really see the need to add the complexity of blob storage, etc.
Additionally, I am trying to minimize command-execution latency to at least partially get around the whole asynchronous-UI problem, so that I can (where needed) simulate a synchronous UI with (fast) callbacks (I hope that sentence made sense ^^).
In creating the web client, I noticed a potential flaw in my plan. The URL of the ViewProcessor is obviously different from the WebClient URL, so JSON requests would fail because of the same-origin policy. Alternatives/workarounds like JSONP did not seem that attractive because they don't solve the inherent security problem. Implementing the AJAX requests to target the WebClient itself would be an option, but then I would have redundant functionality (a view store in both the WebClient and the ViewProcessor).
I guess saving the views in blob-storage would solve this problem, but I can't shake the feeling that I am overlooking something important/obvious.
Client --command--> CommandProcessor
CommandProcessor --event--> ViewProcessor
ViewProcessor --view--> Blob
(ViewProcessor or CommandProcessor) --notification--> Client
Blob --view--> Client
That scenario would have quite a bit of latency :|
I would look again at the blob storage option. We store serialized view objects in blob storage, and it is very fast and stable. Is there some aspect of blob storage that concerns you?
Erick

Faster API access via JavaScript or server-side libraries

I would like to display information obtained from a web API (say, Instagram or Last.fm).
Is there generally a noticeable speed difference between using a server-side (Ruby) or client-side (JS) API library?
I would guess JavaScript would be faster, since you can contact the API asynchronously once the page has loaded in the client's browser.
Just wondering if there is a "best practice" for this, since API libraries generally exist for both client side and server side.
Each way has pros and cons; neither is better or worse than the other in the general case, so you have to use what makes sense in your situation.
With a small amount of data / operations, you will most likely be throttled by the web request itself, not the processing power of either JavaScript or Ruby.
I haven't made an exhaustive list, but some general things to consider:
Client Side:
Processing power is pushed over to the client, so your server uses fewer resources, but performance will vary with the client's machine.
The API must support either JSONP or CORS (e.g. Access-Control-Allow-Origin).
Very slow for large operations on large data sets.
Any API key will be viewable by any visitor to your website.
Initial response time from your server will be quicker, but if you load most of your content from an API request, you'll still have to wait.
Server Side:
Won't expose an API key to the client.
Uses more resources, but can handle larger amounts of data and operations.
Will take longer to return the initial page.
Sometimes has additional user features/methods, since OAuth or other security features are supported.
One big difference is that the JS API does not add load on your server, so it makes it easier to scale your web app.
Also, in general using JS is likely to be faster to the user because the client's browser will be getting the data directly from the web-API server, rather than going via your server.
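To make the two call paths concrete, here is a small sketch (URLs and endpoint names are hypothetical):

```typescript
// Client side: the browser calls the third-party API directly, so your server
// is never involved, but any key you embed is visible to every visitor.
async function loadTracksDirect(user: string): Promise<unknown> {
  const res = await fetch(`https://api.example.com/tracks?user=${encodeURIComponent(user)}`);
  return res.json();
}

// Server side: the browser asks your own server, which calls the API with a
// secret key and relays the result; an extra hop, but the key stays hidden.
async function loadTracksViaServer(user: string): Promise<unknown> {
  const res = await fetch(`/api/tracks?user=${encodeURIComponent(user)}`);
  return res.json();
}
```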

Client-side logic OR Server-side logic?

I've done some web-based projects, and most of the difficulties I've met (questions, confusions) could be figured out with help. But I still have an important question, even after asking some experienced developers: when functionality can be implemented with both server-side code and client-side scripting (JavaScript), which one should be preferred?
A simple example:
To render a dynamic HTML page, I can format the page in server-side code (PHP, Python) and use Ajax to fetch the formatted page and render it directly (more logic on the server side, less on the client side).
I can also use Ajax to fetch the raw data (unformatted, JSON) and use client-side scripting to format and render the page (the server gets the data from a DB or other source and returns it to the client as JSON or XML; more logic on the client side, less on the server).
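As a concrete sketch of the two options (element IDs and URLs are made up):

```typescript
// Option 1: the server formats the HTML; the client just injects the fragment.
async function renderServerFormatted(): Promise<void> {
  const html = await (await fetch("/fragment/user-list")).text();
  document.querySelector("#list")!.innerHTML = html;
}

// Option 2: the server returns raw JSON; the client formats and renders it.
async function renderClientFormatted(): Promise<void> {
  const users: { name: string }[] = await (await fetch("/api/users")).json();
  document.querySelector("#list")!.innerHTML = users
    .map(u => `<li>${u.name}</li>`)
    .join("");
}
```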
So how can I decide which one is better? Which one offers better performance? Why? Which one is more user-friendly?
With browsers' JS engines evolving, JS can be interpreted in less time, so should I prefer client-side scripting?
On the other hand, with hardware evolving, server performance is growing and the cost of server-side logic will decrease, so should I prefer server-side scripting?
EDIT:
With the answers, I want to give a brief summary.
Pros of client-side logic:
Better user experience (faster).
Less network bandwidth (lower cost).
Increased scalability (reduced server load).
Pros of server-side logic:
Better security (sensitive logic and data stay on the server).
Better availability and accessibility (mobile devices and old browsers).
Better SEO.
Easier to scale (you can add more servers, but you can't make the user's browser faster).
It seems that we need to balance these two approaches when facing a specific scenario. But how? What's the best practice?
I will use client-side logic except in the following conditions:
Security critical.
Special groups (JavaScript disabled, mobile devices, and others).
In many cases, I'm afraid the best answer is both.
As Ricebowl stated, never trust the client. However, I feel that it's almost always a problem if you do trust the client. If your application is worth writing, it's worth properly securing. If anyone can break it by writing their own client and passing data you don't expect, that's a bad thing. For that reason, you need to validate on the server.
Unfortunately if you validate everything on the server, that often leaves the user with a poor user experience. They may fill out a form only to find that a number of things they entered are incorrect. This may have worked for "Internet 1.0", but people's expectations are higher on today's Internet.
This potentially leaves you writing quite a bit of redundant code, and maintaining it in two or more places (some of the definitions, such as maximum lengths, also need to be maintained in the data tier). For reasonably large applications, I tend to solve this issue using code generation. Personally I use a UML modeling tool (Sparx Systems' Enterprise Architect) to model the "input rules" of the system, then make use of partial classes (I'm usually working in .NET) to generate the validation logic. You can achieve a similar thing by coding your rules in a format such as XML and deriving a number of checks from that XML file (input length, input mask, etc.) on both the client and server tiers.
Probably not what you wanted to hear, but if you want to do it right, you need to enforce rules on both tiers.
I tend to prefer server-side logic. My reasons are fairly simple:
I don't trust the client; this may or may not be a true problem, but it's habitual.
Server-side reduces the volume per transaction (though it does increase the number of transactions).
Server-side means that I can be fairly sure about what logic is taking place (I don't have to worry about which JavaScript engine is available in the client's browser).
There are probably more (and better) reasons, but these are the ones at the top of my mind right now. If I think of more I'll add them, or up-vote those that come up with them before I do.
Edit: valya comments that using client-side logic (via Ajax/JSON) allows for the (easier) creation of an API. This may well be true, but I can only half-agree (which is why I haven't up-voted that answer yet).
My notion of server-side logic is that it retrieves the data and organises it; if I've got this right, that logic is the 'controller' (the C in MVC), and it is then passed to the 'view'. I tend to use the controller to get the data, and then the 'view' deals with presenting it to the user/client. So I don't see that client/server distinctions are necessarily relevant to the argument about creating an API; basically, horses for courses. :)
...also, as a hobbyist, I recognise that I may have a slightly twisted usage of MVC, so I'm willing to stand corrected on that point. But I still keep the presentation separate from the logic, and that separation is the plus point as far as APIs go.
I generally implement as much as reasonable client-side. The only exceptions that would make me go server-side would be to resolve the following:
Trust issues
Anyone is capable of debugging JavaScript and reading passwords, etc. No-brainer here.
Performance issues
JavaScript engines are evolving fast so this is becoming less of an issue, but we're still in an IE-dominated world, so things will slow down when you deal with large sets of data.
Language issues
JavaScript is a weakly-typed language and it makes a lot of assumptions about your code. This can force you into spooky workarounds to get things working the way they should on certain browsers. I avoid this kind of thing like the plague.
From your question, it sounds like you're simply trying to load values into a form. Barring any of the issues above, you have 3 options:
Pure client-side
The disadvantage is that your users' loading time would double (one load for the blank form, another for the data). However, subsequent updates to the form would not require a page refresh. Users will like this if a lot of data is repeatedly fetched from the server into the same form.
Pure server-side
The advantage is that your page would load with the data. However, subsequent updates to the data would require refreshes to all/significant portions of the page.
Server-client hybrid
You would have the best of both worlds, however you would need to create two data extraction points, causing your code to bloat slightly.
There are trade-offs with each option so you will have to weigh them and decide which one offers you the most benefit.
One consideration I have not heard mentioned is network bandwidth. To give a specific example, an app I was involved with was all done server-side and resulted in a 200 MB web page being sent to the client (it was impossible to do less without a major re-design of a bunch of apps), resulting in 2-5 minute page load times.
When we re-implemented this by sending JSON-encoded data from the server and having local JS generate the page, the main benefit was that the data sent shrank to 20 MB, resulting in:
HTTP response size: 200 MB+ => 20 MB+ (with corresponding bandwidth savings!)
Time to load the page: 2-5 mins => 20 secs (10-15 of which are taken up by a DB query that was already optimized to hell and further).
IE process size: 200 MB+ => 80 MB+
Mind you, the last two points were mainly because the server side had to use a crappy tables-within-tables tree implementation, whereas going client-side allowed us to redesign the view layer to use a much more lightweight page. But my main point was the network bandwidth savings.
I'd like to give my two cents on this subject.
I'm generally in favor of the server-side approach, and here is why.
More SEO friendly. Google cannot execute JavaScript, therefore all that content will be invisible to search engines.
Performance is more controllable. User experience is always variable with a client-heavy approach, because you're relying almost entirely on the user's browser and machine to render things. Even though your server might be performing well, a user with a slow machine will think your site is the culprit.
Arguably, the server-side approach is more easily maintained and readable.
I've written several systems using both approaches, and in my experience, server-side is the way. However, that's not to say I don't use AJAX. All of the modern systems I've built incorporate both components.
Hope this helps.
I built a RESTful web application where all CRUD functionalities are available in the absence of JavaScript, in other words, all AJAX effects are strictly progressive enhancements.
I believe that with enough dedication most web applications can be designed this way, which erodes many of the server-logic vs client-logic "differences" raised in your question, such as security and expandability: in both cases the request is routed to the same controller, and the business logic is the same until the last mile, where JSON/XML is returned for XHR requests instead of the full HTML page.
Only in the few cases where the AJAXified application is vastly more advanced than its static counterpart (Gmail being the best example that comes to mind) does one need to create two versions and separate them completely (kudos to Google!).
I know this post is old, but I wanted to comment.
In my experience, the best approach is a combination of client-side and server-side. Yes, AngularJS and similar frameworks are popular now, and they've made it easier to develop web applications that are lightweight, have improved performance, and work on most web servers. BUT the major requirement in enterprise applications is displaying report data, which can encompass 500+ records on one page. With pages that return large lists of data, users often want functionality that makes this huge list easy to filter, search, and interact with. Because IE 11 and earlier IE browsers are the "browser of choice" at most companies, you have to be aware that these browsers still have compatibility issues with modern JavaScript, HTML5, and CSS3. Often, the requirement is to make a site or application compatible with all browsers. This requires adding shims or using prototypes which, together with the code needed to build a client-side application, adds to the page load in the browser.
All of this reduces performance and can cause the dreaded IE error "A script on this page is causing Internet Explorer to run slowly", forcing the user to choose whether or not to continue running the script... creating a bad user experience.
Determine the complexity of the application and what the user wants now and could want in the future, based on their preferences in their existing applications. If this is a simple site or app with a small-to-medium amount of data, use a JavaScript framework. But if they want to incorporate accessibility or SEO, or need to display large amounts of data, use server-side code to render data and client-side code sparingly. In both cases, use a tool like Fiddler or Chrome Developer Tools to check page load and response times, and use best practices to optimize code.
Check out MVC apps developed with ASP.NET Core.
At this stage, client-side technology is leading the way. With the advent of many client-side libraries like Backbone, Knockout, and Spine, and the addition of client-side templates like JsRender and Mustache, client-side development has become much easier.
So if my requirement is an interactive app, I will surely go for the client side.
If you have mostly static HTML content, then yes, go for the server side.
I did some experiments using both; I must say the server side is comparatively easier to implement than the client side.
As far as performance is concerned, read this and you will understand how server-side performance scores:
http://engineering.twitter.com/2012/05/improving-performance-on-twittercom.html
I think the second variant is better. For example, if you implement something like 'skins' later, you will thank yourself for not formatting HTML on the server :)
It also keeps the view and the controller separate. Ajax data is often produced by the controller, so let it just return data, not HTML.
If you're going to create an API later, you'll only need to make a few changes to your code.
Also, 'naked' data is more cacheable than HTML, I think. For example, if you add some style to links, you'll need to reformat all the HTML... or just add one line to your JS. And the data isn't as big as HTML (in bytes).
But if many heavy scripts are needed to format the data, it isn't so cool to ask users' browsers to do it.
As long as you don't need to send a lot of data to the client to allow it to do the work, the client side will give you a more scalable system, as you are distributing the load to the clients rather than hammering your server to do everything.
On the flip side, if you need to process a lot of data to produce a tiny amount of HTML to send to the client, or if optimisations can be made to use the server's work to support many clients at once (e.g. process the data once and send the resulting HTML to all the clients), then it may be a more efficient use of resources to do the work on the server.
If you do it with Ajax:
You'll have to consider accessibility issues (search for "web accessibility" on Google) for disabled people, but also for old browsers, clients without JavaScript, bots (like Googlebot), etc.
You'll have to flirt with "progressive enhancement", which is not simple to do if you have never worked much with JavaScript. In short, you'll have to make your app work with old browsers and clients that don't have JavaScript (some mobile devices, for example) or have it disabled.
But if time and money are not an issue, I'd go with progressive enhancement.
Also consider the back button. I hate browsing a 100% AJAX website that renders the back button useless.
Good luck!
2018 answer, with the existence of Node.js
Since Node.js allows you to run JavaScript logic on the server, you can now re-use the same validation on both the server and the client side.
Make sure you set up or restructure the data so that you can re-use the validation without changing any code.
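A minimal sketch of what that sharing looks like (module and field names are made up for illustration): the same TypeScript module is imported by the browser bundle and by the Node server, so the rules live in exactly one place.

```typescript
// validation.ts - imported by both the browser code and the Node server.
export interface SignupInput {
  username: string;
  age: number;
}

export function validate(input: SignupInput): string[] {
  const errors: string[] = [];
  if (input.username.trim().length === 0) errors.push("username is required");
  if (input.username.length > 30) errors.push("username must be at most 30 characters");
  if (!Number.isInteger(input.age) || input.age < 0) errors.push("age must be a non-negative integer");
  return errors;
}

// Browser: call validate() before submitting, for instant feedback.
// Server: call the same validate() again before persisting, because
// client-side checks can always be bypassed.
```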

Client-side caching in Rich Internet Applications

I'm starting to step into unfamiliar territory with regards to performance improvement and our RIA (Rich Internet Application) built with GWT. For those unfamiliar with GWT, essentially when deployed it's just pure JavaScript. We're interfacing with the server side using a REST-style XML web service via XMLHttpRequest.
Our XML is un-marshalled into JavaScript objects and used within the application to represent the data model behind the interface. When changes occur, the model is updated and marshalled back to XML and sent back to the server.
I've learned that the number one rule of performance (in terms of user experience) is to make as few requests as possible. Obviously this brings up the possibility of caching. Caching is great for static data, but things get tricky in a multi-user system where data on the server may be changing. Also, use of "Last-Modified" and "If-Modified-Since" headers doesn't quite do enough, since we'd like to avoid unnecessary requests altogether.
I'm trying to figure out whether caching data in the browser is even right for us before researching the approaches. I hope someone has trodden this path before. I'm looking for similar approaches, lessons learned, things to avoid, etc.
I'm happy to provide more specific info if needed...
For GWT, if performance matters that much to you, you get better performance by sending all the data you need in a single request instead of making many small queries. I would recommend against client-side data caching, as there are lots of issues, like keeping the data in sync with the database.
Besides, you already have a good advantage with GWT over traditional HTML apps. Unless you are dealing with special data (e.g. data that does not become stale too quickly, which implies mostly-read queries), I have found there is no special need for client-side caching. You are better off doing service-layer caching, since most of the time should be spent in server-side processing.
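To make the service-layer caching suggestion concrete, here is a minimal TTL-cache sketch (TypeScript for brevity; a GWT backend would implement the same idea in Java, and all names here are illustrative):

```typescript
// Caches the result of an expensive load for a fixed time-to-live, so repeated
// requests for the same key skip the database until the entry expires.
class TtlCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  async getOrLoad(key: string, load: () => Promise<V>): Promise<V> {
    const hit = this.entries.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value;
    const value = await load();
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }
}

// Usage: cache a serialised view for 30 seconds.
// const viewCache = new TtlCache<string>(30_000);
// const json = await viewCache.getOrLoad("dashboard", () => buildDashboardJson());
```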
If you can provide more details about the nature of the app, maybe some different conclusions can be taken.
