CakePHP performance tuning for mobile web apps - performance

I'm building out a transactional web app intended for mobile devices. It'll basically just allow players in a league to submit their match scores to our league admin. I've already built it out somewhat with AngularJS/JSON services/Ionic, but it's been very slow going. Changing requirements and very little time to work on it have me considering starting over in CakePHP (despite being fairly new to it and to MVC in general).
What coding practices can I follow to keep the user experience fast? My CakePHP source folder is massive compared to my Angular source folder, but if I understand correctly that won't necessarily affect the user, because most of the heavy lifting will be done by the server and only a fairly small website is presented to the client. Correct?
Should I try to do a big data load right when they log in so that most of the data is already client-side? Are there ways I can make the requests to/from the server smaller? Any pointers would be great.
Thanks

Without knowing the specifics of your data model, it's hard to give specific ways to optimize.
I would take a look at sending data asynchronously (client-side) with Pusher (or something home-grown), or at using pagination to break large result sets into smaller subsets.
You can use a Real User Monitoring (RUM) tool such as Pingometer to track performance for users. It'll show what, if anything, takes time to load: network (connectivity, encryption, etc.), application code (controllers), DOM (JavaScript manipulation), or page rendering (images, CSS, etc.).
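For illustration, here's a minimal client-side sketch of the pagination idea; the /scores.json endpoint, the page parameter, and renderScores are all hypothetical (on the server side, CakePHP's Paginator can serve pages like this):

    // Request one page of results at a time instead of the whole data set.
    // Endpoint, parameter, and renderScores() are assumptions for this sketch.
    function loadScoresPage(page) {
        return fetch('/scores.json?page=' + page)
            .then(function (res) { return res.json(); })
            .then(function (data) {
                renderScores(data.scores); // your own rendering code
                return data.hasMore;       // let the server say if more pages exist
            });
    }

    loadScoresPage(1);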

Related

Is browser caching of images good enough to invalidate the need for server side storing?

I had an architecture question, and I had to rewrite the question title multiple times because SO asked me to, so please feel free to correct it if you see fit. I am not an expert in cache-related things, so I would very much appreciate some insights into my architecture question.
The situation is this. We have a web-based design app (frontend JavaScript, backend PHP) that presents lots of clipart images to our customers, who use them to create online artwork. Earlier, our app was hosted on an AWS machine and the clipart images were stored locally on the same server, so that no network transfer was needed to load the clipart, which kept the design app's load time fast. The customer-created designs were also saved to a backend MySQL server connected directly to the web-based design app (in JSON and a relational model).
A while ago a new team joined to build a mobile version of this app, and they insisted that the cliparts be loaded from a "central location" for both our web app and the mobile app they are creating. They also said the designs should be stored in a "central database" accessible by the web and mobile apps (and there was some major re-architecting of the JSON structure as well).
So finally the architecture changed such that the cliparts now reside in a centralized location (S3), and there is an "Asset Delivery and Storage (ADS) System" to which our design app makes requests for clipart images. (Please note that the clipart repository is very large and only a subset of images is served, based on various parameters such as the style of the design, the account type of the customer, etc.) This task is now done by the ADS system (written in Python).
And since our web design app no longer has any local clipart storage, nor any clipart-filtering logic (that got delegated to ADS, so no more server-side PHP), it has become a purely front-end JavaScript app without any server requirements and was subsequently moved to S3.
Now the real issue is that our web app seems much slower on initial load than when we had our own stash of cliparts stored on the server. I read that if an app requests images, those images are cached in the browser, and if the customer, for example, loads the same order before that cache has expired, then no repeat request needs to be sent to the server (in this case the ADS).
If that is true, is there any case I can make that moving the clipart images from the design app server to the ADS system, and having to send a request and load them every time a design is loaded, has contributed in part to the recent slowness of the design app?
Also, most times I hear the answer that "the mobile app does the same and is faster". I am not a mobile developer. Could there be some mobile caching tricks that make the mobile app much more "cache-efficient" than the purely web-based design app, so that even though the architecture is the same for both (requesting cliparts from the ADS), the mobile app does it in a better, more efficient manner?
End note: I realise I am not asking a specific programming question. But from some of the notes I have read here, SO is a community for programmers, and I do not know of any other community that answers programming-related questions so well. The architecture question I have is a genuine programming-related question I face at work, and sadly I am not skilled enough to tell whether all the recent architectural changes have drawbacks that are causing our web app's performance to degrade noticeably.
Thanks for reading, and I would really appreciate any pointers or even links to reading for better understanding this.
In Chrome, open up the developer tools and click on the Network tab. 90% of the time you can identify the slow resource from there.
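If the Network tab shows the cliparts being re-downloaded on every load, long-lived cache headers on the image responses are the usual fix. The ADS is written in Python, so this Express-style snippet is only an illustration of the headers involved, not the actual system:

    // Illustration only: serve clipart with long-lived cache headers so the
    // browser can reuse the images instead of re-requesting them each time.
    const express = require('express');
    const app = express();

    app.use('/cliparts', express.static('cliparts', {
        maxAge: '30d', // sends Cache-Control: public, max-age=2592000
        etag: true     // allows cheap revalidation once the cache expires
    }));

    app.listen(3000);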

What is the right way to build a large web app

I have to build a web app that must support two main actions:
The first is allowing a small number of users ("A" users) to upload ads or posts. These A users can upload as many ads as they want, but their photos are limited to a five-megabyte overall threshold. The total number of ads will be approximately 10k.
The second is allowing a broad public (about 1k users per day) to search for and view the articles published by the A users; these searches can be refined with advanced filters.
I would like to know whether it is strictly necessary to build my app in a scalable way, or whether I can simply use an MVC approach.
The app will be developed with the Laravel framework and hosted on Amazon.
What do you recommend?
I would like some advice, tips, and tricks for doing this the best way.
Thanks in advance
The MVC approach is fine; Laravel, or any platform, can be scaled in multiple ways. The simplest is separating the DB, Laravel app, cache, and queue onto separate servers, and each of those pieces can then be scaled independently.
There is a great online set of videos about this, https://serversforhackers.com/scaling-laravel/forge.
But unless you know you will have a large amount of traffic right away, it's better to start with a simpler structure; you'll save on cost, and it's not hard to scale later. Start with one server for now, then separate the functions (cache, DB, etc.) onto their own servers as you find they need to scale.
If you want to save a little hassle, though, I do recommend Laravel Forge and Envoyer. They make deploying and managing servers a lot easier; Envoyer handles deployments and is great for automating all of that.

Client-side logic OR Server-side logic?

I've done some web-based projects, and most of the difficulties I've met (questions, confusions) could be figured out with help. But I still have an important question, even after asking some experienced developers: when functionality can be implemented with either server-side code or client-side scripting (JavaScript), which should be preferred?
A simple example:
To render a dynamic HTML page, I can format the page in server-side code (PHP, Python) and use Ajax to fetch the formatted page and render it directly (more logic on the server side, less on the client side).
I can also use Ajax to fetch the data (unformatted, as JSON) and use client-side scripting to format the page and render it (the server gets the data from a DB or other source and returns it to the client as JSON or XML; more logic on the client side, less on the server).
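For example, a minimal sketch of the second approach (the endpoint, field names, and element ID are made up):

    // Second approach: fetch raw JSON and build the markup on the client.
    fetch('/api/items') // hypothetical endpoint returning [{ name: ... }, ...]
        .then(function (res) { return res.json(); })
        .then(function (items) {
            var list = document.getElementById('item-list');
            list.innerHTML = items
                .map(function (item) { return '<li>' + item.name + '</li>'; })
                .join(''); // escape item.name in real code to avoid XSS
        });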
So how can I decide which one is better? Which one offers better performance? Why? Which one is more user-friendly?
With browsers' JS engines evolving, JS can be interpreted in less time, so should I prefer client-side scripting?
On the other hand, with hardware evolving, server performance is growing and the cost of server-side logic will decrease, so should I prefer server-side scripting?
EDIT:
With the answers, I want to give a brief summary.
Pros of client-side logic:
Better user experience (faster).
Less network bandwidth (lower cost).
Increased scalability (reduced server load).
Pros of server-side logic:
Better security (logic can't be inspected or bypassed on the client).
Better availability and accessibility (mobile devices and old browsers).
Better SEO.
Easily expandable (can add more servers, but can't make the browser faster).
It seems that we need to balance these two approaches when facing a specific scenario. But how? What's the best practice?
I will use client-side logic except in the following conditions:
Security critical.
Special groups (JavaScript disabled, mobile devices, and others).
In many cases, I'm afraid the best answer is both.
As Ricebowl stated, never trust the client. However, I feel that it's almost always a problem if you do trust the client. If your application is worth writing, it's worth properly securing. If anyone can break it by writing their own client and passing data you don't expect, that's a bad thing. For that reason, you need to validate on the server.
Unfortunately if you validate everything on the server, that often leaves the user with a poor user experience. They may fill out a form only to find that a number of things they entered are incorrect. This may have worked for "Internet 1.0", but people's expectations are higher on today's Internet.
This potentially leaves you writing quite a bit of redundant code and maintaining it in two or more places (some of the definitions, such as maximum lengths, also need to be maintained in the data tier). For reasonably large applications, I tend to solve this with code generation. Personally I use a UML modeling tool (Sparx Systems' Enterprise Architect) to model the "input rules" of the system, then make use of partial classes (I'm usually working in .NET) to generate the validation logic. You can achieve a similar thing by coding your rules in a format such as XML and deriving a number of checks from that XML file (input length, input mask, etc.) on both the client and server tiers.
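As a rough sketch of that idea, using JSON instead of XML (all names here are hypothetical), the rules live in one place and both tiers derive their checks from them:

    // One source of truth for input rules; the same function can run in the
    // browser and on the server, so the two sets of checks never drift apart.
    var rules = {
        username: { maxLength: 20, pattern: '^[a-z0-9_]+$' } // hypothetical rule
    };

    function validate(field, value) {
        var r = rules[field];
        if (r.maxLength && value.length > r.maxLength) return false;
        if (r.pattern && !new RegExp(r.pattern).test(value)) return false;
        return true;
    }

    validate('username', 'league_admin'); // true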
Probably not what you wanted to hear, but if you want to do it right, you need to enforce rules on both tiers.
I tend to prefer server-side logic. My reasons are fairly simple:
I don't trust the client; this may or may not be a true problem, but it's habitual
Server-side reduces the volume per transaction (though it does increase the number of transactions)
Server-side means that I can be fairly sure about what logic is taking place (I don't have to worry about the JavaScript engine available in the client's browser)
There are probably more -and better- reasons, but these are the ones at the top of my mind right now. If I think of more I'll add them, or up-vote those that come up with them before I do.
Edit: valya comments that using client-side logic (via Ajax/JSON) allows for the (easier) creation of an API. This may well be true, but I can only half-agree (which is why I haven't up-voted that answer yet).
My notion of server-side logic is that it retrieves the data and organises it; if I've got this right, that logic is the 'controller' (the C in MVC), and the result is then passed to the 'view'. I tend to use the controller to get the data, and the 'view' deals with presenting it to the user/client. So I don't see that client/server distinctions are necessarily relevant to the argument about creating an API; basically, horses for courses. :)
...also, as a hobbyist, I recognise that I may have a slightly twisted usage of MVC, so I'm willing to stand corrected on that point. But I still keep the presentation separate from the logic. And that separation is the plus point so far as APIs go.
I generally implement as much as reasonable client-side. The only exceptions that would make me go server-side would be to resolve the following:
Trust issues
Anyone is capable of debugging JavaScript and reading passwords, etc. No-brainer here.
Performance issues
JavaScript engines are evolving fast so this is becoming less of an issue, but we're still in an IE-dominated world, so things will slow down when you deal with large sets of data.
Language issues
JavaScript is a weakly-typed language and makes a lot of assumptions about your code. This can force you into spooky workarounds to get things working the way they should on certain browsers. I avoid this type of thing like the plague.
From your question, it sounds like you're simply trying to load values into a form. Barring any of the issues above, you have 3 options:
Pure client-side
The disadvantage is that your users' loading time doubles (one load for the blank form, another for the data). However, subsequent updates to the form would not require a page refresh. Users will like this if a lot of data is repeatedly fetched from the server into the same form.
Pure server-side
The advantage is that your page would load with the data. However, subsequent updates to the data would require refreshes to all/significant portions of the page.
Server-client hybrid
You would have the best of both worlds, however you would need to create two data extraction points, causing your code to bloat slightly.
There are trade-offs with each option so you will have to weigh them and decide which one offers you the most benefit.
One consideration I have not heard mentioned is network bandwidth. To give a specific example: an app I was involved with was done entirely server-side and resulted in a 200 MB web page being sent to the client (it was impossible to do less without a major re-design of a bunch of apps), giving a 2-5 minute page load time.
When we re-implemented it by sending JSON-encoded data from the server and having local JS generate the page, the main benefit was that the data sent shrank to 20 MB, resulting in:
HTTP response size: 200 MB+ => 20 MB+ (with corresponding bandwidth savings!)
Time to load the page: 2-5 mins => 20 secs (10-15 of which are taken up by a DB query that was already optimized to hell and further).
IE process size: 200 MB+ => 80 MB+
Mind you, the last two points were mainly due to the fact that the server side had to use a crappy tables-within-tables tree implementation, whereas moving to the client side allowed us to redesign the view layer to use a much more lightweight page. But my main point was the network bandwidth savings.
I'd like to give my two cents on this subject.
I'm generally in favor of the server-side approach, and here is why.
More SEO friendly. Google cannot execute JavaScript, therefore all that content will be invisible to search engines.
Performance is more controllable. User experience is always variable with the client-side/SOA approach, since you're relying almost entirely on the user's browser and machine to render things. Even though your server might be performing well, a user with a slow machine will think your site is the culprit.
Arguably, the server-side approach is more easily maintained and readable.
I've written several systems using both approaches, and in my experience, server-side is the way. However, that's not to say I don't use AJAX. All of the modern systems I've built incorporate both components.
Hope this helps.
I built a RESTful web application where all CRUD functionalities are available in the absence of JavaScript, in other words, all AJAX effects are strictly progressive enhancements.
I believe that with enough dedication, most web applications can be designed this way, which erodes many of the server-logic vs client-logic "differences" raised in your question, such as security and expandability: in both cases the request is routed to the same controller, and the business logic is all the same until the last mile, where JSON/XML, instead of a full HTML page, is returned for XHR requests.
Only in the few cases where the AJAXified application is so vastly more advanced than its static counterpart, GMail being the best example that comes to my mind, does one need to create two versions and separate them completely (kudos to Google!).
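A rough sketch of that "last mile" in an Express-style controller (illustrative only; app and loadOrder are assumed, and the same principle applies in any framework):

    // Same route and business logic; only the response format differs.
    app.get('/orders/:id', function (req, res) {
        var order = loadOrder(req.params.id); // shared business logic (assumed)
        if (req.xhr) {
            res.json(order); // the AJAX enhancement gets JSON
        } else {
            res.render('order', { order: order }); // plain requests get full HTML
        }
    });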
I know this post is old, but I wanted to comment.
In my experience, the best approach is a combination of client-side and server-side. Yes, AngularJS and similar frameworks are popular now, and they've made it easier to develop web applications that are lightweight, perform well, and work on most web servers. BUT, a major requirement in enterprise applications is displaying report data, which can encompass 500+ records on one page. With pages that return large lists of data, users often want functionality that makes the list easy to filter and search, plus other interactive features. Because IE 11 and earlier IE browsers are the "browser of choice" at most companies, you have to be aware that these browsers still have compatibility issues with modern JavaScript, HTML5, and CSS3. Often the requirement is to make a site or application compatible with all browsers, which means adding shims or polyfills that, together with the code needed to build a client-side application, add to the page load in the browser.
All of this reduces performance and can trigger the dreaded IE error "A script on this page is causing Internet Explorer to run slowly", forcing the user to choose whether to keep running the script or not... creating a bad user experience.
Determine the complexity of the application and what the user wants now, and could want in the future, based on their preferences in their existing applications. If this is a simple site or app with a small-to-medium amount of data, use a JavaScript framework. But if they want to incorporate accessibility or SEO, or need to display large amounts of data, use server-side code to render data and client-side code sparingly. In both cases, use a tool like Fiddler or the Chrome developer tools to check page load and response times, and use best practices to optimize code.
Check out MVC apps developed with ASP.NET Core.
At this stage, client-side technology is leading the way: with the advent of many client-side libraries like Backbone, Knockout, and Spine, and the addition of client-side templates like JsRender, Mustache, etc., client-side development has become much easier.
So, if the requirement is an interactive app, I would surely go client-side.
If you have mostly static HTML content, then yes, go server-side.
I did some experiments using both; I must say the server side is comparatively easier to implement than the client side.
As far as performance is concerned, read this and you will see how server-side performance scores:
http://engineering.twitter.com/2012/05/improving-performance-on-twittercom.html
I think the second variant is better. For example, if you implement something like 'skins' later, you will thank yourself for not formatting HTML on the server :)
It also keeps the separation between view and controller. Ajax data is often produced by the controller, so let it just return data, not HTML.
If you're going to create an API later, you'll only need to make a few changes to your code.
Also, 'naked' data is more cacheable than HTML, I think. For example, if you add some style to links, you'd need to reformat all the HTML... or just add one line to your JS. And raw data isn't as big as HTML (in bytes).
But if many heavy scripts are needed to format the data, it isn't so cool to ask users' browsers to do it.
As long as you don't need to send a lot of data to the client to allow it to do the work, client-side will give you a more scalable system, as you are distributing the load to the clients rather than hammering your server to do everything.
On the flip side, if you need to process a lot of data to produce a tiny amount of HTML to send to the client, or if optimisations can be made to use the server's work to support many clients at once (e.g. process the data once and send the resulting HTML to all the clients), then it may be a more efficient use of resources to do the work on the server.
If you do it in Ajax:
You'll have to consider accessibility issues (search for "web accessibility" on Google) for disabled people, but also for old browsers, users without JavaScript, bots (like Googlebot), etc.
You'll have to flirt with "progressive enhancement", which is not simple to do if you've never worked much with JavaScript. In short, you'll have to make your app work with old browsers and those that don't have JavaScript (some mobile browsers, for example) or have it disabled.
But if time and money are not an issue, I'd go with progressive enhancement.
Also consider the back button. I hate it when I'm browsing a 100% AJAX website that renders the back button useless.
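For what it's worth, the back-button problem is fixable with the History API. A minimal sketch (loadPage stands in for your own AJAX navigation code):

    // Record each AJAX navigation so Back/Forward keep working.
    function navigate(url) {
        loadPage(url); // your own AJAX page loader (assumed)
        history.pushState({ url: url }, '', url);
    }

    window.addEventListener('popstate', function (e) {
        if (e.state) loadPage(e.state.url); // Back button re-renders the view
    });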
Good luck!
2018 answer, with the existence of Node.js
Since Node.js allows you to run JavaScript logic on the server, you can now re-use the same validation on both the server and the client side.
Make sure you set up or restructure the code so that you can re-use the validation without changing it.
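A minimal sketch of that re-use (file and function names are made up):

    // validators.js - written once, require()d by Node on the server and
    // included via a <script> tag or a bundler in the browser.
    function isValidScore(value) {
        var n = Number(value);
        return Number.isInteger(n) && n >= 0;
    }

    if (typeof module !== 'undefined' && module.exports) {
        module.exports = { isValidScore: isValidScore }; // Node side
    }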

Client-side caching in Rich Internet Applications

I'm starting to step into unfamiliar territory with regards to performance improvement and our RIA (Rich Internet Application) built with GWT. For those unfamiliar with GWT, essentially when deployed it's just pure JavaScript. We're interfacing with the server side using a REST-style XML web service via XMLHttpRequest.
Our XML is un-marshalled into JavaScript objects and used within the application to represent the data model behind the interface. When changes occur, the model is updated and marshalled back to XML and sent back to the server.
I've learned the number one rule of performance (in terms of user experience) is to make as few requests as possible. Obviously this brings up the possibility of caching. Caching is great for static data, but things get tricky in a multi-user system where data on the server may be changing. Also, using "Last-Modified" and "If-Modified-Since" requests doesn't quite do enough, since we'd like to avoid unnecessary requests altogether.
I'm trying to figure out if caching data in the browser is even right for us before researching approaches. I hope someone has trodden this path before. I'm looking for similar approaches, lessons learned, things to avoid, etc.
I'm happy to provide more specific info if needed...
For GWT, if performance matters that much to you, you'll get better performance by sending all the data you need in a single request instead of making multiple small queries. I would recommend against client-side data caching, as there are lots of issues, like keeping the data in sync with the database.
Besides, you already have a good advantage with GWT over traditional HTML apps. Unless you are dealing with special data (e.g. data that does not become stale too quickly, which implies mostly-read queries), I've found there is no special need for client-side caching. You are better off doing service-layer caching, since most of the time is spent in server-side processing.
If you can provide more details about the nature of the app, maybe some different conclusions can be taken.
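To illustrate the single-request advice (the question's app already talks REST over XMLHttpRequest, so plain JavaScript is close enough; the /api/bootstrap endpoint and the init functions are invented):

    // Instead of several separate XHRs at startup, ask for one combined bundle.
    fetch('/api/bootstrap') // hypothetical endpoint returning everything at once
        .then(function (res) { return res.json(); })
        .then(function (data) {
            initUsers(data.users);       // each slice of the payload feeds a
            initSettings(data.settings); // different part of the UI (assumed
            initMatches(data.matches);   // functions for this sketch)
        });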

Does Ajax deteriorate performance?

Does excessive use of AJAX affect performance? In the context of large web applications, how do you handle and control asynchronous AJAX requests?
Excessive use of anything degrades performance; using AJAX where necessary will improve performance, especially if the alternative is a complete full-page round-trip to the server [a 'postback' in ASP.NET terminology].
There are two sides to this story.
AJAX generally improves performance from the client's perspective. Rather than loading an entire page, a smaller amount of data is requested from the server when it is needed. Given that an HTML page often references many dependent files (images, CSS, JavaScript, etc., each requiring a hit on the server or the cache), the client-side performance gain from judicious use of AJAX can be remarkable.
On the server side, the issue becomes one of having many more connections to manage. Polling applications, such as in-browser chat in particular, can really start to increase the load on the server because the browser is hitting it much more rapidly. In a typical dynamic application (where the response is generated by code rather than served from a static file) you may start running into issues, but these are generally balanced by the fact that the complexity of each request is often much lower (again, you aren't generating the entire page but a small subset of it), so your platform can probably achieve a higher throughput in any case.
The exact outcome of any performance issue is going to depend on a number of factors, including your server, platform, framework, and the prevailing climatic conditions at the time.
My ultimate advice: focus on creating a good user experience, develop intelligently, collect as many metrics as you can, and optimise when you know you need to.
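On the polling point above, one common way to soften the server load is to back off when nothing is changing; a small sketch (the endpoint and intervals are arbitrary):

    // Poll for new messages, backing off while the conversation is idle.
    var delay = 2000;  // start at 2 seconds (arbitrary)
    var lastSeen = 0;

    function poll() {
        fetch('/api/messages?since=' + lastSeen) // hypothetical endpoint
            .then(function (res) { return res.json(); })
            .then(function (msgs) {
                if (msgs.length) {
                    lastSeen = msgs[msgs.length - 1].id;
                    delay = 2000; // activity: go back to fast polling
                } else {
                    delay = Math.min(delay * 2, 60000); // idle: back off, 60 s max
                }
                setTimeout(poll, delay);
            });
    }

    poll();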
AJAX itself (being asynchronous requests)? No, not generally.
However, if you have an abundance of JavaScript and markup and transfer large amounts of data via your XMLHttpRequests, then yes, you can see a performance hit. It really depends on how you want your website to function; any degradation is generally avoidable if things are sculpted correctly.
Performance of what exactly? I'm going to assume you meant performance of an application in terms of user experience.
What Ajax appears to be best at is causing network traffic only when it's needed. Rather than downloading a honkin' great web page in one hit, it downloads only what's needed in as quick a manner as possible.
Then, if you do something that needs more info, it goes and gets it from the network then.
This means unused stuff is never downloaded (if you design it right, of course - bad code can be written in Ajax as much as in any other environment).
I prefer to mix Ajax methods for data transfer and a client-side library like jQuery for pretty interface.
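A tiny sketch of that on-demand fetching (the element IDs and endpoint are invented):

    // Fetch the detail data only when the user actually asks for it.
    document.getElementById('details-btn').addEventListener('click', function () {
        fetch('/api/details/42') // hypothetical endpoint
            .then(function (res) { return res.json(); })
            .then(function (d) {
                document.getElementById('details').textContent = d.description;
            });
    });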
Depending on the situation, AJAX may have a performance overhead, or it can actually perform better than an equivalently functioning web site that doesn't use AJAX.
It's very easy to overuse AJAX and overload the server with tons of frivolous requests, and it can also be a burden on the client's CPU. Conversely, AJAX can be used to deliver small bits of HTML and other code rather than a whole page for each request, which is at least less of a burden on the server.
Ajax is just an ordinary HTTP request, so as long as your server can handle those requests it won't be a problem. The upside to Ajax is faster perceived performance by the user, since the page doesn't have to reload and redraw itself for every user action.
If scalability is a concern, I'm sure you are also looking at scaling the system horizontally by adding more web servers to the farm. Same goes with even non-Ajax web apps anyway.
AJAX, like any technology, can be a good thing or a bad thing depending on the situation and how it is implemented. If you have a specific need for an asynchronous process, then it is a good tool to use. However, if you use it irresponsibly you can get into trouble. If you do use it, try to find a good framework that does most of the heavy lifting, and be aware of some of the downsides of AJAX...
http://learningremix.net/w2007integ/vangoori/2007/01/the_downsides_of_ajax.shtml
I would agree with quite a few other posts here. If you use it in an intelligent way (i.e., not firing AJAX requests every 30 seconds), then it will be fine. I use AJAX on my website (and there is also a JS-free version), and from a client's perspective, the AJAX version loads anywhere from roughly equal speed to four times faster. It all depends on the design (graphics and other content) of the website and what you are updating.
The downside is that, since you have to load some frameworks (even if you create your own, as I have), you will see a slightly slower load for the first page or any full refresh, and it does increase the processing load a bit. But that is just because AJAX has increased productivity, and therefore the user can make more requests/updates.
If the site is busy then it will, eventually, kill the server, unless you're in a farm.
As for the site itself, it shouldn't.
