Parameters that affect the performance of a web app on different clients

I have built a web application in which I display a map, and I use both WMS and WFS requests to show a network (lines) and points of interest on the map. The application has several filters for sending queries to the database (such as date filters, etc.).
The app runs on a remote server. The speed when accessing the app from my browser (Firefox or Chrome) is satisfactory; everything runs quite smoothly.
My issue is that I get complaints related to speed and performance. So my question is: what does the performance depend on, and how is it possible that I have a great experience while others don't?
One hypothesis is that the other clients' computers are too underpowered (which is not the case).
Another one is that the server doesn't have enough resources (which is also not the case).
What are other parameters that affect the performance of a web app?

Connection speed is a very important factor, as is response time (mostly determined by the distance between client and server).
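One quick way to separate network effects from everything else is to time the same request from your own machine and from a machine closer to the complaining users. A minimal sketch in Python (assuming the `requests` library is available; the URL is a placeholder, not your actual WMS/WFS endpoint):

```python
# Minimal sketch: time the same request from different client machines/locations.
# The URL below is a placeholder, not the real application endpoint.
import time
import requests

URL = "https://example-app.invalid/map"

def time_request(url, n=5):
    """Issue n GET requests and report the best and average elapsed time in ms."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        requests.get(url, timeout=30)
        samples.append((time.perf_counter() - start) * 1000)
    return min(samples), sum(samples) / len(samples)

if __name__ == "__main__":
    best, avg = time_request(URL)
    print(f"best {best:.0f} ms, average {avg:.0f} ms")
```

If the numbers differ a lot between locations, the bottleneck is most likely the network path rather than the server or the client machines.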

Related

CakePHP performance tuning for mobile web apps

I'm building out a transactional web app intended for mobile devices. It'll basically just allow players in a league to submit their match scores to our league admin. I've already built it out somewhat with angularjs/JSON Services/ionic but it's very slow going. Changing requirements and very little time to work on it have me considering starting over in CakePHP (despite being fairly new to it and MVC in general).
What coding practices can I follow to keep the user experience fast? My cakephp source folder is massive compared to my angular source folder but if I understand correctly, that won't necessarily affect the user because most of the heavy lifting will be done by the server and presented as a fairly small website to the client, correct?
Should I try to do a big data load right when they log in so that most of the data is already client-side? Are there ways I can make the requests to/from the server smaller? Any pointers would be great.
Thanks
Without knowing the specifics of your data model, it's hard to give specific ways to optimize.
I would take a look at sending data asynchronously (client-side) with Pusher (or something home-grown) or using pagination to break up large sets of results into smaller subsets.
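As an illustration of the pagination idea, here is a generic Python sketch (not CakePHP-specific; `items` stands in for a database query result):

```python
# Generic pagination sketch: return one page of a large result set plus paging metadata.
# `items` stands in for a database query result; page numbers are 1-based.
def paginate(items, page=1, per_page=25):
    total = len(items)
    start = (page - 1) * per_page
    end = start + per_page
    return {
        "page": page,
        "per_page": per_page,
        "total": total,
        "pages": (total + per_page - 1) // per_page,
        "results": items[start:end],
    }

# Usage: each request carries one small page instead of the whole result set.
scores = [{"match_id": i, "score": i % 5} for i in range(1000)]
print(paginate(scores, page=3)["results"][:2])
```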
You can use something like a Real User Monitoring (RUM) tool, such as Pingometer, to track performance for users. It'll show what, if anything, takes time to load: network (connectivity, encryption, etc.), application code (controllers), DOM (JavaScript manipulation), or page rendering (images, CSS, etc.).

Reasonable web server performance

I'm currently running some performance tests to see how many requests per second a newly developed web back-end can handle.
However, I have absolutely no idea how many requests per second I should expect the web server to handle (10? 100? 1000?).
I'm currently testing on a modest 1 GB, 1-core virtual machine. What would be a reasonable minimum number of requests/second such a server should be able to handle?
I think the right question to ask yourself is: what performance goals do I want my application to meet when X requests are being handled?
Remember that a good performance test is always oriented toward achieving realistic and well-defined performance goals.
These goals are usually set by the performance team and the customers/stakeholders.
There are many variables to this question:
What web server software are you using (Apache, nginx, IIS, lighttpd, etc)? This affects the lookup latency and how many simultaneous requests can be handled.
What language is your system logic written in (PHP, Ruby, C, etc)? Affects memory usage and base speed of execution.
Does your system rely on any external services (databases, remote services, message queues, etc)? I/O latency.
How is your server connected to the outside world (dedicated line, dial-up modem (!), etc.)? Network latency.
One way to approach this is to first discover how many requests your web server can serve up in optimal conditions, e.g. serving a single static HTML page of 1 byte with minimal HTTP headers. This will test the web server's fundamental receive-retrieve-serve cycle and give you a good idea of its maximum throughput (handled requests per second).
Once you have this figure, serve up your web application and benchmark again. The difference in requests per second gives you a general idea of how optimal (or sub-optimal) your app is.
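A rough sketch of that comparison in Python (assuming the `requests` library; both URLs are placeholders you would point at a tiny static page and at an application endpoint on the server under test):

```python
# Rough throughput benchmark: hammer a URL with concurrent GETs and report requests/second.
# Run it first against a tiny static page, then against an application endpoint, and compare.
import time
import requests
from concurrent.futures import ThreadPoolExecutor

def benchmark(url, total_requests=500, concurrency=20):
    def fetch(_):
        return requests.get(url, timeout=30).status_code
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(fetch, range(total_requests)))
    elapsed = time.perf_counter() - start
    ok = sum(1 for s in statuses if s == 200)
    return ok / elapsed  # successful requests per second

if __name__ == "__main__":
    static_rps = benchmark("http://localhost/static.html")  # placeholder: tiny static page
    app_rps = benchmark("http://localhost/app/endpoint")    # placeholder: application endpoint
    print(f"static: {static_rps:.0f} req/s, app: {app_rps:.0f} req/s")
```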
Even the most modest of hardware can deliver thousands of responses given the right conditions.

Client rendering time

I know that in order to measure end-to-end response time for any application scenario, we need to compute: server time + network time + client time
While I know for sure that server and network time are impacted by load, I want to know if client time is impacted by load too.
If client rendering time isn't impacted by load, would it be appropriate to do a test with 100 users and measure server time with the help of a performance testing tool (like HP LoadRunner, JMeter, etc.), then measure client rendering time with a single user, and finally present end-to-end time by adding the client time to the server time?
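For instance (a toy Python sketch with made-up numbers, just to illustrate the proposed composition; it only makes sense if client rendering time really is independent of load):

```python
# Toy sketch: compose end-to-end time from separately measured components (all in ms).
# The figures below are illustrative, not measurements.
server_time_under_load = 850  # e.g. 90th percentile from a 100-user LoadRunner/JMeter run
network_time = 120            # network transfer time between client and server
client_render_time = 400      # measured once with a single user in a real browser

end_to_end = server_time_under_load + network_time + client_render_time
print(f"estimated end-to-end response time: {end_to_end} ms")
```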
Any views on this will be appreciated.
Regards,
What you are describing is a very old concept, termed a GUI Virtual User. LoadRunner, and other classical tools such as SilkPerformer, QALoad and Rational Performance Tester, have always had the ability to run one or two graphical virtual users, created with the functional automation test tools from the vendor in question, to address the question of the user "weight" of the GUI.
This capability went out of vogue for a while with the advent of the thin client web, but now that web clients are growing in thickness with complex client side code this question is being asked more often.
Don't worry about actual "rendering time," the time taken to draw the screen elements, since you cannot control that anyway. It will vary from workstation to workstation depending upon what is running on the host and most development shops don't have a reconciliation path to Microsoft, Mozilla, Opera, Google or Apple to ask them to tune up the rendering on their browsers if someone finds a problem in the actual rendering engine of the browser.

What differentiates virtual users from real users when performing a load test?

Can anyone point out the difference between a virtual user and a real user?
In the context of web load testing, there are a lot of differences. A virtual user is a simulation of human using a browser to perform some actions on a website. One company offers what they call "real browser users", but they, too, are simulations - just at a different layer (browser vs HTTP). I'm going to assume you are using "real users" to refer to humans.
Using humans to conduct a load test has a few advantages, but is fraught with difficulties. The primary advantage is that there are real humans using real browsers - which means that, if they are following the scripts precisely, there is virtually no difference between a simulation and real traffic. The list of difficulties, however, is long: First, it is expensive. The process does not scale well beyond a few dozen users in a limited number of locations. Humans may not follow the script precisely...and you may not be able to tell if they did. The test is likely not perfectly repeatable. It is difficult to collect, integrate and analyze metrics from real browsers. I could go on...
Testing tools which use virtual users to simulate real users do not have any of those disadvantages, as they are engineered for this task. However, depending on the tool, they may not perform a perfect simulation. Most load testing tools work at the HTTP layer, simulating the HTTP messages passed between the browser and server. If the simulation of these messages is perfect, then the server cannot tell the difference between real and simulated users, and thus the test results are more valid. The more complex the application is, particularly in the use of JavaScript/AJAX, the harder it is to make a perfect simulation. The capabilities of tools in this regard vary widely.
There is a small group of testing tools that actually run real browsers and simulate the user by pushing simulated mouse and keyboard events to the browser. These tools are more likely to simulate the HTTP messages perfectly, but they have their own set of problems. Most are limited to working with only a single browser (e.g. Firefox). It can be hard to get good metrics out of real browsers. This approach is far more scalable than using humans, but not nearly as scalable as HTTP-layer simulation. For sites that need to test <10k users, though, the web-based solutions using this approach can provide the required capacity.
There is a difference.
It depends on your JMeter setup. If you are testing from a single box, your I/O is limited; you can't imitate, let's say, 10K users with JMeter on a single box. You can do small tests with one box. If you use multiple JMeter boxes, that's another story.
Also, what about cookies: do you store cookies while load testing your app? That does make a difference.
A virtual user is an automatic emulation of a real user's browser and HTTP requests.
Thus the virtual user is designed to simulate a real user. It is also possible to configure virtual users to run through what we think a real user would do, but without all the delay between getting a page and submitting a new one.
This allows us to simulate a much higher load on our server.
The key differences between virtual user simulations and real users are the network between the server and their device, as well as the actual actions a real user performs on the website.
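As an illustration, an HTTP-layer virtual user is essentially a scripted sequence of requests with configurable think time; here is a minimal Python sketch (the URLs are placeholders, and a real tool such as JMeter or LoadRunner adds pacing, assertions and reporting on top of this):

```python
# Minimal HTTP-layer "virtual user": a scripted sequence of requests with optional think time.
# URLs are placeholders; real tools add cookie handling, assertions, pacing and reporting.
import time
import threading
import requests

SCRIPT = [
    "http://localhost/login",
    "http://localhost/search?q=test",
    "http://localhost/checkout",
]

def virtual_user(user_id, think_time=0.0):
    session = requests.Session()   # keeps cookies across requests, like a real browser would
    for url in SCRIPT:
        session.get(url, timeout=30)
        time.sleep(think_time)     # 0 = no think time, i.e. maximum pressure on the server

if __name__ == "__main__":
    threads = [
        threading.Thread(target=virtual_user, args=(i,), kwargs={"think_time": 0.5})
        for i in range(50)         # 50 concurrent virtual users
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Setting the think time to zero is what lets a handful of virtual users generate far more pressure on the server than the same number of real users.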

How do you achieve acceptable performance metrics for your web application?

What analysis do you currently perform to achieve performance metrics that are acceptable? Metrics such as page weight, response time, etc. What are the acceptable metrics that are currently recommended?
This is performance related, so 'it depends' :)
Do you have existing metrics (an existing application) to compare against? Do you have users that are complaining - can you figure out why?
The other major factor will depend on what network sits between the application and the users. On a LAN, page weight probably doesn't matter. On a very slow WAN, page size (especially with respect to TCP windowing) is going to dwarf the impact of server time.
As far as analysis:
1) Server response time, measured by a load test tool on the same network as the app
2) Client response time, as measured by a browser/client on either a real or simulated network
The workload for 1) follows the 80/20 rule in terms of transaction mix. For 2), I look at some subset of pages for a browser app and run empty cache and full cache cases to simulate new vs returning users.
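For the client-side measurement in 2), one option is to drive a real browser and read the Navigation Timing API; a rough sketch using Selenium (the URL is a placeholder, and a fresh browser profile stands in for the empty-cache, new-user case):

```python
# Sketch: measure client-side page load time with an empty cache vs a primed cache,
# using Selenium and the browser's Navigation Timing API. The URL is a placeholder.
from selenium import webdriver

URL = "http://localhost/app/page"

def load_time_ms(driver, url):
    driver.get(url)
    return driver.execute_script(
        "var t = window.performance.timing;"
        "return t.loadEventEnd - t.navigationStart;")

driver = webdriver.Firefox()               # fresh profile => empty cache on the first load
try:
    first_view = load_time_ms(driver, URL)     # new-user case (empty cache)
    repeat_view = load_time_ms(driver, URL)    # returning-user case (cache primed)
    print(f"empty cache: {first_view} ms, primed cache: {repeat_view} ms")
finally:
    driver.quit()
```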
Use webpagetest.org to get waterfall charts.
Do RUM (Real User Monitoring) using the Google Analytics snippet with Site Speed JavaScript enabled.
Those are the bare minimum to do.
