Client rendering time

I know that in order to measure end-to-end response time for any application scenario, we need to compute: server time + network time + client time
While I know for sure that server and network time are impacted by load, I want to know whether client time is impacted by load too.
If client rendering time isn't impacted by load, would it be appropriate to run a test with 100 users and measure server time with a performance testing tool (such as HP LoadRunner or JMeter), measure client rendering time with a single user, and then present the end-to-end time by adding the client time to the server time?
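To make the proposal concrete, here is a tiny sketch of the arithmetic (Python, with made-up illustrative numbers):

```python
# Sketch of the proposed composition: server/network time measured under a
# 100-user load test, client rendering time measured separately with one user.
server_time_under_load = 1.8     # seconds, e.g. from LoadRunner/JMeter percentiles
network_time_under_load = 0.4    # seconds, from the same load test
client_render_single_user = 0.6  # seconds, measured once with a single user

end_to_end = server_time_under_load + network_time_under_load + client_render_single_user
print(f"Estimated end-to-end response time: {end_to_end:.1f}s")
# Caveat: this assumes client rendering time is independent of server load,
# which is exactly the assumption being questioned here.
```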
Any views on this will be appreciated.
Regards,

What you are describing is a very old concept, termed a GUI Virtual User. LoadRunner, and other classical tools such as SilkPerformer, QALoad and Rational Performance Tester, have always had the ability to run one or two graphical virtual users, created with the functional automation test tools from the vendor in question, to address the question of the user "weight" of the GUI.
This capability fell out of vogue for a while with the advent of the thin-client web, but now that web clients are growing in thickness with complex client-side code, this question is being asked more often.
Don't worry about actual "rendering time," the time taken to draw the screen elements, since you cannot control that anyway. It will vary from workstation to workstation depending upon what is running on the host, and most development shops don't have a reconciliation path to Microsoft, Mozilla, Opera, Google or Apple to ask them to tune up the rendering engine of their browsers if someone finds a problem there.

JMeter Mobile Native App Testing

I have two questions related to native app performance testing.
1) I have a payment app, and it comes with bank security which is installed at the time of app installation. It sends a token number and the rest of the data in encrypted format. Is it possible to handle this kind of request using JMeter or any other performance testing tool? Do I need to change some setting in the app server or in JMeter to get this done?
2) The mobile app uses a device ID, so if I simulate load from a cloud server, will it use the same device ID which I used while creating the script? Is it possible to simulate different device IDs to make it more realistic?
Any help or references will be appreciated. :)
(1) Yes. This is why performance testing tools are built around general-purpose programming languages: to allow you (as the tester) to apply your foundation skills in programming, along with the appropriate algorithms and libraries, to represent the same behavior as the client.
(2) This is why performance testing tools allow for parameterization of the data stream sent to the server/application under test.
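As a rough illustration of both points, here is a minimal Python sketch. The endpoint, field names, and signing scheme are hypothetical placeholders; the real logic would have to match the app's actual bank security component.

```python
# Minimal sketch: parameterized virtual users, each with its own device ID,
# sending a signed payload. Endpoint, field names, and the HMAC-style signing
# are assumptions standing in for the app's real encryption scheme.
import csv
import hashlib
import hmac
import json
import urllib.request

SHARED_KEY = b"replace-with-real-key"       # assumption: shared signing key
ENDPOINT = "https://test.example.com/pay"   # hypothetical app server endpoint

def sign(payload: bytes) -> str:
    """Stand-in for the app's real encryption/signing logic."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

with open("device_ids.csv") as f:           # one device ID per line
    for row in csv.reader(f):
        device_id = row[0]
        body = json.dumps({"deviceId": device_id, "amount": "10.00"}).encode()
        req = urllib.request.Request(
            ENDPOINT,
            data=body,
            headers={"X-Token": sign(body), "Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print(device_id, resp.status)
```

The same pattern maps onto JMeter's CSV Data Set Config (for the device IDs) and a preprocessor or custom sampler (for the signing step).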
I'm not an expert in JMeter, but I work a lot with LoadRunner (LR), the performance testing tool from HP. Though JMeter and LR are different tools, they work on the same principles and share the same performance testing objective.
As James Pulley mentioned, the performance testing tool may have the capability. But the question is: have you tried recording your app with JMeter? Since your app is native, please do the recording from a simulator/emulator and check the feasibility. JMeter might not be the right candidate for mobile app load testing.
Alternatively, there are a lot of other tools available in the market (both commercial and open source) for your objective.
Best Regards
With the rise of several mobile network technologies, load testing a mobile application has become a different ball game in comparison with normal web app load testing. This is because of the differences in response times that occur across different mobile networks such as 2G, 3G, 4G, etc. Additionally, the client, being a mobile device, has plenty of physical constraints such as limited CPU, RAM, internal storage, etc. All of these need to be considered while conducting performance testing of a mobile application if one wants to simulate a scenario close to real-world conditions.
Coming to your two questions:
1) Yes, it is possible, but the amount of manual effort needed to make the script execution-ready may vary (since you mention there is data in encrypted format: some encryption schemes are easy to understand and some are just crude and difficult to handle using JMeter). But there should not be any app server setting that would need to change (unless, of course, you are unable to handle the encryption with JMeter, in which case the encryption might have to be disabled for the QA phase).
2) As rightly said by James Pulley, these values can be parameterized. However, I fear that these values will be validated by the app server, and hence the values need to be fed into the requests appropriately.
You can refer to this link for how to do mobile performance testing of a native application: http://www.neotys.com/documents/doc/neoload/latest/en/html/#4234.htm#o4237. The same could be extrapolated to JMeter to an extent.

Parameters that affect performance of web app on different clients

I have built a web application in which I display a map, and I use both WMS and WFS requests in order to show the network (lines) and points of interest on the map. The application has several filters (such as date filters) for sending queries to the database.
The app runs on a remote server. The speed when accessing the app from my browser (Firefox or Chrome) is satisfactory. Everything runs quite smoothly.
My issue is that I get complaints related to speed and performance. So my question is: what does the performance depend on, and how is it possible that I have a great experience while others don't?
One hypothesis is that the computing power of the other clients is too low (which is not the case).
Another one is that the server doesn't have enough resources (which is also not the case).
What are other parameters that affect the performance of a web app?
Connection speed is a very important factor, and so is network response time (mostly a function of the distance between client and server).

Measure average web app response time from the client side during a long period of time

My company has over a hundred users of a specific CRM web application, which is provided as a service by another company to us.
The users of this application are very dissatisfied with its average response time, and I need to find a way to gather metrics over a certain period of time (let's say, a week) to prove to the service provider that they are really providing a bad service.
If the application were mine, I would get the metrics from New Relic or some other equivalent monitoring service, but since it is not, I'm looking for something that could do some sort of client side monitoring.
I already checked Page Speed from Google and YSlow from Yahoo, but both are only useful when you want to test the application for a few seconds. They are not meant for the long-term monitoring I need.
Would anybody know a way to get this kind of monitoring from a client side perspective?
LoadRunner is no charge for up to 50 users, but what you really need is not a test tool but a synthetic user monitor which runs every n minutes and pulls the stats. You can build it yourself using LoadRunner 12, JMeter, or any other HTTP sampling technology. You could also use a service like Gomez for sampling, or mPulse from SOASTA for tracking every page component across all users.
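A bare-bones version of such a synthetic monitor could look like the Python sketch below (the URL and interval are placeholders; a real monitor would also time embedded page resources, not just the base HTML):

```python
# Minimal synthetic-monitor sketch: sample a page every n minutes and append
# the response time to a CSV for later analysis.
import csv
import time
import urllib.request
from datetime import datetime, timezone

URL = "https://crm.example.com/login"  # hypothetical CRM page to sample
INTERVAL_SECONDS = 5 * 60              # sample every 5 minutes

with open("response_times.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(URL, timeout=60) as resp:
                resp.read()            # include the full body download in the timing
                status = resp.status
        except Exception as exc:       # record failures as data points too
            status = f"error: {exc}"
        elapsed = time.perf_counter() - start
        writer.writerow([datetime.now(timezone.utc).isoformat(), status, round(elapsed, 3)])
        f.flush()
        time.sleep(INTERVAL_SECONDS)
```

Left running for a week, the resulting CSV gives you the evidence trail you are after.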
Keep in mind that your browser's developer tools will time all of the components of the request to give you some page times, as will Dynatrace for the web client.
If you have access to the web server, then consider configuring the web server logs to capture the W3C time-taken field, which will track every request. Depending upon the server, the level of granularity can be down to the millionth of a second on each and every request.
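Once you have those logs, a short script can aggregate time-taken per URL. A minimal sketch, assuming an IIS-style W3C log with a #Fields directive (the log file name is hypothetical):

```python
# Compute request count and average time-taken per URL from a W3C extended log.
# Field positions are read from the "#Fields:" directive, so the script adapts
# to whatever columns the server was configured to log.
from collections import defaultdict

totals = defaultdict(lambda: [0, 0.0])   # url -> [request count, total time-taken]
fields = []

with open("u_ex250101.log") as log:       # hypothetical IIS log file name
    for line in log:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]     # column names follow the directive
            continue
        if line.startswith("#") or not fields:
            continue
        row = dict(zip(fields, line.split()))
        url = row.get("cs-uri-stem", "?")
        taken = float(row.get("time-taken", 0))  # milliseconds on IIS
        totals[url][0] += 1
        totals[url][1] += taken

for url, (count, total) in sorted(totals.items(), key=lambda kv: -kv[1][1]):
    print(f"{url}: {count} requests, avg {total / count:.1f} ms")
```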
You could also look at a service like LiteSquare which can process those web logs and provide ammunition for changes to the server to improve performance on a no-gain, no-charge model.
One (expensive) solution would be using LoadRunner's endurance test feature. Check here for a demonstration.
Another tool is Oracle OATS.
JMeter is a free tool, though I'm not sure if it's reliable enough to run for a whole week.
These are load generator tools, so if you are testing as a single client, you should carefully choose your load amount (e.g. one user).
Last but not least, you could create your own web service client and a cron job to run it at your specified times of day and log the access time.
If what you want is to get data from their server, that is impossible without hacking into it. All you can do is monitor the website as a client, using some of the above tools, make a report, and present that to them. But even so, they could challenge your bandwidth, your test method, etc.
I recommend that you negotiate with them to give you their logs and to prove that their system can support a certain amount of load. If you are a customer of theirs, you can file a complaint or test additional offers.
Dynatrace was already mentioned in combination with load testing. Since you said you want to monitor your live system, I want to bring Dynatrace up again. Most of the time it is used for live system monitoring to understand what end users are actually doing. It is also available as a 30-day trial, so there is no need to buy it; use it for your sanity check: http://bit.ly/dttrial

What differentiates virtual users from real users when performing a load test?

Can anyone point out the difference between a virtual user and a real user?
In the context of web load testing, there are a lot of differences. A virtual user is a simulation of a human using a browser to perform some actions on a website. One company offers what they call "real browser users," but they, too, are simulations - just at a different layer (browser vs. HTTP). I'm going to assume you are using "real users" to refer to humans.
Using humans to conduct a load test has a few advantages, but is fraught with difficulties. The primary advantage is that there are real humans using real browsers - which means that, if they are following the scripts precisely, there is virtually no difference between a simulation and real traffic. The list of difficulties, however, is long: First, it is expensive. The process does not scale well beyond a few dozen users in a limited number of locations. Humans may not follow the script precisely...and you may not be able to tell if they did. The test is likely not perfectly repeatable. It is difficult to collect, integrate and analyze metrics from real browsers. I could go on...
Testing tools which use virtual users to simulate real users do not have any of those disadvantages - as they are engineered for this task. However, depending on the tool, they may not perform a perfect simulation. Most load testing tools work at the HTTP layer - simulating the HTTP messages passed between the browser and server. If the simulation of these messages is perfect, then the server cannot tell the difference between real and simulated users...and thus the test results are more valid. The more complex the application is, particularly in the use of JavaScript/AJAX, the harder it is to make a perfect simulation. The capabilities of tools in this regard vary widely.
There is a small group of testing tools that actually run real browsers and simulate the user by pushing simulated mouse and keyboard events to the browser. These tools are more likely to simulate the HTTP messages perfectly, but they have their own set of problems. Most are limited to working with only a single browser (e.g. Firefox). It can be hard to get good metrics out of real browsers. This approach is far more scalable than using humans, but not nearly as scalable as HTTP-layer simulation. For sites that need to test <10k users, though, the web-based solutions using this approach can provide the required capacity.
There is a difference.
It depends on your JMeter setup: if you are testing from a single box, your I/O is limited. You can't imitate, let's say, 10K users with JMeter on a single box; you can do small tests with one box. If you use multiple JMeter boxes, that's another story.
Also, what about cookies? Do you store cookies while load testing your app? That does make a difference.
A virtual user is an automatic emulation of a real user's browser and HTTP requests.
Thus the virtual user is designed to simulate a real user. It is also possible to configure virtual users to run through what we think a real user would do, but without all the delay between getting a page and submitting a new one.
This allows us to simulate a much higher load on our server.
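As a rough illustration, a minimal HTTP-layer virtual user might look like the Python sketch below (the URLs are hypothetical; real tools add parameterization, metrics collection, and many concurrent users):

```python
# Minimal HTTP-layer virtual-user sketch: replay a scripted sequence of page
# requests, optionally skipping the human "think time" between pages to drive
# a much higher load on the server.
import random
import time
import urllib.request

SCRIPT = [                                  # the pages a real user would visit
    "https://shop.example.com/",
    "https://shop.example.com/search?q=widget",
    "https://shop.example.com/cart",
]

def virtual_user(think_time: bool) -> None:
    for url in SCRIPT:
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()
        print(f"{url} -> {time.perf_counter() - start:.3f}s")
        if think_time:
            time.sleep(random.uniform(3, 10))  # emulate a human reading the page

virtual_user(think_time=False)  # no think time = maximum request rate
```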
The real key differences between virtual user simulations and real users are the network between the server and their device, as well as the actual actions a real user performs on the website.

How do you achieve acceptable performance metrics for your web application?

What analysis do you currently perform to achieve performance metrics that are acceptable? Metrics such as page weight, response time, etc. What are the acceptable metrics that are currently recommended?
This is performance related, so 'it depends' :)
Do you have existing metrics (an existing application) to compare against? Do you have users that are complaining - can you figure out why?
The other major factor will depend on what network sits between the application and the users. On a LAN, page weight probably doesn't matter. On a very slow WAN, page size (especially with respect to TCP windowing) is going to dwarf the impact of server time.
As far as analysis:
1) Server response time, measured by a load test tool on the same network as the app
2) Client response time, as measured by a browser/client on either a real or simulated network
The workload for 1) follows the 80/20 rule in terms of transaction mix. For 2), I look at some subset of pages for a browser app and run empty-cache and full-cache cases to simulate new vs. returning users.
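A minimal sketch of that empty- vs. full-cache comparison, assuming the server emits an ETag and honors If-None-Match (the URL is a hypothetical placeholder, and this only times the base HTML, not page resources):

```python
# Compare a new user (empty cache, full download) against a returning user
# (primed cache, conditional GET that should return 304 Not Modified).
import time
import urllib.error
import urllib.request

URL = "https://app.example.com/dashboard"  # hypothetical page under test

def timed_get(etag=None):
    req = urllib.request.Request(URL)
    if etag:
        req.add_header("If-None-Match", etag)  # simulate a primed browser cache
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            body = resp.read()
            status, new_etag = resp.status, resp.headers.get("ETag")
    except urllib.error.HTTPError as err:      # 304 Not Modified raises here
        status, new_etag, body = err.code, None, b""
    return time.perf_counter() - start, status, new_etag, len(body)

empty_t, _, etag, size = timed_get()           # new user: full download
full_t, status, _, _ = timed_get(etag)         # returning user: conditional GET
print(f"empty cache: {empty_t:.3f}s ({size} bytes); full cache: {full_t:.3f}s (HTTP {status})")
```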
Use webpagetest.org to get waterfall charts.
Do RUM (Real User Monitoring), using the Google Analytics snippet with Site Speed JavaScript enabled.
These are the bare minimum to do.
