JMeter Result Analysis and Trend plotting

Current Situation:
I am using JMeter for the performance regression testing of my application. The scripts are prepared and are executed every night.
I am also using JMeter plugins to capture PerfMon stats and JMX stats during the test. The response time stats, PerfMon stats and JMX stats are all stored in files in CSV format.
Problem Statement:
Q1: The daily analysis of the results is a tedious task. We also want to plot the daily trends of response time and server metrics and share them with a larger group. Do you have any suggestions on available tools (open source/free preferred) that can help us plot daily trends for response time and server metrics?
If we have to develop our own tool then...
Q2: While plotting the trend, what is the best way to convey the regression status with the minimum number of graphs? Our suite has more than 200 samplers and is growing every month. Plotting the daily trends for 200+ samplers in one graph is very confusing for the end audience. Can you suggest a way to get a single number to plot as the daily trend?

I would recommend going for Jenkins. With the Performance Plugin it can:
- execute JMeter tests on demand or automatically, based on many possible triggers
- plot the performance trend based on previous execution results
- conditionally fail the build if, for example, response time exceeds a certain threshold
and many more. See the Continuous Integration 101: How to Run JMeter With Jenkins article for a more detailed explanation of Jenkins, Performance Plugin and JMeter installation and configuration.
Another possible solution could be using JChav - JMeter Chart History and Visualization.
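For Q2 specifically, one common approach (an illustration, not something the answers above prescribe) is to collapse each nightly run into a single aggregate statistic, for example the overall 90th percentile response time across all samplers, and plot that one number per day; an Apdex-style score works the same way. Below is a minimal sketch of that idea, assuming the nightly results are saved as CSV .jtl files with JMeter's default elapsed column; the file-name pattern and the chosen percentile are illustrative only.

    # A minimal sketch: collapse one nightly JMeter CSV result file into a single
    # number, here the overall 90th percentile response time, so the daily trend
    # is one line instead of 200+.
    # Assumes the default "elapsed" column of a CSV .jtl; adjust the column name
    # and the results_*.jtl file-name pattern to match your setup.
    import csv
    import glob
    import math

    def percentile(values, pct):
        # Nearest-rank percentile of a list of numbers.
        ordered = sorted(values)
        k = max(0, math.ceil(pct / 100.0 * len(ordered)) - 1)
        return ordered[k]

    def run_summary(jtl_path, pct=90):
        # Reduce one nightly result file to a single response-time figure (ms).
        with open(jtl_path, newline="") as f:
            elapsed = [int(row["elapsed"]) for row in csv.DictReader(f)]
        return percentile(elapsed, pct)

    # One data point per nightly run, ready to plot as a single trend line.
    for path in sorted(glob.glob("results_*.jtl")):
        print(path, run_summary(path))

If one number hides too much, the same loop can emit a second series per run, for example the error rate, without adding 200 lines to the chart.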

Related

How to collect traffic data and macroscopic statistics in Veins?

Hello StackEx community.
I am implementing certain scenarios in Veins 3.0 and I wish to collect certain traffic statistics, such as the average waiting time and the average energy consumption, from my simulation.
Please help on how to generate and interpret this information.
Thanks
TraCIMobility already records some statistics that you can directly use or build on. See, for example, totalCO2Emission. Other statistics you might have to implement yourself, e.g., after detecting that a car has been stopped for a certain time. See the OMNeT++ user manual pages on result recording and analysis for general information on how to do that, and the Tic Toc tutorial for a concrete example.

How to avoid network latency in performance testing

We have servers installed in Las Vegas, and we currently need to run performance tests with JMeter from our San Francisco office. I am pretty sure that doing so will add network latency to the response times. Do you have any idea how we can avoid that?
You can't avoid network latency, but you can at least minimize its impact on your test results.
Just place your load generator instances (JMeter servers) as close as possible to the testing target. Ideally they should be in the same data center (take a look at Amazon EC2 instances, for instance).
In that case latency will not have a huge effect on your performance results, since it will be relatively small.
But remember that network latency is an everyday part of any network communication, and you have to take it into account as well. It can have a major effect on your system in production, especially for users who are not located close to your data centers.
Actually, JMeter stores Latency separately, and as per The Load Reports guide:
The response time that is required to receive a response from the server is the sum of the response time + latency.
In a JMeter .jtl result file opened in a spreadsheet, the elapsed time and the Latency are in separate columns.
So a very simple formula like =B2-L2 will help you determine the response time without the Latency metric; however, this isn't something that is normally done, as latency matters.
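The same =B2-L2 subtraction can also be scripted when the results never pass through a spreadsheet. A small sketch, assuming a CSV .jtl with JMeter's default label, elapsed and Latency columns; adjust the names if your save-service configuration differs:

    # Subtract the Latency column from the elapsed column of a JMeter CSV (.jtl)
    # result file - the scripted equivalent of the =B2-L2 spreadsheet formula.
    # The file name and the default column names (label, elapsed, Latency) are
    # assumptions; change them to match your results configuration.
    import csv

    with open("results.jtl", newline="") as f:
        for row in csv.DictReader(f):
            without_latency = int(row["elapsed"]) - int(row["Latency"])
            print(row["label"], without_latency)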

I want to find out the percentage of HTTPS requests that take less than a second in JMeter

I have written a JMeter test plan firing HTTPS requests at a web service. I need to know the percentage of calls that take less than a second. Does anyone know how to do this?
There are two plugins/listeners that will generate this directly within the JMeter UI, or write the data to your result files if you are doing a distributed test. Both are from the Extras set.
Response Times Percentiles
The test I am using is simple and the response time is in ms. The Y-axis is milliseconds, and the X-axis is the percentage of requests responding within that time.
Response Times Distribution
Same concept, except the Y-axis is the number of requests responding in that bucket and the X-axis is the response time in milliseconds. The size of the distribution buckets is configurable.
Plugin installation is described here; it is as simple as copying files into your JMeter installation.
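If a single figure is enough rather than a graph, the same question can also be answered straight from the result file. A minimal sketch, assuming a CSV .jtl with the default elapsed column in milliseconds; the file name and the one-second threshold are placeholders:

    # Count the share of samples that completed in under one second, reading the
    # default "elapsed" column (milliseconds) of a CSV .jtl result file.
    import csv

    def pct_under(jtl_path, threshold_ms=1000):
        total = fast = 0
        with open(jtl_path, newline="") as f:
            for row in csv.DictReader(f):
                total += 1
                if int(row["elapsed"]) < threshold_ms:
                    fast += 1
        return 100.0 * fast / total if total else 0.0

    print("%.1f%% of requests took less than a second" % pct_under("results.jtl"))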

Performance Counters - Tool for monitoring in Windows Server 2008

I am able to get performance counters every two seconds on a Windows Server 2008 machine using a PowerShell script. But when I go to Task Manager and check the CPU usage, powershell.exe is taking 50% of the CPU. So I am trying to get those performance counters using other third-party tools. I have searched and found this and this. Those two need to be refreshed manually and do not collect counters automatically every two seconds. Can anyone please suggest a tool which collects the performance counters every two seconds, analyzes the maximum and average of those counters, and stores the results in text/xls or any other format? Please help me.
I found some Performance tools from here, listed below:
Apache JMeter
NeoLoad
LoadRunner
LoadUI
WebLOAD
WAPT
Loadster
LoadImpact
Rational Performance Tester
Testing Anywhere
OpenSTA
QEngine (ManageEngine)
Loadstorm
CloudTest
Httperf.
There are a number of tools that do this -- Google for "server monitor". Off the top of my head:
PA Server Monitor
Tembria FrameFlow
ManageEngine
SolarWinds Orion
GFI Max Nagios
SiteScope. This tool leverages either the perfmon API or the SNMP interface to collect the stats without having to run an additional non-native app on the box. If you go the open source route then you might consider Hyperic. Hyperic does require an agent to be on the box.
In either case I would look at your sample interval as part of the culprit for the high CPU, rather than PowerShell itself. The higher your sample rate, the higher you will drive the CPU, independent of the tool. You can see this yourself just by running perfmon: use the same set of stats and watch what happens to the CPU as you adjust the sample interval from once every 30 seconds, to once in 20, then ten, 5 and finally 2 seconds. When engaged in performance testing we rarely go below ten seconds on a host, as this will cause the sampling tool to distort the performance of the host. If we have a particularly long-term test, say 24 hours, then adjusting the interval to once in 30 seconds will be enough to spot long-term trends in resource utilization.
If you are looking to collect information over a long period of time, 12 hours or more, consider going to a longer interval. If you are going for a short period of sampling, an hour for instance, you may want to run a couple of one-hour periods at lesser and greater levels of sampling (2 seconds vs 10 seconds) to ensure that the shorter sample interval is generating additional value for the additional overhead on the system.
To repeat, tools just to collect OS stats:
- Commercial: SiteScope (agentless); leverages native interfaces
- Open source: Hyperic (agent)
- Native: Perfmon; can dump data to a file for further analysis
This should be possible without third party tools. You should be able to collect the data using Windows Performance Monitor (see Creating Data Collector Sets) and then translate that data to a custom format using Tracerpt.
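For the "analyze the maximum and average of those counters and store the results" part of the question, here is a rough sketch. It assumes the Data Collector Set output has been converted to CSV (for example with Tracerpt as mentioned above, or with relog), with a timestamp in the first column and one counter per remaining column; the exact export layout is an assumption, so adjust the parsing to match your file.

    # Compute the average and maximum of each counter column in a CSV export of
    # a performance counter log, and write them to a plain-text summary file.
    # Assumes column 0 is the timestamp and every other column is one counter;
    # empty samples are treated as 0, which is a simplification.
    import csv

    def summarize(counter_csv, out_path="summary.txt"):
        with open(counter_csv, newline="") as f:
            reader = csv.reader(f)
            counters = next(reader)[1:]              # skip the timestamp column
            sums = [0.0] * len(counters)
            maxima = [float("-inf")] * len(counters)
            rows = 0
            for row in reader:
                rows += 1
                for i, cell in enumerate(row[1:]):
                    value = float(cell) if cell.strip() else 0.0
                    sums[i] += value
                    maxima[i] = max(maxima[i], value)
        with open(out_path, "w") as out:
            for name, total, high in zip(counters, sums, maxima):
                avg = total / rows if rows else 0.0
                out.write("%s  avg=%.2f  max=%.2f\n" % (name, avg, high))

    summarize("counters.csv")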
If you are still looking for other tools, I have compiled a list of Windows Server performance monitoring tools that also includes third-party solutions.

Performance Testing fundamentals

I have some basic questions around understanding the fundamentals of performance testing. I know that under various circumstances we might want to do
- Stress Testing
- Endurance Testing etc.
But my main objective here is to ensure that the application's response time is decent under a load that is toward the higher end, or at least above average.
My questions are as follows:
When you start to plan the expected response time of your application, what do you consider, if that is the first step at all? I mean, I have a web application now. Do I just pull a figure out of the air and say "I would expect the application to take 3 seconds to respond to each request", and then go about figuring out what my application is lacking to reach that response time?
OR is it the other way round: you start the performance test with a given set of hardware and say, let's see what response time I get now, then look at the results and say, well, it's 8 seconds right now, I'd like it to be 3 seconds at most, so let's see how we can optimize it down to 3 seconds? But again, is that 3 seconds out of the air? I am sure that scaling up machines alone will not improve response time. It will improve response time only when a single machine/server is under load and you start clustering?
Now, for one single user I have a response time of 3 seconds, but as the load increases it degrades exponentially; so where do I draw the line between "I need to optimize the code further" (which has its upper limit) and "I need to scale up my servers" (which has a limit too)?
What are the best free tools for performance and load testing? I have used JMeter a bit. But is there anything else that is good and open source?
If I have to optimize code, do I start by profiling the specific flows that take a lot of time responding to requests?
Basically I'd like to see how one goes about doing performance testing for their application end to end. Any links or articles would be very helpful.
Thanks.
The Performance Testing Council is your gateway to freely exchange experiences, knowledge, and practice of performance testing.
Also read Microsoft Patterns & Practices for Performance Testing. This guide shows you an end-to-end approach to implementing performance testing.
phoenix mentioned the Open Source tools.
First of all you can read:
- Best Practices for Speeding Up Your Web Site
For tools:
- Open source performance testing tools
- performance: tools
This link and this show an example of, and a method for, performance tuning an application when the application does not have any obvious "bottlenecks". It works most intuitively on individual threads. I have no experience using it on web applications, although other people do. I agree that profiling is not easy, but I've always relied on this technique, and I think it is pretty easy and effective.
First of all, design your application properly.
Use a profiler, see where the bottlenecks in your application are, and remove them if possible. MEASURE performance before improving it.
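As a purely illustrative aside on the "measure before improving" point (the language is incidental here), a first profiling pass can be as simple as wrapping the suspect flow with the standard-library profiler; slow_handler below is only a hypothetical stand-in for whichever code path you suspect:

    # Profile a suspect code path before optimizing anything, so the effort goes
    # where the time is actually spent. slow_handler is a hypothetical stand-in.
    import cProfile
    import pstats
    import time

    def slow_handler():
        time.sleep(0.05)    # pretend this is a slow DB call or template render
        return sum(i * i for i in range(100000))

    profiler = cProfile.Profile()
    profiler.enable()
    for _ in range(10):
        slow_handler()
    profiler.disable()

    # Show the ten most expensive calls by cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)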
I will try to provide a basic step-by-step guide that can be used for implementing performance testing in your project.
1 - Before you start testing, you should know the amount of physical memory, the amount of memory allocated to the JVM (or whatever runtime you use), the DB size, and so on; collect as many metrics as possible for your current environment. Know your environment.
2 - The next step would be to identify the typical production DB size and its expected yearly growth. You will want to test how your application will behave after one year, two years, five years, etc.
3 - Automate environment setup; this will help you a lot in the future with regression testing and defect-fix validation. You will need DB dumps for your tests with the current (baseline), one-year and five-year volumes.
4 - Once you're done gathering basic information, think about monitoring your servers under load. Maybe you already have a monitoring solution like http://newrelic.com/; this will help you identify the cause of performance degradation (CPU, memory, number of threads, etc.). Some performance testing tools have built-in monitoring.
At this point you are ready to move on to tooling and load selection; materials on how to do that have already been provided, so I will skip the workload-selection part.
5 - Select a tool. I think that JMeter + http://blazemeter.com/ is what you need at this point; both have a lot of nice articles and educational materials. For script recording I would recommend using BlazeMeter's Chrome Extension instead of JMeter's built-in solution. If you still think that you lack knowledge of how things are done in JMeter, I recommend getting the book Performance Testing With JMeter 2.9 by Bayo Erinle.
6 - Analyze the results, review the test plan and take the corresponding actions.
