Huge differences between JMeter start time and web service entry point start time on OpenShift - jmeter

I am doing a load test with JMeter to measure performance. My web services are deployed on OpenShift.
I compared the JMeter start time with the start time recorded at the web service entry point for the same transaction, and there is a difference of several milliseconds (sometimes more than 1 second).
Can you please let me know what could be the reason?

I don't know what you mean by "start time".
JMeter acts as follows:
Sends requests
Waits for the response
Measures the time in-between
If your JMeter instance is in a different geographic location than the OpenShift instance, it takes time for the request to travel around the globe (see the Connect Time and Latency metrics): if you're in Japan and the server is in France, the request has to pass through hundreds of routers.
So if you want to get closer results you need to deploy JMeter in a closer geographic location, and preferably on OpenShift as well.
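To see how much of the elapsed time is spent on the network, a minimal sketch like the one below can pull the Connect and Latency columns out of a CSV results file. It assumes the default JMeter CSV column names; the file name results.jtl is just a placeholder, and depending on your JMeter version you may need jmeter.save.saveservice.connect_time=true in user.properties for the Connect column to be written.
```python
# Minimal sketch: summarise Connect Time and Latency from a JMeter CSV results file.
# Assumes the default column names ("elapsed", "Latency", "Connect").
import csv

def summarise(path):
    elapsed, latency, connect = [], [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            elapsed.append(int(row["elapsed"]))
            latency.append(int(row["Latency"]))
            connect.append(int(row.get("Connect", 0)))
    n = len(elapsed)
    print(f"samples: {n}")
    print(f"avg elapsed: {sum(elapsed) / n:.0f} ms")
    print(f"avg latency: {sum(latency) / n:.0f} ms")
    print(f"avg connect: {sum(connect) / n:.0f} ms")

summarise("results.jtl")   # placeholder path to your results file
```
If the connect and latency averages account for most of the elapsed time, the distance between the load generator and the server is the dominant factor.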
More information:
JMeter Glossary
Understanding Your Reports: Part 3 - Key Statistics Performance Testers Need to Understand

Related

Difference in load test results

What can be the reason for the difference in results of a load test run at different times with the SAME bandwidth?
If I run the load test at midnight the response times are better, and during the day they are really bad. Thanks for your help.
Maybe during the day the application is being used by real users and your artificial load is being added to the natural load?
Another option is that the network is busier during the day, so the channel bandwidth is fully utilized.
The load testing tool's own metrics don't tell the full story; you can only make assumptions by looking at the TCP connect time metric.
If you have an APM system in place you can assess what's going on with the system during the daytime and at night and detect the factors that are impacting the response time. If you don't, you can set up your own, e.g. using the JMeter PerfMon Plugin.
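As a rough, script-based complement to an APM, you could compare a midnight run against a daytime run on connect time and overall elapsed time. The sketch below assumes JMeter CSV output with the default column names; the file names midnight.jtl and daytime.jtl are placeholders.
```python
# Sketch: compare a midnight run and a daytime run on connect time and elapsed time.
import csv

def averages(path):
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    n = len(rows)
    avg_elapsed = sum(int(r["elapsed"]) for r in rows) / n
    avg_connect = sum(int(r.get("Connect", 0)) for r in rows) / n
    return avg_elapsed, avg_connect

for label, path in (("midnight", "midnight.jtl"), ("daytime", "daytime.jtl")):
    avg_elapsed, avg_connect = averages(path)
    print(f"{label}: avg elapsed {avg_elapsed:.0f} ms, avg connect {avg_connect:.0f} ms")
```
If connect time grows during the day while the rest of the elapsed time stays flat, the network is the more likely factor; if both grow, the server is probably also under extra load from real users.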
Adding to Dmitri's note, there could be multiple reasons/causes for the difference in results.
As Dmitri pointed out, check your APM tool to see the server health while the test is executing.
Do you integrate with any downstream applications? Do these applications reside in a stable, dedicated performance testing environment, or are they live production environments? If it is the latter, then you should expect extra latency in responses during the day.
Authentication / token validation - gateways are usually configured to validate the incoming bearer token. When you execute in the morning, your gateway could be busy serving other real users' requests (assuming these are production AD / Okta / PingID servers).

Interpreting JMeter Results

I have been running some load tests against APIs using JMeter; the results are below:
I am trying to understand what could be causing the two different patterns of slow behaviour I am seeing:
Pattern 1: Time to connect is low, Latency is high
Pattern 2: Time to connect is high, Latency is low
*Note: the majority of calls are returning in around 45-50ms.
My current thoughts are as follows:
Pattern 1: This is "server processing time", so for some reason the back-end server is taking longer than usual to respond. We will need to do a deeper dive to figure out why.
Pattern 2: This pattern shows a long time to establish a TCP connection. Is there a way to determine whether this is a problem on the outgoing side, i.e. JMeter itself is running out of threads to make API connections, or whether the API server is running out of connections and is unable to accept more?
How should I interpret these results? Are there any additional data points I could pull or tools I could use to better understand the findings?
Both Connect Time and Latency are network-related metrics; the formula is:
Elapsed Time = Connect Time + Latency + Server Response time
It looks like your server itself is not the problem; the issue is either at the network level or connected with JMeter, which might lack the resources to send requests fast enough.
With regards to additional information sources:
Generate the HTML Reporting Dashboard and look into the "Over Time" charts. They should allow you to correlate increasing load with increasing connect time / latency (a rough script-based equivalent is sketched after this list).
Consider setting up monitoring of the essential health metrics of the JMeter load generator(s) and the application under test. You can use the JMeter PerfMon Plugin for this.
Make sure to follow JMeter Best Practices, as the default JMeter configuration is suitable for test development and debugging, and you need to fine-tune JMeter for high loads.
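As a rough equivalent of the dashboard's "Over Time" charts, the sketch below buckets samples by minute and prints the average connect time, latency and active thread count per bucket, so you can see whether the network metrics climb together with the load. It assumes JMeter CSV output with the default timeStamp, Connect, Latency and allThreads columns; results.jtl is a placeholder name.
```python
# Sketch: per-minute averages of connect time, latency and active threads
# from a JMeter CSV results file, to correlate load with network metrics.
import csv
from collections import defaultdict

buckets = defaultdict(list)
with open("results.jtl", newline="") as f:          # placeholder path
    for row in csv.DictReader(f):
        minute = int(row["timeStamp"]) // 60000     # timeStamp is epoch milliseconds
        buckets[minute].append(row)

def avg(rows, col):
    return sum(int(r[col]) for r in rows) / len(rows)

for minute in sorted(buckets):
    rows = buckets[minute]
    print(f"minute {minute}: threads={avg(rows, 'allThreads'):.0f}  "
          f"connect={avg(rows, 'Connect'):.0f} ms  latency={avg(rows, 'Latency'):.0f} ms")
```
If connect time and latency climb as the thread count climbs, that points, per the formula above, at the network or the load generator rather than the server's processing time.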

JMeter response time seems high compared to New Relic response time

The scenario is as follows:
We deploy the build in a Cloud Foundry container (IaaS) along with New Relic binding services. This is hosted in Asia Southeast.
My JMeter instance resides in the same region, but it runs on Amazon EC2 (Asia Southeast).
When I run JMeter, the response time looks higher compared to my New Relic app response time. Why is there sometimes so much variation? Is it due to latency? How do I explain this to my client when they compare the New Relic and JMeter results? I am sure both are correct and need to find the root cause.
Please help.
JMeter response time includes network latency, which we cannot avoid. So it might be because of latency. If it is a huge difference, try to run the test from a machine which is very close to the app server (same data center) and see if it helps to minimize the latency.
What is the maximum number of users you are trying to simulate? What are the CPU and memory utilization of the load generator? Usually I would keep them below 80%.
Ensure that your results satisfy Little's Law! Check the links below for more details. If they do not, you are trying to simulate more load than your load generator can handle; go for distributed load testing in that case (a quick sanity check is sketched after the links).
http://www.testautomationguru.com/jmeter-performance-testing-application-of-littles-law-to-workload-models/
http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker-rancheros-in-cloud/
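As a quick illustration of that check: Little's Law for a closed workload says concurrent users = throughput x (response time + think time). The numbers in the sketch below are made-up examples; plug in the figures from your own run.
```python
# Rough Little's Law sanity check: users = throughput * (response time + think time).
# All values are example numbers, not measurements.
throughput = 50.0        # requests per second achieved in the test
response_time = 0.3      # average response time in seconds
think_time = 2.0         # think time per iteration in seconds

expected_users = throughput * (response_time + think_time)
print(f"Expected concurrent users for this throughput: {expected_users:.0f}")

# If the thread count configured in the JMeter test plan is much higher than this
# figure, the achieved throughput does not match the offered load, which suggests
# the load generator is not keeping up; distributed load testing is worth trying then.
```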

JMeter - How to measure exact response time when the distance between client and server is very large

I'm a beginner with JMeter and I have an issue with it: I run JMeter in Vietnam, test a server in the US, and use "View Results in Table" to view the results. In this report, I want to know how "Sample Time" is calculated. Is it the time the server responded, or the time at which the client received the response? And what is the effect if the distance between client and server is very large?
The sample timestamp will be in Vietnam time (the local time of the machine running JMeter).
But you can configure your JMeter instance to use a US timezone through the system property:
-Duser.timezone=
See:
Force Java timezone as GMT/UTC
Regarding response time, it will include the latency due to you being far from the US, but it will reflect what Vietnamese users will experience.
So if your requirement is to measure the US experience, you will need to load test from a US server; if your requirement is to measure the Vietnamese experience of a US-hosted application, then it's fine.

Sample time (ms) is different from the LoadRunner response time for the same request. Why is this?

We recorded a request for loading a website page in JMeter, excluding all the static content files (CSS, JS, etc.). When we replayed the script, the sample time (taking it to be the response time) came out at around 5000 ms.
We recorded the same request in LoadRunner and the response time came out at around 300 ms. When we checked the response time for the request through HTTPFox, it was also around 300 ms.
My question is why there is such a drastic difference between the response times measured by the two tools. Am I going wrong in how I calculate response time in JMeter, or is there another way to calculate response time in JMeter?
I can see several reasons why this can happen (a quick way to check the first two in your test plan is sketched after this list):
JMeter is configured to "Redirect automatically" or to "Follow Redirects"
JMeter is configured to "Download embedded resources"
Your system under test demonstrates high latency (the amount of time for the request to reach the server; JMeter reports the overall response time as latency + actual response time, see The Load Reports guide for an explanation of the metrics)
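As a quick way to check the first two items, the sketch below scans a .jmx test plan for the redirect and embedded-resources flags. The property names (HTTPSampler.follow_redirects, HTTPSampler.auto_redirects, HTTPSampler.image_parser) are the ones recent JMeter versions write into the plan file, but verify them against your own .jmx; test_plan.jmx is a placeholder name.
```python
# Sketch: list the redirect / embedded-resources flags of every HTTP Request sampler
# in a JMeter test plan, since these settings can inflate the reported sample time.
import xml.etree.ElementTree as ET

FLAGS = (
    "HTTPSampler.follow_redirects",   # "Follow Redirects"
    "HTTPSampler.auto_redirects",     # "Redirect Automatically"
    "HTTPSampler.image_parser",       # "Retrieve All Embedded Resources"
)

tree = ET.parse("test_plan.jmx")      # placeholder path to your test plan
for sampler in tree.iter("HTTPSamplerProxy"):
    name = sampler.get("testname", "unnamed sampler")
    for prop in sampler.iter("boolProp"):
        if prop.get("name") in FLAGS:
            print(f"{name}: {prop.get('name')} = {prop.text}")
```
If any of these flags are true in JMeter while the LoadRunner script only issues the single recorded request, the JMeter sample time will legitimately be much larger.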
There are lots of architectural differences which can contribute to this difference between the tools. Narrow your scope to one request, such as an image, and scale up your number of users in both tools to see what happens.
You also have test configuration items that can come into play, such as JMeter running monolithically on one host vs. LoadRunner running distributed among many generators, think time setting differences, number of users, etc. You could spend all day nailing down the jello of test settings and architecture.
But given that the LoadRunner time is closest to what is observable with a proxy and manual execution, what can you infer about the rest of your test data?
