JMeter - How to measure exact response time if the distance between client and server is very far

I'm a beginner in JMeter and I have an issue with it: I run JMeter in Vietnam, test a server in the US, and use "View Results in Table" to view the results. In this report, how is "Sample Time" calculated? Is it the time the server takes to respond, or the time until the client receives the response? And how is it affected when the distance between client and server is very large?

The sample timestamp will be shown in Vietnam's local time.
But you can configure your JMeter instance to use a US timezone through the following system property:
-Duser.timezone=
See:
Force Java timezone as GMT/UTC
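For illustration, here is a minimal Java sketch of what that property changes for the JMeter JVM (the timezone ID America/New_York is an assumed example; substitute whichever US zone you need). On the command line, the equivalent is jmeter -Duser.timezone=America/New_York.

import java.util.Date;
import java.util.TimeZone;

public class TimezoneDemo {
    public static void main(String[] args) {
        // Forces the JVM default timezone, which is what -Duser.timezone does
        // for the JMeter JVM at startup.
        TimeZone.setDefault(TimeZone.getTimeZone("America/New_York"));
        // Timestamps rendered by the JVM (e.g. JMeter's "Start Time" column)
        // will now use this zone instead of the machine's local Vietnam zone.
        System.out.println(new Date());
    }
}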
Regarding response time, it will include the network latency caused by the distance between you and the US, but it will reflect what Vietnamese users actually experience.
So if your requirement is to measure what US users feel, you will need to load test from a US server; if your requirement is to measure what Vietnamese users feel on a US-hosted application, then the current setup is fine.

Related

When NewRelic starts to collect metrics

This might be an unusual question, but I have to be sure whether my suspicions are correct. In our company we use New Relic to monitor our applications. From time to time I check what New Relic says about an app developed by me, and I'm always wondering why the average response time is much lower than what I measure manually or with some external tools, e.g.:
the average response time of one endpoint is always somewhere around 130 ms in New Relic's metrics
when I test it manually it is about 230-250 ms
some tools used in our company, which can make lots of requests over some period of time, also claim that the average response time is about 200 ms
(a similar difference of ~100 ms is visible in other endpoints)
Those tests are made from a location in Eastern Europe and our app is hosted in the UK, so we can assume the request needs about 40 ms to reach the servers and the same amount to come back. We also have some infrastructure overhead, such as load balancing and URL resolution, which adds another few milliseconds. As you can see, when we add everything up, we arrive at the difference.
The question is: am I right? These are only my speculations, and I wasn't able to find a clear answer on where and when New Relic starts to collect the data when we look at the whole request path:
client ---request---> web-app ---response---> client
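To make the speculation concrete, the rough arithmetic (the infrastructure overhead figure is an assumption) would be: if New Relic only starts measuring once the request reaches the web app, then 130 ms (server-side time reported by New Relic) + 40 ms (client to UK) + 40 ms (UK back to client) + roughly 20-40 ms (load balancing, URL resolution) comes to about 230-250 ms, which matches the externally measured times.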

Huge differences between JMeter start time and web service entry point start time on OpenShift

I am doing a load test using JMeter to measure performance. My web services are deployed on OpenShift.
I was comparing the start time recorded by JMeter with the start time at the web service entry point for the same transaction, and there is a difference of some milliseconds (sometimes more than 1 s).
Can you please let me know what the reason could be?
I don't know what you mean by "start time".
JMeter acts as follows:
Sends the request
Waits for the response
Measures the time in between
If your JMeter instance is in a different geographic location than the OpenShift instance, it may take some time for the request to travel around the globe (see the Connect Time and Latency metrics); if you're in Japan and the server is in France, the request will have to pass through hundreds of routers.
So if you want results that are closer together, you need to deploy JMeter in a closer geographic location, and preferably on OpenShift as well.
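As a minimal sketch of that sequence (the URL is illustrative and the measured values only approximate JMeter's own metrics):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SampleTimingDemo {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/");       // illustrative URL
        long start = System.currentTimeMillis();         // the "start time" JMeter records

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.connect();                                   // TCP (and TLS) handshake
        long connected = System.currentTimeMillis();      // roughly JMeter's Connect Time

        try (InputStream in = conn.getInputStream()) {
            in.read();                                    // first byte of the response
            long firstByte = System.currentTimeMillis();  // roughly JMeter's Latency
            byte[] buffer = new byte[8192];
            while (in.read(buffer) != -1) { }             // drain the rest of the body
            long end = System.currentTimeMillis();        // end of the sample

            System.out.printf("connect=%d ms, latency=%d ms, elapsed=%d ms%n",
                    connected - start, firstByte - start, end - start);
        }
    }
}

Every one of those intervals grows with the geographic distance between JMeter and the server, which is why the timestamp seen at the web service entry point can trail the start time JMeter records by many milliseconds, or sometimes more than a second.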
More information:
JMeter Glossary
Understanding Your Reports: Part 3 - Key Statistics Performance Testers Need to Understand

Difference in load test results

What can be the reason for the difference in results of load tests run at different times with the SAME bandwidth?
If I run the load test at midnight the response times are better, and during the day they are really bad. Thanks for your help.
Maybe during the day the application is being used by real users and your artificial load is being added to the natural load?
Another option is that the network is busier during the day, so the channel bandwidth is fully utilized.
The load testing tool's own metrics don't tell the full story; you can only make assumptions by looking at the TCP connect time metric (see the sketch at the end of this answer).
If you have an APM system in place, you can assess what's going on with the system during the day and at night and detect the factors which are impacting the response time. If you don't, you can set up your own using, for example, the JMeter PerfMon Plugin.
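As a sketch of that comparison (assumptions: results are saved as CSV with a header row, the Connect column is enabled via jmeter.save.saveservice.connect_time=true, the file name is illustrative, and the naive comma split only works if labels and messages contain no commas):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class ConnectTimeAverage {
    public static void main(String[] args) throws Exception {
        // Point this at a results file from a day-time or night-time run.
        List<String> lines = Files.readAllLines(Paths.get("results-night.jtl"));
        List<String> header = Arrays.asList(lines.get(0).split(","));
        int elapsedCol = header.indexOf("elapsed");   // total response time
        int connectCol = header.indexOf("Connect");   // TCP connect time
        double elapsedSum = 0, connectSum = 0;
        int samples = 0;
        for (String line : lines.subList(1, lines.size())) {
            String[] cols = line.split(",");          // naive split, see note above
            elapsedSum += Double.parseDouble(cols[elapsedCol]);
            connectSum += Double.parseDouble(cols[connectCol]);
            samples++;
        }
        System.out.printf("samples=%d, avg elapsed=%.1f ms, avg connect=%.1f ms%n",
                samples, elapsedSum / samples, connectSum / samples);
    }
}

Running it once against a night-time results file and once against a day-time file shows whether the extra response time is spent setting up connections (busy network) or elsewhere (busy application).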
Adding to Dmitri's note, there could be multiple reasons/causes for the difference in results.
As Dmitri pointed out, check your APM tool to see server health while the test is executing.
Do you integrate with any downstream applications? Do these applications reside in a stable, dedicated performance testing environment, or are they live production environments? If it is the latter, you should expect additional response latency during the day.
Authentication / token validation: gateways are usually configured to validate the incoming bearer token. When you execute during the morning, there is a possibility that your gateway is busy serving other real users' requests (assuming these are production AD / Okta / PingID servers).

JMeter response time seems high compared to New Relic response time

I am giving you the scenario as follows:
We are deploying the build in a Cloud Foundry container (IaaS) along with the New Relic binding service. This is hosted in Asia Southeast.
My JMeter instance resides in the same region, but it is on Amazon EC2 (Asia Southeast).
When I run JMeter, the response time seems higher compared to my New Relic app response time. Why is there sometimes so much variation? Is it due to latency? How do I explain this to my client when they check both the New Relic and JMeter results? I am sure both are correct and I need to find the RCA.
Please help.
JMeter response time includes network latency, which we cannot avoid. So it might be because of latency. If the difference is huge, try running the test from a machine very close to the app server (or in the same data center) and see whether that helps to minimize the latency.
What is the maximum number of users you are trying to simulate? What are the CPU and memory utilization of the load generator? Usually I would keep them below 80%.
Ensure that your results satisfy Little's Law! Check the links below (and the worked sketch after them) for more details. If they do not, you are trying to simulate more load from your load generator than it can handle; go for distributed load testing in that case.
http://www.testautomationguru.com/jmeter-performance-testing-application-of-littles-law-to-workload-models/
http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker-rancheros-in-cloud/
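For reference, a minimal Little's Law sanity check (all numbers below are illustrative assumptions, not taken from this question):

public class LittlesLawCheck {
    public static void main(String[] args) {
        double throughput   = 50.0; // requests per second measured during the test
        double responseTime = 0.8;  // average response time in seconds
        double thinkTime    = 3.0;  // think time per iteration in seconds
        // Little's Law: concurrent users the measured workload actually represents
        double impliedUsers = throughput * (responseTime + thinkTime);
        System.out.printf("Implied concurrent users: %.0f%n", impliedUsers);
        // If this number is well below the thread count configured in JMeter,
        // the load generator cannot keep up and the response times it reports
        // are not trustworthy - switch to distributed load testing.
    }
}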

Sample time (ms) is different from the LoadRunner response time for the same request. Why is that?

We recorded a request for loading a website page in JMeter, excluding all the static content files (CSS, JS, etc.). When we replayed the script, the sample time (taking it to be the response time) came out around 5000 ms.
We recorded the same request in LoadRunner and the response time came out around 300 ms. The response time for the request measured through HttpFox was also around 300 ms.
My question is why there is such a drastic difference between the response times measured by the two tools. Am I going wrong while calculating the response time in JMeter, or is there another way to calculate response time in JMeter?
I can see several reasons why this can happen:
JMeter is configured to "Redirect automatically" or to "Follow Redirects"
JMeter is configured to "Download embedded resources" (see the rough arithmetic after this list)
Your system under test demonstrates high latency (the amount of time for the request to reach the server; JMeter reports the overall response time as latency + actual response time, see The Load Reports guide for an explanation of the metrics)
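As a rough illustration of the second point (the resource count is an assumption, not taken from the question): if the page references around 15 embedded resources and JMeter fetches each at roughly the same ~300 ms as the main request, the sampler's total comes to about 300 + 15 × 300 = 4800 ms, close to the 5000 ms observed, while LoadRunner and HttpFox, which time only the single recorded request, report around 300 ms.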
There are lots of architectural differences which can contribute to this difference between the tools. Narrow your scope to one request, such as an image, and scale up your number of users in both tools to see what happens.
You also have test configuration items that can come into play, such as JMeter running monolithically on one host vs. LoadRunner running distributed amongst many generators, differences in think time settings, the number of users, etc. You could spend all day nailing the jello of test settings and architecture.
But, given that the LoadRunner time is closest to what is observable with a proxy and manual execution, what can you infer about the rest of your test data?
