I have the following test result from a load test:
timeStamp,elapsed,label,responseCode,Latency
1447675626444,9,API1,201,9
1447675626454,1151,API2,404,Not Found,1151
As is evident, the call to API2 fails and there is a delay of 10 ms between the two calls.
I know that the timeStamp field is the time since the epoch, but is it the time when the request was fired from the client or the time when the last response byte was received?
If the latter, how do I find the time when the request was fired from the client?
The first timestamp is the request start time. Latency is the time from that timestamp until the first response byte is received; elapsed is the time from that timestamp until the complete response has been received. So in your case:
444: API1 request went out. 9 milliseconds later, at
453: first byte AND last byte of the API1 response are received, because latency is the same as elapsed
454: API2 request went out
If you're using a regular thread group in JMeter with two samplers, the second request is not sent out until the response to the first sampler is completely received. Your issue would seem to be something other than pure sequence of calls.
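To make the arithmetic concrete, here is a minimal Python sketch (using the two result rows from the question) that reconstructs the event times from timeStamp, Latency and elapsed:

# Reconstruct the request/response timeline from JMeter result fields:
#   request sent        = timeStamp
#   first response byte = timeStamp + Latency
#   last response byte  = timeStamp + elapsed
rows = [
    # (timeStamp, elapsed, label, latency) - taken from the question
    (1447675626444, 9, "API1", 9),
    (1447675626454, 1151, "API2", 1151),
]

for ts, elapsed, label, latency in rows:
    print(f"{label}: sent at {ts}, first byte at {ts + latency}, "
          f"last byte at {ts + elapsed}")

# Gap between API1 completing and API2 being sent (1 ms here)
gap = rows[1][0] - (rows[0][0] + rows[0][1])
print(f"API2 was sent {gap} ms after the API1 response completed")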
==
To clarify what @Mike said about the request being "sent two or three lines of code later":
The Timestamp is when the JMeter sampler code marked the request start event and made a log entry. After that, the JVM has to execute a few lines of code to use the Apache HttpClient object to create a TCP connection, assemble an HTTP request and then send it out, possibly over several TCP packets. On any modern system the difference between the timestamp and the request actually going out will be less than a few milliseconds. If this timing is important for you to measure, JMeter isn't really the right tool; you should use a network sniffer like Wireshark to look for the timestamp of when the first packet was actually transmitted.
As you said, the timestamp is the API start time; the request may be sent two or three lines of code later. You cannot get the exact timestamp when the request is sent. As far as I know, it does not affect your performance results.
If you just want to measure how long it takes between the request being sent and the response being returned, you would need to build another API for that.
Related
First off, I'm new to JMeter and wanted to clear some doubts regarding the relationship between Load time, Connect time, and Latency.
I found some resources that explain the relationship between these metrics:
Latency time – Connect time = Server Processing Time
Elapsed time – Latency time = Download Time
resource
And then another resource says this:
Response Time = Latency + Processing Time
Given below is one of the sampler results I got. If you take this into consideration, can we really comment on how long it took for the server to process the request?
NOTE: In this scenario, my plan is to analyze how much of a load the server had to withstand. I don't really care about the delay of connection establishing and passing around of data packets.
Basically, I want to know the connection between the 3 aforementioned metrics: Load time, Connect time, and Latency. Any help is greatly appreciated. Thanks in advance :)
You cannot say "how long it took for the server to process the request" by looking at JMeter results because:
Latency is time to first byte
Elapsed time is time to last byte
The request lifecycle looks like:
JMeter establishes the connection (connect time)
JMeter sends request body to the server (unknown)
Server processes the request (unknown)
Server sends the response to JMeter (unknown)
JMeter receives the first byte of the response (Latency)
JMeter receives the last byte of the response (Elapsed time)
So you cannot determine the server processing time, even with millisecond precision, because JMeter only sees high-level network metrics. If you want to enrich your report with server processing time you need an APM or a profiler tool, or at least something like the JMeter PerfMon Plugin, to get this kind of information directly from the application under test.
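As a rough illustration of what can and cannot be derived from a single sampler result, here is a small Python sketch; the numbers are purely hypothetical and the names follow the usual JTL CSV columns:

# Hypothetical sampler result - values are illustrative only
elapsed = 850   # ms, time to last byte
latency = 800   # ms, time to first byte
connect = 120   # ms, TCP connection + SSL handshake

download_time = elapsed - latency            # transferring the response body
server_time_upper_bound = latency - connect  # still includes sending the
                                             # request and the first-byte trip

print(f"Download time: {download_time} ms")
print(f"Server processing took at most ~{server_time_upper_bound} ms; "
      "JMeter alone cannot narrow it down further")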
This documentation explains the metrics:
https://jmeter.apache.org/usermanual/glossary.html
Latency:
JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
Connect Time:
JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
Load time or Elapsed time:
JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
In layman's terms I would describe them as follows:
Load time: the total time taken by the request, from the first request packet to the final response packet
Connect time: the time taken to establish the connection with the server (including the SSL handshake)
Latency: the time from sending the request until the first byte of the response is received (if the response is small this can be the same as the load time)
I am starting to test with JMeter. I have read the documentation but still have questions about the values obtained from JMeter; at least in the version I have, they are: timeStamp, elapsed, label, responseCode, responseMessage, threadName, dataType, success, failureMessage, bytes, sentBytes, grpThreads, allThreads, URL, Latency, IdleTime and Connect.
I am running the test against a web page hosted on a server. The timing fields are:
elapsed (response time) is the time it takes to complete the request (from start to finish)
Latency is the time from when you start transmitting until you receive the first byte (from start to first response; includes connect)
Connect is the time it takes to make a TCP connection.
My question would be: To take into account network latency, what data would need to be collected?
If you want to measure the time it takes the response to travel from the system under test to JMeter, just subtract Latency from Elapsed time and that should be it.
Looking into JMeter Glossary:
Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
So TTLB (time to last byte, i.e. elapsed) minus TTFB (time to first byte, i.e. latency) gives you the time to transfer the response from the server to JMeter, and given at least 2 samplers with different response sizes you can work out the network throughput per byte.
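A small Python sketch of that idea, using made-up numbers for two samplers with different response sizes (bytes, elapsed and Latency are the usual JTL columns):

# Two hypothetical samplers against the same server, different response sizes
# (bytes, elapsed_ms, latency_ms) - illustrative values only
small = (2_048, 120, 115)
large = (204_800, 480, 130)

def download_time(sample):
    _, elapsed, latency = sample
    return elapsed - latency   # TTLB - TTFB, i.e. pure transfer time

delta_bytes = large[0] - small[0]
delta_time_ms = download_time(large) - download_time(small)

# bytes per millisecond -> kilobytes per second
throughput_kb_s = (delta_bytes / delta_time_ms) / 1024 * 1000
print(f"Approximate network throughput: {throughput_kb_s:.0f} KB/s")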
Some extra information can be obtained from the JMeter log file: if you enable debug logging at the protocol level you will see timestamps for all the events in the log.
The line which needs to be added to the log4j2.xml file:
<Logger name="org.apache.http" level="debug" />
Example output:
I am using JMeter for performance testing. I believe elapsed time is the response time I should be considering (i.e., 85 milliseconds). When I hit the same request from Postman it takes much less time (i.e., 35 milliseconds), so I want to know whether JMeter is giving correct results or not.
Elapsed time consists of:
Connect time (it might include SSL handshake)
Latency
Application response time
Given you're running the same request (URL, body, headers) from the same machine you should have similar results.
Try running the request more times, e.g. 10 or 100, using both newman and JMeter (set the number of iterations in the Thread Group to 100). If you are still seeing differences, consider comparing the requests using an external sniffer tool like Wireshark; it will give you more insight into what's going on under the hood.
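For the comparison itself you can aggregate the elapsed column of the JTL file and put the figures next to what newman reports. A minimal Python sketch, assuming a CSV-format JTL with the default header (the file name results.jtl is just a placeholder):

import csv
import statistics

# Summarise the elapsed column of a JMeter CSV result file so it can be
# compared against the response times reported by Postman/newman.
with open("results.jtl", newline="") as f:
    elapsed = [int(row["elapsed"]) for row in csv.DictReader(f)]

elapsed.sort()
p90 = elapsed[int(len(elapsed) * 0.9) - 1]
print(f"samples={len(elapsed)} "
      f"mean={statistics.mean(elapsed):.1f} ms "
      f"median={statistics.median(elapsed)} ms "
      f"p90={p90} ms")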
I'm basically trying to calculate in JMeter 5.1 the server processing time for an HTTP request. I've read the JMeter documentation (especially https://jmeter.apache.org/usermanual/glossary.html) to learn more about Elapsed time, Latency and Connect time.
Let's say I have a test plan with one thread which does successively 3 identical HTTP requests to one server. The thing is that for the first request, Connect time is (obviously) not equal to 0, but it is for second and third request.
However, from my understanding, Latency includes Connect time, hence for my first request, the Latency is always (much) larger than for the second and third request, and it does not reflect the time spent waiting (server processing time) for this first request.
Can I assume that, if I subtract the Connect time from the Latency (Latency - Connect time), it gives me a meaningful value of the server processing time (plus maybe the content download time)?
See the W3C time-taken HTTP request log field. Just turn this on and post-process the HTTP request logs at the end of your test. You will have the complete processing time for each individual request.
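For example, if the web server writes W3C extended logs (as IIS does), a short post-processing sketch in Python might look like the following; it assumes time-taken is among the logged fields and is reported in milliseconds, and the log file name is just a placeholder:

import statistics

# Average the time-taken field of a W3C extended log (e.g. IIS).
times = []
fields = []
with open("u_ex231001.log") as log:       # hypothetical log file name
    for line in log:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]     # field names after "#Fields:"
        elif not line.startswith("#") and fields:
            values = line.split()
            times.append(int(values[fields.index("time-taken")]))

print(f"requests={len(times)} "
      f"mean server time={statistics.mean(times):.1f} ms "
      f"max={max(times)} ms")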
My query is: when I finish my performance test and get the result file, I can see that there is a difference between the JMeter response time and the server response time.
I verified the server response time by checking the server logs. I am not writing any extra elements to the result file either.
Can I get an explanation of why the response time shown by JMeter is always higher than the actual response time?
Have you thought about the network? According to the JMeter glossary:
Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
So my expectation is that the server measures only the time required to process the request and respond, while JMeter measures the whole end-to-end transaction, to wit:
Establishing the connection (in particular initial SSL Handshake could be very long)
Sending packets to the server
here server starts measurement
Processing the request by the server
here server stops measurement
Waiting for the first packet to come (Latency)
Waiting for the last packet to come (Elapsed time)
The time needed for the request and response to travel back and forth can really matter; for example, if you have a faulty router or an improperly configured load balancer, the user experience won't be smooth even if the actual server response time is low.
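If you do have a server-side figure for a given request (for example from the server logs you checked), the difference is easy to express; a tiny Python sketch with hypothetical numbers:

# Hypothetical timings for one request - illustrative values only
jmeter_elapsed = 480   # ms, measured by JMeter (end to end)
server_time = 310      # ms, taken from the server's own logs

# Everything JMeter sees that the server does not: connection setup,
# request upload, and response travel/download.
network_and_client_overhead = jmeter_elapsed - server_time
print(f"Time spent outside the server: {network_and_client_overhead} ms")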