We have a production monitoring system which reports Web Navigation and Resource Timing stats as defined by the W3C standards. A sample reference for the stats is given below:
https://developer.mozilla.org/en-US/docs/Web/Performance/Navigation_and_resource_timings
During production monitoring of our website, we found that the Document Unload (unloadEventEnd) stats are on the higher side, close to Time To First Byte (responseStart).
So I would like to clarify the following:
If Document Unload takes longer for any reason, would it delay the current API/Ajax or HTTPS request, adding to the total time taken to return the first byte of the response, and hence show up as a higher value for the Time To First Byte (responseStart) metric?
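For reference, here is a minimal sketch (in Python, with made-up example values standing in for a browser's Navigation Timing entry) of how the two intervals in the question are derived. Both are offsets from marks on the same timeline, so comparing them requires knowing which marks anchor each interval:

```python
# Example Navigation Timing marks, in milliseconds relative to
# navigationStart. These numbers are invented for illustration only.
timing = {
    "navigationStart": 0,
    "unloadEventStart": 5,
    "unloadEventEnd": 180,   # previous document's unload handler ran long
    "requestStart": 20,
    "responseStart": 210,    # the Time To First Byte mark
}

# Document Unload duration: time spent in the previous page's unload handler.
unload_duration = timing["unloadEventEnd"] - timing["unloadEventStart"]

# TTFB as commonly reported: responseStart relative to navigationStart.
ttfb = timing["responseStart"] - timing["navigationStart"]

print(f"unload duration: {unload_duration} ms")
print(f"TTFB (responseStart): {ttfb} ms")
```

This only shows how the metrics are computed; whether a slow unload handler actually delays the new request depends on the browser's processing model, which is what the question is asking about.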
I have been running my JMeter script with the following setup
Users: 100
Loop Controller: 5
I used Loop Controller on the http request where transactions are needed to iterate.
My question is about a particular search request: after 5 successful searches, the subsequent searches displayed “Customer API. Unable to establish connection”.
Please see the images below:
The first image displays a lower load time while the second image displays a higher load time.
It looks like you discovered a bottleneck in your application, or at least it is not properly configured for high loads. Almost a minute of response time for transferring 600 kilobytes is not something I would expect from a well-behaved application.
The error message you're getting is very specific, so I would recommend checking your application logs as the very first step. The remaining steps would be inspecting the application and middleware configuration and ensuring that it is properly set up for high performance and has enough headroom to operate in terms of CPU, RAM, network, disk, etc.
We are currently conducting performance tests on both web apps that we have; one runs within a private network and the other is accessible to all. For both apps, a single page load of the landing (initial) page takes only 2-3 seconds from a user's point of view, but when we use Blaze and JMeter, the results are between 15-20 seconds. Am I missing something? The 15-20 second result comes from the Load time/Sample Time in JMeter, and from the Elapsed column when exported to .csv. Please help, as I'm stuck.
We have tried conducting tests on multiple PCs within the office premises, along with a PC remotely accessed at another site, and we still get the same results. The number of threads and the ramp-up period are both set to 1 to imitate a single user only.
Where a delta exists, it is certain to mean that two different items are being timed. It would help to understand, on your front end, whether you are timing to a standard metric, such as W3C domComplete, time to interactive, first contentful paint, or some other location, and then compare where this comes into play in the drilldown on the Performance tab of Chrome. Odds are that there is a lot occurring that is not visible which is being captured by JMeter.
You might also look for other threads on here about how JMeter operates as compared to a "real browser". There are differences which could come into play affecting your page comparisons, particularly if you have dozens or hundreds of elements that need to be downloaded to complete your page. Also, pay attention to third-party components where you do not have permission to test their servers.
I can think of 2 possible causes:
Clear your browser history, especially the browser cache. It might be the case that you're getting HTTP Status 304 for all requests in the browser because responses are being returned from the browser cache and no actual requests are being made, while JMeter always uses a "clean" session.
Pay attention to the Connect Time and Latency metrics, as it might be the case that the server response time is low but the time for network packets to travel back and forth is very high.
Connect Time. JMeter measures the time it took to establish the connection, including the SSL handshake. Note that connect time is not automatically subtracted from latency. In case of a connection error, the metric will be equal to the time it took to encounter the error; for example, in case of a timeout, it should be equal to the connection timeout.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
So basically "Elapsed time = Connect Time + Latency + Server Processing Time"
In general given:
the same machine
clean browser session
and JMeter configured to behave like a real browser
you should get similar or equal timings for the same page
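To see how these components relate in practice, here is a hedged sketch that breaks down JMeter results along the lines quoted above. It parses an inline CSV whose column names follow JMeter's default CSV/JTL output (`elapsed`, `Latency`, `Connect`); the sample rows are invented for illustration:

```python
import csv
import io

# Invented sample rows in JMeter's default CSV result format (times in ms).
sample_csv = """timeStamp,elapsed,Latency,Connect,label
1,1200,400,120,GET /landing
2,15500,15100,9000,GET /landing
"""

for row in csv.DictReader(io.StringIO(sample_csv)):
    elapsed = int(row["elapsed"])
    latency = int(row["Latency"])
    connect = int(row["Connect"])
    download = elapsed - latency    # time to receive the rest of the body
    server_wait = latency - connect # connect time is included in Latency
    print(f'{row["label"]}: connect={connect} ms, '
          f'wait={server_wait} ms, download={download} ms')
```

Running a breakdown like this over a real .jtl file makes it obvious whether a slow sample is dominated by connection setup, server wait, or body download.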
I'm using a load test in Visual Studio to test our web API services. But to my surprise, I can't seem to test what I want to. I have a single URL in my .webtest file and try to send the same URL time and again to see the avg. response time.
Here are the details
1. I use a constant load of 1 user
2. Test duration of 1 hour
3. Think time of 10 seconds (not the think time between iterations)
4. The avg. response time that I get is 1.5 seconds
5. So the avg. test time comes out to be 11.5 seconds
6. Requests/sec is 0.088
7. And I'm using Sequential Test Order among 4 different types of tests
So these figures are making me think that every time a virtual user sends a request, besides the specified think time it waits for the request to complete before sending a new one. Thus, technically, the total think time becomes
Total think time = think time specified + avg. response time
But I don't want the user to wait for an already-sent request to come back before sending a new one after the specified think time. I need to configure the load test so that, if the think time is 10 seconds, the user sends the next request every 10 seconds without waiting for the first one to come back (rather than waiting for the response, thinking for 10 seconds, and only then sending a new request, which makes the total cycle 11.5 seconds in my case, as mentioned above). And no matter which of the 4 different test types I choose, Visual Studio always forces the virtual user to wait for the completion of the request, then add the specified think time, and then send a new one.
I know that what the Visual Studio load test is doing is the more practical approach, where the user sends the request, waits until it comes back, thinks or interacts with the website, and then sends a new one.
Any help or suggestion would be appreciated towards what I'm trying to achieve.
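What is being asked for here is an "open" workload: requests dispatched on a fixed pace regardless of whether earlier responses have returned. A hedged simulation sketch (not Visual Studio itself; `send_request` is a hypothetical stand-in for the real HTTP call, and the intervals are shrunk so it runs fast):

```python
import threading
import time

def send_request(i, results):
    """Hypothetical stand-in for an HTTP call; sleeps to simulate latency."""
    time.sleep(0.03)          # simulated 30 ms response time
    results.append(i)

def paced_load(pace_seconds, count):
    """Dispatch requests on a fixed pace without waiting for responses."""
    results, threads = [], []
    for i in range(count):
        t = threading.Thread(target=send_request, args=(i, results))
        t.start()             # fire and forget: next send is not blocked
        threads.append(t)
        if i < count - 1:
            time.sleep(pace_seconds)  # fixed pacing, independent of responses
    for t in threads:
        t.join()              # only at the very end, to collect results
    return results

sent = paced_load(pace_seconds=0.01, count=5)
print(len(sent))  # all 5 requests were dispatched at the fixed pace
```

With real pacing of 10 seconds, this model sends a request every 10 seconds even when a response takes 1.5 seconds to arrive, which is the behaviour the question describes.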
In the properties of the scenario, set the "Test mix type" to be "Test mix based on user pace" and set the "Tests per user per hour" as appropriate. See here.
The suggestion in the question that:
Total think time = think time specified + avg. response time
is erroneous. To my mind, adding the values does not provide a useful result. The two values on the right are as stated: think time simulates the time a user spends reading the page, deciding what to do next, and typing/clicking their response; response time is the "turnaround" time between sending a request and getting the response. Adding them does not increase the think time in any sense; it just gives the total duration for handling the request in this specific test. Another test might make the same request with a different think time. Note that many web pages cause more than one request and response to be issued; JavaScript and other technologies allow web pages to do many clever things.
I'm load testing a system with 500 virtual users. I've kept the "Ramp-Up period (in seconds)" option to zero. So, what I understand, JMeter will hit the system with 500 virtual users all at the same time. Please correct me if I'm wrong here.
Now, the summary report shows that the average response time for the first page is ~100 seconds, which is more than a minute and a half of wait time. But while JMeter was running, I manually went to the same page/URL in a browser and didn't have to wait that long. It was not even close; the page response was almost immediate for me.
My question is: is there any known issue for the average response time of the first page? Is it JMeter which is taking long to trigger that many users?
Thanks in advance.
--Ishtiaque
There is no issue in JMeter related to first page response time.
The Summary Report shows all response time details in milliseconds; for the value of ~100 seconds, have you converted milliseconds to seconds?
Also in order to make sure that 500 users hit concurrently, use Synchronizing Timer.
Hope this will help.
While the response times will be accurate, you need to consider the effect of starting so many threads at once on both your server and your client.
Starting 500 threads at once is not insignificant on the client. If your server has the connections, it will start 500 threads as well.
Ramping over a period of time is more realistic loadwise, but still not really indicative of server capability until the threads have all started and settled in.
Databases can also require a settling in period which can affect response times.
Alternative to ramping is introducing a random wait at the start of each thread before firing the first sample. You can then choose not to ramp over time, but still expect resources on the client to suddenly come under load and change the settings if you hit limits. This will make the entire run much more realistic of typical behaviour. However, you need to determine if your use cases are typical.
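The alternative described above can be sketched as follows: each thread gets a random start delay before its first sample, so 500 users do not all fire at the same instant. This is a simulation of the idea only (the delays are computed, not slept, and the 30-second spread window is an assumption):

```python
import random

random.seed(42)      # fixed seed so the example is reproducible

threads = 500        # matches the scenario in the question
max_delay = 30.0     # seconds; an assumed spread window

# One random startup delay per thread, drawn uniformly over the window.
delays = [random.uniform(0, max_delay) for _ in range(threads)]

print(f"min {min(delays):.1f}s, max {max(delays):.1f}s, "
      f"avg {sum(delays) / threads:.1f}s")
```

The uniform spread means the first samples arrive over the whole window instead of as a single spike, while the full 500-user load is still reached quickly.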
Although the heap size was increased, I noticed there was still a longer time compared to the actual response time. Later I realised it was the probe effect (the extra time a tool adds during test execution).
I am trying to measure the time for the Next button from one page to another. To do this, I start a transaction before pressing the button, press the Next button, and end the transaction when the next page has loaded. Within this transaction I use web_reg_find() and check for specific text to verify the page.
When I ran it in the Controller, that transaction measured 5 sec; then I modified the transaction content and deleted web_reg_find(), after which the transaction measured 3 sec. Is that normal?
Because I am doing a load test, functionality is important, so transactions are also important. Is there an alternative way to check the content while preserving the performance measurement?
web_reg_find() does some logic based on the response sent from the server and therefore takes time. LoadRunner is aware that this is not actual time that will be perceived by the real user and therefore reports it as "wasted time" for the transaction. If you check the log for this transaction you will see something like this:
Notify: Transaction "login" ended with "Pass" status (Duration: 4.6360 Wasted Time: 0.0062).
The log shows the time the transaction took and, out of that time, how much was wasted on LoadRunner internal operations.
Note that when you open the result in Analysis, the transaction times will be reported without the wasted time (i.e. Analysis will report the time as it is perceived by the real user).
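Concretely, the net (user-perceived) time that Analysis reports is the logged duration minus the wasted time. Using the numbers from the sample log entry:

```python
# Figures from the sample log line for the "login" transaction.
duration = 4.6360   # seconds, the logged transaction duration
wasted = 0.0062     # seconds spent on LoadRunner internals (web_reg_find etc.)

# Net time as a real user would perceive it.
net = duration - wasted

print(f"net transaction time: {net:.4f} s")  # prints 4.6298 s
```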
The amount of time taken for the processing of web_reg_find() also seems unusually long. As web_reg_find() is both memory- and CPU-bound (holding the page in RAM and running string comparisons), I would look at other possibilities as to why it takes an additional two seconds. My hypothesis is that you have a resource-constrained, or oversubscribed, load generator. Look at the performance of a control group for this type of user: 1 user running by itself on a load generator. Compare your control group to the behavior of the global group. If you see a deviation, then it is due to a local resource constraint which shows up as slowed virtual users. This would have an impact on your measurement of response time as well.
I deliberately underload my load generators to avoid any possibility of load generator coloration, plus I employ a control generator in the group to measure any possible coloration.
The time taken by web_reg_find() is counted as wasted time.