Visual Studio Load Test request completion and think time

I'm using a load test in Visual Studio to test our Web API services, but to my surprise I can't seem to test what I want. I have a single URL in my .webtest file and send the same request again and again to see what the average response time is.
Here are the details:
1. I use a constant load of 1 user.
2. Test duration is 1 hour.
3. Think time is 10 seconds (not the think time between iterations).
4. The average response time that I get is 1.5 seconds.
5. So the average test time comes out to be 11.5 seconds.
6. Requests/sec is 0.088.
7. I'm using Sequential Test Order (one of the 4 different test mix types).
So these figures make me think that every time a virtual user sends a request, besides the specified think time it waits for the request to complete before sending a new one. Thus, technically, the total think time becomes
Total think time = think time specified + avg. response time
But I don't want the user to wait for an already-sent request to come back and only then, after the specified think time, send a new one. I need to configure the load test so that, with a think time of 10 seconds, the user sends the next request every 10 seconds without waiting for the first one to come back, thinking for another 10 seconds, and then sending a new request (which makes the total think time 11.5 seconds in my case, as mentioned above). And no matter which of the 4 test mix types I choose, Visual Studio always forces the virtual user to wait for the request to complete, add the specified think time, and then send a new one.
I know that what the Visual Studio load test is doing is the more realistic approach, where the user sends a request, waits until it comes back, thinks about or interacts with the website, and then sends a new one.
Any help or suggestions towards what I'm trying to achieve would be appreciated.

In the properties of the scenario, set the "Test mix type" to "Test mix based on user pace" and set the "Tests per user per hour" as appropriate.
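For example, to get the pacing described in the question, one iteration starting every 10 seconds regardless of response time, the arithmetic would be:
Tests per user per hour = 3600 seconds per hour / 10 seconds per test = 360
With user pacing, Visual Studio inserts a pacing delay between iterations so that each user starts tests at roughly that rate, rather than waiting for response time plus think time (assuming an iteration does not itself take longer than the pacing interval).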
The suggestion in the question that:
Total think time = think time specified + avg. response time
is erroneous. To my mind, adding the values does not produce a useful result. The two values on the right are as stated: think time simulates the time a user spends reading the page, deciding what to do next and typing/clicking their response; response time is the "turn-around" time between sending a request and getting the response. Adding them does not increase the think time in any sense; it just gives the total duration for handling the request in this specific test. Another test might make the same request with a different think time. Note that many web pages cause more than one request and response to be issued; JavaScript and other technologies allow web pages to do many clever things.
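The question's own figures illustrate the distinction: think time plus response time, 10 + 1.5 = 11.5 seconds, is the duration of one full iteration, and 1 request / 11.5 seconds ≈ 0.087 requests/sec, which matches the reported rate of 0.088.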

Related

Does the Constant Timer added in my HTTP Request affect the results in the Summary Report?

I have an HTTP Request in my Thread Group that takes around 20 to 30 seconds to complete with a single user, so when I add 50 users I get a 500/Internal Server Error, or sometimes a 503/Server has been shut down.
I want to add a Constant Timer with 40 seconds (in milliseconds) under the HTTP Request, so maybe the application will have some time to process it. Am I going the right way?
If I add the Constant Timer, will it be calculated as well in the Summary Report?
I need JMeter to give the API (my application) time to complete the process (it needs at least 30 seconds), and I want to know whether or not this will affect my Summary Report.
Pre-Processors, Post-Processors and Timers are not counted in the elapsed time, so response time will not be impacted.
However, throughput (the number of requests over the test duration) will be lower.
See JMeter Glossary for more information on the above metrics.
With regards to "right way" - real users don't "hammer" application non-stop, they need some time to "think" between operations so if you're simulating a real user you should have non-zero think time, however 40 seconds it kind of too much for me. Take a look at How to make JMeter behave more like a real browser article for more tips on properly configuring your JMeter test.

Need help on response time

I need help understanding the JMeter response result shown in the image.
My scenario: I am calculating the min/max/average response time for an API that creates a user account.
1. Log in to the site.
2. Create a user account using an API request (creating 100 user accounts via the API).
3. Log out.
Observations:
Total elapsed time is 32 mins (which is there in the image).
Response time for 100 users is 90852.
I need to understand how the response time units are measured here.
Does 90852 milliseconds mean approximately 90 seconds?
If so, is a single user account created in 90 seconds by the API?
Please tell me how the response time works here when compared with the total elapsed time.
Thanks :)
The average creation of a user took your API 908 ms (the entry with 100 samples ending with /api/users).
Since the line (where the name of the transaction is not in the screenshot) has a sample count of 1 and a response time that resembles 100 × 908 ms, I would guess that you have a Transaction Controller that holds the Loop Controller.
The same hierarchy that you use to organize your test plan also applies to transaction controllers and samplers. So if you group several samplers - and/or transaction controllers - under a parent transaction controller, that parent transaction controller will have the combined response time of all its children.
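Applied to the numbers in the question: 100 child samples × ~908 ms each ≈ 90800 ms, which is essentially the 90852 ms shown for the parent entry.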
Response time for 100 users is 90852. - No, only for 1 user. Looking at your image, it appears that only 1 sample was collected during the 32 minutes, so this response time is for that 1 sample, not for all 100 users. JMeter only shows you completed responses. Assuming you have a thread group of 100 users, the rest didn't complete / were waiting for the API to respond.
Does 90852 milliseconds mean approximately 90 seconds? - Yes. In your test you seem to be using a Once Only Controller for login and authentication, and everything else seems to run sequentially. So if you are load testing and have a slow API response, you won't be able to measure throughput for the rest of the APIs correctly, as the slowest API will hold up the thread for a long time.
Hope this helps.
It is hard to provide comprehensive analysis without seeing your Test Plan.
When it comes to your questions:
Total elapsed time is 32 mins (which is there in the image).
This looks a little bit high to me: given that you create 100 user accounts and the average response time is 908 milliseconds, I would expect your test to finish in about 90.8 seconds, which is roughly 1.5 minutes.
Does 90852 milliseconds mean approximately 90 seconds?
It rather looks like the sum of all 100 response times; most probably you got it from the Transaction Controller.
Average response time is basically the arithmetic mean, to wit: the sum of all response times divided by their count.
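Applied here: 90852 ms / 100 samples ≈ 908.5 ms average per request.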
First of all, you need to understand why your test takes that long.
You seem to be creating 100 user accounts using 1 thread (virtual user) in a loop; you might want to consider doing it with multiple threads instead.
You should be using the JMeter GUI only for test development and/or debugging; when it comes to test execution, you should run your JMeter tests in command-line non-GUI mode, like:
jmeter -n -t test.jmx -l result.jtl
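(Here -n enables non-GUI mode, -t points to the test plan file, and -l specifies the results file to write.)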

Why is JMeter Result is different to User Experience Result?

We are currently conducting performance tests on both web apps that we have; one runs within a private network and the other is accessible to all. For both apps, a single page load of the landing or initial page takes only 2-3 seconds from a user's point of view, but when we use Blaze and JMeter, the results are between 15-20 seconds. Am I missing something? The 15-20 second result came from the Load time/Sample time in JMeter, and from the Elapsed column if exported to .csv. Please help, as I'm stuck.
We have tried conducting tests on multiple PCs within the office premises, along with a PC remotely accessed at another site, and we still get the same results. The number of threads and the ramp-up period are both set to 1 to imitate a single user.
Where a delta exists, it is certain to mean that two different items are being timed. It would help to understand what you are timing to on your front end: a standard metric such as W3C domComplete, time to interactive, first contentful paint, or some other point, and then compare where this comes into play in the drill-down on the Performance tab of Chrome. Odds are that there is a lot occurring that is not visible to you but is being captured by JMeter.
You might also look for other threads on here about how JMeter operates as compared to a "real browser". There are differences which could come into play and affect your page comparisons, particularly if you have dozens or hundreds of elements that need to be downloaded to complete your page. Also, pay attention to third-party components where you do not have permission to test their servers.
I can think of 2 possible causes:
Clear your browser history, especially the browser cache. It might be the case that you're getting HTTP status 304 for all requests in the browser because responses are being returned from the browser cache and no actual requests are being made, while JMeter always uses a "clean" session.
Pay attention to the Connect Time and Latency metrics, as it might be the case that the server response time is low but the time for network packets to travel back and forth is very high.
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
So basically "Elapsed time = Connect Time + Latency + Server Processing Time"
In general given:
the same machine
clean browser session
and JMeter configured to behave like a real browser
you should get similar or equal timings for the same page
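If you want to see where the time goes, the components are in the .jtl/.csv results file. Here is a minimal Java sketch of the breakdown (the values are hypothetical, hard-coded stand-ins for the elapsed, Latency and Connect columns of one result row):

    // Sketch: splitting one JMeter result into rough components.
    // Values are illustrative, not from a real test.
    public class TimingBreakdown {
        public static void main(String[] args) {
            long elapsedMs = 15800; // "elapsed" column: full request time
            long latencyMs = 1200;  // "Latency" column: time to first byte
            long connectMs = 900;   // "Connect" column: TCP/SSL handshake

            // Latency includes connect time (see the glossary quote above),
            // so the rough server-side share is:
            long serverMs = latencyMs - connectMs;
            // Time spent downloading the response body after the first byte:
            long downloadMs = elapsedMs - latencyMs;

            System.out.println("Connect:  " + connectMs + " ms");
            System.out.println("Server:   ~" + serverMs + " ms");
            System.out.println("Download: " + downloadMs + " ms");
        }
    }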

Does JMeter show the correct average response time for the first page it hits for many virtual users?

I'm load testing a system with 500 virtual users. I've set the "Ramp-Up period (in seconds)" option to zero. So, as I understand it, JMeter will hit the system with all 500 virtual users at the same time. Please correct me if I'm wrong.
Now, the summary report shows that the average response time for the first page is ~100 seconds, which is more than a minute and a half of wait time. But while JMeter was running, I manually went to the same page/URL in a browser and didn't have to wait that long. It was not even close; the page response was almost immediate for me.
My question is: is there any known issue for the average response time of the first page? Is it JMeter which is taking long to trigger that many users?
Thanks in advance.
There is no issue in JMeter related to first-page response time.
The Summary Report shows all response time details in milliseconds; for the value of "100" seconds, have you converted milliseconds to seconds?
Also, in order to make sure that all 500 users hit the server concurrently, use a Synchronizing Timer.
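The Synchronizing Timer holds arriving threads until a configured group size is reached and then releases them all at once, much like Java's CyclicBarrier. A minimal sketch of the idea (thread count reduced and the actual request stubbed out for illustration):

    import java.util.concurrent.CyclicBarrier;

    // Sketch: release a whole group of virtual users at the same instant,
    // which is what JMeter's Synchronizing Timer does for samplers.
    public class BarrierDemo {
        public static void main(String[] args) {
            int users = 5; // illustrative; the question uses 500
            CyclicBarrier barrier = new CyclicBarrier(users,
                    () -> System.out.println("All users released together"));
            for (int i = 0; i < users; i++) {
                new Thread(() -> {
                    try {
                        barrier.await(); // wait until the full group arrives
                        // ... send the request here ...
                    } catch (Exception e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }
        }
    }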
Hope this will help.
While the response times will be accurate, you need to consider the effect of starting so many threads at once on both your server and your client.
Starting 500 threads at once is not insignificant on the client. If your server has the connections, it will start 500 threads as well.
Ramping up over a period of time is more realistic load-wise, but still not really indicative of server capability until the threads have all started and settled in.
Databases can also require a settling in period which can affect response times.
An alternative to ramping is introducing a random wait at the start of each thread before firing the first sample. You can then choose not to ramp over time, but you should still expect resources on the client to come under load suddenly, and change the settings if you hit limits. This will make the entire run much more representative of typical behaviour. However, you need to determine whether your use cases are typical.
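A minimal Java sketch of that random initial wait (thread count and delay range are illustrative; within JMeter itself, a Uniform Random Timer on the first sampler gives the same effect):

    import java.util.concurrent.ThreadLocalRandom;

    // Sketch: each thread sleeps a random 0-30 s before its first request,
    // spreading the initial burst without a fixed ramp-up schedule.
    public class RandomStartDelay {
        public static void main(String[] args) {
            for (int i = 0; i < 10; i++) { // 10 threads for illustration
                new Thread(() -> {
                    try {
                        long delayMs = ThreadLocalRandom.current().nextLong(30_000);
                        Thread.sleep(delayMs); // random start offset
                        // ... fire the first sample here ...
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }
        }
    }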
Although the heap size was increased, I noticed the reported time was still longer than the actual response time. Later I realised it was the probe effect (the extra time a tool adds due to test execution).

About web_reg_find() in LoadRunner

I am trying to measure the time the Next button takes to get from one page to another. To do this, I start a transaction before pressing the button, press the Next button, and end the transaction when the next page has loaded. Within this transaction I use web_reg_find() to check for specific text in order to verify the page.
When I ran it in the Controller, the transaction measured 5 seconds; then I modified the transaction content, deleted web_reg_find(), and measured the transaction again, and it was 3 seconds. Is that normal?
Because I am doing a load test, functionality is important, so transactions are also important. Is there any alternative way to check content and preserve performance?
web_reg_find() does some logic based on the response sent from the server and therefore takes time. LoadRunner is aware that this is not actual time that will be perceived by the real user and therefore reports it as "wasted time" for the transaction. If you check the log for this transaction you will see something like this:
Notify: Transaction "login" ended with "Pass" status (Duration: 4.6360 Wasted Time: 0.0062).
This shows the time the transaction took and, out of that time, how much was wasted on LoadRunner internal operations.
Note that when you open the result in Analysis, the transaction times will be reported without the wasted time (i.e. Analysis will report the time as it is perceived by the real user).
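For the log line above, that means Analysis would report 4.6360 - 0.0062 = 4.6298 seconds for the transaction.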
The amount of time taken for the processing of web_reg_find() also seems unusually long. As web_reg_find() is both memory- and CPU-bound (holding the page in RAM and running string comparisons), I would look at other possibilities as to why it takes an additional two seconds. My hypothesis is that you have a resource-constrained, or oversubscribed, load generator. Look at the performance of a control group for this type of user: 1 user loaded by itself on a load generator. Compare your control group to the behaviour of the global group. If you see a deviation, then this is due to a local resource constraint which shows up as slowed virtual users. This would have an impact on your measurement of response time as well.
I deliberately underload my load generators to avoid any possibility of load generator coloration plus employing a control generator in the group to measure any possible coloration.
The time taken by web_reg_find() is calculated as wasted time.
