Visual Studio web test transaction total request time wrong - visual-studio

I have a quick question. I have recorded web scenarios, created different transactions on top of those recorded web requests, and built load tests on them.
When I run the load test and look at the details of each run, I see that the transaction total time does not match the sum of the individual request times under it.
Do you have any idea why?

Related

What is the impact of response codes 400 and 503? Can we ignore these codes if my primary focus is to measure the loading time of a web application?

I am testing the loading time of a web application's login page with 300 thread users and a ramp-up period of 300 seconds. Most of my samples return response code 200, but a few of them return response codes 400 or 503.
My goal is just to check the performance of the web application if 300 users start using it.
I am new to JMeter and have basic knowledge of programming.
My questions:
1. Can I ignore these errors and focus just on the timings from the summary report?
2. If I really need to fix these errors, how do I fix them?
There are 2 different problems indicated by these errors:
HTTP Status 400 stands for Bad Request - it means that you're sending malformed requests which cannot be understood by the server. You should inspect the request details and amend your JMeter configuration, as this problem is in your script.
HTTP Status 503 stands for Service Unavailable - it indicates a problem on the server side, i.e. the server is not capable of handling the load you're generating. This is something you can already report as an application issue. You can try to identify the underlying cause by:
looking into your application log files
checking whether your application has enough headroom to operate in terms of CPU, RAM, network, disk, etc. This can be done using an APM tool or the JMeter PerfMon Plugin
re-running your test with profiler tool telemetry to deep dive into what's under the hood of the longest response times
So first of all you should ensure that your test is doing what it is supposed to do by running it with 1-2 users/loops and inspecting the request/response details. At this stage you should not see any errors.
Going forward, you should increase the load gradually and correlate the increasing number of virtual users with the increasing response times and number of errors.
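If it helps to make that correlation concrete, here is a minimal Python sketch (separate from JMeter itself) that summarises a JTL results file per sampler label. It assumes JMeter's default CSV output columns (label, elapsed, responseCode, success); the results.jtl path is a placeholder.

```
# Minimal sketch: summarise a JMeter JTL (CSV) results file per sampler label.
# Assumes the default CSV columns "label", "elapsed", "responseCode", "success";
# adjust the field names if your jmeter.properties uses a different layout.
import csv
from collections import Counter, defaultdict

def summarise(jtl_path):
    stats = defaultdict(lambda: {"count": 0, "elapsed_total": 0, "codes": Counter()})
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            s = stats[row["label"]]
            s["count"] += 1
            s["elapsed_total"] += int(row["elapsed"])
            if row["success"].lower() != "true":
                s["codes"][row["responseCode"]] += 1
    for label, s in stats.items():
        avg = s["elapsed_total"] / s["count"]
        errors = sum(s["codes"].values())
        print(f"{label}: {s['count']} samples, avg {avg:.0f} ms, "
              f"{errors} errors {dict(s['codes'])}")

if __name__ == "__main__":
    summarise("results.jtl")  # placeholder path to your JTL file
```

Running this after each test at a new load level gives a quick view of where the 400/503 responses start to appear and how the average times move.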
Performance testing is different from load testing. What you are doing is load testing.
Performance testing is more about how long an action takes. I typically capture performance on a system not under load for a given action.
This gives a baseline that I can then refer to during load tests.
Hopefully, you've been given some performance figures to test against, e.g. the system must be able to handle 300 requests in two minutes.
When moving on to load, I run a series of load tests with an increasing number of users/threads and capture the results from each test (a toy sketch of this stepped approach follows at the end of this answer).
Armed with this, I can see how load degrades performance to the point where errors start to show up. This gives you an idea of how much typical load the system can handle.
I'd also look to run soak tests too. This is where I'd run JMeter for a long period with typical (not peak) load to make sure the system can handle sustained load.
In terms of the errors you're seeing, no, I would not ignore them. Assuming your test is calling the same endpoint each time, it seems safe to say the code is fine; it's the infrastructure struggling with the load you're throwing at it.
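As a purely illustrative toy (not a replacement for JMeter), the stepped approach could look like the following Python sketch, which hits one URL at increasing concurrency levels and records the average latency and status codes per level. The target URL and the step sizes are placeholders.

```
# Toy illustration of a stepped load run: one URL, increasing concurrency,
# average latency and response-code counts recorded per level.
import time
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.com/login"   # placeholder endpoint

def one_request(_):
    start = time.perf_counter()
    try:
        code = requests.get(URL, timeout=30).status_code
    except requests.RequestException:
        code = "error"
    return code, time.perf_counter() - start

for users in (10, 50, 100, 200, 300):          # increasing "virtual users"
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(one_request, range(users)))
    codes = Counter(code for code, _ in results)
    avg_ms = 1000 * sum(elapsed for _, elapsed in results) / len(results)
    print(f"{users:>4} users: avg {avg_ms:.0f} ms, codes {dict(codes)}")
```

Tabulating those per-level figures is what lets you see the point at which response times climb and errors such as 503 begin to appear.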

JMeter Load Testing Time Verification

I use JMeter for load testing.
When I time the page load manually with a stopwatch it takes about 8.5 seconds, but when I run the same case with JMeter it reports a load time of 2 seconds.
There is a huge difference between them. How can I verify the actual time?
E.g.: one user takes about 9 seconds to load the form, while JMeter reports a load time of 2 seconds.
Client time is a complex item, as a capture from the Chrome Developer Tools performance tab makes clear. There's a lot going on at the client, which leads to a difference between the time you see with an HTTP-protocol test tool such as JMeter (and most of the other performance test tools on the planet) and the actual client render.
You can address this delta in a number of ways:
Run a single GUI virtual user. Name your timing records such as "Login" and "login_GUI"; the delta between the two is your client weight. Make sure to run the GUI virtual user on a dedicated host to avoid resource contention. (A rough sketch of this comparison follows after this answer.)
Run a test with all browsers. This was state of the art in 1995. Because of the resource cost, and the skew imposed when trying to figure out the cost of the server response, the entire industry shifted to protocol-level virtual users. Some are trying to bring this model back as "state of the art." It is not.
Ask the performance question earlier, also known as "shift left." Every developer has these developer tools at their disposal, as does every functional tester. If you find that a client is slow for one user, be curious and use the developer tools to identify why. If you wait until multi-user performance testing to answer questions about client weight, you have waited too long and often will not have the time or resources to change the page architecture in meaningful ways to reduce the client page cost. This is where understanding earlier has tremendous advantages for making changes.
A developer-tools capture like that illustrates the precise challenge you have: the loading of the individual components takes less than a tenth of a second - these are the requests that JMeter would be making - but the page takes almost five seconds to "render." JMeter is not broken; it is working as designed. It is your understanding of which tools can be used to pull particular stats for analysis that needs to change.
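To make the client-weight delta visible for a single user, a minimal sketch along the lines of the first option above could put a plain HTTP GET next to a real browser driven by Selenium, using the W3C Navigation Timing API. It assumes Selenium and a working Chrome/ChromeDriver setup; the URL is a placeholder.

```
# Protocol-level timing vs. browser timing for the same page, so the
# "client weight" (parse/render/JS) becomes visible.
import time

import requests
from selenium import webdriver

URL = "https://example.com/login"   # placeholder page

# Protocol-level timing - roughly what an HTTP sampler measures per request.
start = time.perf_counter()
requests.get(URL, timeout=30)
protocol_ms = (time.perf_counter() - start) * 1000

# Browser timing via the W3C Navigation Timing API.
driver = webdriver.Chrome()
try:
    driver.get(URL)
    timing = driver.execute_script("return window.performance.timing.toJSON()")
    network_ms = timing["responseEnd"] - timing["navigationStart"]
    full_load_ms = timing["loadEventEnd"] - timing["navigationStart"]
finally:
    driver.quit()

print(f"plain GET:            {protocol_ms:.0f} ms")
print(f"browser network time: {network_ms} ms")
print(f"browser full load:    {full_load_ms} ms (includes parse/render/JS)")
```

The gap between the last figure and the first is the client weight described above; it is exactly the part a protocol-level tool is not designed to report.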
You can't compare JMeter load time to browser load time as is, also because your browser loads JavaScript files and can call JavaScript functions on page load, while JMeter doesn't execute JavaScript.
JMeter is not a browser, it works at protocol level. As far as web-services and remote services are concerned, JMeter looks like a browser (or rather, multiple browsers); however JMeter does not perform all the actions supported by browsers. In particular, JMeter does not execute the Javascript found in HTML pages. Nor does it render the HTML pages as a browser does (it's possible to view the response as HTML etc., but the timings are not included in any samples, and only one sample in one thread is ever displayed at a time).
Just a side note - you can use a plugin to check the exact load time in Chrome.
A well-behaved JMeter test's timing should be equal or similar to real-user timing; if there is a 4x difference, most probably your JMeter configuration is not correct.
Probably the most important point: make sure your HTTP Request samplers are configured to retrieve so-called "embedded resources" (images, scripts, styles) which are referenced in the web page (a rough illustration of the effect follows after these tips).
If your application uses AJAX technology, make sure you execute the AJAX-driven requests as well and add their elapsed time to the main sampler using e.g. a Transaction Controller.
Make sure you mimic browser's:
Cookies via HTTP Cookie Manager
Headers via HTTP Header Manager
Cache via HTTP Cache Manager
Assuming all of the above, you should see page load times similar to the real user experience. See the How to make JMeter behave more like a real browser article for more detailed information on the above tips.
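As a rough, simplified illustration of the embedded-resources point (no caching, no parallel download, naive URL handling, placeholder URL), the difference between fetching the page alone and fetching the page plus its referenced images/scripts/styles can be seen with a short Python sketch:

```
# Time the page alone, then the page plus its embedded resources fetched
# sequentially - a crude stand-in for "Retrieve all embedded resources".
import time
from html.parser import HTMLParser
from urllib.parse import urljoin

import requests

URL = "https://example.com/"   # placeholder page

class ResourceCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and attrs.get("src"):
            self.resources.append(attrs["src"])
        elif tag == "link" and attrs.get("href"):
            self.resources.append(attrs["href"])

start = time.perf_counter()
page = requests.get(URL, timeout=30)
page_ms = (time.perf_counter() - start) * 1000

collector = ResourceCollector()
collector.feed(page.text)

start = time.perf_counter()
for ref in collector.resources:
    requests.get(urljoin(URL, ref), timeout=30)
resources_ms = (time.perf_counter() - start) * 1000

print(f"page only:          {page_ms:.0f} ms")
print(f"embedded resources: {resources_ms:.0f} ms ({len(collector.resources)} items)")
```

A sampler that skips the second part is reporting only the first figure, which is one common reason a JMeter result looks several times faster than a stopwatch measurement.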
In addition to the answers provided by James and user7294900, here is another way to picture the reason behind the difference between the time given by your stopwatch and the time given by JMeter.
JMeter measures the time spent on the request and response themselves: from the moment the request is sent until the response has been received.
Your stopwatch, on the other hand, measures the full user experience: the request/response time plus everything the browser then does with the response, such as parsing the HTML, downloading and executing scripts, and rendering the page.
Notice that there are additional actions performed by the browser when you are taking the time with your stopwatch. This is the reason behind the huge difference in time between JMeter and your stopwatch.
In addition to this, ensure that you are using the same test environment conditions for both measurements (the same network conditions, the same load generator, etc.).
Hope this helps!

Web Performance Test in Cloud Load Test sends cookie on some tests

I have a web performance test that begins with a webforms login, executes a few steps and then finishes.
Mostly this runs without errors, but if I extend the load test run beyond 15 minutes I start to get load test failures, because some tests send a Session and Auth cookie on the initial GET to the root URL.
Clearly the test recording does not have cookies on the initial request. Additionally, I have set the "Percentage of New Users" on the scenarios to 100% to ensure that all tests are running as a new user.
The test is data-bound to a list of 600 users in a User Pace scenario. Nothing very heavy.
However, I cannot identify why after a period of time (12 minutes) some of the tests begin to send the cookies on the initial request!
Can anyone give me any pointers please?
This is an old question and, on re-reading it, not very clear.
The scenario reflects more my lack of knowledge of the web testing features I was using.
I am fairly certain it was caused by a missing "log out" test step combined with the configuration of the load test, which was probably re-using connections.
After much prodding around I achieved some clean runs.

JMeter response time decreases after the first run of test plan

I have a test plan set up which I am using on my web application. It is pretty simple: a user logs in and then navigates through some of the pages. Everything's working fine except for the fact that whenever I run the test plan for the first time (say, the first time after restarting the web application server), the average response times recorded are around 18000 ms, but in subsequent runs they are always around 3000 ms until I restart the server. I just want to know why this is happening. Pardon me, I am a newbie to this, and thanks in advance.
You can start by excluding some parts of the test plan and trying again. If the response time does not decrease, then look at your web application server's thread pool size. If it is very small and your JMeter test plan needs more threads than that, the application server has to create new threads on the first run. If the response time is still high after you increase the minimum thread pool size on the app server, then you need to look at what your test plan actually does. By the way, I'd be happy to have a look at your test plan if you share it.
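To check whether the slowness really is confined to a warm-up window (thread creation, caches and similar first-run costs), one quick option is to compare the first part of a run against the rest using the JTL results file. This sketch assumes JMeter's default CSV columns timeStamp (epoch milliseconds) and elapsed; the file name and the 60-second window are placeholders.

```
# Compare average response time during the warm-up window with the rest of the run.
import csv

def warmup_vs_steady(jtl_path, warmup_seconds=60):
    rows = []
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            rows.append((int(row["timeStamp"]), int(row["elapsed"])))
    if not rows:
        return
    cutoff = min(ts for ts, _ in rows) + warmup_seconds * 1000
    warm = [e for ts, e in rows if ts < cutoff]
    steady = [e for ts, e in rows if ts >= cutoff]
    if warm:
        print(f"warm-up avg: {sum(warm) / len(warm):.0f} ms ({len(warm)} samples)")
    if steady:
        print(f"steady  avg: {sum(steady) / len(steady):.0f} ms ({len(steady)} samples)")

warmup_vs_steady("results.jtl")   # placeholder path
```

If the steady-state average is close to the 3000 ms you see on later runs, the 18000 ms figure is most likely the server warming up rather than a JMeter problem.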

Can I exclude counts from failed webtests in a VS.Net 2010 Loadtest?

I am using Visual Studio 2010 Ultimate to perform load tests. These load tests use recorded web tests.
When running a load test with an increasing number of concurrent users, some steps in my web tests start to fail. The first error is often an internal server error 500. This gives a wrong impression of the average page_load, because these internal server errors are often returned very quickly, in contrast to the generation of a successful response. So, when the load increases, the average page_load drops.
Of course, I need to attend to these internal server errors, but in the meantime, I would like to exclude failed webtests from my measurements.
Does anybody know if this can be done?
Thanks in advance.
It may be possible to run your own query on the test results database that ignores errors, but even that will be inaccurate.
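If you do want to try such a query anyway, a sketch (with that caveat) could look like the following. The table and column names are placeholders - inspect your load test results store (the VS 2010 default database is typically named LoadTest2010) and substitute the real schema; the connection string and run id are also placeholders.

```
# Sketch: average response times per page, excluding failed requests.
# Table and column names are placeholders - adapt them to the actual schema
# of your load test results database.
import pyodbc

CONN_STR = (
    "DRIVER={SQL Server};SERVER=localhost;"
    "DATABASE=LoadTest2010;Trusted_Connection=yes"
)

QUERY = """
SELECT  d.PageName,                      -- placeholder column names
        AVG(d.ResponseTime) AS AvgResponseTime
FROM    PageResults AS d                 -- placeholder table name
WHERE   d.LoadTestRunId = ?
  AND   d.HttpStatusCode < 400           -- drop the fast error samples
GROUP BY d.PageName
ORDER BY AvgResponseTime DESC
"""

def average_without_errors(run_id):
    with pyodbc.connect(CONN_STR) as conn:
        for page, avg_ms in conn.execute(QUERY, run_id):
            print(f"{page}: {avg_ms} ms")

average_without_errors(42)   # placeholder load test run id
```

Even with the failed samples filtered out, the remaining timings were still collected while the server was busy producing those errors, which is why the numbers stay skewed.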
Remember that the page return stats are really only useful when read in conjunction with the load on the hardware.
Essentially, the load test is recording the effect of a given load on your hardware. If your website is returning a large number of 500 error pages quickly, the load on the hardware will be affected and any page stats will reflect the change in server loading.
You will have to investigate the cause of the 500 errors and either fix the issue or report in your load testing results that once a load of 'x' is reached on the servers, the pages 'y' will give an internal server error 500 result instead of the requested page.
This gives the business owners of your app some information with which to decide whether to fix the problem or live with it.
