Getting breakdown of method execution time in jvisualvm - performance

I am profiling a web application with jvisualvm. I can see how long various methods take, for example methodA takes 5 seconds... However, I can't seem to double-click this method to see where the 5 seconds is going. I can't "drill down", so to speak.
How do I achieve this in jvisualvm?
Thanks.

If you hit the 'Snapshot' button in the Sampler or Profiler windows after profiling CPU usage, it will show you a call tree with a summary of the CPU time for each method, along with self-times.

Related

Time elapsed per method seems missing in JMC 7 (method profiling)

I'm using the JMC tool with JFR to profile a Java application.
After making a recording and loading the JFR file, when I go to "Method Profiling" I see the top package, the top class, and the associated stack trace. In the stack trace I can see the number of calls to a method, but I don't see the time elapsed in a method.
Could you tell me what to do to see the time elapsed in a profiled method? (see image)
JFR collects data by sampling thread stacks, so a recording contains no information on how long a method has executed; the number of samples in which a method appears serves as a proxy for where time is spent. The reason JFR uses sampling is to keep the overhead low and to avoid skewing the result by adding instrumentation to the application.

dotTrace shows "Waiting for CPU" while executing multiple ASP.NET MVC actions

I am currently trying to improve the performance of my ASP.NET application. While doing this I have found that when I call the same action multiple times, or different actions within the same controller, through AJAX calls, they take unequal amounts of time. Please refer to the image below.
Timeline of request
On digging in with the dotTrace tool, I found that this difference is reported as "Waiting for CPU", i.e. the task is waiting for thread assignment. How can we optimize this so that all of the same actions take an equal amount of time to execute?
Your CPU is at its maximum capacity. Close unused programs to free up some CPU capacity.

Visual Studio Cloud Load Test Average Test Time Seems Long

I have a WebAPI service, hosted in Azure, that I put together to test throughput. I have it set up to call Task.Delay with a configurable number (e.g. webservice/api/endpoint?delay=500). When I run against the endpoint via Fiddler, everything works as expected: delays, etc.
I created a Load Test using VS Enterprise and used some of my free cloud load testing minutes to slam it with 500 concurrent users over 2 minutes. After multiple runs of the load test, it says the average test time is roughly 1.64 seconds. I have turned off think times for the test.
When I run my request in Fiddler concurrently with the Load test, I am seeing sub-second responses, even when spamming the execute button. My load test is doing effectively the same thing and getting 1.64 second response times.
What am I missing?
Code running in my unit test (which is then called for my load test):
var client = new HttpClient { BaseAddress = new Uri(CloudServiceUrl) };
// GetAsync returns a Task<HttpResponseMessage>; block on the result so the request completes within the test
var response = client.GetAsync($"{AuthAsyncTestUri}&bankSimTime={bankDelay}&databaseSimTime={databaseDelay}").Result;
AuthAsyncTestUri is the endpoint for my cloud-hosted service.
There are several delay(), sleep(), pause(), etc. methods available to a process. These methods cause the thread (or possibly the program or process, for some of them) to pause execution. Calling them from code used in a load test is not recommended; see the bottom of page 187 of the Visual Studio Performance Testing Quick Reference Guide (Version 3.6).
Visual Studio load tests do not have one thread per virtual user. Each operating system thread runs many virtual users. On a four-core computer I have seen a load test using four threads for the virtual users.
Suppose a load test is running on a four-core computer and Visual Studio starts four threads to execute the test cases. Suppose one virtual user calls sleep() or similar. That will suspend that thread, leaving three threads available to execute other virtual user activity. Suppose that four virtual users call sleep() or similar at approximately the same time. That will stop all four threads and no virtual users will be able to execute.
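As an aside (this sketch is not from the original answer), the following MSTest-style test method illustrates the difference, assuming a made-up endpoint URL: Thread.Sleep suspends the worker thread that other virtual users share, while an awaited Task.Delay hands the thread back to the pool while waiting.

using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ThroughputTests
{
    // Hypothetical endpoint; in the question this would be CloudServiceUrl + AuthAsyncTestUri.
    private const string EndpointUrl = "https://example.cloudapp.net/api/endpoint?delay=500";

    [TestMethod]
    public async Task CallEndpoint_WithoutBlockingTheWorkerThread()
    {
        using (var client = new HttpClient())
        {
            // Thread.Sleep(500);   // would suspend the shared load test worker thread
            await Task.Delay(500);  // yields the thread back to the pool while waiting

            var response = await client.GetAsync(EndpointUrl);
            Assert.IsTrue(response.IsSuccessStatusCode);
        }
    }
}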
Responding to the following comment that was added to the question:
I did try running it with a 5 user load, and saw average test times of less than 500 ms, which match what I see in my Fiddler requests. I'm still trying to figure out why the time goes up dramatically for the 500 user test while staying the same for Fiddler requests run in the middle of the 500 user test.
I think that this comment highlights the problem. At a low user load, the Visual Studio load test and the Fiddler test give similar times. At higher loads something between the load test and the server is limiting throughput and causing the slowdown. It would be worth examining the network route between the computer running the tests and the system being tested. Are there any slow segments on that path? Are there any segments that might see the load test as a denial of service attack and hence might slow down the traffic?
Running a test for as little as 2 minutes does not really show how the test runs. The details in the question do not tell how many tests started, how many finished, and how many were abandoned at the end of the two-minute run. It is possible that many test cases were abandoned and that the average time of those that completed was 1.64 seconds.
If you have the results of the problem run then look at the "details" section of the results. Expand the slider below the image to include the whole run. Tick the option (top left corner) to highlight failing tests. I would expect to see a lot of red at the two minute mark for failing tests. However, the two minute run may be too short compared to the sampling interval (in the run settings) to see much.
Running a first test at 500 users tells you very little. It tells you either that the system copes with that load or that it does not. You need to run the test at several different user loads. Then you start to learn where the boundary between working and not working lies. Hence I recommend using a stepped load.
I believe you need at least one more test run to understand what is happening. I suggest doing a run as follows. Set a one minute cool-down period. Set a stepped load: start at 5 users as you know that that works. Increment by 1 user every two seconds until 100 users. That will take 190 seconds. Run for about another minute at that 100 user load. Total of 4 minutes 10 seconds. Call it 4 minutes. Adding in the one minute cool down makes (5 minutes) x (100 VU) = 500 VUM, which is a small portion of the free minutes per month. After the run look at the graphs of average test times. If all is OK on that test then you could try another that ramps up more quickly to say 500 users.

How do I add a WAIT in a Web Performance Test loop?

I am writing a Web Performance Test for a process that will run for an undetermined time, and I have to put a refresh command in a while loop that runs until the process state indicates it is done.
The refresh command consumes about 3 seconds, so I do not want it running constantly in the loop. So I am trying to find a sleep/wait function to pause execution between iterations of the loop.
The only reference I've found is to Thread.Sleep, which seems to do the job.
BUT this method seems to also stop the test's timers. So, however many times the loop runs, and whatever the actual time taken by the process, the test report will only show the cumulative time of the refresh statements.
Is there another method that will not stop the test's timers?
If the refresh is in a loop within the Web Performance Test then set a suitable "think time" on the request. This will pause the test after the response is received. (Think times are normally used to simulate the time a person spends reading a web page and filling in forms etc before the next request is issued.)
Think times are set via the properties of the request. Think times (and also reporting names) for all requests in a test can be viewed and modified using the "Set request detail" command, accessed via the (rightmost) command icon in the web test editor.
Think times can also be set or adjusted in the PreRequest method of a WebTestRequest plugin.
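A minimal sketch of that plugin approach (the class name and the 10-second value are illustrative, not from the original answer); the plugin is then attached to the refresh request in the web test editor:

using Microsoft.VisualStudio.TestTools.WebTesting;

// Request plugin that pauses the loop between refresh requests by setting a
// think time, so the wait does not stop the test's timers.
public class RefreshThinkTimePlugin : WebTestRequestPlugin
{
    public override void PreRequest(object sender, PreRequestEventArgs e)
    {
        // ThinkTime is specified in seconds.
        e.Request.ThinkTime = 10;
    }
}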

About web_reg_find() in LoadRunner

I am trying to measure the time it takes to go from one page to another via the Next button. To do this I start a transaction before pressing the button, press the Next button, and end the transaction when the next page has loaded. Within this transaction I use web_reg_find() to check for specific text that verifies the page.
When I run it in the Controller, the transaction measures 5 seconds; after I modified the transaction and deleted the web_reg_find(), the same transaction measures 3 seconds. Is that normal?
Because I am doing a load test, functionality matters, so the transactions are also important. Is there an alternative way to check the content without hurting the measured performance?
web_reg_find() does some logic based on the response sent from the server and therefore takes time. LoadRunner is aware that this is not actual time that will be perceived by the real user and therefore reports it as "wasted time" for the transaction. If you check the log for this transaction you will see something like this:
Notify: Transaction "login" ended with "Pass" status (Duration: 4.6360 Wasted Time: 0.0062).
This shows the time the transaction took and, out of that time, how much was wasted on LoadRunner internal operations.
Note that when you open the results in Analysis, the transaction times will be reported without the wasted time (i.e. Analysis will report the time as it is perceived by the real user).
The amount of time taken by web_reg_find() also seems unusually long. As web_reg_find() is both memory and CPU bound (holding the page in RAM and running string comparisons), I would look at other possibilities for why it adds two seconds. My hypothesis is that you have a resource-constrained, or oversubscribed, load generator. Look at the performance of a control group for this type of user: 1 user loaded by itself on a load generator. Compare your control group to the behavior of the global group. If you see a deviation then this is due to a local resource constraint which is showing up as slowed virtual users. This would have an impact on your measurement of response time as well.
I deliberately underload my load generators to avoid any possibility of load generator coloration, and I employ a control generator in the group to measure any coloration that does occur.
The time taken by web_reg_find() is counted as wasted time...
