I have an application of my own that increases the CPU usage of a process on Windows (in this case: audiodg.exe, which handles audio in 'some' way). I want to measure the overall CPU performance of this process over a minute or so.
It is possible to attach Visual Studio to this process (run as administrator and just do Attach to Process...), so I can view the CPU and memory performance. However, this isn't very useful. The process is constantly around 1-2% of the total CPU, so the graph doesn't give any interesting visual information.
Moreover, I'm interested in an average over ~1 minute, which the Performance Profiler can't do (I think?).
What is the best way to get accurate average CPU performance data out of a Windows process? Are there any tools or APIs that can get me this data?
I think Visual Studio is not the best tool for this, as it is very heavy on its own.
I would use Windows Performance Recorder, where you can select what you want to record (e.g. CPU usage).
Then you start logging, replicate the issue, and stop the recording.
After that, you can open the log in Windows Performance Analyzer and try to understand why the issue is occurring.
Alternatively, you could take a process memory dump with ProcDump.exe when the CPU spikes above a certain threshold, and then investigate the problem from the dump.
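If you would rather pull the number from code than from a tool, here is a minimal C sketch of the API route (error handling omitted; the PID is assumed to come from Task Manager or elsewhere, and opening audiodg.exe will likely require running elevated). It samples GetProcessTimes at both ends of the interval and averages:

    #include <windows.h>

    static ULONGLONG ToTicks(FILETIME ft)   /* FILETIME -> 100 ns units */
    {
        return ((ULONGLONG)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
    }

    /* Average CPU usage of a process over intervalMs milliseconds, as a
       percentage of total capacity across all logical cores. Sketch only:
       no error handling; pid is the target process (e.g. audiodg.exe). */
    double AverageCpuPercent(DWORD pid, DWORD intervalMs)
    {
        HANDLE h = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid);
        FILETIME c, e, k1, u1, k2, u2;
        SYSTEM_INFO si;
        ULONGLONG wall1, wall2;
        double cpuMs, wallMs;

        GetSystemInfo(&si);                  /* logical processor count */

        wall1 = GetTickCount64();
        GetProcessTimes(h, &c, &e, &k1, &u1);
        Sleep(intervalMs);                   /* e.g. 60000 for one minute */
        GetProcessTimes(h, &c, &e, &k2, &u2);
        wall2 = GetTickCount64();
        CloseHandle(h);

        cpuMs  = (double)(ToTicks(k2) + ToTicks(u2)
                        - ToTicks(k1) - ToTicks(u1)) / 10000.0;
        wallMs = (double)(wall2 - wall1);
        return 100.0 * cpuMs / (wallMs * si.dwNumberOfProcessors);
    }

Calling AverageCpuPercent(pid, 60000) gives you the one-minute average directly, instead of eyeballing a graph that hovers at 1-2%.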
The Performance and Diagnostics Hub in Visual Studio is an amazing feature. I use it for diagnosing memory and high-CPU issues while writing code. However, so far I have not been able to figure out how to use this tool for troubleshooting low-CPU hang scenarios (or wall-clock analysis). Let's say my application takes a long time waiting for a response from the network or from file I/O. Is there any way of determining this from the Diagnostics windows in Visual Studio during a debugging session? I was hoping this analysis could be part of the CPU Analysis section in there.
Like this blog here:
https://blogs.msdn.microsoft.com/devops/2014/02/28/new-cpu-usage-tool-in-the-performance-and-diagnostics-hub-in-visual-studio-2013/
The CPU Usage tool measures the CPU's resources in terms of how much time each core spends executing your code, so it seems it does not provide a feature to diagnose low-CPU hang issues.
Maybe you could think about using another tool, like PerfView (whose wall-clock support is the thread-time analysis described in the post below; I believe collection is enabled with its /ThreadTime option) or the suggestion of magicandre1981.
https://blogs.msdn.microsoft.com/vancem/2012/11/26/wall-clock-time-analysis-using-perfview/
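To make the idea concrete, here is a rough C sketch (an illustration of wall-clock analysis, not what the Diagnostics Hub does internally): compare elapsed wall time against the CPU time the thread actually consumed, and the gap is time spent blocked on I/O, locks, or sleeps:

    #include <windows.h>
    #include <stdio.h>

    static ULONGLONG ToTicks(FILETIME ft)   /* FILETIME -> 100 ns units */
    {
        return ((ULONGLONG)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
    }

    int main(void)
    {
        FILETIME c, e, k1, u1, k2, u2;
        LARGE_INTEGER freq, t1, t2;
        double wallMs, cpuMs;

        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&t1);
        GetThreadTimes(GetCurrentThread(), &c, &e, &k1, &u1);

        Sleep(2000);   /* stand-in for the suspect network/file I/O wait */

        GetThreadTimes(GetCurrentThread(), &c, &e, &k2, &u2);
        QueryPerformanceCounter(&t2);

        wallMs = 1000.0 * (double)(t2.QuadPart - t1.QuadPart)
                        / (double)freq.QuadPart;
        cpuMs  = (double)(ToTicks(k2) + ToTicks(u2)
                        - ToTicks(k1) - ToTicks(u1)) / 10000.0;

        /* Here wall is ~2000 ms while CPU is ~0 ms: a low-CPU "hang". */
        printf("wall: %.1f ms, cpu: %.1f ms, blocked: %.1f ms\n",
               wallMs, cpuMs, wallMs - cpuMs);
        return 0;
    }

A CPU sampler never sees that blocked time, which is exactly why the CPU Usage tool comes up empty on these hangs and thread-time tracing is needed.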
We have a few sites running MVC3 on a Windows 2008 server. We're seeing that on average these sites are each using over 300 MB of memory, plus high CPU. Each site easily goes to 20-25% CPU when it gets requests.
Is this normal?
I know my question is very general, but if we were to spend time on optimization, etc., what should we aim for? What is considered normal in terms of memory and CPU usage for a typical database-driven MVC3 website?
Also, I was told that we should "profile" the application to troubleshoot the high CPU usage. Is this done via Visual Studio, or through some other tools?
thanks for your help in advance,
G.S
Without knowing the details of your application and what it's doing, it's impossible to say what's normal and what isn't. If performance is a problem, you should optimize; if it isn't, you shouldn't :)
Profiling is a general term for measuring the performance of different aspects of your application. You can profile memory, CPU usage, garbage collections and thread use (among other things) using a profiler.
There are several profilers around, such as ANTS, .NET Memory Profiler, and some excellent ones built into Visual Studio. The latter are available in the Pro versions of VS2012 and up.
Using the Windows Performance Recorder, is it possible to generate an ETL file based on tracing a single process? The ETL files generated for all of the processes in the system come out measured in GBs for intervals as short as a couple of minutes.
ETW (kernel event) tracing is system-wide and captures all processes.
I don't think it is possible to record ETW traces that cover just one process (at least not with xperf or wpr). If your traces are too big, then the best tactic is to make sure that the rest of the system is as quiet as possible so that it doesn't contribute too much data.
If the rest of the system is already quiet then the traces are probably big because ETW traces tend to be big. You can use trace compression to make them smaller on disk - see UIforETW for how this works - https://randomascii.wordpress.com/2015/09/24/etw-central/.
If the rest of the system is not already quiet then yes, it probably is contributing to bloat in the traces. Note that it may also be affecting performance, so that data is not irrelevant.
And, if you really do need single-process profiling consider using a different profiler. The Visual Studio profiler does per-process profiling.
I am doing some performance measurement of my code on a Windows box and I am getting dramatically different results between measurements. A quick bit of ad hoc exploration during a slow run showed, in Task Manager, the System Idle Process taking up almost 100% CPU.
Does anyone know what the System Idle Process actually is and what Windows features it may be running?
NB: I am not measuring performance using Task Manager; I just used it to take a look at what else was running during a particularly slow measurement.
Please think before saying this is not programming related and closing the question. I would not ask it unless I thought there were grounds to say it is. In this case I believe it clearly is because it is detrimentally affecting my development and test environments and in order to sort it out I need to know a bit more about it. Programming does not start and end with the writing of the code.
The idle process typically doesn't do any useful work except execute the HLT instruction, which puts that CPU core into a lower power state (C1). However, the fact that your benchmark is not consuming 100% CPU time does open the door for speculation about what is going on.
If your application is single-threaded and your test system is multicore/hyperthreaded/multi-CPU, then you should expect to see around 50% idle CPU time for two cores, 75% for four, etc. This is because the CPU time percentages in Task Manager include all cores. (I believe that older versions of Windows had an option to change this, but I don't see it on Vista.)
If the idle process is consuming a lot of CPU, that may indicate that your application is spending a lot of time sleeping. It might be waiting for data from some external source (e.g. a disk or network). It might be spending a lot of time waiting for synchronization objects (e.g. mutexes or events). It also might be spending a lot of time calling the Sleep() function. Profiling your code should identify where it's spending the time.
Getting fully reproducible benchmark results may require you to disable processor/disk/network-intensive background applications and services (e.g. search indexing, SMS software inventory, virus scanning, Windows Update downloads, IncrediBuild/distcc) or to connect the machine to an isolated network (or to no network at all).
I'm assuming that you wrote a benchmark for your application, and that you're just trying to use Task Manager to diagnose why the benchmark results aren't what you expected. Task Manager isn't an accurate way to measure application performance.
The System Idle Process is a sort of default process that Windows runs on a processor when it has nothing else to schedule. It acts like a housekeeper, doing things like trying to save system power.
If you're measuring the performance of your program, don't use Windows Task Manager. Use Performance Monitor instead (which you can start by typing 'perfmon' at the command line). Or better still, use a profiler.
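If you want the same counter perfmon shows but from code, the PDH API can read it. A minimal C sketch (error handling omitted; "myapp" is a placeholder for your executable's name without .exe):

    #include <windows.h>
    #include <pdh.h>
    #include <stdio.h>
    #pragma comment(lib, "pdh.lib")

    int main(void)
    {
        PDH_HQUERY query;
        PDH_HCOUNTER counter;
        PDH_FMT_COUNTERVALUE value;
        int i;

        /* Note: this counter treats one fully busy core as 100%, so it
           can read above 100 on a multi-core machine. */
        PdhOpenQueryA(NULL, 0, &query);
        PdhAddEnglishCounterA(query, "\\Process(myapp)\\% Processor Time",
                              0, &counter);

        PdhCollectQueryData(query);          /* baseline sample */
        for (i = 0; i < 10; ++i)
        {
            Sleep(1000);
            PdhCollectQueryData(query);      /* rates need two samples */
            PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value);
            printf("%%CPU: %.1f\n", value.doubleValue);
        }
        PdhCloseQuery(query);
        return 0;
    }

One caveat: PdhAddEnglishCounter needs Vista or later; on older systems, PdhAddCounter with a localized counter path is the equivalent, as far as I know.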
If you "system idle process" is taking up 100% then essentially you machine is bored, nothing is going on. If you add up everything going on in task manager, subtract this number from 100%, then you will have the value of "system idle process." Notice it consumes almost no memory at all and cannot be affecting performance.
I am profiling C code in Microsoft VS 2005 on an Intel Core 2 Duo platform.
I measure the time (secs:millisecs) consumed by my function. But I have some doubts about the accuracy of this measurement, as the operating system will not run my application continuously; it schedules other apps/services in between the execution of my code. (Although I have no major applications running while I do the profile run, Windows still has a lot of code of its own that it runs by preempting my app.) Because of all this, I believe the profiling number (time taken by my app to run) is not accurate.
So my question is: is there any way to find out the operating system and scheduling overheads on a typical Windows system (I run Windows XP)? E.g. if my application says it ran for 60 milliseconds, how much of that 60 msec was really used by my app, and how much was it sitting idle, having been pre-empted by some other task scheduled by the OS?
or
At least, is there any ballpark number for such OS overhead, based on your experience from doing something similar?
@Kogus: Even if I run outside the debugger (a standalone app from a command prompt), it could still be preempted by the OS, causing an incorrect measurement of the time consumed by my app. Isn't that so?
-AD
I think you are going to have some problems with the granularity. See the similar questions "GetLocalTime() API time resolution" and "Is gettimeofday() guaranteed to be of microsecond resolution?".
Also, you may want to take a look at the Windows Resource Kit Tools, which include timeit.exe (similar to time on Unix/Linux), to give you elapsed and process times.
Suggestion: try running on a multi-CPU system.
The best way of doing this is a dedicated profiling tool. There are lots out there. I haven't used one for C for a few years; someone else will hopefully be able to give better advice. As you are using Visual Studio 2005, this might be a good place to start:
AQtime, though I've never used it myself.
1 - Put some debug logging in your code (include timestamps of course; a minimal helper is sketched below), and run it outside of the debugger
2 - Run again in the debugger
3 - Repeat many times, to get statistically valid data.
4 - Compare.
If there is a significant difference in the average execution time of the standalone vs. the debugger, then you are right to be suspicious of the OS (or the overhead of the debugger hooks themselves...). If no difference, then don't sweat it.
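For step 1, the timestamp helper might look like the following C sketch (QueryPerformanceCounter is the usual high-resolution clock on Windows; the label argument is just for identifying the log point):

    #include <windows.h>
    #include <stdio.h>

    /* Print a high-resolution timestamp plus a label. Call this at the
       start and end of the code under test, in both the standalone and
       the debugger runs, and diff the resulting logs. */
    void LogStamp(const char *label)
    {
        LARGE_INTEGER freq, now;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&now);
        fprintf(stderr, "%10.3f ms  %s\n",
                1000.0 * (double)now.QuadPart / (double)freq.QuadPart, label);
    }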
Edit0: Obviously the debug messages have some overhead of their own. You may want to leave those in the code even when you are running from the debugger. That way, both the standalone and the debugger are running the very same code.
Edit1: I misunderstood the question. I thought your concern was that, while debugging, the OS might interrupt your app more frequently than in a normal mode of execution. If you want to know how much time your app actually spent working, just compare the elapsed time to the "CPU Time" column in Task Manager.
Edit2: Compare the time returned by GetProcessTimes for your process to the actual execution time. The difference is the time spent by the CPU on somebody else.
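In code, that comparison could look like this C sketch (DoWork() is a placeholder for the function being profiled; GetTickCount is used because it exists on Windows XP, at the cost of roughly 10-16 ms resolution):

    #include <windows.h>
    #include <stdio.h>

    static ULONGLONG ToTicks(FILETIME ft)   /* FILETIME -> 100 ns units */
    {
        return ((ULONGLONG)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
    }

    void DoWork(void)
    {
        /* Placeholder for the code being measured: burn some CPU. */
        volatile unsigned long i;
        for (i = 0; i < 100000000UL; ++i) { }
    }

    int main(void)
    {
        FILETIME c, e, k1, u1, k2, u2;
        DWORD wall1, wall2;
        double wallMs, cpuMs;

        wall1 = GetTickCount();
        GetProcessTimes(GetCurrentProcess(), &c, &e, &k1, &u1);
        DoWork();
        GetProcessTimes(GetCurrentProcess(), &c, &e, &k2, &u2);
        wall2 = GetTickCount();

        wallMs = (double)(wall2 - wall1);
        cpuMs  = (double)(ToTicks(k2) + ToTicks(u2)
                        - ToTicks(k1) - ToTicks(u1)) / 10000.0;

        /* The gap is time the OS gave to other tasks (or spent blocked). */
        printf("wall: %.1f ms, cpu: %.1f ms, others: %.1f ms\n",
               wallMs, cpuMs, wallMs - cpuMs);
        return 0;
    }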