Could anyone suggest a way (other than using Task Manager) to track and log a program's usage of CPU and RAM in order to profile its performance?
I'm working under Windows.
Something generic would be useful, though a more specific solution involving Visual Studio would be even better. I've tried the Performance Wizard, but it doesn't seem to give me the information I need. Thanks.
Process Explorer can be useful.
You can use the perfmon utility to gather various counters.
Well, there are published APIs for that sort of thing. You might want to take a look at WMI and the Win32_Process class.
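For example, you could poll that class from Python with the third-party wmi package (rough sketch; notepad.exe is just a stand-in for whatever process you care about):

```python
# Rough sketch: sample a process's RAM and CPU times through WMI.
# Needs the third-party "wmi" package (pip install wmi);
# "notepad.exe" is only an example target.
import time
import wmi

c = wmi.WMI()
while True:
    for proc in c.Win32_Process(Name="notepad.exe"):
        # WorkingSetSize is in bytes; the CPU times are in 100-ns units
        print(proc.ProcessId, proc.WorkingSetSize,
              proc.UserModeTime, proc.KernelModeTime)
    time.sleep(5)
```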
If you're looking for a command-line program that gets those things for you there is tasklist and wmic. You can parse their output if you're so inclined.
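A quick-and-dirty logger along these lines would do it (untested sketch; the Mem Usage column is formatted like "12,345 K" and its separators vary by locale):

```python
# Rough sketch: log a process's memory usage by parsing "tasklist" CSV output.
# Assumed column order from "tasklist /fo csv": Image Name, PID,
# Session Name, Session#, Mem Usage.
import csv
import subprocess
import time

def mem_usage_kb(image_name):
    out = subprocess.run(
        ["tasklist", "/fo", "csv", "/fi", f"imagename eq {image_name}"],
        capture_output=True, text=True).stdout
    for row in csv.reader(out.splitlines()):
        if row and row[0].lower() == image_name.lower():
            return int(row[4].replace(",", "").rstrip(" K"))
    return None  # process not found

while True:
    print(time.strftime("%H:%M:%S"), mem_usage_kb("notepad.exe"))
    time.sleep(5)
```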
The Windows SDK includes the Windows Performance Toolkit, which tracks CPU, disk, and memory usage over time (along with a ton of other features). It's very handy for tracking down spikes of CPU/memory usage, as well as for tracking down issues like why your laptop won't sleep.
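If memory serves, a typical capture is something like xperf -on DiagEasy to start tracing and xperf -d trace.etl to stop and write the log, which you then open in the toolkit's viewer; check xperf -help for the exact kernel flags.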
How about Intel VTune?
I view measuring performance, and finding performance problems in order to make the program faster, as two distinctly different goals.
For measuring, one can use profilers, or simply timers, to get the job done.
For finding performance problems, I take an entirely different approach.
Related
We have a few sites running MVC3 on a Windows 2008 server. We're seeing that on average these sites are each using over 300 MB of memory, along with high CPU. Each site easily goes to 20-25% CPU when it gets requests.
Is this normal?
I know my question is very general, but if we were to spend time on optimization, etc., what should we aim for? What is considered normal in terms of memory and CPU usage for a typical database-driven MVC3 website?
Also, I was told that we should "profile" the application to troubleshoot the high CPU usage. Is this done via Visual Studio, or through some other tools?
Thanks for your help in advance,
G.S
Without knowing the details of your application and what it's doing, it's impossible to say what's normal and what isn't. If performance is a problem, you should optimize; if it isn't, you shouldn't. :)
Profiling is a general term for measuring the performance of different aspects of your application. You can profile memory, CPU usage, garbage collections, and thread use (among other things) using a profiler.
There are several profilers around, such as ANTS, .NET Memory Profiler, and some excellent ones built into Visual Studio. They are available in the Pro versions of VS2012 and up.
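If I remember the menus right, in VS2012 the built-in profiler is started via Analyze > Start Performance Analysis (Alt+F2), where you can choose between CPU sampling, instrumentation, and .NET memory allocation profiling.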
Does anybody know a good testing tool that can produce a graph of CPU and RAM usage?
For example, I would run an application, and while it is running the testing tool would record CPU and RAM usage and produce a graph as output.
Basically, what I'm trying to test is how heavy a load an application puts on RAM and CPU.
Thanks in advance.
If this is Windows, the easiest way is probably Performance Monitor (perfmon.exe).
You can configure the counters you are interested in (such as Processor Time, Committed Bytes, etc.) and create a Data Collector Set that measures these counters at the desired interval. There are even templates for a basic System Performance Report, or you can add counters for the particular process you are interested in.
You can schedule the time when you want to run the sampling, and you will be able to see the results in PerfMon or export them to a file for further processing.
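Once you have a CSV export (or have converted a binary log with relog, e.g. relog trace.blg -f csv -o trace.csv), post-processing is easy. A minimal sketch, assuming the usual PDH-CSV layout of one timestamp column followed by one column per counter (the file name and column index are placeholders):

```python
# Rough sketch: plot the first counter from a PerfMon CSV export.
import csv
from datetime import datetime

import matplotlib.pyplot as plt

times, values = [], []
with open("trace.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)          # counter paths, e.g. "\\HOST\Memory\Committed Bytes"
    for row in reader:
        if not row or not row[1]:  # skip blank samples
            continue
        times.append(datetime.strptime(row[0], "%m/%d/%Y %H:%M:%S.%f"))
        values.append(float(row[1]))

plt.plot(times, values)
plt.ylabel(header[1])
plt.xlabel("Time")
plt.show()
```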
Video tutorial for the basics: http://www.youtube.com/watch?v=591kfPROYbs
A good sample showing how to monitor SQL:
http://www.brentozar.com/archive/2006/12/dba-101-using-perfmon-for-sql-performance-tuning/
LoadRunner is the best I can think of, but it's very expensive too! Depending on what you are trying to do, there might be cheaper alternatives.
Any tool which can hook into the standard Windows or 'NIX system utilities can do this. This has been a de facto feature set on just about every commercial tool for the past 15 years (HP, IBM, Micro Focus, etc.). Some of the web-only commercial tools (but not all) and the hosted services offer this as well. For the hosted services you will generally need to punch a hole through your firewall so they can access the hosts for monitoring purposes.
On the open source front this is a totally mixed bag. Some have it, some don't. Some support one platform but not others (e.g. Windows but not 'NIX, or vice versa).
What tools are you using? It is unfortunately common for people to have performance tools in use and not be aware of their existing toolset's monitoring capabilities.
All of the major commercial performance testing tools have this capability, as well as a fair number of the open source ones. The ability to integrate monitor data with response time data is key to the identification of bottlenecks in the system.
If you have a commercial tool and your staff is telling you that it cannot be done then what they are really telling you is that they don't know how to do this with the tool that you have.
It can be done using JMeter: once you install the agent on the target machine, you just need to add the PerfMon monitor to your test plan.
It will produce two result files: the perfmon file and the requests log.
You could also build a plot that compares resource consumption to the load and throughput. The throughput stops increasing when some resource's capacity is exceeded; in such a plot you would see CPU time increase as the load increases.
JMeter perfmon plugin: http://jmeter-plugins.org/wiki/PerfMon/
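As a rough sketch of that kind of plot (the column layouts here are assumptions, so adjust them to what your setup actually writes: a JMeter CSV log with a timeStamp column in milliseconds, and a PerfMon plugin file whose first two columns are a millisecond timestamp and the metric value):

```python
# Rough sketch: overlay request throughput and a monitored resource.
# File names and column layouts are assumptions; check your own output.
import csv
from collections import Counter

import matplotlib.pyplot as plt

# Requests per second, binned from the JMeter request log
hits = Counter()
with open("requests.jtl", newline="") as f:
    for row in csv.DictReader(f):
        hits[int(row["timeStamp"]) // 1000] += 1

# Resource samples from the PerfMon plugin file
cpu_t, cpu_v = [], []
with open("perfmon.csv", newline="") as f:
    for row in csv.reader(f):
        cpu_t.append(int(row[0]) // 1000)
        cpu_v.append(float(row[1]))

t0 = min(min(hits), min(cpu_t))
xs = sorted(hits)
plt.plot([t - t0 for t in xs], [hits[t] for t in xs], label="Throughput (req/s)")
plt.plot([t - t0 for t in cpu_t], cpu_v, label="CPU %")
plt.xlabel("Elapsed seconds")
plt.legend()
plt.show()
```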
I know this is an old thread, but I was looking for the same thing today, and as I did not find anything that was simple to use and produced graphs, I made this helper program for ApacheBench:
https://github.com/juanluisbaptiste/apachebench-graphs
It will run apachebench and plot the results and percentile files using gnuplot.
I hope it helps someone.
I recently got a shiny new development workstation. The only disadvantage of this is that the desktop apps I'm developing now run very, very fast, and so I fear that parts of the code that would be annoyingly slow on end users' machines will go unnoticed during my testing.
Is there a good way to slow down an application for testing? I've tried searching around, but all of the results I've been able to find seem pretty fiddly to set up (e.g., manually setting up a high-priority CPU-bound task on the same CPU core as the target app, or running a background process that rapidly interrupts and resumes the target app), and I don't know if the end result is actually a good representation of running on a slower computer (with its slower CPU, slower RAM, slower disk I/O...).
I don't think that this is a job for a profiler; I'm interested in the user's perception of end-to-end performance rather than in where the time goes for particular operations.
Set up a virtual machine, give it as little RAM as needed, and you can also have it use 1, 2, or more CPUs. I like VirtualBox myself. Install your app and test with different RAM configs.
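From the command line (flag names from memory, so double-check with VBoxManage --help), something like VBoxManage modifyvm "TestVM" --memory 512 --cpus 1 --cpuexecutioncap 50 should pin the VM to 512 MB of RAM, one CPU, and half of that CPU's cycles.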
Personally, I'd get an old used crappy computer that is typical of what the users have and test on that. It should be cheap and you will see pretty fast how bad things are.
I think the only way to deal with this is through proper end-user testing, i.e. get yourself a "typical" system for testing and use that to identify any perceptible performance bottlenecks.
You can try out either Virtual PC or VMware Player/Workstation, load an OS onto it, and then throttle back the resources. I know that with any of those tools you can reduce the memory to whatever you'd like. You can also specify the number of cores you want to use. You might even be able to adjust the clock speed in VMware Workstation... I'm not sure.
I upvoted SQLMenace's answer, but I also think that profiling needs to be mentioned, no matter how quickly the code is executing - you'll still see what's taking the most time. If you find yourself with some free time, I think profiling and investigating the results is a good way to spend it.
I'm trying to reproduce a bug that seems to appear when a user is using up a bunch of RAM. What's the best way to either limit the available RAM the computer can use, or fill most of it up? I'd prefer to do this without physically removing memory and without running a bunch of arbitrary, memory-intensive programs (e.g., Photoshop, Quake, etc.).
Use a virtual machine and set resource limits on it to emulate the conditions that you want.
VMware is one of the leaders in this area, and they have the free VMware Player that lets you do this.
I'm copying my answer from a similar question:
If you are testing a native/unmanaged/C++ application, you can use AppVerifier and its Low Resource Simulation setting, which will use fault injection to simulate errors in memory allocations (among many other things). It's also really useful for finding a ton of other subtle problems that often lead to application crashes.
You can also use consume.exe, which is part of the Microsoft Windows SDK for Windows 7 and .NET Framework 3.5 Service Pack 1, to easily use up a lot of memory, disk space, CPU time, page file, or kernel pool and see how your application handles the lack of available resources. (Does it crash? How is the performance affected? etc.)
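If I remember the syntax right, it's something like consume.exe -physical-memory -time 60 to hog physical memory for a minute; run it without arguments to see the exact resource names it accepts.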
Use either a job object or ulimit(1).
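On the ulimit side, a minimal Unix-only sketch (./myapp is a placeholder; on Windows the equivalent would be a job object with a process memory limit set):

```python
# Rough sketch: cap a child process's address space before launching it,
# which is what "ulimit -v" does. Unix-only; "./myapp" is a placeholder.
import resource
import subprocess

def limit_memory():
    _, hard = resource.getrlimit(resource.RLIMIT_AS)
    # 256 MB soft cap on the virtual address space
    resource.setrlimit(resource.RLIMIT_AS, (256 * 1024 * 1024, hard))

subprocess.run(["./myapp"], preexec_fn=limit_memory)
```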
Create a virtual machine and set the RAM to what you need.
The one I use is VirtualBox from Sun.
http://www.virtualbox.org/
It is easy to set up.
If you are developing in Java, you can set the memory limits for the JVM at startup.
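The main one is -Xmx: for example, java -Xmx64m MyApp (MyApp being your own main class) caps the heap at 64 MB, so the JVM will throw OutOfMemoryError rather than grow past it; -Xms sets the initial heap size.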
I basically have a Unix process running, and it is doing some heavy processing as well as outputting data over the network. I was wondering what system calls are used to interact with the networking layer.
I would like to measure the performance metrics of this process: CPU usage and network usage. I am not sure if this process is blocked because it is writing to the networking layer too fast, or if it is spending too much time executing its own code.
Any suggestions?
Thanks!
Which Unix? Solaris, FreeBSD, and OS X have DTrace; Linux has OProfile. All of them have tcpdump for analyzing the network flow.
What you really need is a profiler. That way, you'll be able to see which parts of your code are taking the most time.
Try http://oprofile.sourceforge.net/, or a specific profiler for your toolchain.
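With a recent OProfile that typically means running operf ./yourprogram and then opreport to see where the time went (older releases used the opcontrol script instead).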
As a quick measurement you can try running your process under strace to see which system calls it makes and see live how long they take.
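For example, strace -c -p <pid> attaches to a running process and prints a summary table of system call counts and total times when you interrupt it, strace -T shows the time spent in each individual call, and -e trace=network restricts the output to networking calls such as send and recv.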
I suggest Valgrind (specifically its Callgrind tool), which is another profiler.
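Something like valgrind --tool=callgrind ./yourprogram writes a callgrind.out.<pid> file that you can browse with KCachegrind; just be aware that the program runs many times slower under it.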