Is QueryUnbiasedInterruptTime monotonic? - windows

I'm trying to find a decent replacement on Windows for clock_gettime(CLOCK_MONOTONIC) and mach_absolute_time.
GetTickCount is annoying because it's deliberately biased: when the system resumes from suspend or hibernation, it seems Windows works out how long it was suspended (from the motherboard's clock) and then adds that to the tick count, to make it look as though the count carried on ticking while the system was powered down.
QueryUnbiasedInterruptTime is friendlier: it's a simple, seemingly-reliable counter that just returns the time the system has been running since boot.
Now, I've experimented with QueryUnbiasedInterruptTime to see what happens when the computer enters each of: sleep, hybrid sleep, and hibernation. It appears to be correctly updated so that it stays monotonic (i.e., the system stashes it before hibernation and restores it so that applications aren't confused).
But, there's a note here on the Python dev pages (which, by the way, are a very helpful resource for time function reference!):
QueryUnbiasedInterruptTime() is not monotonic.
I presume someone did a test to determine that. Is that information accurate? If so, out of interest I wonder what you have to do to get the return value to go backwards.

QueryUnbiasedInterruptTime is guaranteed to be monotonic. The interrupt time has no skew added to it, so it always ticks forward although the rate may sometimes differ a little from the wall clock.
I couldn't find any good references for this in MSDN, but I'm now convinced that Hans is right in his comment; the CPython developers must have simply got confused.
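For anyone who wants to repeat the experiment, here's a minimal sketch of the kind of before/after check described above. QueryUnbiasedInterruptTime reports 100-nanosecond units and requires Windows 7 or later; the Sleep call is just a placeholder for whatever you're timing.
    #include <windows.h>
    #include <cstdio>
    int main() {
        ULONGLONG t0 = 0, t1 = 0;
        QueryUnbiasedInterruptTime(&t0);  // 100-ns units since boot, excluding time spent suspended
        Sleep(1000);                      // stand-in for whatever you are timing (or a suspend/resume)
        QueryUnbiasedInterruptTime(&t1);
        printf("unbiased elapsed: %.3f ms\n", (t1 - t0) / 10000.0);
        return 0;
    }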

The main problem with GetTickCount is the value being only 32 bits, i.e. it wraps back to zero after roughly 49.7 days of uptime.
You probably need QueryPerformanceCounter API.
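If the 32-bit wrap is the only concern, GetTickCount64 (Vista and later) is the drop-in fix; note it still behaves like GetTickCount with respect to the suspend bias described in the question. A minimal sketch:
    #include <windows.h>
    #include <cstdio>
    int main() {
        ULONGLONG start = GetTickCount64();  // milliseconds since boot, 64-bit, no ~49.7-day wrap
        Sleep(250);                          // stand-in for real work
        printf("elapsed: %llu ms\n", GetTickCount64() - start);
        return 0;
    }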

Related

Which timing method should I use to measure changes in time

After delving into VBA benchmarking (see also), I'm not satisfied those answers go into sufficient detail. From a similar question about timing in Go, I see there is a difference between measuring absolute time and changes in time. For absolute time, a "wall clock" should be used, which can be synchronised between machines using the Network Time Protocol, for example.
Meanwhile "Monatonic Clocks" should be used to measure differences in time, as these are not subject to leap seconds or (according to that linked go answer) changes in frequency of the clock.
Have I got that right? Is there anything else to consider?
Assuming those definitions are correct, which category do each of these clocks belong to, or in other words, which of these clocks will give me the most accurate measurement of changes in time:
VBA Time
VBA Timer
WinApi GetTickCount
WinApi GetSystemTimePreciseAsFileTime
WinApi QueryPerformanceFrequency + QueryPerformanceCounter
Or is it something else?
I may be overlooking other approaches. I say this because some languages like Java get time in nanoseconds rather than microseconds; how is this possible? Surely the Windows API will tap into the most accurate hardware timer available, which gives microsecond resolution. What's Java doing, and can I copy that?
PS, I have no idea how to tag this, please add as you think appropriate
QueryPerformanceFrequency and QueryPerformanceCounter are going to give you the highest resolution timer. They essentially wrap the RDTSC instruction, which counts elapsed CPU cycles.
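For measuring changes in time, a rough sketch of the QueryPerformanceFrequency/QueryPerformanceCounter pairing looks like this (written in C++, though the same two calls can also be Declared and used from VBA; the Sleep call is just a placeholder for the code being measured):
    #include <windows.h>
    #include <cstdio>
    int main() {
        LARGE_INTEGER freq, start, stop;
        QueryPerformanceFrequency(&freq);  // counts per second, fixed at boot
        QueryPerformanceCounter(&start);
        Sleep(100);                        // placeholder for the code being measured
        QueryPerformanceCounter(&stop);
        double ms = (stop.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;
        printf("elapsed: %.4f ms\n", ms);
        return 0;
    }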

What is the max time a function call should take to run in 2015?

I have a function that needs to run on every keypress and I want to know if it is doing too much work. To find that out I would time it. If it is less than a millisecond I would have to iterate over it 1k - 1 million times then divide by that amount.
While this may seem broad I'm going to ask anyway: what is the maximum amount of time a function should take to run in 2015? And is there a standard?
In video games, if you want to hit 30 frames per second then you have 33 milliseconds to play with, since 1 second equals 1000 milliseconds and 1000 ms / 30 frames ≈ 33 milliseconds per frame. If you aim for 60 frames per second you only have about 16 milliseconds to work with. So at what point would you say to your fellow software developer that their function takes too long? After 1 millisecond? 100 milliseconds?
For example, if you were to use jQuery to find an element on a page of 1000 elements, how long should that function take? Is 100 ms too long? Is 1 ms too long? What standards do people go by? I've heard that database administrators have a standard search time for queries.
Update:
I really don't know how to word this question the way I meant it to. The reason I ask is because I do not know if I need to optimize a function or just move on. And if I do time a function I don't know if it's fast or slow. So, I'm going to ignore it if it's less than a millisecond and work on it if it's longer than a millisecond.
Update 2:
I found a post that describes a scenario. Look at the numbers in this question. It takes 3ms to query the database. If I was a database administrator I would want to know if that is taking too long to scale to a million users. I would want to know how long it should take to connect to the database or perform a query before adding another server to help load balance.
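For reference, the iterate-and-divide timing described in this question could look roughly like the following sketch. It's written in C++ with std::chrono, but the idea is language-agnostic; workUnderTest and the iteration count are made-up placeholders you would replace with your own function and a count large enough to push the total well above a millisecond.
    #include <chrono>
    #include <cstdio>
    // Hypothetical stand-in for the function being timed.
    static void workUnderTest() { /* ... */ }
    int main() {
        using clk = std::chrono::steady_clock;
        const int iterations = 100000;   // tune until the total run is comfortably measurable
        auto start = clk::now();
        for (int i = 0; i < iterations; ++i)
            workUnderTest();
        std::chrono::duration<double, std::milli> total = clk::now() - start;
        printf("average: %.6f ms per call\n", total.count() / iterations);
        return 0;
    }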
Okay. I think you're asking the wrong question. Or maybe thinking about the problem in a way that's not likely to yield productive results.
There's absolutely no rule that says, "your function should return after no more than X milliseconds". There are plenty of robust web applications that utilize functions that may not return for 250 milliseconds. That can be okay, depending on the context.
And, keep in mind that a function that runs for, say 3 milliseconds, on your dev machine, may run much faster or much slower on someone else's machine.
But here are some tips to get you thinking a little bit more clearly:
1) It's all about the user.
Really (and I kind of hesitate to say this because someone is going to take me too literally and either write bad code, or start a flame war with me), but as long as the performance of your code doesn't affect the user experience, your functions can take as long as you want to do their business. I'm not saying you should be lazy in your code optimization; I'm saying as long as the user doesn't perceive any delay, you don't really need to stress about it.
2) Ask yourself two questions:
a) How often is the function going to run? and b) How many times is it going to run in quick succession, with no breaks in between (i.e. is it going to run in a loop)?
In your example of calling a function every time the user types a key, you can base the decision about "how long is too long to finish a function" on how often the user is going to hit a key. Again, if it doesn't fuss with the user's ability to use your application effectively, you're wasting time haggling over 3 milliseconds.
On the other hand, if your function is going to get called inside a loop that runs 100,000 times, then you definitely want to spend some time making it as lean as possible.
3) Utilize promises where appropriate.
Javascript promises are a nice feature (although they're not unique to javascript). Promises are a way of saying, "listen, I don't know how long this function is going to take. Ima go start working on it, and I'll just let you know whenever I'm finished." Then, the rest of your code can keep on executing, and whenever the promise is fulfilled, you are notified and can do something at that point.
The most common example of promises is the AJAX design pattern. You never know how long a call to the back end might take, so you just say "alright, let me know when the backend responds with some useful information".
Read more about promises here
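The concept isn't tied to JavaScript either. As a rough analogue (not the promises discussed above, just the equivalent "start now, collect later" idea), here's a hedged C++ sketch using std::async and std::future, where fetchFromBackend is a made-up stand-in for an AJAX-style call of unknown duration:
    #include <chrono>
    #include <cstdio>
    #include <future>
    #include <thread>
    // Hypothetical slow call standing in for a backend request of unknown duration.
    static int fetchFromBackend() {
        std::this_thread::sleep_for(std::chrono::milliseconds(300));
        return 42;
    }
    int main() {
        // Kick the work off without knowing how long it will take.
        std::future<int> pending = std::async(std::launch::async, fetchFromBackend);
        // ... the rest of the program keeps executing here ...
        printf("backend said: %d\n", pending.get());  // collect the result whenever it's ready
        return 0;
    }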
4) Benchmarking for fun and profit
As mentioned above, sometimes you really do need to shave off every last miserable millisecond. In those cases I've found jsperf.com to be very useful. They have made it easy to quickly test two pieces of code to see which one runs faster. There are tons of other benchmarking tools, but I like that one a lot. (I realize this is a bit of a tangent to the original post, but at some point somebody else is going to come along and read this, so I feel it's relevant.)
After all is said and done, remember to keep relating this question to the user. Optimize for the user, not for your ego, or for some arbitrary number.

Is there something like clock() that works better for parallel code?

So I know that clock() measures CPU time consumed (which gets summed across threads), and thus isn't very good for measuring elapsed time, and I know there are functions like omp_get_wtime() for getting the wall time, but it is frustrating for me that the wall time varies so much, and I was wondering if there was some way to measure distinct clock cycles (counting a cycle only once even if more than one thread executed during it). It has to be something relatively simple/native. Thanks
Are you sure that taking time measurements will not work for you? Keep in mind you can only measure to so many milliseconds, depending upon the OS.
See FreeMemory's answer to this question for RDTSC if you're using x86, which I've tested and it seems to work fine on my system (Mac), but see my answer to this question. Also see the criticism of RDTSC here.
It's not usually worth it to get down to too low a level of detail, though; other bits and pieces of work the computer needs to do will use up clock cycles, so it will vary depending on load. I find omp_get_wtime() sufficient, though I need to put my code in an extra loop to make sure it takes about a second to ensure consistent results from run to run.
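A minimal sketch of that "extra loop" approach, assuming OpenMP is enabled at compile time (e.g. with -fopenmp); parallelWork and the repetition count are made-up placeholders you would tune so the whole measurement takes around a second:
    #include <omp.h>
    #include <cstdio>
    // Hypothetical stand-in for the parallel region being measured.
    static void parallelWork() {
        #pragma omp parallel
        {
            volatile double x = 0.0;
            for (int i = 0; i < 100000; ++i) x += i * 0.5;
        }
    }
    int main() {
        const int repetitions = 1000;    // tune so the whole measurement takes about a second
        double start = omp_get_wtime();  // wall-clock time in seconds
        for (int r = 0; r < repetitions; ++r)
            parallelWork();
        double elapsed = omp_get_wtime() - start;
        printf("%.6f s per repetition\n", elapsed / repetitions);
        return 0;
    }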

When is the optimization really worth the time spent on it?

After my last question, I want to understand when an optimization is really worth the time a developer spends on it.
Is it worth spending 4 hours to have queries that are 20% quicker? Yes, no, maybe, yes if...?
Is 'wasting' 7 hours to switch a task to another language to save about 40% of the CPU usage 'worth it'?
My normal iteration for a new project is:
1. Understand what the customer wants, and what he needs;
2. Plan the project: what languages and where, database design;
3. Develop the project;
4. Test and bug-fix;
5. Final analysis of the running project and final optimizations;
6. If the project requires it, a further analysis on the real-life usage of the resources followed by further optimization;
"Write good, maintainable code" is implied.
Obviously, the big 'optimization' part happens at point #2, but often when reviewing the code after the project is over I find some sections that, even if they do their job well, could be improved. This is the rationale for point #5.
To give a concrete example of the last point, a simple example is when I expect 90% of queries to be SELECTs and 10% to be INSERT/UPDATEs, so I load the DB table up with indexes. But after 6 months, I see that in real life there are 10% SELECTs and 90% INSERT/UPDATEs, so the query speed is not optimized. This is the first example that comes to my mind (and obviously this is more a 'patch' to an initial mis-design than an optimization ;).
Please note that I'm a developer, not a business-man - but I like to have a clear conscience by giving my clients the best, where possible.
I mean, I know that if I lose 50 hours to gain 5% of an application's total speed-up, and the application is used by 10 users, then it maybe isn't worth the time... but what about when it is?
When do you think that an optimization is crucial?
What formula do you usually apply, aware that the time spent optimizing (and the final gain) is not always quantifiable on paper?
EDIT: sorry, but I can't accept an answer like 'until people complain about it, optimization is not needed'; it can be a business view (questionable, imho), but not a developer's or (imho too) a common-sense answer. I know, this question is really subjective.
I agree with Cheeso, the performance optimization should be deferred until after some analysis of the real-life usage and load of the project, but a small'n'quick optimization can be done immediately after the project is over.
Thanks to all ;)
YAGNI. Unless people complain, a lot.
EDIT : I built a library that was slightly slower than the alternatives out there. It was still gaining usage and share because it was nicer to use and more powerful. I continued to invest in features and capability, deferring any work on performance.
At some point, there were enough features and performance bubbled to the top of the list, so I finally spent some time working on perf improvement, but only after considering the effort for a long time.
I think this is the right way to approach it.
There are (at least) two categories of "efficiency" to mention here:
UI applications (and their dependencies), where the most important measure is the response time to the user.
Batch processing, where the main indicator is total running time.
In the first case, there are well-documented rules about response times. If you care about product quality, you need to keep response times short. The shorter the better, of course, but the breaking points are about:
100 ms for an "immediate" response; animation and other "real-time" activities need to happen at least this fast;
1 second for an "uninterrupted" response. Any more than this and users will be frustrated; you also need to start thinking about showing a progress screen past this point.
10 seconds for retaining user focus. Any worse than this and your users will be pissed off.
If you're finding that several operations are taking more than 10 seconds, and you can fix the performance problems with a sane amount of effort (I don't think there's a hard limit but personally I'd say definitely anything under 1 man-month and probably anything under 3-4 months), then you should definitely put the effort into fixing it.
Similarly, if you find yourself creeping past that 1-second threshold, you should be trying very hard to make it faster. At a minimum, compare the time it would take to improve the performance of your app with the time it would take to redo every slow screen with progress dialogs and background threads that the user can cancel - because it is your responsibility as a designer to provide that if the app is too slow.
But don't make a decision purely on that basis - the user experience matters too. If it'll take you 1 week to stick in some async progress dialogs and 3 weeks to get the running times under 1 second, I would still go with the latter. IMO, anything under a man-month is justifiable if the problem is application-wide; if it's just one report that's run relatively infrequently, I'd probably let it go.
If your application is real-time - graphics-related for example - then I would classify it the same way as the 10-second mark for non-realtime apps. That is, you need to make every effort possible to speed it up. Flickering is unacceptable in a game or in an image editor. Stutters and glitches are unacceptable in audio processing. Even for something as basic as text input, a 500 ms delay between the key being pressed and the character appearing is completely unacceptable unless you're connected via remote desktop or something. No amount of effort is too much for fixing these kinds of problems.
Now for the second case, which I think is mostly self-evident. If you're doing batch processing then you generally have a scalability concern. As long as the batch is able to run in the time allotted, you don't need to improve it. But if your data is growing, if the batch is supposed to run overnight and you start to see it creeping into the wee hours of the morning and interrupting people's work at 9:15 AM, then clearly you need to work on performance.
Actually, you really can't wait that long; once it fails to complete in the required time, you may already be in big trouble. You have to actively monitor the situation and maintain some sort of safety margin - say a maximum running time of 5 hours out of the available 6 before you start to worry.
So the answer for batch processes is obvious. You have a hard requirement that the batch must finish within a certain time. Therefore, if you are getting close to the edge, performance must be improved, regardless of how difficult/costly it is. The question then becomes what is the most economical means of improving the process?
If it costs significantly less to just throw some more hardware at the problem (and you know for a fact that the problem really does scale with hardware), then don't spend any time optimizing, just buy new hardware. Otherwise, figure out what combination of design optimization and hardware upgrades is going to get you the best ROI. It's almost purely a cost decision at this point.
That's about all I have to say on the subject. Shame on the people who respond to this with "YAGNI". It's your professional responsibility to know or at least find out whether or not you "need it." Assuming that anything is acceptable until customers complain is an abdication of this responsibility.
Simply because your customers don't demand it doesn't mean you don't need to consider it. Your customers don't demand unit tests, either, or even reasonably good/maintainable code, but you provide those things anyway because it is part of your profession. And at the end of the day, your customers will be a lot happier with a smooth, fast product than with any of those other developer-centric things.
Optimization is worth it when it is necessary.
If we have promised the client response times on holiday package searches that are 5 seconds or less and that the system will run on a single Oracle server (of whatever spec) and the searches are taking 30 seconds at peak load, then the optimization is definitely worth it because we're not going to get paid otherwise.
When you are initially developing a system, you (if you are a good developer) are designing things to be efficient without wasting time on premature optimization. If the resulting system isn't fast enough, you optimize. But your question seems to be suggesting that there's some hand-wavey additional optimization that you might do if you feel that it's worth it. That's not a good way to think about it because it implies that you haven't got a firm target in mind for what is acceptable to begin with. You need to discuss it with the stakeholders and set some kind of target before you start worrying about what kind of optimizations you need to do.
Like everyone said in the other answers, when it makes monetary sense to change something, then it needs changing. In most cases good enough wins the day. If the customers aren't complaining then it is good enough. If they are complaining then fix it enough so they stop complaining. Agile methodologies will give you some guidance on how to know when enough is enough. Who cares if something is using 40% more CPU than you think it needs to be; if it is working and the customers are happy then it is good enough. Really simple: get it working and maintainable and then wait for complaints that probably will never come.
If what you are worried about was really a problem, NO ONE would ever have started using Java to build mission critical server side applications. Or Python or Erlang or anything else that isn't C for that matter. And if they did that, nothing would get done in a time frame to even acquire that first customer that you are so worried about losing. You will know well in advance that you need to change something well before it becomes a problem.
Good posting everyone.
Have you looked at the unnecessary usage of transactions for simple SELECT(s)? I got burned on that one a few times... I also did some code cleanup and found MANY graphs being returned for maybe 10 records needed.... on and on... sometimes it's not YOUR code per se, but someone cutting corners.... Good luck!
If the client doesn't see a need to do performance optimization, then there's no reason to do it.
Defining a measurable performance SLA with the client (e.g., 95% of queries complete in under 2 seconds) near the beginning of the project lets you know whether you're meeting that goal, or whether you have more optimization to do. Performance testing at current and estimated future loads gives you the data that you need to see if you're meeting the SLA.
Optimization is rarely worth it before you know what needs to be optimized. Always remember that if I/O is basically idle and CPU is low, you're not getting anything out of the computer. Obviously you don't want the CPU pegged all the time and you don't want to be running out of I/O bandwidth, but realize that trying to have the computer basically idle all day while it performs intense operations is unrealistic.
Wait until you start to reach a predefined threshold (80% utilization is the mark I usually use, others think that's too high/low) and then optimize at that point if necessary. Keep in mind that the best solution may be scaling up or scaling out and not actually optimizing the software.
Use Amdahl's law (see the formula below). It shows you the overall improvement you get when optimizing a certain part of a system.
Also: If it ain't broke, don't fix it.
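For reference, Amdahl's law can be stated as follows: if the part you optimize accounts for a fraction p of the total running time, and you make that part s times faster, the overall speedup is
    S = \frac{1}{(1 - p) + p/s}
so, for example, doubling the speed of something that takes 20% of the running time (p = 0.2, s = 2) gives S = 1/(0.8 + 0.1) ≈ 1.11, i.e. roughly an 11% overall improvement.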
Optimization is worth the time spent on it when you get good speedups for spending little time in optimizing (obviously). To get that, you need tools/techniques that lead you very quickly to the code whose optimization yields the most benefit.
It is common to think that the way to find that code is by measuring the time spent by functions, but to my mind that only provides clues - you still have to play detective. What takes me straight to the code is stackshots. Here is an example of a 40-times speedup, achieved by finding and fixing several problems. Others on SO have reported speedup factors from 7 to 60, achieved with little effort.*
*(7x: Comment 1. 60x: Comment 30.)

One could use a profiler, but why not just halt the program? [closed]

If something is making a single-thread program take, say, 10 times as long as it should, you could run a profiler on it. You could also just halt it with a "pause" button, and you'll see exactly what it's doing.
Even if it's only 10% slower than it should be, if you halt it more times, before long you'll see it repeatedly doing the unnecessary thing. Usually the problem is a function call somewhere in the middle of the stack that isn't really needed. This doesn't measure the problem, but it sure does find it.
Edit: The objections mostly assume that you only take 1 sample. If you're serious, take 10. Any line of code causing some percentage of wastage, like 40%, will appear on the stack on that fraction of samples, on average. Bottlenecks (in single-thread code) can't hide from it.
EDIT: To show what I mean, many objections are of the form "there aren't enough samples, so what you see could be entirely spurious" - vague ideas about chance. But if something of any recognizable description, not just being in a routine or the routine being active, is in effect for 30% of the time, then the probability of seeing it on any given sample is 30%.
Then suppose only 10 samples are taken. The number of times the problem will be seen in 10 samples follows a binomial distribution, and the probability of seeing it 0 times is .028. The probability of seeing it 1 time is .121. For 2 times, the probability is .233, and for 3 times it is .267, after which it falls off. Since the probability of seeing it fewer than two times is .028 + .121 = .149, the probability of seeing it two or more times is 1 - .149 = .851. The general rule is: if you see something you could fix on two or more samples, it is worth fixing.
In this case, the chance of seeing it in 10 samples is about 85%. If you're in the 15% who don't see it, just take more samples until you do. (If the number of samples is increased to 20, the chance of seeing it two or more times increases to more than 99%.) So it hasn't been precisely measured, but it has been precisely found, and it's important to understand that it could easily be something that a profiler could not actually find, such as something involving the state of the data, not the program counter.
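For concreteness, the figures above fall straight out of the binomial distribution with n = 10 samples and a per-sample hit probability of p = 0.3:
    P(k) = \binom{10}{k}(0.3)^k (0.7)^{10-k}, \qquad P(0) = 0.7^{10} \approx .028, \quad P(1) = 10(0.3)(0.7)^{9} \approx .121,
    P(k \ge 2) = 1 - .028 - .121 \approx .85
and repeating the calculation with n = 20 gives P(k \ge 2) \approx .992, the "more than 99%" quoted above.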
On Java servers it's always been a neat trick to do 2-3 quick Ctrl-Breaks in a row and get 2-3 thread dumps of all running threads. Simply looking at where all the threads "are" may extremely quickly pinpoint where your performance problems are.
This technique can reveal more performance problems in 2 minutes than any other technique I know of.
Because sometimes it works, and sometimes it gives you completely wrong answers. A profiler has a far better record of finding the right answer, and it usually gets there faster.
Doing this manually can't really be called "quick" or "effective", but there are several profiling tools which do this automatically; also known as statistical profiling.
Callstack sampling is a very useful technique for profiling, especially when looking at a large, complicated codebase that could be spending its time in any number of places. It has the advantage of measuring the CPU's usage by wall-clock time, which is what matters for interactivity, and getting callstacks with each sample lets you see why a function is being called. I use it a lot, but I use automated tools for it, such as Luke Stackwalker and OProfile and various hardware-vendor-supplied things.
The reason I prefer automated tools over manual sampling for the work I do is statistical power. Grabbing ten samples by hand is fine when you've got one function taking up 40% of runtime, because on average you'll get four samples in it, and always at least one. But you need more samples when you have a flat profile, with hundreds of leaf functions, none taking more than 1.5% of the runtime.
Say you have a lake with many different kinds of fish. If 40% of the fish in the lake are salmon (and 60% "everything else"), then you only need to catch ten fish to know there's a lot of salmon in the lake. But if you have hundreds of different species of fish, and each species is individually no more than 1%, you'll need to catch a lot more than ten fish to be able to say "this lake is 0.8% salmon and 0.6% trout."
Similarly in the games I work on, there are several major systems each of which call dozens of functions in hundreds of different entities, and all of this happens 60 times a second. Some of those functions' time funnels into common operations (like malloc), but most of it doesn't, and in any case there's no single leaf that occupies more than 1000 μs per frame.
I can look at the trunk functions and see, "we're spending 10% of our time on collision", but that's not very helpful: I need to know exactly where in collision, so I know which functions to squeeze. Just "do less collision" only gets you so far, especially when it means throwing out features. I'd rather know "we're spending an average 600 μs/frame on cache misses in the narrow phase of the octree because the magic missile moves so fast and touches lots of cells," because then I can track down the exact fix: either a better tree, or slower missiles.
Manual sampling would be fine if there were a big 20% lump in, say, stricmp, but with our profiles that's not the case. Instead I have hundreds of functions that I need to get from, say, 0.6% of frame to 0.4% of frame. I need to shave 10 μs off every 50 μs function that is called 300 times per second. To get that kind of precision, I need more samples.
But at heart what Luke Stackwalker does is what you describe: every millisecond or so, it halts the program and records the callstack (including the precise instruction and line number of the IP). Some programs just need tens of thousands of samples to be usefully profiled.
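A back-of-the-envelope way to see why, assuming roughly independent samples: the standard error of an estimated runtime fraction p from N samples is
    \mathrm{SE}(\hat{p}) = \sqrt{p(1 - p)/N}
so pinning down a function near p = 0.5% of the runtime to within about ±0.1% (two standard errors) takes N on the order of 20,000 samples, consistent with the tens of thousands mentioned above.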
(We've talked about this before, of course, but I figured this was a good place to summarize the debate.)
There's a difference between things that programmers actually do, and things that they recommend others do.
I know of lots of programmers (myself included) that actually use this method. It only really helps to find the most obvious of performance problems, but it's quick and dirty and it works.
But I wouldn't really tell other programmers to do it, because it would take me too long to explain all the caveats. It's far too easy to make an inaccurate conclusion based on this method, and there are many areas where it just doesn't work at all. (for example, that method doesn't reveal any code that is triggered by user input).
So just like using lie detectors in court, or the "goto" statement, we just don't recommend that you do it, even though they all have their uses.
I'm surprised by the religious tone on both sides.
Profiling is great, and is certainly more refined and precise when you can do it. Sometimes you can't, and it's nice to have a trusty back-up. The pause technique is like the manual screwdriver you use when your power tool is too far away or the batteries have run down.
Here is a short true story. An application (kind of a batch processing task) had been running fine in production for six months, when suddenly the operators called the developers because it was going "too slow". They weren't going to let us attach a sampling profiler in production! You have to work with the tools already installed. Without stopping the production process, just using Process Explorer (which the operators had already installed on the machine), we could see a snapshot of a thread's stack. You can glance at the top of the stack, dismiss it with the enter key and get another snapshot with another mouse click. You can easily get a sample every second or so.
It doesn't take long to see whether the top of the stack is most often in the database client library DLL (waiting on the database), or in another system DLL (waiting for a system operation), or actually in some method of the application itself. In this case, if I remember right, we quickly noticed that 8 times out of 10 the application was in a system DLL call, reading or writing a network file. Sure enough, recent "upgrades" had changed the performance characteristics of a file share. Without a quick and dirty (and system administrator sanctioned) approach to see what the application was doing in production, we would have spent far more time trying to measure the issue than correcting the issue.
On the other hand, when performance requirements move beyond "good enough" to really pushing the envelope, a profiler becomes essential so that you can try to shave cycles from all of your closely-tied top ten or twenty hot spots. Even if you are just trying to hold to a moderate performance requirement during a project, when you can get the right tools lined up to help you measure and test, and even get them integrated into your automated test process, it can be fantastically helpful.
But when the power is out (so to speak) and the batteries are dead, it's nice to know how to use that manual screwdriver.
So the direct answer: Know what you can learn from halting the program, but don't be afraid of precision tools either. Most importantly know which jobs call for which tools.
Hitting the pause button during the execution of a program in "debug" mode might not provide the right data to perform any performance optimizations. To put it bluntly, it is a crude form of profiling.
If you must avoid using a profiler, a better bet is to use a logger, and then apply a slowdown factor to "guesstimate" where the real problem is. Profilers however, are better tools for guesstimating.
The reason why hitting the pause button in debug mode may not give a real picture of application behavior is because debuggers introduce additional executable code that can slow down certain parts of the application. One can refer to Mike Stall's blog post on possible reasons for application slowdown in a debugging environment. The post sheds light on reasons like too many breakpoints, creation of exception objects, unoptimized code, etc. The part about unoptimized code is important - the "debug" mode will result in a lot of optimizations (usually code in-lining and re-ordering) being thrown out of the window, to enable the debug host (the process running your code) and the IDE to synchronize code execution. Therefore, hitting pause repeatedly in "debug" mode might be a bad idea.
If we take the question "Why isn't it better known?" then the answer is going to be subjective. Presumably the reason why it is not better known is because profiling provides a long-term solution rather than a current-problem solution. It isn't effective for multi-threaded applications and isn't effective for applications like games which spend a significant portion of their time rendering.
Furthermore, in single-threaded applications, if you have a method that you expect to consume the most run time and you want to reduce the run time of all other methods, then it is going to be harder to determine which secondary methods to focus your efforts upon first.
Your process for profiling is an acceptable method that can and does work, but profiling provides you with more information and has the benefit of showing you more detailed performance improvements and regressions.
If you have well-instrumented code then you can examine more than just how long a particular method takes; you can see all the methods.
With profiling:
You can then rerun your scenario after each change to determine the degree of performance improvement/regression.
You can profile the code on different hardware configurations to determine if your production hardware is going to be sufficient.
You can profile the code under load and stress testing scenarios to determine how the volume of information impacts performance
You can make it easier for junior developers to visualise the impacts of their changes to your code because they can re-profile the code in six months time while you're off at the beach or the pub, or both. Beach-pub, ftw.
Profiling is given more weight because enterprise code should always have some degree of profiling, because of the benefits it gives to the organisation over an extended period of time. The more important the code, the more profiling and testing you do.
Your approach is valid and is another item in the toolbox of the developer. It just gets outweighed by profiling.
Sampling profilers are only useful when
You are monitoring a runtime with a small number of threads. Preferably one.
The call stack depth of each thread is relatively small (to reduce the incredible overhead in collecting a sample).
You are only concerned about wall clock time and not other meters or resource bottlenecks.
You have not instrumented the code for management and monitoring purposes (hence the stack dump requests)
You mistakenly believe removing a stack frame is an effective performance improvement strategy whether the inherent costs (excluding callees) are practically zero or not
You can't be bothered to learn how to apply software performance engineering day-to-day in your job
....
Stack trace snapshots only allow you to see stroboscopic x-rays of your application. You may require more accumulated knowledge which a profiler may give you.
The trick is knowing your tools well and choosing the best one for the job at hand.
These must be some trivial examples that you are working with to get useful results with your method. I can't think of a project where profiling was useful (by whatever method) that would have gotten decent results with your "quick and effective" method. The time it takes to start and stop some applications already puts your assertion of "quick" in question.
Again, with non-trivial programs the method you advocate is useless.
EDIT:
Regarding "why isn't it better known"?
In my experience code reviews avoid poor quality code and algorithms, and profiling would find these as well. If you wish to continue with your method that is great - but I think for most of the professional community this is so far down on the list of things to try that it will never get positive reinforcement as a good use of time.
It appears to be quite inaccurate with small sample sets and to get large sample sets would take lots of time that would have been better spent with other useful activities.
What if the program is in production and being used at the same time by paying clients or colleagues? A profiler allows you to observe without interfering (as much, because of course it will have a little hit too, as per the Heisenberg principle).
Profiling can also give you much richer and more detailed accurate reports. This will be quicker in the long run.
EDIT 2008/11/25: OK, Vineet's response has finally made me see what the issue is here. Better late than never.
Somehow the idea got loose in the land that performance problems are found by measuring performance. That is confusing means with ends. Somehow I avoided this by single-stepping entire programs long ago. I did not berate myself for slowing it down to human speed. I was trying to see if it was doing wrong or unnecessary things. That's how to make software fast - find and remove unnecessary operations.
Nobody has the patience for single-stepping these days, but the next best thing is to pick a number of cycles at random and ask what their reasons are. (That's what the call stack can often tell you.) If a good percentage of them don't have good reasons, you can do something about it.
It's harder these days, what with threading and asynchrony, but that's how I tune software - by finding unnecessary cycles. Not by seeing how fast it is - I do that at the end.
Here's why sampling the call stack cannot give a wrong answer, and why not many samples are needed.
During the interval of interest, when the program is taking more time than you would like, the call stack exists continuously, even when you're not sampling it.
If an instruction I is on the call stack for fraction P(I) of that time, removing it from the program, if you could, would save exactly that much. If this isn't obvious, give it a bit of thought.
If the instruction shows up on M = 2 or more samples, out of N, its P(I) is approximately M/N, and is definitely significant.
The only way you can fail to see the instruction is to magically time all your samples for when the instruction is not on the call stack. The simple fact that it is present for a fraction of the time is what exposes it to your probes.
So the process of performance tuning is a simple matter of picking off instructions (mostly function call instructions) that raise their heads by turning up on multiple samples of the call stack. Those are the tall trees in the forest.
Notice that we don't have to care about the call graph, or how long functions take, or how many times they are called, or recursion.
I'm against obfuscation, not against profilers. They give you lots of statistics, but most don't give P(I), and most users don't realize that that's what matters.
You can talk about forests and trees, but for any performance problem that you can fix by modifying code, you need to modify instructions, specifically instructions with high P(I). So you need to know where those are, preferably without playing Sherlock Holmes. Stack sampling tells you exactly where they are.
This technique is harder to employ in multi-thread, event-driven, or systems in production. That's where profilers, if they would report P(I), could really help.
Stepping through code is great for seeing the nitty-gritty details and troubleshooting algorithms. It's like looking at a tree really up close and following each vein of bark and branch individually.
Profiling lets you see the big picture, and quickly identify trouble points -- like taking a step backwards and looking at the whole forest and noticing the tallest trees. By sorting your function calls by length of execution time, you can quickly identify the areas that are the trouble points.
I used this method for Commodore 64 BASIC many years ago. It is surprising how well it works.
I've typically used it on real-time programs that were overrunning their timeslice. You can't manually stop and restart code that has to run 60 times every second.
I've also used it to track down the bottleneck in a compiler I had written. You wouldn't want to try to break such a program manually, because you really have no way of knowing if you are breaking at the spot where the bottleneck is, or just at the spot after the bottleneck when the OS is allowed back in to stop it. Also, what if the major bottleneck is something you can't do anything about, but you'd like to get rid of all the other largeish bottlenecks in the system? How do you prioritize which bottlenecks to attack first, when you don't have good data on where they all are, and what each one's relative impact is?
The larger your program gets, the more useful a profiler will be. If you need to optimize a program which contains thousands of conditional branches, a profiler can be indispensable. Feed in your largest sample of test data, and when it's done, import the profiling data into Excel. Then you check your assumptions about likely hot spots against the actual data. There are always surprises.
