How do I profile a specific block of code in VS2008/10? - visual-studio

I am completely new to the idea of profiling; can anyone give me simple steps on how I would profile a specific function?
Both functions do the same thing, but one is in C# in VS2008, the other in C++ in VS2010. I'm really mostly interested in the C++ function, as it is incredibly slow and I want to find out where/why. Thanks.

simple steps on how I would profile a specific function
The method I use is random pausing, maybe better known as the poor man's profiler (which I liked until I saw how they aggregate).
Here's another link.
You don't have to tell it what specific function to look at.
It automatically finds what takes the most time, whether it's that function or another one.
The thing about it is, compared to "real" profilers, some might say it is crude, tedious, etc.
So why do it?
Because it's effective.
You nail your problem, down to specific instructions, and know what to fix, before you even begin to puzzle through the reams of stuff that comes out of most profilers - self time, call counts, call trees, call graphs, "hot paths", "cpu time", etc., etc.
The thing to ask of any profiler is not how "accurate" it is, but
How much speedup is typically achieved by using it, in real (not toy) programs?
Isn't that what you care about?
Here's an example of a 43x speedup done with random-pausing.

Here is one idea. Write a test app that calls this function repeatedly and does as little else as possible. Then create two executables with the two versions of your function. Now run the Very Sleepy profiler on both executables, and compare the results. The profiler will show you what the slow parts of each function are.
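For the "calls this function repeatedly" part, a minimal C++ harness might look something like this (doSomething and the loop count are placeholders I made up; substitute the real function and a representative workload):

#include <cstdio>

// Placeholder for the function under test; put the real slow function here.
int doSomething(int x) { return x * x; }

int main() {
    volatile int sink = 0;                // keeps the optimizer from discarding the calls
    for (int i = 0; i < 1000000; ++i)
        sink += doSomething(i);           // the profiler attributes time spent inside doSomething
    std::printf("%d\n", sink);
    return 0;
}

Build it in Release with the same compiler settings as the real project, so the profile reflects the optimized code you actually ship.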
Another profiler that you may find useful is Shiny. It works in a completely different way. You have to insert calls in the functions or sections of code you want to profile, and then compile a special profiler-enabled version of your app.

Related

Profiling vs debugging: what is the practice?

Most often, I wonder how best to make my applications have optimal performance: how to identify the functions/methods that are more resource-intensive than the others and make the necessary adjustments. In software development, irrespective of language, I believe there should be some way of finding out how processor/network resources are being used by different parts of my code. I will illustrate what I mean using the simplest example I can think of. I have a background in Java, Python and PHP and feel more comfortable working in a Linux environment. Please feel free to advise me using any of these languages you are comfortable with:
In JavaScript one can comfortably test and assign a value to a variable by doing:
//METHOD 1:
if (true) {
    console.log("It will always be true");
} else {
    console.log("You can never see me");
}

//METHOD 2:
var print;
if (true) {
    print = "It will always be true";
} else {
    print = "You can never see me";
}
console.log(print);

//METHOD 3:
console.log(true ? "It will always be true" : "You can never see me");
If different people were asked which of these methods performs faster than the others, I am sure different individuals would come up with different ideas. But I need a more reliable way to know about resource usage, both in desktop and mobile applications. Thanks.
First of all, debugging is done when you know the exact bug or wrong functionality.
It is done for functional testing, i.e. to check for defects in the application.
For performance, profilers are used to find resource-intensive methods. They will point out heavy modules/functions/DB queries etc., so that after analysis you can tune and improve your system's performance. If, after modification, some defect arises, then a debugger can be used to pinpoint and correct the issue.
There are many open-source as well as paid profilers for Java (I say Java because you pasted JavaScript code).
Please have a look at them and use them to tune your system.
IMHO the downvoting was for two reasons: bad English and a very basic question.
In the examples you gave, it simply doesn't matter. That's like asking which is faster, a snail or a worm. BUT, if by
"profiling" you mean "measurements of time taken by functions", and if by
"debugging performance problem" you mean "the bug is that time is being spent unnecessarily, and I need to find out why",
then I strongly agree with the premise of your question.
Debugging works much better.
A performance problem consists of excess time being spent for unnecessary reasons.
The way to find it is by breaking into the execution at random times, which will naturally gravitate toward whatever is taking the most time.
The more time the problem takes, the more likely the break is to land in the problem.
Just do it several times, and each time look carefully at the program's state, to understand why it's doing whatever it's doing.
If you see it doing something that can be avoided, and you see it doing it on more than one break, you've found a performance problem.
Fix it and observe the speedup.
Then repeat the process, because what was the next biggest problem is now the biggest one.
The speedups multiply together.
Here's an example where the time goes from 2700us to 1800, then to 1500, 1300, 440, 170, and finally 3.7us. None of the speedups were all that big as fractions of the original time, but in the aggregate, they were stunning.
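To make "the speedups multiply together" concrete with those numbers (rounded):
2700/1800 = 1.5x, 1800/1500 = 1.2x, 1500/1300 ≈ 1.15x, 1300/440 ≈ 3.0x, 440/170 ≈ 2.6x, 170/3.7 ≈ 46x
and the product of those factors is 2700/3.7 ≈ 730x overall.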
This is very much different from measuring.
In measuring, the first thing that happens is you assume numerical accuracy is important, so you assume you either need lots of samples or you have to instrument the code to measure time precisely.
As if numerical accuracy helps to find the problem.
This false assumption entered programmers' consciousness around 1982, when GPROF appeared.
In measuring, you assume the problem can be localized to a function, when in fact you need to see all of what's happening at a point in time to know if it can be avoided.
IMHO, the best profilers sample the stack, on wall-clock time, and report for each line of code that appears on samples, the percent of samples it appears on.
However, even these profilers don't tell you the context that you can get by simply looking carefully at individual stack samples, and data as well.
(Other forms of eye-candy have the same problem: call-graph, hot-path, flame-graph, etc.)

When should I consider the performance impact of a function call?

In a recent conversation with a fellow programmer, I asserted that "if you're writing the same code more than once, it's probably a good idea to refactor that functionality such that it can be called once from each of those places."
My fellow programmer buddy instead insisted that the performance impact of making these function calls was not acceptable.
Now, I'm not looking for validation of who was right. I'm simply curious to know if there are situations or patterns where I should consider the performance impact of a function call before refactoring.
"My fellow programmer buddy instead insisted that the performance impact of making these function calls was not acceptable."
...to which the proper answer is "Prove it."
The old saw about premature optimization applies here. Anyone who isn't familiar with it needs to be educated before they do any more harm.
IMHO, if you don't have the attitude that you'd rather spend a couple hours writing a routine that can be used for both than 10 seconds cutting and pasting code, you don't deserve to call yourself a coder.
Don't even consider the effect of calling overhead if the code isn't in a loop that's being called millions of times, in an area where the user is likely to notice the difference. Once you've met those conditions, go ahead and profile to see if your worries are justified.
Modern compilers of languages such as Java will inline certain function calls anyway. My opinion is that the design is far more important than the few instructions spent on a function call. The only situation I can think of would be writing some really fine-tuned code in assembler.
You need to ask yourself several questions:
Cost of time spent on optimizing code vs cost of throwing more hardware at it.
How does this impact maintainability?
How does going in either direction impact your deadline?
Does this really need optimization when many modern compilers will do it for you anyway? Do not try to outsmart the compiler.
And of course, which will help you sleep better at night? :)
My bet is that there was a time in which the performance cost of a call to an external method or function WAS something to be concerned with, in the same way that the lengths of variable names and such all needed to be evaluated with respect to performance implications.
With the monumental increases in processor speed and memory resources in the last two decades, I propose that these concerns are no longer as pertinent as they once were.
We have been able use long variable names without concern for some time, and the cost of a call to external code is probably negligible in most cases.
There might be exceptions. If you place a function call within a large loop, you may see some impact, depending upon the number of iterations.
I propose that in most cases you will find that refactoring code into discrete function calls will have a negligible impact. There might be occasions in which there IS an impact. However, proper TESTING of a refactoring will reveal this. In that minority of cases, your friend might be correct. For most of the rest of the time, I propose that your friend is clinging a little too closely to practices which pre-date most modern processors and storage media.
You care about function call overhead the same time you care about any other overhead: when your performance profiling tool indicates that it's a problem.
For the C/C++ family:
The 'cost' of the call is not important. If it needs to be fast, you just have to make sure the compiler is able to inline it. That means:
the body must be visible to the compiler;
the body is indeed small enough to be considered an inlining candidate;
the call does not require dynamic dispatch.
There are a few ways to break this default ability. For example:
a huge instruction count already at the call site. Even with early inlining, the compiler may pop a trivial function out of line (even though that could generate more instructions/slower execution). Early inlining is the compiler's ability to inline a function early on, when it sees that the call costs more than the inlined body;
recursion.
The inline keyword is more or less useless in this era, as far as its original intent goes. However, many compilers offer a means to restore that meaning, with a compiler-specific directive. Using this directive (correctly) helps considerably, but learning how to use it correctly takes time. If in doubt, omit the directive and leave it up to the compiler.
Assuming you are using a modern compiler, there is no excuse to avoid the function call, unless you're also willing to go down to assembly for this particular program.
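As a rough illustration of those conditions (the file layout and the names scale and scale_hidden are invented for this sketch, not taken from any particular codebase):

// widget.h (hypothetical): the body is visible to every includer, is tiny,
// and needs no dynamic dispatch, so the compiler can inline calls to it.
inline int scale(int x) { return 3 * x; }

// widget.cpp (hypothetical): the body lives here, so without link-time
// optimization other translation units see only a declaration and cannot inline it.
int scale_hidden(int x) { return 3 * x; }

// main.cpp
#include "widget.h"
int scale_hidden(int x);           // declaration only; body is in widget.cpp

int main() {
    int total = 0;
    for (int i = 0; i < 1000000; ++i) {
        total += scale(i);         // candidate for inlining: body visible, small, no dispatch
        total += scale_hidden(i);  // stays an out-of-line call (assuming no LTO)
    }
    return total == 0 ? 0 : 1;
}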
As it stands, if performance is crucial, you really have two choices:
1) Learn to write well-organized programs for speed. Downside: longer compile times.
2) Maintain a poorly written program.
I prefer 1, any day.
(Yes, I have spent a lot of time writing performance-critical programs.)

Can this kernel function be more readable? (Ideas needed for academic research!)

Following my previous question regarding the rationale behind extremely long functions, I would like to present a specific question regarding a piece of code I am studying for my research. It's a function from the Linux kernel which is quite long (412 lines) and complicated (an MCC index of 133). Basically, it's a long and nested switch statement.
Frankly, I can't think of any way to improve this mess. A dispatch table seems both huge and inefficient, and any subroutine call would require an inconceivable number of arguments in order to cover a large-enough segment of code.
Do you think of any way this function can be rewritten in a more readable way, without losing efficiency? If not, does the code seem readable to you?
Needless to say, any answer that will appear in my research will be given full credit - both here and in the submitted paper.
Link to the function in an online source browser
I don't think that function is a mess. I've had to write such a mess before.
That function is the translation into code of a table from a microprocessor manufacturer. It's very low-level stuff, copying the appropriate hardware registers for the particular interrupt or error reason. In this kind of code, you often can't touch registers which have not been filled in by the hardware - that can cause bus errors. This prevents the use of code that is more general (like copying all registers).
I did see what appeared to be some code duplication. However, at this level (operating at interrupt level), speed is more important. I wouldn't use Extract Method on the common code unless I knew that the extracted method would be inlined.
BTW, while you're in there (the kernel), be sure to capture the change history of this code. I have a suspicion that you'll find there have not been very many changes in here, since it's tied to hardware. The nature of the changes over time of this sort of code will be quite different from the nature of the changes experienced by most user-mode code.
This is the sort of thing that will change, for instance, when a new consolidated IO chip is implemented. In that case, the change is likely to be copy and paste and change the new copy, rather than to modify the existing code to accommodate the changed registers.
Utterly horrible, IMHO. The obvious first-order fix is to make each case in the switch a call to a function. And before anyone starts mumbling about efficiency, let me just say one word - "inlining".
Edit: Is this code part of the Linux FPU emulator by any chance? If so this is very old code that was a hack to get linux to work on Intel chips like the 386 which didn't have an FPU. If it is, it's probably not a suitable study for academics, except for historians!
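A minimal sketch of that first-order fix, extracting each case body into a small function the compiler can inline (all names and fields here are invented for illustration; only MAJOR_0C_EXCP echoes an identifier from the function under discussion, and its value is a stand-in):

#include <cstdint>

// Hypothetical stand-ins for the real register block and return code.
struct fake_regs { std::uint32_t cause; std::uint32_t status; };
const int MAJOR_0C_EXCP = 0x0c;

// Each former 'case' body becomes a small static function; 'inline' plus the
// visible body lets the compiler put it back in line, so readability costs nothing.
static inline int handle_timeout(fake_regs& r) {
    r.status = r.cause & 0xFFu;    // placeholder for the real register copying
    return MAJOR_0C_EXCP;
}

static inline int handle_parity(fake_regs& r) {
    r.status = r.cause >> 8;       // placeholder
    return MAJOR_0C_EXCP;
}

int dispatch(int code, fake_regs& r) {
    switch (code) {
    case 1:  return handle_timeout(r);
    case 2:  return handle_parity(r);
    default: return MAJOR_0C_EXCP;
    }
}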
There's a kind of regularity here; I suspect that for a domain expert this actually feels very coherent.
Also, having the variations in close proximity allows immediate visual inspection.
I don't see a need to refactor this code.
I'd start by defining constants for the various classes. Coming into this code cold, it's a mystery what the switching is for; if the switching was against named constants, I'd have a starting point.
Update: You can get rid of about 70 lines where the cases return MAJOR_0C_EXCP; simply let them fall through to the end of the routine. Since this is kernel code I'll mention that there might be some performance issues with that, particularly if the case order has already been optimized, but it would at least reduce the amount of code you need to deal with.
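One reading of that suggestion, sketched with invented names (MAJOR_0C_EXCP is from the code under discussion; its value here is a stand-in):

const int MAJOR_0C_EXCP = 0x0c;    // stand-in value for the kernel's return code

int classify(int code) {
    switch (code) {
    case 1:
        /* case-specific work */
        break;                     // instead of 'return MAJOR_0C_EXCP;' inside the case
    case 2:
        /* case-specific work */
        break;
    default:
        break;
    }
    return MAJOR_0C_EXCP;          // the single return at the end of the routine
}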
I don't know much about kernels or about how re-factoring them might work.
The main thing that comes to my mind is taking that switch statement and breaking each sub-step into a separate function with a name that describes what the section is doing. Basically, more descriptive names.
But I don't think this optimizes the function any further. It just breaks it into smaller functions, which might be helpful... I don't know.
That is my 2 cents.

How to test an application's startup time or performance

There is a free tool called PassMark AppTimer, but I think it doesn't quite fit my needs.
Windows provides a tool called xperf; is there a way to use it to test/benchmark application startup time?
If I'm helping to develop an app, and it gets too slow on startup (or any other phase), I just do this.
Common wisdom is that measuring performance of various routines is necessary for finding performance problems.
I go the other way - I locate the biggest problems (because their very slowness exposes them), and then I can roughly estimate how much time they take, if I care to. Here's an example of how it works.
The kinds of things I have found are, for example 1) fetching and converting strings from resources, which were in resources so that they could be internationalized, but did not really need to be internationalized, or 2) creating and deleting (along with serializing) deep data structures for no real reason in the process of setting up UI controls.
The things found are almost never what you might guess, so it is a mistake to guess. Just see what the process tells you.
The interesting thing about this is that the problem is almost never the kind of thing that profilers could easily tell you. The problem is nearly always some innocent-looking function or method call, somewhere in the middle of the call stack, that only gets your attention because 1) it shows up a lot, and 2) by looking at what it is doing and why, you can see that it can be done without. Getting rid of it saves roughly the fraction of time it was on the stack.

What's a good profiling tool to use when source code isn't available?

I have a big problem. My boss told me that he wants two "magic black boxes":
1. Something that takes a microprocessor as input and returns, as output, its MIPS and/or MFLOPS.
2. Something that takes C code as input and returns, as output, something that characterizes the code in terms of performance (something like the MIPS a uP must have to execute the code within a given time).
For the first "black box", I think it could be a benchmark from EEMBC or SPEC: different uPs, same benchmark, returning the MIPS/MFLOPS of each uP. The first problem is OK (I hope).
But the second... the second black box is my nightmare. The only thing I can find is to use a profiling tool, but I am asking about a particular kind of profiling tool.
Does anybody know of a profiling tool that can take, as input, plain C code and give me, as output, the performance characteristics of that C code (or how many times each assembly instruction is executed)?
The real problem is that we must choose the correct uP for certain C code... we want a uP tailored to our C code... so we need to know the MIPS (and the uP's architecture, memory structure, ...) and what our code requires.
Thanks to everyone.
I have to agree with Adam, though I would be a little more gracious about it. Compiler optimizations only matter in hotspot code, i.e. tight loops that a) don't call functions, and b) take a large percentage of time.
On a positive note, here's what I would suggest:
Run the C code on a processor, any processor. On that processor, find out what takes the most time.
You could use a profiler for this. The simple method I prefer is to just run it under a debugger and manually halt it some number of times (like 10), each time writing down the call stack. Suppose there is something in the code taking a good percentage of the time, like 50%. If so, you will see it doing that thing on roughly that percentage of samples, so you won't have to guess what it is.
If that activity is something that would be helped by some special processor, then try that processor.
It is important not to guess. If you say "I think this needs a DSP chip", or "I think it needs a multi-core chip", that is a guess. The guess might be right, but probably not. It is probably the case that what takes the most time is something you never would guess, like memory management or I/O formatting. Performance issues are very good at hiding from you.
No. If someone made a tool that could analyse (non-trivial) source code and tell you its performance characteristics, it would be commonplace, i.e. everyone would be using it.
Until source code is compiled for a particular target architecture, you will not be able to determine its overall performance. For instance, a parallelising compiler targeting n processors might conceivably be able to change an O(n^2) algorithm to one of O(n).
You won't find a tool to do what you want.
Your only option is to cross-compile the code and profile it on an emulator for the architecture you're targeting. The problem with profiling high-level code is that the compiler makes a stack of non-trivial optimizations, and you'd need to know how the particular compiler did that.
It sounds dumb, but why do you want to fit your code to a uP and a uP to your code? If you're doing signal processing, buy a DSP. If you're building a SCADA box, then look into Atmel or ARM stuff. Are you building a general-purpose appliance with a user interface? Look into PPC or x86-compatible stuff.
Simply put, choose a bloody architecture that's suitable and provides the features you need. Optimizing before choosing the processor is premature optimization at its worst (very roughly paraphrasing Knuth).
Fix the architecture at something roughly appropriate, work out roughly the processing requirements (you can scratch up an estimate by hand, which will always be too high when looking at C code), and buy a uP to match.
