Compact Framework and JIT: how long could it take? (performance)

We have (or had) a phantom delay in our app. It was traced to the initialisation of a singleton the first time the object was touched, and was blamed on JIT. I'm not entirely convinced by this, as there is no obvious mechanism for measuring JIT (or is there?) and the entire delay was seven seconds. Seven seconds of JIT?!? Could that be for real?
Either way, I have difficulty blaming things that cannot easily be measured. When I glanced at the issue a while back, I commented out a bunch of code and watched the seven-second delay "jump" elsewhere in the app, which suggests it is somehow happening in a background process somewhere (and I guess that would keep JIT in as a potential cause).
Just for fun: if there were a static object that happened to reference a lot of other objects, does anyone have a rule of thumb for how long the JIT might take? Does anyone have further references so I can understand more about the JIT and stand a chance of working out whether or not it is/was to blame for this slowdown?
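One cheap way to get a number here, rather than guessing, is to bracket the first touch of the suspect type with a timer and compare it against a second touch; the difference is the one-time cost (JIT plus static/singleton initialisation). A minimal sketch, with MySingleton as a hypothetical stand-in for the real type (Stopwatch is available in CF 2.0 and later):

    using System;
    using System.Diagnostics;

    // Hypothetical placeholder for the singleton being blamed.
    class MySingleton
    {
        public static readonly MySingleton Instance = new MySingleton();

        // Explicit static constructor keeps initialisation tied to first use
        // (avoids the beforefieldinit relaxation), so the timing is meaningful.
        static MySingleton() { }

        private MySingleton() { /* imagine the expensive setup here */ }
    }

    static class StartupTiming
    {
        public static void MeasureFirstTouch()
        {
            Stopwatch sw = Stopwatch.StartNew();
            MySingleton first = MySingleton.Instance;   // first touch: JIT + static init happen here
            sw.Stop();
            Debug.WriteLine("First touch:  " + sw.ElapsedMilliseconds + " ms");

            sw.Reset();
            sw.Start();
            MySingleton again = MySingleton.Instance;   // second touch: already JITted and initialised
            sw.Stop();
            Debug.WriteLine("Second touch: " + sw.ElapsedMilliseconds + " ms");
        }
    }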

I've only seen JIT take a really long time (greater than 1 second) in a weird bug that had to do with templated items inside a templated collection (see edit below).
At any rate, the fact that you see it "move" definitely suggests to me that JIT probably isn't the issue. To try to determine this definitively, I'd use RPM (the .NET CF Remote Performance Monitor) to see what's happening right before and after the delay.
Expected JIT time is a really nebulous thing, since there are so many factors that can affect it. Processor speed is an obvious one, but less obvious might be things like app storage media and device memory pressure.
Storage media can affect JIT speed because the JITter has to pull the IL from the media when it needs to JIT it, and if pulling it is slow, then JITting it will be slow.
Memory pressure is a tough one, and can have serious repercussions on a CE device. The issue here is that when you start running out of memory, the EE will start pitching JITted code during collection - everything but the call stack. Now if you're in a routine that, for example, calls out to some worker or helper code, or has a thread running, then that helper method could be getting pitched, JITted, pitched, JITted, and so on. This is referred to as "thrash."
Identifying the latter is fairly easy with RPM (fixing it may not be so easy). Watch for the amount of code pitched rising frequently, and look for a strong correlation between a rise in the number of pitches and your perceived lock-ups.
Edit: I finally found the bug description here.

JIT (and GC) timers etc. can be found here:
Performance Counters in the .NET Compact Framework (http://msdn.microsoft.com/en-us/library/ms172525.aspx)
Monitoring Application Performance on the .NET Compact Framework Part I - Enabling performance counters (http://blogs.msdn.com/davidklinems/archive/2005/10/04/476988.aspx)
Analyzing Device Application Performance with the .Net Compact Framework Remote Performance Monitor (http://blogs.msdn.com/stevenpr/archive/2006/04/17/577636.aspx)
Performance Counters in the .NET Framework (http://msdn.microsoft.com/en-us/library/w8f5kw2e(VS.80).aspx)
Regards,
tamberg

Related

Amount of acceptable Memory Leak

How much memory leak is negligible?
In my program I am using Unity, and when I go through Profile > Leaks and work with the project, it shows a total of about 16 KB of memory leaked by Unity, and I cannot help that.
EDIT: After using the program for a long time, the leak amounts to about 400 KB.
What should I do? Is this amount of memory leak acceptable for an iPad project?
It's not great, but it won't get your app rejected unless it causes a crash in front of a reviewer. The size is less important than how often it occurs. If it only occurs once every time the app is run, that's not a big deal. If it happens every time the user does something, then that's more of a problem.
It's probably a good idea for you to track down these bugs and fix them, because Objective-C memory management is quite different from Java's, and it's good to get some practice in with smaller issues before you're stuck trying to debug a huge problem with a deadline looming.
First, see whether you can use Unity in another way to circumvent the leak (if you have enough insight into the workings of the framework).
Second, report the leak to the Unity developers if that hasn't already been done (by you or someone else).
Third, if you absolutely rely on this framework, hope it gets fixed soon, unless switching to another framework is an option for you.
A 400 KB leak is not a very big deal unless it grows to that size within a few minutes. That said, no matter how small a leak is, it is always worth keeping an eye on leaks caused by your own or third-party code and trying to get rid of them in the next minor or major iteration of your app.

Drastic performance improvement in .NET CF after the app gets moved out of the foreground. Why?

I have a large (500K lines) .NET CF (C#) program running on CE6/.NET CF 3.5 (v3.5.10181.0) on a Freescale i.MX31 (ARM) @ 400 MHz. It has 128 MB RAM, with ~80 MB available to applications. My app is the only significant one running (this is a dedicated, embedded system). Managed memory in use (as reported by GC.Collect) is about 18 MB.
To give a better idea of the app's size, here are some stats culled from the .NET CF Remote Performance Monitor after starting up the application:
GC:
Garbage Collections: 131
Bytes Collected by GC: 97,919,260
Managed Bytes in use after GC: 17,774,992
Total Bytes in use after GC: 24,117,424
GC Compactions: 41
JIT:
Native Bytes Jitted: 10,274,820
Loader:
Classes Loaded: 7,393
Methods Loaded: 27,691
Recently I have been trying to track down a performance problem. I found that, depending on which of two startup configurations I used, my benchmark would run in approximately 2 seconds (slow case) vs. 1 second (fast case). In the slow case, the time for the benchmark could change randomly from EXE run to EXE run, anywhere from 1.1 to 2 seconds, but for any given EXE run it would not change for the life of the application. In other words, you could re-run the benchmark and the time for the test stays the same until you restart the EXE, at which point a new time is established and stays consistent.
I could not explain the 1.1 to 2x slowdown via any conventional mechanism, or by narrowing the slowdown to any particular part of the benchmark code. It appeared that the overall process was just running slower, almost like a thread was spinning and taking away some of "my" CPU.
Then I randomly discovered that just by switching away from my app (so the GUI loses the foreground) to another app, my performance issue disappears. It stays gone even after returning my app to the foreground. I now have a tentative workaround in which my app, after startup, launches an auxiliary app with a 1x1 window that kills itself after 5 ms. The aux app thus takes the foreground, then relinquishes it.
The question is, why does this speed up my application?
I know that code gets pitched when a .NET CF app loses the foreground. I also notice that when performing a "GC Heap" capture with the .NET CF Remote Performance Monitor, a Code Pitch is logged -- and this also triggers the performance improvement in my app. So I suspect that code pitching is related to, or even responsible for, fixing the performance. But I'm at a loss as to how to determine whether that is really the case, or how to explain why pitching code could help in this way. Does pitching out lots of code somehow improve locality of reference of the code pages (which are re-JITted, presumably near each other in memory) enough to help this much? (My benchmark spans probably three dozen routines and hundreds of lines of code.)
Most importantly, what can I do in my app to reliably avoid this slower condition? Any pointers to relevant .NET CF / JIT / code pitching information would be greatly appreciated.
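For reference, the 1x1 auxiliary app described in the question can be as small as the sketch below (names hypothetical, untested; the WinForms timer resolution will likely be coarser than 5 ms, but the point is only to take and release the foreground):

    using System;
    using System.Windows.Forms;

    static class FocusStealer
    {
        static void Main()
        {
            Form form = new Form();
            form.Width = 1;
            form.Height = 1;

            Timer timer = new Timer();
            timer.Interval = 5;                       // nominally 5 ms, as described above
            timer.Tick += delegate { form.Close(); }; // closing the form ends the app
            timer.Enabled = true;

            Application.Run(form);                    // briefly takes the foreground, then exits
        }
    }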
Your app going to the background auto-triggers a GC.Collect, which collects, may compact the GC heap, and may pitch code. Have you checked to see whether a manual GC.Collect, without going to the background, gives the same behavior? It might not be pitching that's giving the perf gain; it might be collection or compaction. If a significant number of dead roots are swept up, walking the root tree may be getting faster. I can't say I've specifically seen this issue, so this is all conjecture.
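A quick way to test that suggestion is to time the benchmark, force a manual collection without any foreground change, and time it again; if collection or compaction alone is responsible, the second run should show the improvement. A rough sketch (RunBenchmark is a placeholder for the real workload):

    using System;
    using System.Diagnostics;

    static class CollectTest
    {
        static void RunBenchmark() { /* ... the real 1-2 second workload ... */ }

        static long TimeBenchmark()
        {
            Stopwatch sw = Stopwatch.StartNew();
            RunBenchmark();
            sw.Stop();
            return sw.ElapsedMilliseconds;
        }

        public static void Run()
        {
            long before = TimeBenchmark();

            GC.Collect();                    // manual collection, app stays in the foreground
            GC.WaitForPendingFinalizers();
            GC.Collect();

            long after = TimeBenchmark();
            Debug.WriteLine("Before collect: " + before + " ms, after collect: " + after + " ms");
        }
    }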
Send your app a WM_HIBERNATE message before your performance loop. It will clean things up.
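A sketch of how that might look on CE: WM_HIBERNATE is the Windows CE low-memory message (0x03FF), and sending it to your own main form via a coredll.dll P/Invoke should trigger the cleanup suggested above (mainForm here is a placeholder for your application's main form):

    using System;
    using System.Runtime.InteropServices;
    using System.Windows.Forms;

    static class Hibernate
    {
        const uint WM_HIBERNATE = 0x03FF;   // Windows CE low-memory notification

        [DllImport("coredll.dll")]
        static extern IntPtr SendMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

        // Call this just before the performance-critical section.
        public static void SendToSelf(Form mainForm)
        {
            SendMessage(mainForm.Handle, WM_HIBERNATE, IntPtr.Zero, IntPtr.Zero);
        }
    }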
We have a similar issue with our .NET CF application.
Over time, our application progressively slows down, eventually grinding to a halt, which I suspect is due to high CPU load or, as @wil-s says, as if a thread is spinning and consuming CPU. The only conclusion I've reached so far is that either we have a rogue thread in our code, or there's an under-the-covers issue in .NET CF, maybe with the JITter.
Closing the application and re-launching returns our application to normal expected performance.
I have yet to implement the code change to test issuing WM_HIBERNATE or launching a dummy app which quits itself (as above) to force a code pitch, but based on the above comments I am fairly sure this will resolve our issue (so many thanks for that).
However, I'm interested to know whether a root cause was ever found for this specific issue.
Incidentally, and seemingly off topic (but bear with me), we're using a Freescale i.MX28 processor and are experiencing unpredictable FlashDisk corruption: 2 KB blocks of 0xFFs (erased blocks) appearing in random files located on NAND flash.
I'm mentioning this as I now believe the CPU and FlashDisk corruption issues are linked, due to this article as well as this one:
https://electronics.stackexchange.com/questions/26720/flash-memory-corruption-due-to-electricals
In the article, @jwygralak67 comments:
I recently worked through a flash corruption issue, on a WinCE system,
as part of a development team. We would sporadically find 2K blocks of
flash that were erased. (All bytes 0xFF) For about 6 months we tested
everything from ESD, to dirty power to EMI and RFI interference, we
bought brand new devices and tracked usage to make sure we weren't
exceeding the erase cycle limit and burning out blocks, we went through
our (application level) software with a fine-toothed comb.
In the end it turned out to be an obscure bug in the very low level
flash driver code, which only occurred under periods of heavy CPU
load. The driver came from a 3rd party. We informed them of the issue
we found, but I don't know if they ever released a patch.
Unfortunately, we're yet to make contact with him.
With all of this in mind, if we can work around the high CPU load, maybe the corruption will no longer be present. Another case of conjecture!
On that assumption, however, we still don't have a firm root cause for either situation, which is what I'm desperately seeking!
Any knowledge or insight, however small, would be very gratefully received.
@ctacke - we've spoken before regarding OpenNETCF via email, so I'm pleased to see your name!

Performance profiling of Windows apps. Better alternatives to the Visual Studio Profiler?

I am impressed with Visual Studio Profiler for performance analysis. Fast for my purposes and easy to use.
I am just curious to know about the caveats of the Visual Studio profiler. Are there other profilers for Windows applications that fare better on these caveats?
On the positive side, nobody makes great apps like Microsoft. Visual Studio is a fine product, and its profiler shares those attributes.
On the other hand, there are caveats (shared by other profilers as well).
In sampling mode, it doesn't sample when the thread is blocked. Therefore it is blind to extraneous I/O, socket calls, etc. This is an attribute that dates from the early days of prof and gprof, which started out as PC samplers; since the PC is meaningless while a thread is blocked, sampling was turned off. The PC may be meaningless, but the stack tells exactly why the thread is blocked, and when a lot of time is going into that, you need to know it.
In instrumentation mode, it can include I/O, but it only gives you function-level percent of time, not line level. That may be OK if functions happen to be small, or if they only call each other in a small number of places, so finding call sites is not too hard. I work with good programmers, but our code is not all like that. In fact, often the call sites are invisible, because they are compiler-inserted. On the other hand, stack samples pinpoint those calls no matter who wrote them.
The profiler does a nice job of showing you the split between activity of different threads. Then what you need to know is, if a thread is suspended or showing a low processor activity, is that because it is blocking for something that it doesn't really have to? Stack samples could tell you that if they could be taken during blocking. On the other hand, if a thread is cranking heavily, do you know if what it is doing is actually necessary or could be reduced? Stack samples will tell you that also.
Many people think the primary job of a profiler is to measure. Personally, I want something that pinpoints code that costs a lot of time and could be done more efficiently. Most of the time these are function call sites, not "hot spots". I don't need to know "a lot of time" with any precision. If I know it is, say, 60% +/- 20%, that's perfectly fine with me, because I'm looking for the problem, not the measurement. If, because of this imprecision, I fix a problem which is not the largest, that's OK, because when I repeat the process, the largest problem will be even bigger as a percentage, so I won't miss it.

Do all profilers significantly slow execution?

The profilers I have experience with (mainly the Digital Mars D profiler that comes with the compiler) seem to massively slow down the execution of the program being profiled. This has a major effect on my willingness to use a profiler, as it makes profiling a "real" run of a lot of my programs, as opposed to testing on a very small input, impractical. I don't know much about how profilers are implemented. Is a major (>2x) slowdown when profiling pretty much a fact of life, or are there profilers that avoid it? If it can be avoided, are there any fast profilers available for D, preferably for D2 and preferably for free?
I don't know about D profilers, but in general there are two different ways a profiler can collect profiling information.
The first is by instrumentation, by injecting logging calls all over the place. This slows down the application more or less. Typically more.
The second is sampling. The profiler interrupts the application at regular intervals and inspects the call stack. This does not slow down the application very much at all.
The downside of a sampling profiler is that the result is not as detailed as with an instrumenting profiler.
Check the documentation for your profiler to see whether it can run with sampling instead of instrumentation. Otherwise you have some new Google terms in "sampling" and "instrumenting".
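To make the instrumentation overhead concrete (shown in C# for illustration, since the idea is language-independent): an instrumenting profiler effectively wraps every call in timing and bookkeeping code like the hand-rolled sketch below, and that extra work is exactly where the slowdown comes from. A sampling profiler adds none of this inside your code.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    static class HandInstrumentation
    {
        // Accumulated ticks per instrumented function.
        static readonly Dictionary<string, long> TicksPerFunction = new Dictionary<string, long>();

        // What the injected instrumentation amounts to: two timestamps and a
        // table update around every call.
        static T Timed<T>(string name, Func<T> body)
        {
            Stopwatch sw = Stopwatch.StartNew();
            try { return body(); }
            finally
            {
                sw.Stop();
                long total;
                TicksPerFunction.TryGetValue(name, out total);
                TicksPerFunction[name] = total + sw.ElapsedTicks;
            }
        }

        static int Work(int n)
        {
            int sum = 0;
            for (int i = 0; i < n; i++) sum += i;
            return sum;
        }

        static void Main()
        {
            for (int i = 0; i < 1000; i++)
                Timed("Work", () => Work(100000));   // overhead paid on every single call

            foreach (KeyValuePair<string, long> kv in TicksPerFunction)
                Console.WriteLine(kv.Key + ": " + kv.Value + " ticks");
        }
    }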
My favorite method of profiling slows the program way way down, and that's OK. I run the program under the debugger, with a realistic load, and then I manually interrupt it. Then I copy the call stack somewhere, like to Notepad. So it takes on the order of a minute to collect one sample. Then I can either resume execution, or it's even OK to start it over from the beginning to get another sample.
I do this 10 or 20 times, long enough to see what the program is actually doing from a wall-clock perspective. When I see something that shows up a lot, then I take more samples until it shows up again. Then I stop and really study what it is in the process of doing and why, which may take 10 minutes or more. That's how I find out if that activity is something I can replace with more efficient code, i.e. it wasn't totally necessary.
You see, I'm not interested in measuring how fast or slow it's going. I can do that separately with maybe only a watch. I'm interested in finding out which activities take a large percentage of time (not amount, percentage), and if something takes a large percentage of time, that is the probability that each stackshot will see it.
By "activity" I don't necessarily mean where the PC hangs out. In realistic software the PC is almost always off in a system or library routine somewhere. Typically more important is call sites in our code. If I see, for example, a string of 3 calls showing up on half of the stack samples, that represents very good hunting, because if any one of those isn't truly necessary and can be done away with, execution time will drop by half.
If you want a grinning manager, just do that once or twice.
Even in what you would think would be math-heavy scientific number crunching apps where you would think low-level optimization and hotspots would rule the day, you know what I often find? The math library routines are checking arguments, not crunching. Often the code is not doing what you think it's doing, and you don't have to run it at top speed to find that out.
I'd say yes, both sampling and instrumenting forms of profiling will tax your program heavily - regardless of whose profiler you are using, and regardless of the language.
You could try h3r3tic's xfProf, which is a sampling profiler. Haven't tried it myself, but that guy always makes cool stuff :)
From the description:
If the program is sampled only a few hundred (or thousand)
times per second, the performance overhead will not be noticeable.

Effective Code Instrumentation?

All too often I read statements about some new framework and its "benchmarks". My question is a general one, but it comes down to these specific points:
What approach should a developer take to effectively instrument code to measure performance?
When reading about benchmarks and performance testing, what are some red-flags to watch out for that might not represent real results?
There are two methods of measuring performance: using code instrumentation and using sampling.
The commercial profilers I used in the past (Hi-Prof, Rational Quantify, AQTime) used code instrumentation (some of them could also use sampling), and in my experience this gives the best, most detailed result. Rational Quantify in particular lets you zoom in on results, focus on subtrees, remove complete call trees to simulate an improvement, and so on.
The downside of these instrumenting profilers is that they:
tend to be slow (your code runs about 10 times slower)
take quite some time to instrument your application
don't always correctly handle exceptions in the application (in C++)
can be hard to set up if you have to disable the instrumentation of DLLs (we had to disable instrumentation for the Oracle DLLs)
The instrumentation also sometimes skews the times reported for low-level functions like memory allocations, critical sections, ...
The free profilers that I use (Very Sleepy, Luke Stackwalker) use sampling, which makes it much easier to do a quick performance test and see where the problem lies. These free profilers don't have the full functionality of the commercial ones (although I submitted the "focus on subtree" functionality for Very Sleepy myself), but since they are fast, they can be very useful.
At this time, my personal favorite is Very Sleepy, with Luke StackWalker coming second.
In both cases (instrumenting and sampling), my experience is that:
It is very difficult to compare the results of profilers over different releases of your application. If you have a performance problem in your release 2.0, profile your release 2.0 and try to improve it, rather than looking for the exact reason why 2.0 is slower than 1.0.
You must never compare the profiling results with the timing (real time, cpu time) results of an application that is run outside the profiler. If your application consumes 5 seconds CPU time outside the profiler, and when run in the profiler the profiler reports that it consumes 10 seconds, there's nothing wrong. Don't think that your application actually takes 10 seconds.
That's why you must consistently check results in the same environment. Consistently compare results of your application when run outside the profiler, or when run inside the profiler. Don't mix the results.
Also use a consistent environment and system. If you get a faster PC, your application could still run slower, e.g. because the screen is larger and more needs to be updated on screen. If moving to a new PC, retest the last (one or two) releases of your application on the new PC so you get an idea of how times scale to the new PC.
This also means: use fixed data sets and check your improvements on those datasets. It could be that an improvement in your application improves the performance on dataset X but makes it slower with dataset Y. In some cases this may be acceptable.
Discuss with the testing team what results you want to obtain beforehand (see Oded's answer on my own question What's the best way to 'indicate/numerate' performance of an application?).
Realize that a faster application can still use more CPU time than a slower application, if the faster one uses multi-threading and the slower one doesn't. Discuss (as said before) with the testing team what needs to be measured and what doesn't (in the multi-threading case: real time instead of CPU time).
Realize that many small improvements may lead to one big improvement. If you find 10 parts in your application that each take 3% of the time and you can reduce it to 1%, your application will be 20% faster.
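To spell out the arithmetic behind that last point: 10 parts at 3% each account for 30% of the run time; reduce each to 1% and they account for only 10%, so the program now takes 80% of its original time, i.e. it finishes in 20% less time (roughly a 1.25x speedup in throughput terms).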
It depends what you're trying to do.
1) If you want to maintain general timing information, so you can be alert to regressions, various instrumenting profilers are the way to go. Make sure they measure all kinds of time, not just CPU time.
2) If you want to find ways to make the software faster, that is a distinctly different problem.
You should put the emphasis on the find, not on the measure.
For this, you need something that samples the call stack, not just the program counter (over multiple threads, if necessary). That rules out profilers like gprof.
Importantly, it should sample on wall-clock time, not CPU time, because you are every bit as likely to lose time due to I/O as due to crunching. This rules out some profilers.
It should be able to take samples only when you care, such as not when waiting for user input. This also rules out some profilers.
Finally, and very important, is the summary you get.
It is essential to get per-line percent of time.
The percent of time used by a line is the percent of stack samples containing the line.
Don't settle for function-only timings, even with a call graph.
This rules out still more profilers.
(Forget about "self time", and forget about invocation counts. Those are seldom useful and often misleading.)
Accuracy of finding the problems is what you're after, not accuracy of measuring them. That is a very important point. (You don't need a large number of samples, though taking more does no harm. The harm is in your head, making you think about measuring rather than about what the program is doing.)
One good tool for this is RotateRight's Zoom profiler. Personally I rely on manual sampling.
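As a toy illustration of that "percent of samples containing the line" idea (all frame names below are made up), tallying a handful of captured stacks could look like this; a frame that shows up on half the samples is costing roughly half the wall-clock time:

    using System;
    using System.Collections.Generic;

    static class StackSampleTally
    {
        static void Main()
        {
            // Each entry is the set of frames seen on the call stack at one pause.
            List<string[]> samples = new List<string[]>
            {
                new[] { "Main", "LoadReport", "ParseRow", "String.Split" },
                new[] { "Main", "LoadReport", "ParseRow", "Convert.ToInt32" },
                new[] { "Main", "DrawChart", "FormatLabel" },
                new[] { "Main", "LoadReport", "ParseRow", "String.Split" },
            };

            Dictionary<string, int> hits = new Dictionary<string, int>();
            foreach (string[] stack in samples)
                foreach (string frame in new HashSet<string>(stack))   // count a frame once per sample
                {
                    int n;
                    hits.TryGetValue(frame, out n);
                    hits[frame] = n + 1;
                }

            foreach (KeyValuePair<string, int> kv in hits)
                Console.WriteLine("{0,-16} ~{1:P0} of time", kv.Key, (double)kv.Value / samples.Count);
        }
    }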

Resources