Executing operations - delegates

Which is the better approach to run operations:
Use Delegate
Use Action
Use Predicate
Use Func
Which one is the best in terms of performance, memory and code maintainability?

Probably not what you want to hear, but: it all depends.
Using Predicate<> is a good idea in the specific applications it is suited to (though it is the same thing as Func<T, bool>).
If you can use Func<> (or its return-less cousin Action<>) then go for it. It's always better to re-use what is already there than to re-invent the wheel.
If all else fails, fall back on delegate. There's nothing wrong with it and it still works great.
I don't think you're going to find that any one of those consistently performs better in terms of speed or memory consumption, since their performance is dictated by the code you're running inside them.
Just pick what works for your needs and move on. If there's a performance issue at some point down the road... worry about it then. Code first, optimize later.
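For what it's worth, here is a minimal C# sketch (the type names and lambdas are mine, purely for illustration) showing that Predicate<T>, Func<T, bool>, Action<T> and a hand-rolled delegate are just different declarations over the same mechanism; what actually costs time is the code you put inside them:

using System;
using System.Collections.Generic;
using System.Linq;

class DelegateFlavours
{
    // A custom delegate type: the same shape as Func<int, bool>.
    delegate bool IsMatch(int value);

    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 4, 5 };

        Predicate<int> isEvenPredicate = n => n % 2 == 0;           // same signature...
        Func<int, bool> isEvenFunc     = n => n % 2 == 0;           // ...as Func<int, bool>...
        IsMatch isEvenCustom           = n => n % 2 == 0;           // ...and as a custom delegate
        Action<int> print              = n => Console.WriteLine(n); // returns nothing

        // List<T>.FindAll expects Predicate<T>; LINQ's Where expects Func<T, bool>.
        numbers.FindAll(isEvenPredicate).ForEach(print);
        foreach (var n in numbers.Where(isEvenFunc)) print(n);
        Console.WriteLine(isEvenCustom(4));
    }
}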

As with most things in programming, it depends. Are you trying to run async operations? Multithreaded operations? Are you dealing with events? Your name 'WPF User' suggests that you're using .NET.

Related

When should I use AsParallel() in LINQ/PLINQ

I'm looking to make use of the advantages of parallel programming in LINQ by using PLINQ, but I'm not sure I understand it entirely, apart from the fact that it's going to make use of all CPU cores more efficiently, so for a large query it might be quicker. Can I simply call AsParallel() on LINQ calls to make use of the PLINQ functionality, and will it always be quicker? Or should I only use it when there is a lot of data to query or process?
You can't just assume that execution in parallel is always faster. It depends. In some situations you will gain a lot on multi-core processors by doing things in parallel. In other cases you will just slow things down, since parallel loops carry a small overhead compared to simple loops.
For example, see my other answer which explains why nested parallel loops can be a disaster.
Now, the best way to know whether it is a good idea to use a parallel loop in a given context is to test both the parallel and the sequential implementations and measure the time they take.
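As a rough C# sketch of that advice (the workload and sizes here are made up; only a measurement on your own data will tell you anything), time the sequential and the AsParallel() versions of the same query and compare:

using System;
using System.Diagnostics;
using System.Linq;

class PlinqTiming
{
    // A stand-in for per-item work that is expensive enough to be worth parallelising.
    static double ExpensiveWork(int n) => Math.Sqrt(Math.Sin(n) * Math.Cos(n) + n);

    static void Main()
    {
        var data = Enumerable.Range(0, 5_000_000).ToArray();

        var sw = Stopwatch.StartNew();
        var sequentialSum = data.Select(ExpensiveWork).Sum();
        Console.WriteLine($"Sequential: {sw.ElapsedMilliseconds} ms (sum {sequentialSum})");

        sw.Restart();
        var parallelSum = data.AsParallel().Select(ExpensiveWork).Sum();
        Console.WriteLine($"Parallel:   {sw.ElapsedMilliseconds} ms (sum {parallelSum})");
    }
}

If the per-item work were trivial (say, n + 1), the parallel version could easily come out slower because of the partitioning and merging overhead.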
To further add to the answer, it also depends on your data. Going a little 'old school' for a moment, you could go down the road of loop unrolling, using for instead of foreach, and so on and so forth.
However, you really need to ensure you aren't micro-optimising. Depending on your data fetches and the size of the data (certainly with paged data), you can probably get away with not using it.
That's not to say that making your LINQ multi-core aware isn't cool. But be aware of the setup costs of doing something like that, and weigh the benefits against the complexity of maintaining and debugging that code.
If your algorithm is already top notch, then looking at the PLINQ extensions, a map-reduce mechanism or similar may be the way to go. But first check your algorithm and your overall benefits. Operating on the right kind of collection in the right kind of way will always bring its own benefits (and problems!).
What are you trying to solve?

Do many old ColdFusion Performance admonitions still apply in CFMX 8?

I have an old standards document that has gone through several iterations and has its roots back in the ColdFusion 5 days. It contains a number of admonitions, primarily for performance, that I'm not so sure are still valid.
Do any of these still apply in ColdFusion MX 8? Do they really make that much difference in performance?
Use compare() or compareNoCase() instead of "is not" when comparing strings
Don't use evaluate() unless there is no other way to write your code
Don't use iif()
Always use struct.key or struct[key] instead of structFind(struct,key)
Don't use incrementValue()
I agree with Tomalak's thoughts on premature optimization. Compare is not as readable as "eq."
That being said there is a great article on the Adobe Developer Center about ColdFusion Performance: http://www.adobe.com/devnet/coldfusion/articles/coldfusion_performance.html
Compare()/CompareNoCase(): comparing case-insensitively is more expensive in Java, too. I'd say this still holds true.
Don't use evaluate(): Absolutely - unless there's no way around it. Most of the time, there is.
Don't use Iif(): I can't say much about this one. I don't use it anyway because the whole DE() stuff that comes with it sucks so much.
struct.key over StructFind(struct,key): I'd suspect that internally both use the same Java method to get a struct item. StructFind() is just one more function call on the stack. I've never used it, since I have no idea what benefit it would bring. I guess it's around for backwards compatibility only.
IncrementValue(): I've never used that one. I mean, it's 16 characters and does not even increment the variable in place, which would have been the only excuse for its existence.
Some of the concerns fall in the "premature optimization" corner, IMHO. Personal preference or coding style apart, I would only start to care about some of the subtleties in a heavy inner loop that bogs down the app.
For instance, if you do not need a case-insensitive string compare, it makes no sense using CompareNoCase(). But I'd say 99.9% of the time the actual performance difference is negligible. Sure you can write a loop that times 100000 iterations of different operations and you'd find they perform differently. But in real-world situations these academic differences rarely make any measurable impact.
ColdFusion MX 8 is several times faster than MX 7 by all accounts. When it came out, I read many opinions that simply upgrading for the performance boost, without changing a line of code, was well worth it... It was worth it. With the gains in processing power and memory availability, you can generally do a lot more with less optimized code.
Does this mean we should stop caring and write whatever? No. Chances are where we take the most shortcuts, we'll have to grow the system the most there.
Finding that fine line between engineering enough and not over-engineering a solution is a delicate balance. There's a quote, by Knuth I believe, that says "premature optimization is the root of all evil".
For me, I try to base it on:
how much it will be used,
how expensive that will be across my expected user base,
how critical/central it is to everything,
how often I may be coming back to the code to extend it into other areas
The more these answers fall into the "probably" or "one way or another I will" category, the more attention I pay to it. If the code needs to be readable and a small performance hit results, readability is the better way to go for the sustainability of the code.
Otherwise, I let items fight for my attention while I solve and build things of real(er) value.
The single biggest favour we can do ourselves is to use a framework with any project, no matter how small, and to do the small things right from the beginning.
That way there is no sense of dread in going back to work on a system that was originally meant to be a temporary hack but never got refactored.

Performance anti patterns

I am currently working for a client who is petrified of changing lousy, untestable and unmaintainable code because of "performance reasons". It is clear that many misconceptions are running rife and the reasons are not understood, but merely followed with blind faith.
One such anti-pattern I have come across is the need to mark as many classes as possible as sealed internal...
*RE-Edit: I see marking everything as sealed internal (in C#) as a premature optimisation.*
I am wondering what are some of the other performance anti-patterns people may be aware of or come across?
The biggest performance anti-pattern I have come across is:
Not measuring performance before and after the changes.
Collecting performance data will show if a certain technique was successful or not. Not doing so will result in pretty useless activities, because someone has the "feeling" of increased performance when nothing at all has changed.
The elephant in the room: Focusing on implementation-level micro-optimization instead of on better algorithms.
Variable re-use.
I used to do this all the time figuring I was saving a few cycles on the declaration and lowering memory footprint. These savings were of minuscule value compared with how unruly it made the code to debug, especially if I ended up moving a code block around and the assumptions about starting values changed.
Premature performance optimization comes to mind. I tend to avoid performance optimizations at all costs, and when I decide I do need them I pass the issue around to my colleagues for several rounds, trying to make sure we put the obfu... eh, optimization in the right place.
One that I've run into was throwing hardware at seriously broken code, in an attempt to make it fast enough, sort of the converse of Jeff Atwood's article mentioned in Rulas' comment. I'm not talking about the difference between speeding up a sort that uses a basic, correct algorithm by running it on faster hardware vs. using an optimized algorithm. I'm talking about using a not obviously correct home-brewed O(n^3) algorithm when an O(n log n) algorithm is in the standard library. There are also things like hand-coding routines because the programmer doesn't know what's in the standard library. That one's very frustrating.
Using design patterns just to have them used.
Using #defines instead of functions to avoid the penalty of a function call.
I've seen code where expansions of defines turned out to generate huge and really slow code. Of course it was impossible to debug as well. Inline functions are the way to do this, but they should be used with care as well.
I've seen code where independent tests have been converted into bits in a word that can be used in a switch statement. Switch can be really fast, but when people turn a series of independent tests into a bitmask and start writing some 256 optimized special cases, they'd better have a very good benchmark proving that this gives a performance gain. It's really a pain from a maintenance point of view, and treating the different tests independently makes the code much smaller, which is also important for performance.
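A rough C# illustration of the pattern being criticised (the flags and cases are invented): packing independent tests into a bitmask and enumerating the combinations in a switch, versus simply testing each condition on its own:

using System;
using System.Collections.Generic;

class BitmaskSwitch
{
    [Flags]
    enum State { None = 0, Ready = 1, Connected = 2, Authenticated = 4 }

    // The "optimized" style: every combination of flags becomes its own special case.
    static string DescribeWithSwitch(State s)
    {
        switch (s)
        {
            case State.Ready: return "ready";
            case State.Ready | State.Connected: return "ready, connected";
            case State.Ready | State.Connected | State.Authenticated: return "ready, connected, authenticated";
            // ...and so on for every combination someone bothered to enumerate.
            default: return "other";
        }
    }

    // Treating the tests independently: smaller code, and each condition stays readable.
    static string DescribeIndependently(State s)
    {
        var parts = new List<string>();
        if (s.HasFlag(State.Ready)) parts.Add("ready");
        if (s.HasFlag(State.Connected)) parts.Add("connected");
        if (s.HasFlag(State.Authenticated)) parts.Add("authenticated");
        return parts.Count > 0 ? string.Join(", ", parts) : "other";
    }

    static void Main()
    {
        var s = State.Ready | State.Connected;
        Console.WriteLine(DescribeWithSwitch(s));
        Console.WriteLine(DescribeIndependently(s));
    }
}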
Lack of clear program structure is the biggest code-sin of them all. Convoluted logic that is believed to be fast almost never is.
Do not refactor or optimize while writing your code. It is extremely important not to try to optimize your code before you finish it.
Julian Birch once told me:
"Yes but how many years of running the application does it actually take to make up for the time spent by developers doing it?"
He was referring to the cumulative amount of time saved during each transaction by an optimisation that would take a given amount of time to implement.
Wise words from the old sage... I often think of this advice when considering doing a funky optimisation. You can extend the same notion a little further by considering how much developer time is being spent dealing with the code in its present state versus how much time is saved by the users. You could even weight the time by hourly rate of the developer versus the user if you wanted.
Of course, sometimes it's impossible to measure. For example, if an e-commerce application takes 1 second longer to respond, you will lose some small percentage of revenue from users getting bored during that 1 second. To make up that one second you need to implement and maintain optimised code. The optimisation impacts gross profit positively, and net profit negatively, so it's much harder to balance. You could try - with good stats.
Exploiting your programming language. Things like using exception handling instead of if/else just because in PLSnakish 1.4 it's faster. Guess what? Chances are it's not faster at all and that two years from now someone maintaining your code will get really angry with you because you obfuscated the code and made it run much slower, because in PLSnakish 1.8 the language maintainers fixed the problem and now if/else is 10 times faster than using exception handling tricks. Work with your programming language and framework!
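To make that concrete, a small hand-written C# example of the kind of trick being described: using exception handling where a plain conditional does the job. The TryParse version is clearer and typically cheaper, and it stays clear even if relative costs shift in a future runtime:

using System;

class ControlFlowTricks
{
    // Anti-pattern: exceptions used as control flow for an expected, ordinary case.
    static int ParseWithExceptions(string s)
    {
        try { return int.Parse(s); }
        catch (FormatException) { return 0; }
    }

    // Straightforward: an if/else (via TryParse) for a condition that is expected to occur.
    static int ParsePlainly(string s)
    {
        return int.TryParse(s, out var value) ? value : 0;
    }

    static void Main()
    {
        Console.WriteLine(ParseWithExceptions("not a number"));  // 0, after an exception is thrown and caught
        Console.WriteLine(ParsePlainly("not a number"));         // 0, no exception involved
    }
}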
Changing more than one variable at a time. This drives me absolutely bonkers! How can you determine the impact of a change on a system when more than one thing's been changed?
Related to this, making changes that are not warranted by observations. Why add faster/more CPUs if the process isn't CPU bound?
General solutions.
Just because a given pattern/technology performs better in one circumstance does not mean it does in another.
StringBuilder overuse in .NET is a frequent example of this one.
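For instance (a hand-written sketch; the strings are arbitrary): for a handful of fragments, plain concatenation or string.Join is simpler and no slower, while StringBuilder earns its keep when you append inside a long loop:

using System;
using System.Text;

class StringBuilderUse
{
    static void Main()
    {
        // Overkill: three fragments - a plain concatenation is clearer and just as fast.
        string first = "Jane", last = "Doe";
        string viaBuilder = new StringBuilder().Append(first).Append(' ').Append(last).ToString();
        string viaConcat  = first + " " + last;

        // Worthwhile: repeated appends in a loop avoid building a new string on every pass.
        var sb = new StringBuilder();
        for (int i = 0; i < 10_000; i++)
        {
            sb.Append(i).Append(',');
        }
        string csv = sb.ToString();

        Console.WriteLine(viaBuilder == viaConcat);  // True
        Console.WriteLine(csv.Length);
    }
}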
Once I had a former client call me asking for any advice I had on speeding up their apps.
He seemed to expect me to say things like "check X, then check Y, then check Z", in other words, to provide expert guesses.
I replied that you have to diagnose the problem. My guesses might be wrong less often than someone else's, but they would still be wrong, and therefore disappointing.
I don't think he understood.
Some developers believe a fast-but-incorrect solution is sometimes preferable to a slow-but-correct one. So they will ignore various boundary conditions or situations that "will never happen" or "won't matter" in production.
This is never a good idea. Solutions always need to be "correct".
You may need to adjust your definition of "correct" depending upon the situation. What is important is that you know/define exactly what you want the result to be for any condition, and that the code gives those results.
Michael A Jackson gives two rules for optimizing performance:
Don't do it.
(experts only) Don't do it yet.
If people are worried about performance, tell 'em to make it real - what is good performance and how do you test for it? Then if your code doesn't perform up to their standards, at least it's something the code writer and the application user agree on.
If people are worried about non-performance costs of rewriting ossified code (for example, the time sink) then present your estimates and demonstrate that it can be done in the schedule. Assuming it can.
I believe it is a common myth that super lean code "close to the metal" is more performant than an elegant domain model.
This was apparently debunked by the creator/lead developer of DirectX, who rewrote the C++ version in C# with massive improvements. [source required]
Appending to an array using (for example) push_back() in C++ STL, ~= in D, etc. when you know how big the array is supposed to be ahead of time and can pre-allocate it.
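The same point in C# terms (a sketch with an invented workload): growing a List<T> element by element forces repeated re-allocation and copying of its backing array, while passing the known size to the constructor allocates it once:

using System;
using System.Collections.Generic;

static class Preallocation
{
    // Growing as you go: the backing array is re-allocated and copied several times.
    static List<int> SquaresGrown(int count)
    {
        var result = new List<int>();
        for (int i = 0; i < count; i++) result.Add(i * i);
        return result;
    }

    // Pre-allocated: the capacity is known up front, so there are no intermediate copies.
    static List<int> SquaresPreallocated(int count)
    {
        var result = new List<int>(count);
        for (int i = 0; i < count; i++) result.Add(i * i);
        return result;
    }

    static void Main()
    {
        Console.WriteLine(SquaresGrown(1_000_000).Count);
        Console.WriteLine(SquaresPreallocated(1_000_000).Count);
    }
}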

What Simple Changes Made the Biggest Improvements to Your Delphi Programs [closed]

I have a Delphi 2009 program that handles a lot of data and needs to be as fast as possible and not use too much memory.
What small simple changes have you made to your Delphi code that had the biggest impact on the performance of your program by noticeably reducing execution time or memory use?
Thanks everyone for all your answers. Many great tips.
For completeness, I'll post a few important articles on Delphi optimization that I found.
Before you start optimizing Delphi code at About.com
Speed and Size: Top 10 Tricks also at About.com
Code Optimization Fundamentals and Delphi Optimization Guidelines at High Performance Delphi, relating to Delphi 7 but still very pertinent.
.BeginUpdate;
.EndUpdate;
;)
Use a Delphi profiling tool and discover your own bottlenecks. Optimizing the wrong bottlenecks is a waste of time. In other words, applying all of these suggestions while ignoring the fact that someone put a Sleep(1000) (or similar) in some very important code is a waste of your time. Fix your actual bottlenecks first.
Stop using TStringList for everything.
TStringList is not a general-purpose data structure for effective storage and handling of everything from simple to complex types. Look for alternatives. I use the Delphi Container and Algorithm Library (DeCAL, formerly known as SDL). Julian's EZDSL should also be a good alternative.
Pre-allocating lists and arrays, rather than growing them with each iteration.
This has probably had the biggest impact for me in terms of speed.
If you need to call Application.ProcessMessages (or similar) in a loop, try calling it only every Nth iteration.
Similarly, if updating a progress bar, don't update it every iteration. Instead, increment it by x units every x iterations, or scale the updates according to time or as a percentage of overall task length.
FastMM
FastCode (lib)
Use high-performance data structures, like hash tables. In many places it is faster to make one pass that builds a lookup hash table for your data. This uses quite a lot of memory, but it surely is fast. (This may be the most important one, but the first two are dead simple and need very little effort.)
Reduce disk operations. If there's enough memory, load the file entirely to RAM and do all operations in memory.
Consider the careful use of threads. If you are not using threads now, then consider adding a couple. If you are, make sure you are not using too many. If you are running on a dual- or quad-core computer (which most are these days) then proper thread tuning is very important.
You could look at the OmniThreadLibrary by Gabr, but there are a number of thread libraries in development for Delphi. You could easily implement your own parallel for using anonymous methods.
Before you do anything, identify slow parts. Do not touch working code which performs fast enough.
The biggest improvement came when I started using AsyncCalls to convert single-threaded applications that used to freeze up the UI, into (sort of) multi-threaded apps.
Although AsyncCalls can do a lot more, I've found it useful for this very simple purpose. Let's say you have a subroutine blocked like this: Disable Button, Do Work, Enable Button.
You move the 'Do Work' part to a local function (call it AsyncDoWork), and add four lines of code:
var a: IAsyncCall;
a := LocalAsyncCall(@AsyncDoWork);
while not a.Finished do
  Application.ProcessMessages;
a.Sync;
What this does for you is run AsyncDoWork in a separate thread, while your main thread remains available to respond to the UI (like dragging the window or clicking Abort). When AsyncDoWork is finished, the code continues. Because I moved it to a local function, all local vars are available, and the code does not need to be changed.
This is a very limited type of 'multi-threading'. Specifically, it's dual threading. You must ensure that your Async function and the UI do not both access the same VCL components or data structures. (I disable all controls except the stop button.)
I don't use this to write new programs. It's just a really quick & easy way to make old programs more responsive.
When working with a TStringList (or similar), set Sorted := False until it is needed (if at all). Seems like a no-brainer...
Create unit tests
Verify tests all pass
Profile your application
Refactor, looking for bottlenecks and memory problems
Repeat from Step 2 (comparing to previous pass)
Make intelligent use of SetLength() for strings and arrays. Optimise initialisation with FillChar or ZeroMemory.
Local variables created on the stack (e.g. record types) are faster than heap-allocated variables (objects and New()).
Reuse objects rather than Destroy then create. But make sure management code for this is faster than memory manager!
Check heavily-used loops for calculations that could be (at least partially) pre-calculated or handled with a lookup table. Trig functions are a classic for this, but it applies to many others.
If you have a list, use a dynamic array of anything, even a record, as follows:
This needs no classes, no freeing, and access to it is very fast. Even if it needs to grow you can do this - see below. Only use TList or TStringList if you need lots of size-changing flexibility.
type
  TMyRec = record
    SomeString : string;
    SomeValue  : double;
  end;

var
  Data : array of TMyRec;
  I : integer;

begin
  SetLength( Data, 100 ); // defines the length and CLEARS ALL DATA
  Data[32].SomeString := 'Hello';
  ShowMessage( Data[32].SomeString );

  // Grow the list by 1 item.
  I := Length( Data );
  SetLength( Data, I + 1 );
end;
Separating the program logic from user interface, refactoring, then optimizing the most-used, most resource-intensive elements independently.
Turn debugging OFF
Turn optimizations ON
Remove all references to units that you don't actually use
Look for memory leaks
Use a lot of assertions to debug, then turn them off in shipping code.
Turn off range and overflow checking after you have tested extensively.
If you really, really, really need to be lightweight then you can shed the VCL. Take a look at KOL & MCK. Granted, if you do that then you are trading features for a reduced footprint.
Use the full FastMM and study the documentation and source and see if you can tweak it to your specifications.
For an old BDE development, back when I first started Delphi, I was using lots of TQuery components. Someone told me to use a TTable master-detail setup after I explained to him what I was doing, and that made the program run much faster.
Calling DisableControls can omit unnecessary UI updates.
When identifying records, use integers if at all possible for record comparison. While a primary key of "company name" might seem logical, the time spent generating and storing a hash of this will greatly improve overall search times.
You might consider using runtime packages. This could reduce your memory footprint if there is more than one program running that was written using the same packages.
If you use threads, set their processor affinity. If you don't use threads yet, consider using them, or look into asynchronous I/O (completion ports) if your application does lots of I/O.
Consider if a DBMS database is really the perfect choice. If you are only reading data and never changing it, then a flat fixed record file could work faster, especially if the path to the data can be easily mapped (ie, one index). A trivial binary search on a fixed record file is still extremely fast.
BeginUpdate ... EndUpdate
ShortString vs. String
Use arrays instead of TStrings and TList
But the sad answer is that tuning and optimization will give you maybe 10% improvement (and it's dangerous); re-design can give you 90%. Once you really understand the goal, you often can restate the problem (and therefore the solution) in much better terms.
Cheers
Examine all loops, and look for ways to short-circuit. If you're looking for something specific and find it in a loop, use the BREAK command to bail immediately; there's no sense looping through the rest. If you know you don't have a match, use a CONTINUE as quickly as possible.
Take advantage of some of the FastCode project code. Parts of it were incorporated into VCL/RTL proper (like FastMM was), but there is more out there you can use!
Note, they have a new site they are moving to, but it seems to be a bit inactive.
Consider hardware issues. If you really need performance then consider the type of hard drive(s) your program and your databases are running on. There are a lot of variables especially if you are running a database. RAID is not always the best answer either.

When/how to optimize generic code?

When writing application code, it's generally accepted that premature micro-optimization is evil, and that profiling first is essential, and there is some debate about how much, if any, higher level optimization to do up front. However, I haven't seen any guidelines for when/how to optimize generic code that will be part of a library or framework, where you never know exactly how your code will be used in the future. What are some guidelines for this? Is premature micro-optimization still evil? How should performance be balanced with other design goals such as ease of use, ease of demonstrating correctness, ease of implementation, and flexibility?
"How should performance be balanced with other design goals...?"
Get it to work.
Optimize it until it cannot be optimized further.
Note the order. "Avoid premature optimization" means optimize it after it works.
Optimization is still very, very important. Premature optimization does not mean NO optimization. It means optimize after it works.
I would say that optimization must take a back seat to other design goals such as ease of use, ease of demonstrating correctness, ease of implementation, and flexibility.
Try to write your code intelligently using good practices and avoiding the obvious pitfalls. Still, don't optimize until you can do it with a profiler and real use cases.
You will still encounter some use cases you never thought of but you can't optimize for them if you never thought of them.
A well designed framework will usually be a reasonably performing one too.
I heard an interesting and very enlightening discussion about the famous Knuth quote on a podcast recently (I think it was Deep Fried Bytes), which I'll try to summarize:
Everyone knows the famous quote: Premature optimization is the root of all evil..
However, that's only half of it. The full quote is:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Look at this carefully - say about 97% of the time.
The other side of that statement is about 3% of the time, "small" efficiencies are critical.
My monitor displays about 50 lines of code. Statistically, at least 1-2 lines of code on every screen will contain something performance sensitive! Following the common wisdom of 'do it now, optimize it later' doesn't seem like such a cunning plan when you think that on every screen you have a possible performance issue.
IMHO you should always be thinking about performance. You shouldn't expend a great deal of effort or sacrifice maintainability for it until proven by profiling/testing, but you should definitely have it in the back of your mind.
I'd personally apply this to generic code like this:
You are bound to have some code somewhere, which when you wrote it you thought "this will be slow", or "this is a dumb algorithm, but it's not important right now, so I'll fix it later." As you're in a shared library and you can't assert that method A will only ever get called with 5 items, you should go in and clean all this stuff up.
Once you've sorted those things out, I wouldn't bother going much further. Maybe run the profiler over your unit tests to make sure nothing dumb has snuck through, but otherwise wait for feedback from the consumers of your library.
My rule of thumb is:
don't optimize
The full rule is actually:
if you don't have a metric, don't optimize
This means that if you haven't measured the performance and generated a concrete metric, you shouldn't be doing anything to make the code perform better.
After all: without a metric, how do you know what to optimize?
Once you have done some profiling, you may actually be surprised by where the performance bottlenecks of your system are... in my experience it is often the case that relatively minor changes can have a drastic impact.
You're right that it's not always clear where the best bang for the buck is for your time. Your best bet is to be a user of your framework as well as its designer.
Employ your own framework in a non-trivial application and try to exercise the whole range of functionality. The more you use it, the clearer it will become which things most need to be optimal.
Also, get feedback and suggestions from other users as frequently as possible. You will inevitably find that other people want to do things with your framework that you would never think of.
I think the best approach is to have a really good set of use cases for how your framework will be exercised. Only then will you have any good idea of whether the performance is adequate for its intended use.
Sure, you're never going to know how somebody is going to use your framework in the future (in the early years of my career, it never failed to amaze me the creative ways that users put my software to use - ways I'd never envisaged!) but having thought about how you think it will be used should get you most of the way there.

Resources