Good memory profiling, leak and error detection for Windows - debugging

I'm currently looking for a good memory / leak detection tool for Windows. A few years ago, I used Numega's Boundschecker, which was VERY good. Right now it seems to have been
sold to Compuware, which apparently sold it again to some other company.
Trying to evaluate a demo of the current version has so far been very frustrating, in the best "enterprisey" tradition:
(a) no advertised prices on their website (Great Red Flashing Lights of Warning);
(b) the contact form asked for the number of employees and other private information;
(c) no response to my emails asking for an evaluation and a price.
I had to conclude that BoundsChecker is now one of "those" products. Y'know, the type where you innocently call and tomorrow 3 men in black suits turn up at your
building wanting to talk to you about "partnerships" and not-so-secretly gauge the size of your company and therefore how much they can get away with charging you.
SO, rant aside, can anyone recommend an excellent memory checking/leak detection tool, how much it costs, and suggestions for where to buy?

You can try Memory Validator. There is an evaluation copy as well, and the licensed version prices are listed separately.

Beware of Compuware's bounds checker:
It is stable up to a point. It costs about 3600 dollars, and about an equal amount to maintain from year to year.
But that is peanuts compared to Coverity.
I haven't gotten a good test run to work right under BoundsChecker for the last 3 years. That is why I don't use it anymore, and why I don't recommend you use it, except on small, tiny projects. On big enterprise apps, it's just too slow, takes up too much memory, and simply stops working. I mean really, do you want your application to take 5 minutes to boot? Do you want your test executions to take 3 times longer? Worst of all is its tendency to just lock up. Customer support from Compuware was pretty limited. But BoundsChecker was sold to another company (can't remember their name) whose website is so antiseptic, sterilized and dry, it makes financial company websites look entertaining.
But the killer problem with BoundsChecker is that it is 32-bit only. So if you need to profile a large application that takes lots of memory (more than 1 GB), you are simply out of luck. BoundsChecker will eat up 2 to 3 GB of memory from your app. And with 32-bit apps, you well know that 4 GB is the most you get.
Coverity is great if you hire a person to babysit it. Seriously, Coverity costs more than my house. That's not to mention the person my company would have to hire to babysit the dang thing. It takes 24 hours to do its magic. And it doesn't do all that much more magic than simply compiling your code at warning level 4 and turning on 'Code Analysis' (in Visual Studio).
I've tried other memory leak tools (for native code). They all SUCK big time, are too complicated, or just plain old lock up the system.
I'm so disgusted with the entire field of memory profilers that I just want to go back to using the debug CRT. That or just write my own.
As for code coverage tools, Bullseye wins hands down. Why can't a memory leak detector just work as solidly as Bullseye?
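For what it's worth, the debug CRT route can get you surprisingly far on Windows. Here is a minimal sketch of how it is typically wired up in an MSVC debug build (the allocation number passed to _CrtSetBreakAlloc is just a placeholder you'd take from a real leak report):

    // Minimal sketch of MSVC debug-CRT leak detection (debug builds only).
    #define _CRTDBG_MAP_ALLOC   // map allocations to debug versions so reports include file/line
    #include <cstdlib>
    #include <crtdbg.h>

    int main()
    {
        // Dump all unfreed blocks automatically at process exit.
        _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);

        int* leaked = new int[32];   // deliberate leak so the report has something to show
        (void)leaked;

        // Once a report gives you a leaked block's allocation number, you can
        // break on it in a later run, e.g. _CrtSetBreakAlloc(147);  // placeholder number

        return 0;   // the leak report appears in the debugger's Output window
    }

It won't give you BoundsChecker-style guard pages or API validation, but it costs nothing and it doesn't lock up.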

Microsoft's Application Verifier tool is very good at detecting leaks as well as a bunch of other common programming mistakes on Windows (COM, heaps, TLS, locks, etc).
It doesn't do so much in the way of profiling, but it will give you the stack of where the memory was allocated when you leak it, or the stack where it was freed the first time if you double-free, etc.
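As an illustration (a deliberately broken toy, not anything Application Verifier specific), this is the kind of bug those heap checks pin down, reporting both the allocation stack and the stack of the first free:

    // Deliberately broken example: a double free of a heap block.
    #include <cstdlib>

    int main()
    {
        void* block = std::malloc(64);
        std::free(block);   // first free: fine
        std::free(block);   // second free: heap corruption that verifier-style
                            // tools flag, with the stacks mentioned above
        return 0;
    }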

I've been fairly happy with AQTime, and the pricing is tough to beat (and very transparent - $599/user).
The allocation profiler works fairly well - it's not quite as sophisticated as Boundschecker (from what I remember of Boundschecker), but what it does, it does well - and it handles quite a few other things, too.

This thread is way out of date. It is true that we haven't been able to convince Micro Focus to post prices out on their main web site, but you can get prices on ComponentSource, and we don't send out agents in dark suits and shades 8-/ Pricing depends on whether you are asking for a single user or multiple user license, and whether you want just BoundsChecker, or you want all of DevPartner Studio. See ComponentSource Listing for details.
Anyway, we've not stopped working on the product. On February 4th, we released version 10.5, which (finally) supports 64-bit applications (AMD64, Intel64, not Itanium) on Vista and Windows 7. Quite a few old bugs were fixed along the way. The next update will include support for XP64 and Windows 7 SP1, as well as Visual Studio 2010 SP1.


Bleeding edge vs field tested technology. How will you strike a balance [closed]

I have been pondering this for some time. How do you pick a technology (I am not talking about Java vs .Net vs PHP) when you are planning a new project or maintaining an existing project in an organization?
Arguments for picking the latest technology
It might overcome some of the limitations of the existing technology (think NoSQL vs RDBMS when it comes to scalability). Sometimes the latest technology is backward compatible, so you only gain the new features without breaking the old functionality
It will give a better user experience (maybe HTML5 for videos, just a thought)
Will cut down development time/cost and make maintenance of the code base relatively easy
Arguments for picking field tested technology/against picking a bleeding edge technology
It has not stood the test of time. There can be unforeseen problems, and convoluted solutions might lead to more problems during the maintenance phase, and the application might become a white elephant
Standards might not yet be in place. Standards might change and significant rework might be needed to make the project adhere to them. Choosing the field-tested technology saves that effort
The new technology might not be supported by the organization. Supporting a new (or, for that matter, a different) technology would require additional resources
It might be hard to get qualified resources with bleeding edge technology
From a developer's perspective, I do not see a reason not to get your hands dirty with some new technology (in your spare time), but you might be limited to open source / freeware / developer editions
From an organization's perspective, it looks like a double-edged sword. Sit too long on a "field tested" technology and good people might move away (not to mention that there will always be people who prefer familiar technology and refuse to update their knowledge). Try an unconventional approach and you risk overrunning the budget/schedule, not to mention the unforeseen risks
TL;DR
Bottom line: when do you consider a technology mature enough that it can be adopted by an organization?
Most likely you work with a team of people and this should be taken into consideration as well. Some ways to test/evaluate technology maturity:
Is your team and management receptive to using new technology at all? This might be your biggest barrier. If you get the sense that they aren't receptive, you can try big formal presentations to convince them... or just go try it out (see below).
Do some Googling around for problems people have with it. If you don't find much, that's also all the help you'll find when you run into problems yourself
Find some new small project with very low risk (e.g. something just you or a couple people would use), apply new technology in skunkworks fashion to see how it pans out.
Try to find the most mature of the immature. For instance, suppose you're thinking about a NoSQL type of data store. All of the NoSQL stuff is immature when you compare it against an RDBMS like Oracle that has been around for decades, so look at the most mature of those solutions, one with support behind it, either professionally or via support groups.
The easiest project to start with is to re-write an existing piece of software. You already have your requirements: make it just like that. Just pick a small piece of the software to re-write in the new technology, preferably something you can hammer at with unit/load testing to see how it stands up. I'm not advocating re-writing an entire application to prove it out, but a small measurable chunk.
A few rules of thumb.
Only use one "new" technology at a time. The more new things you use, the greater chance of there being a serious problem.
Make sure there is an advantage to using it. If that cool, new technology does not give you some advantage, why are you using it?
Plan for the learning curve. There will be aspects of the new technology that you do not know about. You, and your team, will have to spend more time learning about them than you think you will.
If possible, try the new technology on a small, less important project first. Your company's accounting system is not the best place to experiment.
Have a backup plan. New technologies don't always turn out to be worth it. Know when you are in a "coffin corner" and it is time to bail out.
There's a difference between "field tested" and "out-of-date." Developers (myself included) generally prefer bleeding edge stuff. To some extent, you have to keep your development staff happy and interested in their jobs.
But I've never had a customer unhappy with field tested technology. They are generally unaware or unconcerned about the technology that is used in producing a product. Their number one priority is how it works in their daily interactions with it.
When starting a new project, two questions come to mind in evaluating if I should move to a new platform:
1) What benefits do I get from going to the new platform? If it offers me a dramatically reduced development time or significant performance increases for the users, I will consider a semi-bleeding-edge technology.
2) What risks are associated with the new platform. Is it likely that there are some scenarios that I will encounter that aren't quite worked out in the new platform? Is it likely that support for this new platform will fizzle out and I'll be left holding the bag on supporting a deprecated environment? Are there support channels in place that I can use if I get stuck at a critical juncture of my project?
Like everything, it's a cost/benefit analysis. Generally speaking, while I always learn and train on new technologies, I won't build something for a client using a technology (environment, library, server platform, etc) that hasn't been widely adopted by a large number of developers for at least 6-12 months.
That depends on the context. Each organisation has to make its own decisions. The classic literature on this topic is Crossing the Chasm by Geoffrey A. Moore.
If the company/community developing the product is known for good products then I'm very happy to put a safe bet on their new products.
For example I would be quite happy to develop on Rails 3 or Ruby 1.9, as I am quite sure they will be fine when finalized.
However I wouldn't write much code in superNewLang until I was convinced that they had a great, well supported product, or they had a feature I couldn't live without.
I will tend to get the most trusted product, that suits all my needs.
You got to ask yourself only one question ... do I feel lucky ?
Where's the money ?
Do you get to profit big and fast enough even if tech X is a flop ?
If not, does new tech bring higher perf for a long time?
Like 64-bit CPU, Shader Model 4, heavy multi-threading
Do you see a lot of ideological trumpeting around it?
"paradigm shift" blurbs etc. - wait 2-8 yrs till it cools off and gets replaced :-)
Is it bulky and requires 2x of everything just to run?
let your enemy pay for it first :-)
Can you just get some basic education and a trial project without risking anything?
might as well try unless it looks like a 400 lb lady who doesn't sing :-)
There is no general answer to such a question - please go to #1

How to measure productivity loss from slow PCs running Visual Studio? [closed]

Many PCs we have on the development team are outdated and very slow to run Visual Studio 2008. They should very much be replaced with newer machines. But there's a general reluctance from management/the company to buy new machines.
How do we come up with numbers and benchmarks to show that these slow PCs are causing a loss in productivity?
Obviously we can't ask them to sit down with us while we build solutions and/or open various files.
Is there an objective way to come up with some kind of reliable numbers that non-technical people can understand?
It'd be nice to have a way to measure this across an entire organization on many different PCs running Visual Studio. I'm looking for an answer that does better than using a physical stopwatch. :)
Modify your solutions so that the pre-build and post-build events write the current time to a centralised database. Include the machine name and the name of the project.
You can then display this information as graph showing time for build vs machine.
This should show a correlation between the build time and the age of the machine, hopefully showing that the older machines are slower. You could even convert the time into a $ (or £ or €) value to show how much these older machines are costing. Summing this over time will give a value for the payback on any investment in new machines.
By modifying the solutions you can get this logging deployed onto all development machines by simply getting everyone to do a "get latest" from source control.
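As a sketch of what the build-event hook could call - assuming a shared CSV file on a network path rather than a real database, and a hypothetical helper named buildlog.exe invoked as buildlog.exe "$(ProjectName)" pre from the pre-build event and with post from the post-build event:

    // buildlog.cpp - sketch of a tiny logger invoked from VS pre/post-build events.
    #include <chrono>
    #include <cstdlib>
    #include <fstream>
    #include <iostream>

    int main(int argc, char* argv[])
    {
        if (argc < 3) {
            std::cerr << "usage: buildlog <project> <pre|post>\n";
            return 1;
        }

        const char* machine = std::getenv("COMPUTERNAME");   // set by Windows
        const auto now = std::chrono::system_clock::now().time_since_epoch();
        const auto ms  = std::chrono::duration_cast<std::chrono::milliseconds>(now).count();

        // Append one CSV row: machine, project, phase, timestamp (ms since epoch).
        // The share below is a placeholder for your central log location.
        std::ofstream log(R"(\\server\share\buildtimes.csv)", std::ios::app);
        log << (machine ? machine : "unknown") << ','
            << argv[1] << ',' << argv[2] << ',' << ms << '\n';

        return 0;
    }

Pairing the pre and post rows per project and machine then gives the build durations you can graph.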
This does not really answer your question, but might help to achieve the required results. Tell your bosses that The Programmer's Bill of Rights is something to be taken seriously.
I would attempt to explain to them that programmers cost much more than machines. If you spend 30 minutes a day waiting, do the math and figure out what percentage of your salary is wasted due to laggy machines. Present these numbers to them, and compare it to the price of a new computer, and explain how they could save money in the long run by upgrading.
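For example (numbers purely illustrative): at a $75,000 salary a developer costs roughly $36/hour, so 30 minutes of waiting per day is about $18/day, or around $4,500 over a 250-day working year - against which a $1,500 machine pays for itself in a few months.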
If they choose to continue to spend big bucks affording your wisdom only to have you sit there and watch a spinning cursor, just laugh because the joke is on them.
Many PHB's understand productivity in terms of lines of code (which IMO is very Wrong).
Can you record the amount of code produced per day on the slow machines vs not so slow machines?
Slow machines are the bane of development, IMHO, especially since any delay gets developers out of concentration and can lead to a much costlier switch to things like web browsers. There can be other weird effects, like how a slight increase in the latency for the Javadoc popup (or C# equivalent) to appear when you hover over a method changes the chances that someone will consult the documentation.
If legal in your company (at least for self use), record about half an hour of work with a screen capture tool like Camtasia. Then use a fast editor to spot the times the machine was hung (easy if you have a cursor change, progress bar, etc.) and count the time and number of instances. I've done that for hours of tapes - it doesn't take that long. Use these numbers to argue the case, though you need to also argue that it leads to the indirect costs like a context switch.
Also, in my experience, hard drives are often the major cause of slowdown, not CPU or RAM, and unfortunately most organizations skimp on fast hard drives or SSDs and have very strict rules about replacing them.
Don't forget to factor in the cost of time spent figuring out how much slow PCs cost you (this post in other words)!

Best way to limit a trial version? [closed]

I'm building a shareware application that allows users to import various types of files (XML, CSV, etc.) into a database. I'd like to provide a trial version, but limit it in some way so that users who really need it can't get away with never buying it.
I considered time based limit, but it seems that there are so many ways to work around that, especially today with virtual machines and stuff.
So, I'm thinking to limit the functionality, but I don't want this trial version to become crippleware.
Have you ever bought any shareware software? What was the limitation of its trial version?
edit: Also, how do you feel about nag-screens as a user?
Look at this blog post; it's a survey by Andy Brice of small software vendors. There you can find the trial types and the percentage of use of each.
http://successfulsoftware.net/2009/04/23/the-truth-about-conversion-ratios-for-software/
I recommend the Business of Software forum:
http://discuss.joelonsoftware.com/default.asp?biz
Regular posters there seem to agree that you don't have to think too much about piracy. People who won't buy won't buy anyway and complicated copy protection schemes run the risk of bothering the honest customers.
This is an excellent post by Patrick McKenzie about this issue:
http://www.kalzumeus.com/2006/09/05/everything-you-need-to-know-about-registration-systems/
Limiting the number of uses is the fairest way.
As for stopping circumvention... Anyone who wants to crack your software will, and most people are too lazy to circumvent anything but the most trivial usage.
I'd argue that you want to get those people who regularly use your software to happily pay for it. They're most likely to happily pay for it if they've used it a non-trivial number of times.
BTW, Beyond Compare has a system where you can use the software for 30 non-consecutive days, i.e. if you use it one day and then use it again 2 weeks later, that still only counts as two days. I've never been so happy to pay for software as when I paid for Beyond Compare.
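A rough sketch of how that kind of non-consecutive-days counter could be tracked (where the set of dates is persisted, and any tamper resistance, is deliberately left out):

    // Sketch: count distinct calendar days on which the app was launched,
    // in the spirit of the "30 non-consecutive days" trial described above.
    #include <ctime>
    #include <set>
    #include <string>

    struct TrialState {
        std::set<std::string> daysUsed;          // e.g. "2024-05-17"; persist this somewhere
        static constexpr int kMaxDays = 30;

        // Record today's date and report whether the trial is still within its limit.
        bool recordUseAndCheck()
        {
            const std::time_t t = std::time(nullptr);
            std::tm local{};
            localtime_s(&local, &t);             // Windows-flavoured; use localtime_r elsewhere
            char buf[11];
            std::strftime(buf, sizeof(buf), "%Y-%m-%d", &local);
            daysUsed.insert(buf);                // a repeat launch on the same day adds nothing
            return static_cast<int>(daysUsed.size()) <= kMaxDays;
        }
    };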
Many people will only need this tool once in a lifetime for an import of some data.
So you will definitely have to go with a feature-limited version instead of a time-limited one.
It is common for software during the trial period to lack (or limit) some significant feature, such as printing or saving. I've tried (and bought) a panorama assembly tool that put a large watermark across the finished image. It allowed the quality of the tool to be evaluated, but put a real limit on further use of the images created during the evaluation period.
I've shipped a trial version of a commercial product that allowed all features to be used with limits generous enough to run through all of the tutorials in the user manual, but did not permit saving your work. We know that many users were able to quickly determine whether the tool would work for them by constructing tests with their own data, and it generated more than enough sales to justify the added development work to create the demo version.
The trick in your case will be to find a way to limit the functionality without eliminating the key utility of a trial period: actually trying out the software.
Perhaps limiting the number of records that can be converted in one run would do the trick?
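For example, a record cap in the import loop can be as simple as the following sketch (the 100-record limit and the Record/importRecord names are made up for illustration):

    // Sketch: a trial build stops importing after a fixed number of records.
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Record { std::string data; };                 // stand-in for the real record type

    void importRecord(const Record&) { /* real import work goes here */ }

    constexpr std::size_t kTrialRecordLimit = 100;       // arbitrary cap for illustration

    std::size_t importRecords(const std::vector<Record>& records, bool isTrial)
    {
        std::size_t imported = 0;
        for (const auto& r : records) {
            if (isTrial && imported >= kTrialRecordLimit) {
                std::cout << "Trial version: import limited to "
                          << kTrialRecordLimit << " records per run.\n";
                break;
            }
            importRecord(r);
            ++imported;
        }
        return imported;
    }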
I would definitely go with a time-based limit. As you mentioned, this usually is fairly easy to circumvent, but I promise you that if your software has a large enough user-base, cracks will be around in no time anyway. Thus there IMHO is no point in making it hard/impossible to pass by your software's time limit.
Any other limitations (such as annoying pop-ups or limited functionality) would definitely be a show-stopper for me. If I cannot evaluate the software properly, it has to be very good to make me consider buying it.
Please do not fall into the trap of limiting anything about the trial version other than time or number of uses. Reducing functionality and/or having annoying popups saying "This is a Pro feature" will simply alienate your users. Besides, the trial period is a chance for you to impress potential buyers, so you should showcase all of the features rather than hide them.
This is a marketing decision and the answer to any marketing decision is always "it depends...". For example it doesn't make much sense (commercially) to have a 30 day time limit on software that most users will probably only use once (e.g. harddisk recovery).
There are some issues associated with time-limited trials:
A user might use it once and then not get a chance to complete the trial before the 30 days are up.
Time limitations are easy to work around, e.g. using VMs, registry hacks, additional machines or resetting the system clock.
Longer sales cycle - most customers will only buy on day 31.
Some of these are avoided if you go for a limited number of uses.
The main issue with feature limited trials is that the customer might not feel they can fully evaluate the system. But this isn't too much of a problem if you have a good money-back guarantee (and you should).
As a vendor I prefer feature limited trials in most cases. You can be quite creative in how you cripple the trial (watermarking, limited number of records input or output etc).
I just wanted to add that this is a strangely biased group. As programmers we want to be able to see all of the features, and play with them, etc. We get annoyed by certain limitations, and ads, etc.
However, normal people seem to react to these things differently. I'm just saying that we aren't really the target market here, probably.
ALSO, we should note that most people aren't nearly as good as we are at circumventing these things, so as has been mentioned, something pretty dumb is probably ok...
We have one product with a free, limited version and a pro version, and another product with a 2-week free trial. Actually the free version also had a 2-week free trial of the pro, so it was sort of both...
All in all, I think it depends on the product and the people using it...
Limited number of uses
Ads
Splash screens that make you wait X seconds before it goes away
Not all functionality is available
Demo limited to a single database
Closing the gap was a very enlightening read when Eric first posted it, I believe it is equally relevant today.
You could limit the size of the import file supported. Or the number of uses. Or if insertion speed is a critical factor you could start out at full speed and then after 30 days add some delay (but be sure to tell your users this is intentional).
BTW, one of the worst schemes was in the Ingres database for Sun workstations 20-odd years ago. If you entered the wrong license key, Ingres silently enabled a dozen serious query processing errors. After playing with it for an afternoon, I told the salesman that his product was ridiculously buggy and that we'd be going with a competitor. He quickly told me what the issue was, but by then the sale was all but lost.
You can limit the number of times users can use the functionality within a given time period; say they can use it 8 times in a month before it throws up a "nag screen" on use. And if they need to use it more than, say, 20 times in a month, insist that they buy the software. If you do this, you may want to provide a certain number of keys for charitable or educational purposes as well; it helps users buy the software when they know that some of their money is going toward a charitable purpose.
I've purchased plenty of shareware but always tend to be annoyed when it disables certain features. I prefer to check all features before I make a purchase.
I do like the limitations that Altova add to their software. You always need a registration key to use their software and to get one, you have to provide your email address to which they will send you your temporary key. You can then use their software for up to a month and then you'll need a new key. Some people will just continue to request a new temporary license but most users will sooner or later purchase a permanent key.
The Altova software does make a "call home" to validate the key it uses. It does this to restrict the number of users that can use the software. I can install their product on as many computers as I like, but at any moment, I can only use it on a single computer. If I try to use the software on two or more systems at the same time, the software will discover this multiple usage and thus block my access to the application.
Still, I do know that many people are willing to pay for your software if it's good enough. Especially if you can provide some additional services next to the software itself. (E.g. regular updates or subscriptions to additional data feeds.)
No, I have never bought limited trial versions. However, I have donated to a couple of open source projects. If this were software for individuals instead of companies, I would recommend a donation system.
This software sounds like a work-related app, so:
constant reminder, like Foxit
just 2 file formats
app married to a single database
Since this is software to import data into a database, you could also add rows to the database indicating that the import was made with a shareware version.
Just annoying enough (the rows are still deletable), but you get most of the functionality.
How about keeping all the functionality in but randomly re-arranging the menu/buttons in the non-paid version ;)

Does Wirth's law still hold true?

Adage made by Niklaus Wirth in 1995:
«Software is getting slower more rapidly than hardware becomes faster»
Do you think it's actually true?
How should you measure "speed" of software? By CPU cycles or rather by time you need to complete some task?
What about software that is actually getting faster and leaner (measured by CPU cycles and MB of RAM) and more responsive with new versions, like Firefox 3.0 compared with 2.0, Linux 2.6 compared with 2.4, or Ruby 1.9 compared to 1.8? Or completely new software that is an order of magnitude faster than the old stuff (like Google's V8 engine)? Doesn't that negate the law?
Yes I think it is true.
How do I measure the speed of software? Well, the time to solve tasks is a relevant indicator. For me as a user of software, I do not care whether there are 2 or 16 cores in my machine. I want my OS to boot fast, my programs to start fast, and I absolutely do not want to wait for simple things like opening files to be done. Software simply has to feel fast.
So... when booting Windows Vista, there is no fast software to watch.
Software / Frameworks often improve their performance. That's great but these are mostly minor changes. The exception proves the rule :)
In my opinion it is all about feeling. And it feels like computers have been faster years ago. Of course I couldn't run the current games and software on those old machines. But they were just faster :)
It's not that software becomes slower, it's that its complexity increases.
We now build upon many levels of abstraction.
When was the last time people on SO coded in assembly language?
Most never have and never will.
It is wrong. The correct version is:
Software is getting slower at the same rate as hardware becomes faster.
The reason is that this is mostly determined by human patience, which stays the same.
It also neglects to mention that the software of today does more than that of 30 years ago, even if we ignore eye candy.
In general, the law holds true. As you have stated, there are exceptions "that prove the rule". My brother recently installed Win3.1 on his 2GHz+ PC and it boots in a blink of an eye.
I guess there are many reasons why the law holds:
Many programmers entering the profession now have never had to consider limited speed / resourced systems, so they never really think about the performance of their code.
There's generally a higher importance placed on getting the code written by the deadline, and performance tuning usually comes last, after bug fixing / new features.
I find FF's lack of an immediate splash dialog annoying, as it takes a while for the main window to appear after starting the application and I'm never sure if the click 'worked'. OO also suffers from this.
There are a few articles on the web about changing the perception of speed of software without changing the actual speed.
EDIT:
In addition to the above points, an example of the low importance given to efficiency is this site, or rather, most of the other Q&A sites. This site has always been developed to be fast and responsive and it shows. Compare this to the other sites out there - I've found phpBB based sites are flexible but slow. Google is another example of putting speed high up in importance (it even tells you how long the search took) - compare with other search engines that were around when google started (now, they're all fast thanks to google).
It takes a lot of effort, skill and experience to make fast code which is something I found many programmers lack.
From my own experience, I have to disagree with Wirth's law.
When I first approached a computer (in the '80s), the time to display a small still picture was perceptible. Today my computer can decode and display 1080p AVCHD movies in real time.
Another indicator is the frame rate of video games. Not long ago it used to be around 15 fps. Today 30 fps to 60 fps is not uncommon.
Quoting from a UX study:
The technological advancements of 21 years have placed modern PCs in a completely different league of varied capacities. But the “User Experience” has not changed much in two decades. Due to bloated code that has to incorporate hundreds of functions that average users don’t even know exist, let alone ever utilize, the software companies have weighed down our PCs to effectively neutralize their vast speed advantages.
Detailed comparison of UX on a vintage Mac and a modern Dual Core: http://hubpages.com/hub/_86_Mac_Plus_Vs_07_AMD_DualCore_You_Wont_Believe_Who_Wins
One of the issues of slow software is a result of most developers using very high end machines with multicore CPUs and loads of RAM as their primary workstation. As a result they don't notice performance issues easily.
Part of their daily activity should be running their code on slower more mainstream hardware that the expected clients will be using. This will show the real world performance and allow them to focus on improving bottlenecks. Or even running within a VM with limited resources can aid in this review.
Faster hardware shouldn't be an excuse for creating slow sloppy code, however it is.
My machine is getting slower and clunkier every day. I attribute most of the slowdown to running antivirus. When I want to speed up, I find that disabling the antivirus works wonders, although I am apprehensive like being in a seedy brothel.
I think that Wirth's law was largely caused by Moore's law - if your code ran slow, you'd just disregard since soon enough, it would run fast enough anyway. Performance didn't matter.
Now that Moore's law has changed direction (more cores rather than faster CPUs), computers don't actually get much faster, so I'd expect performance to become a more important factor in software development (until a really good concurrent programming paradigm hits the mainstream, anyway). There's a limit to how slow software can be while still being useful, y'know.
Yes, software nowadays may be slower or faster, but you're not comparing like with like. The software now has so much more capability, and a lot more is expected of it.
Let's take PowerPoint as an example. If I created a slideshow with PowerPoint from the early nineties, I could fairly easily have a slideshow with pretty colours, nice text, etc. Now, it's a slideshow with moving graphics, fancy transitions, nice images.
The point is, yes, software is slower, but it does more.
The same holds true of the people who use the software. Back in the 70s, to create a presentation you had to create your own transparencies, maybe even using a pen :-). Now, if you did the same thing, you'd be laughed out of the room. It takes the same time, but the quality is higher.
This (in my opinion) is why computers don't give you gains in productivity: you spend the same amount of time doing 'the job', but if you use today's software, your results look more professional; you gain in quality of work.
Skizz and Dazmogan have it right.
On the one hand, when programmers try to make their software take as few cycles as possible, they succeed, and it is blindingly fast.
On the other hand, when they don't, which is most of the time, their interest in "Galloping Generality" uses up every available cycle and then some.
I do a lot of performance tuning. (My method of choice is random halting.) In nearly every case, the reason for the slowness is over-design of classes and data structures.
Oddly enough, the reason usually given for excessively event-driven designs and redundant data structures is "efficiency".
As Bompuis says, we build upon many layers of abstraction. That is exactly the problem.
Yes it holds true. You have given some prominent examples to counter the thesis, but bear in mind that these examples are developed by a big community of quite knowledgeable people, who are more or less aware of good practices in programming.
People working with the kernel are aware of different CPU's architectures, multicore issues, cache lines, etc. There is an interesting ongoing discussion about inclusion of hardware performance counters support in the mainline kernel. It is interesting from the 'political' point of view, as there is a conflict between the kernel people and people having much experience in performance monitoring.
People developing Firefox understand that the browser should be "lightweight" and fast in order to be popular. And to some extent they manage to do a good job.
New versions of software are supposed to be run on faster hardware in order to deliver the same user experience. But is the price justified? How can we assess whether the functionality was added in an efficient way?
But coming back to the main subject, many of the people after finishing their studies are not aware of the issues related to performance, concurrency (or even worse, they do not care). For quite a long time Moore law was providing a stable performance boost. Thus people wrote mediocre code and nobody even noticed that there was something wrong with inefficient algorithms, data-structures or more low-level things.
Then some limitations came into play (thermal efficiency for example) and it is no longer possible to get 'easy' speed for few bucks. People who just depend on hardware performance improvements might get a cold shower. On the other hand, people who have in-depth knowledge of algorithms, data structures, concurrency issues (quite difficult to recruit these...) will continue to write good applications and their value on the job market will increase.
The Wirth law should not only be interpreted literally, it is also about poor code bloat, violating the keep-it-simple-stupid rule and people who waste the opportunity to use the 'faster' hardware.
Also if you happen to work in the area of HPC then these issues become quite obvious.
In some cases it is not true: the frame rate of games and the display/playing of multimedia content is far superior today than it was even a few years ago.
In several aggravatingly common cases, the law holds very, very true. When opening the "My Computer" window in Vista to see your drives and devices takes 10-15 seconds, it feels like we are going backward. I really don't want to start any controversy here but it was that as well as the huge difference in time needed to open Photoshop that drove me off of the Windows platform and on to the Mac. The point is that this slowdown in common tasks is serious enough to make me jump way out of my former comfort zone to get away from it.
I can't find the sense in it. Why is this sentence a law?
You can never compare software and hardware directly; they are too different.
Hardware is physical material and software is written code.
The connection is only that software has to control the performance of the hardware. After an executed step in hardware, the software needs a completion signal so the next software instruction can be issued.
Why should I slow down software? We always try to make it faster!
Making hardware faster involves a lot of real physical work (changing printed boards or even physical parts of a computer).
It may make sense if Wirth means doing this within one computer (one software-and-hardware system).
To get higher hardware speed it's necessary to know how the hardware works, the number of parallel inputs or outputs at one moment, and the frequency of possible switches per second. Last but not least, it's important that the different hardware boards run at the same frequency, or at frequencies related by an integer factor.
So perhaps the software may slow down automatically, very easily, if you change something in the hardware. Wirth was thinking much more about hardware; he is one of the great inventors of the computer era in the German-speaking world.
The other direction is not easy. You have to know the system software of a computer very exactly to make the hardware faster by changing the software (system software, machine programs) of a computer. And if you use more layers, you have almost no direct influence on the speed of the hardware.
Perhaps this may be the explanation of Wirth's law-thinking... I got it!

What's the difference between Managed/Byte Code and Unmanaged/Native Code?

Sometimes it's difficult to describe some of the things that "us programmers" may think are simple to non-programmers and management types.
So...
How would you describe the difference between Managed Code (or Java Byte Code) and Unmanaged/Native Code to a Non-Programmer?
Managed Code == "Mansion House with an entire staff of Butlers, Maids, Cooks & Gardeners to keep the place nice"
Unmanaged Code == "Where I used to live in University"
Think of your desk: if you clean it up regularly, there's space to set what you're actually working on in front of you. If you don't clean it up, you run out of space.
That space is equivalent to computer resources like RAM, hard disk, etc.
Managed code lets the system automatically choose when and what to clean up. Unmanaged code makes the process "manual" - the programmer needs to tell the system when and what to clean up.
I'm astonished by what emerges from this discussion (well, not really but rhetorically). Let me add something, even if I'm late.
Virtual Machines (VMs) and Garbage Collection (GC) are decades old and two separate concepts. Garbage-collected, native-code-compiled languages have existed, also for decades (canonical example: ANSI Common Lisp; there is even at least one compile-time garbage-collected declarative language, Mercury - but apparently the masses scream at Prolog-like languages).
Suddenly GCed byte-code based VMs are a panacea for all IT diseases. Sandboxing of existing binaries (other examples here, here and here)? Principle of least authority (POLA)/capabilities-based security? Slim binaries (or their modern variant SafeTSA)? Region inference? No, sir: Microsoft & Sun do not authorize us to even think about such perversions. No, better to rewrite our entire software stack for this wonderful(???) new(???) language§/API. As one of our hosts says, it's Fire and Motion all over again.
§ Don't be silly: I know that C# is not the only language that targets .Net/Mono; it's hyperbole.
Edit: it is particularly instructive to look at comments to this answer by S.Lott in the light of alternative techniques for memory management/safety/code mobility that I pointed out.
My point is that non-technical people don't need to be bothered with technicalities at this level of detail.
On the other hand, if they are impressed by Microsoft/Sun marketing, it is necessary to explain to them that they are being fooled - GCed byte-code based VMs are not the novelty they are claimed to be, they don't magically solve every IT problem, and alternatives to these implementation techniques exist (some are better).
Edit 2: Garbage collection is a memory management technique and, like every implementation technique, it needs to be understood to be used correctly. Look at how, at ITA Software, they bypass the GC to obtain good performance:
4 - Because we have about 2 gigs of static data we need rapid access to, we use C++ code to memory-map huge files containing pointerless C structs (of flights, fares, etc), and then access these from Common Lisp using foreign data accesses. A struct field access compiles into two or three instructions, so there's not really any performance penalty for accessing C rather than Lisp objects. By doing this, we keep the Lisp garbage collector from seeing the data (to Lisp, each pointer to a C object is just a fixnum, though we do often temporarily wrap these pointers in Lisp objects to improve debuggability). Our Lisp images are therefore only about 250 megs of "working" data structures and code.
...
9 - We can do 10 seconds of Lisp computation on a 800mhz box and cons less than 5k of data. This is because we pre-allocate all data structures we need and die on queries that exceed them. This may make many Lisp programmers cringe, but with a 250 meg image and real-time constraints, we can't afford to generate garbage. For example, rather than using cons, we use "cons!", which grabs cells from an array of 10,000,000 cells we've preallocated and which gets reset every query.
Edit 3: (to avoid misunderstanding) is GC better than fiddling directly with pointers? Most of the time, certainly, but there are alternatives to both. Is there a need to bother users with these details? I don't see any evidence that this is the case, besides dispelling some marketing hype when necessary.
I'm pretty sure the basic interpretation is:
Managed = resource cleanup managed by runtime (i.e. Garbage Collection)
Unmanaged = clean up after yourself (i.e. malloc & free)
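A tiny C++ illustration of the unmanaged side of that contrast (the managed equivalent would simply drop the delete, since the garbage collector reclaims the buffer once nothing references it):

    // Unmanaged: the programmer owns every allocation and must release it.
    #include <cstring>

    void unmanagedExample()
    {
        char* buffer = new char[1024];   // ask for memory explicitly
        std::memset(buffer, 0, 1024);    // ... use it ...
        delete[] buffer;                 // forget this line and the memory leaks
    }
    // In managed code (C#/Java), the runtime tracks the buffer and frees it
    // automatically once it is no longer reachable; there is no delete to forget.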
Perhaps compare it with investing in the stock market.
You can buy and sell shares yourself, trying to become an expert in what will give the best risk/reward - or you can invest in a fund which is managed by an "expert" who will do it for you - at the cost of you losing some control, and possibly some commission. (Admittedly I'm more of a fan of tracker funds, and the stock market "experts" haven't exactly done brilliantly recently, but....)
Here's my Answer:
Managed (.NET) or Byte Code (Java) will save you time and money.
Now let's compare the two:
Unmanaged or Native Code
You need to do your own resource (RAM/memory) allocation and cleanup. If you forget something, you end up with what's called a "memory leak" that can crash the computer. A memory leak is a term for when an application starts using up (eating up) RAM/memory but not letting it go so the computer can use it for other applications; eventually this causes the computer to crash.
In order to run your application on different operating systems (Mac OS X, Windows, etc.) you need to compile your code specifically for each operating system, and possibly change a lot of code that is operating-system specific so it works on each one.
.NET Managed Code or Java Byte Code
All the resource (RAM / Memory) allocation and cleanup are done for you and the risk of creating "Memory Leaks" is reduced to a minimum. This allows more time to code features instead of spending it on resource management.
In order to run your application on different operating systems (Mac OS X, Windows, etc.) you just compile once, and it'll run on each as long as they support the given framework your app runs on top of (.NET Framework/Mono or Java).
In Short
Developing using the .NET Framework (managed code) or Java (byte code) makes it overall cheaper to build an application that can target multiple operating systems with ease, and allows more time to be spent building rich features instead of on the mundane tasks of memory/resource management.
Also, before anyone points out that the .NET Framework doesn't support multiple operating systems, I need to point out that technically Windows 98, WinXP 32-bit, WinXP 64-bit, WinVista 32-bit, WinVista 64-bit and Windows Server are all different Operating Systems, but the same .NET app will run on each. And, there is also the Mono Project that brings .NET to Linux and Mac OSX.
Unmanaged code is a list of instructions for the computer to follow.
Managed code is a list of tasks for the computer to follow that the computer is free to interpret on its own as to how to accomplish them.
The big difference is memory management. With native code, you have to manage memory yourself. This can be difficult and is the cause of a lot of bugs and lot of development time spent tracking down those bugs. With managed code, you still have problems, but a lot less of them and they're easier to track down. This normally means less buggy software, and less development time.
There are other differences, but memory management is probably the biggest.
If they were still interested I might mention how a lot of exploits come from buffer overruns and that you don't get those with managed code, or that code reuse is now easy, or that we no longer have to deal with COM (if you're lucky, anyway). I'd probably stay away from COM, otherwise I'd launch into a tirade over how awful it is.
It's like the difference between playing pool with and without bumpers along the edges. Unless you and all the other players always make perfect shots, you need something to keep the balls on the table. (Ignore intentional ricochets...)
Or soccer with walls instead of sidelines and end lines, or baseball without a backstop, or hockey without a net behind the goal, or NASCAR without barriers, or football without helmets...
"The specific term managed code is particularly pervasive in the Microsoft world."
Since I work in MacOS and Linux world, it's not a term I use or encounter.
The Brad Abrams "What is Managed Code" blog post has a definition that say things like ".NET Framework Common Language Runtime".
My point is this: it may not be appropriate to explain the term at all. If it's a bug, hack or work-around, it's not very important. Certainly not important enough to work up a sophisticated layperson's description. It may vanish with the next release of some batch of MS products.
