At my previous employer we used a third-party component which was basically just a DLL and a header file. That particular module handled printing in Win32. However, the company that made the component went bankrupt, so I couldn't report a bug I'd found.
So I decided to fix the bug myself and launched the debugger. I was surprised to find anti-debugging code almost everywhere, the usual IsDebuggerPresent, but the thing that caught my attention was this:
; some twiddling with xor
; and data, result in eax
jmp eax
mov eax, 0x310fac09
; rest of code here
At first glance I just stepped over the routine, which was called twice, and then things just went bananas. After a while I realized that the bit-twiddling result was always the same, i.e. the jmp eax always jumped right into the mov eax, 0x310fac09 instruction.
I dissected the bytes and there it was, 0f31, the rdtsc instruction which was used to measure the time spent between some calls in the DLL.
So my question to SO is: What is your favourite anti-debugging trick?
My favorite trick is to write a simple instruction emulator for an obscure microprocessor.
The copy protection and some of the core functionality will then compiled for the microprocessor (GCC is a great help here) and linked into the program as a binary blob.
The idea behind this is, that the copy protection does not exist in ordinary x86 code and as such cannot be disassembled. You cannot remove the entire emulator either because this would remove core functionality from the program.
The only chance to hack the program is to reverse engineer what the microprocessor emulator does.
I used MIPS32 for emulation because it was so easy to emulate (it took just 500 lines of simple C code). To make things even more obscure, I didn't use the raw MIPS32 opcodes. Instead, each opcode was XOR'ed with its own address.
The binary of the copy protection looked like garbage data.
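To make the idea concrete, here is a minimal sketch (not the original project's code; blob, pc and the function name are invented) of how the fetch stage of such an emulator might undo the per-address XOR at runtime:

#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: every 32-bit opcode in the blob was XOR'ed with its own
 * offset at build time, so the emulator undoes that at fetch time. */
static uint32_t fetch_opcode(const uint8_t *blob, uint32_t pc)
{
    uint32_t word;
    memcpy(&word, blob + pc, sizeof word);  /* read the obfuscated word */
    return word ^ pc;                       /* recover the real MIPS32 opcode */
}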
Highly recommended! It took more than six months before a crack came out (it was for a game project).
I've been a member of many RCE communities and have had my fair share of hacking & cracking. From my time I've realized that such flimsy tricks are usually volatile and rather futile. Most of the generic anti-debugging tricks are OS specific and not 'portable' at all.
In the aforementioned example, you're presumably using inline assembly and a naked function (__declspec(naked)), neither of which is supported by MSVC when compiling for the x64 architecture. There are of course still ways to implement the aforementioned trick, but anybody who has been reversing for long enough will be able to spot and defeat it in a matter of minutes.
So generally I'd suggest against using anti-debugging tricks outside of utilizing the IsDebuggerPresent API for detection. Instead, I'd suggest you code a stub and/or a virtual machine. I coded my own virtual machine and have been improving on it for many years now and I can honestly say that it has been by far the best decision I've made in regards to protecting my code so far.
Spin off a child process that attaches to the parent as a debugger and modifies key variables. Bonus points for keeping the child process resident and using the debugger memory operations as a kind of IPC for certain key operations.
On my system, you can't attach two debuggers to the same process.
The nice thing about this one is that unless they try to tamper with things, nothing breaks.
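For the Win32 case the question is about, a hypothetical sketch of the child side might look like this; DebugActiveProcess, WaitForDebugEvent and ContinueDebugEvent are standard Win32 APIs, everything else here is invented for illustration:

#include <windows.h>
#include <stdlib.h>

/* The child attaches to the parent as a debugger (blocking any other debugger)
 * and then sits in the debug loop, where it could patch parent memory with
 * ReadProcessMemory/WriteProcessMemory as a crude IPC channel. */
int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;

    DWORD parent_pid = (DWORD)atoi(argv[1]);   /* parent passes its PID on the command line */

    if (!DebugActiveProcess(parent_pid))       /* fails if a debugger is already attached */
        return 1;

    DEBUG_EVENT ev;
    while (WaitForDebugEvent(&ev, INFINITE)) {
        if (ev.dwDebugEventCode == EXIT_PROCESS_DEBUG_EVENT)
            break;                             /* parent went away, we're done */
        /* ...inspect/patch key variables in the parent here... */
        ContinueDebugEvent(ev.dwProcessId, ev.dwThreadId, DBG_CONTINUE);
    }
    return 0;
}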
Reference uninitialized memory! (And other black magic/voodoo...)
This is a very cool read:
http://spareclockcycles.org/2012/02/14/stack-necromancy-defeating-debuggers-by-raising-the-dead/
The most modern obfuscation method seems to be the virtual machine.
You basically take some part of your object code, and convert it to your own bytecode format. Then you add a small virtual machine to run this code. Only way to properly debug this code will be to code an emulator or disassembler for your VM's instruction format. Of course you need to think of performance too. Too much bytecode will make your program run slower than native code.
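As a toy illustration of the dispatch idea (the opcodes and instruction set below are made up, not a real protection format):

#include <stdint.h>
#include <stddef.h>

/* A tiny made-up bytecode format plus its dispatch loop. A real protection
 * scheme would use a richer (and obfuscated) instruction format. */
enum { OP_PUSH, OP_ADD, OP_HALT };

static int run(const uint8_t *code, size_t len)
{
    int stack[64];
    int sp = 0;

    for (size_t pc = 0; pc < len; ) {
        switch (code[pc++]) {
        case OP_PUSH: stack[sp++] = (int8_t)code[pc++]; break;  /* push immediate */
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;  /* pop two, push sum */
        case OP_HALT: return stack[sp - 1];                     /* result on top of stack */
        }
    }
    return 0;
}

/* run((const uint8_t[]){OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT}, 6) == 5 */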
Most old tricks are useless now:
IsDebuggerPresent: very lame and easy to patch
Other debugger/breakpoint detections
Ring0 stuff: users don't like to install drivers, and you might actually break something on their system, etc.
Other trivial stuff that everybody knows, or that makes your software unstable. Remember that even if a crack makes your program unstable, as long as it still works the instability will be blamed on you.
If you really want to code the VM solution yourself (there are good programs for sale), don't use just one instruction format. Make it polymorphic, so that different parts of the code use different formats. That way, all of your code can't be broken by writing just one emulator/disassembler. For example, the MIPS solution some people offered seems easy to break because the MIPS instruction format is well documented and analysis tools like IDA can already disassemble the code.
List of instruction formats supported by IDA pro disassembler
I would prefer that people write software that is solid, reliable and does what it is advertised to do. That they also sell it for a reasonable price with a reasonable license.
I know that I have wasted way too much time dealing with vendors that have complicated licensing schemes that only cause problems for the customers and the vendors. It is always my recommendation to avoid those vendors. Working at a nuclear power plant we are forced to use certain vendors products and thus are forced to have to deal with their licensing schemes. I wish there was a way to get back the time that I have personally wasted dealing with their failed attempts to give us a working licensed product. It seems like a small thing to ask, but yet it seems to be a difficult thing for people that get too tricky for their own good.
I second the virtual machine suggestion. I implemented a MIPS I simulator that (now) can execute binaries generated with mipsel-elf-gcc. Add to that code/data encryption capabilities (AES or with any other algorithm of your choice), the ability of self-simulation (so you can have nested simulators) and you have a pretty good code obfuscator.
The nice features of choosing MIPS I are that 1) it's easy to implement, and 2) I can write code in C, debug it on my desktop, and just cross-compile it for MIPS when it's done. No need to debug custom opcodes or manually write code for a custom VM.
My personal favourite was on the Amiga, where there is a coprocessor (the Blitter) doing large data transfers independently of the processor; this chip would be instructed to clear all memory, and was reset from a timer IRQ.
When you attached an Action Replay cartridge, stopping the CPU would mean that the Blitter would continue clearing the memory.
Calculated jumps into the middle of an instruction that looks legitimate but really hides an actual instruction are my favorite. They are pretty easy for humans to spot, but automated tools often mess them up.
Also replacing a return address on the stack makes a good time waster.
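A minimal sketch of the first trick for GCC on x86/x86-64 (the function name is arbitrary): the jump skips a junk byte, but a linear-sweep disassembler decodes that byte as the start of a CALL and misreads everything after it:

/* The real control flow jumps over the 0xE8 byte; a naive disassembler treats
 * it as CALL rel32 and swallows the next four bytes as its operand. */
static void confuse_linear_disassembly(void)
{
    __asm__ volatile (
        "jmp 1f\n\t"        /* real control flow skips the junk byte */
        ".byte 0xE8\n\t"    /* 0xE8 = CALL rel32 to a naive tool     */
        "1:\n\t"
        "nop\n\t"
    );
}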
Using nop to remove assembly via the debugger is a useful trick. Of course, putting the code back is a lot harder!!!
I have a question concerning computer programming. Let's say I have only one computer with no OS running, and I would like to start to "develop" an OS. Basically, what I have is a blank sheet and a pen to do so, and a couple of electronic devices. How do I put my instructions into that computer?
Today we use interpreters or compilers that "turn" a programming language into what they call "machine code". But my question is how to generate machine code from nothing.
Thank you for your replies; a link to learn how to do that would be most welcome.
The first computers were programmed by writing the "machine code" directly, just punching ones and zeros into cards (well, in fact they punched octal digits).
It was done that way until somebody thought it would be a nice idea to have an assembler, which translated the machine instructions into those ones and zeros.
After that, another guy thought it would be a very nice idea to have a programming language, which would translate "top-level" instructions into machine code.
And after that, or probably at the same time, some "internal procedures" were created to ease programming: to open or close a file, the only thing you had to do was call an internal subroutine in the machine instead of programming the open-file and close-file routines yourself. The seed of the operating system was planted.
The cross-compiling approach mentioned here is the way to create an operating system for a new computer nowadays: you use a working computer as a "lever" to build an operating system for the new one.
It depends on how far back you want to go. On the earliest machines, "programming" was moving wires from one essentially analog ALU to another.
The women doing the programming at this point were called computers and used pencil and paper.
Later, you used pencil and paper plus the datasheet/documentation for the instruction set and assembled by hand, basically. There were no compilers, or even the concept of a programming language, at this point; that still had to evolve. You wrote down the ones and zeros in whatever form you preferred (binary or octal).
One way to enter code at this point was with switches. Certainly computers predated them, but look for a picture of the front panel of a PDP-8 or the Altair, etc. You set the switches for the data value and address, and you manually strobe a write. You load the bootstrap this way, and/or the whole program; then set the start address and switch to run mode.
Over time they developed card and tape readers: you loaded the bootstrap by hand (switches), then you could use a reader to load larger programs more easily. Cards could be punched on a typewriter-type thing, literally a keyboard, but instead of striking through a ribbon onto paper, it cut slots in a card.
OSes and programming languages started to evolve at this point. Until you bootstrapped your compiler, you had to write the first compiler for a new language in some other language (no different than today). So the first assembler had to be written in machine code; then from assembler you could create some other language, and so on.
If you wanted to repeat something like this today, you would have to build a computer with some sort of manual input. You certainly could, but you would have to design it that way (and sort out switch debouncing). You could, for example, have a processor with an external flash, be it parallel or serial, mux the lines to the switches (a switch controls the mux) and enter address/data/write for your program, or for fun use a SPI flash and serially load the program into it. It's much better to just use one of the PDP or Altair online simulators to get a feel for the experience.
There is no magic here, and no chicken-and-egg problem at all. Humans had to do it by hand before the computer could do it. A smaller/simpler program had to generate more complicated programs, and so on. This long, slow evolution is well documented all over the internet and in books in libraries everywhere.
Computers are based on a physical processor designed to accept instructions (e.g. in assembly code) that only allow primitive operations like shift, move, copy and add. The processor defined how it spoke (e.g. how big the words were (8-bit) and other specs (speed, standards, etc.)). Using some type of storage (punch cards, disk), we could store the instructions and execute huge streams of them.
If instructions were repeated over and over, you could move to an address and execute what was at that location and create loops and other constructs (branches, context switches, recursion).
Since you would have peripherals, you would have some way to interface with them (draw, print dots), and you could create routines that build on that to draw letters, fonts, boxes and lines. Then you could run a subroutine to print the letter 'a' on screen.
An OS is basically a high-level implementation of all those lower level instructions. It is really a collection of all the instructions to interface with different areas (i/o, computations etc). Unix is a perfect example of different folks working on different areas and plugging them all into a single OS.
Does anyone have any suggestions for assembly file analysis tools? I'm attempting to analyze ARM/Thumb-2 ASM files generated by LLVM (or alternatively GCC) when passed the -S option. I'm particularly interested in instruction statistics at the basic block level, e.g. memory operation counts, etc. I may wind up rolling my own tool in Python, but was curious to see if there were any existing tools before I started.
Update: I've done a little searching, and found a good resource for disassembly tools / hex editors / etc here, but unfortunately it is mainly focused on x86 assembly, and also doesn't include any actual assembly file analyzers.
What you need is a tool for which you can define an assembly language syntax, and then build custom analyzers. Your analyzers might be simple ("how much space does an instruction take?") or complex ("How many cycles will this instruction take to execute?" [which depends on the preceding sequence of instructions and possibly a sophisticated model of the processor you care about]).
One designed specifically to do that is the New Jersey Machine-Code Toolkit. It is really designed to build code generators and debuggers. I suspect it would be good at "instruction byte count". It isn't clear it is good at more sophisticated analyses. And I believe it insists you follow its syntax style, rather than yours.
One not designed specifically to do that, but good at parsing/analyzing languages in general, is our DMS Software Reengineering Toolkit.
DMS can be given a grammar description for virtually any context-free language (that covers most assembly language syntax) and can then parse a specific instance of that grammar (assembly code) into ASTs for further processing. We've done this with several assembly languages, including the IBM 370, Motorola's 8-bit CPU line, and a rather peculiar DSP, without trouble.
You can specify an attribute grammar (a computation over an AST) to DMS easily. These are a great way to encode analyses that need just local information, such as "How big is this instruction?". For more complex analyses, you'll need a processor model that is driven by a series of instructions; passing such a machine model the ASTs for individual instructions would be an easy way to compute more complex things such as "How long does this instruction take?".
Other analyses, such as control flow and data flow, are provided in generic form by DMS. You can use an attribute evaluator to collect local facts ("the control-flow successor of this instruction is...", "data from this instruction flows to...") and feed them to the flow analyzers to compute global flow facts ("if I execute this instruction, what other instructions might be executed downstream?").
You do have to configure DMS for your particular (assembly) language. It is designed to be configured for tasks like these.
Yes, you can likely code all this in Python; after all, it's Turing-complete. But likely not nearly as easily.
An additional benefit: DMS is willing to apply transformations to your code based on your analyses. So you could implement your optimizer with it, too. After all, you need to connect the analysis indicating that an optimization is safe to the actual optimization steps.
I have written many disassemblers, including ARM and Thumb; not production quality, but for the purposes of learning the assembler. For both ARM and Thumb, the ARM ARM (ARM Architecture Reference Manual) has a nice chart from which you can easily count up data operations versus load/store, etc. Maybe an hour's worth of work, maybe two. At least up front, you would end up with data values being counted, though.
The other poster may be right: with the chart I am talking about, it should be very simple to write a program that examines the ASCII, looking for ldr, str, add, etc. There's no need to parse everything if you are only interested in memory operation counts, etc. Of course, the downside is that you are likely not going to be able to account for loops. One function may have a load and a store; another may have a load and a store wrapped in a loop, causing many more memory operations once executed.
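As a rough sketch of that approach (the mnemonic list is deliberately partial and the heuristics are naive), something like this C program could tally load/store lines in a -S output file:

#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Naive sketch: count ARM/Thumb load/store mnemonics in a GCC/LLVM -S output
 * file by looking at the first token of each line. The mnemonic list and the
 * filtering are intentionally incomplete. */
int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s file.s\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "r");
    if (!f) { perror("fopen"); return 1; }

    char line[512];
    unsigned loads = 0, stores = 0;
    while (fgets(line, sizeof line, f)) {
        char *p = line;
        while (isspace((unsigned char)*p)) p++;              /* skip indentation */
        if (*p == '.' || *p == '@' || *p == '\0') continue;  /* directives, comments, blanks */
        if (!strncmp(p, "ldr", 3) || !strncmp(p, "ldm", 3) || !strncmp(p, "pop", 3))
            loads++;
        else if (!strncmp(p, "str", 3) || !strncmp(p, "stm", 3) || !strncmp(p, "push", 4))
            stores++;
    }
    printf("loads: %u  stores: %u\n", loads, stores);
    fclose(f);
    return 0;
}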
Not knowing what you are really interested in, my guess is you might want to simulate the code and count these sorts of things. I wrote a Thumb simulator (thumbulator) that attempts to do just that (and I have used it to compare LLVM execution vs. GCC execution when it comes to the number of instructions executed, fetches, memory operations, etc.). The problem may be that it is Thumb only: no ARM, no Thumb-2. Thumb-2 could be added more easily than ARM. There is an armulator from ARM, which is in the gdb sources among other places; I can't remember now if it executes Thumb-2. My understanding is that when ARM was still using it, it would accurately report these sorts of statistics.
You can plug your statistics into the LLVM code generator; it's quite flexible, and it already collects some stats, which could be used as an example.
So I just found out GCC could do inline assembly and I was wondering two things:
What's the benefit of being able to inline assembly?
Is it possible to use GCC as an assembly compiler/assembler to learn assembly?
I've found a couple articles but they are all oldish, 2000 and 2001, not really sure of their relevance.
Thanks
The benefit of inline assembly is to have the assembly code, inlined (wait wait, don't kill me). By doing this, you don't have to worry about calling conventions, and you have much more control over the final object file (meaning you can decide where each variable goes: to which register, or whether it's stored in memory), because that code won't be optimized (assuming you use the volatile keyword).
Regarding your second question, yes, it's possible. What you can do is write simple C programs, and then translate them to assembly, using
gcc -S source.c
With this, and the architecture manuals (MIPS, Intel, etc) as well as the GCC manual, you can go a long way.
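For instance, a trivial source file to try this on (the file name is arbitrary):

/* add.c -- a trivial function whose generated assembly is easy to follow.
 * Compile with:  gcc -S -O2 add.c   (writes add.s for your target machine) */
int add(int a, int b)
{
    return a + b;
}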
There's some material online.
http://www.ibiblio.org/gferg/ldp/GCC-Inline-Assembly-HOWTO.html
http://gcc.gnu.org/onlinedocs/gcc-4.4.2/gcc/
The downside of inline assembly is that usually your code will not be portable between different compilers.
Hope it helps.
Inline Assembly is useful for in-place optimizations, and access to CPU features not exposed by any libraries or the operating system.
For example, some applications need strict tracking of timing. On x86 systems, the RDTSC instruction can be used to read the internal CPU timer.
Time Stamp Counter - Wikipedia
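A minimal sketch of reading it with GCC inline assembly on x86/x86-64 (shown only as an illustration of the technique):

#include <stdint.h>

/* RDTSC leaves the 64-bit time stamp counter in EDX:EAX. */
static inline uint64_t read_tsc(void)
{
    uint32_t lo, hi;
    __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}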
Using GCC or any C/C++ compiler with inline assembly is useful for small snippets of code, but many environments do not have good debugging support, which matters more when developing projects where inline assembly provides specific functionality. Also, portability becomes a recurring issue if you use inline assembly. It is preferable to create such pieces in a suitable environment (GNU assembler, MASM) and import them into projects as needed.
Inline assembly is generally used to access hardware features not otherwise exposed by the compiler (e.g. vector SIMD instructions where no intrinsics are provided), and/or for hand-optimizing performance critical sections of code where the compiler generates suboptimal code.
Certainly there is nothing to stop you using the inline assembler to test routines you have written in assembly language; however, if you intend to write large sections of code you are better off using a real assembler to avoid getting bogged down with irrelevancies. You will likely find the GNU assembler got installed along with the rest of the toolchain ;)
The benefit of embedding custom assembly code is that sometimes (dare I say, often times) a developer can write more efficient assembly code than a compiler can. So for extremely performance intensive items, custom written assembly might be beneficial. Games tend to come to mind....
As far as using it to learn assembly, I have no doubt that you could. But, I imagine that using an actual assembly SDK might be a better choice. Aside from the standard experimentation of learning how to use the language, you'd probably want the knowledge around setting up a development environment.
You should not learn assembly language by using the inline asm feature.
Regarding what it's good for, I agree with jldupont, mostly obfuscation. In theory, it allows you to easily integrate with the compiler, because the complex syntax of extended asm allows you to cooperate with the compiler on register usage, and it allows you to tell the compiler that you want this and that to be loaded from memory and placed in registers for you, and finally, it allows the compiler to be warned that you have clobbered this register or that one.
However, all of that could have been done by simply writing standard-conforming C code and then writing an assembler module, and calling the extension as a normal function. Perhaps ages ago the procedure call machine op was too slow to tolerate, but you won't notice today.
I believe the real answer is that it is easier, once you know the constraint DSL. People just throw in an asm and obfuscate the C program rather than go to the trouble of modifying the Makefile and adding a new module to the build and deploy workflow.
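For reference, a small illustrative example of that constraint DSL (GCC extended asm, x86 AT&T syntax); the function itself is pointless and only shows outputs, inputs and clobbers:

/* The constraint strings let the compiler choose registers and tell it what
 * the snippet clobbers. */
static int add_via_asm(int a, int b)
{
    int result;
    __asm__ ("addl %2, %0"
             : "=r"(result)     /* output: any general-purpose register        */
             : "0"(a), "r"(b)   /* inputs: %0 starts out holding a, %2 holds b */
             : "cc");           /* the addition modifies the condition codes   */
    return result;
}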
This isn't really an answer, but kind of an extended comment on other peoples' answers.
Inline assembly is still used to access CPU features. For instance, in the ARM chips used in cell phones, different manufacturers distinguish their offerings via special features that require unusual machine language instructions that would have no equivalent in C/C++.
Back in the 80s and early 90s, I used inline assembly a lot for optimizing loops. For instance, C compilers targeting 680x0 processors back then would do really stupid things, like:
calculate a value and put it in data register D1
PUSH D1, A7 # Put the value from D1 onto the stack in RAM
POP D1, A7 # Pop it back off again
do something else with the value in D1
But I haven't needed to do that in, oh, probably fifteen years, because modern compilers are much smarter. In fact, current compilers will sometimes generate more efficient code than most humans would. Especially given CPUs with long pipelines, branch prediction, and so on, the fastest-executing sequence of instructions is not always the one that would make most sense to a human. So you can say, "Do A B C D in that order", and the compiler will scramble the order all around for greater efficiency.
Playing a little with inline assembly is fine for starters, but if you're serious, I echo those who suggest you move to a "real" assembler after a while.
Manual optimization of loops that are executed a lot. This article is old, but can give you an idea about the kinds of optimizations hand-coded assembly is used for.
You can also use the assembler gcc uses directly. It's called as (see man as). However, many books and articles on assembly assume you are using a DOS or Windows environment. So it might be kind of hard to learn on Linux (maybe running FreeDOS on a virtual machine), because you not only need to know the processor you are coding for (you can usually download the official manuals) but also how to hook into the OS you are running.
A nice beginner book using DOS is the one by Norton and Socha. It's pretty old (the 3rd and latest edition is from 1992), so you can get used copies for like $0.01 (no joke). The only book I know of that is specific to Linux is the free "Programming from the Ground Up"
Following my previous question regarding the rationale behind extremely long functions, I would like to present a specific question regarding a piece of code I am studying for my research. It's a function from the Linux kernel which is quite long (412 lines) and complicated (an MCC index of 133). Basically, it's a long and nested switch statement.
Frankly, I can't think of any way to improve this mess. A dispatch table seems both huge and inefficient, and any subroutine call would require an inconceivable number of arguments in order to cover a large-enough segment of code.
Do you think of any way this function can be rewritten in a more readable way, without losing efficiency? If not, does the code seem readable to you?
Needless to say, any answer that will appear in my research will be given full credit - both here and in the submitted paper.
Link to the function in an online source browser
I don't think that function is a mess. I've had to write such a mess before.
That function is the translation into code of a table from a microprocessor manufacturer. It's very low-level stuff, copying the appropriate hardware registers for the particular interrupt or error reason. In this kind of code, you often can't touch registers which have not been filled in by the hardware - that can cause bus errors. This prevents the use of code that is more general (like copying all registers).
I did see what appeared to be some code duplication. However, at this level (operating at interrupt level), speed is more important. I wouldn't use Extract Method on the common code unless I knew that the extracted method would be inlined.
BTW, while you're in there (the kernel), be sure to capture the change history of this code. I have a suspicion that you'll find there have not been very many changes in here, since it's tied to hardware. The nature of the changes over time of this sort of code will be quite different from the nature of the changes experienced by most user-mode code.
This is the sort of thing that will change, for instance, when a new consolidated IO chip is implemented. In that case, the change is likely to be copy and paste and change the new copy, rather than to modify the existing code to accommodate the changed registers.
Utterly horrible, IMHO. The obvious first-order fix is to make each case in the switch a call to a function. And before anyone starts mumbling about efficiency, let me just say one word - "inlining".
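A sketch of what that refactoring might look like, with invented names and state (nothing here is taken from the actual kernel function):

/* Each case body becomes a small static function the compiler is free to
 * inline, so readability improves without adding call overhead. */
struct fpu_state { int status; };   /* hypothetical state          */
enum { EXCP_A, EXCP_B };            /* hypothetical case labels    */

static inline int handle_excp_a(struct fpu_state *s) { s->status = 1; return 0; }
static inline int handle_excp_b(struct fpu_state *s) { s->status = 2; return 0; }

static int dispatch(struct fpu_state *s, int code)
{
    switch (code) {
    case EXCP_A: return handle_excp_a(s);
    case EXCP_B: return handle_excp_b(s);
    default:     return -1;
    }
}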
Edit: Is this code part of the Linux FPU emulator by any chance? If so this is very old code that was a hack to get linux to work on Intel chips like the 386 which didn't have an FPU. If it is, it's probably not a suitable study for academics, except for historians!
There's a kind of regularity here; I suspect that for a domain expert this actually feels very coherent.
Also, having the variations in close proximity allows immediate visual inspection.
I don't see a need to refactor this code.
I'd start by defining constants for the various classes. Coming into this code cold, it's a mystery what the switching is for; if the switching was against named constants, I'd have a starting point.
Update: You can get rid of about 70 lines where the cases return MAJOR_0C_EXCP; simply let them fall through to the end of the routine. Since this is kernel code I'll mention that there might be some performance issues with that, particularly if the case order has already been optimized, but it would at least reduce the amount of code you need to deal with.
I don't know much about kernels or about how re-factoring them might work.
The main thing that comes to my mind is taking that switch statement and breaking each sub-step into a separate function with a name that describes what the section is doing. Basically, more descriptive names.
But I don't think this optimizes the function any further. It just breaks it into smaller functions, which might be helpful... I don't know.
That is my 2 cents.
Sometimes it's difficult to describe some of the things that "us programmers" may think are simple to non-programmers and management types.
So...
How would you describe the difference between Managed Code (or Java Byte Code) and Unmanaged/Native Code to a Non-Programmer?
Managed Code == "Mansion House with an entire staff or Butlers, Maids, Cooks & Gardeners to keep the place nice"
Unmanaged Code == "Where I used to live in University"
Think of your desk: if you clean it up regularly, there's space to put what you're actually working on in front of you. If you don't clean it up, you run out of space.
That space is equivalent to computer resources like RAM, Hard Disk, etc.
Managed code lets the system automatically choose when and what to clean up. Unmanaged code makes the process "manual", in that the programmer needs to tell the system when and what to clean up.
I'm astonished by what emerges from this discussion (well, not really but rhetorically). Let me add something, even if I'm late.
Virtual Machines (VMs) and Garbage Collection (GC) are decades old and are two separate concepts. Garbage-collected, native-code-compiled languages have also existed for decades (canonical example: ANSI Common Lisp; there is even at least one compile-time garbage-collected declarative language, Mercury, but apparently the masses scream at Prolog-like languages).
Suddenly, GC'ed bytecode-based VMs are a panacea for all IT diseases. Sandboxing of existing binaries (other examples here, here and here)? Principle of least authority (POLA)/capability-based security? Slim binaries (or their modern variant, SafeTSA)? Region inference? No, sir: Microsoft & Sun do not authorize us to even think about such perversions. No, better to rewrite our entire software stack for this wonderful(???) new(???) language§/API. As one of our hosts says, it's Fire and Motion all over again.
§ Don't be silly: I know that C# is not the only language that targets .NET/Mono; it's hyperbole.
Edit: it is particularly instructive to look at the comments to this answer by S. Lott in light of the alternative techniques for memory management/safety/code mobility that I pointed out.
My point is that non-technical people don't need to be bothered with technicalities at this level of detail.
On the other hand, if they are impressed by Microsoft/Sun marketing, it is necessary to explain to them that they are being fooled: GC'ed bytecode-based VMs are not the novelty they are claimed to be, they don't magically solve every IT problem, and alternatives to these implementation techniques exist (some are better).
Edit 2: Garbage collection is a memory management technique and, like every implementation technique, needs to be understood to be used correctly. Look at how, at ITA Software, they bypass the GC to obtain good performance:
4 - Because we have about 2 gigs of static data we need rapid access to, we use C++ code to memory-map huge files containing pointerless C structs (of flights, fares, etc), and then access these from Common Lisp using foreign data accesses. A struct field access compiles into two or three instructions, so there's not really any performance penalty for accessing C rather than Lisp objects. By doing this, we keep the Lisp garbage collector from seeing the data (to Lisp, each pointer to a C object is just a fixnum, though we do often temporarily wrap these pointers in Lisp objects to improve debuggability). Our Lisp images are therefore only about 250 megs of "working" data structures and code.
...
9 - We can do 10 seconds of Lisp computation on an 800 MHz box and cons less than 5k of data. This is because we pre-allocate all data structures we need and die on queries that exceed them. This may make many Lisp programmers cringe, but with a 250 meg image and real-time constraints, we can't afford to generate garbage. For example, rather than using cons, we use "cons!", which grabs cells from an array of 10,000,000 cells we've preallocated and which gets reset every query.
Edit 3: (to avoid misunderstanding) is GC better than fiddling directly with pointers? Most of the time, certainly, but there are alternatives to both. Is there a need to bother users with these details? I don't see any evidence that this is the case, besides dispelling some marketing hype when necessary.
I'm pretty sure the basic interpretation is:
Managed = resource cleanup managed by runtime (i.e. Garbage Collection)
Unmanaged = clean up after yourself (i.e. malloc & free)
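If a code-flavoured illustration helps, here is a minimal C sketch of the unmanaged side; the function and sizes are arbitrary:

#include <stdlib.h>

/* Unmanaged, in one picture: you ask for memory and must remember to give it
 * back yourself. In a managed runtime the "free" step happens for you. */
void unmanaged_example(void)
{
    int *values = malloc(100 * sizeof *values);  /* "clean up after yourself" starts here */
    if (values == NULL)
        return;
    /* ... use values ... */
    free(values);  /* forget this and the memory leaks until the process exits */
}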
Perhaps compare it with investing in the stock market.
You can buy and sell shares yourself, trying to become an expert in what will give the best risk/reward - or you can invest in a fund which is managed by an "expert" who will do it for you - at the cost of losing some control, and possibly some commission. (Admittedly I'm more of a fan of tracker funds, and the stock market "experts" haven't exactly done brilliantly recently, but....)
Here's my Answer:
Managed (.NET) or Byte Code (Java) will save you time and money.
Now let's compare the two:
Unmanaged or Native Code
You need to do your own resource (RAM/memory) allocation and cleanup. If you forget something, you end up with what's called a "memory leak" that can crash the computer. A memory leak is when an application keeps using up (eating up) RAM/memory without letting it go so the computer can use it for other applications; eventually this can crash the machine.
In order to run your application on different operating systems (Mac OS X, Windows, etc.), you need to compile your code specifically for each operating system, and possibly change a lot of OS-specific code so it works on each one.
.NET Managed Code or Java Byte Code
All the resource (RAM / Memory) allocation and cleanup are done for you and the risk of creating "Memory Leaks" is reduced to a minimum. This allows more time to code features instead of spending it on resource management.
In order to run your application on different operating systems (Mac OS X, Windows, etc.), you just compile once, and it'll run on each as long as they support the framework your app runs on top of (.NET Framework/Mono, or Java).
In Short
Developing with the .NET Framework (managed code) or Java (byte code) makes it cheaper overall to build an application that can target multiple operating systems with ease, and allows more time to be spent building rich features instead of on the mundane tasks of memory/resource management.
Also, before anyone points out that the .NET Framework doesn't support multiple operating systems, I need to point out that technically Windows 98, WinXP 32-bit, WinXP 64-bit, WinVista 32-bit, WinVista 64-bit and Windows Server are all different Operating Systems, but the same .NET app will run on each. And, there is also the Mono Project that brings .NET to Linux and Mac OSX.
Unmanaged code is a list of instructions for the computer to follow.
Managed code is a list of tasks for the computer to follow that the computer is free to decide on its own how to accomplish.
The big difference is memory management. With native code, you have to manage memory yourself. This can be difficult and is the cause of a lot of bugs and a lot of development time spent tracking down those bugs. With managed code, you still have problems, but a lot fewer of them, and they're easier to track down. This normally means less buggy software and less development time.
There are other differences, but memory management is probably the biggest.
If they were still interested I might mention how a lot of exploits come from buffer overruns and that you don't get those with managed code, or that code reuse is now easy, or that we no longer have to deal with COM (if you're lucky, anyway). I'd probably stay away from COM; otherwise I'd launch into a tirade over how awful it is.
It's like the difference between playing pool with and without bumpers along the edges. Unless you and all the other players always make perfect shots, you need something to keep the balls on the table. (Ignore intentional ricochets...)
(Or compare it to soccer with walls instead of sidelines and end lines, baseball without a backstop, hockey without a net behind the goal, NASCAR without barriers, or football without helmets...)
"The specific term managed code is particularly pervasive in the Microsoft world."
Since I work in MacOS and Linux world, it's not a term I use or encounter.
The Brad Abrams "What is Managed Code" blog post has a definition that says things like ".NET Framework Common Language Runtime".
My point is this: it may not be appropriate to explain the term at all. If it's a bug, hack or workaround, it's not very important; certainly not important enough to work up a sophisticated layperson's description. It may vanish with the next release of some batch of MS products.