Emphasize managed or unmanaged in a mixed C++ .NET app?

The app on which I work is a WinForms app written almost entirely in Visual C++ circa 2003. .NET was selected, prior to my arrival on the scene, because of the UI building framework, but the vast majority of code was developed in unmanaged land. Part of that was absolutely necessary--we do some real-time image processing on some very large data sets, use some Intel image processing libraries that need pointers to the image buffers, and we really are in that <1% of cases where performance is that critical.
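To illustrate what "need pointers to the image buffers" means in practice, here is a minimal C# sketch of handing a managed buffer to a native routine; NativeImageLib.dll and ProcessBuffer are hypothetical stand-ins, not our actual code:

using System;
using System.Runtime.InteropServices;

static class ImageInterop
{
    // Hypothetical import standing in for a native (e.g., Intel IPP) routine;
    // the real signature would come from the library's headers.
    [DllImport("NativeImageLib.dll")]
    static extern void ProcessBuffer(IntPtr buffer, int length);

    public static void Process(byte[] image)
    {
        // Pin the managed array so the GC cannot move it while native code holds the pointer.
        GCHandle pin = GCHandle.Alloc(image, GCHandleType.Pinned);
        try
        {
            ProcessBuffer(pin.AddrOfPinnedObject(), image.Length);
        }
        finally
        {
            pin.Free();
        }
    }
}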
The app itself is a large executable formed by linking the UI code to several static libraries, each of which corresponds to a functional subsystem: data acquisition, image processing, things like that. Since I joined, I have split a couple of these subsystems into DLLs by writing managed wrappers, which we reuse in other apps, but the main app is still substantially composed of statically-linked libraries.
My colleague and I differ substantially on whether further development should emphasize unmanaged or managed code. With the exception of the case I have mentioned, there are no performance requirements dictating unmanaged code. We are solidly committed to .NET, so cross-platform is not an issue. I am of the opinion that we should favor managed unless dictated otherwise.
Last month, my colleague developed a set of classes that manage a subsystem; rather than implementing them as ref classes and exposing some events to .NET clients, he wrote a couple of implementations of Observer, using gcroot to hold handles to managed clients, and allowed himself to stay in unmanaged land. This seems wrong to me, simply because why write something that you can have for free? But I am wondering if I am being too rigid.
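To make "for free" concrete, here is a minimal sketch of the alternative I had in mind, written in C# with hypothetical names (the C++/CLI version has the same shape: a ref class exposing an event):

using System;

// Hypothetical names; the point is that .NET events give you Observer for free.
public class FrameReadyEventArgs : EventArgs
{
    private readonly int frameNumber;
    public FrameReadyEventArgs(int frameNumber) { this.frameNumber = frameNumber; }
    public int FrameNumber { get { return frameNumber; } }
}

public class AcquisitionSubsystem
{
    // Clients simply subscribe; no Observer interface, no gcroot bookkeeping.
    public event EventHandler<FrameReadyEventArgs> FrameReady;

    protected virtual void OnFrameReady(int frameNumber)
    {
        EventHandler<FrameReadyEventArgs> handler = FrameReady;
        if (handler != null)
        {
            handler(this, new FrameReadyEventArgs(frameNumber));
        }
    }
}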
Any thoughts?

There is no single right or wrong answer, except to say that you leaning towards managed while your co-worker leans towards unmanaged is definitely wrong. Whichever way you choose, you both need to be in agreement, so as not to make the problem of having some code managed and some code unmanaged any worse.
You said "We are solidly committed to .NET". Who is "we" here? Is it your company or you and your coworker? It sounds like your coworker is not quite so committed to .NET. If the company is committed and the coworker isn't, then someone needs to reiterate to your coworker that he is part of a team and needs to follow the company direction.
If the decision really is up to the two of you and your coworker won't budge, then you should consider acquiescing.
We have a large managed/unmanaged app as well and we would never consider doing anything in unmanaged unless we absolutely had to. The majority of the app is managed. It's much better that way. But that's my opinion, not fact.

The Zen of managed vs unmanaged programming:
Good unmanaged code will be exemplified by not having a noticeable disadvantage in memory management bugs versus equivalent managed code.
Good managed code will be exemplified by not having a noticeable disadvantage in performance versus equivalent unmanaged code.
There is nothing good about interop between the two of them ;)

Related

Should an app using VCL migrate?

Is VCL dead, or does it have a future as a GUI library? Since CLX has ended, is there any chance of cross-platform support in future releases?
I've had to do some work with a legacy app that uses Borland's VCL (BCB6). Now that new features have to be implemented, it's necessary to re-evaluate the alternatives: whether to stick with VCL or migrate to some other library/framework.
I've never followed closely what's happening with Embarcadero (formerly Borland) tools. There seem to be only a few VCL-tagged questions here on SO, and I've had no luck with Google either.
Whether to continue using VCL in your project or migrate to an alternative depends a lot on your requirements. The VCL framework is powerful and mature, with lots of 3rd-party components, which makes it a good idea to consider it. The alternatives have been improving rapidly, and to point out one as the ultimate choice really requires you to carefully consider your requirements and validate the strengths and weaknesses of the different frameworks.
Considering that cross-platform is on the roadmap, I remind you that so has 64-bit support been for quite a while. We might see cross-platform support, perhaps on schedule, perhaps delayed as we have seen with many previous features. I want to believe it's coming because I truly like the VCL framework, but I always have a natural doubt concerning the official roadmap of the RAD Studio series - sorry, David. ;)
If you've researched the different alternatives and found VCL to be the best choice based on its relevance to your project, then I'd consider using the VCL framework, especially if it is one you are familiar with. Learning a new framework can - while often a good idea - be a time-consuming job. So even though there might be a risk of the framework not being kept alive (as there will be with any alternative), you might save a lot of work by staying with the familiar framework, if it is the one that suits your project the most.
If you do consider going with C++ Builder and the VCL, you might find that the C++ Builder Journal is a valuable source of information; they have a relatively quiet forum, but with some interesting posts in it, and some free hints on their website: www.bcbjournal.com.
Of course there are also the Embarcadero forums, and this site. It may be a good idea to search the Delphi forums and categories, since there seem to be more active users and by far more posts there. One good thing, though, is that translating Delphi to C++ in VCL-related questions is quite simple.
VCL is undergoing continued development.
Cross-platform support is on the current roadmap.
The Embarcadero forums are still a valuable resource.
As a user of VCL I must say that your observations are correct. VCL might appeal to you, but the resources available, compared to Qt and other toolkits, are poor, especially on SO. Our team has also found several bugs in its components, and we have more than once patched components to make our application stable. Still, for me the main reason to migrate is that VCL locks you in with a single set of development tools. I must admit I have a hard time finding any really good reason to continue using it if you have the resources to migrate.
Given that bcc32 and its libraries are also very buggy, the lock-in gets even more serious. Over the last months my team and I have spent more time fixing issues caused by the compiler than actually developing features. For me this is such a serious impediment that its cost outweighs its benefits tenfold. Unfortunately, the cost of migrating is so high for us that, at least for now, we have to endure the pain.

Is System.AddIn mostly about making it easier to use Remoting or does it make it harder to do so?

It takes at least 7 assemblies, and restricting my add-in's data model to data types that Remoting can deal with, before the AppDomain isolation features begin to work. It is so complex! The System.AddIn team's blog implies to me that they were trying to re-create a mental model of COM, a model I never understood very well in the first place and whose benefits I am not sold on. (If COM is so good, why is it dead? - rhetorical question.) If I don't need to mirror or interop with legacy COM (like VSTO does using System.AddIn), is it possible to just create some classes that load in a new AppDomain?
I can write the discovery code myself; I've done it before, and a naive implementation is pretty fast because I'm not iterating over all the assemblies in the GAC!
So my specific question is: can I get the AppDomain isolation that add-ins provide with a few Remoting code snippets, and what would those be?
I'm not entirely sure that any answer to your question meets the terms of the site - there is no single solution.
Yes, the Remoting is easier with System.AddIn, as it is done for you. However, it is highly controlled and, as you identified, requires a little work to plumb it all together. The cache file spewed out by the discovery process is hardly welcome either.
System.AddIn excels at isolation, which is actually a bit of an arse to put together from scratch in a robust, flexible way. It supports cross-process hosting and fairly simple passage of user WPF elements from one domain to another.
One thing to remember, however, is that MAF's target audience is not those who are trying to connect two applications together. It is targeting developers who want pluggable yet secure systems (cross-process hosting protects the root application from unhandled exceptions; AppDomains allow executing potentially foreign code with defined security). For most communication, direct yourself straight to System.Runtime.Remoting or WCF.
If you want to continue with System.AddIn, consider the pipeline builder plugin for Visual Studio!
In conclusion: you can get System.AddIn-style isolation using Remoting, but to get a decent system you will require more than a few snippets. I am trying to replicate it myself and am tripping up all over the remote interface components - something System.AddIn does without a hitch.
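For the sake of illustration, the bare-bones starting point looks something like this (a minimal sketch with hypothetical names; the discovery, versioned pipelines, and lifetime management that System.AddIn provides are exactly what you would be rebuilding on top of it):

using System;

// The plug-in type must derive from MarshalByRefObject so that calls
// from the host cross the AppDomain boundary through a transparent proxy.
public class MyAddIn : MarshalByRefObject
{
    public string SayHello()
    {
        return "Hello from " + AppDomain.CurrentDomain.FriendlyName;
    }
}

public class Host
{
    public static void Main()
    {
        AppDomain sandbox = AppDomain.CreateDomain("AddInDomain");
        MyAddIn addIn = (MyAddIn)sandbox.CreateInstanceAndUnwrap(
            typeof(MyAddIn).Assembly.FullName,
            typeof(MyAddIn).FullName);

        Console.WriteLine(addIn.SayHello()); // executes in AddInDomain
        AppDomain.Unload(sandbox);           // tear down the add-in without killing the host
    }
}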
After messing around with System.AddIn for quite a while, I'm convinced that it was added as a one-off, special-purpose solution for Microsoft's own use. I'm surprised it got elevated to a core part of the .NET Framework. It doesn't seem to have the refinement and polish needed for a general .NET Framework component.
I'd like to find an alternative way to create .NET managed add-ins that doesn't require so much effort.

How mature is the Microsoft Code Contracts framework?

Microsoft has recently put a release of their Code Contracts framework on DevLabs with a commercial license. We're interested on using them in our project (mostly C#, some C++/CLI) to gradually replace all the custom validation code, but I'm keen to know about the experience other people have had with it before we commit to it, specifically:
Do you think the framework is sufficiently mature for large and complex commercial projects?
What problems have you run into while using it?
What benefits have you got from it?
Is it currently more pain than it's worth?
I realise that this is a somewhat subjective question as it requires opinion, but given that this framework is a very significant part of .NET 4.0 and will (potentially) change the way we all write validation code, I hope that this question will be left open to gather experience on the subject. To help me make a decision, I will boil it down to a specific, answerable question:
Should we be starting to use it next month?
Note that we do not ship a code API, only a web-service one, so for the majority of our code, breaking compatibility in terms of the exception type thrown is not a concern. However, as I'm hoping more people than just me will benefit from this post and its answers, any detail around this area is more than welcome.
The last mature response to this was in 2009, and .NET 4 is out. I figure we're due for an update:
Code Contracts might well be mature enough for your Debug releases.
I realise this is somewhat of an upgrade from “Harmless” to “Mostly Harmless”.
The Code Contracts home page links to quite thorough documentation in PDF format. The documentation outlines usage guidelines in section 5. To summarize, you can pick how brave you feel about the Contract Tools re-writing your IL in your Release builds.
We're using the “don't rewrite my Release IL” mode.
So far, I'm most enjoying this unexpected benefit: there's less code, thus less code to test. All your guard clauses melt away.
if (arg == null) {
    throw new ArgumentNullException("arg");
}
// Blank line here insisted upon by StyleCop
becomes:
Contract.Requires(arg != null);
Your functions are shorter. Your intent is clearer. And, you no longer have to write a test named ArgumentShouldNotBeNull just to reach 100% coverage.
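Postconditions read the same way. A small sketch (hypothetical method, following the same pattern as the documentation):

// using System.Diagnostics.Contracts;
public string Canonicalize(string arg)
{
    Contract.Requires(arg != null);
    Contract.Ensures(Contract.Result<string>() != null);
    return arg.Trim();
}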
So far, I've run into two problems:
I had a unit test which relied on a contract failure to succeed. You might argue the existence of the test was a blunder, but I wanted to document this particular prohibition in the form of a test. The test failed on my build server because I didn't have the tools installed. Solution: install the tools.
We're using two tools that rewrite IL: Code Contracts and PostSharp. They didn't get along too well, but PostSharp 2.0.8.1283 fixed the problem. I'd cautiously evaluate how any two IL-rewriting tools get along, though.
So far, the benefits are outweighing the hazards.
Addressing out-of-date concerns raised in other answers:
Code Contracts's documentation is quite thorough, though regrettably in PDF.
There's at least one Code Contract forum hosted by Microsoft.
Code Contracts Standard Edition is free if you have any VS2010 license.
.NET 4 is out. I've run into Microsoft's contracts when implementing generic collection interfaces.
I've been playing around with Code Contracts some more myself on a small but moderately complex standalone project, which needs to inherit from some BCL classes and use others.
The contracts thing seems great when you're working in a completely isolated environment with just your own code and primitive types, but as soon as you start using BCL classes (which until .NET 4.0 do not have their own contracts) the verifier cannot check whether they will violate any of the requires/ensures/invariants and so you get a lot of warnings about potentially unsatisfied constraints.
On the other hand, it does find some invalid or potentially unsatisfied constraints which could be real bugs, but there is so much noise that it's hard to pick out the ones you can actually fix. It's possible to suppress the warnings from the BCL classes by using the assume mechanism, but this is somewhat self-defeating: these classes will have contracts in the future, and the assumptions will lessen their worth.
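For reference, the assume mechanism mentioned above is a one-liner; a hedged sketch of quieting one such warning:

// The verifier cannot prove what this 3.5-era BCL call guarantees,
// so we assert it ourselves. This is an unchecked assumption, not a proof.
string path = Path.Combine(rootDirectory, fileName);
Contract.Assume(path != null); // quiets the warning; becomes a liability once the BCL ships real contracts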
So my feeling is that for now, because in 3.5 we're trying to build on a framework that the verifier does not sufficiently understand, it's probably worth waiting for 4.0.
Judging by this thread I would say it is not quite mature enough to use for an enterprise level project. I haven't used it myself, but people are still running into bugs that would bring your contract-critical project to a halt. It seems like a really great framework and the example videos they've provided have been exciting, but I'd wait for:
Existence of a community forum. You're going to want to be able to discuss inevitable problems you run into with other developers, and you want to know there is a decently strong base of developers out there to discuss solutions with.
A successful pilot project release. Generally, when Microsoft Research releases something that they think is mature enough to be used in a commercial project, they will work with an organization to pilot it, and then release that project as open source, as a proof of concept and trial-by-fire of all of the major features. This would give a lot of confidence that most of the common contract scenarios are covered and working.
More complete documentation. Plain and simple, at some point you're going to want to do something with contracts that you can't do yet using Microsoft Code Contracts. You want to be able to quickly and clearly determine that your scenario is not yet supported. In my opinion, the current documentation is going to keep you guessing and trying different things, which will result in a lot of wasted time.
It's not mature enough.
It will be as soon as Microsoft releases it with the affordable editions of VS, but without the static code analysis it's not usable at all.
The editions of VS that have it are so insanely expensive that only a handful of people will ever be able to afford it.
It's a shame Microsoft killed this amazing idea with their pricing policy. I wish Code Contracts would become mainstream, but they won't.
Epic fail.

Is MonoRail ready for productive usage?

Right now I'm not sure...
I'd say yes. I'm using it. I know for a fact that Universal are using it on some of their (thousands of) sites. I will add some caveats, however:
There are serious problems with setting it up, especially if you want to debug into the libraries.
The helper functions favour Prototype, as opposed to the more modern jQuery. This is changing rapidly, however.
The documentation is a bit chaotic, again the Castle Team are working on that.
I'm not guaranteeing every last "out-there" feature works, but the point of the system is actually to keep it simple.
Compared to vanilla ASP.NET, it's an absolute joy. I assure you that you won't miss viewstate.
We have been building a fairly large application with it for the past year and a half. It's been nice not to have to deal with the old ASP page-based model and to use the better Model/View/Controller design pattern instead.
To get the new stuff you really need to work off the development trunk, because they don't do releases very often. We have a lot of tests that exercise the framework, so when an update to the framework breaks something we depend on, we know about it immediately.
If you have to work in .NET this beats the heck out of the alternatives.
There is an overview on the monorail forum: http://forum.castleproject.org/viewforum.php?f=6
I'm using it for an application and haven't had any big issues with it.
The biggest problem is indeed finding good documentation and examples.
I've had no problems setting it up. Julian, I don't think it is constructive to say things like "serious problems" without any further clarification or example.
Debugging into the libraries is trivial. Because it's open source, you can debug into the whole thing.
I've been using MonoRail in production for ages on many projects: as an employee, as an indie contractor, and for non-work-related sites.
I know I'm biased on that; however, I can wholeheartedly promise that my positive usage experience is what lured me into contributing to the project, not the other way around.

Does it still make sense to learn low level WinAPI programming? [closed]

Does it make sense, having all of the C#-managed-bliss, to go back to Petzold's Programming Windows and try to produce code w/ pure WinAPI?
What can be learned from it? Isn't it just too outdated to be useful?
This question is bordering on religious :) But I'll give my thoughts anyway.
I do see value in learning the Win32 API. Most, if not all, GUI libraries (managed or unmanaged) result in calls to the Win32 API. Even the most thorough libraries don't cover 100% of the API, and hence there are always gaps which need to be plugged by direct API calls or P/Invoking. Some of the wrappers have names similar to those of the underlying API calls, but those names aren't exactly self-documenting, so understanding the underlying API, and the terminology used therein, will aid in understanding the wrapper APIs and what they actually do.
Plus, if you understand the nature of the underlying APIs that are used by frameworks, then you will make better choices with regards to which library functionality you should use in a given scenario.
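To make that concrete, a hedged sketch of plugging one such gap from C#; FlashWindow is a user32 export with no managed wrapper, and the declaration below is the usual shape of a hand-written P/Invoke:

using System;
using System.Runtime.InteropServices;

internal static class NativeMethods
{
    // BOOL FlashWindow(HWND hWnd, BOOL bInvert) - no .NET equivalent exists,
    // so we declare the signature ourselves and let the marshaller do the rest.
    [DllImport("user32.dll")]
    [return: MarshalAs(UnmanagedType.Bool)]
    public static extern bool FlashWindow(IntPtr hWnd, bool bInvert);
}

// Usage from a WinForms form: NativeMethods.FlashWindow(this.Handle, true);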
Cheers!
I kept to standard C/C++ for years before learning Win32 API, and to be quite blunt, the "learning Win32 API" part is not the best technical experience of my life.
On one hand, the Win32 API is quite cool. It's like an extension of the C standard API (who needs fopen when you can have CreateFile?). But I guess UNIX/Linux/WhateverOS have the same gizmo functions. Anyway, in Unix/Linux they have "everything is a file". In Windows, they have "everything is a... window" (no kidding! See CreateWindow!).
On the other hand, this is a legacy API. You will be dealing with raw C, and raw C madness:
Like telling a structure its own size before passing it through a void* pointer to some Win32 function.
Messaging can be quite confusing, too: mixing C++ objects with Win32 windows leads to very interesting examples of the chicken-or-egg problem (funny moments when you write a kind of delete this; in a class method).
Having to subclass a WndProc when you're more familiar with object inheritance is head-splitting and less than optimal.
And of course, there is the joy of "why in this fracking world did they do it this way??" moments, when you strike your keyboard with your head once too often and come home with keys engraved in your forehead, just because someone thought it more logical to write an API that changes the color of a "window" not by changing one of its properties, but by asking its parent window to do it.
etc.
On the last hand (three hands???), consider that some people working with legacy APIs are themselves using legacy code styling. The moment you hear "const is for dummies" or "I don't use namespaces because they decrease the runtime speed", or the even better "Hey, who needs C++? I code in my own brand of object-oriented C!!!" (no kidding... in a professional environment, and the result was quite a sight...), you'll feel the kind of dread only the condemned feel in front of the guillotine.
So... All in all, it's an interesting experience.
Edit
After re-reading this post, I see it could be seen as overly negative. It is not.
It is sometimes interesting (as well as frustrating) to know how things work under the hood. You'll understand that, despite enormous (impossible?) constraints, the Win32 API team did wonderful work to make sure everything, from your "olde Win16 program" to your "latest Win64 over-the-top application", can work together, in the past, now, and in the future.
The question is: Do you really want to?
Because spending weeks doing things that could be done (and done better) with other, more high-level and/or object-oriented APIs can be quite demotivating (real-life experience: 3 weeks for the Win API, against 4 hours in three other languages and/or libraries).
Anyway, you'll find Raymond Chen's Blog very interesting because of his insider's view on both Win API and its evolution through the years:
https://blogs.msdn.microsoft.com/oldnewthing/
Absolutely. When nobody knows the low level, who will update and write the high level languages? Also, when you understand the low level stuff, you can write more efficient code in a higher level language, and also debug more efficiently.
The native APIs are the "real" operating system APIs. The .NET library is (with few exceptions) nothing more than a fancy wrapper around them. So yes, I'd say that anybody who can understand .NET with all its complexity, can understand relatively mundane things like talking to the API without the benefit of a middle-man.
Just try to do DLL Injection from managed code. It can't be done. You will be forced to write native code for this, for windowing tweaks, for real subclassing, and a dozen other things.
So yes: you should (must) know both.
Edit: even if you plan to use P/Invoke.
On the assumption that you're building apps targeted at Windows:
it can sure be informative to understand lower levels of the system - how they work, how your code interacts with them (even if only indirectly), and where you have additional options that aren't available in the higher-level abstractions
there are times when your code might not be as efficient, high-performance, or precise as your requirements demand
However, in more and more cases, folks like us (who never learned "unmanaged coding") will be able to pull off the programming we're trying to do without "learning" Win32.
Further, there are plenty of sites that provide working samples, code fragments and even fully-functional source code that you can "leverage" (borrow, plagiarize - but check that you're complying with any re-use license or copyright!) to fill in any gaps that aren't handled by the .NET Framework class libraries (or the libraries that you can download or license).
If you can pull off the feats you need without messing around in Win32, and you're doing a good job of developing well-formed, readable managed code, then I'd say mastering .NET would be a better choice than spreading yourself thin over two very different environments.
If you frequently need to leverage those features of Windows that haven't received good Framework class library coverage, then by all means, learn the skills you need.
I've personally spent far too much time worrying about the "other areas" of coding that I'm supposed to understand to produce "good programs", but there's plenty of masochists out there that think everyone's needs and desires are like their own. Misery loves company. :)
On the assumption that you're building apps for the "Web 2.0" world, or that would be just as useful/beneficial to *NIX & MacOS users:
Stick with languages and compilers that target as many cross-platform environments as possible.
pure .NET in Visual Studio is better than Win32, obviously, but developing against the Mono libraries, perhaps using the SharpDevelop IDE, is probably an even better approach.
you could also spend your time learning Java, and those skills would transfer very well to C# programming (plus the Java code would theoretically run on any platform with a matching JRE). I've heard it said that Java is more like "write once, debug everywhere", but that's probably just as true of C# (or even more so).
Analogy: if you build cars for a living (programming), then it's very pertinent to know how the engine works (Win32).
Simple answer, YES.
This is the answer to any question of the form "does it make sense to learn a low-level language/API X even when a higher-level language/API Y exists?"
YES
You are able to boot up your Windows PC (or any other OS) and ask this question in SO because a couple of guys in Microsoft wrote 16-bit assembly code that loads your OS.
Your browser works because someone wrote an OS kernel in C that serves all your browser's requests.
It goes all the way up to scripting languages.
Big or small, there is always a market and opportunity to write something in any level of abstraction. You just have to like it and fit in the right job.
No API or language at any level of abstraction is irrelevant unless there is a better one competing at the same level.
Another way of looking at it: A good example from one of Michael Abrash's book: A C programmer was given the task of writing a function to clear the screen. Since C was a better (higher level) abstraction over assembly and all, the programmer only knew C and knew it well. He did his best - he moved the cursor to each location on the screen and cleared the character there. He optimized the loop and made sure it ran as fast as it could. But still it was slow... until some guy came in and said there was some BIOS/VGA instruction or something that could clear the screen instantly.
It always helps to know what you are walking on.
Yes, for a few reasons:
1) .NET wraps Win32 code. .NET is usually a superior system to code against, but having some knowledge of the underlying Win32 layer (oops, WinAPI now that there is 64-bit code too) bolsters your knowledge of what is really happening.
2) in this economy, it is better to have some advantages over the other guy when you are looking for a job. Some WinAPI experience may provide this for you.
3) some system aspects are not available through the .NET Framework yet, and if you want to access those features you will need to use P/Invoke (see http://www.pinvoke.net for some help there). Having at least a smattering of WinAPI experience will make your P/Invoke development effort a lot more efficient (see the sketch after this list).
4) (added) Now that Win8 has been around for a while, it is still built on top of the WinAPI. iOS, Android, OS X, and Linux are all out there, but the WinAPI will still be out there for many, many years.
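As promised above, a hedged sketch of the P/Invoke pattern that trips people up most often: a Win32 call that requires the structure to report its own size (GetVersionEx and OSVERSIONINFO here; MONITORINFO, WINDOWPLACEMENT and many others follow the same convention):

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
struct OSVERSIONINFO
{
    public int dwOSVersionInfoSize; // Win32 convention: the caller states the struct's own size
    public int dwMajorVersion;
    public int dwMinorVersion;
    public int dwBuildNumber;
    public int dwPlatformId;
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 128)]
    public string szCSDVersion;
}

static class VersionInterop
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
    static extern bool GetVersionExW(ref OSVERSIONINFO info);

    public static void Report()
    {
        OSVERSIONINFO info = new OSVERSIONINFO();
        info.dwOSVersionInfoSize = Marshal.SizeOf(typeof(OSVERSIONINFO));
        if (GetVersionExW(ref info))
        {
            Console.WriteLine("Windows {0}.{1}", info.dwMajorVersion, info.dwMinorVersion);
        }
    }
}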
Learning a new programming language or technology is for one of three reasons:
1. Need: you're starting a project for building a web application and you don't know anything about ASP.NET
2. Enthusiasm: you're very excited about ASP.NET MVC. why not try that?
3. Free time: but who has that anyway?
The best reason to learn something new is need. If you need to do something that the .NET Framework can't do (performance, for example), then WinAPI is your solution. Until then, we keep ourselves busy learning about .NET.
For most needs on the desktop you won't need to know Win32. However, there is a LOT of Win32 not in .NET, though it is in the outlying stuff that may end up being less than 1% of your application.
USB support, HID support, and Windows Media Foundation, just off the top of my head. There are many cool Vista APIs only available from Win32.
You will do yourself a large favor by learning how to do interop with the Win32 API if you do desktop programming, because when you do need to call Win32 (and you will) you won't spend weeks scratching your head.
Personally I don't really like the Win32 API, but there's value in learning it, as the API allows more control and efficiency in the GUI than a language like Visual Basic, and I believe that if you're going to make a living writing software you should know the API even if you don't use it directly. This is for reasons similar to why it's good to learn C: like how strcpy takes more time than copying an integer, or why you should use pointers to arrays as function parameters instead of passing arrays by value.
Learning C or a lower level language can definitely be useful. However, I don't see any obvious advantage in using the unmanaged WinAPI.
I've seen low-level Windows API code... it ain't pretty... I wish I could unlearn it. I think it's beneficial to learn low-level as in C, as you gain a better understanding of the hardware architecture and how it all works. Learning the old Windows API... I think that stuff can be left to the people at Microsoft who may need to learn it to build higher-level languages and APIs... they built it, let them suffer with it ;-)
However, if you happen to find a situation where you feel you just can't do what you need to do in a higher level language (few and far between), then perhaps start the dangerous dive into that world.
Yes. Take a look at uTorrent, an amazing piece of software efficiency. Half of its small size is due to the fact that many of its core components were rewritten to avoid gargantuan libraries.
Much of this couldn't be done without understanding how those libraries interface with the lower-level APIs.
It's important to know what is available with the Windows API. I don't think you need to crank out code with it, but you should know how it works. The .NET Framework contains a lot of functionality, but it doesn't provide managed code equivalents for the entire Windows API. Sometimes you have to get a bit closer to the metal, and knowing what's down there and how it behaves will give you a better understanding of how to use it.
This is really the same as the question: should I learn a low-level language like C (or even assembler)?
Coding in it is certainly slower (though of course the result is much faster), but its true advantage is that you gain an insight into what is happening close to the system level, rather than just understanding someone else's metaphor for what is going on.
It can also be better when things won't work, or won't work fast enough or with the sort of granularity that you need. (And do at least some subclassing and superclassing.)
I'll put it this way. I don't like programming to the Win32 API. It can be a pain compared to managed code. BUT, I'm glad I know it because I can write programs that otherwise I wouldn't be able to. I can write programs that other people can't. Plus it gives you more insight into what your managed code is doing behind the scenes.
The amount of value you get out of learning the Win32 API, (aside from the sorts of general insights you get from learning about how the nuts and bolts of the machine fit together) depends on what you're trying to achieve. A lot of the Win32 API has been wrapped nicely in .NET library classes, but not all of it. If for instance you're looking to do some serious audio programming, that portion of the Win32 API would be an excellent subject of study because only the most basic of operations are available from .NET classes. Last I checked even the managed DirectX DirectSound library was awful.
At the risk of shameless self-promotion....
I just came across a situation where the Win32 API was my only option. I want to have different tooltips on each item in a listbox. I wrote up how I did it on this question.
Even in very very high level languages you still make use of the API. Why? Well not every aspect of the API has been replicated by the various libraries, frameworks, etc. You need to learn the API for as long as you will need the API to accomplish what you are trying to do. (And no longer.)
Apart from some very special cases when you need direct access to APIs, I would say NO.
There is considerable time and effort required to learn to implement the native API calls correctly, and the payoff is just not worth it. I would rather spend the time learning some hot new technology or framework that will make your life easier and programming less painful - not decades-old, obsolete COM libraries that nobody really uses anymore (sorry, COM users).
Please don't stone me for this view. I know a lot of engineers here have really curious souls, and there is nothing wrong with learning how things work. Curiosity is good and really helps understanding. But from a managerial point of view, I would rather spend a week learning how to develop Android apps than how to call OLE or COM.
If you are planning to develop a cross-platform application: if you use Win32, your application could easily run on Linux through Wine. This results in a highly maintainable application, and it is one of the advantages of learning Win32.
