Understanding ZeroMQ

So, as I asked in a previous post, I want to be able to make programs or functions written in different languages communicate with each other.
I came across ZeroMQ recently and I'm trying to figure out whether it is something that could help me, since it provides some sort of sockets. Can ZeroMQ, for example, exchange data (or pass arguments) between a program written in Python and a program or function written in C++, or is it meant for something completely different?

A: Yes, exactly that is the power of the ZeroMQ and nanomsg frameworks.
Neither of these gives you plain sockets, but rather BEHAVIOUR created within the context of the Zero-* principles -- the set of courageous maxims the Scalable Formal Communication Pattern framework was designed, developed, and fine-tuned to meet.
That will enable you to assemble your own fast and smart messaging layer(s).
Q: What is the best next step?
In spite of your first impression, simply forget everything you know about sockets and multithreaded synchronisation tricks.
Yes, rather forget, and build your new understanding on a "green field".
Take Pieter Hintjens' book "Code Connected, Volume 1" (accessible as a PDF) and spend a few weeks understanding both the motivation and the typical errors Pieter has hammered into this must-read bible of ZeroMQ.
Code snippets are dangerous if you have not grasped, or have completely missed, the full context of the bigger picture.
Believe me, I could not give you better advice. You may check my other posts on ZeroMQ and nanomsg to see the difference.
You will definitely benefit from this book, and ZeroMQ will give you many powers you would never (and believe me, never) be ready to program from scratch on your own. The power is immense (if well re-used).
nota bene
For real-world inter-process communications, there is one minor issue to be aware of: interoperability across ZeroMQ versions. Yes, the power of ZeroMQ is immense; nevertheless, it is necessary to build version control into your messaging layer, so as to handle situations where some platforms have no update path to newer releases. I ran into this issue while re-integrating a trading system with a component where a version as old as zmq.__version__ == 2.1.11 was necessary, while recent versions are well above 14.x.y, so as to remain assuredly 100% end-to-end backward-compatible.
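As a minimal sketch of what such a version guard can look like (assuming a JSON wire format; the field name "v" and the version numbers are invented for the example), stamp every message and refuse peers you cannot speak to:

    import json

    PROTOCOL_VERSION = 2   # bump whenever the message format changes

    def pack(payload):
        # wrap every outgoing payload with the protocol version
        return json.dumps({"v": PROTOCOL_VERSION, "data": payload}).encode()

    def unpack(raw):
        # reject anything from a peer speaking a different protocol version
        msg = json.loads(raw.decode())
        if msg.get("v") != PROTOCOL_VERSION:
            raise ValueError("unsupported protocol version: %r" % msg.get("v"))
        return msg["data"]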
Still, the overall potential is so immense that it makes sense to persevere and get the job done. G/L on that.

ZeroMQ is an abstraction over sockets. It is cross-platform and has lots of language bindings: I personally don't know of any language that doesn't have a ZeroMQ binding.
So yes, you can use ZeroMQ to communicate between a program written in Python and a program written in C++.
I recommend going through the zguide as it contains a lot of very useful information about ZeroMQ.
PyZMQ can be used as the Python binding, and zmqpp for your C++ code. Note that for the C++ side you could also use cppzmq or the zmq C API directly. I would recommend zmqpp as it is higher level and (imho) easier to use.
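As a minimal sketch (PyZMQ assumed installed; the port and payload are illustrative), the Python side of such a cross-language pair can be a REP socket; a C++ REQ client built with zmqpp or cppzmq would connect to the same endpoint and exchange the same bytes:

    # server.py -- a minimal PyZMQ REP socket; a REQ client in any language can talk to it
    import zmq

    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.bind("tcp://*:5555")            # illustrative port

    while True:
        request = socket.recv()            # raw bytes from the client, whatever its language
        socket.send(b"echo: " + request)   # the reply travels back the same way

The wire carries only bytes, so the two sides just need to agree on the serialization (plain strings, JSON, protobufs, whatever), not on a language.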

Related

Using ZeroMQ for cross platform development?

We have a large console application in Haskell that I have been charged with making cross platform and adding a gui.
The requirements are:
Native-as-possible look and feel.
Clients for Windows and Mac OS X, Linux if possible.
No separate runtime to install.
No required network communication. The haskell code deals with very sensitive information that cannot be transmitted over the wire. This is really the only reason this isn't a web application.
Now, the real reason for this question is to explain one solution I'm researching at the moment and to solicit for reasons that I'm not thinking of that make this a bad idea.
My solution is a native GUI: WinForms on Windows, Cocoa on Mac OS X, and GTK/Glade on Linux, each simply handling the presentation. Then I would write a layer on top of the Haskell code that turns it into a responder for messages to and from the UI, using ZeroMQ to handle the messages and maybe protobufs for serializing the data back and forth. So the native application would start, itself start the daemon where all of the magic happens, and the two would send messages back and forth.
Aside from making sure that the daemon only accepts connections from the application that started it, and the challenge of providing the right data back and forth for advanced gui elements (I'm thinking table views, cells, etc.), I don't see many downsides to this.
What am I not thinking about that makes this a bad idea?
I should probably mention that at first glance I was going to go with GTK on all platforms. The problem is that, while it's close, and GTK and Glade support for Haskell is nice to work with, the result doesn't look 'right'. It's close, but just not native enough in subtle ways which make that solution unacceptable to the people who happen to be writing the check for this work.
Also, the issue of multiple platforms and thus multiple languages for the gui isn't a problem so I'm not necessarily looking for other ways to solve that problem unless it simplifies something about the interop with the haskell code.
"Then I would write a layer on top of the Haskell code that turns it into a responder for messages to and from the UI using ZeroMQ to handle the messages and maybe protobufs for serializing the data back and forth."
I think this is reasonable (a client/server model, where the client just happens to be a native look-and-feel desktop app). (I have no strong view about protobufs versus e.g. JSON or Thrift.)
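For illustration, here is a minimal sketch of what that responder loop could look like on the daemon side, using JSON as the wire format. It is written with PyZMQ for brevity (the Haskell zeromq bindings expose the same socket types); the port and message fields are invented for the example:

    # daemon.py -- sketch of the responder: the GUI sends JSON requests over
    # ZeroMQ, the backend computes and replies with JSON
    import json
    import zmq

    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.bind("tcp://127.0.0.1:6000")    # loopback only: nothing leaves the machine

    while True:
        request = json.loads(socket.recv().decode())
        if request.get("op") == "query":                   # illustrative protocol
            reply = {"status": "ok", "rows": ["example"]}
        else:
            reply = {"status": "error", "message": "unknown op"}
        socket.send(json.dumps(reply).encode())

Note that binding to the loopback interface keeps the daemon off the network entirely, which also speaks to the requirement that no sensitive data be transmitted over the wire.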
The Haskell zeromq bindings are getting some use now, too.
"What am I not thinking about that makes this a bad idea?"
How well tested is zeromq on Windows and Mac? It is probably fine, but it's something I'd check.
"The problem is that, while it's close, and GTK and Glade support for Haskell is nice to work with, the result doesn't look 'right'."
Does the integration package help there?
Here's an interesting possibility: wai-handler-webkit. It essentially packages QtWebkit with the Warp web server to make your web apps deployable. It hasn't seen intensive use, has never been tested on Mac, and is tricky to compile on Windows, but it's a fairly straightforward approach that lets you use the fairly rich web ecosystem developing in Haskell.
I'm likely going to be doing more development on it in the near future, so if you have interest in using it, let me know what extra features would be useful, as well as if you could offer any help on the Mac front in particular. I'm also not convinced that we need to stick with QtWebkit on all platforms: it might make more sense to use a different Webkit backend depending on OS, or maybe even using Gecko or (shudder) Trident instead.
I've had some problems getting zeromq to play nice with haskell on OSX (problems with looking for a dylib as opposed to an "o" I think). Protocol buffers and haskell seems to work fine though.
So your reason not to use a web application is the sensitive nature of the Haskell program's output. And THAT is why you are distributing that same sensitive application, which spews out unencrypted data, to ALL client machines? That does not make any sense.
If your application is sensitive, you DEFINITELY should put it on a server and use the strongest possible TLS.

Cooperation between multiple programming languages

I'm a fairly advanced hobby programmer. I consider myself capable at Objective-C, Java, some straight C, Python, and general MVC design.
I've written quite a few programs but they have all been relatively self-contained, using external libraries occasionally.
When reading about larger projects and/or more complicated programs, I hear a lot of talk along the lines of "writing one part in X, and writing this part in Y."
Since I have a lack of experience with this, I was wondering if someone could point me in the right direction. What general designs/mechanisms are employed for applications or projects written in more than one language? What is involved in a "scriptable" design?
Thanks for any guidance on the topic!
-Chase
There is no single "right way". A multitude of approaches exist, including the .NET way, where all the languages are hosted inside a common runtime environment with well-specified interoperability constraints, and the good old Unix way, where all the components communicate via pipes or sockets, using simple text-based protocols.
For the latter you can read a classic book: The Unix Programming Environment (http://en.wikipedia.org/wiki/The_Unix_Programming_Environment). A sketch of that style follows.
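To make the Unix way concrete, here is a minimal Python sketch of one process driving another through pipes with a plain text protocol. The child here is the standard sort utility, but it could be a program written in any language:

    # One process talks to another over pipes with a line-based text protocol.
    import subprocess

    proc = subprocess.Popen(
        ["sort"],                          # any program in any language would do
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )
    out, _ = proc.communicate("banana\napple\ncherry\n")
    print(out)                             # apple, banana, cherry -- one per line

Because the contract is just "lines of text in, lines of text out", neither side needs to know what language the other is written in.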
Depends on what you need to do. For example, if you want to build an online poker game, you would most probably use Java for the application and Flash/Flex for the interface. Java has the power of the libraries, and Flash/Flex is quite generally available and offers a rich interface.
If you have software that receives input from an online application and produces output on a specific device (a label printer, for example), then your online-ready software (Java/PHP/Python) would best communicate with a specially designed program on the target computer; a program for which I'd use C++ for its technical power, rigour, and speed compared to Java.
The idea is to identify the languages that suit your purpose best. In my opinion it is ideal to use one language for everything, which is why I like Java: it seems to fit everything, although it has a more or less bad reputation for slowness.
I see things roughly this way:
1. Engineered, machine-oriented stuff: C++ (and languages of its kind)
2. Mobile, multifunctional stuff (middleware mainly): Java
3. Online, browser-based stuff: PHP, especially for B2C (people-oriented) applications
4. Python, Ruby, etc. are, from my point of view, somewhere between Java and PHP, but I have never really worked with them, so I cannot give an exact opinion
You can link them together depending on your needs.

Is it a good idea to use Ruby for socket programming?

My language of choice is Ruby, but I know from Twitter's example that Ruby can't handle a lot of requests. Is it a good idea to use it for socket development, or should I use a functional language like Erlang, Haskell, or Scala, as the Twitter developers did?
The company I work for uses Ruby for our web site. We have so far handled a little over 34,000,000,000 hits. We have no problem handling around 10,000,000 hits per day. Peak hits have exceeded 40,000,000 hits per day.
Scalability depends on a lot of factors. Our databases do a disproportionately high percentage of writes compared to reads, for example. While most websites do about 90% reads to 10% writes, we are closer to 50%-50%. My point is that scalability is affected by a lot of factors. If you are database-limited, as is often the case for web apps, it won't matter what language you use, you'll be waiting on your database.
There's a lot to think about if you are looking at handling large scale: sharding databases, memcached, and so on. The language you use for your application is just one aspect of scalability, and often, though not always, a small one.
Ruby may be a good option for you, but there's a lot to like in other languages. Erlang tries hard to make it easier to recover from errors, for example.
I'm not sure that any "lessons" that the Twitter team has learned about Ruby (more specifically, Rails) and scaling would apply to your project. They're looking at WAY more traffic than most people can reasonably expect to see.
As far as sockets and Ruby go, check out "I like Unicorn because it's Unix". It's quite an interesting read about doing sockets in Ruby.
I'd like to provide a bit of context first. I'm pretty active with the Scala community, and I would choose Scala over Ruby for any project.
So, having said that, stick with Ruby unless you actually hit a barrier. If Ruby is your language of choice, it might just be that you'll never be happy with the alternatives you mention, particularly the statically typed ones.
It might be good to learn a new language, to have something to fall back on if you need an alternative. In your case, I'd recommend Clojure or Erlang. Scala is a good statically typed OO language with functional programming perks. It might be easier to learn than the others, but people who really like dynamic typing don't convert to static typing easily.
As for Haskell, it's one of the most awesome languages out there (and much better supported and more popular than the equally awesome alternatives), and it can open your mind like nothing else. It's also tough to master.
If Ruby is your favorite language, then yes, it is a good idea. It is always better to use what you know and what you like.
Whereas you may get better performance from a functional language such as Erlang, the suitability of Ruby will really depend on what you are trying to achieve. For example, how many requests are you going to be handling? That is probably the first question. If the performance benefits of Erlang don't make much difference, use something you are comfortable with; why learn a new language if you don't have to?
You at least have the option of staying in your favorite high level language if you use a fast, concurrent language like Haskell, Erlang or Scala. With Ruby, performance bottlenecks will mean switching to compiled C (or Haskell, or ...) for speed anyway.
Ruby has the advantage of good frontend frameworks.
I have also used Ruby for many projects, though I've recently moved to Scala and like it quite a bit. One thing that I've heard good things about (but never tried myself) for network stuff in Ruby is EventMachine. It uses the Reactor pattern, just like Twisted, and it seems quite solid.
The key is to have a low-level library in C/C++ that does the socket multiplexing for you. Socket multiplexing is what makes a TCP server process truly multi-user. Such libraries in C (which is what you want) include libevent/libev, and in C++, boost::asio. Python has Twisted, which does this behind the scenes.
If you get such a library and use it from Ruby, you should be able to implement most socket programs fairly well. This is especially true on UNIX OSes, which favour multi-processing over multi-threading.
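For a feel of what socket multiplexing looks like, here is a minimal sketch using Python's standard selectors module, which wraps the same event-loop idea that libevent/libev, boost::asio, and Twisted provide. One process, many concurrent clients; the port is illustrative:

    # A single-process echo server multiplexing many client sockets.
    import selectors
    import socket

    sel = selectors.DefaultSelector()

    def accept(server):
        conn, _ = server.accept()               # a new client arrived
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ, echo)

    def echo(conn):
        data = conn.recv(1024)
        if data:
            conn.sendall(data)                  # echo it back
        else:
            sel.unregister(conn)                # client closed the connection
            conn.close()

    server = socket.socket()
    server.bind(("localhost", 9000))
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)

    while True:
        for key, _ in sel.select():             # block until some socket is ready
            key.data(key.fileobj)               # dispatch to accept() or echo()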
Having recently written (actually, still writing) a project using sockets with Ruby and Java, I would say no. The Ruby socket implementation is poorly documented unless you plan on writing a basic blocking chat server. I found writing in C or Java simpler; Ruby wraps up native sockets and you're kind of left wondering how the hell to use it. I have previously written plenty of socket code on Windows, Linux, and other platforms in C, with less stress.
My Ruby code now is very small and works well, but getting to that point was a real pain.

Asterisk AGI framework for IVR; Adhearsion alternative?

I am trying to get started writing scalable, telecom-grade applications with Asterisk and Ruby. I had originally intended to use the Adhearsion framework for this, but it does not have the required maturity, and its documentation is severely lacking. AsteriskRuby seems to be a good alternative, as it's well documented and appears to be written by Vonage.
Does anyone have experience deploying AGI-based IVR applications? What framework, if any, did you use? I'd even consider a non-Ruby one if it's justified. Thanks!
SipX is really the wrong answer. I've written some extremely complicated VoiceXML on SipX 3.10.2, and it's been all for naught, since SipX 4 is dropping SipXVXML for an interface that requires IVRs to be compiled JARs. Top that off with Nortel filing for bankruptcy, extremely poor documentation of the open-source version, poor compliance with VXML 2.0 (as of 3.10.2), and poor compliance with SIP standards (as of 3.10.2, it does not trunk well with ITSPs). I will applaud it for doing a bang-up job at what it was designed to do: be a PBX. But as an IVR, if I had it to do all over again, I'd do something different. I don't know what for sure, but something different. I'm toying with Trixbox CE now and working on tying it into JVoiceXML or VoiceGlue.
Also, don't read that SipX wiki comparison. It compares SipX 3.10 to AsteriskNOW 1 to Trixbox 1. Come on, it's like comparing Mac OS X to Win95! A more realistic comparison would be SipX 4 (due out Q1 2009) against Asterisk 1.6 and Trixbox 2.6, which would show that they accomplish nearly identical results except in the arena of scalability and high availability, where SipX wins. But for maturity and stability, I'd advocate Asterisk.
Also, my real-world performance results with SipXVXML:
Dell PowerEdge R200, Xeon dual-core 3.2 GHz: handles 17 calls before jitters.
HP DL380 G4, dual Xeon HT 3.2 GHz: handles 30 calls before long pauses.
I'll post my findings when I finish evaluating VoiceGlue and JVoiceXML, but I think I'm going to end up writing custom PHP called from AGI, since all the tools are native to Asterisk.
You should revisit Adhearsion as v0.8.1 is out, and the documentation has gotten much better quite recently. Have a look here:
http://adhearsion.com
http://docs.adhearsion.com
http://api.adhearsion.com
If you're looking for "telecom-grade" applications, you may want to look into SipXecs instead of asterisk. It's featureful, free, and open source, with commercial support available from Nortel. You can interact with it via a Web Services API in ruby (or any other language).
See the SipXecs wiki for more information. There's a comparison matrix on that site, comparing features with AsteriskNOW and TrixBox.
There really aren't any other frameworks out there. There are, of course, AGI bindings for every language, but as far as full-fledged frameworks for developing telephony applications go, we're just not there yet. At least in the open-source world.
I have asked somewhat related questions here, here, and here. I'm using Microsoft's Speech Server, and I'm very interested to learn about any alternatives that are out there, especially open-source ones. You might find some good info in the answers to one of those questions.
I used JAGIServer extensively, even though it's not under development anymore, and it's pretty good and easy to use. It's an interface for FastAGI, which I recommend you use instead of simple AGI.
The new version of this framework is OrderlyCalls which seems to have a lot more features but since I haven't needed them, I haven't tried it.
I guess it all depends on what you want to do with AGI. Usually I have a somewhat complex dialplan to gather and validate all user input, and then just use AGI to connect to a Java application, which reads some variables, does some stuff with them (performs operations, queries, etc.), then sets some more variables on the AGI channel and disconnects. At that point, the dialplan continues, depending on the result of the variables set by the Java app.
This works really fast because you have a ServerSocket in the Java app, which receives incoming connections from AGI, creates a JAGIClient with the new socket and a new instance of a JAGIProcessor (which you have to write; it's the object that will do all your processing), and then runs the JAGIClient inside a thread pool.
Your JAGIProcessor implements the processCall method, where it does all the work it needs, interacting with the JAGIClient passed as a parameter to read and set variables or do whatever else the AGI interface allows.
So you have a Java app running all the time, and it can be a simple J2SE app or an EE app in a container, it doesn't matter; once it's running, it will process AGI requests really fast, since no new processes have to be started (in contrast to simple AGI, which runs a program for every AGI call).
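For readers who don't want the Java machinery, here is a hedged sketch of the same FastAGI flow in Python: Asterisk connects over TCP, sends its channel variables as "key: value" headers, and the server hands a result back. 4573 is the conventional FastAGI port; the RESULT variable name is invented for the example:

    # A long-running FastAGI responder: no per-call process startup.
    import socketserver

    class AGIHandler(socketserver.StreamRequestHandler):
        def handle(self):
            env = {}
            while True:                            # read the AGI environment block
                line = self.rfile.readline().decode().strip()
                if not line:                       # a blank line ends the headers
                    break
                key, _, value = line.partition(": ")
                env[key] = value
            # ... do the real work with env here, then hand a result back ...
            self.wfile.write(b'SET VARIABLE RESULT "ok"\n')
            self.rfile.readline()                  # consume the "200 result=1" reply

    server = socketserver.ThreadingTCPServer(("0.0.0.0", 4573), AGIHandler)
    server.serve_forever()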
Smee again. After migrating my client's IVRs over from SipX to Asterisk using PHPAGI, I must say that I haven't encountered any other architecture that is anywhere near as simple and capable. I'll be stress-testing Trixbox CE 2.8 today on the same hardware I tested SipX on earlier. But I must say, using PHPAGI for the IVR and the Asterisk CLI for debugging has worked perfectly and allowed me to develop IVRs far faster than any other company out there. I'm working on implementing TTS and ASR today, and I'll post my stress-test results when I can.
Simple, small, flexible Asterisk AGI IVR written in PHP:
http://freshmeat.net/projects/phpivr
For small and easy applications I use Asterisk::AGI in Perl. There are also extensions for FastAGI. For bigger applications, like VoIP operators' backends, I use something similar to OrderlyCalls, written in Java (my own code). OrderlyCalls is great, though, to start with: a Java FastAGI engine you can extend to your needs.

Does it still make sense to learn low level WinAPI programming? [closed]

Does it make sense, given all of the C# managed bliss, to go back to Petzold's Programming Windows and try to produce code with the pure WinAPI?
What can be learned from it? Isn't it just too outdated to be useful?
This question is bordering on religious :) But I'll give my thoughts anyway.
I do see value in learning the Win32 API. Most, if not all, GUI libraries (managed or unmanaged) result in calls to the Win32 API. Even the most thorough libraries don't cover 100% of the API, so there are always gaps which need to be plugged by direct API calls or P/Invoking. Some of the wrappers around the API calls have names similar to the underlying calls, but those names aren't exactly self-documenting. So understanding the underlying API, and the terminology used therein, will aid in understanding the wrapper APIs and what they actually do.
Plus, if you understand the nature of the underlying APIs that the frameworks use, then you will make better choices about which library functionality to use in a given scenario.
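As a taste of what a direct call looks like from managed or scripting land, here is a minimal sketch using Python's ctypes, the analogue of P/Invoke. MessageBoxW is a real user32.dll export, and this of course runs only on Windows:

    # Calling the raw Win32 API directly, no wrapper library in between.
    import ctypes

    user32 = ctypes.windll.user32              # loads user32.dll (Windows only)
    user32.MessageBoxW(None, "Hello from the raw API", "Win32 demo", 0)  # 0 == MB_OK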
Cheers!
I kept to standard C/C++ for years before learning the Win32 API, and to be quite blunt, the "learning the Win32 API" part is not the best technical experience of my life.
On one hand, the Win32 API is quite cool. It's like an extension of the C standard API (who needs fopen when you can have CreateFile?). But I guess UNIX/Linux/whatever OS has the same gizmo functions. Anyway, in Unix/Linux, they have "everything is a file". In Windows, they have "everything is a... window" (no kidding! See CreateWindow!).
On the other hand, this is a legacy API. You will be dealing with raw C, and raw C madness.
Like telling a structure its own size before passing it through a void * pointer to some Win32 function (see the sketch after this list).
Messaging can be quite confusing, too: mixing C++ objects with Win32 windows leads to very interesting examples of the chicken-or-egg problem (funny moments when you write a kind of delete this; in a class method).
Having to subclass a WndProc when you're more familiar with object inheritance is head-splitting and less than optimal.
And of course, there is the joy of "why in this fracking world did they do it this way??" moments, when you strike your keyboard with your head once too often and get back home with keys engraved in your forehead, just because someone thought it more logical to write an API that changes the color of a "window" not by changing one of its properties, but by asking its parent window to do it.
etc.
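As promised above, a hedged illustration of the tell-the-struct-its-own-size idiom, shown via Python's ctypes to keep it short (CURSORINFO and GetCursorInfo are real Win32 names; Windows only):

    # Many Win32 APIs take a struct whose first member, cbSize, must be
    # pre-filled by the caller before the call will succeed.
    import ctypes
    from ctypes import wintypes

    class CURSORINFO(ctypes.Structure):
        _fields_ = [
            ("cbSize", wintypes.DWORD),        # the struct carries its own size
            ("flags", wintypes.DWORD),
            ("hCursor", ctypes.c_void_p),
            ("ptScreenPos", wintypes.POINT),
        ]

    info = CURSORINFO()
    info.cbSize = ctypes.sizeof(CURSORINFO)    # forget this line and the call fails
    if ctypes.windll.user32.GetCursorInfo(ctypes.byref(info)):
        print("cursor at", info.ptScreenPos.x, info.ptScreenPos.y)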
In the last hand (three hands???), consider that some people working with legacy APIs themselves use legacy code styling. The moment you hear "const is for dummies" or "I don't use namespaces because they decrease the runtime speed", or the even better "Hey, who needs C++? I code in my own brand of object-oriented C!!!" (no kidding... in a professional environment, and the result was quite a sight...), you'll feel the kind of dread only the condemned feel in front of the guillotine.
So... All in all, it's an interesting experience.
Edit
After re-reading this post, I see it could be seen as overly negative. It is not.
It is sometimes interesting (as well as frustrating) to know how things work under the hood. You'll understand that, despite enormous (impossible?) constraints, the Win32 API team did wonderful work to make sure everything, from your "olde Win16 program" to your "latest Win64 over-the-top application", can work together, in the past, now, and in the future.
The question is: Do you really want to?
Because spending weeks to do things that could be done (and done better) in other, more high-level and/or object-oriented APIs can be quite demotivating (real-life experience: three weeks with the Win API, versus four hours with three other languages and/or libraries).
Anyway, you'll find Raymond Chen's Blog very interesting because of his insider's view on both Win API and its evolution through the years:
https://blogs.msdn.microsoft.com/oldnewthing/
Absolutely. When nobody knows the low level, who will update and write the high-level languages? Also, when you understand the low-level stuff, you can write more efficient code in a higher-level language, and debug more effectively too.
The native APIs are the "real" operating system APIs. The .NET library is (with few exceptions) nothing more than a fancy wrapper around them. So yes, I'd say that anybody who can understand .NET with all its complexity, can understand relatively mundane things like talking to the API without the benefit of a middle-man.
Just try to do DLL Injection from managed code. It can't be done. You will be forced to write native code for this, for windowing tweaks, for real subclassing, and a dozen other things.
So yes: you should (must) know both.
Edit: even if you plan to use P/Invoke.
On the assumption that you're building apps targeted at Windows:
it can sure be informative to understand lower levels of the system - how they work, how your code interacts with them (even if only indirectly), and where you have additional options that aren't available in the higher-level abstractions
there are times when your code might not be as efficient, as high-performance, or as precise as your requirements demand
However, in more and more cases, folks like us (who never learned "unmanaged coding") will be able to pull off the programming we're trying to do without "learning" Win32.
Further, there's plenty of sites that provide working samples, code fragments and even fully-functional source code that you can "leverage" (borrow, plagiarize - but check that you're complying with any re-use license or copyright!) to fill in any gaps that aren't handled by the .NET framework class libraries (or the libraries that you can download or license).
If you can pull off the feats you need without messing around in Win32, and you're doing a good job of developing well-formed, readable managed code, then I'd say mastering .NET would be a better choice than spreading yourself thin over two very different environments.
If you frequently need to leverage those features of Windows that haven't received good Framework class library coverage, then by all means, learn the skills you need.
I've personally spent far too much time worrying about the "other areas" of coding that I'm supposed to understand to produce "good programs", but there's plenty of masochists out there that think everyone's needs and desires are like their own. Misery loves company. :)
On the assumption that you're building apps for the "Web 2.0" world, or that would be just as useful/beneficial to *NIX & MacOS users:
Stick with languages and compilers that target as many cross-platform environments as possible.
pure .NET in Visual Studio is obviously better than Win32, but developing against the Mono libraries, perhaps using the SharpDevelop IDE, is probably an even better approach.
you could also spend your time learning Java, and those skills would transfer very well to C# programming (plus the Java code would theoretically run on any platform with a matching JRE). I've heard it said that Java is more like "write once, debug everywhere", but that's probably just as true of C# (or even more so).
Analogy: if you build cars for a living (programming), then it's very pertinent to know how the engine works (Win32).
Simple answer, YES.
This is the answer to any question of the form "does it make sense to learn a low-level language/API X even when a higher-level language/API Y exists?"
YES
You are able to boot up your Windows PC (or any other OS) and ask this question on SO because a couple of guys at Microsoft wrote 16-bit assembly code that loads your OS.
Your browser works because someone wrote an OS kernel in C that serves all your browser's requests.
It goes all the way up to scripting languages.
Big or small, there is always a market and opportunity to write something in any level of abstraction. You just have to like it and fit in the right job.
No API or language at any level of abstraction is irrelevant unless there is a better one competing at the same level.
Another way of looking at it: a good example from one of Michael Abrash's books: a C programmer was given the task of writing a function to clear the screen. Since C was a better (higher-level) abstraction over assembly and all, the programmer only knew C, and knew it well. He did his best: he moved the cursor to each location on the screen and cleared the character there. He optimized the loop and made sure it ran as fast as it could. But it was still slow... until some guy came in and said there was some BIOS/VGA instruction or something that could clear the screen instantly.
It always helps to know what you are walking on.
Yes, for a few reasons:
1) .NET wraps Win32 code. .NET is usually a superior system to code against, but having some knowledge of the underlying Win32 layer (oops, WinAPI, now that there is 64-bit code too) bolsters your knowledge of what is really happening.
2) in this economy, it is better to have some advantages over the other guy when you are looking for a job. Some WinAPI experience may provide this for you.
3) some system aspects are not available through the .NET framework yet, and if you want to access those features you will need to use P/Invoke (see http://www.pinvoke.net for some help there). Having at least a smattering of WinAPI experience will make your P/Invoke development effort a lot more efficient.
4) (added) Now that Win8 has been around for a while, it is still built on top of the WinAPI. iOS, Android, OS X, and Linux are all out there, but the WinAPI will still be around for many, many years.
Learning a new programming language or technology happens for one of three reasons:
1. Need: you're starting a project to build a web application and you don't know anything about ASP.NET
2. Enthusiasm: you're very excited about ASP.NET MVC; why not try it?
3. Free time: but who has that anyway?
The best reason to learn something new is need. If you need to do something the .NET framework can't do (performance, for example), then WinAPI is your solution. Until then, we keep ourselves busy learning .NET.
For most needs on the desktop you won't need to know Win32; however, there is a LOT of Win32 not in .NET, but it lies in the outlying stuff that may end up being less than 1% of your application.
USB support, HID support, and Windows Media Foundation, just off the top of my head. There are many cool Vista APIs only available from Win32.
You will do yourself a large favor by learning how to do interop with the Win32 API if you do desktop programming, because when you do need to call Win32 (and you will), you won't spend weeks scratching your head.
Personally, I don't really like the Win32 API, but there's value in learning it, as the API allows more control and efficiency with the GUI than a language like Visual Basic, and I believe that if you're going to make a living writing software, you should know the API even if you don't use it directly. This is for reasons similar to why it's good to learn C, like knowing that strcpy takes more time than copying an integer, or why you should pass arrays to functions as pointers instead of by value.
Learning C or a lower level language can definitely be useful. However, I don't see any obvious advantage in using the unmanaged WinAPI.
I've seen low-level Windows API code... it ain't pretty... I wish I could unlearn it. I think it does help to learn something low-level, as in C, since you gain a better understanding of the hardware architecture and how all that stuff works. Learning the old Windows API... I think that stuff can be left to the people at Microsoft who may need it to build higher-level languages and APIs... they built it, let them suffer with it ;-)
However, if you happen to find a situation where you feel you just can't do what you need to do in a higher level language (few and far between), then perhaps start the dangerous dive into that world.
Yes. Take a look at uTorrent, an amazing piece of software efficiency. Half of its small size is due to the fact that many of its core components were rewritten to avoid gargantuan libraries.
Much of this couldn't be done without understanding how those libraries interface with the lower-level APIs.
It's important to know what is available with the Windows API. I don't think you need to crank out code with it, but you should know how it works. The .NET Framework contains a lot of functionality, but it doesn't provide managed code equivalents for the entire Windows API. Sometimes you have to get a bit closer to the metal, and knowing what's down there and how it behaves will give you a better understanding of how to use it.
This is really the same as asking whether you should learn a low-level language like C (or even assembler).
Coding in it is certainly slower (though of course the result is much faster), but its true advantage is that you gain insight into what is happening at close to the system level, rather than just understanding someone else's metaphor for what is going on.
It can also be better when things won't work well, or fast enough, or with the sort of granularity that you need. (And do at least some subclassing and superclassing.)
I'll put it this way. I don't like programming to the Win32 API. It can be a pain compared to managed code. BUT, I'm glad I know it because I can write programs that otherwise I wouldn't be able to. I can write programs that other people can't. Plus it gives you more insight into what your managed code is doing behind the scenes.
The amount of value you get out of learning the Win32 API, (aside from the sorts of general insights you get from learning about how the nuts and bolts of the machine fit together) depends on what you're trying to achieve. A lot of the Win32 API has been wrapped nicely in .NET library classes, but not all of it. If for instance you're looking to do some serious audio programming, that portion of the Win32 API would be an excellent subject of study because only the most basic of operations are available from .NET classes. Last I checked even the managed DirectX DirectSound library was awful.
At the risk of shameless self-promotion....
I just came across a situation where the Win32 API was my only option. I want to have different tooltips on each item in a listbox. I wrote up how I did it on this question.
Even in very very high level languages you still make use of the API. Why? Well not every aspect of the API has been replicated by the various libraries, frameworks, etc. You need to learn the API for as long as you will need the API to accomplish what you are trying to do. (And no longer.)
Apart from some very special cases when you need direct access to APIs, I would say NO.
There is considerable time and effort required to learn to implement native API calls correctly, and the return is just not worth it. I would rather spend the time learning some new hot technology or framework that will make your life easier and programming less painful. Not decades-old obsolete COM libraries that nobody really uses anymore (sorry, COM users).
Please don't stone me for this view. I know a lot of engineers here have really curious souls, and there is nothing wrong with learning how things work. Curiosity is good and really helps understanding. But from a managerial point of view, I would rather spend a week learning how to develop Android apps than how to call OLE or COM.
If you are planning to develop a cross-platform application and you use Win32, your application can run on Linux through Wine. This results in a highly maintainable application. This is one of the advantages of learning Win32.
