Using ZeroMQ for cross-platform development? - user-interface

We have a large console application in Haskell that I have been charged with making cross-platform and adding a GUI.
The requirements are:
Native-as-possible look and feel.
Clients for Windows and Mac OS X, Linux if possible.
No separate runtime to install.
No required network communication. The Haskell code deals with very sensitive information that cannot be transmitted over the wire. This is really the only reason this isn't a web application.
Now, the real reason for this question is to explain one solution I'm researching at the moment and to solicit for reasons that I'm not thinking of that make this a bad idea.
My solution is a native GUI for each platform (WinForms on Windows, Cocoa on Mac OS X, GTK/Glade on Linux) that simply handles the presentation. I would then write a layer on top of the Haskell code that turns it into a responder for messages to and from the UI, using ZeroMQ to handle the messaging and maybe protobufs for serializing the data back and forth. So the native application would start, itself start the daemon where all of the magic happens, and exchange messages with it.
Aside from making sure that the daemon only accepts connections from the application that started it, and the challenge of providing the right data back and forth for advanced GUI elements (I'm thinking table views, cells, etc.), I don't see many downsides to this.
What am I not thinking about that makes this a bad idea?
I should probably mention that at first glance I was going to go with GTK on all platforms. The problem is that, while it's close, and GTK and Glade support for Haskell is nice to work with, the result doesn't look 'right'. It's close, but just not native enough in subtle ways which make that solution unacceptable to the people who happen to be writing the check for this work.
Also, the issue of multiple platforms and thus multiple languages for the GUI isn't a problem, so I'm not necessarily looking for other ways to solve that problem unless it simplifies something about the interop with the Haskell code.
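(For concreteness, here is a rough sketch of what the serialization side could look like if JSON were picked instead of protobufs, using the aeson library; UiRequest and UiReply, and their fields, are purely hypothetical stand-ins for whatever messages the UI and the Haskell core would actually exchange.)

```haskell
{-# LANGUAGE DeriveGeneric #-}

-- Minimal sketch, assuming the aeson package; the message types below are
-- invented for illustration, not taken from the real application.
import Data.Aeson (FromJSON, ToJSON, decode, encode)
import qualified Data.ByteString.Lazy.Char8 as BL
import GHC.Generics (Generic)

data UiRequest = UiRequest
  { command :: String     -- e.g. "loadTable"
  , args    :: [String]
  } deriving (Show, Generic)

data UiReply = UiReply
  { status :: String
  , rows   :: [[String]]  -- placeholder for table-view data
  } deriving (Show, Generic)

instance ToJSON UiRequest
instance FromJSON UiRequest
instance ToJSON UiReply
instance FromJSON UiReply

main :: IO ()
main = do
  -- Round-trip one request, as the UI <-> daemon layer would do per message.
  let req = UiRequest "loadTable" ["accounts"]
  BL.putStrLn (encode req)
  print (decode (encode req) :: Maybe UiRequest)
```

The same types could just as easily be serialized with protobufs or Thrift; the point is only that the wire format stays independent of whichever GUI toolkit sits on the other end.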

Then I would write a layer on top of the Haskell code that turns it into a responder for messages to and from the UI using ZeroMQ to handle the messages and maybe protobufs for serializing the data back and forth.
I think this is reasonable (a client/server model, where the client just happens to be a native look-n-feel desktop app). (I have no strong view about protobufs versus e.g. JSON or Thrift.) The Haskell zeromq bindings are getting some use now, too.
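For a sense of the shape the responder could take, here is a minimal sketch, assuming the zeromq4-haskell bindings (System.ZMQ4); handleRequest is a hypothetical hook into the existing Haskell core, not a real function from it.

```haskell
import Control.Monad (forever)
import qualified Data.ByteString as BS
import qualified System.ZMQ4 as ZMQ

-- Hypothetical entry point into the existing console application:
-- decode the request, do the work, encode the reply.
handleRequest :: BS.ByteString -> IO BS.ByteString
handleRequest = pure  -- placeholder: just echo the payload back

main :: IO ()
main =
  ZMQ.withContext $ \ctx ->
    ZMQ.withSocket ctx ZMQ.Rep $ \sock -> do
      -- ipc:// keeps the traffic off the network entirely; on Windows you
      -- would likely fall back to tcp://127.0.0.1 plus some authentication.
      ZMQ.bind sock "ipc:///tmp/myapp-ui.sock"
      forever $ do
        req  <- ZMQ.receive sock
        resp <- handleRequest req
        ZMQ.send sock [] resp
```

The GUI side would hold the mirror-image Req socket, so each UI action becomes a request/reply round trip to the daemon.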
What am I not thinking about that makes this a bad idea?
How well tested is zeromq on Windows and Mac? It is probably fine, but something I'd check.
The problem is that, while it's close, and GTK and Glade support for Haskell is nice to work with, the result doesn't look 'right'.
Does the integration package help there?

Here's an interesting possibility: wai-handler-webkit. It essentially packages up QtWebKit with the Warp web server to make your web apps deployable. It hasn't seen intensive use, has never been tested on Mac, and is tricky to compile on Windows, but it's a fairly straightforward approach that lets you use the fairly rich web ecosystem developing in Haskell.
I'm likely going to be doing more development on it in the near future, so if you have an interest in using it, let me know what extra features would be useful, as well as whether you could offer any help on the Mac front in particular. I'm also not convinced that we need to stick with QtWebKit on all platforms: it might make more sense to use a different WebKit backend depending on the OS, or maybe even to use Gecko or (shudder) Trident instead.
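The "web app" half of that approach is just an ordinary WAI application; here is a minimal sketch against the plain Warp server (current wai/warp API assumed), with wai-handler-webkit's job being to package an app like this together with a WebKit view instead of a browser.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Network.HTTP.Types (status200)
import Network.Wai (Application, responseLBS)
import Network.Wai.Handler.Warp (run)

-- A trivial WAI application: every request gets the same HTML page.
app :: Application
app _request respond =
  respond $ responseLBS
    status200
    [("Content-Type", "text/html")]
    "<h1>Hello from the Haskell side</h1>"

main :: IO ()
main = run 3000 app  -- plain Warp on port 3000, handy for testing in a browser
```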

I've had some problems getting zeromq to play nicely with Haskell on OS X (I think it was looking for a .dylib as opposed to an .o). Protocol Buffers and Haskell seem to work fine together, though.

So your reason not to use a web application is the sensitive nature of the Haskell program's output. And THAT's why you are distributing that same sensitive application, which spews out unencrypted data, to ALL client machines? That does not make any sense.
If your application is sensitive, you DEFINITELY should put it on a server and use the strongest possible TLS.

Related

Mac look and feel on other platforms?

I'm just curious whether there is a GUI framework that allows you to use a Mac look and feel on other platforms. Presumably frameworks that use native APIs wouldn't be helpful (e.g. wxWidgets).
Qt uses the native API partially for its Mac look and feel, so that isn't useful.
What about Swing?
In general, don't do this. Different platforms have different conventions, and your software should follow its platform's conventions to minimize the cognitive load of the user.
Unless, I guess, you're the only person who you ever expect to use it.
Legally, you can't create the Mac look and feel on another platform. Apple owns the copyrights to it.
However, the Quaqua Java Look-and-feel implements Java widgets that have that look-and-feel. If you have a good reason to use it in a non-commercial way, it may be a solution.
Every operating system and desktop manager has a different way of implementing GUI elements. Trying to port an application that will look exactly the same as it does on OS X onto these platforms will be difficult if you're looking to use the native controls. Secondly, like Gred stated, each one will have its own way of doing things and could cause user issues by having unusual icons, symbols, and controls displayed or missing.
Though if you would like to attempt this, one of the multi-platform web browsers such as Firefox might give you some idea of how to carry the same general look and feel across platforms.
Good luck, and hope this helps some.
To answer the last question, Swing does not ship with the Mac look and feel on platforms that aren't Mac-based. This also applies to the other platform-specific look-and-feel elements.
While not the greatest solution, for Java, the Nimbus L&F is a good alternative... but since it's now included in Java (as of 6u10), it's not available separately any more.
You can implement your own style in Qt [and probably GTK+] to attempt to look the same (see QStyle). However, it probably isn't worth the time and effort and will piss off some users. There are some Windows themes that attempt to mimic the look. I know the new version of Parallels ships with something like that, but it looks rather funny, as the margins, spacing, font, etc. are completely wrong.

Develop a Qt/GTK-Like Framework

I now have an idea to start developing a bare-bones Qt/GTK+-like framework, but I want to know some things before I start this project:
What is the structure of GTK+ and Qt?
Do I need to develop a window manager to build my own framework?
Some resources to start?
Developing a GUI/application framework is a significant undertaking. You might want to be very clear about why you need to write yet another framework.
Both projects you mention are open source. Why not start there?
GTK: git clone git://git.gnome.org/gtk+
Qt: git clone git://gitorious.org/qt/qt.git
Ed: You ask what the structure of GTK and Qt is, whether you need to write your own window manager (answer: no), and how to get started. Answers to at least the first two are in the source code. Don't forget, great practitioners in any field learn by watching others. Reading code is no different.
Writing a GUI/app framework would be a great learning experience, but even a fairly small app framework would be a very big job, and not something you really should tackle until you're fairly expert in writing applications using several other frameworks and widget toolkits.
I did something like this once, back in the early years of this decade. That was after I'd been programming for the Mac for over 15 years, Windows over 10, and had programmed both directly to their native graphics, event, and widget APIs, as well as various object-oriented toolkits for them including PowerPlant, MFC, and MacApp. When I started working on a PalmOS application, I spent a couple of weeks writing a very small app framework modeled on PowerPlant. But I could not have succeeded at all without those decades of broad and deep experience with so many GUI systems.
Doing this for Linux/X11 is even more work. That's because, unlike Mac OS and Windows, neither X11 nor Linux supplies built-in user-interface widgets, or much in the way of graphics primitives or text-layout capabilities. GTK+ is part of the GNOME ecosystem; it provides the widgets, gets its message queue and internal communications from GObject, relies on GDK to abstract and simplify its graphics and event communications with X11, and uses Pango and Cairo for text rendering and layout. I've worked all through that system, and it probably represents many dozens of person-years of hard work by a lot of really smart people. And I'm sure Qt is very similar.
So if you really want to do this, I would recommend you:
Write programs with a lot of different app and widget toolkits, on multiple operating systems. That will help you learn not just how such systems work, but why they are designed as they are. And it will give you some feeling for what works well, and what works poorly.
Contribute bug fixes or new features to one or more of the various open-source frameworks. GTK+ has a list of tasks for beginners to work on. Another great open-source framework is wxWidgets.
Become an expert-level C/C++ programmer.
When you've done that for a few years, you will have the expertise suitable for tackling your own framework.
That sounds like a major undertaking, at least as a starting project.
Not sure what you mean by "the structure" of e.g. GTK+. You can see the object hierarchy for GTK+; that tells you at least how the implemented objects (GTK+ is an object-oriented API) relate to each other. You can guess from that information how the code can be structured.
And no, you don't need to write your own window manager; the toolkits mainly concern themselves with what happens inside windows, not with the window management itself. Of course you could decide that your "platform" should have a wider scope, and include a WM.
I think some of the answers here might exaggerate a bit. Obviously, making something of the same quality, breadth, and depth as Qt and GTK is a huge undertaking. But you can make simpler stuff and still learn a lot about how it works. I suggest doing what I did in university: use OpenGL with GLUT. Then you've already got basic drawing functionality and an event system in place. You then need to create classes for buttons, text fields, etc.
If you want to make it really simple, then each component just needs to know where it is drawn and have some sort of bounding box where you check whether a mouse click is inside it or not. You also need to create some system which makes it possible for buttons, check boxes, etc. to tell the rest of your code that they were clicked.
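As a rough illustration of that idea in plain Haskell (no GUI library; every name here is made up for the sketch), the core really is just a bounding-box test plus a callback per widget:

```haskell
-- Each widget knows its bounding box and what to do when it is clicked.
data Rect = Rect { rx, ry, rw, rh :: Int }

data Widget = Widget
  { bounds  :: Rect
  , onClick :: IO ()  -- how the widget tells the rest of the code it was hit
  }

-- Is a point inside a rectangle?
contains :: Rect -> (Int, Int) -> Bool
contains (Rect x y w h) (px, py) =
  px >= x && px < x + w && py >= y && py < y + h

-- Dispatch a mouse click to every widget whose bounding box contains it.
dispatchClick :: [Widget] -> (Int, Int) -> IO ()
dispatchClick ws p = mapM_ onClick [w | w <- ws, bounds w `contains` p]

main :: IO ()
main = do
  let ok = Widget (Rect 10 10 80 24) (putStrLn "OK clicked")
  dispatchClick [ok] (15, 20)  -- prints "OK clicked"
```

A real toolkit layers drawing, focus, and keyboard handling on top, but hit testing and callbacks are the skeleton described above.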
This isn't really the rocket science people here make it out to be. Games have made their own very simple GUI toolkits for years. You can try that approach as well. I have modeled a simple GUI toolkit on top of a game engine before; your buttons and text fields could simply be sprites.
But yeah, if you want to make something that will compete with Gtk+ and Qt, forget about it. That is a team effort over many years.

Is it worth keeping the OS look and feel?

Is it worth trying to keep your GUI within the system look and feel?
Every major program has its own anyway (Visual Studio, Internet Explorer, Firefox, Symantec utilities, Adobe...).
Or should just the frame and dialogs be left within the system look and feel?
update:
One easy example: if you want to add a close button to your tab, usually you make it match your current desktop theme. But if the user has a different theme, your close button is out of place; it doesn't fit the system look anymore.
I played with the uxtheme API, but there is not much you can do, and some themes I've seen are incomplete sets.
So to address this issue, the best way I see is to do what Visual Studio/Firefox/Chrome do: roll your own tab control with your own theme...
I think that, unless your program becomes a very major part of the users' lives, you should strive to minimize "surprises" and maximize recognizability (is that even a word?).
So, if you are making something that is used by 1,000 people for 10 minutes a day, go with the system looks and mechanisms.
If, on the other hand, you are making something that 100 people are using for 6 hours a day, I would start exploring what UI improvements and shortcuts I could cram in to make those 6 hours easier to deal with.
Notice, however, that UI fixes must not come at the expense of performance. This is almost always the case in the beginning, when someone thinks that simply overriding the OnPaint event in .NET will be sufficient.
Before you know it, you are once again intercepting WM_NCPAINT and WM_ERASEBKGND and all those little tricks to make it go as fast as the built-in controls.
I tend to agree with others here- especially Soraz and Smaci.
One thing I'll add, though: if you do feel that the OS L&F is too constraining, and you have good grounds for going beyond it, I'd strive to follow the principle of "pacing and leading" (which I'm borrowing here from an NLP context).
The idea is that you still want to capitalise as much as possible on your intended audience's familiarity with the host OS (there will be rare exceptions to this, as Smaci has already covered). So you use as much as possible of the "standard" controls and behaviours (this is the "pacing"), but extend them where necessary in ways that still "fit in" as much as possible (the "leading").
You've already mentioned some good examples of this principle at work: Visual Studio, and even Office to some extent (Office is "special", as new UI styles that cut their teeth there often find their way into future OS versions, or become de facto standards).
I'm bringing this up to contrast the type of apps that just "do it their way" - usually because they've been ported from another platform, or have been written to be cross-platform in GUI as well as core. Java apps often fall into this category, but they're not the only ones. It's not as bad as it used to be, but even today most pro audio apps have mongrel UIs, showing their lineage as they have been ported from one platform to another through the years. While there might be good business reasons for these examples, it remains that their UIs tend to suck and going this route should be avoided if in any way possible!
The overriding principle is still to follow the path of least surprise, and take account of your user's familiarity with the OS, and ratio of their time using your app to others on the OS.
Yes, if only because it enables the OS to use any accessibility features that are built in, like text-to-speech. There is nothing more annoying for someone who needs accessibility features than yet another UI that breaks all the tools they are used to.
I'd say it depends on the users, the application, and the platform. The interface should be intuitive to the users, which is only the same as following system UI standards if those standards are appropriate for those users. For example, in the past I was involved in developing handheld systems for dairy and bread delivery on Windows CE handhelds. The users in this case typically were not computer literate and had a weak educational background. The user interface focused on ease of use through simple language and was modelled on a pre-existing paper-form system. It made no attempt to follow the Windows look and feel, as this would not have been appropriate.
Currently, I develop very graphical software for a user group that is typically 3rd level educated and very computer literate. The expectation here is that the software will adhere to and extend the Windows look and feel.
Software should be easy and intuitive where possible, and how to achieve this is entirely context dependent.
I'd like to reply with another question (not really Stack Overflow protocol, but I think that, in this case, it's justified).
The question is 'Is it worth breaking the OS look and feel?'
In other words,
Do you have justification for doing so? (In order to present data in some way that's not possible within normal L&F)
What do you gain from doing so? (Improving usability?)
What do you lose from doing so? (Intuitiveness & familiarity?)
Don't simply do it 'To be different'
It depends on how widely you define the system look and feel... but in general, you should keep it.
Do not surprise the user by deviating from what he is used to. That's one of the reasons why we call him a user ;-)
Firefox and Adobe products usually don't, because they target several platforms which all have their own L&F. But Visual Studio keeps the typical Windows L&F. And, as long as you are developing only for Windows, so should you.
Apart from the fact that there is no well-defined look and feel on Windows, you should always try to follow the host platform's native L&F. Note, however, that look and feel is just as much about how a program behaves as how it looks. Programs which behave in a counter-intuitive way are just as annoying as programs sporting their own ugly widgets.
Fraps is a good example (IMHO) of a program which is actually very useful, but breaks several user interface guidelines and looks really ugly.
If you're developing for Apple's Mac OS X or Microsoft Windows, the vendors supply interface guidelines which should be followed for any application to be "native".
See Are there any standards to follow in determining where to place menu items? for more information.
If you are on (or develop for) a Mac, then definitely YES!
And this should be true for Windows also.
In general, yes. But there's the occasional program that does well despite not following the conventions of the OSes it runs on. For example, Emacs runs pretty much contrary to every interface guideline on OS X or Windows (and probably even GNOME/KDE), and it's not going away any time soon.
I strongly recommend making your application look native.
A common mistake that developers porting an application to a new platform seem to make is thinking that the new application should look and feel like it did on the old platform.
No, the new application should look and feel like all the other applications that the user is used to on the new platform.
Otherwise, you get abominations like iTunes on Windows. The same UI design may be exactly right on one platform and very wrong on the next.
You will find that your users may not be able to pinpoint why they dislike your application; they just find it hard to use.
Yes, there are valid exceptions, but they are rare (and sure enough, they tend to be the major applications like Office and Firefox, rather than the little ones). If you are unsure enough to have to ask on StackOverflow, your application isn't one of them.

Asterisk AGI framework for IVR; Adhearsion alternative?

I am trying to get started writing scalable, telecom-grade applications with Asterisk and Ruby. I had originally intended to use the Adhearsion framework for this, but it does not have the required maturity and its documentation is severely lacking. AsteriskRuby seems to be a good alternative, as it's well documented and appears to be written by Vonage.
Does anyone have experience deploying AGI-based IVR applications? What framework, if any, did you use? I'd even consider a non-Ruby one if it's justified. Thanks!
SipX is really the wrong answer. I've written some extremely complicated VoiceXML on SipX 3.10.2 and it's been all for naught, since SipX 4 is dropping SipXVXML for an interface that requires IVRs to be compiled JARs. Top that off with Nortel filing for bankruptcy, extremely poor documentation on the open-source version, poor compliance with VXML 2.0 (as of 3.10.2), and SIP standards issues (as of 3.10.2, it does not trunk well with ITSPs). I will applaud it for doing a bang-up job at what it was designed to do: be a PBX. But as an IVR, if I had it to do all over again, I'd do something different. I don't know what for sure, but something different. I'm toying with Trixbox CE now and working on tying it into JVoiceXML or VoiceGlue.
Also, don't read that SipX wiki crap. It compares SipX 3.10 to AsteriskNOW 1 and Trixbox 1. Come on. It's like comparing Mac OS X to Win95! A more realistic comparison would be SipX 4 (due out 1Q 2009) to Asterisk 1.6 and Trixbox 2.6, which would show that they accomplish near-identical results except in the arena of scalability and high availability; SipX wins at that. But for maturity and stability, I'd advocate Asterisk.
Also, my real world performance results with SipXVXML:
Dell PowerEdge R200, Xeon Dual Core 3.2GHz, handles 17 calls before jitters.
HP DL380 G4, Dual Xeon HT 3.2 GHz, handles 30 calls before long pauses.
I'll post my findings when I finish evaluating VoiceGlue and JVoiceXML, but I think I'm going to end up writing a custom PHP script called from AGI, since all the tools are native to Asterisk.
You should revisit Adhearsion as v0.8.1 is out, and the documentation has gotten much better quite recently. Have a look here:
http://adhearsion.com
http://docs.adhearsion.com
http://api.adhearsion.com
If you're looking for "telecom-grade" applications, you may want to look into SipXecs instead of asterisk. It's featureful, free, and open source, with commercial support available from Nortel. You can interact with it via a Web Services API in ruby (or any other language).
See the SipXecs wiki for more information. There's a comparison matrix on that site, comparing features with AsteriskNOW and TrixBox.
There really aren't any other frameworks out there. There's of course AGI bindings to every language, but as far as full-fledged frameworks for developing telephony applications, we're just not there yet. At least in the open-source world.
I have asked somewhat related questions here, here, and here. I'm using Microsoft's Speech Server, and I'm very interested to learn about any alternatives that are out there, especially open-source ones. You might find some good info in the answers to one of those questions.
I used JAGIServer extensively, even though it's not under development anymore, and it's pretty good and easy to use. It's an interface for FastAGI, which I recommend you use instead of simple AGI.
The new version of this framework is OrderlyCalls which seems to have a lot more features but since I haven't needed them, I haven't tried it.
I guess it all depends on what you want to do with AGI. Usually I have a somewhat complex dialplan to gather and validate all user input, and then just use AGI to connect to a Java application which reads some variables, does some stuff with them (performs operations, queries, etc.), then sets some more variables on the AGI channel and disconnects. At this point, the dialplan continues depending on the variables set by the Java app.
This works really fast because you have a ServerSocket in the Java app, which receives incoming connections from AGI, creates a JAGIClient with the new socket and a new instance of a JAGIProcessor (which you have to write; it's the object that will do all your processing), and then runs the JAGIClient inside a thread pool.
Your JAGIProcessor implements the processCall method where it does all the work it needs, interacting with the JAGIClient passed as a parameter, to read and set variables or do whatever stuff the AGI interface allows you to.
So you have a Java app running all the time, and it can be a simple J2SE app or an EE app in a container; it doesn't matter. Once it's running, it will process AGI requests really fast, since no new processes have to be started (in contrast to simple AGI, which runs a program for every AGI call).
Smee again. After migrating my client's IVRs over from SipX to Asterisk using PHPAGI, I must say that I haven't encountered any other architecture that is anywhere near as simple and capable. I'll be stress-testing Trixbox CE 2.8 today on the same hardware I had tested SipX on earlier. But I must say, using PHPAGI for the IVR and the Asterisk CLI for debugging has worked perfectly and allowed me to develop IVRs far faster than any other company out there. I'm working on implementing TTS and ASR today, and I'll post my stress-test results when I can.
A simple, small, flexible Asterisk AGI IVR written in PHP:
http://freshmeat.net/projects/phpivr
For small and easy applications I use Asterisk::AGI in Perl. There are also extensions for FastAGI. For bigger applications, like VoIP operators' backends, I use something similar to OrderlyCalls, written in Java (my own code). OrderlyCalls is great, though, to start with a Java FastAGI engine and extend it to your needs.

Does it still make sense to learn low level WinAPI programming? [closed]

Does it make sense, having all of the C#-managed-bliss, to go back to Petzold's Programming Windows and try to produce code w/ pure WinAPI?
What can be learned from it? Isn't it just too outdated to be useful?
This question is bordering on religious :) But I'll give my thoughts anyway.
I do see value in learning the Win32 API. Most, if not all, GUI libraries (managed or unmanaged) result in calls to the Win32 API. Even the most thorough libraries don't cover 100% of the API, and hence there are always gaps which need to be plugged by direct API calls or P/Invoking. Some of the wrappers around the API calls have names similar to the underlying API calls, but those names aren't exactly self-documenting. So understanding the underlying API, and the terminology used therein, will aid in understanding the wrapper APIs and what they actually do.
Plus, if you understand the nature of the underlying APIs that are used by frameworks, then you will make better choices with regards to which library functionality you should use in a given scenario.
Cheers!
I kept to standard C/C++ for years before learning the Win32 API, and to be quite blunt, the "learning Win32 API" part is not the best technical experience of my life.
On the one hand, the Win32 API is quite cool. It's like an extension of the C standard API (who needs fopen when you can have CreateFile?). But I guess UNIX/Linux/WhateverOS have the same gizmo functions. Anyway, in Unix/Linux, they have "everything is a file". In Windows, they have "everything is a... window" (no kidding! See CreateWindow!).
On the other hand, this is a legacy API. You will be dealing with raw C, and raw C madness.
Like having to tell a structure its own size before passing it through a void* pointer to some Win32 function.
Messaging can be quite confusing, too: mixing C++ objects with Win32 windows leads to very interesting examples of the chicken-or-egg problem (funny moments when you write a kind of delete this; in a class method).
Having to subclass a WndProc when you're more familiar with object inheritance is head-splitting and less than optimal.
And of course, there is the joy of "why in this fracking world did they do it this way??" moments, when you strike your keyboard with your head once too often and get back home with keys engraved in your forehead, just because someone thought it more logical to write an API that changes the color of a "window" not by changing one of its properties, but by asking its parent window to do it.
etc.
On the last hand (three hands???), consider that some people working with legacy APIs are themselves using legacy code styling. The moment you hear "const is for dummies" or "I don't use namespaces because they decrease the runtime speed", or the even better "Hey, who needs C++? I code in my own brand of object-oriented C!!!" (no kidding... in a professional environment, and the result was quite a sight...), you'll feel the kind of dread only the condemned feel in front of the guillotine.
So... All in all, it's an interesting experience.
Edit
After re-reading this post, I see it could be seen as overly negative. It is not.
It is sometimes interesting (as well as frustrating) to know how things work under the hood. You'll understand that, despite enormous (impossible?) constraints, the Win32 API team did wonderful work to make sure everything, from your "olde Win16 program" to your "latest Win64 over-the-top application", can work together, in the past, now, and in the future.
The question is: Do you really want to?
Because spending weeks doing things that could be done (and done better) with other, more high-level and/or object-oriented APIs can be quite demotivating (real-life experience: 3 weeks for the Win API, against 4 hours with three other languages and/or libraries).
Anyway, you'll find Raymond Chen's blog very interesting because of his insider's view of both the Win API and its evolution through the years:
https://blogs.msdn.microsoft.com/oldnewthing/
Absolutely. When nobody knows the low level, who will update and write the high level languages? Also, when you understand the low level stuff, you can write more efficient code in a higher level language, and also debug more efficiently.
The native APIs are the "real" operating system APIs. The .NET library is (with few exceptions) nothing more than a fancy wrapper around them. So yes, I'd say that anybody who can understand .NET with all its complexity, can understand relatively mundane things like talking to the API without the benefit of a middle-man.
Just try to do DLL Injection from managed code. It can't be done. You will be forced to write native code for this, for windowing tweaks, for real subclassing, and a dozen other things.
So yes: you should (must) know both.
Edit: even if you plan to use P/Invoke.
On the assumption that you're building apps targeted at Windows:
it can sure be informative to understand lower levels of the system - how they work, how your code interacts with them (even if only indirectly), and where you have additional options that aren't available in the higher-level abstractions
there are times when your code might not be as efficient, high-performance or precise enough for your requirements
However, in more and more cases, folks like us (who never learned "unmanaged coding") will be able to pull off the programming we're trying to do without "learning" Win32.
Further, there's plenty of sites that provide working samples, code fragments and even fully-functional source code that you can "leverage" (borrow, plagiarize - but check that you're complying with any re-use license or copyright!) to fill in any gaps that aren't handled by the .NET framework class libraries (or the libraries that you can download or license).
If you can pull off the feats you need without messing around in Win32, and you're doing a good job of developing well-formed, readable managed code, then I'd say mastering .NET would be a better choice than spreading yourself thin over two very different environments.
If you frequently need to leverage those features of Windows that haven't received good Framework class library coverage, then by all means, learn the skills you need.
I've personally spent far too much time worrying about the "other areas" of coding that I'm supposed to understand to produce "good programs", but there's plenty of masochists out there that think everyone's needs and desires are like their own. Misery loves company. :)
On the assumption that you're building apps for the "Web 2.0" world, or that would be just as useful/beneficial to *NIX & MacOS users:
Stick with languages and compilers that target as many cross-platform environments as possible.
Pure .NET in Visual Studio is better than Win32, obviously, but developing against the Mono libraries, perhaps using the SharpDevelop IDE, is probably an even better approach.
You could also spend your time learning Java, and those skills would transfer very well to C# programming (plus the Java code would theoretically run on any platform with a matching JRE). I've heard it said that Java is more like "write once, debug everywhere", but that's probably just as true of C# (or even more so).
Analogy: if you build cars for a living (programming), then it's very pertinent to know how the engine works (Win32).
Simple answer, YES.
This is the answer to any question of the form "Does it make sense to learn a low-level language/API X even when a higher-level language/API Y exists?"
YES
You are able to boot up your Windows PC (or any other OS) and ask this question in SO because a couple of guys in Microsoft wrote 16-bit assembly code that loads your OS.
Your browser works because someone wrote an OS kernel in C that serves all your browser's requests.
It goes all the way up to scripting languages.
Big or small, there is always a market and opportunity to write something in any level of abstraction. You just have to like it and fit in the right job.
No API/language at any level of abstraction is irrelevant unless there is a better one competing at the same level.
Another way of looking at it: a good example from one of Michael Abrash's books. A C programmer was given the task of writing a function to clear the screen. Since C was a better (higher-level) abstraction over assembly and all, the programmer only knew C and knew it well. He did his best: he moved the cursor to each location on the screen and cleared the character there. He optimized the loop and made sure it ran as fast as it could. But still it was slow... until some guy came in and said there was some BIOS/VGA instruction or something that could clear the screen instantly.
It always helps to know what you are walking on.
Yes, for a few reasons:
1) .NET wraps Win32 code. .NET is usually a superior system to code against, but having some knowledge of the underlying Win32 layer (oops, WinAPI, now that there is 64-bit code too) bolsters your knowledge of what is really happening.
2) in this economy, it is better to have some advantages over the other guy when you are looking for a job. Some WinAPI experience may provide this for you.
3) some system aspects are not available through the .NET framework yet, and if you want to access those features you will need to use P/Invoke (see http://www.pinvoke.net for some help there). Having at least a smattering of WinAPI experience will make your P/Invoke development effort a lot more efficient.
4) (added) Now that Win8 has been around for a while, it is still built on top of the WinAPI. iOS, Android, OS X, and Linux are all out there, but the WinAPI will still be around for many, many years.
Learning a new programming language or technology is for one of three reasons:
1. Need: you're starting a project for building a web application and you don't know anything about ASP.NET
2. Enthusiasm: you're very excited about ASP.NET MVC. Why not try that?
3. Free time: but who has that anyway.
The best reason to learn something new is need. If you need to do something that the .NET framework can't do (performance, for example), then WinAPI is your solution. Until then, we keep ourselves busy learning about .NET.
For most needs on the desktop you won't need to know Win32; however, there is a LOT of Win32 not in .NET, but it is in the outlying stuff that may end up being less than 1% of your application.
USB support, HID support, and Windows Media Foundation, just off the top of my head. There are many cool Vista APIs only available from Win32.
You will do yourself a large favor by learning how to do interop with a Win32 API if you do desktop programming, because when you do need to call Win32, and you will, you won't spend weeks scratching your head.
Personally I don't really like the Win32 API but there's value in learning it as the API will allow more control and efficiency using the GUI than a language like Visual Basic, and I believe that if you're going to make a living writing software you should know the API even if you don't use it directly. This is for reasons similar to the reasons it's good to learn C, like how a strcpy takes more time than copying an integer, or why you should use pointers to arrays as function parameters instead of arrays by value.
Learning C or a lower level language can definitely be useful. However, I don't see any obvious advantage in using the unmanaged WinAPI.
I've seen low level Windows API code... it ain't pretty... I wish I could unlearn it. I think it benefits to learn low level as in C, as you gain a better understanding of the hardware architecture and how all that stuff works. Learning old Windows API... I think that stuff can be left to the people at Microsoft who may need to learn it to build higher level languages and API... they built it, let them suffer with it ;-)
However, if you happen to find a situation where you feel you just can't do what you need to do in a higher level language (few and far between), then perhaps start the dangerous dive into that world.
Yes. Take a look at uTorrent, an amazing piece of software efficiency. Half of its small size is due to the fact that many of its core components were rewritten to not use gargantuan libraries.
Much of this couldn't be done without understanding how these libraries interface with the lower-level APIs.
It's important to know what is available with the Windows API. I don't think you need to crank out code with it, but you should know how it works. The .NET Framework contains a lot of functionality, but it doesn't provide managed code equivalents for the entire Windows API. Sometimes you have to get a bit closer to the metal, and knowing what's down there and how it behaves will give you a better understanding of how to use it.
This is really the same as the question, should I learn a low level language like C (or even assembler).
Coding in it is certainly slower (though of course the result is much faster), but its true advantage is that you gain an insight into what is happening at close to the system level, rather than just understanding someone else's metaphor for what is going on.
It can also be better when things won't work well, or fast enough, or with the sort of granularity that you need. (And do at least some subclassing and superclassing.)
I'll put it this way. I don't like programming to the Win32 API. It can be a pain compared to managed code. BUT, I'm glad I know it because I can write programs that otherwise I wouldn't be able to. I can write programs that other people can't. Plus it gives you more insight into what your managed code is doing behind the scenes.
The amount of value you get out of learning the Win32 API, (aside from the sorts of general insights you get from learning about how the nuts and bolts of the machine fit together) depends on what you're trying to achieve. A lot of the Win32 API has been wrapped nicely in .NET library classes, but not all of it. If for instance you're looking to do some serious audio programming, that portion of the Win32 API would be an excellent subject of study because only the most basic of operations are available from .NET classes. Last I checked even the managed DirectX DirectSound library was awful.
At the risk of shameless self-promotion....
I just came across a situation where the Win32 API was my only option. I want to have different tooltips on each item in a listbox. I wrote up how I did it on this question.
Even in very very high level languages you still make use of the API. Why? Well not every aspect of the API has been replicated by the various libraries, frameworks, etc. You need to learn the API for as long as you will need the API to accomplish what you are trying to do. (And no longer.)
Apart from some very special cases when you need direct access to APIs, I would say NO.
There is considerable time and effort required to learn to implement the native API calls correctly, and the return is just not worth it. I would rather spend the time learning some new hot technology or framework that will make your life easier and programming less painful, not decades-old, obsolete COM libraries that nobody really uses anymore (sorry, COM users).
Please don't stone me for this view. I know a lot of engineers here have really curious souls, and there is nothing wrong with learning how things work. Curiosity is good and really helps understanding. But from a managerial point of view, I would rather spend a week learning how to develop Android apps than how to call OLE or COM.
If you are planning to develop a cross-platform application and you use Win32, then your application could easily run on Linux through Wine. This results in a highly maintainable application. This is one of the advantages of learning Win32.
