Asterisk AGI framework for IVR; Adhearsion alternative? - ruby

I am trying to get started writing scalable, telecom-grade applications with Asterisk and Ruby. I had originally intended to use the Adhearsion framework for this, but it does not have the required maturity and its documentation is severely lacking. AsteriskRuby seems to be a good alternative, as it's well documented and appears to be written by Vonage.
Does anyone have experience deploying AGI-based IVR applications? What framework, if any, did you use? I'd even consider a non-Ruby one if it's justified. Thanks!

SipX is really the wrong answer. I've written some extremely complicated VoiceXML on SipX 3.10.2 and it's been all for naught, since SipX 4 is dropping SipXVXML for an interface that requires IVRs to be compiled JARs. Top that off with Nortel filing for bankruptcy, extremely poor documentation of the open-source version, poor compliance with VXML 2.0 (as of 3.10.2), and poor compliance with SIP standards (as of 3.10.2, it does not trunk well with ITSPs). I will applaud it for doing a bang-up job at what it was designed to do: be a PBX. But as an IVR, if I had it to do all over again, I'd do something different. I don't know what for sure, but something different. I'm toying with Trixbox CE now and working on tying it into JVoiceXML or VoiceGlue.
Also, don't read that SipX wiki crap. It compares SipX 3.10 to AsteriskNOW 1 to Trixbox 1. Come on. It's like comparing Mac OS X to Win95! A more realistic comparison would be SipX 4 (due out 1Q 2009) to Asterisk 1.6 and Trixbox 2.6, which would show that they accomplish nearly identical results except in the arena of scalability and high availability; SipX wins at that. But for maturity and stability, I'd advocate Asterisk.
Also, my real world performance results with SipXVXML:
Dell PowerEdge R200, Xeon dual-core 3.2 GHz: handles 17 calls before jitter sets in.
HP DL380 G4, dual Xeon HT 3.2 GHz: handles 30 calls before long pauses.
I'll post my findings when I finish evaluating VoiceGlue and JVoiceXML, but I think I'm going to end up writing a custom PHP app called from AGI, since all the tools are native to Asterisk.

You should revisit Adhearsion as v0.8.1 is out, and the documentation has gotten much better quite recently. Have a look here:
http://adhearsion.com
http://docs.adhearsion.com
http://api.adhearsion.com

If you're looking for "telecom-grade" applications, you may want to look into SipXecs instead of Asterisk. It's featureful, free, and open source, with commercial support available from Nortel. You can interact with it via a Web Services API in Ruby (or any other language).
See the SipXecs wiki for more information. There's a comparison matrix on that site, comparing features with AsteriskNOW and TrixBox.

There really aren't any other frameworks out there. There's of course AGI bindings to every language, but as far as full-fledged frameworks for developing telephony applications, we're just not there yet. At least in the open-source world.

I have asked somewhat related questions here, here, and here. I'm using Microsoft's Speech Server, and I'm very interested to learn about any alternatives that are out there, especially open source ones. You might find some good info in the answers to one of those questions.

I used JAGIServer extensively, even though it's not under development anymore, and it's pretty good and easy to use. It's an interface for FastAGI, which I recommend you use instead of simple AGI.
The new version of this framework is OrderlyCalls, which seems to have a lot more features; but since I haven't needed them, I haven't tried it.
I guess it all depends on what you want to do with AGI. Usually I have a somewhat complex dialplan to gather and validate all user input, and then just use AGI to connect to a Java application which reads some variables, does some stuff with them (performs operations, queries, etc.), then sets some more variables on the AGI channel and disconnects. At that point, the dialplan continues depending on the variables set by the Java app.
This works really fast because you have a ServerSocket in the Java app, which receives incoming connections from AGI, creates a JAGIClient with the new socket and a new instance of a JAGIProcessor (which you have to write; it's the object that will do all your processing), and then runs the JAGIClient inside a thread pool.
Your JAGIProcessor implements the processCall method, where it does all the work it needs, interacting with the JAGIClient passed as a parameter to read and set variables or do whatever else the AGI interface allows.
So you have a Java app running all the time, and it can be a simple J2SE app or an EE app in a container, it doesn't matter; once it's running, it will process AGI requests really fast, since no new processes have to be started (in contrast to simple AGI, which runs a program for every AGI call).
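To make the wire protocol concrete, here is a bare-bones FastAGI responder sketched in Python rather than Java (purely illustrative: the port is the FastAGI convention, but the variable name and "work" are my assumptions; a real deployment would use JAGIServer/OrderlyCalls or a comparable AGI library):

    import socketserver

    class AGIHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # Asterisk connects and sends a block of "agi_name: value"
            # headers, terminated by an empty line.
            env = {}
            while True:
                line = self.rfile.readline().decode("ascii").strip()
                if not line:
                    break
                key, _, value = line.partition(": ")
                env[key] = value

            # Do the real work here (queries, business rules, ...) based on
            # the variables gathered by the dialplan, then hand a result
            # back to the channel for the dialplan to branch on.
            self.wfile.write(b'SET VARIABLE RESULT "ok"\n')
            self.rfile.readline()  # consume Asterisk's "200 result=1" reply

    # FastAGI conventionally listens on TCP port 4573; the dialplan side
    # calls something like AGI(agi://127.0.0.1/myapp). One thread per call,
    # no process spawned per AGI request.
    server = socketserver.ThreadingTCPServer(("0.0.0.0", 4573), AGIHandler)
    server.serve_forever()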

Smee again. After migrating my client's IVRs over from SipX to Asterisk using PHPAGI, I must say that I haven't encountered any other architecture that is anywhere near as simple and capable. I'll be stress testing Trixbox CE 2.8 today on the same hardware I tested SipX on earlier. But I must say, using PHPAGI for the IVR and the Asterisk CLI for debugging has worked perfectly and allowed me to develop IVRs far faster than any other company out there. I'm working on implementing TTS and ASR today and I'll post my stress test results when I can.

A simple, small, flexible Asterisk AGI IVR written in PHP:
http://freshmeat.net/projects/phpivr

For small and easy applications I use Asterisk::AGI in Perl. There are also extensions for FastAGI. For bigger applications, like VoIP operators' backends, I use something similar to OrderlyCalls written in Java (my own code). OrderlyCalls is great to start with, though: take the Java FastAGI engine and extend it to your needs.

Using ZeroMQ for cross platform development?

We have a large console application in Haskell that I have been charged with making cross platform and adding a gui.
The requirements are:
Native-as-possible look and feel.
Clients for Windows and Mac OS X, Linux if possible.
No separate runtime to install.
No required network communication. The Haskell code deals with very sensitive information that cannot be transmitted over the wire. This is really the only reason this isn't a web application.
Now, the real reason for this question is to explain one solution I'm researching at the moment and to solicit reasons I'm not thinking of that would make this a bad idea.
My solution is a native GUI (WinForms on Windows, Cocoa on Mac OS X, and GTK/Glade on Linux) that simply handles the presentation. Then I would write a layer on top of the Haskell code that turns it into a responder for messages to and from the UI, using ZeroMQ to handle the messages and maybe protobufs for serializing the data back and forth. So the native application would start, which would itself start the daemon where all of the magic happens, and the two would send messages back and forth.
Aside from making sure that the daemon only accepts connections from the application that started it, and the challenge of providing the right data back and forth for advanced GUI elements (I'm thinking table views, cells, etc.), I don't see many downsides to this.
What am I not thinking about that makes this a bad idea?
I should probably mention that at first glance I was going to go with GTK on all platforms. The problem is that, while it's close, and GTK and Glade support for Haskell is nice to work with, the result doesn't look 'right'. It's close, but just not native enough in subtle ways which make that solution unacceptable to the people who happen to be writing the check for this work.
Also, the issue of multiple platforms and thus multiple languages for the GUI isn't a problem, so I'm not necessarily looking for other ways to solve that problem unless it simplifies something about the interop with the Haskell code.
Then I would write a layer on top of the Haskell code that turns it into a responder for messages to and from the UI using ZeroMQ to handle the messages and maybe protobufs for serializing the data back and forth.
I think this is reasonable (a client/server model, where the client just happens to be a native look-n-feel desktop app). (I have no strong view about protobufs versus e.g. JSON or Thrift.)
The Haskell zeromq bindings are getting some use now, too.
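For what it's worth, the daemon side of that request/reply loop is tiny in any binding. Here it is sketched with pyzmq for illustration only (the endpoint, framing, and do_work are my assumptions; the Haskell bindings expose the same REQ/REP socket types):

    import zmq

    def do_work(request: bytes) -> bytes:
        # Hypothetical hook into the core logic; protobuf/JSON decoding of
        # the UI's message would happen here.
        return b"ack: " + request

    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    # Bind to loopback only, since the requirement is "no network
    # communication"; on Unix an ipc:// endpoint would tighten this further.
    sock.bind("tcp://127.0.0.1:5555")

    while True:
        request = sock.recv()        # blocks until the GUI sends a request
        sock.send(do_work(request))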
What am I not thinking about that makes this a bad idea?
How well tested is zeromq on Windows and Mac? It is probably fine, but it's something I'd check.
The problem is that, while it's close, and GTK and Glade support for Haskell is nice to work with, the result doesn't look 'right'.
Does the integration package help there?
Here's an interesting possibility: wai-handler-webkit. It essentially packages up QtWebkit with the Warp web server to make your web apps deployable. It hasn't seen intensive use, has never been tested on Mac, and is tricky to compile on Windows, but it's a fairly straightforward approach that lets you use the rich web ecosystem developing in Haskell.
I'm likely going to be doing more development on it in the near future, so if you have interest in using it, let me know what extra features would be useful, as well as if you could offer any help on the Mac front in particular. I'm also not convinced that we need to stick with QtWebkit on all platforms: it might make more sense to use a different Webkit backend depending on OS, or maybe even using Gecko or (shudder) Trident instead.
I've had some problems getting zeromq to play nice with Haskell on OS X (problems with it looking for a .dylib as opposed to a .o, I think). Protocol Buffers and Haskell seem to work fine together, though.
So your reason not to use a web application is the sensitive nature of the Haskell program's output. And THAT's why you are distributing that same sensitive application, which spews out unencrypted data, to ALL client machines? That does not make any sense.
If your application is sensitive, you DEFINITELY should put it on a server and use the strongest possible TLS.

Learning a different platform-independent web framework, or relying on ASP.NET on Mono for the near future?

I'm somewhat familiar with ASP.NET MVC and the .NET framework in general (I use it at work).
I've been thinking about starting a personal project (a website). I, however, don't want to be tied to a specific platform (it bothers me A LOT).
This led me to looking into Mono. For what I've seen, though, Mono trails behind Microsoft's .NET in some aspects that are crucial to me (or that I would really like to have available). Some of these are:
LINQ to SQL. The Mono team has only just now (Mono 2.6) released support for LINQ to SQL, with help from the DBLinq project. The problem is that DBLinq's main test platforms are MS SQL Server and SQLite. It seems to me that PostgreSQL and MySQL are a bit neglected. Also, LINQ to SQL has only just been implemented, which makes me wonder when it will become mature.
Hosting of Mono on Linux. Very few options are available.
Also, I want to be prepared to handle heavy-duty processing on the server (this is a main issue), and Twitter's experience makes me drift away from Ruby on Rails, but if you can prove RoR scales well (please show benchmarks/facts, not opinions) I'd be willing to try it.
Should I take my time learning a different web framework, or should I rely on Mono's advances and hosting options for the near future (1-2 years) on platforms other than Windows/SQL Server?
In terms of scalability, I tend to think that C# has an inherent scalability advantage (strongly typed, and compiled to bytecode instead of parsed/interpreted). Am I wrong to think that?
Are there ways to work with other frameworks such that the code won't be hosted on the server? (I'd accept Python/Ruby/any VMs and others.)
This is an old question so I may be answering more for others than for the original poster:
Hosting
If you write in Mono, you can host on Windows, Mac, or Linux (or Solaris, FreeBSD, and others less dependably). If you are going to host on Windows though, why not just run your Mono app on the real .NET?
Why do you care if it is hosted on Mono if you are not hosting it yourself? You can certainly write an application on Mono using Windows, Linux, or Mac and then host it on a Windows/.NET host if those are the cheapest and easiest to find. Just think of .NET as the MS implementation of Mono.
I had the opposite problem originally. I wanted to host on Linux even though my employer provided a Windows dev environment. I developed in .NET and hosted on Mono/Linux. Mono worked excellently for me in this way.
My current employer is Mac crazy. I just deployed an ASP.NET MVC2 app to our Mac server yesterday. I wrote the whole thing on my MacBook Pro without touching Windows once.
My favourite host for running .NET/Mono on Linux is Linode. The cheapest plan is $20/month, but I can host as many apps on the same server as I want. The performance is excellent, so anything that is going to run well on a $5/month host is going to run just fine as one of four apps on a Linode instance, that is for sure.
Compatibility with .NET
I find it is best to think of Mono as a platform itself rather than as a compatibility solution for your Microsoft apps. Mono supports almost the entire .NET framework. I love this because it is a great framework but I do not really care that it is MS compatible most days.
No offence, but I do not understand at all your implication of "Mono does not support LINQ-to-SQL to my satisfaction, so I am considering Ruby on Rails". Mono supports LINQ-to-SQL a lot better than Ruby does, I will tell you that. You could say that you are sticking with Windows only because you really need LINQ-to-SQL, though, I suppose. What is more important to you, "cross platform" or "LINQ-to-SQL"?
Mono gives you many choices for data access. If you want an ORM (Object-Relational Mapper) like Rails offers, then you could go for something like NHibernate, Subsonic, or Castle ActiveRecord, for example. With Mono 2.10 you can even use WebMatrix Data. Of course, you could also use good ol' ADO.NET, which is what all this stuff is built on top of anyway.
Oh, and let's not lose sight of the fact that Mono does support LINQ-to-SQL. I have only ever used it with SQLite, where it worked fine. I agree that it has lagged .NET, though. You are probably worried about Entity Framework support now. See my comments above.
To my mind, the question is how does Mono data access compare to Rails data access. My answer: Rails is a bit better integrated (simpler) and Mono is much more powerful and flexible.
Heavy-duty processing
This is where .NET and Mono are really going to shine.
You are correct to think that compiled bytecode is going to be much faster than interpreted code and that static languages like C# will be faster than dynamic languages like Ruby. Of course, everything is implementation dependent in the end.
I also agree that a static language like C# aids scalability in other ways. This is really a matter of personal opinion though. There are certainly people that think that authoring and maintaining a massive solution in a dynamic language is feasible. I do not see many people doing that of course. There is a reason that .NET and Java are the enterprise standards.
Bottom line
Should you learn another web framework? Well, I think you should. It is good for the mind.
Is that other web framework a superior choice to Mono or .NET? Well, it depends on the need of course. I think that Rails folks probably pump out the sites a little faster than the .NET crowd in general. The gap has really closed with ASP.NET MVC2+ though and I would much, much rather maintain and scale a .NET solution than a Rails one. Also, I like C# just fine so I do not find Ruby itself so intrinsically satisfying that I just have to use it. That is just me of course.
Also, just me, but I find Mono to be an excellent cross platform web development framework. I choose it everyday over other solutions. I also find that Mono fits into the majority of the .NET ecosystem (especially the Open Source universe) just fine. Again though, if what you really want is to use the very latest and greatest MS stuff and are hoping that Mono will allow you to run that on Linux or Mac sometimes then you may be disappointed.
If Windows Presentation Foundation (WPF), Entity Framework, or to a lesser extent LINQ-to-SQL are the most important part of your application strategy then Mono is not for you.
If you want a platform that gives you all the great advantages of .NET and runs pretty much everywhere you need it to then Mono is pretty damn hard to beat.

BOINC: Is there an easy example of how to code a program for it and how to integrate it into their client/server system?

I developed a numerical method for my diploma thesis and coded it in Java. It needs a lot of computation time when adequately executed. So I looked for an alternative and found BOINC. Unfortunately I didn't have time to port my method to BOINC, because I'm an aerospace student and not a programmer, and I decided to keep my priority on my Java program. Now it's finished, and I would still like to port it to the BOINC environment.
Unfortunately I learn by re-doing examples, and I couldn't find any, either on the official site http://boinc.berkeley.edu or elsewhere on the internet.
So do you know a good and easy example, or do you have any experience with BOINC and would you like to start a new platform for such a BOINC project?
I'm realistic about my method: it wouldn't run 24/7, because there aren't as many work units as for the SETI or Folding projects. So I would like to have a platform that hosts more than just my project, so that other projects can be worked on when one part of the platform has no work units at the moment.
But to start, I would keep it simple and just want to know how to code for it and how to use it in the client and server system. It doesn't matter what the example project works on, as long as it is simple enough that I can understand it and extend it for my method.
Thank you in advance, Andreas! :)
PS: I know that BOINC supports Java as a programming language, and my method is coded in Java.
As far as I know, JavaApps is just an idea; I don't know if anyone has actually tried it in a real BOINC project. And it's Windows-only. And it seems to be a bit of a pain to redistribute the entire JRE as part of the BOINC application (both technically and legally).
Also, I generally dislike using that kind of "wrapper" where the science app (using the BOINC API) starts another process that then does the real computation. It's usually unreliable. There are lots of things that could go wrong with the wrapper, especially related to controlling the child process (e.g. if something kills the wrapper, the child process has to quit too).
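To illustrate the kind of bookkeeping that goes wrong, here is the bare shape of such a wrapper, sketched in Python (everything here is an assumption for illustration; a real BOINC wrapper is native code that also calls the BOINC API for init, checkpointing, and progress reporting):

    import atexit
    import subprocess
    import sys

    # Launch the real computation as a child process (hypothetical command).
    child = subprocess.Popen(["java", "-jar", "worker.jar"] + sys.argv[1:])

    # Try to guarantee the child dies when the wrapper exits. Note the gap
    # this leaves: a SIGKILL to the wrapper skips atexit entirely, which is
    # exactly the sort of unreliability complained about above.
    atexit.register(child.kill)

    # Propagate the child's exit status so the client sees success/failure.
    sys.exit(child.wait())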
However, I just found something pretty interesting that may let me do a better Java wrapper for BOINC... Stay tuned! (but don't hold your breath either; it's the holidays!)
Meanwhile, I suggest you start by reading BOINC wiki and setting up a server with a “hello world” application; and if you have any trouble, ask a specific question about your trouble either here or in the boinc_projects mailing list.
(Of course, payin’ me to install the server for you is also an option ;) but I can't guarantee anything; not even my mere availability at this time of the year)

What successful conversion/rewrite of software have you done? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
What successful conversion/rewrite have you done of software you were involved with? What were the languages and frameworks involved in the process? How large was the software in question? Finally, what are the top one or two things you learned from being involved with the process?
This is related to this question
I'm going for "most abstruse" here:
Ported an 8080 simulator written in FORTRAN 77 from a DECSystem-10 running TOPS-10 to an IBM 4381 mainframe running VM/CMS.
I rewrote 20,000 lines of Perl to use "use strict" in every file. I had to add "my" everywhere it was needed and I had to fix the bugs that were uncovered during the process.
The biggest thing I learned from doing this is, "It always takes longer than you think."
I had to get it done all at once overnight so that the other coders would not be writing new, unfixed code at the same time. I thought it would go quickly, but it didn't, and I was still hacking on it at 6 AM the next morning.
I did get it complete and checked in before everyone else started work though!
I rewrote a large Java web application as an ASP.NET application for a realty company, for various reasons.
The biggest thing I learned is that, no matter how trivial the feature the original system had, if it's not in the second system, the client thinks the rewrite is a failure. Expectation management is everything when writing the new system.
This is the biggest reason rewrites are so hard: it seems so easy to the client ("Just re-do what I already have and add a few things.").
The coolest one for me, I think, was the port of MAME to the iPod. It was a great learning experience with embedded hardware, and I got to work with a lot of great people. Official site.
I am doing a rewrite of an in-house project management system to a more standard MVC model. It's on the LAMP stack (PHP) and I am close to the 1st milestone.
The main thing I have learned so far is how simple the program feels at the beginning, and to not add complexity until I have to.
For example, I programmed all the functionality first (as if I were an admin user), and then, once that was sorted out, added the complexity of having restrictions (user levels, etc.).
I ported/redesigned/rewrote a 30,000-line MS-DOS C++ program into a similar-length but much more fully-featured and usable Java Swing program.
I learned never to take another job involving C++ or Java.
I ported a client-server PowerBuilder app, a couple of hundred screens' worth, to an ASP.NET app (C#).
Due to performance and maintainability issues, I had over the previous year moved a ton of embedded SQL out of PowerBuilder scripts and into stored procedures.
Although this would make a lot of you wince (having a lot of business logic in the database), it meant the PowerBuilder app was relatively "light", and when we built the .NET front end, it could take advantage of the SQL codebase and have a lot of functionality already built and tested.
Not saying I'd recommend building apps that way, but it certainly worked to our advantage in this instance.
We had a code generation tool in our application framework that was used to read in text-based data files. About 20 other applications made use of it.
We wanted to use XML data files instead of structured text-based files. The original code was quite outdated and difficult to maintain. We replaced this tool with a combination of XSLT scripts and a utility library. For the utility library we could reuse some code from the old tool.
The result was that all 20 applications could now use either the obsolete text-based file format or the new XML-based format. We also delivered a conversion tool that converted old data files to new XML data files.
After bringing out one or two releases, we decided that we would no longer support the old text-based format; everybody was able to convert their data to XML, and we hardly had to do any manual conversions.
Converted the main company app from pre-standard C++ to standard C++. We had a multimillion-dollar sale contingent on making it work on AIX, and after looking at it we decided that converting to standard C++ was going to be just as easy as converting to IBM's traditional C++.
I don't know the line count, but the source code ran to hundreds of megabytes.
We used standard Unix tools to do this, including vi and the assorted compilers.
It took a few months. Most of the fixes were simple ones, caught by the compiler and almost mechanically fixed. Some of them were much more complicated.
I think my main takeaway was: Don't get too awfully clever with code in a language that hasn't been standardized yet, or is likely to have things change in unexpected ways. We had to do a lot of digging in some of the ingenious adaptations/abuses of C++ streams.
Ten years ago I managed a team that converted a CAD system from DOS to Windows. The DOS version used home-brew libraries for graphics drawing; the Windows version used MFC. The software was about 70,000 lines of C code at the time of the conversion. The most important thing we learned in the process is the power of abstraction. All device-specific non-portable routines were isolated in a few files. It was therefore relatively easy to substitute Windows API calls for the calls to the DOS-based library, which drew by directly accessing the frame buffer. Similarly, for input we just substituted the event loop that checked for keyboard and mouse events with the corresponding Windows event loop. We continued our policy of isolating the non-portable (this time Windows) code from the rest of the system, but we have not yet found this particularly useful. Perhaps one day we will port the system to Mac OS X and be thankful again.
Several. But I mention one.
It was a performance modeling tool. Part Delphi 1, part Turbo Pascal. It needed a rewrite or it was not going to survive. So we started as a team of 2, but only I survived to the end. And I was ready before the deadline ;-).
Several things we did:
Made it multi-model. The original had lots of globals; I removed them all, and multi-model support was then easy to add.
Extended error messages. Click on a message and get the help.
Lots of graphs and diagrams, all clickable to drill down.
Simulation. Change parameters over time and see how long the current configuration would remain sufficient.
We really made this one clean and it paid back heavily in the end. Such a big learning experience.
Re-wrote a system for a company that processes legal invoices - the original system was a VB monstrosity that had no idea of good OO principles - everything was mixed together. The HTML did SQL, and the SQL wrote HTML. A large part of it was a custom rules engine that used something like XML for the rules.
Two teams did the re-write, which took about 9 months. One team did the web front end and the backend workflow, while the other team (that I was on) re-wrote the rules engine. The new system was written in C#, and was done test-first. Adding new rules to the system when we were done was dirt simple, and it was all testable. Along the way we did things like convert the company from VSS to SVN, implement continuous integration, automate the deployment, and teach the other developers how to do TDD and other Scrum/XP practices.
Managing expectations was crucial through the project. Having a customer that was savvy about software was very helpful.
Having a mix of large scale (end-to-end) tests along with comprehensive unit and integration tests helped tons.
Converted vBulletin, which is written in PHP, to C#/ASP.NET. I'm pretty familiar with both languages, but PHP is hands down the winner for building that software. The biggest pain in the rear was needing to do a C# equivalent of PHP's eval() for calling the templates.
It was my first challenge in trying to do a conversion. I learned that I need more experience with C# and that writing it from scratch is just the easier route sometimes.
I converted a dynamic build process written entirely in Perl to a C#/.NET solution using a workflow engine a co-worker had developed (which was still in beta, so I had to do some refinements). That gave me the opportunity to add fail-safe and fail-over functionality to the build process.
Before you ask: no, Microsoft Workflow Foundation could not be used, since you cannot dynamically change a process during its runtime.
What I learned:
to hate the Perl developer
process optimization using a wf-engine
fail-safe and fail-over strategies
some C# tweaks ;)
In the end it came to about 5k-6k LoC (including the wf-engine), replacing the original 3,200 LoC of Perl files. But it was fun, and far better in the end ;)
Converting theoretically portable C code into theoretically portable C code across architectures to support a hardware change that saves the company X dollars per unit.
The size varies - this is a common need, and I've done small and large projects.
I learned to write more portable C code. Elegance is great, but when it comes right down to it the compiler takes care of performance, and the code should be as simple and portable as possible.
Ported a simulation written in Fortran 77 (despite being written in the 90s) to C/Java, because the original only worked on small data sets. I learned to love big-O notation after explaining several times why just moving the entire data table into memory at the start of the program was not going to scale.
Migrating the B-2 Stealth Bomber mission software from JOVIAL to C. 100% fully automated conversion. Seriously!
Main lesson: using configurable automated conversion tools is a huge win.
See DMS Software Reengineering Toolkit.

Does it still make sense to learn low level WinAPI programming? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
Does it make sense, having all of the C#-managed-bliss, to go back to Petzold's Programming Windows and try to produce code w/ pure WinAPI?
What can be learned from it? Isn't it just too outdated to be useful?
This question is bordering on religious :) But I'll give my thoughts anyway.
I do see value in learning the Win32 API. Most, if not all, GUI libraries (managed or unmanaged) result in calls to the Win32 API. Even the most thorough libraries don't cover 100% of the API, and hence there are always gaps which need to be plugged by direct API calls or P/Invoking. Some of the names of the wrappers around the API calls are similar to the names of the underlying API calls, but those names aren't exactly self-documenting. So understanding the underlying API, and the terminology used therein, will aid in understanding the wrapper APIs and what they actually do.
Plus, if you understand the nature of the underlying APIs that are used by frameworks, then you will make better choices with regards to which library functionality you should use in a given scenario.
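To give a flavor of such a direct call, here's the same idea from Python's ctypes, which plays the role P/Invoke plays in .NET (a minimal, Windows-only sketch; the constants are just the documented Win32 values):

    import ctypes

    user32 = ctypes.WinDLL("user32", use_last_error=True)

    MB_OK = 0x00000000
    MB_ICONINFORMATION = 0x00000040

    # MessageBoxW(HWND hWnd, LPCWSTR lpText, LPCWSTR lpCaption, UINT uType)
    user32.MessageBoxW(None, "Called straight into the Win32 API",
                       "No wrapper involved", MB_OK | MB_ICONINFORMATION)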
Cheers!
I kept to standard C/C++ for years before learning the Win32 API, and to be quite blunt, the "learning the Win32 API" part is not the best technical experience of my life.
On one hand, the Win32 API is quite cool. It's like an extension of the C standard API (who needs fopen when you can have CreateFile?). But I guess UNIX/Linux/WhateverOS have the same gizmo functions. Anyway, in Unix/Linux, they have "everything is a file". In Windows, they have "everything is a... window" (no kidding! See CreateWindow!).
On the other hand, this is a legacy API. You will be dealing with raw C, and raw C madness.
Like having to tell a structure its own size before passing it through a void* pointer to some Win32 function (there's a sketch of this below).
Messaging can be quite confusing, too: mixing C++ objects with Win32 windows leads to very interesting chicken-and-egg problems (funny moments when you write a kind of delete this; in a class method).
Having to subclass a WndProc when you're more familiar with object inheritance is head-splitting and less than optimal.
And of course, there is the joy of the "why in this fracking world did they do it this way??" moments, when you strike your keyboard with your head once too often and get back home with keys engraved in your forehead, just because someone thought it more logical to write an API that changes the color of a "window" not by changing one of its properties, but by asking its parent window to do it.
etc.
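Here is that size-field idiom made concrete, sketched with Python's ctypes so the structure layout is explicit (FLASHWINFO and FlashWindowEx are real Win32; the flag values are the documented constants, and flashing the foreground window is just for illustration):

    import ctypes
    from ctypes import wintypes

    user32 = ctypes.WinDLL("user32")
    user32.GetForegroundWindow.restype = wintypes.HWND  # avoid truncation on 64-bit

    class FLASHWINFO(ctypes.Structure):
        _fields_ = [("cbSize", wintypes.UINT),
                    ("hwnd", wintypes.HWND),
                    ("dwFlags", wintypes.DWORD),
                    ("uCount", wintypes.UINT),
                    ("dwTimeout", wintypes.DWORD)]

    FLASHW_ALL = 0x00000003  # flash both caption and taskbar button

    info = FLASHWINFO()
    info.cbSize = ctypes.sizeof(FLASHWINFO)  # the struct must announce its own size
    info.hwnd = user32.GetForegroundWindow()
    info.dwFlags = FLASHW_ALL
    info.uCount = 5       # flash five times
    info.dwTimeout = 0    # 0 = default blink rate
    user32.FlashWindowEx(ctypes.byref(info))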
On the last hand (three hands???), consider that some people working with legacy APIs are themselves using legacy code styling. The moment you hear "const is for dummies" or "I don't use namespaces because they decrease the runtime speed", or the even better "Hey, who needs C++? I code in my own brand of object-oriented C!!!" (no kidding... in a professional environment, and the result was quite a sight...), you'll feel the kind of dread only the condemned feel in front of the guillotine.
So... All in all, it's an interesting experience.
Edit
After re-reading this post, I see it could be seen as overly negative. It is not.
It is sometimes interesting (as well as frustrating) to know how things work under the hood. You'll understand that, despite enormous (impossible?) constraints, the Win32 API team did wonderful work to be sure everything, from your "olde Win16 program" to your "latest Win64 over-the-top application", could work together, in the past, now, and in the future.
The question is: Do you really want to?
Because spending weeks to do things that could be done (and done better) with another, more high-level and/or object-oriented API can be quite demotivating (real-life experience: 3 weeks with the Win API, versus 4 hours in three other languages and/or libraries).
Anyway, you'll find Raymond Chen's Blog very interesting because of his insider's view on both Win API and its evolution through the years:
https://blogs.msdn.microsoft.com/oldnewthing/
Absolutely. When nobody knows the low level, who will update and write the high level languages? Also, when you understand the low level stuff, you can write more efficient code in a higher level language, and also debug more efficiently.
The native APIs are the "real" operating system APIs. The .NET library is (with few exceptions) nothing more than a fancy wrapper around them. So yes, I'd say that anybody who can understand .NET with all its complexity, can understand relatively mundane things like talking to the API without the benefit of a middle-man.
Just try to do DLL Injection from managed code. It can't be done. You will be forced to write native code for this, for windowing tweaks, for real subclassing, and a dozen other things.
So yes: you should (must) know both.
Edit: even if you plan to use P/Invoke.
On the assumption that you're building apps targeted at Windows:
it can sure be informative to understand lower levels of the system - how they work, how your code interacts with them (even if only indirectly), and where you have additional options that aren't available in the higher-level abstractions
there are times when your code might not be efficient, high-performance, or precise enough for your requirements
However, in more and more cases, folks like us (who never learned "unmanaged coding") will be able to pull off the programming we're trying to do without "learning" Win32.
Further, there are plenty of sites that provide working samples, code fragments, and even fully-functional source code that you can "leverage" (borrow, plagiarize, but check that you're complying with any re-use license or copyright!) to fill in any gaps that aren't handled by the .NET framework class libraries (or the libraries that you can download or license).
If you can pull off the feats you need without messing around in Win32, and you're doing a good job of developing well-formed, readable managed code, then I'd say mastering .NET would be a better choice than spreading yourself thin over two very different environments.
If you frequently need to leverage those features of Windows that haven't received good Framework class library coverage, then by all means, learn the skills you need.
I've personally spent far too much time worrying about the "other areas" of coding that I'm supposed to understand to produce "good programs", but there's plenty of masochists out there that think everyone's needs and desires are like their own. Misery loves company. :)
On the assumption that you're building apps for the "Web 2.0" world, or that would be just as useful/beneficial to *NIX & MacOS users:
Stick with languages and compilers that target as many cross-platform environments as possible.
pure .NET in Visual Studio is better than Win32, obviously, but developing against the Mono libraries, perhaps using the SharpDevelop IDE, is probably an even better approach.
you could also spend your time learning Java, and those skills would transfer very well to C# programming (plus the Java code would theoretically run on any platform with a matching JRE). I've heard it said that Java is more like "write once, debug everywhere", but that's probably just as true (or even more so) of C#.
Analogy: If you build cars for a living (programming), then it's very pertinent to know how the engine works (Win32).
Simple answer, YES.
This is the answer to any question of the form "does it make sense to learn a low-level language/API X even when a higher-level language/API Y is there?"
YES
You are able to boot up your Windows PC (or any other OS) and ask this question on SO because a couple of guys at Microsoft wrote 16-bit assembly code that loads your OS.
Your browser works because someone wrote an OS kernel in C that serves all your browser's requests.
It goes all the way up to scripting languages.
Big or small, there is always a market and opportunity to write something in any level of abstraction. You just have to like it and fit in the right job.
No API/language at any level of abstraction is irrelevant unless there is a better one competing at the same level.
Another way of looking at it: a good example from one of Michael Abrash's books: A C programmer was given the task of writing a function to clear the screen. Since C was a better (higher-level) abstraction over assembly and all, the programmer only knew C, and knew it well. He did his best: he moved the cursor to each location on the screen and cleared the character there. He optimized the loop and made sure it ran as fast as it could. But still it was slow... until some guy came along and said there was a BIOS/VGA instruction or something that could clear the screen instantly.
It always helps to know what you are walking on.
Yes, for a few reasons:
1) .NET wraps Win32 code. .NET is usually a superior system to code against, but having some knowledge of the underlying Win32 layer (oops, WinAPI now that there is 64-bit code too) bolsters your knowledge of what is really happening.
2) in this economy, it is better to have some advantages over the other guy when you are looking for a job. Some WinAPI experience may provide this for you.
3) some system aspects are not available through the .NET framework yet, and if you want to access those features you will need to use P/Invoke (see http://www.pinvoke.net for some help there). Having at least a smattering of WinAPI experience will make your P/Invoke development effort a lot more efficient.
4) (added) Now that Win8 has been around for a while, it is still built on top of the WinAPI. iOS, Android, OS X, and Linux are all out there, but the WinAPI will still be out there for many, many years.
Learning a new programming language or technology is for one of three reasons:
1. Need: you're starting a project for building a web application and you don't know anything about ASP.NET
2. Enthusiasm: you're very excited about ASP.NET MVC. why not try that?
3. Free time: but who has that anyway.
The best reason to learn something new is need. If you need to do something that the .NET framework can't do (performance, for example), then WinAPI is your solution. Until then, we keep ourselves busy learning .NET.
For most needs on the desktop you won't need to know Win32; however, there is a LOT of Win32 not in .NET, but it is in the outlying stuff that may end up being less than 1% of your application.
USB support, HID support, and Windows Media Foundation, just off the top of my head. There are many cool Vista APIs only available from Win32.
You will do yourself a large favor by learning how to do interop with the Win32 API, if you do desktop programming, because when you do need to call Win32, and you will, you won't spend weeks scratching your head.
Personally I don't really like the Win32 API, but there's value in learning it, as the API allows more control and efficiency in the GUI than a language like Visual Basic, and I believe that if you're going to make a living writing software, you should know the API even if you don't use it directly. This is for reasons similar to why it's good to learn C, like how strcpy takes more time than copying an integer, or why you should use pointers to arrays as function parameters instead of passing arrays by value.
Learning C or a lower-level language can definitely be useful. However, I don't see any obvious advantage in using the unmanaged WinAPI.
I've seen low-level Windows API code... it ain't pretty... I wish I could unlearn it. I think it benefits you to learn low-level as in C, as you gain a better understanding of the hardware architecture and how all that stuff works. Learning the old Windows API... I think that stuff can be left to the people at Microsoft who may need to learn it to build higher-level languages and APIs... they built it, let them suffer with it ;-)
However, if you happen to find a situation where you feel you just can't do what you need to do in a higher level language (few and far between), then perhaps start the dangerous dive into that world.
Yes. Take a look at uTorrent, an amazing piece of software efficiency. Half of its small size is due to the fact that many of its core components were re-written to not use gargantuan libraries.
Much of this couldn't be done without understanding how those libraries interface with the lower-level APIs.
It's important to know what is available with the Windows API. I don't think you need to crank out code with it, but you should know how it works. The .NET Framework contains a lot of functionality, but it doesn't provide managed code equivalents for the entire Windows API. Sometimes you have to get a bit closer to the metal, and knowing what's down there and how it behaves will give you a better understanding of how to use it.
This is really the same as asking whether you should learn a low-level language like C (or even assembler).
Coding in it is certainly slower (though of course the result is much faster), but its true advantage is that you gain insight into what is happening close to the system level, rather than just understanding someone else's metaphor for what is going on.
It can also help when things won't work well, or fast enough, or with the sort of granularity that you need. (And do at least some subclassing and superclassing.)
I'll put it this way. I don't like programming to the Win32 API. It can be a pain compared to managed code. BUT, I'm glad I know it because I can write programs that otherwise I wouldn't be able to. I can write programs that other people can't. Plus it gives you more insight into what your managed code is doing behind the scenes.
The amount of value you get out of learning the Win32 API, (aside from the sorts of general insights you get from learning about how the nuts and bolts of the machine fit together) depends on what you're trying to achieve. A lot of the Win32 API has been wrapped nicely in .NET library classes, but not all of it. If for instance you're looking to do some serious audio programming, that portion of the Win32 API would be an excellent subject of study because only the most basic of operations are available from .NET classes. Last I checked even the managed DirectX DirectSound library was awful.
At the risk of shameless self-promotion....
I just came across a situation where the Win32 API was my only option. I want to have different tooltips on each item in a listbox. I wrote up how I did it on this question.
Even in very very high level languages you still make use of the API. Why? Well not every aspect of the API has been replicated by the various libraries, frameworks, etc. You need to learn the API for as long as you will need the API to accomplish what you are trying to do. (And no longer.)
Apart from some very special cases when you need direct access to APIs, I would say NO.
There is considerable time and effort required to learn to implement the native API calls correctly, and the return is just not worth it. I would rather spend the time learning some new hot technology or framework that will make your life easier and programming less painful. Not decades-old obsolete COM libraries that nobody really uses anymore (sorry, COM users).
Please don't stone me for this view. I know a lot of engineers here have really curious souls, and there is nothing wrong with learning how things work. Curiosity is good and really helps understanding. But from a managerial point of view, I would rather spend a week learning how to develop Android apps than how to call OLE or COM.
If you're planning to develop a cross-platform application: if you use Win32, then your application can easily run on Linux through Wine. This results in a highly maintainable application. This is one of the advantages of learning Win32.
