Is there anything preventing interoperability between modern languages and COBOL? - interop

I was reading about how people were having trouble finding developers to work on government systems that still use COBOL. I was also reading about how Fortran, a language created two years before COBOL, is interoperable with C, C++, R, and Python with the right libraries.
This allows Fortran code to work with modern programming languages to some degree, and even lets you write code in modern languages that works alongside Fortran, making it easier for Fortran novices to work with it. Are there any particular issues that prevent COBOL from having similar interoperability with other languages, such as SQL (which, like COBOL, is commonly used with databases), that would make it easier for modern programmers who might not normally learn COBOL to work with it?

Q1: Does anything prevent interoperability between modern languages and COBOL?
A1: Short answer, similar to those above: no, it is actually done quite often.
But that may depend on what the reader takes "modern language" to mean.
Even with "real" COBOL (not some "shiny" [may be read as "blending"] "managed COBOL") you are in most cases free to directly call any C functions so more or less can call anything (at least with a C wrapper) and also can call binaries as you can do on the operating system (`CALL 'SYSTEM' USING 'some-executabe-or-script "param1" "param2"' is a common extension).
For calling into any "native code" directly (like Win32 or POSIX) you obviously have to ensure you are using the correct parameter definitions, but COBOL 2002+ have stuff like USAGE SIGNED-LONG, USAGE POINTER and similar (the extension USAGE COMP-5 is also common in this place).
Additional there are often direct ways to inter-operate with socket servers, HTTP(S), XML, JSON, ... ; and many COBOL implementations also allow to ASSIGN a (line-)sequential file to a pipe, allowing to interact with other programs in this way, too.
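To make the parameter-matching point concrete, here is a hedged sketch of the C side of such a COBOL-to-C call with GnuCOBOL; the program, field, and function names are all illustrative:

```c
/* C side of a COBOL -> C call (GnuCOBOL).  The COBOL caller would
 * look roughly like this (illustrative):
 *
 *     01 RC        USAGE BINARY-LONG.        *> matches a C int
 *     01 CUSTOMER  PIC X(30).
 *     ...
 *     CALL "update_customer" USING CUSTOMER RETURNING RC
 *
 * COBOL passes BY REFERENCE by default, so C receives pointers, and
 * PIC X data arrives space-padded with no terminating NUL byte. */
#include <stdio.h>

int update_customer(char *customer)    /* 30 space-padded bytes */
{
    printf("got [%.30s]\n", customer); /* bound the width explicitly */
    return 0;                          /* becomes RC in the caller */
}
```

With GnuCOBOL the two sides can be built together, e.g. `cobc -x caller.cob update_customer.c` (if I remember the cobc behaviour correctly; check your compiler's manual).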
Q2: Are there any particular issues that prevent COBOL from having [...] interoperability with [...] SQL?
A2: No, and SQL is very commonly used directly in COBOL: EXEC SQL.
Many people will say that SQL is not a "programming language"; it is a query language and can be used in many different environments, including COBOL.
Depending on the environment used, EXEC SQL may be directly integrated into the COBOL environment, or handled by a pre-parser that rewrites the code into plain COBOL (normally CALLing some "native" code, see Q1).
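This pre-parser approach is not COBOL-specific; for illustration, PostgreSQL's ecpg preprocessor does the same job for C, turning EXEC SQL statements into plain C calls into the client library. A minimal sketch (the database and table names are made up):

```c
/* embedded.pgc -- run through `ecpg embedded.pgc`, then compile the
 * generated embedded.c with `cc embedded.c -lecpg`.  Assumes a
 * database "testdb" with a "customers" table exists. */
#include <stdio.h>

EXEC SQL INCLUDE sqlca;

int main(void)
{
    EXEC SQL BEGIN DECLARE SECTION;
    int customer_count;
    EXEC SQL END DECLARE SECTION;

    EXEC SQL CONNECT TO testdb;
    EXEC SQL SELECT count(*) INTO :customer_count FROM customers;
    printf("%d customers\n", customer_count);
    EXEC SQL DISCONNECT ALL;
    return 0;
}
```

A COBOL precompiler (Pro*COBOL, the DB2 precompiler, and others) performs the analogous rewrite on COBOL source.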
Q3: [... stuff] that would make it easier for modern programmers who might not normally learn COBOL to work with it?
... this is a completely different question, whatever a "modern programmer" is.
For a programmer to get to know a programming language, it all depends on the programmer and the resources (like time, manuals, tutorials, mentors) - and the will of the programmer. Many people actually don't "want" to learn COBOL (for reasons I've heard but don't understand or don't agree with); others lack some of the resources (a free compiler is available with GnuCOBOL, nearly all COBOL compilers have their manuals available online, and the ISO working group for COBOL publishes its draft standards online, too; you can often find mentors in COBOL discussion forums or mailing lists, along with many samples).
One thing that is often special about COBOL is not the language itself, but the environment it is used on (a "mainframe" with the job control language JCL instead of a GUI to click or a shell to use) and/or the software that is actually coded in COBOL; every piece of software that is maintained over decades has "special ways" here and there, and if you get to "decades-old code that wasn't actually maintained for years" you get into even more trouble/fun (this is not COBOL-specific, but with COBOL you may encounter such software more often).

No, there is nothing preventing interoperability.
The main reason (this is an opinion, not based on known facts) that Fortran seems to have more interop out of the box is that there was a free software GNU Fortran compiler early on for interested parties to work with. COBOL was very late in the game getting a viable free software compiler. That is no longer an issue with GnuCOBOL, and people are finally starting to write the code needed to catch up.
Adding to Simon's answer: a proof of concept for direct embedding is in a branch of GnuCOBOL; intrinsic functions have been added to support FUNCTION TCL, FUNCTION PYTHON, FUNCTION REXX, FUNCTION LUA and FUNCTION JVM, so far. With FUNCTION JVM, tests for Scala, Groovy, Java, and Frink all worked. This allows data transfer between COBOL working storage and the other language engine using simple COBOL syntax, including setups for callbacks to and from. Those functions are embedded into the compiler and the libcob run-time when using that branch.
For other interface trials, not built into the compiler, but still allowing interop; the GnuCOBOL FAQ has dozens of examples. Shakespeare? Yep. Falcon? Yep. C, well, GnuCOBOL emits intermediate C so that's covered in spades. There is also a C++ edition of the compiler, so C++ is also covered, in spades. Javascript; Jsish, Duktape, Spidermonkey, Quickjs to name a few of the trials.
Ada, D, Vala, Genie, S-Lang, ROOT/CINT, J, Gambas, Forth, Perl, Postscript, Pure, Icon and Unicon, Nim, BaCon, SWIG (which opens up many multiples), PARI/GP, Gretl, R, Red, Ruby, Haxe/Neko, Pascal, Erlang, Elixir, SQLite, Rust, Go, more..., including a fair number of esolangs, and GNU Lightning for on the fly assembly modules. Trials documented in the GnuCOBOL FAQ.
Framework interfacing for AWT/Swing, GTK, Agar, and things like ZeroMQ, CGI and websockets also proved successful and are in productive use. Along with at least 7 EXEC SQL preprocessors successfully tested, and in use.
It comes down to someone caring to try, and writing some glue or properly aligning call frames. No attempts I've tried have failed to produce satisfactory results, although Perl 5 was a hair pull of unraveling macro layers. (OK, I just lied: while attempting to embed jq, which relies on C's call-and-return-by-struct features, I would have had to leave pure COBOL interface coding, and didn't bother with the C middleware that would have made it easy.) ;-) Will do that someday though, as jq is quite the powerful little JSON handler.
Use the search engine you mistrust the least and look for "gnu-cobol-builtin-script" and "GnuCOBOL FAQ", and visit the hits on SourceForge.
In my particular explorations I usually focus on languages with a C Application Binary Interface, but other ABIs would be along a similar vein. It only takes sitting down and writing some middleware or figuring out how to properly synch the call frames.
Are these current samples perfect? Not always; there are edge and corner cases with some datatypes and COBOL PICTURE data that would require more work, but that is all: a little bit of work and testing to smooth over the bumps. When exploring, I don't always go that far until an actual need arises. These seed-work experiments are just to get some proof in the pudding, all done for the simple joy of it.
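As a concrete taste of what that "little bit of work" looks like, here is a hedged sketch of typical glue: COBOL PIC X fields are fixed-width and space-padded rather than NUL-terminated, so middleware frequently has to convert them before handing them to another language's C ABI (the function name is made up):

```c
#include <string.h>

/* Copy a COBOL PIC X(n) field into a NUL-terminated C string,
 * trimming the trailing spaces COBOL uses as padding.  `out` must
 * have room for n + 1 bytes. */
static void pic_x_to_cstr(const char *field, size_t n, char *out)
{
    while (n > 0 && field[n - 1] == ' ')
        n--;                      /* drop trailing space padding */
    memcpy(out, field, n);
    out[n] = '\0';
}
```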
One of the lead developers of GnuCOBOL just added uni- and bi-directional piping using simple filenames, which provides access to whatever the base OS offers, using basic COBOL OPEN/READ/WRITE/CLOSE (and other file I/O) statements. The code was committed to trunk just a few hours before I started typing this response.
Basically, the answer to the titular question is a resounding No.

The scenario involved in the governmental systems is most likely IBM mainframe hardware with a flavor of z/OS, z/VSE, or z/VM operating system.
It depends somewhat on what is meant by interoperability, in the sense that almost any modern mainframe supports TCP/IP, and that pretty much opens up the whole networked computing ecosystem to interoperability.
My guess is when all is said and done, the reason there is a problem is that the state refuses to pay a market rate for experienced mainframe developers and has kicked the maintenance can down the road as cost-saving measures.
It most likely is not a matter of there being no mainframe COBOL professionals able to make the systems work; it's most likely the state won't pay the price.
But this is speculation on my part since all I know is that the governor blames inanimate objects for appropriations and management failures within the state IT administration.
As a 40-year mainframe veteran, I'm dying to know details as to how this perfectly good technology is at fault for problems dealing with (again, I assume) unprecedented volumes of processing demand.

We found an interoperability problem between C and GnuCOBOL.
Our problem has since been addressed, so this answer is just for educational purposes, so you can understand what kinds of problems you may run into.
The problem manifests when C calls COBOL(a, b), which calls C(c), which calls COBOL(a, b).
It shows up specifically when the number of arguments varies along the chain.
A recent change to GnuCOBOL assumed that COBOL was calling COBOL, so it passed metadata about the arguments in a global area. The called COBOL program then cleared out its second argument because it falsely thought it was being called with one argument. That is, the intermediate C call was transparent to COBOL.
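A hedged sketch of the failing call chain (all names are illustrative; `cobsub` stands for a GnuCOBOL program linked into the same process, and the COBOL run-time initialisation is omitted for brevity):

```c
extern int cobsub(char *a, char *b);  /* COBOL program, two arguments */

static char a[8] = "AAAAAAA";
static char b[8] = "BBBBBBB";

int chelper(char *c)     /* plain C in the middle of the chain,  */
{                        /* CALLed by cobsub with one argument   */
    /* COBOL -> C -> COBOL: the run-time's global argument metadata
     * still described the one-argument call to chelper, so this
     * inner invocation saw its second argument cleared. */
    return cobsub(c, b);
}

int main(void)
{
    return cobsub(a, b); /* C -> COBOL: outer call, two arguments */
}
```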
This was done under the guise of making GnuCOBOL more compatible with IBM mainframe COBOL, but it caused me a lot of grief. It was quickly addressed with runtime changes. I would like to see it addressed with a compile-time option:
Make the .so file a stand-alone .so callable from any language, but the programmer has to be vigilant.
Make the .so file assume it will be called from COBOL, with the additional protections afforded by mainframe COBOL.
BTW: GnuCOBOL is great and has a great community behind it. If you experience problems, report them and you will get a better response than from commercial products.

Related

(Windows) How to make auto-update function for plugin?

I have a question. I'm writing a plugin (.dll) for an application (.exe), and I want to code an auto-update function for my plugin, but I've hit an issue: the update cannot be applied. Because my plugin is loaded while the application is running, the file cannot be replaced at run time; the update only takes effect once the application has exited. So, how can I do this?
This is my code: http://codepad.org/4a22ccMa
Thanks!
(this answers the original question, which has been edited to something completely different)
It depends upon the operating system (which I guess might be Windows, since you speak of DLLs; on Linux you have shared objects (in ELF) with different semantics). Read Levine's book Linkers and Loaders.
You might read more about Dynamic Software Updating. This is an entire research subject with a lot of scientific literature about it. Read e.g. the paper about Kitsune: Efficient, General-purpose Dynamic Software Updating for C (at least to understand several issues in your question).
On Linux, you could rename(2) the old .so and dlopen(3) the new one (and probably dlclose the older one, but you should do that later, when no active call frame on the call stack points into the old plugin), and my manydl.c example shows that you can in practice dlopen a very large number (more than a million) of shared objects.
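A minimal hedged sketch of that rename-then-dlopen dance (the paths and symbol name are illustrative; link with -ldl on older glibc):

```c
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* keep the old code mapped under a new name ... */
    if (rename("plugin.so", "plugin.so.old") != 0)
        perror("rename");
    /* ... put the freshly built plugin.so in place (done externally),
     * then load it: */
    void *h = dlopen("./plugin.so", RTLD_NOW);
    if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    void (*entry)(void) = (void (*)(void)) dlsym(h, "plugin_entry");
    if (entry)
        entry();
    /* dlclose(h) only later, once no call frame still executes the
     * old plugin's code */
    return 0;
}
```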
On Windows (which I don't know) you probably need to dynamically load the new version of the plugin in a different file path. Probably, restarting your program after a plugin update should make things much easier.
(if you can afford that, switching to Linux might be very helpful, because I guess that it is much easier)
Notice that in some languages (and some of their implementations) replacing code is much easier than in C or C++ on Windows. I guess that in the CLR (managed code, e.g. in C#) or in a JVM (e.g. in Java, Scala, Clojure) it should be easier. And in Common Lisp (at least with SBCL) it is quite easy (in particular since Common Lisp is a homoiconic language).
Take care of the continuation, i.e. of the call stack. You'll understand that your question (also related to orthogonal persistence & application checkpointing) is much deeper and more difficult than you have imagined. And upgrading classes and their instances is also very difficult.

Creating GUI desktop applications that call into either OCaml or Haskell -- Is it a fool's errand?

In both Haskell and OCaml, it's possible to call into the language from C programs. How feasible would it be to create native applications for Windows, Mac, or Linux which made extensive use of this technique?
(I know that there are GUI libraries like wxHaskell, but suppose one wanted to just have a portion of your application logic in the foreign language.)
Or is this a terrible idea?
Well, the main risk is that while the facilities exist, they're not well tested -- not a lot of apps do this. You shouldn't have much trouble calling Haskell from C; it looks pretty easy:
http://www.haskell.org/haskellwiki/Calling_Haskell_from_C
I'd say if there is some compelling reason to use C for the front end (e.g. you have a legacy app) and you really need a Haskell library, or want to use Haskell for some other reason, then, yes, go for it. The main risk is just that not a lot of people do this, so less documentation and examples than for calling the other way.
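For reference, the C side of the approach on that wiki page looks roughly like the following hedged sketch; the function name and stub header are illustrative, assuming a Haskell module containing `foreign export ccall fibonacci :: Int -> Int` compiled with GHC:

```c
#include <stdio.h>
#include "HsFFI.h"
#include "Safe_stub.h"   /* header GHC generates for the exports */

int main(int argc, char *argv[])
{
    hs_init(&argc, &argv);   /* start the Haskell run-time system */
    printf("fib 10 = %d\n", (int) fibonacci(10));
    hs_exit();               /* shut the run-time system down */
    return 0;
}
```

Building would be something like `ghc -no-hs-main Safe.hs main.c -o main` (check the wiki page above for the exact invocation).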
You can embed OCaml in C as well (see the manual), although this is not as commonly done as extending OCaml with C.
I believe that the best approach, even if both the GUI and the logic are written in the same language, is to run two processes which communicate via a human-readable, text-based protocol (a DSL of some sort). This architecture applies to your case as well.
Advantages are obvious: GUI is detachable and replaceable, automated tests are easier, logging and debugging are much easier.
I make extensive use of this by compiling Haskell shared libs that are called from outside Haskell.
Usually the tasks involved are to:
create the proper foreign export declarations
create Storable instances for any datatypes you need to marshal
create the C structures (or structures in the language you're using) to read this information
since I don't want to manually initialize the Haskell RTS, I add initialisation/termination code to the lib itself (DllMain on Windows, __attribute__((constructor)) on Unix); see the C sketch after this list
since I no longer need any of them, I create a .def file to hide all the closure and RTS functions from the export table (Windows)
use GHC to compile everything together
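A hedged sketch of the initialisation/termination item above, for the Unix __attribute__((constructor)) case (the library name passed to the RTS is illustrative):

```c
#include "HsFFI.h"

static void library_init(void)   __attribute__((constructor));
static void library_deinit(void) __attribute__((destructor));

static void library_init(void)
{
    /* start the Haskell RTS when the shared object is loaded */
    static char *argv[] = { "libhslib.so", 0 };
    static char **argv_p = argv;
    static int argc = 1;
    hs_init(&argc, &argv_p);
}

static void library_deinit(void)
{
    /* shut the RTS down when the shared object is unloaded */
    hs_exit();
}
```

On Windows the same two calls would live in DllMain's DLL_PROCESS_ATTACH/DETACH branches instead.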
These tasks are rather robotic and structured, to the point where you could write something to automate them. In fact, what I use myself to do this is a tool I created which does dependency tracing on the functions you marked for export, then wraps them up and compiles the shared lib for you, along with giving you the declarations in C/C++.
(Unfortunately, this tool is not yet on Hackage, because there is something I still need to fix and test a lot more before I'm comfortable doing so.)
The tool is now available here: http://hackage.haskell.org/package/Hs2lib-0.4.8
Or is this a terrible idea?
It's not a terrible idea at all. But as Don Stewart notes, it's probably a less-trodden path. You could certainly launch your program as Haskell or OCaml, then have it do a foreign-function call right out of the starting gate—and I recommend you structure your code that way—but it doesn't change the fact that many more people call from Haskell into C than from C into Haskell. Likewise for OCaml.

Haskell UI framework?

Is there, by chance, an emerging Haskell UI framework for Windows?
I recently took up looking over the language, and from what I see, it would be great for little "one-off" applications (elaborate scripts).
However, without a good UI framework I can't see it getting in under the smoke and mirrors of the more obvious contenders.
I've read that there are many frameworks, but none are full-featured.
I'm just wondering if this is something that's on the rise, or is it simply too difficult to get enough developers going in the same direction with one?
The two main frameworks are wxHaskell and Gtk2Hs. Both of these have been used for real work. From what I know, my preference would be Gtk2Hs, because it handles resources properly (i.e. uses the GC). wxHaskell requires the programmer to release widgets once they are no longer required, so you can get all the classic memory leaks and stale-pointer screw-ups with it.
The problem with both is that everything is in the IO monad. This reflects the fact that they are comparatively thin wrappers around existing GUI libraries for imperative languages. Of course this means you are no worse off than you would be writing a GUI in an imperative language, but you are hardly much better off either.
There are some interesting experimental libraries to be found on Hackage, including Grapefruit and Conal Elliott's "Tangible Values" ideas in GuiTV. Both of these try for a more declarative approach.
(Disclaimer: I am the wxHaskell maintainer)
Both wxHaskell and Gtk2Hs are more or less complete. That's to say, both wrap a great deal of the functionality provided by their underlying libraries. They also both, as mentioned earlier, require a rather 'imperative' style of programming in the IO monad.
There have been many discussions on the relative merits of each. I would say that wxHaskell is the easier of the two to get working, especially on Windows, as it can be installed via cabal (see http://www.haskell.org/haskellwiki/WxHaskell/Install#On_Windows)
The FRP frameworks (Grapefruit and others) provide a more 'functional' style of programming, at the cost of having much reduced widget coverage. I have the feeling that this is still an open research area, and not really ready for 'prime time'.
In practice, I've never had resource management issues with wxHaskell, although I agree that it's possible, and is an area handled better by Gtk2Hs, which uses reference counting in the underlying library.
For completeness, I should also mention that a Qt binding (QtHaskell?) also exists - it is relatively young, but apparently reasonably complete.
I rather feel that the Haskell community, small as it is, would do well to fix on one GUI framework, but accept the difficulty of this (e.g. licensing, support for all OS platforms etc.).
Also, you can use wxWidgets (I mean the C++ library) with Haskell. Here is an example: https://bitbucket.org/afiskon/hs-a-star-gui/src Such an approach has some advantages over wxHaskell: 1. you can use UI generators (Code::Blocks, wxFormBuilder); 2. your application takes less disk space; 3. you can use all the features of wxWidgets.
It should also be noted that the latest version of wxHaskell uses wxWidgets 2.9, which will probably never be ported to Debian: http://bugs.debian.org/cgi-bin/bugreport.cgi?msg=16;bug=613431

Is there a Linux equivalent of Windows' "resource files"?

I have a C library, which I build as a shared object for Linux and a DLL for Windows with MinGW32. The API depends on a couple of data files (statistical models) which I'd really like to roll in with the SO/DLL so that deployment is just one file.
It looks like I can achieve this for Windows with a "resource file" compiled with windres, but then I've got to write a bunch of resource-handling code for Windows, and I'm still stuck with the files on Linux.
Is there a way to achieve the same functionality on Linux?
Even better, is there a portable solution?
It's actually quite simple on Linux and other ELF systems: http://www.linuxjournal.com/content/embedding-file-executable-aka-hello-world-version-5967
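The gist of the technique in that article: convert the data file into an object file with something like `ld -r -b binary -o model.o model.dat`, link it into the .so, and read it through the symbols the linker generates. A hedged sketch (the symbol names follow from the illustrative path model.dat):

```c
#include <stdio.h>

/* symbols synthesised by `ld -r -b binary` from model.dat */
extern const char _binary_model_dat_start[];
extern const char _binary_model_dat_end[];

int main(void)
{
    size_t len = (size_t)(_binary_model_dat_end - _binary_model_dat_start);
    printf("embedded model is %zu bytes\n", len);
    /* hand _binary_model_dat_start and len to your model loader here */
    return 0;
}
```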
OS X has bundles, so you just build your library as a framework and put the file in the bundle.
Two potential solutions:
Phong Vo's sfio library, which is part of the AT&T Advanced Software Technology toolset, is a wonderful replacement for C stdio.h, and it will allow you to open either files or memory blocks using a single API. So you can easily convert your existing files to C initialized data to include in your DLL or SO file.
This is a good cross-platform solution, but the penalty is that the learning curve to get started is pretty high. They don't make it easy to figure out how stuff works or to take one part of their toolset and split it out for use independently of the other parts. But the good news is that if you want to adopt their U/Win system for running Unix code on Windows (all part of the same toolset), you can create DLLs and SOs using the same system.
For this kind of problem I often fall back on Lua; I can store Lua data either in external files or within C as initialized data (as sketched below). This is great for distributing everything in one .so file; I do this for my students.
Again the downside is that you have to master and incorporate a new technology.
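A hedged sketch of that Lua approach: the statistical data is compiled into the library as a Lua chunk and read back through the standard Lua C API (the data and field names are made up; link with -llua -lm):

```c
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

/* the "data file", baked into the binary as initialized C data */
static const char model_chunk[] =
    "model = { mean = 0.42, stddev = 0.07 }";

int main(void)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);
    if (luaL_dostring(L, model_chunk) != 0) {      /* load the chunk */
        fprintf(stderr, "%s\n", lua_tostring(L, -1));
        return 1;
    }
    lua_getglobal(L, "model");                     /* fetch the table */
    lua_getfield(L, -1, "mean");
    printf("mean = %f\n", lua_tonumber(L, -1));
    lua_close(L);
    return 0;
}
```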
In my own work I use Lua over the AT&T stuff for these reasons:
Lua has a much smaller footprint and is designed to play well with others; with AST you really have to adopt their way of doing things.
The learning curve with Lua is much less steep; you can be productive very quickly.
Lua is dead easy to install and it's easy to get information about it. AST has its own quirky installation process shared by nobody else in the world; it's often hard to make the installation work; and it's harder to get information about it.
Using Lua has a lot of other payoffs, so the effort spent learning Lua and learning how to incorporate Lua into C codes is easy to amortize over multiple projects.

Windows API for VISTA, 7 & Beyond

Are there any fundamental differences in the WinAPI/Win32? Is there any additional knowledge required to take advantage of new OS features?
Are there any pitfalls which someone who's coded Win32 apps in the past might fall in?
I'm not talking about Silverlight, that's a whole different ball of wax. (I don't have the VS that supports that at work yet.)
Edit:
Drew has a pretty good answer so far, but what's critical for a programmer to know?
i.e. What should be in an appendix to Charles Petzold's book? (Theoretically)
There are of course lots of new APIs that you should be aware of to make sure you have the tools you need. Beyond that, there are some changes to note.
Philosophical changes
Large parts of the old win32 APIs focused on C-style APIs where handles were passed around. Nowadays, many of the new APIs being developed are COM-based, so boning up on COM and ATL would be worthwhile.
You might also want to take note of the new API style if you're writing your own libraries; it is a bit more consistent and avoids things like Hungarian notation.
Replacements
Generally, don't assume that the methods you knew about 10 years ago are still state-of-the-art; they all still exist, so you won't necessarily be told you're doing it wrong. Check MSDN to see if it refers you to something better, and use the latest SDK so that you'll get deprecation warnings for some functions. Especially, make sure string functions you're using are secure.
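To illustrate the secure-string-functions point, a hedged sketch using the strsafe.h family, which takes the destination size and guarantees NUL-termination (the function and its use are illustrative):

```c
#include <windows.h>
#include <strsafe.h>

void copy_title(wchar_t *dest, size_t cchDest, const wchar_t *src)
{
    /* classic style -- overruns dest if src is too long:
     *     wcscpy(dest, src);
     * strsafe style -- bounded and always NUL-terminated: */
    if (FAILED(StringCchCopyW(dest, cchDest, src)))
    {
        /* truncation or bad parameters; handle as appropriate */
    }
}
```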
Specifically, one 'replacement' API is Direct2D, which is a DirectX-style API for UIs. If you're writing graphics code for Windows 7, you should consider Direct2D over GDI; it has a programming model that is compatible with, but very different from, GDI's. Direct2D may be ported back to Vista.
Also, instead of using win32-style menuing, consider using the Ribbon, which will be available for Vista as well as Win7.
If you're using the common controls library, make sure to use v6, not the default of v5.
Finally, make sure you're not unnecessarily calling things that require admin privileges, as that will prompt UAC.
All I can think of for now.
There are new APIs for each.
There is additional knowledge; though it may not be required, you should be familiar with 64-bit and multi-threaded application development, to name a few areas. The higher-level constructs such as Direct2D, .NET, etc. are what require the adjustment in knowledge, not necessarily the lower-level APIs.
You have the choice: traditional C/C++ or use the newer .Net framework languages (C# / VB.net / Python.net and more). For the latter, knowing the framework is more important than the implementation. You're isolated (in general) from pointers, threading, buffers and memory management and apart from a few differences in syntax once you know the framework it's portable between languages (ie, you could easily pick up VB.net programming if you're a C# guy as most of what your apps will do is call parts of the framework). You can create a class in C#, use it in a VB.net program and reference the same class out of a Powershell cmdlet for example.
The old-style C interfaces are still there for Win32, but unless you've got a specific need to use them (legacy code, DirectX, device drivers, for example) I'd look at the newer stuff. As for things like WPF, there isn't even a direct route in via unmanaged code - you have to jump through all sorts of ugly interop hoops.
Nothing special. The old stuff pretty much works as it did.
There are some new APIs, but nothing earth shattering (and following the old Win32 conventions).
So everything you know from Vista is still true for Win7.
Now, there are some new guidelines regarding user experience (touch screen, libraries (user experience stuff, not programmer stuff)), but the API style is the same.
Integrity levels are also a good thing to learn about. Depending on the nature of your application, if it attempts to do anything involving other processes that are running on the OS, it is important to know about this. This technology prevents processes at a lower integrity level from interacting with processes that are running at a higher integrity level. This includes messaging, hooks, DLL injection, opening handles, and many other techniques.
