How do I read and write binary data in OS-X? - macos

I need to write a DLL (dynalib, whatever) for OS X 10.6 (Snow Leopard) or later. The DLL reads and writes a binary file. (It's the company's proprietary format.) It'll be used by an app that's all Cocoa (AFAIK). Everything is 64-bit only.
From reading Apple's docs, books, and asking questions here, I still don't have a clear and confident idea of the good, proper way to deal with binary files. I have the impression I can't use the standard unix/C fopen(), fread() or open(), read(), etc. Or I can, but I'd be asking for trouble. Is this true? Should I be using something else, and just what?

I have the impression I can't use the standard unix/C fopen(), fread()
or open(), read() etc.
The POSIX/BSD personality is fully supported by the operating system. Feel free to write standards-compliant C.
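For example, a minimal sketch of round-tripping a fixed-size record with plain stdio; the struct layout and file name are invented for illustration:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical record; a real proprietary format would normally be
   serialized field by field to control padding and byte order. */
struct record {
    uint32_t id;
    double   value;
};

int main(void) {
    struct record out = { 42, 3.14 }, in;

    /* "wb"/"rb" ask for binary mode; on OS X/POSIX this is identical to
       "w"/"r", but it keeps the code portable to Windows. */
    FILE *f = fopen("data.bin", "wb");
    if (!f) { perror("fopen"); return 1; }
    fwrite(&out, sizeof out, 1, f);
    fclose(f);

    f = fopen("data.bin", "rb");
    if (!f) { perror("fopen"); return 1; }
    if (fread(&in, sizeof in, 1, f) == 1)
        printf("id=%u value=%f\n", (unsigned)in.id, in.value);
    fclose(f);
    return 0;
}
```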
Foundation Kit has Objective-C classes and messages you can use (NSData, NSFileHandle, etc.); however, they are often more of a pain to use, especially when you are maintaining portable code across platforms. There is also the side effect of Objective-C being somewhat easier to reverse engineer than straight C (a trait shared by all higher-level languages).
Depending on the needs of your library, you can consider wrapping it into a Framework as opposed to a naked dylib.

Related

Is there anything preventing interoperability between modern languages and COBOL?

I was reading about how people were having trouble finding people to work with COBOL when working on government systems that still use it. I was also reading about how Fortran, a language made two years before COBOL, is interoperable with C, C++, R, and Python with the right libraries.
This allows Fortran code to work with modern programming languages to some degree, and even lets you write code in modern languages that works alongside Fortran, making it easier for newcomers to Fortran to work with it. Are there any particular issues that prevent COBOL from having similar interoperability with other programming languages like SQL (which, like COBOL, is heavily used with databases) that would make it easier for modern programmers who might not normally learn COBOL to work with it?
Q1: Does anything prevent interoperability between modern languages and COBOL?
A1: Short answer, similar to those above: no, it is actually done quite often.
But that may depend on what counts as a "modern language" for the reader.
Even with "real" COBOL (not some "shiny" [may be read as "blending"] "managed COBOL") you are in most cases free to directly call any C functions so more or less can call anything (at least with a C wrapper) and also can call binaries as you can do on the operating system (`CALL 'SYSTEM' USING 'some-executabe-or-script "param1" "param2"' is a common extension).
For calling into any "native code" directly (like Win32 or POSIX) you obviously have to ensure you are using the correct parameter definitions, but COBOL 2002+ has things like USAGE SIGNED-LONG, USAGE POINTER and similar (the extension USAGE COMP-5 is also common here).
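As an illustration of the parameter-matching point, here is a sketch of the C side of such a call; the function name and the COBOL declarations in the comment are hypothetical, not from any particular codebase:

```c
#include <string.h>

/* Intended to be CALLed from COBOL, roughly like:
 *
 *   01 RC   USAGE BINARY-LONG.
 *   01 BUF  PIC X(80).
 *   CALL "fill_buffer" USING BY REFERENCE BUF BY VALUE 80 RETURNING RC
 *
 * BY REFERENCE arguments arrive as pointers, so the C signature has to
 * match the COBOL USAGE clauses exactly. */
int fill_buffer(char *buf, int len)
{
    /* PIC X fields are fixed length and space padded, not NUL terminated,
       so pad the whole buffer rather than copying a C string. */
    memset(buf, ' ', (size_t)len);
    memcpy(buf, "hello from C", len < 12 ? (size_t)len : 12);
    return 0;
}
```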
Additionally, there are often direct ways to interoperate with socket servers, HTTP(S), XML, JSON, and so on; many COBOL implementations also allow you to ASSIGN a (line-)sequential file to a pipe, allowing interaction with other programs in that way, too.
Q2: Are there any particular issues that prevent COBOL from having [...] interoperability with [...] SQL?
A2: No; SQL is very commonly used directly from COBOL via EXEC SQL.
Many people will say that SQL is not a "programming language"; it is a query language and may be used in different environments, including COBOL.
Depending on the environment used, EXEC SQL may be directly integrated into the COBOL environment, or handled by a pre-parser that rewrites it into plain COBOL (normally CALLing some "native" code, see Q1).
Q3: [... stuff] that would make it easier for modern programmers who might not normally learn COBOL to work with it?
... this is a completely different question, whatever a "modern programmer" is.
For a programmer to get to know a programming language, it all depends on the programmer and the resources (like time, manuals, tutorials, mentors) - and the will of the programmer. Many people actually don't want to learn COBOL (for reasons I've heard but either don't understand or don't agree with); others miss some of the resources (a free compiler is available with GnuCOBOL, nearly all COBOL compilers have their manuals available online, and the ISO working group for COBOL publishes its draft standards online, too; you can often find mentors in COBOL discussion forums or mailing lists, along with many samples).
One thing that is often special about COBOL is not the language itself, but the environment it is used in (a "mainframe" with job control language (JCL) instead of a GUI to click or a shell to use) and/or the software that is actually coded in COBOL. Every piece of software that is maintained over decades has "special ways" here and there, and if you get to decades-old code that wasn't actually maintained for years you get into even more trouble/fun (this is not COBOL-specific, but with COBOL you may encounter such software more often).
No, there is nothing preventing interoperability.
The main reason (this is an opinion, not based on known facts) that Fortran seems to have more interop out of the box is that there was a free-software GNU Fortran for interested parties to work with. COBOL was very late in the game getting a viable free-software compiler. That is no longer an issue with GnuCOBOL, and people are finally starting to write the code needed to catch up.
Adding to Simon's answer: proof of concept for direct embedding is in a branch of GnuCOBOL; intrinsic functions have been added to support FUNCTION TCL, FUNCTION PYTHON, FUNCTION REXX, FUNCTION LUA and FUNCTION JVM, so far. With FUNCTION JVM, tests for Scala, Groovy, Java, and Frink all worked. This allows data transfer between COBOL working storage and the other language engine using simple COBOL syntax, including setups for callbacks to and from. Those functions are embedded into the compiler and the libcob run-time when using that branch.
For other interface trials, not built into the compiler but still allowing interop, the GnuCOBOL FAQ has dozens of examples. Shakespeare? Yep. Falcon? Yep. C? Well, GnuCOBOL emits intermediate C, so that's covered in spades. There is also a C++ edition of the compiler, so C++ is covered in spades as well. JavaScript? Jsish, Duktape, SpiderMonkey, and QuickJS, to name a few of the trials.
Ada, D, Vala, Genie, S-Lang, ROOT/CINT, J, Gambas, Forth, Perl, Postscript, Pure, Icon and Unicon, Nim, BaCon, SWIG (which opens up many multiples), PARI/GP, Gretl, R, Red, Ruby, Haxe/Neko, Pascal, Erlang, Elixir, SQLite, Rust, Go, more..., including a fair number of esolangs, and GNU Lightning for on the fly assembly modules. Trials documented in the GnuCOBOL FAQ.
Framework interfacing for AWT/Swing, GTK, Agar, and things like ZeroMQ, CGI and websockets also proved successful and are in productive use. Along with at least 7 EXEC SQL preprocessors successfully tested, and in use.
It comes down to someone caring to try, and writing some glue or properly aligning call frames. No attempts I've tried have failed to produce satisfactory results, although Perl 5 was a hair-pull of unraveling macro layers. (OK, I just lied: while attempting to embed jq, which relies on C call-by-struct and return-by-struct features, I would have had to leave pure COBOL interface coding, and didn't bother with the C middleware that would have made it easy.) ;-) Will do that someday though, as jq is quite the powerful little JSON handler.
Use the search engine you mistrust the least and look for "gnu-cobol-builtin-script" and "GnuCOBOL FAQ", and visit the hits on SourceForge.
In my particular explorations I usually focus on languages with a C Application Binary Interface, but other ABIs would be along a similar vein. It only takes sitting down and writing some middleware or figuring out how to properly synch the call frames.
Are these current samples perfect? Not always, there are edge and corner cases with some datatypes and COBOL PICTURE data that would require more work, but that is all; a little bit of work and testing to smooth over the bumps. When exploring, I don't always go that far until an actual need arises. These seed work experiments are just to get some proof in the pudding, all done for the simple joy of it.
One of the lead developers for GnuCOBOL just added uni- and bi-directional piping using simple filenames, which provides access to whatever the base OS offers, using basic COBOL OPEN/READ/WRITE/CLOSE (and other file I/O) statements. The code was committed to trunk just a few hours before I started typing this response.
Basically, the answer to the titular question is a resounding No.
The scenario involved in the governmental systems is most likely IBM mainframe hardware with a flavor of z/OS, z/VSE, or z/VM operating system.
It somewhat depends on what is meant by interoperability in the sense that most any modern mainframe supports TCP/IP and that pretty much opens up the whole networked computing ecosystem to networked interoperability.
My guess is when all is said and done, the reason there is a problem is that the state refuses to pay a market rate for experienced mainframe developers and has kicked the maintenance can down the road as cost-saving measures.
It most likely is not a matter of there being no mainframe COBOL professionals able to make the systems work; it's most likely the state won't pay the price.
But this is speculation on my part since all I know is that the governor blames inanimate objects for appropriations and management failures within the state IT administration.
As a 40-year mainframe veteran, I'm dying to know details as to how this perfectly good technology is at fault for problems dealing with (again, I assume) unprecedented volumes of processing demand.
We found an interoperability problem between C and GnuCOBOL.
Our problem has since been addressed, so this answer is just for educational purposes, to show what kind of problems you may run into.
The problem manifests when C calls COBOL(a, b), which calls C(c), which in turn calls COBOL(a, b).
And specifically when the number of arguments varies.
A recent change to GnuCOBOL assumed that COBOL called COBOL, so it passed metadata about the arguments in some global area. The called COBOL program then cleared out the second argument because it falsely thought it was being called with one argument. That is, the intermediate C call was transparent to COBOL.
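A rough C sketch of the shape of that chain (the names are hypothetical; the real programs are proprietary, and this is illustration only, not runnable on its own):

```c
/* Outer C code calls a two-argument COBOL program; that program CALLs
   back into this C function, which re-enters the same COBOL program.
   The plain C frame in the middle carries none of the COBOL runtime's
   argument-count metadata, so the second entry into COBPROG believed
   it had been called with only one argument and cleared the second. */
extern int COBPROG(char *arg1, char *arg2);   /* GnuCOBOL program, 2 args */

int middle(char *c)        /* plain C, CALLed from COBPROG */
{
    static char a[8] = "first", b[8] = "second";
    return COBPROG(a, b);  /* re-entry that exposed the bug */
}
```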
This was done under the guise of making GnuCOBOL more compatible with IBM mainframe COBOL, but it caused me a lot of grief. It was quickly addressed with runtime changes. I would like to see it addressed with a compile-time option:
Make the .so file a standalone .so callable from any language, but the programmer has to be vigilant.
Make the .so file assume it will be called from COBOL and gets the additional protections afforded by mainframe COBOL.
BTW: GnuCOBOL is great and has a great community behind it. If you are experiencing problems, report them and you will get a better response than from commercial products.

Can the Ruby language be used to build operating systems?

Can the Ruby language be used to create an entire new mobile operating system or desktop operating system i.e. can it be used in system programming?
Well, there are a few operating systems out there right now which use higher-level languages than C. Basically, the Ruby interpreter itself would need to be written in something low-level, and there would need to be some boot-loading code that loaded a fully functional Ruby interpreter into memory as a standalone kernel. Once the Ruby interpreter is bootstrapped and running in kernel mode (or one of the inner rings), there would be nothing stopping you from building a whole OS on top of it.
Unfortunately, it would likely be very slow. Garbage collection for every OS function would probably be rather noticeable. The Ruby interpreter would be responsible for basic things like task scheduling and the network stack, and doing those in a garbage-collected runtime would slow things down considerably. To work around this, odds are good that the "performance critical" pieces would still be written in C.
So yes, technically speaking this is possible. But no one in their right mind would try it (cue crazy person in 3... 2...).
For all practical purposes: No.
While the language itself is not suited for such a task, it is imaginable (in some other universe ;-) that there could be a Ruby run-time developed with such a goal in mind.
The only "high level" -- yes, the quotes are there for a reason, I don't consider C very "high level" these days -- language I know of designed for Systems Programming is BitC. (Which is quite unlike Ruby.)
Happy coding.
Edit: Here is a list of "Lisp-based OSes". While not Ruby, the dynamically-typed/garbage-collected nature of (many) Lisp implementations makes for a favorable comparison: if those crazy Lispers can do/attempt it, then so can some Ruby fanatic ... or at least they can wish for it ;-) There is even a link to an OCaml OS on the list...
No, not directly
In the same way that Rails is built on top of Ruby, Ruby is built on top of the services that lower layers .. the real OS .. provide.
I suppose one could subset Ruby until it functionally resembled C and then build an OS out of that, but it wouldn't be worth it. Sure, it would have a nice if .. end but C syntax is perfectly usable and we already have C language systems. Also, operating systems don't handle character data very much, so all of the Ruby features to manipulate it wouldn't be as valuable in a kernel.
If we were starting from scratch today we might actually try (as various experimental projects have) to use garbage collected memory allocation in a kernel but we already have OS kernels.
People are making investments at the higher layers rather than redoing work already done. After all, with all the upper level software to run these days, a new kernel would need to present a compatible interface and the question would then be asked "why not just run the nice kernels we already have?".
Now, the application API for a mobile OS could indeed be done for Ruby. So, just as Android apps are written in Java, RubyPhone apps could be written in Ruby. But Ruby might not be the best possible starting point for a rich application platform. Its development so far has been oriented to server-side problems. There exist various graphical interface gems but I don't think they are widely used.
Basically yes, but with a big disclaimer - which is basically Chris' answer with a different spin on it. Since using Ruby would kinda suck for kernel performance, you'd probably want to build around a Linux-ish kernel and just not load any of the rest of the operating system. This is basically what Android does: the kernel is a fork of Linux (and is maintained close to Linux), the console is a WebKit screen, and the interpreter is Java with some Android-specific libraries. I.e., Android is Java masquerading as an OS; you could do about the same thing with Ruby instead of Java, with only a smallish hit to performance relative to Java.
While building a whole OS from scratch in Ruby seems like a multi-billion project (think of all the drivers), a Linux kernel module that runs simple Ruby scripts does make sense to me - even if it were only for prototyping new Linux drivers.

Is it possible to hook API calls on Mac OS?

On Windows there are a few libraries that allow you to intercept calls to DLLs:
http://www.codeproject.com/kb/system/hooksys.aspx
Is it possible to do this on Mac OS? If so, how is it done?
The answer depends on whether you want to do this in your own application or systemwide. In your own application, it's pretty easy; the dynamic linker provides features such as DYLD_INSERT_LIBRARIES. If you're doing this for debugging/instrumentation purposes, also check out DTrace.
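For a plain C function in your own process (or a process you launch), dyld interposing plus DYLD_INSERT_LIBRARIES is usually enough; a minimal sketch, where the hooked function and build commands are just examples (and note that SIP prevents inserting libraries into Apple-signed binaries on modern systems):

```c
/* interpose.c
   build: clang -dynamiclib interpose.c -o libinterpose.dylib
   run:   DYLD_INSERT_LIBRARIES=./libinterpose.dylib ./some_app */
#include <stdio.h>
#include <sys/types.h>
#include <fcntl.h>

/* Replacement for open(2): log the path, then forward to the real open.
   Calls made from this library itself are not interposed, so the call
   below reaches the original function. */
static int my_open(const char *path, int flags, mode_t mode)
{
    fprintf(stderr, "open(\"%s\")\n", path);
    return open(path, flags, mode);
}

/* dyld reads the __DATA,__interpose section of inserted libraries and
   rebinds other images' references from the replacee to the replacement. */
__attribute__((used)) static struct {
    const void *replacement;
    const void *replacee;
} interposers[] __attribute__((section("__DATA,__interpose"))) = {
    { (const void *)my_open, (const void *)open },
};
```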
You can replace Objective-C method implementations with method swizzling, e.g. JRSwizzle or Apple's method_exchangeImplementations (10.5+).
If you want to modify library behavior systemwide, you're going to need to load into other processes' address spaces.
Two loading mechanisms originally designed for other purposes (input managers and scripting additions) are commonly abused for this purpose, but I wouldn't really recommend them.
mach_inject/mach_override are an open-source set of libraries for loading code and replacing function implementations, respectively; however, you're responsible for writing your own application which uses the libraries. (Also, take a look at this answer; you need special permissions to inject code into other processes.)
Please keep in mind that application patching/code injection for non-debugging purposes is strongly discouraged by Apple and some Mac users (and developers) are extremely critical of the practice. Much of this criticism is poorly informed, but there have been a number of legitimately poorly written "plug-ins" (particularly those which patch Safari) that have been implicated in application crashes and problems. Code defensively.
(Disclaimer: I am the author of a (free) APE module and an application which uses mach_inject.)

How does porting between Linux and Windows work?

If a particular piece of software is made to be run on one platform and the programmer/company/whatever wants to port it to the other, what exactly is done? I mean, do they just rewrite the Linux- or Windows-specific references to the equivalent on the other platform? Or is an entire rewrite necessary?
Just trying to understand what makes it so cost-prohibitive that so many major vendors don't port their software to Linux (specifically thinking about Adobe)
Thanks
This is the point of a cross-platform toolkit like Qt or GTK: they provide a platform-agnostic API which delegates to whichever platform the program is compiled for.
Some companies don't use such a toolkit and write their own (for whatever reason - it could well be optimisation-related), meaning they can't just recompile their code for another OS.
There are also libraries available that ease, at least on a specific area, the port of Windows API calls to Linux. See the Windows to Linux porting library.
In my experience, there are three main reasons why it's cost-prohibitive to take a large existing program on one platform and port it to another:
it has (not necessarily purposely) extensively used some library or API (often GUI, but there are also plenty of other things) that turns out not to exist on the other platform
it has unknowingly become riddled with dependency on nonstandard features or oddities of the compiler or other tools
it was written by somebody who didn't know that you had to use some oddball feature to get things to work on the other platform (like a Linux library that isn't sprinkled with the right __declspec directives you need for a good Windows DLL).
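That last point is commonly papered over with a small export macro, so the same header works for a Windows DLL and a Linux/ELF shared object; a sketch with illustrative names:

```c
/* mylib_export.h - hypothetical header showing the usual pattern */
#ifndef MYLIB_EXPORT_H
#define MYLIB_EXPORT_H

#if defined(_WIN32)
#  if defined(MYLIB_BUILDING)          /* defined when compiling the DLL itself */
#    define MYLIB_API __declspec(dllexport)
#  else
#    define MYLIB_API __declspec(dllimport)
#  endif
#else                                   /* ELF: optionally control symbol visibility */
#  define MYLIB_API __attribute__((visibility("default")))
#endif

MYLIB_API int mylib_do_something(int x);

#endif /* MYLIB_EXPORT_H */
```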
It's much easier to write a cross-platform app if you consider that a design goal from the start, and I have three specific recommendations:
Use Boost—oodles of handy things you might ordinarily get from platform-specific APIs and libraries, but by using Boost you get it cross-platform.
Do all your GUI programming using a cross-platform library. My favorite these days is Qt, but there are other worthy ones as well.
Build and test every day on both platforms; never provide an opportunity for the code to develop a dependency on only one platform that you discover only too late.
There are many reasons why it may be very difficult to port an application to a different platform, most often it is because some interfaces the application uses to communicate with the system are not available, and one either has to implement them on their own, port a library your application depends on, or rewrite the application, so that it uses alternative functions. Most languages today are very portable across hardware architectures and operating systems, but the problem is with libraries, system calls and potentially other interfaces the OS (or platform) provides. To be more specific:
Compilers may differ in their configuration and the standard functions they provide. On Windows the most popular compiler for C/C++ is Visual C++ (MSVC, shipped with Visual Studio), while on unix it is gcc and llvm/clang (in combination with a standard library such as glibc or the BSD libc). They expect different flags and different forms of declaration, and produce different file formats for executables and shared libraries. Even though C and C++ have standard libraries, they are implemented differently across platforms. There are build systems whose aim is to make compilation portable, such as Autotools, CMake and SCons.
On top of the standard libraries there are additional functions the OS provides. On Windows they are covered by the Win32 API; on unix systems they are part of the POSIX standard, with various GNU, BSD and Linux-specific extensions, and there are also plain system calls (the lowest-level interface between applications and the operating system). POSIX functions are available on Windows via systems such as Cygwin and MinGW, and Win32 API functions are available on unix via Wine. The problem is that these implementations are not complete, and often there are minor (but important) differences.
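A tiny, purely illustrative example of what those differences look like in practice, wrapping a millisecond sleep behind one function:

```c
/* sleep_ms.c - same interface, two implementations */
#ifdef _WIN32
#  include <windows.h>
void sleep_ms(unsigned ms) {
    Sleep(ms);                          /* Win32 API, takes milliseconds */
}
#else
#  include <time.h>
void sleep_ms(unsigned ms) {
    struct timespec ts = { ms / 1000, (long)(ms % 1000) * 1000000L };
    nanosleep(&ts, NULL);               /* POSIX, takes seconds + nanoseconds */
}
#endif
```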
Communication with the desktop system (in order to make a GUI interface) is done differently. On Linux this might be the X Window System (together with freedesktop libraries) or Wayland, while Windows has its own systems. There are GUI libraries which try to provide an abstract interface for these, such as Qt, GTK, wxWidgets, EFL.
Other services the OS provides, such as network communication may be implemented differently. On Windows many applications use .NET libraries, for which there is only limited support on unix systems. Some unix applications rely on Linux-specific features such as systemd, /proc, KMS, cgroups, namespaces. This limits portability even among unix systems (Linux, BSD systems, Mac OS X, ...). Even .NET libraries are not very compatible across different versions, and they might not be available on an older version of Windows or on embedded systems. Android and iOS have different interfaces entirely.
Web applications are usually the most portable, but HTML5 is a live standard, and many interfaces may not be available yet in some browsers/web engines. This requires the use of polyfills, but it is usually much less painful than the situation with "native" applications.
Because of all of these limitations, porting can be a pretty hard work and sometimes it is easier to create a new application from scratch, either specifically for the other platform, using cross-platform abstraction libraries/platforms (such as Qt or Java), or as a web application (potentially bundled in something like Electron). It is a good idea to use these from the beginning, but many programmers choose not to because the applications tend to look and behave differently from "native" applications on the platform, and they might also be slower and more restricted in the way they interact with the OS.
Porting a piece of software that has not been made platform-independent up front can be an enormous task. Often, the code is deeply ingrained with non-portable APIs, whether third-party or just OS libraries. If the third-party vendor does not provide the API for the platform you are porting to, you are pretty much forced into a full rewrite of that functionality, or into finding another third party that is portable. That alone can be awfully costly.
Finally, porting software also means supporting it on another platform, which means hiring some specialists, and training support to answer more complex queries.
In the end, such a process can be very costly, for very little additional sales. Sadly, the decision is easy: concentrate on new functionality on your current platform that you know your customers are going to pay for.
If the software was written for a single OS, a major rework is likely. The first step is to move absolutely all platform-specific code into a single area of the code base; this area should have little or no app-specific stuff. Then rewrite this isolated portion of the code for the new target OS.
Of course, this glosses over some extremely major implications. For instance, if your first version targeted the Win32 API, then any GUI code will be heavily tied to Windows, and to maintain any hope of preserving your sanity, you will need to move all that code to a cross-platform GUI framework like Qt or GTK.
Under Mono, you can write a C# Winforms program that works on both platforms. But to make that possible, the Mono team had to write their own Winforms library that essentially duplicates all of the functions of Winforms. So there is still no free lunch.
Most software is portable to some extent. In the case of a C app there will be a lot of #ifdefs in the code, apart from path changes, etc.
It is rare for the Windows and Linux versions of the same software not to share a common codebase - that would actually mean they only share a common name. It's always harder to maintain more codebases, but I think the actual problem with porting applications has little to do with the technical side and a lot to do with the business side. Linux has far fewer users than Windows/OS X, and most of them expect everything to be free as in beer or simply hate commercial software on religious grounds.
When you come to think about it - most open source software is multiplatform, no matter what language was used to implement it. This speaks for itself...
P.S. Disclaimer - I'm an avid supporter of Free and Open source software, I don't want to insult anybody - I just share my perspective on the topic.

Is there a Linux equivalent of Windows' "resource files"?

I have a C library, which I build as a shared object for Linux and a DLL for Windows with MinGW32. The API depends on a couple of data files (statistical models) which I'd really like to roll in with the SO/DLL so that deployment is just one file.
It looks like I can achieve this for Windows with a "resource file" compiled with windres, but then I've got to write a bunch of resource-handling code for Windows, and I'm still stuck with the files on Linux.
Is there a way to achieve the same functionality on Linux?
Even better, is there a portable solution?
It's actually quite simple on Linux and other ELF systems: http://www.linuxjournal.com/content/embedding-file-executable-aka-hello-world-version-5967
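The technique from that article boils down to turning the data file into an object file and linking it into the shared object; a sketch (the file names are examples):

```c
/* 1) Turn the data file into an object file with linker-visible symbols:
 *      objcopy -I binary -O elf64-x86-64 -B i386:x86-64 model.dat model.o
 *    (or: ld -r -b binary -o model.o model.dat)
 * 2) Link model.o into the .so and access the embedded bytes like this: */
#include <stddef.h>

extern const char _binary_model_dat_start[];
extern const char _binary_model_dat_end[];

const char *model_data(size_t *len)
{
    *len = (size_t)(_binary_model_dat_end - _binary_model_dat_start);
    return _binary_model_dat_start;
}
```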
OS X has bundles, so you just build your library as a framework and put the file in the bundle.
Two potential solutions:
Phong Vo's sfio library, which is part of the AT&T Advanced Software Technology toolset, is a wonderful replacement for C stdio.h, and it will allow you to open either files or memory blocks using a single API. So you can easily convert your existing files to C initialized data to include in your DLL or SO file.
This is a good cross-platform solution, but the penalty is that the learning curve to get started is pretty high. They don't make it easy to figure out how stuff works or to take one part of their toolset and split it out for use independent of the other parts. But the good news is that if you want to adopt their U/Win system for running Unix codes on windows (all part of the same toolset), you can create DLLs and SOs using the same system.
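I won't reproduce sfio's API here, but the "one API over both files and memory blocks" idea can also be illustrated with plain POSIX fmemopen (POSIX.1-2008), if you can live with FILE*; this is an analogy, not sfio:

```c
#include <stdio.h>

/* Data that would otherwise live in an external file, compiled in. */
static const char model_blob[] = "alpha=0.5\nbeta=1.25\n";

int main(void) {
    /* fmemopen gives a FILE* over a memory block, so the same parsing
       code can read either an embedded blob or a real file on disk. */
    FILE *f = fmemopen((void *)model_blob, sizeof model_blob - 1, "r");
    if (!f) { perror("fmemopen"); return 1; }

    char line[64];
    while (fgets(line, sizeof line, f))
        printf("read: %s", line);
    fclose(f);
    return 0;
}
```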
For this kind of problem I often fall back on Lua; I can store Lua data either in external files or within C as initialized data. This is great for distributing everything in one .so file; I do this for my students.
Again the downside is that you have to master and incorporate a new technology.
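A minimal sketch of that approach with the Lua C API; the embedded data and names are invented for the example:

```c
/* embed_lua.c - link against Lua, e.g.: cc embed_lua.c -llua -lm */
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

/* The "data file" lives inside the library as initialized C data. */
static const char model_lua[] =
    "model = { name = 'demo', threshold = 0.75 }";

int main(void) {
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);

    /* Run the embedded chunk; nonzero means an error message is on the stack. */
    if (luaL_dostring(L, model_lua)) {
        fprintf(stderr, "%s\n", lua_tostring(L, -1));
        return 1;
    }
    lua_getglobal(L, "model");
    lua_getfield(L, -1, "threshold");
    printf("threshold = %f\n", lua_tonumber(L, -1));

    lua_close(L);
    return 0;
}
```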
In my own work I use Lua over the AT&T stuff for these reasons:
Lua has a much smaller footprint and is designed to play well with others; with AST you really have to adopt their way of doing things.
The learning curve with Lua is much less steep; you can be productive very quickly.
Lua is dead easy to install and it's easy to get information about it. AST has its own quirky installation process shared by nobody else in the world; it's often hard to make the installation work; and it's harder to get information about it.
Using Lua has a lot of other payoffs, so the effort spent learning Lua and learning how to incorporate Lua into C codes is easy to amortize over multiple projects.
