Haskell SDL on OS X

SDL on OS X uses preprocessor tricks to replace main() with its own entry point, written in Objective-C, which calls the user's main.
These tricks make the lives of non-C SDL users (e.g. the Haskell bindings) very difficult.
Is there a good reason for this?
Why couldn't SDL do the Objective-C Cocoa initialization in SDL_Init?

The approach for Mac OS X is not much different from the approach for other non-Linux platforms (Windows, old Mac, BeOS). You could ask the SDL developers themselves why they did it this way, but I can see several reasons they may have chosen to do this:
This keeps the dependencies of SDL code, which is focused on initializing SDL-specific subsystems (video, audio, timing, etc.), limited to the specific subsystems that SDL is especially designed to work with. I.e., this way SDL stays lean and mean.
It avoids having to introduce a new platform-specific subsystem for application initialization. Not everyone is going to want the bare-bones application object and menu that SDL sets up for Mac apps, not by a long shot – so if you were going to put it into SDL_Init, you'd need to make it an optional subsystem so as not to inconvenience developers who don't need it.
It handles inversion of control correctly, which is how Mac OS X and other application frameworks typically operate, while maintaining the operational semantics of SDL routines. SDL_Init assumes it's going to return to the caller after initialization is complete, but if you naively tried to create an application object in SDL_Init and invoke [app run] on it to finish initializing and launching the application, you'd never return. If you didn't call run there, you'd have to create a separate SDL function to set up the application run loop. This could complicate the SDL library quite a bit. The approach that was chosen avoids all this by letting the framework take care of all the application setup first and invoke the SDL_main() routine from applicationDidFinishLaunching.
It makes it easy to convert SDL demos coded on Linux over to Mac OS X. You don't even have to rename main – the preprocessor renaming of main() to SDL_main() takes care of that for you!
I'm guessing the last of these reasons is the primary driver behind the redefinition of main in SDL_main.h, which I agree is an ugly hack.
If you're prepared to give up that level of cross-platform portability for your library and apps, I'd suggest simply modifying your SDL_main.h to remove the following line:
#define main SDL_main
and removing the following from the SDLMain.m in your project:
#ifdef main
# undef main
#endif
You shouldn't even need to recompile SDL if you do this. Note that SDLMain.m is already set up to invoke SDL_main() without the preprocessor hack, and nothing else in SDL is going to use this, so in this way you can simply provide SDL_main() as your game's entry point.
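For concreteness, here is a minimal sketch of what your entry point looks like once the #define is gone (the video-only init is just illustrative):

#include "SDL.h"

/* SDLMain.m calls SDL_main() from applicationDidFinishLaunching,
   after Cocoa is fully set up */
int SDL_main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;
    /* ... set a video mode, run your event loop ... */
    SDL_Quit();
    return 0;
}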
If you want to go the other way, taking over main() yourself, you'd still want to get rid of the #define main SDL_main hack in SDL_main.h, but other than that, you're not beholden to the main() that SDL provides for you. First, note that SDLMain.{h,m} are not part of the library proper; you must include them separately in your project. Second, note the following comments in SDLMain.h:
/* SDLMain.m - main entry point for our Cocoa-ized SDL app
Initial Version: Darrell Walisser <dwaliss1@purdue.edu>
Non-NIB-Code & other changes: Max Horn <max@quendi.de>
Feel free to customize this file to suit your needs
*/
That sounds to me like an invitation to go roll your own if these aren't working for you, starting with SDLMain.{h,m} as a model. And if you're rolling your own, you can do what you want! For that matter, you could write the equivalent of SDLMain.m in Haskell, using HOC, if that's what you want. Unless you're a whiz with HOC, though, I'd keep it simple.

How to create an embeddable C-API library in Go?

I am planning to write a cross-platform app that has most of its functionality shared across all platforms (Linux, OS X, Windows, iOS, Android).
These are mostly helper functions (calculations, internal lists, networking, etc.), so I figured it would be convenient to have them in a library I can compile for every platform while still being able to create a custom UI for each platform individually.
Dominant languages across those platforms I mentioned are C, Objective-C, C# and Java. All these languages support calling C-API functions from a library either directly or via internal wrappers. Since I don't want to write 80% of my application's code in C/C++, I searched and found Go.
cgo seems to be the solution for my problem.
My current thought is to code the core library in Go and then compile it for each platform; however, invoking go build does not create anything at all.
I import "C".
I have declared a func and added an //export comment before it.
I read about gccgo but people keep pointing out that it is outdated and should not be used.
Maybe anyone can point out a flaw in my thoughts or help me bring this library file together. Thanks in advance.
If your aim is to build a library that can be linked into arbitrary C, Objective-C or Java programs, you are out of luck with the currently released standard tool chain. There are plans to change this in the future, but at present the Go runtime is not embeddable in other applications.
While cgo will allow you to export functions to be called from C, this is only really useful for cases when the C code you call from Go needs to call back to Go.

Interposing of OS X system calls

I need to interpose (get my functions called instead of the original functions) some OS X system calls to overcome a flaw in a piece of closed-source software.
Preferably, the resulting solution would work under 10.5 (Leopard) and newer, but I might be able to require 10.6 (Snow Leopard) if the argument were strong enough.
Preferably, the resulting solution would be an executable, but I might settle for a script.
Preferably, the resulting solution would be able to interpose ("steal the vectors") even after the target application is running, but I could settle for a technology that must inject itself as the application is loading.
Preferably, the resulting solution would be developed in C or C++, but I could settle for Objective-C or something else.
So far, I've experimented with:
1) DTrace scripting, which has taught me a lot, but the limitations of the D language (limited flow control, etc.) make it a major pain for what I'm doing, not to mention that the result would be a script, which isn't as tidy and self-contained as what I'm shooting for.
2) DYLD_INSERT_LIBRARIES interposition, which is slick in many ways, but perhaps due to namespace flattening (I won't pretend to deeply understand what this means), it works nicely against simpler executables, but makes my target application choke, even when I build a do-nothing library that doesn't actually interpose any calls.
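For reference, the interposition mechanism I'm using is dyld's __DATA,__interpose section; a minimal sketch of an interposing library of this kind (the open() override is just illustrative):

/* build: gcc -dynamiclib interpose.c -o libinterpose.dylib
   run:   DYLD_INSERT_LIBRARIES=libinterpose.dylib ./target */
#include <fcntl.h>
#include <stdio.h>

typedef struct { const void *replacement; const void *replacee; } interpose_t;

static int my_open(const char *path, int flags, mode_t mode)
{
    fprintf(stderr, "open(%s)\n", path);  /* log, then forward to the original */
    return open(path, flags, mode);
}

__attribute__((used)) static const interpose_t interposers[]
__attribute__((section("__DATA,__interpose"))) = {
    { (const void *)my_open, (const void *)open },
};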
My latest idea is to experiment with mach_star (https://github.com/rentzsch/mach_star), but I'm stopping here first, to ask the Stack Overflow community which invariably knows more than do I...
...should I be looking at something besides mach_star next?
I think you've made the right choice looking at mach_star.
If you actually want to learn how the Darwin link-loader works, etc., I'd put more time into your DYLD insertion problems. But obviously you're looking for a quick solution, not an in-depth learning experience. And I doubt anyone's going to be able to figure out the problems you're having without access to your project. So, this is probably a dead end. Besides, Mach overriding and injection are more fun anyway.
The basics of Mach injection aren't actually that hard, but there are a ton of things you have to get right, most of which aren't well documented. You're going to get 11 things wrong before you get something that works on your system, and then it won't work for the next function you try, and then it won't work on 10.5 or 10.8, and… The mach_star library wraps up all that stuff for you. So, why not use it?
I should mention that I haven't used mach_star since pre-Intel days. But it looks like it's still being updated regularly-ish, with changes for x86_64 and 10.7 and Xcode 4 and so on.
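For reference, the core of mach_star's function-overriding half boils down to one call; a hedged sketch against mach_override.h (check the current header for the exact signature; puts() is just a stand-in target):

#include <stdio.h>
#include "mach_override.h"

static int (*orig_puts)(const char *);  /* reentry island, filled in by the call */

static int my_puts(const char *s)
{
    orig_puts("[intercepted] ");
    return orig_puts(s);
}

static void install_override(void)
{
    mach_error_t err = mach_override_ptr((void *)puts,
                                         (void *)my_puts,
                                         (void **)&orig_puts);
    if (err) {
        /* override failed; handle it */
    }
}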

Can I mix arm-eabi with arm-elf?

I have a product whose bootloader and application are compiled using a compiler (gnuarm GCC 4.1.1) that generates arm-elf.
The bootloader and application are segregated in different FLASH memory areas in the linker script.
The application has a feature that enables it to call the bootloader (as a simple C function with 2 parameters).
I need to be able to upgrade existing products around the world, and I can safely do this using always the same compiler.
Now I'd like to be able to compile this product application using a new GCC version that outputs arm-eabi.
Everything will be fine for new products, where both application and bootloader are compiled using the same toolchain, but what happens with existing products?
If I flash a new application, compiled with GCC 4.6.x and arm-none-eabi, will my application still be able to call the bootloader function from the old arm-elf bootloader?
Furthermore, not directly related to the above question, can I mix object files compiled with arm-elf into a binary compiled with arm-eabi?
EDIT:
I think it's good to make clear that I am building for a bare-metal ARM7, if it makes any difference...
No. An ABI is the magic that makes binaries compatible. The Application Binary Interface determines various conventions for how to communicate with other libraries/applications. For example, an ABI defines the calling convention, which fixes details such as which registers are used for passing arguments to C functions, and how excess arguments are dealt with.
I don't know the exact differences between EABI and ABI, but you can find some of them by reading up on EABI. Debian's page mentions the syscall convention is different, along with some alignment changes.
Given the above, of course, you cannot mix arm-elf and arm-eabi objects.
The above answer is given on the assumption that you talk to the bootloader code in your main application. Given that the interface may be very simple (just a function call with two parameters), it's possible that it might work. It'd be an interesting experiment to try. However, it is not guaranteed to work.
Please keep in mind you do not have to use EABI. You can generate an arm-elf toolchain with GCC 4.6 just as well as with older versions. Since you're using a binary toolchain on Windows, you may have more of a challenge. I'd suggest investigating crosstool-ng, which works quite well on Linux, and may work okay on Cygwin to build the appropriate toolchain.
There is always the option of making the call to the bootloader in inline assembly, in which case you can adhere to any calling standard you need :). A sketch follows after the caveats below.
However, besides the portability issue it introduces, this approach will also make two assumptions about your bootloader and application:
you are able to detect in your app that a particular device has a bootloader built with your non-EABI toolchain, as you can only call the older type bootloader using the assembly code.
the two parameters you mentioned are used as primitive data by your bootloader. Should the bootloader use them, for example, as pointers to structs then you could be facing issues with incorrect alignment, padding and so forth.
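As promised, a sketch of the inline-assembly call, assuming a bare-metal ARM7 and that BOOTLOADER_ENTRY is a placeholder for the fixed address from your linker script (both the old ATPCS and the newer AAPCS happen to pass the first two arguments in r0/r1, which is what makes this workable):

/* BOOTLOADER_ENTRY is a placeholder for your linker-script address */
#define BOOTLOADER_ENTRY 0x00000000UL

static void call_bootloader(unsigned long a, unsigned long b)
{
    register unsigned long r0 __asm__("r0") = a;  /* first parameter */
    register unsigned long r1 __asm__("r1") = b;  /* second parameter */
    __asm__ volatile(
        "mov lr, pc\n\t"  /* ARM7 has no BLX immediate: save return address by hand */
        "bx  %2"          /* branch to the bootloader entry */
        : "+r"(r0), "+r"(r1)
        : "r"(BOOTLOADER_ENTRY)
        : "r2", "r3", "r12", "lr", "cc", "memory");
}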
I think that this will be OK. I did a migration something like this myself; from what I remember, the only problem I ran into had to do with handling division.
This is the best info I can find about the differences; it suggests that if you don't have struct alignment issues, you may be OK.

Creating GUI desktop applications that call into either OCaml or Haskell -- Is it a fool's errand?

In both Haskell and OCaml, it's possible to call into the language from C programs. How feasible would it be to create native applications for Windows, Mac, or Linux which made extensive use of this technique?
(I know that there are GUI libraries like wxHaskell, but suppose one wanted to just have a portion of your application logic in the foreign language.)
Or is this a terrible idea?
Well, the main risk is that while the facilities exist, they're not well tested – not a lot of apps do this. You shouldn't have much trouble calling Haskell from C; it looks pretty easy:
http://www.haskell.org/haskellwiki/Calling_Haskell_from_C
I'd say if there is some compelling reason to use C for the front end (e.g. you have a legacy app) and you really need a Haskell library, or want to use Haskell for some other reason, then yes, go for it. The main risk is just that not a lot of people do this, so there's less documentation and fewer examples than for calling the other way.
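In outline, the C side looks like this (a sketch, assuming a hypothetical Haskell module Fib that does foreign export ccall fibonacci :: Int -> Int, compiled with ghc -no-hs-main so GHC emits Fib_stub.h):

#include <stdio.h>
#include <HsFFI.h>
#include "Fib_stub.h"  /* generated by GHC for the foreign exports */

int main(int argc, char *argv[])
{
    hs_init(&argc, &argv);  /* start the Haskell runtime */
    printf("fibonacci 10 = %d\n", (int)fibonacci(10));
    hs_exit();              /* shut it down again */
    return 0;
}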
You can embed OCaml in C as well (see the manual), although this is not as commonly done as extending OCaml with C.
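A sketch of the OCaml equivalent, assuming the OCaml side has registered a function with Callback.register "fib" and the program is linked against the OCaml runtime:

#include <stdio.h>
#include <caml/mlvalues.h>
#include <caml/callback.h>

int main(int argc, char **argv)
{
    caml_startup(argv);  /* initialize the OCaml runtime, run module initializers */
    const value *fib = caml_named_value("fib");  /* set via Callback.register */
    if (fib == NULL)
        return 1;  /* nothing registered under that name */
    printf("fib 10 = %ld\n", Long_val(caml_callback(*fib, Val_long(10))));
    return 0;
}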
I believe that the best approach, even if both the GUI and the logic are written in the same language, is to run two processes which communicate via a human-readable, text-based protocol (a DSL of some sort). This architecture applies to your case as well.
Advantages are obvious: GUI is detachable and replaceable, automated tests are easier, logging and debugging are much easier.
I make extensive use of this by compiling Haskell shared libs that are called from outside Haskell.
Usually the tasks involved are to:
create the proper foreign export declarations
create Storable instances for any datatypes you need to marshal
create the C structures (or structures in the language you're using) to read this information
since I don't want to manually initialize the Haskell RTS, I add initialization/termination code to the lib itself (DllMain on Windows, __attribute__((constructor)) on Unix); see the sketch after this list
since I no longer need any of them, I create a .def file to hide all the closure and RTS functions from the export table (Windows)
use GHC to compile everything together
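The Unix constructor variant from the list above amounts to something like this (a sketch; assumes the RTS needs no special flags):

#include <HsFFI.h>

__attribute__((constructor))
static void hs_library_init(void)
{
    static char *argv[] = { "libhs", NULL };  /* dummy program name, no RTS options */
    static char **argv_p = argv;
    static int argc = 1;
    hs_init(&argc, &argv_p);  /* bring the Haskell RTS up when the lib is loaded */
}

__attribute__((destructor))
static void hs_library_exit(void)
{
    hs_exit();  /* tear the RTS down when the lib is unloaded */
}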
These tasks are rather robotic and structured, to the point where you could write something to automate them. In fact, what I use myself to do this is a tool I created which does dependency tracing on functions you've marked to be exported; it wraps them up and compiles the shared lib for you, along with giving you the declarations in C/C++.
(Unfortunately, this tool is not yet on Hackage, because there is something I still need to fix and test a lot more before I'm comfortable doing so.)
Update: the tool is now available here: http://hackage.haskell.org/package/Hs2lib-0.4.8
Or is this a terrible idea?
It's not a terrible idea at all. But as Don Stewart notes, it's probably a less-trodden path. You could certainly launch your program as Haskell or OCaml, then have it do a foreign-function call right out of the starting gate—and I recommend you structure your code that way—but it doesn't change the fact that many more people call from Haskell into C than from C into Haskell. Likewise for OCaml.

How to use GLUT not in main thread on OS X?

I once tried to open a GLUT window from a sub-thread and got lots of nasty problems. I remember this post on lists.apple.com:
GLUT functions may only be called from the application's main thread
Has anything changed in this regard with GLUT on Mac OS X? Is there a thread-safe GLUT that lets you open windows from any thread?
If GLUT is not an option, is there a tiny library that replaces GLUT and would work from any thread?
[edit]
Here is the result of my tests triggered by the various solutions proposed as answers:
GLFW looked nice but did not compile (current branch is 3 years old)
Agar was another pretender but it's too big for the tiny need I had
SDL is not BSD-license compatible and it's a huge library for code that should fit in a single file
GLUT cannot run in any thread.
I decided to reinvent the wheel (yes, that's good sometimes) and the final class is just 200 lines of code. It lets me open and close a window from any thread (OpenGL drawing in a new thread) and I have full control over vertical sync and such (SDL uses double buffering = slow for OpenGL). I had to trick around the NSApp to properly start and stop the application (which does not use an event loop otherwise).
To those telling me that OpenGL is not thread-safe: that's not exactly true. You can run multiple OpenGL threads, and draw commands are executed against the OpenGL context bound to the calling thread. OpenGL contexts are thread-specific.
If anyone needs some bare-bones code to create OpenGL windows using Cocoa: gl_window.mm
GLUT is not thread safe. You'll need locking primitives with whatever solution you choose to implement. I'd recommend setting up your own GL view in Cocoa and rewriting the plumbing that GLUT provides.
Take a look at SDL as a modern GLUT replacement. It should give you all the cross-platform support you want. As far as cross-platform threading goes, Boost provides a portable library.
As a replacement for GLUT, have a look at GLFW. It's similar in purpose and workings, but better. And it does not have a glutMainLoop-style function that your program is stuck in; it allows you full control. Never since I discovered GLFW have I had a need to switch back to GLUT.
Note that GLFW is not thread-safe, in the sense that it is unsafe to call GLFW functions from different threads (FAQ entry). However, as long as you call all GLFW functions from the same thread, it's your choice which thread that will be.
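To make "full control" concrete, a minimal sketch against the GLFW 2.x API of that era (you own the loop; there is no GLUT-style callback dispatcher):

#include <GL/glfw.h>

int main(void)
{
    if (!glfwInit())
        return 1;
    /* width, height, RGBA bits, depth bits, stencil bits, windowed mode */
    if (!glfwOpenWindow(640, 480, 8, 8, 8, 8, 24, 0, GLFW_WINDOW)) {
        glfwTerminate();
        return 1;
    }
    while (glfwGetWindowParam(GLFW_OPENED)) {  /* the loop is yours to drive */
        glClear(GL_COLOR_BUFFER_BIT);
        /* ... draw ... */
        glfwSwapBuffers();  /* also polls events in GLFW 2.x */
    }
    glfwTerminate();
    return 0;
}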
Not only is GLUT not thread-safe, but OpenGL is a state machine and therefore isn't thread-safe either. Having said that, you can have multithreaded applications that use OpenGL. Just make sure all your OpenGL calls are made from the same thread.
The next step up from GLUT on Mac OS X is the Cocoa OpenGL Sample Code. This is a true Cocoa application that demonstrates the Cocoa way of setting up an OpenGL window, with interactivity using the Cocoa event model. From this starting point, it's fairly easy to add code to handle your program logic in a separate thread (or threads) from your OpenGL drawing code.