Detecting an installed Common Lisp implementation programmatically - bash

I'm writing a Common Lisp application. I'd like a Bash script to serve as the entry point to the application. Currently, I've written the script so that the user must pass in the name of their Common Lisp implementation to run it: I would write ./script.sh clisp for GNU CLISP, but someone with SBCL would have to write ./script.sh sbcl. This is necessary because, unlike Python or Ruby, Common Lisp implementations do not have a standard executable name or a standardized way of being invoked.
Is there any trick to detecting which Common Lisp implementation is installed, perhaps an environment variable or something? Basically, I'm looking for something better than forcing the user to pass in the name of the implementation.

You could use Roswell, which provides ways to set the implementation at the user or invocation level. You still need wrapper scripts, but Roswell standardizes them.

Install the cl-launch Unix utility, which implements the abstraction described in @bishop's answer. The utility detects most Common Lisp implementations and can be used to execute a script or to dump an executable that runs the script's contents (which loads faster).

TL;DR: I don't think there's a trick, but you need not force the user to name the implementation on every invocation.
This is a relatively common pattern: you have a bash script that depends upon a certain executable being available, and it may well be available, but in different locations, possibly with the user having their own compiled version and/or the system having several alternatives.
The approach I've seen boils down to this algorithm:
1. If there is an environment variable that specifies the full path to an executable, prefer that.
2. Otherwise, if there is a configuration file in the user's home directory that specifies the location, and possibly other parameters, prefer that.
3. Otherwise, if there is a configuration file in /etc that specifies the location, and possibly other parameters, prefer that.
4. Otherwise, ask the system package manager to list the packages matching your application's typical installation names.
The first three are easy enough to implement using the bash test builtins and, I'm guessing, if you got this far you know how to do that; for illustration, a minimal sketch follows.
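The names here (MYAPP_LISP, ~/.myapprc, /etc/myapp.conf) are hypothetical placeholders you'd adapt to your application:

#!/usr/bin/env bash
LISP=""
# 1. An environment variable wins if it names an executable.
if [ -n "$MYAPP_LISP" ] && [ -x "$MYAPP_LISP" ]; then
    LISP="$MYAPP_LISP"
# 2. Per-user config file, expected to contain a line like LISP=/usr/bin/sbcl.
elif [ -r "$HOME/.myapprc" ]; then
    . "$HOME/.myapprc"
# 3. System-wide config file with the same format.
elif [ -r /etc/myapp.conf ]; then
    . /etc/myapp.conf
fi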
It is the fourth point that becomes interesting. There are two variables to deal with. First, determining the package manager in the installed environment. There is no shortage of these, and I've seen both table approaches (mapping OS to package manager) and inquiry approaches (looking for executables that match expected names like rpm, yum, emerge, etc.). Second, determining the package name appropriate for your package manager. This too can be tricky. On the one hand, you're probably safe iterating through a list of known executables and grepping the package list. On the other hand, your package manager may provide "virtual" or "alternative" packages that generically provide a service, regardless of the specific implementation. For example, you could grep the Portage tree for dev-lisp and be reasonably sure of finding one installed package.
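A sketch of the inquiry approach; the candidate list is illustrative, not exhaustive:

# Detect an available package manager by probing PATH for its executable.
PKGMGR=""
for candidate in apt-get dnf yum rpm emerge pacman zypper; do
    if command -v "$candidate" >/dev/null 2>&1; then
        PKGMGR="$candidate"
        break
    fi
done
echo "package manager: ${PKGMGR:-none found}"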
The easiest case is when your script is meant to run in a small number of well-known environments: implement one or more of the first three points to let the user override the script's auto-selection; the auto-selection itself then just iterates over the known alternatives in your known environment until it finds one it prefers, as in the sketch below.
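Continuing the earlier sketch; the candidate names and their preference order are illustrative:

# Fall back to probing PATH for well-known implementations, most preferred first.
if [ -z "$LISP" ]; then
    for lisp in sbcl ccl clisp ecl abcl; do
        if command -v "$lisp" >/dev/null 2>&1; then
            LISP="$lisp"
            break
        fi
    done
fi
if [ -z "$LISP" ]; then
    echo "no Common Lisp implementation found" >&2
    exit 1
fi
# Note: the flags for loading a script differ per implementation,
# so the eventual invocation still needs per-implementation handling.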
The hard case is when you have to support multiple environments. You end up writing an abstraction layer that knows about the different possible package managers and how to interrogate those package systems for various packages, either at a generic level or for specific packages. Having done this for a script set that deployed on AIX, HP-UX, Solaris, a couple of Linux distros, and Cygwin on Windows, I can say: not fun.
As I read your question, you have a script that will be distributed to different users' machines whose environments you don't control. The only requirement on these target machines is that they have bash and at least one Common Lisp implementation installed. From this, I inferred you couldn't install any launchers. However, if you can install, require, or detect the presence of any of the launchers mentioned in the other answers, that will certainly save a ton of work.

Related

$(shell [foo]) in Windows

I've got a makefile (a file called 'Makefile', which is run by cmake on Linux, but works on Windows via nmake, I believe, and needs to be run in the VS command prompt).
Most of the 'sample' ones I can see are just one line (the rest appears to be stuff I don't 'yet' understand, plus this same one line):
include $(shell rospack find mk)/cmake.mk
(in the terminal rospack find [package] returns the path to said package, and cmake.mk is obviously the file it wants to include)
My problem is that this appears (to me at least) to have been written for use on a Linux system (as, basically, was the entirety of ros, the program I'm working with), and on Windows it appears to be evaluated as just
include /cmake.mk
(which unsurprisingly doesn't work)
Basically I need to know how to do the same thing on Windows, ideally in a 'dynamic' way, since it will only cause more problems down the line if I get this working by hard-coding the directory path and it then breaks because the path isn't set properly some time in the future.
So I guess if this isn't possible, or is particularly hard, a way of hard-coding it would be a stopgap.
I tried:
include C:\[directory]\cmake.mk
but it seems to have issues with the ':'
I'm trying to work with Windows, because later in my project I'll be needing to use another program (for i90 robot) for which we only have Windows support.
OK, so apparently it acts differently depending on whether the file is actually in the folder,
as in
include C:\[directory]\cmake.mk
Errors with
C:\[directory]\cmake.mk not found
if the file isn't there, and
fatal error U1034: syntax error : separator missing
if it is
While this doesn't really seem to impact on the original problem, I guess it indicates I'm trying to do something funky windows doesn't like.
The short answer is, you'll never get a single makefile that does much of anything complicated that will work both with standard UNIX-style make (such as GNU make from GNU/Linux) and also work with nmake. Nmake is a completely different beast.
As an aside, it's confusing that your makefiles here are called "cmake", because cmake is an actual program, distinct from make (and nmake). I'm assuming, though, from the context that the use of the term "cmake" here doesn't refer to the actual cmake utility. Which is too bad, because if it did use cmake things would be simpler for you. Maybe.
It's not clear exactly what your requirement to use nmake is, though. If you laid out your real requirements, it would be a lot easier for us to advise you. For example, you say you need to use a "another program" which runs only on Windows. What does this program do, exactly, and how will you need to use it? Does it provide libraries that need to be linked with the "ros" code?
Basically, your simplest way forward is to obtain a UNIX-like environment, including tools like GNU make, for your Windows system. There are two main choices: Cygwin, which provides a complete POSIX infrastructure (shell, compiler, etc.) as ports of the GNU environment to Windows, but requires a POSIX emulation layer; and MinGW, which provides various GNU tools that run more or less natively on Windows.
However, if you MUST use Visual Studio as your compiler, for example, then these will be much more difficult to integrate.

Lua registry for environment(s)?

I'm trying to read the Windows registry to figure out which scripting environments are installed and where their stand-alone interpreter executables are.
When I did python, for example, I searched for
HKEY_LOCAL_MACHINE/SOFTWARE/Python/PythonCore/InstallPath
This gives me the install path for the Python executable for each environment, which lets me find out whether I have Python 2.7, 3, etc., and where those exes are.
I'm looking for something similar for Lua for windows.
I must use the registry for this search.
What Nicol said. You will be better served by scanning PATH against a list of known executables (though even that is not a guarantee, as many of my local installs are not in PATH); a sketch follows. Still, I think that gives you a better chance of finding the scripting engines that don't leave traces in the registry. Or maybe use a combination of mechanisms.
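A minimal sketch of that scan, assuming a POSIX shell is available (e.g. Cygwin or Git Bash on Windows) and using commonly seen interpreter names:

# Probe PATH for common Lua interpreter names.
for exe in lua lua5.4 lua5.3 lua5.2 lua5.1 luajit; do
    if path=$(command -v "$exe" 2>/dev/null); then
        echo "found $exe at $path"
    fi
done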
Lua does not have an install path. Lua does not have an installation. Lua is not like Python, with an installer that puts everything in one specific place and sets up registry and PATH entries and such.
The standalone interpreter does not have a mechanism to query its install location. If you're interested in learning the Lua version, you can always check the global _VERSION variable. But other than that, no, there is no mechanism for doing what you want.
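For example, from a shell you can ask the standalone interpreter for its version:

lua -e 'print(_VERSION)'    # prints e.g. Lua 5.4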

Is it possible to write a libPOSIX for Windows (Win32) without requiring a background service or DLL that's always loaded?

I know about Cygwin, and I know of its shortcomings. I also know about the slowness of fork, but not why on Earth it's not possible to work around that. I also know Cygwin requires a DLL. I also understand POSIX defines a whole environment (shell, etc...), that's not really what I care about here.
My question is asking if there is another way to tackle the problem. I see more and more of POSIX functionality being implemented by the MinGW projects, but there's no complete solution providing a full-blown (comparable to Linux/Mac/BSD implementation status) POSIX functionality.
The question really boils down to:
Can the Win32 API (as of MSVC20??) be efficiently used to provide a complete POSIX layer over the Windows API?
Perhaps this will turn out to be a full libc that only taps into the OS library for low-level things like filesystem access, threads, and process control. But I don't know exactly what else POSIX consists of. I doubt a library can turn Win32 into a POSIX-compliant entity.
POSIX ≠ Win32.
If you're trying to write apps that target POSIX, why are you not using some variant of *N*X? If you prefer to run Windows, you can run Linux/BSD/whatever inside Hyper-V/VMWare/Parallels/VirtualBox on your PC/laptop/etc.
Windows used to have a POSIX-compliant environment that ran alongside the Win32 subsystem, but it was discontinued after NT4 due to lack of demand. Microsoft later bought Interix and released Services for Unix (SFU). While it's still available for download, SFU 3.5 is now deprecated and no longer developed or supported.
As to why fork is so slow: you need to understand that fork isn't just "create a new process", it's "create a new process (itself an expensive operation) which is a duplicate of the calling process along with all its memory".
In *N*X, the forked process is mapped to the same memory pages as the parent (i.e. forking is pretty quick) and is only given new pages as and when it tries to modify a shared page. This is known as copy-on-write. It is largely achievable because in UNIX there is no hard barrier between the parent and forked processes.
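You can see fork's copy semantics from a shell: a subshell is a forked child, and a write in the child never reaches the parent's copy:

x=1
( x=2; echo "child sees x=$x" )   # the subshell runs in a forked copy
echo "parent still sees x=$x"     # the parent's copy is untouched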
In NT, on the other hand, all processes are separated by a barrier enforced by CPU hardware. In NT, the easiest way to spawn a parallel activity which has access to your process' memory and resources, is to create a thread. Threads run within the memory space of the creating process and have access to all of the process' memory and resources.
You can also share data between processes via various forms of IPC: RPC, named pipes, mailslots, memory-mapped files; but each technique has its own complexities, performance characteristics, etc.
Because it tries to mimic UNIX, Cygwin's fork operation creates a new child process (in its own isolated memory space) and has to duplicate every page of memory in the parent process within the newly forked child. This can be a very costly operation.
Again, if you want to write POSIX code, do so in *N*X, not NT.
How about this:
Most of the Unix API is implemented by the POSIX.DLL dynamically loaded (shared) library. Programs linked with POSIX.DLL run under the Win32 subsystem instead of the POSIX subsystem, so programs can freely intermix Unix and Win32 library calls.
From http://en.wikipedia.org/wiki/UWIN
The UWIN environment may be what you're looking for, but note that it is hosted at research.att.com; while UWIN is distributed under a liberal license, it is not the GNU license. Also, as it is research for AT&T, and only secondarily something they distribute for use, there are a lot of issues with documentation.
For more info, see my write-up in the last answer to "Regarding 'for' loop in KornShell".
Hmm, the main UWIN link in that post is broken; try
http://www2.research.att.com/sw/download/
Also, you can look at
https://mailman.research.att.com/pipermail/uwin-users/
or
https://mailman.research.att.com/pipermail/uwin-developers/
to get a sense of the features vs. the issues.
I hope this helps.
The question really boils down to: Can the Win32 API (as of MSVC20??) be efficiently used to provide a complete POSIX layer over the Windows API?
Short answer: No.
"Complete POSIX" means fork(), mmap(), signal() and such, and these are [almost] impossible to implement on NT.
To drive the point home: GNU Hurd has problems with fork() as well, because the Hurd kernel is not POSIX. NT is not POSIX either.
Another difference is persistence:
In POSIX-compliant systems it is possible to create system objects and leave them there. Examples of such objects are named pipes and shared memory objects (shms). You can create a named pipe or a shm, and leave it in the filesystem (or in a special filesystem-like place) where other processes will be able to access it. The downside is that a process might die and fail to clean up after itself, leaving unused objects behind (you know about zombie processes? same thing).
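A named pipe created from the shell shows this persistence:

mkfifo /tmp/mypipe    # creates a pipe object in the filesystem
ls -l /tmp/mypipe     # still there even after the creating shell exits
rm /tmp/mypipe        # it stays until some process removes it explicitly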
In NT every object is reference-counted, and is destroyed as soon as its last handle is closed. Files are among the few objects that persist.
Symlinks are a filesystem feature and don't exactly depend on the NT kernel, but the current implementation (in Vista and later) is incapable of creating object-type-agnostic symlinks. That is, a symlink is either a file symlink or a directory symlink, and must link to a file or a directory respectively. If the target has the wrong type, the symlink won't work. You can give it the right type if the target exists when you create the symlink, but POSIX requires that symlinks can be created without their target existing. I can't imagine a use case for a symlink that points first to a file, then to a directory, but POSIX says this should work, and if it doesn't, you're not completely POSIX-compliant. And if your symlinking API/utility must be given an option that specifies the right type when the target doesn't exist, that also breaks POSIX compatibility.
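The dangling case is trivial on a POSIX system:

ln -s /no/such/target mylink   # succeeds even though the target doesn't exist
ls -l mylink                   # shows a dangling symlink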
It is possible to replicate some POSIX features to some degree (such as "integer descriptors in a single namespace, referencing any I/O object, and being select()able") without sacrificing [much] performance, but that is still a major undertaking, and the POSIX interface is really restrictive (that is, if you could just add one more argument to a given function, it would have been possible to Do The Right Thing... but you can't, unless you're willing to throw POSIX compliance away).
Your best bet is not to rely on POSIX features that are difficult to port to non-POSIX systems, or to abstract in such a way that lower levels can have separate implementations for different OSes while upper levels do not care about the details.

Erlang compilation - Erlang as a stand-alone executable

Is there a way to compile Erlang to a stand-alone executable?
That is, to run it as an exe without the Erlang runtime.
While it's possible to wrap everything up in a single EXE, you're not going to get away from having an Erlang runtime. Dynamic languages like Erlang can't really be compiled to native x86 code, for instance, due to their nature. There has to be an interpreter in there somewhere.
It's possible to come up with a scheme that bundles the interpreter and all the BEAM files into a single EXE you can double-click and run directly, but that's probably more work than you were wanting to go to. I've seen it done before, but there's rarely a good reason to do it, so I won't bother going into detail on the techniques here.
Instead, I suggest you use the same technique used by Python's py2exe and py2app programs for creating Windows and Mac OS X executables, respectively. These programs load the program's main module into a Python interpreter, figure out which other modules it needs using the language's built-in reflection mechanisms, then write out all those compiled modules along with a copy of the language interpreter and a small wrapper program that launches the program's main module with the interpreter. The directory containing those files is then a stand-alone environment, having everything needed to run the program. The only difference in the Erlang case is that python.exe becomes erl.exe, and *.pyc becomes *.beam. The basic idea is still the same.
You can simplify this if you don't need it to work with any arbitrary Erlang program, but only yours. In that case, you just copy the Erlang interpreter and all the .beam files that make up your program into a single directory. You can make this part of your program's Makefile, for instance.
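A hedged sketch of that copy step; ERL_ROOT, the ebin/ layout, and mainmodule are illustrative and need adapting to your install and build:

# Bundle the Erlang runtime plus this program's compiled modules into dist/.
ERL_ROOT=${ERL_ROOT:-/usr/lib/erlang}   # assumed install location
mkdir -p dist
cp -r "$ERL_ROOT" dist/erlang
cp ebin/*.beam dist/
printf '%s\n' 'dist/erlang/bin/erl -noshell -s mainmodule start' > dist/run.sh
chmod +x dist/run.sh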
You can then use your favorite setup.exe or MSI creation method for creating a distributable package that installs this collection of files into c:\Program Files\MyProgram on the end user's system and creates a shortcut for "erl mainmodule.beam" in their Start menu. The end user doesn't care that as part of the program they also get a copy of Erlang. That's an implementation detail.
You can use Warp. I've added examples for wrapping an Erlang release.

Third-party Uniform Type Identifier implementations?

The madness of file extensions, MIME types, creator codes, and magic numbers to determine file types is a huge mess. Coming from a background in Cocoa programming, I suppose I'm spoiled: in Tiger, OS X added a system called Uniform Type Identifiers (UTIs) that makes the entire process sane.
Given that I'm doing a bunch of web development in (insert your favorite web development environment here), is there anything similar that's not dependent on running OS X and - better yet - works in multiple programming languages?
Right now, I'm using the file command on Linux to replicate some of the functionality, but it's just not the same. And, of course, everybody has their own huge lookup tables, but nothing is centralized.
Has anybody done this or run across this before?
There just doesn't seem to be any such thing outside of OS X. The file command is the best you can do on Linux; all the file-type identification systems I've seen on Linux use it internally (when they aren't just using the file extension).
In particular you can use file -i to output a MIME type rather than the plain human-readable strings.
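For example (the file names here are illustrative):

file -i photo.png               # photo.png: image/png; charset=binary
file --mime-type -b notes.txt   # text/plain (just the bare type)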
The UTI system seems to have a great deal of useful functionality, maybe if you could tell us what in particular you miss about it that the other methods you've found don't give you, it might be easier for us to find you something useful.
