How do I link multiple lisp-fasl files into a single file? - compilation

I started learning Lisp and am looking for an efficient way to manage my personal libraries.
So I thought it would be useful to compile my library into a single fasl file (containing both the package information and the actual implementation) that I can afterwards load with (load "lib.fasl") to include the library. The problem is that the library consists of multiple *.lisp files, let's say foo.lisp and bar.lisp.
I came as far as to compile them separately using (compile-file "foo.lisp") and (compile-file "bar.lisp"), respectively.
Obviously it would be rather messy having to LOAD every file of the library (i.e. foo.fasl and bar.fasl) manually when I want to use them, so I am looking for something like
(link "foo.fasl" "bar.fasl" :output "lib.fasl")
or
(compile-file "foo.lisp" "bar.lisp" :output "lib.fasl")
to produce a single lib.fasl, which I can then LOAD.
I don't want to use core files, because I want to be able to combine my libraries flexibly (which would require creating a separate core file for every possible combination of libraries).
I searched both the SBCL user manual for Lisp functions that do this and the SBCL manpage for CLI functionality, but I wasn't able to find anything.
I would prefer a solution using SBCL, but I will take anything else too.
Thanks in advance!

IIRC for SBCL you just concatenate the FASL files into one file.
ASDF 3 has a way to build a single FASL file out of a system or a system with all dependencies (see compile-bundle-op and monolithic-compile-bundle-op).
In its portable library uiop there is also a function combine-fasls, which supports multiple CL implementations.
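A minimal sketch of both approaches, assuming ASDF 3 is loaded and your library is defined as an ASDF system (the system name "lib" and the file names are placeholders):
;; One fasl for the system, or one that also bundles all of its dependencies:
(asdf:operate 'asdf:compile-bundle-op "lib")
(asdf:operate 'asdf:monolithic-compile-bundle-op "lib")

;; Or combine fasls you have already compiled yourself:
(uiop:combine-fasls (list #p"foo.fasl" #p"bar.fasl") #p"lib.fasl")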

Use cat:
$ cat f1.fasl f2.fasl .... > mypackage.fasl
Note that the more common approach is to create images.
You might also want to explore asdf.

Related

Linking against an external object file (.o) with autoconf

For work purposes I need to link against an object file that is generated by another program and found in its folder, but I did not find any information about this kind of linkage. I think that hardcoding the paths and putting the name-of-obj.o in front of the package_LDADD variable should work, but I don't want to do it that way.
If the object is not found I want the configure to fail and tell the user that the name-of-obj.o is missing.
I tried using AC_LIBOBJ([name-of-obj.o]), but this will try to find a name-of-obj.c in the root directory and compile it.
Any tip or solution around this issue?
Thank you!
I need to link against an object file that is generated by another program and found in its folder
What you describe is a very unusual requirement, not among those that the Autotools are designed to handle cleanly or easily. In particular, Autoconf has no mechanisms specifically applicable to searching for bare object files, as opposed to libraries, and Automake has no particular automation around including such objects when it links. Nevertheless, these tools do have enough general purpose functionality to do what you want; it just won't be as tidy as you might like.
I think that hardcoding the paths and putting the name-of-obj.o in front of the package_LDADD variable should work, but I don't want to do it that way.
I take it that it is the "hardcoding the paths" part that you want to avoid. Adding an item to an appropriate LDADD variable is not negotiable; it is the right way to get your object included in the link.
If the object is not found I want the configure to fail and tell the user that the name-of-obj.o is missing.
Well, then, the key thing appears to be to get configure to perform a search for your object file. Autoconf does not have a built-in mechanism to perform such a search, but it's just a macro-based shell-script generator, so you can write such a search in shell script + Autoconf, maybe something like this:
AC_MSG_CHECKING([for name-of-obj.o])
OTHER_LOCATION=
for my_dir in \
    /some/location/other_program/src \
    /another/location/other_program.12345/src \
    $srcdir/../relative/location/other_program/src; do
  AS_IF([test -r "${my_dir}/name-of-obj.o"], [
    # optionally, perform any desired test to check that the object is usable
    # ... perhaps one using AC_LINK_IFELSE ...
    # if it passes, then
    OTHER_LOCATION=${my_dir}
    break
  ])
done
# Check whether the object was in fact discovered, and act appropriately
AS_IF([test "x${OTHER_LOCATION}" = x], [
  # Not found
  AC_MSG_RESULT([not found])
  AC_MSG_ERROR([Cannot configure without name-of-obj.o])
], [
  AC_MSG_RESULT([${OTHER_LOCATION}/name-of-obj.o])
  AC_SUBST([OTHER_LOCATION])
])
That's functional, but of course you could embellish it, such as by providing for the package builder to specify a location via a command-line argument (AC_ARG_WITH(...)), as sketched below. And if you want to do this for multiple objects, then you would probably want to wrap at least some of that up in a custom macro.
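For instance, a hedged sketch of such an option (the name --with-other-location is purely illustrative); when it is given, you would skip the search loop above:
AC_ARG_WITH([other-location],
  [AS_HELP_STRING([--with-other-location=DIR],
    [directory containing name-of-obj.o])],
  [OTHER_LOCATION=$withval])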
The Automake side is much less involved. To get the object linked, you just need to add it to the appropriate LDADD variable, using the output variable created by the above, such as:
foo_LDADD = $(OTHER_LOCATION)/name-of-obj.o
Note that if you're building just one program target then you can use the general LDADD instead of foo_LDADD, but note that by default these are alternatives not complements.
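Putting it together, a minimal Makefile.am sketch (the program name foo and its sources are illustrative):
bin_PROGRAMS = foo
foo_SOURCES = main.c
foo_LDADD = $(OTHER_LOCATION)/name-of-obj.o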
With that said, this is a bad idea overall. If you want to link something that is not part of your project, then you should get it from an installed library. That can be a local, custom-built library, of course, so long as it is a library, not a bare object file, and it is installed. It can be a static library if you don't want to rely on or distribute a separate shared library.
On the other hand, if your project is part of a larger build, then the best approach is probably to integrate it into that build, maybe as a subproject. It would still be best to link a library instead of a bare object file, but in a subproject context it might make sense to use a library that is not installed on the build system. In conjunction with a command-line argument that tells it where to find the wanted lib, this could make the needed Autoconf code much cleaner and clearer.

Is there a way to use load-mode for files in DrRacket?

I would like to use DrRacket in the same way that it works for some of the ‘legacy languages’. In particular, I would like to go through a file as if it were a sequence of commands issued to the interpreter, and not as a module.
Essentially I want to run at least one file in load-mode, but I’m not sure if it’s possible to do it using DrRacket.
Ideally, I could:
* Specify a file that sets the language and maybe loads some modules, which runs by default at startup.
* Then load a file that is not a module (and has no #lang specification) and run it.
It’d also be nice (since I want to use Scheme) if it would allow redefinitions, just as the legacy languages do.
Yes you can, and in fact, the 'legacy languages' (and 'teaching languages') are actually just implemented as DrRacket Plugins. You can remove them from your copy of DrRacket and even add new ones.
There are various ways to do this depending on whether you are okay with a #lang (or #reader) line saved in the file. If you're not, it's still doable; you just need to use drracket:get/extend:extend-unit-frame to add your tool to DrRacket, and possibly drracket:get/extend:extend-definitions-text to easily extend the definitions window.
I won't go into the details of making a generic DrRacket plugin here; that belongs in a different question... also, the DrRacket Plugins Manual has the information you need.[1] I will, however, point you in the direction of how you can use DrRacket in load mode out of the box.
Check out the racket/load language. It is designed to run each expression in the top level as if you were at a REPL typing it. I find it very useful for testing the differences between Racket module and top level interactions.
Of course, if you don't make a DrRacket plugin, you will still need to put:
#lang racket/load
at the top of your file, but you otherwise get a 'legacy mode' out of the box.
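For instance, a minimal sketch of such a file (the definitions are illustrative); each form is evaluated at the top level, so redefinition works just as at a REPL:
#lang racket/load

(define (greet) (displayln "hello"))
(greet)
;; Top-level redefinition is allowed here:
(define (greet) (displayln "hi again"))
(greet)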
[1] If it doesn't, please continue to ask questions, and of course we always love help from anyone who is willing to contribute. <3

Boost.Tests: Specify function name instead of main

I am working on a project that uses the CMake build system. By default CMake has a nice framework for generating a single test executable from a set of C/C++ files. The CMake command is called create_test_sourcelist. What it does is generate a C/C++ dispatcher with a single main entry point that calls the other C/C++ code.
Therefore I have a bunch of C/C++ files with a function signature such as: int TestFunctionality1(int argc, char *argv[]), which I'd like to keep as-is, unless of course it means even more work.
How can I keep this system in place and start using BOOST_CHECK? I could not figure out how to specify that the actual main entry point is not called int main(int argc, char *argv[]).
My goal is to have a framework for integration with Jenkins; since the project already uses Boost, I believe this should be doable without rewriting the existing CMake test suite and changing every test into an independent main function.
Unfortunately, it seems there is no straightforward and clean way to do that.
On the one hand, the only useful function of create_test_sourcelist is to generate a test driver: a pretty simple, naive C/C++ translation unit with little ability to hack or extend it, based on ${cmake-prefix}/share/cmake/Templates/TestDriver.cxx.in (and there is no way to select a different template).
On the other hand, Boost UTF offers its own test runner (the equivalent of a test driver in CMake's terminology), but every variant of it (static, dynamic, single-header) contains a definition of main() one way or another (even in the case of the external test runner).
…so you end up with two main() functions and no ability to choose a single one.
Digging a little into the sources of create_test_sourcelist, I really wonder why they implemented it as a command and not as an ordinary (external) CMake module: it does nothing special that couldn't be implemented in the CMake language. The command is also quite naive: it doesn't check that the required functions actually exist (you'll get compile errors if something is wrong), and there is no way to flexibly customize the output source file at all. All it does is strip paths and extensions from a list of source files and substitute them into the mentioned template using an ordinary configure_file()…
So, personally, I see no reason to use it at all. That is why I've done the same job (but better ;) in the module mentioned in the comment above.
If you still want to use that command: the test driver it generates is completely useless if you want to use Boost UTF. You need to provide your own initialization function anyway (and it is not main()), where you can manually register your test cases into the master test suite (or organize your tests into some more complex tree). In that case there is absolutely no reason to use create_test_sourcelist! All you can get from it is a list of sources to hand to add_executable()… but that is much easier to do with set()… The command can't even give you the list of test functions to call (actually a list of filenames without extensions); it is used internally and not exported. Do you still want to use that command?
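To make that last point concrete, here is a hedged sketch of such an initialization function, assuming the static Boost UTF library (which supplies main() itself); the wrapper name is illustrative, and the test signature is the one from the question:
#include <boost/test/unit_test.hpp>

int TestFunctionality1(int argc, char *argv[]);  // existing test, kept as-is

static void check_TestFunctionality1()
{
    // Forward the runner's command line to the legacy-style test function.
    boost::unit_test::master_test_suite_t &mts =
        boost::unit_test::framework::master_test_suite();
    BOOST_CHECK_EQUAL(TestFunctionality1(mts.argc, mts.argv), 0);
}

// Boost UTF calls this instead of a user-written main().
boost::unit_test::test_suite *init_unit_test_suite(int, char *[])
{
    boost::unit_test::framework::master_test_suite().add(
        BOOST_TEST_CASE(&check_TestFunctionality1));
    return 0;
}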

Multiple classes in one file, Ruby Style Question

I am writing a script that takes data from a database and creates GoogleChart URLs from the parsed data. I only need to create two types of charts, Pie and Bar, so is it wrong to stick both of those classes in the same file just to keep the number of files down?
Thanks
If you're asking about the "Ruby way", then it is to put your classes in separate files. As others have alluded to, placing your classes in separate files scales better: if you place multiple classes in the same file and they start to grow, you're going to need to separate them later.
So why not have them separate from the beginning?
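A minimal sketch of that layout, with illustrative file and class names:
# lib/pie_chart.rb
class PieChart
  def initialize(rows)
    @rows = rows
  end
end

# lib/bar_chart.rb
class BarChart
  def initialize(rows)
    @rows = rows
  end
end

# chart_script.rb
require_relative "lib/pie_chart"
require_relative "lib/bar_chart"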
UPDATE
I should also mention that autoload works by expecting classes to be in their own files. For instance, if you're in a Rails environment and you don't separate classes into different files, you'll have to require the file explicitly. The best place to do this would be in the application.rb file. I know you're not in a Rails environment, but for others who may find this answer, maybe this info will be helpful.
UPDATE2
By 'autoload', I meant Rails autoloading. If you configure your own autoload, you can put classes in the same file; but again, why? The Ruby and Java communities generally expect classes to be in separate files. The only exception is nested classes, but those belong to more advanced design patterns.
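For completeness, a hedged sketch of wiring up Ruby's built-in Kernel#autoload by hand; it likewise assumes one class per file (paths illustrative):
# chart_script.rb: each file is loaded lazily, on first reference
autoload :PieChart, File.expand_path("lib/pie_chart", __dir__)
autoload :BarChart, File.expand_path("lib/bar_chart", __dir__)

chart = PieChart.new([1, 2, 3])  # triggers the require of lib/pie_chart.rb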
Usually, many less-complex files are better than a few more-complex ones, especially if you need to share the code with others.
It's not wrong. If your code is simple enough* then by all means put all of it in one file.
On the other hand, if you think your code is going to get more complex later, or if you plan to use automated testing tools later, you will be doing yourself a big favour if you deal with the structure of all that now.
(* My personal rule of thumb: about 200 lines.)

SWeave with non-R code chunks?

I often use Sweave to produce LaTeX documents where certain chunks are produced dynamically by executing R code. This works well, but is it also possible to have code chunks that are executed in different ways, e.g. by executing the code in the shell, or by running Perl, and so on? It would be helpful to be able to mix things up, so I could do things like run some shell commands to fetch some data, run some Perl commands to pre-process it, and then run R commands to analyze it.
Of course I could use all-R chunks and use system() as a poor man's substitute, but that doesn't make for very pleasant reading in the document.
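For reference, that workaround looks something like this inside an Sweave chunk (the file names and commands are illustrative):
<<fetch-and-clean, results=hide>>=
system("sh fetch_data.sh")                      # shell step
system("perl preprocess.pl raw.txt clean.csv")  # Perl step
d <- read.csv("clean.csv")                      # back in R for the analysis
@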
The new new thing for multi-language, multi-format docs may be dexy.it, which, for example, the people at opengamma.org use as their backend.
Ana, who is behind dexy, is also giving a lot of talks about it, so also have a look at the dexy blog.
It's not directly related to Sweave, but org-babel, which is part of Emacs org-mode, allows you to mix code chunks of different languages in one file, pass data from one chunk to another, execute them, and generate LaTeX or HTML export from the output.
You can find more information about org-mode here:
http://www.orgmode.org/
And to see how org-babel works :
http://orgmode.org/worg/org-contrib/babel/
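To give a flavor, a minimal org-babel sketch chaining two languages (the block name, URL, and commands are illustrative):
#+NAME: fetch
#+BEGIN_SRC sh :results output
curl -s http://example.org/data.csv
#+END_SRC

#+BEGIN_SRC R :var raw=fetch
d <- read.csv(text = raw)
summary(d)
#+END_SRC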
There is certainly no easy way to do this other than through either foreign language interfaces from R (maybe through inline if it's supported), or system(). For what it's worth, I would just use system(); that should be easy enough.
You can see this previous question about having a Sweave equivalent for Python, where one of the respondents actually creates a separate interface. This can give you a sense of what it would take to embed other languages that may not already be supported. At a minimum, you would have to do major hacking on the Sweave driver.
Do you know Emacs' org-mode and, more specifically, Babel? If you already know Emacs or are willing to switch to Emacs, then org-mode and Babel are the answer to your question(s).
For instance, I am currently working on a document that contains some shell scripts, does computations with R, and creates flow charts with dot (graphviz). Org-mode can export a variety of formats, e.g. LaTeX (that's what I use).
There is the StatWeave project, which uses Java rather than R to do the weaving, but will run multiple programs instead of just R. I don't know how hard it would be to get it to run Perl or other programs like that, but the homepage indicates that it already works with R, SAS, Stata, and others:
http://www.cs.uiowa.edu/~rlenth/StatWeave/
